This morning I had a bit of an argument on Twitter with Ryan Avent about the future of self-driving cars. He thinks his infant daughter will never need to learn to drive because self-driving cars will be ready for prime time before she reaches her 16th birthday in 2026. I’m more skeptical. I think self-driving cars are coming, but I doubt we’ll see them available to consumers before the 2030s. So I proposed, and Ryan accepted, the following bet:
I bet you $500 that on your daughter’s 16th birthday, it won’t be possible and legal for someone with no driver’s license to hop into a self-driving car in DC, give it an address in Philly, take a nap, and wake up at their destination 3-4 hours later (depending on traffic, obviously).
The car must be generally commercially available, not a research prototype or a limited regulatory trial. It can be either a purchased vehicle or a rented “taxi.” And obviously there can’t be anyone helping to guide the vehicle, either in the car or over the air.
Tom Lee has agreed to be the judge in case the outcome is disputed 16 years hence, with the following additional provision:
Let me suggest the proviso that the “nap” criterion in the bet be eliminated if it turns out that by 2026 nootropic pharmaceutical or cybernetic interventions mean that sleep is no longer possible.
So that’s the bet. Let me give a few reasons why I think I’m going to win.
First, although there are already self-driving cars on the road, those cars are “self-driving” with some major caveats. There’s a human being behind the wheel who takes control when the car approaches a potentially tricky situation like a pedestrian or bicyclist. I haven’t talked to the team that made the Google car, but unless there’s been fantastic progress since I talked to some AI researchers in 2008, the vehicles are probably not equipped to deal gracefully with adverse weather conditions like fog, rain, and ice. They’ll get steadily better at this, but it’s going to take a lot of work to reach the point where they can safely handle all the real-world situations they might encounter.
Second, the path from research prototype to mainstream commercial product is almost always longer than people expect. A system that works in the lab, operated by sophisticated technical staff, is much easier to build than one simple and robust enough for use by the general public. Commercial products need to work despite the abuse and neglect of their indifferent users.
And the challenge is much greater when you’re dealing with questions of life or death. One of the reasons that consumer electronics have advanced so rapidly is that the failure modes for these products generally aren’t terrible. If your cell phone drops a call, you just shrug and wait until reception improves. Obviously, if your self-driving car has a bug that causes it to crash, you’re going to be pretty upset. So a commercial self-driving car product would need to be over-engineered for safety, with redundant systems and the ability to detect and recover from instrument and mechanical failures. Presumably the Google car doesn’t do this; it relies on the human driver to notice a problem and grab the wheel to recover.
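To give a flavor of what “redundant systems” and failure detection mean in practice, here’s a minimal sketch in Python of majority-style sensor voting with a fail-safe fallback. The sensor values, thresholds, and responses are all hypothetical illustrations; a real automotive system would be vastly more elaborate:

```python
# Minimal sketch of redundant-sensor voting with a fail-safe fallback.
# All readings, thresholds, and failure responses here are hypothetical;
# real automotive systems use far more sophisticated fault tolerance.

from statistics import median

def fused_speed(readings, max_disagreement=2.0):
    """Fuse redundant speed readings (m/s); return None on a detected fault."""
    good = [r for r in readings if r is not None]  # drop dead sensors
    if len(good) < 2:
        return None  # too few working sensors to cross-check
    mid = median(good)
    # If any surviving sensor disagrees wildly with the median, flag a fault.
    if any(abs(r - mid) > max_disagreement for r in good):
        return None
    return mid

def control_step(readings):
    speed = fused_speed(readings)
    if speed is None:
        # With no human to grab the wheel, the car must fail safe on its own:
        # e.g., slow down and pull over rather than keep driving blind.
        return "FAIL_SAFE: pull over"
    return f"OK: proceed at {speed:.1f} m/s"

print(control_step([27.1, 27.3, 26.9]))  # all sensors agree -> proceed
print(control_step([27.1, None, 40.0]))  # one dead, one wild -> fail safe
```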
A third obstacle is liability. If a human driver crashes his car, he may get sued if he harms a third party, but the settlement is likely to be relatively small (at most the value of his insurance coverage), and the car manufacturer probably won’t be sued unless there’s evidence of a mechanical defect. In contrast, if a self-driving car crashes, the manufacturer is almost guaranteed to be sued, and given the way our liability system works, the jury is likely to hand down a much larger judgment against the deep-pocketed defendant. And this will be true even if the car’s overall safety record is much better than that of the average human driver. The fact that the crash victim was statistically less likely to crash in the self-driving car will not impress a jury that has just heard testimony from the victim’s grieving husband. So self-driving cars will need to be much safer than normal human-driven cars before car companies will be prepared to shoulder the liability risk.
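To make that asymmetry concrete, here’s a toy expected-liability calculation. Every number in it is hypothetical, chosen only to illustrate the structure of the argument, not drawn from any actual crash or settlement data:

```python
# Toy expected-liability comparison. Every figure here is made up,
# picked only to show how a safer car can still mean more manufacturer risk.

crashes_per_million_miles_human = 5.0
avg_payout_human = 50_000        # roughly capped by the driver's insurance

crashes_per_million_miles_sdc = 2.5   # assume the robot car is twice as safe
avg_payout_sdc = 2_000_000            # jury verdict against a deep-pocketed firm

human_cost = crashes_per_million_miles_human * avg_payout_human
sdc_cost = crashes_per_million_miles_sdc * avg_payout_sdc

print(f"Human driver liability per million miles:  ${human_cost:,.0f}")
print(f"Manufacturer liability per million miles: ${sdc_cost:,.0f}")
# Under these made-up figures: $250,000 vs. $5,000,000 -- the car that
# crashes half as often still carries 20x the liability per mile.
```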
Finally, there’s the regulatory environment. Regulators are likely to be even more risk-averse than auto company executives. People are much more terrified of dying in novel ways than in mundane ones; that’s why we go to such ridiculous lengths to make air travel safer even though it’s already much safer than driving. The TSA gets blamed if a terrorist blows up an airplane; it doesn’t get blamed if the cost and inconvenience of flying cause 1,000 extra people to die on the highways. By the same token, if regulators approve a self-driving car that goes on to kill someone, they’ll face a much bigger backlash than if their failure to approve self-driving cars leads to preventable deaths at the hands of human drivers. So even if the above factors all break in Ryan’s favor, I’m counting on the timidity of government regulators to drag the approval process out beyond 2026.
Will I win? I hope I don’t. The benefits of self-driving cars—both to me personally and to society at large—would dwarf the value of the bet. But unfortunately I think Ryan’s daughter is going to need to get a driver’s license, because self-driving cars won’t come onto the market until after she reaches adulthood.
I wrote a series of articles for Ars Technica about self-driving cars in 2008.
Update: Ryan makes his case. I’ll just make one final point: I think 16 years may be enough time to overcome either the technical or the legal hurdles alone. But the two problems will have to be attacked in sequence, not in parallel. That is, the policy debate won’t begin in earnest until prototype self-driving cars start to show safety records comparable to those of human-driven cars, and that still seems to be at least a decade off. And then, even in the best case, there will be several years of foot-dragging by policymakers.