This morning I had a bit of an argument on Twitter with Ryan Avent about the future of self-driving cars. He thinks his infant daughter will never need to learn to drive because self-driving cars will be ready for prime time before she reaches her 16th birthday in 2026. I’m more skeptical. I think self-driving cars are coming, but I doubt we’ll see them available to consumers before the 2030s. So I proposed, and Ryan accepted, the following bet:
I bet you $500 that on your daughter’s 16th birthday, it won’t be possible and legal for someone with no driver’s license to hop into a self-driving car in DC, give it an address in Philly, take a nap, and wake up at her destination 3-4 hours later (depending on traffic, obviously).
The car must be generally commercially available–not a research prototype or limited regulatory trial. It can be either purchased or a rented “taxi.” And obviously there can’t be anyone helping to guide the vehicle either in the car or over the air.
Tom Lee has agreed to be the judge in case the outcome is disputed 16 years hence, with the following additional provision:
Let me suggest the proviso that the “nap” criterion in the bet be eliminated if it turns out that by 2026 nootropic pharmaceutical or cybernetic interventions mean that sleep is no longer possible.
So that’s the bet. Let me give a couple of thoughts for why I think I’m going to win.
First, although there are already self-driving cars on the road, those cars are “self-driving” with some major caveats. There’s a human being behind the wheel who takes control when the car approaches a potentially tricky situation like a pedestrian or bicyclist. I haven’t talked to the team that made the Google car, but unless there’s been fantastic progress since I talked to some AI researchers in 2008, the vehicles are probably not equipped to deal gracefully with adverse weather conditions like fog, rain, and ice. They’ll get steadily better at this, but it’s going to take a lot of work to reach the point where they can safely handle all the real-world situations they might encounter.
Second, the path from research prototype to mainstream commercial product is almost always longer than people expect. Building a system that works when built in a lab and operated by sophisticated technical staff is always much easier than building a system that’s simple and user-friendly enough for use by the general public. Commercial products need to work despite the abuse and neglect of their indifferent users.
And the challenge is much greater when you’re dealing with questions of life or death. One of the reasons that consumer electronics have advanced so rapidly is that the failure modes for these products generally aren’t terrible. If your cell phone drops a call, you just shrug and wait until reception improves. Obviously, if your self-driving car has a bug that causes it to crash, you’re going to be pretty upset. So a commercial self-driving car product would need to be over-engineered for safety, with redundant systems and the ability to detect and recover from instrument and mechanical failures. Presumably the Google car doesn’t do this; it relies on the human driver to notice a problem and grab the wheel to recover.
A third obstacle is liability. If a human driver crashes his car, he may get sued for it if he harms a third party, but the settlement amount is likely to be relatively small (at most the value of his insurance coverage) and the car manufacturer probably won’t be sued unless there’s evidence of a mechanical defect. In contrast, if a self-driving car crashes, the car manufacturer is almost guaranteed to be sued, and given the way our liability system works the jury is likely to hand down a much larger judgment against the deep-pocketed defendant. And this will be true even if the car’s overall safety record is much better than that of the average human driver. The fact that the crash victim was statistically less likely to crash in the self-driving car will not impress the jury who’s just heard testimony from the victim’s grieving husband. So self-driving cars will need to be much safer than normal human-driven cars before car companies will be prepared to shoulder the liability risk.
Finally, there’s the regulatory environment. Regulators are likely to be even more risk-averse than auto company executives. People are much more terrified of dying in novel ways than in mundane ones; that’s why we go to such ridiculous effort to make air travel safer even though air travel is already much safer than driving. The TSA gets blamed if a terrorist blows up an airplane; it doesn’t get blamed if the cost and inconvenience of flying causes 1000 extra people to die on the highways. By the same token, if regulators approve a self-driving car that goes on to kill someone, they’ll face a much bigger backlash than if their failure to approve self-driving cars led to preventable deaths at the hands of human drivers. So even if the above factors all break in Ryan’s favor, I’m counting on the timidity of government regulators to drag the approval process out beyond 2026.
Will I win? I hope I don’t. The benefits of self-driving cars—both to me personally and to society at large—would dwarf the value of the bet. But unfortunately I think Ryan’s daughter is going to need to get a driver’s license, because self-driving cars won’t come onto the market until after she reaches adulthood.
I wrote a series of articles for Ars Technica about self-driving cars in 2008.
Update: Ryan makes his case. I’ll just make one final point: I think 16 years may be enough time to overcome either the technical or the legal hurdles alone. But the two problems will have to be attacked in sequence, not in parallel. That is, the policy debate won’t begin in earnest until prototype self-driving cars start to show safety records comparable to human-driven cars; this still seems to be a decade off at least. And then in the best case there will be several years of foot-dragging by policymakers.
I’m excited to see how this works out, though as you know I think yours is probably the smart side of the bet to take.
Here’s a question I’d love to hear your thoughts on: *where* will the first self-driving cars be deployed? In the same way that countries deploying telephone systems late frequently wind up with much better systems than the US’s early-adopter mess of noisy copper and slow wireless data, could a country that is only now beginning to build its highway system be the first to support automated cars? Built into this question is the assumption that roadway design could make car automation much easier (passive IR elements to encode location information or mark lanes, for instance) — I could be completely wrong about that, of course. The fact that there’d be smaller existing car fleets to accommodate would probably also help.
This is complicated by the fact that poorer countries are well-positioned to build these roads, but richer countries are in a better position to make the cars that drive on them. But perhaps there’s a middle ground. I don’t know enough about India and China’s infrastructure to say whether it’s plausible for them to deploy those systems in a different way, but they’re obvious candidates for this theory.
Anyway it at least seems possible that the “DC to Philly” part of the bet could be what decides it in your favor.
My best guess, following Brad Templeton, is an authoritarian Asian regime like China or Singapore. I think we might reach technical parity between software and human drivers sometime in the 2020s. The question is whose political system will be best able to weather the short-term populist backlash that will follow the first few self-driving car crashes. Western democracies have a lot of advantages, but this kind of policy risk-taking is not among them. Also, roads in developing countries tend to be more dangerous, so it’s easier for self-driving cars to do better than the status quo there. Plus, for reasons Templeton has laid out, self-driving taxis make small, light, cheap vehicles more economically viable, so the advantages might be more compelling in poor countries.
I don’t actually think architectural features like lane markers will matter very much. Getting cars to drive on typical roads under normal circumstances is practically a solved problem today. What’s hard is managing the various failure modes. And infrastructure doesn’t help you there because by definition you want it to work even in environments where the markers are missing, sensors get confused, etc.
I’m not sure I agree with you on negligence, and I can see a couple of counterarguments to the claim that this will lead to more litigation and liability.
Any self-driving cars will have to be approved by the Department of Transportation (or perhaps by Congressional or state legislation), no?
Once you have a federal agency saying something is acceptable and safe, there is a lot of caselaw saying that a court can’t find individuals who are complying with an agency’s regulations negligent.
Scott,
That’s a good point. I’m not sure how strong that presumption is, though. For example, hasn’t Merck faced hundreds of lawsuits from Vioxx users despite FDA approvals for that drug?
As someone who hasn’t read much about this, I imagine the hardest part isn’t automating driving a car per se, but handling driving amid many other cars and the little quirks of driving surrounded by others. How well does the tech replicate the soft signals of driving (waving someone ahead, or being waved ahead, for instance)? Will it require cars to communicate with each other directly?
Mike,
I think you’re right that this sort of thing is one of the key challenges. The 2007 Urban Challenge required cars to drive around a simulated urban environment and interact with other robocars as well as human drivers. AFAIK, they didn’t make any use of “soft signals,” and indeed this is one of the really hard things about interacting with pedestrians: to know if it’s safe to go you often have to read their body language. My sense is that self-driving cars will need to find other ways to deal with these situations because reading body language is definitely not going to be viable in the next couple of decades.
There will be “legacy” cars on the roads for the foreseeable future, so any solution that requires car-to-car communication won’t work. Probably self-driving cars will just have to be conservative, courteous drivers and give human-driven vehicles a wide berth. They also have the advantage of faster reaction times, so if a human driver behaves erratically they may have a better chance of avoiding a collision.
With all due respect, as the parent of a blind child, I sincerely hope you lose your bet. Nothing would give her greater independence as an adult than the ability to travel non-walkable distances by herself. It would truly transform the lives of blind adults, and make a dent in the ridiculously high unemployment rate among the blind (estimated variously from 33% to as high as 70%).
I hope you lose too — I think everyone does! But I think you will probably win. It’s my guess that we’ll see some intermediate solution relatively soon — like, you manually drive your car on normal streets, but you can go on autopilot when you pull onto a highway on-ramp. Highway navigation seems like it would be much easier to handle.
Who wins the bet in that case? 🙂
Arguments For:
Once the system is capable of demonstrably out-performing human drivers (with regard to safety), it would be no stretch to imagine a future where it might not even be legal for a human to drive. Or it might be prohibitively expensive, from an insurance standpoint.
As an analogy: in first-person-shooter video games, AI characters can already certainly outperform humans (within the game environment, game physics, and controlled events) where reflexes and “sub-tactical” decision-making are concerned; they really must be “crippled” in order to make games playable. But humans are still better at big-picture tactical decisions (bots can usually be observed, their behavior predicted, and their tactics outsmarted through repetition).
It’s not clear how that applies here currently: how quickly can sensors detect obstacles, how quickly can the CPU process the data, assess a response, and actuate the vehicle’s controls, and is that faster and safer than a human can manage? (And from a “systems” perspective: will an “unnaturally fast” driver’s reactions cause human drivers to react differently? One vehicle in traffic is part of a complex system, and all of the cars interact.)
It might also be that self-driving cars can “pack tighter” on freeways and drive at higher speeds with consistency, unlike human drivers, who have been shown to hit the brakes inconsistently when freeway traffic tightens, causing traffic jams. Human drivers might even be banned from rush-hour driving, if this is the case.
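The “pack tighter” intuition can be made concrete with a toy lane-capacity calculation. This is a back-of-envelope sketch, not a traffic-engineering model: the speeds, reaction times, and standstill gap below are all assumed round numbers, and real following distances depend on braking performance, not just reaction lag.

```python
# Toy model: each car leaves a headway of (speed * reaction time) plus a
# fixed standstill gap, so a lane's steady-state throughput is
#   q = speed / (speed * t_react + gap)   vehicles per second.
# All numbers are illustrative assumptions.

def lane_throughput(speed_mps: float, reaction_s: float, gap_m: float = 7.0) -> float:
    """Vehicles per hour one lane can carry at a steady speed."""
    spacing_m = speed_mps * reaction_s + gap_m  # front bumper to front bumper
    return 3600 * speed_mps / spacing_m

human = lane_throughput(30.0, 1.5)  # ~1.5 s human perception-reaction time
robot = lane_throughput(30.0, 0.3)  # hypothetical 0.3 s sensor-to-brake lag

print(f"human-driven lane:  {human:.0f} vehicles/hour")
print(f"self-driving lane:  {robot:.0f} vehicles/hour")
```

Under these assumptions the shorter reaction lag roughly triples lane capacity at highway speed, which is why the claim that automated cars could relieve congestion (and that mixing in slower-reacting human drivers dilutes the benefit) is at least arithmetically plausible.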
Arguments Against:
Having looked at some example cars (not in person, but via photos in articles on the internet), and at the collection of sensors, electronics, and other gear required to *accomplish* this (and as much of a futurist as I personally am), I’d say this hardware is non-trivial, and I don’t see it becoming trivial anytime soon. In terms of materials cost, power consumption, weight, and physical size, and given the current automotive trends of cost reduction, efficiency gains, and emissions reduction, adding this type of system to the “car of today,” let alone the “car of the future,” is a really significant change.
One can look back 16 years in automotive history, at the car of 1994/95, and see really very little significant change. (Actually, I’m seeing a lot less interior room…)
Look forward 16 years, and given the direction in which cars must surely evolve (hybrids, electrics) under the assumptions we have today: do you see where a couple of cubic feet of extra hardware would go? If the car of 16 years from now is electric, can you see where the power for the active sensors and CPU is going to come from, and how that will affect performance and range? And looking 16 years into the future at the typical middle-class standard of living and average income, will the average consumer pay MORE for a self-driving feature that leaves them with less interior room (carrying capacity) and a heavier car with less range, flexibility, and power?
Only if they have no choice (because all manufacturers have gone the same direction). Or if I’m underestimating how much this technology can be miniaturized and optimized for the automotive environment. Or if some of my above assumptions have come to pass, and these features are incorporated by legislative (or judicial) fiat.
I’m personally on Ryan’s side in this, for entirely cynical reasons. Since the degree of interest-group capture in the Congress is quite high, I am guessing that it actually goes through before the engineers who’re implementing the system are entirely comfortable with it happening. After all, it has the potential to sell an entire generation of new cars (which interests rich people) and to enhance the viability of the suburbs for a little bit longer (which interests old people).
I hope you’re wrong, and I’m guessing that you might be.
1. I imagine that self-driving cars will come about through very small incremental changes to car hardware, i.e. cars that stay in their lane, brake when they sense impending collisions, etc.
2. We will move in gradual steps where you are required to have your hands near the wheel, you have to be awake, you can’t be chemically impaired, etc., and slowly those requirements go away.
3. 16 years will erode a lot of the uncertainty around AI.
4. People really want to drive without having to pay attention.
If you are wrong, I’m guessing that the age of the child or the sleeping part is what wins the bet for you.
I hope you win. I passionately love to drive, and I hate, hate, hate the idea of giving up the pleasure of driving. I wonder if we could achieve a similar reduction in highway injuries at a much lower cost by investing more in driver training?