The Case against DDOS

Last night I wrote a slightly hyperbolic tweet about the Anonymous denial of service attacks, and I’ve gotten a surprising amount of pushback on it. So I thought I’d expand on my thinking here.

In a distributed denial of service attack, or DDOS, a large number of computers send data to a target computer with the intent of saturating its network links and/or overloading the server, thereby “denying service” to actual users of that server. In this case Anonymous, a group of online vigilantes, have launched DDOS attacks against MasterCard, PayPal, and other companies that have taken anti-Wikileaks steps that they (and I) don’t approve of.

Evgeny Morozov points me to this post, which reports that a German court was apparently persuaded that DDOS attacks are a form of civil disobedience, like a sit-in. This comparison strikes me as not just wrong but kind of ridiculous.

The Internet is a collaborative network built on strong implicit norms of trust. There’s no global governance body or formal enforcement mechanism for many of the Internet’s norms, but things work pretty well because most people behave responsibly. This responsible behavior comes in two parts. Ordinary users obey the norms without even knowing about them because they are baked into the hardware and software we all use. For example, all your life you’ve been observing the TCP backoff norm, probably without knowing about it, because your computer’s networking stack has been programmed to follow it.
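(For the curious: the backoff norm says, roughly, that when the network starts dropping your packets you should wait, and keep waiting longer, before trying again, rather than retransmitting as fast as you can. Here is a toy Python sketch of the idea; the send_packet callback and the specific retry parameters are purely illustrative, and the real logic in your operating system’s TCP stack is considerably more elaborate.)

```python
import random
import time

def send_with_backoff(send_packet, max_retries=5, base_delay=0.5):
    """Retry a failed send, waiting roughly twice as long after each failure.

    Toy illustration of the backoff idea; real TCP does this inside the kernel.
    """
    for attempt in range(max_retries):
        if send_packet():
            return True
        # Back off exponentially, with jitter so that many senders that failed
        # at the same moment don't all retry in lockstep.
        delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
        time.sleep(delay)
    return False
```

The important point is that the waiting is voluntary: nothing physically stops a badly behaved sender from skipping it and hammering the network anyway.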

Then there’s a worldwide community of engineers and sysadmins who collaborate to track down problems and cut off the small minority of people who abuse the Internet’s norms. The decentralized nature of the Internet means that no single administrator has all that much power, so their ability to respond to an attack often depends on cooperation from the systems administrators who run the network from which the attack originates. These folks are fighting a continuous, largely invisible, battle to keep the Internet running smoothly. The fact that most people never think about them is a testament to how well they do their job.

DDOS attacks work by exploiting the Internet’s open architecture and flouting its norms. Most computers on the Internet are provisioned with significantly more bandwidth than they’re expected to use at any given moment; this allows us to have fast downloads when we need them, while leaving the extra capacity available for others when we don’t. Similarly, servers depend on relatively good behavior from client computers. Major Internet protocols like TCP/IP and HTTP don’t have any formal mechanism for limiting the amount of server capacity used by any given client; they simply trust that the vast majority of clients won’t behave maliciously. Systems administrators deal with the small minority that do on a case-by-case basis.
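To make that concrete, here’s a rough sketch, in Python with invented names and numbers, of the kind of per-client limit a server operator might bolt on after an attack. Nothing in HTTP or TCP/IP requires anything like this; it’s extra machinery that only gets built once somebody abuses the default trust.

```python
import time
from collections import defaultdict

class PerClientLimit:
    """Allow each client at most max_requests requests per window seconds.

    Toy illustration; a real deployment would do this at the firewall or
    load balancer, not in application code.
    """

    def __init__(self, max_requests=100, window=60.0):
        self.max_requests = max_requests
        self.window = window
        self._history = defaultdict(list)  # client address -> request timestamps

    def allow(self, client_addr):
        now = time.time()
        # Keep only the requests that are still inside the window.
        recent = [t for t in self._history[client_addr] if now - t < self.window]
        if len(recent) < self.max_requests:
            recent.append(now)
            allowed = True
        else:
            allowed = False  # this client is hogging the server; turn it away
        self._history[client_addr] = recent
        return allowed
```

The principle is the same wherever the check lives: the open protocols assume good behavior, and policing bad behavior is left to ad hoc tools like this one.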

I’d be willing to bet that at this very moment, a small army of sysadmins at Anonymous’s various targets, and their ISPs, are working around the clock to respond to Anonymous’s attacks. They’re probably not getting paid overtime. These folks likely had no influence over their superiors’ decisions with respect to Wikileaks. And indeed, given the pro-civil-liberties slant of geeks in general, I bet a lot of them are themselves Wikileaks supporters. Some of them may even be exerting what small influence they have inside their respective companies to stand up to the government’s attacks on Wikileaks.

DDOS attacks take advantage of, and deplete, the Internet’s reservoir of trust. They are something like a kid who lives in a small town where no one locks their doors going into his neighbors’ houses and engaging in petty vandalism. The cost of his behavior isn’t so much cleaning up the vandalism as the fact that if more than a handful of people behaved that way everyone in town would be forced to put locks on their doors. Likewise, the damage of a DDOS attack isn’t (just) that the target website goes down for a few hours, it’s that sysadmins around the world are forced to build infrastructure to combat future DDOS attacks.

The comparison to sit-ins is particularly absurd because the whole point of a sit-in is its PR value. You’re trying to call the public’s attention to a business’s misbehavior and motivate its other customers to pressure it to change course. You do this by being unfailingly polite and law-abiding (aside from the trespass of the sit-in itself), and by being willing to spend some time in prison to demonstrate your sincerity and respect for the law. In contrast, the people who are prevented from using MasterCard’s website may not even realize that Anonymous is responsible, and to the extent they do find out, it’s through media accounts that are (justifiably) universally negative. In addition to all the other problems with what they’re doing, it’s a terrible PR strategy that generates sympathy for Anonymous’s targets and reinforces the public’s impression of Wikileaks as a rogue organization.

I suspect that most of the Anonymous participants simply don’t know any better. If this arrest is representative, the people involved are literal and metaphorical children, throwing high-tech temper tantrums without any real understanding of the consequences of their actions. These attacks are doing no serious damage to their nominal targets, and they create zero incentive for other corporate entities to change their behavior vis-a-vis Wikileaks. But they do significant and lasting damage to a variety of third parties. I don’t literally want them to “rot in prison,” but I’ll have zero sympathy if they’re caught and prosecuted.

Update: One final obvious point that I forgot to mention: while I don’t know the details of this particular attack, it’s relatively common for DDOS attacks to utilize botnets, that is, networks of computers that have been remotely compromised and are being used without their owners’ knowledge or permission. Even if everything I wrote above is wrong, the use of botnets—for this or any other purpose—is flatly immoral and illegal, and no DDOS attack that utilizes them should be considered a legitimate form of political protest.


Framing the DREAM Act

Reihan responds. His thoughts are, as always, worth reading in full. But let me just quickly comment on this:

But my sense is that there is an upper bound on the number of foreigners that U.S. citizens will welcome to work and settle in the United States in any given year. I don’t know what that number is, but I imagine it’s not much higher than, say, 1.5 million per annum at the very high end. I am willing to accept that as a starting point, i.e., we’re not going to allow 3 million or 7 million or even 1.6 million. Chances are that a number smaller than 1.5 million would reflect the preferences of a voting majority, e.g., 800,000.

I don’t think this model of how the electorate thinks about immigration—first decide how many “slots” there are going to be and then decide how to dole them out—bears any relationship to reality. I think Reihan is right that if you ask the average voter how many immigrants she’d like to admit each year, she’d give a depressingly small answer. But fortunately, public opinion on this (like every other issue) is underdetermined, incoherent, and highly susceptible to framing. There are many voters who think we admit too many immigrants in general but can be persuaded that it’s worth making an exception for certain immigrants whose situations seem particularly sympathetic. So not only does each DREAM kid not take up a full “slot,” but I suspect that the process of debating and enacting the DREAM Act will actually increase the number of “slots” by improving the public view of undocumented immigrants in general.

This is how politics works. If you want fewer abortions, you focus on “partial birth” abortions. If you want legal pot, you start with medical marijuana. If you want universal vouchers, you start by focusing on vouchers for kids in failing schools. If you want to end the estate tax, you focus on the relatively small minority of families who are forced to sell off their business to pay the tax man. This kind of half-measure is not only much easier to enact, but it also tends to move public opinion toward the 200-proof version. In an ideal world, voters would be perfectly rational and omniscient and we wouldn’t have to play these kinds of games. But they’re not, so we do.


The Implicit Message of the DREAM Act

I have a lot of respect for my friend Reihan Salam, but boy was this frustrating to read:

As I understand it, the DREAM Act implicitly tells us that I should value the children of unauthorized immigrants more than the children of other people living in impoverished countries. If we assume that all human beings merit equal concern, this is obviously nonsensical. Indeed, all controls on migration are suspect under that assumption.

Even so, there is a broad consensus that the United States has a right to control its borders, and that the American polity can decide who will be allowed to settle in the United States. Or to put this another way, we’ve collectively decided that the right to live and work in the U.S. will be treated as a scarce good.

So look, there are two basic ways to look at a political issue: on the policy merits and on how it fits into broader ideological narratives. On the policy merits, the case for DREAM is simple and compelling: there are hundreds of thousands of kids who, through no fault of their own, are trapped in a kind of legal limbo. We should provide them with some way to get out of that legal limbo. I can think of any number of ways to improve the DREAM Act, but this is the only bill with a realistic chance of passing Congress in the near future, and it’s a lot better than nothing.

Reihan objects that “we’ve collectively decided” that the opportunity to live and work in the United States “will be treated as” a scarce good. I suspect he’s chosen this weird passive-voice phrasing because he knows better than to claim outright that the opportunity to live in the United States is a scarce good. It’s not. We should let the DREAM kids stay here, and we should be letting a lot more kids from poorer countries come here. Doing the one doesn’t in any way prevent us from doing the other.

OK, so that’s the policy substance. Now let’s talk about the politics. Reihan’s position here is superficially similar to my stance on the Founder’s Visa: I opposed it because it was a largely symbolic gesture that would help only a tiny number of people (many of whom don’t especially need it) while reinforcing a political narrative I find odious: that having more foreigners around is a burden we’re willing to accept only if those foreigners provide large benefits to Americans. Reihan is, I take it, making a roughly similar claim: that DREAM helps a relatively small number of people, that the people it helps aren’t necessarily the most deserving, and that DREAM reinforces an objectionable political narrative.

I don’t think any of these claims stand up to scrutiny. On the first two points, DREAM is simply not in the same league as the Founder’s Visa. The Founder’s Visa would help a tiny number of unusually privileged would-be immigrants. DREAM would help a much larger number of relatively poor (by American, if not world, standards) immigrants.

So that brings us to the core political question: does passing DREAM “implicitly tell us” something we’d rather not be told? This is where I think Reihan is furthest off base. From my perspective, the fundamental question in the immigration debate is: do we recognize immigrants as fellow human beings who are entitled to the same kind of empathy we extend to other Americans, or do we treat them as opponents in a zero-sum world whose interests are fundamentally opposed to our own? Most recent immigration reform proposals, including the Founder’s Visa and the various guest worker proposals, are based on the latter premise: immigrants in general are yucky, but certain immigrants are so useful to the American economy that we’ll hold our collective noses and let them in under tightly controlled conditions.

The DREAM Act is different. The pro-DREAM argument appeals directly to Americans’ generosity and sense of fairness, not our self-interest. The hoops kids must jump through to qualify for DREAM are focused on self-improvement for the kids themselves, not (like the Founder’s Visa) on maximizing benefits for American citizens. There’s no quota on the number of kids who are eligible, and at the end of the process the kids get to be full-fledged members of the American community.

Nothing about this says that we should “value the children of unauthorized immigrants more than the children of other people living in impoverished countries.” I wish Congress would also enact legislation to help children of people living in impoverished countries. If Reihan has a realistic plan for doing that, I’ll be among its earliest and most enthusiastic supporters. Unfortunately, I think the political climate in the United States makes that unlikely to happen any time soon. But that’s not the fault of the DREAM Act or its supporters. And voting down DREAM will make more ambitious reforms less, not more, likely.


Bottom-Up Chat: Stephen Smith and Market Urbanism

Grad school has kept me too busy to do a lot of blogging recently, but tomorrow night we’ll be doing one of our periodic chats here at Bottom-Up. My guest will be Stephen Smith of the excellent Market Urbanism blog. He’s a recent graduate of Georgetown University with a degree in international political economy, and he’ll soon be moving back to DC to start an internship at Reason magazine. We’ll talk about libertarianism, urbanism, and the (depressingly small) overlap between the two. And anything else that strikes your fancy.

Please join us tomorrow (Wednesday) night starting at 9:30 PM Eastern. Just click “General Chat” in the lower-right-hand corner of the browser window.

Update: The chat is finished. We had a lively discussion, the transcript of which should still be visible for a while. Click the “general chat” tab below to read it.


The Master Switch and State-Worship

Over at the Technology Liberation Front, Adam Thierer has been doing a series of posts in recent weeks about Tim Wu’s new book, The Master Switch. Adam wasn’t a fan. Wu himself jumped in with a response, focusing on the nature of libertarianism and suggesting that Adam is ignoring the libertarian-friendly aspects of his book.

I jumped into the debate with a guest post of my own:

Adam began his first post by stating that he “disagrees vehemently with Wu’s general worldview and recommendations, and even much of his retelling of the history of information sectors and policy.” This is kind of silly. In fact, Adam and Wu (and I) want largely the same things out of information technology markets: we want competitive industries with low barriers to entry in which many firms compete to bring consumers the best products and services. We all reject the prevailing orthodoxy of the 20th century, which said that the government should be in the business of picking technological winners and losers. Where we disagree is over means: we classical liberals believe that the rules of property, contract, and maybe a bit of antitrust enforcement are sufficient to yield competitive markets, whereas left-liberals fear that too little regulation will lead to excessive industry concentration. That’s an important argument to have, and I think the facts are mostly on the libertarians’ side. But we shouldn’t lose sight of the extent to which we’re on the same side, fighting against the ancient threat of government-sponsored monopoly.

My friend Kerry Howley coined the term “state-worship” to describe libertarians who insist on making the government the villain of every story. For most of history, the state has, indeed, been the primary enemy of human freedom. Liberals like Wu are too sanguine about the dangers of concentrating too much power in Washington, D.C. But to say the state is an important threat to freedom is not to say that it’s the only threat worth worrying about. Wu tells the story of Western Union’s efforts to use its telegraph monopoly to sway the election of 1876 to Republican Rutherford B. Hayes. That effort would be sinister whether or not Western Union’s monopoly was the product of government interference with the free market. Similarly, the Hays code (Hollywood’s mid-century censorship regime) was an impediment to freedom of expression whether or not the regime was implicitly backed by the power of the state. Libertarians are more reluctant to call in the power of the state to combat these wrongs, but that doesn’t mean we shouldn’t be concerned with them.

You can read the rest of my response over at TLF.

The Master Switch is a great read, and I expect to write more about it in the future.


Open User Interfaces and Power Users

I know I said I’d write about Google next, but I wanted to comment on the discussion in the comments to Luis’s post on open UIs. Here’s Bradley Kuhn, executive director of the Software Freedom Conservancy:

I think you may have missed on fundamental fact that Tim *completely* ignored: there are many amazingly excellent free software UIs. He can get to this fundamental misconception because of the idea that UIs are somehow monolithically “good” or “bad”. I actually switched to GNU/Linux, in 1992, because the UI was the best available. I’ve tried briefly other UIs, and none of them are better — for the type of user I am: a hacker-mindset who wants single key strokes fully programmable in every application I use. Free Software excels at this sort of user interface.

This is precisely because the majority of people in the communities are those types of users. The only logical conclusion one can make, therefore, is that clearly Free Software communities don’t have enough members from the general population who aren’t these types of users. There is no reason not to believe when Free Software communities become more diverse, UIs that are excellent for other types of users will emerge.

I know exactly what Bradley is talking about here. For more than a decade, I’ve been a religious user of vi, a text editor that’s been popular with geeks since the 1980s. It has many, many features, each of which is invoked using a sequence of esoteric keystrokes. An experienced vi user can edit text much more efficiently than he could using a graphical text editor like Notepad.

But the learning curve is steep; one has to invest hundreds of hours of practice before the investment pays off in higher productivity. And indeed, the learning curve never really flattens: even after ten years and thousands of hours as a vi user, there are still many vi features I haven’t learned because the only way to learn about them is to spend time reading through documentation like this.

So there are two competing theories of user interface quality here. Bradley’s theory says that the best user interface is the one that maximizes the productivity of its most proficient user. Mine says that the best user interface helps a new user become proficient as quickly as possible.

Obviously, there’s no point in saying one of these theories is right and the other is wrong. Bradley should use the tools he finds most useful. But I think it’s pretty obvious that my conception is more broadly applicable. For starters, the technically unsavvy vastly outnumber the technically savvy in the general human population. While power users like Bradley may enjoy learning about and configuring their software tools, most users view this sort of thing as a chore; they want software tools that work without having to think about them too much, just as they want cars that get them to their destination without having to open the hood.

Bradley speculates that free software will start to accommodate these users as more of them join the free software movement. But I think this fundamentally misunderstands the problem. By definition, the users who prefer to spend as little time as possible thinking about a given piece of software are not the kind of users who are going to become volunteer developers for it. The “scratch your own itch” model of software development simply can’t work if the “itch” is that the software demands too much from the user.

With that said, I think it’s a mistake to draw too sharp a distinction between “power users” and the general public. I’m pretty clearly a “power user” when I’m working on my own Mac-based laptop and Android-based phone, yet when I borrow my wife’s Windows laptop and Blackberry phone I spend a fair amount of time poking around trying to figure out how to do basic stuff. Even after I’ve used a product for a while, I still sometimes want to do stuff with it that I haven’t done before. Back when I used Microsoft Office on a regular basis, I frequently found myself wanting to use a new feature and wasting a lot of time trying to get it to work properly. And recent developments, like “Web 2.0” and the emergence of “app stores,” mean that I’m trying out a lot more new software than I used to. A good user interface helps me quickly figure out whether an application does what I want it to.

I was willing to invest a lot of time in becoming a vi expert because editing text is a core part of the craft of programming. But I don’t want to invest a lot of time learning to operate my iPod more effectively. Even those of us who gain deep expertise with a few software products will and should be relative amateurs with respect to most of the software we use. And so a big part of good user interface design is making the software accessible to new users.


Luis Villa on Open vs. Bottom-Up

As usual, I agree with pretty much everything Luis Villa has to say about yesterday’s post here:

Tim makes the assumption that open projects can’t have strong, coherent vision- that “[t]he decentralized nature of open source development means that there’s always a bias toward feature bloat.” There is a long tradition of the benevolent but strong dictator who is willing to say no in open projects, and typically a strong correlation between that sort of leadership and technical success. (Linux without Linus would be Hurd.) It is true that historically these BDFLs have strong technology and implementation vision, but pretty poor UI design vision. There are a couple of reasons for this: hackers outnumber designers everywhere by a large number, not just in open source; hackers aren’t taught design principles in school; in the open source meritocracy, people who can implement almost always outrank people who can’t; and finally that many hackers just aren’t good at putting themselves in someone else’s shoes. But the fact that many BDFLs exist suggests that “open” doesn’t have to mean “no vision and leadership”- those can be compatible, just as “proprietary” and “essentially without vision or leadership” can also be compatible.

This isn’t to say that open development communities are going to suddenly become bastions of good design any time soon; they are more likely to be “bottom up” and therefore less vision-centered, for a number of reasons. Besides the problems I’ve already listed, there are also problems on the design side- several of the design texts I’ve read perpetuate an “us v. them” mentality about designers v. developers, and I’ve met several designers who buy deeply into that model. Anyone who is trained to believe that there must be antagonism between designers and developers won’t have the leadership skills to become a healthy BDFL; whereas they’ll be reasonably comfortable in a command-and-control traditional corporation (even if, as is often the case, salespeople and engineering in the end trump design.) There is also a platform competition problem- given that there is a (relatively) limited number of people who care about software design, and that those people exclusively use Macs, the application ecosystem around Macs is going to be better than other platforms (Linux, Android, Windows, etc.) because all the right people are already there. This is a very virtuous cycle for Apple, and a vicious one for most free platforms. But this isn’t really an open v. closed thing- this is a case of “one platform took a huge lead in usability and thereby attracted a critical mass of usability-oriented designers” rather than “open platforms can’t attract a critical mass of usability-oriented designers”. (Microsoft, RIM, and Palm are all proof points here- they had closed platforms whose applications mostly sucked.)

There’s more good stuff where that came from. I think a big part of the problem here is that I chose my title poorly. It should have been “Bottom-up UI Design Sucks,” which is closer to what I was trying to say. I definitely didn’t mean to pick on free software in particular; Luis is quite right that there are plenty of proprietary software vendors who suffer from exactly the same kind of problems.


Open User Interfaces Suck

[Image: Two Great User Interfaces]

On his Surprisingly Free podcast last week, Jerry Brito had a great interview with Chicago law professor Joseph Isenbergh about the competition between open and closed systems. As we’ve seen, there’s been a lot of debate recently about the iPhone/Android competition and the implications for the perennial debate between open and closed technology platforms. In the past I’ve come down fairly decisively on the “open” side of the debate, criticizing Apple’s iPhone App Store and the decision to extend the iPhone’s closed architecture to the iPad.

In the year since I wrote those posts, a couple of things have happened that have caused some evolution in my views: I used a Linux-based desktop as my primary work machine for the first time in almost a decade, and I switched from an iPhone to a Droid X. These experiences have reminded me of an important fact: the user interfaces on “open” devices tend to be terrible.

The power of open systems comes from their flexibility and scalability. The TCP/IP protocol stack that powers the Internet allows a breathtaking variety of devices—XBoxen, web servers, iPhones, laptops, and many others—to talk to each other seamlessly. When a new device comes along, it can be added to the Internet without modifying any of the existing infrastructure. And the TCP/IP protocols have “scaled” amazingly well: protocols designed to connect a handful of universities over 56 kbps links now connect billions of devices over multi-gigabit connections. TCP/IP is so scalable and flexible because its designers made as few assumptions as possible about what end-users would do with the network.

These characteristics—scalability and flexibility—are simply irrelevant in a user interface. Human beings are pretty much all the same, and their “specs” don’t really change over time. We produce and consume data at rates that are agonizingly slow by computer standards. And we’re creatures of habit; once we get used to doing things a certain way (typing on a QWERTY keyboard, say) it becomes extremely costly to retrain us to do it a different way. And so if you create an interface that works really well for one human user, it’s likely to work well for the vast majority of human users.

The hallmarks of a good user interface, then, are simplicity and consistency. Simplicity economizes on the user’s scarce and valuable attention; the fewer widgets on the screen, the more quickly the user can find the one she needs and move on to the next step. And consistency leverages the power of muscle memory: the QWERTY layout may have been arbitrary initially, but today it’s supported by massive human capital that dwarfs whatever efficiencies might be achieved by switching to another layout.

To put it bluntly, the open source development process is terrible at this. The decentralized nature of open source development means that there’s always a bias toward feature bloat. If two developers can’t decide on the right way to do something, the compromise is often to implement it both ways and leave the final decision to the user. This works well for server software; an Apache configuration file is long and hard to understand, but that’s OK because web servers mostly interact with other computers rather than people, so flexibility and scalability are more important than user-friendliness. But it tends to work terribly for end-user software, because compromise tends to translate into clutter and inconsistency.
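To illustrate what that compromise looks like by the time it reaches the user, here’s a deliberately contrived Python sketch; the option name and behavior are invented, not taken from any real project:

```python
# Contrived example: two developers couldn't agree on whether a single click
# should open a file. The "compromise": implement both behaviors and add one
# more checkbox to an already crowded preferences dialog.
DEFAULT_PREFS = {"open_on_single_click": False}

def should_open_file(click_count, prefs=DEFAULT_PREFS):
    threshold = 1 if prefs["open_on_single_click"] else 2
    return click_count >= threshold
```

Multiply that by a few hundred unresolved disagreements and you get the sprawling preference dialogs and inconsistent behavior I’m complaining about.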

In short, if you want to create a company that builds great user interfaces, you should organize it like Apple does: as a hierarchy with a single guy who makes all the important decisions. User interfaces are simple enough that a single guy can thoroughly understand them, so bottom-up organization isn’t really necessary. Indeed, a single talented designer with dictatorial power will almost always design a simpler and more consistent user interface than a bottom-up process driven by consensus.

This strategy works best for products where the user interface is the most important feature. The iPod is a great example of this. From an engineering perspective, there was nothing particularly groundbreaking about the iPod, and indeed many geeks sneered at it when it came out. What the geeks missed was that a portable music player is an extremely personal device whose customers are interacting with it constantly. Getting the UI right is much more important than improving the technical specs or adding features.

By paring the interface down to five buttons and a scroll wheel, Apple enabled new customers to learn how to use the iPod in a matter of seconds. The uncluttered, NeXT-derived column view made efficient use of the small screen and provided a consistent visual metaphor across the UI. And by coupling the iPod with its already-excellent iTunes software, Apple was able to offload many functions, like file deletion and playlist creation, onto the user’s PC. You can only achieve this kind of minimalist elegance if a single guy has ultimate authority over the design.

The iPhone also bears the hallmarks of Steve Jobs’s top-down design philosophy. It’s a much more complex device than the iPod, but Apple goes to extraordinary lengths to ensure that every iPhone app “feels” like it was designed by the same guy. They’ve created a visual language that allows experienced iPhone users to tell at a glance how a new application works. As just one example, dozens of applications use the “column view” metaphor popularized by the iPod. The iPhone lacks a hardware back button, but every application puts the software “back” button in exactly the same place on the screen. An iPhone user quickly develops “muscle memory” to automatically click that part of the screen to perform a “back” operation. The decision to use the upper-left-hand corner of the screen (rather than the lower-left, say) was much less important than getting every application to do it the same way.

In short, I don’t think it’s a coincidence that the devices with the most elegant UIs come from a company with a top-down, almost cult-like, corporate culture. In my next post I’ll talk about how Google’s corporate culture shapes its own products.

Update: Reader Gabe points to this excellent 2004 post by John Gruber on the subject of free software usability.


A Bet

This morning I had a bit of an argument on Twitter with Ryan Avent about the future of self-driving cars. He thinks his infant daughter will never need to learn to drive because self-driving cars will be ready for prime time before she reaches her 16th birthday in 2026. I’m more skeptical. I think self-driving cars are coming, but I doubt we’ll see them available to consumers before the 2030s. So I proposed, and Ryan accepted, the following bet:

I bet you $500 that on your daughter’s 16th birthday, it won’t be possible and legal for someone with no driver’s license to hop into a self-driving car in DC, give it an address in Philly, take a nap, and wake up at her destination 3-4 hours later (depending on traffic, obviously).

The car must be generally commercially available–not a research prototype or limited regulatory trial. It can be either purchased or a rented “taxi.” And obviously there can’t be anyone helping to guide the vehicle either in the car or over the air.

Tom Lee has agreed to be the judge in case the outcome is disputed 16 years hence, with the following additional provision:

Let me suggest the proviso that the “nap” criterion in the bet be eliminated if it turns out that by 2026 nootropic pharmaceutical or cybernetic interventions mean that sleep is no longer possible.

So that’s the bet. Let me give a few reasons why I think I’m going to win.

First, although there are already self-driving cars on the road, those cars are “self-driving” with some major caveats. There’s a human being behind the wheel who takes control when the car approaches a potentially tricky situation like a pedestrian or bicyclist. I haven’t talked to the team that made the Google car, but unless there’s been fantastic progress since I talked to some AI researchers in 2008, the vehicles are probably not equipped to deal gracefully with adverse weather conditions like fog, rain, and ice. They’ll get steadily better at this, but it’s going to take a lot of work to reach the point where they can safely handle all the real-world situations they might encounter.

Second, the path from research prototype to mainstream commercial product is almost always longer than people expect. Building a system that works in the lab when operated by sophisticated technical staff is always much easier than building one that’s simple and user-friendly enough for use by the general public. Commercial products need to work despite the abuse and neglect of their indifferent users.

And the challenge is much greater when you’re dealing with questions of life or death. One of the reasons that consumer electronics have advanced so rapidly is that the failure modes for these products generally aren’t terrible. If your cell phone drops a call, you just shrug and wait until reception improves. Obviously, if your self-driving car has a bug that causes it to crash, you’re going to be pretty upset. So a commercial self-driving car product would need to be over-engineered for safety, with redundant systems and the ability to detect and recover from instrument and mechanical failures. Presumably the Google car doesn’t do this; it relies on the human driver to notice a problem and grab the wheel to recover.

A third obstacle is liability. If a human driver crashes his car, he may get sued for it if he harms a third party, but the settlement amount is likely to be relatively small (at most the value of his insurance coverage) and the car manufacturer probably won’t be sued unless there’s evidence of a mechanical defect. In contrast, if a self-driving car crashes, the car manufacturer is almost guaranteed to be sued, and given the way our liability system works the jury is likely to hand down a much larger judgment against the deep-pocketed defendant. And this will be true even if the car’s overall safety record is much better than that of the average human driver. The fact that the crash victim was statistically less likely to crash in the self-driving car will not impress the jury who’s just heard testimony from the victim’s grieving husband. So self-driving cars will need to be much safer than normal human-driven cars before car companies will be prepared to shoulder the liability risk.

Finally, there’s the regulatory environment. Regulators are likely to be even more risk-averse than auto company executives. People are much more terrified of dying in novel ways than in mundane ones; that’s why we go to such ridiculous lengths to make air travel safer despite the fact that air travel is already much safer than driving. The TSA gets blamed if a terrorist blows up an airplane; it doesn’t get blamed if the cost and inconvenience of flying causes 1,000 extra people to die on the highways. By the same token, if regulators approve a self-driving car that goes on to kill someone, they’ll face a much bigger backlash than if their failure to approve self-driving cars leads to preventable deaths at the hands of human drivers. So even if the above factors all break in Ryan’s favor, I’m counting on the timidity of government regulators to drag the approval process out beyond 2026.

Will I win? I hope I don’t. The benefits of self-driving cars—both to me personally and to society at large—would dwarf the value of the bet. But unfortunately I think Ryan’s daughter is going to need to get a driver’s license, because self-driving cars won’t come onto the market until after she reaches adulthood.

I wrote a series of articles for Ars Technica about self-driving cars in 2008.

Update: Ryan makes his case. I’ll just make one final point: I think 16 years may be enough time to overcome either the technical or the legal hurdles alone. But the two problems will have to be attacked in sequence, not in parallel. That is, the policy debate won’t begin in earnest until prototype self-driving cars start to show safety records comparable to human-driven cars; this still seems to be a decade off at least. And then in the best case there will be several years of foot-dragging by policymakers.


Openness, Vegetarianism, and Lived Experiences

Last week, Russ Roberts had libertarian tech policy scholar Tom Hazlett on his excellent EconTalk podcast to talk about the Google-vs-Apple battle in the mobile phone market, and the implications for open and closed platforms. One of my favorite things about EconTalk is that when Roberts and a guest agree about a topic, he usually tries pretty hard to provide a fair and sympathetic account of the “other side,” so that listeners can get a realistic idea of the shape of the debate and decide for themselves which side is right. He didn’t do that here. Instead, Hazlett repeatedly heaped scorn on the pro-openness side of the debate, and while Roberts was more diplomatic, he didn’t really make any effort to explain why the pro-openness side thinks as it does, even for the sake of argument.

I jumped into the comment section and pointed this out to him, and it became clear that Roberts is less hostile to the pro-openness position than genuinely unfamiliar with it. After an hour in which both he and his guest speculated that pro-openness advocates were elitist, irrational, religious, and so forth, he seemed surprised by my suggestion that there was “another side” he should have been more respectful of. He seems to believe that there are just two sides: a pro-market side that believes that “the market will sort that out if we let it,” and a pro-regulation side that wants the government to mandate the use of open technologies. The possibility that the open-vs-closed debate might be orthogonal to the free-markets-vs-regulation debate—that one can be pro-openness and anti-regulation—seemed to be a surprise to him, despite the fact that he’s had guests like that on his show in the past.

I asked Jerry Brito for his take, and he suggested an analogy that I think might help shed some light on the issue. Consider vegetarians. People become vegetarians for a wide variety of reasons, but the most common reason in the US is probably a concern for animal welfare. The Roberts/Hazlett discussion reminded me a bit of a debate over food policy between two people who have never seen a farm animal. The debate might focus on whether meat tastes better than non-meat alternatives, and speculate on why people become vegetarians: maybe they’re elitist? Maybe men become vegetarians to score with vegetarian women? Maybe they hate capitalist factory farms?

If you ask an actual vegetarian, you’ll find that most vegetarians aren’t just concerned with the intrinsic qualities of meat—whether it “works” at nourishing people—but with the effect of meat-eating on animals. But if you’ve never seen a farm animal, this argument will always have a pie-in-the-sky vibe to it. Similarly, if you’ve never developed software, pro-openness arguments will seem vague and esoteric. It requires a leap of imagination to understand someone else’s concerns without a common frame of reference. And if you’re primed to view those concerns in terms of an existing ideological debate, such as markets vs. regulation, you’re even less likely to take those concerns seriously.

Partisans for openness don’t necessarily consider a Droid a better phone than an iPhone in the narrow sense that it has a better UI or more useful applications. (To the contrary, many lament that Apple is ahead on this score.) Rather, they believe that buying an iPhone helps to shape the technology marketplace in ways that have negative long-term consequences for society. They believe that open technologies better promote values like free expression, individual autonomy, privacy, and human creativity.

This argument isn’t about government regulation. There are plenty of libertarian vegetarians who choose not to eat meat but don’t advocate making meat-eating illegal. To say that “the market will sort out” whether to eat meat is to entirely miss the point, because the point of vegetarianism is to change people’s preferences through persuasion, not merely to satisfy their existing preferences more effectively. By the same token, the goal of many openness advocates isn’t to make proprietary phones illegal, but rather to convince people to voluntarily buy open products because doing so provides large benefits for society at large. Libertarians don’t have to agree with that goal, but there’s no reason for them to be hostile to it.

Update: I wrote this post before reading this excellent article Roberts wrote about Linux in 2003. Roberts is clearly familiar with at least some of the argument for openness. Which makes the tone of his Hazlett podcast all the more puzzling. He seems to understand the practical advantages of the free software model, but not the ideology that motivates many of the volunteers to devote so many hours to the project. But the two can’t really be separated. The free software movement would be vastly less successful if not for the ideologically-motivated actions of Richard Stallman, Jeremy Allison, and dozens of others. If you think free software projects like Linux are a glorious thing, then you should take seriously the values and concerns of the people behind them. Especially if your podcast is published using an Apache web server.

Update 2: I endorse Jerry’s take on this subject.
