A Bit of Self-Promotion

Scott Woolley of Fortune made my day by writing this flattering profile of yours truly:

Much as libertarians argue that supporting freedom in both the bedroom and the boardroom is not only a viable political philosophy but a logically consistent one, Lee argues that techies need to consider “the possibility that the open-vs-closed debate might be orthogonal to the free-markets-vs-regulation debate—that one can be pro-openness and anti-regulation.”

Asked about the Republican opposition to net neutrality, he is withering, saying that the right has blundered into an ignorant opposition to open networks. “Free marketeers…because they see people use left-wing rhetoric to talk about this openness stuff they assume ‘I must be on the other side,’” Lee sighs. “The dynamic becomes self-perpetuating.”

Lee sees Republican opposition to unlicensed spectrum (of the sort that makes Wi-Fi possible) or to any alteration of patent rights as similarly ignorant, based on what he calls “a vulgar version of the Coase Theorem.”

That theorem, which helped economist Ronald Coase win the Nobel Prize, states that as long as property rights are clearly defined, the market will maximize wealth irrespective of who starts out owning the property in question.

Lee says that libertarians often ignore Coase’s critical caveat—that his theorem holds only in the absence of transaction costs, which in the real world tend to be substantial. Thus the vulgar version becomes: “More property rights are always better.”

For readers who are interested in more details about these arguments, I discussed spectrum policy and property rights back in April. I wrote about the Coase theorem and patent policy last year (and here’s more on patents in the software industry). And I did a pro-market, pro-openness paper on network neutrality for Cato a couple of years ago.


On the Constitutionality of ObamaCare

I get what Julian, Radley, and Megan are saying, and in principle I agree with them. A fair-minded reading of the Constitution and the debates that surrounded its enactment makes it pretty clear that the founders’ goal was to create a federal government of far more limited powers than the one we’ve got. But I’m finding it awfully hard to get excited about the federalist boomlet sparked by Judge Hudson’s ruling that the ObamaCare insurance mandate is unconstitutional. I’m not a big fan of ObamaCare, and I wouldn’t be too sad to see portions of it struck down by the courts. But the rank opportunism of the Republican position here is so obvious that I have trouble working up much enthusiasm.

There’s nothing particularly outrageous about the health care mandate. The federal government penalizes people for doing, and not doing, any number of things. I’m currently being punished by the tax code for failing to take out a mortgage, for example. I’d love it if the courts embraced a jurisprudence that placed limits on the federal government’s ability to engage in this kind of social engineering via the tax code. But no one seriously expects that to happen. The same Republican members of Congress who are applauding Hudson’s decision have shown no qualms about using the tax code for coercive purposes.

The test case for conservative seriousness about federalism was Gonzales v. Raich, the medical marijuana case. Justices Scalia and Kennedy flubbed that opportunity, ruling that a woman growing a plant in her backyard was engaging in interstate commerce and that this activity could therefore be regulated by the federal government. If Scalia and Kennedy now vote with the majority to strike down portions of ObamaCare, it will be pretty obvious that they regard federalism as little more than a flimsy pretext for invalidating statutes they don’t like. Or, worse, for giving a president they don’t like a black eye.

Now, to be clear, libertarians like my colleagues at Cato aren’t guilty of hypocrisy on this score. But by jumping on a bandwagon driven by hypocrites and partisan hacks, I worry that they’ll permanently damage the brand of constitutionally limited government.


The Case against DDOS

Last night I wrote a slightly hyperbolic tweet about the Anonymous denial of service attacks, and I’ve gotten a surprising amount of pushback on it. So I thought I’d expand on my thinking here.

In a distributed denial of service attack, or DDOS, a large number of computers send data to a target computer with the intent of saturating its network links and/or overloading the server, thereby “denying service” to actual users of that server. In this case, Anonymous, a group of online vigilantes, have launched DDOS attacks against MasterCard, PayPal, and other companies that have taken anti-Wikileaks steps that they (and I) don’t approve of.

Evgeny Morozov points me to this post, which reports that a German court was apparently persuaded that DDOS attacks are a form of civil disobedience, like a sit-in. This comparison strikes me as not just wrong but kind of ridiculous.

The Internet is a collaborative network built on strong implicit norms of trust. There’s no global governance body, and many of the Internet’s norms have no formal enforcement mechanism, but things work pretty well because most people behave responsibly. This responsible behavior comes in two parts. Ordinary users obey the norms without even knowing about them because they are baked into the hardware and software we all use. For example, all your life you’ve been observing the TCP backoff norm, probably without knowing about it, because your computer’s networking stack has been programmed to follow it.
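
For the curious, here’s what that norm looks like in miniature. This is a toy sketch of my own in Python, not code from any real networking stack: when a send fails, which usually signals congestion, a well-behaved sender waits exponentially longer before each retry instead of immediately hammering the link again.

```python
import random
import time

def send_with_backoff(try_send, max_attempts=8, base_delay=0.1):
    """Toy model of the backoff norm: each failure doubles the wait,
    and random jitter keeps many senders from retrying in lockstep.
    Real TCP stacks implement a more sophisticated version of the
    same idea automatically, on behalf of every application."""
    for attempt in range(max_attempts):
        if try_send():
            return True                       # the network accepted our data
        delay = base_delay * (2 ** attempt)   # exponential backoff
        time.sleep(delay + random.uniform(0, delay))  # plus jitter
    return False                              # give up and report failure
```

The striking thing is that nothing enforces this. It works because it’s baked into the software everyone runs.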

Then there’s a worldwide community of engineers and sysadmins who collaborate to track down problems and cut off the small minority of people who abuse the Internet’s norms. The decentralized nature of the Internet means that no single administrator has all that much power, so their ability to respond to an attack often depends on cooperation from the systems administrators who run the network from which the attack originates. These folks are fighting a continuous, largely invisible, battle to keep the Internet running smoothly. The fact that most people never think about them is a testament to how well they do their job.

DDOS attacks work by exploiting the Internet’s open architecture and flouting its norms. Most computers on the Internet are provisioned with significantly more bandwidth than they’re expected to use at any given moment; this allows us to have fast downloads when we need them, while leaving the extra capacity available for others to use when we don’t. Similarly, servers depend on relatively good behavior from client computers. Major Internet protocols like TCP/IP and HTTP don’t have any formal mechanism for limiting the amount of server capacity used by any given client; they simply trust that the vast majority of clients won’t behave maliciously. Systems administrators deal with the small minority that do on a case-by-case basis.
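
To make “case-by-case” a little more concrete, here’s a minimal sketch, entirely my own illustration rather than any particular company’s code, of the kind of per-client throttling an administrator might bolt on after an attack: a token-bucket limiter that caps how fast any single address can make requests.

```python
import time
from collections import defaultdict

class TokenBucketLimiter:
    """Each client address gets a bucket that refills at `rate` tokens
    per second, up to `capacity`. Serving a request spends one token;
    a client that drains its bucket gets turned away until it refills."""

    def __init__(self, rate=5.0, capacity=10.0):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)
        self.last_seen = {}

    def allow(self, client_ip):
        now = time.monotonic()
        elapsed = now - self.last_seen.get(client_ip, now)
        self.last_seen[client_ip] = now
        # refill this client's bucket for the time that has passed
        self.tokens[client_ip] = min(self.capacity,
                                     self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1.0:
            self.tokens[client_ip] -= 1.0
            return True   # serve the request
        return False      # throttle this client
```

Notice the cost: somebody has to write, deploy, and maintain this, and well-behaved users behind a shared address can get caught in the same net. That’s the Internet’s reservoir of trust being drawn down.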

I’d be willing to bet that at this very moment, a small army of sysadmins at Anonymous’s various targets, and their ISPs, are working around the clock to respond to Anonymous’s attacks. They’re probably not getting paid overtime. These folks likely had no influence over their superiors’ decisions with respect to Wikileaks. And indeed, given the pro-civil-liberties slant of geeks in general, I bet a lot of them are themselves Wikileaks supporters. Some of them may even be exerting what small influence they have inside their respective companies to stand up to the government’s attacks on Wikileaks.

DDOS attacks take advantage of, and deplete, the Internet’s reservoir of trust. They are something like a kid who lives in a small town where no one locks their doors going into his neighbors’ houses and engaging in petty vandalism. The cost of his behavior isn’t so much cleaning up the vandalism as the fact that if more than a handful of people behaved that way everyone in town would be forced to put locks on their doors. Likewise, the damage of a DDOS attack isn’t (just) that the target website goes down for a few hours, it’s that sysadmins around the world are forced to build infrastructure to combat future DDOS attacks.

The comparison to sit-ins is particularly absurd because the whole point of a sit-in is its PR value. You’re trying to call the public’s attention to a business’s misbehavior and motivate other customers to pressure the business to change its behavior. You do this by being unfailingly polite and law-abiding (aside from the trespass of the sit-in itself), and by being willing to spend some time in prison to demonstrate your sincerity and respect for the law. In contrast, the people who are prevented from using MasterCard’s website may not even realize that Anonymous is responsible, and to the extent they do find out, it’s through media accounts that are (justifiably) universally negative. In addition to all the other problems with what they’re doing, it’s a terrible PR strategy that generates sympathy for Anonymous’s targets and reinforces the public’s impression of Wikileaks as a rogue organization.

I suspect that most of the Anonymous participants simply don’t know any better. If this arrest is representative, the people involved are literal and metaphorical children, throwing high-tech temper tantrums without any real understanding of the consequences of their actions. These attacks are doing no serious damage to their nominal targets, and they create zero incentive for other corporate entities to change their behavior vis-à-vis Wikileaks. But they do significant and lasting damage to a variety of third parties. I don’t literally want them to “rot in prison,” but I’ll have zero sympathy if they’re caught and prosecuted.

Update: One final obvious point that I forgot to mention: while I don’t know the details of this particular attack, it’s relatively common for DDOS attacks to utilize botnets, a.k.a. networks of computers that have been remotely compromised and are being used without their owners’ knowledge or permission. Even if everything I wrote above is wrong, the use of botnets—for this or any other purpose—is flatly immoral and illegal, and no DDOS attack that utilizes them should be considered a legitimate form of political protest.


Framing the DREAM Act

Reihan responds. His thoughts are, as always, worth reading in full. But let me just quickly comment on this:

But my sense is that there is an upper bound on the number of foreigners that U.S. citizens will welcome to work and settle in the United States in any given year. I don’t know what that number is, but I imagine it’s not much higher than, say, 1.5 million per annum at the very high end. I am willing to accept that as a starting point, i.e., we’re not going to allow 3 million or 7 million or even 1.6 million. Chances are that a number smaller than 1.5 million would reflect the preferences of a voting majority, e.g., 800,000.

I don’t think this model of how the electorate thinks about immigration—first decide how many “slots” there are going to be and then decide how to dole them out—bears any relationship to reality. I think Reihan is right that if you ask the average voter how many immigrants she’d like to admit each year, she’d give a depressingly small answer. But fortunately, public opinion on this (like every other issue) is underdetermined, incoherent, and highly susceptible to framing. There are many voters who think we admit too many immigrants in general but can be persuaded that it’s worth making an exception for certain immigrants whose situations seem particularly sympathetic. So not only does each DREAM kid not take up a full “slot,” but I suspect that the process of debating and enacting the DREAM Act will actually increase the number of “slots” by improving the public view of undocumented immigrants in general.

This is how politics works. If you want fewer abortions, you focus on “partial birth” abortions. If you want legal pot, you start with medical marijuana. If you want universal vouchers, you start by focusing on vouchers for kids in failing schools. If you want to end the estate tax, you focus on the relatively small minority of families who are forced to sell off their businesses to pay the tax man. This kind of half-measure is not only much easier to enact, but it also tends to move public opinion to be more favorable to the 200-proof version. In an ideal world, voters would be perfectly rational and omniscient and we wouldn’t have to play these kinds of games. But they’re not, so we do.


The Implicit Message of the DREAM Act

I have a lot of respect for my friend Reihan Salam, but boy was this frustrating to read:

As I understand it, the DREAM Act implicitly tells us that I should value the children of unauthorized immigrants more than the children of other people living in impoverished countries. If we assume that all human beings merit equal concern, this is obviously nonsensical. Indeed, all controls on migration are suspect under that assumption.

Even so, there is a broad consensus that the United States has a right to control its borders, and that the American polity can decide who will be allowed to settle in the United States. Or to put this another way, we’ve collectively decided that the right to live and work in the U.S. will be treated as a scarce good.

So look, there are two basic ways to look at a political issue: on the policy merits and on how it fits into broader ideological narratives. On the policy merits, the case for DREAM is simple and compelling: there are hundreds of thousands of kids who, through no fault of their own, are trapped in a kind of legal limbo. We should provide them with some way to get out of that legal limbo. I can think of any number of ways to improve the DREAM Act, but this is the only bill with a realistic chance of passing Congress in the near future, and it’s a lot better than nothing.

Reihan objects that “we’ve collectively decided” that the opportunity to live and work in the United States “will be treated as” a scarce good. I suspect he’s chosen this weird passive-voice phrasing because he knows better than to claim straight up that the opportunity to live in the United States is a scarce good. It’s not. We should let the DREAM kids stay here and we should be letting a lot more kids from poorer countries come here. Doing the one doesn’t in any way prevent us from doing the other.

OK, so that’s the policy substance. Now let’s talk about the politics. Reihan’s position here is superficially similar to my stance on the Founder’s Visa: I opposed it because it was a largely symbolic gesture that would help only a tiny number of people (many of whom don’t especially need it) while reinforcing a political narrative I find odious: that having more foreigners around is a burden we’re willing to accept only if those foreigners provide large benefits to Americans. Reihan is, I take it, making a roughly similar claim: that DREAM helps a relatively small number of people, that the people it helps aren’t necessarily the most deserving, and that DREAM reinforces an objectionable political narrative.

I don’t think any of these claims stand up to scrutiny. On the first two points, DREAM is simply not in the same league as the Founder’s Visa. The Founder’s Visa would help a tiny number of unusually privileged would-be immigrants. DREAM would help a much larger number of relatively poor (by American, if not world, standards) immigrants.

So that brings us to the core political question: does passing DREAM “implicitly tell us” something we’d rather not be told? This is where I think Reihan is furthest off base. From my perspective, the fundamental question in the immigration debate is: do we recognize immigrants as fellow human beings who are entitled to the same kind of empathy we extend to other Americans, or do we treat them as opponents in a zero-sum world whose interests are fundamentally opposed to our own? Most recent immigration reform proposals, including the Founder’s Visa and the various guest worker proposals, are based on the latter premise: immigrants in general are yucky, but certain immigrants are so useful to the American economy that we’ll hold our collective noses and let them in under tightly controlled conditions.

The DREAM Act is different. The pro-DREAM argument appeals directly to Americans’ generosity and sense of fairness, not our self-interest. The hoops kids must jump through to qualify for DREAM are focused on self-improvement for the kids themselves, not (like the Founder’s Visa) on maximizing benefits for American citizens. There’s no quota on the number of kids who are eligible, and at the end of the process the kids get to be full-fledged members of the American community.

Nothing about this says that we should “value the children of unauthorized immigrants more than the children of other people living in impoverished countries.” I wish Congress would also enact legislation to help children of people living in impoverished countries. If Reihan has a realistic plan for doing that, I’ll be among its earliest and most enthusiastic supporters. Unfortunately, I think the political climate in the United States makes that unlikely to happen any time soon. But that’s not the fault of the DREAM Act or its supporters. And voting down DREAM will make more ambitious reforms less, not more, likely.


Bottom-Up Chat: Stephen Smith and Market Urbanism

Grad school has kept me too busy to do a lot of blogging recently, but tomorrow night we’ll be doing one of our periodic chats here at Bottom-Up. My guest will be Stephen Smith of the excellent Market Urbanism blog. He’s a recent graduate of Georgetown University with a degree in international political economy, and will soon be moving back to DC to start an internship at Reason magazine. We’ll talk about libertarianism, urbanism, and the (depressingly small) overlap between the two. And anything else that strikes your fancy.

Please join us tomorrow (Wednesday) night starting at 9:30 PM Eastern. Just click “General Chat” in the lower-right-hand corner of the browser window.

Update: The chat is finished. We had a lively discussion, the transcript of which should still be visible for a while. Click the “general chat” tab below to read it.


The Master Switch and State-Worship

Over at the Technology Liberation Front in recent weeks, Adam Thierer has been doing a series of posts about Tim Wu’s new book, The Master Switch. Adam wasn’t a fan. Wu himself jumped in with a response, in which he focused on the nature of libertarianism and suggested that Adam is ignoring the libertarian-friendly aspects of his book.

I jumped into the debate with a guest post of my own:

Adam began his first post by stating that he “disagrees vehemently with Wu’s general worldview and recommendations, and even much of his retelling of the history of information sectors and policy.” This is kind of silly. In fact, Adam and Wu (and I) want largely the same things out of information technology markets: we want competitive industries with low barriers to entry in which many firms compete to bring consumers the best products and services. We all reject the prevailing orthodoxy of the 20th century, which said that the government should be in the business of picking technological winners and losers. Where we disagree is over means: we classical liberals believe that the rules of property, contract, and maybe a bit of antitrust enforcement are sufficient to yield competitive markets, whereas left-liberals fear that too little regulation will lead to excessive industry concentration. That’s an important argument to have, and I think the facts are mostly on the libertarians’ side. But we shouldn’t lose sight of the extent to which we’re on the same side, fighting against the ancient threat of government-sponsored monopoly.

My friend Kerry Howley coined the term “state-worship” to describe libertarians who insist on making the government the villain of every story. For most of history, the state has, indeed, been the primary enemy of human freedom. Liberals like Wu are too sanguine about the dangers of concentrating too much power in Washington, D.C. But to say the state is an important threat to freedom is not to say that it’s the only threat worth worrying about. Wu tells the story of Western Union’s efforts to use its telegraph monopoly to sway the election of 1876 to Republican Rutherford B. Hayes. That effort would be sinister whether or not Western Union’s monopoly was the product of government interference with the free market. Similarly, the Hays Code (Hollywood’s mid-century censorship regime) was an impediment to freedom of expression whether or not the regime was implicitly backed by the power of the state. Libertarians are more reluctant to call in the power of the state to combat these wrongs, but that doesn’t mean we shouldn’t be concerned with them.

You can read the rest of my response over at TLF.

The Master Switch is a great read, and I expect to write more about it in the future.


Open User Interfaces and Power Users

I know I said I’d write about Google next, but I wanted to comment on the discussion in the comments to Luis’s post on open UIs. Here’s Bradley Kuhn, executive director of the Software Freedom Conservancy:

I think you may have missed one fundamental fact that Tim *completely* ignored: there are many amazingly excellent free software UIs. He can get to this fundamental misconception because of the idea that UIs are somehow monolithically “good” or “bad”. I actually switched to GNU/Linux, in 1992, because the UI was the best available. I’ve briefly tried other UIs, and none of them are better — for the type of user I am: a hacker-mindset who wants single key strokes fully programmable in every application I use. Free Software excels at this sort of user interface.

This is precisely because the majority of people in the communities are those types of users. The only logical conclusion one can make, therefore, is that clearly Free Software communities don’t have enough members from the general population who aren’t these types of users. There is no reason not to believe that when Free Software communities become more diverse, UIs that are excellent for other types of users will emerge.

I know exactly what Bradley is talking about here. For more than a decade, I’ve been a religious user of vi, a text editor that’s been popular with geeks since the 1980s. It has many, many features, each of which is invoked using a sequence of esoteric keystrokes. An experienced vi user can edit text much more efficiently than he could using a graphical text editor like Notepad.
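
A few representative commands give the flavor; each takes effect instantly, with no menus involved:

```
x              delete the character under the cursor
dd             delete the current line
5dd            delete the next five lines
cw             change the word under the cursor
:%s/foo/bar/g  replace every "foo" with "bar" in the file
```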

But the learning curve is steep; one has to invest hundreds of hours of practice before the investment pays off in higher productivity. And indeed, the learning curve never really flattens: even after ten years and thousands of hours as a vi user, there are still many vi features I haven’t learned because the only way to learn about them is to spend time reading through documentation like this.

So there are two competing theories of user interface quality here. Bradley’s theory says that the best user interface is the one that maximizes the productivity of its most proficient user. Mine says that the best user interface helps a new user become proficient as quickly as possible.

Obviously, there’s no point in saying one of these theories is right and the other is wrong. Bradley should use the tools he finds most useful. But I think it’s pretty obvious that my conception is more broadly applicable. For starters, the technically unsavvy vastly outnumber the technically savvy in the general human population. While power users like Bradley may enjoy learning about and configuring their software tools, most users view this sort of thing as a chore; they want software tools that work without having to think about them too much, just as they want cars that get them to their destination without having to open the hood.

Bradley speculates that free software will start to accommodate these users as more of them join the free software movement. But I think this fundamentally misunderstands the problem. By definition, the users who prefer to spend as little time as possible thinking about a given piece of software are not the kind of users who are going to become volunteer developers for it. The “scratch your own itch” model of software development simply can’t work if the “itch” is that the software demands too much from the user.

With that said, I think it’s a mistake to draw too sharp a distinction between “power users” and the general public. I’m pretty clearly a “power user” when I’m working on my own Mac-based laptop and Android-based phone, yet when I borrow my wife’s Windows laptop and Blackberry phone I spend a fair amount of time poking around trying to figure out how to do basic stuff. Even after I’ve used a product for a while, I still sometimes want to do stuff with it that I haven’t done before. Back when I used Microsoft Office on a regular basis, I frequently found myself wanting to use a new feature and wasting a lot of time trying to get it to work properly. And recent developments, like “Web 2.0” and the emergence of “app stores,” mean that I’m trying out a lot more new software than I used to. A good user interface helps me quickly figure out whether an application does what I want it to.

I was willing to invest a lot of time in becoming a vi expert because editing text is a core part of the craft of programming. But I don’t want to invest a lot of time learning to operate my iPod more effectively. Even those of us who gain deep expertise with a few software products will and should be relative amateurs with respect to most of the software we use. And so a big part of good user interface design is making the software accessible to new users.


Luis Villa on Open vs. Bottom-Up

As usual, I agree with pretty much everything Luis Villa has to say about yesterday’s post here:

Tim makes the assumption that open projects can’t have strong, coherent vision- that “[t]he decentralized nature of open source development means that there’s always a bias toward feature bloat.” There is a long tradition of the benevolent but strong dictator who is willing to say no in open projects, and typically a strong correlation between that sort of leadership and technical success. (Linux without Linus would be Hurd.) It is true that historically these BDFLs have strong technology and implementation vision, but pretty poor UI design vision. There are a couple of reasons for this: hackers outnumber designers everywhere by a large number, not just in open source; hackers aren’t taught design principles in school; in the open source meritocracy, people who can implement almost always outrank people who can’t; and finally that many hackers just aren’t good at putting themselves in someone else’s shoes. But the fact that many BDFLs exist suggests that “open” doesn’t have to mean “no vision and leadership”- those can be compatible, just as “proprietary” and “essentially without vision or leadership” can also be compatible.

This isn’t to say that open development communities are going to suddenly become bastions of good design any time soon; they are more likely to be “bottom up” and therefore less vision-centered, for a number of reasons. Besides the problems I’ve already listed, there are also problems on the design side- several of the design texts I’ve read perpetuate an “us v. them” mentality about designers v. developers, and I’ve met several designers who buy deeply into that model. Anyone who is trained to believe that there must be antagonism between designers and developers won’t have the leadership skills to become a healthy BDFL; whereas they’ll be reasonably comfortable in a command-and-control traditional corporation (even if, as is often the case, salespeople and engineering in the end trump design.) There is also a platform competition problem- given that there is a (relatively) limited number of people who care about software design, and that those people exclusively use Macs, the application ecosystem around Macs is going to be better than other platforms (Linux, Android, Windows, etc.) because all the right people are already there. This is a very virtuous cycle for Apple, and a vicious one for most free platforms. But this isn’t really an open v. closed thing- this is a case of “one platform took a huge lead in usability and thereby attracted a critical mass of usability-oriented designers” rather than “open platforms can’t attract a critical mass of usability-oriented designers”. (Microsoft, RIM, and Palm are all proof points here- they had closed platforms whose applications mostly sucked.)

There’s more good stuff where that came from. I think a big part of the problem here is that I chose my title poorly. It should have been “Bottom-up UI Design Sucks,” which is closer to what I was trying to say. I definitely didn’t mean to pick on free software in particular; Luis is quite right that there are plenty of proprietary software vendors who suffer from exactly the same kind of problems.


Open User Interfaces Suck

[Image: Two Great User Interfaces]

On his Surprisingly Free podcast last week, Jerry Brito had a great interview with Chicago law professor Joseph Isenbergh about the competition between open and closed systems. As we’ve seen, there’s been a lot of debate recently about the iPhone/Android competition and the implications for the perennial debate between open and closed technology platforms. In the past I’ve come down fairly decisively on the “open” side of the debate, criticizing Apple’s iPhone App Store and the decision to extend the iPhone’s closed architecture to the iPad.

In the year since I wrote those posts, a couple of things have happened that have caused some evolution in my views: I used a Linux-based desktop as my primary work machine for the first time in almost a decade, and I switched from an iPhone to a Droid X. These experiences have reminded me of an important fact: the user interfaces on “open” devices tend to be terrible.

The power of open systems comes from their flexibility and scalability. The TCP/IP protocol stack that powers the Internet allows a breathtaking variety of devices—XBoxen, web servers, iPhones, laptops, and many others—to talk to each other seamlessly. When a new device comes along, it can be added to the Internet without modifying any of the existing infrastructure. And the TCP/IP protocols have “scaled” amazingly well: protocols designed to connect a handful of universities over 56 kbps links now connect billions of devices over multi-gigabit connections. TCP/IP is so scalable and flexible because its designers made as few assumptions as possible about what end-users would do with the network.

These characteristics—scalability and flexibility—are simply irrelevant in a user interface. Human beings are pretty much all the same, and their “specs” don’t really change over time. We produce and consume data at rates that are agonizingly slow by computer standards. And we’re creatures of habit; once we get used to doing things a certain way (typing on a QWERTY keyboard, say) it becomes extremely costly to retrain us to do it a different way. And so if you create an interface that works really well for one human user, it’s likely to work well for the vast majority of human users.

The hallmarks of a good user interface, then, are simplicity and consistency. Simplicity economizes on the user’s scarce and valuable attention; the fewer widgets on the screen, the more quickly the user can find the one she needs and move on to the next step. And consistency leverages the power of muscle memory: the QWERTY layout may have been arbitrary initially, but today it’s supported by massive human capital that dwarfs whatever efficiencies might be achieved by switching to another layout.

To put it bluntly, the open source development process is terrible at this. The decentralized nature of open source development means that there’s always a bias toward feature bloat. If two developers can’t decide on the right way to do something, the compromise is often to implement it both ways and leave the final decision to the user. This works well for server software; an Apache configuration file is long and hard to understand, but that’s OK because web servers mostly interact with other computers rather than people, so flexibility and scalability matter more than user-friendliness. But it works terribly for end-user software, because compromise tends to translate into clutter and inconsistency.
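
To give a sense of what I mean, here are a few directives from a typical Apache 2.x configuration (the values shown are common defaults). Every one of these is a knob the administrator is expected to understand and tune, which is reasonable for a sysadmin and hopeless for an ordinary user:

```apache
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5

<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>
```

And this is just the opening stretch of a file that runs to hundreds of lines. That flexibility is exactly what you want in a server and exactly what you don’t want in a desktop application.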

In short, if you want to create a company that builds great user interfaces, you should organize it like Apple does: as a hierarchy with a single guy who makes all the important decisions. User interfaces are simple enough that a single guy can thoroughly understand them, so bottom-up organization isn’t really necessary. Indeed, a single talented designer with dictatorial power will almost always design a simpler and more consistent user interface than a bottom-up process driven by consensus.

This strategy works best for products where the user interface is the most important feature. The iPod is a great example of this. From an engineering perspective, there was nothing particularly groundbreaking about the iPod, and indeed many geeks sneered at it when it came out. What the geeks missed was that a portable music player is an extremely personal device whose customers are interacting with it constantly. Getting the UI right is much more important than improving the technical specs or adding features.

By paring the interface down to 5 buttons and a scroll wheel, Apple enabled new customers to learn how to use it in a matter of seconds. The uncluttered, NeXT-derived column view made efficient use of the small screen and provided a consistent visual metaphor across the UI. And by coupling the iPod with its already-excellent iTunes software, Apple was able to offload many functions, like file deletion and playlist creation, onto the user’s PC. You can only achieve this kind of minimalist elegance if a single guy has ultimate authority for the design.

The iPhone also bears the hallmarks of Steve Jobs’s top-down design philosophy. It’s a much more complex device than the iPod, but Apple goes to extraordinary lengths to ensure that every iPhone app “feels” like it was designed by the same guy. They’ve created a visual language that allows experienced iPhone users to tell at a glance how a new application works. As just one example, dozens of applications use the “column view” metaphor popularized by the iPod. The iPhone lacks a hardware back button, but every application puts the software “back” button in exactly the same place on the screen. An iPhone user quickly develops “muscle memory” to automatically click that part of the screen to perform a “back” operation. The decision to use the upper-left-hand corner of the screen (rather than the lower-left, say) was much less important than getting every application to do it the same way.

In short, I don’t think it’s a coincidence that the devices with the most elegant UIs come from a company with a top-down, almost cult-like, corporate culture. In my next post I’ll talk about how Google’s corporate culture shapes its own products.

Update: Reader Gabe points to this excellent 2004 post by John Gruber on the subject of free software usability.
