Kaleidoscopic Embryos and the Evolution of the Link Economy


In earlier posts I’ve linked to Climbing Mount Improbable, Richard Dawkins’s excellent explanation of the theory of evolution. One of my favorite chapters was Chapter 7, titled “Kaleidoscopic Embryos.” It surveys some of the striking regularities in the body plans of living organisms. For example, almost every animal possesses left-right symmetry. Many animals have other symmetries. Starfish, for example, are commonly 5-way radially symmetrical. Also extremely common is segmentation. The bodies of worms and centipedes essentially consist of repeated copies of the same basic segment pattern, with minor variations from one segment to the next. And indeed, more complex organisms frequently have echoes of this body pattern. Human spines, for example, consist of repeated instances of virtually identical vertebrae.

Dawkins calls these development plans “kaleidoscopic” because these organisms develop by repetition of simpler patterns, just as a kaleidoscope creates pretty pictures by using mirrors to repeat patterns of light. He argues that the emergence of kaleidoscopic development patterns is the result of a kind of “meta-evolution.” Segmentation and symmetry don’t directly improve the organism’s reproductive fitness, but they make it “better at evolving.” Imagine, for example, if each of a starfish’s five arms evolved separately. If a mutation caused an improvement in one arm, the animal would only get one-fifth the benefit a symmetrical starfish would receive.

There is an important lesson here for the debate over the decline of the newspaper industry. Take, for example, Richard Posner’s widely criticized post calling for changes to copyright law to forestall the demise of the newspaper industry. In his model of the news business, there are some mainstream news companies that are in the business of producing “the news.” Then there are a bunch of other Internet firms that have a basically parasitic relationship with the mainstream media outlets. Posner writes that “it is much easier to create a web site and free ride on other sites than to create a print newspaper and free ride on other print newspapers.”

Posner concludes that changing copyright law may be needed to “keep free riding on content financed by online newspapers from so impairing the incentive to create costly news-gathering operations” that the AP and Reuters become the only remaining producers of news.

Posner suffers from a lack of imagination, and in particular a failure to appreciate that the news industry is subject to its own kind of “meta-evolution.” On one level, individual news outlets compete with each other for readers and advertisers. Some attract readers and prosper; others lose readers and go out of business.


If that were the whole story, Posner’s argument would be pretty compelling. Luckily, it’s not. At the same time individual news organizations are competing against one another, we’re also seeing a kind of memetic evolution. News organizations are constantly experimenting with new ways to do business, and successful experiments soon get copied by other news outlets.

One of the most important examples, which I alluded to in one of my first posts, is the emergence of the link economy. Web-native publications pay a great deal of attention to who they link to, and who links to them. Over time, complex norms have evolved governing when links are expected. When an online outlet is perceived (inaccurately, perhaps) as having violated those norms, it is subject to retribution in the form of other publications withholding links to it.

I know from personal experience that online news outlets take this kind of thing very seriously. For example, Ars vehemently insisted that this was a genuine coincidence (FWIW, I believed them). They understood that a reputation for failing to credit the sources of their stories would have a big negative effect on their traffic.

There are two things to note about this. First, the link economy helps to solve the problem that Posner is worried about: how to make sure that original news-gathering is properly rewarded. As the news business gets more decentralized, inbound links will become a more and more important source of traffic. A norm of linking to your sources ensures that sites that do real reporting will get more traffic. And a norm of never linking to sites that fail to observe the first norm ensures that it is widely followed.

Second, it’s not a coincidence that this link-oriented “social contract” has emerged in the blogosphere. Norms are subject to competitive pressures just as individual sites are. Over time, practices that make the blogosphere work better will grow with the sites that adopt them. This is a kind of “meta-evolution”: just as changes in the structure of animals’ bodies made beneficial mutations more likely, so the structure of the news business is changing in ways that enhance the incentives to produce original content.

Unfortunately, this is hard to explain to people with a newspaper-centric worldview. The modern newspaper industry has norms of its own that were forged in a very different technological environment. The “social contract” among newspapers is based on the assumption that all major newspapers will be large, hierarchical organizations like them, and that each will employ a large number of reporters producing original content. And so when they look at the decentralized structure of the online news business, it doesn’t look like an alternative way to organize the enterprise of gathering news. It just looks like they’re cheating.


Turning PACER Around, One Document at a Time


Back in April I wrote a story for Ars Technica about PACER, the federal judiciary’s website(s) for public access to court records. Transparency is a fundamental principle of our judicial system because it allows us to fully understand the laws that bind us. Yet when it comes to accessing judicial records online, the judiciary falls well short of this ideal. PACER locks documents behind a paywall—charging 8 cents per “page”—and forces users through a cumbersome search interface that’s inscrutable to non-lawyers.

Tonight I’m excited to announce RECAP, a project I’ve been working on for a few months. It’s co-authored by fellow grad student Harlan Yu and friend Steve Schultze, under the supervision of my advisor Ed Felten.

What RECAP does is very simple: whenever a user downloads (and pays for) a document from PACER, RECAP helps the user automatically send a copy of that document to a public archive hosted by the Internet Archive. In addition (and here’s the real selling point for users), if a user searches for a document that’s already in the public archive, RECAP will point this out and offer the option of downloading the free version, saving the user money on PACER fees. Users can (to paraphrase Carl Malamud) save money as they save public access to the law.
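
For the curious, here’s a rough sketch in Python of the check-the-archive-first idea, just to make the logic concrete. To be clear, this is not RECAP’s actual code (the real thing is a browser plugin), and the archive URL pattern and the buy_from_pacer / upload_to_archive helpers below are hypothetical stand-ins:

```python
# Hypothetical sketch of the idea behind RECAP, not the extension's actual code.
# The archive URL pattern and the two callables passed in are illustrative stand-ins.
import urllib.error
import urllib.request

PUBLIC_ARCHIVE = "https://archive.org/download/gov.uscourts.example/{doc_id}.pdf"

def fetch_free_copy(doc_id):
    """Return the document from the free public archive, or None if it isn't there yet."""
    try:
        with urllib.request.urlopen(PUBLIC_ARCHIVE.format(doc_id=doc_id)) as resp:
            return resp.read()
    except urllib.error.URLError:
        return None

def get_document(doc_id, buy_from_pacer, upload_to_archive):
    """Check the public archive first; pay PACER (and then share the copy) only if needed."""
    cached = fetch_free_copy(doc_id)
    if cached is not None:
        return cached                        # a free copy exists: no PACER fee
    document = buy_from_pacer(doc_id)        # pay the per-page fee and download
    upload_to_archive(doc_id, document)      # contribute the purchased copy back
    return document
```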

Why is this important? Well, as some of my colleagues have pointed out, there’s tremendous potential for private-sector actors to do useful things with public data once it’s been pried out of the hands of the government. I don’t know what those uses might be for judicial records, but I’m confident there will be some. Moreover, judicial records are particularly important because legal decisions are binding precedent. If it’s hard for me to access the law, it’s hard for me to learn what the law requires of me. So I think it’s unacceptable that almost 20 years after the emergence of the web, judicial records still aren’t freely and publicly available.

The success of our project will depend on convincing lawyers to download our plugin. So if you know any lawyers, please pass the news along. The installation process is painless and the software is extremely user-friendly.


Patent Obviousness and Tacit Knowledge

In a decision that seems tailor-made to highlight the problems with the patent system, a Texas judge has ordered Microsoft to stop selling Word because it infringes a patent held by a small company called i4i. An IM discussion on the subject with Julian Sanchez inspired this sharp post. Julian looked at the patent in question and found it forehead-slappingly obvious, and we got to talking about how patents like this get through the review process. Julian’s initial theory was that the patent office needed to hire more computer programmers, but I was skeptical:


Tim thinks that’s not really the problem—the problem is that if an applicant wants to appeal, the examiner, who may well be a programmer, has to defend his subjective judgment of what’s “obvious” with some kind of explicit argument. And the result (says Tim) is that in practice the “non-obviousness” requirement has been largely conflated with a review of the “prior art” or previous related inventions. The upshot is that unless someone else has done almost exactly the same thing before, you’ve got a good shot at getting the patent. Maybe this is motivated by a version of the no-five-dollar-bills-on-the-sidewalk fallacy in economics: If nobody has done it before, it can’t have been all that obvious. But, of course, in a rapidly evolving area of technology, someone’s always going to be the first to do something obvious.

I think the source of the problem in the patent system may be linked to a point Friedrich Hayek made long ago about our tendency to overrate the economic importance of theoretical knowledge and vastly underestimate the importance of tacit or practical knowledge. The non-obviousness requirement, tied to the standard of an observer skilled in the appropriate art, is supposed to make the patent system sensitive to this kind of knowledge. But if examiners have to defend their judgments of obviousness, they’re essentially being required to translate their tacit knowledge into explicit knowledge—to turn an inarticulate knack into a formal set of rules or steps. And Hayek’s point was that this is often going to be difficult, if not impossible…

If you ask me how I knew the way to go about writing the translation program in question, I’m not sure I could tell you—just as we sometimes find ourselves at a loss when we’re asked to give explicit directions for a route we know by heart. Things that are “obvious” are often the hardest to explain or articulate explicitly, precisely because we’re so accustomed to apprehending them by an unconscious (and possibly itself quite dizzyingly complicated) process. The very term “obvious” comes from the Latin obviam for “in the way”—that is, right in front of you, where you can’t help but see it. Except the visual processing system we “use” automatically is vastly more sophisticated than what we’re (thus far) capable of designing. If you had to describe explicitly the unconscious process by which you see what’s right in front of you, it wouldn’t seem “obvious” at all. The same, I expect, goes for the knack of knowing how to go about solving a particular problem in coding or engineering—with the result that the patent system systematically undervalues the tacit knowledge embedded in those skill sets until it’s embedded in a piece of “prior art.”

It strikes me that obviousness is a lot like humor. Take this comic, for example:
[xkcd comic: “Can’t Sleep”]

I think it’s hilarious. Chances are if you’re a programmer you do too, and otherwise you probably don’t. And it wouldn’t do any good for me to write a paragraph explaining that it’s about the 16-bit binary representation of integers; explaining a joke famously never works. You either “get it” or you don’t. And to get it, you have to share a body of common knowledge and experiences with the person who told the joke.
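
That said, the underlying mechanics are easy enough to sketch, even if spelling them out kills the joke: a signed 16-bit integer tops out at 32,767, and counting one step past that wraps around to -32,768. Here’s a toy illustration in Python (whose own integers don’t overflow, so the wraparound is simulated):

```python
# Counting in a signed 16-bit integer: one step past 32,767 wraps to -32,768.
# (Illustrative only; Python ints don't actually overflow.)
def to_int16(n):
    """Interpret n as a two's-complement 16-bit integer."""
    n %= 2 ** 16
    return n - 2 ** 16 if n >= 2 ** 15 else n

print(to_int16(32767))      # 32767  -- the last "safe" count
print(to_int16(32767 + 1))  # -32768 -- the punch line
```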

Something similar is true of obviousness. Ideas are not intrinsically obvious or non-obvious. They’re obvious relative to some body of knowledge or experience. This is why patent law requires that an idea be obvious to a “person having ordinary skill in the art”—in this case, a typical computer programmer.

The patent system is based on the premise that the body of knowledge necessary to evaluate the obviousness of an invention can easily be obtained by a judge, perhaps with the aid of expert witness testimony. But I think there’s reason to be skeptical of this assumption. To be sure, a programmer can explain to a non-programmer how any given software technique works. But understanding how a technology works is very different from understanding whether it’s obvious or not. If you want to understand why programmers consider a concept obvious or non-obvious, you need a command of the body of knowledge programmers have, which is another way of saying you need to become a programmer.

I hadn’t realized this when I started this post, but this is actually a corollary of the Ronald Burt insight I quoted in my very first post: ideas that are commonplace in one community will seem highly creative in others. And that means we should expect generalist judges to be lousy at judging which ideas are obvious in fields they don’t know well. Which to a first approximation is all of them.


“That Was the Point”

In comments to my last post, Kevin Donovan makes a good point:

I don’t think that’s quite fair. The partying teenager doesn’t know that they may become the somber politician. Or the celebrity. Nor do they necessarily know that certain pictures are in existence (maybe lying dormant on a high school friend’s hard drive, ready to be shared).

I think we’re in for a lot of surprises and pain for my generation, but ultimately people will have to accept that young people do certain things that their older selves would not.

To be clear, I’m not claiming that there are no consequences to putting embarrassing photos on Facebook. Anything you put online could be saved by anyone who sees it, and could surface years later. That could be embarrassing, especially if you wind up getting nominated for a cabinet post.

But here’s the thing: we’ve now had three presidents in a row who are widely believed to have engaged in illicit drug use in college. When Bill Clinton faced questions about his drug use, he was forced to make the ridiculous argument that he “didn’t inhale.” By the time Barack Obama was running for president, he was able to say “I inhaled—that was the point.”

Public attitudes are changing rapidly, and I think the Internet will only accelerate that development. When Kevin gets nominated to be head of the World Bank in 2029, half the people on his Senate confirmation committee will have been users of Facebook (or its non-walled-garden successor) for the preceding 20 years. I suspect most of them will intuitively understand that it’s inappropriate to reject an otherwise-qualified nominee because they made a lewd gesture to a camera 20 years earlier.

Of course, any compromising pictures of Kevin that exist will probably get circulated, and he might find that personally embarrassing. But I suspect that personal embarrassment will be the extent of the “serious consequences.” Rejecting a job candidate because of the embarrassing photos on his Facebook page will seem as silly in 2029 as rejecting a presidential candidate because he inhaled does today.


The “Enormous Consequences” of Misunderstanding Facebook


This sentiment gets repeated so casually and matter-of-factly that it has almost become a cliché:

Social-networking sites allow members to create individualized pages that often include personal information such as relationship status, age, city of residence and birth date, as well as photos, videos and personal comments.

Yet there can be enormous consequences.

That alcohol-related post-prom picture? Someday an employer or college admission officer might come across it with a quick click on Google. Hitting delete to get rid of a questionable photo won’t help. The digital imprint never goes away and could be flitting across computer screens around the world.

The specific example of “enormous consequences” offered here is distinctly underwhelming. In the first place, has the author ever used Facebook? The photos you upload there aren’t available with a “quick click on Google.” Typically, your Facebook photos are only available within your social network. And if you like, you can restrict access further to specific lists of named friends. All the major social networking sites offer similar features.

Second, this paints an incredibly unflattering picture of the nation’s employers. First we’re supposed to believe that this prospective employer is nosy enough to gain unauthorized access to your Facebook page. Then we’re supposed to believe that he’s got so much free time on his hands that he’s willing to sort through hundreds of mostly-innocuous photos in order to find that one embarrassing post-prom picture. And finally, we’re supposed to believe that he’s going to be so priggish that once he finds a picture of you doing body shots, he’s going to stop considering your application.

I suppose there are a few employers out there that fit that description. But frankly, I wouldn’t want to work for them. Moreover, this type of employer almost certainly skews older, so today’s high school kids are less likely to ever encounter someone like that than I am.

Finally, there’s the claim that “the digital imprint never goes away and could be flitting across computer screens around the world.” It’s also true that the Earth “could be” hit by an asteroid this weekend. In practice, if you’re not a celebrity, the world just isn’t likely to care about your big beer pong victory. Those photos aren’t likely to show up on Google, and if you delete them they’ll almost certainly stay deleted.

Now, to reiterate, I don’t really fault this particular reporter, who’s simply repeating the conventional wisdom. I think the problem is that this is a subject where people’s intuitions are lagging way behind the pace of technological change.

Before the emergence of the web, information fell into basically two categories: public or private. Information that got published (say, in a newspaper) was public and available to everyone. Not only that, but information in the newspaper was highly visible; you could expect that lots of people in your town would notice if a picture of you went into the paper. On the other hand, information that didn’t get published (say, the photos in your photo album) was private and in practice only available to a tiny handful of people. “Public” and “private” were distinct categories, with very little in between.

Now there’s a spectrum. Putting a photo on Facebook is obviously more public than keeping it in a traditional photo album. But it’s also a lot less public than publishing it in the newspaper. Unfortunately, most people don’t have an intuitive category for this kind of information-sharing. And so (especially if they have limited personal experience actually using them), they lump it into the “public” category. And then they get alarmed at the idea of kids putting embarrassing pictures “on the Internet.” But “the Internet” isn’t one big publication that’s available to everyone. It’s a lot of individual sites, the most popular of which offer all sorts of ways you can protect your privacy.


The Closing of the American iPhone


There’s been a lot of controversy recently over Apple’s tight-fisted control over the market for iPhone applications. Apple reviews every application submitted for sale via the iPhone App Store and regularly rejects applications that don’t meet its standards. More galling to iPhone developers, Apple is sometimes vague about why an application gets rejected, leaving a developer who’s poured months of effort into an application stranded.

The latest round of debate was sparked by Apple’s decision to reject the Google Voice application. That got the attention of the FCC, which sent letters to Apple, Google, and AT&T demanding an explanation for the decision. My former blogging colleagues Adam Thierer and Jerry Brito weren’t too happy about this, pointing out that it’s not clear the FCC has any authority in this area. And my friend Peter Suderman chimed in in support of Apple, noting that Apple has done more to enhance the dynamism of the wireless industry than the FCC ever will.

This is an especially worrisome trend because we’re rapidly approaching a world in which every digital device has a wireless card. So if the FCC has the authority to regulate the software on the iPhone because it happens to have wireless capabilities, the same reasoning would seem to give the FCC authority over almost any digital device. And that seems like a bad idea. There’s no reason to think the software industry—wireless or otherwise—needs more government oversight. And if there is going to be government oversight, there’s no reason to think the FCC is especially qualified to provide it.

I was, therefore, a little disappointed to see the Electronic Frontier Foundation’s Fred von Lohmann jump on this particular bandwagon. I’ve been an EFF donor for years, and I think they do incredibly important work defending civil liberties. One of the things I’ve admired most about them is that they’ve always stayed tightly focused on their core mission of defending the freedom of the Internet from encroachment by the legal system. For example, they wisely stayed out of the network neutrality debate, recognizing that there were strong civil liberties arguments on both sides. Cheerleading for greater FCC regulation of the cell phone industry seems to me rather far afield from their traditional focus on defending digital freedom.

With that said, I think it’s important that critics of regulation not overstate their case. In particular, I don’t think there’s any reason to think that (as Peter puts it) “closed networks have spurred technological developments.” The iPhone is a hit because it’s a brilliant product, with excellent software and elegant industrial design. Its popularity has little or nothing to do with the fact that it’s a closed platform operating on a closed network.

An analogy might help make this point clear: many movie theaters sell relatively cheap tickets and make their profits selling expensive refreshments. To make this business model work, movie theaters spend resources to prevent their customers from bringing their own food into the theater. Now, I certainly think it’s reasonable to say that movie theaters should be free to pursue this business model. And obviously many consumers believe they’re getting a good deal. But I don’t think any of them would cite their inability to bring food into the theater as one of the theater’s selling points. The policy is clearly designed to benefit the theater, not its patrons.

The same point applies to closed technology platforms. I certainly don’t think it should be illegal for companies to build closed networks or platforms. But it’s a little silly to pretend that companies’ efforts to lock down their products somehow improve the products or promote competition. The iPhone would be an even better (although possibly less profitable) product if customers had the freedom to install unauthorized applications. And the cellular market would be even more competitive if consumers had greater freedom to combine any device with any network. The iPhone is a great product in spite of, not because of, the restrictions Apple and AT&T place on it.

Finally, libertarians in particular should bear in mind the point Ryan Radia makes here. Apple’s stranglehold over the iPhone software market only exists thanks to the Digital Millennium Copyright Act:

The DMCA hasn’t stopped millions of iPhone owners from jailbreaking their phones and installing Cydia, an unofficial alternative to the official iPhone App Store. Cydia, which lets users download banned iPhone apps like Google Voice, has been installed on a whopping one in ten iPhones, according to its developers.

But jailbreaking programs and applications like Cydia are in risky legal territory. Developers who circumvent the iPhone’s copy protection systems are at risk of being sued by Apple, as are users who run jailbreaking software. Apple maintains that jailbreaking software is illegal under federal law, though it has not taken legal action against any unauthorized iPhone developers to date.

In a free market, consumers would have a choice between using Apple-approved software or switching to competing software offered by a variety of third-party vendors. Unfortunately, the DMCA has pushed the “jailbreaking” community underground, resulting in a world in which only the most technically-savvy users have a choice. I’m not convinced this is such a serious problem that we need to give the FCC the power to regulate the mobile software industry. But it is a problem, and I think it’s important to say so.


The Intelligent Design Fallacy

The Discovery Institute, a conservative think tank based in Seattle, has become the global headquarters for anti-Darwin agitation. The Institute has groomed a roster of credentialed commentators who are more than happy to explain how and why Darwin got it wrong. In its place, they offer a concept called “intelligent design.” The idea is that someone (they carefully avoid saying who) must have guided the process of evolution (they carefully avoid saying how) in order to produce the life we see around us today.

As the centerpiece of their argument, they point to examples of what they regard as “irreducible complexity.” These are structures such as eyes, wings, or bacterial flagella that, they claim, could not have arisen via a gradual process of evolution. They suggest, for example, that there’s no way an eye could have evolved gradually. After all, a halfway-developed eye would be worse than useless. So either a full-blown eye emerged in one step (a fantastically unlikely event) or evolution must have had help from an “intelligent designer.”

The problem with this argument is that the premise isn’t true: nature is actually full of examples of “proto-eyes” at various stages of development. Richard Dawkins has a particularly lucid explanation in Climbing Mount Improbable. And Wikipedia also does a decent job of explaining the likely stages of eye evolution. Moreover, there’s evidence that eyes evolved several different times in different parts of the animal kingdom—many animals have eyes that are functionally similar but differ in subtle ways that suggest separate evolution.


Similar explanations have been offered for other phenomena commonly cited as examples of “irreducible complexity”: Dawkins gives a detailed account of how wings might have evolved in Climbing Mount Improbable, and here is an explanation of where flagella could have come from.

So on the merits, this argument isn’t very strong. Biologists do, in fact, have plausible explanations for how most of the commonly-cited examples of “irreducible complexity” could have developed gradually. However, I don’t think scientific credibility is really the point. The theory of intelligent design isn’t designed to win scientific arguments so much as to exploit a common cognitive bias among non-scientists. As I argued on Thursday: people are used to thinking about the world in terms of direct cause-and-effect relationships. If they see an orderly result, they assume that some specific person or thing must have orchestrated that result.

I think there’s a deep connection between the mistaken intuition that drives support for intelligent design and Jimmy Wales and Larry Sanger’s mistakes in building Nupedia. They’re really two sides of the same coin: Intelligent design proponents see a complex, orderly outcome and conclude that there must have been someone overseeing the process. Wales and Sanger wanted a complex, orderly outcome, and they believed that they needed someone overseeing the process in order to get there.

My friend Julian Sanchez once called this phenomenon the “intelligent design fallacy,” and it’s a term I plan to use regularly on this blog. Once you start thinking about the intelligent design fallacy as a systematic cognitive error, you start to see it everywhere you look. A lot of conspiracy theories are just special cases of the intelligent design fallacy: a phenomenon (like AIDS or the collapse of the towers on 9/11) may have a plausible naturalistic explanation, but people seem to leap to the conclusion that there must have been a human being orchestrating it. On the other side of the coin, there are lots of examples where people want to produce some outcome (like an encyclopedia or an operating system) and mistakenly assume that that outcome can only be achieved if a hierarchical institution supervises the entire process.


Innovation Often Means Doing Less

Mike Masnick links to Peter Abraham, a sports reporter who covers the Yankees. Abraham worries about the future of his profession:

I’m usually flattered if some other blog links to my work. I figure anything that brings more readers here has to be good. But for every responsible blogger out there, there are other who cut and paste the work of others and either pass it off as their own or barely credit the author.

If you know the solution, contact the newspaper industry because you will be a well-paid consultant. The problem will soon be this: If newspapers decide they can’t afford beat writers, where will that information come from? Somebody has to get on the plane, go to Toronto and ask the questions.

Mike’s observations about this are astute as always: rip-off blogs rarely get much traffic, and it’s not obvious that the world needs a dozen people covering the Yankees. But the thing that caught my eye was that last sentence: “Somebody has to get on the plane, go to Toronto and ask the questions.”


Actually, no they don’t. At the risk of belaboring the obvious, there are a lot of people in Toronto. Many of them are good writers. Some of them even cover sports for a living. And the Internet makes it easy to transmit content from place to place. So there are plenty of places the information can “come from,” and plenty of ways information about the game in Toronto can get back to readers in New York.

Most obviously, Toronto presumably has sports reporters of its own. They presumably cover Yankees away games. So one obvious approach would be for New York publications to syndicate the content of Toronto publications when the Yankees are in Toronto, and for the opposite to occur when the Blue Jays are in New York.

Now, I don’t necessarily think this is the best way to do sports reporting. And I don’t think we’re headed for a grim future when reporters can never afford plane tickets. But Abraham is asking the wrong question. The question is: “how do we make sure fans have good coverage of their favorite sports teams?” Maybe that will continue to involve Abraham flying to Toronto. But maybe it won’t; the Internet may help us come up with better ways of doing things.


That might sound like nitpicking, but I think it illustrates an important point about the kinds of arguments newspaper partisans make. More often than not, they start from the assumption that newspapers need to continue doing all the stuff they’re doing now, and then they complain that their revenues are no longer sufficient to cover those costs.

Innovation often involves abandoning an old, expensive process in favor of a new, cheaper one. We don’t have as many telephone operators, linotype operators, or stenographers as we used to because we developed technologies that allow us to do those jobs a lot more efficiently. And it’s because of these kinds of changes that businesses have been able to cut costs, lower their prices, and ultimately make us all richer.

So it’s important to distinguish between reporting in the abstract and the particular activities of today’s dominant news firms. The former is important and worth preserving. The latter simply isn’t. And when you equate the two, you wind up reasoning backwards: trying to figure out how to finance unnecessary expenditures rather than thinking about which expenditures are unnecessary. The problem is that if you’ve spent your whole professional life working at newspapers, as many print reporters have, it can be rather difficult to see the difference.


Disclosure


I’m a fan of the increasingly popular practice of bloggers proactively disclosing any financial arrangements that might influence their objectivity. Independent bloggers have to make a living like everyone else, and reasonable people disagree about which types of financial arrangements compromise a blogger’s objectivity. The good thing about transparency is that it allows readers to see what potential conflicts exist and decide for themselves which ones they’re comfortable with.

So here is my disclosure statement. The thumbnail version is as follows: No one pays me to write this blog or exercises any editorial control over it. My income over the last 18 months has come from a mixture of freelance writing, a grad school stipend, web development, and a fellowship from a libertarian-leaning organization called IHS. I do no lobbying or PR work, and none of my income sources limit my freedom to say what I please on this blog.


Wikipedia as a Bottom-Up Process

In March 2000, an entrepreneur named Jimmy Wales hired a newly-minted philosopher named Larry Sanger to start a new kind of encyclopedia. Called Nupedia, its goal was to harness the power of the Internet to build a high-quality, volunteer-driven reference work for the masses. Anxious to maximize quality, Wales and Sanger instituted a rigorous peer-review process: Before content appeared on the site, it had to be thoroughly scrutinized by subject-matter experts to ensure it was of the highest quality.

The project went nowhere. The review process proceeded at a glacial pace, with only a handful of articles being published in the first year. In January 2001, Sanger suggested augmenting Nupedia with a side-project called Wikipedia, which anyone would be able to edit without going through the cumbersome Nupedia review process. They didn’t expect Wikipedia articles to be very good, but they hoped they might produce some raw material that could later be fashioned into publishable articles by the Nupedia editors.

Of course, you know how this story ends. Nupedia continued to languish, producing a grand total of 24 articles in its lifetime. Wikipedia took off faster than anyone—including Sanger and Wales themselves—could have imagined. It had 10,000 articles before its first birthday and 100,000 articles before its second. Wales soon realized that the Wikipedia effort didn’t need an editor-in-chief and laid Sanger off. And the site kept growing; it’s on track to reach 3 million articles by the end of 2009.

Wikipedia’s success was so counter-intuitive that even its founders didn’t anticipate how well it would work. If you’ll forgive me for citing Clay Shirky again (I’ll be doing that a lot on this blog), here’s how he described it in an interview I did with him last year:

When people first hear about wikis, they always say, “Wow, that’s really cool.” And then about two minutes later, everybody has the same reaction: “Oh, that actually won’t work.” People just don’t think that that pattern can create new value. The social problems seem too obvious and intractable. The mind-shift is… that you have to start looking at Wikipedia as a different kind of product than Britannica. Wikipedia actually is an ongoing process…

It’s like a coral reef. The action is actually in the edits and the arguing: the living part of it. The calcified deposits, the stuff that nobody’s arguing about any longer, is what most people experience as Wikipedia, in the same way that most people experience a coral reef as the residue of what all the little creatures are doing.

One of the mistakes people make in evaluating bottom-up systems is underestimating how much happened “under the hood” to produce a finished product. Each little creature toils away its whole life to add a tiny bit of material to the coral reef. Similarly, it’s not uncommon for Wikipedia editors to spend thousands of words arguing over one paragraph or even one sentence of an article. The Wikipedia article on Michael Jackson is about 15,000 words long. The discussion section on the Michael Jackson article goes on for 27 pages, each containing about 10,000 words of discussion (roughly 270,000 words in all, nearly twenty times the length of the article itself). All that complexity is hidden from a casual user, who only sees the tidy, finished product and not the long, messy process that produced it.

To bring things back to yesterday’s discussion of Charles Darwin, I think a big part of the reason people find evolution counterintuitive is that it’s really hard to grasp the concept of a billion years. Lots of people think the human body is too complex to be the result of trial and error, but that’s because it’s extremely difficult to grasp just how much trial and error it took. Indeed, it likely took so many steps that we couldn’t possibly understand them all even if we tried. Therefore, the key to thinking clearly about it is the one Shirky recommends: to focus on understanding the process on its own terms. We can make some useful observations about how natural selection and the Wikipedia editing process work, but we’re never going to be able to understand or predict how the processes will play out in every individual circumstance.
