Hobbies Don’t Need “Incentives for Participation”


Ars Technica writes up law professor Eric Goldman’s argument that Wikipedia is doomed. Since 2005, Goldman has been predicting that Wikipedia would start to decline by 2010, and to his credit (I guess) he has stuck by his prediction despite mounting evidence to the contrary. His latest effort is a law review article laying out his case with a surfeit of footnotes.

For the most part, the paper re-hashes the same arguments that Wikipedia’s critics have always made: that editing Wikipedia brings few financial or other extrinsic rewards, and that Wikipedia will therefore have difficulty recruiting enough people to keep the site viable. As Goldman himself acknowledges, this is not a new argument. Goldman’s novel claim is that “xenophobia” and rising barriers to entry will drive away new editors. And without new editors, Wikipedia will experience a “labor squeeze” as veteran editors move on to new activities. Goldman explains the xenophobia point like this:

Unregistered or unsophisticated users do not comply with Wikipedia’s cultural rituals, such as signing talk pages. By failing to conform to the rituals, these contributors implicitly signal that they are Wikipedia outsiders, which increases the odds that Wikipedia insiders will target their contributions as a threat. As one book says, “If you’re editing and aren’t logged in, you’re in some sense a second-class citizen on the site. Expect less tolerance of minor infractions of policy and guidelines.” This insider xenophobia is a more significant incursion on free editability than any technological measure because it leads to quick screening of user contributions—both illegitimate and legitimate.

There’s an awful lot of hand-waving going on here. It’s obviously true that people who conform to Wikipedia’s various policies and traditions are more likely to have their edits respected than those who don’t. But that observation tells us little about the size of the effect. Goldman cites research suggesting that 25 percent of edits by novice editors are reverted, up from 10 percent in 2003. Both the figure and the trend strike me as totally consistent with a healthy Wikipedia. As Wikipedia matures, there will be fewer opportunities for constructive edits and more occasions for vandalism, so we’d expect the figure to rise over time. And the 25 percent of edits that get reverted obviously aren’t going to be randomly distributed: many will come from troublemakers, bad spellers, or people with political axes to grind. If you know what you’re talking about and make a sincere effort to improve an article, your reversion rate will be well below 25 percent.
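To make that selection effect concrete, here’s a back-of-the-envelope sketch. Every proportion in it is invented for illustration (none is drawn from the research Goldman cites), but it shows how an alarming-sounding aggregate can coexist with a perfectly welcoming experience for sincere newcomers:

```python
# Hypothetical mixture model: an aggregate 25% reversion rate is
# consistent with sincere newcomers being reverted far less often,
# if reverts are concentrated on bad-faith edits.
# All proportions below are invented for illustration only.

vandal_share = 0.20            # hypothetical: 1 in 5 novice edits is vandalism or axe-grinding
vandal_revert_rate = 0.90      # hypothetical: such edits are almost always reverted
good_faith_revert_rate = 0.09  # hypothetical: sincere edits are rarely reverted

aggregate = (vandal_share * vandal_revert_rate
             + (1 - vandal_share) * good_faith_revert_rate)
print(f"aggregate reversion rate: {aggregate:.0%}")  # -> 25%
```

In this toy mixture, the scary-sounding 25 percent aggregate coexists with good-faith newcomers seeing their edits reverted less than one time in ten.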

One of the most important rules of Wikipedia is to “ignore all rules” when they conflict with making Wikipedia better, and Wikipedians take this principle seriously. Editors who contribute useful content but fail to observe all of Wikipedia’s niceties will likely find their contributions “cleaned up” by an experienced editor, and may receive a note from that editor explaining the relevant policy. Moreover, Wikipedians tend to evaluate edits, not editors. Figuring out whether an editor is an “insider” is more work than checking whether the person’s edits make any sense.

These objections aside, I think the fundamental problem with Goldman’s analysis—and that of most of Wikipedia’s critics—is that he misunderstands what Wikipedia is. Wikipedia isn’t a commercial effort that needs to recruit a “labor force.” It’s a hobby, like fly fishing or knitting. And the pool of potential “labor” for hobbies is enormous relative to Wikipedia’s needs. As usual, Clay Shirky makes the point best:

If you take Wikipedia as a kind of unit, all of Wikipedia, the whole project–every page, every edit, every talk page, every line of code, in every language that Wikipedia exists in–that represents something like the cumulation of 100 million hours of human thought. I worked this out with Martin Wattenberg at IBM; it’s a back-of-the-envelope calculation, but it’s the right order of magnitude, about 100 million hours of thought.

And television watching? Two hundred billion hours, in the U.S. alone, every year. Put another way, now that we have a unit, that’s 2,000 Wikipedia projects a year spent watching television. Or put still another way, in the U.S., we spend 100 million hours every weekend, just watching the ads. This is a pretty big surplus. People asking, “Where do they find the time?” when they’re looking at things like Wikipedia don’t understand how tiny that entire project is.
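Shirky’s arithmetic is easy to check. Using only the two figures from the quote above:

```python
# Both figures come from Shirky's back-of-the-envelope estimate.
wikipedia_hours = 100e6       # ~100 million hours: all of Wikipedia, ever
tv_hours_per_year = 200e9     # ~200 billion hours: US television watching, per year

print(tv_hours_per_year / wikipedia_hours)  # -> 2000.0 "Wikipedias" per year
```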

Goldman thinks editing Wikipedia sounds tedious and unrewarding. I feel the same way about gardening. Yet lots of people spend hours every week trying to get their azaleas to bloom. This is puzzling only to economists and to law professors who have gotten used to thinking like them. Keeping the world’s free encyclopedia tidy requires a small fraction of the effort required to keep the world’s gardens well-tended. Wikipedia will be undermined by its lack of incentives around the same time the nation’s gardens go to seed because people realize they’re never going to be profitable.


Nathan Myhrvold: Bottom-Up Thinker?

Earlier this year, Princeton’s alumni magazine did a glowing profile (I’m guessing all of their profiles are glowing) of Nathan Myhrvold. I learned that he and I have several things in common. Like me, he was once a grad student at Princeton. Also like me, he “seems to have a strong libertarian streak.” Most surprising to me, he apparently considers himself an enthusiast for bottom-up processes:

Now look again at that Dunkleosteus: Those fangs aren’t really fangs. They’re not teeth. They’re sharpened jawbones. Teeth didn’t yet exist on the planet Earth when this creature roamed the seas.

“It’s convergent evolution,” Myhrvold says, and traces with his finger the sharp edges of the proto-teeth. “Notice the bevel is different on each side so they would self-sharpen.”

He adds, “After this, teeth were invented.”

But they were not actually invented, he quickly adds. They evolved. They emerged. There was no one drawing up blueprints for teeth. The market existed for sharp mouth accoutrements; nature innovated to fill that niche.

All of which raises a rather profound question: Were teeth inevitable? And what about the innovations in our own world — are they the result of carefully orchestrated projects, schemes, and hard work, or do they tend to bubble up from a million accidents and casual inspirations?

“Broadly, overall, the way society works is emergent, and it is built on progress — it generally runs downhill toward something better,” Myhrvold says.

I don’t doubt Myhrvold’s sincerity when he pronounces himself a believer in emergent, bottom-up processes. But these beliefs do seem to be in tension with his enthusiasm for drastically expanding the role of patents in the high-tech economy. Because there’s nothing bottom-up about the patent system.

Bottom-up systems are characterized by continuous, vigorous competition. This is obvious in the case of evolution: organisms mutate in many directions at once, and then the fittest animals are selected by the impersonal forces of natural selection. We can see the same dynamic at work in competitive markets. For example, consumers can now choose from dozens of different “social networking” websites. The process of market competition has produced some tentative winners—Facebook and Twitter—but there’s no guarantee their dominance will last. At any time, another firm could come along and knock them from their perches, just as Facebook did to MySpace, and as MySpace previously did to Friendster.

Things would work very differently in the patent-centric technology industry Myhrvold is working to build. The free-wheeling competition of today’s online marketplace exists only because of what Myhrvold calls the “culture of infringement”—that is, a tendency among Silicon Valley firms to ignore the patent system. Myhrvold would replace today’s free-wheeling online marketplace, in which anyone can enter any market at any time, with a much more centralized and bureaucratic process in which winners are chosen by patent clerks and judges, not by consumers. That would probably mean a social networking market controlled by Friendster, with Mark Zuckerberg forced to go hat in hand to Friendster for permission to enter the market.

Whatever else you might say about it, this certainly is not a bottom-up vision for the software industry. Indeed, patent-dominated industries tend to be controlled by one or a handful of large firms, at least until the relevant patents expire. The early sewing machine industry was dominated by a patent cartel that controlled entry to the market in the 1850s and 1860s. The early telephone industry was controlled by the Bell Company after Alexander Graham Bell beat a leading competitor to the patent office by a few hours. Competition in the early motion picture industry was preserved only thanks to a “culture of infringement” among independent movie producers who openly defied the monopolistic Motion Picture Patents Company.

Now, this isn’t always a decisive argument against patent protection. Awarding patents to the first inventor of some technology does create incentives for invention. And if an industry is already highly concentrated, then concerns about the monopolistic tendencies of the patent system may not matter. The pharmaceutical industry fits this profile, and patents seem to work well there.

But whatever you might say about these arguments (and I personally find them convincing for some industries), they certainly are not bottom-up arguments. If you think technological innovation tends to “bubble up from a million accidents and casual inspirations,” you ought to be skeptical of policies that give a handful of large corporations the power to decide who may compete with them.


Nathan Myhrvold’s Evil Genius


Last year I wrote that Intellectual Ventures is a kind of reductio ad absurdum of our flawed patent system: it’s a firm that literally does nothing useful, and its only business is the acquisition and licensing of patents. Not only does it have no intention of commercializing the technologies it “invents,” but its business model is based on minimizing the amount of research performed per patent obtained. In Malcolm Gladwell’s brilliant (if inadvertent) exposé of IV, he describes how IV hires smart people to participate in brainstorming sessions and then has patent lawyers immediately file patent applications for every idea that comes up during the discussion, without bothering to actually implement any of them, or even devoting much effort to verifying that they work. IV then approaches firms that are doing the hard work of implementing “their” ideas and demands a cut of their profits.

Myhrvold’s firm illustrates in a way that no law review article could the extent to which the patent system punishes firms that actually produce useful products. Firms whose business models involve actual innovation have to show restraint in exploiting their patent portfolios. If they don’t, there’s a high probability that some of their adversaries will countersue and both firms will be dragged into a legal quagmire. But if litigation is your only business, then you’re not vulnerable to retaliatory infringement lawsuits, so you can exploit your patent portfolio much more aggressively. Many small “patent troll” firms have exploited this flaw in the past, but Myhrvold is the first person to recognize that it can be exploited in a systematic, large-scale fashion.

Until recently, one of the few points Myhrvold could make in his own favor was that he hadn’t started suing firms that declined to license his patent portfolio. I say “until recently” because we’re now learning that the lawsuits have started. IV has begun selling off chunks of its patent portfolio to people like Raymond Niro, who have well-deserved reputations as “patent trolls.” Threatening to sell patents to a third party who will sue you is more subtle than threatening to sue you directly, but the threat is just as potent. Myhrvold’s “sales pitch” to prospective licensees just got a lot more convincing.

The fundamental question we should be asking about this business strategy is how it benefits anyone other than Myhrvold and the patent bar. Remember that the standard policy argument for patents is that they incentivize beneficial research and development. Yet IV’s business model is based on the opposite premise: produce no innovative products, spend minimal amounts on research and development, and make a profit by compelling firms that are producing products and investing in R&D to pay up. Not only does this enrich Myhrvold at everyone else’s expense, but it also reduces the incentive to innovate, because anyone who produces an innovative product is forced to share his profits with Intellectual Ventures. Patents are supposed to make innovation more profitable. Myhrvold is using the patent system in a way that does just the opposite. In thinking about how to reform the patent system, a good yardstick would be to look for policy changes that would tend to put Myhrvold and his firm out of business.


Bottom-Up Thinking about Google’s Card Catalog


My advisor, Ed Felten, has a post examining the problem of metadata errors in Google’s Book Search catalog:

Some of the errors are pretty amusing, including Dickens writing books before he was born, a Bob Dylan biography published in the nineteenth century, Moby Dick classified under “computers”. Nunberg called this a “train wreck” and blamed Google’s overaggressive use of computer analysis to extract bibliographic information from scanned images.

Things really got interesting when Google’s Jon Orwant replied (note that the red text starting “GN” is Nunberg’s response to Orwant), with an extraordinarily open and constructive discussion of how the errors described by Nunberg arose, and the problems Google faces in trying to ensure accuracy of a huge dataset drawn from diverse sources.

Orwant starts, for example, by acknowledging that Google’s metadata probably contains millions of errors. But he asserts that that is to be expected, at least at first: “we’ve learned the hard way that when you’re dealing with a trillion metadata fields, one-in-a-million errors happen a million times over.”

Ed’s conclusion is a good illustration of the difference between top-down and bottom-up thinking:

What’s most interesting to me is a seeming difference in mindset between critics like Nunberg on the one hand, and Google on the other. Nunberg thinks of Google’s metadata catalog as a fixed product that has some (unfortunately large) number of errors, whereas Google sees the catalog as a work in progress, subject to continual improvement. Even calling Google’s metadata a “catalog” seems to connote a level of completion and immutability that Google might not assert. An electronic “card catalog” can change every day — a good thing if the changes are strict improvements such as error fixes — in a way that a traditional card catalog wouldn’t.

Top-down thinkers want to build finished products whose errors have all been corrected before release. Bottom-up thinkers recognize that this is impossible, so they accept that some errors will occur and focus on building processes that reduce the number of errors over time. For really big projects, the top-down approach is simply delusional: if you think your billion-record dataset has no errors, it’s more likely that you’re fooling yourself than that you actually have a perfect quality-control system.
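The scale argument here is just multiplication, but it’s worth seeing the numbers. Using Orwant’s trillion-field figure and his one-in-a-million error rate:

```python
# Expected error counts at Google Book Search scale. The inputs come
# from Orwant's comment; the arithmetic is the whole point.
fields = 1e12        # "a trillion metadata fields"
error_rate = 1e-6    # "one-in-a-million errors"

print(f"{fields * error_rate:,.0f} expected errors")  # -> 1,000,000

# Even a billion-record dataset with a 99.9999%-accurate pipeline
# still contains about a thousand errors.
print(f"{1e9 * 1e-6 * 1e3:,.0f} errors per billion records at 1e-6")  # -> 1,000
```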

The kind of transparency Google is practicing here is also crucial to bottom-up efforts. Given that large, complex systems inevitably have errors, it’s important for the institutions in charge of those systems to be open about the kinds of errors that occur and the steps being taken to correct them. This has two benefits. First, third parties will often be able to help correct errors, but they can only do that if they’re given reasonable access to the dataset. Second, and more important, it allows users of the dataset to understand the appropriate level of skepticism to apply to information they find there. A bottom-up world is a world in which end users have to take a bit more responsibility for verifying information they receive from not-necessarily-authoritative sources. Right now, the Google Book Search dataset has enough errors that you should double-check its answers against other sources whenever accuracy is a high priority.


Disruptive Innovation and the Death of the Recording Industry

Last week I quoted a Wired article that discusses the rise of the MP3 format as a disruptive threat to the recording industry. The story of the recording industry’s decline is complicated because there are actually two different factors at work. The recording industry likes to focus on “piracy” as the main cause of its declining fortunes. But although copyright infringement has certainly done some short-term damage to the recording industry’s bottom line, I think the long-run problem facing the recording industry is a structural problem that has little to do with copyright infringement.

I’ve argued before that the recording industry and the newspaper industry are facing the same basic problem. Both industries are fundamentally in the content-distribution business. Newspapers are in the business of shipping newsprint to consumers. Record labels are in the business of shipping discs (first vinyl, then plastic) to consumers. These content-distribution technologies are to the Internet what the horse and buggy was to the internal combustion engine.

The recording industry, like the newspaper industry, likes to think of itself as being in the content-creation business. But that’s largely wishful thinking. Musicians don’t sign with record labels because they need help making music—typically, musicians have been making music for years before they get their first record contract. Rather, they sign with a label because, before the Internet, a record deal was the only way to distribute their music to a national audience.

It’s true that when an artist signs a recording contract, she gets some financial assistance from the label. Some of that money often goes to pay for studio time, but a lot of it is usually spent on promotional activities. And although musicians obviously benefit from a label’s efforts to promote their work, the primary beneficiary of money spent on promotion is the label itself. Here’s why: publishing a CD involves high fixed costs, so a label needs to sell thousands of copies just to break even. Hence, once the decision is made to publish a given album, the label needs to ensure that it sells well. From a label’s perspective, publishing an album that sells 500 copies is much worse than not publishing it at all.
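Here’s a toy version of that break-even logic. Both numbers are invented for illustration—real label economics vary widely—but the structure of the calculation is the point:

```python
# Hypothetical CD release: high fixed costs mean thousands of copies
# must be sold before the label earns a dime. All numbers invented.
fixed_costs = 50_000      # hypothetical: pressing, distribution, promotion
margin_per_copy = 5.00    # hypothetical: label's margin on each CD sold

print(f"break-even: {fixed_costs / margin_per_copy:,.0f} copies")  # -> 10,000 copies

# A digital release has fixed costs near zero, so the break-even
# point effectively disappears, and with it the need for lavish
# promotional campaigns to guarantee a hit.
```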

The Internet changed that. Today, releasing an album is a virtually risk-free decision, so the lavish promotional campaigns that typified major-label album releases are no longer needed. Musicians can simply release music on their websites, or through online services like iTunes and Amazon. Musicians don’t need to do a lot of promotion if they don’t want to, but the Internet helps here, too, giving bands a number of tools to promote organic, viral growth in their fan bases.

So the investments labels make in their musicians aren’t investments required for music publication; they’re investments required for the capital-intensive process of releasing music in CD format. Now that CD distribution is being rendered obsolete, there just isn’t any need for enormous music-publishing companies. We know from Coase that large firms exist only when the benefits of large size (such as economies of scale) exceed the inherent inefficiencies of bureaucratic management. Distributing a CD to a national audience is an activity where size is an advantage. Recording an album and releasing it on the Internet just isn’t.

It’s true that the prevalence of copyright infringement has accelerated the decline of the recording industry. If we could somehow outlaw peer-to-peer file sharing, it would probably postpone the recording industry’s demise by a few years. But the long-run trend has little to do with copyright infringement, and everything to do with technological change. The core competence of the labels has always been shipping plastic discs across the country. The Internet is rapidly rendering that music distribution method obsolete. And so there’s every reason to expect that the firms built around that technology will themselves go out of business.

As Mike Masnick has pointed out before, the CD business is not the same thing as the music industry. The vast majority of music has always been made by people who didn’t have recording contracts, and there’s no reason to think people will stop making music in a post-CD world. Being a rock star may become somewhat less lucrative in a post-CD world, but it will still be a tremendously high-status achievement. It seems implausible that we’ll ever have a shortage of people making and publishing music.


Why I’m an Optimist about the Future of News

Reader Rhayader wants to know what I think of this David Simon story about the decline of investigative reporting in Baltimore:

There is a lot of talk nowadays about what will replace the dinosaur that is the daily newspaper. So-called citizen journalists and bloggers and media pundits have lined up to tell us that newspapers are dying but that the news business will endure, that this moment is less tragic than it is transformational.

Well, sorry, but I didn’t trip over any blogger trying to find out McKissick’s identity and performance history. Nor were any citizen journalists at the City Council hearing in January when police officials inflated the nature and severity of the threats against officers. And there wasn’t anyone working sources in the police department to counterbalance all of the spin or omission.

I don’t regard “so-called citizen journalists” as the be-all and end-all of 21st century journalism. There are some tasks where amateur reporting will be perfectly adequate, but there are other circumstances where only a professional journalist with a thick rolodex will get the job done. My beef isn’t with the concept of professional journalism, but with the notion that professional reporters need to be embedded in monolithic, vertically-integrated institutions like daily newspapers in order to do their jobs.


A few examples. The Washington Independent is a non-profit organization that employs serious journalists like my friends Spencer Ackerman and Dave Weigel to cover national security and Republican politics, respectively. These guys are not “citizen journalists”—they’re serious reporters who track down leads, cultivate sources, and so forth. It has several sister publications, such as the Colorado Independent and the Minnesota Independent, that employ serious journalists to cover state and local news. I’ve previously discussed the work of my friend Radley Balko, who reports on precisely the sort of abuses of power that Simon writes about. In particular, he’s devoted three years to uncovering malfeasance by Mississippi medical examiner Steven Hayne, and he’s covered a number of other cases of wrongful convictions. Josh Marshall’s for-profit TPM media empire employs several full-time reporters, and was crucial in breaking the U.S. attorneys scandal. Wired employs several top-notch reporters for its website, including the good folks at the Threat Level blog, which regularly does original reporting. Similarly, CBS News recently purchased CNET, which employs top-notch reporters like Declan McCullagh.

These are the names that I can list off the top of my head. I’m sure I could come up with a much longer list if I put my mind to it. But it’s probably true that all these organizations put together still don’t employ as many serious reporters as newspapers do. Yet I think there are reasons to be optimistic. One thing holding back online journalism is that the majority of readers still prefer to read newspapers, and newspapers are still around churning out content. It’s reasonable to expect web-based outlets to experience their most dramatic growth after newspapers start failing, because that’s when consumers will be looking for new sources of content and advertisers will be looking for new ways to reach customers.

But the more fundamental reason for optimism is the basic economics of the situation. Publishing on the web is dramatically cheaper than publishing in print, and it seems extremely implausible that cheaper publishing would lead to the production of less (or lower-quality) content. As an illustration, allow me to elaborate on one possible “business model” for 21st-century reporting: journalistic philanthropy. Right now, there are a handful of cities, including Pittsburgh and Washington, DC, where ideologically inclined philanthropists have used their financial resources to prop up newspapers of their preferred (usually conservative) ideological leaning. Richard Mellon Scaife, for example, spends millions of dollars covering the Pittsburgh Tribune-Review’s overhead each year so it can continue offering a conservative alternative to the metro area’s leading daily, the Pittsburgh Post-Gazette.


This is a fantastically expensive hobby. Scaife has apparently poured between $140 million and $244 million into the Tribune-Review over the last two decades. The reason it’s so expensive isn’t that reporting or editorializing costs that much, it’s that newspapers have such high fixed costs. Running a second paper in a city that’s not large enough to support two dailies requires spending millions of dollars on redundant printing presses, distribution networks, and the like. As a result, only the wealthiest individuals can afford to subsidize a daily newspaper.

The web radically democratizes this process. If you want to start a daily newspaper, you need to be prepared to spend tens of millions of dollars doing it. But the Internet leaves a lot of room for less expensive ventures. A million dollars a year is enough to support a modest web-based publication like the Minnesota Independent with half a dozen reporters doing in-depth investigative reporting in a metro area. Even $100,000/year would be enough to pay a full-time reporter to cover city hall in a smaller town. And of course, there’s no reason the money has to come entirely from one individual—it’s not hard to imagine an NPR-pledge-drive model where ordinary citizens band together to support independent journalism.
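To put rough numbers on that democratization, here’s the comparison implied by the figures above. The Tribune-Review per-year figure is just the midpoint of the reported range spread over two decades; the web-publication figures come from the text:

```python
# Patronage costs, print vs. web, using the figures from the post.
scaife_low, scaife_high = 140e6, 244e6   # reported range over ~20 years
tribune_per_year = (scaife_low + scaife_high) / 2 / 20

web_outlet_per_year = 1e6     # a Minnesota Independent-style publication
city_hall_reporter = 100e3    # one full-time reporter in a smaller town

print(f"Tribune-Review: ~${tribune_per_year / 1e6:.1f}M per year")        # ~$9.6M
print(f"vs. web outlet: {tribune_per_year / web_outlet_per_year:.0f}x")   # ~10x
print(f"vs. one reporter: {tribune_per_year / city_hall_reporter:.0f}x")  # ~96x
```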

Just to be clear, my point isn’t that non-profit journalism is “the answer” to the future of journalism, or even that philanthropy will be the most important source of support for investigative journalism. If I had to guess, I’d predict that for-profit business models will continue to be more important. Rather, my point is to illustrate how falling distribution costs expand the range of possible strategies for producing high-quality reporting. The barriers to entry are a lot lower than they used to be, which means a lot more people can experiment and fail cheaply.

I don’t deny that the transition to a post-newspaper news business may be rocky. There probably will be a short-term drop in the supply of certain types of high-quality reporting, as incumbent newspapers lay off experienced reporters faster than new ventures can hire them. I certainly don’t want to be seen as “dancing on the grave” of newspapers.

But I do think that the long-term result will be a stronger, more effective news industry. If you’ve spent much of your adult life working for a newspaper, it’s hard to imagine a news industry where newspapers don’t play a central role. But the fact that people have difficulty imagining alternatives is not particularly strong evidence that no alternatives exist. Instead, it’s simply a sign that the news industry is increasingly becoming a bottom-up system, and one of the hallmarks of bottom-up systems is that they tend to be unpredictable.


EFF Wanders off the Reservation

I consider the Electronic Frontier Foundation to be the most important defender of freedom online. They led the fight against warrantless wiretapping, and they’re far and away the most important organization defending fair use in an age of ever-expanding copyright protection. One of the things that I’ve especially admired about the organization is that they’ve carefully maintained their focus on the defense of civil liberties. For example, they remained neutral during the network neutrality fight, neither endorsing regulations nor opposing them. This ensured that both wings of the EFF donor base—the progressives and the libertarians—felt comfortable donating. Had they taken a position on network neutrality, they would have divided their donor base and thereby reduced their effectiveness at their core mission of defending civil liberties online.

In 2006, I chided them for jumping on the anti-AOL bandwagon when AOL introduced its “GoodMail” plan, which gave priority access to its customers’ inboxes to those who paid for the privilege. Whatever the merits of the GoodMail plan (personally, I wouldn’t have wanted it on my email account), AOL was a private company in a competitive market, and AOL’s email policies just weren’t a civil liberties issue. As an EFF contributor, I was concerned that money I had expected would go to defending civil liberties was instead being diverted to other causes.

Over at the Technology Liberation Front, Adam Thierer points to an even more egregious departure by EFF from its civil liberties mission. EFF has apparently signed on to a coalition seeking new regulations on the collection of data for use in targeted advertising. Now, this isn’t an issue I’ve given a ton of thought to, so I don’t have a strong opinion about the underlying policy. But Berin Szoka has been doing some good work on the subject, and I’m inclined to agree with his conclusion that regulating behavioral advertising is certainly premature and probably counterproductive.

But the more important point, in my view, is that this is the first time I can remember that EFF has called for more government regulation of wholly private transactions. Even if regulating behavioral advertising were good public policy, it strikes me as inappropriate for EFF to be lobbying for it. They’re a civil liberties organization; people expect their donations to go toward fighting government restrictions on people’s freedom. If EFF abandons their traditional message discipline and begins indiscriminately backing causes its staff happens to support, it’s going to make many potential EFF donors, including this one, think twice about donating. I hope this campaign proves to be a one-time occurrence and not the start of a broader trend.


I Didn’t Invent the Web

I like my name, but one of the unfortunate things about working in technology policy is that I’m sometimes mistaken for Tim Berners-Lee, the guy who invented the World Wide Web. So let me disabuse people of that misconception: I’m not Mr. Berners-Lee, nor am I related to him. He’s a 50-something Brit who runs the W3C and does research at MIT. I’m a 20-something American grad student who studies at Princeton. I hope any readers who thought they were reading Mr. Berners-Lee’s blog will continue reading mine, but I want to make sure no one is being misled. I’ve added a disclaimer to the first sentence of my about page to make sure people know who they’re reading.


Patents, Copyrights, and Software

Reader Dale B. Halling left the following comment that articulates two major misconceptions that one frequently encounters in the software patent debate:

There is no doubt that the patent system should be more accessible, less complicated, and less expensive. Inventing is creative, this is not unique to developers of software. Writers would be and have been historically disgusted by people that can pirate their writings. In the 19th century copyrights were national (like patents are today), so foreign publishers stole the works of famous authors and reproduced them in their country without paying the author. Mark Twain suffered this fate particularly at the hands of English publishers. I am sure the English publishers did not see why they should have to fuss with copyrights of American writers and vice versa. The moral point is the same with software developers who do not want to bother with those complicated patents – stealing is stealing.

Some people believe that software should not be patentable. The arguments against software patents have a fundamental flaw. As any electrical engineer knows and software developer should know, solutions to problems implemented in software can also be realized in hardware, i.e., electronic circuits. The main reason for choosing a software solution is the ease in implementing changes, the main reason for choosing a hardware solution is speed of processing. Therefore, a time critical solution is more likely to be implemented in hardware. While a solution that requires the ability to add features easily will be implemented in software. Software is just a method of converting a general purpose electronic circuit (computer) into a application specific electronic circuit. As a result, to be intellectually consistent those people against software patents also have to be against patents for electronic circuits.

Reader Rhayader offers a good response to the first point: stealing is always deliberate, but patent infringement is frequently accidental. If you survey recent software patent litigation, you’ll find many cases where the defendant had never heard of the plaintiff or its technology until after it had developed its own product. “Stealing is stealing” just doesn’t apply in these cases.

I think the second argument about the equivalence of software and hardware is based on a misunderstanding of how the patent system works. It’s common to talk about patents as though they cover particular products or devices. But it’s more precise to say that patents cover particular characteristics of devices or processes. So if you’ve created an innovative new widget, your patent application can’t just describe how your widget works. It also needs to identify the specific characteristics of the widget that you believe are novel and non-obvious, and therefore entitled to patent protection.

So a “no software patents” rule simply says that the software aspects of an invention cannot be patented. Mathematical algorithms—which is all software is—ought not to be patentable whether they’re implemented as software or in silicon. But that doesn’t mean I’m against semiconductor patents in general. Rather, I think patents should cover those aspects of a semiconductor device that don’t reduce to mathematics. For example, I wouldn’t have allowed C. A. R. Hoare to patent the quicksort algorithm. And I wouldn’t have allowed someone to patent the concept of hardware-accelerated quicksort. However, someone could patent a particular, non-obvious design for a hardware quicksort implementation. That’s not a patent on the algorithm, but on a specific machine for implementing the algorithm.
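To make the distinction concrete, here’s quicksort written as ordinary code. This is a direct transcription of the algorithm itself (the simple functional variant, not Hoare’s in-place version)—exactly the kind of pure mathematics that, on the argument above, shouldn’t be patentable whether it runs as software or is baked into silicon:

```python
def quicksort(xs):
    """Return a sorted copy of xs: partition around a pivot, then recurse."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 3, 4, 5, 6, 9]
```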

In other words, a software-related patent must be tied to a specific machine or physical process. That’s similar to the standard the Federal Circuit articulated in In re Bilski. One of the great virtues of this approach is that it frees computer programmers from worrying about inadvertent patent infringement. Because by definition, a “pure” software implementation of an algorithm isn’t tied to any specific machine or physical process. So you’d only have to worry about patent infringement if you were implementing an algorithm in silicon. Building hardware is much more expensive than building software, so the requirement to hire a patent lawyer is much less burdensome.


Disruptive Innovation and Urban Decline

I’m excited to learn that Ryan Avent has been reading Bottom-Up, and he has a really interesting post examining the implications of disruptive innovation for the growth and decline of cities:

When a metropolitan area has an old, successful, established industry as its economic driver, that area builds its infrastructure and institutions around that industry. These institutions are likely to be unwilling and unable to accomodate and support growth industries. We can think about legislators in a Rust Belt state who fight to protect old industries even when the protections they seek would undermine growth industries. Or banks in old manufacturing centers that are reluctant to invest in start-ups with sharply different practices from the old giants.

If you have a daring new idea, you don’t take it to someone who’s living fat off something which has worked for decades. You take it to someone who is hungry. Many of the Sunbelt boom towns which have sprung up over the past half century grew at the start by accepting what investment they could. I’m reminded of my hometown, where leaders were anxious to attract high-tech investments to their new Research Triangle Park. It was lack of better options that gave them the idea in the first place — something which might not have occured to leaders in a city where hundreds of thousands of people earned good union wages in manufacturing plants. And while leaders definitely wanted to craft a research environment, they took the investments they could get. Not having recently been on top of the world, they had the benefit of not suffering from wounded pride when less-than-glamorous operations came to invest.

This is a really good point. He goes on to explore some of the possible policy implications:

There are tricky implications to this. It suggests, for instance, that the availability of new metropolitan areas is crucial in maintaining a flexible, growing economy. That creative destruction doesn’t just mean the scrapping of once-proud firms but of whole cities. It also suggests that my previous prescription for fighting urban decline — a program of temporary fiscal support — could be counterproductive. It might delay inevitable economic adjustments.

This seems right to me. The broader point here, I think, is that city leaders, like corporate CEOs, inevitably overestimate how much control they have over the overall direction of the complex ecosystems they nominally run. Once a negative feedback loop has started in a metropolitan area, there just aren’t that many policy levers available to reverse the process. Conversely, once a city has started benefitting from a positive feedback loop, as Detroit did 100 years ago and Silicon Valley does today, city governments can probably screw up a lot and still enjoy a healthy economy.

The main thing city leaders can do is ensure their cities offer fertile soil for new industries that could lead to positive feedback loops down the road. That means good infrastructure, reasonably low taxes, regulations that are predictable and not overly burdensome, and so forth. But disruptive innovation is intrinsically difficult to predict in advance, so efforts to promote specific industries or technologies—like St. Louis’s ongoing effort to transform itself into the Silicon Valley of biotech—are doomed to failure. Precisely because you can’t predict which disruptive technologies will prove important, the best a city can do is cultivate a business climate that is hospitable to business in general.
