Richard Epstein’s Top-Down Vision for the Software Industry


Richard Epstein is a giant of libertarian political philosophy, but I frequently find his writings on technology issues frustrating. As I’ve written before, his limited knowledge of the IT industry tends to show when he’s writing about tech policy issues. He and Scott Kieff (whose prior writing for Cato I critiqued here) penned a brief in Bilski that takes a strong position in favor of continuing the permissive standards of patentable subject matter. Consider this paragraph about why patents are helpful in high-tech industries:

The credible threat of a published patent’s right to exclude acts like a beacon in the dark, drawing to itself all those interested in the patented subject matter. This beacon effect motivates those diverse actors to interact with one another and with the patentee, starting conversations among the relevant parties.

On one level, this analysis is simply out of touch with the realities of the software industry. There’s nothing beacon-like about software patents. Software companies do not use patents as a mechanism for finding technologies or business partners. Patents tend to be written in unintelligible legalese, they’re not well indexed, and they issue years after they’re filed. They’re completely irrelevant to the day-to-day process of product development in the software industry. I’ve never met a software developer who regards the patent database as a useful source of information about software inventions, nor can I think of an example of a software company (Intellectual Ventures doesn’t count) that uses patents as a central part of its product-development strategy.

Companies don’t need patents to “start conversations,” or to “motivate” other companies to work with them. The software industry has developed lots and lots of mechanisms for communications and coordination. There are standards-setting bodies, conferences, computer science journals, blogs, technology demos, aggregators, venture capital pitches and so forth. The only advantage of coordination by patent is that patents allow you to coerce your “business partners” to the negotiating table whether they want to be there or not. And frankly, “conversations” started under duress are unlikely to lead to anything productive.

On a deeper level, I think this analysis reflects a perversely top-down understanding of the innovative process. The unstated assumption here is that progress requires a kind of technological central planning: the legal system picks one firm to have the exclusive right to develop a given technology, and that firm makes all the key decisions and collects the profits (or suffers the losses). I think there’s ample evidence that, at least in the software industry, this isn’t how progress tends to happen. Yahoo didn’t get the search engine patent in 1995 and “draw to itself” all the parties interested in search engines. Rather, Yahoo built one search engine, Altavista built another, Excite built a third, and so forth. And then we used the free market to decide which search engines would succeed. Ultimately, a plucky startup called Google came along and clobbered all those established search engines. This was possible only because the search engine market wasn’t clogged up with patents.

Now, Epstein is a smart guy who generally has a clear understanding of and appreciation for market competition. So I’m not sure what to make of his apparent fondness for central planning in this instance. I suspect the problem is that he simply doesn’t know very much about how the software industry works. Certainly, his utter bafflement at the success of open source software suggests a relatively shallow understanding of the software industry. Epstein has a general inclination in favor of strong patents (based on what I regard as a misguided analogy between patents and property rights), and I suppose that from a 50,000-foot level, this seems like a plausible application of those priors to the software industry.

Posted in Uncategorized | 1 Comment

The Question-Begging Argument for Software Patents

I’m reading some of the amicus briefs in the Bilski case, and I’m struck by how vacuous they are. Consider this passage of Yahoo’s brief, purporting to give an example of the kind of technology patents ought to cover:

A concrete example illustrates this conceptual failing. Much of the popular music to which consumers listen today is heard in “MP3” format. MP3 is a standard for compressing digital audio files—a compression algorithm based on characteristics of human hearing that removes approximately 90% of the data from digital music files without substantially affecting humans’ perception of the reproduced sound. The MP3 algorithm can be thought of as a complex mathematical formula with a specific application. But it does not result in any “physical” transformation—only the digital data are altered. Nor is it “tied” to any “particular” machine—indeed, while a “particular” machine could certainly be built to run the algorithm, one of its chief benefits is that it may be run on any “general purpose” computer. The process may, in other words, be instantiated in either software or hardware. Few would doubt that this is the kind of technological advance meriting patent protection and, in fact, the PTO issued a patent for MP3 technology. See U.S. Patent No. 5,579,430 (filed Jan. 26, 1995 with a priority date of Apr. 17, 1989). But this innovation would not appear to be patentable under a strict interpretation of the machine-or-transformation test.

Now there are a number of interesting things one could say about MP3 patents. We could mention, for example, that open standards like this are particularly vulnerable to patent holdup problems, where parties encourage the use of technologies in which they happen to have pending patents. We could mention that the proliferation of codec patents has hampered the development of open source media software. We could discuss whether the people who acquired these patents were the key figures in developing the MP3 format.

But the Yahoo! brief does nothing of the sort. Indeed, this paragraph is all that’s said about MP3 patents. Yahoo’s lawyers apparently find it so obvious that this is something that should be patent-eligible that no further comment is needed. As a member of the (apparently small) class of people who don’t regard patent 5,579,430 as obviously desirable, I find this a little galling.

I’m hoping I’ll come across meatier arguments as I read more briefs. But in my experience, substantive defenses of software patents are few and far between. Rather, you tend to get a lot of hand-waving and question-begging. Most writing about patent law is by and for patent lawyers, who tend to think the world would be a better place if patent lawyers had their fingers in a larger share of the economy. So most defenses of software patents tend to start from the premise that patents always (or almost always) promote innovation, and don’t bother to delve too deeply into the specifics of how software patents might promote or discourage innovation.


Newspapers are the Original Walled Gardens

Paul Graham has a great new essay out looking at the decline of the content industries:

A copy of Time costs $5 for 58 pages, or 8.6 cents a page. The Economist costs $7 for 86 pages, or 8.1 cents a page. Better journalism is actually slightly cheaper.

Almost every form of publishing has been organized as if the medium was what they were selling, and the content was irrelevant. Book publishers, for example, set prices based on the cost of producing and distributing books. They treat the words printed in the book the same way a textile manufacturer treats the patterns printed on its fabrics.

Economically, the print media are in the business of marking up paper. We can all imagine an old-style editor getting a scoop and saying “this will sell a lot of papers!” Cross out that final S and you’re describing their business model. The reason they make less money now is that people don’t need as much paper.
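Graham’s per-page arithmetic in the passage above is easy to verify:

```python
# Sanity check of Graham's figures: cover price divided by page count.
time_price, time_pages = 5.00, 58            # Time: $5 for 58 pages
economist_price, economist_pages = 7.00, 86  # The Economist: $7 for 86 pages

time_cents_per_page = 100 * time_price / time_pages
economist_cents_per_page = 100 * economist_price / economist_pages

print(round(time_cents_per_page, 1))       # 8.6
print(round(economist_cents_per_page, 1))  # 8.1
```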

Newspapers like to think of themselves as being in the same business as (if vastly superior to) the Huffington Post or Slate. But I think it’s more accurate to describe them as being in the same business as Verizon and Comcast. Or, more precisely, as the predecessor to walled-garden information networks such as the classic AOL. Newspapers have always been information “networks” that efficiently delivered information—news but also display ads, classified ads, public service announcements, legal notices, coupons, and so forth—to large numbers of people. The technological limitations of this “network” required it to be vertically integrated, and so newspapers are in “the content business” by necessity. But “content” has never been a free-standing business, and it is suffering from the same basic problem that killed AOL: no one organization can compete with the combined efforts of thousands of different firms on an open network. Once people gained access to an information network that was not vertically integrated, there was no particular reason for them to continue consuming content provided by their previous, vertically integrated network providers.

I also think Graham is right later in the essay to note that iTunes isn’t selling content so much as levying a kind of convenience tax. The iTunes store is the most convenient way to get content onto the iPod, and Apple has kept the cost low enough that most people aren’t motivated to look for an alternative. Likewise, the Kindle is profitable only because people want Kindles and Amazon makes it hard to get content for the Kindle any way other than through them. Here again, the business model is to bundle content with delivery. I think this is a temporary artifact of closed platforms and industries that still have one foot in the 20th century. The market for copies of articles or songs is unlikely to ever be as large as it was in the much less competitive world of 20th-century media. And basic economics tells us that in the long run, the price of most information will fall to its marginal cost, which is zero.


Google Book Deal Dead

Under criticism from all directions, Google and its adversaries have filed for permission to abandon the settlement agreement they announced last October and go back to the drawing board. They’re asking for a conference with the judge that will be used to schedule further steps in the case. This means the case is likely to stretch into next year. It’ll be interesting to see if they can put together a deal that serves the interests of the parties and satisfies at least some of the critics.


Getting Fractal on Network Neutrality

My lefty alter ego Tom Lee weighs in on the network neutrality debate. After an excessively generous hat tip toward my Cato paper, he focuses his criticism on Julian’s post. Tom is unimpressed by Julian’s concerns about the risks of case-by-case adjudication of network neutrality concerns.

Tom suggests that we’re looking at an unbridgeable ideological chasm, and maybe that’s true. But I want to (as Julian puts it) “go a little fractal” and highlight the structural similarity between the arguments on the two sides of the debate. As I pointed out yesterday, Genachowski was quite right to argue that the key virtue of the Internet is that it “pushes decision-making and intelligence to the edge of the network,” which preserves “the freedom to innovate without permission.”

The most obvious problem with top-down, permission-based systems is that the decision-maker might just make bad high-level policy decisions. AT&T might, for example, decide to block VoIP traffic to protect its legacy telephony business. This is the kind of scenario that attracts most of the attention in the network neutrality debate.

But I think this isn’t the only, or even the most important, reason this kind of setup is a bad idea. Consider Larry Lessig’s work on the problems created by a “permission culture” in the copyright context. Lessig tells the story of a filmmaker, Jon Else, who tried to get permission to show a few seconds of the Simpsons in the background of a shot in a documentary he was working on. After a bunch of calling around, he finally reached the right person at Fox, who demanded $10,000 for permission to include the clip. Else wound up digitally editing the Simpsons clip out of the scene.

The thing to note about this is that the senior leadership of Fox almost certainly did not make a conscious decision to start demanding outrageous amounts of money for the right to use trivial snippets of its content. This is probably a case of bureaucratic incompetence, not greed. Moreover, even if he’d gotten a reasonable answer, it still would have been a problem that he had to spend so much time on the phone.

The same points apply to the iPhone example I discussed in my previous post. The various problems with the iPhone app approval process aren’t (just) cases of Apple being greedy. Some of the decisions have been so transparently stupid that they can only be the result of incompetence on the part of individual Apple employees. In other cases, the problem seems to be that management guidance to rank-and-file Apple reviewers was vague, and so inconsistent results were reached. And in many cases, the problem isn’t that a bad decision was reached, but rather that it took an unreasonably long time to get an answer. That’s problematic even if the ultimate answer is the “right” one.

The problem with a permission culture, then, isn’t just that the high-level policies might be bad (although of course they might). The more serious problem is that the process of permission-seeking inevitably introduces unnecessary and often costly friction. Tom writes rather casually about ISPs “getting on the phone with Washington before they get on the phone with Cisco.” We could just as easily say that iPhone developers should “get on the phone with Cupertino” before they start writing a new application. For a variety of reasons, this doesn’t actually work very well. “Cupertino” may not have formulated a policy on the app you’re thinking about creating (indeed, this is especially likely if your app is unusually innovative). It may stall or give vague, evasive answers. It may give different answers to different firms with substantially similar products. And of course, an informal assurance from some Apple employee is no guarantee that the bureaucracy won’t change its mind after you’ve sunk thousands of hours into developing and testing your product.

All the same problems apply to a rule that says network providers need to manage their networks in a “reasonable” fashion, with the precise definition of reasonableness deferred to future case-by-case adjudication. Chairman Genachowski is not going to spend all his time taking calls from mid-level Verizon engineers seeking clarification on what this means. In practice, when a Verizon engineer wants to know what he’s allowed to do, he’s going to have to ask his boss’s boss to “get on the phone with Washington.” The Verizon executive will, in turn, wind up talking to some mid-level FCC bureaucrat, and may or may not succeed in clearly communicating what Verizon wants to do. And bureaucrats—whether they work in Cupertino or Washington—have no particular incentive to give prompt, clear answers that might come back to bite them later. So there’s likely to be a lengthy and inconclusive back-and-forth between the FCC bureaucrat and the Verizon one. By the time any sort of conclusion is reached, the engineer will probably have moved on to some other idea.

Now, I think Tom’s answer is that he basically views this as a feature rather than a bug:

If regulation is what it takes to convince Verizon that it’s selling a commodity, that’s fine by me. The market that’s going to grow up on top of that system is more important than the principle of avoiding any imposition on the ISPs. Put another way: it may be that the drab, regulated water system has cheated me of consumer-facing innovations like leased rootbeer faucets, or additives that would save me from cleaning the tub as often. But I don’t care about that: those gimmicks — any water utility gimmicks — are of miniscule importance compared to having a water system that works safely, predictably and efficiently. If Comcast needs to become a more boring place to work and a slightly less exciting business than it otherwise might be, it’s not going to bother me even a little bit.

Now, on the technical question here I think I’m closer to Tom than Julian is. I want Comcast to provide me with a cheap, fast, “dumb” pipe, and I can’t think of any “premium” services I’m particularly itching for Comcast to provide to me. Moreover, I suspect AT&T and Verizon will be able to take care of themselves in the regulatory arena: the process may be wasteful, but if they really want to do something, they can probably persuade the FCC to let them do it.

But we’re not just talking about regulating cable and telco incumbents. We’re talking about rules that will apply to wireless in addition to wired infrastructure, and to new firms as well as incumbents. On the wireless side, carriers are facing genuine and serious bandwidth constraints. There are a number of ways to deal with bandwidth limitations: you can meter by the bit, impose a cap on total bandwidth, deliberately throttle each user below the full capacity of the network, or some combination of the three. You can also use a variety of indirect mechanisms to limit bandwidth consumption. The iPhone is an example of this: Apple basically conserves bandwidth by declining to approve the most bandwidth-hogging applications.

I honestly have no idea what “reasonable network management” means for a product like the iPhone. I think you could make the argument that consistency requires the FCC to order Apple to re-design the iPhone to make it an open platform. That strikes me as a bad idea for a variety of reasons that are probably best left for a future post. But the more fundamental point is that I don’t want a future in which Apple has to “get on the phone with Washington” before it makes architectural changes to the iPhone. Even if we assume the FCC will craft a perfectly rational policy, the very process of asking permission is going to impede the rate of progress.

Finally, I think it’s naive to assume that network providers will always be big, lumbering companies, or that network neutrality regulations will never be turned into a weapon against challengers. I’ve written at length about the story of MCI, whose entry into the long distance market the FCC delayed by a decade. Presumably, Congress could not have foreseen in the 1930s that the rules they were enacting would have an anti-competitive effect a quarter century down the road. But the fact that some industry is currently dominated by a handful of huge corporations does not mean that it will always be so, and we should therefore be cautious about the danger that such an assumption could become a self-fulfilling prophecy.


Julius Genachowski and the Bottom-Up Internet


I liked FCC chairman Julius Genachowski’s Monday speech at the Brookings Institution. I’ve argued before that network neutrality regulations are a bad idea, and the speech didn’t change my mind. I share the concerns of my colleagues Julian Sanchez and Jim Harper. But those disagreements aside, I think he did a fine job of articulating what’s great and worth preserving about the open Internet:

Historian John Naughton describes the Internet as an attempt to answer the following question: How do you design a network that is “future proof”–that can support the applications that today’s inventors have not yet dreamed of? The solution was to devise a network of networks that would not be biased in favor of any particular application. The Internet’s creators didn’t want the network architecture–or any single entity–to pick winners and losers. Because it might pick the wrong ones. Instead, the Internet’s open architecture pushes decision-making and intelligence to the edge of the network–to end users, to the cloud, to businesses of every size and in every sector of the economy, to creators and speakers across the country and around the globe. In the words of Tim Berners-Lee [nb: no relation to the host of this blog], the Internet is a “blank canvas”–allowing anyone to contribute and to innovate without permission.

This strikes me as an accurate and important observation about why the Internet has been so successful. And if readers will forgive me for my solipsism, I think there’s a close relationship between this argument and the central theme of this blog: “allowing anyone to contribute and to innovate without permission” is an essential property of most bottom-up systems, including those I’ve examined here on the blog. Wikipedia, for example, is the encyclopedia anyone can edit. Free software is available for modification and redistribution by anyone. Disruptive innovation in the news business is lowering barriers to entry and allowing new firms to enter markets formerly dominated by incumbents.

I particularly liked this passage, in which he zoomed in on one of the key characteristics of bottom-up systems:

[An end to network neutrality] would deny the benefits of predictable rules of the road to all players in the Internet ecosystem. And it would be a dangerous retreat from the core principle of openness–the freedom to innovate without permission–that has been a hallmark of the Internet since its inception, and has made it so stunningly successful as a platform for innovation, opportunity, and prosperity.

This is a point some network neutrality critics fail to appreciate. The importance of “predictable rules of the road” can be best understood by contrasting it with what happens when people are left at the mercy of a top-down decision-maker. In his famous wireless Carterfone paper, Tim Wu quotes one engineer who described developing for proprietary cell phone platforms as “a tarpit of misery, pain and destruction.” The problem was that each of the four national carriers had its own distinct, Byzantine bureaucracy for deciding which applications would be allowed on its network. Getting the approval of all four was a major headache. And of course, in the two years since Wu’s paper came out, Apple has entered the cell phone market and behaved basically the same way. The Cupertino firm blocked one iPhone application because it included a link to a parody of a famous scene from the movie Downfall. In another case it blocked an application on the grounds that it “ridicules public figures.” More recently, it gave Google the runaround about exactly what was wrong with its Google Voice application. The bottom line is an incredibly long and frustrating process, in which would-be developers are asked to sink large quantities of time and effort into products that may be rejected by Apple for any reason or no reason at all.

Bottom-up systems are different. In the free software world, for example, licenses like the GPL guarantee freedom to use and modify software in perpetuity, without permission from anyone. As I pointed out last week, the GPL will likely shield MySQL users from the worst consequences if Oracle decides it’s not interested in supporting the product. Likewise, blogging software removes the layers of bureaucracy that once separated a writer from his audience, allowing readers to enjoy the fresh, unfiltered output of their favorite writers.

The same principle applies to the open Internet. I don’t have to negotiate separately for access to each of the dozens of different TCP/IP networks in the world. As long as my packets follow the rules, I can join any network on Earth and be reasonably confident they’ll get to their destination. This is not only convenient for me, but it’s a tremendous boon for online entrepreneurs like my non-namesake Tim Berners-Lee, who was able to create the World Wide Web on a shoestring budget because the basic infrastructure had already been created.

Now, I hasten to add that this doesn’t necessarily mean that top-down systems like the iPhone ought to be illegal. I think Apple’s app-review process is obnoxious and counterproductive, but Apple has built an otherwise great product, which I voluntarily purchased, and I think Apple is entitled to run its App Store however it likes. Similarly, I don’t think being pro-network neutrality necessarily implies being pro-network neutrality regulation. There are a number of reasons to think that regulation is unnecessary and likely to backfire. But regardless, I think Genachowski’s underlying thesis is basically right: the Internet has been a resounding success because it’s a decentralized, bottom-up network that “allows anyone to contribute and to innovate without permission.”


Newspaper Bailouts: Just Say No

A year ago, I would have assumed that it was unnecessary to even address the possibility, but in the wake of the bank and auto bailouts, and the continued, precipitous decline of the newspaper industry, we’re starting to see semi-serious proposals for a government bailout of the news industry. Over at the Technology Liberation Front, Adam Thierer has a good roundup of the key arguments against bailing out the newspapers.

Probably the most important point is the one made by Slate‘s Jack Shafer:

The government’s attempt to prop up newspapers with rewrites of the tax code or Sarkozy-esque direct subsidies of government advertising and free subscriptions for young people interferes with the already-in-progress transition from print to digital news delivery that’s been accelerating for the past 15 years—or longer. Propping up troubled papers has a cost. It weakens the enterprises that are rising from below to compete with them to deliver advertising and, yes, deliver news. I can think of no better way to hinder the rise of such Web sensations as Politico and Talking Points Memo than rewriting the rules to benefit newspapers.

It’s not clear what will happen to the newspapers, but the worst possible outcome would be for the government to create a news business dominated by undead newspapers. Newspapers that are perpetually in the red but propped up by government subsidies would lack the institutional independence to provide a real check on elected officials. Even more important, by continuing to produce news at a loss indefinitely, they would stifle the creation of web-native alternatives and make readers dependent on these feeble institutions. If the newspapers are going to fail (and I suspect most of them will), they should be allowed to fail swiftly and decisively, so that their audiences will be available for new, more innovative news outlets. Zombie car companies and zombie banks are bad enough.


GBS Settlement Gets Another Critic

The chorus of voices against the Google Book Search deal keeps getting louder. Earlier this month, most of Google’s competitors and a raft of public interest organizations filed formal comments opposing the deal. Then a couple of weeks ago, the Librarian of Congress weighed in in opposition. Late on Friday evening, the Department of Justice filed the government’s official brief. The bottom line: “As presently drafted, the Proposed Settlement does not meet the legal standards this Court must apply.”

James Grimmelmann has a good rundown of the government’s concerns:

The DoJ, speaking on behalf of the United States, has two broad areas of concern: fairness to copyright owner class members and protecting competition…

On the class-action fairness side, the DoJ is concerned about whether the named plaintiffs adequately looked out for orphan owners and for foreign owners. If the settlement were flipped to opt-in for out-of-print works, that would solve a lot of the orphan issues, but the DoJ also suggests that other modifications to reduce the conflicts of interest (such as better escrowing procedures) could work, too. As for foreign owners, the DoJ’s idea is that the named plaintiffs should include class representatives who are foreign copyright owners for in-print and out-of-print books…

On the antitrust side, the DoJ is deliberately cautious, saying it has an open investigation. I suspect that one of the reasons for caution is to avoid committing while negotiations are still open. They raise four issues, some of which I simply hadn’t noticed.

The antitrust concerns are fairly technical. The bottom line, though, is that the Obama administration has thrown its weight squarely against this deal in its present form. I have to imagine that the judge will reject the deal and order the parties back to the negotiating table. The real question is what guidance, if any, the judge will give the parties, and how large the changes he demands will be. He could also respond to the criticism by making structural changes to the negotiation, such as appointing independent counsel to represent foreign rights holders or the owners of orphan works.

The government was careful to emphasize that it’s not opposed to a deal in principle. Presumably it doesn’t want to kill a deal outright, and it may also want to give itself some leverage to influence the shape of the second draft. This case is far from over.


Patents and the Coase Theorem


One of the most famous essays in economics is Ronald Coase’s “The Problem of Social Cost.” Its key argument, which was later dubbed the Coase Theorem by George Stigler, says that in a world with zero transaction costs, the initial allocation of rights doesn’t matter because people will negotiate toward an allocation of rights that maximizes total social utility.

Coase illustrates this principle with an example involving a farmer and a rancher who occupy adjacent parcels. The rancher’s cattle sometimes stray onto the farmer’s land and damage his crops. Coase’s claim is that it doesn’t matter whether the law holds the rancher liable for the damage to the farmer’s crops or not: either way, the rancher and farmer will reach a bargain that maximizes the joint value of the rancher and farmer’s output.

The argument (which is worth reading in its entirety) proceeds as follows. If the law requires the rancher to pay the farmer compensation, then the rancher will only expand his herd if the increased profits from doing so exceed the compensation he will be forced to pay the farmer as a result. Conversely, if the law does not require the rancher to compensate the farmer, then if the farmer’s crops are worth more than the rancher’s cattle, the farmer will pay the rancher not to expand the size of his herd. Coase’s theorem says that the size of the herd—and hence, the total value of the goods produced by the farmer and rancher combined—will be the same under either legal regime, and that the amount produced will maximize social welfare.
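To make the logic concrete, here is a toy sketch in Python. The $60 marginal profit per steer and the $100 of crop damage are hypothetical numbers of my own, not Coase’s; the point is only that the inefficient expansion never happens under either liability rule:

```python
# Hypothetical numbers: one extra steer earns the rancher $60 in profit
# but does $100 of damage to the farmer's crops.
STEER_PROFIT = 60
CROP_DAMAGE = 100

def herd_expands(rancher_liable: bool) -> bool:
    """Does the rancher add the extra steer under the given liability rule?"""
    if rancher_liable:
        # The rancher must compensate the farmer, so he adds the steer
        # only if the extra profit exceeds the damages he'd have to pay.
        return STEER_PROFIT > CROP_DAMAGE
    # The rancher isn't liable, but the farmer will offer up to $100 for
    # him not to expand, and the rancher accepts anything over $60 -- so
    # a mutually beneficial bargain prevents the expansion.
    bargain_struck = CROP_DAMAGE > STEER_PROFIT
    return not bargain_struck

# Same outcome under either legal regime: the steer that produces less
# than it destroys is never added.
print(herd_expands(rancher_liable=True), herd_expands(rancher_liable=False))
# False False
```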

In subsequent writing, Coase has emphasized that his point was not that this was a realistic model for how the world actually worked, but rather that economists need to take more seriously the importance of transaction costs in economic analysis. Many free-market scholars in the law-and-economics tradition employ Coase’s argument in making their case for reducing transaction costs and thereby increasing market efficiency.

A version of this argument pops up pretty frequently in discussions of patent policy. We saw some examples in last week’s comments regarding my criticism of Intellectual Ventures. Robb Shecter, for example, argued that firms like IV are valuable because they “introduce liquidity into the system.” While Robb didn’t explicitly reference Coase, the key assumption seems to be that greater liquidity gets us closer to that zero-transaction-cost world in which patents are allocated to their highest-value use. F. Scott Kieff, writing for the Cato Institute, offered a similar criticism of the Supreme Court’s LG v. Quanta decision. By placing limits on patent holders’ ability to sub-divide patent rights, Kieff warned, the court would “frustrate patent deals by taking contracting options off the table.”

This approach to patent policy strikes me as misguided. The fundamental problem is that it forgets that a patent is not a productive asset like a truck or a factory. A liquid truck market is a good thing because the highest-valued use of a truck will likely be the use that gets the most valuable cargo to consumers. In contrast, a patent by itself produces nothing of value. It is not an input to any productive process. It simply entitles its holder to sue those who enter a particular market. There is, therefore, no a priori reason to think that a more liquid market for patents will enhance social welfare. To the contrary, patents are valuable precisely because they allow firms to increase their profits by doing things that economists generally regard as economically damaging: litigating and limiting competition.

Indeed, in some industries, high transaction costs are probably the only thing preventing patents from bringing business grinding to a halt. In the software industry, for example, there is a large number of broad, vague patents that are routinely infringed by numerous software firms. The only reason we still have a relatively healthy software industry is that it’s more work than it’s worth to find and sue all the people who are infringing—which, to a first approximation, is everyone. This is not a process we want to make more efficient.

And this is where IV comes in. IV’s business model is to amass such an imposing patent thicket that, simply as a statistical matter, it’s probable that any given technology company infringes a large number of its patents. Once IV has a suitably imposing patent thicket, it benefits from a twisted kind of economy of scale. It no longer has to bother with cataloging the patents any given company infringes; it’s enough to identify a few representative examples and then gesture in the general direction of its menacing 27,000-patent portfolio. This will induce most companies to pay up without a fight, leaving IV with plenty of resources to make examples of the few that resist.
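The arithmetic behind that statistical threat is straightforward. In this toy model, only the 27,000-patent portfolio size comes from the post; the per-patent infringement probability is an assumed, purely illustrative figure:

```python
# Toy model of the thicket threat: if each of N patents is independently
# infringed by a given company with small probability p, both the expected
# number of infringed patents and the odds of infringing at least one are
# uncomfortably high.
N_PATENTS = 27_000
p = 0.001  # assumed chance that any one patent reads on your product

expected_infringed = N_PATENTS * p
prob_at_least_one = 1 - (1 - p) ** N_PATENTS

print(f"expected infringed patents: {expected_infringed:.0f}")  # 27
print(f"P(infringe at least one):  {prob_at_least_one:.4f}")    # 1.0000
```

Even with a one-in-a-thousand chance per patent, infringing at least one patent in the portfolio is a near-certainty, which is what makes the “pay up or else” pitch credible without cataloging anything.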

This “reduces transaction costs” in a sense, but it’s a mistake to assume this makes it socially beneficial. Coase’s theorem is fundamentally about two productive parties negotiating toward an arrangement that maximizes their joint product. It’s not about a situation in which one firm is in the business of producing wealth and the other is in the business of using the threat of lawsuits to extract it from the first party. When the patent system is as dysfunctional as ours is, the limited liquidity of the patent market is a crucial check on the amount of damage these firms can do. The patent system’s flaws aren’t Nathan Myhrvold’s fault, but firms like his greatly magnify the damage those flaws can cause.


WLF against GBS Deal

Over at the Cato blog, I point out that the Washington Legal Foundation opposes the Google Book Search deal.
