Defending the Neutral Internet in Civil Society

I’m at Students for Free Culture’s annual conference. As I said a couple of weeks ago, I think the growth and enthusiasm of the free culture movement is really exciting. When I was an undergrad at the University of Minnesota, I helped create a group called Students for Fair Copyright that ran events like this one to educate people about the ways the Digital Millennium Copyright Act threatens individual liberty. Running a DMCA-reform group in 2001 was tough sledding. Interest in the issue was pretty much limited to computer geeks, and the group petered out after a handful of events in the fall of 2001. So it’s amazing to find, eight years later, that there are dozens of active and enthusiastic student groups doing the same kind of work.

I was lucky to be invited to speak on a panel on network neutrality. I reprised the themes of my Cato paper, focusing in particular on the need to distinguish between network neutrality as a technical principle and network neutrality as a regulatory regime. The dynamics of the legislative process have unfortunately led a lot of people to assume that if you support the open, bottom-up Internet, you must also support giving the FCC authority to mandate network openness. Conversely, people assume that if you oppose government regulation of the Internet, you must favor network providers adopting discriminatory policies. I think this is a mistake; it’s perfectly consistent to support the open Internet while opposing government regulation to mandate that outcome.

During the question-and-answer session, people asked how one can go about promoting network neutrality if not through the legislative process. I pointed to several specific examples. There are evasion technologies like BitTorrent header encryption. There are tools like Herdict and Google’s forthcoming discrimination detector. And people who are members of university communities can lobby the relevant decision-makers to protect network neutrality on their own networks.

But listing specific examples like this runs the risk of missing the more fundamental point: controlling the Internet is hard because it’s so large, decentralized and diverse. A determined ISP can probably use technologies like deep packet inspection to block the use of any given application or website. But the very complexity and diversity of the Internet means that these tools are inherently clumsy. Using them invariably produces collateral damage in the form of both angry customers and bad PR.

And users have a fundamental advantage: there are far more of us than there are of them. If a significant fraction of users actively resists network discrimination by adopting circumvention tools, ISPs simply don’t have the manpower to keep up. And even passive resistance can be effective. When a clumsy effort to block BitTorrent breaks unrelated services, it hurts the ISP’s bottom line even if customers never figure out why their connections suddenly stopped working.

So there are two general ways that civil society can help to preserve network neutrality. One is user education. Educated users are much harder to coerce than ignorant ones: users who don’t understand the value of open networks are more likely to let themselves be herded into walled gardens, while educated users will resist such efforts.

Second, we can promote the development of open, decentralized software platforms. Monolithic, homogeneous online services are relatively easy to filter or block. In contrast, open platforms—and especially free software—support the development of a “long tail” of software and websites that is much harder for a network owner to understand and control.

The great thing about these trends is that they’re likely to happen with or without an organized, top-down effort. Open software platforms have been gaining market share for decades simply because they work better. Users are learning about the advantage of open networks as they use them. Organizations like Students for Free Culture accelerate the process, and that’s great. But I think that fundamentally, the open Internet will be preserved because it works better than the alternatives, and people will defend it because it’s in their interest to do so.

The Illusion of Control in Vietnam

I’ve said before that if you want to understand the limitations of hierarchical organization, a good place to start is with the world’s most powerful hierarchical institution: the US military. And to study the military’s flaws, the obvious place to start is with its biggest failure: the war in Vietnam. To that end (and at the suggestion of Arnold Kling), I’ve been reading The Best and the Brightest, David Halberstam’s ironically named critique of the mistakes that led to the Vietnam War.

Maxwell Taylor

The debates over the Iraq and Afghanistan wars have made discussion of the Vietnam War so common that it has become a cliché. I think Matt is right that reading books about Vietnam isn’t going to tell us what to do in Afghanistan. In any event, I’m not an expert on foreign policy, so that won’t be my focus. Rather, I’m interested in understanding the characteristic pathologies of large, hierarchical institutions, and how those pathologies contributed to mistakes in Vietnam. In particular, I’m interested in how the military’s hierarchical structure constrained the choices of Presidents Kennedy and Johnson and other senior decision-makers in the Vietnam War.

One of Halberstam’s recurring themes is that the presidents and cabinet officials who were nominally in charge of the war had much less control than they thought they did. The story begins in the early days of the Kennedy administration, when General Maxwell Taylor recommended dispatching combat troops to Vietnam to prop up the government of Ngo Dinh Diem. The recommendation was hotly debated inside the Kennedy administration, and was ultimately rejected in favor of a more incremental approach. Halberstam explains (pp. 175-179):

On November 11 [1961], there was a new [Defense Secretary Robert] McNamara paper, done jointly with [Secretary of State Dean] Rusk, which reflected the President’s position. It was a compromise with the bureaucracy, particularly the military, and a compromise with the unstated, unwritten pressures against losing a country. Kennedy would send American support units and American advisors, but not American combat troops. We would help the South Vietnamese help themselves. If there really was something to South Vietnam as a nation and it really wanted to remain free, as we in the West defined freedom, then we would support it. We would send our best young officers to advise down to the battalion level, we would ferry the ARVN into battle against the elusive Vietcong, and we would, being good egalitarians, pressure Diem to reform and broaden the base of a creaky government and modernize his whole society…

Dean Rusk

For many reasons, the Taylor-Rostow report was far more decisive than anyone realized, not because Kennedy did what they recommended, but because in doing less than it called for, he felt he was being moderate, cautious. There was an illusion that he had held the line, whereas in reality he was steering us deeper into the quagmire. He had not withdrawn when a contingent of 600 men there had failed, and now he was escalating that commitment to 15,000, which meant that any future decision on withdrawal would be that much more difficult. And he was escalating not just the troop figure but changing a far more subtle thing as well. Whereas there had been a relatively low level of verbal commitment—speeches, press conferences, slogans, fine words—his Administration would now have to escalate the rhetoric considerably to justify the increased aid, and by the same token, he was guaranteeing that an even greater anti-Communist public relations campaign would be needed in Vietnam to justify the greater commitment. He was expanding the cycle of American interest and involvement in ways he did not know…

While the president had the illusion that he had held off the military, the reality was that he had let them in. They now began to dominate the official reporting, so that the dispatches which came into Washington were colored through their eyes. Now they were players, men who had a seat at the poker table; they would now, on any potential dovish move, have to be dealt with. He had activated them, and yet at the same time had given them so precious little that they could always tell their friends that they hadn’t ever been allowed to do what they really wanted. Dealing with the military, once their foot was in the door, both Kennedy and Johnson would learn, was an awesome thing. The failure of their estimates along the way, point by point, meant nothing. It did not follow, as one might expect, that their credibility was diminished and that there was now less pressure from them, but the reverse. It meant that there would be an inexorable pressure for more—more men, more hardware, more targets—and that with the military, short of nuclear weapons, the due bills went only one way, civilian to military. Thus one of the lessons for civilians who thought they could run small wars with great control was that to harness the military, you had to harness them completely; that once in, even partially, everything began to work in their favor.

In the next few posts, I’ll examine why a president who is nominally the boss of the military brass found it so difficult to bend them to his will. And I’ll explore what lessons the story has for hierarchical institutions more generally.

Recapping the Challenges of Top-Down Organization

During November, I did a series of posts examining some of the systematic weaknesses of top-down social structures. This month I’ll be returning to that theme, and I thought I’d start by summarizing the key points I made in my earlier posts.

The disadvantages of top-down social systems can be appreciated from two vantage points: from the bottom of the hierarchy, and from the top. For those at the bottom of large hierarchies, the primary problem is a shortage of freedom. As Paul Graham explains, a basic premise of hierarchical organization is that, at each level of the hierarchy, a single guy (the boss) acts as a stand-in for all the people below him. So if your boss manages 10 people, you’re effectively sharing one person’s worth of freedom with 9 other people. And the larger the company, and the deeper the organizational hierarchy, the less freedom any given employee enjoys.
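To make that arithmetic concrete, here’s a deliberately crude sketch. The formula, the span of 10, and the depth values are illustrative assumptions of mine, not anything Graham specifies.

    # A crude, illustrative model of Graham's point (my gloss, not his math):
    # if every manager has `span` direct reports, each report shares the boss's
    # one person's worth of freedom with span - 1 peers, and the effect
    # compounds at every level between a line employee and the top.
    def freedom_share(span: int, depth: int) -> float:
        """Approximate fraction of autonomy left `depth` levels below the top."""
        return (1 / span) ** depth

    for depth in range(1, 5):
        print(f"span 10, depth {depth}: {freedom_share(10, depth):.2%}")
    # span 10, depth 1: 10.00%
    # span 10, depth 2: 1.00%
    # span 10, depth 3: 0.10%
    # span 10, depth 4: 0.01%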

This can be a real problem for organizational productivity because good ideas rarely respect the chain of command. Almost everyone who has worked in a large organization has a story in which their efforts to improve the organization’s efficiency or effectiveness were thwarted by bureaucratic obstacles. Which is to say that the constraints of hierarchical organization left them insufficient freedom to put their ideas into practice.

From the top of the hierarchy, the fundamental problem is a shortage of information. A major function—perhaps the major function—of a manager is to act as an “information funnel.” A manager gathers and synthesizes information from below her and then transmits the most pertinent bits of information to the management layers above her.

Transmitting all of that information in its raw form would be overwhelming to its recipients, so good abstractions are needed. Budgets, organizational charts, product lines, brands, mission statements, and the like are tools that allow senior managers to make some sense out of the activities of the hundreds or thousands of people below them. Good abstractions also help senior management to clearly and crisply communicate instructions to the people below them.

But abstractions are leaky. They can only imperfectly represent the messy underlying reality. Which means that feedback is crucial for effective decision-making. Everyone makes mistakes, but effective organizations give their decision-makers rapid and clear feedback when they screw up.

One difficulty is that the information conduits—middle managers—are not disinterested parties. It’s their job to pick and choose which information to pass along, and they have powerful incentives to emphasize information that makes them look good and to leave bad news on the cutting-room floor. As a result, the information that comes out of the “information funnel” tends to be a lot rosier than the information that went in. Indeed, it’s common for the leaders of large, dysfunctional organizations to be among the last to discover problems with their decisions. Because the people under them provide a steady stream of seemingly positive feedback, they feel well-informed about the state of the organization even as they ignore festering problems. The consequences of this are never good; they range from comical to catastrophic.

So viewed from the top of the organizational hierarchy, the fundamental disadvantage of hierarchical management is that it asks managers to make decisions affecting thousands (or even millions) of people with limited, fragmentary, and distorted information. A good manager understands this and devises strategies to detect leaky abstractions before they become serious problems. A bad manager is blissfully ignorant of this danger and drives his organization off a cliff.

Twitter and Luddism

George Packer laments the fact that Twitter is replacing books. I suspect he’s overstating his case—there are still plenty of books being written and read—but Matt Yglesias gets to the more fundamental point:

Despite his protestations to the contrary, this just amounts to Packer offering a luddite argument. The life of a prosperous American man circa 1960 was pretty good. No risk of starvation, no idiocy of rural life, decent job stability, etc. For your leisure time you have many books to enjoy, can listen to records (or the radio), go to the movies, or watch one of three television networks. Plenty of social problems around, but nobody was writing about “the crisis of the under-entertained American” or anything like that. And yet just consider the volume of new books that have been written in the past 50 years. Just consider the volume of new good books that have been written in the past 50 years. And yet the earth still revolved around its axis in 24 hours and around the sun in 365 days. All those new books represent a loss of time available to read all the great pre-1960 books. Less Hamlet, less Great Gatsby, less Moby Dick, less Crime and Punishment, less Decline and Fall of the Roman Empire, and we mourn the loss of these great works!

Obviously, though, the publication of new books is progress rather than regress. A person who chose to never read a single piece of post-1960 fiction could still live a rich and full life. He could even adopt a sneering attitude toward people who insisted on reading new novels. And people who subscribe to cable television (later: DVRs). And people who buy VCRs (later: DVD players). And people who read blogs (later: Twitter feeds). But what does it really amount to? To take advantage of new opportunities to do new things means, by definition, to reduce the extent to which one takes advantage of old opportunities to do old things. One shouldn’t deny that the losses involved are real—of course they are—but simply point out that it’s unavoidable. To say, “aha! this is the thing—this Twitter, these blogs—that’s crowded books out of my life” is a kind of confusion. Life is positively full of these little time-crunches. The fact that something displaces something of value doesn’t mean that it has no value, it just means that it’s new. To displace old things is in the nature of new things, and to cite the fact of displacement as the problem with the new thing really is just to object to novelty.

Literate people have been taught to venerate book-reading, and for good reason. Human civilizations have accumulated a tremendous amount of knowledge and wisdom over the last few centuries, and reading books is a good way to absorb some of it and put it to use in our lives. But the key part is the reading part, not the book part. The Internet is accelerating the rate at which our culture can produce, disseminate, and absorb knowledge, and Twitter is one of the many technologies helping us do that. Just this morning, for example, I learned from Bram Cohen about this interesting paper arguing that “Happy Birthday” isn’t under copyright. I learned that Chris Hayes will write a book that I hope to have time to read. Late last night, Kerry Howley tweeted about compiling a list of classic take-down essays, and Radley Balko responded with a link to this classic Mencken obituary of William Jennings Bryan.

I don’t see any obvious criteria by which to judge the Mencken essay or the copyright paper—and the literally thousands of other items of reading material I’ve discovered via Twitter—as more or less important than whatever book happens to be at the top of my to-read stack. There’s nothing magical about the printed form. Maybe those particular subjects don’t interest you, but fortunately there are lots of people tweeting about subjects that don’t interest me but might interest you. Frankly, Twitter is exactly as interesting as you are. If you tried Twitter and found it boring or shallow, that’s probably because you chose to follow boring, shallow people. It’s not really fair to blame Twitter for that.

The Economist on Bilski

The Economist has a good write-up of the sorry state of the patent system and the Supreme Court’s impending Bilski decision:

Another field where patenting is pursued aggressively is semiconductors. But it is done there not so much to make money, nor even to bar others from using the acquired know-how. Its main purpose is for negotiating cross-licensing deals with competitors. Of necessity, inventions in chipmaking rely on lots of existing technology, which is itself covered by hundreds of patents held by numerous other firms. Without a large portfolio of patents to trade beforehand, semiconductor firms developing incrementally improved products (next-generation microprocessors and memory chips, for instance) would run into litigation and injunctions at every turn.

Pursuing patents aggressively for cross-licensing agreements has little to do with encouraging innovation, though. Indeed, by increasing transaction costs, such deals are in effect a tax on innovation. By the same token, how much of a contribution have the 12,000 or so business processes patented annually in America (but few places elsewhere) made to innovation? Precious little, by all accounts. It is hard enough to find evidence (outside the pharmaceutical and biotech industries) showing that the patent system generally spurs innovation. It is harder still to find justification for business-process patents.

What is clear is that the “non-obviousness” part of the test for patentability has not been applied anywhere near rigorously enough to internet and business-process patents. Because they lack a history of “prior art” to refer to, examiners and judges have granted a lot of shoddy patents for software and business processes.

One place the Economist errs is here:

It is not simply a failure of the United States Patent and Trademark Office (USPTO) to scrutinise applications more rigorously. The Federal Circuit (America’s centralised court of appeal, established in 1982 to hear, among numerous other things, patent disputes) has been responsible for a number of bizarre rulings. Because of its diverse responsibilities, the Federal Circuit—unlike its counterparts in Europe and Japan—has never really acquired adequate expertise in patent jurisprudence.

The reality is close to the opposite: patents dominate the Federal Circuit’s docket, and as a consequence the court tends to strongly reflect the pro-patent views of the patent bar. I’ve argued before that we’d get better results if patent appeals were handled by the regular appeals system, with its 12 regional circuit courts.

The Growth of Bottom-up Culture

A brilliant meditation by Julian Sanchez on the evolution of bottom-up remix culture.

Empowering Amateurs is a Good Thing

I’ve beaten the “economics of e-books” horse to within an inch of its life, so I’ll make one more point and then leave the poor horse alone. One point that tends to be missed when people worry about how writers or musicians will make money is that it’s far from obvious what the optimal number of professional writers or musicians is.

Consider baseball. There are tens of millions of people who play baseball and softball (I’ll just use “baseball” to refer to both for convenience). They span all ages, races, and social classes. And they do it for a wide variety of reasons. Some are kids who are doing it because their parents made them. Some are high school kids who do it to raise their status at school and become more attractive to the opposite sex. Some are college kids with sports scholarships. A huge number are adults who are doing it for the exercise or as an excuse to drink beer. And a tiny fraction plays baseball as a full-time job.

It’s hard to see any reason to be concerned, as a public policy matter, with the fraction of baseball players who play professionally. I don’t think I’ve ever heard a baseball fan complain that there aren’t enough games to watch. Indeed, the number of baseball games being played could drop by a couple orders of magnitude and it still wouldn’t be physically possible for a hard-core fan to watch them all.

You could make the same point about a wide variety of other cultural activities. Knitting and quilting, winemaking and beermaking, and cooking are all cultural activities that people perform at a wide variety of skill levels and with a wide variety of “business models.” They’re all “industries” in which the low end of the market is dominated by people who have entered it as a hobby, social activity, or retirement project. My wife regularly buys $20 worth of wool and then spends 100 hours knitting a sweater with it. Obviously she’s not going to be able to sell the sweater at a profit, but that’s not the point.

There’s something perverse about the way the 20th-century book and recording industries were driven almost entirely by commercial considerations. There’s no reason book-writing or music-recording should be a primarily commercial activity, any more than ice skating, crocheting, or playing tennis are. But the limits of 20th-century printing, pressing, and distribution technology forced anyone who wanted to reach a large audience to employ a commercial business model. The vast majority of people who would have liked to offer books or music to a large audience didn’t have the opportunity to do so at all. The Internet is restoring a healthier balance to these cultural “industries,” allowing people like my former co-blogger Brian Moore to write as a hobby without necessarily expecting to ever quit his day job.

It’s hard to predict exactly how this will affect the number or compensation of professional writers or musicians. As a sometime freelance writer, I certainly hope that the market for paid writing will expand, and I suspect that will hold true in the long run. But it’s also not clear why this should be a matter of concern from a public policy perspective. If the market for paid writing shrinks, it will be because the amateur stuff is good enough to meet more of the demand. That’s bad for sometime professional writers like me. But it’s a good thing for the public as a whole—both because they get more stuff to read and because some of them get the satisfaction that comes from writing for a non-trivial audience.

Making Money from Free Books

Conan O'Brien: struggling to make ends meet with free content

When you predict that the price of a particular kind of content will go to zero, a lot of people assume that means that the producers of that content will be unable to feed their families. Yet the world is full of counter-examples. Lots and lots of people earn a living—and some get filthy rich—making content that they (or their employers) give away for free. The writers at Slate and Ars Technica do it. The writers at the Washington City Paper do it. Radio talk show hosts do it. Conan O’Brien got filthy rich doing it. I did it before I started grad school. Will Wilkinson is doing it as I write this!

Still, people don’t seem to find these examples very persuasive. Many people have an intuition that there are some kinds of content (television, radio, weekly newspapers, blogs) that are naturally monetized through ads, and there are other kinds of content (books, movies, CDs) that can only be monetized effectively by selling copies. So let’s take a look at an industry that has traditionally been focused on the sale of copies, but is currently seeing that business model challenged: the music industry.

It’s important to draw a distinction between the recording industry and the broader music industry of which it is a part. I think we all want a future in which enough musicians can make a living that we’ll have a generous supply of recorded music to listen to. But it won’t necessarily continue to be produced by the small number of vertically integrated, high-overhead firms that dominated the recording industry in the late 20th century.

The recording industry is doing poorly, but the news in the broader music industry isn’t so grim. Check out this chart from the Times of London spotted (as usual) by Mike Masnick:

[Chart from the Times of London: recorded revenue (to labels), live revenue (to artists), and PRS royalties]

The red line at the top, “recorded revenue (to labels),” is the one that has been getting all the press lately. It’s falling like a rock, which is bad news for the employees and shareholders of record labels. The light green line, “live revenue (to artists),” hasn’t gotten as much attention, but I think it’s just as important. It’s rising rapidly, and that’s good news for musicians. The dark green line below that, “PRS revenue,” tracks the royalties paid when copyrighted music is played in public venues. As we might expect, this revenue stream has been relatively constant over the last few years.

If you’re a British musician, this chart is fantastic news. The bulk of the revenue from CD sales goes to labels, not artists. In contrast, musicians get the lion’s share of live show revenues. So if consumers are shifting their spending from CDs to concert tickets, that’s great news for musicians. Moreover, the chart makes clear the fundamentally antagonistic relationship between musicians and their labels. When a customer buys a CD, most of the revenue goes to the label. When a customer buys a concert ticket, most of the revenue goes to the musician. So the musician’s interest is in having as many people as possible listen to her music, whether or not they pay for it. In contrast, the label wants to maximize CD sales, regardless of the effect on the band’s fan base. This analysis helps explain the ridiculous situation in which OK Go has been prevented by its label from allowing embedding of its latest music videos. The benefits of embedding—a larger fan base—accrue mostly to OK Go. The benefits of disabling embedding—slightly higher YouTube ad revenues—accrue mostly to EMI.
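To see why that shift matters so much to musicians, here’s a back-of-the-envelope sketch. The revenue splits are illustrative assumptions on my part, not figures from the Times chart or from any actual label contract.

    # Illustrative only: assumed splits, not actual contract terms.
    LABEL_SHARE_OF_CD = 0.85       # assume the label keeps most of each CD sale
    ARTIST_SHARE_OF_TICKET = 0.70  # assume the artist keeps most of each ticket

    def artist_income(cd_spend: float, ticket_spend: float) -> float:
        """Portion of a fan's spending that reaches the artist, under the assumed splits."""
        return cd_spend * (1 - LABEL_SHARE_OF_CD) + ticket_spend * ARTIST_SHARE_OF_TICKET

    # The same $30 fan budget, spent two different ways:
    print(artist_income(30, 0))   # all CDs:     4.5  (the artist gets $4.50)
    print(artist_income(0, 30))   # all tickets: 21.0 (the artist gets $21.00)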

Postrel on E-Book Prices and Demand Elasticity

Virginia Postrel makes the case for cheap e-books:

The common intuition is that e-books should be cheap because they aren’t physical–no printing, no shipping. Ah, say contrarians, printing and shipping make up only a tiny fraction of a book’s costs. E-books aren’t really cheap.

Like publishers themselves apparently, these wise guys are using the wrong cost figures. To calculate the cost of a copy, they’re loading on fixed “pre-production” costs like the editor’s salary and the publisher’s rent. They’re including the marketing budget. But these are fixed costs. They don’t change when you produce another copy. They may be important when deciding whether to publish a book at all, but once the money has been spent they’re irrelevant to what you charge for a given copy. Optimal pricing should be based on the marginal cost of that incremental copy. Cover that incremental cost, and selling one more copy is profitable. The common intuition that e-books should be cheap reflects this basic microeconomics: Producing and delivering another e-copy costs next to nothing.

The other side of the equation is consumer response: How many more copies will people buy if the price goes down? Or, in economic lingo, what is the price elasticity of demand? Book publishers talk (and often act) as though book buyers aren’t particularly price sensitive. The Borders and Barnes & Noble coupons in my email suggest otherwise. So does what little academic research exists on the subject. In a paper looking at people buying physical books using a shopbot, economists Erik Brynjolfsson, Astrid Andrea Dick, and Michael D. Smith found very large elasticities: A 1 percent drop in price increased units sold by 7 percent to 10 percent.

Of course, people who use shopbots are likely to be more price sensitive than average. But there’s anecdotal evidence that prices matter a lot for e-books. As The New York Times reported recently, most of the books on the Kindle bestseller list are being given away for free. And comments on various discussion threads among Kindle users suggest that many are bargain hunters looking for a good, cheap read rather than a specific title.
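To make the arithmetic concrete, here’s a minimal sketch of what elasticities in that range would imply. The starting price, the sales figure, and the size of the price cut are hypothetical, and applying a locally estimated elasticity to a large price change is a rough extrapolation.

    # Rough sketch: effect of a price cut under an assumed price elasticity of demand.
    # Elasticity is expressed as a positive number: a 1% price drop raises unit
    # sales by `elasticity` percent (the 7-10 range is what Brynjolfsson, Dick,
    # and Smith report for print books bought via a shopbot).
    def after_price_cut(price, units, cut_pct, elasticity):
        new_price = price * (1 - cut_pct / 100)
        new_units = units * (1 + elasticity * cut_pct / 100)  # linear extrapolation
        return new_price, new_units, price * units, new_price * new_units

    # Hypothetical e-book: $14.00, 1,000 copies sold, price cut to about $10.
    for elasticity in (7, 10):
        p, u, old_rev, new_rev = after_price_cut(14.00, 1000, 28.6, elasticity)
        print(f"elasticity {elasticity}: ${p:.2f}, {u:.0f} copies, "
              f"revenue ${old_rev:,.0f} -> ${new_rev:,.0f}")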

Copyright and the “Right to Profit”

Over at the America’s Future Foundation website, Sonny Bunch responds with indignation to Matt Yglesias’s argument about the inevitability of free music. He starts by quoting the following excerpt from Matt’s post:

It is, of course, possible that at some point the digital music situation will start imperiling the ability of consumers to enjoy music. The purpose of intellectual property law is to prevent that from happening, and if it does come to pass we’ll need to think seriously about rejiggering things.

Bunch responds:

No! False! The purpose of intellectual property law has very little to do with Matt Yglesias being able to enjoy a wide variety of new music. The purpose of intellectual property law is to protect the intellectual property created by artists so they are rewarded for their efforts. The purpose of intellectual property law is to punish people who steal that which isn’t theirs.

I have trouble getting too worked up about the semantic question of whether copyright infringement is “really” theft or not. I don’t engage in illegal file sharing and I don’t condone the practice. But at the same time, there are important differences between literal theft and copyright infringement, and I don’t think it’s particularly illuminating to equate the two.

But I do think Bunch is on shaky theoretical ground. America’s Founders had a pretty clear view of this subject, which they enshrined in our Constitution, and it’s at odds with the story Bunch is trying to tell. The Founders placed property rights protection in the Fifth Amendment, reflecting its status as a fundamental right. In contrast, the copyright clause appears in Article I, Section 8. That’s a section that enumerates the powers of Congress, not the rights of citizens. Indeed, the Constitution does not require Congress to grant copyrights at all, and contains no specific protections for copyright holders. To the contrary, the only specific requirement is a limitation on copyright protection; it requires that copyrights—unlike traditional property rights—be “for limited times.” Finally, the Constitution contains an explicit statement that the purpose of copyright is a utilitarian one: to “promote the progress of science and the useful arts.”

Indeed, if Bunch seriously believes that the function of copyright law is to “punish people who steal that which isn’t theirs,” I would be curious to know whether he obtained Matt’s permission before quoting his blog post. This, of course, is permitted under copyright’s fair use doctrine. But if copyright is just another form of property rights, then theft is theft. I don’t think there’s a section in Locke’s Second Treatise that says stealing is OK if it’s done in small increments.

I was also puzzled by Bunch’s argument that copyright law is justified by artists’ “right to profit from their labors.” This is a peculiar argument to see on a blog of a free-market organization. In a free market, people do not have a right to profit from their labors. To the contrary, the genius of capitalism is precisely that profits are determined by consumers through the market process. Sometimes people make poor business choices and lose money. Sometimes increased competition pushes down prices and drives the least-efficient producers out of business. This is, in fact, exactly what’s happening to incumbent recording labels. That’s unfortunate for shareholders of those companies, but for the rest of us it’s simply part of the market process that has made us such a wealthy nation.

Similarly with authors, artists, and other creative people. Their compensation should be set by market forces. As I’ll be explaining in a future post, I don’t think zero-priced content means that musicians or authors won’t be able to make a living. But that’s neither here nor there as a policy matter. The fundamental point is that copyright is not a welfare program for musicians or authors. The function of the copyright system is not to ensure artists can “profit from their labors”; it’s to benefit the general public by “promoting the progress of science and the useful arts.”
