Why Books Want to Be Free


Yesterday I sketched a model of pricing in the traditional book industry. The question I’d like to address now is what this model implies for the future of the eBook industry.

My argument leans heavily on the proposition that the price of content tends not to vary with the quality of that content. So before I get into my predictions, it’s worth commenting on how universal this “law of one price” seems to be. The price of content varies by medium (hardcover fetches more than paperback) and timeliness (first-run movie theaters charge more than second-run movie theaters). But in virtually every competitive market for mass-market content, prices tend not to vary with the quality of the content itself. New music CDs are almost always priced at $10-15, new movies at $15-20. Magazines sell for $3-5 a copy at the newsstand. Movie theaters charge the same price for tickets whether the movie is a $200 million blockbuster or a $2 million indie movie. There’s every reason to think that the mature eBook industry will conform to the same pattern: a “standard” price will emerge, and publishers won’t deviate from it very much.

What will that price be? The obvious point is that the marginal cost of “producing” and distributing a new copy of an eBook is very close to zero. So assuming a competitive market, we should expect prices to be pushed down to zero.

There are two factors that will push the prices of eBooks down to zero. First, the supply of eBooks is likely to expand dramatically, as publishers will have every incentive to publish a lot of books that they would have judged to be not good enough for paper printing. This increased competition will put downward pressure on prices. Second, the market for eBooks has no natural floor. Almost any price—$10, $5, even $1—is still well above marginal cost. And so once eBook publishers start cutting prices to build market share, there’s no obvious stopping point.

This is a fairly simple—some would say simplistic—argument. And it’s an argument that tends to trigger strong disagreement. Matt Yglesias, for example, made a version of this argument about music earlier today, and got a lot of comments like this:

Making a song requires time from a songwriter, skill from a performer to learn the parts, and the expertise of a recordist and the use of the recordist’s highly specialized equipment.

The distribution ought to be slightly higher than the cost of distribution, which via the internet is close to zero. But you are forgetting the costs to make the song in the first place.

Of course, Matt was not “forgetting” the cost of making the song. Rather, he understands that prices in a competitive market tend toward marginal cost (the cost of the last unit), not average cost.

Still, it’s not crazy to think that in a market where books cost $0, so few authors will be willing to write new books that consumers will be willing to start paying again. But I do think it’s misguided. There are a lot of people who will happily write books (and songs) for very little money. Publishers routinely reject manuscripts that represent hundreds of hours of an aspiring author’s work. Most of these manuscripts are terrible, of course, but a significant number are not—the publisher simply judged them insufficiently good to recoup the cost of printing on paper. That calculus will change dramatically when publishing costs next to nothing. It will be worth taking a chance even on books that have a very modest chance of success. And some of those will, in fact, be popular with readers.

And this, in turn, means that the John Grishams and Agatha Christies of the world will have a lot more competition. Thousands of talented writers who failed to persuade a publisher to print their books on dead trees will now have the opportunity to publish directly to people’s Kindles. Some fraction of them will catch the imagination of readers and find a large audience. And that puts downward pressure on Grisham’s book advances in two distinct ways: because publishers are cutting their prices to compete with other publishers, and because publishers have many more popular writers from whom to choose.

And as we’ve seen earlier, the price of content almost never varies by quality. Avatar cost $200 million to make and has been widely hailed by critics, yet tickets cost roughly the same as tickets to Jennifer’s Body, which cost $16 million to make and was widely panned. So it’s hard to imagine a stable equilibrium where new authors’ eBooks go for $1 but John Grisham’s eBooks go for $10, because content consumers tend to be price-sensitive.

If that sounds like hand-wavy theorizing to you, consider that it perfectly describes today’s blogosphere. There are millions of blogs in the world. The overwhelming majority of them are not very good, but even the top 1 percent still represents vastly more content than any one person can read. And the competition has made it virtually impossible for bloggers to charge for copies of their work. Even the most popular blogs are available for free online. Every “A-list” blogger understands that he’d lose 90 percent of his readers overnight if he tried to charge a subscription fee.

Notice that this is true even though some of the top bloggers are extremely talented and have large and growing audiences. The zero price of blog content isn’t a negative judgment about blog quality. It’s simply a reflection of supply and demand. Even at a price of zero, the supply of high-quality content exceeds the attention span of the average reader by a huge margin. Which means that the equilibrium price is zero. There’s no reason to think the economics of eBooks will be any different.

In my final post in this series, I’ll look at what this analysis implies for content creators. The short version: we’re not all going to starve to death.

Update: One point that I should have made explicitly is that this argument has absolutely nothing to do with illegal file sharing. Obviously, illicit file sharing has accelerated the decline of the recording industry, and may very well have the same effect on the eBook market. But prices would continue trending downward even if the recording industry figured out a way to completely stop illicit file-sharing, because lower prices are what you always get when barriers to entry fall and competition increases.

Posted in Uncategorized | 13 Comments

Ignorance and Competition in the Book Market


I’ve been having a long Twitter discussion with Will Wilkinson about the economics of the book industry. Will wanted to know how authors could make money without “digital rights management” technology. I replied that writing a book has always been less about making money than about promoting the book’s author, and I suggested that the activity is going to become a lot less lucrative in the future.

To think clearly about the future of the book industry, it’s essential to understand its present. In particular, to make an educated guess about what the price of a book will be in the future, we need to understand why books cost what they cost today. I think there are two key insights that largely explain the structure of the book market. First, publishers are really bad at predicting whether any given book will be a success. And second, there is an almost unlimited supply of aspiring authors, some non-trivial fraction of whom would, if given the opportunity, produce a best-selling book. Call these the ignorance and competition assumptions, respectively.

Now, today’s book prices have two distinct and puzzling characteristics. First, book prices don’t vary much and don’t seem to be correlated with popularity or quality. Most categories of hardcover books cost around $25.00. Publishers do not seem to charge extra for books by famous people. They don’t jack up the price if a book is well-reviewed, or offer discounts if it’s panned in the New York Review of Books. As Paul Graham has observed, publishers seem to price their books in rough proportion to the cost of the raw materials: longer books are somewhat more expensive, but what’s actually printed on the page has relatively little effect on a book’s price.

This can be explained by ignorance (on the part of customers) and competition (among publishers). Customers generally don’t get to read a book before they buy it, and other indicators of quality, such as reviews, are only weakly correlated with a given customer’s enjoyment of a book. Moreover, there are lots and lots of books to choose from. Together, these factors make the book-buying public strongly price-sensitive. A publisher selling a $40 book in a market where the norm was $25 would lose a lot of customers. Because book-buying is always a hit-or-miss affair, few would want that specific book enough to pay an extra $15 for it, while most would find plenty of other books of similar perceived quality at a dramatically lower price.

The second puzzling characteristic is that books are dramatically more expensive than their cost of raw materials. Printing and distributing a book costs around $5. This means that every book sold for $25 represents a huge profit to be divided among the bookseller, publisher, and author. This too is explained by ignorance, this time on the part of publishers. The print process is characterized by high fixed costs and economies of scale. This means you have to sell several thousand copies of a book to recoup the costs of printing it. Most books do not hit this target and so lose their publishers money. Hence, when you buy a book for $25, you’re not only covering the $5 it cost to print and distribute that book, but you’re also helping to defray the costs of several other books that wound up in the remainder bin.
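The break-even arithmetic behind “several thousand copies” can be made concrete. This is a minimal sketch using the post’s $5 unit cost and $25 price; the $50,000 fixed-cost figure is a hypothetical assumption for illustration, not an industry number:

```python
# Illustrative numbers only: unit cost and price come from the post;
# the fixed cost is a hypothetical assumption.
FIXED_COST = 50_000   # assumed setup cost: editing, typesetting, marketing
UNIT_COST = 5         # per-copy printing and distribution (from the post)
PRICE = 25            # standard hardcover price (from the post)

margin_per_copy = PRICE - UNIT_COST        # gross margin on each copy sold
break_even = FIXED_COST / margin_per_copy  # copies needed to recoup fixed costs
print(f"Break-even: {break_even:.0f} copies")
```

On these assumptions a print run needs 2,500 sales just to recoup its fixed costs, which is why most books lose money and the winners have to carry the remainder bin.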

With this background, it should be easy to see how authors’ compensation is determined. To simplify the math a bit, let’s assume there are only two outcomes for a book: hit or not-hit. Then the value of a manuscript is the profit from a hit, times the probability of a hit, minus the losses from a non-hit times the probability of a non-hit. Our ignorance assumption means that the probability of a hit is low, which means that the expected value of printing a book—and hence the value of a manuscript from a first-time author—is very low. And this is what we see in the marketplace. When a publisher takes a chance on a non-famous, first-time author, the advance tends to be relatively small.
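The hit/not-hit model above can be written out directly. All of the probabilities and dollar figures below are hypothetical assumptions chosen for illustration:

```python
# A sketch of the two-outcome expected-value model described in the post.
# Every number here is a hypothetical assumption, not real publishing data.
p_hit = 0.05             # ignorance assumption: publishers rarely pick winners
profit_if_hit = 500_000  # assumed publisher profit on a hit
loss_if_miss = 20_000    # assumed loss on a book that never earns out

expected_value = p_hit * profit_if_hit - (1 - p_hit) * loss_if_miss
print(f"Expected value of a first-time manuscript: ${expected_value:,.0f}")
```

With these numbers the manuscript is worth about $6,000 in expectation, even though the upside of a hit is half a million dollars, which is why advances for unknown authors stay small.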

Things look different for repeat authors because the ignorance assumption doesn’t apply with the same force. Stephen King and J.K. Rowling have demonstrated that they can write books that appeal to large audiences. And this means that not only is the payoff for a hit higher, but the probability of a hit is much higher as well. And this puts them in an extremely strong bargaining position with publishers and allows them to become very wealthy. Hence, we see a highly skewed distribution of earnings, with a tiny minority of authors getting multiple hits and making millions of dollars, while a huge number of authors write only one or two books, get paid very little for them, and fade back into obscurity.

Now imagine a world with omniscient publishers. Every publisher can now predict exactly how many books any given author will sell. This will have two effects. First, obviously, publishers will no longer print money-losing books. Only those books that can recoup their costs will be printed. And second, given my competition assumption, many more best-selling authors will be discovered. Both of these developments will push prices downward. The ability to avoid wasting money on duds means that publishers have a lot of room to cut prices. And publishers’ ability to find new bestselling authors greatly increases the number of bestselling books that can be printed. A world of omniscient publishers would be a world of commodity publishing: publishers would get much smaller margins and bestselling authors would get much smaller advances.

So let’s return to our own, non-omniscient world. We might say that what makes a best-selling author valuable isn’t just his writing talent—an author was probably just as talented before he was discovered as after—but the knowledge that the author is, in fact, capable of producing best-selling books. And producing this knowledge is (or at least was until recently) really expensive—to find one John Grisham, you had to publish a bunch of books by unknowns and see which ones sold. This means that the people who have convinced a publisher to bear the costs of “discovering” them have what amounts to a uniquely valuable credential. They can extract significant rents because even though there are likely plenty of others who could produce novels of similar quality, it’s too expensive to figure out who they are.

In my next post, I’ll explore how these considerations shake out in the age of the Kindle.

Posted in Uncategorized | 9 Comments

The Bottom-Up Revolution in Trucking


There’s a strong argument to be made that the Jimmy Carter administration was the most libertarian-friendly of the last half-century. One of the administration’s signal accomplishments was the deregulation of the trucking industry. Jesse Walker tells the story:

Consider the farm policies established during the New Deal. Franklin Roosevelt’s agricultural advisers fell, roughly speaking, into two competing categories. One group, representing the old agrarian anti-monopolist tradition, wanted to level the playing field for smaller operators. The others saw big business as an ally, not an enemy; they believed, as Hamilton puts it, that the feds should “cooperate with monopolistic meatpackers and milk distributors to achieve efficiencies in the mass production and mass distribution of food.” The second group quickly became dominant, and the policies that followed encouraged consolidation and privilege: Price supports fed the biggest agricultural interests, dairy regulations locked a milk cartel into place, and acreage reduction requirements led to evictions of tenant farmers.

A similar fate befell the young trucking industry. After the Motor Carrier Act of 1935, drivers who wanted to start a new trucking firm “suddenly needed much more than just a truck and trailer to start in business,” Hamilton explains. “They needed to gain operating authority as well, which the ICC granted only after lengthy and expensive proceedings meant to discourage competition.” There was one bright spot in the law, though—a rare victory for the populist elements of the administration. Agriculture Secretary Henry Wallace “recognized that independent truckers might undermine the monopoly power of railroad-based food processors,” so he endorsed an exemption to the ICC’s restrictions on the trucking trade. Drivers hauling farm products would be relatively unregulated, a decision that allowed a fleet of tiny trucking firms to flourish. Meanwhile, in the rest of the industry, the government’s rules favored large, established companies—and, later, the Teamsters, who negotiated sweetheart contracts with the cartel while disdaining independent drivers.

The Interstate Commerce Commission maintained a tightly-regulated trucking cartel for a half-century until the late 1970s:

Mike Parkhurst was a trucker turned reporter whose magazine, Overdrive, aspired to speak for the independent owner-operator; it was filled with exposés and editorials attacking the Teamsters union, the Interstate Commerce Commission (ICC), and the maze of state and federal rules that befuddled and burdened the ordinary driver. In his magazine and in testimony before Congress, Parkhurst called for a sweeping deregulation of his industry, a push that culminated with the Motor Carrier Act of 1980. The new law, sponsored by Sen. Ted Kennedy (D-Mass.) and signed by President Jimmy Carter, radically reduced the ICC’s authority, eliminating entry barriers, price controls, and other policies that had protected a cartel of carriers from competition. Before 1980, independent truckers had been limited to transporting farm commodities. Under the new rules, thousands of new firms flooded into the remainder of the industry, driving down prices for manufacturers and consumers alike.

The debate over deregulation during the 1970s is interesting because it didn’t break down along traditional partisan or ideological lines. The leading advocates were liberals—Stephen Breyer, Ted Kennedy, Jimmy Carter—but the movement also had significant support from the free-market right and from small entrepreneurs. Large, incumbent firms in these industries joined forces with their associated unions to oppose reform.

The battle, in other words, was between advocates of competition and advocates of corporatism. The corporatists dominated Washington policymaking in many industries from the New Deal until the Nixon years. For reasons that aren’t clear to me, their power collapsed in the mid-1970s. And the result was a sweeping transformation of the American transportation and communications industries whose benefits we continue to enjoy today.

Posted in Uncategorized | 16 Comments

Why Geeks Hate the iPad


Alex Payne, an engineer at Twitter, explains why he’s “disturbed” by the iPad:

The thing that bothers me most about the iPad is this: if I had an iPad rather than a real computer as a kid, I’d never be a programmer today. I’d never have had the ability to run whatever stupid, potentially harmful, hugely educational programs I could download or write. I wouldn’t have been able to fire up ResEdit and edit out the Mac startup sound so I could tinker on the computer at all hours without waking my parents. The iPad may be a boon to traditional education, insofar as it allows for multimedia textbooks and such, but in its current form, it’s a detriment to the sort of hacker culture that has propelled the digital economy.

I think virtually every computer programmer has a story like this. Some of us started in grade school—I demoed a simple BASIC program I’d written for show-and-tell in the second grade. Others didn’t find their knack for programming until after they graduated from college. But in any case it was tremendously important that we could sit down at the computers we (or our parents) already owned and start screwing around with them. We didn’t have to order special unlocked developer computers, nor did we have to submit our programs to Apple before they’d run on our friends’ computers.

I think the difference in lived experience largely explains the sharply divergent reaction you see to this issue between programmers and non-programmers. For the general public, the openness of a digital gadget is an entirely abstract issue, like whether the product is environmentally friendly or was made in a sweatshop. But there’s nothing abstract about it for those of us who regularly open up a command line. Using a locked-down computer feels like using a pair of safety scissors. It isn’t just that it’s likely to be a less innovative platform in the abstract—though it is. It’s that it’s conspicuously lacking what we view as core functionality.

Now, the obvious response is that Payne and I are not the target audience for the iPad, and we shouldn’t complain if Apple produces a product that works for everyone else. Which is fair enough—I certainly don’t want to stop Apple from making the kinds of products it wants to, or customers from buying the products they like. But it’s important to bear in mind that it’s in your interest to be using the same platform as the geeks, because (as Paul Graham has pointed out) we’re likely to come up with innovations that you’ll find useful. And we’ll probably share them with you—but only if we’re using the same platform.

Posted in Uncategorized | 18 Comments

Authority vs. Involvement in the News Business


Via Mike Masnick, Guardian editor Alan Rusbridger has a great piece explaining what’s at stake in the paywall debate:

The second issue it raises is the one of ‘authority’ versus ‘involvement’. Or, more crudely, ‘Us versus Them’. Again, this is similar to the other two forks in the road, but not quite the same. Here the tension is between a world in which journalists considered themselves – and were perhaps considered by others – special figures of authority. We had the information and the access; you didn’t. You trusted us to filter news and information and to prioritise it – and to pass it on accurately, fairly, readably and quickly. That state of affairs is now in tension with a world in which many (but not all) readers want to have the ability to make their own judgments; express their own priorities; create their own content; articulate their own views; learn from peers as much as from traditional sources of authority. Journalists may remain one source of authority, but people may also be less interested to receive journalism in an inert context – ie which can’t be responded to, challenged, or knitted in with other sources. It intersects with the pay question in an obvious way: does our journalism carry sufficient authority for people to pay – both online (where it competes in an open market of information) and print?

Or to put it another way, do we want a top-down journalism industry in which readers passively consume what reporters dish out? Or do we want a bottom-up journalism industry in which readers have the opportunity to be an active part of the journalistic process? The former is arguably better for professional reporters. But I think the latter is better for almost everyone else.

Posted in Uncategorized | Leave a comment

The New York Times vs. Google

In Tuesday’s post about the New York Times and its paywall, I made the passing comment that Sergey Brin and Larry Page might be able to design a paywall that wouldn’t hurt the paper’s bottom line. But after thinking about it some more, I think this is actually wrong. One of the most distinctive things about Google’s business strategy is its absolutely relentless focus on improving customer satisfaction to the exclusion of all else—including short-term revenue generation. Not only does Google regularly release products with no apparent prospects of monetization, the company will actually change its products in ways that reduce revenue but improve the user experience. For example: Google’s decision to enable POP and IMAP access for GMail, which means less time spent on the GMail website looking at ads (at least in the short run). Or the decision to penalize advertisers whose websites load slowly.

This is not altruism on Google’s part. While I’m obviously not privy to the thinking of senior management, there are some solid business reasons for behaving this way. First, the economics of information goods means that they can afford to give their products away for free. Once Google has created a new software product, the marginal cost of providing it to one more user is very low. This means that unlike a typical business, Google doesn’t care about costs per customer. More customers are almost always a benefit, even if many of those customers are contributing very little to the bottom line.

Second, I think Google understands that its brand is its most valuable asset. Behaving in a promiscuously pro-consumer fashion has given the company a sterling reputation with consumers. Indeed, Google’s branding is so strong that just adding Google branding to competitors’ search results causes customers to rate those results more highly. Every time Google improves one of its services, it strengthens the “halo effect” around the Google brand and thereby gives a boost to every other product in the Google stable.

Finally, and most importantly, the web is a young medium and it’s hard to predict exactly where opportunities for monetization will pop up. By worrying about maximizing “eyeballs” now, Google puts itself in the best possible position to exploit unexpected opportunities that do arise. For example, it’s not clear what the dominant business model for online video will be, but it’s likely that YouTube’s huge user base will be an asset once someone figures it out.

How does this apply to the New York Times? One thing to keep in mind is that a site’s heaviest users are the most valuable. They not only see the most ads, but they’re also the most likely to help the site in other ways: promoting its content, participating in its communities, trying out experimental new features, and so forth. If the Times demands that these users pay for access, some of them will leave. But more importantly, forcing users to pay will subtly alter users’ attitude toward the site. People will do things to help out a site they feel warm and fuzzy about that they won’t do for a site with which they feel they merely have a business relationship. Much of Yelp’s success, for example, comes from its habit of assiduously rewarding its most prolific reviewers. The Times should be looking to build that kind of relationship with its readers, not trying to wring subscription fees out of them.

Moreover, the business imperatives of the paywall will necessarily discourage certain types of experimentation because the people running the paywall will fight to kill products they view as holes in the dike. Reduced experimentation will mean missed opportunities. Obviously, I can’t predict exactly which opportunities will be missed, but that’s the point. Neither can the Times executives, which is why it’s risky to close off any doors.

One obvious retort is that Google’s massive profits give it the luxury of making long-term bets without immediate prospects of monetization. The Times, in contrast, is struggling to make ends meet so they need revenue now. This is a fair point, but I think it’s worth looking at things the other way around: Google’s huge profits are due, at least in part, to its pursuit of a promiscuously pro-consumer business strategy over the last decade. We’ll never know how much damage the last paywall experiment did to the Times‘s reputation and traffic, but I bet it was significant. Likewise, we don’t know what kinds of successes the Times might have had if it had experimented more aggressively with new products a la Google a decade ago when it was still flush with cash. Conceivably, the Times could have captured the classified market before Craig Newmark (another guy with a relentless focus on customer satisfaction) did.

Of course, the Times didn’t do this because it didn’t have Google’s corporate culture. And it still doesn’t, so it will probably go for the short-term revenue offered by the paywall.

Posted in Uncategorized | 9 Comments

The Case against the iPad


Apple released a new product, called the iPad, today. For those of you who don’t spend your days glued to Twitter, you can view all the details at Apple’s website. I’m not impressed. I’m a lifelong Mac fanboy, so I’m not averse to buying Apple stuff. But I don’t understand who this product is for, and I’m disappointed that Apple has decided to adopt the iPhone’s locked-down platform strategy.

The iPad appears to be Steve Jobs’s attempt to roll back the multi-decade trend toward more open computing platforms. Jobs’s vision of the future is one that revolves around a series of proprietary “stores”—for music, movies, books, and so forth—controlled by Apple. And rather than running the applications of our choice, he wants to limit users to running Apple-approved software from the Apple “app store.”

I’ve written before about the problems created by the iPhone’s top-down “app store.” The store is an unnecessary bottleneck in the app development process that limits the functionality of iPhone applications and discourages developers from adopting the platform. Apple has apparently chosen to extend this policy—as opposed to the more open Mac OS X policy—to the iPad.

With the iPhone, you could at least make the argument that its restrictive application approval rules guaranteed the reliability of the iPhone in the face of tight technical constraints. The decision not to allow third-party apps to multitask, for example, ensures that a misbehaving app won’t drain your iPhone’s battery while it runs in the background. And the approval process makes it less likely that an application crash could interfere with the core telephone functionality.

But these considerations don’t seem to apply to the iPad. Apple is attempting to pioneer a new product category, which suggests that reliability is relatively less important and experimentation more so. If a misbehaving application drains your iPad battery faster than you expected, so what? If you’re reading an e-book on your living room couch, you probably have a charger nearby. And it’s not like you’re going to become stranded if your iPad runs out of batteries the way you might without your phone. On the other hand, if the iPad is to succeed, someone is going to have to come up with a “killer app” for it. There’s a real risk that potential developers will be dissuaded by Apple’s capricious and irritating approval process.

Finally, there’s the iBook store, Apple’s answer to the Kindle. From all indications, the books you “buy” on an iPad will be every bit as limited as the books you “buy” on the Kindle; if you later decide to switch to another device, there’s no easy (or legal) way to take your books with you. I think this is an issue that a lot of Kindle owners haven’t thought through carefully, and that it will trigger a backlash once a significant number of them decide they’d like to try another device.

This is of a piece with the rest of Apple’s media strategy. Apple seems determined to replicate the 20th century business model of paying for copies of content in an age where those copies have a marginal cost of zero. Analysts often point to the strategy as a success, but I think this is a misreading of the last decade. The parts of the iTunes store that have had the most success—music and apps—are tied to devices that are strong products in their own right. Recall that the iPod was introduced 18 months before the iTunes Store, and that the iPhone had no app store for its first year. In contrast, the Apple TV, which is basically limited to playing content purchased from the iTunes Store, has been a conspicuous failure. People don’t buy iPods and iPhones in order to use the iTunes store. They buy from the iTunes store because it’s an easy way to get stuff onto their iPods and iPhones.

Apple is fighting against powerful and fundamental economic forces. In the short term, Apple’s technological and industrial design prowess can help to prop up dying business models. But before too long, the force of economic gravity will push the price of content down to its marginal cost of zero. And when it does, the walls of Apple’s garden will feel a lot more confining. If “tablets” are the future, which is far from clear, I’d rather wait for a device that gives me full freedom to run the applications and display the content of my choice.

Update: I guess I’ve been brainwashed by my iPhone not to notice this, but the other glaring flaw, as this post explains, is the lack of standard ports. The net effect of this is, again, to give Apple complete control over the platform’s evolution, because the only way to interact with the thing is through the proprietary dock connector. Again, this made a certain amount of sense on the iPhone, where space, weight, and ergonomics are at a premium. But it’s totally unacceptable for a device that aims to largely displace my laptop. Hell, even most video game consoles have USB ports.

Posted in Uncategorized | 40 Comments

Boaz on Avatar

My erstwhile boss David Boaz says that Avatar is an allegory about property rights:

People have traveled to Pandora to take something that belongs to the Na’vi: their land and the minerals under it. That’s a stark violation of property rights, the foundation of the free market and indeed of civilization.

Sure, the Na’vi — who, like all of the people in lefty dreams, are psychically linked to one another and to all living creatures — probably view the land as their collective property. At least for human beings, private property rights are a much better way to secure property and prosperity. Nevertheless, it’s pretty clear that the land belongs to the Na’vi, not the Sky People.

Conservatives rallied to the defense of Susette Kelo when the Pfizer Corp. and the city of New London, Conn., tried to take her land. She was unreasonable too, like the Na’vi: She wasn’t holding out for a better price; she just didn’t want to sell her house. As Jake tells his bosses, “They’re not going to give up their home.”

“Avatar” is like a space opera of the Kelo case, which went to the Supreme Court in 2005. Peaceful people defend their property against outsiders who want it and who have vastly more power. Jake rallies the Na’vi with the stirring cry “And we will show the Sky People that they cannot take whatever they want! And that this is our land!”


Will the NYTimes Paywall “Work”?

The New York Times says it plans to introduce a paywall next year. Tom Lee says it won’t work. Jerry Brito says it might:

If the NYT website’s readership is like anything else, there’s probably a power law at work. A small minority of readers make up a sizable percentage of pages read. Put another way, there’s probably a small minority of users that read a considerable amount more articles than the average user. They need to set the number of monthly free articles high enough so that the bottom (say) 98% of readers never even notice that there’s a paywall, but the top 2%, which are presumably devoted NYT fans would be affected.

Now, that’s easy to say, but figuring out the balance might be very tricky. Whatever it does, the NYT doesn’t want to affect the ad-revenue-generating traffic it’s getting. How does it do that? First, it seems to me, is that the NYT has to be realistic about how many people it can get to subscribe. It’s going to be a tiny, tiny number, but that’s money it’s leaving on the table right now. So the number of monthly free articles needs to be in the 100+ range, not the Financial Times‘ 10 a month. Second, it needs to set a reasonable subscription price. The top readers are probably devoted fans, but that doesn’t mean they’ll pay anything, and if they don’t the NYT will lose not only the subscription, but the ad revenue it now generates from those folks.
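Brito’s power-law intuition is easy to sandbox. The sketch below is purely illustrative (the Pareto shape parameter, the reader count, and the 98% cutoff are assumptions, not NYT data), but it shows how a free-article cap set at a high percentile can leave almost every reader untouched while the heaviest readers still account for a large share of pageviews:

```python
import random

random.seed(0)

# Assumption: each reader's monthly article count follows a heavy-tailed
# (Pareto-like) distribution, as Brito's power-law hunch suggests.
readers = [int(random.paretovariate(1.2)) for _ in range(100_000)]

# A free-article cap set at the 98th percentile of monthly reads leaves
# the bottom 98% of readers entirely unaffected by the paywall.
cap = sorted(readers)[int(0.98 * len(readers)) - 1]

heavy = [r for r in readers if r > cap]
share_of_reads = sum(heavy) / sum(readers)

print(f"free-article cap: {cap} articles/month")
print(f"readers over cap: {len(heavy) / len(readers):.1%}")
print(f"their share of all pageviews: {share_of_reads:.1%}")
```

The heavy tail is the whole point: the tiny slice of readers above the cap accounts for a wildly disproportionate share of pageviews, which is exactly the group a metered paywall hopes to charge without disturbing everyone else.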

This analysis makes a lot of sense, but I think it underestimates the danger that the metering mechanism will introduce frictions that discourage casual users from using the site. The plan is apparently to force users to sign up for a nytimes.com account before they can read articles. This is a fairly small hurdle, but it is a hurdle. Some fraction of users who encounter a registration form will be annoyed and push the “back” button.

Not only does the Times lose some ad revenue when users decline to register, but some of the lost readers are influential bloggers, Diggers, Tweeters, and so forth. Hence, the registration requirement may cost thousands of readers who will never even show up in the visitor logs. And this effect can snowball: fewer Diggs and tweets mean fewer new readers, and the lost readers mean even fewer Diggs and tweets.
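The snowball logic is just a geometric series. In this toy model (the sharing factor is an assumption for illustration, not a measured number), each reader turned away at the registration form also costs the Times the readers that person would have brought in by sharing, and the readers those readers would have brought in, and so on:

```python
# Toy model (numbers assumed): if each arriving reader brings in k new
# readers on average via links, Diggs, and tweets, then each reader lost
# at the registration wall costs 1/(1 - k) readers in total, because the
# geometric series 1 + k + k^2 + ... sums to 1/(1 - k) when k < 1.
def total_readers_lost(direct_losses: int, k: float) -> float:
    """Total readership cost of losing `direct_losses` readers up front."""
    assert 0 <= k < 1, "sharing factor must be below 1 for the series to converge"
    return direct_losses / (1 - k)

# Losing 1,000 casual visitors with an assumed sharing factor of 0.4
# costs roughly 1,667 readers once the missing shares are counted.
print(total_readers_lost(1000, 0.4))
```

The multiplier grows sharply as the sharing factor approaches 1, which is why frictions that look trivial in the visitor logs can be much more expensive than they appear.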

These effects are really difficult to measure, and as a consequence they’re constantly being underestimated. And they’re especially likely to be underestimated by a large, conservative bureaucracy like the New York Times Company. For example, the Times continues to shoot itself in the foot by refusing to offer full-text RSS feeds, making people like me less likely to read its blogs. Similarly, EMI, publisher of OK Go’s new album, is shooting itself in the foot by disabling embedding of the latest OK Go videos, dramatically reducing the chance that the videos will go viral as OK Go’s previous videos have.

I think this is largely a consequence of the phenomenon I discussed back in November: the way middle managers act as “information funnels” between rank-and-file workers and senior management. The information funnel tends to overweight short-term, quantitative arguments. If middle manager Smith says he has a plan that will produce a million dollars in revenue this year, while middle manager Jones says the plan will stunt long-term growth and cost tens of millions of dollars over the next decade, senior management is more likely to go with Smith’s recommendation even if Jones is right. This is not only because senior management tends to be risk-averse, but also because Jones’s argument is likely to be much subtler (harder to summarize with bullet points in a PowerPoint presentation) and is therefore more likely to get mangled as it makes its way up the organizational hierarchy.

So it might be true that a New York Times Company run by Sergey Brin and Larry Page could design a paywall that could generate (a relatively small amount of) subscription revenues without undermining its existing advertising business. But the actual New York Times Company is likely to build its paywall in a greedy and inept manner that costs far more in long-term advertising revenue than it generates in subscription revenue.


Chris Berg on Haiti and Immigration

Chris Berg makes the case for expanded immigration from Haiti to the West:

According to a 2008 study by the Centre for Global Development, Haitian immigrants in the US earn on average six times more than equally educated Haitians who stay home. It would be more effective and efficient to allow Haitians to move to other countries than wait for the international community or aid organisations or the Haitian Government to repair two centuries of institutional failure.

Immigration away from Haiti will actually help Haiti. Foreign aid to the country may be substantial, but it is overwhelmed by what expat Haitians send home. In 2008, foreign governments gave Haiti $US912 million. Haitian expats sent back at least $US1.3 billion, according to the most conservative estimates. Other estimates suggest unreported remittances to Haiti might account for up to a third of Haiti’s total GDP.

And while much foreign aid is delivered directly to the Haitian Government (which doesn’t have a wonderful track record in using it well), these remittances go straight to the Haitian people.

I think it’s great that American celebrities hosted a telethon that raised $57 million for Haiti. But a one-time infusion of $57 million pales in comparison to the hundreds of millions of dollars in remittances that could be generated every year if we allowed a significant number of Haitian nationals to work in the United States. That’s the cause American celebrities should be promoting if they really want to help the Haitian people.
