Online News as a Disruptive Technology

In my last post I promised to consider how online news organizations can produce expensive content like reporting from Iraq.

Sites wanting to produce high-quality, expensive content face a chicken-and-egg problem. If you have a large audience, you can spread the costs of producing high-quality content over a larger base of readers. And if you have a lot of high-quality content, that content will draw large numbers of readers. But if you have neither a large readership nor a lot of high-quality content, how do you get there?

The answer, of course, is that you bootstrap the process with cheap, sensationalistic content. You serve up smut, inane lists, and unpaid punditry. This kind of content is extremely cheap to produce and it draws a lot of readers.

But precisely because these kinds of content are so cheap, they attract a lot of competition and don’t stay profitable for very long. And so to keep growing, the largest and most successful sites move “up market” by hiring actual reporters who can produce original, non-sensationalistic content—the kind of content that smaller competitors can’t easily duplicate. The largest sites are in the best position to do this because they can spread the fixed costs of salaried reporters over a larger number of ad impressions.
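To make that fixed-cost point concrete, here’s a minimal sketch with invented numbers (the post doesn’t give figures): a salaried reporter’s cost, amortized over a growing base of ad impressions.

    # Hypothetical figures: a salaried reporter is a fixed cost,
    # amortized over however many ad impressions the site serves.
    REPORTER_COST = 100_000  # dollars per year (invented)

    for impressions in (1_000_000, 10_000_000, 100_000_000):
        cpm = REPORTER_COST / (impressions / 1_000)  # cost per 1,000 impressions
        print(f"{impressions:>11,} impressions/yr -> ${cpm:,.2f} per 1,000")

At a million impressions a year the reporter costs $100 per thousand impressions, far more than the ads can recoup; at a hundred million, ten cents. Only the biggest sites reach the cheap end of that curve.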

This process shouldn’t surprise us, because it perfectly fits Clay Christensen’s model of disruptive innovation. Consider this example from the steel industry, drawn from the Christensen paper I wrote about last month:

Minimills first became technologically viable in the mid-1960s. The quality of the steel that minimills initially produced was poor because they melted scrap of uncertain and varying chemistry in their furnaces. The only market that would buy what the minimills made was the concrete reinforcing bar, or rebar, market because the specifications for rebar are loose. Once rebar is buried in cement, you can’t verify whether the steel has met the specifications. Rebar was therefore an ideal market for low-quality steel.

As the minimills attacked the rebar market, the integrated mills were actually happy to be rid of that business. Their gross profit margins on rebar often hovered near 7 percent…

All was well in this relationship until 1979, when the minimills finally succeeded in driving the last integrated mill out of the rebar market. Historical pricing statistics show that the price of rebar then collapsed by 20 percent. Why? A low-cost strategy only works when there are high-cost competitors in your market. After the last integrated mill had fled up-market and the low-cost minimill was only pitted against other low-cost minimills in a commodity market, competition quickly drove prices down to the point that none of them could make money.

The minimills soon looked up-market, and what they saw spelled relief. If they could just figure out how to make bigger and better steel—shapes such as angle iron, rails, and rods—they could roll tons of money again because the margins there were 12 percent. As the minimills extended their ability and attacked that tier of the market, the integrated mills were again relieved to be rid of that business because it just didn’t make sense to defend a 12-percent-margin business when the alternative was to invest to gain share in structural beams, where margins were 18 percent…

Peace characterized the industry until 1984 when the minimills finally succeeded in driving the last integrated mill out of the bar, rod, and rail market, which caused the minimills to reap the same reward for their victory: With low-cost minimill pitted against low-cost minimill, prices collapsed by 20 percent.

The minimills had to move up-market again.

The story ends with the minimills driving most of the integrated steel mills into bankruptcy.

Of course news isn’t steel. But I think this analogy helps us understand the extremely long chart that accompanies Nate Silver’s post about the NYT paywall. It shows which news outlets are most often credited with breaking news stories. The top 50 slots are dominated by traditional news organizations. Online-only websites are much further down the list, and they produce a trivial fraction of the overall reportage. But the distribution of topics among those sites is interesting. Most of them fall into a handful of categories: tech and gadgets, celebrity gossip, and politics. The web increasingly dominates these categories of news.

The disruptive technology of the web is busy devouring the rebar market of the news business. The most successful sites are getting tired of the thin margins on the lowest rungs of the ladder and have started looking upward. The New York Times alone generated $387.3 million in digital revenue last year. That might not seem like a lot of money to the grey lady, but it looks like a huge jackpot to a still-small company like the Huffington Post. They—and dozens of their competitors—are working hard to find ways to take a piece of that pie.

Now obviously this isn’t an explanation of how web-based sites will produce high-end reporting. I’ve suggested a few possible strategies in previous posts, but I think focusing too much on the specifics will miss the broader trend. Without the technological and cultural baggage of a print past, web-only publications inherently have lower overhead. And the smaller average size of web-based publications means that the rate of experimentation is much higher. It’s only a matter of time before somebody figures out how to apply the low-cost tools of the web to high-value reporting. And the nimble, collaborative nature of the web means that successful models will be copied rapidly.

People look at today’s Huffington Post and conclude that the web can only do cheap, sensationalistic content. But in 1980, people looked at the minimills (and the microcomputer) and dismissed them as curiosities that could only serve the lowest rungs of their respective markets. That was a misunderstanding of the economics of disruptive technologies. They always start at the low end of the market, but they rarely stay there.

Web Specialization vs. Newspaper Autarky

Judging from the comments on my two posts last week on reporting and paywalls, I didn’t do a good job of making my case. I think that’s partly because this is an argument about culture as much as economics. Traditional newspaper journalists have a certain set of assumptions about the nature of journalism, the obligations of journalists, and the proper relationship between a journalist and her readers. The web has its own culture with its own set of answers to these questions. Unfortunately, the “debate” over the Times paywall has largely consisted of people on each side loudly reiterating their own side’s assumptions without paying much attention to what the other side has to say. Needless to say, that hasn’t been very illuminating.

So if you’ll forgive me for yet another post (actually two!) on an over-blogged subject, I want to step back and talk explicitly about some of the baseline assumptions made by these two cultures (the web and the print press, respectively). Every culture is solipsistic, with the result that assumptions that seem almost self-evident in one culture will strike the other culture as implausible or even perverse. It’s only by talking about these assumptions explicitly that we can make some sense of the ongoing argument between these communities.

As is often the case, the web/print culture clash is rooted in the differing capabilities of their respective technologies. Daily newspapers were born in an era of information scarcity. Distributing a newspaper was expensive enough that most people took just one.

This meant that a newspaper had to be all things to all people: it needed to cover a full spectrum of topics and do it in a way that didn’t assume the reader had access to any other publications.

The web is obviously different. There are thousands of websites that offer news coverage. Readers don’t generally go to a single news website and read it “cover to cover.” Rather, they sample from a wide variety of sources. And this means that news sites don’t need to be all things to all readers. They can focus on a particular topic to cover in depth. They can write for a narrow audience, skipping background exposition that a general-interest outlet would have to cover.

Most importantly, they can link to other sites. Links are profoundly important for the character of the web because they allow writers to make incremental contributions to ongoing conversations. Rather than trying to be “all things to all people,” writers can link to and quote from conversations that are happening somewhere else and then make a (possibly small) contribution of their own. This division of labor opens the conversation up to many more people. A nuclear physicist may not feel competent to write an AP-style news story about the Fukushima nuclear incident, but he can link to a news story about it and then offer his expert opinion. And his thoughts can, in turn, be incorporated into future writing by non-experts.

This helps explain one of the perpetual sources of friction between mainstream media outlets and bloggers. Mainstream media outlets prize original reporting to the point that they’ll perform completely redundant reporting rather than citing another journalist’s work. In contrast, on the web it’s considered perfectly kosher to quote and link to another news outlet without doing original reporting. What’s not kosher online is failing to give credit to the source that broke a story—even if the follow-up coverage is independently reported.

Indeed, the norm of linking to original sources is strongly enforced within the blogosphere. If a site develops a reputation for failing to credit sites that break stories, other sites will “boycott” it by refusing to link to its stories. The result is an economic model for rewarding sites that do original reporting. Web traffic operates as a de facto currency.

What we have here, at root, is a conflict between autarky and the division of labor. The newspaper style of news gathering is expensive because, like any autarkic system, it eschews the benefits of specialization. The Times does some kinds of reporting superbly and efficiently, but it does many other kinds of reporting clumsily and wastefully. On the web, in contrast, each news outlet focuses on producing the types of information it can produce most efficiently and “trades” with other publications for other kinds of information. The result is a decentralized system that produces news at much lower average cost than the centrally-planned model of a newspaper.

OK, so with that background, it’s worth going back to our running example of covering an away baseball game. Newspaper partisans want to know who is going to pay to send a New York reporter to Toronto to cover a Yankees away game. The answer is that this is the wrong question. The right question is: how is a New York publication going to get information about a Toronto baseball game to its readers? And the answer is there’s likely to be plenty of information about the game on the web already. So the 21st century news outlet’s job isn’t necessarily to do original reporting. Rather, its job is largely to help readers find the information that’s already out there, possibly with a bit of original reporting to fill in the gaps.

The obvious objection is that there’s no guarantee that there will be any coverage for the New York publication to link to. But although this is true in a trivial sense, it’s not actually much of an objection. There’s no law or institution guaranteeing I’ll be able to buy groceries next year, but I don’t lose sleep over the possibility that all the supermarkets in my town will go out of business—I’m confident someone will find a way to satisfy my demand for food. Similarly, people want to read about Yankees games in Toronto. If there’s a shortage of sports reporting in Toronto, one of the millions of people in Toronto will step in and fill the gap. Asking for an explanation of precisely how this will be done is the same kind of conceptual error as asking which farmers will produce the food I’ll have for dinner next week. The content in question isn’t too expensive to produce and there’s a financial incentive to produce it. So someone will.

That’s fine for baseball games, I hear you say, but who’s going to pay for the New York Times’s Baghdad bureau? There probably aren’t a lot of Iraqis who will step up to write about the war for an American audience, no matter how much traffic such a blog would generate. I’ll address that question in my next post.

Is Pro Publica “Awful” and “Leftist”?

Reihan Salam was kind enough to link to Monday’s post comparing the New York Times to Pro Publica, which led Matthew Vadum of the Capital Research Center to lambast him for promoting what Vadum considers an “awful leftist media outlet.” I’m familiar enough with Pro Publica’s work to know they’re not awful and not particularly leftist, so I asked for details. He pointed me to this CRC report purporting to document the left-wing biases of the organization.

The report is six pages long. It spends the first three pages criticizing Pro Publica’s founders, Herb and Marion Sandler, for their role in the subprime mortgage crisis and their left-leaning politics. Not until page 4 of the 6-page report does it get around to telling us anything about the organization’s work. Here is CRC’s evidence that Pro Publica “churns out little more than left-wing hit pieces”:

  • CRC faults Pro Publica’s coverage of the ACORN scandal: “On Oct. 16, ProPublica’s website linked to an ABC News story entitled, ‘Experts: McCain ACORN Fears Overblown.'” And “on Oct. 29, a ProPublica reporter ignored the ACORN voter fraud reports and wrote a story instead about the background of a public affairs group that had attacked ACORN in a prepared advertisement in the New York Times.” The report notes that the Sandlers are supporters of ACORN.
  • According to CRC, Pro Publica focused too much time investigating Sarah Palin’s role in the “Road to Nowhere” and Alaska earmarks.
  • Pro Publica largely ignored the Jeremiah Wright controversy.
  • The report lists various Obama administration scandals that Pro Publica failed to cover. However, it acknowledges that there were stories about Timothy Geithner and Tom Daschle’s tax troubles. Also “ProPublica reporters should receive high praise for their stories on Obama’s stimulus package and banking bailouts, on recent business and financial scandals, and on other issues related to open records and open government.”

And, um, that’s it. Now keep in mind that this is supposed to be the case against Pro Publica. One would have expected CRC to lead with the worst examples of left-wing bias and downplay Pro Publica’s best and most balanced reporting. So it’s truly remarkable that the strongest evidence on offer ranges from actually good editorial decisions (ignoring Jeremiah Wright) to quibbles about story selection (supposedly too much criticism of Palin and not enough of Obama). That’s remarkably weak sauce.

In particular, the CRC failed to find so much as a whiff of actual journalistic malfeasance. Nor is there any attempt at the kind of statistical analysis that could reveal a systematic partisan slant to their work. In short, no evidence that the organization’s work is either “awful” or “leftist.”

There’s a difference between good reporting that happens to be written by a left-leaning reporter (or funded by a left-leaning philanthropist) and partisan hackery. Conservative non-profits like CRC—and conservative media organizations like Fox News—have built an empire on conflating the two. But there is a difference, and it’s important to resist efforts to blur it.

MetroPCS as the New T-Mobile

As a counterpoint to the arguments I made yesterday, Reihan Salam points me to this article about MetroPCS, which is on deck to be the new #4 wireless carrier:

MetroPCS targets big city markets and keeps their prices low by only using their own networks in dense urban areas that are cheap to serve. They have been adept at securing roaming agreements to use competitors’ networks in other areas, so their cost to serve 90% of the country is a fraction of what it costs the larger companies. In fact, last year it cost the company only $18.49 a month to serve the average customer. That lets MetroPCS cut its rates to just about half what AT&T and Verizon charge, only $40 per month (including taxes) for unlimited talk, text, and Web, and about $50 per month for the same plan with a 4G smartphone. That’s a pretty impressive profit margin. And AT&T and Verizon don’t release their cost per customer, but we can bet it’s a lot higher.
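The quoted figures make that margin easy to check. Here’s the back-of-envelope arithmetic, a sketch using only the numbers above and ignoring handset subsidies and other overhead:

    # Figures from the quoted article: $18.49/month average cost to
    # serve a customer vs. a $40/month unlimited plan.
    cost, price = 18.49, 40.00
    margin = (price - cost) / price
    print(f"Gross margin per customer: {margin:.0%}")  # about 54%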

This highlights a couple of important points. First, obviously it’s a bit of a simplification to say that there are only 4 wireless carriers in the United States. There are, in fact, more than a dozen wireless carriers offering service in at least parts of the US. Three or four of them operate their own networks in parts of the country and have a non-trivial number of subscribers. We can certainly hope that this merger will cause MetroPCS to position itself as the new T-Mobile, catering to customers who care more about low prices and freedom than comprehensive coverage.

Second, this provides some evidence that cellular carriers are capital-constrained more than spectrum-constrained. It sounds like MetroPCS’s strategy is to save money by only building out its network in dense areas. I don’t know if that means it only has spectrum licenses in those dense urban areas, or if it has licenses elsewhere that it’s under-utilizing.

On the other hand, this article suggests at least one important way the merger is likely to be bad for MetroPCS: it depends on the larger carriers for roaming access. Presumably consolidation increases the pricing power of the larger carriers and makes it harder for MetroPCS to rely (some might say “free-ride”) on the infrastructure of its larger rivals.

The fundamental question, which I don’t know enough about the spectrum market to answer, is how much MetroPCS and other smaller carriers could grow before hitting a hard bandwidth constraint. If they only have enough bandwidth to serve a fraction of the customers the major carriers do, then it’s hard for them to serve as effective competitors. But it’s also possible that their networks are small because it’s just too expensive to expand them, in which case the argument for blocking the merger would be much weaker.

Limited Government and the Spectrum Market

AT&T, the second-largest mobile phone company, recently announced that it intends to purchase T-Mobile, the number four wireless firm. Jerry Brito has a typically insightful post in which he argues that this merger is all about spectrum scarcity:

If a carrier wants to acquire more spectrum to meet consumer demand for new services, it can’t thanks to the artificial scarcity created by federal policies that dedicate vast swaths of the most valuable spectrum to broadcast television and likely inefficient government uses. It’s gratifying to see the FCC now confronting the “spectrum crunch,” but waiting for a deal to be brokered on incentive auctions is a luxury carriers don’t have.

This analysis is correct as far as it goes, but it’s worth thinking through the implications in more detail.

There’s been a ton of discussion of whether the merger would increase or reduce prices. That’s important, of course, but it’s only part of the story. Wireless connectivity is not a commodity like electric power or iron ore. Consolidation has implications not only for our pocketbooks, but also for consumer choice and innovation.

Consider the handset market. Over the last few years, mobile platform vendors like Apple and Google have been fighting with wireless companies like Verizon and AT&T for control over the user experience. Verizon apparently refused to carry the original iPhone because Steve Jobs demanded too much creative control. Google has taken a more conciliatory stance toward the carriers, allowing them to customize the Android operating system for their own customers. Still, the introduction of Android represented a dramatic shift of power from Verizon to Google; Verizon’s willingness to take this step was almost certainly a reaction to AT&T’s success with the iPhone.

The long-run balance of power between the software industry and the wireless industry depends a lot on their respective concentration. Generally speaking, the more concentrated industry will be able to dictate terms to the less-concentrated one. The existence of T-Mobile, Verizon, and Sprint gives Apple leverage in its negotiations with AT&T, just as the existence of Google, Microsoft, and Palm gives AT&T leverage against Apple. Take T-Mobile out of the equation and power shifts from Silicon Valley to the carriers. Personally, I’d rather have Apple and Google in charge than AT&T and Verizon.

And wireless carrier competition is particularly important for small hardware and software firms trying to break into the wireless market. Large, bureaucratic companies tend to resist disruptive technologies. The fewer wireless carriers there are, the harder time the inventor of the next iPhone or Kindle will have finding a partner willing to carry it. It’s not a coincidence that T-Mobile, the smallest of the national carriers, also has a reputation for running the most open network.

Competition offers other non-price benefits too. Chris Soghoian points out that T-Mobile currently offers industry-leading privacy protections for its customers. After the merger, that option is likely to go away. The larger carriers also have a long history of crippling various features of mobile phones that they believe will harm their business models. The existence of smaller, more accommodating firms like T-Mobile provides an important check on these practices.

Fine, I hear you say, but shouldn’t we be minimizing government interference with the free market? We absolutely should. But we have to keep the big picture in mind as we think about what that means.

Consider the 1984 breakup of AT&T. In some sense, this was government interference in the marketplace. But in the larger context of federal telecom policy, it was clearly an effort to undo some of the damage done by earlier government policies. After decades of pro-monopoly telecom regulation, AT&T had become a de facto creature of the state, enjoying outsized profits and control over its customers thanks to government regulation. Breaking the company up was a way of reducing this state-conferred power and getting closer to the outcome a competitive market would have produced.

Similarly, in 2004 Cato published a paper by Lawrence J. White advocating government regulation of Fannie Mae and Freddie Mac. These were nominally private companies, but White recognized that their outsized profitability flowed from privileges that had been conferred on them by the government. Once again, the limited government position involved targeted regulation of private parties to prevent them from further exploiting their government-conferred privileges.

A similar principle seems to apply here. The Big Four wireless carriers are not monopolies or government-sponsored entities, but like Ma Bell they enjoy enhanced profits and power thanks to government-created barriers to entry. And it seems likely that their profits and power would be magnified by further consolidation. Blocking the merger, then, might be less a matter of the government interfering with the free market—which it has been doing constantly since 1926 anyway—than of trying to ensure that its ongoing interference doesn’t have larger anticompetitive effects than necessary. In other words, once the government has created a four-member oligopoly by force of law, it has an extra responsibility to prevent collusion among its members.

Does that mean the FCC should block the merger? I’m not sure. Maybe wireless networks are so expensive to build that the market just can’t support more than three of them no matter how much spectrum is available. It’s certainly possible that the merged AT&T-Mobile will enjoy lower costs thanks to economies of scale, and that competition with Verizon and Sprint will force them to pass those savings along to consumers. But I’m skeptical. And even if post-merger prices would decline, it’s not clear that these savings are worth losing the non-price advantages of competition.

The Clinton-era FCC reportedly had a rule prohibiting any single cellular firm from controlling more than a third of the available spectrum. The Bush FCC dropped the rule, but maybe it’s time to bring it back, perhaps in an even more stringent version. One advantage of such a rule is that it’s clear and objective, with limited potential for favoritism. Another is that it doesn’t involve the government directly second-guessing the outcome of the market process. If Jerry and I are wrong to think the industry is primarily constrained by spectrum, then the firms could divest enough spectrum to get under the cap before completing the merger. That would allow them to realize whatever economies of scale might exist on the network-building side while ensuring that there’s enough spectrum available for other firms wanting to compete with them.
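To illustrate how mechanical a bright-line cap would be to administer, here’s a minimal sketch of a cap-based merger screen. The one-third threshold is the rule described above; the spectrum holdings are invented for illustration.

    # Hypothetical screen under a one-third spectrum cap.
    # Holdings (in MHz) are invented for illustration.
    CAP = 1 / 3
    holdings = {"A": 90.0, "B": 80.0, "C": 40.0, "D": 30.0}
    total = sum(holdings.values())  # 240 MHz in the market

    def merger_allowed(x: str, y: str) -> bool:
        """A merger passes if the combined firm stays under the cap."""
        return (holdings[x] + holdings[y]) / total <= CAP

    def required_divestiture(x: str, y: str) -> float:
        """MHz the merged firm would have to shed to get under the cap."""
        return max(0.0, holdings[x] + holdings[y] - CAP * total)

    print(merger_allowed("C", "D"))        # 70/240 ~ 29% -> True
    print(merger_allowed("A", "B"))        # 170/240 ~ 71% -> False
    print(required_divestiture("A", "B"))  # 90.0 MHz to divest

The divestiture figure is the point of the paragraph above: a cap doesn’t block a merger outright, it just forces the merging firms to put spectrum back on the market.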

Update: Jim Harper points out that the Cato paper’s first recommendation was full privatization of Fannie and Freddie, and that government regulation was presented as a fallback position. He’s absolutely right, and I should have been more clear about that.

But I think this underscores my point. My first choice would be comprehensive spectrum reform that would put the majority of the electromagnetic spectrum under a flexible-use property regime. Then there’d be plenty of spectrum available to firms that wanted to enter the wireless market and no need for government review of wireless mergers.

But that isn’t the world we live in, and so we have to think about second-best policies. And as with Fannie and Freddie, I think that might mean a certain amount of regulation. Those regulations should be as narrowly targeted as possible on mitigating the problems with the broader regulatory structure—in this case, the anti-competitive effects of artificial spectrum scarcity.

Misguided Moralism in the Paywall Debate

Last week I got in a bit of an argument with Adam Thierer, Randy Picker, and others about the New York Times paywall. I think a paywall is a bad business strategy, but my opposition to paywalls is mostly a matter of (as I tweeted to Adam) “personal principle rather than business advice.” Adam seemed confused by that statement, so let me see if I can elaborate.

This hilarious post from the Monkey Cage, “Monkey Cage to Begin Charging NY Times Employees for Access,” captures the essence of my objection in a funny way. There are millions of people on the web competing for the attention of readers. I am one of them. I’m writing this paragraph because I want people to read it. Having readers is valuable. They yield advertising revenue, interesting comments, professional opportunities, and more. I regard each and every one of my readers—you—as doing me a favor. So thanks.

In economic terms, what’s happening is that I’m giving you something—a copy of this post—whose marginal cost to me is basically zero. You’re giving me something—your time—that is far more scarce and valuable. Since I’m getting the better end of the deal, I need to work hard to make sure you’re getting enough value out of the deal to entice you back in the future.

So I find it pretty rich when a site thinks it’s doing me a favor by letting me read its content. To be sure, the New York Times is a great website. But there is far more free, good content on the web than I could possibly find time to read. My RSS reader is full of smart bloggers I wish I had time to read. So the Times should be grateful for the time I devote to their website rather than one of the many alternatives.

I’m belaboring this point because there’s a kind of twisted moralism underlying a lot of discussion of this issue. Partisans for mainstream media sites like to portray consumers as deadbeats for preferring not to pay for online content. I think this partly reflects the general sense of entitlement among mainstream media outlets that is a holdover from the days when technological constraints made content a lot scarcer than it is today. And I think it also has to do with a point Mike Masnick made years ago: people find the number zero really counterintuitive. People find it hard to understand how you can make money giving away content, despite the fact that we’re literally surrounded by examples of companies making billions of dollars doing just that. And when people find an economic situation confusing, they often apply an inappropriate moral gloss to it.

So it’s worth saying explicitly that people who prefer free content have nothing to be embarrassed about. We’re just doing what savvy consumers in every competitive market do: looking for the best deal. If anything, it’s tacky for a media organization that’s been fortunate to receive a reader’s valuable attention to demand that she pony up some cash as well. The reader is doing the publisher a favor by reading its content, not the other way around.

Shoe-Leather Reporting at the New York Times

The New York Times says it’s going to take another stab at erecting a paywall, just four years after abandoning its previous effort. On Friday, I got into a debate with Dan Rothschild about it. Dan wrote that “if advertising were a silver bullet, presumably someone would have figured out how to really make it work by now.” This left me scratching my head, because of course there are lots and lots of examples of news websites that turn a profit without charging subscription fees. But Dan says these examples don’t count:

Shoe-leather reporting is extremely valuable… The opinion and inside baseball sites that you mentioned can’t exist without the work the Times and others do… Those sites are low cost because they don’t have bureaus in Chicago and San Francisco and Houston, much less London and Frankfurt and Cairo and Jerusalem and Singapore. And I haven’t seen a business model wherein well-compensated, talented journalists do investigative reporting around the world outside of traditional news media.

I’ve written before about the “shoe leather” argument. While there are undoubtedly some kinds of stories a publication can only get by putting a reporter on a plane or staffing a foreign bureau, the Internet has dramatically reduced the universe of stories for which that is true. Sports seems like a pretty clear example of this: if you’re a reporter for the New York Times, and you want to get your readers information about a Yankees away game in Toronto, there are lots of ways to do that other than “shoe-leather” reporting: you can solicit first-hand fan reports or link to coverage from local publications, for example. Or you could even watch the game on television. There are a lot of beats that require less shoe leather than they used to.

But even setting that kind of efficiency issue aside, there’s also a basic point about what the New York Times actually does. The NYTimes.com website has 12 virtual “sections.” Of these, three and a half (World, U.S., N.Y./Region, and some of “Business”) feature a significant amount of “shoe leather” reporting. The next three (Technology, Science, and Health) are technical subjects that can be handled at least as well by specialized web publications. Technology, the beat I know best, is already dominated by web-native publications like TechCrunch and CNet. The last five sections (Sports, Opinion, Arts, Style, and Travel) are the kind of content that involves relatively little shoe-leather reporting. Niche websites, amateur blogs, and user-generated content sites could easily replace these sections.

Now add to this Clay Shirky’s point that reporters are a relatively small fraction of the staff of a newspaper. Shirky counted just 6 reporters on a staff of 59 at the Columbia Daily Tribune, a relatively small daily in Missouri. The ratio seems to be a little higher at the Times: the annual report tells me there are 3,094 employees in The New York Times Media Group, while there are reportedly around 1,200 editorial employees—about 40 percent of the staff. Do the math (3.5/12 × 1,200/3,094), and you find that, very roughly, around 11 percent of Times employees contribute directly to the “shoe leather reporting” process. Now, the “shoe-leather” sections probably consume more resources than the others, but even adjusting for that it’s hard to believe that more than 20 percent of Times revenues are devoted to covering these beats.
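Spelling that calculation out, using the figures just given:

    # Figures from the post: 3.5 of 12 sections are "shoe leather,"
    # ~1,200 editorial employees out of 3,094 total.
    shoe_leather_share = 3.5 / 12      # fraction of sections
    editorial, total_staff = 1200, 3094

    share = shoe_leather_share * editorial / total_staff
    print(f"{share:.1%} of Times employees")  # about 11.3%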

Compare that to Pro Publica, a non-profit organization dedicated entirely to funding serious investigative reporting. They have a budget of $9 million, and according to their annual report almost two-thirds of that went to “News salaries, payments and benefits.” They list a staff of about 30 reporters and editors, compared with just 8 executives and administrative staff.

So even if you believe that a purely advertising-supported web won’t be able to support an adequate amount of shoe-leather reporting, voluntarily subscribing to a paywalled Times, despite the existence of high-quality, free alternatives like CNN and the BBC, seems silly. If serious news is what I want, then I should donate to an organization that focuses on producing it. About 65 cents of every dollar I give to Pro Publica will go to support serious, public-interested newsgathering. It makes no sense to instead give money to an organization that will spend less than 20 cents of every dollar on shoe-leather reporting as a means to its primary goal of making the Sulzberger family wealthier.
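Here’s the same comparison in dollar terms, using Pro Publica’s roughly two-thirds figure and my deliberately generous 20 percent estimate for the Times:

    # Of each dollar given or paid, the share that funds newsgathering.
    # Pro Publica's ~2/3 rounds to the "about 65 cents" figure above.
    PRO_PUBLICA = 2 / 3   # share of budget going to news salaries
    NYT_GENEROUS = 0.20   # generous estimate from the post

    for name, share in [("Pro Publica", PRO_PUBLICA), ("NYT", NYT_GENEROUS)]:
        print(f"{name}: {share * 100:.0f} cents of every dollar")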

Intellectuals and Political Coalitions

Matt Yglesias points to this fascinating paper about the influence of intellectuals on political coalitions:

Following Converse’s advice that ideology is the product of a “creative synthesis,” conducted by a narrow group of intellectuals, this paper reports on attempts to study ideology at its point of creation. I develop a measure of ideology expressed among pundits, based on coded opinion pieces in magazines and newspapers from 1830 to 1990. I use this measure to test the impact of ideas on party coalitions. I argue that ideologies, as created by intellectuals, strongly influence the coalitions that party leaders advance. In three cases – the realignment on slavery before the Civil War, the Civil Rights realignment in the mid-20th century, and the party change on abortion more recently – there is evidence that intellectuals reorganize the issues before parties realign around them. This evidence suggests that the patterns of “what goes with what” that intellectuals design have an impact on the nature of political cleavages.

This is why I find it silly when people dismiss the possibility of left-libertarian politics by pointing out that liberal and libertarian groups typically find themselves on opposite sides of contemporary political battles. Politicians, activists, and septuagenarian billionaires are lagging indicators of ideological trends. The right-leaning politics of the Koch brothers are the result of intellectual arguments that happened in libertarian circles in the 1960s and 1970s. Contemporary libertarian politics is right-leaning because a previous generation of libertarian intellectuals (Friedman, Hayek, Rand) chose to focus primarily on “right-wing” issues like taxes and deregulation. But there’s nothing inevitable about this. If the present generation of libertarian intellectuals chose to focus on “left-wing” issues—war, civil liberties, immigration, urbanism, patent reform, gay rights, etc.—then the next generation of libertarian donors, activists, and politicians would likely see the Democrats, rather than the Republicans, as natural allies.

Summer Writing

A quick personal note: unlike the past couple of summers, when I did software engineering work, this summer I’m hoping to spend my time writing about public policy. I’ve gotten a couple of good offers, but I’m hoping this blog has readers who know of other opportunities. So if you’re an editor at a media outlet, think tank, or other institution that hires writers like me—or you’re willing to introduce me to one—please drop me a line. I’m mostly looking for freelance work I can do from Philadelphia, but I’d consider relocating for the right full-time opportunity. Thanks!

Supermarkets, Congestion Tolling, and Free Markets

A few days ago I happened to stop by the local supermarket during the post-work rush. When I was ready to check out, all the regular lanes had long lines. Ordinarily, I wouldn’t mind waiting a few minutes, but on this particular evening I had dinner plans I couldn’t be late for. So I shelled out an extra $6 for the express lane and skipped the lines.

Perceptive readers will surmise that I made up the previous paragraph. In reality, grocery stores don’t impose congestion charges on their customers. Nor do many other types of private businesses. I don’t think I’ve ever had a customer support line offer to expedite my call for a fee. I’ve heard you can get a table faster at some restaurants by bribing the maître d’, but I’ve never seen a restaurant formalize the practice. Generally speaking, when businesses experience temporary spikes in demand, they serve customers on a first-come, first-served basis; they don’t auction off the temporarily scarce capacity to the highest bidder.

These examples came to mind as I was reading the comments on my recent article about congestion pricing of highways. One of the striking things about the congestion pricing debate is the stark divide between elites and the general public. Prominent intellectuals from across the political spectrum, from free-market transportation experts to left-wing bloggers, are supportive of the idea. In contrast, the readership of Ars Technica, like voters generally, is overwhelmingly opposed. And their criticisms were not limited to the privacy issues that were the focus of my article.

Many supporters of congestion pricing chalk this up to voter ignorance. They assume that people will like congestion pricing once they have a chance to try it. But the supermarket example should make us skeptical of that conclusion. The grocery business is an intensely competitive one. If it were true that people could be won over to this kind of scheme once they had a chance to try it, you’d expect some entrepreneurial grocery store owner to give it a try. Yet I’ve lived in half a dozen different metropolitan areas and I’ve never seen a supermarket that utilized congestion pricing on its checkout lanes.

I think there are two reasons that people hate congestion pricing. First, we have strong and sophisticated social norms, cultivated since we were young children, for waiting in lines. This bit of self-organization is extremely important for the smooth functioning of civil society. We see waiting your turn as an obligation we have to one another, and therefore not as an obligation that a supermarket or transportation agency can waive in exchange for a cash payment. I suspect customers would see people using a tolled checkout lane as breaking an implicit social contract.

More importantly, customers would be suspicious that the supermarket was deliberately under-staffing the free lanes to gin up demand for the express ones. And this wouldn’t be a crazy suspicion! In the low-margin grocery business, it would be a pretty effective way for a manager to pump up his short-term profits, while the long-term harm to the store’s reputation would be hard for the corporate office to quantify.

This latter concern seems particularly relevant to the case of toll roads. The revenue-maximizing pricing schedule is not the same as the congestion-minimizing schedule. An effective congestion-pricing scheme might generate relatively little revenue if people shift their driving to off-peak times (which is the whole point). The operator of a monopolistic toll road will face a constant temptation to boost revenues by limiting throughput on free lanes and jacking up the off-peak toll rates. The widespread voter perception that they’ve “already paid for” many tolled roads through other taxes isn’t exactly right as a matter of fiscal policy, but I think it’s based on a sound intuition: there’s no reason to think the political process will set tolls in a way that’s either fair or economically efficient.
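To see why those two pricing schedules diverge, here’s a toy model with linear peak-hour demand; all the parameters are invented:

    # Toy model: linear demand q(p) = a - b*p for peak-hour trips on a
    # road that flows freely up to a fixed capacity.
    a, b = 1000.0, 100.0   # invented demand parameters
    capacity = 600.0       # vehicles/hour before congestion sets in

    def q(p):  # trips demanded at toll p
        return max(a - b * p, 0.0)

    # Congestion-minimizing toll: the lowest price keeping q <= capacity.
    p_congestion = (a - capacity) / b
    # Revenue-maximizing toll: maximize p*(a - b*p), so p = a / (2*b).
    p_revenue = a / (2 * b)

    for label, p in [("congestion-minimizing", p_congestion),
                     ("revenue-maximizing", p_revenue)]:
        print(f"{label}: toll ${p:.2f}, flow {q(p):.0f} veh/hr, "
              f"revenue ${p * q(p):,.0f}/hr")

In this example the revenue-maximizing toll is higher and moves fewer cars per hour, which is precisely the temptation a monopolistic toll-road operator faces.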

I think there’s a lesson here about the relationship between free markets and individual liberty. Free markets promote freedom by enhancing choice and competition. Consumers patronize those businesses that do the best job of satisfying their preferences, and businesses that fail to satisfy their customers eventually go out of business. Yet here’s a case where, in the name of free markets, advocates of congestion tolling are advocating the use of a market mechanism that private firms in actual competitive markets rarely use.

I’ve written before about cases where people become so enthusiastic about the “market” part of free markets that they give short shrift to the “free” part. Markets are a means to the end of satisfying consumer preferences. Markets cater to consumers’ actual preferences, not to the preferences economists think they ought to have. And government services should follow the same philosophy. The fact that some economists have decided that a particular consumer preference is irrational isn’t a good reason to disregard it. And doing so certainly has nothing to do with free markets.

To be clear, I think there are limited circumstances where congestion tolling makes sense. There’s a lot to be said for the emerging HOT lane model, which has the virtue that the adjacent free lane checks the ability of the tolling authority to charge monopolistic rates. With budgets tight, tolling may be the only way new road construction can be financed in some places. And if the capacity is genuinely new, this can allay public suspicions that they’re being asked to “pay twice” for the same pavement.

But despite the enthusiasm of organizations like the Reason Foundation, the debate about whether to finance road construction using tolls or gas taxes has nothing in particular to do with freedom or free markets. I’m skeptical of any proposal that involves dramatically expanding the government’s ability to monitor and control people’s movements—even if those new powers are deployed in the service of a “market-based” scheme of road pricing.
