Newspapers Are in the Newspaper Business

One of the people whose work has shaped my thinking on the role of disruptive innovation in declining industries is Clusterstock editor (and Techdirt alum) Joe Weisenthal. There’s a common argument that declining companies need to ask themselves “what business they’re in,” and answer the question in a way that allows the firm’s continued survival. James Surowiecki notes that Theodore Levitt once argued that the problem with the railroad industry was that it didn’t realize it was in “the transportation industry,” a realization which would, supposedly, have inspired it to be an early leader in the trucking industry. Similarly, the argument goes, if the newspapers had realized they were in “the information business,” they could have become early leaders in the move to the web. Joe’s response to this struck me as being spot-on:

I don’t buy this. What expertise did the rail companies have in trucking? It’s like the idea that the oil companies are actually “energy” companies, who should be investing heavily in solar and wind. Maybe. But what expertise or competencies do they have in this type of thing? None. If there were any examples of companies having had the ability to do something like this, that’d be one thing. But Surowiecki’s hindsight reasoning doesn’t strike me as good business advice.

Now here’s some actual business advice that might work for companies in dying industries: Recognize that the end is in sight and stop reinvesting in the core business. Trim costs and distribute the cash to shareholders until it runs out.

This isn’t sexy or anything worthy of Surowiecki, but companies actually do it. Like Earthlink, the ISP, which is making money hand over fist by not reinvesting in a dying business. Or the beeper companies, which actually still make a lot of money (doctors still use beepers) etc. etc.

I think a big part of the problem here is that people have trouble thinking clearly about what a CEO’s job is. There’s a widespread assumption, driven by the tendency to anthropomorphize corporations, that the CEO’s top priority is to ensure his firm’s survival, and his second priority is to promote growth. But this is wrong. The CEO’s job is to maximize shareholder returns. Sometimes that means aggressive growth. But in other circumstances it means not squandering shareholder resources trying to expand in a declining industry, or in a hail-mary attempt to enter new markets for which the firm has no particular expertise. If your firm is currently profitable, as the newspapers were in the late 1990s, it might be better to just pay out larger dividends and plan for the firm to shrink gradually over time.

If (God forbid) I ran a mid-sized metropolitan daily, I think I’d follow the opposite of the common advice: recognize that the core competence of a newspaper is printing and distributing newsprint, and figure out how to make that process as profitable as possible. In particular, I think the newspapers’ core competence is not the production of content. As more and more cheap content gets produced online, newspapers should be looking for ways to re-package that content and sell it to their existing consumers. TechCrunch’s partnership with the Washington Post is a good model. My guess is that syndicating TechCrunch content is far more cost-effective for the Post than producing comparable coverage in-house. The long-term goal should be to have all the non-local content—book, movie, and music reviews, international news, science and technology, health and medicine, non-local sports and business—produced by third parties and syndicated cheaply. They should find the most talented bloggers in their metropolitan area and pay them a nominal amount to syndicate some of their posts as columns. They should get over the stigma associated with wire copy. The newspaper of the future should have a bare-bones editorial staff tracking down affordable (ideally free) content for syndication, combined with a modest staff of reporters to cover local stories that aren’t being covered by third parties (and even that staff might shrink over time as online sites do a better job of covering local stories).

It’s only a slight exaggeration to say that in the long run, the goal should be to make the newspaper a dead-tree RSS reader. The majority of people born before 1965 want to get their news in the familiar newspaper format. Newspapers can serve as profitable intermediaries between them and the thriving world of online content. This is a (literally) dying market, but it’ll be around for another 20 years at least, and there’s every reason to think it can be profitable if newspapers understand that those are their core customers. These are relatively wealthy customers with relatively inelastic demand, so there’s probably a lot of room to raise subscription rates to compensate for falling circulation. In the long run, one of two things will happen. One possibility is that sometime in the 2030s, there will be so few newspaper readers left that subscription revenues don’t even cover printing costs and newspapers have to close up shop. The other is that the newspaper will become an expensive niche product, like The Economist or even National Journal today: supported by high subscription fees from a small minority of people who are willing to pay a premium for the physical format. Either way, there should be another couple of decades of profitability before newspapers run out of customers.
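For the sake of concreteness, here’s a minimal sketch of what the content-gathering side of a dead-tree RSS reader might look like. The feed URLs, section names, and digest format are all my own illustrative assumptions, not anything an actual paper uses:

```python
# A minimal sketch of a "dead-tree RSS reader": pull syndicated
# content from a handful of feeds and emit a plain-text digest
# that a layout desk could flow onto the printed page.
# The feed URLs below are illustrative placeholders, not real partners.
import feedparser  # pip install feedparser

SYNDICATED_FEEDS = {
    "Technology": "https://example.com/tech/feed",
    "Local Voices": "https://example.com/local-bloggers/feed",
}

def build_digest(feeds, stories_per_section=3):
    sections = []
    for section, url in feeds.items():
        parsed = feedparser.parse(url)
        lines = [section.upper()]
        for entry in parsed.entries[:stories_per_section]:
            lines.append(f"  {entry.title}")
            # Truncate the summary to a print-friendly blurb.
            lines.append(f"    {entry.get('summary', '')[:200]}")
        sections.append("\n".join(lines))
    return "\n\n".join(sections)

if __name__ == "__main__":
    print(build_digest(SYNDICATED_FEEDS))
```

The point of the sketch is that the expensive part of this model is the editorial judgment about which feeds to carry, not the technology.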

People are right to tell newspapers they should ask themselves what business they’re in. The problem is that the answers to these questions tend to be driven by wishful thinking. Newspapers would like to dominate Internet-based reporting the same way they dominated the 20th century news business. They’d like to get most people born after 1980 to read newspapers. But that’s not going to happen. And the sooner they realize that, the sooner they can re-focus on their core business, which has always been printing and distribution more than journalism.

Charles Murray’s Moral Relativism on Torture

A common trope of the right is to accuse the left of “moral relativism.” On this view, conservatives believe in an objective sense of right and wrong, while liberals believe that morality is just a matter of personal preference. So does that make Charles Murray a liberal?

The late New Yorker film critic Pauline Kael famously said after Nixon’s landslide reelection, “How can he have won? Nobody I know voted for him.” My proposition for today is that the entire White House suffers from the Kael syndrome.

It was the only explanation I could think of as I watched the news last night about the coming prosecution of CIA interrogators…

[Democrats] won the election with a candidate who sounded centrist running against an exceptionally weak Republican opponent. But they’ve been in the bubble too long. They really think that the rest of America thinks as they do. Nothing but the Pauline Kael syndrome can explain the political idiocy of letting Attorney General Eric Holder go after the interrogators.

Julian Sanchez points out how silly this is:

Even if Murray were right about the optics of a prosecution, surely it’s wrong that “nothing” could explain the decision to go ahead. One wacky possibility: The attorney general believes that crimes may have been committed, and if so, those responsible should be held accountable, while the president either shares this belief or at least is reluctant to intervene for political reasons to quash an investigation. And yet a lot of analysts get awfully postmodern when they’re talking about the prospect of investigations, taking it for granted that there simply are no right answers: Any legal reasoning, however specious, simply reflects one more difference of opinion—and you can’t prosecute someone for having different opinions, right? Some conservatives, to be sure, are willing to defend the practices of the interrogators or the opinions of the Office of Legal Counsel on the merits, but others seem to step back and take the meta-view that so long as some sufficiently politically powerful group was and is willing to mount that defense, it must fall within the realm of reasonable disagreement—and therefore outside the realm of actual legal consequences for wrongdoing. Attempts to establish any kind of real accountability are only intelligible as partisan “witch hunts.” In this case, the insistence on an amoral perspective undermines the analysis even in purely descriptive terms, since it excludes motivational explanations of the actors’ behavior that don’t reduce to a strategic bid for political advantage. I’m pretty cynical on this front myself, but it seems a bit much to rule it out a priori.

It’s been fascinating to see how torture apologists have steadily moved the goal posts as more and more evidence of torture has emerged. I remember back in 2005, when John McCain was pushing for a ban on torture, critics argued that Abu Ghraib was an isolated incident by a few bad apples and Congress should leave well enough alone. Then when it became clear that the CIA had used waterboarding, the argument shifted a bit. They still insisted that waterboarding wasn’t technically torture, but we started to hear people making the “contrarian” point that maybe it was OK to torture in the event of a really serious emergency. Now we’ve got solid evidence that the CIA routinely employed a variety of “enhanced interrogation techniques” in non-emergency situations. So torture apologists have abandoned ticking time bomb scenarios and switched to this postmodern theory that nobody really knows what torture is. And as a consequence, not only shouldn’t we prosecute anyone, but we shouldn’t even bother investigating to see what laws might have been broken.

Murray justifies this by noting that lots of people engage in “amoral” analysis of other issues, such as the death penalty and abortion. But these aren’t cases where one side champions a “moral” perspective and the other champions an “amoral” one. They’re cases with strong moral claims on both sides. I’m honestly not even sure which side of these debates he regards as the amoral one. In any event, the fact that some people disagree about which side in a debate is the moral one doesn’t mean we should throw up our hands and stop trying to figure it out.

Software Patents from the Bottom-Up

The law is always a game for insiders, but patent law is almost in a class by itself. Debates about patent law are dominated by practicing patent attorneys and law professors (who are often former patent attorneys). This is perhaps unsurprising because patent law is mind-numbingly esoteric. But it’s also a real problem. There’s a massive gap between how the patent system looks to the patent bar, which is intimately familiar with it, and how it looks to the rest of the world.

One consequence of this is that arguments about patent policy tend to focus on the minutiae of legal doctrine, with relatively little attention paid to the concerns of the innovators whose efforts the patent system is supposed to encourage. This problem is especially severe in the software industry, where patents are intensely controversial. I’ve talked to a number of patent scholars with a legal background, and they’re consistently baffled by the intensity of many programmers’ hostility toward software patents.

In a new piece for Cato’s TechKnowledge newsletter, I try to explain many geeks’ hostility to software patents with an analogy:

Imagine the outcry if the courts were to legalize patents on English prose. Suddenly, you could get a “literary patent” on novels employing a particular kind of plot twist, on news stories using a particular interview technique, or on legal briefs using a particular style of argumentation. Publishing books, papers, or articles would expose authors to potential liability for patent infringement. To protect themselves, writers would be forced to send their work to a patent lawyer before publication and to re-write passages found to be infringing a literary patent.

Most writers would regard this as an outrageous attack on their freedom. Some people might argue that such patents would promote innovation in the production of literary techniques, but most writers would find that beside the point. It’s simply an intolerable burden to expect writers to become experts on the patent system, or to hire someone who is, before communicating their thoughts in written form.

Over the last 15 years, computer programmers have increasingly faced a similar predicament. We use programming languages to express mathematical concepts in much the same way that authors use the English language to express other types of ideas. Unfortunately, the recent proliferation of patents on software has made the development and use of software legally hazardous.

I think patent scholars would do well to pay a lot more attention to how the patent system is experienced by individuals who are required to obey it, rather than focusing on abstract doctrinal questions that are of interest only to patent attorneys. We might call this a bottom-up perspective on patent law. For example, I spent the summer developing software for Dancing Mammoth, the company that also hosts this blog. If Dancing Mammoth were really serious about avoiding patent infringement, it probably should have hired a patent lawyer to verify that each line of code I wrote didn’t infringe one of the hundreds of thousands of software patents in existence. Obviously, this would be completely impractical; the patent attorney’s fees would likely have exceeded my salary. So, like most software firms, Dancing Mammoth didn’t do that.

Now, I don’t know of any patents I infringed, but as a statistical matter it’s likely that I infringed some. Fortunately, it’s pretty unlikely anyone will sue me or Dancing Mammoth for any infringement we may have committed, because there are other potential targets with much deeper pockets. But that hardly justifies this situation where everyone’s a lawbreaker but most people don’t get caught. Small firms do get sued for inadvertent software patent infringement. Laws that are virtually impossible to follow are bad laws, regardless of how infrequently they’re actually applied.
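The “as a statistical matter” point can be made concrete with a back-of-the-envelope calculation. The per-patent infringement probabilities below are purely illustrative guesses (nobody knows the real number), but they show why inadvertent infringement is nearly inevitable at this scale:

```python
# Back-of-the-envelope: if a codebase independently infringes each
# software patent with some tiny probability p, the chance of
# infringing at least one of N patents is 1 - (1 - p)**N.
# The values of p here are purely illustrative, not measured data.

def p_any_infringement(n_patents: int, p_per_patent: float) -> float:
    return 1 - (1 - p_per_patent) ** n_patents

n = 200_000  # rough order of magnitude for U.S. software patents
for p in (1e-6, 1e-5, 1e-4):
    print(f"p = {p:g}: P(at least one) = {p_any_infringement(n, p):.1%}")
# Even at p = 1e-5 per patent, infringing at least one patent
# is close to a sure thing (~86%).
```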

The fundamental issue is that developing software is not a capital-intensive commercial activity like many other industries that receive patent protection; it’s an expressive activity that is practiced by numerous people far outside of what is conventionally described as “the software industry.” Patent law is an incredibly complex, expensive, and hard-to-understand body of law. It should be limited to industries where the cost of hiring patent lawyers is a relatively small fraction of the cost of the underlying innovations. The pharmaceutical industry probably fits this pattern. The software industry clearly does not.

A final note: the concept of literary patents is not original to me. Richard Stallman employed the same analogy in 2005 (I discovered his version only after submitting mine; otherwise I would have included a link in the story). His take is well worth reading, as it does a great job of illustrating the kind of irrationality that crops up when patents are applied to the software industry.

Disruptive Innovation at Wired

So I was getting ready to link to this piece by Robert Capps in Wired about the rise of “good enough” technology. I was planning to point out that this was just another term for disruptive innovation. I was congratulating myself on this deep insight, until I got to page 2:

To a degree, the MP3 follows the classic pattern of a disruptive technology, as outlined by Clayton Christensen in his 1997 book The Innovator’s Dilemma. Disruptive technologies, Christensen explains, often enter at the bottom of the market, where they are ignored by established players. These technologies then grow in power and sophistication to the point where they eclipse the old systems.

That is certainly part of what happens with Good Enough tech: MP3s entered at the bottom of the market, were ignored, and then turned the music business upside down. But oddly, audio quality never really readjusted upward. Sure, software engineers have cooked up new encoding algorithms that produce fuller sound without drastically increasing file sizes. And with recent increases in bandwidth and the advent of giant hard drives, it’s now even possible to maintain, share, and carry vast libraries of uncompressed files. But better-sounding options have hardly gained any ground on the lo-fi MP3. The big advance—the one that had all the impact—was the move to easier-to-manage bits. Compared with that, improved sound quality just doesn’t move the needle.

So… disruptive innovation is everywhere, and I’m hardly the only person to have noticed it. The MP3 example is actually an interesting one about which I’ll probably have more to say in a future post.
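One thing that makes the MP3 example so clean is that the tradeoff is easy to quantify. A quick sketch using standard CD-audio parameters (the four-minute song length is just an example):

```python
# Why MP3 was "good enough": raw CD audio vs. a 128 kbps MP3.
# CD audio: 44,100 samples/sec * 16 bits * 2 channels.
cd_bps = 44_100 * 16 * 2          # 1,411,200 bits/sec
mp3_bps = 128_000                 # a typical early MP3 bitrate

song_seconds = 4 * 60             # an example four-minute song
cd_mb = cd_bps * song_seconds / 8 / 1_000_000
mp3_mb = mp3_bps * song_seconds / 8 / 1_000_000

print(f"Uncompressed: {cd_mb:.1f} MB")   # ~42.3 MB
print(f"128 kbps MP3: {mp3_mb:.1f} MB")  # ~3.8 MB
print(f"Compression: {cd_mb / mp3_mb:.0f}x smaller")  # ~11x
```

An 11x reduction in size was the advance that mattered; closing the remaining gap in audio fidelity moved the needle for almost nobody.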

Disruptive Innovation and the Importance of Failing Cheaply

A key part of Christensen’s thesis is the idea that disruptive technologies inevitably catch up to incumbent technologies over time. I think the reasons for this are worth exploring in a bit more detail, because they help explain why Christensen’s practical advice winds up being so underwhelming.

Technological progress is largely a matter of trial and error. People take existing technologies and experiment with using them in new ways. Most of the experiments fail, but a few succeed and get incorporated into the stock of knowledge on which subsequent experiments can be based. This process of cumulative trial and error, repeated over centuries, is responsible for the advanced technologies we have today.

The key feature of disruptive technologies is that they speed up the pace of experimentation by making failures cheaper. When a technology is expensive, people are going to reserve its use for proven applications. Few companies can afford to buy a $20,000 computer just to let their engineers play with it. But if a technology is cheap, then lots of people can get their hands on it, and they’re more likely to put it to experimental uses. An engineer won’t have to work very hard to persuade his boss to buy him a $1,000 computer and let him screw around with it.

There are at least two reasons large firms have trouble dealing with disruptive technologies. One is that they’re bad at failing cheaply. They’ve got a lot of overhead, and they’ve got a reputation to protect. They can’t release a half-baked prototype and then abandon it if it doesn’t pan out. They can’t easily hire people and then lay them off a few months later if the project they’re working on fails to bear fruit. Instead, they tend to have big, expensive, high-profile failures. And these kinds of failures tend to derail the careers of the people leading them, even if they looked like good bets from an ex ante perspective. As a result, larger, more established firms will tend to be more risk-averse.

Second, decentralization is essential for this trial-and-error process to work. As I noted in my very first post, good ideas are often the result of “cross-pollination” among diverse groups of people. Putting all your would-be innovators under one roof makes it more likely they’ll all get stuck in the same intellectual rut.

Notice that there’s little a company’s senior management can add to this process. No employee is going to be willing to take risks with the CEO looking over his shoulder. And there’s even less for an outside expert to contribute, even if he happens to be “the world’s foremost authority on disruptive innovation.” Understanding the process of disruptive innovation in the abstract doesn’t give you any special insight into any specific technology or market. And given the importance of experimentation it’s especially silly for an industry trade group to hire a consultant and then encourage all its members to implement the recommendations. You want diversity, not homogeneity, in the strategies being tried by different newspapers.

Still, I think it’s a mistake to judge Christensen too harshly. His job as a business professor is to give people advice about how to run their businesses. I don’t think the advice he has given will save them from bankruptcy, but it’s not like there are other consultants who would have given better advice. And if Christensen had simply re-hashed the blunt, fatalistic lessons of the first four chapters of his book, it’s unlikely that would have been well-received. Denial is a powerful thing.

Newspapers Are Doomed, and It’s Not Executives’ Fault

If I ran the world, no one would be allowed to opine about the decline of the newspaper industry until they’d read The Innovator’s Dilemma. The web is so clearly a disruptive technology, and the newspaper industry is so clearly following the trajectory Christensen describes in his book, that it’s hard to think clearly about the process if you haven’t grasped Christensen’s key insights. To review, the key attribute of a disruptive technology is that when it’s introduced into the marketplace, it is cheaper but also markedly inferior to the incumbent technology, as judged by the criteria of the dominant technology’s customers. Internet-based news clearly fits this pattern. As newspaper people never tire of reminding us, Internet-based news outlets rarely have the resources to staff expensive foreign bureaus, conduct in-depth fact-checking, fly sports reporters to away games, hire teams of lawyers, and so on. If the Huffington Post or TechCrunch were judged as newspapers, they would be pretty lousy ones.

What we learn from The Innovator’s Dilemma is that this state of affairs is completely normal when a disruptive technology invades an established industry. If you had talked to a DEC executive in 1978 about the Apple II, he would have pointed out, correctly, that it was vastly inferior to the computers DEC sold. It had less memory, fewer peripherals, a slower processor, less disk space, was less reliable, had inferior tech support, and so on. Yet it turns out this didn’t matter, for two reasons. First, there were many, many people for whom the microcomputer was “good enough.” And second, as Apple, IBM, and others began to sell microcomputers at high volume, they steadily closed the performance gap. By the late 1980s, minicomputers were still vastly more expensive than microcomputers, but their technical advantages had become much smaller.

[Chart from The Innovator’s Dilemma: performance of sustaining and disruptive technologies over time, relative to the performance the market demands]

So newspaper partisans are absolutely right to point out that newspapers continue to be superior to online news sources in a number of respects. But they’re completely wrong to think this can save them. Because as Mike Masnick explained recently (and as illustrated by the above chart), Christensen’s work tells us that newspapers’ performance advantages won’t last—online news companies will figure out ways to negate the newspapers’ traditional advantages in reliability, comprehensiveness, and so forth. At the same time, newspapers are probably not going to figure out how to get their costs down to the level where they’re competitive with web-only publications.

Now, I think this second point is something that the newspaper industry’s critics often fail to grasp. Newspaper CEOs are often portrayed as ignorant Luddites clinging to paper despite the obvious advantages of the Web. They’re frequently told that if they would just “change their business model,” everything would be OK. I hope it’s clear from Tuesday’s post why this is wrong. A newspaper is a large, bureaucratic institution whose every department is carefully crafted to maximize its profitability as a publisher of newspapers. It owns printing presses, warehouses, delivery trucks, newspaper boxes, and other assets that are totally useless for publishing a website. It has a large staff of paperboys, sales representatives, typesetters, customer service representatives, and so forth who would have to be retrained or (more likely) laid off. And most importantly, it has an institutional culture that’s fundamentally about publishing newspapers. The St. Louis Post-Dispatch can’t just stop printing newspapers any more than DEC could have just stopped making minicomputers. Doing so would mean discarding the vast majority of what makes the company valuable.

I actually think that some of the leading newspapers are doing astonishingly well given the economic constraints they face. The New York Times in particular has done a magnificent job of building a world-class, innovative website. It not only does a great job of presenting the Times print content, but it’s also chock full of interactive features, blogs, and even innovative software releases. Similarly, the Washington Post was an early pioneer in the web-based journalism business. In fact, it followed the exact strategy Christensen recommends for incumbents facing disruptive innovation: building a subsidiary that’s physically and organizationally separate from the main company. That’s why Washingtonpost.Newsweek Interactive is headquartered in Arlington, VA, across the Potomac from the Washington Post Company’s DC headquarters. And I think the effort has borne fruit; the Washington Post has one of the best and most popular news websites in the world.

Which isn’t to say their websites will necessarily be profitable enough to allow their parent companies to escape bankruptcy (the Post may avoid bankruptcy simply because it owns a profitable test-prep subsidiary). But I’m skeptical that they, or any newspaper, could have done very much better. Very few firms have ever survived disruptive innovations in their core markets, and the ones that do survive almost always emerge bruised and battered. It would be par for the course if most newspapers went bankrupt. It would be far more surprising if a significant number of them avoided bankruptcy.

Now, I should acknowledge that Christensen himself has written about this subject, and he doesn’t entirely agree with me. He penned an article in 2006 arguing that “although it’s too early to point to billion-dollar businesses, we have seen mind-sets shift and managers get excited as they see the massive growth potential that still exists for the industry.” What follows is frankly underwhelming. Christensen gestures in the general direction of some sensible but not terribly original business models, and promises more details in a report he prepared for the American Press Institute. If you’ll forgive my cynicism, I wonder if he sugar-coated his conclusions a bit for the benefit of his sponsor.

Indeed, I think a similar tension is evident in The Innovator’s Dilemma itself. In the introduction, Christensen writes that “colleagues who have read my academic papers reporting the findings recounted in chapters 1 through 4 were struck by their near-fatalism.” But he then goes on to reassure readers that “there are, in fact, sensible ways to deal effectively with this challenge.” I wonder if he turned in a fatalistic first draft, only to have his publisher sit him down and explain that business books are sold to corporate executives, and corporate executives aren’t likely to buy a book that tells them their business is doomed no matter what they do.

In any event, Christensen never really delivers on his promise of solutions to the innovator’s dilemma. The strategies he suggests are fine as far as they go, but he finds only a handful of companies that have (narrowly) escaped bankruptcy, compared with many companies that simply went broke. Whether the API likes it or not, that’s the fate that likely awaits most of the nation’s newspapers—not because newspapers are badly run, but simply because they’re based on an outmoded technology for distributing the news.

The Case for a New Church Committee

On Monday, I mentioned in passing the Church Committee, the post-Watergate Congressional committee that uncovered evidence of massive lawbreaking by the executive branch. The Committee’s report was incredibly important in helping the public understand the depth and breadth of Cold War lawlessness during the previous three decades. When Cato asked me to pen the chapter on electronic surveillance in this year’s edition of the Cato Handbook on Policy, I included a recommendation that Congress should launch a modern-day successor to the Church Committee.

In the last few months, I’ve been pleased to see that people smarter than me have been having the same idea. The latest is the Nation‘s Chris Hayes, who has a great cover story calling on Congress to launch a wide-ranging investigation of executive branch lawbreaking. I think the most important point is this one:

Since the committee began in the wake of Nixon’s resignation and revelations about his deceptions, abuses and sociopathic pursuit of grudges, Church and many Democrats had every reason to believe they would be chiefly unmasking the full depths of Nixon’s perfidy. Quickly, however, it became clear that Nixon was a difference in degree rather than a difference in kind. Kennedy and Johnson had, with J. Edgar Hoover, put in place many of the illegal policies and programs. Secret documents obtained by the committee even revealed that the sainted FDR had ordered IRS audits of his political enemies. Republicans on the committee, then, had as much incentive to dig up the truth as did their Democratic counterparts.

As historian Kathy Olmsted argues in her book Challenging the Secret Government, Church was never quite able to part with this conception of good Democrats/bad Republicans. Confronted with misdeeds under Kennedy and Johnson, he chose to view the CIA as a rogue agency, as opposed to one executing the president’s wishes. This characterization became the fulcrum of debate within the committee. At one point Church referred to the CIA as a “rogue elephant,” causing a media firestorm. But the final committee report shows that to the degree the agency and other parts of the secret government were operating with limited control from the White House, it was by design. Walter Mondale came around to the view that the problem wasn’t the agencies themselves but the accretion of secret executive power: “the grant of powers to the CIA and to these other agencies,” he said during a committee hearing, “is, above all, a grant of power to the president.”

A contemporary Church Committee would do well to follow Mondale’s approach and not Church’s. It must comprehensively evaluate the secret government, its activities and its relationship to Congress stretching back through several decades of Democratic and Republican administrations. Such a broad scope would insulate the committee from charges that it was simply pursuing a partisan vendetta against a discredited Republican administration, but it is also necessary to understand the systemic problems and necessary reforms.

This is a case where political expedience and justice point in the same direction. A thorough investigation will undoubtedly uncover numerous examples of abuses of power under the Bush administration. But Bill Clinton was hardly a civil libertarian himself. Thoroughly investigating abuses of power under Clinton (and under Reagan and Bush I) will serve two important purposes. First, of course, it will help to deflect spurious charges that the investigation is a partisan witch hunt. But more importantly, it will likely underscore the point that abuses of power are a bipartisan phenomenon. The problem is not just that George Bush was an exceptionally bad president (although of course he was). The problem is that presidents are almost always power-hungry, and our system of government lacks adequate checks and balances against abuses of power by the executive branch. The abuses of the Bush/Cheney years may provide the political momentum we need to fix the problem. But the problem is bigger than any one administration.

Thus far, Congress has shown a disappointing lack of courage when it comes to pursuing evidence of executive-branch lawlessness. We already have several clear examples of lawbreaking by the Bush administration, yet Congress’s only reaction has been to give a get-out-of-jail-free card to some of the participants. There’s an obscene bipartisan consensus that we should never “look backwards,” no matter how egregiously the law might have been broken. But the recent waves of revelations have been so stomach-turning that they might just break down that elite consensus. Even the most jaded political hack must have second thoughts when he reads detailed accounts of government officials torturing dozens of prisoners. The people who did these things need to be brought to justice. And that will only happen if Congress first unearths the details on exactly how extensive the lawbreaking was.

Bottom-Up Thinker: Clayton M. Christensen

Clayton M. Christensen’s The Innovator’s Dilemma is one of those instant classics whose central concepts have spread far beyond those who’ve actually read the book. As a result, “disruptive technology” is commonly used as a generic buzzword in discussions of rapid technological progress. That’s unfortunate, because the book itself has a subtle and important thesis that’s not widely understood.

Christensen gives his central concept, “disruptive technology,” a precise meaning. The key characteristic of a disruptive technology is that at its introduction, it is markedly inferior to the then-dominant technology, as judged by the existing base of customers. A classic example is the microcomputer. When the first microcomputers were released in the late 1970s by Apple, Commodore, and others, they were inferior in almost every respect to the minicomputers and mainframes that then dominated the computer market. People bought microcomputers for one of two reasons: they couldn’t afford a minicomputer, or they had an application where the microcomputer’s unique characteristics (such as its smaller size) were a particular advantage.

It’s important to understand that the innovator’s dilemma is not that disruptive technologies are “so innovative” that incumbent firms can’t keep up with them. To the contrary, disruptive technologies are often relatively pedestrian from an engineering point of view. Minicomputer manufacturers would have had no difficulty entering the microcomputer market if they’d wanted to. Rather, the innovator’s dilemma is that incumbents find it extremely difficult to commercialize disruptive technologies profitably. Christensen describes the dilemma on pp. 91-2:

A characteristic of each value network is a particular cost structure that firms within it must create if they are to provide the products and services in the priority their customers demand. Thus, as the disk drive makers became large and successful within their “home” value network, they developed a very specific economic character: tuning their levels of effort and expenses in research, development, sales, marketing, and administration to the needs of their customers and the challenges of their competitors. Gross margins tended to evolve in each value network to levels that enabled better disk drive makers to make money, given these costs of doing business.

In turn, this gave these companies a very specific model for improving profitability. Generally, they found it difficult to improve profitability by hacking out cost while steadfastly standing in their mainstream market: The research, development, marketing, and administrative costs they were incurring were critical to remaining competitive in their mainstream business. Moving upmarket toward higher-performance products that promised higher gross margins was usually a more straightforward path to profit improvement. Moving downmarket was anathema to that objective.

For example, DEC, the firm that led the minicomputer market in the 1970s, charged tens of thousands of dollars for each PDP-11. It was extremely difficult for a firm used to making $20,000 per computer to start selling computers for a small fraction of that price (pp. 126-7):

Four times between 1983 and 1995, DEC introduced lines of personal computers targeted at consumers, products that were technologically much simpler than DEC’s minicomputers. But four times it failed to build businesses in this value network that were perceived within the company as profitable. Four times it withdrew from the personal computer market. Why? DEC launched all four forays from within the mainstream company. For all the reasons so far recounted, even though executive-level decisions lay behind the move into the PC business, those who made the day-to-day resource allocation decisions in the company never saw the sense in investing the necessary money, time, and energy in low-margin products that their customers didn’t want. Higher-performance initiatives that promised upscale margins, such as DEC’s super-fast Alpha microprocessor and its adventure into mainframe computers, captured the resources instead.

The deeper lesson of The Innovator’s Dilemma, then, is about the inflexibility of hierarchical organizations. I’ve written before about people’s tendency to view the world in anthropomorphized terms. People have a tendency to do this to companies too: to talk about a company like DEC as if it were a gigantic person that could have simply decided one day to stop making minicomputers and start making microcomputers, the same way I decided to stop working as a writer so I could go to grad school.

But companies aren’t big people, and it’s a mistake to think of them that way. In 1983, any given engineer at DEC could have easily quit his job making minicomputers and taken a job at Apple or IBM making microcomputers. But it would have been much harder for DEC as an institution to make that same transition. Turning DEC into a microcomputer company would have required a wrenching, years-long struggle to essentially build a new company from the ground up. Indeed, as Christensen documents, the few firms that have successfully pulled off such a transition have done it by essentially growing a new company inside the existing one: senior management would start a subsidiary devoted to the disruptive technology and keep it insulated from the parent company’s managerial structure. The hope was that by the time the parent company fell on hard times, the subsidiary would have grown enough to sustain the overall company’s profitability. There are a few examples of this strategy working, but it’s an extremely risky and difficult process.

So far I’ve described top-down thinking as the tendency to underestimate the effectiveness of bottom-up processes like evolution or Wikipedia, based on the assumption that decentralized systems can’t work well without someone “in charge.” The Innovator’s Dilemma critiques the flip-side of this fallacy: the tendency to believe that when an organization does have someone in charge, that person has a lot of control over the organization’s behavior. In reality, hierarchical organizations have an internal logic that severely constrains the options of the people in charge of them. Bottom-up thinkers in both cases focus on the complexity of the underlying systems, and resist the urge to over-simplify the situation by focusing too much on the people in charge (or lack thereof).

How Airplane Crashes Are Like Judith Miller’s Reporting

My post about Alex Jones’s Fresh Air interview sparked a really interesting email discussion with a reader who pointed out another commonly cited advantage of newspapers: their superior accuracy. Now, I think the accuracy of mainstream media outlets is sometimes overstated (see Jayson Blair, Dan Rather, and Judith Miller, for example), but it’s true that mainstream media sources make fewer mistakes per word than blogs do. However, I don’t think this is as big an advantage as people steeped in newspaper culture imagine. Writers should obviously make a reasonable effort to be accurate, but it is possible to demand too much accuracy. I think we sometimes place unrealistic pressure on journalists to verify everything they write, rather than simply acknowledging when they’re repeating something they haven’t been able to fully verify. The reader replied:

I think that saying we’re putting unrealistic pressures on journalists to verify everything is akin to saying that we are putting too much pressure on pilots to stay awake while flying planes. It’s a journalist’s job to get as close to 100% of the truth as possible, just as it’s a pilot’s job to be as close to 100% perfect when flying a plane.

This is a good point as far as it goes. Obviously, we’d rather have reporters be more rather than less accurate. But I also think it matters what the consequences of failure are. If you’re flying an airplane, the consequences are very high, so you invest a lot of resources in a safe landing. Likewise, if you’re investigating a potentially career-ending allegation of sexual abuse, the consequences of misreporting are high, so you’d better be sure you’re right. On the other hand, if you’re a reporter who receives a tip about an upcoming Apple product, it might make more sense to just pass along the rumor and let the readers decide how credible it is. If it turns out to be wrong, there’s no particular harm done as long as you’re clear that it’s just a rumor.

I think a lot more of the news is in this latter category than mainstream journalists like to admit. A variant of Linus’s Law applies: “given enough eyeballs, all rumors are shallow.” That is, there’s no particular reason to think any given reporter is going to be in the best position to corroborate or debunk a rumor. In many cases readers (or other bloggers) might be much better positioned to substantiate or debunk a claim than the original reporter. On the other hand, a reporter can burn up a ton of time tracking down a minor detail on an unfinished story. It might make sense to simply disclose to the reader that the detail couldn’t be verified, and move on to the next story.

I think one of the reasons that newspapers have set such a high bar for the accuracy of their stories is that there was so little competition in the late 20th century newspaper market. If you’re the only major newspaper in a metropolitan area and you print an untrue rumor, that can do a lot of damage, because it will be read by a ton of people with little opportunity for response. In contrast, in a world where there are hundreds of blogs in a metropolitan area, each with relatively small audiences, it’s much less damaging if one of them posts something that turns out to be untrue: many fewer people will read it, and it’s far more likely that others will notice the error and quickly correct it.

The New York Times getting a story wrong is like an airplane crash (or much worse if it leads to an unnecessary war in Iraq). Me getting a story wrong on this blog is like falling off my bike. That’s still bad, of course, but the amount of damage is much less. It would obviously be silly to say that bikes are more dangerous than airplanes because bikes crash more often—any given crash is much, much less painful. Similarly, bloggers indisputably make more mistakes than newspaper reporters, but this doesn’t necessarily mean newspapers are better. The blogosphere has superior methods for quickly catching and correcting mistakes that do happen.
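Put a little more formally, the comparison is about expected damage: error frequency times harm per error. The numbers in this sketch are invented purely to show the shape of the argument, not actual measurements of anyone’s error rates:

```python
# Expected damage = error rate * harm per error. The numbers below
# are invented purely to illustrate the structure of the argument,
# not actual measurements of newspaper or blog error rates.

def expected_damage(errors_per_year: float, harm_per_error: float) -> float:
    return errors_per_year * harm_per_error

# A big newspaper errs rarely, but each error reaches millions.
newspaper = expected_damage(errors_per_year=5, harm_per_error=1_000_000)
# A blog errs often, but each error reaches few readers and is
# quickly corrected by other bloggers.
blog = expected_damage(errors_per_year=50, harm_per_error=1_000)

print(f"newspaper: {newspaper:,.0f} harm units")  # 5,000,000
print(f"blog:      {blog:,.0f} harm units")       # 50,000
```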

Philosophizing about Health Care

Julian offers a thought experiment designed to sharpen progressives’ thinking about health care. He writes:

For the purposes of our example, suppose that the correct conception [of justice in our hypothetical society] seeks to neutralize to some extent the effects of bad luck, so that someone who is burdened with health problems, either congenital or as a result of accident, may be entitled to a greater share of social resources by way of compensation. Also suppose that, unlike most social democracies, this market egalitarian society does not generally go in for direct government provision of goods, but instead, having ensured that everyone has their fair share of all-purpose resources—in other words, wealth and income—allows adults to secure these goods for themselves. Imagine that this is a generally affluent society, and in it there lives a Mr. Rich, who is as well off as anyone else—and perhaps, if this is compatible with your preferred conception of economic justice, economically better off than most. As he gets on in years, he is diagnosed with a serious condition that will shorten his life—though appropriate medical care can affect how much it is shortened. If necessary, according to your preferred conception and the specific facts of the case, his share of social wealth may be augmented through redistribution to compensate for this stroke of bad fortune.

Though he could expend some of his share on the appropriate medical treatments and be left with enough to maintain a perfectly decent quality of life, Mr Rich decides to use his resources in service of other projects: Perhaps he decides to travel to parts of the world he’d always wanted to see, or endow a library, or in other ways enhance the quality of his remaining years. As a result of this, suppose he reaches a point where he is no longer able to afford the medical treatments that would extend his life. Can he still claim a right against society to be provided with care? Or are his rights exhausted by his consumption of what, by stipulation, is his fair share of aggregate social resources? Can society fairly say: “We’ve given you what you had a right to already, and you opted against using it for health care”?

Julian is trained as a philosopher, and philosophers construct thought experiments like this as a way of simplifying and hopefully clarifying thorny moral issues. The problem, I think, is that his simplifying assumptions are doing an awful lot of the heavy lifting here. In particular, I don’t think a progressive should concede that Mr. Rich’s income can easily be “augmented through redistribution to compensate for this stroke of bad fortune.”

To see the problem, let’s make the scenario a little more specific. Let’s say Mr. Rich’s doctor has discovered that he has a tumor. After further analysis, the doctor thinks there’s a 90 percent chance that the tumor is benign, and a 10 percent chance that it’s a rare and deadly form of cancer. The recommended treatment for the deadly form of cancer costs $1 million and is extremely unpleasant for the patient. So unpleasant, in fact, that Mr. Rich likely would opt against receiving it even if it cost him nothing—the 90 percent chance of an unnecessarily reduced quality of life outweighs the 10 percent chance of saving his life.
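To see why declining could be the rational choice, here’s the expected-utility arithmetic behind that claim, using the post’s 90/10 odds. The utility numbers themselves are stipulated purely for illustration:

```python
# Expected-utility sketch of Mr. Rich's choice, using the post's
# 90/10 odds. The utility values are stipulated for illustration.
p_malignant = 0.10
p_benign = 1 - p_malignant

LIFE = 100       # utility of a normal remaining life; death is 0
TREATMENT = 40   # utility cost of the grueling treatment

# If he treats: he endures the treatment either way, and it cures
# the cancer if the tumor turns out to be malignant.
treat = p_benign * (LIFE - TREATMENT) + p_malignant * (LIFE - TREATMENT)
# If he declines: normal life if benign, death if malignant.
decline = p_benign * LIFE + p_malignant * 0

print(f"expected utility, treat:   {treat:.0f}")    # 60
print(f"expected utility, decline: {decline:.0f}")  # 90
# With these (made-up) numbers, declining the treatment wins.
```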

In Julian’s hypothetical society, things look different. The doctor isn’t sure whether the tumor is malignant, but he does know that if he says it’s malignant, his patient gets a million dollars. So as long as he can make a plausible case that the tumor might be malignant, he signs the paperwork saying so and the patient gets his million dollars. And once word gets out that an adverse diagnosis gets you a fat check, there’s going to be a cottage industry in obtaining such diagnoses (much as we’ve seen with Social Security disability payments).

To avoid being robbed blind, the government would have to develop an elaborate bureaucracy for independently reviewing doctors’ diagnoses. Indeed, given the stronger incentives patients would face to game the system—a negative diagnosis gets them cash rather than unnecessary medical treatments—the required bureaucracy would probably be greater than the present-day Medicare bureaucracy, and probably even greater than the bureaucracy you’d need to administer a full-blown single-payer system.

Even if this perverse incentive problem could be solved, I think there’s a deeper philosophical problem with the concept of writing people checks to compensate them ex ante for their medical misfortunes. Let’s say Mr. Rich gets his million dollars and spends it on treatments that wind up not working. This is another stroke of bad luck. Does it entitle him to another check from the government? Do we say that he only gets a check if he complied with the government’s recommended treatment regimen? Or do we take a harder line and say that he already got his ex ante share of the medical resources, and he’s now out of luck?

All of which is to say that assuming the government can figure out how much health care money each citizen is entitled to is assuming away most of what’s hard about health care policy. The egalitarianism-vs-paternalism tension Julian is hinting at is an interesting and important one in a number of different policy contexts. But I’m not convinced it’s especially important to the health care debate.
