Clayton M. Christensen’s The Innovator’s Dilemma is one of those instant classics whose central concepts have spread far beyond those who’ve actually read the book. As a result, its title phrase is commonly used as a generic buzzword in discussions of rapid technological progress. That’s unfortunate, because the book itself has a subtle and important thesis that isn’t widely understood.
Christensen gives his central concept, “disruptive technology,” a precise meaning. The key characteristic of a disruptive technology is that at its introduction, it is markedly inferior to the then-dominant technology, as judged by the existing base of customers. A classic example is the microcomputer. When the first microcomputers were released in the late 1970s by Apple, Commodore, and others, they were inferior in almost every respect to the minicomputers and mainframes that then dominated the computer market. People bought microcomputers for one of two reasons: they couldn’t afford a minicomputer, or they had an application where the microcomputer’s unique characteristics (e.g. its small size) gave it a particular advantage.
It’s important to understand that the innovator’s dilemma is not that disruptive technologies are “so innovative” that incumbent firms can’t keep up with them. To the contrary, disruptive technologies are often relatively pedestrian from an engineering point of view. Minicomputer manufacturers would have had no difficulty entering the microcomputer market if they’d wanted to. Rather, the innovator’s dilemma is that incumbents find it extremely difficult to sell disruptive technologies profitably. Christensen describes the dilemma on pp. 91-2:
A characteristic of each value network is a particular cost structure that firms within it must create if they are to provide the products and services in the priority their customers demand. Thus, as the disk drive makers became large and successful within their “home” value network, they developed a very specific economic character: tuning their levels of effort and expenses in research, development, sales, marketing, and administration to the needs of their customers and the challenges of their competitors. Gross margins tended to evolve in each value network to levels that enabled better disk drive makers to make money, given these costs of doing business.
In turn, this gave these companies a very specific model for improving profitability. Generally, they found it difficult to improve profitability by hacking out cost while steadfastly standing in their mainstream market: The research, development, marketing, and administrative costs they were incurring were critical to remaining competitive in their mainstream business. Moving upmarket toward higher-performance products that promised higher gross margins was usually a more straightforward path to profit improvement. Moving downmarket was anathema to that objective.
For example, DEC, the firm that led the minicomputer market in the 1970s, charged tens of thousands of dollars for each PDP-11. It was extremely difficult for a firm used to making $20,000 per computer to start selling computers for a small fraction of that price (pp. 126-7):
Four times between 1983 and 1995, DEC introduced lines of personal computers targeted at consumers, products that were technologically much simpler than DEC’s minicomputers. But four times it failed to build businesses in this value network that were perceived within the company as profitable. Four times it withdrew from the personal computer market. Why? DEC launched all four forays from within the mainstream company. For all the reasons so far recounted, even though executive-level decisions lay behind the move into the PC business, those who made the day-to-day resource allocation decisions in the company never saw the sense in investing the necessary money, time, and energy in low-margin products that their customers didn’t want. Higher-performance initiatives that promised upscale margins, such as DEC’s super-fast Alpha microprocessor and its adventure into mainframe computers, captured the resources instead.
The deeper lesson of The Innovator’s Dilemma, then, is about the inflexibility of hierarchical organizations. I’ve written before about people’s tendency to view the world in anthropomorphized terms. People have a tendency to do this to companies too: to talk about a company like DEC as if it were a gigantic person that could have simply decided one day to stop making minicomputers and start making microcomputers, the same way I decided to stop working as a writer so I could go to grad school.
But companies aren’t big people, and it’s a mistake to think of them that way. In 1983, any given engineer at DEC could have easily quit his job making minicomputers and taken a job at Apple or IBM making microcomputers. But it would have been much harder for DEC as an institution to make that same transition. Turning DEC into a microcomputer company would have required a wrenching, years-long struggle to essentially build a new company from the ground up. Indeed, as Christensen documents, the few firms that have successfully pulled off such a transition have done it by essentially growing a new company inside the existing one: senior management would start a subsidiary devoted to the disruptive technology and keep it insulated from the parent company’s managerial structure. The hope was that by the time the parent company fell on hard times, the subsidiary would have grown enough to sustain the overall company’s profitability. There are a few examples of this strategy working, but it’s an extremely risky and difficult process.
So far I’ve described top-down thinking as the tendency to underestimate the effectiveness of bottom-up processes like evolution or Wikipedia, based on the assumption that decentralized systems can’t work well without someone “in charge.” The Innovator’s Dilemma critiques the flip side of this fallacy: the tendency to believe that when an organization does have someone in charge of it, that person has a lot of control over the organization’s behavior. In reality, hierarchical organizations have an internal logic that severely constrains the options of the people in charge of them. Bottom-up thinkers in both cases focus on the complexity of the underlying systems, and resist the urge to over-simplify the situation by focusing too much on the people in charge (or lack thereof).
While organizational inflexibility is certainly a lesson of this book, I think the issue of incentives and payoffs is key. As Christensen points out (and I don’t have my copy handy at the moment), a small company with a disruptive technology values small contracts far more than a company in an established market does. Thus, the small company is likely to innovate to win those contracts, whereas they don’t matter to a large company in an established market. What comes to mind, actually, is Levitt’s discussion of real estate commissions in “Freakonomics”.
To me, one of the lessons of ID was that established companies could do little to avert disruption short of creating a “skunk works”-like independent division that was organizationally (and perhaps also geographically) removed from the powerful profit centers. The case of HP and inkjets comes to mind.
A friend passed on Rubin’s “The Myth of Accountability” paper recently… a very fascinating look at things related to top-down, bottom-up stuff. Have you read that? Might be a good reading for the policy reading group when school starts…
I haven’t read the book, and probably won’t if the author thinks “disruptive innovations” always start off inferior to established ones. Atom bombs, the SR-71, the Me262, steam engines, Xerox machines, IC’s versus circuits: most disruptive innovations, in fact, start off vastly superior to established ones. Duh.
It is an excellent point about organizations, however. You can see this effect most clearly in the National Lab system and NASA, which as far as I can tell, is more or less not worth what we spend on it because they’re all oriented towards technologies and practices of the 1950s.
Scott, I am not sure how you define “vastly superior”, but the initial introduction of the Me 262 was not an unqualified success. You can even argue that, in terms of military requirements and procurement, it did more to damage the Luftwaffe than to benefit it. There’s no doubt that the Me 262 took resources from established, and successful, production lines, absorbed materials and technical personnel, and returned relatively little on the investment in terms of actual performance. How does this equate to “vastly superior”?
As for the thesis overall – it seems plausible that a disruptive technology would be generally inferior at its inception, given that all prototypes and first generation products have teething troubles, and are often not helped by being rushed into production as rapidly as possible. However, surely disruptive technologies must have, or seem to promise, a comparative advantage in some significant area. Otherwise, you’d be looking at another of the “seemed like a good idea at the time” one-minute wonders.
Scott, it’s important to understand that Christensen had a very specific definition for “disruptive innovation.” It’s not just a generic term for technological progress. It specifically describes a situation in which an expensive, complex technology is replaced by a simpler, cheaper technology. So the atom bomb, for example, was not a disruptive technology as he defined it, since it was neither cheaper nor simpler than conventional weapons. So your examples don’t disprove Christensen’s thesis as he defined it.
Nick, the comparative advantage of disruptive technologies is often that they’re simply much cheaper than older technologies. In the long run, they often improve and displace the older technologies, but at the outset they’re often adopted by people who couldn’t afford the earlier technology.
As I remember the book, “complex technology is replaced by a simpler, cheaper technology” is a little misleading. The “replacement” often isn’t total; the old-line technology and its builders end up in niche markets/uses. And usually the innovation develops an entirely new class of customers and uses in addition to mostly replacing the old. IBM still sells a few mainframes, just as Dell is still selling some desktop PCs.