Clayton M. Christensen’s The Innovator’s Dilemma is one of those instant classics whose central concepts have spread far beyond those who’ve actually read the book. As a result, the phrase is commonly used as a generic buzzword in discussions of rapid technological progress. That’s unfortunate because the book itself has a subtle and important thesis that’s not widely understood.
Christensen gives his central concept, “disruptive technology,” a precise meaning. The key characteristic of a disruptive technology is that at its introduction, it is markedly inferior to the then-dominant technology, as judged by the existing base of customers. A classic example is the microcomputer. When the first microcomputers were released in the late 1970s by Apple, Commodore, and others, they were inferior in almost every respect to the minicomputers and mainframes that then dominated the computer market. People bought microcomputers for one of two reasons: they couldn’t afford a minicomputer, or they had an application where the microcomputer’s unique characteristics (e.g. its smaller size) were a particular advantage.
It’s important to understand that the innovator’s dilemma is not that disruptive technologies are “so innovative” that incumbent firms can’t keep up with them. To the contrary, disruptive technologies are often relatively pedestrian from an engineering point of view. Minicomputer manufacturers would have had no difficulty entering the microcomputer market if they’d wanted to. Rather, the innovator’s dilemma is that incumbents find it extremely difficult to sell disruptive technologies profitably. Christensen describes the dilemma on pp. 91-2:
A characteristic of each value network is a particular cost structure that firms within it must create if they are to provide the products and services in the priority their customers demand. Thus, as the disk drive makers became large and successful within their “home” value network, they developed a very specific economic character: tuning their levels of effort and expenses in research, development, sales, marketing, and administration to the needs of their customers and the challenges of their competitors. Gross margins tended to evolve in each value network to levels that enabled better disk drive makers to make money, given these costs of doing business.
In turn, this gave these companies a very specific model for improving profitability. Generally, they found it difficult to improve profitability by hacking out cost while steadfastly standing in their mainstream market: The research, development, marketing, and administrative costs they were incurring were critical to remaining competitive in their mainstream business. Moving upmarket toward higher-performance products that promised higher gross margins was usually a more straightforward path to profit improvement. Moving downmarket was anathema to that objective.
For example, DEC, the firm that led the minicomputer market in the 1970s, charged tens of thousands of dollars for each PDP-11. It was extremely difficult for a firm used to making $20,000 per computer to start selling computers for a small fraction of that price (pp. 126-7):
Four times between 1983 and 1995, DEC introduced lines of personal computers targeted at consumers, products that were technologically much simpler than DEC’s minicomputers. But four times it failed to build businesses in this value network that were perceived within the company as profitable. Four times it withdrew from the personal computer market. Why? DEC launched all four forays from within the mainstream company. For all the reasons so far recounted, even though executive-level decisions lay behind the move into the PC business, those who made the day-to-day resource allocation decisions in the company never saw the sense in investing the necessary money, time, and energy in low-margin products that their customers didn’t want. Higher-performance initiatives that promised upscale margins, such as DEC’s super-fast Alpha microprocessor and its adventure into mainframe computers, captured the resources instead.
The deeper lesson of The Innovator’s Dilemma, then, is about the inflexibility of hierarchical organizations. I’ve written before about people’s tendency to view the world in anthropomorphized terms. People have a tendency to do this to companies too: to talk about a company like DEC as if it were a gigantic person that could have simply decided one day to stop making minicomputers and start making microcomputers, the same way I decided to stop working as a writer so I could go to grad school.
But companies aren’t big people, and it’s a mistake to think of them that way. In 1983, any given engineer at DEC could have easily quit his job making minicomputers and taken a job at Apple or IBM making microcomputers. But it would have been much harder for DEC as an institution to make that same transition. Turning DEC into a microcomputer company would have required a wrenching, years-long struggle to essentially build a new company from the ground up. Indeed, as Christensen documents, the few firms that have successfully pulled off such a transition have done it by essentially growing a new company inside the existing one: senior management would start a subsidiary devoted to the disruptive technology and keep it insulated from the parent company’s managerial structure. The hope was that by the time the parent company fell on hard times, the subsidiary would have grown enough to sustain the overall company’s profitability. There are a few examples of this strategy working, but it’s an extremely risky and difficult process.
So far I’ve described top-down thinking as the tendency to underestimate the effectiveness of bottom-up processes like evolution or Wikipedia, based on the assumption that decentralized systems can’t work well without someone “in charge.” The Innovator’s Dilemma critiques the flip side of this fallacy: the tendency to believe that when an organization does have someone in charge of it, that person has a lot of control over the organization’s behavior. In reality, hierarchical organizations have an internal logic that severely constrains the options of the people in charge of them. Bottom-up thinkers in both cases focus on the complexity of the underlying systems, and resist the urge to over-simplify the situation by focusing too much on the people in charge (or lack thereof).