The point I made on Friday about the dangers of over-reliance on (possibly faulty) data seems obvious in hindsight. You might think that McNamara just happened to have an unusual blind spot, and that his problems could have been avoided relatively easily by applying an appropriate dose of skepticism.
But the problem is much harder to avoid than it seems at first glance. And it’s hardly limited to the military. Recall that one of the defining characteristics of disruptive technology is that it’s typically inferior to existing technologies if measured by conventional means; that is, according to conventional statistics. In the 1970s, for example, microcomputers were inferior to minicomputers by almost every conventional criterion. Incumbents like DEC analyzed the microcomputer using conventional measures like speed, storage space, reliability, and peripheral support, and concluded that it wasn’t a serious threat to their business. But of course the traditional statistics, which were geared toward the needs of existing customers, did a poor job of capturing the microcomputer’s unique advantages: small size, low power consumption, and, most importantly, low price.
Problems with faulty data tend to sneak up on well-established organizations when the assumptions on which those organizations were founded cease to apply. One reason the military was so confident in its statistics in the 1960s is that similar statistics had worked well in the three conventional wars America had fought over the previous half-century. The number of Nazis you’d killed and the tons of bombs you’d dropped on Dresden really were reasonable proxies for how well the war was going overall.
On paper, an Apple II was inferior to a PDP, just as, on paper, the Vietcong were inferior to the US military. But that’s because the “paper” measurements were measurements of the wrong variables. Distilling a product or an army down to a collection of facts and figures drained it of the context that might have allowed a smart commander or business executive to realize that the world had changed.
Large organizations are especially prone to making this kind of error. There are usually people near the bottom of an organization who figure out relatively quickly that the official statistics are wrong. Rank-and-file soldiers knew the war was going poorly, just as many DEC engineers and salespeople knew that the microcomputer was a serious competitive threat. Indeed, in The Innovator’s Dilemma, Christensen observes that there are usually engineers at incumbent firms lobbying their bosses to let them enter disruptive markets. In some cases, they get as far as building prototypes. In others, they get frustrated and leave to start their own companies.
Bureaucratic politics makes the problem worse. Large companies have more internal political fights, and having “hard” numbers at your fingertips is a tremendous advantage in those fights. Typically, it doesn’t matter very much whether you have the “right” statistics; even bad statistics are better than none. When debates over war strategy occurred inside the Johnson administration, the hawks at the Pentagon had reams of extremely detailed (albeit mostly worthless) data. They could cite precise statistics about the (apparent) successes of various military campaigns: number of enemy killed, number of bombs dropped, territory cleared, etc. The doves at the State Department, in contrast, were trying to make much subtler arguments. They lacked the military’s resources for collecting data, and their claims wouldn’t have been easy to quantify anyway. So one side of the debate always came prepared with precise statistics and detailed plans, while the other side raised concerns that seemed vague, abstract, and hypothetical. The people with the statistics were usually wrong, but they tended to get their way anyway.
This sort of problem is much less likely to crop up in small organizations. Senior executives are close enough to the rank and file that they’re likely to know the context surrounding the official statistics. A good CEO at a small firm sees and talks to ordinary employees often enough that he’ll catch wind of problems long before they begin showing up in the official numbers. In a very large organization, in contrast, there are several layers of bureaucracy between the guys doing the work and the guys making the decisions. As a consequence, managers often have to rely on statistics stripped of context. And so they continue to believe they have a winning product (or a winning military strategy) right up until the moment disaster strikes.