A while back I wrote about the trouble that can occur when the managers of large organizations overestimate the utility of large data sets and sophisticated statistical tools, with Robert McNamara’s problems in Vietnam as a poster child. In 2010, that example seems remote, so let’s talk about one that’s closer to home: the financial crisis.
We’ve seen that decision-makers are at greatest risk of being seduced by data when there are several layers of abstraction between themselves and the people on the “front lines.” Consider the case of making a loan. In a traditional small bank, you might have a president who establishes guidelines for the kinds of loans that are issued, and a handful of loan officers who evaluate individual applications and make decisions based on those guidelines. There’s not much danger of the bank president getting out of touch with reality here: if a loan officer thinks the rules don’t make sense, he can probably go down the hall and talk to the president about his concerns.
Things get trickier as the bank gets bigger. There will be more applications to evaluate, and that requires more branches, more loan officers, and a more complex bureaucracy to manage them all. This can create problems. For example, if loan officers are paid on commission, they might not have much incentive to report that the rules are too lax.
Still, things get a whole lot worse when securitization comes along. Securitization works like this: a bank approves a bunch of loans to various customers, combines those loans into a big bundle called a Collateralized Debt Obligation, slices the bundle into “tranches” (so that each slice has a piece of each loan, with the slices getting paid back in a fixed order of priority), and sells the slices to a bunch of third parties.
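To make that structure concrete, here is a minimal sketch in Python. The numbers and the three-tranche setup are invented, and real deals had far more elaborate payment rules, but the basic waterfall idea looks something like this:

```python
# Toy illustration of securitization: pool the cash collected from many loans,
# then pay the tranches in order of seniority. All figures are made up.

def distribute_to_tranches(collected, tranches):
    """Pay each tranche in priority order until the cash runs out.

    collected: cash actually received from the loan pool this period
    tranches:  list of (name, amount_owed), most senior first
    """
    payouts = []
    remaining = collected
    for name, owed in tranches:
        paid = min(owed, remaining)
        remaining -= paid
        payouts.append((name, paid, owed - paid))  # (tranche, paid, shortfall)
    return payouts

# The pool owes $100 this period, but some borrowers default and only $80 comes in.
tranches = [("senior", 70.0), ("mezzanine", 20.0), ("equity", 10.0)]
for name, paid, shortfall in distribute_to_tranches(80.0, tranches):
    print(f"{name:10s} paid {paid:6.2f}  shortfall {shortfall:6.2f}")
```

Under these made-up numbers, the defaults are absorbed entirely by the junior slices while the senior slice is paid in full, which is the property that let the senior slices be marketed as very safe.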
Now there are many layers of abstraction and bureaucracy between the guy evaluating an individual loan application and the guy who will ultimately lose his money if the loan goes bad. And the incentives have become extremely perverse. The bank earns fees the moment it originates a loan, but it may bear little or no long-term risk for making bad loans. Meanwhile, the assets are so fragmented that it’s not practical for buyers to do due diligence. In hindsight, this seems like an obviously terrible idea. Yet lots of otherwise smart people endorsed the concept. What happened, in a nutshell, is that the financial industry fell prey to the same intellectual error that befell Robert McNamara: mistaking large amounts of data for knowledge.
In the last decades of the 20th century, Wall Street developed what it thought were sophisticated statistical tools that allowed it to accurately estimate the riskiness of complex portfolios without firsthand knowledge of the underlying assets. First, as banks got larger, they increasingly relied on numerical standards like income and credit scores, rather than more subjective personal factors, to decide which loans to approve. This made a certain amount of sense, because a larger organization needs consistent standards across its branches.
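As an illustration, a purely numerical underwriting rule looks something like the following sketch. The cutoff values and field names here are invented for the example, not any bank’s actual criteria:

```python
# Caricature of score-based underwriting: approve or deny based only on
# a few numbers from the application. The thresholds are invented.

def approve_loan(credit_score, annual_income, loan_amount):
    loan_to_income = loan_amount / annual_income
    return credit_score >= 620 and loan_to_income <= 4.0

print(approve_loan(credit_score=640, annual_income=50_000, loan_amount=150_000))  # True
print(approve_loan(credit_score=580, annual_income=50_000, loan_amount=150_000))  # False
```

Note that every input to a rule like this comes straight off the application form.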
Second, investors increasingly relied on a handful of large credit rating agencies that evaluated CDOs for riskiness. The firms creating the CDOs carefully packaged these bundles of securities so that as many slices as possible would get an “investment grade” rating. This was important not only because it gave buyers confidence, but also because government regulations mandated that financial institutions hold a certain fraction of their balance sheets in investment-grade assets.
Finally, the buyers themselves developed (ostensibly) sophisticated statistical techniques like value at risk, which purported to give a high-confidence estimate of the most an institution could lose on a given portfolio. Wall Street had armies of well-paid “quants” whose job it was to compute these figures.
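For instance, here is a minimal sketch of the historical-simulation flavor of value at risk, which reads a loss threshold off a portfolio’s own past returns. The function name, the simulated normal returns, and the dollar figures are all invented for illustration; real VaR models were considerably more elaborate:

```python
import numpy as np

def historical_var(daily_returns, portfolio_value, confidence=0.99):
    """One-day value at risk by historical simulation: the loss that past
    daily returns exceeded only (1 - confidence) of the time."""
    cutoff = np.percentile(daily_returns, 100 * (1 - confidence))
    return -cutoff * portfolio_value

# Fake history: 1,000 days of returns drawn from a normal distribution.
# A thin-tailed assumption like this is exactly what understates extreme losses.
rng = np.random.default_rng(0)
returns = rng.normal(loc=0.0005, scale=0.01, size=1000)

print(f"99% one-day VaR: ${historical_var(returns, 1_000_000):,.0f}")
```

A single number like this is easy to report up the chain of command, but it says nothing about how bad the days beyond the cutoff can get, and it is only as good as the return history or distributional assumptions fed into it.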
As the system became more complex and centralized on Wall Street, it became increasingly difficult for the people creating the securities to do a “reality check.” The Wall Street quant who never looked at individual mortgage applications was in precisely the same situation as a general who had never been to the front lines in Vietnam. In both cases, the people on the front lines had a strong incentive to skew the data to make themselves look good. And they began doing just that. Loan officers began encouraging their customers to fudge the information on their loan applications to ensure they’d be approved. Companies came up with ever more elaborate CDO structures designed to convince rating agencies to give a large fraction of their securities investment-grade ratings. So for Wall Street executives, the numbers looked great right up until the moment their balance sheets imploded.
The fundamental problem, I think, was the size of the firms and the complexity of the financial instruments. No statistic can perfectly summarize a messy, real-world asset. There’s no substitute for understanding the (literal) “facts on the ground”: for knowing something about the individual properties and property owners who are the ultimate anchor for any loan. And the more complex and fragmented an asset is, the harder it is to perform due diligence. Yet as firms grow larger, senior management is forced to rely on increasingly abstract statistical measures. Ken Lewis couldn’t possibly have developed an in-depth understanding of all the CDOs (and other exotic financial instruments) Bank of America was buying in the years before the financial crisis, both because they were extremely complex and because there were just too many of them.
There’s been a lot of talk about the need for a systemic risk regulator to monitor the entire financial sector and raise the alarm if major financial institutions are making loans that are too risky. But if the financial system continues to be structured the way it was in 2007, it’s not clear how much good such a regulator can do, because he’ll be in fundamentally the same boat as the CEOs. The balance sheets of the largest banks were so complex that regulators had no real alternative but to rely on statistics supplied by the banks themselves. And that means the regulator is going to have the same blind spots as the banks’ management.
So the most important part of any reform package has to be limiting the size of financial firms. A financial institution with $2 trillion of assets under management is a recipe for disaster: no executive can possibly manage it responsibly, and no regulator can possibly understand it well enough to conduct meaningful oversight. Smaller firms not only reduce the risk of too-big-to-fail problems, which is important in its own right; they also make it easier for everyone—bank executives, regulators, and members of the general public—to understand what these institutions are doing.
Unfortunately, the Brown-Kaufman amendment to the financial reform bill, which would have limited the size of the largest financial firms, failed in May. All signs point to continued consolidation on Wall Street, which seems like a recipe for another crisis and more bailouts in the coming years.