A few months ago I did a post about the role top-down decision-making played in the financial crisis. I wasn’t super satisfied with my finished product, and it turns out I should have just waited a few weeks for this amazing article from Amar Bhidé to come out:
In recent times, though, a new form of centralized control has taken root—one that is the work not of old-fashioned autocrats, committees, or rule books but of statistical models and algorithms. These mechanistic decision-making technologies have value under certain circumstances, but when misused or overused they can be every bit as dysfunctional as a Muscovite politburo. Consider what has just happened in the financial sector: A host of lending officers used to make boots-on-the-ground, case-by-case examinations of borrowers’ creditworthiness. Unfortunately, those individuals were replaced by a small number of very similar statistical models created by financial wizards and disseminated by Wall Street firms, rating agencies, and government-sponsored mortgage lenders. This centralization and robotization of credit flourished as banks were freed from many regulatory limits on their activities and regulators embraced top-down, mechanistic capital requirements. The result was an epic financial crisis and the near-collapse of the global economy. Finance suffered from a judgment deficit, and all of us are paying the price.
As we rebuild from the economic crisis, we must renew the search for the appropriate balance—in finance and in other endeavors—not just between centralization and decentralization but also between case-by-case judgment and standardized rules. The right level of control is an elusive and moving target: Economic dynamism is best maintained by minimizing centralized control, but the very dynamism that individual initiative unleashes tends to increase the degree of control needed. And how to centralize—whether through case-by-case judgment, a rule book, or a computer model—is as difficult a question as how much. But these are questions that we cannot afford to stop asking…
Over the past several decades, centralized, mechanistic finance elbowed aside the traditional model. Loan officers made way for mortgage brokers. At the height of the housing boom, in 2004, some 53,000 mortgage brokerage companies, with an estimated 418,700 employees, originated 68% of all residential loans in the United States. In other words, fewer than a third of all loans were originated by an actual lender. The brokers’ role in the credit process is mainly to help applicants fill out forms. In fact, hardly anyone now makes case-by-case mortgage credit judgments. Mortgages are granted or denied (and new mortgage products like option ARMs are designed) using complex models that are conjured up by a small number of faraway rocket scientists and take little heed of the specific facts on the ground.
It’s one of those essays that’s hard to excerpt because the whole thing is good. Check it out.
While I’m sympathetic to the idea in general, does the “we relied too much on software to gauge loans!” argument really work for the recent housing crash? Lots of humans totally failed to predict it as well. Is there any data indicating that lending institutions that relied less on models did better? (There may be; I wasn’t being rhetorical.) I recognize that previous conventional wisdom can be embedded in code and thereby be applied when it’s no longer relevant, but what the software was doing in 2005 was basically in line with the judgment of the people running it — that is, if all their computers had gone down, they probably would have made similar loan decisions (as far as I know), because everyone thought home prices would always rise.
Now, maybe I’m being uncharitable, and he’s just saying that without the robo-centralization, it wouldn’t have been as bad, because they couldn’t conglomerate into super institutions, but that it still would have been a painful crash. Which I’d agree with, and also say that there were many other factors as well.
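The earlier point about conventional wisdom getting embedded in code can be made concrete with a toy sketch. This is entirely hypothetical, not from Bhidé’s article: a made-up, mechanistic approval rule whose verdict hinges on a single baked-in assumption about home-price appreciation, so the model and a human who shares that belief reach the same (wrong) verdict.

```python
# Toy illustration (hypothetical, not a real lending model): the
# "conventional wisdom" lives in one constant, assumed_appreciation.

def approve_mortgage(income, loan_amount, home_value,
                     assumed_appreciation=0.05):
    """Approve if the projected collateral value covers the loan.

    `assumed_appreciation` encodes the 2005-era belief that prices
    always rise; it is the model's single point of failure.
    """
    # Project the home's value five years out under the baked-in belief.
    projected_value = home_value * (1 + assumed_appreciation) ** 5
    ltv = loan_amount / projected_value   # loan-to-value against projection
    dti = loan_amount / (income * 30)     # crude debt-to-income proxy
    return ltv < 0.9 and dti < 1.0

# Identical applicant; only the embedded belief about prices changes.
optimistic = approve_mortgage(50_000, 450_000, 400_000)      # assumes +5%/yr
pessimistic = approve_mortgage(50_000, 450_000, 400_000,
                               assumed_appreciation=-0.05)   # prices fall
print(optimistic, pessimistic)
```

If everyone running the model shares the optimistic default, taking the computers away changes nothing: the humans would plug the same +5% belief into their mental arithmetic and approve the same loans.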
I haven’t read the whole article yet (I plan to) but from what you’ve excerpted, it sounds like he’s claiming at least part of the problem stemmed from too much abstraction (a topic you’ve covered a few times, I believe). Credit scores and financial algorithms being abstractions of the ‘case-by-case’ examinations that were once done for potential borrowers.
I agree, there should be a balance between the use of abstractions and direct investigation. However, that balance must be struck by the individual institutions, at their own peril. Only by allowing firms to risk both reward and failure will we find the right balance. (Of course, that presupposes that firms are actually allowed to fail.)
Brian, I don’t think the crux of the author’s article was that we were relying too much on automation, but rather that we relied on a few large, general algorithms which don’t account for artificial incentives, human nature, and other quickly changing events ‘on the ground.’ Unfortunately, if a single lender had in fact been doing 100% manual investigation of its loans, it would have been dragged down with the crisis, too.
Sean: yeah, I definitely agree that when you have a computer program for predicting things like you describe, you’re going to have problems. I’m just not sure that human investigators wouldn’t have had the same blind spot. Plus, humans are also very likely to settle into a routine and stop looking beyond it.
To sum up, I am totally for the benefits of decentralization, I’m just not sure that the computer systems deserve the blame here.
And I’m totally with you on the risk/reward thing. If the penalties for failure are perceived to be zero, both humans and the algorithms they write will overextend themselves.
I haven’t read the essay. Does he present evidence that human judgment was any better? Ian Ayres suggests that human judgment usually performs worse than even the most simplistic algorithms.
I’ve usually heard that no-doc loans were a big problem. Maybe people didn’t respect data enough!