DUBAI—The consequences of the Japanese earthquake—especially the ongoing crisis at the Fukushima nuclear power plant—resonate grimly for observers of the American financial crash that precipitated the Great Recession. Both events provide stark lessons about risks and about how badly markets and societies can manage them.
Of course, in one sense, there is no comparison between the tragedy of the earthquake—which has left more than 25,000 people dead or missing—and the financial crisis, to which no such acute physical suffering can be attributed. But when it comes to the nuclear meltdown at Fukushima, a common theme links the two events.
Experts in both the nuclear and finance industries assured us that new technology had all but eliminated the risk of catastrophe. Events proved them wrong: Not only did the risks exist, but their consequences were so enormous that they easily erased all the supposed benefits of the systems that industry leaders promoted.
Before the Great Recession, America's economic gurus—from the head of the Federal Reserve to the titans of finance—boasted that we had learned to master risk. "Innovative" financial instruments such as derivatives and credit-default swaps enabled the distribution of risk throughout the economy. We now know that they deluded not only the rest of society but themselves.
These wizards of finance, it turned out, didn't understand the intricacies of risk, let alone the dangers posed by "fat-tail distributions"—a statistical term for distributions in which rare events with huge consequences, sometimes called "black swans," occur far more often than standard models predict. Events that were supposed to happen once in a century—or even once in the lifetime of the universe—seemed to happen every 10 years. Worse, not only was the frequency of these events vastly underestimated; so was the astronomical damage they would cause, much like the meltdowns that keep dogging the nuclear industry.
Research in economics and psychology helps us understand why we do such a bad job in managing these risks. We have little empirical basis for judging rare events, so it is difficult to arrive at good estimates. In such circumstances, more than wishful thinking can come into play: We might have few incentives to think hard at all. On the contrary, when others bear the costs of mistakes, the incentives favor self-delusion. A system that socializes losses and privatizes gains is doomed to mismanage risk.
Indeed, the entire financial sector was rife with agency problems and externalities. Ratings agencies had incentives to give good ratings to the high-risk securities produced by the investment banks that were paying them. Mortgage originators bore no consequences for their irresponsibility, and even those who engaged in predatory lending or created and marketed securities that were designed to lose money did so in ways that insulated them from civil and criminal prosecution.
This brings us to the next question: Are there other "black swan" events waiting to happen? Unfortunately, some of the really big risks that we face today are most probably not even rare events. The good news is that such risks can be controlled at little or no cost. The bad news is that doing so faces strong political opposition, because there are people who profit from the status quo.
We have seen two of the big risks in recent years, but have done little to bring them under control. By some accounts, the way the last crisis was managed may have increased the risk of a future financial meltdown.
Too-big-to-fail banks, and the markets in which they participate, now know that they can expect to be bailed out if they get into trouble. As a result of this "moral hazard," these banks can borrow on favorable terms, giving them a competitive advantage based not on superior performance but on political strength. While some of the excesses in risk-taking have been curbed, predatory lending and unregulated trading in obscure over-the-counter derivatives continue. Incentive structures that encourage excess risk-taking remain virtually unchanged.