The problem in all of these cases is that the safety system introduced what an engineer would call a new "failure mode"—in other words, a new way for things to go wrong. And that was precisely the problem in the financial crisis.
The notorious repackaged mortgages—in the form of residential mortgage-backed securities (RMBS) and collateralised debt obligations (CDOs)—were financial safety systems that offered exciting new ways to blow things up. The safety mechanism was a complex legal structure to alter the distribution of the risks from mortgage defaults. In principle, this made the risks more comprehensible and the right sort of investor could take on the right degree of risk. In practice, the repackaged mortgages—especially when the repackaging was repeated several times—made the behaviour of the risks as incomprehensible as a malfunctioning nuclear reactor.
For instance, investors had to take a view on how likely mortgage defaults were, and the extent to which—like London buses—they all arrived at the same time. (Clearly, there was some degree of clustering: the challenge was to estimate it without much data.) Few people realised that the same CDO safety system, which appeared to parcel risks in a predictable way, also made investors vulnerable to errors in their assumptions. A one-in-a-million chance of taking a loss could become a million-to-one chance against getting any money back at all.
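That sensitivity to clustering can be made concrete with a toy simulation (a hedged sketch: the pool size, default probability, correlation values, and tranche attachment point below are illustrative assumptions, not figures from the actual market). In a simple one-factor model, each mortgage defaults when a mix of a shared "market" shock and its own idiosyncratic shock falls below a threshold; a senior tranche that takes losses only after a fifth of the pool defaults looks bulletproof if defaults are independent, but not once they cluster:

```python
import math
import random
from statistics import NormalDist

def count_defaults(n_loans=100, p=0.05, rho=0.0):
    """One draw of a mortgage pool: each loan defaults when a weighted mix
    of a shared market shock and its own shock falls below a threshold."""
    threshold = NormalDist().inv_cdf(p)
    market = random.gauss(0.0, 1.0)  # shock common to every loan
    defaults = 0
    for _ in range(n_loans):
        own = random.gauss(0.0, 1.0)  # loan-specific shock
        if math.sqrt(rho) * market + math.sqrt(1.0 - rho) * own < threshold:
            defaults += 1
    return defaults

def senior_hit_prob(rho, attach=20, trials=5000):
    """Estimated chance that defaults eat through the junior tranches
    protecting a senior tranche attaching at `attach` defaults of 100."""
    hits = sum(count_defaults(rho=rho) > attach for _ in range(trials))
    return hits / trials

random.seed(7)
print(senior_hit_prob(rho=0.0))  # independent defaults: effectively zero
print(senior_hit_prob(rho=0.4))  # clustered defaults: no longer negligible
```

Small changes to the assumed correlation move the senior tranche's risk by orders of magnitude, which is exactly the trap described above: the structure is only as safe as the correlation estimate fed into it.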
Part of the problem was what risk experts call "risk compensation". Just as safety belts appear to encourage drivers to feel safe and take more risks, the apparent safety of CDOs encouraged banks to bet their entire franchise on them. (In both cases, bystanders often become casualties.) But the subtler effect was that of these new failure modes.
A second spectacular example is another financial safety system, the credit default swap. Credit default swaps were explicitly designed as a safety measure, a form of insurance against a borrower, whether a company or a subprime CDO, failing to pay its debts. Many major banks turned to the insurance giant AIG, or to "monoline" insurers, which sold credit default swaps to insure the banks' investments. When the investments went bad, AIG and the monolines couldn't pay—and the safety measure suddenly became a source of systemic risk. Bonds are given a credit rating by rating agencies, but if the bonds are insured they inherit the credit rating of their insurer. When AIG's credit rating was downgraded, so were the ratings of the bonds it was insuring—which meant some banks were forced to sell those bonds to meet their regulatory obligations. Financial institutions didn't need any involvement at all with subprime products: if they were holding bonds that AIG or the monolines had insured, they could be sucked into the crisis by the unexpected interaction of a safety regulation and an insurance-based safety mechanism.
A series of measures intended to guarantee the safety of individual financial institutions had brought the system to its knees. To industrial safety experts, such unintended consequences are commonplace. So if a Rube Goldbergesque accretion of one safety system after another is not the solution to industrial or financial catastrophes, what is?
. . .
The 1979 crisis at Three Mile Island remains the closest the American nuclear industry has come to a major disaster. It would have been far less grave had the operators understood what was happening. Emergency coolant pumps were useless because a maintenance error had trapped them behind closed valves. Another valve jammed in the open position, allowing pressurised radioactive water to shoot into the sump below the reactor, eventually exposing the reactor core itself and risking a complete and catastrophic meltdown.
The operators were baffled by the confusing instrumentation in the control room. One vital warning light was obscured by a paper repair tag hanging from a nearby switch. The control panel seemed to show the jammed-open valve had closed as normal—in fact, it merely indicated that the valve had been "told" to close, not that it had responded. Later, the supervisor asked an engineer to check a temperature reading that would have revealed the truth about the jammed valve, but the engineer looked at the wrong gauge and mistakenly announced that all was well.
All these errors were understandable given the context. More than 100 alarms were filling the control room with an unholy noise. The control panels were baffling: they displayed almost 750 lights, each with letter codes, some near the relevant flip switch and some far. Red lights indicated open valves or active equipment; green indicated closed valves or inactive equipment. But since some of the lights were typically green and others were normally red, it was impossible even for highly trained operators to scan the winking mass of lights and immediately spot trouble.
I asked Philippe Jamet, the head of nuclear installation safety at the International Atomic Energy Agency, what Three Mile Island taught us. "When you look at the way the accident happened, the people who were operating the plant, they were absolutely, completely lost," he replied.
Jamet says that since Three Mile Island, much attention has been lavished on the problem of telling the operators what they need to know in a format they can understand. The aim is to ensure that never again will operators have to try to control a misfiring reactor core against the sound of a hundred alarms and in the face of a thousand tiny winking indicator lights.
At Hinkley Point B, next to the main plant, is a low-rise office building of an inoffensive style that has adorned countless nondescript business parks. At the heart of that building is the simulator: a near-perfect replica of Hinkley Point B's control room. The simulator has a 1970s feel, with large sturdy metal consoles and chunky Bakelite switches. Modern flat-screen monitors have been added, just as in the real control room, to provide additional computer-moderated information about the reactor. Behind the scenes, a powerful computer simulates the nuclear reactor and can be programmed to behave in any number of inconvenient ways.
"There have been vast improvements over the years," explained Steve Mitchelhill, the simulator instructor who showed me around. "Some of it looks cosmetic, but it isn't. It's about reducing human factors."
"Human factors", of course, means mistakes by the plant's operators. And Mitchelhill went out of his way to point out a deceptively simple innovation introduced in the mid-1990s: coloured overlays designed to help operators understand, in a moment of panic or of inattention, which switches and indicators are related to each other.
The lesson for financial regulators might seem obscure. But at key points during the crisis, they were as "lost" as the operators of Three Mile Island. For example, as Lehman Brothers teetered on the brink of insolvency, all eyes were on the doomed bank. Tim Geithner, the man responsible for supervising Wall Street's banks, had a meeting at the request of Robert Willumstad, the chief executive of AIG. As an insurance company, AIG was regulated by the US Treasury and by state regulators, so it was far from obvious why Willumstad was Geithner's problem. Geithner was exhausted after an overnight flight and distracted by what appeared to be the overwhelmingly important question: what to do about Lehman Brothers. According to journalist Andrew Ross Sorkin, Geithner kept Willumstad waiting because he was on the phone to Dick Fuld of Lehman Brothers, and fidgeted throughout the meeting because he didn't really understand why Willumstad wanted to see him. Willumstad, eager to get some support from the Federal Reserve but anxious not to panic Geithner, handed him a briefing note summarising how exposed banks were to a potential failure at AIG. When he left, Geithner filed the note with barely a glance and went back to the problem of Lehman Brothers. Five days later the government realised that AIG was about to wreck the financial system and gave it a vast injection of capital.