Future Tense

Letting Autopilot Off the Hook

Why do we blame humans when automation fails?

Advances in autonomous technology shouldn't have to create unsafe or unfair situations for the humans who use them.

Milenko Bokan/Thinkstock

While media pronouncements about the wonders and perils of automated control in transportation may feel modern, they are far from a recent phenomenon. “Like it or not, the robots are slowly taking over a driver’s chores,” warned a journalist in a 1958 review of the first car with cruise control, a Chrysler Imperial. In fact, autopilot technology in airplanes evolved alongside the development of fixed-wing flight in the early 20th century, and as early as 1916, a New York Times article describing an aviation autopilot announced, “New device makes airships foolproof!”

While most discussions around autonomous cars are set in the world of hypothetical thought experiments, we might get the best perspective on the future implications of self-driving cars by examining the past. Histories of proto-autonomous systems, like cruise control and autopilot, can foreshadow the ways in which an initially disruptive technology gets incorporated into the social fabric.

As part of a two-year project at the research institute Data & Society examining the social implications of artificial intelligence, my colleague Tim Hwang and I wanted to understand how then-new issues of liability had been dealt with in the context of autopilot and cruise control. In a case study we published last year examining aviation autopilot litigation, we observed a counterintuitive focus on the pilot as the locus of responsibility, even as human control over flight was increasingly replaced by automated systems. In our judgment, the degree of control over an action and the degree of responsibility for that action were misaligned.

You can see a good example of this incongruity in contemporary aviation. A modern aircraft spends most of its time in the air under the control of a set of technologies, including an autopilot, GPS, and flight management system, that govern almost everything it does. Yet even while the plane is being controlled by software, the pilots in the cockpit are legally responsible for its operation. U.S. Federal Aviation Administration regulations specify this directly, and courts have consistently upheld it. So when something goes wrong, pilots become "moral crumple zones"—largely totemic humans whose central role becomes soaking up fault, even if they had only partial control of the system. Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may become simply a component—accidentally or intentionally—that bears the brunt of the moral and legal responsibilities when the overall system malfunctions.

The metaphor of the moral crumple zone isn't just about scapegoating. The term is meant to call attention to how automated and autonomous systems deflect responsibility in unique, systematic ways. While the crumple zone in a car is meant to protect the human driver, the moral crumple zone protects the integrity of the technological system itself.

In a paper presented earlier this year at the law and technology conference WeRobot, we described a few examples of accidents in which moral crumple zones have emerged. The clearest example is the case of Air France Flight 447. En route from Brazil to France in 2009, the Airbus A330 flew into a storm, and ice crystals formed on the plane's pitot tubes, the part of the avionics system that measures airspeed. The frozen pitot tubes sent faulty data to the autopilot. The autopilot did what it was designed to do in the absence of reliable data: It automatically disengaged, bouncing control of the aircraft back to the pilots. Caught by surprise, they were overwhelmed by an avalanche of information: flashing lights, loud warning signals, confusing instrument readings. In the words of the official French report, they lost "cognitive control of the situation." A series of errors and incorrect maneuvers by the pilots ended with the plane crashing into the Atlantic Ocean, killing all 228 people on board.

News coverage of the accident report emphasized this series of pilot errors. Some reports noted that other factors were involved, including a known but not yet fixed mechanical problem with pitot tubes failing due to icing on A330s. What contemporary reports did not point out—though scholarship in human-factors engineering emphasized it, and a later essay in Vanity Fair brought it to public attention—was that the pilots' mistakes, some of them novice errors, may have been at least partly due to the automation. They had to take charge of the aircraft under conditions of weather and altitude that modern pilots rarely experience, because the autopilot is almost always in control. In addition to changing the relationship between control and responsibility, automated systems change the very kind of control that can be exercised by a human operator.

Commercial aviation has become tremendously safer over the years, thanks largely to automation. Nonetheless, the idea that automation eliminates human error is dangerous. Automation has not eliminated human error so much as created opportunities for new kinds of error. For example, as pilots fly less and "supervise" automatic systems more, their basic flying skills can atrophy—the so-called "automation paradox." Pilots' situational awareness also tends to decrease as automation increases. Part of the problem is that cultural perceptions tend to elevate automation and fault the humans who work alongside it.

Designers, engineers, regulators, and even the general public have created an inconsistent dynamic in which automation is seen as safer and superior in most instances, unless something goes wrong, at which point humans are regarded as safer and superior. Unfortunately, this casts the human in a role people perform poorly: jumping into an emergency at the last minute. The "human in the loop" becomes the weak link, rather than the point of stability. For instance, when the A330's predecessor, the A320, was unveiled in 1986, an aviation expert was quoted as saying that the plane's all-new fly-by-wire system would "be smart enough to protect the airplane and the people aboard it from any dumb moves by the pilot." In the case of AF 447, automation seems, instead, to have contributed to those "dumb moves."

Will a similar situation face those operating driverless cars? There are important differences between the aviation and automotive contexts, as well as in the degrees of autonomy that may be in place in cars. Still, intelligent and autonomous systems of every form have the potential to generate moral crumple zones because they distribute control, often in obfuscated ways, among multiple actors across space and time. In the various approaches being pursued by companies, we see different ways in which human operators might get caught in moral crumple zones.

In the case of Tesla, a human-in-the-loop design paradigm, like that of aviation automation, is creating a potentially unsafe role for humans within the system. Consider what happened in May, when a Tesla Model S operating on Autopilot crashed into a stopped van on a European highway. The dashcam video shows a van parked in the left lane with its hazard lights on; the Tesla, also in the left lane and following the moving car in front of it, drives on and in fact speeds up, slamming into the parked vehicle. The Tesla had been "locked on" to the moving car ahead, which swerved into the right lane to avoid the van. The Tesla, following only that car's speed and not its physical path, did not register the significance of the swerve. The driver of the Model S did nothing, explaining afterward that the car "did not brake as it normally does." The driver, based on past experience, expected the autopilot to work perfectly.
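To see why that expectation is risky, consider a minimal, hypothetical sketch of a follow-the-leader cruise policy, written here in Python. It is not Tesla's actual implementation; the names, the set speed, and the rule of discarding stationary radar returns are assumptions made only for illustration, based on how simple adaptive cruise systems are commonly described.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical illustration only; not Tesla's real control logic.

@dataclass
class Track:
    distance_m: float  # gap to the tracked object, in meters
    speed_mps: float   # the object's speed; 0.0 means it is stationary

SET_SPEED_MPS = 33.0   # driver's cruise set point (about 120 km/h), assumed

def target_speed(lead: Optional[Track]) -> float:
    """Naive follow-the-leader policy: match a moving lead car,
    otherwise resume the set cruise speed. Stationary returns are
    discarded, mimicking how simple radar-based cruise systems filter
    out non-moving clutter such as signs and parked vehicles."""
    if lead is None or lead.speed_mps < 1.0:
        return SET_SPEED_MPS                    # no moving target: resume set speed
    return min(lead.speed_mps, SET_SPEED_MPS)   # otherwise match the lead car

# Following a moving car at 25 m/s: the policy matches it.
print(target_speed(Track(distance_m=40.0, speed_mps=25.0)))  # 25.0

# The lead car swerves away; the only object ahead is the parked van
# (speed 0), which the filter throws out, so the policy commands the
# full set speed, i.e., acceleration toward a stationary vehicle.
print(target_speed(Track(distance_m=60.0, speed_mps=0.0)))   # 33.0
```

Nothing in a policy like this represents the stopped van at all; the moment the lead car swerves away, the "safe" behavior of matching its speed becomes acceleration toward an obstacle, and the only safeguard left is the human behind the wheel.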

The problem is that the Tesla autopilot does not work perfectly every time. Tesla's response to this crash, and to others like it, is that users of its Autopilot systems have been warned to keep their hands on the wheel at all times. Despite the plethora of YouTube videos and media coverage of Tesla's Autopilot mode that suggest otherwise, officially, a driver is always responsible. Elon Musk, referring to an earlier release of Tesla Autosteer software, emphasized,

It’s almost to the point where you can take your hands off … but we’re very clearly saying this is not a case of abdicating responsibility. … [The system] requires drivers to remain engaged and aware when Autosteer is enabled. Drivers must keep their hands on the steering wheel.

Unfortunately, the driver here is in the same position as the pilot in an automated cockpit: taking control at the last minute when the automation fails is exactly the kind of task that humans are poorly suited to undertake.

In contrast, Google designers seem by and large aware of the pitfalls that surround supervised automation. Google's self-driving car program switched focus after deciding that it could not reasonably solve the "handoff problem": having the car handle all the driving except for the most unexpected or difficult situations, when control must pass back to a human.

Still, the Google driverless car may be creating unintended consequences for how we think about responsibility on the roads. For example, the current conversations around driverless-car accidents, notwithstanding the most recent accident, emphasize the infallibility of the technology. When Google first made public the accident record of its self-driving car tests in 2015, the announcement and subsequent press coverage declared that none of the 12 accidents had been caused by the Google car; all were the fault of human drivers. As a safety precaution, a human driver had always been present during testing, prepared to take over if anything went wrong, and in fact, one of the accidents occurred while a Google car was being driven entirely by a human in a parking lot.

But there was one surprising pattern: 10 of the 12 crashes were rear-end accidents. Perhaps these kinds of accidents are simply the most common on the stop-and-go streets of Palo Alto. It is also possible that the Google car effectively caused some of the accidents by driving in a way contrary to the expectations of the drivers around it. Driving is as much about reacting to other drivers, and being able to anticipate what they are likely to do, as it is about obeying stop signs and avoiding obstacles. Maybe the Google car drives more cautiously or slowly than most drivers in the area, and so the human drivers behind it anticipated its movements incorrectly. The accidents might have been caused by a fundamental miscommunication between a driverless car and a human-driven car. In this instance, responsibility is shifted to the other drivers on the road, and these human drivers enter the moral crumple zone, taking on responsibility for a failure where, in fact, control over the situation is shared.

Current thinking around autonomous technologies tends to isolate the technological artifact from the social conditions of its use and production. Automated and autonomous transportation systems can and will make our travel safer. But advances in technology shouldn't have to create unsafe or unfair situations for the humans who use them. We need to demand that designers, manufacturers, and regulators pay attention to the reality of the human in the equation. At stake is not only how responsibility may be distributed in any robotic or autonomous system, but also how the value and potential of humans may be allowed to develop in the context of human-machine teams.

This article is part of the self-driving cars installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we'll choose a new technology and break it down. Read more from Futurography on self-driving cars.

Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.