Future Tense

Get Off the Trolley Problem

Self-driving cars shouldn’t have to choose who to protect in a crash.

The Mercedes-Benz F015 Luxury in Motion concept car, a self-driving, hydrogen-electric plug-in hybrid, makes its debut at the 2015 International Consumer Electronics Show.

Chris Farina/Corbis via Getty Images

Imagine you are driving down a two-lane road at about 45 miles per hour, cruising home. You see a group of kids walking home from school about 100 yards ahead. Just as you’re about to pass by them, an oncoming 18-wheeler swerves out of its lane and is about to hit you head on. You have seconds, tops, to decide: Sacrifice yourself, or hit the children so you can avoid the truck.

I like to think that, if asked in advance, most people would choose not to plow into the kids. As the automation of driving advances, there’s a way to “hard-code” that decision into vehicles. Many cars can already detect when a toddler in a driver’s blind spot is about to be run over in a driveway. They even beep when other vehicles are in danger of being bumped. Transitioning from an alert system to a hard-wired stop is technically possible. And if that’s possible, so is an automatic brake that would prevent a driver from swerving to save herself at the expense of many others.
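To make the jump from alert to override concrete, here is a minimal sketch of what such a hard-coded constraint could look like. Every name, field, and number in it is a hypothetical illustration, not any automaker’s actual safety logic: it simply refuses any maneuver that endangers pedestrians whenever an alternative exists.

```python
# Hypothetical sketch of a hard-coded pedestrian-protection constraint.
# All names and numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrians_in_path: int   # people the maneuver would endanger
    occupant_risk: float       # estimated risk to the car's occupants (0 to 1)

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick a maneuver, refusing any option that endangers pedestrians
    if an alternative exists (the 'hard stop' version of today's alerts)."""
    safe_for_pedestrians = [m for m in options if m.pedestrians_in_path == 0]
    candidates = safe_for_pedestrians or options
    # Among the remaining options, minimize risk to the occupants.
    return min(candidates, key=lambda m: m.occupant_risk)

if __name__ == "__main__":
    options = [
        Maneuver("swerve toward sidewalk", pedestrians_in_path=6, occupant_risk=0.1),
        Maneuver("brake in lane", pedestrians_in_path=0, occupant_risk=0.8),
    ]
    print(choose_maneuver(options).name)  # -> "brake in lane"
```

The point of the sketch is only the shape of the rule: pedestrian safety acts as a constraint before any weighing of occupant risk, rather than as one factor traded off against it.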

But the decision can also be coded the other way—to put the car occupants’ interests above all others. Christoph von Hugo, Mercedes’ manager of driver assistance systems, active safety, and ratings, appeared to endorse this vision for more fully autonomous vehicles in a recent article in Car and Driver. “You could sacrifice the car, but then the people you’ve saved, you don’t know what happens to them after that in situations that are often very complex, so you save the ones you know you can save,” he said. “If you know you can save at least one person, at least save that one. Save the one in the car.” (Mercedes has since said that von Hugo was “quoted incorrectly” and that “[f]or Daimler it is clear that neither programmers nor automated systems are entitled to weigh the value of human lives. Our development work focuses on completely avoiding dilemma situation by, for example, implementing a risk-avoiding operating strategy in our vehicles.”)

Some ethicists classify decisions like von Hugo’s as a solution to a “trolley problem,” after the famous series of thought experiments, originated by Philippa Foot and elaborated by Judith Jarvis Thomson, that challenge simple utilitarianism. Thomson, a professor of philosophy, stylized ethical dilemmas in a series of hypotheticals. Would you divert an oncoming trolley away from hitting five schoolchildren if your decision meant it killed one person instead? Would you push a very heavy person off a bridge onto the tracks in front of the trolley to slow it down and keep it from hitting five people farther along? The trolley problem is a classic example of an “intuition pump,” capable of eliciting responses ranging from the judicious to the zany. It has even been satirized in memes.

So how do you solve a trolley problem? Some believe the answer is to give car owners ever more granular control. Enlightened drivers might choose a general rule of “save me first” but soften it to include more self-sacrificial options in case of mass casualties. Or they might not. Mere awareness that others are not willing to sacrifice for the common good could tip the system toward selfishness, or worse. The same individualism that has undermined U.S. organ donation rates would probably be even more influential in driver decision-making here.

So perhaps increasingly autonomous cars should abide by common rules, setting the same terms of safety and danger for all. The Moral Machine project at the Massachusetts Institute of Technology is collecting users’ responses to ethical dilemmas. With a large enough data set on how research subjects respond to simulated crashes, programmers might try to ensure that the car code of the future reflects our current judgments (or at least those of the people who participate in the Moral Machine). For example, if 80 percent of subjects chose self-sacrifice in the “hit the truck or the children” scenario at the beginning of this article, that could become the coded rule for such tragic choices. Programmers might also tilt the code in a more utilitarian direction, nudging automation toward better societal outcomes.
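As a rough sketch of how survey results could become a coded rule, the following hypothetical example tallies responses per scenario and adopts the majority choice once it clears a threshold. The scenario names, response counts, and the 50 percent threshold are invented for illustration; they do not reflect the Moral Machine’s actual data or methodology.

```python
# Hypothetical sketch: turning aggregated survey responses into per-scenario rules.
# Scenario names, responses, and the threshold are illustrative assumptions.

from collections import Counter

def rule_from_survey(responses: dict[str, list[str]],
                     threshold: float = 0.5) -> dict[str, str]:
    """Map each crash scenario to the action chosen by at least
    `threshold` of respondents; mark it 'unresolved' otherwise."""
    rules = {}
    for scenario, choices in responses.items():
        action, count = Counter(choices).most_common(1)[0]
        share = count / len(choices)
        rules[scenario] = action if share >= threshold else "unresolved"
    return rules

if __name__ == "__main__":
    survey = {
        "truck_vs_children": ["self_sacrifice"] * 80 + ["hit_children"] * 20,
        "swerve_vs_single_adult": ["swerve"] * 55 + ["stay"] * 45,
    }
    print(rule_from_survey(survey))
    # {'truck_vs_children': 'self_sacrifice', 'swerve_vs_single_adult': 'swerve'}
```

Even this toy version surfaces the policy questions the article raises: who gets surveyed, where the threshold sits, and what happens in the “unresolved” cases are choices someone has to make and defend.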

Noodling about variations on the trolley problem could occupy car-makers, programmers, and research subjects for years. What if only one child were sacrificed by a decision to avoid the truck? Do elderly persons deserve more, less, or the same consideration as children? But a better question might be: Why are automobiles traveling so close to pedestrians in the first place? The nonprofit safety advocacy organization Transportation for America has studied the enormous (and troubling) variation in pedestrian death rates among major American cities. The worst places, such as Florida suburbs and exurbs, feature urban design that makes it all too easy for drivers of any stripe—man or machine—to crash into pedestrians. Safety is not just a problem of code—physical infrastructure matters, too. And the disastrous scenario with the 18-wheeler and the group of kids might never happen if proper dividers separated oncoming lanes of traffic.

Even if those stronger barriers don’t come to pass, though, worry over trolley problems should not freeze autonomous car initiatives. Human error is the root cause of tens of thousands of U.S. traffic deaths each year. The Department of Transportation has rightly prioritized self-driving cars’ development, and local authorities could do more to advance their adoption. But the question of who is sacrificed in tragic scenarios is not one that can be submerged in the general utilitarian calculus of lives saved via robot cars. Both law and software code have an expressive function as well, favoring some of our values over others.

To preserve those values, we need to avoid uncoordinated, individualized programming choices made by each automaker. Libertarians might call the “driver-first” approach an inevitable, market-based “solution” to trolley problems. But the market here wouldn’t be complete without giving the car’s potential victims a chance to pay its programmers not to hit them. It’s not hard to imagine who would win that bidding war. For self-driving cars, a “devil take the hindmost” option of self-protection above all else would further erode already fraying social solidarity.

It’s important to remember, though, that this isn’t the only moral problem that comes with increasing highway automation. As Kate Crawford and Ryan Calo argue,

The trolley problem offers little guidance on the wider social issues at hand: the value of a massive investment in autonomous cars rather than in public transport; how safe a driverless car should be before it is allowed to navigate the world (and what tools should be used to determine this); and the potential effects of autonomous vehicles on congestion, the environment or employment.

There is already concern that the firms most likely to control fleets of self-driving cars, such as Uber or Google, aim to replace (rather than complement) existing public infrastructure. We could call this the “no trolley, bus, or subway” problem: increasing carbon footprints, congestion, and marginalization of underserved communities thanks to bad transport policy.

There will always be conflicts among cars, pedestrians, robots, drones, and bikers over the proper share of space and respect each deserves. We need individualistic, technical solutions to some of the problems that will result as new modes of driving arise and robot delivery services share sidewalks with people. But we also need holistic, big-picture thinking. As policymakers set the rules of the road for 21st-century mobility, they should listen to the urban planners, social scientists, and advocates who’ve spent decades thinking about how to build better, more livable communities. Transport isn’t just a technical problem: It’s a human and social one, with political implications far beyond arid intellectual models of utilitarian markets.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.