Fender-Benders Tell Us More About Self-Driving Cars Than the Trolley Problem Ever Could

The citizen’s guide to the future.
April 13, 2017, 7:15 AM
FROM SLATE, NEW AMERICA, AND ASU

In Praise of Self-Driving Cars and Fender-Benders

Minor crashes will tell us more about autonomous vehicles than the trolley problem ever could.

An Uber self-driving car drives on March 28 in San Francisco.

Justin Sullivan/Getty Images

Two weeks ago, a driver failed to yield to another vehicle making a turn at a cross street just minutes away from my office in Tempe, Arizona. The two cars collided, causing the one making the turn to roll on its side. Sadly, this kind of thing happens all the time. A colleague who lives in the neighborhood in which the crash happened told us, “That intersection has crashes weekly—and not just fender-benders. … [T]wo weeks ago, there was a car in our yard.”

So why did this particular crash make the national news? It just so happened that one of those vehicles was driving itself.


As part of a national test of its autonomous vehicle program, Uber has in recent months introduced self-driving cars in San Francisco, Pittsburgh, and our own town of Tempe. These cars see the world around them using a powerful array of sensors in combination with a LIDAR (Light Detection and Ranging) unit on the roof. The LIDAR emits pulses of laser light into the car's immediate surroundings, and those pulses bounce back according to the environment's shape and characteristics, much like the way a bat uses echolocation to "see" its prey at night. The cars are also equipped with optical sensors that register, for example, when a traffic light goes from yellow to red, and cameras for spotting pedestrians and other obstacles.
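The distance measurement at the heart of LIDAR is simple time-of-flight arithmetic: a pulse of light travels out to an obstacle and back, so the range is half the round-trip time multiplied by the speed of light. A minimal sketch of that calculation (the function name and the sample timing are illustrative, not taken from any vendor's system):

```python
# Time-of-flight ranging, the principle behind LIDAR distance measurement.
# A light pulse travels out to an obstacle and back; the one-way range is
# half the round-trip time multiplied by the speed of light.

SPEED_OF_LIGHT_M_S = 299_792_458  # meters per second, in vacuum

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Return the one-way distance in meters for a given round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# A pulse that returns after 200 nanoseconds indicates an obstacle
# roughly 30 meters away.
print(round(range_from_time_of_flight(200e-9), 1))  # ~30.0
```

A real unit fires hundreds of thousands of such pulses per second while spinning, building a 3-D point cloud of everything around the car.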

Just as with any crash, the news coverage immediately following the Tempe incident raised questions of who was at fault. But in this case, the discussion focused on whether the Uber self-driving car could have anticipated the collision and moved out of the way. Uber temporarily took its self-driving fleet off the road while it assessed the situation, but it put the cars back on the road a few days later, and we've started passing them on the street again.

This wasn’t the first high-profile mishap with an autonomous vehicle. Back in September, one of Google’s self-driving cars was involved in a crash with a public transit bus. The car was driving itself at the time, but the human driver slammed on the brakes when he anticipated the collision. Unfortunately, neither he nor the car was fast enough. No one was hurt, thankfully, but the incident prompted similar questions of why the Google car failed to get out of the way.

We study the risks of emerging technologies, so we’re very glad to see people beginning to think through some of the stickier questions of responsibility when it comes to these vehicles. When analysts and policymakers talk about these kinds of events, they often turn toward theoretical and philosophical thought experiments such as the trolley problem.


Even if you didn’t know that’s what it was called, you have likely encountered a variant of the trolley problem. It usually goes something like this:

You see a runaway trolley barreling down some tracks, quickly approaching a junction. Just past that junction, five people are bound to the tracks, unable to free themselves. The trolley is heading straight for them. For some reason, you are standing next to a lever that switches the tracks, so it seems you might be able to avert the coming disaster—but then you look down the other tracks and see a baby lying there. You have two options:

1. Do nothing, making you complicit in the deaths of five innocent people
2. Save the five by switching the tracks, allowing the trolley to kill the one baby
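Reduced to code, the utilitarian version of this choice is almost trivially simple, which is part of why critics find it unsatisfying: real driving decisions rarely arrive as two cleanly enumerated outcomes with known casualty counts. A toy sketch (the action names and numbers are hypothetical, not a proposal for how cars should actually decide):

```python
# A toy utilitarian decision rule for the trolley problem: pick whichever
# action minimizes the number of resulting deaths. The point of the sketch
# is how little of real-world driving this captures, not that cars do
# (or should) decide this way.

def choose_action(outcomes: dict) -> str:
    """Given a mapping {action: deaths_if_taken}, return the action
    with the fewest deaths."""
    return min(outcomes, key=outcomes.get)

# The classic setup: do nothing and five people die, or pull the lever
# and one person dies.
print(choose_action({"do_nothing": 5, "switch_tracks": 1}))  # switch_tracks
```

The hard part, of course, is everything the sketch assumes away: perfect knowledge of outcomes, only two options, and agreement that counting deaths is the right measure in the first place.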

In the case of self-driving cars, this problem often comes in the form of asking whether we care more about pedestrians or vehicle passengers, or the passengers of the autonomous vehicle or the human drivers with whom it shares the road. MIT even made a game out of it.


It’s important to think about ethical decision-making in autonomous cars. But the trolley problem, with its action movie–like scenario, can overshadow questions that are more mundane but also more pertinent to most people. It’s not as much fun to have a philosophical conversation about how real people in real situations deal with the risks of living with these new technologies. Most of us won’t have to make a life-or-death decision like in the trolley problem, but we may well have to deal with technologies that decide who gets the raw end of a car-on-car, or car-on-human, situation. Take the woman suffering whiplash after her self-driving car braked too fast or the school crossing guard, accustomed to making eye contact with drivers when putting up stop signs, who now has to learn to trust autonomous vehicles to brake for kids walking to school. So many of the questions here are mundane: How will this technology change the shape of personal injury law? What do self-driving cars mean for license regulations?

Of course, there’s no such thing as a completely benign crash. Someone always pays a price for vehicle-related damage, whether it’s bumps and bruises, stress and anxiety, or loss of earnings (especially if driving is your business). But fender-benders involving self-driving cars that don’t cause deep and lasting damage do provide insights into challenges that are easily overlooked by the philosophically minded.

Here, there’s something to be learned from relatively low-stakes crashes like the one in Tempe, but only if we step back and resist the temptation to turn the individuals affected into players in grand moral and ethical dramas. And in the case of the Tempe crash, the local news media did an excellent job of exactly that.

According to the Arizona Republic, the human driver reported in her statement to police that “As far as I could tell, the third lane had no one coming in it so I was clear to make my turn. Right as I got to the middle lane about to cross the third I saw a car flying through the intersection but couldn’t brake fast enough to completely avoid collision.” It’s interesting that she focuses not on who or what was driving the other car, but instead on what the vehicle was doing in relation to her.


Local reporting also highlights the statements from two witnesses, who each provided a different account of who was at fault. One echoed the police report in stating that the human-driven vehicle struck the Uber. But the other said that it was the Uber vehicle’s “fault for trying to beat the light and hitting the gas so hard.”

This latter statement in particular is telling in that the witness, perhaps unconsciously, assumed that the Uber had motivations behind its behavior, just like a human driver—in this case, beating the light to avoid getting held up.

These statements speak volumes about how people perceive and interact with self-driving cars: not as some esoteric technology demanding deep levels of ethical and philosophical angst, but as vehicles that can cause harm and sometimes behave unpredictably, just like human-driven cars.

It’s this rubber-meets-the-road pragmatism that we need when thinking through the very real challenges and opportunities of self-driving vehicles, and it’s exactly the kind of pragmatism that fender-benders offer invaluable insight into.

And those insights help us focus on potential impacts that aren’t as dramatic as deaths or serious injuries but are nevertheless important to those who suffer them. For instance, local news outlets emphasized the fact that there were no passengers in the self-driving Uber. If there had been, how—if at all—would the car have modified its behavior to prevent minor injuries, including whiplash? And if the car could have taken evasive action, how would it be programmed to balance potential bumps and bruises to passengers and driver against the potentially more serious injuries associated with a collision?

There’s a really important national debate to be had about how to define the role of autonomous vehicles in our increasingly complex society, but let’s not get sidetracked by armchair philosophizing or tantalizing moral dilemmas. Instead, we should be looking at real challenges faced by real people.

And for goodness’ sake, let’s not overreact and force every self-driving Uber to execute a hand-brake turn when it senses hesitation in a human driver, turning our neighborhood here in Arizona into a scene from Fast and Furious 9: Uber Tempe Drift.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.

Elizabeth Garbee is a Ph.D. student at the School for the Future of Innovation in Society at Arizona State University, studying risk innovation and science education policy.

Andrew Maynard is a leading expert on the responsible development and use of emerging technologies and is the director of the ASU Risk Innovation Lab.