Future Tense

The Big Problem With Self-Driving Cars

And how Toyota thinks it can solve it.

Photo: A good, old-fashioned human-driving Toyota in a showroom. (Toru Hanai/Reuters)

Who should drive cars? It’s a more complicated question than it might seem.

For decades the answer in the United States has been “people over the age of 16 (or so, depending on the state) who have passed a driving test.” We humans aren’t doing such a great job of it, however. More than 30,000 Americans die on the country’s roads every year.

Recently, companies like Google, Uber, and Tesla have presented us with an alternative answer: Artificially intelligent computers should drive our cars. They’d do a far safer job of it, and we’d be free to spend our commutes doing other things.

There are, however, at least two big problems with computers driving our cars.

One is that, while they may drive flawlessly on well-marked and well-mapped roads in fair weather, they lag behind humans in their ability to interpret and respond to novel situations. They might balk at a stalled car in the road ahead, or a police officer directing traffic at a malfunctioning stoplight. Or they might be taught to handle those encounters deftly, only to be flummoxed by something as mundane as a change in lane striping on a familiar thoroughfare. Or—who knows?—they might get hacked en masse, causing deadlier pileups than humans would ever blunder into on our own.

Which leads us to the second problem with computers driving our cars: We just don’t fully trust them yet, and we aren’t likely to anytime soon. Several states have passed laws that allow for the testing of self-driving cars on public roadways. In most cases, they require that a licensed human remain behind the wheel, ready to take over at a moment’s notice should anything go awry.

Engineers call this concept “human in the loop.”

It might sound like a reasonable compromise, at least until self-driving cars have fully earned our trust. But there’s a potentially fatal flaw in the “human as safety net” approach: What if human drivers aren’t a good safety net? We’re bad enough at avoiding crashes when we’re fully engaged behind the wheel, and far worse when we’re distracted by phone calls and text messages. Just imagine a driver called upon to take split-second emergency action after spending the whole trip up to that point kicking back while the car did the work. It’s a problem the airline industry is already facing as concerns mount that automated cockpits may be eroding pilots’ flying skills.
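
To make the arrangement concrete, here’s a minimal, purely illustrative sketch in Python. It is not anyone’s actual software; the classes, names, and confidence numbers are all hypothetical. It just captures the shape of “human in the loop”: the computer drives by default and hands control back to the person behind the wheel whenever it loses confidence.

```python
# Illustrative sketch of "human in the loop": the computer drives by
# default; a licensed human is the fallback when the software is unsure.
# All names and numbers here are hypothetical, not any real system's API.

CONFIDENCE_THRESHOLD = 0.9  # below this, the software gives up


class Autopilot:
    def plan(self, scene):
        """Return a driving command and the software's confidence in it."""
        if scene == "clear freeway":
            return "hold lane", 0.99
        # Novel situations -- a stalled car, an officer waving traffic
        # through -- are exactly where confidence collapses.
        return "unknown", 0.2


class HumanDriver:
    def take_over(self, scene):
        # The article's worry lives here: a person who has been kicking
        # back the whole trip must now react in a split second.
        return f"human handles: {scene}"


def drive(scene, autopilot, human):
    command, confidence = autopilot.plan(scene)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"computer handles: {command}"
    return human.take_over(scene)  # the human is the safety net


if __name__ == "__main__":
    ap, person = Autopilot(), HumanDriver()
    print(drive("clear freeway", ap, person))
    print(drive("police officer directing traffic", ap, person))
```

The flaw the article describes sits in that last branch: the architecture assumes the human fallback is alert and ready, which is precisely what hours of passive riding undermine.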

Google is all too aware of this problem. That’s why it recently shifted its approach to self-driving cars. It started out by developing self-driving Toyota Priuses and Lexus SUVs—highway-legal production cars that could switch between autonomous and human-driving modes. Over the past two years, it has moved away from that program to focus on building a new type of autonomous vehicle that has no room for a human driver at all. Its new self-driving cars come with no steering wheel, no accelerator, no brakes—in short, no way for a human to mess things up. (Well, except for the ones with whom it has to share the roads.) Google is cutting the human out of the loop.

Car companies are understandably a little wary of an approach that could put an end to driving as we know it and undermine the very institution of vehicle ownership. Their response, for the most part, has been to develop incremental “driver-assistance” features like adaptive cruise control while resisting the push toward fully autonomous vehicles.

On Friday, however, Toyota announced plans to spend $50 million on an artificial intelligence program of its own. It launched the initiative with a bang, hiring Defense Advanced Research Projects Agency robotics expert Gill Pratt and founding joint research centers at two of the world’s most prestigious robotics labs (the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory and the Stanford Artificial Intelligence Laboratory). Those are not the moves of a company that is burying its head in the sand and hoping the Google car goes up in flames.

What’s most interesting about the Toyota news, however, is not the money or the actors involved. It’s the company’s fresh approach to the “human in the loop” problem.

Rather than build self-driving cars, Toyota’s stated goal is to build “intelligent” cars that augment rather than usurp their human drivers. The cars of the future, Fei-Fei Li of the Stanford Artificial Intelligence Laboratory told me, could function as “guardian angels” that quietly keep tabs on what we’re doing behind the wheel—and step in to save us when we screw up. For instance, they might learn to anticipate obstacles in the roadway, like a children’s ball bouncing into the street, and brake automatically to avoid them. Or they might sense when we’re growing tired behind the wheel and alert us before we fall asleep.

Li actually used the term “human in the loop” when describing Toyota’s approach to me, as did Pratt in an illuminating interview with the New York Times’ John Markoff. But I’d frame it differently. In the vision of the future they sketched, it isn’t the human who’s there mainly to make sure the computer doesn’t go haywire. It’s the other way around.

Maybe we should call that “computer in the loop.”
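
For contrast, here’s the earlier toy sketch inverted into that “computer in the loop” arrangement (again, every name is hypothetical): the human drives all the time, and the software merely watches, overriding only when it detects an emergency.

```python
# Illustrative sketch of "computer in the loop": the human drives;
# the software is a guardian angel that intervenes only in emergencies.
# All names are hypothetical, not any real system's API.


class GuardianAngel:
    def check(self, situation):
        """Return an override action if the situation demands one."""
        if situation == "ball bounces into street":
            return "brake automatically"
        if situation == "driver falling asleep":
            return "sound drowsiness alert"
        return None  # otherwise, stay quiet and keep watching


def drive(situation, human_command, angel):
    override = angel.check(situation)
    if override:
        return f"computer intervenes: {override}"
    return f"human drives: {human_command}"


if __name__ == "__main__":
    angel = GuardianAngel()
    print(drive("routine commute", "hold lane", angel))
    print(drive("ball bounces into street", "hold lane", angel))
```

Note how the default branch flips: in the first sketch the human is the exception handler; here the computer is.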

It isn’t an utterly novel approach. A cynic would point out that it’s conveniently consistent with the sort of incremental technological progress the mainstream auto industry has already embraced. Features that come standard on many cars today, like rearview radars and cameras that help you park, are examples of technology intervening in specific scenarios to save us from ourselves (though interestingly, a recent study suggested that people don’t always use these features). Li pointed out that even antilock brakes could be considered a form of vehicle intelligence.

It is, however, different in a crucial way from the approach that Google, Tesla, and others are taking today. Google is trying to build a self-driving car that does everything for us. Tesla is working on “autopilot” technology that does the relatively easy stuff for us—like negotiating light traffic on a freeway commute—while leaving the hard stuff to human drivers for the time being. Toyota, as Pratt explained it, appears to be interested in technology that does some of the really hard stuff for us—like sensing an impending collision and taking evasive action—while leaving us in control the rest of the time.

Li told me she doubts we’ll be ready to cede the wheel to fully driverless cars anytime soon. But she left open the possibility that Toyota’s approach could converge with those of Google and Tesla in the long term, as intelligent cars gradually earn our trust. “Nobody would now advocate for a ‘human in the loop’ for washing clothes,” she said. “We’re very happy the washing machine takes on the entire task for us. Could it be that eventually driving will become like clothes washing, and we really won’t want to participate? I don’t know. It’s an interesting question.”

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.