Google self-driving cars lack a human’s intuition for what other drivers will do.

A Google Car’s Crash Shows the Real Limitation of Driverless Vehicles

The citizen’s guide to the future.
March 1, 2016, 12:23 PM

The Trollable Self-Driving Car

Humans are pretty good at guessing what others on the road will do. Driverless cars are not—and that can be exploited.

Look out!

Elijah Nouvelage/Reuters

On Feb. 14, a Google self-driving car attempted to pass a municipal bus in Mountain View, California. The bus did not behave as the autonomous car predicted, and the self-driving car crashed into it while attempting to move back into its lane. The Google car was traveling at the stately speed of 2 mph, and there were no injuries. Google released a statement accepting fault and announcing that it was tweaking its software to avoid this type of collision in the future.

There is good reason to believe, though, that tweaks to the software might not be enough. What led the Google car astray was its inability to correctly guess what the bus driver was thinking and then react to it. Google said in its statement:

Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day.

Yes, people sometimes misunderstand one another’s intentions on the road. Still, people have an intuitive fluency with this kind of social negotiation. Self-driving cars lack that fluency, and achieving it will be incredibly difficult.

For the past five years, my collaborators and I in the Vision Sciences Lab at Harvard University have been exploring the differences in capabilities between people and today’s best AIs. My studies have focused on simple tasks, like detecting a face in a still image, where AIs have become reasonably skilled. But I have become increasingly unsettled by what our research implies for much harder AI tasks, above all the task of driving a car. Self-driving cars have enormous promise. The improvements to traffic, safety, and the mobility of the elderly could be dramatic. But no matter how capable the AI, humans just behave differently.

In February the National Highway Traffic Safety Administration ruled that the AI software controlling a self-driving car can count as a driver, smoothing the road for nationwide testing of self-driving cars. It did this despite the fact that, as security researcher Mudge pointed out on Twitter, the NHTSA lacks a methodology for determining whether the software works correctly. Still, the federal government is moving quickly to support self-driving cars. Transportation Secretary Anthony Foxx has proposed spending $4 billion to help bring them to market, while private corporations have made massive, ongoing investments. Projects from companies like Google and Tesla Motors have been the most visible, but traditional car companies like Toyota (which is pouring $1 billion into a new AI institute) and GM (which has committed $500 million to a joint venture developing autonomous urban cars with Lyft) are spending freely to develop this technology as well. These billions of dollars may be pushing us toward technology that creates as many problems as it solves.

The biggest difference in capability between self-driving cars and humans is likely to be theory of mind. Researchers like professor Felix Warneken at Harvard have shown that even very young children have exquisitely tuned senses for the intentions and goals of other people. Warneken and others have argued this is the core of uniquely human intelligence.

Researchers are working to build robots that can mimic our social intelligence. Companies like Emotient and Affectiva currently offer software with some ability to read emotions on faces. But so far no software remotely approaches the ability of humans to constantly and effortlessly guess what other people want to do. The human driving down that narrow street may say to herself, “None of these oncoming cars will let me go unless I’m a little bit pushy,” and then act on that instinct. Getting software to behave that way will be one of the greatest challenges of making human-like AI.

The ability to judge intention and respond accordingly is also central to driving. From determining whether a pedestrian is going to jaywalk to slowing down and steering clear of a driver who seems drunk or tired, we do it constantly while behind the wheel. Self-driving cars can’t do this now, and they likely won’t be able to for years. And this isn’t just about routine-but-confusing interactions like the one between the Google self-driving car and the Mountain View bus.

Even the best AIs are easy to fool. State-of-the-art object recognition systems can be tricked into thinking a picture of an orange is really an ostrich. Self-driving cars will be no different. They will make errors—which is not so bad on the face of it, as long as they make fewer than humans. But the kinds of errors they make will be errors a human would never make. They will mistake a garbage bag for a running pedestrian. They will mistake a cloud for a truck.

All of this means that self-driving cars will be incredibly easy to troll. Sticking a foot out in the road or waving a piece of cloth around might be enough to trigger an emergency stop. Tapping the brakes of your car could trigger a chain reaction of evasive maneuvers. Perhaps a few bored 12-year-olds could shut down L.A. freeways with the equivalent of smiley faces painted on balloons.

To be clear, I invented these examples. However, security researchers have already shown that laser range-finding systems can be trivially fooled into thinking a collision is imminent. As self-driving cars increase in complexity (and they are among the most complex computer systems ever made) and as their sensors get more complex, the number of ways they can fail will increase. These failures will almost all be completely different from the ways that human drivers can fail.

Roads are shared resources. Car commuters, pedestrians, cyclists, taxis, delivery vehicles, and emergency vehicles all occupy the same space. Self-driving cars introduce a whole new category of road user, and one that entirely lacks the intuitive understanding all those other road users share.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.

Sam Anthony studies the intersection of human and computer vision at the Vision Sciences Lab at Harvard University. Follow him on Twitter.