Future Tense

How Tesla Fixed a Deadly Flaw in Its Autopilot

The ingenious solution that could make self-driving cars much safer—if it works.

Tesla CEO Elon Musk speaks about new autopilot features during a Tesla event in Palo Alto, California, on Oct. 14, 2015.

Reuters/Beck Diefenbach

Tesla believes it has found a solution to the flaw in its autopilot system that led to the death of a Model S driver on a Florida highway in May.

It’s the kind of fix only Tesla could make.

CEO Elon Musk on Sunday announced a software update for the company’s vehicles that significantly changes how autopilot works, without changing any of the hardware involved. Until now, the autopilot feature—which can self-pilot the car for stretches of highway driving—has relied primarily on a video camera and image-processing software to see the road ahead. A radar system and ultrasonic sensors provided additional data, but the system was programmed not to act on radar data alone because of some fundamental limitations of the technology.

With version 8.0 of the company’s software, which will be automatically installed in Teslas around the world in the coming weeks, that has changed. Bolstered by new signal-processing techniques and a crowdsourcing system called “fleet learning,” radar now enjoys a co-starring role alongside video in the autopilot system. The latest software also includes a new mechanism to prevent drivers from repeatedly ignoring warnings to keep their hands on the wheel. Musk says he’s confident that, together, these changes will make autopilot much safer than before—eventually, up to three times safer than manual driving.

If Musk is right, this could be a turning point in the company’s development of self-driving car technology. A demonstrably safer autopilot would go a long way toward quieting skeptics who believe Tesla’s approach to automation is fundamentally flawed. (I’ve been among these skeptics.) And the company’s ability to introduce the change to all of its autopilot-equipped vehicles automatically, via an over-the-air software update, reminds us that Tesla is playing by a different set of rules than its established auto-industry rivals.

Yet, as Musk acknowledged, there are few guarantees when it comes to highway safety. And Tesla’s newfound reliance on radar introduces some risks of its own.

Within the auto industry, autopilot has been controversial from the outset. Where other automakers proceeded cautiously with automation, Tesla launched autopilot in fall 2015 with the promise that it could take control entirely for stretches of highway driving. After a series of YouTube videos showed drivers goofing off or even sleeping at the wheel, the company added some restrictions designed to ensure human attentiveness. But it has resisted calls from critics to curtail or even disable autopilot until it could guarantee that drivers won’t abuse it.

Skeptics’ worst fears seemed to be borne out when Tesla disclosed in June that a Model S driver, Joshua Brown, had died at the wheel while his car was on autopilot in Florida the month before. As Brown cruised down the highway, perhaps watching a Harry Potter movie while the car steered itself, a semi truck made a left turn in front of him. Autopilot is supposed to brake for solid obstacles in the car’s path, but on a brightly lit day, its computer vision system apparently failed to distinguish between the white side of the truck’s trailer and an overhead highway sign. The Model S plowed into the trailer broadside without braking.

It’s exactly the kind of mistake that an alert human driver would never make. For that reason, it was widely viewed as damaging not only to Tesla’s reputation but to the long-term future of vehicle automation. Other companies working on self-driving cars, including Ford and Volvo, were quick to distance themselves from Tesla’s approach. They argued that relying on human drivers to cover for the technology’s shortcomings was a recipe for disaster. Instead, they’re focusing on building cars with more advanced sensor systems, such as LIDAR, that can drive themselves without human intervention.

They might still be right. But Tesla believes it has numbers on its side, and its case is getting stronger.

After Brown died, Musk defended Tesla’s technology by pointing out that, on average, one person dies on U.S. highways for every 90 million vehicle miles traveled. In contrast, people had driven Tesla’s autopilot 130 million miles before the first confirmed death. That number, Musk said on Sunday, is now up to 200 million.

Asked whether the new software would have prevented the sort of crash that claimed Joshua Brown’s life, Musk said, “We believe it would have.” He admitted that “these things cannot be said with absolute certainty,” but said he believes the company’s new autopilot system will eventually be up to three times safer than manual driving.

How will it accomplish that? The short answer is, “It’s complicated.” But Musk attempted a longer and more descriptive answer in an hour-long conference call with reporters on Sunday and in a post published on the Tesla blog.

Radar, Musk said, is problematic as a control mechanism for autopilot, because the signal doesn’t readily distinguish between different types of objects. For instance, the dish-shaped bottom of a metal soda can reflects radio waves in a way that could make it appear to be a deadly obstacle. Yet those same waves will travel right through a solid oak tree, rendering it nearly invisible. To avoid constantly screeching to a halt in front of soda cans, Tesla’s autopilot system did not treat radar data alone as sufficient to trigger emergency braking. Instead it relied on processing data from the car’s video camera, which can readily identify many common obstacles based on their appearance.
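In rough pseudocode, the gating Musk described might look like the sketch below. All of the names, the thresholds, and the time-to-collision heuristic are invented for illustration; this is not Tesla’s code, just a way to picture radar serving as a supporting signal rather than a braking trigger on its own:

```python
# Hypothetical sketch of the pre-8.0 gating described above: a radar return
# alone never triggers emergency braking; the vision system must also
# recognize a real obstacle. All names and numbers are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RadarReturn:
    distance_m: float          # range to whatever reflected the signal
    closing_speed_mps: float   # how fast the car is approaching it

def time_to_collision_s(r: RadarReturn) -> float:
    if r.closing_speed_mps <= 0:
        return float("inf")    # not closing on the object
    return r.distance_m / r.closing_speed_mps

def should_emergency_brake(radar: Optional[RadarReturn],
                           camera_sees_obstacle: bool,
                           ttc_threshold_s: float = 2.0) -> bool:
    if radar is None:
        return False
    # A soda can and a truck trailer can look alike to radar, so braking is
    # gated on the camera confirming the return is a genuine obstacle.
    return camera_sees_obstacle and time_to_collision_s(radar) < ttc_threshold_s
```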

But computer vision comes with some weaknesses of its own, as Brown’s crash highlighted. For instance, inclement weather or glare from the sun can hinder the software’s ability to distinguish objects that can be seen clearly on radar. Ditto unusual objects that the software hasn’t been trained to recognize, like a heap of scrap metal or a pile of feathers in the roadway. One should trigger emergency braking, while the other can be driven straight through—but the computer doesn’t know that.

It’s the kind of conundrum that might seem to require a whole new suite of hardware and sensors to solve. That would mean, at best, a massive recall of existing vehicles. At worst, it could doom the entire autopilot project, which was premised on the idea that computer vision and radar can make for a cheap, road-ready alternative to pricier and more complex LIDAR-based systems.

But Tesla has at its disposal the power to update its cars’ firmware remotely, a practice it pioneered in the auto industry. And the company believes it has found a way to use that firmware to address autopilot’s shortcomings.

Specifically, Musk said Tesla has improved its collection and processing of radar signals, so that it can use a series of rapid-fire snapshots of the same object to generate a sort of 3-D picture of it. “By comparing several contiguous frames against vehicle velocity and expected path, the car can tell if something is real and assess the probability of collision,” the blog post explains.
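As a loose illustration of that idea: a real, stationary obstacle should appear to close on the car at exactly the car’s own speed from one radar frame to the next, while noise and false echoes jump around. The sketch below, with invented numbers and no claim to match Tesla’s actual signal processing, shows the flavor of such a consistency check:

```python
# A loose illustration of the frame-comparison idea quoted above: an object
# that shows up at consistent, physically plausible ranges across several
# contiguous radar frames (given the car's own speed) is probably real.
# Invented numbers throughout; this is not Tesla's signal processing.

def looks_real(ranges_m, ego_speed_mps, frame_dt_s=0.1, tolerance_m=1.0):
    if len(ranges_m) < 3:
        return False  # too few contiguous frames to judge
    for earlier, later in zip(ranges_m, ranges_m[1:]):
        # A stationary obstacle should close at exactly our own speed.
        expected = earlier - ego_speed_mps * frame_dt_s
        if abs(later - expected) > tolerance_m:
            return False  # range jumped around: likely noise or a false echo
    return True

# A stationary obstacle 30 m ahead at 20 m/s should close about 2 m per frame:
print(looks_real([30.0, 28.1, 26.0, 24.1], ego_speed_mps=20.0))  # True
print(looks_real([30.0, 31.5, 27.0, 29.0], ego_speed_mps=20.0))  # False
```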

Meanwhile, the company plans to use what it calls “fleet learning” to identify and map obstacles that have the potential to flummox its radar system. For instance, a rise or dip in the road ahead can make a highway sign or overpass appear to be directly in the vehicle’s path. It’s a problem Tesla is “uniquely” positioned to solve, Musk said, because it tracks all of its vehicles and can log how they respond to a given situation.

When the new software is first deployed, the radar alone won’t trigger emergency braking. Instead, the company’s vehicles will simply log and map objects that show up on radar as potential obstacles, taking note when the car passes safely under or over them. Over time, the result will be a crowdsourced, geocoded whitelist of illusory obstacles for other vehicles to ignore. As false positives become rarer, autopilot will begin to trigger braking when radar identifies obstacles that don’t appear on the whitelist—gradually at first, then more sharply over time as the system’s confidence in its radar data grows.
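A minimal sketch of that whitelist logic, again with hypothetical names and data structures rather than anything Tesla has published, might look like this:

```python
# Sketch of the crowdsourced whitelist described above: geocoded radar returns
# the fleet has safely driven past are ignored, and everywhere else the brake
# response scales with confidence. Hypothetical names and data structures.

WHITELIST = set()  # (lat, lon) of radar returns known to be illusory

def log_safe_pass(lat, lon):
    # Fleet learning: a car drove under or over this radar return unharmed.
    WHITELIST.add((round(lat, 4), round(lon, 4)))

def braking_fraction(lat, lon, confidence):
    """How hard to brake (0.0 to 1.0) for a radar-only detection."""
    if (round(lat, 4), round(lon, 4)) in WHITELIST:
        return 0.0  # a known false obstacle, like an overpass or highway sign
    # Gentle at first, firmer as the system's confidence in its radar grows.
    return min(1.0, max(0.0, confidence))
```

The graded return value mirrors the rollout Musk described: braking begins gently while false positives are still common and firms up as fleet data accumulates.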

In fact, Musk argued, Tesla’s improved radar system has some advantages over the vaunted LIDAR systems that Google and others are using to develop fully autonomous cars. It is untroubled by rain and snow, and it can penetrate some types of visible objects to suss out what’s behind them. It will also bounce signals off the road beneath the car in front of it to ascertain what’s happening two cars ahead.

Tesla’s rethinking of radar is the sort of ingenious solution—applying existing technologies in novel ways to seemingly insurmountable problems—for which it and Musk have become justly famous. Yet it’s also the sort of intricate technological workaround that may turn out to have unforeseen blind spots of its own.

Partly for that reason, Musk said, Tesla is also adding one more mechanism designed to keep drivers from abdicating their responsibility to stay alert while on autopilot. Those new to the system, Musk said, actually tend to use it rather cautiously. But some grow complacent as they become increasingly comfortable with autopilot. For instance, some have learned to drive nearly hands-free for long stretches by periodically nudging the wheel whenever the car beeps an alert to keep their hands on it. With the new software, Musk said, a driver who does this more than three times in a single hour will be compelled to pull over and park before they can use autopilot again.
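That enforcement is easy to picture as a rolling one-hour window of warnings. The sketch below is illustrative only; Tesla hasn’t published implementation details, and every name here is made up:

```python
# Illustrative sketch of the hands-on-wheel enforcement described above:
# more than three warnings inside a rolling one-hour window locks autopilot
# out until the car is parked. Tesla hasn't published its implementation.
from collections import deque

class AutopilotLockout:
    def __init__(self, max_warnings=3, window_s=3600.0):
        self.max_warnings = max_warnings
        self.window_s = window_s
        self.warning_times = deque()
        self.locked = False

    def on_hands_on_warning(self, now_s):
        # Called each time the car has to beep at the driver to retake the wheel.
        self.warning_times.append(now_s)
        # Keep only the warnings from the last hour.
        while self.warning_times and now_s - self.warning_times[0] > self.window_s:
            self.warning_times.popleft()
        if len(self.warning_times) > self.max_warnings:
            self.locked = True  # autopilot stays off until the car is parked

    def on_parked(self):
        self.locked = False
        self.warning_times.clear()
```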

None of this will bring back Joshua Brown, of course. And it may be of equally little solace to the family of whoever is the next to die behind the wheel of a Tesla on autopilot. But Musk said Sunday that he doesn’t regret rolling out autopilot when he did, and he said the company accepts the inevitability that others will be hurt or killed in the future.

“There won’t ever be zero fatalities,” he said. “There won’t ever be zero injuries.” Rather, the new software is about “minimizing the probability” of injury or death. And on that score, he’s “highly confident this will be quite a substantial improvement.”

Musk did spare the briefest of moments for reflection on Sunday’s call. “Obviously I wish we could have done it earlier,” he said of the autopilot update, alluding vaguely to Brown’s death. There was a slight pause, and then he was back on track, confident as ever: “But the perfect is the enemy of the good. You can’t just come fully formed into some ideal solution. It’s not possible to do that for—well, for anything.”

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture.