Why Tesla’s Autopilot Might Be Fatally Flawed

The citizen’s guide to the future.
July 6, 2016, 5:18 PM
FROM SLATE, NEW AMERICA, AND ASU

Is Autopilot a Bad Idea?

Why Ford, Google, Volvo, and others think Tesla is wrong about automation.

2017 Ford Fusion.

Ford

Ford, like most major automakers these days, is developing high-tech driver assistance technologies for its vehicles. It’s touting, for example, a new adaptive cruise control feature in the 2017 Ford Fusion that works not only on the highway but also in stop-and-go traffic. At the same time, Ford is investing in the long-term development of fully self-driving cars—the kind that take you from point A to point B without any human intervention. It believes they’re part of the future, and it doesn’t want to be left out.

Will Oremus

Will Oremus is Slate’s senior technology writer. Email him at will.oremus@slate.com or follow him on Twitter.

But there’s one big thing that Ford isn’t doing when it comes to vehicle automation, CEO Mark Fields told me in an interview earlier this year. It isn’t building anything resembling Tesla’s Autopilot system, which made headlines Thursday when news emerged that a driver, Joshua Brown, had been killed when his car collided with a semitrailer truck while the feature was engaged.

Ford isn’t alone in its aversion to the type of automation that Tesla is now building into its vehicles. Google has eschewed it, too. And a Volvo executive recently derided Autopilot as an unsafe “wannabe” posing as a more advanced system.

It’s too soon to say they were right, and no one is crowing in the wake of the wreck that killed a 40-year-old ex–Navy SEAL. (The crash, which happened May 7, was disclosed to the public on June 30.) But privately, many in the industry viewed something like that accident as inevitable with Tesla’s technology. And while it’s unlikely to halt progress on self-driving cars, the crash could be a significant setback for the particular brand of automation that Tesla has pioneered.

To understand what Tesla’s Autopilot mode represents—and why some companies are steering clear—it’s helpful to first understand the National Highway Traffic Safety Administration’s admittedly wonky, five-level classification system for vehicle automation. Here it is, straight from the NHTSA website:

No-Automation (Level 0): The driver is in complete and sole control of the primary vehicle controls—brake, steering, throttle, and motive power—at all times.
Function-specific Automation (Level 1): Automation at this level involves one or more specific control functions. Examples include electronic stability control or pre-charged brakes, where the vehicle automatically assists with braking to enable the driver to regain control of the vehicle or stop faster than possible by acting alone.
Combined Function Automation (Level 2): This level involves automation of at least two primary control functions designed to work in unison to relieve the driver of control of those functions. An example of combined functions enabling a Level 2 system is adaptive cruise control in combination with lane centering.
Limited Self-Driving Automation (Level 3): Vehicles at this level of automation enable the driver to cede full control of all safety-critical functions under certain traffic or environmental conditions and in those conditions to rely heavily on the vehicle to monitor for changes in those conditions requiring transition back to driver control. The driver is expected to be available for occasional control, but with sufficiently comfortable transition time. The Google car is an example of limited self-driving automation.
Full Self-Driving Automation (Level 4): The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles.

NHTSA’s scale isn’t universally agreed upon. SAE International has its own scale that includes a Level 5, although the first three levels are similar. And there are some intelligent critiques of both. Still, the guidelines can be instructive as to how both the industry and regulators tend to think about vehicle automation.

It’s relatively common for new cars to come with some form of Level 1 automation. And most car companies are moving toward Level 2, at least in their more high-tech models, if they aren’t already there.

Level 3 is a different story: A vehicle with Level 3 automation is one in which the driver can relax a bit, at least under routine highway driving conditions, and let the software do the work. But the driver still has to keep an eye on the road and be ready to take over in an emergency. As NHTSA notes, Google’s original self-driving cars—in which a human driver sat at the wheel and took over when things got dicey—were an example of Level 3 automation.

But at some point in its testing, Google decided that Level 3 automation was not a good idea. The problem? When machines are doing most of the routine work, humans become the weak link. Their attention naturally wavers, leaving them unready to take over in the sort of emergency that would necessitate human involvement. For that reason, Google fundamentally rethought its approach to vehicle automation and decided to devote all its resources to Level 4 technology. Accordingly, it came out with a self-driving car prototype that was truly “driverless”—it had no gas pedal, brake, or steering wheel. Taking the human out of the loop, Google came to believe, was the only way to make self-driving cars truly safe.

So which level is Tesla’s Autopilot? Interestingly, that’s a matter of some dispute. Tesla bills the system as an example of Level 2 automation, albeit a more advanced version than any other on the market today. It combines adaptive cruise control, automatic steering, automatic lane changes, and automatic emergency steering to do just about everything the driver would normally do under routine highway conditions. (One exception: You still have to flip the turn signal if you want it to change lanes.) Yet Tesla’s software requires drivers to agree that they’ll “remain engaged and aware” and keep their hands on the steering wheel while using Autopilot. In the event of an accident, they’re still responsible.

This approach makes sense if your goal is to automate driving to the greatest extent possible while relying on a technology that’s not yet advanced enough to be fully trusted with people’s lives. In Tesla’s view, it’s a way to “reduce the driver’s workload” while also making cars safer by allowing the software to take over in an emergency.

But others in the industry don’t buy it. In our interview, Fields referred to Autopilot as a Level 3 technology—the kind his company is studiously avoiding. “We were kind of thinking through that, and wondering, ‘What’s the level of driver engagement?’ ” he said, when I asked him why Ford wasn’t pursuing a Tesla-style system of its own. “If a customer’s going to be paying for that level of feature, our concern is, what’s the line in the sand where the driver is going to say, ‘Oh, the vehicle will take care of that’?”

In other words, Ford didn’t see a use case for Level 3 automation that would justify the cost—unless it were to allow drivers to sit back and relax, which would pose safety concerns, since Level 3 systems aren’t designed to be fully reliable in an emergency.

Likewise, Trent Victor, Volvo’s senior technical leader for crash avoidance, told The Verge in April that Tesla’s Autopilot “gives you the impression that it’s doing more than it is,” implicitly encouraging drivers to zone out. Like Ford, Volvo is keeping its Level 2 driver assistance efforts separate from the Level 4 autonomous vehicle it plans to test in 2017.

In 2015, when Toyota announced a $50 million dive into artificial intelligence for vehicles, it said its efforts would focus on building “intelligent” cars, rather than autonomous ones. Whereas Tesla’s Autopilot takes over the easy work and expects the driver to take over under duress, Toyota says it will design systems that do essentially the opposite. That is, they’ll leave the bulk of the driving to the driver but step in with evasive action in case of emergency.

There are a few companies that are taking an approach similar to Tesla’s. A notable example is Mercedes, whose Drive Pilot system is also capable of taking the wheel under certain conditions, albeit for much shorter stretches. But last week’s fatal crash might make the public and the press re-examine their enthusiasm for such systems, if they weren’t already having second thoughts based on all those YouTube videos of Model S drivers going “hands-free” (or even, in one case, taking a nap). The May 7 crash victim was among those who posted such videos, and there are indications he may have been watching a Harry Potter movie when he died.

Tesla has already installed warnings and sensors to try to keep drivers alert at the wheel, and perhaps it will take new steps in light of Brown’s death. But if Tesla really wants drivers to treat Autopilot as a Level 2 technology rather than Level 3, there might be a better solution. Instead of adding features to the system, Tesla could take some away—like the Autosteer feature, which navigates without human intervention. That would leave the software with adaptive cruise control, automatic lane changes, and automatic emergency steering, along with automatic parallel parking.

The sum of those parts would be a system much less exciting than Autopilot. But maybe that’s not a bad thing.