Future Tense

We Finally Know Exactly What Happened in Last Year’s Fatal Tesla Autopilot Crash

There’s good news and bad news.

Photo: A Tesla Model S involved in the fatal crash on May 7, 2016. The top third of the car was sheared off in a collision with a tractor-trailer truck in Florida. (Robert VanKavelaar/Handout via Reuters)

In May 2016, the inevitable happened: A Tesla driver was killed in a crash while his car was on Autopilot, the company’s impressive yet controversial semiautonomous driving software.

Tesla and authorities reported at the time that the Model S had “passed under” a semitrailer that had been making a left turn across traffic on a divided Florida highway, with the trailer shearing the top off the car. But other details were sketchy, including how fast the Tesla had been going, how the driver failed to see the semitruck in broad daylight, and whether the vehicle’s Autopilot system had warned the driver in any way. The truck driver told the Associated Press that he had heard what sounded like a movie blaring from the car after it came to rest farther down the highway.

This week, the National Transportation Safety Board cleared up some lingering questions when it published the factual findings of its investigation into the crash. The driver had indeed been going hands-free, and not just a little bit. According to the 538-page report, the driver had used Autopilot for 37 minutes of the trip, during which he had his hands on the wheel for a grand total of just 25 seconds. That was despite seven separate visual warnings from the system, which flashed the message, “Hands required not detected.” Clearly the driver was determined to use Autopilot as if it were a fully autonomous driving system, rather than the safety mechanism that Tesla says it is meant to be.

However, reports that the driver had been watching a Harry Potter movie, perhaps on an in-dash DVD player, appear to have been unfounded. The NTSB’s main witness to the crash, who saw the car up close after it came to a stop, said there was no audio or video entertainment system playing. (The truck driver who had told the AP he thought he heard a movie playing in the car admitted he was not close enough to the vehicle to see anything or to be certain of what he heard.)

We may never know what the Model S driver was paying attention to instead of the road at the time of his fatal accident. We do know that he took no evasive action whatsoever. The NTSB report says he had set the software to a cruising speed of 74 mph shortly before the crash, and it was apparently still going that speed when the car hit the truck’s trailer broadside.

Interestingly, that suggests the truck driver could have anticipated the crash, if he had assumed the car would maintain a constant speed. The witness said the Model S was visible to the truck over the crest of a rise in the highway for “several seconds” before the truck began its left turn. The implication is that the truck driver either didn’t see the Model S coming or assumed, perhaps naturally, given the truck’s size and visibility, that the Tesla’s driver would brake or change lanes to avoid the collision. That seems like a safe assumption when the oncoming driver is human but perhaps less so when the driver is an automated system. (Samuel English Anthony wrote compellingly last year about some of the expectation problems that can arise when autonomous systems and human drivers share the road.)

To be clear, this is conjecture: The truck driver declined to be interviewed by the NTSB and reportedly claimed to the Florida Highway Patrol that he didn’t see the Tesla coming. Still, it points to one of the big problems for autonomous driving systems, which is that their behavior might be logical on paper yet surprising to other drivers accustomed to sharing the road with fellow humans.

The NTSB report does not yet offer any analysis or conclusions as to who was at fault in the crash. That will come in a follow-up to this factual presentation of the evidence. However, Reuters reports that the truck driver has been charged with a right-of-way violation and is due to appear in court Wednesday.

The National Highway Traffic Safety Administration concluded its own investigation in January without finding that Tesla’s Autopilot system was a safety hazard. In fact, it estimated that the software reduces the likelihood of an accident by 40 percent, which is a compelling statistic in Tesla’s favor. But that was a broader probe of the system’s overall safety more than it was an assessment of the specifics of the fatal Florida crash. (I reported last August on one particularly dramatic instance in which the system may have saved a driver who suffered a pulmonary embolism while at the wheel.)

So what can we take away from the fatal accident now that a year has passed and key facts have been established?

First, it was a sobering reminder that autonomous driving software, for all its impressive achievements, remains far from perfect. Tesla’s Autopilot may have saved more lives than it has cost, but there are still circumstances in which it is prone to error in ways that an attentive human driver never would be.

Second, Tesla’s Autopilot system was (and remains) a work in progress, and it’s clear in retrospect that it was released to the public without safety features that could have helped to prevent at least one death. To its credit, Tesla moved quickly to address two major deficiencies the crash highlighted. It modified the software so that Autopilot stops working if the driver repeatedly ignores warnings to take the wheel, and it tweaked the driving system to allow automatic emergency braking in cases where the radar detects an obstacle but the computer vision system doesn’t. These changes have likely decreased the chances of another accident happening in just this way.

Finally, the fact that the driver seems to have blithely ignored seven visual warnings underscores what some critics had charged all along: that Tesla’s warning system seemed more like a fig leaf to cover its legal liability than a genuine deterrent to hands-free driving. It was apparent from the beginning that Tesla’s Autopilot tacitly encouraged drivers to take their hands off the wheel even as it ostensibly prohibited them from doing so. The company has since remedied this, at least to some extent. But the tension between Autopilot as a safety feature and Autopilot as a convenience or luxury feature remains.

The road to fully autonomous vehicles was always bound to be bumpy. More people will almost certainly die along the way. But a year after the first U.S. fatality from a semiautonomous driving system, progress continues apace. It appears so far that both the public and the government are prepared to tolerate a few casualties as long as the net safety equation appears to be positive. For Tesla and other self-driving car companies, taking to heart the lessons of crashes like this one will increase the chances that it stays that way.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture.