Future Tense

Code Is My Co-pilot

Tesla insists its controversial autopilot software is saving lives. Can it convince the rest of us?

A Tesla Model X crossover on display in September in Fremont, California.

Justin Sullivan/Getty Images

On a balmy Tuesday afternoon in late July, 37-year-old attorney Joshua Neally left work early. He climbed into his new Tesla Model X to drive the 45 minutes from his law office in Springfield, Missouri, to his house in Branson. He was going home to celebrate his daughter’s fourth birthday.

He steered the electric luxury SUV into the gathering rush-hour traffic on Highway 65 and turned on autopilot, a feature unique to Tesla that allows a car to pilot itself—braking, accelerating, steering—for long stretches of freeway driving. It’s a feature that has drawn rebukes from rival companies and sparked investigations by federal regulators after a driver named Joshua Brown was killed in a crash in Florida while using it. Although a Tesla with autopilot is not a true self-driving car, the company’s technology has become a bellwether for Silicon Valley’s ambition to replace human drivers with software.

Neally knew about the Florida crash and the furor that followed. But he had already ordered his Model X after years of waiting and saving, and he was undeterred. When it arrived, he nicknamed it Ender, after the protagonist in the novel Ender’s Game. By July 26, after a week of driving the Model X, he had grown to cautiously trust it to handle the bulk of his hilly, curvy, sometimes traffic-y commute. “I’m not a daredevil,” he told me. “I promised my wife I’d always be paying attention.” He doesn’t drive hands-free, or play Jenga, or nod off, or watch Harry Potter movies, as Brown may have been doing when he plowed into the trailer of a semi truck. He admits, however, that he sometimes checks email or sends text messages on his phone.

Joshua Neally in his Tesla.

Joshua Neally

Neally was about 5 miles out of Springfield, near a set of interchanges just beginning to clog with merging vehicles, when he felt something coil and stiffen in his abdomen. At first he thought it was a pulled muscle. But the pain forked upward from his stomach, he said, until it felt like “a steel pole through my chest.” When it refused to subside, Neally remembers calling his wife and agreeing through gasps that he should probably go to the emergency room.

He doesn’t remember much of the drive after that.

Doctors in Branson told Neally later that he’d suffered a pulmonary embolism, a potentially fatal obstruction of a blood vessel in his lungs. They told him he was lucky to have survived. If you ask Neally, however, he’ll tell you he was lucky to be driving a Tesla. As he writhed in the driver’s seat, the vehicle’s software negotiated 20-plus highway miles to a hospital just off an exit ramp. He manually steered it into the parking lot and checked himself into the emergency room, where he was promptly treated. By night’s end he had recovered enough to go home.

Did autopilot save Neally? It’s hard to say. He acknowledges that, in retrospect, it might have been more prudent to pull over and call an ambulance. But the severity of what was happening dawned on him slowly, and by the time it had, he reckoned he could get to the hospital faster by autopilot than by ambulance. He also wonders whether, without autopilot, he might have lost control of the car and in effect become a deadly projectile when those first convulsions struck.

Neally’s experience is unusual. It doesn’t prove autopilot’s worth as a safety feature any more than Brown’s death disproves it. Yet Neally’s story is the latest of several that have emerged since the Florida crash to paint a fuller picture of autopilot’s merits, in addition to its by now highly publicized dangers. These stories provide at least a measure of anecdotal support for Tesla’s claims that its own data show autopilot—imperfect as it is—is already significantly safer than the average human driver.

That’s going to be a tough sell, though, to the public and regulators alike. Brown’s death ignited a backlash that had been brewing since Tesla CEO Elon Musk announced autopilot in a heavily hyped, Steve Jobs–like launch event in October 2014. Rival car companies felt from the start that Tesla was rolling out autonomous driving features too aggressively, before the technology was safe enough to earn consumers’ trust. The skepticism intensified after Tesla activated the feature last fall, and drivers immediately began posting YouTube videos of themselves abusing it. Tesla calls autopilot a “beta” feature and requires the driver to agree to pay full attention and keep hands on the wheel while it’s in use. But, despite some safety checks introduced in January, the car will still drive itself if the driver goes hands-free.

By mid-July, when a second Tesla crashed while on autopilot, this time on an undivided highway in Montana, Tesla had become the subject of three federal investigations. The National Highway Traffic Safety Administration was looking into the cause of the Florida crash. The National Transportation Safety Board was examining whether automated driving technology poses broader safety risks. Even the Securities and Exchange Commission opened a probe into claims by Fortune magazine that Musk had failed to disclose the Florida autopilot crash to investors in a timely manner, even as he sold some of his own stock in the company. (Tesla has vehemently disputed the claims.)

It added up to a grim cloud over both the company and self-driving car technology, whose future depends on drivers, bureaucrats, and chest-pounding politicians all agreeing to place human lives in the hands of potentially deadly robots. Even Consumer Reports, which has championed the Tesla Model S as one of the greatest cars ever made, called for Tesla to disable autopilot until the technology became more reliable.

“By marketing their feature as ‘Autopilot,’ Tesla gives consumers a false sense of security,” Consumer Reports Vice President Laura MacCleery said. “In the long run, advanced active safety technologies in vehicles could make our roads safer. But today, we’re deeply concerned that consumers are being sold a pile of promises about unproven technology.”

I’m among the critics who have suggested, both before and after the furor over Brown’s death, that Tesla had implemented and publicized the technology in a potentially perilous way. Despite its name, Tesla’s autopilot feature does not give the cars full autonomy, like the concept vehicles made by Google that you might spot tooling around Mountain View, California. The first time I test-drove a Tesla with autopilot, I wrote a review calling it “a safety feature that could be dangerous,” because it encourages drivers to relax while relying on them to take over at a moment’s notice. After Brown’s death, I wrote that the entire autopilot concept might be flawed.

Yet Tesla insists that calls for it to disable autopilot are shortsighted. In fact, the company argues that the critics have it backward: Given that its internal testing data suggest the feature drives more safely than humans do, Tesla maintains that it would be irresponsible and dangerous not to offer autopilot to its customers.

It’s a typically brash stance from a company that has never backed down from a public relations battle, and it’s tempting to dismiss it as another example of Musk’s hubris. Yet, as usual, Tesla makes a strong case for itself. Pressed to defend autopilot’s safety record, the company disclosed to me the process by which it tested the feature and eventually decided to activate it for consumers.

First, the company developed the software and tested it in millions of miles’ worth of computer simulations, using real-world driving data gathered by the sensors on the company’s cars. Next it activated autopilot in about 300 vehicles driven by the company’s own testers, who drove it every day and subjected it to challenging circumstances. (Musk was among them.) Then it introduced autopilot “inertly” via software update into the vehicles of existing Tesla drivers for a testing phase that it called “silent external validation.” In this mode, the autopilot software logged and analyzed every move it would have made if active but could not actually control the vehicle. In this way, Tesla gained millions of miles’ worth of data on autopilot’s performance in consumers’ vehicles before it ever took effect. Finally, the company activated the feature for some 900 consumers who volunteered to test it and provide subjective feedback. Throughout the process, Tesla says, it released updates to improve the software, and by the end it was clear to the company that drivers would be safer on the road with autopilot than without it. At that point, Tesla argues, it would have been a disservice to its drivers to keep the feature inactive.
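To make the “silent external validation” idea concrete, here is a minimal sketch of how shadow-mode logging generally works in driver-assistance software. It is illustrative only, written against assumed names and data structures; it is not drawn from Tesla’s code.

# Illustrative sketch of shadow-mode ("silent external validation") logging.
# All names and structures here are hypothetical, not Tesla's actual code:
# the planner computes what it *would* do, the decision is logged next to
# what the human driver actually did, and the actuators are never touched.

from dataclasses import dataclass, asdict
import json

@dataclass
class ShadowRecord:
    timestamp: float
    proposed_steering_deg: float  # what the software would have commanded
    proposed_brake: float         # 0.0 (no braking) to 1.0 (full braking)
    actual_steering_deg: float    # what the human driver actually did
    actual_brake: float

def shadow_step(sensor_frame, planner, driver_inputs, log_file):
    """Run the planner on a live sensor frame and log its hypothetical
    decision alongside the driver's real inputs, without sending anything
    to the vehicle's controls."""
    proposal = planner.plan(sensor_frame)  # hypothetical planner interface
    record = ShadowRecord(
        timestamp=sensor_frame["timestamp"],
        proposed_steering_deg=proposal["steering_deg"],
        proposed_brake=proposal["brake"],
        actual_steering_deg=driver_inputs["steering_deg"],
        actual_brake=driver_inputs["brake"],
    )
    log_file.write(json.dumps(asdict(record)) + "\n")
    # Note what is missing: no call ever goes to steering or brakes.
    # In shadow mode the software only observes and records; the human
    # remains in control.

Comparing millions of these logged “would have done” decisions against what drivers actually did is what lets a company estimate how a feature would behave on real roads before it ever controls a car.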

Without taking Tesla’s word for it, it’s tough to empirically validate Musk’s contention that autopilot is already saving a significant number of lives. One confounding factor is that we’re less likely to hear about it when something goes right with self-driving features than when something goes wrong. Because Tesla says it anonymizes its tracking data for customers’ privacy, there’s no way for the public to know about these close calls unless drivers self-report them, as Neally did to Tesla after his pulmonary embolism. (Neally agreed to tell his story to Slate after I asked the company for real-world examples of autopilot functioning as a critical safety feature.) Even when we do hear about such incidents, it’s hard to prove the counterfactual that someone would have died if the automation hadn’t kicked in.

Still, Neally’s case isn’t the first in which Tesla safety features appear to have averted catastrophe. In Washington, D.C., on July 16, a Model S was driving on New York Avenue when a pedestrian stepped in front of it. The car slammed on its own brakes, and no one was hurt. The incident was framed in headlines as one in which autopilot may have saved a pedestrian’s life. That isn’t quite accurate, though: Autopilot was turned off at the time, the company told me. It was actually Tesla’s automatic emergency braking system that kicked in. That’s a safety feature that dozens of other car models already offer and that is expected to come standard in virtually all new vehicles sold in the U.S. by 2022. Tesla deserves credit for implementing it, but not for pioneering it.

In another instance, the dashcam on an Uber driver’s Model S captured a scary close call in which a sedan suddenly turned left in front of him, at night, in the rain, with no time to steer around it. Before the driver could react, the Model S braked sharply. It jerked to a stop a few feet from the car, which it otherwise would have plowed into broadside. In that case, it appears that autopilot was in fact engaged.

Meanwhile, the NHTSA has signaled that the fatal Florida crash should not set back efforts to make the roads safer through automation. The auto industry “cannot wait for perfect” to develop and deploy potentially lifesaving technology, NHTSA head Mark Rosekind said.

It’s fair to remain skeptical when Musk claims that autopilot would save 500,000 lives a year if it were deployed universally. Unless the company were to release all its testing and tracking data, which it declines to do, we can’t possibly verify its calculations. One of the few specific figures the company publicized in its blog post was that autopilot had been used for more than 130 million miles of driving before the first fatality, a higher ratio of miles to deaths than the U.S. or global averages. But just one more autopilot-related fatality tomorrow would undermine that claim. The math required to demonstrate conclusively that autopilot is safer than human drivers would be more nuanced, examining injury accidents as well as fatalities and controlling for biases such as the fact that autopilot is recommended mainly for highway driving under favorable conditions.
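For rough context, the arithmetic behind the miles-per-fatality comparison is simple. Assuming the widely cited NHTSA figure of roughly 1.1 traffic deaths per 100 million vehicle miles traveled in the U.S., the numbers below are approximations of my own, not Tesla’s data:

# Back-of-the-envelope comparison, using approximate public figures.
us_deaths_per_100m_miles = 1.1                  # rough NHTSA estimate for 2015
us_miles_per_death = 100_000_000 / us_deaths_per_100m_miles
tesla_miles_per_death = 130_000_000             # Tesla's claim: one fatality so far

print(f"U.S. average: about one death per {us_miles_per_death / 1e6:.0f} million miles")
print(f"Autopilot so far: one death per {tesla_miles_per_death / 1e6:.0f} million miles")
# Roughly 91 million vs. 130 million miles per death. With only one
# autopilot fatality in the sample, though, the comparison is fragile:
# a second fatality tomorrow would cut Tesla's figure nearly in half.

That fragility is exactly why a single additional crash could flip the headline comparison, even if nothing about the underlying technology changed.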

What we know at this point is that autopilot can hurt or kill people if used improperly and that it also has the potential to save people. It’s also fair to assume that the technology will get safer over time as Tesla and other companies study and learn from its errors. The only question is whether the public can or should tolerate its rare mistakes in the meantime.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.