Future Tense

The Paradox of the Self-Driving Car

Tesla’s new autopilot mode is meant to make us safer drivers. What if it does the opposite?

A demonstration of the Tesla Motors P85D Model S at Sports Authority Field in Denver, March 13, 2015. Photo by Craig F. Walker/The Denver Post.

I’m driving a Tesla Model S southbound on Manhattan’s treacherous, narrow-laned West Side Highway, my fingers whitening as I guide the car through a series of curves and tangles of traffic at 55 mph. Then I press a blue button, take my foot off the gas, and drop my hands from the wheel. Now the Model S is driving me.

Through freeway driving conditions hairier than Jacob deGrom’s head, the car maneuvers with aplomb. It’s programmed to keep a safe following distance from the vehicle in front of it, braking and accelerating at the optimal times to maintain its momentum without sacrificing safety. Its side-mounted sensors keep track of the cars on either side: When one edges into its lane, the Tesla carefully steers clear while warning me to retake control.

Here I should probably mention that I was technically never supposed to take my hands off the wheel in the first place. Shortly after I do, the Model S beeps a warning and a message flashes on the dashboard: “Hold steering wheel.” The reason for that is about to become frighteningly clear.

With traffic seemingly easing a bit, I let the autopilot take over again, then decide to flip the right-turn signal, testing Tesla’s “automatic lane change” feature. As the car ahead of me in the middle lane brakes, the Model S brakes gently and eases into the right lane … and then keeps bearing to the right, edging ever closer to the concrete wall that marks the freeway’s far right boundary. In an instant, my faith falters, and I grab the wheel and jerk it away from the barrier. Phew.

In retrospect, that may not have been necessary. At that point on the West Side Highway, the road lacks painted boundaries, which the autopilot system typically relies on for lane-keeping. Instead, the highway is bounded only by concrete barriers. To a human, it’s obvious what the concrete signifies. Not so for a computer vision system that is still in its infancy and hasn’t been specifically trained to recognize walls as such. The Tesla spokeswoman who was riding shotgun told me she’s confident the car’s collision-avoidance sensors would have reacted and corrected course had I waited a moment longer. But she acknowledged that the software still has a lot to learn, including the fact that people tend to get really uncomfortable when their speeding car hovers within inches of a concrete surface.

This is exactly why Tesla is adamant that its autopilot technology is not meant to take the place of human drivers. Even the most optimistic experts believe it will be several years before the technology for fully autonomous vehicles—the kind that companies such as Google, Audi, and Toyota are already testing via high-profile prototypes—is ready for the commercial market. Google in particular has resolved not to try to market its driverless vehicles until it believes they’re so safe that they don’t need humans behind the wheel at all. In fact, its current prototype removes the wheel from the equation entirely, along with all the other manual controls—and for good reason.

Tesla CEO Elon Musk agrees with Google that fully autonomous cars may be the future. But he isn’t content to wait for the technology to be perfected before trying at least some elements of it out on the public. Instead, Tesla is pushing the boundaries of what’s already possible, with features like self-parking and automatic lane change that go a step beyond what mainstream automakers have dared to try. The presumed involvement of human drivers is the safety net that makes these high-wire tricks possible (and legal).

But what if it turns out that human drivers don’t make for a very good safety net?

In the days after Tesla released its autopilot technology via software update to drivers around the country, videos began to pop up on YouTube showing giddy Model S owners trying the same stunt I did: “Look ma, no hands!” More than one also showed the car suddenly veering toward a barrier, or oncoming traffic, and coming perilously close to crashing before the driver regained control. The title of one video: “Tesla Autopilot tried to kill me!”

The most alarming clip, however, came from the Netherlands, where a Model S driver filmed himself sitting in the back seat while the car drove itself.

In Tesla’s earnings call last week, Musk acknowledged the reports of autopilot errors, which he called “not surprising.” Downplaying the current version of the software as “a beta release,” he said it will learn and improve with time. In contrast, he seemed genuinely concerned about the YouTube videos showing drivers abandoning the wheel (something Tesla’s software specifically warns against). “This is not good,” he deadpanned.

Tesla thought it had measures in place to dissuade people from such risky antics. For instance, autopilot mode is designed to first issue a series of warnings, and ultimately slow the car to a stop if the driver unbuckles his seat belt or gets out of his seat. Somehow the Dutch driver appears to have circumvented those precautions. Musk said the company plans to address the problem with fresh restrictions on when and how autopilot can be used.

That seems like the right response. One idea might be to use Tesla’s mapping software to geofence the autopilot system so that it works only on approved highways. (Presumably, Manhattan’s West Side Highway would not be among them, at least until Tesla’s software has had more time to learn its contours.) Such restrictions might also go some way toward allaying the concerns of established automakers who fear that Tesla’s aggressive rollout of autopilot could turn the world against self-driving cars before they even arrive. “If there is a major accident involving automated driving, technological advancement will stop suddenly,” Toyota’s president recently warned reporters at the Tokyo Motor Show, according to the Daily Kanban.
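To make the geofencing idea a little more concrete, here is a minimal illustrative sketch of how a whitelist check might gate an autopilot feature to approved highway segments. This is not Tesla’s actual software; the segment names, coordinates, and function names are all hypothetical, and a real system would match against detailed map data rather than crude bounding boxes.

```python
# Illustrative sketch only: a hypothetical geofence gate for an autopilot feature.
# Nothing here reflects Tesla's actual software; segments and coordinates are made up.

from dataclasses import dataclass


@dataclass
class HighwaySegment:
    name: str
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        """Crude bounding-box test for whether a GPS fix falls inside this segment."""
        return self.min_lat <= lat <= self.max_lat and self.min_lon <= lon <= self.max_lon


# A whitelist of segments where autopilot would be allowed (hypothetical values).
APPROVED_SEGMENTS = [
    HighwaySegment("I-280 (Crystal Springs stretch)", 37.45, 37.55, -122.40, -122.30),
]


def autopilot_allowed(lat: float, lon: float) -> bool:
    """Enable autopilot only inside an approved segment; everywhere else, require manual driving."""
    return any(seg.contains(lat, lon) for seg in APPROVED_SEGMENTS)


if __name__ == "__main__":
    # A fix on the approved I-280 stretch vs. one on an unmapped urban highway.
    print(autopilot_allowed(37.50, -122.35))  # True  -> feature available
    print(autopilot_allowed(40.77, -73.99))   # False -> driver keeps full control
```

The gating logic is the point here, not the geometry: wherever the car’s position falls outside the approved list, the feature simply refuses to engage.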

Yet there’s a contradiction at the heart of the autopilot concept that could frustrate even the most careful efforts to implement it.

Tesla will tell you these features are meant for safety and convenience—to alleviate the tedium of long freeway commutes and to serve as an extra set of virtual eyes on the road, alert to impending trouble. But if that’s all this was, the company could have forgone the “wow” factor of features like automatic lane changing and simply labeled its technology “adaptive cruise control,” as other automakers have. What Musk is really selling is a tantalizing sneak preview of our self-driving future. The primary appeal of owning a Tesla with autopilot is not the slight chance that it will save you from an accident at some point. It’s the heady feeling of being among the first to possess a dazzling new technology.

The problem is, when used properly, autopilot isn’t all that dazzling once the novelty wears off. No one can tell that the car is steering itself if you have your hands on the wheel the whole time. Worse, it doesn’t actually save you much work if you have to remain constantly alert to keep autopilot from suddenly steering you into a wall. If anything, keeping your hands on the wheel and your eyes on the road while autopilot steers the car is even more boring than actually driving—which makes it all the more difficult to pay full attention.

This isn’t to say that autopilot is without value. Before the company rolled the feature out to the public, one of its lead engineers gave me a ride in another P85D with a prerelease version of the software, this time in a very different setting. He took me up the San Francisco Peninsula in light midday traffic on an immaculately paved, meticulously painted section of Interstate 280 that’s billed as “the world’s most beautiful freeway.” The car made easy work of the hills and switchbacks overlooking Crystal Springs Reservoir, allowing him to relax a bit and enjoy the scenery even as he remained ready to intervene if necessary. Its driving style was precise, careful, and, well, robotic.

The engineer told me the technology has made his daily commute more pleasant. But it also seemed to defeat some of the purpose of buying a $100,000-plus supercar that can hold its own in a drag race with a Ferrari. If you’re going to let your car pilot itself, why splurge on one whose selling point is that it’s a joy to drive? 

In theory, beyond the element of convenience, the software adds a layer of redundancy that makes everyone on the road safer. Musk said Tesla’s data show the feature has already helped to prevent crashes, and it has yet to cause one. Even if it makes some mistakes, autopilot might still be safer than the human drivers it’s meant to assist. If nothing else, a Tesla on autopilot is a Tesla that’s not being whipped from 0 to 60 in under three seconds. It’s one that will brake automatically to avoid a disastrous pileup on a Seattle street. (Granted, you don’t necessarily need a fancy system like Tesla’s for that.) And it’s certainly safer than a car without autopilot whose driver is paying little heed to the road ahead. Thanks to our collective smartphone addiction, there is no shortage of those these days.  

In practice, however, Tesla’s autopilot doesn’t just complement the efforts of the human drivers who use it—it alters their behavior. It strengthens the temptation to check your email while stuck in traffic. It might even be the thing that convinces a drunk person at a party to go ahead and try to drive home. If Tesla can’t successfully clamp down on autopilot abuses via software update, I wouldn’t be surprised to see these very arguments advanced in a slew of lawsuits in the coming months.

I’ve written before about the problem of the “human in the loop,” and the scary start to Tesla’s autopilot program illustrates it vividly. Google’s solution was to cut the human out of the loop. (Toyota, as I explained, has a different solution that’s worth exploring.)

Like many of Google’s solutions, this one makes perfect sense—on a whiteboard. In the real world, the company is likely to find it awfully difficult to sell the public on driverless cars if humans don’t first learn to trust the technology more gradually, through experience with autopilot software like Tesla’s. Legislators, too, are going to look unkindly upon fully self-driving cars if they find that partially self-driving cars are more trouble than they’re worth. In other words, if Tesla’s autopilot experiment were to go up in flames, Google’s could be collateral damage.