Future Tense

The Terrifying Technological Unknown

Why people are so bad at assessing the potential dangers of self-driving cars.

People will adapt to the self-driving car, just as they have adapted to the printing press, telephone, and the internet. Above, Google’s self-driving Lexus cars outside the Google X labs in Mountain View, California. (Brooks Kraft LLC/Getty Images)

Hysteria over emerging technologies is at least as old as the printing press. Even the wheel might have evoked concerns about people straying too far, the complexity of travel overwhelming their minds. With every subsequent advance, from electricity to the telephone to radio to the loudspeaker to the internet, email, and Tinder, people have warned that these tools would compromise our mental well-being, our intelligence, our safety, our ability to communicate, and our moral compasses. So it is unsurprising that people have voiced similar concerns about driverless cars, a technology that will soon become commonplace.

Many simply feel a general uneasiness about riding as a passenger in a self-driving car. Others have expressed worry about an FBI report suggesting these cars could become deadly weapons in the wrong hands. Still others have noted that the cars are susceptible to cyberattacks in which hackers break into the autonomous systems and control them remotely, essentially cutting the brakes from afar.

Much of that historical hysteria has proved overstated, and although the driverless car represents a new frontier in transportation, fears about this technology are likely exaggerated as well. It is impossible to fully gauge the safety of self-driving cars until they become a dominant mode of transportation, but preliminary studies suggest they will be far safer than human-driven cars in terms of crashes and resulting fatalities. And if you need further reassuring, consider two well-established lines of psychological research: on how laypeople evaluate risk, and on how people forecast their future emotional states.

Decades of research on how people evaluate risk show that emotion, rather than logical calculation, drives judgments. Whether or not people’s worries about driverless cars are well-founded, they are likely not rational in the strictest sense. We tend not to compute the risk of, say, a nuclear meltdown by multiplying the probability of the event by some quantified measure of its undesirability. Instead, we experience an emotional response to the prospect of the event, and that response translates into an estimate of risk.
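To make the calculation most of us skip concrete, here is a minimal sketch in Python of the expected-utility arithmetic described above. Every number in it is an invented placeholder, not a real statistic.

```python
# A minimal sketch of the "rational" risk calculation described above:
# expected harm = probability of an event times the magnitude of its harm.
# All inputs below are invented placeholders, not real statistics.

def expected_harm(probability: float, harm: float) -> float:
    """Expected-utility-style risk: probability multiplied by harm."""
    return probability * harm

# A rare but dreaded event vs. a common, mundane one (hypothetical numbers).
rare_dreaded = expected_harm(probability=1e-7, harm=1_000_000)  # -> 0.1
common_mundane = expected_harm(probability=1e-3, harm=1_000)    # -> 1.0

# On this calculation the mundane event carries 10x the expected harm,
# yet the dreaded one typically *feels* riskier -- the gap between
# emotional and rational risk judgments the research describes.
print(rare_dreaded, common_mundane)
```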

Psychologist Paul Slovic and his colleagues are responsible for much of the overwhelming evidence that people evaluate risk based on situational features that evoke emotion rather than on expected utility. In one example of this pioneering work, they showed that people rank the riskiness of various activities (nuclear power, motor vehicles, handguns) according to factors such as whether the activity evokes dread or whether people undertake it voluntarily, neither of which factors into a rational cost-benefit analysis. In one study, for example, participants judged nuclear power the riskiest of 30 activities presented, even though the risk of death from nuclear power is far lower than the risk from swimming, which they judged to be of little concern. Two factors in particular appear to drive risk judgments: the involvement of technology (particularly novel and unknown technologies) and the perceived certainty that adverse consequences would result in death. These factors help explain fears about the driverless car. The car’s reliance on novel and poorly understood autonomous technology, and concerns about how it would navigate a life-or-death situation, clearly situate it at the top of this flawed risk meter.

Other work by Slovic examined risk judgments about automobiles specifically, and it sheds more light on people’s fears of forthcoming self-driving cars. This study presented participants with 40 potential automobile defects (such as an airbag failing to deploy or the steering gear mechanism loosening) and asked them to rate each on a variety of dimensions: how much they dreaded a mishap resulting from the defect, the likelihood that the defect would cause property damage, the likelihood that it would injure vehicle passengers or bystanders, the likelihood that the manufacturer knew about it, the likelihood that the driver would notice the problem before it could cause an accident, and the perceived ability to control the car if the defect caused a mishap. Participants also evaluated how risky they considered each defect overall. Slovic and his team found two factors that guided people’s risk judgments: inability to control the car in the event of a mishap and foreseeability on the part of the manufacturer.

Again, these two factors suggest intuition rather than rational calculation. Neither uncontrollability nor foreseeability necessarily maps onto any statistical estimate of actual risk, yet their emotional pull guided participants’ judgments. Interestingly, these two factors also capture much of what consumers seem to fear in the self-driving car: an intelligent system with autonomy and intimate knowledge of its own strengths and weaknesses, and one capable of putting passengers in situations over which they have no control.

By understanding what guides risk judgments, we can better understand where fears about driverless cars originate. But how might we communicate that these fears are misguided? The first lesson is that presenting evidence (such as the fact that, after 1.3 million miles driven over seven years, Google’s self-driving car has gotten into only one accident that was even ambiguously the car’s fault) is unlikely to change minds. If people do not evaluate risk rationally, then a rational presentation of the evidence will not allay their fears. Speaking to people’s intuitions, however, can change minds. For example, framing information about risk in terms of potential lives saved rather than potential lives lost increases support for an activity. Car companies could easily communicate the self-driving car’s capacity to reduce drunk driving and other potentially fatal behavior behind the wheel.
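As a back-of-the-envelope illustration of why that evidence is so striking on paper, the per-mile arithmetic is trivial to run. The human-driver baseline below is a hypothetical placeholder included only for comparison, not a sourced figure.

```python
# Per-million-mile accident rate from the figures quoted above.
google_miles = 1_300_000   # miles driven over seven years (from the article)
google_accidents = 1       # the single ambiguously at-fault accident

google_rate = google_accidents / google_miles * 1_000_000
print(f"Google self-driving car: {google_rate:.2f} accidents per million miles")  # ~0.77

# Hypothetical human baseline, for illustration only -- NOT a sourced statistic.
human_rate_assumed = 4.0
print(f"Assumed human baseline:  {human_rate_assumed:.2f} accidents per million miles")
```

Yet even an order-of-magnitude advantage like this, the research above suggests, will do little to move intuition-driven risk judgments.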

Another way to calm people’s concerns is to minimize the factors known to evoke anxiety, such as the novelty of the technology. In unpublished work, my co-author and I have reliably shown that when a technological procedure that fundamentally changes human life is described as novel, people consider it immoral and see it as “playing God.” (We purposely kept the “technological procedure” vague for this research.) Yet perceptions of playing God and immorality drop dramatically when we describe the same procedure as well-established. In the case of the self-driving car, telling consumers that the car uses the same remote-sensing technology as existing airplanes and helicopters could make people feel more comfortable.

Another reason for reassurance is that people tend to overestimate their emotional reactions to future events. A large body of research by psychologists Daniel Gilbert and Timothy Wilson documents these “affective forecasting errors.” This work shows that people consistently overestimate how bad they will feel after various events, including learning undesired results of a pregnancy or HIV test, experiencing a romantic breakup, or watching a favorite political candidate or sports team lose. We often fail to recognize the capacity of our psychological immune system to make sense of negative events. People quickly adapt to scary new technologies, just as they get used to Instagram changing its logo or Twitter moving a button 1 millimeter to the left. Yet this ability to absorb minor and major adversity largely goes unnoticed, so people keep mispredicting their reactions, over and over again.

People will adapt to the self-driving car, just as they have adapted to the printing press, the telephone, and the internet. The human mind is built for sense-making, and when we encounter a new situation that evokes fear, anxiety, and dread, our psychological immune system takes those emotions and explains them to the self; in other words, it rationalizes them. This system has been essential to human survival, enabling us to cope with global disaster, heartbreak, and personal injury. And it will pacify us into accepting whatever panic-evoking technology Google and the automobile companies decide we should have, for better or for worse.

This article is part of the self-driving cars installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Read more from Futurography on self-driving cars.

Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.