Future Tense

Seeing Human

If you give a driverless car a name, you’re less likely to blame it after an accident.

A Google self-driving car: “Hello, my name is …”

Photo by Mark Wilson/Getty Images

Last September, a story about American soldiers in Iraq who held “funerals” for military robots caught the interest of many, illuminating a unique feature of modern warfare. Research and interviews conducted by Julie Carpenter at the University of Washington demonstrated that military personnel who worked frequently with robots designed to disarm explosives tended to anthropomorphize these machines, empathizing with them, assigning personalities to them, and acknowledging their emotions. These soldiers found companionship with their robots during the chaos of war.

Months earlier I had attended a small meeting of practitioners, politicians, academics, and civic leaders to discuss specifically the topic of dehumanization. There, I listened to a decorated Marine and Iraq war veteran describe with contrition the unfortunate necessity of dehumanizing the enemy to cope with the stresses of combat. One term the Marines used to describe dead Iraqis, he said, was “Crispies,” and this sort of moral disengagement can alleviate the guilt and pain associated with inflicting harm on others.

The juxtaposition between soldiers’ extreme humanization of nonhumans and extreme dehumanization of human beings suggests the fluidity with which people “see” human. Rather than simply categorizing the world into “human” and “nonhuman” as though we could flip a switch to delineate between the two, people see humanness on a spectrum, an ability that has become all the more necessary as technology has become more humanlike.

Recent research that I conducted with Joy Heafner and Nicholas Epley on people’s responses to driving an autonomous car suggests that people are exceedingly willing to grant these cars humanlike intelligence, and that the more humanlike features the car conveys (a name, a voice, a gender), the more people trust it to operate competently, as a human driver would. The rise of autonomous, humanlike technology, such as the self-driving cars that will soon populate the highways, raises psychological questions that we are trying to answer. The first is a matter of ethics: When a human and an autonomous agent work together, who is to blame when things go wrong?

The ethical question has been discussed quite intensely with regard to military robots. Ethicist Ron Arkin has noted that one problem with lethal autonomous systems (that is, drones programmed to kill) is the issue of “establishing responsibility—who’s to blame if things go wrong with an autonomous robot?” Given that the Geneva Convention does not govern our everyday lives, the more mundane, but no less important, version of this issue is how to determine fault and responsibility when a society of driverless cars starts getting into accidents.

The research we conducted begins to shed light on this issue. Our initial research question was simply how people would respond to driving an autonomous car. We used a very realistic driving simulator—while personally testing it for our experiment, I caught myself looking over my left shoulder to change lanes—and randomly assigned participants who came to our downtown Chicago laboratory to drive in one of three conditions. In a normal condition, participants drove down city streets and highways, braking, steering, and accelerating as one would in a typical car. In a pure autonomous condition, the car itself, and not the driver, controlled braking, steering, and acceleration. In an anthropomorphic condition, the car operated just as in the pure autonomous condition but was also given a gender (female), a name (Iris), and a human voice that informed the driver of its various functions.

After a short six-minute driving course, we asked participants various questions about the car. We found that the more humanlike the car was, the more intelligent they considered it, the more they trusted it, and the more they enjoyed the driving experience overall.

After the initial course and evaluation, participants drove through a residential course that we designed so that everyone experienced an unavoidable accident: in all conditions, another car backed out of its driveway and crashed into participants. Following this accident, we asked how responsible the car was for the crash, as well as how responsible related entities, the engineer who designed the car and the car company, were. We also asked how much punishment participants would dole out to the car, the engineer, and the company if such an accident occurred in the real world. Unsurprisingly, people who drove the car with autonomous features rather than normal features reported more inclination to blame and punish the car, the engineer, and the company. However, people reported less blame and punishment for all three when the autonomous car also had humanlike features. Thus, in the case of an accident clearly caused by another car, people treated the anthropomorphic car like a trusted humanlike driver. If a competent driver you trusted were hit by another vehicle, you would likely hold that driver less responsible than you would an incompetent one, even if both were blindsided.

To assess “trust” in the car from a more physiological and behavioral perspective, we also measured participants’ heart rates during this accident and had research assistants rate how startled participants appeared. Just as participants reported trusting the anthropomorphic car more than the purely autonomous car, those who drove the anthropomorphic car also showed lower heart rates and milder startle responses to the accident, suggesting a greater degree of comfort with the car.

These findings again demonstrate how easily people can shift into “seeing human” in nonhuman objects, with real consequences for ethical and legal judgments. Our study showed that simple autonomy made people see the car as a responsible agent, but adding humanlike features generated trust and assumed competence.

Beyond addressing questions about how we will assign blame to the autonomous nonhumans of tomorrow (insurance companies are surely taking notes), these findings also give insight into the inverse process of dehumanization. Just as the mere addition of a voice and gender leads people to treat the car as humanlike, when we fail to perceive these features in other people, we are more likely to treat them as mindless objects. From enemies in war to the homeless person asking for change to the co-worker just a few cubicles down, failing to notice these simple features can lead to overlooking others’ capacity for rational thought and conscious experience, with potentially disastrous consequences.

Technology’s critics, from Karl Marx to the present day, have expressed concern that advancements in autonomous and humanlike machinery will encourage dehumanization, with technology substituting for human interaction. However, it is equally likely that technological progress will shed light on new ways of seeing human. If nothing more, such advancements illuminate people’s proclivity to see humanity all around them—as long as the right cues are present.

This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.