Sometime in the future, you decide to buy a humanoid robot. It can perform chores, maybe even help care for children or an elderly relative. You and your family know it's a machine, but you've come to identify with it. You've named it. You may even speak to it, like a pet, when no one else is around. But the first time it makes a mistake and, say, drops a dish it's putting away, how will you react? Is the machine culpable?
That’s the question guiding a new paper, “Do People Hold a Humanoid Robot Morally Accountable for the Harm It Causes?” (PDF), which was presented last month at the Human-Robot Interaction conference in Boston.
In an experiment, undergraduate students were put in a room with a humanoid robot named Robovie (and, at some points in the interaction, a researcher). Robovie's speech and movements were controlled, Wizard of Oz-style, by two operators in another room. After some carefully crafted chit-chat, Robovie directed each student in a scavenger hunt: Find at least seven of the requested objects and win $20. But the hunt was rigged. The items were easy enough that nearly every student found at least seven, yet no matter how many objects a student located, Robovie declared that the student had failed to win the prize. Watch a "losing" participant argue with Robovie in a video provided by the University of Washington.
After receiving the (incorrect) bad news, the students were individually interviewed about their perceptions of the robot and whether it was responsible for its mistakes. Here’s what happened:
We found that 65% of the participants attributed some level of moral accountability to Robovie for the harm that Robovie caused the participant by unfairly depriving the participant of the $20.00 prize money that the participant had won. … [P]articipants held Robovie less accountable than they would a human but more accountable than they would a machine. Thus as robots gain increasing capabilities in language comprehension and production, and engage in increasingly sophisticated social interactions with people, it is likely that many people will hold a humanoid robot as partially accountable for a harm that it causes.
To evaluate the level of responsibility people assign to robots, it’s helpful to know how people categorize the machines. The researchers found:
When asked whether Robovie was a living being, a technology, or something in-between, participants were about evenly split between “in-between” (52.5%) and “technological” (47.5%). In contrast, when asked the same question about a vending machine and a human, 100% responded that the vending machine was “technological,” 90% said that a human was a “living being,” and 10% viewed a human as “in-between.”
I'm curious about the 10 percent who viewed humans as "in-between." Do they think that we are already cyborgs?