Future Tense

How Human Do We Want Our Robots To Be? A Future Tense Event Recap

Lily Hay Newman, Lance Gharavi, Woodrow Hartzog, Christine Rosen, and Patric Verrone discuss “How Human Do We Want Our Robots To Be?” at New America.

New America

When we imagine a roboticized future, we tend to think in terms of conflict, telling ourselves stories about android assassins and killer computers. But on Jan. 20, 2016, Future Tense convened a group of experts to discuss a different set of dangers: those that might arise from living at peace with robots. The important question, the panelists suggested, isn’t whether robots will want to kill all humans—it’s how human we want those robots to be.

Slate’s Lily Hay Newman, who served as moderator for the event, noted early on that we haven’t talked much about what it’ll be like to live side by side with mechanized beings. Though self-driving cars—not to mention robot butlers—seem to be on the verge of large-scale deployment, we’ve given relatively little thought to what will happen as we incorporate such creations into our ordinary lives. How might robot nannies affect children’s development? What will it be like to dance with a robot?

The central issue may come down to what Christine Rosen, senior editor of the New Atlantis, called “the Stepford Wife problem,” which she described as the likelihood that we’ll end up forming emotional attachments to our robots. But Woodrow Hartzog, a law professor at Samford University and the owner of a Roomba nicknamed Rocko, argued that there’s nothing wrong with developing an emotional attachment to a robot. Still, issues do arise, he said, when we trick ourselves into believing that those nonhuman entities can reciprocate our affection. In other words, we should worry less about killer robots than deceptive ones, whether their deceptions arise by accident or design.

This concern came up in a different way when Newman asked the panelists whether robots should be allowed to lie to their human owners. Hartzog insisted that such questions force us to remember that robots are essentially tools. As such, whether a robot should lie depends on its basic purpose. Patric Verrone, writer and producer of Futurama, noted that it’s sometimes frustrating when a spell checker repeatedly informs us of our errors, but at its core that’s a spell checker’s job—we wouldn’t want it to deceive us, even if it could make us happier by doing so. But in other circumstances, it might be different. For instance, with a caregiving robot in a hospice setting, Hartzog suggested, “Brutal honesty could be horrible, right?”

But Verrone said that for robots to work in these terms, they’d have to be capable of making highly subjective decisions based on circumstantial observations. Ultimately, that’s a question of free will, a capacity that robots, by most understandings, shouldn’t—and arguably can’t—truly possess. Because we fear the possibility of genuinely autonomous creations, Verrone proposed, we’re unlikely to build that capacity into them in the first place.

According to Lance Gharavi, an associate professor of theater at Arizona State University, the question of free will rapidly resolves into a problem of desire. Steering the conversation into philosophical terrain, he observed that we can’t even say definitively whether humans have free will. But, he continued, if a robot has desires, even if those desires involve nothing more than the need to serve its master appropriately, then it can suffer. And if it can suffer, we have an ethical responsibility toward it. For Hartzog, on the other hand, the ethical stakes of humanlike robots have more to do with the ways we relate to other humans. Paraphrasing the claims of the Massachusetts Institute of Technology’s Kate Darling, he suggested that if we’re being cruel to robots, we really need to talk about what that cruelty does to us, not about how it affects our nonsentient creations.

The significance of this point grew clearer when Newman asked the panelists to imagine a situation in which a robot was better able to fulfill emotional and interpersonal responsibilities than a human spouse. Verrone suggested that in such a circumstance, we might like the things a robot does for us, but that doesn’t mean we like the robot itself. Accordingly, our responses to robots say more about us than they do about our mechanical “friends.” The most important function of humanlike robots, then, may be that they force us to interrogate what humanness entails in the first place. “Human” has never been an especially stable concept, Gharavi said. By showing us a slightly different version of ourselves, robots might help us understand that category a little more expansively.