Future Tense

The Emotional Uncanny Valley

How zombies could be the future of artificial intelligence.

Robots were designed to be “zombies,” not self-aware in the way we are.

Photo illustration by Juliana Jiménez. Photo by Thinkstock.

In July, news headlines blared that robots passed a “self-awareness” test. Naturally, the articles made the semi-joking references to Skynet and robot overlords that seem to accompany every minor development in robotics or artificial intelligence. Missing from the coverage was any thoughtful discussion about what it means for a robot to pass a self-awareness test.

Typically, artificial intelligence “breakthroughs” don’t live up to the press release. (See, for example, the misleading claims that an AI was as smart as a 4-year-old.) But if anything, the journalists here undersold the test’s true implications. No, the robots aren’t really “self-aware” in the way we are. They were designed to be “zombies” that, despite lacking conscious experience, feelings, and sensations, can reproduce behavior we perceive as self-aware. How humans relate to these robot zombies will likely be an enormous social problem, a kind of emotional uncanny valley that robot designers will need every trick in their toolbox to surmount if robots are to live among us. And even that may not be enough to help us adjust to the notion that your future co-worker may be “behaviorally indistinguishable” from you but nonetheless “no more conscious than a rock.”

On July 17, researchers at Rensselaer Polytechnic Institute conducted an experiment that media outlets dubbed a “hint of self-awareness.”

Researchers told three robots they were going to be given a pill that would disable their ability to speak. No pills were actually administered; instead, the silencing was triggered by pressing a button on each robot. The button was pressed for two of the three robots, preventing them from answering when a researcher asked, “Which pill did you receive?” The one robot still able to respond rose to its feet and said, “I don’t know.” But of course, that answer undermines itself: if the robot is able to speak, there is no way it could have actually been given the pill. A second later, the robot realized this and said, “Sorry, I know now. I was able to prove that I was not given a dumbing pill.”
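The inference the robot performs is simple enough to sketch in a few lines of code. What follows is a hypothetical Python illustration of that reasoning step, not RPI’s actual system, which Bringsjord’s group builds on formal logic and automated theorem proving; every name in the sketch is invented for illustration.

```python
# Toy sketch of the "dumbing pill" inference. Assumes the robot's only
# relevant knowledge is the rule "a dumbing pill implies silence."

def answer_which_pill(can_speak: bool) -> list:
    """Simulate one robot's response to 'Which pill did you receive?'"""
    if not can_speak:
        return []  # a silenced robot cannot answer at all
    # The robot has no direct record of which pill it was given ...
    utterances = ["I don't know."]
    # ... but it hears its own voice. Since a dumbing pill implies silence,
    # speaking proves, by modus tollens, that it received the placebo.
    heard_own_voice = True
    if heard_own_voice:
        utterances.append("Sorry, I know now. I was able to prove "
                          "that I was not given a dumbing pill.")
    return utterances

# Two robots were silenced; only the third can reason its way to an answer.
for name, can_speak in [("robot_1", False), ("robot_2", False), ("robot_3", True)]:
    for line in answer_which_pill(can_speak):
        print(f"{name}: {line}")
```

The point of the exercise is not the trivial code but the behavior it produces: a system that appears to reason about its own abilities without any inner experience of doing so.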

However, in reporting the robot as “self-aware,” all but Vice Motherboard’s intrepid robotics scribe Jordan Pearson failed to question what definition of “self-awareness” motivated the test. Looking at the writings of RPI test administrator Selmer Bringsjord, we can see that the test did not tell the world what many in the media believed it did. What it does tell us has some genuinely science fiction-like implications. To understand, relatively nontechnically, what this means, let us turn to William Shakespeare. Shakespeare’s character Shylock, facing prejudice against Jews, famously pleaded for his own humanity in a Merchant of Venice soliloquy:

I am a Jew. Hath not a Jew eyes? Hath not a Jew hands, organs, dimensions, senses, affections, passions? Fed with the same food, hurt with the same weapons, subject to the same diseases, healed by the same means, warmed and cooled by the same winter and summer, as a Christian is? If you prick us, do we not bleed? If you tickle us, do we not laugh? If you poison us, do we not die? And if you wrong us, shall we not revenge?

To Shylock, a large part of what makes him human is that he looks like a human (he has eyes, organs), has the biology of a human (subject to the same diseases, healed by the same means, warmed and cooled by the same winter and summer, dies when poisoned, bleeds when pricked), and behaves like a human (laughs when tickled, takes revenge when wronged). All of this, however, might be functionally mimicked by artificial means. Artificial organs, for example, can crudely mimic human biology. And scientists have built biomimetic robots with the morphology of animals ranging from insects to dogs that are capable of animal-like behavior. When Shylock speaks of his “senses, affections, and passions,” however, a much harder problem arises.

Imagine a situation in which we encounter a human-like entity (let us call it Robo-Shylock) that is the most realistic replication of a human science has ever produced. It has hands and organs identical to yours. It can be fed with the same food, injured by the same weapons, healed with the same medicines, and warmed and cooled by the same winter and summer as us. If you prick it, it bleeds. If you tickle it, it laughs. If you poison it, it dies a biologically natural death. And if you get on its bad side, it gets even. However, there is one prominent catch.

Yes, when you prick it, it bleeds. But while Robo-Shylock shows what we might regard as outward signs of pain and appropriate behavioral reactions to being pricked, internally it has no concept of pain or of being pricked. It does not experience the sensation of pain the way we would, despite bleeding and reacting as if it had been pricked. If such a scenario sounds like the plot of a bad 1950s science fiction movie to you, you are not alone. Robo-Shylock is what philosophers of mind dub a “philosophical zombie,” or “p-zombie” for short. The idea of p-zombies is controversial, with some philosophers declaring the whole notion absurd and others saying, in effect, “Yes, zombies may walk among us. Who cares?”

One believer in the notion of zombies is RPI’s Bringsjord, who argues in his writings that machines will never be anything more than p-zombies. Even that is big news, because it implies that p-zombies are not only possible but can be engineered to pass objective tests of mental ability and skill. Bringsjord, in a paragraph that many reporting on his July test did not read, lays out and defends this position eloquently:

Bringsjord doesn’t believe that any of the artificial creatures featured in the present paper are actually self-conscious. He has explained repeatedly that genuine phenomenal consciousness is impossible for a mere machine to have, and true self-consciousness would require phenomenal consciousness. … [i]n short, computing machines, AIs, robots, and so on are all “zombies,” but these zombies can be engineered to pass tests. [The experimental approach] avoids endless philosophizing in favor of determinate engineering aimed at building AIs that can pass determinate tests … engineering to tests is fortunately engineering, not a matter of metaphysics.

Yes, we may never be able to make truly “conscious” machines. But, as Bringsjord said in an interview with Vice’s Pearson, the mathematical correlates of consciousness may be modeled and engineered well enough to pass objective tests of mental functioning. Bringsjord might be able to engineer a Robo-Shylock that could convince us it feels pain when pricked, even if it never actually experiences the sensation of pain.

Following Alan Turing’s example (albeit differing in what kind of test he envisions), Bringsjord is interested only in whether a machine can be engineered to convincingly resemble what we consider self-awareness. Or, in Bringsjord’s own words, he is doing engineering rather than metaphysics. Instead of p-zombies remaining an abstract thought experiment debated in obscure philosophy journals, zombielike computer programs that behaviorally resemble us yet lack conscious experience, sensations, and subjective experience are a real possibility.

If Bringsjord is indeed correct, his method of engineering has some earth-shattering social implications. What would you do if a future co-worker were a p-zombie? Perhaps it wouldn’t matter. Some philosophers believe that we automatically attribute beliefs, desires, and intentions to an embarrassing variety of nonhuman objects. And good design tricks, such as an AI’s snarky responses to jokes, can encourage us to warm up to even the most nonhuman of objects in appearance and form.

Or you may, despite all of the engineering and design tricks in the world, still feel disgusted, threatened, or even terrified by the notion of a machine that can fake a set of seemingly authentic behaviors and reactions. Beyond finding it creepy, you might even question your own humanity merely from knowing that a machine could be engineered to be exactly like you, save for a few vague and amorphous things (“consciousness,” “self-awareness,” “sensations and feelings”) that have no bearing on how well the robot can functionally replicate your external behaviors.

Of course, the science (as opposed to the engineering) is ultimately inconclusive, and it remains to be seen whether Bringsjord is right about his p-zombie engineering theory. His is just one of many approaches to theorizing about and building intelligent machines, and the AI field has seen plenty of other grandiose promises since Turing’s famous 1950 paper on the imitation game. But if the co-worker in the cubicle across from you turns out to be a (p-)zombie, don’t say I didn’t give you ample forewarning.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.