Future Tense

“Emotion Is a Side Effect of Intelligence”

Nic Kelman, author of How to Pass as Human, imagines how an A.I. might see the human world.

Nic Kelman’s new novel takes the form of the diary of an android.

Photo illustration by Juliana Jiménez. Photo by Thinkstock.

An android wakes on an operating table, suddenly conscious, its purpose at first a mystery. The world around it is strange, complex, full of possibility and promise.

This is the opening of Nic Kelman’s new book, How to Pass as Human, which takes the form of a diary composed by an artificial intelligence known as Android 0. The first being of his kind, Android 0 sets out to blend in with the world around him. In the process, he immerses himself in everything from the arbitrariness of human gender norms to our peculiar relationship with technology. Shortly before the book’s release, Kelman—who studied brain and cognitive science as an undergrad at the Massachusetts Institute of Technology—and I spoke about the clichés of artificial intelligence stories, the interdependence of cognition and emotion, and his hopeful cynicism.

In How to Pass as Human, you take the perspective of an android trying to make sense of humans. Was that difficult?

The biggest challenge was avoiding all the clichés and stereotypes about artificial intelligence. The thing that drives me crazy about all the current reporting and popular impression of A.I. right now is the premise that we’re going to invent a superintelligence tomorrow. There are a lot of people who think the singularity will arrive and artificial intelligence will work in the next five or 10 years. But when you dig into the real research, it’s much less sexy. And a lot of the more recent developments are still incremental advances on technologies that were there 20 years ago.

We’re much more likely to invent an intelligence that’s like a dog than we are to end up dealing with Skynet. That’s partly because we still have absolutely no idea how the brain works. I think our fascination with trying to recreate it is about trying to understand ourselves.

Did any other stories shape the way you thought about artificial intelligence?

The Pygmalion myth inspired me more than recent science-fiction stories about artificial intelligence. In a weird way, I think My Fair Lady is more of an influence on the book than I, Robot.

I wanted to avoid the big clichés. So it was a matter of looking at Terminator or the movie A.I. or any number of other narratives like that and saying, “What’s always being repeated? Let’s just make sure we don’t do that.” One of the biggest clichés—even The Simpsons has made fun of it—is the conclusion of the story where the computer goes “I know now why you cry.” The android in How to Pass as Human is different in that he’s created with the capacity for emotion. The focus of the story throughout is him experiencing those emotions for the first time.

Why are we so preoccupied with pulling artificial intelligence down to our level? Why is it so important to us that they know why we cry?

I think it starts in the ’80s with the advent of the personal computer, this idea that we might be eclipsed by our own creation. There’s a lot of fear and anxiety around that. It’s easy to conceive of an abstract intelligence that doesn’t experience emotions. And that’s scary. But the reality is that intelligence and emotions aren’t separate.

In How to Pass as Human, the main character analyzes humans from the outside and recognizes that most of our decisions are emotional decisions. We try to justify them with some kind of logical or rational process. Most of the time, if not all the time, it’s a way to convince ourselves that we’ve consciously done the right thing, even though we made the decision on a gut level.

If we look at some of the ways that researchers are developing artificial intelligence now—through deep learning, iterative neural networks, and so on—the systems they’re creating don’t seem likely to respond emotionally in ways that would be recognizable to us. Is it a fantasy to imagine that our artificial intelligences will feel in the same way that we feel?

I don’t think it’s a fantasy; I think it’s a necessity. Or, at the very least, emotion is a side effect of intelligence. If we do accidentally manage to create something intelligent, I would guess that it would exhibit some kind of emotion as a side effect of intelligent action in the world. And I’m not alone in thinking this, by the way.

From an evolutionary biology perspective, emotions are very efficient survival tools. Our purpose is to re-create and perpetuate our particular strands of DNA. Emotions are very good tools for doing that, whether you’re talking about love or hate or fear. These are things that help us continue to survive and protect our children, put roofs over our heads, and fend off bears. The idea that an artificial intelligence would have emotions that are then connected to its purpose makes sense. It may be that emotions are the natural result of the purpose, whatever that purpose is, or turn out to be the only way to motivate an intelligence to stay on purpose, to stay on track. So we would have to build them in; otherwise, the intelligence would just sit there and watch TV all day.

But would those emotional responses be hard-wired from the start? Or would they be emergent properties?

That is one of the key themes of the book. Android 0 is a reactive mechanism, something like a newborn baby that has the capacity for thought and for emotion. Until you let it interface with the world and experience the world, that intelligence and those emotions can’t develop.

Some real-world researchers have been taking a similar approach, trying to raise A.I.s in the way you might a human baby. In some ways, what you have here is a science-fictional anticipation of where that might take us.

That’s exactly right. Just stacking up three blocks is so complex that it’s still very difficult to model every possible outcome of that physical system. That’s why physicists still try to reduce systems to their simplest version. Otherwise the math is just too complicated. So the idea is that if you’re not just talking about stacking up three blocks, but actually living in the world, the number of possible outcomes is infinite. Every second that passes, anything could happen. It’s impossible to build a brute-force system that could anticipate every outcome and then have some preprogrammed response to it. The only way to do it is to let an intelligence learn. The way that a human child develops in the world may be the only way to create intelligence.

A lot of what Android 0 learns in his investigations reflects pretty poorly on our species. Is that bleak perspective something you’d want people to take away from this book?

Look, I’m a cynic, and I’m definitely misanthropic. But at the same time, the single thing that makes me a misanthrope is that human beings have so much potential to be wonderful to one another and we spend most of our time not doing that. I’d like to think that the book strikes a balance between those two things. At the end of the day, I think that’s our core conflict as human beings. Can we overcome our selfishness and our fear?

You look around the world, and from my perspective it’s a pretty dark place these days. It’s worthy of cynical reflection, but as human beings we have the potential to be much more than that.

This interview has been edited and condensed for clarity.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.