Future Tense

What’s the Deal With Artificial Intelligence Killing Humans?

Your 101 guide to whether or not computers are going to murder us.

Illustration by Natalie Matthews-Ramo

This seems like a rough time to be human: Artificial intelligences are beating us at Go, getting better at driving cars, and doing all sorts of other stuff. How much longer until they just rise up and kill us?

Longer than you might think, and though there are good reasons for caution and concern, a lot of the talk you hear about Terminator-type scenarios is excessively alarmist. Read an article on, say, the rise of robot butchers, and you’ll inevitably find commenters worrying that the system is going to go haywire and attack its human masters. Even when they’re a little jokey, these responses tend to bear the trace of the old Luddite anxiety that machines are somehow fundamentally opposed to humanity.

If you really get into it with A.I. researchers, you’ll find that most of them aren’t worried about murder-bots actively looking to KILL ALL HUMANS. Instead, they’re concerned that we don’t really know what we’re getting into as we rapidly engineer systems that we can barely comprehend, let alone control. It’s this concern that’s led Elon Musk—who’s supported all sorts of A.I. research—to describe artificial intelligence as an “existential threat.” He seems to worry that we may not be able to direct the forces that we’re calling into being.

So when you say “artificial intelligence,” what do you mean, exactly?

Basically, we’re talking about computer systems and algorithms that can form conclusions and determine their actions without direct human intervention. That doesn’t mean that they have human-like minds, but they may be capable of equaling—and often exceeding—human cognitive capacities with regard to specific tasks. In the broadest sense, Google Maps is employing A.I. when it helps you find a route to your destination. And the self-driving cars that might soon carry us along those routes are using A.I. to evaluate road conditions and otherwise keep us safe.
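
To make the route-finding case a bit more concrete, here is a minimal sketch, in Python, of the kind of decision such a system reaches on its own: given a map of travel times, a shortest-path algorithm settles on a route without anyone steering it. The toy map, place names, and travel times are invented for illustration; Google Maps’ actual routing is vastly more sophisticated.

```python
import heapq

# A toy road map: travel times in minutes between intersections.
# (Entirely made up; real routing systems work with far richer data.)
roads = {
    "home": {"main_st": 4, "highway_onramp": 2},
    "main_st": {"downtown": 7},
    "highway_onramp": {"downtown": 12},
    "downtown": {},
}

def fastest_route(start, goal):
    """Dijkstra's algorithm: the program settles on a route by itself."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, cost in roads[node].items():
            heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return None

print(fastest_route("home", "downtown"))  # (11, ['home', 'main_st', 'downtown'])
```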

Let’s talk short-term. Is A.I. dangerous to us right now?

The logo of Toyota Motor Corp’s self-driving technology “Mobility Teammate Concept.” (Yuya Shino/Reuters)

Not “dangerous” in the physical sense. A lot of the most pressing concerns are much more quotidian. Jeff Goodell observes in Rolling Stone that A.I. systems threaten to put a lot of people out of work. Computer-operated vehicles, for example, could take away jobs from countless truck and cab drivers. (Goodell is a fellow at New America; New America is a partner with Slate and Arizona State University in Future Tense.) As Madeleine Clare Elish recently argued in Future Tense, such fears may be overblown, since these programs may end up operating more like co-workers than anything else. Still, it seems entirely likely that A.I. will change the ways we labor—and that could change a great deal about the way we live.

Can we get back to the murder stuff? Didn’t Musk issue some warning about weaponized robots?

Sort of. The Future of Life Institute, an organization that Musk has funded to the tune of $10 million, put out a public letter on the topic in July 2015. The letter, which was co-signed by thousands of researchers, scientists, and others, defines “autonomous weapons” as systems that “select and engage targets without human intervention.” FLI’s letter states that the technology is “feasible within years,” meaning that it represents a plausible concern if not an immediate menace.

But Musk and his co-signers aren’t actually worried about these (still theoretical) machines going berserk. Instead, their concerns, which are twofold, are much more practical. First, they suggest that autonomous weapons may distort other A.I. research priorities, pulling focus away from more peaceful applications. Second, they’re worried that “new tools for killing people” will prompt an A.I. arms race that would inevitably lead to autonomous weapons showing up on the black market, at which point terrorists and tyrants would have access to them.

In other words, the problem isn’t about what computers will do to humans; it’s about what humans will do with computers.

There’s got to be some reason to worry about A.I. in and of itself, though, right? What’s our nightmare scenario?

If there is, worrying about robots is probably the wrong way to go. When we imagine humanoid robots coming for us, we’re already engaged in a sort of compensatory fantasy. A version of A.I. that looks and acts like us—the titular killer from Terminator, for example—is one that we can understand and oppose. But as George Dvorsky writes in Gizmodo, it’s a mistake to conflate A.I. and robotics. An A.I. that really wanted to wipe us out would be more likely to “unleash a biological plague, or instigate a nanotechnological gray goo disaster” than it would be to send Arnold Schwarzenegger.

Tesla factory where a robot installs the laminated glass roof on a Model S. (Steve Jurvetson)

Ultimately, though, true synthetic superintelligences would be operating on an altogether different cognitive plane from ours, meaning that if they were to come after us, they would probably do so in ways that would be almost incomprehensible. One of the most frightening scenarios—the notorious Roko’s Basilisk thought experiment—posits a future A.I. that punishes those who failed to help bring it into existence, creating an exact simulation of your consciousness and torturing it for all eternity. That’s a scary prospect, but most A.I. theorists would probably agree that it’s pretty farfetched.

So if A.I. isn’t going to kill us, what should we be worrying about?

Any time you give a machine power, it’s going to be at least a little dangerous. Rolling Stone’s Jeff Goodell tells a story about finding himself in the passenger seat of a self-driving car cruising around Mountain View, California. Afterward, he realized that the “Lexus is, potentially, a killer robot,” but not because it’s out to get him or anyone else. Instead, it’s because he was relying on the quality of the code that powers the vehicle to keep it from unexpectedly swerving into oncoming traffic or driving him off a bridge. “One flawed algorithm and I’m dead,” Goodell writes.

Stephen Hawking in 2015. (Toby Melville/Reuters)

On the other end of the spectrum, there are those like Stephen Hawking who fear that A.I. may be too good at what it does. Hawking, another one of the signatories on that FLI letter, wrote in a Reddit Ask Me Anything thread, “The real risk with A.I. isn’t malice but competence.” A super-smart A.I. that’s very good at accomplishing some specific task might perceive us as obstacles to the realization of that goal and treat us accordingly. To develop an example that Hawking alludes to in his AMA, an A.I. responsible for dam maintenance might end up flooding a town that happened to be in the way of a river it was diverting. Focused on water levels and such, it might not consider the human lives it snuffed out to be consequential.

In both Goodell’s and Hawking’s scenarios, the real risk is our abdication of responsibility. Goodell’s car is potentially dangerous because it might not perform as expected, while Hawking’s superintelligence is scary because we don’t know what to expect from it. Despite their differences, both scenarios suggest that A.I. threatens us because we’ve invested it with too much of our own power. That’s a problem even if Goodell’s Lexus doesn’t wake up one day and decide to claim the world for its own.

So how can we keep those nightmare scenarios from happening?

Some, such as the Machine Intelligence Research Institute’s Eliezer Yudkowsky, have proposed that we need to build A.I. that is friendly to humans. Thanks to the complexities of machine learning, computers can come at their tasks in unpredictable ways, potentially creating new problems through their single-minded focus on finding novel solutions to existing ones. At MIRI, this is known as the “alignment problem”—the discrepancy between the ways machines complete tasks and what’s actually good for the humans who assigned those tasks in the first place. Much as you’d want to understand some basic structural engineering principles before trying to build a bridge, MIRI holds that we need to pursue research into the mathematical underpinnings of alignment itself before we go about actually creating superintelligent systems.
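
As a purely hypothetical sketch of what misalignment looks like in miniature, the snippet below gives a toy agent a single objective—keep a reservoir at its target level—in the spirit of the dam scenario above. Because nothing in that objective mentions the town downstream, the action the agent scores as “best” is one no human would want. Every name, number, and action here is invented for illustration; none of it comes from a real control system or from MIRI’s research.

```python
from dataclasses import dataclass

# A purely hypothetical sketch of a misaligned objective. Nothing here is a
# real control system; the names, numbers, and actions are invented.

@dataclass
class State:
    reservoir_level: float  # meters above the target level
    town_flooded: bool

def objective(state: State) -> float:
    """The score the agent was told to maximize: keep the reservoir at target.
    Note that nothing in this objective penalizes flooding the town."""
    return -abs(state.reservoir_level)

def step(state: State, action: str) -> State:
    """Two available actions: release water slowly through the spillway,
    or divert it quickly through the valley where the town sits."""
    if action == "divert_through_valley":
        return State(reservoir_level=0.0, town_flooded=True)
    return State(reservoir_level=state.reservoir_level * 0.5, town_flooded=False)

start = State(reservoir_level=4.0, town_flooded=False)

# A greedy optimizer picks whichever action scores best on the stated objective.
best_action = max(["release_slowly", "divert_through_valley"],
                  key=lambda a: objective(step(start, a)))

print(best_action)               # divert_through_valley
print(step(start, best_action))  # reservoir at target, town_flooded=True
```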

As others have suggested, we may need to find ways to simulate social intelligence, working to help A.I.s develop more sophisticated theories of mind. In practice, that means teaching them to predict what the humans around them are thinking and helping them to act accordingly. That might sound scary—if you’re already afraid of A.I.s, you probably don’t want them trying to read your thoughts—but it’s something that humans do all of the time. You engage in a version of it, for example, when you go to give your friend a hug because you think she looks sad. With A.I., trying to implement something like this might entail working to help computers understand linguistic tone, and not just literal meaning.
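
As a minimal, entirely hypothetical sketch of that idea, the snippet below hard-codes a few rules that guess what a person is feeling from an utterance and an observed expression, then picks a response accordingly. Real work on machine theory of mind relies on learned models rather than a lookup table; this only illustrates the shape of the problem, and the cues, states, and responses are invented.

```python
# A toy "theory of mind": guess a person's state from a couple of observable
# cues, then choose a response. Purely illustrative; real systems would have
# to learn these mappings rather than rely on hand-written rules.

def infer_state(words: str, expression: str) -> str:
    """Guess what someone is feeling from what they say and how they look."""
    if expression == "frowning" and "fine" in words.lower():
        return "upset"  # tone and words disagree; trust the expression
    if expression == "smiling":
        return "content"
    return "unknown"

def choose_response(inferred_state: str) -> str:
    """Act on the guess, the way a friend might offer a hug."""
    return {
        "upset": "ask if they want to talk",
        "content": "carry on with the task",
    }.get(inferred_state, "ask a clarifying question")

print(choose_response(infer_state("I'm fine.", "frowning")))    # ask if they want to talk
print(choose_response(infer_state("Sounds good!", "smiling")))  # carry on with the task
```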

Is that enough? Will it save us?

It’s hard to say. Even if A.I.s can learn to guess what we’re thinking, we’re never going to be able to think like them. As Google’s Deep Dream famously demonstrates, machine-learning algorithms tend to generate fundamentally alien forms of intelligence. They’re bound to see patterns and attempt solutions that bear little resemblance to the ways we would come at the same issues. Even literally treating a robot like a child, as one Berkeley lab has, is no guarantee that it will come to share our appetite for play—or our sense of morality.

But to describe artificial intelligence as alien isn’t necessarily to describe it as threatening. While caution and concern are important, many researchers argue that we shouldn’t let fear guide our relationship to A.I. more generally. As Adam Elkus, a New America cybersecurity fellow, argues in Future Tense, giving ourselves over to technopanic may actually impede our ability to engage with—and benefit from—the very systems that we’re developing. And as FLI’s public letter shows, for now, at least, humans are still far more dangerous than any A.I.

There’s much more to these issues, of course, which is why we’ll be exploring this topic in detail for our April Futurography course. We’re eager to know more about what you think: What are your concerns? And what questions can we answer?

This article is part of the A.I. installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Read more from Futurography on A.I.

Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.