Future Tense

The Real Reason Elon Musk Is Worried About Killer Robots

Elon Musk joined Stephen Hawking and over 1,000 others to call for a ban on offensive autonomous weapons.


If you believe Elon Musk, you should be very, very afraid of killer robots, but maybe not for the reason you think. In an open letter published Tuesday by the Future of Life Institute, Musk, Stephen Hawking, and thousands of co-signatories call for a “ban on offensive autonomous weapons beyond meaningful human control.” This is the kind of phrase that summons up images of Arnold Schwarzenegger in the Terminator films, but that’s not what Musk and his collaborators seem to have in mind.

Nevertheless, it’s this familiar image of dystopian robopocalypse that opens all too many stories about the letter. The Washington Post, New York Times, and Huffington Post—to name but three examples—all illustrate their articles on the topic with Terminator stills. Though the articles’ authors don’t come out and say it, the connotations are clear: The robots are coming, and they want your blood.

Far from worrying that artificially intelligent killing machines are going to wipe out humanity, however, FLI has a more immediately relevant concern: research priorities. Musk has famously described artificial intelligence as an “existential threat.” But he has also backed research meant to let society “reap the benefits” of artificial intelligence “while avoiding potential pitfalls.”

This is not the first time the FLI has broached the issues surrounding A.I. in an open letter. In a previous missive, issued in January, the institute proposed that researchers work to “maximize the societal benefit of A.I.” by ensuring that intelligent systems “do what we want them to do.” While the attached statement of research priorities touched on autonomous weapons, it did so only in passing, offering little indication of how research on them should proceed.

A careful reading of the FLI’s latest open letter on autonomous warfare reveals that its authors aim to correct this oversight. “If any major military power pushes ahead with A.I. weapon development, a global arms race is virtually inevitable,” they write. Here, the danger isn’t so much that the technology will become ever more powerful as that ever more research energy will be directed toward military A.I. As that happens, fewer resources will be left for those hoping to design A.I. that preserves and sustains life.

The letter also suggests that as autonomous weapons become easier to produce, they will inevitably fall into the “hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.” While this is a serious and real concern, it is a far cry from the hyperbolic fantasies suggested by comparisons to the Terminator films. FLI isn’t worried that A.I. will set out to kill humans. It’s concerned that humans will use A.I. to more efficiently kill one another.

Far from warning of an impending robopocalypse, then, FLI and the letter’s many co-signatories are encouraging us to rethink the way we approach A.I. today. The letter compares its proposed moratorium on autonomous weapons development to bans on chemical and biological warfare. Refraining from research into these areas doesn’t mean A.I. is on the verge of destroying all life—just that we don’t feel such research contributes to the experience of living. As Cecilia Tilli, who signed the January FLI artificial-intelligence letter, wrote in Slate, “being mindful doesn’t mean that experts believe danger lurks behind the next advance in artificial intelligence.”

It’s unfortunate that the FLI’s letter has contributed to fears about A.I. Adam Elkus has argued that such excessive concerns only make it harder for most of us to educate ourselves about what’s really going on. If we’re really going to follow the advice of Musk, Hawking, and their co-signatories, we should focus more clearly on A.I.’s “great potential to benefit humanity” and work to ensure that it can do so.