Elon Musk Calls Artificial Intelligence “Our Biggest Existential Threat.” He’s Wrong.

The citizen’s guide to the future.
Oct. 31, 2014, 10:51 AM
FROM SLATE, NEW AMERICA, AND ASU

Don’t Fear Artificial Intelligence

Tesla founder and chief executive Elon Musk unveils the new Tesla ‘D’ model in Los Angeles on Oct. 9, 2014.

Photo by Mark Ralston/AFP/Getty Images

Ever since the 1927 film Metropolis introduced moviegoers to the first cinematic evil robot (a demagogic temptress impersonating a labor activist), society has reacted to the steady advance of artificial intelligence, robots, and other intelligent systems with a mixture of wonder and sheer terror. Computer scientists work to counterbalance these fears by striving to build “moral” machines and human-friendly AI. Yet the core flaw of this effort is that it assumes the technology, and not our emotional, human reactions to it, is the problem. Adapting to the complexities of a “second machine age” will require addressing these understandable fears without succumbing to them. Unfortunately, our tendency to indulge in overwrought fear mongering could hinder our autonomy in a world that may come to be powerfully shaped by autonomous machines.

Tesla CEO and famous technology innovator Elon Musk has repeatedly warned about AI threats. In June, he said on CNBC that he had invested in AI research because “I like to just keep an eye on what's going on with artificial intelligence. I think there is a potential dangerous outcome there.” He went on to invoke The Terminator. In August, he tweeted that “We need to be super careful with AI. Potentially more dangerous than nukes.” And at a recent MIT symposium, Musk dubbed AI an “existential threat” to the human race and a “demon” that foolish scientists and technologists are “summoning.” Musk likened the idea of control over such a force to the delusions of “guy[s] with a pentagram and holy water” who are sure they can control a supernatural force—until it devours them. As Musk himself suggests elsewhere in his remarks, the solution to the problem lies in sober and considered collaboration between scientists and policymakers. However, it is hard to see how talk of “demons” advances this noble goal. In fact, it may actively hinder it.

First, the idea of a Skynet scenario itself has enormous holes. While computer science researchers allow that Musk’s musings are “not completely crazy,” such scenarios remain awfully remote from a world in which AI hype masks the far less artificially intelligent realities that our nation’s computer scientists actually grapple with:

Yann LeCun, the head of Facebook’s AI lab, summed it up in a Google+ post back in 2013: “Hype is dangerous to AI. Hype killed AI four times in the last five decades. AI Hype must be stopped.” … Forget the Terminator. We have to be measured in how we talk about AI. … the fact is, our “smartest” AI is about as intelligent as a toddler—and only when it comes to instrumental tasks like information recall. Most roboticists are still trying to get a robot hand to pick up a ball or run around without falling over, not putting the finishing touches on Skynet.
LeCun and others are right to fear the consequences of hype. Failure to live up to sci-fi–fueled expectations, after all, often results in harsh cuts to AI research budgets. But that’s by no means the only risk inherent in Musk’s talk of supernatural (not artificial) intelligence.

Technology law and policy specialist Adam Thierer has developed a theory of something he calls the “technopanic”—a moral panic over a vague, looming technological threat driven by crowd irrationality and threat inflation rather than sensible threat assessment. For example, instead of sensible policy discussions about the problems of cybersecurity, policy and media institutions trumpet the threat of a “cyber Pearl Harbor” that devastates America’s information infrastructure.

Never mind that even Stuxnet’s devastating impact was overhyped. Disregard more mundane but nonetheless serious issues of bugs in widely used open-source software like OpenSSL and the Bash shell. Pay no attention to the inconvenient fact that the entirely self-inflicted problem of our own government’s insatiable desire to compromise consumer security with law enforcement backdoors puts the average user in just as much peril as any notional superhacker’s evil designs. When America believes a looming “cyber Pearl Harbor” is on the way, no one wants to be the 21st-century Admiral Husband E. Kimmel.

Thierer diagnoses six factors that drive technopanics: generational differences that lead to fear of the new, “hypernostalgia” for illusory good old days, the economic incentive for reporters and pundits to fear-monger, special interests jostling for government favor, projection of moral and cultural debates onto new technologies, and elitist attitudes among academic skeptics and cultural critics disdainful of new technologies and tools adopted by the mass public. All of these are perfectly reasonable explanations, but a seventh factor also matters: the psychological consequences of human dependence on complex technology in almost all areas of modern life.

As sociologists of technology argue, we depend on technology that we ourselves cannot understand or control. Instead, we are forced to trust that the systems and subsystems we rely on, and the experts who maintain them, function as advertised. Passengers may have vague notions of the physics of flight, but not of the calculations that actually keep the airplane aloft. Moreover, no single engineer on the plane’s design team has full knowledge of every component. Complex yet absolutely crucial technologies like airplanes are foreign and mysterious to us. Yet even this, if anything, underplays the problem. Contra Star Trek, for many users their iPhone or iPad is the “undiscovered country.”

In this light, Arthur C. Clarke’s famous quote that advanced technology is “indistinguishable from magic” explains why Musk reached for explicitly occult imagery more characteristic of Buffy the Vampire Slayer than anything out of Stuart Russell and Peter Norvig’s widely used AI textbook. Modern technology to us is a kind of black magic, shrouded in mysticism and occlusion and dominated by a select coterie of sorcerers who conjure up spells with C++ and Java instead of a “pentagram and holy water.”

Rhetoric like Musk’s is not harmless. As sociologist of technology Sean Lawson argues, fear of drones has already resulted in draconian restrictions on nongovernmental unmanned-aerial-system use that stifle innovation and trample on civil liberties. As Lawson notes, the Federal Aviation Administration has sought to prevent volunteers from using drones to find missing persons and has even threatened a news outlet looking to publish footage recorded by consumer drones. While Musk may hope that his concern drives sensible anticipatory regulation by domestic and international authorities, it is hard to see how loose talk of AI demon-summoning contributes to anything except the kind of regulatory bungling that Lawson documents.

But the biggest negative impact of AI fear mongering may not lie in the regulatory realm. Instead, it could very well reinforce and worsen the state of learned helplessness that characterizes the average Joe or Jane’s relationship to and dependence on complex technology. At best, computing is a necessary chore for many users. At worst, computing is bewildering and alienating, sometimes requiring the intervention of technical specialists with arcane knowledge bases. Experts often lament that the mass public and the people who represent them are ignorant of technological details and thus make poor choices concerning technology in both day-to-day life and regulatory policy.

Technopanics didn’t create the divide between the Linuxless masses and the Geek Squad—but they arguably worsen it. When public figures like Musk characterize emerging technologies in mystical, alarmist, and metaphorical terms, they abandon the very science and technology that forged innovations like Tesla cars for the superstition and ignorance of what Carl Sagan famously dubbed the “demon-haunted world.” Instead of helping users understand, adapt to, and even empathize with the white-collar robot that may be joining their workplace, Musk’s remarks encourage them to fear and despise what they don’t understand. It is fitting that Musk’s remarks come so close to Halloween, as his rhetoric resembles that of the village elder in an old horror movie who whips up the villagers to bear pitchforks and torches to kill the monster in the decrepit old castle up the hill.

The greatest tragedy of the emergent AI technopanic that Musk fuels is that it may reduce human autonomy in a world that may one day be driven by increasingly autonomous machine intelligence. Experts tell us that emerging AI technologies will fundamentally reshape everything from romantic relationships to national security. They could be wrong, as AI has an unfortunate history of failing to live up to expectations. Let’s assume, however, that they are right. Why would it be in the public interest to—through visions of demons, wizards, and warlocks—contribute to an already growing divide between the technologists who make the self-driving cars and the rest of us who will ride in them?

Debates in AI and public policy often hinge on trying to parse precisely what machine autonomy represents, but you don’t need a Ph.D. in computer science, or even a GitHub account, to know what it means to be an autonomous human interacting with technology. It means understanding (at least on some level) the everyday technology we use and being able to make confident decisions about how we use it. (Perhaps if users were encouraged to take charge of technology instead of fearing it, they wouldn’t need to take so many trips to the Genius Bar.) Yes, Musk is right that AI can’t be left purely to the programmers. But worrying about science fiction like Skynet could simply reinforce the “digital divide” between the tech haves and have-nots.

If Musk redirected his energies and helped us all learn how to understand and control intelligent systems, he could ensure that the technological future is one that everyone feels comfortable debating and weighing in on. A man of his creative-engineering talents and business instincts surely could help ensure that we get a Skynet for the John and Sarah Connors of the world, not just the Bay Area tech elites.* Granted, AI for the masses might not be Mars colonization or the Hyperloop. But it’s far more socially beneficial (and potentially profitable for tech gurus like Musk) than simply raging against the machine.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.

*Correction, Oct. 31, 2014: This article originally misspelled the last names of Terminator characters John and Sarah Connor. (Return.)