Elon Musk Calls Artificial Intelligence “Our Biggest Existential Threat.” He’s Wrong.

The citizen’s guide to the future.
Oct. 31, 2014, 10:51 AM
FROM SLATE, NEW AMERICA, AND ASU

Tesla founder and chief executive Elon Musk unveils the new Tesla ‘D’ model in Los Angeles on Oct. 9, 2014.

Photo by Mark Ralston/AFP/Getty Images

Ever since the 1927 film Metropolis introduced movie viewers to the first cinematic evil robot (a demagogic, labor activist-impersonating temptress), society has reacted to the cumulative influx of artificial intelligence, robots, and other intelligent systems with a mixture of wonder and sheer terror. Computer scientists work to counterbalance these fears by striving to make “moral” machines and human-friendly AI. Yet the core flaw of this effort is that it assumes the technology, and not our emotional, human reactions to it, is the problem. Adapting to the complexities of a “second machine age” will require addressing the understandable fears without succumbing to them. Unfortunately, our tendency to indulge in overwrought fearmongering could hinder our autonomy in a world that may come to be powerfully shaped by autonomous machines.

Tesla CEO and famous technology innovator Elon Musk has repeatedly warned about AI threats. In June, he said on CNBC that he had invested in AI research because “I like to just keep an eye on what's going on with artificial intelligence. I think there is a potential dangerous outcome there.” He went on to invoke The Terminator. In August, he tweeted that “We need to be super careful with AI. Potentially more dangerous than nukes.” And at a recent MIT symposium, Musk dubbed AI an “existential threat” to the human race and a “demon” that foolish scientists and technologists are “summoning.” Musk likened the idea of control over such a force to the delusions of “guy[s] with a pentagram and holy water” who are sure they can control a supernatural force—until it devours them. As Musk himself suggests elsewhere in his remarks, the solution to the problem lies in sober and considered collaboration between scientists and policymakers. However, it is hard to see how talk of “demons” advances this noble goal. In fact, it may actively hinder it.

First, the idea of a Skynet scenario itself has enormous holes. While computer science researchers think Musk’s musings are “not completely crazy,” his scenario remains awfully remote from a world in which AI hype masks the far less artificially intelligent realities that the nation’s computer scientists actually grapple with:

Yann LeCun, the head of Facebook’s AI lab, summed it up in a Google+ post back in 2013: “Hype is dangerous to AI. Hype killed AI four times in the last five decades. AI Hype must be stopped.” … Forget the Terminator. We have to be measured in how we talk about AI. … the fact is, our “smartest” AI is about as intelligent as a toddler—and only when it comes to instrumental tasks like information recall. Most roboticists are still trying to get a robot hand to pick up a ball or run around without falling over, not putting the finishing touches on Skynet.

LeCun and others are right to fear the consequences of hype. Failure to live up to sci-fi–fueled expectations, after all, often results in harsh cuts to AI research budgets. But that’s by no means the only risk inherent in Musk’s talk of supernatural (not artificial) intelligence.

Technology law and policy specialist Adam Thierer has developed a theory of something he calls the “technopanic”—a moral panic over a vague, looming technological threat driven by crowd irrationality and threat inflation rather than sensible threat assessment. For example, instead of sensible policy discussions about the problems of cybersecurity, policy and media institutions trumpet the threat of a “cyber Pearl Harbor” that devastates America’s information infrastructure.

Never mind that even Stuxnet’s devastating impact was overhyped. Disregard more mundane but nonetheless serious issues of bugs in widely used open-source software like OpenSSL and the UNIX Bash shell. Pay no attention to the inconvenient fact that the entirely self-inflicted problem of our own government’s insatiable desire to compromise consumer security with law enforcement backdoors puts the average user in just as much peril as any notional superhacker’s evil designs. When America believes a looming “cyber Pearl Harbor” is on the way, no one wants to be the 21st-century Admiral Husband E. Kimmel.

Thierer diagnoses six factors that drive technopanics: generational differences that lead to fear of the new, “hypernostalgia” for illusory good old days, the economic incentive for reporters and pundits to fear-monger, special interests jostling for government favor, projection of moral and cultural debates onto new technologies, and elitist attitudes among academic skeptics and cultural critics disdainful of new technologies and tools adopted by the mass public. All of these are perfectly reasonable explanations, but a seventh factor also matters: the psychological consequences of human dependence on complex technology in almost all areas of modern life.

As sociologists of technology argue, we depend on technology we ourselves cannot understand or control. Instead, we are forced to trust that the systems and subsystems we depend on, and the experts who maintain them, function as advertised. Passengers may have vague notions of the physics behind flight, but not of the formulas and engineering calculations that keep the airplane aloft. Moreover, no single engineer on the plane’s design team has full knowledge of every component. Complex yet absolutely crucial technologies like airplanes are foreign and mysterious to us. Yet even this, if anything, underplays the problem. Contra Star Trek, for many users their iPhone or iPad is the “undiscovered country.”