Future Tense

Killer Robots? Lost Jobs?

The threats that artificial intelligence researchers actually worry about.


Elon Musk, Stephen Hawking, and Bill Gates.

Photo illustration by Sofya Levina. Images by Mike Windle/Thinkstock, Bryan Bedder/Thinkstock, and Joshua Lott/Getty Images.

The recent win of AlphaGo over Lee Sedol—one of the world’s highest-ranked Go players—has resurfaced concerns about artificial intelligence. Yes, IBM Deep Blue’s win over chess master Garry Kasparov in 1997 and IBM Watson’s 2011 Jeopardy! victory over the two highest-earning champions had a similar effect, but this time the win comes after a year full of warnings about impending changes to our lives and society, good and bad, that will result from artificial intelligence. We have heard about A.I. stealing jobs, killer robots, algorithms that help diagnose and cure cancer, competent self-driving cars, perfect poker players, and more. It seems that for every mention of A.I. as humanity’s top existential risk, there is a mention of its power to solve humanity’s biggest challenges. Demis Hassabis—founder of Google DeepMind, the company behind AlphaGo—views A.I. as “potentially a meta-solution to any problem,” and Eric Horvitz—director of research at Microsoft’s Redmond, Washington, lab—claims that “A.I. will be incredibly empowering to humanity.” By contrast, Bill Gates has called A.I. “a huge challenge” and something to “worry about,” and Stephen Hawking has warned about A.I. ending humanity. Elon Musk has likened A.I. to “summoning up the demon” and donated $10 million toward efforts to keep it safe—but he’s also created his own A.I. company. What’s going on? If the experts can’t agree on whether A.I. is something to be feared or embraced, how can laypeople make up their minds?

The term artificial intelligence conjures up images of humanlike robots with superior mental and/or physical capabilities. But this association is due mostly to science fiction—and some early hubris from A.I. researchers—and neglects the historical, more accurate meaning of the term. As coined by John McCarthy, it named a new field of study in computer science, distinct from cybernetics. The goal of this new field was to create generally intelligent machines, but as years passed and progress eluded researchers, it refocused on creating machines that acted intelligently, at least in some constrained domains, regardless of how that was achieved. As a result, the field of artificial intelligence turned its focus to narrow A.I.: systems that are highly capable, but only within a constrained domain (like Google’s search algorithm, Bloomberg’s trading software, or Netflix’s movie recommendation system).

By contrast, general artificial intelligence, or A.G.I., has remained extremely elusive, to the point of being considered a nonfundable research goal, pursued only by brave, lone researchers. But this has all changed with recent progress in deep learning, the subfield of A.I. behind most of the current hype. New techniques, improved computational power, and loads of data have recently boosted this approach. Some believe it will be sufficient to take us all the way to A.G.I., while others remain loyal to the rules- and logic-based approach (known as “good old-fashioned A.I.,” or GOFAI).

As always, it is hard to predict how often a breakthrough will happen and how far it will take us, but recent surveys reveal an optimistic outlook. In 2012 Vincent Müller and Nick Bostrom of the University of Oxford asked 550 A.I. experts how soon we should expect “high-level machine intelligence,” defined as intelligence that can carry out most human professions at least as well as a typical human. The median estimate was 2022 for a 10-percent likelihood, 2040 for a 50-percent likelihood, and 2075 for a 90-percent likelihood. Further, when asked about the overall impact of high-level machine intelligence on humanity, half the experts thought it would be positive, but 48 percent thought it would be mostly neutral or bad. That divide isn’t surprising, given the ups and downs of the history of artificial intelligence—and its highly uncertain future.

What this means is that fears about A.I. take a very different form depending on whom you ask. For those working on improving the capabilities of narrow A.I., combining and further developing current approaches can be a very successful way of tackling difficult problems (like language understanding) while eschewing the question of “real intelligence.” As long as the system performs well, it does not matter how close it gets to general intelligence, let alone intentionality, consciousness, self-awareness, sentience, or whatever you take to be the mark of humanlike intelligence. For these researchers, the true dangers of developing A.I. are mostly confined to displacing human labor, particularly workers whose “menial” tasks can now be performed by machines.

By contrast, researchers focused on developing A.G.I. need to consider the consequences of the biggest technological change humanity will face. Stuart Russell, a professor of computer science at the University of California, Berkeley, and co-author of the standard textbook in A.I., points out that if the goal of the field is to develop ever more capable artificial intelligence, researchers ought to consider how risky that endeavor is. In particular, researchers should emphasize the social benefits of A.I. over its capabilities and take care that the systems they create behave as they were intended to behave. As he puts it, “we need to build intelligence that is provably aligned with human values.” (Read an interview with Russell in Future Tense.)

But not all computer scientists are so amenable to discussing the dangers of A.I., even if they are working toward the goal of A.G.I. One reason is that they believe politicians and policymakers do not have sophisticated knowledge of the field and are prone to being swayed by alarmist messages. They believe that raising these issues will lead to cuts in funding, first and foremost, and only secondarily to sensible discussions of what is likely—or even possible—in the field of A.I. Other researchers’ funding directly depends on developing technology for the Defense Advanced Research Projects Agency, or DARPA, so raising the alarm over A.I.—especially “killer robots”—is not in their best interest. (And, of course, there is the question of scientific drive, regardless of the consequences. As Geoffrey Hinton—the godfather of deep learning—put it, for many “the prospect of discovery is too sweet.”)

In addition, many see A.G.I. as more of an ideal to strive for, rather than an actual possibility (at least for the foreseeable future), so there is no need to fret about it now. Some researchers think that the benefits of developing an A.G.I. far outweigh the risks, and the question of control is a moot point. Intelligent systems will be developed by humans and controlled by humans, so there is nothing to worry about.

On this last issue, A.I. safety researchers strongly disagree: There is no reason to believe that we will be able to control generally intelligent, let alone superintelligent, systems. In fact, we have no idea of what a world with (super)intelligent machines would be like. But we have at least a good idea of potentially harmful scenarios—like, say, an A.I. gaining unilateral control of all resources—so the earlier we start thinking about it, the better.

If we suspend judgment on the probability (or even possibility) of A.G.I., the biggest concern right now about machine intelligence is the impact it will have on jobs. According to a study by Carl Frey and Michael Osborne of the University of Oxford, almost 50 percent of jobs in the U.S. and U.K. are susceptible to automation. Some people believe that this is no different from previous waves of automation: Jobs will be lost to machines, but other jobs will be created instead. Others think that the automation of jobs will lead to increased productivity but decreased employability. In this scenario, we’ll need a serious rethinking of the distribution of income—perhaps via a universal basic income—if we do not want a large part of the population to be left without a way to subsist. Further, the lack of employment for a large number of people could have other serious, negative consequences, like a lost sense of purpose, depression and other mental illnesses, and the loss of an important avenue for socializing. Concerns about the role of autonomous artificial agents in war and political dissent have been discussed by Heather Roff in this recent Slate article. There are also issues related to privacy, cybersecurity, law, and ethics.

As interactions with machines increase in our daily lives, we will need to learn to embed them with values that are aligned with ours. But the task is hard: We need to first determine what those values are and then program them into machines so that they can be reliably applied, without deeply unwanted consequences. This is no easy feat. Importantly, it is a social challenge. At the moment, the war we must fight is not against intelligent agents but rather against those making decisions on the basis of corporate profits or narrow interests, not with the common good or the future of humanity in mind.

This article is part of the artificial intelligence installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Read more from Futurography on artificial intelligence.

Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.