Future Tense

Striking the Balance on Artificial Intelligence

We need to be cautious about A.I. But that doesn’t mean you should fear it.


In January, I joined Stephen Hawking, Elon Musk, Lord Martin Rees, and other artificial intelligence researchers, policymakers, and entrepreneurs in signing an open letter asking for a change in A.I. research priorities. The letter was the product of a four-day conference held in Puerto Rico that month, and it makes three claims:

  • Current A.I. research seeks to develop intelligent agents. The foremost goal of research is to construct systems that perceive and act in a particular environment at (or above) human level.
  • A.I. research is advancing very quickly and has great potential to benefit humanity. Fast and steady progress in A.I. points to a growing impact on society. The potential benefits are unprecedented, so the emphasis should be on developing “useful A.I.,” rather than simply on improving capacity.
  • With great power comes great responsibility. A.I. has great potential to help humanity, but it can also be extremely damaging. Hence, great care is needed in reaping its benefits while avoiding potential pitfalls.

In response to the release of this letter (which anyone can now sign and has been endorsed by more than 6,000 people), news organizations published articles with headlines like:

“Elon Musk, Stephen Hawking Warn of Artificial Intelligence Dangers.” “Don’t Let Artificial Intelligence Take Over, Top Scientists Warn.” “Big Science Names Sign Open Letter Detailing AI Danger.” “AI Has Arrived, and That Really Worries the World’s Brightest Minds.”

There is a disconnect here—the letter was an appeal to sense, not a warning of impending danger. But it’s not surprising: Alongside increased investment in A.I. and renewed hope in its capabilities, there have been plenty of headlines warning us about the end of mankind, the need for protection from machines, real-life Terminators, and keeping Skynet at bay. As someone involved in A.I. safety research, I know these headlines misrepresent our concerns and the state of the field by blurring the distinction between narrow and strong A.I., which carry very different probabilities and risks. Understanding the difference helps explain why we do indeed need to be mindful of A.I.’s potential downsides. But being mindful doesn’t mean that experts believe danger lurks behind the next advance in artificial intelligence.

Most current A.I. systems are narrow: They can be extremely effective, but they are nowhere near general intelligence. Their capabilities are specific, with very little flexibility. For example, a system that plays poker is usually terrible at chess. This lack of adaptability distinguishes current A.I. systems from science fiction’s portrayals of artificial intelligence—think of the difference between your GPS system and 2001: A Space Odyssey’s HAL or Blade Runner’s replicants. While the latter are general thinkers—A.I.s that tackle the whole range of problems humans deal with in everyday life—a GPS system can only tell you about your location. The difference is crucial when evaluating the present and future of A.I. research.

For decades A.I. research made progress on concrete, specific tasks (recognizing faces, playing games, answering queries) while failing at big-picture problems (moving around an unconstrained environment, understanding and producing language). However, the current boom, fueled by progress in deep learning alongside traditional techniques, makes it likely that progress on big-picture problems will be steadier from now on. The potential benefits are extraordinary, but this progress also raises the question of possible harms. The crucial issue is how to reap the benefits while avoiding unlikely but catastrophic consequences.

The benefits of narrow A.I. systems are clear: They free up time by automatically completing tasks that are time-consuming for humans. They are not completely autonomous, but many require only minimal human intervention—the better the system, the less we need to do. A.I.s can also do other useful things that humans can’t, like proving certain mathematical theorems or uncovering hidden patterns in data.

Like other technologies, however, current A.I. systems can cause harm if they fail or are badly designed. They can also be dangerous if they are intentionally misused (e.g., a driverless car carrying bombs or a drone carrying drugs). There are also legal and ethical concerns that need to be addressed as narrow A.I. becomes smarter: Who is liable for damages caused by autonomous cars? Should armed drones be allowed total autonomy?

Special consideration must be given to economic risks. The automation of jobs is on the rise. According to a study by Carl Frey and Michael Osborne (who are my colleagues at the University of Oxford), 47 percent of current U.S. jobs have a high probability of being automated by 2050, and a further 23 percent have a medium risk. Although the consequences are uncertain, some fear that increased job automation will lead to increased unemployment and inequality.

Given the already widespread use of narrow A.I., it’s easy to imagine the benefits of strong A.I. (also known as artificial general intelligence, or AGI). AGI should allow us to further automate work, amplify our ability to perform difficult tasks, and maybe even replace humans in some fields. (Think of what a fully autonomous, artificial surgeon could achieve.) More importantly, strong A.I. may help us finally solve long-standing problems—even deeply entrenched challenges like eradicating poverty and disease.  

But there are also important risks, and humanity’s extinction is only the most radical. Less extreme risks include widespread societal problems due to lack of work, extreme wealth inequality, and unbalanced global power.

Given even the remote possibility of such catastrophic outcomes, why are some people so unwilling to consider them? Why do people’s attitudes toward AGI risk vary so widely? The main reason is that two forecasts get confused. One concerns the possibility of achieving AGI in the foreseeable future; the other concerns its possible risks. These are two different questions, but many people run them together: “This is not happening any time soon” becomes “AGI presents no risks.”

In contrast, for many of us AGI is a real possibility within the next 100 years. If that is right, then unless we prepare ourselves for the challenge, AGI could present serious difficulties for humanity, the most extreme being extinction. Again, these worries might just be precautionary: We don’t know when AGI is coming or what its impact will be. But that is exactly why we need to investigate the matter: Assuming that nothing bad will happen is just negligent wishful thinking.

In Superintelligence, Nick Bostrom of the Oxford Martin School considers the most extreme case, in which an A.I. system has achieved and vastly surpassed human intelligence, becoming superintelligent. One of the main concerns here is how to make sure that such a system acts in a way that is beneficial to humanity, or “human-friendly.” The core problem is one of control: how to make sure that, once in existence, the machine acts in accordance with the goals we want it to pursue, rather than deviant goals of its own. According to Bostrom, there are two strategies for approaching the “control problem.” The first is to constrain the machine’s capabilities so that it cannot pursue its own goals. The second is to endow the machine with goals that are in line with human goals, so that it chooses to act in ways that are beneficial to humanity.

Both strategies present difficulties. In particular, the second strategy requires solving two difficult issues: the technical problem of “loading” values into a system, and the philosophical problem of determining what those values should be. Given that the question of which moral theory is correct remains unsettled after thousands of years of debate, it would be quite optimistic to expect A.I. engineers to solve it and learn to implement it in the next century—let alone the next few decades.

It could well be that we are lucky and we don’t have to solve these issues—who knows, maybe A.I. will solve these problems for us. But the general view is that we can’t leave it to chance: The consequences are too important to just play dice with the future of humanity.

In sum, the goal of outlining the risks of AGI is not to instill fear, but to emphasize how little progress has been made in solving these issues, and how necessary A.I. safety research is to ensure we are prepared for what may come. Though the media mostly focus on Terminator-like scenarios, it is important to keep in mind that the path to strong A.I. is paved with uncertainty and that the correct attitude is one of caution, rather than plain optimism (or pessimism). In line with this, the open letter—signed by A.I. and A.I.-safety researchers alike—focuses mostly on the beneficial aspects of A.I. research, while also stressing the need to avoid “potential pitfalls.”

After all, avoiding dangerous situations should be a goal for any human industry, especially one that has the potential to forever change the course of humanity.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.