We Need to Stop Killer Robots Now

Stories from New Scientist.
May 26 2013 6:30 AM


Autonomous weapons systems could lead to disaster.

A robot distributes promotional literature calling for a ban on fully autonomous weapons in Parliament Square on April 23, 2013, in London. The Campaign to Stop Killer Robots is calling for a pre-emptive ban on lethal robot weapons that could attack targets without human intervention.

Photo by Oli Scarff/Getty Images

Artificial intelligence expert Mark Bishop says a ban on weapons that can deploy and destroy without human intervention is vital. He is a professor of cognitive computing at Goldsmiths, University of London, and chairs the Society for the Study of Artificial Intelligence and Simulation of Behaviour.

Simon Makin: What is the Campaign to Stop Killer Robots?
Mark Bishop: It is a confederation of non-governmental organizations and pressure groups lobbying for a ban on producing and deploying fully autonomous weapon systems, where the ability of a human to both choose the precise target and intervene in the final decision to attack is removed.

SM: How close are we to this?
MB: Examples already exist. Some, such as the Phalanx gun system, used on the majority of U.S. Navy ships to detect and automatically engage incoming threats, have been around for some time. Another is the Israeli Harpy "fire-and-forget" unmanned aerial vehicle, which will seek out and destroy radar installations.


SM: What's driving the technology's development?
MB: Current Western military strategy focuses more on drones than on traditional forces, but remote-controlled drones are vulnerable to hijacking. Fully autonomous systems are virtually immune to this. They also lower costs, which means manufacturers can sell more of them, so there is a commercial imperative to develop autonomous systems and a financial incentive for governments to deploy them.

SM: What are the dangers?
MB: There are reasons to doubt whether autonomous systems can appropriately judge the need to engage, react to threats proportionately, or reliably discriminate between combatants and civilians. Also, when you get complex software systems interacting, there is huge potential for unforeseen consequences. A vivid example was seen on Amazon in 2011 when pricing bots raised the cost of a book, The Making of a Fly, to more than $23 million.
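The Making of a Fly incident is worth unpacking, because the mechanism is simple. Two sellers' repricing bots each set their price relative to the other's: one priced slightly below its rival, the other well above. Because the product of the two multipliers exceeded 1, each round of repricing pushed both prices up exponentially. A minimal sketch of that feedback loop, using the multipliers reported at the time (the loop itself is illustrative, not the sellers' actual code):

```python
def simulate_price_war(start_price, rounds):
    """Two bots reprice against each other's current listing.

    seller_a undercuts seller_b by a hair (x0.9983);
    seller_b prices well above seller_a (x1.270589).
    Net effect per round: prices grow by ~27 percent.
    """
    seller_a = start_price
    seller_b = start_price
    for _ in range(rounds):
        seller_a = 0.9983 * seller_b     # price just below the rival
        seller_b = 1.270589 * seller_a   # price well above the rival
    return seller_a, seller_b

# After 45 rounds of mutual repricing, a $100 book costs millions:
a, b = simulate_price_war(100.0, 45)
```

Neither bot is malfunctioning in isolation; the runaway behavior only emerges from their interaction, which is exactly the kind of unforeseen consequence Bishop warns about.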

SM: So, you are worried about escalation?
MB: Yes. In South Korea scientists are developing a robot to patrol the border with North Korea. If this were deployed and engaged incorrectly or disproportionately, it is easy to imagine a minor border incursion escalating into a serious confrontation. Even more frighteningly, in 1983 a Soviet automatic early-warning system falsely reported incoming U.S. missiles, and it was only the intervention of a Soviet colonel, Stanislav Petrov, that averted possible nuclear war. But the potential for escalation gets particularly scary when you have autonomous systems interacting with other autonomous systems.

SM: Couldn't robots reduce risk to humans?
MB: There is a case, put forward by people such as U.S. roboticist Ronald Arkin, that robots might make more dispassionate assessments than grieving or revenge-seeking soldiers. Not only does this not address the problem of escalation, it also only holds water if systems can reliably decide when to engage, judge proportionality, and accurately discriminate targets.

SM: So what should we do?
MB: The technology behind autonomous systems has other uses, such as Google's self-driving car system, so banning development would be difficult. Instead, we must focus on a global treaty banning the deployment of autonomous weapons.

This article originally appeared in New Scientist.
