Future Tense

Who Will Be Accountable for Military Technology?

As drones, robots, and even enhanced soldiers take the battlefield, questions of responsibility get more complicated.

A crew member of the USS Vincennes checks a guided missile launcher in 2002. The technologically advanced Vincennes shot down an Iranian passenger jet in 1988.

Photo by Gabriel Mistral/Getty Images.


The “Global Campaign to Stop Killer Robots” kicked off in New York on Oct. 21. Nobel Peace Prize laureate Jody Williams urged the nations of the world to act against lethal autonomous robots, declaring them “beyond the pale.” Williams is not alone; on CNN earlier in October, Peter Bergen, the author of several best-selling books about Osama bin Laden, also argued for a convention regulating lethal robots. The International Committee for Robot Arms Control, a group of academic experts on robot technologies and international security, is on board as well. The pressure on the robots is mounting.

Underlying the debate about “killer robots” is concern that machines are not, and cannot be, legally accountable for their actions. As professor Oren Gross of the University of Minnesota Law School told this year’s inaugural “We Robot” conference on robots and the law in April, domestic and international law are not well suited to dealing with robots that commit war crimes.

As technology advances, we face a very real danger that it will become increasingly difficult to hold those who wage war on our behalf accountable for what they do. Artificial intelligence is not the only technology that poses such accountability problems. Bio-enhancement does, too, although of a different sort.

Machines entirely capable of replacing humans are not yet on the market, but robotic systems capable of using lethal force without a human in the loop do already exist. The U.S. Navy’s Aegis Combat System, which can autonomously track enemy aircraft and guide weapons onto them, is an example. But if a robot system goes “rogue” and commits what for a human would be a crime, there would not be much point in arresting the machine. Our gut instinct is that somebody should be held accountable, but it is difficult to see who. When the USS Vincennes shot down an Iranian airliner in 1988, killing 290 civilians, there were real people whose behavior one could investigate. If the Aegis system on the Vincennes had made the decision to shoot all by itself, it would have been much harder to know whom to hold responsible. When a robot decides, clear lines of responsibility are absent.

For obvious reasons, human beings do not like this. An experiment conducted by the Human Interaction With Nature and Technological Systems Lab at the University of Washington had a robot, named Robovie, lie to students and cheat them out of a $20 reward. Sixty percent of victims could not help feeling that Robovie was morally responsible for deceiving them. Commenting on the future use of robots in war, the HINTS experimenters noted in their final report that a military robot will probably be perceived by most “as partly, in some way, morally accountable for the harm it causes. This psychology will have to be factored into ongoing philosophical debate about robot ethics, jurisprudence, and the Laws of Armed Conflict.” Quite how this could be done is unclear.

An assumption of human free will is fundamental for any system of legal accountability. Unfortunately, the more the cognitive sciences develop, the more they suggest that our moral reasoning lies largely outside of our control—and can even be manipulated.

An example is transcranial magnetic stimulation. Experiments with TMS reveal that you can alter somebody’s moral reasoning using a powerful magnet. Unscrupulous military leaders could artificially distort their subordinates’ morality for the worse by attaching a TMS unit to their helmets. Yet if a soldier committed war crimes because somebody else had turned off his morals, it is hard to see how we could hold him responsible for his actions.

The Defense Advanced Research Projects Agency is already investigating the use of a portable TMS machine to counter fatigue. Professor Jonathan Moreno of the University of Pennsylvania told the Independent newspaper last year, “There is talk of TMS machines being used on battlefields within 10 years and in 10 years more in helmets.” This is all part of a significant effort that DARPA is putting into bio-enhancement in order to create a “supersoldier.”

Bio-enhancement per se is nothing new. Physical exercise, after all, is a method of enhancing the body. What is changing is the scale of the potential transformation. DARPA is funding scores of projects designed to improve humans, to give them, among other things, more energy, a greater ability to withstand cold, and a capacity to function well with less sleep. Scientists have already succeeded in artificially boosting muscle growth in monkeys by modifying their genes. Others are looking at cognitive enhancements that could allow soldiers to think faster and to control machines with their brains. RoboCop is still some way off, but a stronger, smarter soldier is quite conceivable.

This phenomenon could, in the long term, create some serious difficulties for civil-military relations. We already have trouble holding our military forces to account due to the widespread tendency to see servicemen and servicewomen as somehow superior to ordinary people—why should soldiers be entitled to board airplanes early, for instance?

There is a prevalent attitude that makes it very difficult to criticize anything the military does. Imagine what it would be like if bio-enhancement meant that it was actually true that the troops were “better” people than civilians. The deference many feel toward the military would be stronger than ever: We might believe that we had no right to tell people who are better than we are what to do or how to do it. The social divide between the military and the rest of society would deepen further. There is already a worrying tendency among some soldiers to look on their political masters with contempt. With bio-enhancement, an even greater arrogance might arise. This distinct caste of special people would be less inclined to listen to the civilian authorities.

These new technologies are not all bad, of course. Robots, for instance, offer some definite advantages. Unlike soldiers, super or not, they are not subject to emotions and, as a result, may make better judgments. The Aegis system on the Vincennes recorded the Iranian plane as climbing, not as descending to attack. Left to its own devices, it might not have fired. That leaves us with a dilemma: Should we choose the human, who is prone to mistakes but can at least be held responsible for what he does, or should we go with the robot, which may produce better outcomes but cannot be held to account?

There is no easy answer to this question, but we must address it before we deploy new superweapons or develop biologically enhanced warriors. Banning killer robots and other technologies may not be the solution, but as citizens of democratic states, it is both our right and our responsibility to consider whether the military advantages these technologies bring are worth the cost they may impose on our democratic order.

This article was inspired by the 2012 Chautauqua Council on Emerging Technologies and 21st Century Conflict, sponsored by Arizona State University’s Consortium for Emerging Technologies, Military Operations, and National Security and the Lincoln Center for Applied Ethics, and held at the Chautauqua Institution in New York. Future Tense is a partnership of Arizona State, the New America Foundation, and Slate magazine.