Future Tense

What Do People Around the World Think About Killer Robots?

International law about autonomous weapons systems needs to consider public opinion.

An Israeli flag flutters next to a Harpy IAI-MBT attack drone, part of the Israeli display during the 44th Paris Air Show at Le Bourget Airport, north of Paris, France, June 21, 2001.
This Harpy IAI-MBT attack drone is not an autonomous weapon, but it is partway there.

Francois Mori/Getty Images

Over the last few years, global leaders have started debating how to handle the prospect of autonomous weapons—aka killer robots—capable of selecting and engaging targets without human intervention. The implications here are, of course, enormous: Such a system would be able to identify a potential target and decide to fire upon it without a human telling it exactly what to do or perhaps even knowing what it’s going to do. While no military has announced that it possesses autonomous weapons, some countries’ armed forces do possess systems capable of loitering in an area “hunting for a target” and then firing upon it, such as the Israeli Harpy and Harop. Others have systems that can navigate by themselves, communicate with other weapons, and “decide” which target to fire upon from a preselected area or class of targets, like the U.S. Long-Range Anti-Ship Missile. These arms are halfway to a true autonomous weapon.

International discussions over the desirability or legality of these weapons in armed conflict are mostly confined to the U.N. Convention on Conventional Weapons. For the past three years, member states have sought expert advice in an “informal” (that is, not on the record) series of meetings. These experts came from the fields of robotics, artificial intelligence, weapons, ethics, and law. For example, I testified in 2015 and 2016 on the strategic rationale for pursuing such weapons as well as the present-day capabilities of militaries. In December 2016, the states agreed to formally take up the issue in a Group of Governmental Experts—really just the same thing they had been doing, but now on the record—and to continue debating whether such weapons need new regulation under international law or even a preemptive ban on their deployment and use in future conflicts.

Whether at the U.N. or in policy and academic circles, the arguments about autonomous weapons often focus on whether they will comply with existing laws of armed conflict. Advocates for their development suggest that well-designed autonomous systems should be considered a new kind of precision-guided munition: one with potential to limit destruction, save lives, and make decisions outside the fog of war. Opponents say their use would cross a clear legal line. Fully autonomous machines, they suggest, lack the human judgment necessary to distinguish between combatants and civilians, or to use only proportionate and necessary force.

But this debate, valuable though it may be, misses a very important point: whether these systems are contrary to a set of shared human values. Is the development and deployment of systems that can decide which targets to attack on their own, without human intervention, morally objectionable? Luckily, there is actually a principle in international law that attempts to give some sort of grounding here: the Martens Clause.

In 1899, Friedrich Martens, a Russian delegate to the first Hague Peace Conference, introduced the idea that because the laws of war are not exhaustive, there may be cases where civilians and combatants alike still deserve certain protections for reasons not found in explicit positive law. These reasons, he explained, were due to normative “principles of humanity and the dictates of public conscience.” This statement, now known as the Martens Clause, is considered customary international law. Even though the principle has been passed down from the Hague Peace Conferences to the Protocol Additional to the Geneva Conventions, states do not need to sign these treaties to be bound by it. It was invoked in the Nuremberg tribunals after World War II, as well as in the International Court of Justice’s advisory opinion on the legality of the threat or use of nuclear weapons.

The Martens Clause appears to be the key to resolving much of the dispute over autonomous weapons systems because it provides the necessary grounding for moral questions in international law, and it gives an opening for us to actually grasp what might be considered the “dictates of public conscience.” In other words, we can set aside the question of whether the technology can act in a particular way at a particular time, and instead ask whether it should do so. As the International Committee of the Red Cross explains, “there is a related question of whether the principles of humanity and the dictates of public conscience (the Martens Clause) allow life and death decisions to be taken by a machine with little or no human control.” But how would we begin to say that autonomous weapons systems uphold or violate the “principles of humanity and the dictates of public conscience”? How would we know whose principles are upheld or what “humanity” really believes?

Much of the answer depends on who gets to judge what constitutes the “dictates of public conscience.” Many international lawyers suggest, for instance, that it is really a matter of public opinion, while others argue instead that it is an interpretative question for states (and lawyers) to decide. Until now, there has been no good way to weigh public sentiment against state interests, because there was no reliable way to measure international public opinion. Recently, however, that changed—especially in reference to the killer robot debate.

On Tuesday, the polling firm IPSOS released results from the first global public opinion survey with representative sampling to include a question on autonomous weapons. Public opinion surveys until now have focused primarily on polling people in the United States, with some work in other Western states. The results from surveys in the U.S. do not generalize well to what the international public conscience might be, and the other surveys have small sample sizes that are difficult to extrapolate from. IPSOS, on the other hand, has data from 23 countries, with respondent pools of more than 1,000 in each, from almost every region of the globe. For the first time, we can empirically evaluate whether a coherent public opinion could act as a proxy for the Martens Clause on the issue of killer robots.

The IPSOS survey asked respondents:

The United Nations is reviewing the strategic, legal and moral implications of autonomous weapons systems. These systems are capable of independently selecting targets and attacking those targets without human intervention; they are thus different than current day “drones” where humans select and attack targets. How do you feel about the use of autonomous weapons in war?

Respondents were able to answer that they “strongly supported, supported, opposed, strongly opposed” or were “uncertain.” The findings are quite interesting.

Of the total number of respondents, 24 percent support, while 56 percent oppose, the use of autonomous weapons. With the exception of two outliers, China and India, the trends across states and within them are almost the same. (In the Chinese case, 47 percent of respondents support while 36 percent oppose; in the Indian case, 60 percent support while 31 percent oppose.)

What does this mean for the debate happening in Geneva and around the globe? Well, it appears that regardless of the region of the world, the majority of people surveyed oppose autonomous weapons. We might still inquire as to exactly why this is so, but the first attempt at measuring the dictates of public conscience is quite clear: There is more uniformity than one might have anticipated. This survey, then, could be one way for member states to begin discussing the possible regulation or prohibition of these weapons, for the publics of only a couple of states appear to support their use.

You could still object that the people within each of these states are not the states themselves, and that it is the state that must decide what its security interests and threats are—which is, after all, why states are driving the creation and adoption of such technologies. While this may certainly be true, it overlooks two important points. First, many of the justifications offered for adopting autonomous weapons systems are themselves moral ones. Second, since states are supposed to represent their people’s will, ignoring the empirical evidence that many of those people do not want such systems built or used in war would signal that democratic representation is not that important.

Looking ahead, we should now push on two fronts: understanding the outliers and using these findings to shape policy. As a political scientist, I’m interested in why China and India are significantly different from the rest of the states polled. Perhaps they are more accepting of technological change, or perhaps they are more open to new weapons that appear to provide them with a strategic advantage against a rival or adversary. As someone interested in policy, I’m interested in how we can bring the voices of the people into the halls of power, whether at the U.N. or within state capitals. The member states of the Convention on Conventional Weapons will meet in August and November of this year, and that is a chance to provide them with evidence of what their people think about killer robots.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.