Future Tense

What Human Rights Watch’s “Case Against Killer Robots” Gets Wrong About Military Realities

A model of an unmanned aerial vehicle outside the White House, part of a protest against the use of drones. Photo by Brendan Smialowski/AFP/Getty Images.

On Monday, to much fanfare, Human Rights Watch released a new report titled “Losing Humanity: The Case Against Killer Robots.” Unfortunately, the report misses an opportunity: Instead of contributing to an important discussion of challenging and serious issues, it merely reinforces an increasingly dysfunctional fantasy of human vs. machine.

The report begins by arguing that “killer robots,” defined as “fully autonomous weapons that could select and engage targets without human intervention,” might be “developed within 20 to 30 years” and therefore, “a preemptive prohibition on their development and use is needed.” Others will undoubtedly point out that such technologies have in fact already been deployed in places such as the Korean DMZ, and that calls for bans on major technology systems based on dystopian hypotheticals are at best ill-informed. The International Committee of the Red Cross, for example, takes a more nuanced view of drones and robotic systems, noting the possibility that they can be used in such a way as to reduce collateral damage and civilian injury and death.

The fundamental problem with the report, however, is its adoption of the Frankenstein worldview. The “humans vs. technology” myth is quite powerful, especially in Western cultures, and always popular with audiences. But it is too flawed and oversimplified a foundation on which to build policy.

First, consider the “human vs. machine” assumption apparent not just in the title, but underlying the entire analysis. That’s not the way the world works. The young do not reject social networking, the latest smartphone, hybrid vehicles, or custom pharma … and, increasingly, neither do their elders. They do not fight technology—they adopt it. This is not a Matrix human vs. machine world; this is the world of the ape in 2001: A Space Odyssey, in which humans blend with their technologies. Indeed, it is sometimes argued that humans came to dominate their planet precisely because they integrate so easily and so effortlessly with their technological environment. The Frankensteinian label of “killer robots” is good PR, and good fantasy noir, but blinks reality.

And the second problem, of course, is that reality. The reasons that robotic systems of many different kinds are being deployed are basic and compelling: They reduce military casualties and, at least in some cases, they reduce civilian casualties. Most importantly, though, they are essential under modern battlefield conditions, in which the complexity of the systems involved, the amount of information flowing through military systems, and the cycle time of combat are simply beyond human perceptual and cognitive capabilities. Increasingly, cognition and behavior in battlefield space arise from integrated techno-human systems such as the proposed PIXNET system—which would integrate visible and infrared camera feeds with helmet-mounted heads-up displays, interface with Android smartphones, and run all the appropriate apps—or the Cognitive Technology Threat Warning System that links human brainwaves, sensors, cognitive algorithms, and information flows from various devices.

It is not that emerging military and security technologies don’t raise critical ethical, policy, and governance issues. Clearly, they do. It is that to be useful, a discussion must be based on the demands of the real world, and the reality of augmented cognition and integrated techno-human systems, not “killer robots” derived from Frankensteinian fantasy. This report, unfortunately, substitutes righteous outrage over straw robots for a serious contribution to that necessary dialog.

Earlier in Slate, Allenby and Carolyn Mattick explained why new technologies mean we need to rewrite the “rules of war,” Fritz Allhoff examined the paradox of nonlethal weapons, and Paul Robinson discussed the accountability gap that arises with some military technologies.