Future Tense

Killer Robots on the Battlefield

The danger of using a war of attrition strategy with autonomous weapons.

Photo: The United Nations offices in Geneva, where members will soon meet to discuss autonomous weapons. (Palis Michalis/Thinkstock)

On April 11, member states of the United Nations will convene another informal meeting of experts under the auspices of the Convention on Certain Conventional Weapons, a treaty that prohibits weapons deemed to have indiscriminate effects or to cause excessive injury. They are meeting for the third time in Geneva to consider whether to preemptively ban autonomous weapons, or, more colloquially, “killer robots.” As the member states gear up to hear another five days of testimony from experts (of which I am one), it is a useful exercise to consider the thinking behind creating and deploying autonomous weapons.

For those new to these terms, lethal autonomous weapons are weapons that can “select and engage” targets without the intervention of a human operator. Unlike “drones,” which are piloted by human beings and whose operators identify and authorize target engagement, autonomous weapons would be capable of finding, selecting, and engaging targets without human oversight. They could take the form of “defensive” systems, such as the U.S. Counter Rocket, Artillery, and Mortar system, which intercepts incoming munitions at speeds too fast for a human to react, or they could look much different: missiles, ground vehicles, or naval vessels capable of operating on their own with sophisticated automated target-recognition software. The key is not what the weapon looks like but that it is delegated the task, independent of environment, of selecting and engaging targets by itself.

This autonomous robotic future is currently lauded by the United States, particularly through its Third Offset Strategy. The Third Offset made its debut in a think tank paper several years ago. It holds that warfare in the “robotic age” will require “rapid advances” in massive computing power (and thus in microelectronics), miniaturization, 3-D printing, big data, and stronger artificial intelligence. And if humans are going to keep up with these new technologies and the speed at which they move, it stands to reason that we will also need advances in virtual reality, cognitive science, biology, and pharmacology to enable human enhancement.

That may sound reasonable: The new technologies transforming our lives will also transform the battlefield. But though it is never labeled as such, the Third Offset Strategy is actually something many would consider pernicious: not a strategy of offset but one of attrition, something that policymakers rarely endorse openly or accept readily.

Since witnessing Germany’s 1916 strategy at Verdun to “bleed France white,” the international community has viewed wars of attrition as bloody, indecisive, and politically distasteful. Yet 100 years on, a movement is afoot to bring back this exact strategy. The difference between 1916 and 2016 is clear: Modern militaries want to avoid mass casualties by relying on increased distances and (potentially) autonomous weapons. But the potential results are the same: deleterious effects on civilian populations and instability in the international system.

Attrition is usually associated with the destruction of military forces, a wearing down of each side to the point where one opponent can no longer fight. Strategies of attrition typically evolve when one or both sides know that a single decisive battle is impossible and would rather fight an ongoing, iterative conflict to maximize their probability of success. Indeed, according to J. Boone Bartholomees, a professor of military history at the U.S. Army War College, “the measure of success is how much one hurts the enemy,” rendering it “incapable of defending itself and … destroyed, leaving exactly the same strategic outcome as an annihilation victory.”

It seems beyond reason to start with a strategy of attrition; it would be better to try to dominate from the outset. This is what the Third Offset parades itself as doing: using maneuver, deception, denial, or simply “superior technology” to win. This is in fact the offset strategy the United States adopted early in the Cold War. It knew that the Soviet Union possessed more troops, occupied more land, and maintained similar force posturing, so the U.S. devoted much time and energy to science and technology to boost its competitive edge, developing stronger capabilities (like precision-guided munitions, sensors, and stealth) to offset the Soviet advantage. Of course, that both sides were nuclear powers was also a factor, but the logic still holds.

Turning back to the Third Offset Strategy, we must ask what it would require to work. To enable the masses of lethal loitering munitions and autonomous weapons in every domain (air, land, sea, and cyber) that the strategy espouses, the U.S. would also require ubiquitous sensor networks; otherwise the systems could not reliably find their targets. Only then would the U.S. be able to use these systems to deliver devastating first salvos, taking out, for instance, all enemy air defenses and counterstrike capabilities.

On the other hand, if a nation cannot totally “offset” an adversary, it might rely on a straight deterrence strategy: forgoing the attempt at swift and decisive victory and focusing instead on creating a deterrent effect through the open commitment to make war as terrible as possible. In other words, a nation would try to find as many bodies, missiles, tanks, ships, or whatever else it needs to match the adversary. The rub is that if two countries have roughly equal military capabilities, any subsequent fight would be long, drawn out, and unlikely to bend in either side’s favor; that is, it would end up being a war of attrition.

The Third Offset frames itself as an “offset” that requires building sophisticated weapons that can operate without human intervention, probably behind enemy lines. It also follows up with a secondary strategy of deterrence. The problem, however, lies in how the logic of these two arguments works, especially when they are taken together. To see this, let us turn to earlier versions of the Third Offset.

In 2014, Bob Work, then CEO of the Center for a New American Security and now deputy secretary of defense, wrote:

Instead of building ever-smaller numbers of exquisite crewed [manned] platforms [planes, tanks, ships, etc.] to penetrate an enemy’s battle networks, large quantities of low-cost, expendable unmanned systems can be produced to allow U.S. forces to overwhelm enemy defense with favorable cost-exchange ratios.

Further,

rapid advances in computing power, big data, artificial intelligence, miniaturization, robotics and additive manufacturing [3-D printing], among others, will make unmanned systems increasingly capable, autonomous and more cost-effective.

Therefore,

As more and more adversaries begin to employ guided munitions [i.e., reach power parity] and as large numbers of effective and low-cost unmanned systems proliferate, mass will likely once again become more prominent in U.S. military force-on-force calculations. And, because of the high cost of people and manned platforms, the impetus toward greater battlefield mass is more likely to be reflected in greater numbers of unmanned systems. [Emphasis mine]
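Work’s “favorable cost-exchange ratios” boil down to simple arithmetic. Here is a back-of-the-envelope sketch in Python; the dollar figures and the shots-per-kill parameter are hypothetical placeholders of my own, not numbers drawn from the Third Offset literature:

```python
# Back-of-the-envelope cost-exchange ratio. All figures below are
# hypothetical placeholders; the point is the asymmetry, not the
# specific numbers.

DRONE_COST = 100_000          # cheap, expendable unmanned system ($)
INTERCEPTOR_COST = 3_000_000  # high-end defensive missile ($)
SHOTS_PER_KILL = 2            # interceptors expended per drone downed

def exchange_ratio(attacker_unit_cost, defender_shot_cost, shots_per_kill):
    """Dollars the defender spends per attacker dollar destroyed."""
    return (defender_shot_cost * shots_per_kill) / attacker_unit_cost

print(exchange_ratio(DRONE_COST, INTERCEPTOR_COST, SHOTS_PER_KILL))
# -> 60.0: the defender pays $60 for every attacker dollar it
#    neutralizes, which is exactly the asymmetry that cheap,
#    expendable mass is meant to exploit.
```

Under assumptions like these, the attacker can keep feeding inexpensive systems into the fight long after the defender’s magazine of exquisite interceptors has been exhausted.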

In short, the Third Offset looks to advance the research, development, and deployment of autonomous systems (and their enabling technologies) to “offset” rising powers like Russia and China. It also attempts to prepare for mass-on-mass battles between autonomous systems. The systems would have to be autonomous because the masses required would be far too large for humans to operate. Ostensibly this looks like deterrence again, but it is not.

In reality, this strategy is a wind-up for attrition: a preparation for a time when robotic technologies amass against one another (or, most worryingly, against personnel) and superior numbers end up being the deciding factor of the day.
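One way to see why numbers become the deciding factor is the classic Lanchester square law, a textbook abstraction of force-on-force attrition (my illustration; nothing in the Third Offset documents invokes it). A minimal simulation, with entirely hypothetical force sizes and kill rates:

```python
# Lanchester square law: each side's losses are proportional to the
# size of the opposing force (dx/dt = -b*y, dy/dt = -a*x).
# All force sizes and kill rates below are hypothetical.

def lanchester(x, y, a, b, dt=0.001):
    """Simulate mutual attrition until one side is annihilated.

    x, y -- initial unit counts of sides X and Y
    a, b -- per-unit kill rates of X and Y, respectively
    Returns (winner, units_remaining, elapsed_time).
    """
    t = 0.0
    while x > 0 and y > 0:
        x, y = x - b * y * dt, y - a * x * dt
        t += dt
    return ("X", max(x, 0.0), t) if y <= 0 else ("Y", max(y, 0.0), t)

# Side Y fields twice the numbers; side X fields units 50 percent
# more effective. Y still wins: fighting strength scales with the
# *square* of force size, so a 2:1 numerical edge (a 4x strength
# advantage) swamps a 1.5x edge in per-unit quality.
print(lanchester(x=1000, y=2000, a=1.5, b=1.0))

# Near-equal masses grind each other down in a long, indecisive
# fight -- the attrition scenario described above.
print(lanchester(x=1000, y=1010, a=1.0, b=1.0))
```

Under this abstraction, quality buys far less than quantity, which is precisely why a mass-based strategy drifts toward attrition.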

Yet this mass-based strategy is highly unsound and extremely unstable. First, it is unlikely that any near-peer adversaries would field these technologies against each other (and this is supposedly the justification for creating them). Just as I know not to pick fights with people who could (potentially) beat me up, states are relatively choosy about whom they fight and when. They prefer to use their technological superiority against weaker opponents. I cannot survey the history of war here, but states rarely pick fights with adversaries they know could beat them (or have a decent probability of doing so).

So either the Third Offset’s posturing is purely a deterrent strategy that is really linked to a secondary strategy of attrition, or it is just a blunt promise of attrition. Proponents like to claim that unmanned and autonomous weapons will do the fighting for us, but it is dangerous to assume this makes war riskless. It is risky war, especially in its potential for 1) mass human rights violations and 2) international destabilization.

If states do not typically fight near-peer adversaries, that means they fight weaker ones. In that case, it is naive to think that protected persons—like civilians—will not bear the brunt of harm (no matter how precise a weapon might be) when war visits their neighborhood. Precision-guided munitions don’t necessarily insulate civilians from harm; if a precise weapon is deployed to take out water pumps and electricity, everyone suffers.

It’s also important to remember that the creation of autonomous weapons systems, deployed in swarms to counter an adversary’s air defenses, is only a short-lived strategy. While I am not privy to classified information on any adversary’s capabilities, I would say that if I have a swarm, I had better bet that my adversary has one, too. Why wouldn’t it? The U.S. has, after all, published its intent to create them.

Thus my strategy of amassing large numbers of cheap autonomous robots to overcome my adversary’s defenses (mass on mass) can work only for so long. My adversary will create them, too. And we will learn again, as we did in World War I and Vietnam, that what matters is not more munitions but smarter ones. Herein lies the danger.

If we grant that a strategy of mass-on-mass robotic attrition is either unsustainable or unfruitful (because who cares about dysfunctional or nonfunctioning robots?), then we must meet it with a counterresponse: greater strategic surprise, maneuver, and deception. Except that the only way to truly “offset” amid the robot carnage is through better sensors and stronger artificial intelligence.

Stronger A.I., or at least the pursuit of stronger A.I., by nation-states creates a very dangerous and unstable situation for the international community. As I have written before, A.I. has the potential for massive benefits to society, if it is pursued responsibly. Pursuing it with the intent of weaponization and high-speed escalation threatens all avenues for international stability, crisis negotiation, and diplomacy. There is no room for human interaction, relationships, trust, confidence-building, or anything other than rapid and iterative reactions. What is more, the push for stronger A.I. in autonomous weapons means that these machines will require the ability to learn; they need it if they are to be cost-effective and secure against external threats. Yet once the U.S. fields learning machines, it will be only a short time before the race is on for everyone to do so.

The idea that one can merely engage in standoff, riskless, or extended warfare to guarantee security is beyond myopic. Indeed, the entire strategic logic proffered in the Third Offset is unsound and dangerous, for it forgets hard lessons learned on past battlefields, and it fails to see how these technologies will usher in very different problems that the first two “offsets” never had to face.

This article is part of the artificial intelligence installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down.

Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.