Future Tense

Algorithms of War

The dangers of using decision-making technology in sensitive international affairs.

Photo: the United Nations building. Arnaldo Jr/Shutterstock

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. On Thursday, Dec. 10, Future Tense will host a three-hour conversation on “The Tyranny of Algorithms” in Washington, D.C. For more information and to RSVP, visit the New America website.

A few months ago, Facebook patented an algorithm that would enable lenders to reject a credit application based on the credit scores of the applicant’s friends. Government departments and law enforcement have begun using algorithms in valiant efforts to improve welfare distribution and prevent crime—but some argue that these uses actually perpetuate poverty and inequality by treating correlation as causation. Algorithms are products of human engineers, after all, and biases can be as invisible as they are pervasive.

Algorithms are also being programmed to make life-and-death decisions. With the provocative headline “Why Self-Driving Cars Must Be Programmed to Kill,” MIT Technology Review joined others who have long discussed the ethical choices self-driving cars must learn to make. For example, if a car with a single passenger is about to crash into a crowd of people, should it swerve to avoid the group? Even if that would kill its passenger?

But discussions of algorithmic accountability should not be confined to domestic and legal issues. Try as countries might to control the trade of technology through export agreements, algorithms are finding their way into the murky realm of international affairs. Diplomatic protocol has developed over centuries to govern interaction between states, and now computer protocols threaten to disrupt these processes. If we are not careful, the results could be disastrous. Reliance, let alone overreliance, on algorithms in international policy and security settings, where nearly every decision is a judgment call, deserves far more debate than it is getting.

Take international security, for example. Companies like Palantir use algorithms to help human analysts uncover trends and correlations across multiple data sets in order to track terror cells or plan military campaigns. Drone targeting is increasingly based on algorithmic calculations, and other algorithms are being programmed to detect suspicious computer activity that could reveal a cyberattack, because humans simply can’t process all of the data being collected.

During the Cold War, American and Soviet policymakers would have had at least a few minutes to assess intelligence about a possible incoming nuclear attack and decide whether and exactly how to respond. In 1983, a malfunction in a Soviet early-warning system made it appear that the United States had launched a massive nuclear strike. The officer on duty, Stanislav Petrov, went against protocol and insisted the signal was a false alarm, and his judgment helped to prevent a retaliatory strike and the beginning of World War III.

In a destructive cyberattack, we won't have the luxury of time for such judgments, so we may be tempted to preemptively deploy programs that use advanced algorithms to determine the sources of attacks and respond automatically. But attributing hacks to individuals, let alone nation-states, can take months and is far from foolproof. Attackers can use proxies to obscure their identity or conduct false-flag attacks, and the risk of incorrect attribution is substantial.

So what happens when the algorithms get it wrong? What if a Russian hacker stages an attack on the United States from a Ukrainian computer? The CBS series Madam Secretary is exploring a similar plot line, which certainly makes for good television. But this is serious: The Pentagon has maintained for years that cyberattacks can justify a kinetic military response. If an algorithm "hacks back" at the wrong attacker, are we prepared for the consequences?

Following the Sony hack in 2014, the company and law enforcement scrambled to figure out who was behind the attack. Even after the attacker was internally identified as North Korea, the United States would have had to think hard about how to respond. Just as in a kinetic attack, public attribution of the action to a nation-state might demand some kind of proportionate response lest the U.S. look weak geopolitically. The decision to name North Korea as the perpetrator came from the White House Situation Room, which means it would have involved the country’s top political and military leaders, all of whom were likely briefed by dozens of support and intelligence staff. A senior administration official confirmed that there “was a significant debate within the administration about whether or not to take that step.” 

Any sort of automated, algorithmic response to such an intrusion would have eliminated that crucial time for debate: whether to tolerate violations of sovereignty, what action is appropriate against a nuclear-armed rogue nation, and how the world would perceive the U.S. response to an unprecedented cyberattack. While a resulting "hack-back" might not cause an immediate physical response, it could be misinterpreted and lead to unintended escalation, or bring down the network of an innocent intermediary.

Algorithms are also becoming far more advanced; "artificial intelligence" is just another phrase for their growing complexity. Products and services now use techniques like deep learning and neural networks that let computers navigate increasingly complex decisions and problems, and do so much more quickly. Machine learning moves beyond programmers crafting specific formulas: Instead, computer programs build their own algorithms through training, feedback, and iteration, which makes them much more difficult to understand and control. Technology and legal experts warn that even a program's original coders may not be able to explain the behavior of such algorithms, and some have even proposed that algorithms be regulated as autonomous entities.
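To see in miniature what it means for a program to build its own algorithm, consider the following toy sketch in Python. Everything in it is invented for illustration: the "perceptron" is about the simplest possible learning program, and the "network activity" data is made up. The point is only that the finished decision rule consists of numbers the program learned for itself, not a policy anyone wrote down.

    # A toy sketch, not any real system: a tiny perceptron learns its own
    # rule for flagging network activity from invented example data,
    # rather than applying a rule a programmer wrote by hand.

    # Each example: (data volume, failed logins), both scaled 0 to 1, plus
    # a label: 1 = "suspicious," 0 = "benign." All numbers are made up.
    examples = [((0.9, 0.8), 1), ((0.8, 0.9), 1),
                ((0.1, 0.2), 0), ((0.2, 0.1), 0)]

    w1, w2, bias, rate = 0.0, 0.0, 0.0, 0.1

    for _ in range(100):                   # iteration
        for (x1, x2), label in examples:
            guess = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = label - guess          # feedback
            w1 += rate * error * x1        # training: the program adjusts
            w2 += rate * error * x2        # its own rule, example by example
            bias += rate * error

    # The finished "rule" is just three learned numbers, not a policy a
    # human wrote down or can readily audit.
    print(w1, w2, bias)

Deep learning systems do the same thing with millions of such numbers, which is why even their creators can struggle to explain any individual decision.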

While fully autonomous weapons haven't quite emerged yet, South Korea reportedly has robots patrolling its northern border that are capable of deciding to fire without human input, and artificial intelligence researchers say the jump to full autonomy is only a few years away. Discussions of autonomous weapons have reached the United Nations, where observers report that the U.K. and U.S. are attempting to water down international agreements banning the use of this technology. When human soldiers make a mistake or defy orders on the battlefield, they can be investigated and sanctioned under domestic and international law. If self-taught algorithms cause a problem, it will be much harder to find out what happened. Who will be held responsible? The country that deployed the weapon? The company that built it?

Quantum computing, a likely next leap in computing power, promises to let machines evaluate enormous numbers of possible outcomes at once, sweeping aside limits that constrain today's algorithms. Political theorist James Der Derian predicts that such an advance will have dramatic implications for international relations, propelling any country that possesses the technology into a seat of extraordinary power and even provoking a new kind of arms race.

Technology, big data, and the algorithms that power them can help policymakers understand, at scale, whether what they are doing is working. But geopolitics is more than a series of mathematical equations. Until computers can understand psychology, interpret body language, and read between the lines of diplomatic cables, countries and policymakers should be very careful about entrusting algorithms with the delicate tasks of statecraft and war.