
A Brief Guide to the Weapons of Cyberconflict


March 8 2016 9:45 AM


The software behind the headlines.

Secretary of Defense Ashton Carter testifies on Capitol Hill in Washington, D.C., Feb. 25. Jim Watson/Getty Images

On Wednesday, March 9, New America’s Cybersecurity Initiative will host its annual Cybersecurity for a New America Conference in Washington, D.C. This year’s conference will focus on securing the future of cyberspace. For more information and to RSVP, visit the New America website.


It was just after midnight as seven Israeli fighter aircraft slipped toward a half-built nuclear reactor nestled in an unobtrusive slice of desert near the Syrian city of Deir ez-Zor. The seven were a consummate expression of conventional war—metal airframes built around kerosene-sucking jet engines spewing heat and power. Slung under each were tubes loaded with explosives, the same bombs and missiles that have been the mainstay of airpower, with only some change, for a century. Rather than being met by a swarm of opposing jets and antiaircraft missiles, this group of nonstealthy aircraft flew on toward their target, untouched and likely undetected. For all of their fearsome conventional might, these planes heralded a new kind of conflict, one enabled by software and digital manipulation.


Conflict in cybersecurity falls into one of two related buckets—the use of software and hardware tools to enable conventional warfighting, so-called cybered conflict, and tools used purely to target information systems and the physical equipment connected to them. The 2007 Israeli strike, known as Operation Orchard, was the former: it was made possible by Israel’s success in disabling Syrian air-defense networks with a piece of malicious software that blinded the systems or displayed false targets. The strike group wasn’t accompanied by an armada of escorts blasting every antiaircraft and radar installation in sight. Instead, these defenses were taken down without a sound by malicious code transmitted from an unobtrusive support aircraft.

Conflict in cyberspace isn’t really what you think. It’s mostly not about exploding power plants, toppling corporate giants, or exacting revenge for Seth Rogen comedies. It is a conflict over information. To keep secret that which we value and steal or expose that which others value. To create uncertainty, to breed inaction.

The tools of this conflict are software—code that allows an attacker to gain access to a computer or network and manipulate it to his own ends. The result is a series of pitched battles between information assurance professionals and malicious attackers of various stripes. This same process is taking place every day, with states and nonstate groups working to gather intelligence on targets, to gain access to their systems, and—every so often—to execute a malicious payload.

The tools of cyberconflict vary widely by their intended goal but generally come with three basic components: a propagation method, exploits, and a payload. The propagation method is any means of transporting code from its origin to a target, such as a portable flash drive or spear-phishing emails, in which attackers masquerade as co-workers or other trustworthy people to send infected files. (Spear-phishing emails were, for example, reportedly part of how the Russian government gained access to unclassified email systems used by the Defense Department, the White House, and the State Department.) Exploits are small pieces of code written to take advantage of software vulnerabilities—features or flaws that allow a third party to gain access and take control of a computer. The payload is the purpose: code written to do something malicious, like deleting data or causing physical destruction.
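As a purely illustrative sketch, the three-part anatomy described above can be modeled as a small Python data structure. Everything here is invented for illustration—the class, field names, and sample values—and there is no attack logic involved:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MalwareAnatomy:
    """Conceptual model of the three components; no real attack logic."""
    propagation: str     # how the code travels, e.g. a flash drive or email
    exploits: List[str]  # flaws used to gain access to a machine
    payload: str         # the malicious goal once access is gained

    def describe(self) -> str:
        return (f"Spreads via {self.propagation}; "
                f"enters through {', '.join(self.exploits)}; "
                f"then attempts to {self.payload}.")

# A hypothetical sample matching the spear-phishing scenario in the text.
sample = MalwareAnatomy(
    propagation="a spear-phishing email with an infected attachment",
    exploits=["an unpatched document-reader flaw"],
    payload="steal user credentials",
)
print(sample.describe())
```

The point of the model is simply that the three parts are separable: the same payload can ride on different propagation methods and exploits, which is why defenders analyze each component independently.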


Malware’s three components work in concert but play substantially different roles. Exploits are critically important; they don’t do anything directly malicious, but they open the door for a payload. Stuxnet, perhaps the most famous piece of malware ever built, was able to move across networks because of a vulnerability in the Windows operating system’s print service. An exploit in Stuxnet took advantage of this flaw by sending specially formatted print requests to uninfected machines, which then spread the malware instead of printing anything. Developed by the United States and Israel to damage Iran’s nuclear enrichment program, Stuxnet used this flaw to sprint from computer to computer until it located the machines controlling the centrifuges.
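The hop-by-hop spread that a propagation exploit enables can be sketched as a toy breadth-first traversal over an invented network. The machine names and links below are assumptions, and nothing here resembles real exploit code—it only shows why one reachable flaw lets malware walk an entire network:

```python
from collections import deque

# Toy network: machines are nodes, links are "can reach over the network."
network = {
    "pc1": ["pc2", "pc3"],
    "pc2": ["pc4"],
    "pc3": ["pc4", "pc5"],
    "pc4": [],
    "pc5": ["controller"],  # the machine attached to target hardware
    "controller": [],
}

def spread(start: str) -> list:
    """Return machines reached, in infection order (breadth-first)."""
    infected, queue, order = {start}, deque([start]), [start]
    while queue:
        host = queue.popleft()
        for neighbor in network[host]:
            if neighbor not in infected:
                infected.add(neighbor)   # the flaw grants access here
                order.append(neighbor)
                queue.append(neighbor)
    return order

print(spread("pc1"))
```

Starting from a single infected machine, the traversal reaches the hypothetical "controller" in a few hops—the same logic, at toy scale, by which a self-propagating exploit eventually finds the machines that matter.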

The propagation method spreads malware, but the payload is the thing designed to have an effect. Stuxnet’s payload, which targeted centrifuges being used to separate isotopes of uranium in order to obtain pure fissile material, was a marvel of engineering. The code was designed to hide in the centrifuges’ programmable logic and carry out the attack without alerting Iranian staff. The only indication of the malware’s presence would have been the machines humming at a slightly higher frequency than normal for a few minutes each month.

Building a destructive payload, like the one in Stuxnet, takes time and a fair degree of expertise. The attacker has to know a great deal about the hardware systems in use at a potential target and the programming language this device’s software is written in. Once a payload is constructed, it then has to be tested for reliability under a number of different conditions to ensure that it will work correctly. Over months, Stuxnet worked to change rotor speeds and valve sequences to slowly degrade the centrifuges, weakening the metal, and causing them to fail much faster than designed.
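The slow-degradation idea can be put in back-of-the-envelope form. The 1,064 Hz and 1,410 Hz rotor frequencies below are figures widely reported in analyses of Stuxnet, but the fatigue model, wear threshold, and dose numbers are invented purely for illustration:

```python
NORMAL_HZ = 1064       # reported nominal rotor frequency
OVERSPEED_HZ = 1410    # reported brief attack frequency
FATIGUE_LIMIT = 100.0  # invented wear threshold before a rotor fails

def months_to_failure(overspeed_minutes_per_month: float) -> int:
    """Months of brief monthly over-speed episodes until failure (toy model)."""
    wear, month = 0.0, 0
    excess = (OVERSPEED_HZ - NORMAL_HZ) / NORMAL_HZ  # fractional over-speed
    while wear < FATIGUE_LIMIT:
        month += 1
        wear += excess * overspeed_minutes_per_month  # wear scales with dose
    return month

# A few minutes of over-speed per month quietly wears the rotor out
# well inside its intended service life.
print(months_to_failure(10))
```

Whatever the real physics, the design choice the sketch captures is the trade-off the text describes: each individual episode is too small to notice, but the cumulative effect is ruinous.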

But these sorts of destructive attacks are the exception rather than the norm. The tools of this conflict are used far more often, and to much greater effect, to gather intelligence about opponents’ information systems, to sow chaos and confusion, and to disrupt key services. On the battlefield, a piece of malicious software could be used to target opponent radio networks, disabling receivers or deleting the codes used to encrypt transmissions, thereby rendering the devices as secret as a walkie-talkie bought at RadioShack. States can also use Wi-Fi networks in an urban environment to search for connected devices of insurgents communicating over computers or mobile phones. In the beginning of 2016, Secretary of Defense Ash Carter gave testimony to Congress indicating the DOD’s Cyber Command, or CYBERCOM, was preparing to ramp up operations to target ISIS commanders’ computer networks and mobile phones. Software tools, developed by CYBERCOM, could be used to disable ISIS-employed devices and disrupt their operations or to spy on their users and provide their location for airstrikes or capture.

Nonsoftware tools have an impact on communications as well. Electromagnetic weapons, which generate a surge of electrical energy to disable devices, are still largely hypothetical but could conceivably be used on the battlefield or to target civilian systems. Traditional radio jammers play a role as well: pumping enough energy into an opponent’s wireless network can shut it down and wreak havoc with their ability to communicate.


Malicious software doesn’t need to be as sophisticated as Stuxnet to be effective, either. It’s much simpler to create a payload that launches denial-of-service attacks or steals user credentials than one that causes destructive physical effects. In December 2015, part of Ukraine’s power grid went offline, leaving tens of thousands without power and requiring operators to trudge between substations to manually reset equipment. The attack was made possible by a variant of the BlackEnergy malware, a reconnaissance tool used to illicitly map networks and gather information that has been found in power plants across the United States going back to 2011. In Ukraine the attackers used this reconnaissance tool to locate a trusted username and password for the power company and manually log in to disable key systems.

At this point, it’s a stretch to call most of these tools “weapons.” Weapons are devices that can kill people and break things, and there have been few to no fatalities and little physical destruction directly attributable to malicious software. However, malicious software and hardware devices are the tools of an evolving conflict both within computer networks and in the very real world. Policy on how the United States, for example, builds, buys, and uses these tools is still in varying stages of development. The debates over how best to use these tools (and the legal framework in which to place them) are ongoing in countries around the world. And while the frequency of attacks remains low, what was once novel and fantastic in Stuxnet has become more familiar. The conflict over information, it seems, has only begun.

This article is part of the cyberwar installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Read more from Futurography on cyberwar.

Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.