It’s Easier Than Ever to Launch a Large-Scale Attack Like WannaCry. How Do We Stop It?

May 17 2017 10:56 AM

It’s easier than ever to launch a large-scale attack using leaked information.


A global outbreak of ransomware is rapidly infecting machines in critical and not-so-critical infrastructure across the globe, including the National Health Service in the United Kingdom, a Spanish internet service provider, the German rail system, and mall billboards in Singapore. This digital pandemic illustrates a challenge that the cybersecurity community has been wrestling with for nearly a decade: how to counter the spread of malicious cyber capability.

To help inform this conversation, let’s first step back and review what we know about WannaCry, the ransomware sprinting across the globe. As has been widely reported, the malware leverages an exploit developed by the U.S. National Security Agency. The exploit, which was called EternalBlue, “works reliably against computers running Microsoft Windows XP,” as Ars Technica put it. The developers of WannaCry combined this Windows exploit with code that allowed the ransomware to spread without so much as a keystroke or click from either the operator or the victim, locking machines and demanding ransom. How, you might ask, did this exploit reach the authors of WannaCry (who, several groups have suggested, are in North Korea)? In simple terms: The Shadow Brokers, the group that has spent the last few months leaking NSA tools, essentially made it open-source.


Open-sourcing exploits like this is often a bad idea: It notifies the software manufacturers and potential attackers of the bug simultaneously, and patches rarely win that race. It takes a long time for everyone to click on those annoying little security updates, and some portion of the population never will. The Shadow Brokers/WannaCry case is just one demonstration of the growing challenge of countering the spread of malicious cyber capability. The code for Carberp (a “botnet creation kit”) was posted online and precipitated the outbreak of the Carbanak malware used to steal cash from ATMs. Rumors persist that versions of the BlackEnergy trojan (twice leveraged to shut off portions of the Ukrainian power grid) have been floating around in malware forums.
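The race described above, between a self-propagating worm and the slow drip of patch adoption, can be sketched with a toy simulation. Every number below is an illustrative assumption, not a measurement of WannaCry or EternalBlue; the point is only that when some machines patch slowly and some never patch at all, the worm wins most of the race.

```python
# Toy model of a self-propagating worm racing against patch adoption.
# All rates are illustrative assumptions, not measurements of WannaCry.

def simulate(days=30, n=1_000_000, patch_rate=0.05, never_patch=0.10,
             contact_rate=0.8, initial_infected=10):
    """Discrete-time sketch: each day the worm infects some unpatched
    machines, then a fraction of the still-vulnerable machines applies
    the patch. The 'never_patch' share stays vulnerable forever."""
    vulnerable = n - initial_infected
    infected = float(initial_infected)
    patchable = vulnerable * (1 - never_patch)  # will eventually patch
    stubborn = vulnerable * never_patch         # never patches
    for _ in range(days):
        at_risk = patchable + stubborn
        if at_risk <= 0:
            break
        new_infections = contact_rate * infected * at_risk / n
        share = patchable / at_risk
        inf_patchable = min(patchable, new_infections * share)
        inf_stubborn = min(stubborn, new_infections * (1 - share))
        patchable -= inf_patchable
        stubborn -= inf_stubborn
        infected += inf_patchable + inf_stubborn
        patchable -= patchable * patch_rate  # today's patch adopters
    return infected

slow = simulate(patch_rate=0.02)  # sluggish patching
fast = simulate(patch_rate=0.30)  # aggressive patching
print(f"infected after 30 days, slow patching: {slow:,.0f}")
print(f"infected after 30 days, fast patching: {fast:,.0f}")
```

Even with aggressive patching, the never-patch population leaves the worm a pool of victims, which is why disclosure helps attackers at least as much as defenders in the short run.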

In 2013, responding to the publicity surrounding Stuxnet, the campaign that sabotaged the Iranian nuclear enrichment program, Gen. Michael Hayden noted that the time we live in “has the whiff of August 1945. Someone, probably a nation-state, just used a cyber weapon in a time of peace … to destroy what another nation could only describe as their critical infrastructure.” To Hayden, it was abundantly clear that cyber-insecurity could threaten global stability, yet the international community was ill-equipped to handle the problem.

Today, when policymakers around the world contemplate the intersection of cybersecurity and global stability, they focus their time, money, and effort on developing norms for responsible state behavior—in other words, what states and other international actors should and should not do in cyberspace. They have not paid enough attention to the other side of the same stability-regime coin: limiting what groups can and cannot do. This means a combination of hardening our own systems against attacks and, likely, somehow countering the proliferation of capability—the possibility of which requires a great deal more exploration from researchers.

This research will be important because countering the spread of malicious software poses several problems. Chief among them is that malware, the “weapon of cyberconflict,” is only a portion of the problem. The tool itself isn’t the only thing bad actors need—they must also know how to leverage it. Either way, the capability, meaning the code and the know-how to use it, is not physical. It’s knowledge or information. And it’s far easier to lock down a physical object than it is to stop the spread of information.


Second, somewhat counterintuitively, there are people who argue that the open spread of malicious capability is actually beneficial to those trying to defend against cyberattacks. If the exchange of tools and practices happens in the open, defenders have a better sense of what and who they are trying to protect against.

Third, the cybersecurity community cannot afford to institute blanket restrictions on the exchange of malware. When actively defending against an attack or remediating an incident, defenders and responders share artifacts with colleagues to gain insight on how to counter the attack. More often than not, these artifacts could only be described as malware.

So what can we do? For starters, the policy community needs to understand that not all malicious cyber capability is created equal. We know that the capability behind the Stuxnet campaign that sabotaged the Iranian nuclear facility at Natanz is different from Zeus, which enabled financial and other cybercrime around the world, which is different from the Mirai botnet, which caused the Dyn internet outage in October 2016. And all of these tools are constructed and operate differently from WannaCry. Just as cybertools are vastly different in construction and effect, we likely need a variety of policy tools to address them. Wrapping our heads around what these capabilities are, how they differ, and how they spread is a massive first step.

If we can do that, we can then look to other fields—like biosecurity, pathogen and disease control, counternarcotics, and efforts against money laundering and the small arms trade—that could shed light on diffusion problems and provide frameworks for addressing them. Consider, for example (somewhat ironically, given one of this ransomware’s targets), the World Health Organization’s model of a contingency fund for emergencies, which, if adopted for cybersecurity, would unlock funds to help the community fight fires on a global scale. This type of framework might be leveraged to help the defensive cybersecurity community address transnational threats like the Mirai botnet and clean up the mess left by widespread ransomware. Similarly, the cybersecurity community can likely draw lessons about where and how to break up illicit markets from the experiences of the counternarcotics community to help address the spread of malware between criminal groups.

Western policymakers are not the only ones who see WannaCry as a catalyst to renew discussion. Chinese academic Shen Yi writes, “all countries that are willing to take responsibility, including the United States, should advocate as soon as possible to promote a global cyber non-proliferation mechanism.” In a polarized world, there may be space for some form of transnational cooperation on this issue. But first, we need to fill the knowledge gap.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.

Robert Morgus is a researcher with New America's Cybersecurity Initiative and International Security Program.