The NSA hack shows why the government shouldn’t stockpile software vulnerabilities.

The NSA Hack Shows Why the U.S. Government Shouldn’t Stockpile Software Vulnerabilities

Future Tense
The Citizen's Guide to the Future
Aug. 19, 2016, 12:45 PM

Secretary of Defense Ash Carter delivers remarks to Cyber Command troops and National Security Agency employees on March 13, 2015.

Photo by Chip Somodevilla/Getty Images

Earlier this week, top secret code written by one of the NSA’s most clandestine branches was released on the internet. Among other things, it contains a cache of technologically sophisticated hacking tools. The content is from 2013, and various experts, including former NSA staff, have confirmed that it looks to be genuine. Much of this advanced technology uses existing vulnerabilities—security flaws in software and hardware—to attack systems, break through firewalls, and gain access to private networks. In this case, the tools target routers made by both U.S. and Chinese companies, including Cisco and Fortinet.

Critics have long alleged that the U.S. government stockpiles too many vulnerabilities. Various branches of the government have responded with claims that they disclose 91 percent of the vulnerabilities they find, and that their alleged stockpile of zero-days (previously unknown vulnerabilities, so called because the vendor has had "zero days" to fix them) is exaggerated. But this release by the "Shadow Brokers" has proven that the NSA does have at least a few vulnerabilities that it has kept to itself.

There is a relatively unknown process that the government uses to evaluate the vulnerabilities it finds or acquires. As far as we know, the Vulnerabilities Equities Process, or VEP, has been in place since 2010 but was not particularly active until 2013. After the Snowden revelations, which included discussion of vulnerabilities, a number of policymakers, advocates, academics, and technologists criticized potential stockpiling. In response, the government "reinvigorated" the VEP. Under the post-Snowden process, vulnerabilities are supposed to be reviewed by a group of representatives from government agencies, who then decide whether the information should be shared with the company that built the product so that it can be patched, or whether the government may keep the information to itself for offensive and defensive purposes. But then, in 2014, the Heartbleed vulnerability threatened two-thirds of the internet, and the NSA was accused of knowing about it beforehand. In response, the White House posted a public list of considerations for when an agency proposes temporarily withholding knowledge of a vulnerability, including rating the risk of leaving it unpatched, identifying the harm that a hostile nation could do with it, and gauging the likelihood that someone else will discover it.

There are still serious questions about how well the VEP functions, the most serious being that it may be holding back, or simply not reviewing, some potentially dangerous zero-day vulnerabilities—in which case the vendor that maintains the software would not know that they existed. Some have called for a more transparent process with nearly automatic disclosure, while others argue we need more information before pushing for reform.

So what does this week’s hack mean for the VEP? We know that as of 2013 these vulnerabilities were in the government’s possession and that some of them were still zero-days until the Shadow Brokers released them. This raises two possible, mutually exclusive scenarios for how the VEP was used. The government may have reviewed the information and decided the vulnerabilities were worth holding onto, which means that we have proof that at least some significant vulnerabilities are being kept secret. The other possibility is that these exploits might not have been reviewed by the VEP agencies—though this wouldn’t necessarily have violated procedure, because the decision to retain the information leaked by the Shadow Brokers may have been made before the VEP was standard practice. We just don’t know enough about how the VEP functioned originally to say—which is itself a problem.

This leak of NSA exploits brings to the forefront many questions that security experts have long been asking about the VEP: Is every single vulnerability reviewed by a broader process? What types of vulnerabilities are exempt or retained? Does the NSA alone get to decide which secrets are worth keeping? If the same data were hacked in 2015, after the VEP was supposed to be fully active, would fewer vulnerabilities show up in the data dump? One of the most common counterarguments to questions about the security and efficacy of the VEP is, essentially, "You would trust our policies if you knew what we knew." Well, now we know a bit of it, and the information doesn't inspire confidence. If the Shadow Brokers' hack is a test of the government's policies on disclosure of zero-days, those policies are clearly falling short.

The hack also challenges other parts of the government's argument for vulnerability nondisclosure—first, that its security measures are strong enough that its secret stash of exploits won't be exposed, and second, that the vulnerabilities it retains don't need to be patched because no bad actor will find and exploit them. We now know that the first, at least, is false. As for the second, the NSA's "nobody but us" argument—which it has also used in its fight against encryption—is extremely unrealistic. The very real threat posed by nondisclosure of vulnerabilities cannot be downplayed with arguments that the NSA is uniquely capable of finding zero-days and impervious to cyberattacks.

So here we stand, with highly dangerous NSA hacking tools available for anyone to download and a cache of others up for sale on the black market. If government hacks are our only window into the transparency and efficacy of how the government deals with vulnerabilities, then the Shadow Brokers are nowhere near the biggest of our cybersecurity problems.

Future Tense is a partnership of Slate, New America, and Arizona State University.

Andi Wilson is a policy analyst at New America’s Open Technology Institute, focusing on cybersecurity, encryption, surveillance, and vulnerabilities equities.