Future Tense

When Should the Government Stockpile Software Vulnerabilities?

A new process aims to guide that decision-making.


Intelligence agencies collect and protect secrets. Given their line of work, that default toward secrecy makes perfect sense a lot of the time. But sometimes it ends up making us less safe, especially when the secrets have to do with computer security. So on Wednesday, the White House released new guidelines describing how the government will decide which computer vulnerabilities to keep secret and which to disclose so they can be patched.

Perhaps not coincidentally, the announcement follows closely on the publication last week of a long New York Times article analyzing the consequences of the National Security Agency’s decisions not to reveal certain computer system vulnerabilities it discovered while trying to find ways into adversaries’ machines. Some of those vulnerabilities, and the NSA’s tools for exploiting them, were later obtained and sold by a group known as the Shadow Brokers and used to launch massive international ransomware attacks earlier this year. Those attacks, including WannaCry and NotPetya, were based on an NSA tool called EternalBlue that exploited a vulnerability in Microsoft Windows. They cost individual companies hundreds of millions of dollars and caused billions of dollars in damage globally, in addition to interrupting critical services such as medical care at hospitals worldwide.

At the time of the attacks, the NSA came under fire for not alerting users and software manufacturers sooner about the vulnerabilities it had exploited in building the EternalBlue tool. As critics see it, the NSA could have helped prevent a major cyberthreat, rather than inadvertently helping create one, by disclosing the vulnerability so that Microsoft could patch it. (The NSA did eventually inform Microsoft of the vulnerability so the company could patch its software, but only after exploiting it for five years to collect intelligence, and only once the agency learned that EternalBlue had been stolen.) Microsoft president and chief legal officer Brad Smith called out the U.S. government for “stockpiling” vulnerabilities rather than reporting them to software vendors.

Now the government has a newly released vulnerabilities equities process charter to help guide when to stockpile and when to disclose. The 14-page document outlining how the government will make those decisions moving forward doesn’t shed a whole lot of light on which types of vulnerabilities will be kept secret for internal use by the government and which will be disclosed so that vendors can patch them. Instead, it lays out the process for making that decision—who will be involved, what factors should be considered—while still allowing for the necessary degree of secrecy and case-by-case analysis. This process itself is not brand new—it was developed in 2008 and 2009—but the government did not release the details publicly until January 2016, after the Electronic Frontier Foundation filed a lawsuit under the Freedom of Information Act. Unlike the newly released charter, the previous heavily redacted version, dated February 2010 and released in 2016, did not include a list of specific considerations or questions that the government would take into account.

So the new charter is definitely a step forward—both because it isn’t redacted and because it lays out more clearly what factors go into deciding whether the government discloses a vulnerability. This type of process-based transparency about how decisions are made and who is making them is all we can realistically expect or hope for from the intelligence community. Undoubtedly, and reasonably, there will still be vulnerabilities that are not disclosed by the government, and the government may still make some very costly mistakes about what it chooses to keep secret (especially if it is not ultimately successful at keeping those vulnerabilities out of hackers’ hands).

But the charter does at least call for those decisions to be handled methodically and with attention to the viewpoints of many different stakeholders, not just the intelligence agencies responsible for finding the vulnerabilities. For instance, in addition to representatives from the military and intelligence agencies, the Equities Review Board described in the charter includes representatives from the Office of Management and Budget, the State Department, the Treasury Department, the Justice Department, the Department of Homeland Security, the Department of Commerce, and the Energy Department. That means the decisions aren’t just up to those whose primary concern is gathering as much intelligence as possible; there will be input from people thinking about the possible economic and diplomatic consequences of leaving popular software products unpatched. (The previous version of the process, released in 2016, allowed that representatives from the Treasury, State, Justice, Energy, and Commerce departments could be involved depending on whether they had an interest in a particular vulnerability, but it did not require their participation.)

It’s unclear, however, what concerns will carry the most weight when it ultimately comes to deciding what to do about a newly discovered vulnerability. The considerations the Equities Review Board will take into account, as outlined in the charter, are unlikely to lead to obvious or clear consensus in many cases.

For instance, one set of considerations asks questions like, “How severe is the vulnerability?” and “What are the potential consequences of exploitation of this vulnerability?” Meanwhile, other questions attempt to get at how useful the vulnerability might be if it remained the sole knowledge of the U.S. government. For instance, “What is the demonstrated value of this vulnerability for intelligence collection, cyber operations, and/or law enforcement evidence collection?”

But many of the vulnerabilities likely to be most valuable for intelligence collection or cyber operations are the very ones with the most severe potential consequences if exploited by malicious actors. That’s because the things that make a vulnerability valuable to the government (that it’s found in a widely used piece of software, for instance, or that it allows total control of a compromised device) are exactly what make it so dangerous if it falls into the wrong hands. EternalBlue was almost certainly an incredibly valuable asset for the intelligence community, right up until it became a huge liability for everyone else.

So while it’s good to see the U.S. government asking all the right questions about the computer system vulnerabilities it discovers, what really matters is how it weighs conflicting answers against one another. One promising sign is that the new charter seems to imply a default position of disclosure, stating, “In the vast majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest.” On the other hand, the NSA has maintained for years that it discloses “more than 90 percent” of the serious flaws it discovers, so a default of disclosure already appears to have been in place even while EternalBlue was being created and used. Ultimately, what matters most is not how many vulnerabilities are disclosed but which ones.

It also matters how those disclosures happen—who is told about vulnerabilities and on what timeline—but that process seems to be largely beyond the purview of the charter. The charter does hint at a range of disclosure options, though, including, “disseminating mitigation information to certain entities without disclosing the particular vulnerability, limiting use of the vulnerability by the USG in some way, informing U.S. and allied government entities of the vulnerability at a classified level, and using indirect means to inform the vendor of the vulnerability.”

The process outlined by the White House gives the government a lot of leeway and flexibility to determine which concerns are most important when deciding what to do about a vulnerability and how best to deal with it. That was probably inevitable—all computer vulnerabilities are different and there’s no one-size-fits-all approach that is likely to work for all of them. Still, it’s promising to see a little bit more clarity emerging around how these decisions are made and growing recognition that they should not be made lightly.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.