Future Tense

To Disclose or Not to Disclose

We need to revamp the system the government uses to decide whether to stockpile vulnerabilities like the one behind the WannaCry ransomware.

It can be frustrating for consumers victimized by hackers and other bad actors to find out the government knew about the vulnerability all along.

In recent days the ransomware campaign known as WannaCry has captured the attention of information security practitioners, policymakers, and ordinary users around the world. The blame game has predictably ensued, with several parties singled out as responsible. Some have criticized Microsoft, the maker of the vulnerable software, while others have blamed the National Security Agency, which reportedly knew for years of the vulnerability that WannaCry exploited. There are also those who blame the Shadow Brokers, the hacking group that publicized the stolen tools, thereby subjecting vulnerable computers to what could very well be the largest-scale cyberattack in history.

Commentators and ideologues will continue to point fingers and argue over culpability. But it would be more constructive to consider WannaCry as a case study for reforming the federal government’s policies around discovering and disclosing computer vulnerabilities. These vulnerabilities, when kept secret, can help our intelligence community gain the very insights that policymakers and the military rely on to protect the American people. At the same time, disclosing the vulnerabilities to vendors (like Microsoft, in the case of WannaCry) can protect consumers from being victimized by hackers and other bad actors.

But first, it is helpful to understand what WannaCry is. WannaCry exploits a vulnerability in a Windows file-sharing protocol called Server Message Block, or SMB, that is common across a variety of Microsoft products. The NSA reportedly discovered this vulnerability in the legitimate pursuit of its mission, and the federal government decided to retain it for its own intelligence collection purposes rather than disclose it to Microsoft. It was not until the NSA realized it had lost control of the information that it notified Microsoft, which released a critical patch in March. Unfortunately, hundreds of thousands of users either didn’t get the memo or were not able to deploy the patch across all of their systems—a complicated process for large enterprises like health care organizations—before the ransomware outbreak.

Some critics argue that intelligence agencies should disclose every vulnerability that they discover—but that perspective underestimates how critical these exploits can be to protecting Americans’ safety and prosperity. Should the federal government, for example, disclose a flaw in Apple’s iOS if that same bug is enabling intelligence collection on Kim Jong-un’s iPhone? I think most Americans—and perhaps even Apple itself—would favor retaining that vulnerability. However, as is often the case with intelligence and law enforcement, a rigorous process must govern how America achieves the ever-delicate balance between institutional and individual interests, national security and personal privacy, and justice and information security in the digital age.

Enter the Vulnerability Equities Process, or VEP, which was originally established during the Bush administration to adjudicate the question of whether to disclose or retain vulnerabilities discovered by the intelligence community, specifically the NSA. It was a noble attempt to institute some controls. But the intelligence community’s recent track record of safeguarding its cyber tools has not been good, and that failure has changed the calculus.

The primary problem with the current VEP is that it is naturally biased in favor of intelligence and law enforcement practitioners. After all, it was developed almost exclusively by government agencies and therefore protects government interests—namely foreign intelligence operations and law enforcement investigations—from the potentially crippling effects of vulnerability disclosures. If the FBI is conducting a lawful wiretap by exploiting an unknown or “zero-day” vulnerability on a subject’s PC, a simple patch from the vendor could compromise the entire investigation. On the other hand, if a bad actor discovers that same vulnerability before a patch is deployed, or if the FBI loses control of the exploit, then everyone running the same software is potentially exposed.

In these sensitive cases the government is often in the unenviable position of not being able to defend its decisions. It is entirely possible that the tactical or strategic intelligence value of this vulnerability was so high that the government believed it could not afford to disclose it, even in the interest of public safety. But there is no way of knowing the thinking behind the decision to retain the WannaCry vulnerability instead of sharing it with Microsoft. And that is what needs to change. For starters, here are some steps to improve the status quo.

First, and to this end, there must be transparent criteria governing the decision of whether to disclose a vulnerability. The last time this debate reared its head was in 2014, in the wake of the Heartbleed bug, another vulnerability reportedly known to the NSA for years that ultimately harmed users on a massive scale. In an effort to downplay allegations that the government was stockpiling exploits and to increase public confidence in the VEP, the Obama administration declared a policy of “bias toward disclosure” except when there is a “clear national security or law enforcement need.” The White House also declared that the Equities Review Board—the interagency body that ultimately votes on each decision—would consider, among other factors, “the extent of the vulnerable systems’ use of Internet infrastructure,” “the risks posed and the harm that could be done if the vulnerability is left unpatched,” and “whether the vulnerability can be patched or otherwise mitigated.” But it is not clear what constitutes a sufficiently “clear national security or law enforcement need” to supersede the other factors—all of which clearly apply to the WannaCry campaign. These factors should be subject to public comment and annual review by Congress.

Second, and closely related, if the Equities Review Board is truly interested in a “bias toward disclosure” policy, it must include private, nongovernmental representation. Presumably nearly all of the exploits presented to the ERB serve some national security or law enforcement need—otherwise the government would simply report the flaw to the vendor. But only by introducing unaffiliated stakeholders—such as consumers or industry representatives—can the process be objective. In this regard, it is also worth involving partner nations that share many of the same intelligence and commercial stakes that we do. In doing so we might even learn that other foreign intelligence services have discovered the same vulnerability, which would strengthen the case for disclosure.

Third, Congress should increase its oversight of our intelligence community’s information security practices, with an emphasis on mitigating insider threats like the ones that have dogged the NSA and CIA of late. Vulnerability retention, while necessary at times, becomes increasingly risky when the intelligence community repeatedly loses control of its own information. In the example of Kim Jong-un’s iPhone, the cost-benefit calculus is dramatically altered if the iOS flaw is subject to theft or loss. For this reason it is in the intelligence community’s interest to enhance its internal cybersecurity posture. It is the height of irony when the world’s leading cryptographic organization cannot secure its own tools. If this trend continues, it is hard to imagine the intelligence community legitimately advocating for the retention of any zero-day vulnerability, let alone one that, if lost, could adversely affect millions of computers.

Finally, it is inherently difficult for the public to measure the utility of the Vulnerability Equities Process. After all, private citizens and businesses lack any insight into decisions that might risk personal privacy, or corporate reputations and bottom lines. That is why congressional oversight is necessary to track the intelligence value of retained vulnerabilities, both quantitatively and qualitatively, so that the trade-offs can be compared over time. If the NSA could provide the public with convincing metrics about the value of keeping the WannaCry vulnerability secret for years, much of today’s uproar would be muted.

The good news is that the WannaCry attack, while horribly disruptive, is stimulating national and international debate on this critical subject. On Wednesday, a bipartisan bill addressing the Vulnerability Equities Process was introduced in Congress. The legislation “formally kicks off the debate over whether and how to codify the VEP, which presently exists only as a function of administration policy,” as Lawfare put it. That’s an encouraging sign that reform is on the horizon. We are one step closer to having a more balanced, objective, and transparent vehicle for adjudicating what will be increasingly common and consequential decisions.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.