War Stories

Damaged Goods

How the NSA traveled down a slippery slope—and how it can regain Americans’ trust.

An undated aerial handout photo shows the National Security Agency (NSA) headquarters building in Fort Meade, Maryland.

Handout photo by Reuters

“When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it after you’ve had your technical success.” —J. Robert Oppenheimer

The father of the atomic bomb made this observation in 1954 while testifying before a panel that wound up revoking his security clearance, in part as a result of reports that he’d opposed going ahead with the much more powerful hydrogen bomb. He was explaining to the panel why he’d come around to supporting the H-bomb project—it was so “technically sweet” that “the moral and ethical and political issues” dropped by the wayside.

Technical sweetness may explain how the National Security Agency put in place the massive surveillance programs that Edward Snowden has revealed in recent weeks.

Consider this. The core mission of the NSA, ever since it was founded in 1952, has been “signals intelligence”—intercepting all manner of communications sent or received by the enemy. The task has been getting more challenging as the means of communication have evolved from radio antennae and the telephone to satellites, fiber optics, cellphones, and the Internet. It has become harder still in the past dozen years, as the enemies to be tracked have expanded to include not only nation-states, but also amorphous, decentralized terrorist groups.

And so, when the NSA’s allies and affiliates in the corporate software world came up with devices that can intercept, sift through, collate, and parse patterns from everything, in near-instantaneous time—well, it was all so “technically sweet,” the natural inclination among those in charge would have been, as Oppenheimer said, to “go ahead and do it.”

There were, of course, legal limits to what they could do. And in the past, these limits served as firm obstacles. Back in the early 1980s, when White House and Pentagon officials became aware that the government’s computer networks were vulnerable to hacking, the NSA proposed putting itself in charge of software security—but leaders in Congress rejected the idea, noting that, by statute, the NSA must have nothing to do with domestic surveillance.

In 2002, the Defense Advanced Research Projects Agency, the Pentagon’s high-tech research and development shop, put Adm. John Poindexter, who had been Ronald Reagan’s national security adviser, in charge of a massive surveillance program called Total Information Awareness—which was essentially a somewhat cruder version of the programs that we know are in place now. The 9/11 terrorist attacks had occurred just months before; most Americans were willing to make compromises in the interests of national security—but only to a point. When news reports described the vast scope of Poindexter’s TIA program, Congress ordered it shut down, for the same reason: Foreign intelligence programs—whether run by the CIA, the NSA, or the Pentagon—had no business snooping within the United States.

But sometime over the past decade, the means of surveillance became so fast and vast—so technically sweet—that the decision was made to “go ahead and do it.”

And it’s not hard to see why. Threats from al-Qaida and its affiliates were still active—and, with the Cold War gone, they posed the only plausible threats to the U.S. “homeland.” Several plots, though much smaller in scale than the 9/11 attacks, had just barely been foiled in the years since. FBI Director Robert Mueller told a House committee in June that the 9/11 plot might have been disrupted if today’s technologies had been around in 2001. Apparently, one of the intercepted phone calls—the calls that led the intelligence community to warn President George W. Bush of a possible impending attack—had been made from San Diego, but the monitors had no way of knowing that at the time. Had they known, Mueller claimed, the caller could have been tracked down and detained.

NSA officials also testified that the most far-flung of these tools, called XKeyscore, has helped capture more than 300 terrorists. On the other hand, Sen. Patrick Leahy says he’s not convinced it’s done much to capture any.

Who’s right? I don’t know. I’m pretty sure Glenn Greenwald doesn’t know either; nor, quite likely, does Edward Snowden. Does Leahy really know? Do Sens. Ron Wyden and Mark Udall, who are also on the Senate Intelligence Committee, know? They should, one way or the other. Should the rest of us? Obviously, we can’t be expected to know everything about how XKeyscore or some other program helped capture the terrorists, if in fact it did. But now that the secrets have been spilled, now that the whole program is under discussion, we need to know more of the details.

Sunlight pains the NSA’s entire culture and those who have spent their careers there. Throughout its existence, everything about the agency—everything about signals intelligence and communications intercepts—has been highly classified. Until recently, the very fact that there were intercepts was highly classified, and disclosure of that fact was a serious crime. Intelligence agencies are, by nature, insular and secretive; the NSA, by far the largest and most secretive U.S. intelligence agency, is exponentially so.

There’s the scene in Dr. Strangelove when the president lets the Russian ambassador into the war room so that he can see the common predicament of both superpowers, and the Air Force chief of staff (played by George C. Scott) sputters, “But he’ll see the Big Board!”

That’s the situation in which the NSA chiefs now find themselves. They have to show us the Big Board—not all of it, and perhaps not to all of us directly. But the secrecy has been too tight, the few public statements on the matter have been too vague or deceptive, and the level of distrust is rising so steeply that the program itself is in jeopardy. A recent amendment to cancel the program lost in the House of Representatives by a vote of 205–217, a startlingly narrow margin. Those who manage or support the program should be the keenest to open the curtains.

It’s easy to see the logic by which the NSA managers widened the scope of their surveillance. At first, they focused on tracking traffic patterns. Some phone number in the United States was calling suspicious people or places in, say, Pakistan. It might be useful to find out whose phone number it was. It might then be useful to find out what other people that person has been calling or emailing, and then it might be useful to track their phone calls and email patterns. Before you know it, they’re storing data on millions of people, including a lot of Americans. Then maybe one day, they track someone—a phone number or email address they’d never come across before—engaged in some very suspicious activity. They wish that they’d been tracking this person for some time, so they could go back and see if a pattern exists without having to wait for one to emerge. Then they learn that they can do this; new technology makes it possible. So they scoop up and store everything from everybody. They even convince themselves that they’re not “collecting” data from American citizens (as that would be illegal); no, they’re just storing it; the collecting doesn’t happen until they actually go retrieve it from the files. (James Clapper, director of national intelligence, actually made this claim.)

A widespread criticism of the intelligence failure on 9/11 was that the FBI, CIA, NSA, and the other pertinent agencies had tracked down a lot of facts—a lot of data points—but they didn’t, or couldn’t, “connect the dots.” I’ve never completely bought this notion; a lot of the failure stemmed from routine screw-ups. But let’s stipulate there’s something to it. What if new technology could give the NSA so many dots, a seamless stream of dots, all the data points in the world—the fantasy-come-true of universal surveillance—that nobody would need to connect the dots, because the dots practically connect themselves?

They would need some legal authority for this, so they ask the FISA court—created by Congress in the 1978 Foreign Intelligence Surveillance Act—to rule on whether this is permissible, and the court obliges. Specifically, it rules that they can do this, as long as the material they’re storing is “relevant” to an investigation of terrorism, and the court buys the logic that the agency might need to go fetch data retroactively in such a probe. Therefore, everything is “relevant.”

The catch, as we now know, is that all of this—the ever-expanding surveillance in time and space, the reasoning behind it, and the FISA court ruling that approves it—has evolved at such high levels of secrecy that only a handful of people in Congress (very few people anywhere outside the NSA, and probably not all that many inside) know anything about it. This, it turns out, is what Wyden, a member of the Senate Intelligence Committee, meant when he cryptically said, way back in October 2011, that “there are two Patriot Acts in America”—the one that anybody can read and a “secret interpretation that the executive branch uses” but that nobody on the outside knows about at all. The public Patriot Act allows “bulk” collection of data; the secret interpretation defines “bulk” far more bulkily than anyone could have imagined.

And here’s the problem. The program was supposed to have checks and balances, but really it doesn’t, not anymore. The FISA court was created in the wake of Sen. Frank Church’s hearings, which unveiled decades of illegal conduct by the U.S. intelligence community. Intelligence agencies had to get a warrant from the FISA court before they could conduct surveillance of suspected spies inside the United States.

This mandate became problematic when the NSA started doing data-mining; the issue was no longer individual warrants for specific intercepts but rather permission to perform massive sweeps. But even then, the FISA court had the authority only to approve or deny a specific request. The law never meant for the court to issue rulings on the scope of its own authority or to interpret broad issues of law.

The court’s powers were expressly limited because its sessions, deliberations, and rulings were secret; the court issued its extraordinary ruling on the meaning of “relevant” data without letting anyone know about it. In other words, the NSA’s authority and the Patriot Act’s scope were expanded way beyond almost everyone’s understanding—in total secrecy. Theoretically, the ruling could have been appealed to the Supreme Court. (The new surveillance programs, even more than the earlier ones, raise constitutional issues, especially regarding the Fourth Amendment, which bars unreasonable searches and seizures.) But in reality, this was impossible because no one knew the ruling had been made—and because there is no procedure for appealing a FISA court decision.

So here are a few modest reforms, which should be acceptable to all parties, whatever their views of the surveillance program itself:

First, prohibit the FISA court from ruling on the scope of its own authority or interpreting the broad laws that govern the NSA’s powers. It should behave more like a municipal court than the Supreme Court.

Second, appoint a panel of judges or lawyers within the FISA court—constitutional advocates—who, as they see fit, can argue against the intelligence community’s case for surveillance. (James Robertson, a former FISA court judge, recently endorsed this idea.)

Third, broaden the personnel of the FISA court. At the moment, all but one of its judges were appointed by Chief Justice John Roberts. A better idea would be to limit the judges’ terms to three or four years instead of the current seven (to keep them from getting sucked into the secrecy culture) and to let each Supreme Court justice appoint one FISA court judge. The FISA court inherently deals with constitutional questions; all surveillance brushes up against the Fourth Amendment to some extent. If the court’s proceedings must remain secret, at least its composition could reflect the balance of the court that deals with these issues.

Fourth, declassify FISA court rulings after a certain amount of time, maybe two or three years, sooner if possible. (Agencies may argue for redactions to protect legitimate security interests.) Also declassify the records—perhaps the transcripts—of cases where the court modified a request before approving it.

Fifth, and perhaps most important, some sort of panel—maybe the Senate Intelligence Committee, but preferably an independent group of trustworthy Americans (whoever they might be)—needs to conduct a thorough investigation of whether, and to what extent, these surveillance programs truly have prevented terrorist strikes. Most of the details may need to remain secret, but we need to know the essential answer to the question. If they have been of great benefit, public knowledge of that fact may shore up support for the programs. If they haven’t been, then it’s time to cut them back.

Either way, the balance of security and civil liberties needs rethinking—or, rather, it needs thinking, because the enhanced intrusions went into effect, for better or worse or both, without any outside pondering at all.