Future Tense

Did the FBI Break Tor?

The bureau exploited a vulnerability in the anonymous Web browsing system to arrest criminals. That could leave activists and others at risk, too.


Last month, the FBI and a coalition of international law enforcement agencies announced that they’d arrested 17 people and brought down more than 400 illegal websites that were supposed to have been “hidden” by the powerful Internet anonymizing platform Tor. Operation Onymous’ biggest catch was Blake Benthall, the alleged operator of Silk Road 2.0, a marketplace infamous for peddling illicit drugs.

Although the number of websites actually affected in the large cybersecurity raid was soon revised dramatically downward (and is under some interesting dispute), the arrests and takedowns sent the tech media into a bit of a frenzy. Illicit enterprises use Tor, true, but so do dissidents trying to evade repressive governments, whistleblowers, journalists, and even governments trying to protect themselves and the flow of information.

Articles originally described the raid as a “dark market massacre” and a “scorched-earth purge of the Internet underground.” Some cited conspiracy-like theories on how the FBI was able to “break Tor,” theories conjured from potentially relevant and ethically troubled research out of Carnegie Mellon University. Others misinterpreted computer security research to make scary and inaccurate claims that more than 80 percent of Tor users could be de-anonymized.

But the fear-mongering and general hyperbole in the press missed a well-known truth: that vulnerabilities in computer security systems, including Tor, are common, if not expected. So much so that in a blog post on the recent raid, Tor itself even says, “In a way, it’s even surprising that hidden services have survived so far.”

No one in computer security has quite worked out a foolproof way to be anonymous online. It’s a really difficult, unsolved problem. But that’s also no reason to panic. To understand why all the fuss is a bit overblown, you first need to understand how anonymity on the Internet and the mythical beast of Tor actually work.

When you open a website at home—say, Google—your computer sends a request for information to the public IP address associated with Google’s servers. In response, Google shoots data back to your computer’s public IP address. Google can see who is surfing, and anyone who might be peeking at Internet traffic can see that you and Google are “talking” to each other.

Internet proxies were the first way developed to provide some anonymity online. So instead of sending the request directly to Google, your computer sends it to an intermediary (the “proxy”), which forwards the request to Google. Google in turn responds through the proxy. This way, Google sees only the proxy’s IP address, not yours. The inherent pitfall is that the proxy knows both the sender and receiver—anyone who hacks the middleman can see both ends.
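A toy model makes both the benefit and the pitfall concrete. This is a sketch, not real networking; the classes and addresses are purely illustrative:

```python
# Toy model of a web proxy: the server only ever sees the proxy's
# address, while the proxy itself sees both ends of the conversation.
# All names and IP addresses here are illustrative.

class Server:
    def handle(self, from_addr, request):
        # The server records whoever contacted it directly.
        self.last_seen = from_addr
        return f"response to {request}"

class Proxy:
    def __init__(self, addr, server):
        self.addr, self.server = addr, server
        self.log = []  # the middleman knows both sender and request

    def forward(self, client_addr, request):
        self.log.append((client_addr, request))
        return self.server.handle(self.addr, request)

server = Server()
proxy = Proxy("203.0.113.5", server)
reply = proxy.forward("198.51.100.7", "GET /search")

print(server.last_seen)  # the proxy's address, not the client's
print(proxy.log)         # the proxy sees both ends
```

The server’s log shows only the proxy’s address, which is the whole point; the proxy’s log, which records both ends, is exactly what an attacker gets by compromising the middleman.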

Tor solves that problem by routing your data through multiple intermediaries or “relays,” usually three on one “Tor circuit.” To increase security, every data packet is also encased in layers of encryption, each of which can be peeled off only by a specific relay on the circuit. This “onion encryption” is actually how “The Onion Router,” or TOR, gets its name. (Turns out that anonymity on the Internet, like a certain big green friendly ogre, means you have layers.)
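The layering can be sketched in a few lines. Here XOR with per-relay keys stands in for real encryption (XOR is emphatically not secure; it just makes the peeling visible), and the key names are made up:

```python
# Toy "onion" layering: XOR with a per-relay key stands in for real
# encryption. The sender wraps the message once per relay; each relay
# then strips only its own layer, and only after the last layer comes
# off is the message readable.

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relay_keys = [b"guard-key", b"middle-key", b"exit-key"]  # illustrative

def wrap(message: bytes) -> bytes:
    # Apply the exit relay's layer first and the guard's last,
    # so the guard's layer ends up outermost.
    for key in reversed(relay_keys):
        message = xor(message, key)
    return message

packet = wrap(b"hello tor")
for key in relay_keys:  # each relay peels exactly one layer
    packet = xor(packet, key)

print(packet)  # b'hello tor'
```

No single relay ever holds the plaintext and both endpoints at once, which is what distinguishes the circuit from a single proxy.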

Tor’s relays come courtesy of individual volunteers who choose to run the software on their own machines. Ostensibly this works because there are enough different people running these intermediaries that no entity can monopolize the system. Tor also provides a service that allows people to publish websites while keeping their IP addresses hidden. These “hidden services” are what the media frequently refer to as the “dark” or “deep” Web.

To connect to a hidden site, you arrange a “meet in the middle.” Hidden sites publish a kind of public pseudonym in a directory and include a list of relays through which to initiate a connection. You tell the hidden service (through its introduction nodes) to meet you at a personally selected “rendezvous point.” At—or rather, through—the meeting point node, the hidden service confirms its identity and passes you information. And every single time you or the hidden service connects to any other node (the meeting point, the directory of hidden services, etc.), you connect through one anonymizing Tor circuit.

A plethora of research papers detail vulnerabilities in Tor and ways to attack it. That’s a very good thing, because the more it’s studied, the greater the chance someone will find a problem and Tor can fix it. The computer security research community, of which Tor is a part, is all about open sharing of such information, because then vulnerabilities can be fixed and systems made more secure. In fact, the entity that has published the most comprehensive details on how the FBI could have unmasked Tor users in the recent Operation Onymous is Tor itself.

In the blog post published two days after the raid, Tor starts by pointing out that although it doesn’t know yet how the compromise was done (and unsurprisingly, Europol et al. declined to provide details), it also isn’t aware of any recent attacks or new vulnerabilities in the system. It may simply not have found them yet. Or it’s also possible that Operation Onymous didn’t require a new Tor vulnerability at all. That’s where some seemingly unethical research from over the summer potentially comes into play.

This past July Tor announced it had shut down a five-month-long combined “Sybil” and “traffic confirmation attack,” allegedly carried out by researchers at CERT, a computer security research institute at Carnegie Mellon University. Those same researchers were supposed to give a talk on “breaking Tor” at the Black Hat security conference in August but retracted it at the request of CMU’s lawyers. That research (i.e., the attack), the retracted talk, and its potential relationship with the recent Operation Onymous have been generating a lot of chatter and have led to many of the conspiracy-like theories seen in the press.

To computer security researchers, the CERT research is most notable not for having “broken” Tor (i.e., exploiting a vulnerability). It’s notable because of the ethical transgressions in the way the researchers did their work. But more on that in a second.

A traffic confirmation attack is one of the most well-known ways to assault Tor. To carry it out, you need to be able to control the first and last relays of Tor circuits. Once in control, you secretly tag data packets when they enter the network and check those tags when they exit. This way you can figure out who is talking to whom.
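The mechanics reduce to bookkeeping at both ends. Here is a sketch under toy assumptions (made-up addresses, an integer standing in for the tag):

```python
# Sketch of a traffic confirmation attack: an attacker who controls
# the first and last relays tags packets on entry and matches the
# tags on exit, linking a client to a destination. Names are
# illustrative, and an integer stands in for the covert tag.

entry_log = {}  # tag -> client address, seen by the first relay
exit_log = {}   # tag -> destination,    seen by the last relay

def entry_relay(client, packet, tag):
    entry_log[tag] = client
    return packet  # the packet continues through the middle relay

def exit_relay(packet, destination, tag):
    exit_log[tag] = destination
    return packet

entry_relay("198.51.100.7", b"...", tag=41)
exit_relay(b"...", "hidden-market.onion", tag=41)

# Correlate: any tag seen at both ends de-anonymizes that circuit.
linked = {entry_log[t]: exit_log[t] for t in entry_log if t in exit_log}
print(linked)  # {'198.51.100.7': 'hidden-market.onion'}
```

Note that the middle relay never needs to be compromised; it faithfully forwards the tagged packet either way. That is why controlling the two endpoints is the whole game.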

A common way to gain control of those relays is through a “Sybil attack,” where you flood the system with your own relays, so that you can dominate parts of the network. (Recent research shows that it’s not that expensive to do this; after all, there are only 6,000-plus relays currently on Tor.) This Sybil attack exploits an inherent vulnerability of Tor’s design: its reliance on volunteers to create the network.
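A back-of-the-envelope calculation shows why the small relay count matters. If relays were picked uniformly at random (real Tor weights selection by bandwidth and pins long-lived entry “guards,” so this is only a rough intuition, not the actual odds), an attacker running m of n relays would control both ends of a circuit about (m/n)² of the time:

```python
# Rough intuition only: assumes relays are chosen uniformly at
# random, which real Tor does not do (it weights by bandwidth and
# uses persistent entry guards). An attacker running m of n relays
# would then land on both ends of a circuit with probability (m/n)**2.

n = 6000  # roughly the relay count cited above
for m in (300, 600, 1200):
    p = (m / n) ** 2
    print(f"{m} attacker relays -> ~{p:.2%} of circuits exposed")
```

Even a few hundred cheap relays yield a measurable slice of circuits, and a five-month campaign gives many chances for any given user to draw a bad circuit.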

Usually when computer security researchers find a vulnerability in a security system like Tor, they test it on a simulation in the lab. And in a typical traffic confirmation attack, the people initiating the attack will tag the data (or “traffic”) surreptitiously, so that only they can see those tags. The CERT researchers did neither of those things.

In a phone conversation, Ari Feldman, a computer security professor at the University of Chicago, tells me, “What was particularly concerning to the operators of Tor and to outside observers was not just that they [the CERT researchers] actually tried to carry out this attack on the live Tor network, but the way they tagged traffic was such that anyone observing the Tor network could also read and interpret the tags that they added.”

In other words, anyone else who was paying attention over the five-month span—including, say, the FBI or Europol—could de-anonymize users on the Tor network, too. That’s not because they’d found a vulnerability, but because they could simply piggyback on the CERT researchers’ irresponsible, dangerous tagging system.

Once Tor found the attack, it kicked out CERT’s relays, patched the vulnerabilities, and announced it. But the CERT researchers were not very forthcoming with Tor about the details of the attack, in contrast with the open disclosure practices of the community. (All of this is particularly troubling when considering that CERT is an organization “dedicated to improving the security and resilience of computer systems and networks,” not breaking them. Neither CERT nor CMU’s legal counsel responded to Slate’s requests for comments on the matter or to this respected researcher about it.)

Any correlation between the July attack and Operation Onymous is technically speculation. But in July, Tor publicly noted the potential for an intelligence agency to take advantage of the five-month attack, and in November, it again pointed out that CERT’s attack could have played a key role in the recent raid.

The official complaint filed against Benthall, the alleged operator of Silk Road 2.0, claims that the FBI de-anonymized him in May of this year, exactly during the period of time when the CERT research had made all of Tor vulnerable.

So the FBI may not have needed to do much of its own technical magic at all. (Or very little technical work against Tor, considering that bugs in Web applications that interface with Tor have been exploited to attack the network in the past, as well.) And according to the indictment, old-fashioned undercover sleuthing by law enforcement agencies, not to mention carelessness on the part of Tor users, also seems to have played a key role in the latest raid.

We may not be able to say exactly how Operation Onymous did its damage. But all that hype over the breaking of Tor? The fact is that Tor, like just about every other piece of computer software and every security system ever designed, is broken to begin with. Who finds a vulnerability and what he or she chooses to do with it is ultimately what matters.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.