Bitwise

There Will Be More Blood

The Heartbleed security flaw is bad. And it’s not the only one.

The Heartbleed bug affects everyone save the most tech-illiterate doomsday preppers. It’s a severe security flaw so widespread that Google, Yahoo, Facebook, Tumblr, Instagram, Netflix, Wikipedia, and countless other sites have all fallen prey to it for the past two years. While those sites are busy patching their systems and checking for breaches, and while you are changing your passwords, I’ve got some more bad news for you: There is an even worse bug out there. It affects many of these same sites, and while the extent of its threat is not yet fully known, it should be taken very seriously. None of the major Internet companies have fixed this bug yet, and many of them are doing absolutely nothing about it.

The bug does not have a name. This is because the bug has not yet been found—at least, not by anyone who’s telling. But it is almost certainly there, and it’s likely there’s more than one of them. Right now, damage control efforts around Heartbleed are sucking up all the oxygen, but very soon it will be time for what engineers call a postmortem.

That discussion is going to be difficult. Anyone who tells you that there are positively no other bugs in OpenSSL code, where Heartbleed was found, or in any of the other core pieces of open source infrastructure, is full of it—or at least more optimistic than any competent engineer should be. OpenSSL is an open-source library that implements the SSL/TLS encryption protocols securing much of the Web’s traffic, and it is in ubiquitous use by the biggest tech companies around. It is maintained by a part-time team of about 10 people, all but one of whom have day jobs elsewhere. OpenSSL co-founder Steve Marquess writes, “These guys don’t work on OpenSSL for money. They don’t do it for fame. They do it out of pride in craftsmanship and the responsibility for something they believe in.” Their hard work is effectively pro bono, their Web page asks for donations, and their budget is less than $1 million a year. They are not a nonprofit, because becoming one “would require more of an investment in time and money than we can justify at present,” according to Marquess.

The good thing about the Heartbleed debacle is that it’s sufficiently arcane that Congress isn’t going to jump in and Monday-morning quarterback it the way it did with healthcare.gov. Unlike Apple’s embarrassingly simple “goto fail” bug in OS X from this past February (which I explained here), Heartbleed requires a bit more technical knowledge to grasp. This xkcd comic explains it well; any C programmer will recognize it and shake their head, as it represents a sort of vulnerability all too common in C code: trusting a length that the other end of the connection claims, without checking it against the data actually sent. Here is the fix, which (a) seems good, but (b) makes me wish the code could be rewritten from scratch in a safer language. Unfortunately, that’s akin to wishing that the Los Angeles highway system or the New York subway system were rebuilt from scratch. Once code like this is out in the wild and everyone is using it, changing it even slightly is like switching horses midstream.
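To see the shape of the mistake without wading into OpenSSL’s actual source, here is a deliberately simplified sketch in C. The function names and parameters are mine, invented for illustration; the real code is more involved. The vulnerable version copies however many bytes the client claims to have sent, while the patched version first checks that claim against the data actually received, which is essentially what the official fix does.

    #include <stdlib.h>
    #include <string.h>

    /* A hypothetical, simplified heartbeat handler -- not the real OpenSSL
     * code. The client sends a payload along with a claimed payload length,
     * and the server is supposed to echo the payload back. */

    /* Vulnerable version: trusts the client's claimed length. If claimed_len
     * is larger than the payload actually sent, memcpy reads past the end of
     * the buffer and leaks whatever happens to sit in adjacent memory back
     * to the client. This is the general shape of the Heartbleed flaw. */
    unsigned char *echo_heartbeat_vulnerable(const unsigned char *payload,
                                             size_t claimed_len)
    {
        unsigned char *response = malloc(claimed_len);
        if (response == NULL)
            return NULL;
        memcpy(response, payload, claimed_len);  /* over-read happens here */
        return response;
    }

    /* Patched version: check the claimed length against the bytes actually
     * received before copying, and discard malformed requests -- the same
     * idea as the bounds check in the official fix. */
    unsigned char *echo_heartbeat_patched(const unsigned char *payload,
                                          size_t claimed_len,
                                          size_t received_len)
    {
        if (claimed_len > received_len)
            return NULL;  /* silently drop the bogus heartbeat */
        unsigned char *response = malloc(claimed_len);
        if (response == NULL)
            return NULL;
        memcpy(response, payload, claimed_len);
        return response;
    }

An attacker who claims a 64-kilobyte payload while sending only a few bytes gets back up to 64 kilobytes of whatever happened to be sitting next to that buffer in the server’s memory: session keys, passwords, other people’s traffic.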

So I do not blame the engineers, who are the sewer workers of the software world, doing the dirty job that keeps things running and getting very little credit for it. The bug was created by someone not even working on the project, who contributed a small feature that a single OpenSSL team member reviewed without catching the flaw.

Are there other zero-day bugs in OpenSSL, or in other core infrastructure like Apache or BIND? Almost certainly. And OpenSSL’s lack of funding is not the only problem. After all, Apple’s “goto fail” disaster was created by one of the richest companies on Earth. Target managed to lose 40 million credit-card numbers to hackers. Microsoft patches its Windows security vulnerabilities monthly if not more often—and sometimes it tells the government about them first—and somewhat shady operatives around the world trade so-called “zero-day” exploits that haven’t yet been discovered by the security community. The NSA spends your hard-earned tax dollars purchasing security exploits from firms like the French company Vupen. In 2012, the going price for an unrevealed iOS exploit was up to $250,000, according to a Forbes investigation; a Windows exploit would net you only $120,000.

The NSA and other purchasers don’t reveal these exploits to security companies, tech companies, or the public; they keep them on hand for their own cyberoperations. Notably, the NSA’s Stuxnet worm, targeted against Iran’s nuclear centrifuges, used four zero-day bugs to help infect computers around the world before making its way into the air-gapped Iranian nuclear intranet via an unsuspecting employee’s USB stick.

This is why the fuss over whether the NSA knew about Heartbleed is somewhat beside the point. If the NSA knew, they wouldn’t tell us. If the NSA didn’t know, they should have, since the Russian and Chinese governments might have been using it against American computers for the last two years. And it’s certain they know of many other bugs that they aren’t telling anyone about, even if they missed this one. And since the agency pretty much operates with no oversight whatsoever, to the extent that even their water-carrier Dianne Feinstein is now fed up with them for spying on her, we should not expect any of this to change.

My point is that unknown bugs are a reality, not a hypothetical, and a large part of any security engineer’s work lies in minimizing the chances of introducing them. Heartbleed was unusually widespread and unusually severe, but it is hardly one of a kind. The actual extent of Heartbleed’s damage is still unclear. The potential jeopardy may well greatly outweigh the actual compromise, especially if few enough people knew about Heartbleed before its discovery last week by Google security engineer Neel Mehta. But as we are seeing, Google, Facebook, and the rest are taking serious action, and the security community is all over the bug. They do not want to gamble that Heartbleed didn’t expose them too badly over the last two years. The logical next question is, how do we minimize the risk that those unknown bugs are exposing us just as badly?

It is not a simple question, because the problem is systemic, not individual. (I will have more to say about that in my next column.) For now, consider that the underpaid OpenSSL team does at least as good a job as comparable groups at Apple and Microsoft that have far better funding, and they did a far better job than Target, which ignored its security breach even as it happened. Johns Hopkins cryptography professor Matthew Green writes, “The OpenSSL developers have a pretty amazing record considering the amount of use this library gets and the quantity of legacy cruft and the number of platforms (over eighty!) they have to support.” I agree with Green. OpenSSL is asking for government and corporate support to give the project the attention that—as we now know—it needs. Let’s start by helping them out.