Of course (of course!) lawmakers and headlines are referring to the Office of Personnel Management hack as a “cyber Pearl Harbor.” What we do best in the aftermath of a cybersecurity incident is analogize it to something else, preferably something deadly, in catastrophic language. (It’s not unlike what we do in the build-up to a big snowstorm.) What we do worst in the days and months following a major breach is place blame—figure out whom to hold responsible and how to learn from our mistakes in a constructive way.
OPM Director Katherine Archuleta said of the breach at a Senate hearing this week, “If there’s anyone to blame, it’s the perpetrators.” In one sense she’s right—the villain in this story is probably not anyone who works for OPM, or even anyone who works for the federal contractor whose credentials were used to access the stolen records. But that doesn’t mean OPM is blameless. In the aftermath of data breaches, there’s usually enough blame to go around—for victims and perpetrators alike. Perhaps the biggest problem: No one has a clear sense of where to draw the line between adequate defense and negligence.
Yes, OPM was negligent, and this breach should inspire it and others to shore up their systems. (And yes, it’s also fair to place some blame on Congress itself for its role in budgeting.) But it’s almost impossible for an organization today to avoid being accused of negligence, in one way or another, after a security breach. We ask why the victims didn’t require longer or more complex passwords, limit the number of login attempts, and recognize that oddly worded email as a spear-phishing attempt. It’s all very well to point out defensive failings after the fact, but we need to get better at turning those criticisms into concrete guidance to help people select and prioritize among the wide range of security tools out there. Then we can distinguish between defenders that failed to follow that guidance and those that were simply unlucky—in other words, we can place blame.
There should be two very different kinds of blame we think about in the context of computer security incidents—an angrier, more punitive kind directed at the perpetrators, and a critical, more constructive kind assigned among the various victims. The purpose of the latter variety shouldn’t be to exact retribution, but to help us understand what each different defender—the government agencies, the contractors, the individual employees—could and should do differently in the future. But we often confuse these kinds of blame when it comes to cybersecurity incidents. Sometimes, the victims, like OPM, come in for the angry, punitive treatment, while we try to have constructive, more congenial conversations with the perpetrators (in this case, allegedly China).
Of course, Congress can’t easily summon the hackers to a hearing and berate them the way it has Archuleta. It’s often difficult to determine definitively who the perpetrators are, and even when you can identify them, they’re not necessarily within your borders or your power to punish. (Those two problems are also related—identifying who initiated an attack often requires cooperation from entities beyond the investigators’ jurisdiction.) So indictments, like those the U.S. issued last year of five People’s Liberation Army hackers, are more symbolic than they are satisfying.
Since we often don’t get the satisfaction of punishing the perpetrators, we’re left to take out our anger on the people and organizations they targeted. That anger is not entirely misplaced—as I said before, OPM was far from innocent in this incident—but neither is it entirely fair, given how murky the security guidelines and rules are for organizations like OPM. Yes, there are a lot of things OPM can learn from this breach and do better at in the future—including maintaining better audit logs, updating its COBOL systems, and screening the security measures implemented by contractors and other third parties with access to its systems. And sure, in retrospect, it’s obvious that the agency should have made these changes a long time ago. But, in many ways, those measures are only obviously necessary after the fact because there’s no clear, concise list of essential defenses and security tools for an organization to run through and check off when it’s protecting data and computer assets.
So why isn’t there a straightforward checklist of security measures? For one thing, those measures are constantly evolving as we witness new attacks and develop new defenses. For another, an organization’s security needs depend to a large extent on the particular organization—what kinds of data and assets it’s protecting, whom it’s protecting them from, what kinds of access and services it needs for its regular day-to-day functioning. It’s hard to come up with a one-size-fits-all set of security measures that would make sense for every possible target.
There are some fairly comprehensive catalogs of security measures, including the encyclopedic “NIST Special Publication 800-53” and the less massive but still pretty extensive list of “20 Critical Security Controls for Effective Cyber Defense.” But while they can be helpful for people trying to get a handle on all the different available tools and techniques, they provide relatively little guidance about where an organization should start and what it should prioritize when it comes to security. Even the deceptively simple-seeming Payment Card Industry Data Security Standards are not entirely easy to interpret and implement: What does it mean, in practice, to “restrict inbound and outbound traffic to that which is necessary”?
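To see why even that PCI requirement takes interpretation, consider one common way an administrator might try to implement it: a “default deny” firewall policy that drops all traffic and then permits only what the organization has decided is necessary. The sketch below uses the Linux iptables tool; the specific ports and services (inbound HTTPS, outbound DNS) are hypothetical assumptions for illustration, not anything drawn from OPM or the PCI standards themselves.

```shell
# A minimal "default deny" firewall sketch using iptables.
# The services allowed here (inbound HTTPS, outbound DNS) are assumptions
# for a hypothetical single-purpose server, not a recommended configuration.

# Set default policies: drop all inbound, outbound, and forwarded traffic.
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

# Allow loopback traffic, which local services generally need.
iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Allow replies on connections that were already permitted.
iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Permit only inbound HTTPS (the one service this hypothetical host offers) ...
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# ... and only outbound DNS lookups.
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
```

Even this short rule set embeds a string of judgment calls—which services count as “necessary,” whether outbound traffic should be restricted at all, what to do about software updates—which is exactly the kind of interpretation the standard leaves to each organization.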
Moreover, defending against computer security breaches often involves a number of different defenders—not just the organization protecting sensitive data, but also the organizations involved in transporting and storing that data. Identifying necessary defenses is not just a matter of understanding your particular organization, what it protects, and whom it’s protecting those assets from; it’s also about the other people you rely on—and who rely on you—and what their respective defensive roles are, and how your defenses interact with, and ideally augment, theirs. This means, for instance, trying to hash out exactly what responsibilities a retailer has for protecting credit card information, what responsibilities the credit card companies have for detecting fraud and replacing cards, and what responsibilities individuals have to protect their PINs or notice fraudulent charges.
For all the victim blaming we do, the liability regimes for cybersecurity incidents are still very poorly defined—both because we’re wary of codifying any rules or standards for sufficient defense that could quickly become outdated, and also because so many different actors are involved in defending against any one of these threats that there’s always someone for them to try to shift responsibility onto.
So, who’s to blame for the OPM breach? Well, the hackers shouldn’t have stolen the information, and OPM should have paid more attention to upgrading its systems, and KeyPoint Government Solutions—the federal contractor whose credentials were used to access the OPM network—should have been more careful. And perhaps all of us should have spent a little less time fixating on whom to blame for these incidents and how many millions of records were stolen in each one, and a little more time thinking about how we can learn from them and provide better, clearer security guidance and more defined defensive responsibilities to the many, many people worrying over how to protect their data and computer networks. As I said, plenty of blame to go around.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.