Future Tense

Pointing the Finger

Who should be held liable when there’s a massive data breach at a big company?

President Obama delivers the State of the Union. Photo by Mandel Ngan/Getty Images

President Obama mentioned cybersecurity only briefly during last week’s State of the Union. The four vague sentences tucked between discussions of Iran and Ebola touched on a range of issues but offered few clues as to how the president plans to ensure that no one can “shut down our networks, steal our trade secrets, or invade the privacy of American families, especially our kids.” But in the buildup to the address, the White House made much of its new cybersecurity initiatives. Those proposals offer a glimpse into the administration’s perspective on one of the more divisive areas of computer security policy: defender liability.

In other words: When a security breach occurs, to what extent are the various actors besides the attacker responsible? If someone steals sensitive data—such as private emails, payment card numbers, or naked photos—from servers owned and operated by a private company, is that company partly responsible? If the thieves exploit vulnerabilities in commonly used applications or software, are the people who wrote the code partly to blame? Cybersecurity defenders take many forms—individual users, organizations that store data, service providers, software developers, hardware manufacturers—and untangling who is responsible for what is far from straightforward.

The White House proposals’ take on liability speaks to an old and ongoing cybersecurity debate about whether security would be better served by placing larger or smaller liability burdens on the companies responsible for defending networks, writing secure code, and identifying and responding to threats. The real question here: Do big companies get breached because they are negligent, or because they lack good information about cybersecurity?

The White House’s proposed legislation on cybersecurity encourages companies to share information about computer-based threats more freely, both among themselves and with the Department of Homeland Security’s National Cybersecurity and Communications Integration Center. As an incentive for companies to be more open about their breaches, the law would limit the extent to which that material could be obtained through Freedom of Information Act requests or used in lawsuits against the parties who shared it. Limiting liability is not a new idea in cybersecurity policy, but there are also advocates for increasing the liability burdens faced by companies responsible for software vulnerabilities and poor security practices.

The people who agitate for greater liability usually want to see those burdens applied to software developers, while those proposing limitations are often interested in protecting a broader swath of companies that use software but don’t necessarily create it. So part of the discussion of defender liability is about how it is balanced and apportioned—do companies that store data bear too much, and companies (and individuals) that write code too little? Given how difficult cybercrimes can be to trace back to their perpetrators—and the jurisdictional challenges of holding those perpetrators accountable even when they can be identified—it’s often impossible, or painfully slow, to hold criminals responsible for these incidents, so how we assign blame to various defenders matters a great deal.

But at the heart of this debate—and the proposed White House policy—is a really important and difficult question: What’s the best way to ensure better cybersecurity practices? Is it to ramp up the liability private actors face for security breaches, so that they have greater incentives to take proactive precautions? Or to limit the liability they face, so that they have fewer reasons not to share information about incidents with others who might benefit from it?

This is essentially a discussion about how much we know about computer security—or, more specifically, a debate about how much we know about computer defenses. One line of thinking, that of the liability enthusiasts, holds that we actually know a fair amount about what works to debug code and protect computer networks against the various threats we face. Therefore, the people (or companies) who are not acting on that knowledge should be held responsible when their security is breached to the detriment of others.

The other line of thinking, which leads to liability limitations, suggests exactly the opposite: that we don’t have enough information about what works for cybersecurity to be able to say with any authority what companies and software developers should be doing by way of defense. We still don’t know what the cybersecurity equivalents of seatbelts in cars or locks on doors are, these proponents say, and we don’t even really know what the most serious threats we face are or how much damage they do, because organizations are so secretive about security incidents, for fear of inviting bad publicity and lawsuits. Therefore, we should be focused on assuaging those fears, through liability and FOIA limitations, like those in the White House’s proposal, so we can learn more about threats and mitigation tactics.

Of course, both camps could be partially right. Ideally, cybersecurity policies would punish the negligent and still protect the organizations that had taken reasonable precautions but identified or been subject to novel or sophisticated attacks. That way, the latter group would share information about those breaches with others who could benefit from it. The challenge lies in defining those “reasonable precautions,” or drawing the line between negligence and lack of information. Because without those, there’s no way to distinguish who should be subject to more liability and who should be subject to less.

This is not an easy question. Take, for instance, the major cybersecurity headline of the holiday season: the hack of Sony, allegedly by North Korea. Did Sony fail to implement common-sense measures to protect its data? Or was it unfortunate enough to be targeted by a talented and well-resourced adversary that would have been all but impossible to defend against? Joseph Demarest, assistant director of the FBI’s cyber division, told Congress that the attack would have defeated “90 percent of Net defenses that are out there today in private industry,” though it’s not clear how that assessment was made (or even whether it’s indicative of a particularly solid defensive posture on Sony’s part). There’s not enough publicly available information about Sony’s security posture or the specific mechanisms of the breach to make a call one way or the other yet, but that particular case will almost certainly be fought out by Sony and the people whose information was leaked—and it will again hinge, in part, on the question of what constitutes reasonable security measures.

Congress may take up the proposed information-sharing legislation. But even if lawmakers reach agreement, it remains to be seen whether a law that encourages companies to volunteer security information with a somewhat reduced fear of legal retribution will radically change anyone’s current behavior. The current proposal includes some new privacy protections in response to widespread criticism that previous bills made it too easy for companies to hand over personal data to the government. But, in spirit, it is not a huge departure from previous attempts at information-sharing policies, including the proposed Cyber Intelligence Sharing and Protection Act, passed by the House in April 2013, and the Cybersecurity Information Sharing Act of 2014.

A braver policy might be one that accepts how little we still know about what threats we face and which defenses work to protect against them. It should focus, at least for the time being, on gathering that information from companies about the security incidents they witness and the defensive measures they did (and did not) have in place at the time—without trying to assign blame or liability.

That is not a permanent solution. Organizations that do not implement reasonable security protections should be liable for resulting harm, economic or otherwise. But if we try to hold them responsible at the same time that we are trying to figure out what those reasonable protections are, it becomes that much more difficult to learn from incidents.

Improving cybersecurity will probably, eventually, mean imposing clearer liability burdens on defenders. But first it may be necessary to lift those burdens for a few years—even at the risk of letting some negligent parties go unpunished—so that we can learn enough about the current threat and defense landscapes to make smart decisions about how those responsibilities ought to be defined, and what it means to have “reasonable” security.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture.