Future Tense

Plugging Up the Holes

Should software-makers be held liable for the Sony hack?

Sony Pictures Entertainment Co-Chair Amy Pascal poses during the premiere of The Interview in Los Angeles on Dec. 11, 2014. When your private communications are hacked, it can be tough to smile.

Photo by Kevork Djansezian/Reuters

One outcome of the Sony Pictures Entertainment network hack is certain: You can expect a blizzard of lawsuits from all kinds of parties claiming harm from the unauthorized disclosures of all kinds of material, from unreleased films to employee health records. But as Sony-related legal papers fly through the court system, one party—or rather, one collection of parties—seems likely to be relatively immune from any legal fallout: the software companies that provided the hackable product or products.

I’m not talking about antivirus or other security software companies here. This is about everything we use in our computing and communications.

The Sony hack, which may or may not have been engineered and/or funded by North Korea’s paranoid government, exposed lax security practices inside the company—it was “ripe for hacking,” said the Associated Press. More broadly, the attack demonstrated how easy it still is to penetrate modern software, Windows Server in this case, when companies leave doors open. This wasn’t a particularly sophisticated attack, by most accounts I’ve seen.

It also came in the wake of a staggering array of other network intrusions that have exposed the personal data of tens of millions of people in the U.S. and around the world. Companies like Target and Home Depot have suffered significant losses as a result, but as far as I can determine, the blame has fallen on corporate security practices and smart hackers, not on the software companies these businesses rely on. There have been some lawsuits over these hacks, and occasional government penalties for sloppy security, but apart from a tiny number of cases (examples here and here) that legal experts have pointed out to me, the big software companies have been largely immune from responsibility.

Why? For one thing, vendors force customers into arbitration for disputes, thereby thwarting the class-action suits that might otherwise be filed. And look at the “end user license agreement”—the scroll-forever text you must accept before you can use almost any software or online service—and you’ll see language like this (taken from Microsoft in this case); yes, the capital letters are in the original:

DISCLAIMER OF WARRANTIES. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, MICROSOFT AND ITS SUPPLIERS PROVIDE TO YOU THE OS COMPONENTS, AND ANY (IF ANY) SUPPORT SERVICES RELATED TO THE OS COMPONENTS (“SUPPORT SERVICES”) AS IS AND WITH ALL FAULTS; AND MICROSOFT AND ITS SUPPLIERS HEREBY DISCLAIM WITH RESPECT TO THE OS COMPONENTS AND SUPPORT SERVICES ALL WARRANTIES AND CONDITIONS, WHETHER EXPRESS, IMPLIED, OR STATUTORY, INCLUDING, BUT NOT LIMITED TO, ANY (IF ANY) WARRANTIES OR CONDITIONS OF OR RELATED TO: TITLE, NON-INFRINGEMENT, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, LACK OF VIRUSES, ACCURACY OR COMPLETENESS OF RESPONSES, RESULTS, LACK OF NEGLIGENCE OR LACK OF WORKMANLIKE EFFORT, QUIET ENJOYMENT, QUIET POSSESSION, AND CORRESPONDENCE TO DESCRIPTION. THE ENTIRE RISK ARISING OUT OF USE OR PERFORMANCE OF THE OPERATING SYSTEM COMPONENTS AND ANY SUPPORT SERVICES REMAINS WITH YOU.

In other words, use our product entirely at your own risk. If you don’t like that, tough.

In an era when software is at the heart of so much of what we do, this seems a bit absurd. What if General Motors and every other auto manufacturer made you agree to language like that when you bought a car? Do you think the highways would be safer?

Which raises a question that comes up from time to time: Should the software industry be required to take more legal responsibility for making its products safe? On one level this feels long overdue: an end to the legal free ride. On another, however, it’s problematic if not outright impossible.

First, software is code: programming instructions. Code has bugs and flaws—always has and always will. Code is complicated, a perpetual work in progress. It’s never “done” in any traditional sense of the word.

It’s complex enough on a single general-purpose device like a PC, phone, or tablet, each of which requires an operating system—a collection of programs and modules working together to provide the device’s underlying functionality. A PC application or mobile app lives in that operating-system environment, calling on its various functions so it can do its own relatively specialized task or tasks. The individual components have flaws, and their interactions can introduce unexpected ones.
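To make that layering concrete, here is a toy sketch in Python (my illustration, not anything from the article): even a trivial program does almost none of the work itself, delegating to operating-system code its author never sees and cannot fully audit.

```python
# A toy sketch of the layering described above (illustrative only): a tiny
# "application" whose real work is done by operating-system services.
import getpass   # user identity, supplied by OS facilities
import socket    # hostname lookup, supplied by the OS networking stack

def save_note(path, text):
    # The app's own "specialized task" is one line; open() and write()
    # hand the real work to OS file-system code.
    with open(path, "w") as f:
        f.write(text)

def machine_info():
    # Both values come from the operating system, not from this program.
    return getpass.getuser(), socket.gethostname()

if __name__ == "__main__":
    save_note("note.txt", "hello")
    print(machine_info())
```

A flaw in any of the layers underneath this handful of lines is a flaw in the application, whether or not its author did anything wrong.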

The Internet has compounded the complexity. Today’s Web combines and connects all kinds of elements. There’s the software, from a variety of commercial and open-source projects, running on servers and personal devices. Each Web page loads a variety of scripts and plugins, also from numerous sources, to handle functions ranging from advertising to social media and much more. Everything’s interconnected. (Take a look at the source code for this page, which you can view via your browser menu if you’re on a PC or Mac, to get a sense of what I’m talking about.)
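If you’d rather not wade through raw page source, here is a rough sketch in Python (my own, with a placeholder URL) of the same exercise: fetch a single page and list the external script sources it loads, each one another codebase that page’s visitors implicitly trust.

```python
# Illustrative sketch: list the external <script src="..."> sources one page
# pulls in, to see how many separate parties contribute code to it.
from html.parser import HTMLParser
from urllib.request import urlopen

class ScriptLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        # <script src="..."> tags mark code loaded from somewhere else.
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

url = "https://example.com/"  # placeholder; substitute any news or commerce page
page = urlopen(url).read().decode("utf-8", errors="replace")
parser = ScriptLister()
parser.feed(page)
for src in parser.sources:
    print(src)
```

Run it against a typical commercial site and the list runs to dozens of entries from ad networks, analytics services, and social widgets, any one of which can introduce a vulnerability the site’s owner never wrote.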

Hackers are relentlessly attacking almost every part of this ecosystem, and policing it is difficult at best. As Bruce Schneier, a security expert (and friend), observes: “Attack is much easier than defense. There are a bunch of reasons for this, but the basic reason is the complexity of Internet systems. It’s easier for attackers to find one vulnerability in your defenses than it is for you to find every one and then secure them.”

So what can we do? In a deeply reported New Republic series on software liability last year, Jane Chong called for much tighter rules governing software security. “To put it simply, the laws on the books must change—or the quality of our software will not,” she wrote.

Sounds right at first glance. But if the government did make vendors liable—and it would take a law, since we have to assume the industry won’t do this on its own—there would be downsides. For one thing, we’d likely end up with products that were even more locked down than they already are. That would mean much less opportunity for users to add software and configure their computers as they want. If you want all computing to be under other people’s control, this is a way to do it—and censorship and surveillance can come under the guise of security, too. (That’s why civil liberties advocates have repeatedly opposed various incarnations of the Cyber Intelligence Sharing and Protection Act.)

New legal policy would be difficult to put into practice, not to mention its wide-ranging ramifications for a free and open Internet. Yet how can we just let the industry get away with selling products riddled with vulnerabilities? The consequences we know about—incessant exploits and the reality that our data live in a Wild West where just about anything goes—are grim enough to at least ask the question. And the consequences are even grimmer when we realize that reprogrammable software is becoming an increasingly significant component of more and more of what we buy and use every day.

Your car is a computer network on wheels. An airplane is a network that flies. Soon enough, given our race to create an Internet of Things—a world where almost every object has intelligence, memory, and a connection to other things and us—the complexity will reach staggering levels. We are not even close to ready for this from a security and privacy standpoint.

Are we going to let the software industry’s caveat emptor standard, however realistic and perhaps unavoidable, infect everything else we touch? Would that be better, or worse, for society and the economy? Maybe the marketplace would solve it, but call me skeptical on that one.

I don’t have a good answer to these questions. But we’d better start taking all of this more seriously.

At the very least, as Schneier observes, we increasingly have to put some trust in third parties—and do our best to secure what we can. But we can’t assume anything we say or do in a digital world is safe—which means we have to be a lot more careful about what we say and do, not just where we send and store it.

You can be sure that senior executives at Sony Pictures Entertainment, which apparently had a fat budget for executive salaries and a thin one for information security, are wishing they’d heeded that advice a long time ago.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.