Future Tense

Code Is Law

But law is increasingly determining the ethics of code.



In late July 2014, the information security world was on edge. Researchers from Carnegie Mellon University—who work “closely with the (US) Department of Homeland Security”—were scheduled to give a talk at the Black Hat USA information security conference on a simple method to “de-anonymize” Tor users. Many were skeptical. Tor, after all, was a respected and widely used tool for online anonymity, employed by activists, dissidents, journalists, and yes, criminals too, to cloak their activities from the prying eyes of state authorities at home and abroad; even Edward Snowden trusted its protection. The idea that there was an undisclosed vulnerability that could be exploited “on a budget” to cheaply and easily unveil the identity of Tor users was difficult to believe. And yet the security researchers in question, from the Computer Emergency Response Team, or CERT, unit of the CMU Software Engineering Institute, seemed credible. So people withheld judgment and waited for the talk.

But the talk never happened. It was pulled from the conference program at the last minute, with the CMU researchers, as reported in the Washington Post, claiming the materials they planned to present had “not yet been approved by CMU/SEI for public release.” There was plenty of speculation as to the reason for the cancellation, with some suggesting a national security letter from a federal agency, while others argued that CMU lawyers, likely concerned about the legality of some aspects of the research, killed the talk to avoid potential liabilities. The cancellation also led commentators to raise important ethical questions about the CMU research—had users’ privacy been violated or laws broken? Were the identities of Tor users harvested without their consent? Was CMU’s Institutional Review Board—the body responsible for overseeing ethical approval for research—properly consulted? None of these questions has yet been fully debated or answered, and they may never be. All we can say for sure is that the cancellation notice sent to the Black Hat USA conference came from CMU’s legal counsel. The law had foreclosed any ethics debate. It wasn’t always like this.

Perhaps the most contentious ethics debate in the infosec community took place in the late 1990s and early 2000s, and it occurred beyond, and sometimes in spite of, any relevant law. That debate was prompted by the antisecurity (or “antisec”) movement and concerned the ethics of “full disclosure”; that is, the infosec industry practice—the industry norm at the time—of fully disclosing security vulnerabilities in various online security forums, justified as the best means to force, or shame, vendors into patching those security holes. Full disclosure was itself a product of “frustration” with an earlier and much criticized CERT-based disclosure process, wherein “bugs” were reported to CERT but kept secret until patched, with vendors often dragging their feet or simply not bothering to patch them at all. Full disclosure, so the argument went, created public pressure to encourage vendors to patch vulnerabilities and to do so quickly.

The hackers in the antisec movement disagreed. They strongly opposed full disclosure and targeted high-profile infosec industry figures aligned with such disclosure practices—like OpenBSD’s Theo de Raadt or Aleph1 of SecurityFocus—with hacks to make their point. Now, to be clear, antisec, particularly its more “violent incarnations” like Pr0j3kt M4yh3m and Phrack High Council, was prone to trolling and exaggeration, and was often unnecessarily offensive, but at bottom there remained an important ethos to the antisec movement: It took aim at the commercialization and greed it believed was overtaking the infosec community, and it was not alone; many in the broader community shared that sentiment. Full disclosure, antisec advocates believed, had nothing to do with security and everything to do with certain infosec practitioners building their public profiles by publishing bugs and exploits to curry favor with corporate interests and secure lucrative security jobs. For antisec, full disclosure was not only a betrayal of the hacker underground but also deeply irresponsible security-wise—because even with public disclosure vendors were still slow to patch, leaving any “script kiddie” with an Internet connection free to wreak havoc with published exploit code.

Whatever your views on its modus operandi, the antisec movement did provoke a broader debate over security vulnerability disclosure practices, with far-reaching implications. Disclosure practices ultimately evolved, and “responsible disclosure” is now the norm: Researchers work behind the scenes with vendors to protect end users, but usually with a fixed deadline for publication to incentivize bug fixing. Applied properly, it balances incentives for vendors to act while avoiding both the problems of what Bruce Schneier calls “bug secrecy” (personified by the CERT reporting system) and the dubious ethics (and broader insecurity) of full vulnerability disclosure that the antisec movement criticized. Here, a broad and contentious debate within a research community led to better ethics and security practices in the wider industry.

But plenty has changed since the days of Pr0j3kt M4yh3m—most importantly, the legal landscape. Expansive laws like the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act, coupled with aggressive enforcement by state authorities and corporate interests, have subjected an increasing array of online activities to criminal and civil penalties. What was once considered “full disclosure” may today constitute a criminal act under the CFAA or DMCA. The Tor de-anonymization talk, which might once have led to a much needed infosec community debate about research ethics and the security and dignity of users, was cut off by lawyers and legal concerns. Similar problems are arising in data research beyond information security. The discussion of the controversial Facebook “contagion” study, for example, was arguably also dominated by lawyers, with concerns about the study’s legality potentially deterring similar research, or at least its publication, in the future. A “destructive silence” from social computing researchers and data scientists on the broader social, technological, and ethical implications of the Facebook study was filled by lawyers and legal questions.

“Code is law,” the aphorism Larry Lessig popularized, spoke to the importance of computer code as a central regulating force in the Internet age. That remains true, but today overreaching laws are also increasingly subjugating important social and ethics questions raised by code to the domain of law. Those laws—like the CFAA and DMCA—need to be curtailed, or their zealous enforcement reined in; they deter not only legitimate research but also important related social and ethics questions. But researchers must act too. The infosec community, and research communities like it, must not fall silent in the face of legal threats nor tolerate research censorship, as happened with the Tor de-anonymization talk. The point is not that researchers must launch some divisive “project” or movement within this or that discipline; only that they need, at the very least, to reassert control over the social, legal, and ethical direction of their fields. Otherwise, law will increasingly determine the direction of data science and the ethics of code.

This essay originally appeared in Internet Monitor 2014: Reflections on the Digital World, published by the Internet Monitor project at Harvard’s Berkman Center for Internet & Society. It is licensed under a Creative Commons Attribution 3.0 Unported license.

Future Tense is a collaboration among Arizona State University, New America, and Slate.