Technology

No Place to Hack  

Why Aaron Swartz profoundly misjudged MIT, an institution fundamentally inhospitable to Free Culture. 


Excerpted from The Idealist: Aaron Swartz and the Rise of Free Culture on the Internet by Justin Peters. Out now from Scribner.

By the time he snuck into an MIT basement and downloaded millions of documents from the academic database JSTOR, an act that led to his aggressive indictment by the Department of Justice and, in turn, to his tragic suicide in 2013, Aaron Swartz had been living in Cambridge, Massachusetts, for more than two years. “Cambridge is the only place that’s ever felt like home,” he wrote on his blog upon his departure from San Francisco in 2008.

Although Aaron Swartz was never formally enrolled in or employed by MIT, he was nevertheless a member of the broader community there. Officials recalled that Swartz had been “a member of MIT’s Free Culture Group, a regular visitor at MIT’s Student Information Processing Board, and an active participant in the annual MIT International Puzzle Mystery Hunt Competition.” Mystery Hunt is a puzzle-solving contest that is half scavenger hunt, half Mensa entrance exam. The annual event attracts participants from around the world, many of them grown adults unaffiliated with MIT. Teams spend the weekend of the hunt running around the MIT campus solving a series of difficult puzzles, occasionally sneaking into rooms and campus locations that are technically off-limits.

In Cambridge, Swartz started to treat his own life as a puzzle to be solved. He designed various lifestyle experiments to optimize his efficiency and happiness. He dabbled in creative sleep schedules, then abandoned the experiment when he found it only made him tired. In the spring of 2009, he spent a month away from computers and the Internet for the first time in his adult life. His laptop had become “a beckoning world of IMs to friends, brain-gelatinizing television shows, and an endless pile of emails to answer. It’s like a constant stream of depression,” he wrote. “I want to be human again. Even if that means isolating myself from the rest of you humans.”

He spent June offline, an experience he later described as revelatory. “I am not happy. I used to think of myself as just an unhappy person: a misanthrope, prone to mood swings and eating binges, who spends his days moping around the house in his pajamas, too shy and sad to step outside. But that’s not how I was offline,” Swartz wrote, recounting how he had come to enjoy simple human pleasures such as shaving and exercising in the absence of perpetual connectivity. “Normal days weren’t painful anymore. I didn’t spend them filled with worry, like before. Offline, I felt solid and composed. Online, I feel like my brain wants to run off in a million different directions, even when I try to point it forward.” Swartz vowed to find ways to sustain this serenity and thenceforth tried to make his apartment a computer-free zone. But he would never be able to avoid the Internet entirely.

In 2010, Swartz was named a fellow at the Edmond J. Safra Center for Ethics at Harvard University. Lawrence Lessig, who had also returned to Cambridge from Palo Alto, California, brought him aboard. Lessig was supervising a Safra Center program that examined institutional corruption and its effect on public life. The fellowship was well suited for Swartz: questions of personal and institutional ethics had obsessed him for years, and as he aged they became perhaps his chief concern.

“It seems impossible to be moral. Not only does everything I do cause great harm, but so does everything I don’t do. Standard accounts of morality assume that it’s difficult, but attainable: don’t lie, don’t cheat, don’t steal. But it seems like living a moral life isn’t even possible,” Swartz declared in August 2009.  The next month, he extrapolated from this line of thought:

The conclusion is inescapable: we must live our lives to promote the most overall good. And that would seem to mean helping those most in want—the world’s poorest people.

Our rule demands one do everything they can to help the poorest—not just spending one’s wealth and selling one’s possessions, but breaking the law if that will help. I have friends who, to save money, break into buildings on the MIT campus to steal food and drink and naps and showers. They use the money they save to promote the public good. It seems like these criminals, not the average workaday law-abiding citizen, should be our moral exemplars.

This section ignited a debate in the comments section of Swartz’s blog. Readers chided Swartz for sanctioning the theft of services from MIT. The next day, in a blog post titled “Honest Theft,” Swartz defended his position: “There’s the obvious argument that by taking these things without paying, they’re actually passing on their costs to the rest of the MIT community.” But perhaps that wasn’t as bad as it seemed, since “MIT receives enormous sums from the wealthy and powerful, more than they know how to spend.” Other readers argued that the freeloaders’ actions just forced MIT to spend more money on security. “I don’t see how that’s true unless the students get caught,” Swartz responded. “Even if they did, MIT has a notoriously relaxed security policy, so they likely wouldn’t get in too much trouble and MIT probably wouldn’t do anything to up their security.”

Swartz had good reason to think this way. MIT was the birthplace of the hacker ethic. The university tacitly encourages the pranks and exploits of its students; stories abound of clever undergraduates breaking into classrooms, crawling through air ducts, or otherwise evading security measures for various esoteric and delightful reasons, and these antics have been cataloged in museum exhibits and coffee-table books. By officially celebrating these pranks, MIT sends the message that it is an open society, a place where students are encouraged to pursue all sorts of creative projects, even ones that break the rules.

MIT’s public reputation for openness extends to the wider world. The front doors to its main building on Massachusetts Avenue, the imposing Building 7, are always unlocked. For years, local drama groups conducted impromptu rehearsals in vacant MIT classrooms. In 2010, any stranger could show up to MIT, unfold a laptop computer, connect to its wireless Internet network, and retain the connection for a full two weeks; guests were even allowed to access MIT’s library resources.

As a Safra Center fellow, Swartz had access to JSTOR via Harvard’s library. So why did he choose to deploy his crawler at MIT, a school with which he was not formally affiliated? One possible reason is that computer-aided bulk downloading violated JSTOR’s stated terms of service, and Swartz may therefore have preferred to remain anonymous. MIT might even have seemed to him like the sort of place that would be unbothered by, and possibly encourage, his actions. But Swartz would soon realize that MIT’s public image did not align with reality.

The Massachusetts Institute of Technology is not configured to address the sorts of philosophical and ethical questions that one might expect a great university to address, in part because it is not and has never been a university. It is, rather, a technical institute. (This distinction might seem like a minor semantic point, but our self-descriptions set implicit boundaries that we are often loath to cross.) Ever since its establishment in 1861, the institute has trained engineers and pursued practical applications for existing and emerging technologies. A center for applied thinking and science, it specializes in the practical rather than the philosophical. It is, as the scholar Joss Winn once put it, “the model capitalist university.”

Former MIT engineering dean Vannevar Bush served as President Franklin Roosevelt’s science adviser during World War II and directed large amounts of money toward university laboratories in an effort to develop technologies that could aid the war effort. As Bush later noted, “World War II was the first war in human history to be affected decisively by weapons unknown at the outbreak of hostilities,” which “demanded a closer linkage among military men, scientists, and industrialists than had ever before been required.” That linkage was particularly strong at MIT, where researchers at the school’s Radiation Laboratory developed microwave radar systems for the U.S. military and “practically every member of the MIT Physics Department was involved in some form of war work,” as the department itself has stated. Academic science helped the Allies win the war—and the war helped the Allies win over academic science.

After the war, Bush and others recommended the creation of a federal body that would supervise and direct the collaboration between academia and the government. Government support would revolutionize American academic scientific research, argued Bush in his report Science—The Endless Frontier, ushering in a new golden age of pure science and, eventually, “productive advance.” Measures would be taken to ensure the separation of scientist and state and encourage a research environment that was “relatively free from the adverse pressure of convention, prejudice, or commercial necessity.” Federal funding would, in fact, save the scientist from the indignities of industrial research. The American people would benefit from the partnership, and eventually so would the world.

Bush’s benign vision for the future of academic science in the United States translated into a boon for MIT and other domestic research universities. Expansive federal support for scientific research made it easier for scientists to fund expensive experiments that had no immediate practical applications. Like privately held companies that decide to go public to fund growth and expansion, universities reaped immediate benefits from government partnerships—more money meant more hires, new facilities, and increased prestige—but they also ceded some control over their institutional priorities. In his great book The Cold War and American Science, the Johns Hopkins professor Stuart W. Leslie observed that the long-term effects of military-funded university research cannot be measured in merely economic terms, but must also be measured “in terms of our scientific community’s diminished capacity to comprehend and manipulate the world for other than military ends.”

Today, MIT’s own website proudly announces that it “ranks first in industry-financed research and development expenditures among all universities and colleges without a medical school.” In her fascinating 2002 dissertation, “Flux and Flexibility,” the MIT doctoral student Sachi Hatakenaka traced the school’s modern-day corporate partnerships and their effect on institutional structure and priorities. Though MIT has long been a cheerful collaborator with industry, the practice has expanded over the past 40 years; Hatakenaka reported that industrial research funding at MIT jumped from $1,994,000 per year in 1970 to $74,405,000 per year in 1999.

In her dissertation, Hatakenaka carefully outlined all of the boundaries that MIT has erected to ensure both that corporate sponsors have no direct control over specific research projects and that MIT researchers can maintain their scholarly independence. But the institute’s increasing reliance on corporate contributions perforce affects the administration’s attitude toward the free market, and toward anything that might jeopardize its profitable partnerships.

The federal support that Vannevar Bush believed would free academic scientists from the need to collaborate with industry has ended up pushing them more firmly into industry’s embrace. Initially, the federal government retained title to all of the scientific research that it funded. For example, if the government gave a university lab a grant to study computing, and the laboratory used that grant money to develop a new type of microprocessor, then the government owned the rights to that microprocessor and could license those rights to private industry. This setup changed in 1980, when President Jimmy Carter signed into law the Bayh-Dole Act, which effectively privatized the fruits of publicly funded research.

Bayh-Dole was meant to address the perceived technology gap between Japan and the United States around the time of its passage, and to shrink the lag between when a useful technology was developed and when it was brought to market. It decreed that, henceforth, domestic universities would retain the rights to the results of federally funded research and could patent those inventions, license them to industry, and reap the resultant profits.

After Bayh-Dole became law, universities began to establish what they referred to as “technology transfer offices”: administrative divisions that existed to facilitate patent licensing and to liaise between academic researchers and corporate customers. The act allowed the fruits of academic research to be harvested and sold with unprecedented ease and rapidity, and the ensuing licensing fees made many universities wealthy. When the sale or rental of intellectual property becomes a university profit center, research outcomes inevitably become a proprietary concern. “Far from being independent watchdogs capable of dispassionate inquiry,” wrote Jennifer Washburn in her sobering book University, Inc., “universities are increasingly joined at the hip to the very market forces the public has entrusted them to check, creating problems that extend far beyond the research lab.”

The hacker ethic was, in a sense, a critique of applied, corporate science in the university. But the hackers gradually left MIT, and the school’s center for innovative computing shifted from the AI Lab to the Media Lab, the employer of Aaron Swartz’s father, Robert: a loose affiliation of varied research groups, funded by a large array of corporate sponsors, that specialized in applied consumer technologies.

The unlocked campus, the student pranks, the accessible computer network: these are the public trappings of openness, vestiges of an era when MIT did perhaps take that openness seriously, or else decoys to deflect attention from the fact that it had never really done so. In 2002, when he was still 15, Swartz traveled to MIT to speak about the Semantic Web. At that time, MIT’s wireless Internet network did not allow guest access, and Swartz had to use a public computer terminal to get online. Unfortunately, the only non-password-protected terminals he could locate were behind a locked door. “I joked that I should crawl thru the airvent and ‘liberate’ the terminals, as in MIT’s hacker days of yore,” Swartz recalled on his blog. “A professor of physics who overheard me said ‘Hackers? We don’t have any of those at MIT!’”

At the beginning of every year, Aaron Swartz would post to his blog an annotated list of the books he had read over the previous 12 months. His list for 2011 included 70 books, 12 of which he identified as “so great my heart leaps at the chance to tell you about them even now.” The list included Franz Kafka’s The Trial. Swartz cited its lesson that there’s no beating a bureaucracy through official channels and that unexpected stratagems are the only way to get what you want in such a setting. Swartz observed that K., Kafka’s protagonist, “takes the lesson to heart and decides to stop fighting the system and just live his life without asking for permission.” Swartz had come to the same conclusion and had lived that way for a while, too. Engaging with bureaucracies on their own terms gets you nowhere—the best course is to disregard their rules and follow a different path.

But left unmentioned in Swartz’s post was how The Trial ends. K. is visited by two pale, silent gentlemen, clad in black, who link arms with him and march him out of his house, through the town, and to a desolate quarry. “Was he alone?” K. wonders. Then the two government agents unsheathe a butcher’s knife, grab K. by the throat, and stab him through the heart.