Facebook’s New “Manifesto” Is Political. Mark Zuckerberg Just Won’t Admit It.
What is Facebook? It’s a question that seems like it ought to become easier to answer over time. Instead, it has become more difficult. CEO Mark Zuckerberg’s latest and most lengthy attempt to define the service that he created—a 6,000-word open letter to the Facebook “community”—instead underscores just how murky its purpose and agenda have become.
When Zuckerberg founded Facebook in 2004, the concept was simple. It was a social network in the classic sense: a place to post a personal profile, connect to other people’s profiles, and interact with them online. Over the years, the news feed evolved from metaphorical to literal as Facebook became the dominant online platform for reading and sharing news, opinion, and other content from around the wider web. It turned out that online friend networks were a singularly potent way for ideas to spread.
The downsides of this arrangement emerged more gradually. But the 2016 U.S. presidential election, in which Facebook served as a vector for fake news and sensationalism and a force for ideological polarization, helped to distill them. That, and a series of more or less related controversies in the United States and elsewhere, prompted Zuckerberg to embark on a countrywide listening tour that many interpreted as a signal that he planned to one day run for office.
More likely, it was a signal that he’s rethinking Facebook’s role in society. On Thursday, that soul-searching bore fruit in the form of a lengthy and high-flown open letter addressed to “our community.” The letter’s title: “Building Global Community.” Yes, the word community features prominently throughout, as does the phrase “social infrastructure,” which I’ll get to in a moment. As The Verge’s Casey Newton points out, the sprawling epistle amounts to a new manifesto for Facebook, modifying and complicating its well-known mission to “make the world more open and connected.”
Read the letter and you’ll see why critics have accused it of using a lot of words while saying rather little. Jargon is rife; specifics are scarce. Still, a careful read reveals a lot about how Zuckerberg views the site he created and how he hopes it will evolve. It’s been obvious for some time now that Facebook is much more than a social network. But just what it is—a media company? A tech company? A utility?—has become a matter of debate.
Zuckerberg’s letter tries to settle that question in many ways, but perhaps most directly through his persistent use of the phrase “social infrastructure.” The sole sentence that appears entirely in bold is this one (italics mine):
In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.
Zuckerberg never explicitly defines “social infrastructure,” despite deploying the phrase 15 times—an oversight typical of his unrigorous armchair philosophizing. You could be forgiven for concluding that it doesn’t really mean anything, except that Facebook has found a clever new way to answer the question of what it is without making any particular commitments. But we can glean a little more than that if we try.
By choosing “infrastructure” as the noun, Zuckerberg shows that he still views Facebook as a technology—a set of tools—rather than a media company responsible for creating or curating published content. This is consistent with Facebook’s past attempts to define itself, including that widely mocked 2012 TV ad in which the company compared itself to chairs. It implies that Facebook will continue to shun political controversy and disclaim responsibility for the content that its users post and share, at least in cases where doing so would run counter to the company’s interests.
On the other hand, Zuckerberg’s manifesto makes it clear that he no longer views Facebook as fully neutral. He recognizes—at last?—that his technology molds how its billion-plus users read, communicate, organize themselves, and form ideas about themselves and the world. And he no longer views openness and connectedness as ends in themselves.
“Social infrastructure,” then, must mean something different from mere “communications infrastructure.” It means that Facebook sees its purpose as facilitating certain kinds of interactions and social arrangements. Namely, Zuckerberg sees Facebook’s purpose as building communities with five particular characteristics: supportive, safe, informed, civically engaged, and inclusive.
Zuckerberg also states repeatedly that he wants to build a community that is “global.” As the New York Times’ Mike Isaac notes, this places Facebook in surprisingly explicit opposition to the tides of nationalism and isolationism that have swept the likes of Donald Trump and Theresa May into power.
That pointed exception aside, Zuckerberg’s goals for Facebook’s communities sound relatively anodyne. Who but a misanthrope could dispute the value of safety, civic engagement, or inclusion?
The rub, however, is essentially the same as it has always been with Zuckerberg: He’s trying to have it both ways. He wants Facebook to play a more active role in shaping conversations and communities for the better. Yet he still wants to call it infrastructure.
Sure, infrastructure can shape conversations and communities. Highways, irrigation systems, cellular data service: They all affect how societies organize themselves, in one way or another. But the key questions of how and where to deploy infrastructure, for whose benefit and at what cost, are inherently political. A highway that makes some people feel included can leave others feeling isolated. An irrigation system can secure one community’s water supply while depriving another. And a Facebook post that makes some people feel civically engaged could make others feel unsafe. In all cases, there are tradeoffs—a concept Facebook and Zuckerberg have always been loath to acknowledge.
The manifesto does provide some clues as to how Facebook will adjudicate conflicting claims among its users. It states that Facebook will “try to reflect the cultural norms of our community,” while also giving individual users the power to control their own experience, where possible. Those are both fine principles. But implementing them will necessarily involve a level of political decision-making on Facebook’s part that Zuckerberg’s letter does not fully reckon with.
Making the world more open and connected was a radical idea, and a substantive one—and, ultimately, a dangerous one. On one level, it implied a strong bias toward freedom of expression and against provincialism and repression. At the same time, it implied that it was none of Facebook’s business exactly what kinds of ideas and speech flourished on its platform. Paradoxically, then, a bias toward freedom of expression could conceivably end up empowering those who would quash it.
Facebook’s new manifesto is either far less radical or far less substantive than its famous mission statement, depending on how you read it—but not necessarily any less dangerous. If, as some critics allege, it’s largely pablum designed to appease the company’s collective conscience without committing it to any particular ideals, then it’s less substantive. On the other hand, if it’s really about strengthening distinct communities and upholding existing cultural norms, it’s substantive, but also deeply conservative. It implies, for instance, a bias in favor of free speech and against repression only in cultural contexts where those biases are already present. That’s not making the world a better place. It’s entrenching the status quo.
To be clear: Zuckerberg and Facebook thinking seriously about their impact on the world is a good thing. Technology is at its most dangerous when it is created thoughtlessly, for its own sake or for the sake of profit. It’s at its most manageable when its political agenda is transparent and explicit, because then it can be openly debated, supported, or opposed. Zuckerberg’s manifesto makes it clear that he does care about Facebook’s role in society. Yet as a statement of values, it is compromised by the undefined jargon, the unacknowledged conflicts, and the uncritical optimism about Facebook’s ability to meet the needs and desires of all of its users at once.
I’ve argued before, in a different context, that Facebook is biased, and it needs to admit it. This manifesto might seem like a step in that direction. But it doesn’t go nearly far enough.
Netizen Report: In Kenya and Mexico, Citizens Suspect State Manipulation on Twitter
The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in internet rights around the world. It originally appears each week on Global Voices Advocacy. Afef Abrougui, Mahsa Alimardani, Ellery Roberts Biddle, Marianne Diaz, Leila Nachawati Rego, Njeri Wangari, and Sarah Myers West contributed to this report.
A social media tug-of-war has emerged in the face of a nationwide strike by Kenyan doctors protesting the government’s failure to honor their collective bargaining agreement. The strike has brought the public health care system to a halt and has stoked public mistrust of the Uhuru government, particularly following allegations of millions of dollars having gone missing from the Ministry of Health.
While doctors have garnered substantial public support in their demands, there has also been a spate of social media messages maligning doctors. Local bloggers have identified a strong overlap between Twitter accounts propagating hashtags such as #GreedyDoctors, #MyBadDoctorExperience, and #DaktariRudiKazi (“Doctors, go back to work”) and those promoting other pro-government messages. Social media experts believe the messages are not being circulated by regular citizens, but rather by government-paid “social media influencers.” Some have suggested links between these accounts and widespread reports of a group of 36 social media influencers purportedly hired by the Presidential Strategic Communications Unit to change online narratives critical of the Kenyan government.
In recent days, #GreedyDoctors and similar hashtags have been overwhelmed by Twitter users promoting the implementation of the collective bargaining agreement, adding to their tweets the hashtag #implementCBA.
Kenya is not alone in this phenomenon. Alongside countries with long-standing practices of promoting state interests via social media commenters, such as China and Venezuela, Mexico appears to have joined these ranks with various recent pro-government campaigns online.
Most recently, after January’s gas price hikes triggered public protests on major roadways and online, a select set of Twitter accounts began promoting illegal activities such as looting and theft, in what appeared to be an effort to influence conversations and delegitimize the protests. Most commonly, they inserted the hashtag #SaqueaUnWalmart (“loot a Walmart”) into conversations bearing the #gasolinazo hashtag, which was widely used by protesters. These accounts also propagated images of people rioting, which turned out to be false. (The photos actually depicted street riots in Egypt in 2011.)
By visualizing data from more than 15,000 tweets associated with the protests, data scientists at the Jesuit University of Guadalajara observed that the #SaqueaUnWalmart hashtag interrupted the flow of conversations, seeking to associate #Gasolinazo with malicious intentions. Some of the accounts involved in these campaigns have been identified as bots or trolls who had already been linked to harassment and threats against journalists and social activists.
These observations, along with recent allegations of spyware used against researchers and public servants promoting a tax on soda (reported by the New York Times and analyzed by Citizen Lab) suggest an increasingly threatening environment for citizens seeking to advocate and express their views on matters of public interest in Mexico.
Future Tense—a partnership of Slate, New America, and Arizona State University—will hold an event in Mexico City on Feb. 23 about the online balance of power between the people and the state. For more information, visit the New America website.
Venezuela blocks more news websites, including CNN
The Spanish-language version of CNN and its corresponding website were blocked in Venezuela on Feb. 15, after the network reported on passport fraud allegations. CNN is not alone—the Mexico-based TV Azteca was also taken off the air on Feb. 16. Since Feb. 7, the Venezuelan news and public opinion website Maduradas has been inaccessible on a majority of internet service providers (including government-controlled CANTV) in seven provinces. The site is known for its summaries of online responses to issues of public interest. In a public statement, Venezuelan President Nicolas Maduro called CNN an “instrument of war.”
Phishing attacks in Qatar target migrant rights advocates
Researchers at Amnesty International uncovered a wave of sophisticated phishing attacks aimed at spying on the activity of journalists, trade unions, and labor activists advocating for the rights of migrant workers in Qatar, a large proportion of whom come from Nepal. The campaign was likely orchestrated by a state-affiliated actor, although there is no evidence at the moment to conclusively identify who was behind the attacks. The attacks invited targets to open links to what appeared to be files in Google Drive and Google Hangouts but actually led to spyware.
Thai draft law would hand media control to government
Media organizations in Thailand are warning that draft legislation could lead to complete government control over the press. The curiously named “Protection of Media Rights and Freedom, Ethics and Professional Standards” bill would require journalists to obtain licenses in order to do their work. It would also create a National Professional Media Council that would be staffed primarily with representatives from government ministries. According to Chakkrit Permpool, the former chair of the National Press Council of Thailand, “This kind of thing exists only under dictatorship governments. This is against the new constitution … that ensures media freedom and people’s freedom of expression.” More than 30 media groups have signed a statement rejecting the bill.
Facebook plans to fight fake news in France
Facebook announced plans to combat the spread of fake news in the lead-up to French elections in April and May by launching a new partnership with eight media organizations that will fact-check and filter news articles flagged by Facebook users. But some worry that Facebook’s reliance on already-stretched newsroom resources will not be sustainable.
Tech activists plan a “Distributed Denial of Women”
On Feb. 23, tech companies and organizations will face a Distributed Denial of Women, a general strike to show how important women and gender non-binary people are to the tech industry. In support of this action, the Association for Progressive Communications’ Take Back the Tech campaign is collecting stories about discrimination and gender in the technology industry and community. Learn more here.
“Battle of the Hashtags: Mapping the Online Conversation Surrounding Mexico’s Gas Prices”—Signa Lab, Jesuit University of Guadalajara
NASA’s “Space Poop Challenge” Aims to Spare Space-Walking Astronauts the Indignity of Diapers
NASA has turned to crowdsourcing to solve one of space flight’s thorniest problems: astronaut poop. Some of the world’s top scientific minds piped up with ideas, and on Wednesday, the victor of the agency’s “Space Poop Challenge” found himself flush with cash—$15,000, to be exact.
While the bathroom situation on the space station is pretty OK, it’s another story when nature calls during a space walk. The competition, which ran from October to December 2016, called on people to devise a more hygienic and comfortable system for managing human waste—including feces, urine, and menstrual fluid—for six days. As the competition website explained: “An astronaut might find themselves in this suit for up to 10 hours at a time nominally for launch or landing, or up to 6 days if something catastrophic happens while in space.” The need for long-term waste solutions may become more urgent as NASA considers missions farther from Earth, which will increase the risk of emergencies stranding astronauts for longer periods in their suits.
Future Tense Newsletter: Does Your Internet Service Provider Know Too Much About You?
Greetings, Future Tensers,
In just the first few weeks of the new administration, we have seen significant changes to policy. But here’s one you might have missed: Rep. Frank Pallone Jr. and Terrell McSweeny of the Federal Trade Commission write in Future Tense that basic consumer-privacy rules enacted by the Federal Communications Commission just a few months ago are already under threat. They write, “Broadband providers potentially have access to every bit of data that flows from a consumer. That type of access demands a set of rules that matches the long held expectations of Americans—that we should have the freedom to control access to the most sensitive information about our daily lives.”
If this has you thinking about your own privacy, we do have some good news. As part of our Futurography project, February is “Cybersecurity Self-Defense” month at Future Tense. We’ve explained what threats to protect yourself against, discussed how protecting yourself online serves the greater good, and presented tutorials on how to set up Signal private messenger, how to use a password manager, and how to turn on two-factor authentication. We’ve got more tips coming in the next two weeks, too.
Here are some of the articles we read this week while imagining what GOP senior officials are discussing on an encrypted messaging app:
- Wikipedia’s Daily Mail ban: Will Oremus writes about how Wikipedia editors’ decision to “generally prohibit” the use of the Daily Mail as a source in their encyclopedia articles could lead to greater awareness about unreliable news sources.
- The Trump donors like best: Katy Waldman unpacks a memo from the Republican National Committee to see what portrayals of Trump on his campaign website were most conducive to inveigling users into making donations.
- Asking visa applicants for passwords: Last week Secretary of Homeland Security John Kelly told Congress that the Department of Homeland Security is considering the possibility of asking visa applicants for full access to their online accounts. Laura Moy explains the real problems this would create for free expression.
- The Spawn of Frankenstein: Did you miss our event on the legacy of Mary Shelley’s Frankenstein? Rachelle Hampton covered the highlights in her recap. You can also watch video of the event here.
- You may not be high on Putin’s to-hack list, but there are still good reasons to protect yourself online. And there is no better place to start than a Future Tense happy hour. Bring your devices and join us for drinks and demos on Feb. 16—yes, tomorrow—in Washington for a Cybersecurity Self-Defense Class in which experts will teach you how to use a virtual private network, cover your digital tracks, use secure communications platforms, and more. RSVP to attend in person or watch online here.
- Join Arizona State University and Future Tense in Mexico City on Feb. 23 to consider how technology will continue to alter the balance of power between individuals and the state. RSVP to attend the event in person here.
For Future Tense
The Real Problem With Asking Visa Applicants to Hand Over Their Social Media Passwords
On Tuesday, Secretary of Homeland Security John Kelly told Congress that the Department of Homeland Security is exploring the possibility of asking visa applicants not only for an accounting of what they do online, but for full access to their online accounts. In a hearing in the House of Representatives, Kelly said:
We want to say for instance, “What sites do you visit? And give us your passwords.” So that we can see what they do on the internet. And this might be a week, might be a month. They may wait some time for us to vet. If they don't want to give us that information then they don't come. We may look at their—we want to get on their social media with passwords. What do you do? What do you say? If they don't want to cooperate, then they don't come in.
This is alarming to many. As TechCrunch’s Devin Coldewey pointed out, asking people to surrender passwords would raise “obvious” privacy and security problems. But beyond privacy and security, the proposed probing of online accounts—including social media and other communications platforms—would, if implemented, be a major threat to free expression.
It is already clear that Trump is no great fan of free expression. On the campaign trail, Trump suggested talking to Bill Gates and others “about, maybe in certain areas, closing that internet up in some ways” as a way to combat ISIS. He then openly mocked those who would protest on behalf of free speech. Later, he promised to “open up” libel laws as a means of silencing media organizations. In November, one of his first public moves as president-elect was to propose “loss of citizenship” as a potential punishment for flag burning (an expressive act that the Supreme Court has found to be fully protected by the First Amendment). Not to mention his vow to wield the power of the federal government against practitioners of an entire religion, and his arguably unconstitutional executive order banning immigration from seven majority-Muslim countries.
Government surveillance is always a threat to free expression. That’s because surveillance encourages people to censor what they say. And this threat does not fall equally on all—our nation has a long history of disproportionately subjecting people of color and other marginalized communities to government surveillance.
Even when surveillance tactics monitor all individuals equally, social science research suggests that awareness of surveillance leads to self-censorship disproportionately among those who are racial or ethnic minorities or who hold minority viewpoints. Elizabeth Stoycheff, the author of a study exploring “how perceptions and justification of surveillance practices may create a chilling effect on democratic discourse by stifling the expression of minority political views,” explained last year in Future Tense,
I discovered that exposure to the terms of agreement [acknowledging surveillance] dampened individuals’ willingness to express or otherwise support their political views. These effects were found among people who felt they held political opinions different from those of most Americans, among those who thought these programs were necessary for the sake of national security, and in a recent follow-up analysis I conducted, among racial and ethnic minorities. These individuals refrained from expressing opinions that would alienate them from both their fellow citizens and from the government.
And indeed, government surveillance has been used to monitor and chill minority speech and conduct throughout American history. It happened in the 18th century, when “lantern laws” facilitated vigilante surveillance of everything people of color did after dark. It happened after Sept. 11, when agencies ramped up targeted surveillance of Muslims, chilling speech, activism, organizing, and religious conduct. It’s happening right now to activists in the Movement for Black Lives, some of whom have acknowledged that repeated revelations about widespread surveillance of activists have caused them to temper their speech.
So when a Cabinet member starts talking about compelling visa applicants from Muslim-majority countries to let his agency scrutinize both their public social media activities and their private communications, we should be very concerned. Even the threat of increased surveillance could lead to self-censorship—disproportionately of minority viewpoints—impoverishing our public discourse.
GOP Officials Reportedly Using a Snapchat-Esque App for Private Conversations
On Wednesday, Axios reported that, spooked by the Democratic National Committee hack, “numerous senior GOP operatives and several members of the Trump administration” are using Confide, an encrypted messaging app. Confide self-destructs messages once they are read, promising that they will be “gone forever”—or at least wiped from your device and Confide’s servers.
Confide also forces you to drag your finger to read one line of the message at a time, making it difficult to take a single screenshot to use as a receipt—helping to thwart potential leaks. It can integrate with Siri and voice messages on iPhones and Androids, too.
Confide’s co-founder and president, Jon Brod, said in an email that thanks to the news of Confide’s purported GOP fans, this week’s new-user signups were triple what they were last week.
Confide also has politician fans outside of the United States. Australian Prime Minister Malcolm Turnbull, who also got into hot water for having a private email server, uses Confide and Wickr to talk to journalists and colleagues. Although he said he does not use them to communicate classified information, he nevertheless defended their use in a press conference in 2015, saying, “[Y]ou shouldn’t assume that government email services are more secure than private ones.”
Of course, Confide isn’t the only option for today’s privacy-conscious politician. This past May, Democratic National Committee and Hillary Clinton campaign staffers were reportedly ordered to use Signal, an encrypted messaging app that Edward Snowden “uses every day,” after the DNC’s servers were found compromised.
At the White House, all official business correspondence is supposed to take place over White House email for preservation purposes. Of course, since the contents of those encrypted messages were not revealed in the Axios report, it’s unclear whether anyone’s breaking the law.
What is clear is that Confide is another tool giving people back some control over data that would otherwise sit on third-party servers. “Everything we communicate digitally is on various servers that we have no idea about and certainly no control over. And the recipient retains a copy of the correspondence forever. We think this is dangerous,” Brod said about why Confide was created. (Nothing is sacred these days—some enterprising folks uncovered what they believed to be White House press secretary Sean Spicer’s Venmo account. The account’s payments were private, but its friends list was searchable for people to find and troll, as the Who? Weekly podcast pointed out.)
It’s this lack of security and privacy in convenient communication that Confide wants to fix. In a world where private words are screenshotted, reblogged, and retweeted on public stages, Confide wants to give all people, even the Sean Spicers of the world, the face-to-face promise of olden times: We’ll keep this just between us.
Wikipedia’s Daily Mail Ban Is a Welcome Rebuke to Terrible Journalism
The Daily Mail, a leading conservative British tabloid, has a storied history of scaremongering, warmongering, gossipmongering, stereotyping, and even supporting fascism. In the pre-internet era, it was mostly Britain’s problem. But over the past decade, the Mail’s online organ has metastasized into one of the world’s largest news sources, covering dubious, salacious, and sensational stories wherever it finds them—with relatively little original reporting, accountability, or regard for accuracy. Its vast readership reminds us that popularity and credibility don’t necessarily go hand-in-hand, especially on the internet.
On Wednesday, the volunteer editors of Wikipedia took a rare and widely lauded stand: They decided by consensus to “generally prohibit” the Mail’s use as a source in the online encyclopedia’s articles. Here is the notice the editors posted to Wikipedia’s discussion page on identifying reliable sources:
Consensus has determined that the Daily Mail (including its online version, dailymail.co.uk) is generally unreliable, and its use as a reference is to be generally prohibited, especially when other more reliable sources exist. As a result, the Daily Mail should not be used for determining notability, nor should it be used as a source in articles. An edit filter should be put in place going forward to warn editors attempting to use the Daily Mail as a reference.
The move is the culmination of a two-year-long debate among the site’s editors. Those in favor of discouraging Mail citations pointed to its “poor fact checking, sensationalism, and flat-out fabrication,” providing numerous examples of each. Those who opposed blacklisting the paper argued that it is “actually reliable for some subjects” and “may have been more reliable historically.” More persuasively, they pointed out that the internet is full of news sources whose reporting is equally dubious, if not more so, yet few have been the target of a blanket policy. In the end, the consensus decision left the door open to similar prohibitions on other unreliable outlets in the future.
It’s worth clarifying that the stand was taken by Wikipedia’s volunteer editors and not by the nonprofit Wikimedia Foundation, which runs the site but does not directly oversee its content. Still, the foundation told the Guardian the decision was “consistent with how Wikipedia editors evaluate and use media outlets in general—with common sense and caution.”
There are risks involved when an influential platform decides to blacklist certain publications. Read the full discussion page that led to the decision and you’ll see that the people who made it are not necessarily media experts. You don’t need to sympathize with the Daily Mail to worry that Wikipedia’s editors are setting a dangerous precedent by targeting specific news outlets for blanket prohibitions. Bans are binary, whereas journalistic credibility lies on a spectrum.
For an example of how this sort of process can go awry, look at the 2013 decision by moderators of Reddit’s influential /r/politics subreddit to ban links to a wide range of publications, including Mother Jones, Huffington Post, and Gawker. Their alleged offenses: “blogspam,” “sensationalism,” and “bad journalism.” Their flaws aside, however, each of those publications has been responsible for groundbreaking news investigations over the years, along with influential commentary. When my own Slate story about the ban made the subreddit’s front page, the thin-skinned moderators took that down, too. (They later apologized for their handling of the ban.)
That said, the Wikipedia editors’ decision-making process was far more transparent and deliberate, and the rationale seemed to have little to do with any particular political agenda. Their ultimate decision was also somewhat more reasonable than an outright ban: Daily Mail citations may still be permitted on a case-by-case basis, in recognition of the fact that the paper does sometimes conduct original reporting on newsworthy topics that haven’t been better-covered elsewhere. (I’ve also argued that some stodgier media outlets could learn from the Mail editors’ nose for a story, if not from their shameless aggregation practices and cavalier attitude toward science, facts, and general decency.)
All things considered, it’s hard to argue with this decision: It should encourage more careful sourcing across Wikipedia while doubling as a richly deserved rebuke to a publication that represents some of the worst forces in online news. Perhaps it will even help to encourage readers around the world to treat Daily Mail stories with similar circumspection. (Greater awareness about unreliable news sources would be especially welcome stateside, given the current political climate.)
Besides, it’s a great excuse to repost perhaps the most satisfying Daily Mail rebuke of all: Dan & Dan’s timeless “Daily Mail Song.”
Previously in Slate:
The Spawn of Frankenstein: A Future Tense Event Recap
Since Mary Wollstonecraft Shelley published Frankenstein in 1818, the novel has spawned countless derivatives, from films and musicals to Scooby-Doo episodes and sugary breakfast cereals. Almost two centuries later, it could be argued that no other work of literature has done more to shape the way people think about science and its moral consequences. But it’s not clear what lesson, exactly, Shelley hoped to impart to her readers. Is Frankenstein’s monster a cautionary tale of hubris and science gone too far? Or is that too simplistic an interpretation?
On Feb. 2, Future Tense convened experts in Washington, D.C., to discuss the ways in which Victor Frankenstein and his monster influence how we think about research and innovation—and how the novel can help us weigh the benefits of innovation against its unintended consequences.
The event kicked off with a discussion of what it means to “play God,” opened by a presentation from Patric M. Verrone, writer and producer for Futurama, who recalled that his own journey into animation began when he heard a writer say, “When we are creative, we are our most godlike.” Verrone and moderator Ed Finn, director of the Center for Science and the Imagination at Arizona State University, riffed on the idea that in writing Frankenstein, Shelley didn’t just tell a story of creation. She spawned endless creation itself by giving popular culture a trope, a series of gestures (a pulled switch followed by lightning), and language (like portmanteaus involving franken-). “I believe quite firmly that Mary Shelley meant this to be the impact,” Verrone said in his opening remarks:
In Shelley’s mind, Frankenstein was the modern Prometheus: a hip, up-to-date, vital god who chose to create human life and paid the dire consequence. To Shelley, gods create and for humans to do that is bad, bad for others, but especially bad for one’s own creator.
When asked whether the ubiquity of the trope had caused it to lose its moral force, Hugo Award-winner and science fiction writer Nancy Kress responded that it was more relevant now, with the advent of technologies like genetic engineering. “What we have available today are, perhaps, genuine godlike powers,” she said. “There is an enormous potential [in genetic engineering] and an enormous danger, and neither one is particularly well understood, which is also true of Mary Shelley’s monster.”
Josephine Johnston, director of research at the Hastings Center, stressed that bioethics as a field has largely vacated the concept of playing god, as it necessarily relies too heavily on theology. Still, she suggested that other notions wrapped up in the concept are relevant today. If safety were not an issue, if Frankenstein’s monster had the disposition of Pollyanna, what other moral dilemmas does the creation of a human-adjacent creature pose? What does it mean to have complete dominion over an invention that can, at the very least, simulate cognition?
In the second panel, engineer and naval officer Cara LaPointe underscored that point, remarking that the primary question that underlies the trope of playing God—to create or not to create—is largely obsolete. We have already decided to create, as evidenced by artificial intelligence, automation, and genetic engineering. These technologies suggest that perhaps the warning we take from Frankenstein should be about how we create. Susan Tyler Hitchcock, author of Frankenstein: A Cultural History, argued that the lore and culture surrounding the novel had done it a disservice, turning the moral into an admonition against creation rather than a warning against bad science.*
Hitchcock and LaPointe agreed with their fellow panelist Samuel Arbesman, scientist-in-residence at Lux Capital, that the best way to mitigate the unintended consequences of technology was through collaboration. Frankly, Victor Frankenstein was a bad scientist. He worked in isolation, with no peer oversight or review, and abandoned his creation as soon as the going got rough. LaPointe advocated for crowdsourcing innovation, diversifying the tech development process, and including nonscientists in discussions about research. Humility in the face of technology—which Arbesman described as halfway between the natural awe that follows innovation and the complete disdain or fear with which Frankenstein regarded his monster—would also go a long way.
Jacob Brogan, editorial fellow at New America, launched the last panel by arguing that it is not the novelty of technology that scares us. Expanding on the Freudian concept of the uncanny, he suggested that the more we learn, the more alien the things we already know seem to be. Technology serves to show us how much larger—and stranger—the world is than we expect. Charlotte Gordon, author of Romantic Outlaws: The Extraordinary Lives of Mary Wollstonecraft and Her Daughter Mary Shelley, said that another reading of Shelley’s novel would reflect a fear not of an unknown technology, but of the society in which she was raised.
If Shelley sympathized with anyone in Frankenstein, it was the monster. Gordon noted that as an unwed mother, the female author of a truly gruesome book in 19th-century England, and the daughter of radical feminist Mary Wollstonecraft and political theorist William Godwin, Shelley was uniquely positioned to be ostracized from general society and to understand the sociopolitical ramifications of that gender-based exclusion. By creating the monster, Shelley was attempting to make the rules society abided by strange to those who accepted them without question. “As she’s writing Frankenstein, Mary Shelley is thinking less about technology than she is the social ills that she has endured,” Gordon said. “She’s really describing a world in which there are no mothers, in which the ideals of femininity as she knew it did not exist.”
*Correction, Feb. 8, 2017: This post originally misspelled Susan Tyler Hitchcock's middle name.
Future Tense Newsletter: What Will Trump Mean for the Internet?
Greetings, Future Tensers,
It’s so difficult to keep up with the political news these days that we can sometimes miss small but critical stories. This past week on Future Tense, Dan Gillmor highlighted one such issue: If the president keeps his earlier promises, expect the internet to be made great again for telecom giants, and terrible for the average user. For decades, telecom companies and the government have had a mutually beneficial relationship, he writes. Corporations gladly carried out surveillance, while Washington enacted policies encouraging consolidation. Abandon net neutrality on top of it, as it seems Trump’s FCC pick inevitably will, and you’ve got companies that would, Gillmor writes, “both owe the government and have more control over what you and I can do and say.”
This month’s Futurography course on cybersecurity self-defense continues with an introduction from Jennifer Golbeck on how to determine which online threats you need to protect yourself against. She also wrote an explainer about how to set up a virtual private network to safeguard your surfing on unprotected public Wi-Fi. Plus, Josephine Wolff explains how personal cybersecurity is about more than protecting just yourself online.
Here are some other things we read this week, between lobbying for the return of jetpacks at the Super Bowl:
- You’re not your car’s boss anymore: From blind-spot detections to drowsiness warnings, vehicle safety systems are getting better than ever. Yet drivers seem to bristle at handing over control to their cars. Michael Manser explains how we need to rethink, and in some cases redesign, the car-driver partnership to pave the way for safer roads.
- 300-drone salute: Though Intel’s synchronized “spaxels” seemed all show on Sunday as they lit the sky behind Lady Gaga’s Super Bowl halftime performance, Jacob Brogan writes there’s more to the story. The gleaming quadcopters also performed some serious public-relations aerobatics.
- Killer robots: Heather Roff details a surprising new public opinion survey that gauges what people around the world think of autonomous weapons—and what that may mean for regulating them on a global stage.
You may not be high on Putin’s to-hack list, but there are still good reasons to protect yourself online. And there is no better place to start than a Future Tense happy hour. Bring your devices and join us for drinks and demos on Feb. 16 in Washington, D.C., for a Cybersecurity Self-Defense Class where experts will teach you how to use a virtual private network, cover your digital tracks, use secure communications platforms, and more. RSVP to attend in person or watch online here.
Unable to look away from the terradorable otter robot spy,
for Future Tense
What Slate Readers Think About the Legacy of Frankenstein
Throughout January, Futurography focused on the legacy of Frankenstein, tracing the scientific and cultural reverberations of that 199-year-old novel. We looked at its relationship to the anti-vaxxer movement, how it can help A.I. researchers, and even why our modern monsters are so much sexier. But we’re also interested in what you have to say, so we’ve written up the results of our survey on the topic. Meanwhile, Futurography continues this month with our course on the essentials of cybersecurity self-defense.
By and large, those who wrote in agreed that Frankenstein still has lessons to teach us, though they had a range of thoughts about how it might do so. Some held that the novel offers a warning against the unintended consequences of our actions, while others took it as a story of hubris. “Just because you can create life, doesn’t mean you can, or should try to, control it,” one wrote, even as another took the opposite approach, proposing that it’s imperative to “[c]ontrol the monsters that you build.”
Some embraced more philosophical approaches, as did one who wrote, “A ‘monster’ is an assigned concept that becomes self-fulfilling.” And yet another described the book as “a profound essay on … the male inability to escape the trap of masculine thinking,” arguing, “Given the opportunity to create a new being, instead of nobility and kindness, they enshrine strength and violence.”
A similarly contemplative attitude manifested in many readers’ response to the question of whether we should worry about scientists “playing god.” As one put it, “We all play God. We think we know best and continually lament that other people aren’t more like us.” Meanwhile, another observed, “If by ‘playing God’ you mean blithely creating things without looking to the consequences and impacts they may have in the real world, yes, this is a concern,” before adding, “good old-fashioned omnipotent hubris” was less worrisome, even if it was still a concern. And at least one thought the framing of the question was wrong, telling us, “There is no god. Stop using the false concept and perpetuating it.”
Whatever their stances on the question of divine overreach, readers pointed to a variety of possible Frankentechnologies. Cloning seems troubling, wrote one, explaining, “Already had that as I am a twin.” Several others echoed this concern in one way or another, though one tossed “AI systems capable of creative thought” into the hat as well. Some got far more specific, pointing to particular examples such as reports of a lab-created human-pig hybrid embryo. And one suggested that the real problem isn’t with bad science, but with scientific illiteracy: “It’s not so much the Frankentech that worries me as it is the uproar over it that seems to support pseudoscience and charlatans’ quests to have things like GMOs banned or labeled with no real evidence of harm.”
One thing that didn’t seem to divide readers? Their love for Young Frankenstein, which many cited as their favorite pop cultural incarnation of the original book. Several more pointed to the seminal 1931 James Whale film adaptation, about which one acknowledged, “I know it’s nothing like the book; they really seem to have keyed into the supposition of Victor’s madness/the Creature’s badness and gone off on a tangent from there.” Others nodded to an array of ’80s and ’90s gems, including Blade Runner, Robocop, and Demolition Man. Here too, though, at least one reader expressed an ongoing frustration with the Frankensteinian genre, since all adaptations “depict scientists in simplistic, comic book fashion.”
Finally, readers got speculative in response to our query about how the book would have differed if it had been written today. “His parts would be grown in a lab instead of stolen from graves,” one suggested, even as others proposed that the story would bypass the physical altogether and focus on artificial intelligences. Among those who thought that corporeality would remain important, some seemed to agree with Joey Eschrich, holding that the monster would be a lot sexier today. Beyond such superficial changes, however, many readers were convinced that the central themes would persist. Exploring this idea, one wrote, “Our simultaneous fascination with and fear of technological advancements seems to create [a] tension that’s worthy of dramatists in every century.”
And that, dear readers, is why we’re still reading Frankenstein today.
This article is part of the Frankenstein installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month, we’ll choose a new technology and break it down. Future Tense is a collaboration among Arizona State University, New America, and Slate.