Future Tense

Facebook Under Pressure

Looking beyond the social network’s decision to take down and then restore the “napalm girl” photo. 

Vietnamese children flee from their homes in the South Vietnamese village of Trang Bang after South Vietnamese planes accidentally dropped a napalm bomb on the village, located 26 miles outside of Saigon.
A version of the “napalm girl” photo.*

Bettmann/Getty Images

Late last week, the Norwegian prime minister did something relatively unremarkable: She posted a historical war photo on Facebook. But this was no austere shot of Winston Churchill or even troops on the battlefield. The famous picture depicted a 9-year-old girl running naked through the dirt streets of Vietnam following a napalm attack. A few hours after being posted, it was removed by Facebook for violating community standards. The Norwegian PM claimed “censorship,” and on Friday morning, the Norwegian daily Aftenposten decried the takedown as “limiting freedom” in a letter on its front page.

A few hours later, Facebook put the picture back up, and reporters and the public were left to wonder what exactly had just happened. Some conjectured that the site’s algorithms had erred. Others simply claimed censorship. None of that gets it right. But worst of all are the outlets that miscast the “napalm girl” takedown as a story of human “editorial” judgment gone awry, rather than as part of a larger picture of how the public exerts pressure on the platforms that govern online speech.

“This works like a Toyota factory, not a newsroom,” Dave Willner, former head of content policy at Facebook and one of the architects of the process the social media site uses to curate user-generated content, told me in an online chat. “And that assembly-line nature changes what we can expect from these systems and how we have to critique them.”

That system works something like this: When you upload content to a site such as Facebook or YouTube, it is automatically screened by algorithms for easily identifiable illegal content. At YouTube, for example, a program called ContentID is used to check for known copyrighted content. At Facebook, and many other sites, an algorithm called PhotoDNA checks uploaded photos against a database of known child pornography. All that screening happens in the few milliseconds between upload and publishing.
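To make that upload-time screening concrete, here is a minimal sketch of hash matching against a database of known content. It is not Facebook’s or Microsoft’s actual PhotoDNA code: PhotoDNA uses a robust perceptual hash that survives resizing and re-encoding, while this illustration substitutes an ordinary cryptographic hash and a hypothetical KNOWN_BAD_HASHES set to keep the example short.

```python
import hashlib

# Hypothetical stand-in for a database of hashes of known prohibited images.
# Real systems like PhotoDNA use perceptual hashes; SHA-256 is used here
# only to keep the sketch self-contained.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the upload can be published, False if it is blocked.

    This check runs automatically in the short window between upload
    and publication, before any human sees the content.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest not in KNOWN_BAD_HASHES

if __name__ == "__main__":
    # An ordinary photo matches nothing in the database and is published.
    print(screen_upload(b"vacation photo"))  # True
```

Everything after this point in the pipeline, by contrast, happens only once content is already live.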

After that, the vast majority of posted content stays up. With a few exceptions, most platforms do not proactively search for content to remove, simply because there is far too much of it. Instead, they rely on users to flag content and explain how it violates the site’s terms of service.
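In other words, the moderation pipeline is reactive: nothing enters it until a user files a report. A rough sketch of that flow, with hypothetical names like flag_content and review_queue standing in for whatever the platforms actually call these pieces, might look like this:

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flag:
    post_id: str
    reason: str       # which part of the terms of service the flagger cites
    flagged_by: str

# Hypothetical queue of user reports awaiting human review. Unflagged
# content never enters this queue and simply stays up.
review_queue: deque = deque()

def flag_content(post_id: str, reason: str, user: str) -> None:
    """A user reports a post; the report is handed off to human moderators."""
    review_queue.append(Flag(post_id, reason, user))

def next_case() -> Optional[Flag]:
    """A moderator pulls the next flagged item to judge against internal rules."""
    return review_queue.popleft() if review_queue else None

if __name__ == "__main__":
    flag_content("post-123", "nudity", "user-456")
    print(next_case())  # Flag(post_id='post-123', reason='nudity', flagged_by='user-456')
```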

At Facebook, roughly 1 million pieces of content are flagged a day. And though those pieces of content are reviewed by human content moderators (a practice that has been written about extensively for years), to describe the removal of napalm girl as an act of editorial discretion—as NPR did in one story—would be a little like saying Babe Ruth was a great hitter because he had a problem with baseballs. The inference of intent is not only wrong; it doesn’t really make any sense.

“Facebook has guidelines, but beneath them it has layers and layers of people,” Tarleton Gillespie, a fellow at Microsoft Research who studies content moderation, told the Wall Street Journal on Friday. “At each of these layers, someone could remove something the rules actually allow, or allow something the rules actually prohibit.”

Willner’s factory analogy supports this idea of “layers” and perhaps improves on it: Workers, both outsourced and direct employees of Facebook, review the flagged content and judge it against internal versions of the terms of service to determine whether it should be removed. Very, very little is taken down: One former Facebook employee who worked in content moderation estimated in a phone interview earlier this summer that the daily percentage of removed photos or posts is “in the low single digits.” Of those removals, some are made in error, and an even smaller share generates the kind of controversy that reflects evolving norms of what we want to see online.

Of course, Facebook has seen these controversies before. As early as 2008, Facebook received criticism for removing posts that depicted a woman breast-feeding. The specifics of what triggered removal changed over time. Pictures of “non-active” breast-feeding were not allowed in 2012; in 2014, any flagged picture of a woman’s breast with an exposed nipple would be taken down, but later that year Facebook allowed pictures of breast-feeding mothers to show nipples. The changes came after a campaign in the media and in pages on Facebook itself, staged at least in part by women who had their content removed. Similar results followed public outcry over Facebook’s real name policy, the removal of a gay kiss, the censoring of an 1866 painting that depicted a nude woman, the posting of a video of a beheading, and the takedown of photos depicting doll nipples.

In this sense, napalm girl is nothing new. The photo is far from the first—or the last—picture to be removed and reinstated following popular outrage. Instead of hopping from one takedown flub to the next, both the press and users should focus on the interplay among the takedown process, the public, and Facebook’s response. The takedown, the process behind it, and the responses that can get a photo put back up are part of a new form of governance for online speech.

That means not just looking at how a piece of content like napalm girl gets removed, but how Facebook responds when it’s clear that the company’s process goes against what people want and expect from the site. Like any government, some of these platforms are more responsive than others. “What we do is informed by external conversations that we have,” said Monika Bickert, Facebook’s head of global policy, in an April interview with the Verge. “Every day, we are in conversations with groups around the world. … So, while we are responsible for overseeing these policies and managing them, it is really a global conversation.” This responsiveness might be one reason users keep moving to Facebookistan and fleeing like refugees from the Twitterverse.

It also reveals what might be the only thing worth worrying about in the napalm girl controversy: that some people can exert more influence than others over whether certain kinds of content get published. If the napalm girl photo had been removed from the feed of an average Facebook user in Norway, would it have gotten the same level of coverage? Or been put back on the site?

It’s hard to know—though campaigns like those surrounding breast-feeding imply that it might have been reinstated with enough public outcry. But the rapid escalation of this event seems to have been helped along by the high-profile nature of the photo’s poster. This is where the conversation should be as we talk about what happened with the napalm girl picture: on how these platforms balance the pluralistic influences of users, governments, media, and civil society groups to give us the internet we want and the online speech we expect.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.

Correction, Sept. 13, 2016: Due to a production error, the photo caption identified this article’s image as the “napalm girl” photo that Facebook removed. It’s a different version of the image; Nick Ut of the Associated Press took the famous version.