Future Tense

The Most Important Lesson From the Leaked Facebook Content Moderation Documents

Content moderation: harder than it looks.


“Napalm Girl.” Philando Castile. Donald Trump’s hate speech. Fake news. The Cleveland murder. If you’re a living, breathing, clicking person, you’ve probably heard about these moments in which Facebook faced controversy for removing (or not removing) users’ postings. Beyond the substance of the speech that is or isn’t being taken down, part of the controversy derives from the realization that Facebook is constantly policing our speech at all. This process, called content moderation, has been happening at Facebook since 2008, and until recently how and what Facebook moderates has been largely opaque.

Of course, every social media post is subject to the terms and conditions of that site. At Facebook these are called Community Standards. An example of a community standard is “We restrict the display of nudity … ” But enforcing that standard is much more complex than the straightforward sentence might suggest, and until recently, the fine details of Facebook’s approach were kept under wraps. When a picture or video is flagged for violating a Facebook standard on nudity, it’s sent to a human content moderator to review. That moderator uses a number of intricate rules, internally developed by Facebook, to determine whether the content should be removed. These internal rules are what the documents published in May by the Guardian—more than 100 pages of internal Facebook content moderation rules—reveal. The documents are an incredible tool for understanding how the social network conceives of hate speech, violence, and sexual content, as well as what it finds permissible.

For example, exceptions to the nudity standard state that pictures of “ ‘handmade’ art showing nudity and sexual activity is allowed but digitally made art showing sexual activity is not” and that “[v]ideos of abortions are allowed, as long as there is no nudity.”

This week, ProPublica published an in-depth investigative article explaining how and why Facebook makes the decisions it makes in policing user content. (Full disclosure: I was interviewed for the article, and it cites my previous work on content moderation.) The article is the latest peek into a private process that has slowly become more transparent over the past few years. Some of that transparency into Facebook’s content moderation process has been involuntary. In 2013, Adrian Chen published a few pages of a leaked training manual used by third-party content moderators in the Philippines hired by Facebook. That was most of what we knew about Facebook’s internal policies until the Guardian published the leaked documents.

Not all of these disclosures have been involuntary, however. Just this week, Facebook published a blog post carefully detailing and explaining how those working in content moderation and moderation policy deal with the “hard questions” its employees have to answer in policing speech on a global platform with more than 2 billion users. “What does the statement ‘burn flags not fags’ mean?” Richard Allan, vice president of European, Middle East, and African public policy at Facebook, wrote as an example of the difficulty of sussing out hate speech without context. “[I]s it an attack on gay people, or an attempt to ‘reclaim’ the slur? Is it an incitement of political protest through flag burning? Or, if the speaker or audience is British, is it an effort to discourage people from smoking cigarettes (fag being a common British term for cigarette)?”

Allan’s post, the ProPublica piece, and the hundreds of pages of documents in the Guardian leaks demonstrate another central fact about content moderation: Many of these questions would be difficult even for a constitutional lawyer. They also highlight where Facebook has been able to use algorithms or automation to moderate content, and where the questions are so complex and nested in constantly changing social norms that it will likely be decades before AI can take over much of this moderation work from people.

That is one of the most important realities of these new disclosures: The things we’re upset about—for example, why Facebook’s “rules protect white men from hate speech but not black children”—are not things you can fix with an algorithm or new AI, as Jacob Brogan recently wrote in Future Tense. They might not even be things you can fix with new policy. Now that the curtain is pulled back, it turns out that these decisions we’re so unhappy with are just really hard problems, and that humans are the ones making them.

Does that reality alter what we expect from Facebook? Not necessarily, but it should shape how we respond to the company in order to get real change, and where we focus our ire when bad decisions are made. As my colleague Margot Kaminski and I wrote earlier this week, “unlike a government, Facebook doesn’t respond to elections or voters. Instead, it acts in response to bad press, powerful users, government requests and civil society organizations.” Thus, it’s the job of civil liberties groups and user rights groups to take “advantage of the increased transparency to pressure these sites to create policies advocates think are best for the users they represent.”

And there’s hope that Facebook will listen. Last week the company released a new mission statement that prioritizes giving “people the power to build community and bring the world closer together.” It’s a much more human slogan than the prior sterile, goal-like motto to “make the world more open and connected.” Perhaps that reflects not only us humans who use Facebook every day, but also the humans who work to bring it to us.