The XX Factor

Facebook Should Not be in the Business of Censoring Speech, Even Hate Speech


The recent campaign by a number of women’s rights groups, most prominently Women, Action & the Media, to push Facebook to take anti-woman hate speech more seriously was, as Amanda Marcotte wrote yesterday, incredibly swift and effective, garnering a response from the social media giant in less than a week. The campaign’s goals are noble: for Facebook to bring its policies toward gendered hate speech in line with its policies toward similar types of content, including anti-Semitic and racist speech. An open letter to the company also noted that Facebook regularly takes down images of breastfeeding mothers and breast cancer survivors.

But the idea that Facebook should be arbitrating this type of speech at all is questionable. It must of course be noted that the company—like any company—is well within its rights to regulate speech as it sees fit. The question is not whether Facebook *can* censor speech, but whether it *should*.

For years, activists all over the world have complained of arbitrary takedowns of content and unfair application of Facebook’s “real name” policy. Along with breastfeeding moms are people like Moroccan atheist Kacem Ghazzali, whose Facebook pages promoting atheism in Arab countries were regularly removed. Before he rose to fame as the man behind the January 25 protests in Cairo, Wael Ghonim saw his famous “We Are All Khaled Said” page taken down because he was using a pseudonym. And not a week goes by in which I, as director for international freedom of expression at the Electronic Frontier Foundation, don’t receive emails from individuals everywhere from the United States to Hong Kong telling me their account was deleted “for no good reason.”

This happens because the company is simply unequipped to deal with the sheer number of complaints it receives on a daily basis. One billion users undoubtedly translates into millions of reports through Facebook’s system, a system about which the company is famously opaque. Whether these reports are fed through an algorithm or dealt with individually remains unclear, but what is certain at this point is that, as in the atheism example above, many such reports are false positives. So while Facebook is well within its rights to determine what types of speech it wants to host, the company is inconsistent at best in enforcing its own policies, and at worst biased in those policies.

Setting aside concerns about Facebook’s procedures, there is a bigger question: Should private companies be determining what constitutes “hate speech”? In the United States (where Facebook is based), most of the speech flagged by Women, Action & the Media as offensive is, while abhorrent, protected by law. And while Facebook may be private, many of its users treat it like the new town square, making it more of a quasi-public sphere. While the campaigners on this issue are to be commended for raising awareness of such awful speech on Facebook’s platform, their proposed solution is ultimately futile and sets a dangerous precedent for special interest groups looking to bring their pet issues to the attention of Facebook’s censors.