The Industry

Why Does Facebook Always Need a Shove to Deal With Hate Speech?

Kamala Harris’ grilling of Sheryl Sandberg got to a question at the core of Facebook’s moderation issues: Is hate profitable?

Facebook COO Sheryl Sandberg testifies during a Senate Intelligence Committee hearing on Capitol Hill on Wednesday in Washington. Drew Angerer/Getty Images

With two months to go until Election Day, social media executives were back on Capitol Hill on Wednesday to testify about propaganda and voter manipulation on their platforms. Politicians grilled Facebook Chief Operating Officer Sheryl Sandberg and Twitter CEO Jack Dorsey to learn more about what the companies are doing about foreign election meddling, alleged political bias, and their principles in general—like whether Facebook’s absence from China is due to human-rights concerns. A morning hearing with the Senate Intelligence Committee featured both executives (and an empty chair for Google), while an afternoon hearing with the House Energy and Commerce Committee starred only Dorsey. This was the first time either Dorsey or Sandberg had sat for public testimony in front of Congress, and the way the executives handled some of the tougher questions from elected officials revealed a lot about how their companies are coming to terms with their role in, and responsibility to, the public discourse—particularly when it comes to the thorny issue of hate speech.

In one particularly revealing line of questioning during the Senate hearing, California Democrat Kamala Harris asked Sandberg how Facebook makes money and whether the company’s hate-speech policies are truly aimed at protecting vulnerable communities that are often the subject of prejudice and animus. Harris, noting that Facebook makes more money the more people engage with the platform, also pointed out that it’s precisely the content that’s hateful, conspiratorial, and inflammatory that often generates the most engagement. Her point was that there’s a real question as to whether Facebook, a company whose first responsibility is to its shareholders, is adequately poised to address false news, hate speech, or any other harmful—and highly engaging—content that users generate.

Here’s part of their exchange:

Sen. Harris: Would you agree that—I think it’s an obvious point—the more people that engage on the platform, the more potential there is for revenue generation for Facebook? 

Sandberg: Yes, senator. Only when the content is authentic—

Sen. Harris: I appreciate that line. So the concern that many have is how we can reconcile an incentive to create and increase your user engagement when the content that generates a lot of engagement is often inflammatory and hateful. So for example, Lisa-Maria Neudert, a researcher at Oxford Internet Institute, she says, quote, “The content that’s most misleading or conspiratorial, that’s what’s generating the most discussion and the most engagement and that’s what the algorithm is designed to respond to.”   

Harris went on to note that it wasn’t until last year, perhaps in light of a blockbuster June 2017 report from ProPublica, that Facebook decided to refine its categories for what counts as hate speech. That report revealed documents showing that Facebook relied on formulas that prohibited hateful language against “protected categories,” which included sex, gender identity, race, religion, national origin, serious disability or disease, and sexual orientation. Yet the company was more permissive when it came to hate speech directed at subsets of those categories defined by non-protected characteristics, like age, political ideology, appearance, social class, or occupation. So, under that approach, an attack on white people in general would be considered hate speech, but a post calling for violence against a group defined by political or religious ideology, like “radical” Muslims, wouldn’t be, because it refers only to a particular subset of Muslims. Under this rubric, hate speech could be leveled at black children but not at white men. Sandberg said that that was a “bad policy” and that it has since been fixed.

Sen. Harris: My concern is that, according to Facebook’s community standards, you do not allow hate speech on Facebook. However, contrary to what we’ve seen, on June 28, 2017, a ProPublica report found Facebook training materials instructed reviewers to delete hate speech targeting white men but not against black children because black children are not a protected class. Do you know anything about that and can you talk to me about that?

Sandberg: I do. What that was, was, I think, a bad policy that was changed, but not saying black children—it was children—it was saying different groups weren’t looked at the same way and we fixed it.

Sen. Harris: Isn’t that with hate, period, that not everyone is looked at the same way?

Sandberg: Hate is against our policies and we take strong measures to take it down. We also publish publicly what our hate-speech standards are. We care tremendously about civil rights. We’ve worked closely with civil rights groups to search for hate speech and take it down.

Harris was suggesting that if Facebook was indeed dedicated to making its community safer for all users and weeding out hate speech, it probably would’ve examined its definition of hate speech before it was called out by journalists. And Sandberg didn’t offer much of an explanation for the oversight.

While it may be true that Facebook is working with civil rights groups now, that wasn’t the case for a long time, despite repeated attempts by civil rights organizations to get the social media giant to meet with them. In October 2016, a group of 73 national and local civil rights groups asked for clarification on the company’s content-removal policies, which the groups claimed were unfairly censoring racial-justice advocates. In January 2017, a coalition of 77 social and racial-justice organizations—including the ACLU and Asian Americans Advancing Justice—wrote a letter to Facebook requesting a meeting to address what the organizers called the “disproportionate censorship of Facebook users of color,” but Facebook declined. Then, in March 2017, the coalition gathered about 570,000 petition signatures to present to Facebook in another attempt to meet with the company about the troubling experiences minority groups had with its content-moderation policies. It wasn’t until October, after the ProPublica report came out, that Sandberg met with leaders of civil rights and racial-justice groups. In April, Facebook finally released its previously secret community guidelines for handling hate speech.

But that wasn’t necessarily out of a sense of responsibility to the civil rights community. It followed a revival of complaints from conservatives that Facebook was unfairly biased against them, a concern that came into even sharper focus after Facebook CEO Mark Zuckerberg’s testimony to Congress in April. All of which set the stage for Facebook agreeing in May to two separate internal audits—one focused on harms to civil rights that occur on the platform and a second to assess whether Facebook is unfairly biased against conservative voices. A claim of partisan bias from the party that currently controls Congress and the White House isn’t something one of the most powerful companies in the world can take lightly, even if there’s little evidence to back it up. And while it’s hard to know whether Facebook’s decision to release its rules and conduct audits was due to complaints from civil rights groups or cries from conservatives about decreased traffic, the company certainly responded quickly to the complaints of the party in power.

And so, when Harris asked Sandberg about Facebook’s previous hate-speech policy and how much money the company makes from posts intended to inflame political divisions, like those from Russian agents posing as American activists, Sandberg fumbled. She admitted Facebook was wrong in the past, admitted Facebook makes money from increased engagement, and said Facebook has changed its policies—but she couldn’t address the real problem Harris was getting at. That problem is that Facebook has every incentive to allow hate speech against historically targeted groups if it means more engagement, and little incentive to remove it without outside pressure or regulation. Which might lead critics to a question Harris didn’t ask but was probably thinking: Would the company do better if failing to protect its users’ civil rights were against the law?