Future Tense

Sheryl Sandberg Says Facebook Is Overhauling Its Ad System That Allowed Anti-Semitic Targeting

Sheryl Sandberg, chief operating officer of Facebook, at a meeting at Trump Tower last year.

Drew Angerer/Getty Images

Hey, look, journalism works! On Wednesday, Facebook Chief Operating Officer Sheryl Sandberg posted a lengthy note to her Facebook page announcing that the company has taken steps to remedy a situation pointed out last week by the nonprofit news outlet ProPublica, which found that the social media giant lets advertisers target users interested in anti-Semitism and other hateful categories. The investigation found that someone could buy ads that would reach “Jew haters” or people interested in “how to burn Jews.” A follow-up post by Slate found that even more categories, including “killing bitches,” “threesome rape,” and “killing Haji,” could also be used to tailor ads to Facebook users.

Sandberg writes that Facebook is now clarifying its advertising and enforcement process to ensure that content that “directly attacks” people based on race, sex, gender, national origin, religion, sexual orientation, or disability can’t be used to target ads. Sandberg says that targeting of this kind has always been against Facebook’s policies; apparently those policies weren’t enforced in this area until now.

Facebook is adding more humans to review its automated ad system, Sandberg wrote, and is reinstating the “5,000 most commonly used targeting terms” that it has deemed free of hate speech, though it’s not clear whether these are the only terms that Facebook will now allow advertisers to use in targeting. Finally, Facebook will build a new way for users to crowdsource complaints about the ads they see, such as when someone believes ads are being targeted to people based on their race or religion. Of the last fix, Sandberg says similar methods have worked in other parts of Facebook and should be able to carry over to ads. It’s not obvious what she means here either (she uses the term “technical systems”), and I’ve asked Facebook to clarify Sandberg’s statement. (I’ll update this post if they respond.)

It’s also unclear how Facebook plans to deal with ad targeting that doesn’t directly attack people but still allows for racist, homophobic, or sexist stereotyping. Facebook’s full criteria for what constitutes hate speech aren’t public information. But documents obtained by ProPublica earlier this summer reveal that, for content posted by users, the platform has used formulas that prohibit hate speech against “protected categories,” which include sex, gender identity, race, religion, national origin, serious disability or disease, and sexual orientation. Facebook is more permissive, however, when hate speech is directed at subsets of those categories defined by traits like age, political ideology, appearance, social class, or occupation. Under that formula, Facebook permitted hateful speech against black children, since children are a subset defined by age, which isn’t protected, but not against white men, since both race and sex are. If Facebook applies the same standards to ads that it uses on individual posts, the company might allow ads that target people who dislike poor people, for example. Or an advertiser might be able to target users based on their hatred of people who are perceived as fat.

Adding more people to the mix might help, since humans are likely better than automated systems at judging when something is offensive and when it isn’t. But unless Facebook publishes clear guidelines and enforces them consistently, simply adding more staff to try to fix the anti-Semitic targeting its system condoned might not free Facebook of its ad-targeting dilemma.

Another thing that might help: Facebook should hire more diverse technologists. According to the company’s most recent diversity report, its technical staff is 81 percent male and 1 percent black. You have to wonder: If there were more women or underrepresented minorities on Facebook’s engineering and product teams, would this flawed ad tech have even seen the light of day?