Future Tense

Facebook Is Cracking Down on Viral Hoaxes. Really.

To reduce the spread of viral misinformation, Facebook will give users the option to flag bogus news stories.

Image courtesy of Facebook

Facebook has long been a hotbed of hoaxes. As more people have turned to the site for news and information, its news feed has become a Petri dish for viral misinformation about everything from cancer to celebrity deaths to the Sept. 11 attacks.

That may finally be changing.

In December, I wrote a column proposing an easy way to inhibit the spread of false information on Facebook: Tweak the news feed software to take into account some obvious signs from users that a given story might be false. The idea came to me from Slate’s science editor, Laura Helmuth, who was tired of having to debunk viral anti-science stories that were circulating on the social network.

For instance, we suggested that appearances of the words hoax and debunked in the comments below a story might be good tipoffs that a story is bogus. Links to Snopes.com stories in the comments might also suggest that the original post contained falsehoods. Facebook wouldn’t have to censor such stories entirely, I wrote. Its news feed algorithms could simply treat them with a little more caution, the same way they’ve recently been reprogrammed to mitigate the flood of clickbait, like-bait, and other “low-quality” content on the site.
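
To make that proposal concrete, here is a minimal sketch of what such a comment-based signal might look like. Everything here is hypothetical: the keyword list, the Snopes check, the weights, and the threshold are invented for illustration, and Facebook's actual news feed ranking is far more elaborate and proprietary.

```python
import re

# Hypothetical illustration of the comment-signal heuristic described above.
# Keywords, weights, and threshold are invented; this is not Facebook's code.
HOAX_KEYWORDS = ("hoax", "debunked", "fake", "not true")
SNOPES_PATTERN = re.compile(r"https?://(?:www\.)?snopes\.com/", re.IGNORECASE)

def hoax_score(comments):
    """Return a rough 0..1 score estimating how likely a post is a hoax,
    based only on signals readers leave in the comments."""
    if not comments:
        return 0.0
    flagged = 0
    for text in comments:
        lowered = text.lower()
        if any(word in lowered for word in HOAX_KEYWORDS) or SNOPES_PATTERN.search(text):
            flagged += 1
    return flagged / len(comments)

def demote_in_feed(base_rank, comments, threshold=0.1):
    """Downrank (never censor) a story whose comments suggest it's bogus."""
    score = hoax_score(comments)
    return base_rank * (1 - score) if score >= threshold else base_rank
```

A post whose comments are full of "hoax" and Snopes links would sink in the rankings but still appear; everything else would flow through untouched.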

At the time, Facebook said it had no plans to downgrade likely hoaxes in its news feed. “We haven’t tried to do anything around objective truth,” news feed product manager Greg Marra told me in November. “It’s a complicated topic, and probably not the first thing we would bite off.”

On Tuesday, however, Facebook announced that it will indeed update its news feed software to flag stories that might be false—and to limit their spread. It won't do it in exactly the way we proposed, but the approach is similar. Instead of scanning the comments on a given post, Facebook has added an option for users to explicitly flag the post as “a false news story” when they run across it in their feeds. Here’s what that looks like:

Facebook: "Help us understand what's happening"

Screenshot courtesy of Facebook

As another signal that a given story might be false, Facebook will also look at how often it has been deleted by the people who posted it. The theory is that a widely deleted post may be one that many users regretted posting because they realized it was bogus.

As we proposed, Facebook won’t remove such stories from its feed altogether. Instead, the company said it will reduce their distribution and add an annotation warning news feed readers that they may contain false information. So a post that has been either widely deleted or flagged as false news by a large number of users will now come with a note like this when it appears in your feed:

"Many people on Facebook have reported that this story contains false information."

Screenshot courtesy of Facebook / Illustration by Slate
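
As a rough mental model of how these two signals might work together, consider the following sketch. The field names, thresholds, and penalty are assumptions made for illustration; Facebook has not published its implementation.

```python
# Hypothetical sketch combining the two signals Facebook described:
# explicit "false news" reports and how often sharers deleted the post.
# Thresholds, field names, and the penalty factor are illustrative only.
from dataclasses import dataclass

@dataclass
class PostStats:
    impressions: int       # times the post was shown in feeds
    false_news_flags: int  # users who reported it as "a false news story"
    shares: int            # times the post was shared
    deletions: int         # sharers who later deleted their post

def likely_hoax(stats, flag_rate=0.02, delete_rate=0.25):
    """True if either signal crosses its (invented) threshold."""
    flagged = stats.impressions and stats.false_news_flags / stats.impressions >= flag_rate
    deleted = stats.shares and stats.deletions / stats.shares >= delete_rate
    return bool(flagged or deleted)

def annotate_and_demote(stats, base_rank, penalty=0.5):
    """Reduce distribution and attach a warning, without removing the post."""
    if likely_hoax(stats):
        warning = ("Many people on Facebook have reported that this story "
                   "contains false information.")
        return base_rank * penalty, warning
    return base_rank, None
```

The key design point, per Facebook's announcement, is that neither signal removes a post: a flagged story is merely shown to fewer people, with the warning attached.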

To be clear, Facebook’s software will not be analyzing the actual content or substance of stories to suss out the fake ones. That would be extremely difficult and fraught with the potential for mistakes. Its approach—relying on explicit feedback from human users—is far simpler and makes more sense. Humans, for all our flaws, are still collectively better than bots at recognizing bogus stories when we see them.

Facebook told me these changes should not affect satirical articles from sites like The Onion. The company found in its testing that these sorts of posts are not often flagged as false by users—or, at least, not as often as actual hoaxes are. Re/code’s Peter Kafka evinced some skepticism about that, which I can understand. But I can also see how Facebook might be right. Plenty of people might mistake an Onion story for a hoax at first blush. But intentional hoaxes seem more likely to provoke annoyed users to take the extra step of reporting them to Facebook as “false news.” Admittedly, the line becomes blurrier when it comes to fake-news sites such as the Daily Currant, which claim to be satire but profit from duping the gullible. Presumably Facebook’s hoax-flagging algorithms will have a relatively light touch. After all, its “war on clickbait” hasn’t exactly crushed the likes of Upworthy so far.

So, did Facebook get Slate’s memo in deciding to implement these changes, or was the timing a coincidence? The company wouldn’t tell me directly, saying only that it “started working on this update in November.” I first ran the idea by Marra on Nov. 4, which is when he told me it was “probably not the first thing we would bite off.” My story ran Dec. 3. So, who knows? Maybe I should have headlined this post, “Facebook Caves to Slate’s Call for Better Hoax Detection”—and then waited to see if Facebook flagged it as false.
