One Weird Trick to Stop Facebook Hoaxes

The social network could be a force for truth—if it wanted to be.

People will believe just about anything on Facebook. But it doesn’t have to be that way.


Macaulay Culkin is dead. Eating whole lemons will save you from cancer. And this shocking video proves that no planes actually hit the World Trade Center on 9/11.

That’s if you believe what you read on Facebook—which an awful lot of people clearly do. A recent Pew survey found that 30 percent of American adults turn to the social network for news, making it perhaps the most influential media platform in the country. Far more people read Facebook every day than watch CNN or Fox News, or read the Huffington Post or the New York Times.

And yet Facebook’s news feed remains a hotbed of hoaxes, lies, and conspiracy theories. (Yes, even more so than Fox News, or whatever shrill liberal publication you’d like to hold up as its lefty counterpart.) Some, like the periodic claims that today is the date shown on the time machine in Back to the Future Part II, are relatively benign. Others are more insidious, like the posts claiming that for every person who shares this photo of a cancer-stricken baby, Facebook will donate to the baby’s family to help cover medical expenses. And many are surprisingly resilient. This week, Slate debunked a bogus copyright notice that was going viral on Facebook for the third time in the past two years.

From a business perspective, media outlets have little cause to complain: Each of Slate’s short posts debunking that copyright notice has been shared tens of thousands of times, bringing the site some very easy traffic. But for a journalist—not to mention, you know, a human being and a citizen—it’s disheartening to realize that you can shout the truth over and over and over without making a discernible dent in the spread of the falsehoods. And the design of Facebook’s news feed has a lot to do with that.

True, Facebook didn’t invent the viral hoax. It just happens to be the perfect 21st-century venue for scams and urban legends that would have spread by word of mouth, tabloid, or chain letter in earlier eras. But Facebook amplifies misinformation at a speed and on a scale that exceed what was possible before. Its news feed is the product of cutting-edge software that the company has finely calibrated to maximize user engagement—that is, to prioritize the posts that catch people’s eyes and compel them to click, like, or share. One problem: Hoaxes, scams, and conspiracy theories are specifically optimized to do just that. Truth may be stranger than fiction, but on Facebook, fiction is often more viral.

Here’s the thing: It doesn’t have to be that way. I’ve written before about how machine-learning algorithms could help identify false rumors on social media. Already powered by some of the most advanced machine-learning software on the planet, Facebook’s news feed could easily be a potent force for truth if the company wanted it to be—far more so than rivals like Twitter, which is constrained by its chronological timeline from intervening too much in what its users see at any given time. So why isn’t it?

Facebook will tell you it’s because distinguishing truth from lies is none of its business. The purpose of the news feed, the company explains, is not to sift right from wrong or good from bad according to some objective standard. It’s to sift what’s interesting to each Facebook user from what isn’t—that is, to give its users what they want. And what they want, Facebook has learned, is to see what their friends, family, and acquaintances are talking about. Whether that’s a cute baby photo, a serious current event, a clever lifehack, or a 9/11 conspiracy theory is not Facebook’s concern.

“Our goal is to connect people with the content they’re most interested in and not to prioritize one point of view over another,” spokeswoman Jessie Baker told me.

There’s some logic to that. Facebook would be loath to appoint itself the arbiter of the veracity of everything its users post on the site, and I doubt its users would much appreciate that either. It’s a tech company, not PolitiFact.

That said, it’s a false dichotomy to imply that Facebook must either hire legions of censors to police users’ posts, or throw up its hands and absolve itself of all responsibility for the nature of the content they share. As Adrian Chen detailed in a terrific Wired story, Facebook already employs teams of traumatized contractors in the Philippines to scrub the site of pornography and criminality. In theory, they could zap certain known scams and hoaxes while they’re at it. That, of course, would be a difficult and controversial job, and I wouldn’t necessarily recommend that Facebook take it on.  

Fortunately, it doesn’t have to. There’s a much better and easier way that Facebook could rebalance the scales to give less weight to viral bunkum—if it cared enough to do so. It’s something that Facebook already does in service of other goals, like increasing engagement. As such, it doesn’t require a fundamental shift in Facebook’s philosophy. All it requires is an expansion of how Facebook defines “quality,” and a few corresponding tweaks to the news feed code.

For the past two years, Facebook has been on a high-profile campaign to prioritize “high-quality content” in its news feed, an effort I wrote about in depth earlier this year. Exactly what the company means by “high quality” keeps evolving. But at a basic level, the idea is that the number of clicks, likes, shares, and comments that a given post gets from your friends and others are not the only indicators of its worthiness to be shown in your news feed. Specifically, Facebook realized that users’ feeds were becoming overrun with what it calls “clickbait” and “like-bait”—posts whose sensational headlines ginned up “engagement” but whose content failed to deliver when users actually clicked through. To combat those trends, Facebook’s news feed engineers sought out new metrics, like the time users spend reading a post, and whether they come back to like it and share it once they’ve actually seen it.
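To make that shift concrete, here is a rough sketch, in Python, of the difference between ranking on raw engagement and ranking on engagement weighted by whether a post actually delivered. Every field name, weight, and threshold below is invented for illustration; this is a toy model of the idea, not anything resembling Facebook’s real ranking code.

```python
# A hypothetical sketch of engagement-only ranking vs. "did it deliver?" ranking.
# All fields, weights, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    clicks: int               # users who clicked through to the content
    likes_before_read: int    # likes given straight from the feed
    likes_after_read: int     # likes given after actually viewing the content
    shares: int
    avg_read_seconds: float   # average time spent on the post after clicking

def engagement_only_score(p: Post) -> float:
    """The naive approach: raw engagement counts, which clickbait games easily."""
    return p.clicks + p.likes_before_read + p.likes_after_read + p.shares

def quality_adjusted_score(p: Post) -> float:
    """Weight engagement by signals that the post delivered on its promise:
    time actually spent reading, and likes that come after viewing."""
    delivered = min(p.avg_read_seconds / 30.0, 1.0)  # did readers stick around?
    post_read_ratio = p.likes_after_read / max(p.likes_before_read + p.likes_after_read, 1)
    return engagement_only_score(p) * (0.5 + 0.5 * delivered) * (0.5 + 0.5 * post_read_ratio)

# A clickbait post: thousands of clicks, four-second visits, almost no likes afterward.
bait = Post(clicks=10_000, likes_before_read=800, likes_after_read=20,
            shares=400, avg_read_seconds=4.0)
print(engagement_only_score(bait), round(quality_adjusted_score(bait)))
```

A post that gins up clicks but fails to hold anyone’s attention scores far lower under the second function, which is the whole point of the change Facebook described.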

Similarly, Facebook chief Mark Zuckerberg was reportedly moved to tweak the algorithms in a different direction when he saw a co-worker’s birthday ranking ahead of the birth of his niece. The problem: The news feed’s emphasis on relationships was leading it to prioritize trivial posts from his closest connections over the major life events of other friends and family. Facebook’s clever solution: When a post prompts the word “congratulations” in the comments, show it to a wider audience.
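In code terms, that heuristic is almost trivially simple. Here is a hypothetical sketch of it; the function names, the comment threshold, and the boost factor are mine, not Facebook’s.

```python
# A hypothetical sketch of the "congratulations" heuristic; names and numbers are invented.
def looks_like_major_life_event(comments: list[str]) -> bool:
    """If enough friends are congratulating the poster, treat the post as a life event."""
    return sum("congrat" in c.lower() for c in comments) >= 3

def distribution_weight(base_weight: float, comments: list[str]) -> float:
    # Show likely life events (new baby, engagement, new job) to a wider audience.
    return base_weight * 2.0 if looks_like_major_life_event(comments) else base_weight

print(distribution_weight(1.0, ["Congratulations!!", "Congrats, so happy for you!", "congratulations :)"]))
```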

A similarly simple approach could work wonders to counter the inherent virality of popular hoaxes, as Slate science editor Laura Helmuth recently pointed out to me. Let’s say a given post is repeatedly triggering comments that contain links to Snopes.com, for instance, or that include words like “hoax” or “debunked.” Facebook’s algorithms could take that as a subtle sign that the post’s widespread engagement might be ill-gotten. It wouldn’t have to censor the post—just treat it less like a viral sensation worthy of topping everyone’s feeds. The poster’s friends and family might still see it and have a chance to educate or disabuse him of the false information. But it would be far less likely to spread like fungus across the entire platform.
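Here is roughly what Helmuth’s suggestion could look like in code. Again, this is a hypothetical sketch: the keyword list, the threshold, and the demotion factor are all invented, and a real system would have to contend with sarcasm, posts that are themselves debunkings, and comments in languages other than English.

```python
# A hypothetical sketch of demoting posts whose comments keep calling them hoaxes.
import re

SKEPTIC_PATTERNS = [
    re.compile(r"snopes\.com", re.IGNORECASE),
    re.compile(r"\bhoax\b", re.IGNORECASE),
    re.compile(r"\bdebunked?\b", re.IGNORECASE),
    re.compile(r"\bfake\b", re.IGNORECASE),
]

def skepticism_ratio(comments: list[str]) -> float:
    """Fraction of comments containing a debunking link or skeptical keyword."""
    if not comments:
        return 0.0
    flagged = sum(any(p.search(c) for p in SKEPTIC_PATTERNS) for c in comments)
    return flagged / len(comments)

def adjusted_rank_score(engagement_score: float, comments: list[str]) -> float:
    """Don't censor the post; just stop treating its engagement as a sign of quality
    when a meaningful share of commenters keep calling it a hoax."""
    if skepticism_ratio(comments) >= 0.2:     # a fifth of commenters are pushing back
        return engagement_score * 0.25        # demote, but friends can still see it
    return engagement_score

comments = ["OMG, sharing!", "This is a hoax, see snopes.com", "debunked ages ago"]
print(adjusted_rank_score(50_000, comments))  # demoted, not deleted
```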

I ran this idea by Greg Marra, a Facebook news feed product manager, in a recent conversation about another of the company’s efforts to improve the quality of users’ feeds. He sounded vaguely intrigued, but said his team probably wouldn’t be making it a priority anytime soon. “We haven’t tried to do anything around objective truth,” Marra mused. “It’s a complicated topic, and probably not the first thing we would bite off.”

Marra might disagree, but to me this confirms that Facebook could make its algorithms less hoax-friendly—if it saw misinformation as sufficiently detrimental to its business interests. Cracking down on clickbait, in Facebook’s view, is justified by feedback from users who have told the company they’re sick of misleading headlines. Apparently there hasn’t been a great hue and cry from its focus-group subjects to stop showing them stories that are too good to be true. But I suspect it might be one of those things that eats away at the platform’s credibility—and, ultimately, users’ trust and enjoyment—in ways that aren’t readily captured by Facebook’s existing metrics.

To its credit, Facebook has quietly developed some other features that can have the effect of countering blatantly deceptive content. For instance, certain posts that are being widely shared are now accompanied by a “related articles” module that sometimes includes links to Snopes or other debunkings. But, amusingly, Facebook clarifies that the intent of this feature has nothing to do with correcting the record. When Snopes links appear among the related articles, that’s only because they’re also being widely shared on the site. In fact, Facebook has been criticized for including bogus stories among its own automated “related articles” recommendations.  

Recently, the site has also begun testing a “[Satire]” tag that comes attached to stories from fake-news sites. This might work well in the case of stories from sites like Empire News and the Daily Currant, whose headlines often seemed designed to mislead more than amuse. But it feels a little heavy-handed and humorless when attached to actual satire. The Onion’s swift response: “Area Facebook User Incredibly Stupid.”

It’s true: Facebook, like the world at large, counts among its members a lot of gullible folks, and perhaps one way for users to avoid hoaxes on the site would be simply to unfriend all of them. But Facebook, unlike Twitter, is about reciprocal relationships with people you know in real life, some of whom are bound to be more skeptical than others. And even otherwise intelligent people can sometimes be taken in by fabrications, especially when they’re presented in the same stream as all of their legitimate news and come with the tacit endorsement of their friends.

So, yes, area Facebook users can be stupid—but Facebook could be doing a lot more to help them out. Maybe all we need now is for Zuckerberg to open his news feed one day and see a post from his young niece about how her mean old Uncle Mark is reporting people who pose with guns to the Feds.