Future Tense

Facebook’s Not-So-Evil Crusade Against Clickbait

Facebook is trying to make its algorithms harder to game. That’s good news for journalism.

Photo by Stephen Lam/Getty Images

Facebook brought clickbait into this world, and now it’s trying to take it out.

In a blog post Monday, the company announced a change to the algorithms that govern what you see in your Facebook news feed. The change is aimed at filtering out “click-baiting headlines”—that is, headlines that entice people to click on them, but lead to stories that fail to satisfy. The goal, Facebook says, is “to help people find the posts and links from publishers that are most interesting and relevant, and to continue to weed out stories that people frequently tell us are spammy and that they don’t want to see.”

This should come as a welcome change for just about everyone. One of the loudest complaints about Facebook in recent years has been the profusion of viral junk that is carefully designed to game the site’s algorithms by attracting cheap clicks and likes. (See the post below for an example.)

It turns out people don’t particularly like being manipulated.

Screenshot courtesy of Facebook

It would be bad enough if this sort of content were confined to Facebook itself. Unfortunately, it has also infected the wider Web due to Facebook’s outsize influence on other media organizations’ fortunes. You can now find headlines that oversell their corresponding stories just about everywhere, from Upworthy to Business Insider to the Atlantic. Yes, Slate too has been guilty of this on plenty of occasions, despite our writers’ and editors’ genuine efforts to walk the fine line between entertaining headlines and sensational ones.

The fact is that most journalists don’t want to oversell their stories. But Internet advertising and social media have ushered in a free-for-all marketplace in which the grabbiest headlines tend to win readers, even if the ensuing content doesn’t deliver on their promise.

Some of the most irksome excesses have been driven by Facebook’s news feed algorithms, which have historically rewarded stories that get clicks and likes, regardless of whether those stories are actually any good. Sites that don’t attempt to game those algorithms risk irrelevance or extinction at the hands of those that do. So if any single entity has the power to tilt the incentives back toward headlines that actually tell readers what a story is about, it’s Facebook.

Ah, but how can Facebook know whether a story is any good? That is, how does it define clickbait? Those are important questions—and Facebook has surprisingly good answers.

Clickbait, says Facebook, is “when a publisher posts a link with a headline that encourages people to click to see more, without telling them much information about what they will see.” That’s a fair, if necessarily subjective, definition of the term. As BuzzFeed’s Matt Lynley explains:

This is not to suggest that all stories that have clickable headlines will be penalized. While the term “clickbait” is often a placeholder to describe undesirable internet content, the clickbait that Facebook will look to eradicate is made up of posts that often fail to deliver on the headline’s promise or posts that leave readers feeling tricked.

OK, so how can Facebook’s algorithms recognize clickbait when they see it? They do it by looking beyond the standard metrics—total likes and clicks—to focus on what happens after a user clicks on a story. Do people actually spend some time reading the post once they’ve clicked through? Do they go on to like it, comment on it, or share it with their friends? If so, Facebook assumes that they got some real value out of it.

If, on the other hand, people click on a story only to end up right back on Facebook moments later, that raises the probability that it was clickbait. Likewise, if most people are liking a story before they’ve read it rather than after, that’s an indication that they’re responding to the headline and/or the photo rather than the substance of the story.
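For the technically curious, here’s a minimal sketch, in Python, of how a scoring heuristic built on those two signals might look. To be clear, this is an illustration, not Facebook’s actual code: the event fields, the thresholds, and the weights are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ClickEvent:
    """One reader's interaction with a link post (hypothetical schema)."""
    seconds_away: float       # time between clicking out and returning to the feed
    liked_before_click: bool  # liked the post without reading the story?
    engaged_after: bool       # liked, commented, or shared after reading

# An illustrative guess at what counts as bouncing "right back" to Facebook.
QUICK_BOUNCE_SECONDS = 10.0

def clickbait_score(events: list[ClickEvent]) -> float:
    """Return a 0-1 score; higher means the post looks more like clickbait.

    Combines the two signals described above: readers bouncing straight
    back to the feed, and likes arriving before anyone has read the story,
    offset by genuine post-read engagement.
    """
    if not events:
        return 0.0
    n = len(events)
    bounce_rate = sum(e.seconds_away < QUICK_BOUNCE_SECONDS for e in events) / n
    blind_like_rate = sum(e.liked_before_click for e in events) / n
    engagement_rate = sum(e.engaged_after for e in events) / n
    # Weights are arbitrary for the sketch; clamp the result to [0, 1].
    raw = 0.5 * bounce_rate + 0.5 * blind_like_rate - 0.3 * engagement_rate
    return max(0.0, min(1.0, raw))

# Example: two readers bounce within seconds after liking on the headline
# alone; one reads for a while and engages afterward.
posts = [
    ClickEvent(4.2, liked_before_click=True, engaged_after=False),
    ClickEvent(7.0, liked_before_click=True, engaged_after=False),
    ClickEvent(95.0, liked_before_click=False, engaged_after=True),
]
print(f"clickbait score: {clickbait_score(posts):.2f}")
```

The real system presumably blends many more signals than this, but the basic shape, penalizing quick bounces and headline-only likes while rewarding post-read engagement, is the idea the company describes.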

As with any change to Facebook’s algorithms, this one has sparked its share of carping and conspiracy-mongering despite its apparent good intentions. Who is Facebook, critics demand to know, to tell us what to read and what not to read? If people like clickbait headlines, why should Facebook withhold them from us? What’s the secret agenda here?

These questions rest on flawed premises.

First, Facebook is not telling people what to read. Like any media company, from CNN to the New York Times, Facebook aims to present its users and readers with a selection of content that it thinks will interest and inform them. If it fails in that task and readers don’t like what they see, they’ll go elsewhere. Schoolteachers tell people what to read. The Chinese government tells people what not to read. Media organizations in a competitive marketplace, including social-media sites, simply do not have that power.

Second, if people really liked clickbait headlines, Facebook probably would keep showing them to us. Facebook isn’t waging war on clickbait out of some paternalistic sense of responsibility. It’s doing so because its own users have explicitly said in surveys that they don’t like clickbait. Yes, they may succumb to teaser headlines, but they usually end up feeling cheated and annoyed. That feeling, in turn, makes them less likely to spend time on Facebook in the long run. And that is the worst thing that could happen to Facebook’s business.

Whether this strategy will work as intended is another question. It’s quite possible that Facebook’s implementation of this change will backfire somehow, or open up new ways for publishers to game the system. No single metric, including “attention-minutes,” can fully capture the value of a given story to readers.

Facebook understands that, and is likely to keep tweaking its algorithm to respond to new traffic-grubbing tactics as they emerge. This is exactly what Google has been doing for years to combat shady search-engine optimization strategies that skew its search results.

Facebook’s rise as a portal for news has profoundly changed journalism in just the past few years. Some of those changes are welcome, like the way the social network can deliver a great story—or even a life-saving one—to a far wider audience than it would have reached otherwise. Others are insidious, like the way it can deliver wildly sensationalized or inaccurate stories to a wide audience at the expense of more nuanced ones.

Fortunately for all of us, Facebook is beginning to realize that those skewed incentives risk harming its own brand in the long term. The better Facebook gets at understanding what its users actually like, as opposed to what they just Facebook-like, the more its positive effects on journalism will balance out the insidious ones.
