Technology

The Rise of the Facebook Truthers

Journalists will believe anything about the social network, as long as it’s totally outlandish.

There’s a collective willingness to believe Facebook is up to no good.

Photo by Manjunath Kiran/AFP/Getty Images

Something about Facebook makes journalists lose their minds. How else to explain the seemingly unending procession of stories based on wild speculation and implausible conspiracy theories? 

This young week has already brought two separate sets of bizarre allegations about the social network’s business practices. Both have largely been taken at face value by commentators on Twitter, Hacker News, and elsewhere, despite making very little sense.

The more outlandish of the two comes from Business Insider’s Nicholas Carlson, a writer who seems fully capable of sound reasoning when the subject is a company other than Facebook.

In a post Monday about an apparent decline in traffic to Upworthy, the politically progressive viral-content site, Carlson hypothesized that recent changes to Facebook’s news feed algorithm were to blame. Those changes, which Facebook explained in a November blog post, were meant to favor content that users actually enjoyed over viral memes and other types of posts that, however widely shared, are also widely disliked in Facebook user-satisfaction surveys. 

Based on my own reporting, I think it’s unlikely that those news feed changes are responsible for Upworthy’s apparent dip in Facebook traffic—if the dip is even real. Upworthy chief executive Eli Pariser told me his site’s metrics can fluctuate from month to month, so an apparent decline in December and January might be more accurately viewed as a regression from a one-time spike in November.

In any case, Pariser went on, Upworthy has actually changed its own priorities in recent months. Rather than focus on page views and Facebook shares as its main metrics for a story’s success, the site has begun measuring the amount of time readers spend reading or commenting on a given story once they’ve clicked on it. Accordingly, Upworthy has been tightening its focus on stories that it sees as relevant to its political and social mission. Presumably, that comes at the expense of posts that go viral on Facebook simply because they’re feel-good human-interest stories. “If that means that in the short run we do a few less page views or a few less uniques, we’re totally happy to take that trade, because we’re optimizing for user satisfaction in the long run,” Pariser told me.

Besides, he pointed out, the same metrics that showed a big dip in Upworthy’s Facebook shares showed a sustained rise in Facebook interactions for ViralNova, a viral-content site that shares Upworthy’s affinity for click-bait headlines but not its social mission. That would seem to undermine the Facebook-is-punishing-Upworthy argument, Pariser noted. “We don’t have any reason to believe Facebook’s trying to optimize for ViralNova, right?”

So Carlson’s main hypothesis is probably off-base—but that’s not even the biggest problem with his post. The jaw-dropping part comes at the end, where he tries to explain why BuzzFeed’s traffic has apparently been surging while Upworthy and others have suffered:

It could be that BuzzFeed, unlike all those other sites, buys traffic from Facebook. BuzzFeed’s business model is to create advertorials on BuzzFeed.com and then get traffic to these advertorials by buying Facebook ads. If that’s the reason, then the message Facebook is sending isn’t so much that it wants “high quality” content for its News Feed. It’s that if you are a media company, and you depend on Facebook for your traffic, you better make sure Facebook is benefiting from your existence.

This is an explosive claim. To be clear, there’s nothing wrong with “buying traffic” in the form of paying for advertisements on Facebook. But the algorithms that govern Facebook’s news feed are supposed to be independent of its advertising products. Carlson seems to be insinuating that Facebook rigged its news feed to punish news outlets that don’t advertise on Facebook and favor those that do. This is akin to accusing Google of rigging its organic search results to punish companies that don’t advertise with Google. For that matter, it’s akin to alleging that Business Insider rigs its news stories to provide favorable coverage to companies that advertise on Business Insider and unfavorable coverage to those that don’t.

That’s a pretty serious charge—which Carlson must have realized at some point, because he went back and quietly changed the article after BuzzFeed complained. Now it concludes by speculating that BuzzFeed has done well on Facebook lately because of its great content.

Is it theoretically possible that Facebook could rig its news feed in this way? I suppose so. The company considers its algorithms a secret sauce, so we can’t know for sure what’s in them. But rigging the news feed would not only be unethical—it would also be terrible for Facebook’s own business. Ads on Facebook are displayed in a few discrete slots within the news feed, and the ads you see are chosen by different algorithms than the ones that rank the organic posts in your feed. Buying ads on Facebook, then, ought to get you one thing: ads. The news feed, meanwhile, is the core product by which Facebook attracts, retains, and engages its users, and the company has huge teams of engineers and machine-learning experts working constantly to fine-tune its algorithms to show people the posts they most want to see. If those engineers had to go in and muck with their code every time the ad team struck a deal with the likes of BuzzFeed, they’d rebel. Meanwhile, the news feed would suffer, and users would flee.
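To picture the separation being described, here’s a minimal sketch of two independent ranking pipelines. Every name, signal, and scoring rule in it is hypothetical, invented purely for illustration; this is not Facebook’s actual code, just the architecture the company claims to have.

```python
# A hypothetical sketch of the claimed separation: organic feed ranking and
# ad selection as two independent functions that never read each other's
# inputs. All names, signals, and scoring rules here are invented.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    predicted_interest: float  # e.g., inferred from a user's past engagement

@dataclass
class Ad:
    advertiser: str
    bid: float        # what the advertiser pays
    relevance: float  # how well the ad matches the user

def rank_feed(posts: list[Post]) -> list[Post]:
    # Organic ranking sees only interest signals; ad spend is not an input.
    return sorted(posts, key=lambda p: p.predicted_interest, reverse=True)

def pick_ads(ads: list[Ad], slots: int) -> list[Ad]:
    # The ad auction sees bids and relevance; it cannot reorder the feed.
    return sorted(ads, key=lambda a: a.bid * a.relevance, reverse=True)[:slots]

feed = rank_feed([Post("Upworthy", 0.7), Post("BuzzFeed", 0.9)])
ads = pick_ads([Ad("BuzzFeed", bid=2.0, relevance=0.8)], slots=1)
# Buying ads changes only which ads fill the ad slots; rank_feed's output
# is untouched. Rigging the feed would mean wiring `bid` into rank_feed,
# which is precisely the cross-contamination Facebook denies.
```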

Facebook itself called the Business Insider allegation simply “not true.” Spokeswoman Jessie Baker told me, “Organic News Feed ranking is not impacted at all by ads. We try to show people the things they will find the most interesting based on what and who they interact with, not who spends money on Facebook.”

Further evidence of our collective willingness to believe just about anything about Facebook so long as it looks salacious: Carlson’s accusation came the same week that another critic scored a viral hit alleging essentially the opposite. In a nine-minute video that went viral on Reddit and Hacker News, and was republished by Mashable, Derek Muller of Veritasium makes the case that Facebook is in effect punishing its own advertisers by flooding their pages with bogus likes.

Some of Muller’s complaints ring true: Facebook does have a problem with bots—i.e., fake accounts—and “click farms” in which people are paid to like various pages and posts in order to boost their apparent popularity on the network. Muller is right that these likes threaten to dilute a page’s reach, because the bogus likers don’t engage with any subsequent posts, which signals to Facebook’s algorithms that the posts aren’t that interesting. Muller’s concern is that people who use Facebook’s own advertising tools to gain more likes are ending up with bogus likes as well, perhaps as a byproduct of click-farm activity. The mechanism proposed here is fascinating and makes the full video worth watching if you have even a passing interest in click fraud or online marketing.
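The arithmetic behind that dilution is simple enough to sketch. Here’s a toy illustration, with numbers and an engagement-rate signal I’ve made up for the purpose; there’s no claim that this is how Facebook actually computes reach.

```python
# Toy arithmetic for the dilution Muller describes. The numbers and the
# engagement-rate signal are hypothetical, not Facebook's actual formula.

def engagement_rate(engaged_fans: int, total_likes: int) -> float:
    """Fraction of a page's fans who interact with a typical post."""
    return engaged_fans / total_likes

# A page with 1,000 genuine fans, 100 of whom engage with each post.
real_only = engagement_rate(engaged_fans=100, total_likes=1_000)

# The same page after a click farm adds 4,000 fake likes that never engage.
with_fakes = engagement_rate(engaged_fans=100, total_likes=5_000)

print(f"real fans only:   {real_only:.1%}")   # 10.0%
print(f"after fake likes: {with_fakes:.1%}")  # 2.0%

# If the feed algorithm shows a page's posts to fewer people as its
# engagement rate falls, every fake like makes future posts look less
# interesting, so the paid-for audience actually shrinks.
```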

Again, Muller’s hypothesis seems plausible, and bogus likes could well be a genuine obstacle to Facebook delivering on its promises to advertisers.

But that’s just the thing: If Muller is right that bogus likes are diluting advertisers’ reach on Facebook, this would be an obvious problem … for Facebook. Where Muller jumps the tracks is in suggesting that Facebook has no incentive to crack down on fake likes, because it gets paid either way. He goes so far as to title his video “Facebook Fraud,” implying that Facebook tolerates click fraud for its own benefit. We’re talking about a $160 billion public company whose success quite plainly depends on its ability to convince advertisers that it can deliver real value in exchange for their dollars. Do we really think it’s intentionally swindling its own clients for short-term gain?

Again, it’s theoretically possible but would be monumentally counterproductive. Still, I dutifully put the question to Facebook: Are you perhaps being a little lax in cracking down on bogus likes in order to make it easier to fulfill your own promises to advertisers? Here’s spokesman Jay Nancarrow’s response:

Fake likes don’t help us. For the last two years, we have focused on proving that our ads drive business results and we have even updated our ads to focus more on driving business objectives. Those kinds of real-world results would not be possible with fake likes. In addition, we are continually improving the systems we have to monitor and remove fake likes from the system. … We’ve made a lot of progress by building a combination of automated and manual systems to block accounts used for fraudulent purposes and like button clicks. We also take action against sellers of fake clicks and help shut them down.

People are so suspicious of Facebook these days, though, that accusatory posts like Carlson’s tend to be far more widely shared on the site than well-reasoned debunkings. For instance, my story picking apart a ridiculous Princeton study that found Facebook is headed the way of MySpace did not reach anywhere near as many readers as a brief Time post that reported on it uncritically. Were I to view the journalists who publish such posts through the same lens through which they apparently view Facebook, I would conclude this post by suggesting that they’re willfully duping their own readers in order to boost their own traffic. While that’s theoretically possible, I find it highly unlikely. After all, they’d only be hurting their own business in the long run.