The Real Problem Behind the Fake News

Facebook is under fire for spreading falsehoods. But it’s getting away with a bigger lie.

[Photo: Facebook CEO Mark Zuckerberg during a town hall at Facebook’s headquarters in Menlo Park, California, in 2015. Stephen Lam/Reuters]

In the wake of Donald Trump’s election as president, Facebook has taken justifiable heat for its role in spreading misinformation and propaganda about the candidates. In particular, its news feed algorithm fueled a cottage industry of fake and intentionally misleading “news” that skewed heavily anti–Hillary Clinton and pro-Trump, according to a BuzzFeed analysis. These falsehoods attracted far more user engagement, on average, than true stories from reputable news outlets and drowned out earnest attempts by dedicated fact-checking sites such as Snopes to debunk them.

This should not surprise anyone who understands how Facebook works. People tend to read, like, and share stories that appeal to their emotions and play to their existing beliefs. Without robust countervailing forces favoring credibility and accuracy, Facebook’s news feed algorithm is bound to spread lies, especially those that serve to bolster people’s preconceived biases. And these falsehoods are bound to influence people’s thinking.
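To see the structural point in miniature, consider a toy sketch (a hypothetical illustration in Python, not Facebook’s actual algorithm, whose details are not public): if a feed scores posts purely on predicted engagement, with no term for credibility, the most emotionally provocative version of a story wins regardless of whether it is true.

```python
# Toy illustration of engagement-only ranking. This is a hypothetical
# sketch with invented weights and numbers -- not Facebook's algorithm.

posts = [
    # (headline, predicted_clicks, predicted_likes, predicted_shares, is_true)
    ("Pope endorses candidate!", 0.9, 0.8, 0.9, False),
    ("Analysis of the candidates' tax plans", 0.3, 0.2, 0.1, True),
]

def engagement_score(clicks, likes, shares):
    """Score a post purely on predicted engagement; note the absence
    of any term rewarding accuracy or credibility."""
    return 0.4 * clicks + 0.3 * likes + 0.3 * shares

ranked = sorted(posts, key=lambda p: engagement_score(*p[1:4]), reverse=True)
for headline, clicks, likes, shares, is_true in ranked:
    print(f"{engagement_score(clicks, likes, shares):.2f}  {headline}  (true={is_true})")

# The false-but-sensational story ranks first: nothing in the objective
# penalizes falsehood, so any "countervailing force" favoring accuracy
# would have to be added as an explicit term.
```

Under that kind of objective, the sensational falsehood outranks the sober scoop every time; that is the sense in which such a feed is bound to spread lies absent a deliberate corrective.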

And yet, in the days following the election, as criticisms of the company mounted, Facebook CEO Mark Zuckerberg downplayed and denied the issue—a defensiveness that says even more about the company than the fake news scandal itself. Zuckerberg’s response points to a problem deeper than any bogus story, one that won’t be fixed by cutting some shady websites out of its advertising network. The problem is Facebook’s refusal to own up to its increasingly dominant role in the news media. It’s one that is unlikely to go away, even if the fake news does.

In a public interview last Thursday, Zuckerberg claimed that fake news on Facebook “surely had no impact” on the election and that to suggest otherwise was “a pretty crazy idea.” He accused Facebook critics of condescension for assuming that voters could be influenced by falsehoods and dismissed the notion that one side could have shared more fake news than the other. (There is evidence that it did.) As criticism intensified, he followed up with a personal Facebook post on Saturday, which struck a more conciliatory tone but still rejected the notion that fake news had an impact. He noted that Facebook already allows users to flag hoaxes and fake news and added that “we will continue to work on this to improve further.” At the same time, he cautioned that Facebook had to “proceed very carefully,” because “identifying the ‘truth’ is complicated.”

Yes, the truth is complicated, and Facebook should proceed carefully. But there is a growing sense, both inside and outside the company, that it may be proceeding rather too carefully, given its increasingly dominant role in the distribution of news online. And Zuckerberg’s denials seem to be fanning the flames.

Over the weekend, some highly placed, anonymous Facebook employees told the New York Times that they’ve been questioning the company’s role in the campaign. Five more anonymous employees told BuzzFeed on Monday that they and dozens of others within Facebook have formed a secret “task force” to advocate for stronger action against fake news. Meanwhile, a top Clinton strategist told Politico that Democratic leaders are looking for ways to get Facebook to address the problem. And Gizmodo reported, citing an anonymous source, that Facebook considered a tougher move against fake news this summer but held off out of fear of upsetting conservatives. Facebook disputed that, telling me it did no such thing and providing an alternative explanation for its tweaks to the news feed over the summer. (This puts Facebook in the ironic position of arguing that Gizmodo’s post is itself a false news story of sorts.)

Finally, on Monday night, the company took a concrete step. Following the lead of Google, which made a similar move earlier in the day, Facebook announced that it will ban fake news sites from using its advertising network. It’s a fine start. It is not nearly enough.

The furor over fake news is warranted. Fabricated stories about the pope endorsing Trump or an FBI agent getting murdered for leaking Clinton’s emails may have constituted a small fraction of all the political content shared on Facebook. (Zuckerberg declared, without sharing any evidence, that more than 99 percent of Facebook content is “authentic.”) But they and others like them were so widely shared—nearly 1 million times in the case of the bogus pope endorsement—that it’s easy to imagine they played a role in at least some voters’ thinking. By contrast, a major investigative scoop from the New York Times about Trump’s tax returns was shared fewer than 200,000 times. The presence of fake news side by side with real news, in identical format, contributes to a sense that anything you read in the news feed could just as well be true as entirely made up.

Yet in the long run, fake news on Facebook may prove to be a relatively short-lived concern compared with the deeper fault line that the tremors have exposed. It reveals a company increasingly torn between its self-conception as a neutral technology platform and its undeniable influence on the creation, distribution, and consumption of news and other media. And it’s left Zuckerberg, one of the world’s most powerful executives, struggling to keep control of his company and the story it tells about itself.

He’s right, by the way, to be wary of casting Facebook in the role of arbiter of journalistic credibility. A news feed that’s a messy free-for-all is probably preferable to one in which only Facebook-approved sources can be heard. So let’s grant that it would be impossible for Facebook to stamp out all falsehoods in its network, and perhaps dangerous for it even to try, were it to cast too expansive a net. Even so, the existence of Macedonian click farms dedicated to churning out fake news stories for profit is a clear sign that Facebook could be doing more to address the small fraction of content that is obviously bogus. Even Zuckerberg admits this.

What was odd about Zuckerberg’s response to the fake news problem was how adamant he seemed that it had no impact. Facebook’s whole premise as a business is that what people read in their news feeds can influence their decisions—otherwise, there would be little point in advertising there. And Zuckerberg has been more than happy to trumpet the company’s estimate that it encouraged 2 million people to vote who might otherwise have stayed home. Yet he wants us to believe that fake news stories played no role at all.

If Gizmodo’s report is accurate, it would cast Zuckerberg’s pooh-poohing of the fake news problem in an ugly new light. It would suggest that the company knew fake news was helping one political party more than the other and that it declined to take action for that very reason. It would imply that Zuckerberg isn’t just in denial—he’s flat-out lying.

But there’s another explanation for his defiant stance that doesn’t rely on speculation (or single anonymous sources). It’s that Zuckerberg is so loath to take responsibility for the content that appears on Facebook—so reluctant to be weighed down by its baggage, even as he runs the conveyor belt—that he’d rather deny its effects than grapple with its causes.

That’s consistent with Zuckerberg’s approach to other deeper questions about Facebook’s role in the media, including the charge that it insulates users in ideological bubbles by reinforcing what they already believe. “All the research we have suggests that this isn’t really a problem,” Zuckerberg said on Thursday, citing a Facebook-funded 2015 study that has been criticized as misleading. The data showed that Facebook does in fact expose users primarily to political content that conforms to their partisan identifications. But the study concluded, a little defensively, that this problem was insignificant compared with the problem of users’ own choices as to which sort of content to engage with. As Jefferson Pooley pointed out in Slate, it’s impossible to reproduce Facebook’s findings, because the company won’t let independent researchers see its data.

Dubious as the study’s conclusions are, it seems to have convinced Zuckerberg beyond a doubt that Facebook doesn’t have a filter-bubble problem. That’s convenient for Facebook, since addressing such an issue would require rethinking the fundamental structure of its algorithm and user experience. Evidently Facebook’s users are not the only ones subject to confirmation bias and epistemic closure.

There is an even more subtle and insidious effect of Facebook’s algorithm that has gone almost unmentioned in this saga: the incentive it creates for the media—hoax-disseminating and truth-telling outlets alike—to write and frame stories in ways that are geared to generate likes, clicks, and shares among the social network’s users. The illusion that Facebook is a neutral platform should have been shattered long ago by the obvious ways it has warped online news coverage, from manipulative headlines to feeding frenzies over sensational stories and anecdotes that are too good to check. If you were trying to design a media diet that could help give rise to something like the Trump phenomenon, you could hardly do better than 24-hour cable news and the Facebook news feed. To its credit, Facebook has acknowledged the problems of clickbait and likebait and made real efforts to mitigate its own perverse incentives. But even as Zuckerberg has repeatedly addressed the issue of fake news, he has evinced no awareness of the other ways Facebook might have disrupted political coverage for the worse.

Finally, there’s Zuckerberg’s oft-criticized denial that Facebook is a media company. “It’s a technology company,” he says, as if that settles it. There are valid arguments on both sides, and no doubt the company has a foot in both camps. It would be eminently reasonable for Facebook to admit that it is a media company in some key respects but not in others. But Zuckerberg denies even that. To him, there is no argument.

Drill down into Facebook’s reasons for insisting that it isn’t a media company, and you’ll hit layer upon layer of denial. It denies that it’s a media company because that allows it to further deny that Facebook shapes not only how the news is distributed, but how it is reported, framed, discussed, and perceived. That in turn allows it to deny that its humans or algorithms might exhibit any bias that could warp the news for better or worse or favor one set of interests over another. If Facebook is a neutral platform, as it insists, then it can deny any responsibility for how people use it, any responsibility for what they post or share, any responsibility to ensure the accuracy or fairness or journalistic virtue of whatever news might circulate on it.

The ultimate denial, and the underlying purpose of it all, is to deny the very possibility of any tension between Facebook’s own interests and the interests of society. Facebook, by Zuckerberg’s lights, is simply a powerful tool for making the world more open and connected. And if that means Trump is elected U.S. president, there must have been good reasons for his election that had nothing to do with Facebook. Or, in Zuckerberg’s words, “voters make decisions based on their lived experience”—as if Facebook weren’t a part of that, as if its $335 billion market value weren’t a function of the incredible degree to which it has managed to insinuate itself into people’s daily lives, as if our online and offline lives weren’t now irrevocably intertwined.

Either the internal contradiction of Zuckerberg’s position is lost on him or, more likely, he recognizes it but refuses to acknowledge it. His reticence makes sense, from a business perspective if not a moral one, if he believes that confronting Facebook’s impact on politics would require changes that would hurt the company’s bottom line. But coming from a figure who preaches the gospel of openness, it’s baffling.

It now seems, however, that Zuckerberg has lost the faith of some of his own employees on this issue. Facebook has rarely been a leaky company in the past. But the leaks started with its bungling of the trending news controversy, and they’ve resurfaced around the fake news debate. Facebook’s move on Monday to cut off advertising to fake news sites feels like an acknowledgement from the top that outright denial is no longer tenable.

The question now is how far Facebook will go to placate its critics. The last time it faced an uproar over its influence on U.S. politics—the overblown controversy involving its trending news section—it grossly overreacted and made everything worse. That seems less likely this time, especially since the news feed is a far more precious product to the company.

What’s more likely is that Facebook will seek to isolate and defuse the fake news issue while preserving its claim to be a neutral technology platform. As John Herrman pointed out in the New York Times last week, Facebook may already be evolving in ways that render the current controversy largely irrelevant. For instance, it has been partnering with prestigious media outlets to produce video content, broadcast live videos, and publish glossy “instant articles” within the news feed itself. It’s using the power of its algorithm to prioritize those forms over others, including links to news stories from publishers around the web. It’s conceivable that Facebook will end up drowning out most fake news, along with a lot of legitimate content from second- and third-tier web publishers, without having to police it any more actively than it already does.

Those clamoring for Facebook to fix its fake news problem should be careful what they wish for. They might find in a few years that the fake news is gone—but the filter bubbles, the perverse incentives, and Facebook’s pretense to algorithmic neutrality remain.