Technology

Not Our Mess

Facebook is feeling the pressure over its misinformation problem. It seems to want to do as little cleanup as possible.

Sheryl Sandberg speaks onstage at the Fortune Most Powerful Women Summit at Ritz-Carlton Laguna Niguel on Oct. 18, 2016, in Dana Point, California.

Joe Scarnici/Getty Images for Fortune

On Thursday, Facebook Chief Operating Officer Sheryl Sandberg sat down for an interview with Axios’ Mike Allen to discuss the role the social media platform played in the dissemination of deceptive, potentially Russian-linked information intended to sway American voters in the 2016 election.

Sandberg reiterated that Facebook is cooperating with the congressional investigation into Russian meddling. To that end, the company is going to release the ads it shared with Congress that were bought by apparently Kremlin-backed groups, as well as the targeting information those ads employed to reach specific kinds of Facebook users. That’s a good thing. Understanding the mechanics of how foreign operatives worked to manipulate voters will go a long way toward increasing media literacy, and it should give Congress ammunition to push Facebook toward real reforms in how online political ads can directly target individuals, which becomes a problem when those ads cater to people’s racism, sexism, or anti-Semitism, as Facebook allowed until recently.

But if you were expecting Sandberg’s appearance to ignite a new era of candor, contrition, and resolve to avoid a repeat of 2016’s fake news–clogged jumble, her comments offered little comfort beyond a few fig leaves. Yes, Facebook seems to realize its news feed and advertising infrastructure amplify a larger problem of misinformation polluting the public commons. It just doesn’t seem very interested in assembling the necessary cleanup crew, and it certainly doesn’t want Congress to mandate that it do anything about the problem, either.

Consider what Sandberg said the company would still allow on its network. She emphasized that the bar for what can be posted as a political ad is low. If the Russian-backed ads had not been bought by accounts pretending to be Americans, “most of them would be allowed to run,” said Sandberg. She pointed to the example of Tennessee Republican Rep. Marsha Blackburn, who made a video launching her Senate campaign and tried to promote it on Twitter. Twitter at first refused to promote the ad but later decided to let it run. Blackburn’s video included a line that accused Planned Parenthood of “selling baby body parts,” which, as Sandberg said, isn’t true. But still, Sandberg said, Facebook would have allowed the ad to run, essentially an admission that Facebook thinks it’s OK to run ads containing falsehoods as long as a real person posted them. “The thing about free expression,” Sandberg said, “is when you allow free expression you allow free expression.” Facebook doesn’t check what’s put on Facebook before it runs, she reminded Allen. “I don’t think anyone should want us to do that.”

Why does a distinction between real users and fake users matter? Because it’s a very convenient loophole. Say a group of Russian operatives based in the U.S. runs a nonprofit here: If it posted under that organization’s own name, without fronting as another group, while still coordinating a disinformation campaign, Facebook would let those ads run. That’s troubling, and more than anything, it shows that Facebook is still grappling with what responsibilities it has for the information it hosts, even as it continues to conceive of itself, first and foremost and perhaps to the detriment of us all, as a neutral platform. After all, if Facebook admitted that it is really a media company, it would have to do a much better job of making sure it wasn’t helping to incubate and disseminate disinformation. Being a media company means taking responsibility for the fact that people depend on your services for information about local, national, and global affairs in order to inform their vote and participate meaningfully in democracy. And that means being careful about the kinds of issue-based ads and boosted posts that peddle disinformation yet are allowed to flourish on Facebook.

But Facebook doesn’t want to be an editor or make calls about what is and is not factual information. It would rather put that onus on users or on outside fact-checking organizations. That’s why in March the company rolled out its “disputed” news tag, which lets users flag a potentially incorrect story. In an internal email, obtained by BuzzFeed, between Facebook’s manager of news partnerships, Jason White, and one of the reputable third-party fact-checkers Facebook uses, White says that once a fake story is flagged, the platform can reduce the number of people who would otherwise see the post by about 80 percent. That’s great. But Facebook also said that it generally takes about three days for the label to be applied, and White admitted that “we know most of the impressions typically happen in that initial time period.” And a recent Yale study surveying more than 1,800 people found that tagging a fake news article as “disputed” “did not abolish or even significantly diminish” its perceived accuracy.

To Facebook’s credit, Sandberg said the company is working to make it harder for people who run fraudulent accounts to make money off the spread of fake news, even though it’s still not clear what Facebook considers fake news and what it doesn’t. Facebook has, after all, verified the pages of websites on its platform that are known to push fake news stories, like Your News Wire, which experts in both the U.S. and the EU have dubbed a Russian proxy site for the spread of misinformation. Your News Wire was one of the key websites pushing the blatantly false Pizzagate conspiracy theory that inspired an armed man to fire a gun inside a D.C. restaurant, and it helped spread those stories on its Facebook page. The publication is just one of many known fake news sites that have had verified pages on Facebook, according to Media Matters.

The ways Facebook is confronting these issues are heartening but also insufficient. Sandberg did say that Facebook is hiring 4,000 more people to help with content oversight. Yet 2 billion people use Facebook every month. Adding a few thousand more people, even aided by software, just doesn’t scale. Unless Facebook starts to take seriously its responsibility for the misinformation it hosts, rather than just putting the onus on users or on much smaller third-party fact-checkers, users might be wise to doubt whatever they find on the site. And it’s not just politics: It’s also junk information about health, about government services needed during a weather crisis, and so on. There’s simply too much dangerously wrong information being shared on Facebook for a few thousand people armed with software to manage.

“We are very different than a media company. We are a technology company at our heart. We hire engineers. … but that doesn’t mean we don’t have responsibility,” Sandberg said. The problem, though, is that if users do take that admission as a cue to trust the information they read on Facebook less, the end result could sow more distrust in credible news organizations that work hard to fact-check and get things right. And Facebook’s stance could prove toxic for the health of our democracy, in the same way Trump’s appeals to distrust the media are. After a while, it’s hard to know who to trust anymore. Which is what those Russian operatives trying to muddy the waters wanted in the first place.