Why We Can’t Know Whether Facebook Is to Blame for Trump’s Election
Future Tense
The Citizen's Guide to the Future
Nov. 11 2016 1:02 PM
In a primary debate, the Facebook logo was behind Trump. Was the platform also behind his win?

Scott Olson/Getty Images

Who’s responsible for Donald Trump’s victory? In a thoughtful election postmortem, Nieman Journalism Lab’s Joshua Benton proposed an answer. “There’s plenty of blame to go around,” he wrote, “but the list of actors has to start with Facebook.” Benton’s complaint is that the site’s indifference to truth, together with its algorithmic sorting, has created self-contained cesspools of mendacity. He pointed to the feed of his Louisiana hometown, where stories like “FBI Agent Who Was Suspected of Leaking Hillary’s Corruption is Dead” were posted by the mayor and showered with likes. Fake news, Benton wrote, has “weaponized” Facebook’s filter bubbles.

Benton and many others invoke the irony that a California company staffed by doe-eyed liberals and officially committed to giving “people the power to share and make the world more open and connected” helped to elect a proto-fascist. There was some pushback to this thesis, though, including a smart if strawman-toppling piece from political media scholar Daniel Kreiss and a rant from Recode.

Missing from the debate is any real evidence one way or another—and we probably won’t get it any time soon. That’s Facebook’s fault. For years now the social-media giant has selectively published self-exculpatory research papers authored by its own employees while refusing to give independent researchers the data to perform their own analyses. For instance, in June 2015, three Facebookers published an academic article in the venerable journal Science on the idea of the filter bubble. What they found confirmed the obvious: People tend to selectively consume media that affirms their existing beliefs—a staple of media research for the last 75 years. They also reported that the News Feed algorithm itself winnows out some of the ideological diversity before it ever reaches Facebook users—the filter bubble in practice, in other words. On the odd grounds that the former factor (users’ selective choices) has more impact than the latter (Facebook’s algorithm), they exonerated their employer: “our work suggests that the power to expose oneself to perspectives from the other side in social media lies first and foremost with individuals.” Basically: It’s not Facebook’s fault. They concluded that “individuals are exposed to more cross-cutting discourse in social media than they would be under the digital reality envisioned by some.”

A number of scholars called out the paper’s many flaws. The self-serving conclusion, especially given its dubious logic, led one critic to compare the study to tobacco-industry misdirection. The most troubling angle, though, went mostly unmentioned: Facebook restricts data access—meaningful data access—to its own researchers. As David Auerbach noted on Slate at the time, “no one outside Facebook can do this research.” Flawed papers get published all the time, and the system is designed to encourage criticism and replication. Without access to the data, however, follow-on studies just weren’t possible (or else required workaround proxies like web-browser history, third-party Facebook apps, or the company’s notoriously restrictive API). So Facebook gets to launder its case through scientific journals without real scrutiny from, or further research by, anyone who isn’t employed there.

Remember the furor over Facebook’s 2014 emotional contagion study, in which the emotional content of 700,000 News Feeds was tweaked? The company came under major fire for the study’s consent-free “manipulation” of its users. As with the 2015 filter-bubble study, independent researchers exposed the study’s many other weaknesses. But the paper’s biggest flaw isn’t its shoddy ethics or poor design. It’s the fact that the study absolves Facebook of another major line of criticism—that the site makes us depressed because we compare our lives to the highlight-reel posts of others. No, upbeat posts on Facebook don’t make us sad, the researchers conclude; in fact, they make us happier. This finding, the authors explain, “stands in contrast to theories that suggest viewing positive posts by friends on Facebook may somehow affect us negatively, for example, via social comparison.” You should be thanking Facebook for your good mood.

And so it goes. Late last month, the company’s in-house researchers (with university-based co-authors) published yet another Facebook-affirming study, this one using proprietary data to show that Facebook users live longer than social-network abstainers.

Perhaps the critics blaming Facebook for Trump’s shocking victory are wrong. CEO Mark Zuckerberg dismissed the claim as a “pretty crazy idea.” The problem is that we won’t ever know unless Facebook grants impartial researchers the access to study the question, with the freedom to publish their results regardless of whether the findings reflect well on Facebook. We certainly can’t rely on the company’s own researchers—who have a paycheck at stake and a history of published apologia—to settle the issue. If the fake-news-and-algorithms stew really did sway the electorate, it’s doubtful the problem will solve itself by 2020.

Maybe Facebook really does make us happier and healthier. Maybe there’s no big problem with algorithmic echo chambers, and maybe Trump’s rise has nothing to do with the site. That’s the takeaway from the company’s sponsored research. No one else has comparable access to the data, so it’s hard to know. You might even say the system is rigged.

Future Tense is a partnership of Slate, New America, and Arizona State University.