Do The Math

Making (a Huge Number of) Facebook Users (Very Slightly) Sadder

How multiplying very tiny effects by gigantic numbers strains our quantitative intuition.

Photo illustration by Chris Jackson/Getty Images: a girl browses Facebook in London, England.

Facebook researchers creeped out the world this month by revealing they’ve been manipulating users’ experience on the site to control their emotions—for science.

The scientists used what people in the biz call a “sadness induction,” reducing the likelihood that posts containing positive emotional words would show up on certain subjects’ news feeds. The researchers found that this change to the news feed’s emotional tone made people—around 155,000 people—act sadder themselves.* Mind control via status update, on a massive scale. Creepy!

But here’s a technical wrinkle. The standard scientific division between an experiment that works and an experiment that fails isn’t a bright line. It’s a threshold of “statistical significance,” a construct with a perfectly crisp mathematical definition but one that is notoriously hard to interpret. Statistical significance does not mean that an experimental effect was significant in the sense of “important.” The Facebook scientists found they could induce people to use significantly fewer positive words in their own posts. Significantly fewer, in this case, meant “99.9 percent as many”—the decline was microscopic, but detectable. So, with regard to this Facebook study, significantly fewer in scientific terms is insignificantly fewer in the usual sense of the words. (What’s more, it’s far from obvious that we should interpret the use of fewer positive words as a change in emotional state, as the authors do.)
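
To see why a microscopic decline can still count as “statistically significant,” here is a minimal sketch in Python, using hypothetical numbers rather than the study’s actual data: a 0.1 percent relative drop in the positive-word rate is invisible in a few thousand words, but it clears the conventional significance threshold (a z-score around 2) once the word counts run into the hundreds of millions.

```python
import math

# Hypothetical figures, not the study's data: suppose 5.25 percent of words
# in the control group's posts are "positive," and the treated group's rate
# falls to 99.9 percent of that (a 0.1 percent relative decline).
p_control = 0.0525
p_treated = p_control * 0.999

def z_score(p1, p2, words_per_group):
    """Two-proportion z-test (normal approximation) for the gap in
    positive-word rates, given the number of words observed per group."""
    p_pool = (p1 + p2) / 2
    se = math.sqrt(2 * p_pool * (1 - p_pool) / words_per_group)
    return (p1 - p2) / se

print(z_score(p_control, p_treated, 10_000))       # ~0.02: lost in the noise
print(z_score(p_control, p_treated, 200_000_000))  # ~2.4: "significant"
```

The effect never gets any bigger; only the sample does, until it is finally big enough to see the effect at all.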

What’s remarkable about the Facebook finding isn’t that emotions can be transmitted from one person to another by written words alone. Anybody who’s read The Fault in Our Stars knows that’s possible. What’s remarkable is that the effect is, on average, so very small. That’s why the scale of the experiment is so massive. It has to be, in order to detect such a tiny behavioral change. Facebook is a kind of social supercollider, a gigantic, insanely expensive hunk of technology without which we can’t hope to glimpse tiny, ephemeral social facts.

None of this serves to excuse the ethical failures of the study. They are inexcusable. The Common Rule that governs federally funded research requires informed consent from human research subjects. That consent wasn’t asked for, nor were the subjects debriefed after the fact. You might, for all you know, be one of them.

But if we look beyond the consent problems with the Facebook study, the issues get harder. Should regulators block research projects, like this one, that might cause harm, even a tiny amount of harm, to their human subjects? That’s how you get institutional review boards stepping in to require approval for oral history projects. If I ask you questions about your grandparents’ life in the war, it could make you sad—probably sadder than Facebook did. No one thinks that’s creepy. In many contexts, we think of tiny harms as essentially no harms at all.

The difference is the scale. Tiny harms add up. And multiplying very tiny effects by gigantic numbers is a trick that strains our quantitative intuition and our existing ethical norms. As Laurie Penny writes in the New Statesman, “There are no precedents for what Facebook is doing here. … Facebook itself is the precedent. The ethics of this situation have yet to be unpacked.”

Each subject may have been only slightly saddened by her altered news feed, but out of 155,000 people, isn’t it possible there were a few poised on the edge of a major crisis? Did the Facebook experiment cause suicides? Fair question. But note that the researchers carried out a parallel experiment in which another 155,000 people were artificially made happier by news feed manipulation. If Facebook is to blame for hypothetical suicides, does it also get Samaritan’s credit for suicides it prevented? Should Facebook be required to salt its users’ feeds with positive stories, since failing to do so compromises their emotional well-being on a titanic scale? Oral historians don’t have to wrestle with these questions, because oral historians aren’t soliciting heartstring-tugging stories from 155,000 people at a time.

The micronudges available to Facebook aren’t restricted to cheering up and bumming out. In November 2010, the company rolled out a “social voting” promotion that increased subjects’ likelihood of voting in real-world elections by 0.39 percent. That’s a small percentage. But a lot of elections are close. And Facebook can reach a lot more voters than any traditional campaign, in a much more precisely targeted way. The ability of a gigantic corporation to push an election one way or the other, more or less untraceably, sounds creepy, too—but it’s hard to pin down exactly how it differs from the deployment of a massive ad buy, which at the moment is a constitutional right.
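
To make the arithmetic of scale concrete, here is a back-of-the-envelope sketch; the reach figure below is an assumption for illustration, not a number reported in this piece.

```python
# Back-of-the-envelope arithmetic with an assumed reach figure: if a
# "social voting" banner is shown to 61 million users and lifts real-world
# turnout by 0.39 percent, the absolute number of extra votes is on the
# order of the margin in a close statewide race.
reach = 61_000_000       # assumed number of users shown the promotion
turnout_lift = 0.0039    # the 0.39 percent increase in likelihood of voting
extra_votes = reach * turnout_lift
print(f"{extra_votes:,.0f} additional votes")  # roughly 238,000
```

A fraction of a percentage point, multiplied by Facebook-scale reach, comes out to a six-figure number of votes.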

The scientists behind the study understand the importance of scale very well. They write: “Even small effects can have large aggregated consequences: For example, the well-documented connection between emotions and physical well-being suggests the importance of these findings for public health.” But there’s another well-documented connection not mentioned here that Facebook is surely aware of: Sad people buy more stuff. From the point of view of social science, making millions of people’s lives imperceptibly sadder is an ethical red flag. But for a company whose survival depends on turning ad views into money, maybe it’s just business.

Correction, July 1, 2014: This piece originally misstated the number of people who were shown an altered, sadder Facebook news feed. It was approximately 155,000, not 300,000.