Facebook would like to know whether it’s bad for you. Some social science research suggests that people who use social media more are less satisfied with their lives and tend to self-segregate into partisan political camps. If your multibillion-dollar company is ruining both people’s lives and our democratic system, it seems like something you might want to know, and perhaps even change. So Facebook, which maintains a large research group, doesn’t just study technical questions of how to store your photos for instant retrieval. It also asks big social questions. That large research team is filled with Ph.D.s and visiting professors, who run experiments and conduct studies and publish the results with impressively scholarly titles like “Experimental evidence of massive-scale emotional contagion through social networks” and “Exposure to ideologically diverse news and opinion on Facebook.”
When those two studies were published, though, they touched off firestorms of public controversy. Why the heat? In part it’s because when Facebook’s “data scientists” do empirical studies on users and publish the results in peer-reviewed journals, they’re acting like academics. Facebook, though, is a company, and its values are not the values of the academy. It’s hard to think of a slogan more antithetical to the careful and deliberative attitude of scholars toward their craft than “move fast and break things.” What is common sense in industry is crazy talk in the ivory tower, and vice versa. This isn’t just a case of disruptive innovation disruptively disruptifying everything in its path and leaving no survivors in its wake. It’s a case of two very different ethical worlds colliding.
Take the emotional-contagion study, in which researchers hid emotionally laden posts from users’ News Feeds to see whether those users’ own posts changed in response. If it had been carried out in a university lab by university faculty on volunteers they recruited, the researchers would almost certainly have drawn up a detailed description of the experiment and submitted it to their school’s institutional review board. The IRB would have gone through the proposal in detail, making sure that the participants gave a level of informed consent appropriate to the design of the study and the risks of the research. What’s more, the researchers and their colleagues would share a sense of professional ethics, taking into account respect for participants, balancing risks and benefits, and the integrity of the research process itself. The process is slow but careful; it is deliberately, explicitly, extensively public-spirited.
Suffice it to say that this is not how things work in Silicon Valley, where speed and scale are the order of the day. It’s a culture of constant, iterative, and relentless experimentation. Try a new feature, and see how it works. Better yet, try a new feature in an A/B test—that is, give it to some of your users, and see how it works compared with the alternative. If something breaks the user experience, just revert the change and move on. This professional ethos values doing over talking; the highest good is to build something that users want to use. If others replicate your work, they’re competitive threats, not colleagues. The public good enters only implicitly, from the belief that innovation inherently promotes the right kind of progress.
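To make the mechanics concrete, here is a minimal sketch of the kind of A/B bucketing just described. This is an illustration, not Facebook’s actual system: the function names and experiment labels are hypothetical, but the core idea is standard in industry practice, where a user’s ID is hashed together with the experiment’s name so each user lands in a stable, pseudo-random variant.

```python
# Hypothetical sketch of deterministic A/B assignment.
# Hashing the user ID together with the experiment name means a user
# always sees the same variant of a given experiment, while assignments
# across different experiments stay effectively independent.
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("A", "B")) -> str:
    """Map a user to one variant of a named experiment, deterministically."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

Because the assignment depends on the experiment name as well as the user ID, the same user can sit in many experiments at once without the experiments contaminating one another, which is part of why this style of testing scales so easily, and so invisibly to the people being tested.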
Put these two sets of ideals together and it’s a recipe for controversy, because the contradictions are so sharp. David Auerbach has written in Slate about how industry research threatens academic values of transparency and public accountability. Conflicts of interest pervade these studies, because, as he writes, “no one outside Facebook can do this research.” It’s obvious how Facebook might stand to gain from research suggesting it doesn’t make you sad or destroy democracy as we know it. But that also raises uncomfortable questions about how many other studies Facebook considered or conducted and never made public, and whether their results might have gone in the other direction. Medical-research regulators have increasingly required public disclosure of all clinical trials, even the ones that reach negative or inconclusive results, out of just such a concern. (Auerbach pointed out some similar cherry-picking in how Google tweaked its search algorithms to favor its own vertical search content.)
For another vivid example of the culture clash, consider IRB review. The emotional-contagion study escaped from IRB review entirely, even though two of its authors were at Cornell, which most definitely has an IRB and most definitely requires IRB oversight of research conducted by Cornell faculty, staff, and students. But the Cornell IRB concluded that the study was “conducted independently by Facebook” and as such was exempt from review at Cornell. Facebook didn’t have an IRB at all. (After the backlash to the study, it created an internal review process, a kind of IRB Lite.) The net result: an experiment run by academics and published for other academics received no IRB scrutiny from anyone.
In a recent paper, I call this “IRB laundering.” Suppose that Professor Cranium at Stonewall University wants to find out whether people bleed when hit in the head with bricks, but doesn’t want to bother with the pesky IRB and its concern for “safety” and “ethics.” So Cranium calls up a friend at Brickbook, which actually throws the bricks at people, and the two of them write a paper together describing the results. Professor Cranium has successfully laundered his research through Brickbook, cutting his own IRB out of the loop. This, I submit, is Not Good.
This isn’t to say that the IRB system is ideal. IRBs have been subjected to stinging criticism for slowing and censoring valuable research. But they do represent a consensus about the right ethical and legal framework for research on people. If we’re going to change that consensus, it ought to be for all researchers, not just those who sneak down the road from the Stonewall campus to 1 Brickbook Way in the dead of night.
On the industry side of the fence, there is much to like about a world of regular experimentation. As ethicist Michelle Meyer observes, it’s much better when companies test new ideas rather than simply impose them on users across the board. But the right answer is almost certainly not a free-for-all in which each company is the sole, unappealable, and unaccountable judge of what’s best for its users, regardless of what half-truths or untruths it tells them.
Some defenders of Facebook’s experiments observe, quite correctly, that “companies engage in A/B testing all the time.” (On Facebook, any given user is typically in 10 experiments at once.) They sometimes go on to assert either that users understand this is the case or that they ought to, and so need nothing more in the way of informed consent. But everything we know about users says exactly the opposite. They don’t read terms of service (the legalese you inevitably click past when signing up for a new service), and they don’t understand how extensively what they see on the Web is algorithmically massaged. Good, informed consent for the Web would help users understand what they’re seeing all the time, not just on those occasions when an academic comes to visit.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.