Future Tense

FTC Report Details How Big Data Can Discriminate Against the Poor

Federal Trade Commission Chairwoman Edith Ramirez.

Photo by Mark Wilson/Getty Images

It’s easy to dismiss big data as an empty buzzword. But as a new Federal Trade Commission report demonstrates, big data analytics deserve serious examination. According to the report, they can both drive social change and entrench existing patterns of inequality. Indeed, the FTC suggests that the embrace of big data may be doing considerable harm to low-income communities and the people who make them up.

Although it wasn’t published until this week, the FTC’s big data report derives from a public workshop that it hosted in September 2014. Focusing on “the potential impact of big data on low-income and underserved populations,” the report reflects concerns that academics and activists have been expressing for some time. For the FTC’s purposes, big data analytics come into play when businesses consolidate information gleaned from numerous individuals into predictive profiles. These analytics can reveal surprising connections between seemingly disparate characteristics—for instance, big data analysis has suggested that people who like curly fries on Facebook tend to be smarter. These models then inform the way that those businesses act in the future, helping them decide what to sell, whom to hire, and so on.

At its worst, big data can reinforce—and perhaps even amplify—existing disparities, partly because predictive technologies tend to recycle existing patterns instead of creating new openings. They can be especially dangerous when they inform decisions about people’s access to health care, credit, housing, and more. For instance, some data suggest that those who live close to their workplaces tend to stay in their jobs longer. If companies took that into account when hiring, the result could be inadvertently discriminatory: because many neighborhoods are racially segregated, distance from the workplace can function as a proxy for race.
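The proxy effect described above can be made concrete with a toy simulation. This sketch is not from the FTC report; every number in it is invented for illustration. It assumes two hypothetical groups whose average commute distances differ because of residential patterns, then applies a facially neutral “short commute” hiring rule:

```python
# Illustrative sketch (invented numbers, not from the FTC report): a hiring
# rule that never mentions group membership can still select the two groups
# at very different rates when commute distance correlates with group.
import random

random.seed(0)

applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Hypothetical assumption: group B applicants tend to live farther from
    # the employer because of segregated housing patterns.
    mean_miles = 5 if group == "A" else 12
    commute = max(0.0, random.gauss(mean_miles, 3))
    applicants.append((group, commute))

# Facially neutral rule: prefer applicants who live within 8 miles.
THRESHOLD = 8.0
hired = {"A": 0, "B": 0}
total = {"A": 0, "B": 0}
for group, commute in applicants:
    total[group] += 1
    if commute < THRESHOLD:
        hired[group] += 1

for g in ("A", "B"):
    print(f"group {g}: hire rate {hired[g] / total[g]:.0%}")
```

Under these made-up assumptions, group A is hired at several times the rate of group B, even though the rule itself says nothing about group membership — which is precisely the kind of accidental discrimination the report worries about.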

Despite such risks, the commission’s report also stresses potential benefits of big data analysis. Among other things, the commission claims, such information can open up employment opportunities, furnish credit through “non-traditional methods,” transform health care delivery, and much more. These and other possibilities mean that big data has the potential to promote the social mobility of those who’ve previously faced discriminatory and exclusionary practices.

So what determines whether a big data system will reinforce or ameliorate existing social problems? One important factor is whether those responsible for collecting information build biases in from the start, even accidentally. As the FTC report notes, precisely because of their scale, large data sets can effectively render such biases invisible. And because those who use the data are often separate from those who collected it, these problems can be all the harder to spot. In this regard, prejudices need not be deliberate or malicious to be damaging.

The report warns that prejudicial data collection and analysis “may give companies new ways to attempt to justify their exclusion of certain populations from particular opportunities.” To speak in terms of “justification,” however, is to suggest that such actions are deliberate. In fact, they may be most dangerous when they’re performed autonomously. This is especially true for some of the other possible consequences that the report lays out, such as “higher-priced goods and services for lower income communities” and a general weakening of “the effectiveness of consumer choice.” The report notes, for instance, that “online companies may charge consumers in different zip codes different prices for standard office products,” depending on levels of competition from brick-and-mortar stores. This can result in ongoing price gouging for those who are already at a disadvantage.

The FTC offers only the broadest suggestions on avoiding the resulting dilemmas. Among other things, it encourages companies to look more carefully at their data sets, asking whether they are representative, whether the models account for biases, and how accurate their predictions are. To paraphrase the FTC’s last proposal, all of this essentially amounts to encouraging companies to ask whether their “reliance on big data raises ethical or fairness concerns.” Given that these problems stem from excessive abstraction, such vague guidance is almost certainly insufficient. Merely recognizing the trouble with big data is an important move for the FTC, but if it wants to make a real difference, its next steps will have to be more specific.