Future Tense

The Potemkinism of Privacy Pragmatism

Civil liberties are too important to be left to the technologists.

A revolution is afoot in privacy regulation. In an assortment of white papers and articles, businesses—including Microsoft—and scholars argue that instead of regulating privacy by limiting the collection of data, we should focus on how the information is used. It’s called “use regulation,” and this seemingly obscure issue has tremendous implications for civil liberties and our society. Ultimately, it can help determine how much power companies and governments have.

You are probably familiar with privacy laws that regulate the collection of data—for example, the military’s famous “don’t ask, don’t tell, don’t pursue.” When you interview for a job, the employer should not ask you about your religion, your plans to have children, or whether you are married. There’s also the national movement to “ban the box” to stop collection of arrest and old conviction data on job applications.

In a use-regulation world, companies may collect any data they wish but would be banned from certain uses of the data. In U.S. law, a good example of use regulation comes from credit reporting. Your credit report can be used only for credit decisions, employment screening, and renting an apartment. Or consider your physician: Her professional norms encourage expansive data collection, but she can use medical records only to advance patient care.

Bans on data collection are powerful tools to prevent institutions from using certain knowledge in their decision-making. But advocates of use regulations have some compelling points: Collection rules are too narrow by themselves. They ignore the real-life problem that we just click away our rights for the newest free service. And, increasingly, technologies gather data with no realistic opportunity to give notice to the individual at all. Some of these technologies can be used to infer knowledge about the very issues collection limitations attempt to protect. For instance, consider the Target Corporation’s ability to infer that a shopper was pregnant when she went from buying scented to unscented lotion. Use regulations shift the pressure away from notice and choice, making a more universal set of rules for data.

Craig Mundie, a longtime Microsoft technologist, is a member of the President’s Council of Advisors on Science and Technology (PCAST), which recently released a set of new recommendations for the use of big data. In July, Mundie wrote in Foreign Affairs that use regulations are a “pragmatic” solution to the “data exhaust” generated by new products and services. But such a label shifts critical focus away from the technology companies that define and control the space we have for “pragmatic” decisions. Technology companies have designed products to produce a data exhaust, often deliberately. Unreadable privacy notices and inconvenient choice mechanisms were created by the very companies that want to encourage data sharing. Technologists designed systems to make privacy impossible, and now they say it is “pragmatic” to accept their legal proposal to solve the privacy problem.

There are deep problems with protecting privacy through regulating use. When one takes into account the broader litigation and policy landscape of privacy, it becomes apparent that use-regulation advocates are actually arguing for broad deregulation of information privacy.

Initiatives to promote use regulations say they will maintain collection limitations, but the proposed restrictions are hollow. One leading proposal to reform privacy laws, promoted by Microsoft, would allow any collection of data that is not already illegal—giving businesses free rein to condition access to services on users giving up their data. As companies sell devices for the home, they will be uninhibited in what they collect, even if the data are unnecessary for the service.

Freed from any kind of data diet, corporate databases become attractive to governments. Any data obtained by the private sector can be obtained by the government, sometimes through voluntary information sharing.

Government collection of data would be equally voracious under a use-regulation regime. This should come as no surprise, because technology companies benefit from big government data. Information-intensive companies promote government power through a form of data laundering. They lobby to encourage the government to collect more personal information, and then argue that the personal data should be released under open government laws.

Use regulations offer no real protection, because businesses themselves get to choose what uses are appropriate. Worse yet, companies misusing data will have a huge legal loophole—the First Amendment. Companies have long argued that privacy rules are a form of censorship, and thus that limits on use would abridge their free expression rights. The only workable solution to this problem is to require companies to contractually waive their First Amendment rights with respect to personal data.

The first U.S. privacy law, the Fair Credit Reporting Act (FCRA) of 1970, nicely demonstrates the pitfalls of use regulation. It places almost no limit on how data are collected and instead regulates how data are used in employment, credit, and tenancy. We have had more than 40 years of experience with the regime, and it governs our daily lives in important ways. Yet, in the various articles and white papers on the subject, the FCRA goes unmentioned.

At enactment, the FCRA was seen as a huge giveaway to industry. It created large, unaccountable bureaucracies that are notoriously unresponsive to consumers. Consumer reporting agencies (CRAs) regulated by the FCRA’s use-based approach used messy data and fuzzy logic in ways that produced errors that were costly but diffuse. CRAs were given broad immunity from defamation and invasion-of-privacy lawsuits in exchange for promises to treat individuals responsibly, but they have failed in this bargain. The use-regulation proposals simply ignore this history.

In another wrinkle to the use-regulation landscape, under the PCAST framework, “analysis” of data is not considered a use. On its face, it seems reasonable to allow broad analysis of data so long as it is not incorporated into a company policy with consequences to individuals. But consider the recent Snowden revelations that analysts were looking at nude pictures of Americans, or the claims that Mark Zuckerberg of Facebook used the service to predict who was about to end a relationship. This knowledge was not used to implement any policy, yet we might not want governments and companies to be able to discover these things. By exempting analysis from privacy regulation, we valorize many data activities, even those motivated by puerile curiosity.

Use regulations, understood in context, are part of what appears to be a general strategy to eliminate legal responsibility for data companies. The proposals paint a rosy picture of responsibility, but a peek into litigation in data cases shows a systemic disrespect for law.

A recent petition to the Supreme Court fully unmasks the public policy efforts of technology companies. In Robins v. Spokeo, an unemployed man sued Spokeo for maintaining inaccurate records about him, arguing that the company willfully violated the FCRA. However, Robins could not show that Spokeo contributed to his unemployment, nor could he show any economic injury. That lack of injury, according to the entire data industry (including Facebook, Google, and Yahoo, which filed an amicus brief on Spokeo’s behalf), should prevent Robins from suing.

In other words, even though Congress gave CRAs immunity from certain lawsuits in exchange for the fair treatment of individuals, and Spokeo is alleged to have deliberately violated its obligations, Robins has no remedy.

The use-regulation policy proposals and the litigation in Spokeo add up to a bulletproof strategy:

1) Data companies can collect anything they want and analyze it however they please.

2) They are liable only for misuses of data, which businesses define themselves, narrowly.

3) If pressed, they can argue that use restrictions are unconstitutional censorship.

4) Companies can purposely engage in those misuses and be liable only when doing so causes concrete injury.

Consider how the Target shopper who was identified as pregnant would fare under use regulations. The discovery that she was pregnant was an analysis, free from privacy rules, even if it was a fact she considered personal or wanted to keep secret. Sending pregnancy ads to her would be a use, but what firm would bar itself from such a use, even if it knew that some shoppers would be embarrassed? In fact, if the firm wanted to use the data to charge her more for organic or natural products, it would be free to do so. If challenged by authorities, it could claim that the restriction violates its First Amendment rights and that the shopper was not harmed because she could have bought the products from a competitor.

The newfound support for privacy regulation among big businesses masks a radically deregulatory agenda. A regime that pays attention only to use erects a Potemkin village of privacy. From a distance, it looks sound. But living within it, we will find no shelter from the sun or rain.

This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture.