Future Tense

Why I Walked Out of Facial Recognition Negotiations

Industry lobbying is shutting down Washington’s ability to protect consumer privacy.

And it’s come a long way since this: A 3-D facial recognition program is demonstrated during the Biometrics 2004 exhibition and conference on Oct. 14, 2004, in London.

Photo by Ian Waldie/Getty Images

June 2, 2015, was a great day for privacy. June 16 was not.

On June 2, the Senate voted to pass the USA Freedom Act—an imperfect bill that is nonetheless the most significant reform to NSA surveillance in a generation. That night, President Obama signed it into law.

On June 16, consumer privacy advocates walked out of talks to set voluntary rules for companies that use facial recognition technology. They explained that they were withdrawing from the talks because industry would not agree to critical privacy protections. I was one of those advocates.

There is a growing divide in Americans’ right to privacy. As checks strengthen on government surveillance, tech companies are evading even basic limits on their ability to collect, share, and monetize your data. At the heart of this divide is an increasingly formidable force: industry lobbying.

Facial recognition lets companies identify you by name, from far away, and in secret. There’s little you can do to stop it. You can’t change your fingerprints or the unique dimensions of your face—not easily. And while you leave your fingerprints on only the things you touch, every time you step outside, your face appears, ready for analysis, in the video feeds and photographs of any camera pointing your way.

And there are more of those cameras than ever before. A company aptly named FaceFirst sells technology that lets businesses identify VIP customers, suspected shoplifters, and “known litigious individuals” the moment they walk into a store. Another company, Churchix, sells facial recognition systems that let churches track who is attending their services. (And no, they don’t tell their parishioners about it.)

Since early 2014 the Department of Commerce has sought to develop voluntary privacy rules for companies that use this technology. So for 16 months, consumer privacy advocates (including me) and industry representatives met in a low-lit, circular room two blocks from the White House, looking for a compromise.

At our last meeting, privacy advocates proposed that in general—there would be exceptions—companies should get consumers’ permission before using facial recognition to identify them. This may seem like a matter of basic fairness. Yet trade groups representing the country’s leading tech companies, advertisers, and retailers rejected the proposal. Representatives from Facebook and Microsoft rejected it, too. (After the walkout, Microsoft partially changed course, saying it would support an opt-in rule if that were the consensus.)

So we asked a narrower hypothetical: Suppose you’re walking down a public street—not private property—and a company you’ve never heard of would like to use facial recognition to identify you by name. We asked whether there were any trade groups or companies in the room that would agree that in this case, consent was necessary. The answer was unanimous: silence.

The idea that companies don’t need permission before using facial recognition to identify you cannot be squared with consumer expectations. It also may be illegal under state law. What’s most remarkable about the industry lobbyists’ position, however, is that it contradicts industry’s own practices—at least those industries that have human beings as customers.

Despite what happened during our last meeting, almost all major consumer-facing tech companies use facial recognition only with user consent. Microsoft’s Xbox uses the technology on an opt-in basis. Apple uses it in iPhoto to cluster similar faces together, but the program doesn’t identify the people in the photo—the user does. Google Plus leaves the technology off by default; the company even banned facial recognition apps for Google Glass. (Executive Chairman Eric Schmidt once said that facial recognition was “the only technology Google has built and, after looking at it, we decided to stop.”)

Companies that have activated facial recognition by default have paid a high price for it. In 2011, Facebook automatically enrolled all its users in a facial recognition database used for photo tagging. The investigations and consumer outcry that followed forced the company to shut off facial recognition in Europe and Canada. Now, the company faces two lawsuits in Illinois, one of two states to require user permission for facial recognition. (Shutterfly now faces a similar suit.)

The fact is, there is an established business standard in favor of consumer choice. So why did companies reject a voluntary rule to codify it?

I believe it had to do with the fact that most of the industry representatives in the room didn’t work for companies. Most of them came from industry associations—entities that are financed by tech companies, advertisers, and retailers to lobby for their interests but are legally separate from their backers.

Industry associations don’t feel the same consumer pressures as their member companies. If Facebook or Google or a major retailer violates its customers’ privacy or takes a position contrary to their interests, users might revolt, and its stock price might suffer. Industry associations feel no such pressures. What’s more, industry associations must cater to their members’ lowest common denominator. If an association is asked to support privacy standards that half its members meet and half do not, it will either oppose the rules or remain neutral. This is likely why NetChoice, an industry association that has both Google and Facebook as members—and non–consumer-facing companies that cater to law enforcement—opposed an opt-in privacy standard.

The end result is a lobbying apparatus that is untethered from the realities of many of the businesses that it serves. You may have heard of “yes men.” In Washington we have “no men”: industry lobbyists whose primary purpose is to stop attempts to regulate their members’ products and services, and who have no product or brand that could be hurt by their efforts. The tech industry can increasingly afford a lot of no men.

What happened in the facial recognition talks is not an isolated instance. Rather, it’s part of a broader pattern of industry lobbying that is shutting down Washington’s ability to protect consumer privacy.

For decades Congress championed commercial privacy protections. Congress created the Federal Trade Commission. It passed privacy laws that applied to banks, cable companies, phone companies, Internet providers, hospitals, credit rating agencies, schools, video companies, telemarketers, and online companies targeting children. Then, after updating the federal health privacy law in 2009, it essentially stopped. Aside from passing a 2011 law that narrowed the federal video privacy law, Congress hasn’t passed a single consumer privacy law since 2009.

Let me be clear: I don’t think that in the absence of industry lobbying, there would be a flood of new consumer privacy laws, or that Congress would pass comprehensive consumer privacy legislation. There are legitimate industry concerns with many privacy proposals. I’m saying that industry lobbying has blocked even the most measured and reasonable privacy bills—like former Sen. Jay Rockefeller’s bill to rein in so-called data brokers, private companies that sell lists of people, by name, who are struggling financially, who are HIV-positive, or who are victims of sexual assault.

This impasse has profound consequences. Think for a moment of all of the privacy and security controversies that have happened since 2009. Here are just a few: Google Buzz; Apple’s “Locationgate”; the circumvention of Safari privacy settings; breaches at Target, Home Depot, TD Bank, and Anthem; the Heartbleed bug; Uber. Then, think of all of the technologies that are blossoming today but were little-known six years ago: wearables that track your fitness and medical conditions; connected in-home devices like Nest thermostats and smart refrigerators; and, yes, facial recognition.

Technically speaking, these gadgets and services are all different things. In terms of privacy, they are the same thing: They digitize what was once your offline life; all of them have blossomed since 2009; and as a result, all of them are almost entirely unregulated.

People often refer to this constellation of digital technologies as the “Internet of Things.”

I think that we are witnessing the creation of a Loophole of Things—a legal atmosphere in which the digital facts of our physical lives are subject to startlingly few privacy protections. As of June 16, facial recognition is squarely within its ambit.

Our privacy rights against companies have always been at a disadvantage compared with our privacy rights against the government. Thanks to the Fourth Amendment, both Congress and federal courts can check government surveillance. But the Fourth Amendment doesn’t apply to private companies. As far as the Constitution is concerned, Congress is the only entity that can protect you from the excesses of Silicon Valley; federal courts will get involved only if Congress has passed a law telling them to do so.

Industry lobbying deepens this divide. NSA revelations cost American companies billions in lost business and reputation, so industry bet big on surveillance reform—and won. Meanwhile, industry lobbyists quash efforts to update and expand consumer privacy protections. This widening divide explains the simultaneous success of the USA Freedom Act and failure of facial recognition negotiations. It also helps create a Loophole of Things.

There is hope—in state government. While Congress may have passed zero new consumer privacy laws since 2009, the state of California alone passed more than two dozen. California is not alone. More and more, it’s state legislators and state attorneys general who are taking the lead in creating and enforcing cutting-edge consumer privacy protections. People who care about their privacy would do well to reorient their efforts away from Washington, and toward their state capitols.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.