Dec. 19, 2011, 6:56 AM

Saving Face

How Google, Facebook, and other tech companies hide behind “opt-in” policies.

Google Chairman Eric Schmidt called facial-recognition software “creepy.” Photo by Getty Images.

Has Google finally grown up? The care with which it has handled facial-recognition technology seems to support this thesis. Compare it with Facebook. When Zuckerberg’s social network unveiled its facial-recognition technology in June, it found itself in the middle of a global privacy backlash. But Google has avoided that fate: A few weeks ago, it unveiled a technology to automatically identify one's friends in photos uploaded to Google+—and almost nobody noticed.

The different reactions are easy to explain: Facebook enabled this feature for all users without asking their permission, while Google made its tool optional. Facebook may now be warming up to this more polite approach, too: Its recent settlement with the Federal Trade Commission stipulates that all future changes to existing privacy controls will require user consent.

The Web seems to be moving away from the “opt-out” mentality of the arrogant bully—e.g., “We know you'll love this feature, so we'll enable it by default!”—to the “opt-in” mentality of the smooth-talking diplomat—“Hey, check out this new feature—but only if you want.” As Facebook's embrace of “frictionless sharing” shows, it's one thing to force us to share by altering our privacy settings—and it's quite another to convince us that sharing is something we really want to do. The former is an offense; the latter is a cause for celebration.


And yet this triumph of the “opt-in” is not all that it seems. While it's certainly less coercive, any opt-in still makes the underlying technology—automated facial recognition, in this case—seem normal and acceptable. But technology companies won't acknowledge this. Instead we hear: “The decision is all in the user's hands.” “It's all about giving users more control.” “We are not forcing anyone—people can stay out.” Such bland rhetoric of “user empowerment” has been a staple of the Silicon Valley gospel for decades. It rests on the naive belief that technologies are just tools, their impact narrow and limited to accomplishing (or not accomplishing) the task at hand. Thus, if users want to use Tool X to accomplish Task Y, the only thing up for debate is the desirability of Task Y. That the wide adoption of Tool X may also trigger an unexpected Effect Z never bothers the instrumentalists; if it does, they simply write it off as incalculable.

Alas, such reasoning overlooks the fact that technologies, in addition to serving their immediate functions, also have an ecological footprint—they can transform environments, ideologies, users, power relations, and even other technologies. While cars may be a perfectly effective way of getting from Point A to Point B, one shouldn't focus on that feature alone and disregard what car culture in general might be doing to the quality, and even the forms, of urban living, to pollution rates, or to mortality statistics. Focusing on the immediate uses of an artifact—regardless of whether those are “opt-in” or “opt-out”—seems like a poor way of navigating the “car problem.”

Similarly, to assume that a given technology isn't problematic because its users can turn it off seems misguided. Why disregard the possibility that, once enough people opt in to use it, the collective adoption of this technology might dramatically transform the social environment, making nonuse difficult or impossible? Once enough Californians had opted in to use the car, something changed—at the level of both public infrastructure and norms—that made much of California completely inhospitable to carless living. The car still gets us from Point A to Point B, but wouldn't our quality of life be much higher if we had tried to anticipate its side effects by developing a more multifaceted view of car technology?

Now, to return to the subject of automated facial-recognition technology, here is what we know. This technology can be easily abused; a search engine that generates people's names from their faces would be very popular with dictators all too keen to crack down on popular protest. We also know that facial-recognition technology has already penetrated many walks of life. It is a popular way to secure our smartphones and laptops. It's used in many game consoles to create a more personalized gaming experience. It's used to track (in real time!) the number of male and female patrons in bars. And the list goes on.