Future Tense

The Real Problem With Google’s New Privacy Policy

The tech giant owes users better tools to manage their information.

Google doesn’t yet give users enough control over what data it collects

Siri Stafford/Thinkstock Images.


When Google announced impending changes to its privacy policy, users and the media alike focused on one thing: the inability to opt out, short of deleting your account. Though Congress keeps pushing Google for more clarification, many users have grumpily acknowledged the Gmail notifications and moved on to newer privacy concerns, like an iPhone app that copied and uploaded users’ contacts.

But the popular conversation must continue, because this is about more than opting in or opting out. It’s about control. The search giant plans to replace dozens of separate privacy policies with a unified policy. Google calls this a step toward simplicity: a single policy instead of 60. But while the new policy does not expand the personal information Google collects, it introduces new ways for the company to combine and share data across its own services.

This transition to a unified service sets the course for future growth built on each user having a single profile across his or her entire online experience. But while Google’s policies are changing, its privacy tools are not keeping up.

Google’s public-policy director Pablo Chavez responded to criticism by emphasizing that Google will not collect any new data and that users can still employ the same privacy tools currently available to them. In a letter to Congress, Google stressed that it is maintaining its privacy approach and will “continue to focus on providing transparency, control, and security to our users.” Additionally, users can access many services without a Google account, and they can always execute the ultimate opt-out: deleting their data and choosing other online services.

While Google’s dashboard offers a number of tools, and one privacy policy can be simpler than several dozen, users are fundamentally losing the ability to manage and maintain different identities within the massive Google World.

Social-media researcher danah boyd (as she prefers to style her name) confronted this challenge in the fall of 2010, following a change in Facebook’s privacy policies. In “Why Privacy Is Not Dead,” published in MIT’s Technology Review, boyd writes: “Privacy is not simply about controlling access. It’s about understanding a social context, having a sense of how our information is passed around by others, and shared accordingly.” Yet how most online companies pass information among customers and services is not transparent to users.

Google explained to Congress why it believes users will benefit from sharing data between Google services, citing examples such as editing Google Docs within Gmail or integrating Google Maps within Google+. Already, if you log in to one Google service, your Google ID follows you to the next site. The company argues that sharing personal data between services yields benefits in the form of tailored services, like recommended videos based on past search results.

But how and why a person uses different services does not always demand, or benefit from, sharing information. A user may rely on Google’s search engine primarily for work or technology issues and on YouTube for music and cute cat videos. Some people simply prefer to have a different persona on YouTube than on Google search, much as we present different aspects of ourselves at a library than at our favorite watering hole. Forcibly consolidating these identities undermines users’ freedom to reveal different facets of themselves to Google in different contexts.

In an effort to clarify misconceptions about the new policy changes, Google policy manager Betsy Masiello reminds readers that many Google services, such as Search or Maps, are available without logging in. With a single sign-on, however, this is an unrealistic proposition: Just as user accounts are carried from one service to the next, logging out takes you out of the entire Google platform. Google’s customers are practically entrenched in some services, such as Gmail or Chat. Forcibly bridging services without the choice of a partial opt-out is an attempt by Google to leverage user dependency on some services to increase the usage of others, most notably Google+. Instead, Google could offer users tools to manage how their identity is shared, or kept separate, between its various services.

Google could also offer a service for customers to audit their online identity. On a smaller scale, Google+ users already have a “View profile as …” feature, a subtle, easy-to-miss text box with a faint gray label on the Google+ home page that lets you review your posts as another user would see them, so you can check how, say, a parent sees your information. Facebook offers a similar feature that lets a user assume another user’s perspective to check for leaks in his or her sharing rules and permissions. Why not go further and provide a tool for users to see what Google knows about them, and to add or delete assumed topics of interest on different services accordingly? True, users can tweak their ad profile or search history, but as Google transforms into an integrated world, more comprehensive user controls are needed. In fact, the ad-profile option and the Google+ “view as” feature reinforce that while Google’s policies are evolving, its privacy tools remain painfully service-specific.

Google’s revised and consolidated policies, and the data collection and sharing changes at their core, could be viewed as a competitive response to Facebook initiatives that integrate your Facebook identity into thousands of different websites. Together, Google’s and Facebook’s actions exemplify the current state of the online public sphere—a space governed by terms of service, but without meaningful protections for its participants to control and manage their online identity across different services.

If we gain access to the public sphere using the currency of personal information, then users must also have greater control over how that information is given, stored, shared, and deleted.

This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.