Google privacy: The good things that happen when Web companies use our personal data.

Innovation, the Internet, gadgets, and more.
April 7 2011 5:25 PM

No More Privacy Paranoia

Want Web companies to stop using our personal data? Be ready to suffer the consequences.


Last week the Federal Trade Commission and Google signed a broad privacy settlement that requires the search company to submit to "privacy audits" every two years. The agreement ended a dispute that began last year, when Google launched Buzz, the ill-fated social-messaging system built into Gmail.

According to the FTC, just about everything about Buzz was flawed. Google made it difficult for users to decline to join, or to leave after they'd joined (the "Turn off Buzz" button didn't actually turn off Buzz). Even if you wanted to participate, the service didn't make it clear how to keep your private information private (the earliest versions of the service made public a list of the people you e-mailed frequently). Most damningly, the FTC says that Buzz violated Google's own privacy policy. In that policy, Google promises that it will ask permission when it uses private information acquired for one product to produce something else. In this case, the FTC says, Google used information gathered for Gmail to build a social-networking service that had nothing to do with e-mail. It never asked permission to do so.

For all these sins, Google agreed to submit to a procedure that none of its rivals will have to endure—regular reviews of its privacy and data-collection practices by independent consultants. The audits will last for 20 years—that is, longer than the lifespan of many tech giants. Will there even be a "Web" to search in 2030?


Buzz was certainly a privacy boondoggle for Google—a black eye for a company that had been trying to position itself as the good guy to Facebook's bad guy. I agree with the FTC that Google should pay for the mistakes it made (the company has apologized and says it's fixed its privacy procedures to prevent another such imbroglio). And if they're done judiciously, the privacy audits may prove helpful in ensuring that Google stays on the up and up.

But that's what I worry about: Will the audits in fact be done judiciously? There's a good chance that privacy regulators—spurred by a public that doesn't really know what it wants when it comes to online privacy—may go too far, blocking Google from collecting and analyzing information about its users. That would be a terrible outcome, because while we all reflexively hate the thought of a company analyzing our digital lives, we also benefit from this practice in many ways that we don't appreciate.

I know I sound naïve, but bear with me. Yes, Google collects a lot of information about all of us. It does so on purpose, and for all sorts of reasons. Some of these reasons we don't like very much—Google, like all big Web companies, sells ads, and it can get more money for those ads when they're targeted to you. This practice pays for the Web, and it's the reason you don't pay a fee to conduct a Google search. Still, I understand why people are wary of online data collection. Too often, though, our conversations about online privacy end right here.

Broadly speaking, there are two types of data that Web companies keep on us—personally identifiable information (like your name and list of friends), and information that can't be tied to you as an individual. In our discussions about privacy, we rarely make this important distinction. While we focus on the disadvantages of companies collecting our information, we rarely look at the innovations that wouldn't be possible without our personal data. This is especially true when it comes to anonymous data—information that can't be used to identify you, but which serves as the building blocks of amazing things.

Indeed, some of Google's best and most-loved products would not be possible without our data. Take the spell-checker: How does Google know you meant Rebecca Black when you typed Rebeca Blacke? Note that this is a trick that no ordinary, dictionary-based spell-checker could perform—these are proper nouns, and we're dealing with an ephemeral personality. But since Google has stored lots of other people's search requests for Black, it knows you're looking for the phenom behind "Friday." The theory behind the spell-checker can be applied more broadly. By studying words that often come together in search terms—for instance, people may search either for "los angeles murder rate" or for "los angeles homicide rate"—Google can detect that two completely different words may have the same meaning. This has profound implications for the future of computing: In a very real sense, mining search queries is teaching computers how to understand language (and not just English, either). If Google were forced to forget every search query right after it served up a result, none of these things would be possible.
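The query-log trick can be sketched in a few lines. This is a toy illustration, not Google's actual method: the tiny "query log" and the one-edit candidate generation below are hypothetical stand-ins for the billions of real searches and the far more sophisticated models a search engine would use. The core idea survives, though—because other people searched for the correctly spelled name far more often, frequency alone picks the right correction, dictionary or no dictionary.

```python
from collections import Counter

# Hypothetical stand-in for a search engine's query log. The real log
# would contain billions of entries; frequency does the work either way.
query_log = [
    "rebecca black", "rebecca black", "rebecca black friday",
    "rebecca black", "rebeca blacke", "rebecca black video",
]

# How often each individual term appears across all logged queries.
term_counts = Counter(t for q in query_log for t in q.split())

def edits1(word):
    """All strings one edit (delete, swap, replace, insert) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(term):
    """Return the most frequently searched spelling within one edit."""
    candidates = (edits1(term) | {term}) & term_counts.keys()
    return max(candidates, key=term_counts.get) if candidates else term

print(" ".join(correct(t) for t in "rebeca blacke".split()))
# → rebecca black
```

Note that "rebeca" itself appears in the log once, but "rebecca" appears five times, so the corrector prefers the popular spelling—the same logic that lets aggregated queries outperform any fixed dictionary on proper nouns.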
