A simple prescription for keeping Google's records out of government hands.
On Google's Mountain View, Calif., campus, there's an LCD showing what's being searched for at any moment. A passing glance may reveal that information on "Depression," "marital counseling," or "anna kournikova" is hotly sought after at a given time. The revelation that Google is fighting a Bush administration subpoena seeking to get hold of search records like these has, unsurprisingly, hit a lot of nerves, in part because it pits the Bush administration against Google—making the case a kind of showdown of East Coast against West; religion vs. science; Jedi Masters of information-seeking vs. Jedi Masters of information control; and so on.
But the big news for most Americans shouldn't be that the administration wants yet more confidential records. It should be the revelation that every single search you've ever conducted—ever—is stored in a database somewhere. Forget e-mail and wiretaps—for many of us, there's probably nothing more embarrassing than the searches we've made over the last decade. Google's campus LCD sounds like it's just fun and games, but when a search can be linked to you (through the IP address Google records), that's a lot less fun. And when, as we're seeing, it can all be demanded by the government, that's no fun at all.
Google is being commended by many for standing up to the Bush administration. But however brave Google's current stance may be, the legal debate over Google's compliance misses the deeper and more urgent point: By keeping every search ever made on file, the search-engine companies are helping create the problem in the first place. In the wake of this subpoena controversy, the industry must change the way it records and preserves our searches, and it must publicly pledge not to keep any identifying information unless required by court order. This has nothing to do with our mistrust of Google and everything to do with mistrust of the range of government actors—domestic and foreign—that Google must ultimately obey.
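The pledge proposed above has a straightforward technical shape: strip identifying fields from query logs before they ever reach long-term storage. As a minimal sketch—the log fields, the anonymization policy, and the function names here are hypothetical illustrations, not Google's actual practice—a search engine could keep queries for aggregate statistics while discarding the parts that tie a search to a person:

```python
# Illustrative sketch of query-log scrubbing (hypothetical format, not
# any real search engine's pipeline). The idea: keep what's useful for
# aggregate statistics, drop what identifies an individual.

def anonymize(entry):
    """Return a scrubbed copy of a query-log entry.

    - The IPv4 address is truncated to its /24 network (last octet
      zeroed), so it no longer singles out one household.
    - The per-user cookie ID is dropped entirely.
    - The query text and timestamp survive for aggregate use.
    """
    octets = entry["ip"].split(".")
    octets[-1] = "0"  # zero the host portion of the address
    return {
        "ip": ".".join(octets),
        "query": entry["query"],
        "timestamp": entry["timestamp"],
        # note: no "cookie_id" field survives scrubbing
    }

raw = {
    "ip": "203.0.113.42",  # RFC 5737 documentation address
    "cookie_id": "abc123",
    "query": "marital counseling",
    "timestamp": "2006-01-20T10:15:00Z",
}

clean = anonymize(raw)
print(clean["ip"])           # 203.0.113.0
print("cookie_id" in clean)  # False
```

Scrubbing at write time, rather than on demand, is the point: a record that was never stored cannot later be subpoenaed.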
Let's be clear, first, about what this Google case is about. Back in the 1990s, Congress passed a succession of laws designed to keep porn off the Internet. Those laws didn't work—largely because the courts kept striking them down as violations of the First Amendment. Sick of losing and finding themselves back in the district court defending a reconfigured version of the anti-porn bill (now named the "Child Online Protection Act"), lawyers in the Justice Department decided to hire Berkeley professor Philip Stark. Stark's assignment: Use statistics to show what everyone already knows—that there's an awful lot of porn on the Internet.
Stark's plan is to demand that Google and the other major search engines supply him, and thus the government, with a random selection of a million domain names available for search and a million sample user queries. He will then demonstrate how truly nasty the Internet is, and in particular, just how hard it is to block out stuff that's "Harmful to Minors." Faced with this third-party subpoena, Microsoft and Yahoo! agreed to supply Stark with this information, but Google refused, calling the subpoena request "overbroad, unduly burdensome, vague, and intended to harass." (Some can't help but wonder if the subpoena might be small revenge for that whole "miserable failure" thing.)
This legal battle is not really about the privacy of any one individual; nor does it raise the tricky issue of when, if ever, turning over search records represents the kind of "self-incrimination" protected by the Fifth Amendment. Nevertheless, the subpoena does strike me as unnecessarily heavy-handed—rather like asking the car industry for masses of personal data merely to prove that cars can indeed be used to break speed limits. And whether Google ultimately wins or loses this skirmish, the subpoena's relevance as legal precedent will be limited.
The better, more enduring question is: Why is all this information being kept in the first place?
Google and other search engines argue—with some justification—that preserving search records is important to making their products the best they can be. By mining the billions of queries in its logs, Google, for example, can offer a good guess when you've spelled something wrong—"Did you mean: Condoleezza Rice?" And Google's "Zeitgeist" feature can tell you what the top searches are every week and year—a neat way of tracking other people's passing obsessions. But even though keeping such logs may make the product better, or more fun on the margin, the justifications for keeping so many secrets in such a vulnerable place are just too weak.
Imagine we were to find out one day that Starbucks had been recording everyone's conversations for the purpose of figuring out whether cappuccino is more popular than macchiato. Sure, the result, on the margin, might be a better coffee product. And, yes, we all know, or should, that our conversations at Starbucks aren't truly private. But we'd prefer a coffee shop that wasn't listening—and especially one that won't later be able to identify the macchiato lovers by name. We need to start to think about search engines the same way and demand the same freedoms.
It all goes back to this basic point: How free you are corresponds exactly to how free you think you are. And Americans today feel great freedom to tell their deepest secrets—secrets they won't share with their spouses or priests—to their computers. The Luddites were right: our closest confidants today are robots. People have a place to find basic anonymous information on things like sexually transmitted diseases, depression, or drug addiction. The ability to look in secret for another job is not merely liberating, it's economically efficient. But all this depends on our feeling free to search without being watched.
Photograph of people walking over Google logo by Torsten Silz/AFP/Getty Images.