Such seemingly innocuous uses beget a generation of start-ups hunting for new applications of this technology, not all of them innocuous and many of them foreseen by its critics. By the time the general public wakes up, of course, the technology has become so deeply embedded in our culture that it is too late to do anything.
In a sense, we are dealing with a process that is more sinister than the popular notion of the “butterfly effect”—the idea that the flap of a butterfly's wings in Brazil can set off a tornado in Texas. Call it the “Palo Alto effect”: A carefree user in Palo Alto, Calif., who decides to “opt in” and use Google's facial-recognition technology ends up strengthening a dictator in Damascus. Why “sinister”? Because the Palo Alto user, unlike the butterfly, can actually think two steps ahead—but prefers not to.
What's to be done? Well, we can put the ethical burden squarely on Internet users and sensitize them to the ultimate (even if indirect) consequences of their choices. There are many precedents for this. Mounting concerns over economic inequality, climate change, and child labor have led to the emergence of the “ethical consumption” movement, which seeks to get consumers to consider the ethical ramifications of their behavior in the marketplace.
In a similar vein, why not apply such concepts to our engagement with the Internet? What would "ethical browsing" or "ethical social networking" entail? Never using sites that exploit facial-recognition technology? Refusing to do business with Internet companies that cooperate with the National Security Agency? These are the choices we'll have to make if we don't want the Internet to become an ethics-free zone. After all, unreflective use of technology, just like unreflective shopping, does not a good citizen make.
But let's not let Internet companies off the hook, either. Of course, Google and Facebook are different from rapacious corporations exploiting poor farmers or underage children. Neither company is deliberately building surveillance tools for dictators. What they do, however, is help create the technical and ideological infrastructure that allows such tools to emerge in a seemingly natural manner. This doesn't provide strong grounds for regulation, but it opens the door for citizen activism, boycotts, and, if all else fails, civil disobedience.
Internet companies know perfectly well that they've got responsibilities. Earlier this year, Eric Schmidt, Google's executive chairman, called facial-recognition technology "creepy" and expressed his concern about it. And yet Google has just endorsed this technology, albeit with the "opt-in" proviso. This, Google thinks, shields it from any accusations of unethical behavior; after all, it's all up to the user! But would we be persuaded by oil companies claiming that anyone concerned with climate change doesn't have to drive a Humvee? Perhaps not. The chief ethical blunder of technology companies lies in pretending that they don't know how this sad movie ends.