What a Supreme Court Case Means for Google's and Facebook's Use of Artificial Intelligence

The Citizen's Guide to the Future
Feb. 3 2014 11:02 AM



Photo illustration by Chris Jackson/Getty Images

It’s possible that even Google and its partners are getting nervous about the company’s efforts to develop artificial intelligence. The company recently closed a deal to acquire DeepMind, a wonderfully named British startup that sounds like an AI porn-generator. The reported cost is fairly significant, $400 million to $500 million, and as part of the deal, Google will create an ethics board to ensure that AI technology isn’t abused. This could be important because, as Shane Legg, one of DeepMind’s co-founders, said, “human extinction will probably occur, and technology will likely play a part in this.” Among the forms of technology that could wipe out mankind, Legg has identified AI as the “number 1 risk for this century.” So it’s good that Skynet and the Matrix will have an ombudsman.

Of more immediate concern, though, is what Google’s and Facebook’s recent investments in AI mean for personal privacy, an area where the companies already draw complaints and skepticism. Although they aren’t publicizing the details of their plans, the speculation is that they hope to use AI to better collect, monitor, track, and analyze our activities and interactions with their sites, applications, and devices. This would permit them to provide superior search results, create a more inviting social media experience, and, essentially, know what we want before we want it.


All of this depends on Google and Facebook monitoring us. Thus far, their position has been essentially that they can track anything we do on their real estate. This is not that far off from the position the police take when suspects move on public roads. Google and Facebook don’t need special permission to monitor what we do in their cyberspace; the police don’t need special permission to monitor what we do in public space.

However, a 2012 Supreme Court decision potentially upends that thinking, at least with regard to forms of AI and autonomous technology that permit constant surveillance and analysis beyond what was originally and reasonably intended in those spaces. In United States v. Jones, the court considered whether police needed a warrant to attach a GPS tracker to a suspect’s car and record his movements. The court ultimately decided that they did. And although Justice Scalia’s majority opinion (strangely, and yet predictably) relied on 18th-century trespass law to govern a 21st-century technology, the concurring opinions were written with modern and evolving technology in mind. Ultimately, those concurrences are likely to be much more influential in future decisions addressing the limits placed on AI and autonomous technology by privacy laws and the Fourth Amendment.

In his concurring opinion, Justice Alito worries about technology like GPS and AI that can track individuals without a human law enforcement officer being directly involved. Using that sort of technology, he writes, “in investigations of most offenses impinges on expectations of privacy” because “society’s expectation has been that law enforcement agents and others would not—and indeed, in the main, simply could not—secretly monitor and catalogue every single movement of an individual’s car for a very long period.” More significantly, Justice Sotomayor in her concurring opinion wonders about “the existence of a reasonable societal expectation of privacy in the sum of one’s public movements” (emphasis added). She lists the personal information—“familial, political, professional, religious, and sexual associations”—potentially revealed when advanced technology monitors and analyzes the “precise, comprehensive record of a person’s public movements.” Her opinion goes on to ask “whether people reasonably expect that their movements will be recorded and aggregated in a manner that enables the Government to ascertain, more or less at will, their political and religious beliefs, sexual habits, and so on.”

Replace “public movements” with “Facebook posts” and “Government” with “Google,” and her concerns about government use of technology to track private individuals become legitimate concerns about those companies’ use of AI to track and analyze our activities. We are used to Google and Facebook tracking our searches, clicks, posts, and friends. We accept, to varying degrees, that we pay for the use of their sites by providing them with information about ourselves. But is there a point at which their use of the data generated by our online activity becomes invasive? Most people would rightly say that a restaurant that conducted invasive body searches of its patrons had crossed an impermissible line and violated their privacy, even though those patrons chose to walk through its doors. AI that aggregates, analyzes, peers into, and gawks at our online (and real-world) activity arguably violates our privacy in the same way.

That isn’t to say that Google and Facebook should be prevented from using AI entirely. Indeed, both companies are using limited forms of AI now. There is also the potential for great good in their research. Using AI to analyze data could help determine and limit the spread of disease, prevent car accidents, and improve food distribution.

It isn’t too early, though, to take the Supreme Court’s lead, start thinking about where AI potentially violates personal privacy, and consider laws and regulations that would clearly state AI’s limits in that area. This would protect our privacy while also giving Google, Facebook, and other companies exploring AI clear guidelines for what they can and cannot do.

Future Tense is a partnership of Slate, New America, and Arizona State University.

John Frank Weaver is an attorney in Portsmouth, N.H., who works on artificial intelligence law. He is the author of Robots Are People, Too. Follow him @RobotsRPeople.