Future Tense

Google’s New Lens Feature Turns Your Smartphone Camera Into a Search Engine

Google CEO Sundar Pichai speaks at the company’s I/O conference on Wednesday.

Glenn Chapman/AFP/Getty Images

Smartphone cameras have a lot of utility these days: capturing a memory, snapping a selfie, taking notes (far better to photograph a Wi-Fi password than write one down), checking one’s complexion (who needs a mirror?). And at Google I/O on Wednesday, the tech company’s annual developer confab, CEO Sundar Pichai unveiled a new product that adds one more function to our phone cameras’ repertoire: search engine. It’s called Google Lens.

First the gee-whiz-how-cool-is-this part: At its most basic, Google Lens recognizes the contents of images its users take and then displays detailed information about those contents. But this vision-based search engine goes far beyond previously developed products, by Google and others, that offered only trivial information or superficial descriptors. Demoing the new technology during his keynote address, Pichai showed Lens identifying a flower’s genus and species from only a photo, connecting to Wi-Fi after snapping a picture of a network name and password, automatically scheduling billboard-advertised events as Google Calendar appointments, and displaying detailed information about businesses—including reviews, hours of operation, and contact information—with a single shutter-click.
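For readers curious what image-as-query looks like in code: Google already exposes similar recognition capabilities to developers through its Cloud Vision API. The sketch below is purely illustrative, not Lens’ actual implementation; it assumes the google-cloud-vision Python client library, configured Google Cloud credentials, and a hypothetical local photo named flower.jpg.

```python
# Illustrative sketch only: approximates Lens-style recognition using
# Google's Cloud Vision API, which offers developers similar capabilities.
# This is NOT Lens' actual implementation.
# Assumes: the google-cloud-vision client library is installed, Google
# Cloud credentials are configured, and "flower.jpg" is a hypothetical
# local photo, like the flower in Pichai's demo.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("flower.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Label detection: what objects or concepts appear in the photo?
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")

# Text detection (OCR): read printed text, such as a Wi-Fi network
# name and password on a router's sticker.
response = client.text_detection(image=image)
if response.text_annotations:
    print(response.text_annotations[0].description)  # full extracted text
```

Lens goes further by tying such raw annotations back to Google’s knowledge of the world and to products like Calendar and Photos, but the basic pipeline, image in, structured annotations out, is the same idea.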

And now the context: As Wired’s David Pierce wrote Wednesday about the announcement, Google Lens is the next step on Google’s road to becoming a fully realized A.I. company. Past milestones include Word Lens, an augmented-reality translation app first launched in 2010 that was basically the closest thing we had to Star Trek’s universal translator.* Google bought Word Lens in 2014 and folded it into its growing Google Translate empire, which got a technical boost last year, a switch to neural machine translation, that allowed it to approach human-level translation accuracy. Another forebear to Lens—and perhaps its most direct ancestor—is Google Goggles, launched in 2009, which displayed information about paintings and landmarks and could recognize and search for barcodes and text. Unlike both Word Lens and Goggles, however, Google Lens won’t take the form of a standalone app. It will instead roll out first through Google Photos, letting users trawl back through their existing albums and surface new information about old images. Next it will head to Google Assistant, a competitor to Siri that will now be available on iPhones in addition to the Android devices it already runs on, before being integrated into the full universe of Google products.

Although the details of Lens’ functionality are fairly breathtaking, Recode’s Tess Townsend predicted some of its broad strokes in advance. As she wrote on Tuesday, this year’s I/O was likely to foreground Pichai’s “A.I.-first” vision of Google’s future, which the CEO laid out in a memo last year. Google Lens goes a long way toward making that vision concrete. The new technology “combines a bunch of tech buzzwords—AI, AR, machine learning, computer vision—into one feature, and highlights how Google’s massive collection of data about the world can be used together,” she added on Wednesday. More sweepingly, Wired’s Pierce argues that Google Lens reflects a realization other tech companies like Snapchat and Instagram have increasingly come to: Augmented-reality images are the wave of the future.

Other highlights of I/O 2017 included “smart replies” in Gmail, which offer three automated responses based on the content of a received email, and a new “suggested sharing” function in Photos, which recognizes the people in an image and prompts users to send it to them directly.

*Correction, May 31, 2017: This post originally misspelled the name of the augmented-reality translation app Word Lens.