Technology

Google Translate

It already speaks 57 languages as well as a 10-year-old. How good can it get?

Will Google’s computers understand languages better than humans?

Photograph by Karen Bleier/AFP/Getty Images.

A computer that translates “natural language” is the holy grail of artificial intelligence—language being so integral to our intelligence and to our humanness that to crack it would be to achieve artificial consciousness itself. But until relatively recently, attempts at it have mostly sucked. They’ve tended to mix the words of one language with the grammar of the other, getting both wrong in the process. Mostly, this is the fault of literal translation—the kind of process that translates kindergarten as children garden. Newer methods—dominated by Google—turn the problem around: Using data, statistics, and brute force, they succeed in part by their refusal to “deconstruct” language and teach meaning to computers in the traditional way.

Google is grossly outperforming the rule-based methods that have historically been used to teach language to computers. These classic methods work on the principle that language can be decoded, stripped to its purest component parts of “meaning,” and built back up again into another language. Linguists feed computers vocabularies, grammars, and endless rules about sentence structure—but language isn’t so easily formalized this way. There are more exceptions, qualifications, and ambiguities than rules and laws to follow. And, when you really think about it, this approach hardly respects the complexity of the problem.
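
To make the failure mode concrete, here is a toy word-for-word translator (the lexicon is invented, and real rule-based systems were far more elaborate, but the literal trap that turns kindergarten into children garden is the same):

```python
# A toy word-for-word "translator" in the spirit of the rule-based
# systems described above. The lexicon is invented for illustration.

LEXICON = {
    "der": "the",
    "kindergarten": "children garden",  # literal gloss of the German compound
    "ist": "is",
    "geschlossen": "closed",
}

def literal_translate(sentence: str) -> str:
    """Translate word by word, with no notion of context or idiom."""
    return " ".join(LEXICON.get(word, word) for word in sentence.lower().split())

print(literal_translate("Der Kindergarten ist geschlossen"))
# -> "the children garden is closed": every word is "right", the phrase is wrong
```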

Enter Google Translate—Google didn’t invent this method, but it certainly dominates it now—which avoids that reductive concept of language altogether. Google mines existing translated material, recognizes how words or phrases typically correspond, and uses probability to deliver the best match based on context. Being Google, its digital Rosetta Stone amounts to trillions of words, from a corpus of U.N. documentation (in its six official languages, translated at high quality) to company memos to Harry Potter novels. Although Google builds a “language model” that describes the basic look of a well-formed sentence, it doesn’t have linguists try to decode the languages at all. Wittgenstein’s maxim of “Don’t ask for the meaning, ask for the use” is an effective working mantra for Google’s statistical method.
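
In outline, and with every number below invented purely for illustration, the statistical recipe scores each candidate translation twice: once for how often the phrases correspond in parallel text, and once for how sentence-like the result looks to a model of the target language. A minimal sketch:

```python
import math

# Translation model: P(foreign word | English word), as estimated
# from parallel text. All probabilities here are made up.
TM = {
    ("chien", "dog"): 0.9,
    ("chien", "hound"): 0.1,
}

# Language model: P(next word | previous word), estimated from
# monolingual English text. Also made up.
LM = {
    ("the", "dog"): 0.02,
    ("the", "hound"): 0.0004,
    ("dog", "barks"): 0.05,
    ("hound", "barks"): 0.01,
}

def log_score(french_word, candidate):
    """log P(f | e) + log P(e): translation fit plus English fluency."""
    total = math.log(TM[(french_word, candidate["english"])])
    for prev, nxt in zip(candidate["sentence"], candidate["sentence"][1:]):
        total += math.log(LM.get((prev, nxt), 1e-8))  # unseen bigrams get a floor
    return total

candidates = [
    {"english": "dog",   "sentence": ["the", "dog", "barks"]},
    {"english": "hound", "sentence": ["the", "hound", "barks"]},
]
best = max(candidates, key=lambda c: log_score("chien", c))
print(" ".join(best["sentence"]))  # -> "the dog barks"
```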

In his wonderful book, Is That a Fish in Your Ear?, the Princeton linguist and translator David Bellos notes the link between early machine translation pioneers and modern philosophers of language—that hopeless pursuit to discover “the purely hypothetical language which all people really speak in the great basement of their souls.” When I spoke to Bellos about Google, he stressed that Google’s achievements don’t make Google Translate akin to how human translation actually works. Though a translation is what you get, translation isn’t really what Google Translate does. (Depending on what we understand by “translation”—but let’s not get into that.) “It’s like the difference between engineering and knowledge,” says Bellos. “An engineering solution is to make something work, but the way you make it work doesn’t necessarily have anything to do with the underlying things. Airplanes do not work the way birds fly.”

Which is quite true. But even if Google Translate doesn’t translate language like humans do, there are parallels in the effect, especially in the way Google Translate learns language. Children don’t learn from prescriptive rules or by deconstructing sentence structure. Subjects, nouns, verbs—these are drilled later, once we’re all but fluent. When I spoke to Franz Och, who heads up Google Translate, he told me how, in hindsight, it’s almost obvious that rule-based methods aren’t necessarily as fruitful as data-driven ones. When children learn, “You just give examples, you interact with the child—grammar is something which is never explicit, it’s always implicit,” he says. “Just the same, when our system is learning, a lot of the grammar is not explicit—it’s implicit in the model parameters, in what comes out.”
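
A minimal sketch of what Och means, using a made-up handful of examples: train a toy bigram model on phrases alone, and the adjective-before-noun habit of English emerges from the counts without ever being stated as a rule.

```python
from collections import Counter

# "Grammar implicit in the model parameters": no rule below says that
# English adjectives precede nouns, yet the counts come to encode it.
examples = ["the red car", "the big dog", "a red apple", "a small cat"]

bigrams = Counter()
for phrase in examples:
    words = phrase.split()
    for a, b in zip(words, words[1:]):
        bigrams[(a, b)] += 1

def preferred_order(a, b):
    """Order two words the way the training examples did more often."""
    return (a, b) if bigrams[(a, b)] >= bigrams[(b, a)] else (b, a)

print(preferred_order("car", "red"))  # -> ('red', 'car'): learned, never stated
```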

Here Wittgenstein pops up again. Translation was one of the philosopher’s many examples of a “language game,” a form of rule-following wherein we partake in the game (of translation) without direct use of the rules that are implicit in it. Translation isn’t reducible to its rules (grammar, syntax, semantics), but they’re still there, in some sense, beneath the surface. Just the same, Google Translate doesn’t grasp the “rules”—they’re implicit, and learned implicitly, as Och says.

A metaphor, perhaps, but this isn’t the first time a little applied Wittgenstein has been put to work at Google, intentionally or not. Part of Google’s search power is in its intelligent handling of context: Searches for “hot dogs” yield results for the food rather than puppies, drawing on the insight of family resemblance. In Steven Levy’s recent book about Google, In the Plex, an interview with search engineer Amit Singhal suggests that the Wittgenstein influence was deliberate, and was a key breakthrough. Another example: “Today, if you type ‘Gandhi bio,’ we know that ‘bio’ means ‘biography,’ ” Levy quotes Singhal. “And if you type ‘bio warfare,’ it means ‘biological.’ ” In other words, Google’s search engine learns its semantics from human input and improves with more data, just as Google Translate does.
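
Levy’s example can be sketched in a few lines. The association table here is hand-written for illustration; the real system learns such pairings from enormous volumes of queries.

```python
# Context-driven disambiguation in the spirit of the "Gandhi bio"
# example. The table is invented; real systems learn it from data.
EXPANSIONS = {
    ("bio", "gandhi"): "biography",
    ("bio", "warfare"): "biological",
}

def expand(query: str) -> str:
    """Resolve an ambiguous term by looking at the words around it."""
    words = query.lower().split()
    out = []
    for i, word in enumerate(words):
        if word == "bio":
            for neighbor in words[:i] + words[i + 1:]:
                if ("bio", neighbor) in EXPANSIONS:
                    word = EXPANSIONS[("bio", neighbor)]
                    break
        out.append(word)
    return " ".join(out)

print(expand("Gandhi bio"))   # -> "gandhi biography"
print(expand("bio warfare"))  # -> "biological warfare"
```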

Google Translate used to trip up more in its early days, when its data was skewed to the formal legalese of U.N. and EU documentation: Bellos recalls how translating the French salade à l’avocat (avocado salad) would return “lawyer salad” in English—“avocat” being both avocado and lawyer in French, and, in the corridors of the EU, more likely to mean lawyer.
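
The mechanics of that mistake are easy to sketch with invented counts: estimate the probability of each English rendering from whatever parallel text you have, and a legalese-heavy corpus votes for the lawyer, while broader data rebalances the estimate.

```python
from collections import Counter

# Corpus skew in miniature. The counts are invented for illustration.
def translation_probs(aligned_pairs, french_word):
    """Estimate P(English | french_word) by counting observed translations."""
    counts = Counter(en for fr, en in aligned_pairs if fr == french_word)
    total = sum(counts.values())
    return {en: round(c / total, 2) for en, c in counts.items()}

# A corpus dominated by EU/U.N. legalese...
legal = [("avocat", "lawyer")] * 95 + [("avocat", "avocado")] * 5
print(translation_probs(legal, "avocat"))   # {'lawyer': 0.95, 'avocado': 0.05}

# ...versus the same corpus broadened with everyday text: more data
# rebalances the estimate, which is how such errors get fixed.
broad = legal + [("avocat", "avocado")] * 200
print(translation_probs(broad, "avocat"))   # avocado now dominates
```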

This family-resemblance world of meaning is a given for humans. It’s also something we clearly can teach computers—and the key to doing so is just gathering more data. Google was the first to really put this idea to use, and it marked a significant step from computers reading strict syntax to reading the force of meaning with context-sensitive intelligence. Today, the algorithm has an understanding of language something like a 10-year-old’s, but it is improving far faster than any human learner does. When I say that the computer “learns,” that isn’t just a metaphor. “In a very meaningful sense it’s really learning language,” says Och. It might not be able to form its own sentences, digest Gödel, Escher, Bach, and answer your questions—full-blown A.I.—but it teaches itself automatically from data alone.

With all its data, might the computer gain some kind of advantage and come to understand language better than we do? Could it even beat us at translation? Any reflection on the difficulties and subtleties of human translators’ work and art, from simultaneous interpreting to subtitling films, makes that suggestion almost laughable—almost. Bellos expressed great admiration for the innovations at Google, but pointed out its limitations: “Machines aren’t—at least aren’t currently—the same sort of thing as a human translator.” To reach human levels a machine must understand context in all its forms and be culturally aware in a particularly profound and deeply embedded way. It must understand the force of meaning, which of course comes from more than page-by-page data or facts it can be fed. Nonetheless a hefty chunk of language understanding does come from data, and data Google certainly has.

Perhaps the barrier is lower than we think. It’s at least conceivable that the machines could become better linguists than we are. Moore’s law—that computing power doubles relative to price roughly every 18 months—persists, and is not to be understated. More data and more tweaks to the model bring improvements year on year. The big question is whether Google’s data-intensive method is fundamentally limited in some way. “It’s very clear that our translation quality will continue to improve,” says Och. “Where the limit is is a fundamental question.”

In the meantime, Google Translate’s enormous databank goes about learning languages as we actually speak them, taking in all their complexities and inconsistencies. It manages swear words and slang better than any rule-based system could, and keeps up to date with contemporary language. It speaks the English we speak, wherein infinitives are split and nonplussed means unperturbed—and linguistic conservatives have no say in what’s correct. But Google does. And it does for dozens of languages besides English.

Might all that knowledge about language and how we speak it be something to worry about when it’s in the hands of a single company? Translation can be a sensitive thing—laden with responsibility that can’t be passed off to algorithms with a shrug of the shoulders. It’s worth questioning the power this brings Google, especially since Google likes to shy away from the murky waters of culture and politics, which is exactly where authority over language leads. Before you know it, all this technical tinkering and grand ambition brings the company into public, social, and moral realms—something Evgeny Morozov has written about brilliantly.

In Wittgensteinian fashion, I shouldn’t advance any thesis—but there is food for thought here. With some clever technology built by some clever people, Google has delivered a verdict on language and translation: that prescriptive rules are nonsense—a small victory for the descriptive grammarians, if an obvious one—and that Wittgenstein’s remarks on language really do offer an effective model for learning it by artificial means. Google Translate might not become better than humans at translation any time soon—perhaps it can’t be done. But at some point it will be good enough: cheaper and more convenient than hiring human translators (for many whose standards are lower, this is already true) or bothering to learn languages ourselves—at which point “good enough” becomes our standard. How do we want our machine translators to work for us?