Future Tense

Why Watson Is Real Artificial Intelligence

Don’t insult Watson’s artificial intelligence.


Artificial intelligence is here now. This doesn’t mean that Cylons disguised as humans have infiltrated our societies, or that the processors behind one of the search engines have become sentient and are now making their own plans for world domination. But denying the presence of AI in our society not only takes away from the achievements of science and commerce, but also runs the risk of complacency in a world where more and more of our actions and intentions are being analyzed and influenced by intelligent machines. Not everyone agrees with this way of looking at the issue, though.

Douglas Hofstadter, cognitive scientist and Pulitzer Prize-winning author of Gödel, Escher, Bach, recently claimed that IBM’s Jeopardy! champion AI system Watson is not real artificial intelligence. Watson, he says, is “just a text search algorithm connected to a database, just like Google search. It doesn’t understand what it’s reading.” This is wrong in at least two ways fundamental to what it means to be intelligent. First, although Watson includes many forms of text search, it is first and foremost a system capable of responding appropriately in real time to new inputs. It competed against humans to ring the buzzer first, and Watson couldn’t ring the buzzer until it was confident it had constructed the right response. And, in fact, the humans quite often beat Watson to the buzzer even when Watson was on the right track. Watson works by choosing candidate responses, then devoting its processors to several of them at the same time, exploring archived material for further evidence of the quality of the answer. Candidates can be discarded and new ones selected. IBM is currently applying this general question-answering approach to real-world domains like health care and retail.
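To make that loop concrete, here is a toy sketch of the pattern just described: generate candidate answers, weigh the evidence for several of them in parallel, and “ring the buzzer” only when confidence clears a threshold. The tiny corpus, the scoring rule, and the threshold are all invented for illustration; this is the general shape of the idea, not IBM’s DeepQA code.

```python
# Toy sketch of "generate candidates, weigh evidence in parallel, answer only
# when confident enough." Illustrative only; not IBM's DeepQA implementation.

from concurrent.futures import ThreadPoolExecutor

# Stand-in for the archived material Watson had to read and remember in advance.
CORPUS = {
    "Toronto": "Toronto is the largest city in Canada, in the province of Ontario.",
    "Chicago": "Chicago's O'Hare airport is named for World War II hero Butch O'Hare.",
}

CONFIDENCE_THRESHOLD = 0.5  # assumed: only "ring the buzzer" above this score


def evidence_score(candidate: str, clue: str) -> float:
    """Crude evidence check: what fraction of the clue's words appear in the
    passage supporting this candidate?"""
    passage = CORPUS[candidate].lower()
    words = [w.strip(".,!?'\"") for w in clue.lower().split()]
    hits = sum(1 for w in words if w and w in passage)
    return hits / max(len(words), 1)


def answer(clue: str):
    candidates = list(CORPUS)  # in reality, candidates come from many search strategies
    # Evaluate several candidates at the same time, as described above.
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda c: evidence_score(c, clue), candidates))
    best, confidence = max(zip(candidates, scores), key=lambda pair: pair[1])
    # Answer only when confident enough; otherwise stay silent.
    return (best, confidence) if confidence >= CONFIDENCE_THRESHOLD else (None, confidence)


if __name__ == "__main__":
    print(answer("Its largest airport is named for a World War II hero"))
```

The sketch prefers whichever candidate’s stored passage best supports the clue and stays silent when nothing scores well enough, which is the same caution that often let the human contestants beat Watson to the buzzer.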

This is very much how primate brains (like ours) work. Neuroscientists like Michael Shadlen can recognize which brain cells monkeys use to represent different hypotheses about how to solve the puzzle they are currently facing. Then he can watch the different solutions compete for influence in the brain, until the animal finally acts when it is certain enough. If the puzzle has a short time limit, the animals will act at a lower threshold of certainty and will be less accurate. Just like us. And it wouldn’t be hard to reprogram Watson to do the same thing—to give its best answer at a fixed time rather than at a fixed level of certainty.
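That threshold-versus-deadline trade-off shows up even in a minimal evidence-accumulation simulation. In the toy model below (invented numbers, not Shadlen’s data), two hypotheses race to a certainty threshold; imposing a short deadline forces a decision on weaker evidence.

```python
# Toy evidence-accumulation ("race") model of the trade-off described above.
# Two hypotheses gather noisy evidence; the correct one has a small average
# advantage (the drift). The numbers are invented, not Shadlen's data.

import random


def trial(threshold=30.0, deadline=None, drift=0.2, noise=1.0):
    """One decision. Returns True if the choice was correct."""
    correct, wrong = 0.0, 0.0
    steps = 0
    while True:
        steps += 1
        correct += drift + random.gauss(0, noise)   # evidence for the right answer
        wrong += random.gauss(0, noise)             # evidence for the wrong one
        if correct >= threshold or wrong >= threshold:
            return correct > wrong                  # certain enough: act
        if deadline is not None and steps >= deadline:
            return correct > wrong                  # time's up: act on weaker evidence


def accuracy(n=5000, **kwargs):
    return sum(trial(**kwargs) for _ in range(n)) / n


if __name__ == "__main__":
    random.seed(0)
    print("wait for certainty:", accuracy())              # acts late
    print("short deadline    :", accuracy(deadline=20))   # acts early
```

Running it should show the deadline condition answering less accurately than the fixed-certainty condition, just as the monkeys (and we) do under time pressure.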

How about understanding? Watson can search text from various Internet sources (like Wikipedia), but it didn’t do so during competition. It had to read the text in advance and remember it in a generalized way, so that it could quickly access what it had learned from many different kinds of clues. Jeopardy! questions require understanding jokes and metaphors—what Hofstadter calls “analogical reasoning.” Being able to use the right word in the right context is the definition of understanding language, what linguists call semantics. If someone blind from birth said to you “I’ll look into it” or “See you later,” would you say they didn’t understand what they were saying?

Let’s go back to whether Google understands English. Cognitive scientist Bob French has written that no computer will ever pass the Turing test—a test in which a machine tries to pass as a person—because computers don’t share human experience. Understanding the sentence “After the holidays, the scale becomes my enemy” looks like an impossible problem for a simple computer and database: The key concept (weight) is never mentioned, and the meaning turns on a metaphor (the scale as an enemy to battle). But type the sentence into Bing, and several references to weight come up on the first page. (This used to work for Google, too—apparently intelligence doesn’t sell ads.)

Hofstadter dismisses Watson as a system that seems impressive until you “look at the details.” By this he means that at the level of computer code, “all” Watson is doing is number-crunching, pattern-matching, searching, etc.—not generating “true thought.” But true thought in humans is made up of small, unintelligent parts. (For more on this, see Dan Dennett.) No brain or computer chip “looks” intelligent in its details, under the microscope.

Humans are responsible for mixing and matching different aspects of intelligence in the technologies we develop, and making systems just like us often isn’t the right goal. Self-driving cars already make millions of decisions in an hour about speed and direction, even route, but there is no reason to build a car that determines its own ultimate destination, and every reason not to.

How we talk about AI matters. AI is likely to change our civilization as much as or more than any technology that’s come before, even writing. Shaping that effect is one of our key challenges this century. But if we dismiss all progress in AI unless and until it meets an arbitrary, human-centric standard of behavior, we may overlook the power this new intelligence is giving all of its users, and particularly its owners. Rather than asking how to make our machines think just like humans, we should ask: In what ways are they intelligent, and what forms of intelligence, embodied in our technologies, would bring the most benefit to our civilization?