Why Watson Is Real Artificial Intelligence

The Citizen's Guide to the Future
Feb. 14 2014 2:07 PM

Don't insult Watson's artificial intelligence.

Photo by Ben Hider/Getty Images

Artificial intelligence is here now. This doesn’t mean that Cylons disguised as humans have infiltrated our societies, or that the processors behind one of the search engines have become sentient and are now making their own plans for world domination. But denying the presence of AI in our society not only takes away from the achievements of science and commerce, but also runs the risk of complacency in a world where more and more of our actions and intentions are being analyzed and influenced by intelligent machines. Not everyone agrees with this way of looking at the issue, though.

Douglas Hofstadter, cognitive scientist and Pulitzer Prize-winning author of Gödel, Escher, Bach, recently claimed that IBM’s Jeopardy! champion AI system Watson is not real artificial intelligence. Watson, he says, is “just a text search algorithm connected to a database, just like Google search. It doesn’t understand what it’s reading.” This is wrong in at least two ways fundamental to what it means to be intelligent. First, although Watson includes many forms of text search, it is first and foremost a system capable of responding appropriately in real time to new inputs. It competed against humans to ring the buzzer first, and Watson couldn’t ring the buzzer until it was confident it had constructed the right response. And, in fact, the humans quite often beat Watson to the buzzer even when Watson was on the right track. Watson works by choosing candidate responses, then devoting its processors to several of them at the same time, exploring archived material for further evidence of the quality of the answer. Candidates can be discarded and new ones selected. IBM is currently applying this general question-answering approach to real-world domains like health care and retail.
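The candidate-and-evidence loop can be sketched in a few lines. This is a toy illustration of the general idea, not IBM's actual DeepQA pipeline — the function names, the crude evidence score (counting corpus passages that mention a candidate), and the confidence threshold are all invented for the example:

```python
def gather_evidence(candidate, corpus):
    """A stand-in for Watson's many evidence-scoring strategies:
    here we just count passages that mention the candidate."""
    return sum(1 for passage in corpus if candidate.lower() in passage.lower())

def answer(candidates, corpus, confidence_threshold=2):
    """Score each candidate against the archived material, then
    'ring the buzzer' (return an answer) only if the best candidate's
    evidence clears the confidence threshold; otherwise stay silent."""
    scored = {c: gather_evidence(c, corpus) for c in candidates}
    best, score = max(scored.items(), key=lambda kv: kv[1])
    return best if score >= confidence_threshold else None

corpus = [
    "Toronto is a city in Canada.",
    "Chicago's O'Hare airport is named for a WWII flying ace.",
    "O'Hare and Midway are the airports serving Chicago.",
]
print(answer(["Toronto", "Chicago"], corpus))  # prints "Chicago"
```

The point of the sketch is the structure: multiple hypotheses are held at once, each is scored against stored evidence, and the system commits only when its confidence is high enough.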

This is very much how primate brains (like ours) work. Neuroscientists like Michael Shadlen can recognize which brain cells monkeys use to represent different hypotheses about how to solve the current puzzle they are facing. Then he can watch the different solutions compete for influence in the brain, until the animal finally acts when it is certain enough. If the puzzle has a short time limit, the animals will act at a lower threshold and will be less accurate. Just like us. And it wouldn’t be hard to reprogram Watson to do the same thing—to give its best answer at a fixed time rather than at a fixed level of certainty.
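The two stopping rules — act at a fixed level of certainty, or act at a fixed deadline — can be illustrated with a toy evidence-accumulation model. This is a drift-diffusion-style sketch of our own devising, not Shadlen's actual model; the drift, noise, and threshold values are arbitrary:

```python
import random

def accumulate(drift=0.1, noise=0.5, threshold=3.0, deadline=None, seed=0):
    """Noisy evidence drifts toward the correct answer. Stop when the
    evidence magnitude reaches `threshold`, or when `deadline` steps
    have elapsed, whichever comes first. Returns (evidence, steps)."""
    rng = random.Random(seed)
    evidence, t = 0.0, 0
    while abs(evidence) < threshold:
        if deadline is not None and t >= deadline:
            break  # time's up: decide on whatever evidence we have
        evidence += drift + rng.gauss(0, noise)
        t += 1
    return evidence, t

# Fixed-certainty rule: takes as long as it needs to reach the threshold.
ev, steps = accumulate(threshold=3.0)

# Fixed-deadline rule: always answers after 10 steps, on weaker evidence.
ev_fast, steps_fast = accumulate(threshold=float("inf"), deadline=10)
```

Lowering the threshold (or shortening the deadline) makes the decider faster but less accurate — exactly the speed–accuracy trade-off the monkeys show, and the change Watson would need to answer at a fixed time.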


How about understanding? Watson can search text in various Internet sources (like Wikipedia), but it didn’t during competition. It had to read the text in advance and remember it in a generalized way, so that it could access what it had learned quickly, from all different kinds of clues. Jeopardy! questions require understanding jokes and metaphors—what Hofstadter calls “analogical reasoning.” Being able to use the right word in the right context is the definition of understanding language, what linguists call semantics. If someone blind from birth said to you “I’ll look into it” or “See you later,” would you say they didn’t understand what they were saying?

Let’s go back to whether Google understands English. Cognitive scientist Bob French has written that no computer will ever pass the Turing test—a competition in which bots try to pass as people—because they don’t share human experience. Understanding the sentence “After the holidays, the scale becomes my enemy” is an apparently impossible problem for a simple computer and database. The key concept (weight) is never mentioned, and there is a metaphor to battle. But type the sentence into Bing, and several references to weight come up on the first page. (This used to work for Google, too—apparently intelligence doesn’t sell ads.)

Hofstadter dismisses Watson as a system that seems impressive until you “look at the details.” By this he means that at the level of computer code, “all” Watson is doing is number-crunching, pattern-matching, searching, etc.—not generating “true thought.” But true thought in humans is made up of small, unintelligent parts. (For more on this, see Dan Dennett.) No brain, or computer chip, “looks” intelligent in its details, under the microscope.

Humans are responsible for mixing and matching different aspects of intelligence in the technologies we develop, and making systems just like us often isn’t the right goal. Self-driving cars already make millions of decisions in an hour about speed and direction, even route, but there is no reason to build a car that determines its own ultimate destination, and every reason not to.

How we talk about AI matters. AI is likely to change our civilization as much as or more than any technology that's come before, even writing. Shaping that effect is one of our key challenges this century. But if we dismiss all progress in AI unless and until it meets an arbitrary, human-centric standard of behavior, we may overlook the power this new intelligence is giving all of its users, and particularly its owners. Rather than asking how to make our machines think just like humans, we should ask: In what ways are they intelligent, and what forms of intelligence, embodied in our technologies, would bring the most benefit to our civilization?

Future Tense is a partnership of Slate, New America, and Arizona State University.

Miles Brundage is a Ph.D. student in Human and Social Dimensions of Science and Technology at Arizona State University.

Joanna Bryson is a cognitive scientist who applies AI to modeling human and animal intelligence. She has been writing about the role of AI in society since 1998.


