Why Watson Is Real Artificial Intelligence

The Citizen's Guide to the Future
Feb. 14 2014 2:07 PM

Don't insult Watson's artificial intelligence.

Photo by Ben Hider/Getty Images

Artificial intelligence is here now. This doesn’t mean that Cylons disguised as humans have infiltrated our societies, or that the processors behind one of the search engines have become sentient and are now making their own plans for world domination. But denying the presence of AI in our society not only takes away from the achievements of science and commerce, but also runs the risk of complacency in a world where more and more of our actions and intentions are being analyzed and influenced by intelligent machines. Not everyone agrees with this way of looking at the issue, though.

Douglas Hofstadter, cognitive scientist and Pulitzer Prize-winning author of Gödel, Escher, Bach, recently claimed that IBM’s Jeopardy! champion AI system Watson is not real artificial intelligence. Watson, he says, is “just a text search algorithm connected to a database, just like Google search. It doesn’t understand what it’s reading.” This is wrong in at least two ways fundamental to what it means to be intelligent. First, although Watson includes many forms of text search, it is first and foremost a system capable of responding appropriately in real time to new inputs. It competed against humans to ring the buzzer first, and Watson couldn’t ring the buzzer until it was confident it had constructed the right sentence. And, in fact, the humans quite often beat Watson to the buzzer even when Watson was on the right track. Watson works by choosing candidate responses, then devoting its processors to several of them at the same time, exploring archived material for further evidence of the quality of the answer. Candidates can be discarded and new ones selected. IBM is currently applying this general question-answering approach to real-world domains like health care and retail.
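
To make that mechanism concrete, here is a minimal sketch in Python of the kind of pipeline described above: generate several candidate answers, score the evidence for each in parallel, and buzz in only when the best candidate clears a confidence threshold. The function names and the crude word-overlap scoring are hypothetical stand-ins for illustration, not IBM’s actual components.

```python
import concurrent.futures

CONFIDENCE_THRESHOLD = 0.85  # buzz in only if the best candidate clears this

def generate_candidates(clue, index):
    """Propose answers whose pre-indexed text shares words with the clue."""
    clue_words = set(clue.lower().split())
    return [answer for answer, text in index.items()
            if clue_words & set(text.lower().split())]

def score_evidence(clue, candidate, index):
    """Crude confidence score: fraction of clue words supported by the candidate's text."""
    clue_words = set(clue.lower().split())
    evidence_words = set(index[candidate].lower().split())
    return len(clue_words & evidence_words) / max(len(clue_words), 1)

def respond(clue, index):
    """Return an answer only when confident enough; otherwise stay silent."""
    candidates = generate_candidates(clue, index)
    if not candidates:
        return None
    # Score all candidates concurrently, echoing Watson's use of many
    # processors to explore evidence for different hypotheses at once.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda c: score_evidence(clue, c, index), candidates))
    best, confidence = max(zip(candidates, scores), key=lambda pair: pair[1])
    return best if confidence >= CONFIDENCE_THRESHOLD else None
```

Because the threshold is explicit, a candidate that scores poorly is simply dropped in favor of a better one, and the system can prefer silence over a bad guess.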

This is very much how primate brains (like ours) work. Neuroscientists like Michael Shadlen can recognize which brain cells monkeys use to represent different hypotheses about how to solve the current puzzle they are facing. Then he can watch the different solutions compete for influence in the brain, until the animal finally acts when it is certain enough. If the puzzle has a short time limit, the animals will act at a lower threshold of certainty and will be less accurate. Just like us. And it wouldn’t be hard to reprogram Watson to do the same thing—to give its best answer at a fixed time rather than at a fixed level of certainty.
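
The difference between the two stopping rules is easy to see in a toy simulation: accumulate noisy evidence and either act when confidence crosses a fixed threshold, or act with whatever confidence has built up when a time limit arrives. This is only an illustrative sketch under those assumptions, not Shadlen’s model or Watson’s code.

```python
import random

def decide(evidence_stream, threshold=None, time_limit=None):
    """Accumulate evidence until certain enough, or until time runs out."""
    confidence, elapsed = 0.0, 0
    for sample in evidence_stream:
        confidence += sample  # each piece of evidence nudges confidence up or down
        elapsed += 1
        if threshold is not None and confidence >= threshold:
            return elapsed, confidence  # acted because certain enough
        if time_limit is not None and elapsed >= time_limit:
            return elapsed, confidence  # acted because time ran out, however certain
    return elapsed, confidence

def stream():
    # Weak, noisy evidence that slowly favors the correct answer.
    return (random.gauss(0.02, 0.1) for _ in range(10_000))

random.seed(0)
print(decide(stream(), threshold=1.0))  # slower, but confident when it acts
print(decide(stream(), time_limit=20))  # fast, but acts at a lower level of certainty
```

Swapping the stopping rule changes when the answer comes out, not how the evidence is gathered, which is why answering at a fixed time instead of at a fixed certainty would be a small change.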


How about understanding? Watson does search text drawn from various Internet sources (like Wikipedia), but it couldn’t go online during competition. It had to read the text in advance and remember it in a generalized way, so that it could quickly access what it had learned from all different kinds of clues. Jeopardy! questions require understanding jokes and metaphors—what Hofstadter calls “analogical reasoning.” Being able to use the right word in the right context is the definition of understanding language, what linguists call semantics. If someone blind from birth said to you “I’ll look into it” or “See you later,” would you say they didn’t understand what they were saying?

Let’s go back to whether Google understands English. Cognitive scientist Bob French has written that no computer will ever pass the Turing test—a competition in which bots try to pass as people—because they don’t share human experience. Understanding the sentence “After the holidays, the scale becomes my enemy” is an apparently impossible problem for a simple computer and database. The key concept (weight) is never mentioned, and there is a metaphor to battle. But type the sentence into Bing, and several references to weight come up on the first page. (This used to work for Google, too—apparently intelligence doesn’t sell ads.)

Hofstadter dismisses Watson as a system that seems impressive until you “look at the details.” By this he means that at the level of computer code, “all” Watson is doing is number-crunching, pattern-matching, searching, etc.—not generating “true thought.” But true thought in humans is made up of small, unintelligent parts. (For more on this, see Dan Dennett.) No brain, or computer chip, “looks” intelligent in its details, under the microscope.

Humans are responsible for mixing and matching different aspects of intelligence in the technologies we develop, and making systems just like us often isn’t the right goal. Self-driving cars already make millions of decisions in an hour about speed and direction, even route, but there is no reason to build a car that determines its own ultimate destination, and every reason not to.

How we talk about AI matters. AI is likely to change our civilization as much as or more than any technology that's come before, even writing. Shaping that effect is one of our key challenges this century. But if we dismiss all progress in AI unless and until it meets an arbitrary, human-centric standard of behavior, we may overlook the power this new intelligence is giving all of its users, and particularly its owners. Rather than asking how to make our machines think just like humans, we should ask: In what ways are they intelligent, and what forms of intelligence, embodied in our technologies, would bring the most benefit to our civilization?

Future Tense is a partnership of Slate, New America, and Arizona State University.

Miles Brundage is a Ph.D. student in Human and Social Dimensions of Science and Technology at Arizona State University.

Joanna Bryson is a cognitive scientist who applies AI to modeling human and animal intelligence. She has been writing about the role of AI in society since 1998.
