Watson can win at Jeopardy, but how would it do at poker?

Feb. 15 2011 2:03 PM

Jeopardy, Schmeopardy

Why IBM's next target should be a machine that plays poker.

Ken Jennings and Brad Rutter compete on Jeopardy! against Watson.

Watson, the Jeopardy!-playing supercomputer, did give the right response. In a way.

The clue, in the category "Rhyme Time," was "A hit below the belt." Though this was just a practice round, held years before the public man-vs.-machine challenge that airs this week, Watson was dealing with authentic game-show material. Back in 1992, when human contestant Marty Brophy saw that $200 stumper in a broadcast episode, he correctly replied, "What is low blow?" The state-of-the-art AI, by contrast, scanned its elephantine database of documents and came up with something else: "What is wang bang?"

Watson's wang-bang days are behind it; the machine now competes with the best human players in the history of the game. IBM is rolling out the red carpet for its new star child with a big publicity campaign, but for all the hype, the company insists Jeopardy! is just a convenient exhibition. It has a much more important goal: teaching a machine to understand language written for humans, not computers. This is one of the holy grails of artificial-intelligence research, and a technology that would revolutionize any industry plagued by the fact that computers are still miserable at understanding what's known as "natural language."


The Watson project is a case where a relatively simple human game can teach a computer the skills and character it needs to succeed elsewhere in life. This is not true of every game that computers play well. No human can beat a machine that's programmed to play checkers perfectly, but the existence of masterful checkers software doesn't solve any classic problems in artificial intelligence. There may be some applications for the decision-making algorithms such programs use, but nothing close to the promise of Watson's post-game-show career.

Quite simply, the development of computer programs that can beat champion human players at checkers, or even chess, hasn't really changed the world outside the competitive gaming circuit. It didn't always look as if things would turn out this way. Decades before IBM's Deep Blue showed up and defeated chess grandmaster Garry Kasparov, we imagined that such an accomplishment would require a machine that could think creatively and exploit an opponent's particular tendencies and habits. But the emergence of massive processing power seems to have obviated the need for major innovations in AI. It's been 14 years since that famous chess match in New York City. As Kasparov wrote last year in the New York Review of Books:

Instead of a computer that thought and played chess like a human, with human creativity and intuition, [the AI crowd] got one that played like a machine, systematically evaluating 200 million possible moves on the chess board per second and winning with brute number-crunching force.

One Watson researcher I spoke with disputed this, saying the strategic element of the Deep Blue program was as important as its computational brawn. (One of that machine's original developers now works on the Watson team.) But it's safe to say that the algorithms that finally defeated Kasparov did not revolutionize the industry. Chess simply wasn't the right challenge for the computer scientists. There are, in fact, many other games at which computers are blisteringly incompetent, and whose mastery would herald tremendous breakthroughs in artificial intelligence. One of those games is poker.

It may be surprising to learn that it's much easier to build a computer that can win at Jeopardy! than one that cleans up at the poker table in real-world situations. The quiz show, after all, can draw from any subject on God's green earth, while card games are built from 52 discrete units that interact with presumably calculable probabilities. Good Texas Hold 'Em players can estimate the odds of completing this or that draw from the cards in their hand and on the table. So why can't a computer kick some ass at the casino?
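The intuition that the card probabilities are calculable is right, as far as it goes. To make it concrete, here's a minimal sketch (my own illustration; the card encoding, the function name, and the Monte Carlo approach are assumptions, not anything from IBM or the poker researchers) that estimates a classic quantity: the chance of completing a flush by the river when you hold four suited cards on the flop. The textbook answer is roughly 35 percent, and a few lines of simulation recover it.

```python
import random

RANKS = "23456789TJQKA"
SUITS = "shdc"
DECK = [r + s for r in RANKS for s in SUITS]

def flush_odds(hole, board, trials=100_000):
    """Estimate the chance of making a flush by the river, given two
    hole cards and a three-card flop, via Monte Carlo simulation."""
    seen = set(hole + board)
    stub = [c for c in DECK if c not in seen]  # the 47 unseen cards
    hits = 0
    for _ in range(trials):
        turn, river = random.sample(stub, 2)
        suits = [c[1] for c in hole + board + [turn, river]]
        if max(suits.count(s) for s in SUITS) >= 5:
            hits += 1
    return hits / trials

# Four spades on the flop: should print a value close to 0.35.
print(flush_odds(["As", "Ks"], ["2s", "7s", "Jh"]))
```

What this sketch can't capture, of course, is anything about the opponents, and that's exactly where the trouble starts.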

For two-player, limited-bet Hold 'Em, computers are already quite good. In 2008, a program called "Polaris" edged out a team of professional poker players with two wins, one loss, and a draw. Computers are easier to beat at "no-limit" Hold 'Em, where unrestricted bet sizes complicate the algorithms and change the optimal strategy, but researchers are confident this problem will be solved in due time. These two-player games are, in their way, predictable: because each is a zero-sum contest between two opponents, it has a guaranteed optimal "equilibrium" strategy that a machine can grind toward, as the sketch below illustrates. But when you add a third player to the game, all hell breaks loose.
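To see what makes the two-player case tractable, consider a toy illustration (mine, not the Polaris team's method): in any two-player zero-sum game, even a naive learning loop called "fictitious play," in which each side repeatedly best-responds to the other's historical mix of moves, provably converges to the equilibrium strategy. Here it is in Python, applied to rock-paper-scissors rather than poker for brevity.

```python
import numpy as np

# Row player's payoffs in rock-paper-scissors, a two-player zero-sum game.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

def fictitious_play(A, rounds=50_000):
    """Each side repeatedly best-responds to the other's empirical mix
    of past moves; in two-player zero-sum games the empirical mixes
    converge to a minimax (equilibrium) strategy."""
    n, m = A.shape
    row_counts, col_counts = np.zeros(n), np.zeros(m)
    row_counts[0] = col_counts[0] = 1  # arbitrary opening moves
    for _ in range(rounds):
        row_counts[np.argmax(A @ col_counts)] += 1  # row's best response
        col_counts[np.argmin(row_counts @ A)] += 1  # column's best response
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

row_mix, col_mix = fictitious_play(A)
print(row_mix, col_mix)  # both drift toward (1/3, 1/3, 1/3)
```

With a third player at the table, the game is no longer a zero-sum duel between any fixed pair, and no analogous convergence guarantee exists; that is the formal version of "all hell breaks loose."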
