Artificial Intelligence

E-mail debates of newsworthy topics.
May 21 1997 12:30 AM


Dear Bert,

       The discussion we had on The NewsHour was fun but, really, Bert, I think you have only yourself to blame for the widespread opinion that you said computers would never be able to play great chess. I went back and looked at the revised edition of your book What Computers Still Can't Do, and you go to considerable lengths to pooh-pooh not just the current prowess but the future prospects of computer chess. Yes, near the end (Page 259) of the book, you make the distinction between the closed nature of chess and the open-ended nature of embodied life. But I think it is fair to say this is your fallback position, which runs: Computer tic-tac-toe is trivial; computer chess is almost certainly impossible (at the world-champ level); and computer conversation is out of the question. Otherwise, why would you spend so much time bad-mouthing the (over-) confident declarations of Simon, et al.? And anyway, there are so many passages in your book that--aside from sea-lawyering provisos--commit you to a negative prophecy. For instance:

We shall soon see that given the limitations of digital computers, this [stagnation] is just what one would expect. (Page 85)

Work on game playing revealed the necessity [sic] of processing "information" which is not explicitly considered or excluded, that is, information on the fringes of consciousness. (Page 107)

In chess programs, for example, it is beginning to be clear that adding more and more specific bits of chess knowledge to plausible move generators, finally bogs down in too many ad hoc subroutines. ... What is needed is something which corresponds to the master's way of seeing the board as having promising and threatening areas. (Page 296)

        But let's let bygones be bygones, and consider the future; as one e-mail correspondent to me put it, you "wimped out" totally on my request for a test without "points for style." Why don't you fish or cut bait? If "understanding natural language" isn't required for passing the unrestricted Turing test (with, say, you as judge), name a tougher test--winning the Pulitzer Prize for fiction or poetry? Being head gag writer for Jay Leno? Take your pick, but put up or shut up! I'm not impressed by the claim that a computer won't really understand natural language (if it uses GOFAI--good old-fashioned artificial intelligence--techniques) until you couple it to a performance measure. After all, I once listened in utter fascination to a well-known philosopher as he insisted that neither Joseph Conrad nor Vladimir Nabokov understood English--it not being their native language. Surely (surely?) you wouldn't descend to such depths, would you? Just to be sure, I'd want a promise in advance.
       Otherwise you leave yourself in a particularly feckless rhetorical position: "Understanding is mighty important, all important, wonderful, wonderful--but I can't offhand think of a single stunt that requires it."
       By the way, I've been defending you quite vigorously and sincerely in AI circles in recent years--saying that the only thing wrong with your line has been the misguided (and misdirecting) tone of absolutism, as found in your book title. More specifically, your insisting that these were not just hard problems (you've been right about them being the tough problems all along) but insoluble problems (sword-in-the-stone problems, in the metaphor of Darwin's Dangerous Idea). As I said in a talk at an AAAI (American Association for Artificial Intelligence) workshop at MIT in November (and even had an overhead proclaiming it): "Just because Bert Dreyfus said it doesn't mean it's wrong."
       Now I think that you would be quite consistent (and wise, and honorable) to say that only a Cog-type, embodied robot has a chance of passing the unrestricted Turing test (and you could add that it would be a colossally difficult feat, dwarfing Deep Blue). Then you would be right about the limitations of GOFAI without going overboard. And you could admit that if any Cog of the future actually passed the unrestricted Turing test (with you as judge), you would declare it to be a genuine understander of natural language. Why not?

All the best,
Dan