Dialogues

Artificial Intelligence

Dear Dan,

       It’s good to hear from you, especially since the sound-bite discussion we had on The NewsHour was very unsatisfying and needs some sort of follow-up.
       I think you are right that the tone of my discussion has always suggested that symbolic AI is impossible, although I have always said that I had no in-principle argument to back me up, and all I could legitimately claim was that, based on the phenomenology of human skill acquisition, the GOFAI research program would almost surely fail. I was surprised by the results of the last match, as were many others. (There is a growing consensus among chess masters that Kasparov may well have thrown the match in order to promote a tiebreaker. We’ll know for sure whether Deep Blue is really better if there is a playoff.) But I have never said that such a chess victory was impossible or even “almost certainly impossible.” I said that a chess master looks at only a few hundred plausible moves at most, and that the AI people would not be able to make a program, as they were trying to do in the 60s and 70s, that played chess by simulating this ability. I still hold that nothing I wrote or said on the subject of chess was wrong. The question of massive brute-force calculation as a way of making game-playing programs was not part of the discussion, and heuristic programs without brute force did seem to need, and still seem to need, more than explicit facts and rules to play better than amateur-level chess. But I grant you that, given my views, I had no right to talk of necessity.
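       (A toy sketch may make the brute-force point concrete. What follows is a minimal, illustrative minimax search over an invented game tree, written in Python; it is nothing like Deep Blue’s actual program, which added alpha-beta pruning, a handcrafted evaluation function, and special-purpose hardware. The point is only that the machine exhaustively scores every line of play, where a master prunes at once to a handful of plausible moves.)

           # Brute-force minimax: exhaustively evaluate every line of play.
           # The "game" here is an abstract tree of scored leaves, not real chess.
           def minimax(node, maximizing):
               if isinstance(node, (int, float)):   # leaf: a static evaluation
                   return node
               scores = [minimax(child, not maximizing) for child in node]
               return max(scores) if maximizing else min(scores)

           # A tiny two-ply tree, invented for the example: the machine examines
           # every branch; a human master would consider only the plausible ones.
           tree = [[3, 12], [2, 4], [14, 1]]
           print(minimax(tree, maximizing=True))    # -> 3, the best guaranteed outcome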
       Let’s turn to the language issue, which is more important. There we seem to have had a misunderstanding that another 30 seconds of air time would have allowed us to straighten out. You seem to think I was asking, like John Searle in his Chinese Room argument, that the computer do more than behave as if it understood natural language: that it really understand, whatever that turns out to mean. But I have never asked that. I have always been willing to play by the rules of the unrestricted Turing test (that is, if a computer could be programmed so that, most of the time when I was conversing with it by teletype, I could not tell whether I was talking with a computer or a human being, I would count that as thinking and admit the success of AI). It was on those grounds that I proposed natural language understanding as a challenge for Cog. Several e-mail correspondents who know my work have told me that they understood me that way. I would be happy to say in print that I sign off on everything in your last paragraph.
       A computer scientist here at Berkeley pointed out to me that, in the spirit of the current discussion, I could equally have asked that Cog be given a camera to see the cards and the players and then be asked to play high-level poker. (Of course, given the stochastic nature of poker, a very large sample would be needed to evaluate performance; the sketch below suggests just how large.) Do you like that goal post better? The two goals, playing poker and conversing in ordinary English, seem equally hard to me. Both presuppose that the computer can be given a knowledge of human psychology of the sort that we understand by being human, but that we cannot fully articulate.
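       (Again a toy sketch, with invented numbers: suppose a strong player’s true edge is 0.05 big bets per hand and the per-hand standard deviation is about 5 big bets, both figures assumed purely for illustration. Then the number of hands needed before the standard error of the observed win rate falls below the edge itself is on the order of tens of thousands.)

           # Rough sample size for judging poker skill at 95% confidence.
           # All numbers are assumptions for illustration, not measurements.
           edge = 0.05    # assumed true win rate, in big bets per hand
           sigma = 5.0    # assumed per-hand standard deviation, in big bets
           z = 1.96       # two-sided 95% confidence

           # Require the standard error z*sigma/sqrt(n) to fall below the edge:
           # n >= (z * sigma / edge)^2
           n = (z * sigma / edge) ** 2
           print(f"{n:,.0f} hands")   # about 38,000 hands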
       Where should we go from here? Have you seen Drew McDermott’s op-ed piece in the New York Times of May 14th? I suspect we would agree that Drew is wrong to think human beings might be using unconscious brute-force calculation, but I am not sure what you think these days. Do you agree with me that the success of Deep Blue in a formal domain like chess, where relevance is settled beforehand and brute-force calculation is therefore possible, shows nothing whatsoever about the possibility of GOFAI? Why do you think GOFAI is a plausible goal for a research program? Or do you? Have you written a paper on Cog? If so, I would love to see it. I’m not clear whether Cog will be using neural-net simulation, symbolic representation, or both.

Regards,
Bert