Artificial Intelligence

E-mail debates of newsworthy topics.
May 29, 1997, 3:30 AM

Dear Bert,

       What a pleasure to see some actual progress, for a change, in a philosophical "debate." I think the key is that while you are sure that there is something importantly different about the internal goings-on in a person and in a computer/robot, you see that this difference of the innards has to be manifest somehow in ultimate behavioral competence in order to be ... important! You and I agree on this point, which smacks of evil behaviorism (and evil operationalism, and evil verificationism, and who knows what else evil--communism, fascism, Zoroastrianism?) to many of those who have been brainwashed by some of the ideologues of the Cognitive Revolution, but the alternatives all have the drawback of deflecting attention from the interesting issue: whence cometh the mental power of a mind? What is amazing about people is what they can do, in the hard world, in real time.
       In Darwin's Dangerous Idea, in the discussion of Roger Penrose's arguments against strong AI, I call this a recognition of the importance of a "sword in the stone" test, and you and Penrose and I agree with Turing--and Descartes, of course--that unless you can describe an independent test of mental power, you are just waffling. Thus suppose some philosopher--not you, but we can think of others--were to declare that one thing a robot could never do is make a hand-knit sweater. But then when the roboticists produce their triumph, they encounter the retort--"But this isn't hand-knit, because those aren't hands. Oh, sure, they have five so-called 'fingers,' and they dangle on the so-called wrists of two so-called arms, but they aren't made from the right stuff, or they don't have the right history, to count as real hands." This move, the "it doesn't count" move, is a loser.
       We both agree with Descartes and Turing that passing the unrestricted Turing test (with or without robotic add-ons such as eyes and hands) is an overwhelmingly difficult task, since it indirectly puts exactly the right strain on the innards: there is simply no way--no physically possible way--of passing it by "cheating." But the test otherwise does not foreclose on "non-organic" or "artifactual" or "symbol-manipulating" or "syntactic" paths. Let's see whether such a program can do such a thing.
       (I argue for this view in a 1985 paper of mine, "Can Machines Think?," which is going to appear next year in a collection of mine, Brainchildren, with a postscript describing the Loebner Prize competitions of a few years ago.)
       That then leaves just the right empirical question open: is it possible to get human-level behavior out of a computer, and if so, will its methods show us anything interesting about human intelligence?
       You ask me whether I disagree with Drew McDermott. I think he is right about something, but I would put it somewhat differently. Kasparov's brain is a parallel-processing device composed of more than ten billion little robots. Neurons, like every other cell in a body, are robots, and the organized activity of ten billion little unthinking, uncomprehending robots IS a form of brute-force computing, and surely intuition IS nothing other than such an emergent product. As I said in my very first published paper back in 1968 ("Machine Traces and Protocol Statements," Behavioral Science, 13, 155-61, March 1968)--my rebuttal of your notorious RAND memo ("Alchemy and Artificial Intelligence")--when we say we do something "by intuition," we are saying we don't know how we do it, and that is consistent with any story at all. When we discover how "intuitive" thinking is accomplished, we will probably be surprised by many of the details--and we will probably feel that mixture of amusement and letdown that often accompanies learning how a magic trick is done. I think it is quite clear that the answer will be comprehensible in computational terms--a massively parallel dynamical competitive process in which the "magic" gets replaced, one way or another, with a lot of mindless drudgery. But I do agree with you that that process will almost certainly not look much like the brute-force search processes of Deep Blue. That's precisely why Deep Blue is not very interesting as AI.
       Yes, I've written a paper on Cog. It's a talk I gave at the Royal Society back in 1994. And in other talks on Cog to AI groups I have stressed the fact that when we get to the point of getting Cog to engage in human-only or "adult" cognitive activities, we will not solve the problem by splicing a GOFAI system onto the underlying neural-net-style architecture. When I speak about this, I use an overhead that always gets a laugh. It's a parody of the "Intel inside" logo, with "GOFAI" instead of "Intel" and a big X through it.
       As for your suggested variations on the unrestricted Turing test, I am not convinced they add anything, since there's nothing to prevent you as judge in the unrestricted Turing test from playing a game--any game you can play via typing, in effect--and bluffing and other psychological ploys are readily available to the resourceful judge. Having human-acuity vision and the dexterity to shuffle cards is presumably not a very central feature of genuine human consciousness. In any event, we agree on the fact (but I am sure it's "just" an empirical fact, not a conceptual or constitutive fact) that human-level intelligence depends on competences that are not practically achievable by using "fully articulate" representations of a GOFAI sort.
       Since we both agree that getting a robot to pass the unrestricted Turing test is incredibly more difficult than getting it to be world chess champion, perhaps the next constructive move would be to suggest some intuitively much easier test that would still knock your socks off, and is a little closer (in my opinion, at least) to being implemented in a decade or so. Since Cog is supposed to go through a humanoid infancy, you might propose some variation on a developmental milestone in the first two or three years of a child's life, for instance. It is commonly said that the things that kids can do easily are the things it is hardest for AI to do. So let's have some examples--perhaps something one-year-olds can do easily, two-year-olds can do easily, and so forth. (Just remember: Cog is paraplegic, and can't walk around, but it is designed with two arms, two eyes, two ears, and can touch things.)

Best wishes,
Dan