Dialogues

Artificial Intelligence

Dear Dan,

       It’s a real pleasure to find that there is so much we agree on that we can focus on the hard and interesting philosophical issues that, to my knowledge, defenders of AI have not yet faced, let alone answered. Happily, as you say, Cog gives us both a chance to sharpen our philosophical intuitions and to test our conclusions.
       I have been saying since 1972 (and have spelled out in detail in the preface to the 3rd edition of What Computers Still Can’t Do) that computers will have to be embodied as we are if they are to interact with us and thus be counted as intelligent by our standards. Given this view, does Cog have a body enough like ours to have at least a modicum of human intelligence? Despite Cog’s arms, etc., I don’t think Cog will have any human intelligence at all. What, then, would it lack? It need not lack intuition. The billions of dumb robot neurons in our brain, properly organized and working together, somehow, I agree, manifest expert intuition. (Although I don’t agree with you that pointing out that we have intuition is consistent with just any story about brain operation. In Mind Over Machine [The Free Press, 1988] my brother Stuart and I argue that intuitive skills are inconsistent with the GOFAI approach. But that is not at issue here, since you are not proposing to use symbolic representations in Cog.) What I want to argue here is that Cog cannot manifest human-like emotions. According to neuroscience, emotions depend upon (although they are much more than) chemical changes in the brain. These changes are due to hormones, adrenaline, and the like. It may not be important that Cog’s brain is silicon and ours is protein, but it might be crucial that ours is wet and Cog’s is dry.
       But the GOFAI believer has a ready answer to such an objection. If an artificial intelligence is to pass the Turing test, AI researchers will just have to program into it all the facts about human emotions that it needs to know–facts the baby doesn’t need to know because it is made of the relevant wetware and so can be socialized into the emotions shared by members of its culture. Facts like: being loved makes us happy; disapproval leads to shame or guilt; insults make us angry; we perform less well when we are tired, but performance degrades differently for each domain and depending on how interested we are. As you say, putting in all these sorts of facts was Doug Lenat’s GOFAI approach in CYC. But CYC has not fulfilled any of the promises made 12 years ago about how it would be able to teach itself in 10 years’ time, and all but GOFAI fanatics agree that the CYC project is a failure. Having you as an ally against GOFAI is so important to me that I will quote from your last posting. “We agree on the fact … that human level intelligence depends on competencies that are not practically achievable by using the ‘fully articulate’ representations of a GOFAI sort.” We agree too that this is an empirical, not an in-principle, point. But I don’t see how you expect to put emotional responses into Cog’s neural nets or get Cog to learn them, and I don’t see what other resources you have. So my questions to you are: (1) do you have any in-principle argument that a simulated neural network with emotions must be possible? And (2) how do you expect to get these typically human emotional responses into Cog?
       I think that you would be right if you claimed that some sort of intelligence doesn’t require these human emotions. All you need are pain and pleasure receptors, fight-or-flight responses, and simple feedback systems that reinforce success in learning, penalize errors, and reward novelty. You can build all of this machinery into Cog. But then you would be making an artificial “life-form”–certainly not a being that could share our form of life, where caring about who we are and getting others to recognize us on our terms is crucially important. You seem to reduce your position to absurdity when, in the Cog paper, you say that “while we are at it, we might as well try to make Cog crave human praise and company and even exhibit a sense of humor.” Is that meant as a joke? If you were to tell me that the Cog project is simply misleadingly named, that it is really an attempt to make a BUG–an artificial life-form with a humanoid body–I’d be impressed with the robotic challenges afforded by such a project. In fact, I thought the attraction of Brooks’ approach was to start with insect-like “animates” and slowly try to work one’s way up to human beings. One might then learn a lot about the difficulties in mechanizing the “right” receptors and emotional responses for each particular new species. (This issue is touched on in the discussion of reinforcement learning on p. xlv of What Computers Still Can’t Do.) I was stunned when I read your Cog paper and discovered that Brooks’ research team is now attempting to jump right from bugs to us. Perhaps I have misunderstood, but you do say that you expect Cog to “engage in human-only or ‘adult’ cognitive activities.”
       To help make my point I will accept your challenge and conclude by suggesting some early developmental examples of human intelligence, Cog’s performance of which would knock my socks off. As you probably know, and as my informant, Alison Gopnik, a developmental psychologist at U.C. Berkeley, confirms, children are born able to imitate facial expressions and gestures. By 14 months they imitate the goals of a person trying to fix some broken toy, for example, but will not imitate a machine “trying” to do the same thing. I remember that my kids imitated not only my gestures, mannerisms, etc.; they imitated my style. Will Cog imitate other Cogs, or people? I would be impressed either way. By 18 months, infants already demonstrate a rudimentary sense of shame and guilt, which is quite different from fear of punishment. By 3, if I remember rightly, my kids could talk about emotions and give psychological explanations. They could understand (that is, answer questions about) stories whose sense depended on the fact that people can be envious of each other, admire each other, etc. By 4, they could understand trickery, bluffs, etc. Any of these performances, learned by a neural-net Cog, would blow me away. I haven’t moved a goal post yet, so you can count on me not to move any of these. If we are going to take goal posts seriously, when would you be willing to admit that Cog had failed to achieve any of the above, and that that counts as a failure for AI?
       The important point, as I argue in What Computers Still Can’t Do, is not just that such sensibilities are essential if an android is to inhabit our world and learn from us, but that having basic embodied experiences such as effort, fatigue, frustration, triumph, and so forth plays a crucial role in determining what counts as relevant for us and what situations count as similar to other situations. Thus such capacities underlie learning from experience and so form the basis for “adult” cognitive abilities.

Regards,
Bert