Dialogues

Artificial Intelligence

Dear Dan,

       I’m glad to hear that the Cog team is thinking hard about questions like how to simulate emotions in a simulated neural network. I’m also not surprised to learn that you do not think you are clear about all the details. But philosophical discussion becomes difficult when you don’t tell us the limits on Cog’s cognitive machinery. For any problem I raise concerning the difficulty of doing X, you can always say you are working on a virtual X. So we are back to our opposed intuitions about how hard such a project is. I don’t claim so far to have brought up any “insuperable problems,” but I think you grossly underestimate how hard the problems are. Your idea of virtual emotions reminds me of the old GOFAI researchers who had flow charts with boxes named “Understanding” and “Perception,” but still had to work out the details of what went on in each box. Just how would the simulated neuromodulators work to produce artificial emotions? They couldn’t just change the thresholds of the simulated neurons. That might work for fear and lust, but most emotions are not so straightforwardly chemical. The chemical effects are mediated by meaning. As you recognize in insisting that Cog be socialized, emotions such as shame, guilt, and love require an understanding of public narratives and exemplars, which must be picked up not just as information but by imitating the style of people’s behavior as they assume various social roles. Just how is Cog supposed to pick up the style of its caretakers? Which gender is Cog? Does Cog come to recognize its virtual gender and others of the same gender? How is Cog’s imitation supposed to modify its net?
       Also, emotions require that what happens to me and in my community matters to me. This is in turn tied up with my identity, the sense I have of who I am, so that for me courage is important but patience isn’t, while for someone else these priorities might be reversed. The stand I take on who I am is not captured in the higher-order intentions presupposed in bluffing, etc. It is a sense of what is ultimately worthy for me. (See Charles Taylor, “Responsibility for Self,” in The Identities of Persons, ed. Amelie Oksenberg Rorty, University of California Press, 1976.) We know our brain, interacting with other brains, does something that enables people to take stands on who they are, but I don’t think we have a clue as to how this could be done in the neural nets that make up Cog’s brain.
       You might well answer that, since I have accepted the Turing test, I should not be concerned with how Cog feels or what it senses, but this leads to a whole bunch of other problems. Suppose we grant that we can fix the net to respond to virtual neuromodulators and so act as if afraid or, although I don’t see how, ashamed or in love. To pass the Turing test it is not sufficient that Cog respond with appropriate emotional behavior (for example, sending back angry messages when I insult it); Cog must also be able to answer questions about its emotional life. It must be able to formulate propositions like “insults make me angry unless I am just too tired to care, but my jealousy increases with my exhaustion.” We agree one can’t just store such factual knowledge about our emotional and embodied life in a huge database like Lenat’s CYC, since we can always mine out new explicit knowledge. Somehow, Cog will have to be able to transform its virtual hormones, etc., and the responses they cause, into the required explicit knowledge. As I understand it, neural-net modelers at present haven’t a clue how to do that sort of thing. How do you picture the conversion of Cog’s nonpropositional know-how into propositional knowing-that? Is there any reason to believe that neural nets are the right causal mechanism for explaining propositional knowledge?
       And there is a final problem that may well be insuperable. People and cultures can change their worlds. This happens in people’s lives when they have a change of identity, for example, a religious conversion that radically “re-gestalts” what they consider worthy. It happens too when cultures change, as when our culture changed from the Greek to the Christian world, and then to the modern world. Charles Spinosa, Fernando Flores, and I have just published a book called Disclosing New Worlds in which we describe three different ways this kind of change occurs (we call them reconfiguring, cross-appropriating, and articulating), and we argue that these types of radical change are normally brought about by entrepreneurs, successful citizens’ action groups, and cultural figures like Martin Luther King Jr., respectively. Moreover, we hold that opening up or producing new worlds is a capacity that only human beings exhibit. Even if Cog could evolve from BUG to APE, which I strongly doubt, I don’t see how it could override and/or transform the innate feature detectors and the innate similarity space that enabled it to start interacting with the world in the first place. So it could not open up a new world in which what counted as significant features and similarities was totally changed. If, as my co-authors and I argue, disclosing new worlds is the most important thing human beings do, we have here the ultimate, immovable goalpost. Any of the developmental achievements I mentioned in my last letter would certainly be enough to bowl me over and change my view of what computers can do, but such behavior, as impressive as it would surely be, would still fall short of being human.

All the best,
Bert