Dialogues

Artificial Intelligence

Dear Bert,

       Good. We’re getting closer and closer to turning this into a straightforward empirical disagreement about which we each have our strongly held hunches. For instance, you note that Cog’s silicon brain may not disqualify it from human intelligence, but “it might be crucial that ours is wet and Cog’s is dry.” I suppose it might be, but why?
       In Kinds of Minds (Basic Books, 1996), I discuss this in some detail, noting that the paracrine and endocrine systems in which the brain soaks its parts–the neuromodulators and hormones and the like–do indeed play crucial mediating roles. But so far as I can see, the role is always that of keys-hunting-for-locks, a process accomplished by differential diffusion. The Cog team has been planning on installing “virtual hormones” and “virtual neuromodulators” (and a virtual amygdala to help orchestrate them) as needed. How this might be accomplished is not yet clear in all its details, but I see no reason to think there are any insuperable problems. There have been virtual fluids and other stuff floating around in computer science for years. Think of Donald Knuth’s elegant idea (in TeX) of virtual glue for use in formatting text for fine printing–you put a little dab on each word in a string, depending on its length (or weight), and then you (virtually) stretch the string of words to fit the left and right margins, letting the virtual glue, which has defined elastic properties, stretch just enough to make a properly wide space between each pair of words. Or think of Doug Hofstadter’s JUMBO program, with all those codelets floating around like enzymes, latching onto this and that, snipping here, joining there. Virtual neuromodulators diffusing by variable broadcast may be computationally expensive to simulate on a large scale, but it is a well-explored idea, not a stumper.
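       To make the glue idea concrete, here is a minimal sketch of line justification in the Knuth spirit (my own toy illustration, not Knuth’s actual line-breaking algorithm, and nothing from the Cog code base): each word gets a dab of glue whose stretchability depends on the word’s length, and the leftover space on the line is distributed in proportion to that stretchability.

    # A toy illustration of "virtual glue": each inter-word gap has a natural
    # width plus a stretchability proportional to the preceding word's length,
    # and the line's leftover space is shared out in proportion to stretch.
    # (This toy version only stretches; real TeX glue can shrink as well.)

    def justify_line(word_widths, target_width, natural_space=1.0):
        gaps = word_widths[:-1]          # one dab of glue after each word but the last
        if not gaps:
            return []
        stretches = [0.25 * w for w in gaps]
        natural_width = sum(word_widths) + len(gaps) * natural_space
        slack = target_width - natural_width          # extra space to absorb
        total_stretch = sum(stretches)
        ratio = slack / total_stretch if total_stretch else 0.0
        return [natural_space + s * ratio for s in stretches]

    # Example: words of widths 4, 2 and 5 set on a line 15 units wide.
    print(justify_line([4, 2, 5], 15))   # the two gaps widen so the line fills exactly 15

Real TeX also chooses the break points, of course; the point here is only that “virtual elastic stuff” is a perfectly ordinary computational object.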
       You see readily enough that Cog can have the basic animal rig for appreciating the carrot and the stick, but wonder if this could be the foundation for “typically human emotional responses.” Well, it’s a start, so what makes you think the problems would be insuperable? My suggestion that Cog should have a sense of humor was not at all meant as a joke. How on earth would I go about installing such? Well, I’d build in several cravings that are not much in evidence in other species, such as a relatively insatiable appetite for novelty, with a bias in favor of complex novelty, and some built-in gregariousness (hey, even Descartes’ automata-sheep had that), and some particular fears. And I’d let these do some of the driving of Cog’s analogy-appreciation faculties (which are certainly not “modules” in our way of thinking!)–along the lines inspired by Hofstadter and Melanie Mitchell and Bob French.
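       At the crudest level, here is what I mean by letting the cravings do some of the driving (a toy sketch of my own, not anything actually installed in Cog): candidate activities are scored by a handful of built-in drives, with the novelty drive biased toward complex novelty, and the winner gets pursued.

    # A toy sketch of drive-weighted selection (an illustration only, not
    # Cog's architecture): each candidate activity is scored against a few
    # built-in cravings, and the novelty craving prefers *complex* novelty.

    DRIVE_WEIGHTS = {"novelty": 1.0, "gregariousness": 0.6, "fear": -0.8}

    def drive_score(candidate):
        # candidate: dict with 'novelty', 'complexity', 'social', 'threat' in [0, 1]
        complex_novelty = candidate["novelty"] * candidate["complexity"]
        return (DRIVE_WEIGHTS["novelty"] * complex_novelty
                + DRIVE_WEIGHTS["gregariousness"] * candidate["social"]
                + DRIVE_WEIGHTS["fear"] * candidate["threat"])

    def choose_activity(candidates):
        # Pursue whichever candidate best satisfies the built-in cravings.
        return max(candidates, key=drive_score)

    # Example: a dull routine loses to a complex, novel, sociable game.
    print(choose_activity([
        {"name": "stare at wall", "novelty": 0.1, "complexity": 0.1, "social": 0.0, "threat": 0.0},
        {"name": "new game with a visitor", "novelty": 0.9, "complexity": 0.8, "social": 0.9, "threat": 0.1},
    ])["name"])

None of that is a sense of humor, of course; it only shows the kind of built-in bias I have in mind before the analogy-hungry machinery takes over.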
       I agree that it is startling to see Rod Brooks leap directly from “insects” to humanoids, but as I have always said, good AI is opportunistic, weaving back and forth between bizarre ambition and equally bizarre modesty (making toy problems comically small when necessary). We’ll just have to see if his eagerness to try his hand at the big prize without spending a few decades more of apprenticeship on artificial iguanas and tree sloths will pay off.
       So far we’ve been working on the most fundamental equipment of newborn infants–hand-eye coordination, location of sounds, distinguishing interesting things in the visual world–so we haven’t seriously begun work on specifically human talents. But I like your developmental milestones. Imitation has been a theme in the recent work of Maja Mataric and some others involved in Cog. Shame is a tough one, I grant you, but it’s probably a fair challenge–especially since you grant that what the 18-month-old human exhibits is “a rudimentary sense of shame,” not existential torment or the sort of Weltschmerz only neurotic Viennese chainsmokers can enjoy. Your addition of higher-order intentionality–understanding trickery, bluffs, and the like–has already been much discussed among us. I put a typescript copy of Simon Baron-Cohen’s book, Mindblindness (MIT Press, 1995), in circulation in the lab several years ago; it has a handy list of suggested mechanisms that might rescue Cog from autism. (An automaton doesn’t have to be autistic, but it will be unless rather special provisions are made for it.) There are lots more ideas along those lines to be explored as well.
       That isn’t meant to be a complete answer to your challenges, but just a few hints about how we’ve been thinking. You will see that your concerns have not gone unconsidered by us. I can’t say now when I would be willing to admit that Cog had failed to achieve any of this. Cog needs major funding; so far it has been worked on in spare time only, rather than as a huge, well-funded effort like the Japanese humanoid project. We certainly agree with you that it will take “effort, fatigue, frustration, triumph, and so forth”–not just ours, but Cog’s–to make a humanoid intelligence. The sooner we get the funding, the quicker you can win your bet, so help us out! One way or the other, we’ll learn what no amount of philosophical argument by itself could discover.

All the best,
Dan