Dialogues

Artificial Intelligence

Dear Bert,

       You say we “grossly underestimate how hard the problems are,” and you may be right. Time will tell. But whatever mistakes we’re making, we’re not making the old “draw a box around each residual problem and call it a system in a flowchart” mistake.
       First things first. You have a knack for drawing a forbidding map of the hard problems that lie in the distant future, but long before we address Charles Taylor’s concerns about What Matters for Me, for instance, we’d like Cog to pass the Gallup test: self-recognition in mirrors, or watching itself on closed-circuit TV. As Jackie Gibson’s experiments showed long ago, even very young infants readily distinguish a closed-circuit video of their own waggling legs from a video of some other baby’s waggling legs. This is something Cog had better be able to do (lacking legs, it will have to recognize something else as a part of its own body in action).
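
       (The contingency idea here is concrete enough to sketch in code. The toy Python below is purely illustrative — a hypothetical sketch, assuming nothing about Cog's actual architecture; every name and number in it is made up. It flags a video feed as "self" when the motion in it correlates, at some small lag, with the motor commands being issued.)

       # A minimal, hypothetical sketch of the contingency test described above:
       # an agent counts a video feed as "its own body" when the motion it sees
       # tracks the motor commands it is issuing. All names and thresholds here
       # are illustrative assumptions, not anything from the Cog project.

       import random

       def correlation(xs, ys):
           """Pearson correlation between two equal-length sequences."""
           n = len(xs)
           mx, my = sum(xs) / n, sum(ys) / n
           cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
           vx = sum((x - mx) ** 2 for x in xs) ** 0.5
           vy = sum((y - my) ** 2 for y in ys) ** 0.5
           return cov / (vx * vy) if vx and vy else 0.0

       def contingency(feed, commands, max_lag=5):
           """Best correlation between commands and feed over a range of lags."""
           return max(correlation(commands[:len(commands) - lag], feed[lag:])
                      for lag in range(max_lag + 1))

       random.seed(0)

       # Motor commands the agent issues to its arm (waggle amplitude per tick).
       commands = [random.uniform(-1, 1) for _ in range(200)]

       # Feed A: closed-circuit view of the agent's own arm; its motion tracks
       # the commands, with sensor noise and a small processing delay.
       delay = 2
       own_feed = [0.0] * delay + [c + random.gauss(0, 0.2) for c in commands[:-delay]]

       # Feed B: video of some other agent's arm; motion unrelated to the commands.
       other_feed = [random.uniform(-1, 1) for _ in range(200)]

       for name, feed in [("feed A", own_feed), ("feed B", other_feed)]:
           score = contingency(feed, commands)
           verdict = "my own body" if score > 0.5 else "someone else's"
           print(f"{name}: contingency {score:.2f} -> {verdict}")

       (Running this prints a high contingency score for feed A and a near-zero score for feed B — the same discrimination Gibson's infants make, stripped to its statistical skeleton.)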
       You are right: “somehow Cog will have to be able to transform its virtual hormones, etc., and the responses they cause into the required explicit knowledge.” In other words, infant Cog is eventually going to have to be able to say “I’m afraid” or “I’m bored” and mean it. Eventually it should know its own emotional state in something like the way we know our own. But again, first things first; it will be hard enough to get Cog to be bored or frightened by something happening around it. I don’t think what you call the conversion from “nonpropositional know-how” into “propositional knowing-that” is well described (Bert, you sound so East Pole when you say that!), and perhaps for that reason I don’t see it as posing quite the problem you see. I agree there’s a bunch of tough problems lurking in the vicinity, however.
       And as for “disclosing new worlds,” your “final problem that may well be insuperable” is as good a candidate as any I’ve heard for the ultimate immovable goal post, but it’s so special I wonder how much people would care if it were (or seemed to be) off-limits. (“Yes, folks, these new humanoids make great caretakers for your kids – more enjoyable and reliable than anybody you could hire, and better teachers, too. Kids – and grownups too – are finding them to be worthy intellectual companions … but, of course, they aren’t gonna disclose new worlds.”) Well, so what? I didn’t say you would want to marry one.
       And besides, at this point neither of us can say anything very clear-cut and useful about whether this talent is some emergent property of the lesser talents we might succeed in designing. I’m quite content to leave these issues in the mists indefinitely, with you having expressed your hunch and me having expressed mine. Meanwhile, you’ve acknowledged plenty of nearby goal posts that we may live to see scored upon, and if, on the other hand, all of them defeat us, you can say, “I told you so” and I’ll still be around to say, “So you did.”

Best wishes,
Dan