Dialogues

Artificial Intelligence

Dear Dan,

       I’m puzzled by your saying that Cog might not be able to disclose new worlds and then adding: “Well, so what?” As I remember it, the starting point of our discussion way back on the NewsHour with Jim Lehrer was whether Cog could pass the Turing test. To pass the test it is not necessary that Cog be a discloser of new cultural worlds, but as Disclosing New Worlds points out, world-change happens in most people’s lives, and those who don’t initiate it have enough experience of it to understand it well, not simply as alien onlookers. They live in a tradition that includes, for instance, born-again Christians from St. Augustine to Eldridge Cleaver. They have seen, in movies and in real life, men go from being swinging bachelors to solid husbands and fathers, and women go from devoting themselves to their careers to becoming full-time mothers. On a cultural level, people can understand that Henry Ford helped change our world from one in which people felt they had to govern animals and their own desires to a world in which people control their cars, their desires, and even birth. Many can remember how Martin Luther King Jr. changed the world of race relations and how MADD changed not just the drinking laws but the way people took responsibility for their relaxation. Moreover, in our pluralistic world, most people realize that there are many different cultures, each with its own understanding of which kinds of similarities are significant and which aren’t. An intelligence that could have no understanding of such things, or only a distant spectator’s, might still be, like a Saint Bernard, fun and reliable with the kids, but it would not be much good as a teacher and certainly no “worthy intellectual companion.” It would be easy to trap it in the Turing test by asking it about these kinds of radical change.
       I grant you it is logically possible that such a disclosing capacity might “emerge” from the lesser capacities one might be able to design. That’s why I didn’t introduce disclosing as an absolutely unachievable goal. But I see no reason to think the capacity to open new worlds will emerge from the sort of neural nets making up Cog’s brain. It has not emerged in chimps. Indeed, higher mammals have been around for millions of years and, as far as we know, the capacity to change one’s world emerged only once, along with language, art, institutions, and all the capacities that seem to be uniquely human.
       In any case, I agree that you will have your hands full giving Cog the simple developmental capacities that make animal coping possible. I can’t tell just how hard you think going from BUG to APE is going to be, since you wisely refrain from setting any time limit. Still, while you work on this challenging project, I hope that you establish a new scientific approach in AI by reporting your failures as well as your successes. Then, no matter what finally results, the effort will have been worthwhile.

All the best,
Bert