Future Tense

The Wrong Cognitive Measuring Stick

Why it’s a mistake to compare A.I. with human intelligence.

Evolved intelligence reflects the compromises and shortcuts necessary for a low-bandwidth ape brain to function in an incredibly complex world.

Photo illustration by Sofya Levina. Image by lucky_spark/Thinkstock

In 1950, the brilliant mathematician and cryptographer Alan Turing began his seminal paper “Computing Machinery and Intelligence” with a simple query: “I propose to consider the question, ‘Can machines think?’ ” It is a question that still resonates today, because it is essentially incoherent and thus unanswerable.

Turing himself quickly turned to a more pragmatic approach, proposing the now-famous Turing test. While there are many versions today, they all essentially involve a human interlocutor who receives two information feeds, one from another human and one from the device or software being tested for intelligence. The interlocutor does not know which feed comes from the human, and so asks questions of, or converses with, both. If, after a fixed period of time, the interlocutor cannot tell which feed comes from which entity, the machine is adjudged intelligent.
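The protocol is simple enough to spell out in code. Below is a minimal sketch in Python; the ask, guess, human_reply, and machine_reply callables are hypothetical placeholders for the judge and the two participants, not any real chatbot interface, and the sketch illustrates only the structure of the test, not any particular implementation.

```python
import random

def run_turing_test(ask, guess, human_reply, machine_reply, rounds=10):
    """Minimal sketch of a Turing-style test: a judge converses with two
    anonymous feeds, one human and one machine, then guesses which is which."""
    # Hide the human and the machine behind anonymous feeds labeled A and B.
    feeds = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        feeds["A"], feeds["B"] = feeds["B"], feeds["A"]

    transcript = []
    for i in range(rounds):
        question = ask(i, transcript)  # the judge poses a question
        # Both feeds answer; the judge sees the answers but not who is behind each label.
        answers = {label: reply(question) for label, reply in feeds.items()}
        transcript.append((question, answers))

    machine_label = "A" if feeds["A"] is machine_reply else "B"
    # The machine "passes" if, after the fixed number of rounds,
    # the judge cannot pick out which feed it was.
    return guess(transcript) != machine_label
```

A single trial, of course, proves little either way; the test only becomes meaningful when many judges, over many sessions, guess no better than chance.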

Clearly, this is a pragmatic, rough-and-ready test. For example, in one case in Russia, a number of adolescent males sent money and gifts to a particularly charming but technically primitive fembot on their chatboard, which thus passed the Turing test to the extent that they believed it was human. Of course, this may say more about adolescent males and hormones than about A.I., but that’s exactly why the Turing test is less than definitive. Nonetheless, while it has been criticized on a number of grounds, it is still used. Why?

Turing himself provides a hint of the answer, noting that his deceptively simple question requires a robust definition of “think”; without one, the question itself is meaningless. And there’s the rub, for it is striking that, amid all the staggering advances in neuroscience, cognitive psychology, biotechnology, and machine intelligence, most of the terms critical to understanding intelligence cannot be rigorously defined. We may use words such as intelligence, self, consciousness, think, and free will every day in casual speech, but they are certainly not understood well enough to be useful in answering even simple queries. Indeed, Turing’s original question—vital as it is in a world where some, like Elon Musk, fear that A.I. poses an existential risk—remains unanswered.

Brilliant as the Turing test is, its popularity has had one pernicious effect: It has reinforced for many people the comfortable illusion that human intelligence is a meaningful measure of intelligence in general. This is understandable: If you can’t define think or intelligence, refer back to human intelligence as the gold standard. But, of course, human cognition is itself remarkably full of kludges and shortcuts. This is because it is evolved intelligence, or E.I.—which means it reflects the compromises and shortcuts necessary for a low-bandwidth ape brain to function in an incredibly complex world.

Professionals in psychology, behavioral economics, and marketing and advertising have over the past decades identified many examples of how the brain, with limited resources, has created endogenous rules of thumb that enable it to better manage complexity. Why are expensive flat-panel systems put at the front of stores selling electronics? Because doing so “frames” the purchase decision in terms of the high-cost system, making other systems appear inexpensive by comparison. “Confirmation bias,” the tendency to seek out and favor information that fits pre-existing mental models, is on obvious display in today’s politics. And so it goes. E.I. actually works pretty well—but it is a pastiche of kludges. And more importantly, those kludges reflect human evolutionary challenges—low levels of available conscious attention, limited memory, limited decision energy, early environmental conditions—rather than being inherent in intelligence per se. E.I. is one, rather idiosyncratic, form of intelligence; it is not the same as intelligence.

And what of A.I.? Well, the first and most obvious observation is that the entire discourse around A.I. implicitly presupposes the superiority of E.I.—after all, any other intelligence is “artificial,” as opposed to the human E.I., which is therefore implied to be “natural” and thus superior. The second is that much of the dystopian hysteria around A.I. reflects the fear that it will act as humans act (which is to say violently, selfishly, emotionally, and at times irrationally)—only it will have more capacity. In essence, much of what we fear is a much more competent E.I.

But, of course, that makes little sense. Any A.I. will face some limits and constraints, but they will certainly be different from those that have shaped E.I. No A.I., for example, will be limited by the head size that can fit through the birth canal (the so-called obstetrical dilemma, in which evolution must balance larger craniums against a birth canal constrained by bipedal form). And it is also doubtful that an A.I. will need the “follow the crowd” reflex so characteristic of humans when they become part of a mob. Perhaps more fundamentally, why should an A.I. need or use emotion the same way an E.I. does, as a convenient way to abbreviate longer decision-making processes (a mental shortcut known as the affect heuristic)?

Moreover, it is fantasy to suggest that the accelerating development and deployment of the technologies that, taken together, are considered A.I. will be stopped or limited, whether by regulation or even by national legislation. A.I. is increasingly critical to competitive performance in most economic sectors, whether manufacturing or services; it is a consumer product of significant value (how many people rely on softly spoken directions from their cellphone when they are driving?); and in some sectors where it is highly prized, such as pornography and cybercrime, regulations are unlikely to be effective. And it is not just economics driving a more cognitive future: Every major military organization in the world knows that, one way or another, cognition in integrated techno-human systems—A.I. in one form or another—will be critical to military competence and national security in an increasingly complex and uncertain geopolitical environment. Even if the U.S. wanted to stop military A.I. research, it is dangerous whimsy to think that China, Russia, and other actors, whether public, private, or nongovernmental, would necessarily follow its lead.

This suggests that, as progress in different forms of machine intelligence accelerates, there is a pressing need to go back to basics and begin a more sophisticated dialogue about what non-E.I. might actually look like and what different forms might arise. This dialogue might draw upon, but dramatically expand, the current effort to understand E.I. in terms of various subcomponents, such as emotional intelligence. Anthropocentric concepts of cognition may offer pleasing terrain for dystopian or utopian musing, but they are partial and potentially misleading if the goal is to make the deep unknown knowable. They reflect our fears, not what cognition is evolving to be on our highly complex, increasingly technological, terraformed planet. Our challenge, in short, is not to continue building boogeymen in the dark; rather, it is to perceive and understand that which may be profoundly different, and nonhuman to the point of being truly alien, even though it is we who have built it.

This article is part of the artificial intelligence installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down.

Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.