Future Tense

Can We Teach Computers What “Truth” Means?

It’s harder than it sounds.

Photo: John McCarthy. Courtesy of null0/Flickr

This article arises from Future Tense, a partnership of Slate, the New America Foundation, and Arizona State University. From Feb. 28 through March 2, Future Tense will be taking part in Emerge, an annual conference on ASU’s Tempe campus about what the future holds for humans. This year’s theme: the future of truth. Visit the Emerge website to learn more and to get your ticket.

I’d like to begin with two different ideas of truth. The first appears to be the simplest: “It is true that 1+1=2.” The second is from the beginning of the Declaration of Independence: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness.” Now, these sound like quite different ideas about truth. But the process of trying to teach computers to understand truths like these—difficult for both notions—is revealing the ways in which they are similar.

The term artificial intelligence was coined in 1955 by a computer scientist named John McCarthy. Early on, McCarthy enunciated his key aim as the systematization of common sense knowledge. In 1959, he wrote: “[A] program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows.” This has proven very difficult, primarily because it is difficult to encode, in a systematic fashion, what it means to say something is true.

Even “1+1=2” is less obvious than it seems at first. Beginning in the early part of the 20th century, mathematicians and philosophers, led at first by Bertrand Russell and Gottlob Frege and later by Ludwig Wittgenstein, Kurt Gödel, Alan Turing, and others, tried to see whether mathematical knowledge—facts like “1+1=2”—could be reduced to the laws of logic. (By logic, Frege meant “those laws of thought that transcend all particulars.” The most basic principle of logic is perhaps the conviction that “nothing” exists: that it is possible to name a set that has no elements, the empty set.) David Hilbert, the dean of mathematics at the dawn of the century, had thought that such a reduction was possible and posed it as a challenge.
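To see what “reducing arithmetic to logic” amounts to in a modern setting, here is a minimal sketch in the Lean proof assistant (my own illustration, not Frege’s or Russell’s notation): the claim “1+1=2” is not assumed outright but is checked mechanically from the definitions of the natural numbers.

```lean
-- A minimal sketch in Lean 4, not the historical Principia-style derivation.
-- "1 + 1 = 2" follows by computation from the definitions of the natural
-- numbers, rather than being taken as a brute fact.
example : 1 + 1 = 2 := rfl

-- Spelled out Peano-style: 2 is the successor of the successor of zero,
-- and addition is defined by recursion.
example : Nat.succ 0 + Nat.succ 0 = Nat.succ (Nat.succ 0) := rfl
```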

But Hilbert was doomed to failure here. The reason for this, at a basic level, is self-reference. Sentences like “This sentence is false” turn out to pose a nasty set of technical challenges that make it impossible to fully express mathematical knowledge as a consequence of logical axioms—things that are held, on their face, to be true.

Gödel, an Austrian logician who would become a good friend of Albert Einstein’s after both of them settled in Princeton, proved this in a 1931 paper, whose consequences were later strengthened by Turing. Gödel’s incompleteness theorem says that in any sufficiently strong logical system (meaning one that is rich enough to express mathematics), it is impossible to prove, using only the resources of the system itself, that the axioms—the assumptions—of the system do not lead to a contradiction.

The importance of Gödel’s incompleteness theorems for artificial intelligence is something that remains hotly debated. One school of thought, as Ernest Nagel and James Newman wrote in 1956, holds that incompleteness means “that the resources of the human intellect have not been, and cannot be, fully formalized, and that new principles of demonstration forever await invention and discovery.” The other school of thought says, basically, “Don’t worry about it!” The best-known recent exponent of this school is Ray Kurzweil, who claims, without much evidence, that “there is an essential equivalence between a computer and the brain.”

Kurzweil’s overheated triumphalism aside (he seems determined to prove that careful thought is not necessary to be human by displaying a tremendous lack of care himself), this is not a question that we need to resolve to say something about what current progress in artificial intelligence is doing to the idea of truth. Even if Nagel and Newman are right and human intellect cannot be fully formalized, computer scientists have come a long way since John McCarthy first enunciated the aim of formalizing common sense.

Computer scientists have worked to come up with formal descriptions of the everyday world. Here is a short list, taken from the Stanford Encyclopedia of Philosophy, of some of the scenarios they’ve tried to encode:

The Baby Scenario, the Bus Ride Scenario, the Chess Board Scenario, the Ferryboat Connection Scenario, the Furniture Assembly Scenario, the Hiding Turkey Scenario, the Kitchen Sink Scenario, the Russian Turkey Scenario, the Stanford Murder Mystery, the Stockholm Delivery Scenario, the Stolen Car Scenario, the Stuffy Room Scenario, the Ticketed Car Scenario, the Walking Turkey Scenario, and the Yale Shooting Anomaly.

Let’s take the last of these—the Yale Shooting Anomaly, which aims to formally codify the fact that an unloaded shotgun, if loaded and then fired at a person, would kill the person. Classical logic deals with things like “1+1=2,” which are true (or false, like “1=0”) for all time. They were true, are true, and always will be true. It doesn’t allow for things to happen. But to encode common-sense knowledge, computer scientists need a way to allow for events to take place. They also need ways to encode spatial locations.

Some of this had been worked out in a rigorous but limited way in what philosophers call modal logic, first enunciated by C.I. Lewis in 1918. But modal logic was too limited for computer scientists to use in semi-real-world systems. In the languages that computer scientists have come up with, they were unable, as in the Yale Shooting Anomaly, to preclude the possibility that the shotgun would spontaneously unload itself. It’s not that computer scientists think that will happen; it’s that they struggle to formalize why it can’t. (Since the Yale Shooting Anomaly was first stated in 1986, many solutions have been proposed, but it remains an area of research.)
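To make the anomaly concrete, here is a toy sketch in Python (my own formalization, loosely in the spirit of the circumscription-style treatments; the fluent names and the minimization rule are invented for this example, not taken from any standard system). It enumerates every candidate model of a load, wait, shoot story and keeps only those whose set of licensed changes is minimal. Two survive: the intended one, in which the victim dies, and the unintended one, in which the gun quietly unloads itself during the wait.

```python
from itertools import product

# Toy Yale Shooting scenario: the actions load, wait, shoot are performed
# at steps 0, 1, 2, giving situations s0..s3.  Fluents: "loaded", "alive".
FLUENTS = ("loaded", "alive")
N_STEPS = 3  # number of actions

def consistent(traj, ab):
    """Does this candidate model satisfy the axioms?

    traj: dict mapping each fluent to its truth values at s0..s3
    ab:   frozenset of (fluent, step) pairs licensed to change ("abnormal")
    """
    # Initial situation: gun unloaded, victim alive.
    if traj["loaded"][0] or not traj["alive"][0]:
        return False
    # Effect of load (step 0): the gun is loaded afterward, and that change
    # counts as an abnormality.
    if not traj["loaded"][1] or ("loaded", 0) not in ab:
        return False
    # Effect of shoot (step 2): if the gun is loaded, the victim dies,
    # and that change is likewise abnormal.
    if traj["loaded"][2] and (traj["alive"][3] or ("alive", 2) not in ab):
        return False
    # Law of inertia: a fluent keeps its value across an action unless it
    # is abnormal at that step.
    return all(traj[f][i] == traj[f][i + 1]
               for f in FLUENTS for i in range(N_STEPS)
               if (f, i) not in ab)

# Enumerate every combination of fluent trajectory and abnormality set.
pairs = list(product(FLUENTS, range(N_STEPS)))
models = []
for vals in product([False, True], repeat=2 * N_STEPS):
    traj = {"loaded": (False,) + vals[:3], "alive": (True,) + vals[3:]}
    for bits in product([False, True], repeat=len(pairs)):
        ab = frozenset(p for p, b in zip(pairs, bits) if b)
        if consistent(traj, ab):
            models.append((ab, traj))

# Circumscription, crudely: keep only models whose abnormality set is
# minimal under set inclusion.
minimal = [(ab, t) for ab, t in models
           if not any(other < ab for other, _ in models)]

for ab, traj in minimal:
    outcome = "victim dies" if not traj["alive"][3] else "victim survives"
    print(sorted(ab), "->", outcome)
```

Nothing in these axioms prefers the first surviving model to the second; that is the anomaly.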

A central challenge computer scientists face is what’s called the ramification problem: how to codify the fact that if I walk into a room, my shirt does, too. This is paralleled by the “frame problem,” first enunciated by McCarthy and Patrick Hayes in 1969, which is the “problem of efficiently determining which things remain the same in a changing world.” These problems are considerably harder than careless cheerleaders like Kurzweil make them out to be.
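For a feel of what these problems look like in practice, here is a toy sketch in Python (my own illustration; the state, fluents, and helper functions are invented, not drawn from any real AI system). The fields copied forward untouched play the role of frame axioms, spelling out what stays the same, and the shirt only follows me into the kitchen once a separate ramification rule is applied.

```python
from dataclasses import dataclass, replace

# A toy world state: who and what is where, plus an unrelated fluent.
@dataclass(frozen=True)
class State:
    person_room: str
    shirt_room: str
    wearing_shirt: bool
    light_on: bool  # unrelated fluent; it should persist untouched

def walk_naive(state: State, room: str) -> State:
    # Direct effect only: the person moves.  Every other field is copied
    # forward verbatim by `replace` -- these copies are, in effect, the
    # frame axioms.  But the worn shirt is wrongly left behind.
    return replace(state, person_room=room)

def apply_ramifications(state: State) -> State:
    # Domain constraint: a worn shirt is wherever its wearer is.
    if state.wearing_shirt:
        return replace(state, shirt_room=state.person_room)
    return state

def walk(state: State, room: str) -> State:
    # Direct effect plus its ramifications.
    return apply_ramifications(walk_naive(state, room))

s0 = State(person_room="hall", shirt_room="hall", wearing_shirt=True, light_on=True)
print(walk_naive(s0, "kitchen"))  # shirt_room is still "hall": the ramification problem
print(walk(s0, "kitchen"))        # the shirt follows the person; light_on untouched
```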

The central result of logicians in the 20th century was that, in the end, it will always be necessary to extend your axioms—things you just assume to be true without proving them—if you are to extend your idea of truth. This brings us to our second idea about truth—that men are created equal and entitled to life, liberty, and the pursuit of happiness. Thomas Jefferson’s insight (without getting into the abominable hypocrisy of the fact that slavery was legal at the time) was that these truths were not provable from some more basic system of logic, but must themselves be assumed.

The sense in which artificial intelligence research has eroded the distinction between such moral truths and mathematical truths is not a rigorous philosophical identification of the two. It is just a sense that the mathematical truths are not as absolute as they seem (in the end, the fights between logicians come down to opinion and taste), while the moral truths can be, grudgingly and with struggle, written down in a form that outwardly resembles the simpler-seeming truths of mathematics.