
Can We Teach Computers What “Truth” Means?

It’s harder than it sounds.

[Photo: John McCarthy, who coined the term artificial intelligence in 1955. Courtesy of null0/Flickr]

This article arises from Future Tense, a partnership of Slate, the New America Foundation, and Arizona State University. From Feb. 28 through March 2, Future Tense will be taking part in Emerge, an annual conference on ASU’s Tempe campus about what the future holds for humans. This year’s theme: the future of truth. Visit the Emerge website to learn more and to get your ticket.

I’d like to begin with two different ideas of truth. The first appears to be the simplest: “It is true that 1+1=2.” The second is from the beginning of the Declaration of Independence: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness.” Now, these sound like quite different ideas about truth. But the effort to teach computers to understand truths like these, which has proved difficult for both notions, is revealing the ways in which they are similar.

The term artificial intelligence was coined in 1955 by a computer scientist named John McCarthy. Early on, McCarthy enunciated his key aim as the systematization of common sense knowledge. In 1959, he wrote: “[A] program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows.” This has proven very difficult, primarily because it is difficult to encode, in a systematic fashion, what it means to say something is true.
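The deducing part, at least, is easy to picture. Here is a minimal sketch in Python of the kind of “immediate consequence” machinery McCarthy described; the facts and rules are invented for illustration and do not come from his paper.

    # A toy forward-chaining reasoner. The facts and rules are made up for
    # illustration; real common-sense knowledge is vastly harder to encode.
    facts = {"Socrates is a man"}
    rules = [
        ({"Socrates is a man"}, "Socrates is mortal"),
        ({"Socrates is mortal"}, "Socrates will eventually die"),
    ]

    def deduce(facts, rules):
        """Keep adding the immediate consequences of what is known until nothing new follows."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= known and conclusion not in known:
                    known.add(conclusion)
                    changed = True
        return known

    print(deduce(facts, rules))
    # prints all three statements: the original fact plus its two consequences

The loop is the trivial part; the hard part is choosing what the facts and rules should say, and pinning down what it means for them to be true.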


Even “1+1=2” is less obvious than it seems at first. Beginning in the early part of the 20th century, mathematicians and philosophers, led at first by Bertrand Russell and Gottlob Frege and later by Ludwig Wittgenstein, Kurt Gödel, Alan Turing, and others, tried to see whether mathematical knowledge—facts like “1+1=2”—could be reduced to the laws of logic. (By logic, Frege meant “those laws of thought that transcend all particulars.” The most basic principle of logic is perhaps the conviction that “nothing” exists: that it is possible to name a set that has no elements, the empty set.) David Hilbert, the dean of mathematics at the dawn of the century, had thought that such a reduction was possible and posed it as a challenge.
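Modern proof assistants give a concrete, if anachronistic, picture of what such a reduction looks like. The short example below uses Lean, a present-day system that Russell, Frege, and Hilbert did not have (the names N and add are mine): it defines the natural numbers from scratch in the Peano style and then checks 1+1=2 purely by unfolding those definitions.

    -- A self-contained, Peano-style copy of the natural numbers:
    -- every number is either zero or the successor of another number.
    inductive N where
      | zero : N
      | succ : N → N

    -- Addition, defined by recursion on the second argument.
    def add : N → N → N
      | n, N.zero   => n
      | n, N.succ m => N.succ (add n m)

    -- "1 + 1 = 2": adding succ zero to succ zero yields succ (succ zero).
    -- The proof is rfl (reflexivity): both sides compute to the same term.
    example : add (N.succ N.zero) (N.succ N.zero) = N.succ (N.succ N.zero) := rfl

    -- Lean's built-in numerals work the same way:
    example : 1 + 1 = 2 := rfl

Everything here really does bottom out in definitions and rules of inference, which is the sort of reduction Hilbert hoped would cover all of mathematics.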

But Hilbert was doomed to failure here. The reason, at a basic level, is self-reference. Sentences like “This sentence is false” turn out to pose a nasty set of technical challenges that make it impossible to fully capture mathematical knowledge as the consequences of logical axioms—statements that are held, on their face, to be true.
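A toy program makes the trouble tangible. The sketch below is an illustration, not a piece of any real system: a naive “truth evaluator” for a tiny invented language of sentences. The moment one sentence is allowed to refer to itself, the evaluator has nothing to ground out on and recurses forever.

    # A naive truth evaluator for a tiny, made-up language of sentences.
    def is_true(sentence):
        if sentence == "1+1=2":
            return True                       # an ordinary, grounded truth
        if sentence.startswith("NOT "):
            return not is_true(sentence[4:])  # negation: flip the truth value
        if sentence == "THIS":
            return is_true(liar)              # "THIS" names the liar sentence itself
        raise ValueError("unknown sentence")

    liar = "NOT THIS"   # i.e., "this sentence is false"

    print(is_true("NOT 1+1=2"))  # False, as expected
    # is_true(liar) never returns: its value would have to be the opposite
    # of itself, and Python eventually raises RecursionError.

Gödel showed that a close cousin of this sentence, “This statement cannot be proved,” can be constructed inside any formal system rich enough for arithmetic.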

Gödel, an Austrian logician who would become a good friend of Albert Einstein’s after both of them settled in Princeton, proved this in a 1931 paper, whose consequences were later strengthened by Turing. Gödel’s incompleteness theorems say that in any sufficiently strong logical system (meaning one rich enough to express basic arithmetic), it is impossible to prove, using only the system’s own axioms and rules of inference, that those axioms do not lead to a contradiction.
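Turing’s strengthening can be sketched in a few lines of code, in the form of the halting problem. What follows is the standard textbook diagonalization argument, not anything taken verbatim from Turing’s paper: suppose a hypothetical function halts(f, x) could always decide whether running f on x finishes, and then build a self-referential program that defeats it.

    # Hypothetical oracle: returns True if f(x) eventually stops, False otherwise.
    # No correct implementation can exist, which is the point of the argument.
    def halts(f, x):
        raise NotImplementedError("no such oracle is possible")

    def contrary(f):
        # Do the opposite of whatever the oracle predicts about f run on itself.
        if halts(f, f):
            while True:      # oracle says "halts", so loop forever
                pass
        return "halted"      # oracle says "loops forever", so halt at once

    # Now consider contrary(contrary). If halts(contrary, contrary) returned True,
    # contrary(contrary) would loop forever; if it returned False, it would halt.
    # Either way the oracle is wrong, so no program can compute halts in general.

The self-reference here, feeding contrary its own definition, is the computational cousin of “This sentence is false.”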

The importance of Gödel’s incompleteness theorems for artificial intelligence is something that remains hotly debated. One school of thought, as Ernest Nagel and James Newman wrote in 1956, holds that incompleteness means “that the resources of the human intellect have not been, and cannot be, fully formalized, and that new principles of demonstration forever await invention and discovery.” The other school of thought says, basically, “Don’t worry about it!” The best-known recent exponent of this school is Ray Kurzweil, who claims, without much evidence, that “there is an essential equivalence between a computer and the brain.”

Kurzweil’s overheated triumphalism aside (he seems determined to prove that careful thought is not necessary to be human by displaying a tremendous lack of care himself), this is not a question that we need to resolve to say something about what current progress in artificial intelligence is doing to the idea of truth. Even if Nagel and Newman are right and human intellect cannot be fully formalized, computer scientists have come a long way since John McCarthy first enunciated the aim of formalizing common sense.