Egghead

Thinking Inside the Boxes

“Newcomb’s Problem” still flummoxes the great philosophers.

Robert Nozick, who died last week, was famous as the author of Anarchy, State, and Utopia. This philosophical defense of the minimal state, published in 1974, resonated with libertarian types everywhere and became something of a bible for many Eastern bloc dissidents. But Nozick never thought of himself as a political philosopher. Anarchy, State, and Utopia, he claimed, was an “accident.” (He was prodded to write it by the appearance of his Harvard colleague John Rawls’ book A Theory of Justice.) He was more interested in rational choice and free will. What had launched his philosophical career was a wonderful paradox that involved both these topics. Nozick did not invent this paradox himself. It was thought up by a California physicist named William Newcomb and reached Nozick by way of a mutual friend, the Princeton mathematician Martin David Kruskal, at a cocktail party (“the most consequential party I have attended,” said Nozick, who was by no means party-shy).

Nozick wrote about this paradox in his dissertation and then published an article on it in 1969 titled “Newcomb’s Problem and Two Principles of Choice.” The result was, as the Journal of Philosophy put it, “Newcombmania.” Suddenly, everyone in the philosophical world was writing and arguing about Newcomb’s Problem. Now, a third of a century later, and despite the best efforts of Nozick and dozens of other philosophers, the paradox remains just as perplexing as it was when it was first conceived.

Newcomb’s Problem goes like this. There are two closed boxes on the table, Box A and Box B. Box A contains $1,000. Box B contains either $1 million or no money at all. You have a choice between two actions: 1) taking what is in both boxes; or 2) taking just what is in Box B.

Now here comes the interesting part. Imagine a Being that can predict your choices with high accuracy. You can think of this Being as a genie, or a superior intelligence from another planet, or a supercomputer that can scan your mind, or God. He has correctly predicted your choices in the past, and you have enormous confidence in his predictive powers. Yesterday, the Being made a prediction as to which choice you are about to make, and it is this prediction that determines the contents of Box B. If the Being predicted that you will take what is in both boxes, he put nothing in Box B. If he predicted that you will take only what is in Box B, he put $1 million in Box B. You know these facts, he knows you know them, etc. So, do you take both boxes, or only Box B?

Well, obviously you should take only Box B, right? For if this is your choice, the Being has almost certainly predicted it and put $1 million in Box B. If you were to take both boxes, the Being would almost certainly have anticipated this and left Box B empty. Therefore, with very high likelihood, you would get only the $1,000 in Box A. The wisdom of the one-box choice seems confirmed when you notice that of all your friends who have played this game, the one-boxers among them are overwhelmingly millionaires, and the two-boxers are overwhelmingly not.
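
That pattern among your friends is easy to reproduce. Here is a minimal simulation sketch in Python, purely illustrative and not part of the problem as Newcomb or Nozick stated it, with the Being’s accuracy simply assumed to be 99 percent:

```python
import random

PRIZE_A = 1_000        # always sitting in Box A
PRIZE_B = 1_000_000    # placed in Box B only if the Being predicted "one-box"

def play(strategy, accuracy=0.99):
    """Play one round; strategy is 'one-box' or 'two-box'.
    The Being guesses the strategy correctly with probability `accuracy`
    and fills Box B accordingly, yesterday, before the choice is made."""
    other = 'two-box' if strategy == 'one-box' else 'one-box'
    predicted = strategy if random.random() < accuracy else other
    box_b = PRIZE_B if predicted == 'one-box' else 0
    return box_b if strategy == 'one-box' else box_b + PRIZE_A

for strategy in ('one-box', 'two-box'):
    trials = 100_000
    average = sum(play(strategy) for _ in range(trials)) / trials
    print(f"{strategy}: average payoff of about ${average:,.0f}")
```

Notice that the sketch builds in the assumption that the prediction tracks the strategy actually played; that link is exactly what the two-box argument refuses to rely on.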

But wait a minute. The Being made his prediction yesterday. He either put $1 million in Box B, or he didn’t. If it’s there, it’s not going to vanish just because you choose to take both boxes; if it’s not there, it’s not going to materialize suddenly just because you choose only Box B. Whatever the Being’s prediction, you are guaranteed to end up $1,000 richer if you choose both boxes. Choosing just Box B is like leaving a $1,000 bill lying on the sidewalk. To make the logic of the two-box choice even more vivid, suppose the backs of the boxes are made of glass and your wife is sitting on the other side of the table. She can plainly see what’s in each box. You know which choice she wants you to make: Take both boxes!

So, you can see what’s paradoxical about Newcomb’s Problem. There are two powerful arguments as to what choice you should make—arguments that lead to precisely opposite conclusions. The first argument, the one that says you should take just Box B, is based on the principle of maximizing expected utility. If the Being is, say, 99 percent accurate in his predictions, then the expected utility of taking both boxes is .99 x $1,000 + .01 x $1,001,000 = $11,000. The expected utility of taking only Box B is .99 x $1,000,000 + .01 x $0 = $990,000. The two-box argument is based on the principle of dominance, which says that if one action leads to a better outcome than another in every possible state of affairs, then that’s the action to take. They can’t both be right, on pain of contradiction. And they play the devil with intuition.
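
The two principles can be laid side by side in a few lines. Here is a sketch of the arithmetic, again treating the 99 percent figure as an assumption:

```python
ACCURACY = 0.99                  # assumed reliability of the Being
A, B = 1_000, 1_000_000          # Box A prize; Box B prize, if it was filled

# Expected utility: weight each outcome by how likely the Being's
# prediction makes it, given the choice.
eu_one_box = ACCURACY * B + (1 - ACCURACY) * 0         # $990,000
eu_two_box = ACCURACY * A + (1 - ACCURACY) * (A + B)   # $11,000

# Dominance: treat the state of Box B as already fixed (it was settled
# yesterday) and compare the two actions in each possible state.
payoff = {('full',  'one-box'): B, ('full',  'two-box'): B + A,
          ('empty', 'one-box'): 0, ('empty', 'two-box'): A}
two_boxing_dominates = all(payoff[(state, 'two-box')] > payoff[(state, 'one-box')]
                           for state in ('full', 'empty'))

print(round(eu_one_box), round(eu_two_box), two_boxing_dominates)
# 990000 11000 True
```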

“I have put this problem to a large number of people, both friends and students in class,” Nozick wrote in the 1969 article. “To almost everyone it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.” When Martin Gardner presented Newcomb’s Problem in 1973 in his Scientific American column, the enormous volume of mail it elicited ran in favor of the one-box solution by a 5-to-2 ratio. (Among the correspondents was Isaac Asimov, who perversely plumped for the two-box choice as an assertion of his free will and a snub to the predictor, whom he identified with God.) Newcomb, the begetter, was a one-boxer. Nozick himself started out as a lukewarm two-boxer, despite being urged by the decision theorists Maya Bar-Hillel and Avishai Margalit to “join the millionaires’ club” of one-boxers. By the ‘90s, however, Nozick had arrived at the unhelpful view that both arguments should be given some weight in deciding which action to take. After all, he reasoned, even resolute two-boxers will become one-boxers if the amount in Box A is reduced to $1, and all but the most diehard one-boxers will become two-boxers if it is raised to $900,000, so nobody is completely confident in either argument.
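
Nozick’s observation is easy to check numerically: with a 99-percent-accurate predictor, expected utility keeps favoring one box all the way from $1 to $900,000 in Box A, and dominance keeps favoring two boxes at every stake; what changes is only the size of the sure gain a two-boxer locks in. A rough sketch, with the accuracy again assumed:

```python
ACCURACY, B = 0.99, 1_000_000

for A in (1, 1_000, 900_000):    # the amount sitting in Box A
    eu_one = ACCURACY * B
    eu_two = ACCURACY * A + (1 - ACCURACY) * (A + B)
    # Whatever Box B already holds, taking both boxes adds exactly A for sure.
    print(f"Box A = ${A:,}: one-box EU about ${eu_one:,.0f}, "
          f"two-box EU about ${eu_two:,.0f}, guaranteed extra from two-boxing ${A:,}")
```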

Other philosophers refuse to commit themselves to one choice or the other on the grounds that the whole setup of Newcomb’s Problem is nonsensical: If you really have free will, they argue, then how could any Being accurately predict how you would choose between two equally rational actions, especially when you know that your choice has been predicted before you make it?

Actually, though, the predictor does not have to be all that accurate to make the paradox work. We’ve already seen that the readers of Scientific American favored the one-box solution by a 5-to-2 ratio. So, for that crowd, at least, a perfectly ordinary Being could achieve better than 70 percent accuracy by always predicting that the one-box choice would be made. A psychologist might improve that accuracy rate by keeping track of how women, left-handers, Ph.D.s, Republicans, etc., tended to choose. If I were playing the game with a human predictor whose high accuracy depended on such statistics, I should certainly opt for the contents of both boxes. On the other hand, if the Being were supernatural—a genie or god or a genuine clairvoyant—I would probably take only Box B, out of concern that my choice might affect the Being’s prediction through some sort of backward causation or timeless omniscience. I would also have to wonder whether I was really choosing freely.
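
The 70 percent figure is just the base rate of one-boxers in that sample:

```python
one_boxers, two_boxers = 5, 2    # the rough split among Gardner's correspondents

# A predictor that always guesses "one-box" is right exactly when it
# happens to be facing a one-boxer.
accuracy = one_boxers / (one_boxers + two_boxers)
print(f"Always predict one-box: accuracy of about {accuracy:.0%}")   # about 71%
```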

The quantity and ingenuity of the resolutions proposed for Newcomb’s Problem over the years have been staggering. (It has been linked to Schrödinger’s Cat in quantum mechanics and Maxwell’s Demon in thermodynamics; more obviously, it is analogous to the Prisoner’s Dilemma, where the other prisoner is your identical twin, who will almost certainly make the same choice you do, whether to cooperate or defect.) Yet none of them has been completely convincing, so the debate goes on. Could Newcomb’s Problem turn out to have the longevity of Zeno’s paradoxes? Will philosophers still be vexed by it 2,500 years from now, long after Anarchy, State, and Utopia is forgotten? If so, it is sad that the late Robert Nozick, the man who put Newcomb’s Problem on the intellectual map, should not be the one to enjoy eponymous immortality. “It is a beautiful problem,” he wrote, in a melancholy vein. “I wish it were mine.”