How Not to Be Wrong

Does 0.999… = 1? And Are Divergent Series the Invention of the Devil?

[Image: Wait, so does this equal 1? Image by Melchoir/Wikimedia Commons.]

I did a Q&A this week at the website io9, and one of the questions was something I get all the time: How can you add up a series of infinitely many numbers and get a finite answer? This question has been troubling mathematicians ever since Zeno, who wondered how you can possibly walk from one side of the street to the other, given that you’ve first got to go halfway, then cover half the remaining distance, then half of what’s still left, and so on, ad infinitum. A few months ago, a video claiming that the infinite series

1 + 2 + 3 + …

had the value -1/12 went improbably viral, launching fierce arguments all over the Internet, including here on Slate.

But the version of the infinity puzzle most people first encounter, the one I’ve had 100 arguments or more about, is this one: Is the repeating decimal 0.999… equal to 1?

I have seen people come nearly to blows over this question. (Admittedly, these people were teenagers at a summer math camp.) It’s hotly disputed on websites ranging from World of Warcraft message boards to Ayn Rand forums. Our natural feeling about Zeno is, of course you eventually get all the way across the street. But in the case of the repeating decimal, intuition points the other way. Most people, if you press them, say that 0.999… doesn’t equal 1. It doesn’t look like 1, that’s for sure. It looks smaller. But not much smaller! It seems to get closer and closer to its goal, without ever arriving.

And yet, their teachers, myself included, will tell them, No, it’s 1.

How do I convince someone to come over to my side? One good trick is to argue as follows. Everyone knows that

0.333… = 1/3

Multiply both sides by 3 and you’ll see

0.999… = 3 / 3 = 1

If that doesn’t sway you, try multiplying 0.999… by 10, which is just a matter of moving the decimal point one spot to the right.

10 x (0.999…) = 9.999…

Now subtract the vexing decimal from both sides.

10 x (0.999…) - 1 x (0.999…) = 9.999… - 0.999…

The left-hand side of the equation is just 9 times (0.999…), because 10 times something minus that something is 9 times the aforementioned thing. And over on the right-hand side, we have managed to cancel out the terrible infinite decimal, and are left with a simple 9. So we end up with

9 x (0.999…) = 9.

If 9 times something is 9, that something just has to be 1—doesn’t it?
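If you’d rather watch the argument in action than take it on faith, here is a quick Python sketch (purely illustrative, using exact fractions) that truncates 0.999… after n digits and checks how close 9 times the truncation gets to 9:

    from fractions import Fraction

    def truncated_nines(n):
        """0.999...9 with n nines: 9/10 + 9/100 + ... + 9/10**n."""
        return sum(Fraction(9, 10**k) for k in range(1, n + 1))

    for n in [1, 2, 5, 10]:
        s = truncated_nines(n)
        # The subtraction trick: 10*s - s = 9*s, which creeps up on 9.
        print(n, s, float(9 * s))

No finite truncation makes 9 times the number exactly 9; the gap shrinks by a factor of 10 with each added digit, which is exactly the tension the rest of this discussion is about.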

These arguments are often enough to win people over. But let’s be honest: They lack something. They don’t really address the anxious uncertainty induced by the claim 0.999… = 1. Instead, they represent a kind of algebraic intimidation. You believe that 1/3 is 0.3 repeating—don’t you? Don’t you?

Or worse: Maybe you bought my argument based on multiplication by 10. But how about this one? What is

1 + 2 + 4 + 8 + 16 + …?

Here the … means carry on the sum forever, adding twice as much each time. Surely such a sum must be infinite! But an argument much like the apparently correct one concerning 0.999… seems to suggest otherwise. Multiply the sum above by 2 and you get

2 x (1 + 2 + 4 + 8 + 16 + …) = 2 + 4 + 8 + 16 + …

That looks a lot like the original sum. Indeed, it is just the original sum (1 + 2 + 4 + 8 + 16 + …) with the 1 lopped off the beginning, which means that the right-hand side is 1 less than (1 + 2 + 4 + 8 + 16 + …). In other words,

2 x (1 + 2 + 4 + 8 + 16 + …) - 1 x (1 + 2 + 4 + 8 + 16 + …) = -1

But the left-hand side simplifies to the very sum we started with, and we’re left with

1 + 2 + 4 + 8 + 16 + … = -1

Is that what you want to believe? That adding bigger and bigger numbers, ad infinitum, flops you over into negativeland?

(So as not to leave you hanging, there is another context, the world of 2-adic numbers, where this crazy-looking computation is actually correct.)
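Here is what that looks like in practice, in a short Python sketch (an illustration of the idea, not a definition of the 2-adics): each partial sum 1 + 2 + 4 + … + 2^(n-1) equals 2^n - 1, so it sits exactly 1 below a power of 2. In the 2-adic world, being divisible by a huge power of 2 is what it means to be tiny, so the difference between the partial sums and -1 is 2-adically shrinking to nothing.

    # Partial sums of 1 + 2 + 4 + 8 + ... always land 1 shy of a power of 2.
    for n in range(1, 8):
        s = sum(2**k for k in range(n))  # 1 + 2 + ... + 2**(n-1)
        # s - (-1) = s + 1 = 2**n: divisible by an ever-larger power of 2,
        # hence "small" in the 2-adic sense.
        print(n, s, s + 1)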

More craziness: What is the value of the infinite sum

1 - 1 + 1 - 1 + 1 - 1 + …

One might first observe that the sum is

(1 - 1) + (1 - 1) + (1 - 1) + … = 0 + 0 + 0 + …

And argue that the sum of a bunch of zeroes, even infinitely many, has to be 0. Or you could rewrite the sum as

1 - (1 - 1) - (1 - 1) - (1 - 1) - … = 1 - 0 - 0 - 0 - …

Which seems to demand, in the same way, that the sum is equal to 1! So which is it, 0 or 1? Or is it somehow “0 half the time and 1 half the time?” It seems to depend where you stop. But infinite sums never stop!

Don’t decide yet, because it gets worse. Suppose T is the value of our mystery sum:

T = 1 - 1 + 1 - 1 + 1 - 1 + …

Taking the negative of both sides gives you

-T = -1 + 1 - 1 + 1 - …

But the sum on the right-hand side is precisely what you get if you take the original sum defining T and cut off that first 1, thus subtracting 1. In other words,

-T = -1 + 1 - 1 + 1 - … = T - 1

So -T = T - 1, an equation concerning T which is satisfied only when T is equal to 1/2. Can a sum of infinitely many whole numbers somehow magically become a fraction?

If you say no, you have the right to be at least a little suspicious of slick arguments like this one. But note that some people said yes, including Guido Grandi, after whom the series 1 - 1 + 1 - 1 + 1 - 1 + … is usually named. In a 1703 paper, he argued that the sum of the series is 1/2, and moreover that this miraculous conclusion represented the creation of the universe from nothing. (Don’t worry, I don’t follow that step either.) Other leading mathematicians of the time, like Leibniz and Euler, were on board with Grandi’s strange computation, if not his interpretation.
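If you want a quick feeling for why 1/2, of all things, kept coming up, here is one standard trick, rendered as a small Python sketch (the averaging idea is called Cesàro summation; the code is just an illustration). The partial sums of Grandi’s series bounce between 1 and 0 forever, but their running average calms down:

    # Partial sums of 1 - 1 + 1 - 1 + ... alternate: 1, 0, 1, 0, ...
    partial_sums = []
    total = 0
    for k in range(1001):
        total += (-1) ** k  # the terms +1, -1, +1, -1, ...
        partial_sums.append(total)

    # The running average of the partial sums settles toward 1/2.
    for n in [1, 3, 11, 101, 1001]:
        print(n, sum(partial_sums[:n]) / n)

The averages head for 1/2, which is one precise sense in which Grandi’s answer is not as crazy as it looks.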

But in fact, the answer to the 0.999… riddle (and to Zeno’s paradox, and Grandi’s series) lies a little deeper. You don’t have to give in to my algebraic strong-arming. You might, for instance, insist that 0.999… is not equal to 1, but rather 1 minus some tiny infinitesimal number. And, for that matter, that 0.333… is not exactly equal to 1/3, but also falls short by an infinitesimal quantity. This point of view requires some stamina to push through to completion, but it can be done. There’s a whole field of mathematics that specializes in contemplating numbers of this kind, called nonstandard analysis. The theory, developed by Abraham Robinson in the mid-20th century, finally made sense of the notion of the infinitesimal. The price you have to pay (or, from another point of view, the reward you get to reap) is a profusion of novel kinds of numbers—not only infinitely small ones, but infinitely large ones, a huge spray of them in all shapes and sizes.

But we’re no closer to settling our dispute. What is 0.999… really? Is it 1? Or is it some number infinitesimally less than 1, a crazy kind of number that hadn’t even been discovered 100 years ago?

The right answer is to unask the question. What is 0.999… really? It appears to refer to a kind of sum:

0.9 + 0.09 + 0.009 + 0.0009 + …

But what does that mean? That pesky ellipsis is the real problem. There can be no controversy about what it means to add up two, or three, or 100 numbers. This is just mathematical notation for a physical process we understand very well: Take 100 heaps of stuff, mush them together, see how much you have. But infinitely many? That’s a different story. In the real world, you can never have infinitely many heaps. What is the numerical value of an infinite sum? It doesn’t have one—until we give it one. That was the great innovation of Augustin-Louis Cauchy, who introduced the notion of limit into calculus in the 1820s.

The great number theorist G.H. Hardy, in his book Divergent Series, explains it best:

[I]t does not occur to a modern mathematician that a collection of mathematical symbols should have a “meaning” until one has been assigned to it by definition. It was not a triviality even to the greatest mathematicians of the 18th century. They had not the habit of definition: it was not natural to them to say, in so many words, “by X we mean Y.” … It is broadly true to say that mathematicians before Cauchy asked not, “How shall we define 1 - 1 + 1 - 1 + …?” but “What is 1 - 1 + 1 - 1 + …?” and that this habit of mind led them into unnecessary perplexities and controversies which were often really verbal.

This is not just loosey-goosey mathematical relativism. Just because we can assign whatever meaning we like to a string of mathematical symbols doesn’t mean we should. In math, as in life, there are good choices and there are bad ones. In the mathematical context, the good choices are the ones that settle unnecessary perplexities without creating new ones.

The sum 0.9 + 0.09 + 0.009 + … gets closer and closer to 1 the more terms you add. And it never gets any further away. No matter how tight a cordon we draw around the number 1, the sum will eventually, after some finite number of steps, penetrate it, and never leave. Under those circumstances, Cauchy said, we should simply define the value of the infinite sum to be 1. And then he worked very hard to prove that this choice of definition didn’t cause horrible contradictions to pop up elsewhere.
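Cauchy’s “cordon” is the epsilon of a modern calculus class. To make the definition concrete, here is a small Python sketch (exact fractions again, so no rounding sneaks in): hand it any tolerance you like, and it finds how many terms of 0.9 + 0.09 + 0.009 + … you need before the partial sums are inside the cordon, where, since they only ever increase toward 1, they stay forever.

    from fractions import Fraction

    def terms_needed(epsilon):
        # Smallest n with 1 - (0.9 + 0.09 + ... to n terms) strictly
        # less than epsilon.
        epsilon = Fraction(epsilon)
        partial, n = Fraction(0), 0
        while 1 - partial >= epsilon:
            n += 1
            partial += Fraction(9, 10**n)
        return n

    for eps in ["1/10", "1/1000", "1/10000000"]:
        print(eps, terms_needed(eps))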

As for Grandi’s 1 - 1 + 1 - 1 + …, it is one of the series outside the reach of Cauchy’s theory—that is, one of the divergent series that formed the subject of Hardy’s book. In the famous words of Lindsay Lohan, the limit does not exist!

The Norwegian mathematician Niels Henrik Abel, an early fan of Cauchy’s approach, wrote in 1828, “Divergent series are the invention of the devil, and it is shameful to base on them any demonstration whatsoever.” Hardy’s view, which is our view today, is more forgiving. There are some divergent series to which we ought to assign values and some to which we ought not, and some to which we ought or ought not depending on the context in which the series arises. Modern mathematicians would say that if we are to assign the Grandi series a value, it should be 1/2, because, as it turns out, all interesting theories of infinite sums either give it the value 1/2 or decline, like Cauchy’s theory, to give it any value at all. The series 1 + 2 + 3 + 4 + … has a similar status; it’s divergent, and Cauchy would have said it has no value. But if you have to assign it a value, -1/12 is probably the best choice.
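For the record, here is the flavor of one such theory. For any x between 0 and 1, the series 1 - x + x^2 - x^3 + … converges outright, to 1/(1 + x); Grandi’s series is what you get at x = 1. Slide x up toward 1 and watch where the value heads, as in this Python sketch (the underlying idea, if not this code, is known as Abel summation):

    # For 0 <= x < 1 the alternating geometric series converges:
    # 1 - x + x^2 - x^3 + ... = 1/(1 + x).  Let x creep toward 1.
    for x in [0.5, 0.9, 0.99, 0.999]:
        value = sum((-x) ** k for k in range(100000))
        print(x, value, 1 / (1 + x))

Both columns drift down toward 1/2, agreeing with the averaging trick above: every reasonable theory that assigns the Grandi series a value at all assigns it 1/2.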

Why is the 0.999… problem so controversial? Because it brings our intuitions into conflict. We would like the sum of an infinite series to play nicely with arithmetic manipulations like the ones we carried out above, and this seems to demand that the sum equal 1. On the other hand, we would like each number to be represented by a unique string of decimal digits, which conflicts with the claim that the same number can be called either 1 or 0.999… as we like. We can’t hold on to both desires at once; one must be discarded. In Cauchy’s approach, which has amply proved its worth in the two centuries since he invented it, it’s the uniqueness of the decimal expansion that goes out the window.

We’re untroubled by the fact that the English language sometimes uses two different strings of letters (i.e., two words) to refer synonymously to the same thing in the world. In the same way, it’s not so bad that two different strings of digits can refer to the same number. Is 0.999… equal to 1? It is, but only because we’ve collectively decided that 1 is the right thing for that repeated decimal to mean.