# How I Was Wrong

When you write a book called *How Not to Be Wrong*, you ought to expect to be fact-checked a little. And one of the virtues of the new, data-driven journalism currently in vogue is the habit of going back and checking one’s own old stuff. We’re not supposed to avert our gaze from the howlers in our old columns. We’re supposed to find the mistakes and learn from them.

In that spirit I’m going to use the last entry in this blog to look over some of my old columns from *Slate*, with special attention to the times I blew it.

# How Can Rich People Vote Republican and Rich States Vote for Democrats?

My doctor once recommended I take niacin for the sake of my heart. Yours probably has too, unless you’re a teenager or a marathon runner or a member of some other metabolically privileged caste. Here’s the argument: Consumption of niacin is correlated with higher levels of HDL, or “good cholesterol,” and high HDL is correlated with lower risk of “cardiovascular events.” If you’re not a native speaker of medicalese, that means people with plenty of good cholesterol are less likely on average to clutch their hearts and keel over dead.

But a large-scale trial carried out by the National Heart, Lung, and Blood Institute was halted in 2011, a year and a half before the scheduled finish, because the results were so weak it didn’t seem worth it to continue. Patients who got niacin did indeed have higher HDL levels, but they had just as many heart attacks and strokes as everybody else.

How can this be? Because correlation isn’t *transitive.* That is: Just because niacin is correlated with HDL, and high HDL is correlated with low risk of heart disease, you can’t conclude that niacin is correlated with low risk of heart disease.

Transitive relations are ones like “weighs more than.” If I weigh more than my son and my son weighs more than my daughter, it’s an absolute certainty that I weigh more than my daughter. “Lives in the same city as” is transitive, too—if I live in the same city as Bill, who lives in the same city as Bob, then I live in the same city as Bob.

But many of the most interesting relations we find in the world of data aren’t transitive. Correlation, for instance, is more like “blood relation.” I’m related to my son, who’s related to my wife, but my wife and I aren’t blood relatives. In fact, it’s not a terrible idea to think of correlated variables as “sharing part of their DNA.” Suppose I run a boutique money management firm with just three investors, Laura, Sara, and Tim. Their stock positions are pretty simple: Laura’s fund is split 50–50 between Facebook and Google, Tim’s is one-half General Motors and one-half Honda, and Sara, poised between old economy and new, goes one-half Honda, one-half Facebook. It’s pretty obvious that Laura’s returns will be positively correlated with Sara’s; they have half their portfolio in common. And the correlation between Sara’s returns and Tim’s will be equally strong. But there’s no reason (except insofar as the whole stock market tends to move in concert) to think Tim’s performance has to be correlated with Laura’s. Those two funds are like the parents, each contributing one-half of their “genetic material” to form Sara’s hybrid fund.
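If you'd rather see the non-transitivity in numbers than in genetics metaphors, here's a minimal simulation sketch. The stock returns are invented, independent noise, not market data; only the overlap structure of the three portfolios is doing any work:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented, independent daily returns for the four stocks; these are
# pure noise, not market data. Only the portfolio overlaps matter.
facebook = rng.normal(size=n)
google = rng.normal(size=n)
gm = rng.normal(size=n)
honda = rng.normal(size=n)

laura = (facebook + google) / 2  # half Facebook, half Google
tim = (gm + honda) / 2           # half GM, half Honda
sara = (honda + facebook) / 2    # half Honda, half Facebook

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print(corr(laura, sara))  # around 0.5: half their holdings overlap
print(corr(sara, tim))    # around 0.5: same reason
print(corr(laura, tim))   # around 0: no holdings in common
```

Laura correlates with Sara, Sara correlates with Tim, and yet Laura and Tim are uncorrelated, exactly the "blood relation" pattern.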

Friendship (whether the real kind or the Facebook kind) isn’t transitive; the friends of your friends aren’t necessarily your friends. If they were, you’d have only as many friends of friends as you have friends. In fact, you’re likely to have more like a quarter-million.

Relations like “better than” sound like they ought to be transitive. But even here there are some wrinkles. In a 1994 pre-election poll for the prime ministership of Denmark, voters preferred Hans Engel to Uffe Ellemann-Jensen by a narrow margin of 39 percent to 38 percent. But Poul Nyrup Rasmussen beat out Engel in a proposed head-to-head matchup, 47–42. This seems to suggest that Rasmussen was a better match for the voters’ desires than Engel, and Engel in turn better than Ellemann-Jensen. But in the same poll, voters chose Ellemann-Jensen over Rasmussen. It seems like a paradox. But the sad fact is that “preferred by voters in a head-to-head matchup” does not have to be a transitive relation, which leaves it an interesting philosophical question who the “best” of the three candidates really is.
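The Danish numbers come from a real poll, but a cycle like this is easy to manufacture from scratch. Here's a sketch with a hypothetical three-bloc electorate; the candidate names are borrowed for flavor only, and none of these preference rankings come from the actual polling data:

```python
# A hypothetical electorate of three blocs, each with its own ranking.
# This is the classic cyclic construction, not the actual Danish poll.
blocs = [
    (["Rasmussen", "Engel", "Ellemann-Jensen"], 34),
    (["Engel", "Ellemann-Jensen", "Rasmussen"], 33),
    (["Ellemann-Jensen", "Rasmussen", "Engel"], 33),
]

def head_to_head(a, b):
    """Votes for a over b in a two-way race, summed across blocs."""
    return sum(size for ranking, size in blocs
               if ranking.index(a) < ranking.index(b))

# Every candidate beats somebody and loses to somebody: a cycle.
for a, b in [("Rasmussen", "Engel"),
             ("Engel", "Ellemann-Jensen"),
             ("Ellemann-Jensen", "Rasmussen")]:
    print(f"{a} beats {b}, {head_to_head(a, b)} to {head_to_head(b, a)}")
```

Each voter's individual ranking is perfectly transitive; the cycle appears only when you aggregate them by majority rule.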

Lest you think this weirdness is a product of human folly, consider the famous Efron dice. There are four dice, none of which is numbered in the usual way. Die A has numbers 2, 2, 2, 2, 6, and 6; die B has numbers 1, 1, 1, 5, 5, and 5; die C has numbers 0, 0, 4, 4, 4, and 4; and die D has all six faces numbered 3. If two players roll dice trying to get the highest number, and one is rolling A and the other is rolling D, the D player is going to win two-thirds of the time, because two-thirds of the time is how often player A gets a 2 instead of a 6. On the other hand, in a C vs. D matchup, D gets beaten by C two-thirds of the time. What’s more, you can check that B beats C two-thirds of the time as well. So B is a better choice than C, which is a better choice than D, which is a better choice than A. So what happens when you pit A against B? You’ve probably figured out where I’m going with this: A, the “worst” die, beats B two-thirds of the time.
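The two-thirds figures aren't statistical estimates; with only 36 equally likely outcomes per matchup, you can enumerate them exactly. A quick sketch:

```python
from fractions import Fraction
from itertools import product

# The four Efron dice as described above.
A = [2, 2, 2, 2, 6, 6]
B = [1, 1, 1, 5, 5, 5]
C = [0, 0, 4, 4, 4, 4]
D = [3, 3, 3, 3, 3, 3]

def win_prob(x, y):
    """Exact probability that die x rolls strictly higher than die y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return Fraction(wins, len(x) * len(y))

# Each die beats the next one in the cycle with probability exactly 2/3:
assert win_prob(A, B) == Fraction(2, 3)
assert win_prob(B, C) == Fraction(2, 3)
assert win_prob(C, D) == Fraction(2, 3)
assert win_prob(D, A) == Fraction(2, 3)
print("A > B > C > D > A, each at exactly 2/3")
```

Exact rational arithmetic (rather than a simulation) makes the point airtight: the cycle is a fact about the dice, not sampling noise.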

The non-transitivity of correlation helps unravel results that might at first seem paradoxical. Take, for instance, the case of the wealthy liberal elitist. For a while now, this slightly disreputable fellow has been a familiar character in political punditry. Perhaps his most devoted chronicler is David Brooks, who wrote a whole book about the group he called bourgeois bohemians, or Bobos. In 2001, contemplating the difference between suburban, affluent Montgomery County, Maryland (my birthplace!) and middle-class Franklin County, Pennsylvania, he speculated that the old political stratification by economic class, with the GOP standing up for the moneybags and the Democrats for the working man, was badly out of date:

Like upscale areas everywhere, from Silicon Valley to Chicago's North Shore to suburban Connecticut, Montgomery County supported the Democratic ticket in last year's presidential election, by a margin of 63 percent to 34 percent. Meanwhile, like almost all of rural America, Franklin County went Republican, by 67 percent to 30 percent.

First of all, this “everywhere” is a little strong. Wisconsin’s richest county is Waukesha, centered on the tony suburbs west of Milwaukee. George W. Bush crushed Al Gore like a grape there, 65–31, while Gore narrowly won statewide. Still, Brooks is pointing to a real phenomenon. In the contemporary American electoral landscape, rich states are more likely than poor states to vote for Democrats. Mississippi and Oklahoma are Republican strongholds, while the GOP doesn’t even bother to contest New York and California. In other words, being from a rich state is positively correlated with voting Democratic.

But statistician Andrew Gelman found that the story is more complicated than the Brooksian portrait of a new breed of latte-sipping, Prius-driving liberals with big tasteful houses and NPR tote bags full of cash. In fact, rich people are still more likely to vote Republican than poor people are, an effect that’s been consistently present for decades.

Being rich is positively correlated with being from a rich state, more or less by definition. And being from a rich state is positively correlated with voting for Democrats. But because correlation is non-transitive, it doesn’t follow that being rich is correlated with voting for Democrats. In fact, pace David Brooks, it’s exactly the opposite.

# What’s Even Creepier Than Target Guessing That You’re Pregnant?

The age of big data is frightening to a lot of people, in part because of the implicit promise that algorithms, sufficiently supplied with data, are better at inference than we are. Superhuman powers are scary: Beings that can change their shape are scary, beings that rise from the dead are scary, and beings that can make inferences that we cannot are scary. It was scary when a statistical model deployed by the guest marketing analytics team at Target correctly inferred from purchasing data that one of its customers—sorry, *guests*—a teenage girl in Minnesota, was pregnant, thanks to an arcane formula involving elevated rates of buying unscented lotion, mineral supplements, and cotton balls. Target started sending her coupons for baby gear, much to the consternation of her father, who, with his puny human inferential power, was still in the dark. Spooky to contemplate, living in a world where Google and Facebook and your phone, and, geez, even *Target*, know more about you than your parents do.

But we ought to spend less time worrying about eerily super-powered algorithms, and more time worrying about crappy ones.

For one thing, crappy might be as good as it gets. Yes, the algorithms that drive the businesses of Silicon Valley get more sophisticated every year, and the data fed to them more voluminous and nutritious. There’s a vision of the future in which Google *knows* you—where by aggregating millions of micro-observations (“How long did he hesitate before clicking on *this*… how long did his Google Glass linger on *that*…”) the central storehouse can predict your preferences, your desires, and your actions, especially vis-à-vis what products you might want, or might be persuaded to want.

It might be that way! But it also might not. There are lots of mathematical problems where supplying more data improves the accuracy of the result in a fairly predictable way. If you want to predict the course of an asteroid, you need to measure its velocity and its position, as well as the gravitational effects of the objects in its astronomical neighborhood. The more measurements you can make of the asteroid and the more precise those measurements are, the better you’re going to do at pinning down its track.

But some problems are more like predicting the weather. That’s another situation where having plenty of fine-grained data, and the computational power to plow through it quickly, can really help. In 1950, it took the early computer ENIAC 24 hours to simulate 24 hours of weather, and that was an astounding feat of space-age computation. In 2008, the computation was reproduced on a Nokia 6300 mobile phone in less than a second. Forecasts aren’t just faster now; they’re longer-range and more accurate, too. In 2010, a typical five-day forecast was as accurate as a three-day forecast had been in 1986.

It’s tempting to imagine that predictions will just get better and better as our ability to gather data gets more and more powerful. Won’t we eventually have the whole atmosphere simulated to a high precision in a server farm somewhere under The Weather Channel’s headquarters? Then, if you wanted to know next month’s weather, you could just let the simulation run a little bit ahead.

It’s not going to be that way. Energy in the atmosphere burbles up very quickly from the tiniest scales to the most global, with the effect that even a minuscule change at one place and time can lead to a vastly different outcome only a few days down the road. Weather is, in the technical sense of the word, *chaotic*. In fact, it was in the numerical study of weather that Edward Lorenz discovered the mathematical notion of chaos in the first place. He wrote, “One meteorologist remarked that if the theory were correct, one flap of a sea gull’s wing would be enough to alter the course of the weather forever. The controversy has not yet been settled, but the most recent evidence seems to favor the sea gulls.”

There’s a hard limit to how far in advance we can predict the weather, no matter how much data we collect. Lorenz thought it was about two weeks, and so far the concentrated efforts of the world’s meteorologists have given us no cause to doubt that boundary.
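You can watch sensitive dependence happen in miniature. The sketch below uses the logistic map as a toy stand-in for the atmosphere; nothing here is meteorology, but the qualitative behavior, a one-in-a-billion nudge swamping the forecast within a few dozen steps, is the same phenomenon Lorenz found:

```python
# Two runs of the logistic map, a toy chaotic system standing in for
# the atmosphere. The second run starts a billionth away from the first.
def trajectory(x0, steps, r=3.9):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2, 60)
b = trajectory(0.2 + 1e-9, 60)
gaps = [abs(x - y) for x, y in zip(a, b)]

print(gaps[5])        # still microscopic: the runs look identical
print(max(gaps[40:])) # a few dozen steps later, they disagree wildly
```

No amount of extra measurement fixes this: halving the initial error buys you only one or two more steps of agreement, which is why the forecasting horizon is a wall rather than a moving target.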

Is human behavior more like an asteroid or more like the weather? It surely depends on what aspect of human behavior you’re talking about. In at least one respect, human behavior ought to be even harder to predict than the weather. We have a very good mathematical model for weather, which allows us at least to get better at short-range predictions when given access to more data, even if the inherent chaos of the system inevitably wins out. For human action we have no such model, and may never have one. That makes the prediction problem massively harder.

In 2006, Netflix launched a $1 million competition to see if anyone could write an algorithm that outperformed Netflix’s own with regard to recommending movies to customers. The finish line didn’t seem very far from the start: The winner would be the first program to do 10 percent better at recommending movies than Netflix did.

Contestants were given a huge file of anonymized ratings—about 100 million ratings in all, covering 17,770 movies and almost half a million Netflix users. The challenge was to predict how users would rate movies they *hadn’t* seen. There’s data—lots of data. And it’s directly relevant to the behavior you’re trying to predict. And yet this problem is really, really hard. It ended up taking three years before anyone crossed the 10 percent improvement barrier, and it was only done when several teams banded together and hybridized their almost-good-enough algorithms into something just strong enough to collapse across the finish line. Netflix never even used the winning algorithm in its business; by the time the contest was over, Netflix was already transitioning from sending DVDs in the mail to streaming movies online, which makes dud recommendations less of a big deal. And if you’ve ever used Netflix (or Amazon, or Facebook, or any other site that aims to recommend you products based on the data it’s gathered about you), you know that the recommendations remain pretty comically bad. They might get a lot better as even more streams of data get integrated into your profile. But they certainly might not.

Which, from the point of view of the companies doing the gathering, is not so bad. It would be great for Target if they knew with absolute certainty whether or not you were pregnant, just from following the tracks of your loyalty card. They don’t. But it would also be great if they could be 10 percent more accurate in their guesses than they are now. Same for Google. They don’t have to know exactly what product you want; they just have to have a better idea than competing ad channels do. You don’t need to outrun the bear!

Predicting your behavior 10 percent more accurately isn’t actually all that spooky for you, but it can mean a lot of money for them. I asked Jim Bennett, the vice president for recommendations at Netflix at the time of the competition, why they’d offered such a big prize. He told me I should have been asking why the prize was so small. A 10 percent improvement in their recommendations, small as that seems, would recoup the $1 million in less time than it takes to make another *Fast and Furious* movie.

It’s no big deal if Netflix suggests the wrong movie to you. But in other domains, bad data is more dangerous. Think about algorithms that try to identify people with an elevated chance of being involved in terrorism, or people who are more likely than most to owe the government money. Or the secret systems the rating agencies use to assess the riskiness of financial assets.

Here, the mistakes have real consequences. It’s creepy and bad when Target intuits that you’re pregnant. But it’s even creepier and worse if you’re *not* pregnant—or a terrorist, or a deadbeat dad—and an algorithm, doing its business in a closed and opaque box, decides that you are.

# Does 0.999… = 1? And Are Divergent Series the Invention of the Devil?

I did a Q&A this week at the website *io9*, and one of the questions was something I get all the time: How can you add up a series of infinitely many numbers and get a finite answer? This question has been troubling mathematicians ever since Zeno, who wondered how you can possibly walk from one side of the street to the other, given that you’ve first got to go halfway, then cover half the remaining distance, then half of what’s still left, and so on, ad infinitum. A few months ago, a video claiming that the infinite series

1 + 2 + 3 + …

had the value -1/12 went improbably viral, launching fierce arguments all over the Internet, including here on *Slate*.

But the version of the infinity puzzle most people first encounter, the one I’ve had 100 arguments or more about, is this one: Is the repeating decimal 0.999… equal to 1?

I have seen people come nearly to blows over this question. (Admittedly, these people were teenagers at a summer math camp.) It’s hotly disputed on websites ranging from *World of Warcraft* message boards to Ayn Rand forums. Our natural feeling about Zeno is, *of course you eventually get all the way across the street*. But in the case of the repeating decimal, intuition points the other way. Most people, if you press them, say that 0.999… doesn’t equal 1. It doesn’t *look* like 1, that’s for sure. It looks smaller. But not much smaller! It seems to get closer and closer to its goal, without ever arriving.

And yet, their teachers, myself included, will tell them, *No, it’s 1*.

How do I convince someone to come over to my side? One good trick is to argue as follows. Everyone knows that

0.333… = 1/3

Multiply both sides by 3 and you’ll see

0.999… = 3 / 3 = 1

If that doesn’t sway you, try multiplying 0.999… by 10, which is just a matter of moving the decimal point one spot to the right.

10 x (0.999…) = 9.999…

Now subtract the vexing decimal from both sides.

10 x (0.999…) - 1 x (0.999…) = 9.999… - 0.999…

The left-hand side of the equation is just 9 times (0.999…), because 10 times something minus that something is 9 times the aforementioned thing. And over on the right-hand side, we have managed to cancel out the terrible infinite decimal, and are left with a simple 9. So we end up with

9 x (0.999…) = 9.

If 9 times something is 9, that something just has to be 1—doesn’t it?

These arguments are often enough to win people over. But let’s be honest: They lack something. They don’t really address the anxious uncertainty induced by the claim 0.999… = 1. Instead, they represent a kind of algebraic intimidation. You believe that 1/3 is 0.3 repeating—don’t you? *Don’t you*?

Or worse: Maybe you bought my argument based on multiplication by 10. But how about this one? What is

1 + 2 + 4 + 8 + 16 + …?

Here the … means *carry on the sum forever, adding twice as much each time*. Surely such a sum must be infinite! But an argument much like the apparently correct one concerning 0.999… seems to suggest otherwise. Multiply the sum above by 2 and you get

2 x (1 + 2 + 4 + 8 + 16 + …) = 2 + 4 + 8 + 16 + …

That looks a lot like the original sum. Indeed, it is just the original sum (1 + 2 + 4 + 8 + 16 + …) with the 1 lopped off the beginning, which means that the right-hand side is 1 less than (1 + 2 + 4 + 8 + 16 + …). In other words,

2 x (1 + 2 + 4 + 8 + 16 + …) - 1 x (1 + 2 + 4 + 8 + 16 + …) = −1

But the left-hand side simplifies to the very sum we started with, and we’re left with

1 + 2 + 4 + 8 + 16 + … = −1

Is that what you want to believe? That adding bigger and bigger numbers, ad infinitum, flops you over into negativeland?

(So as not to leave you hanging, there is another context, the world of 2-adic numbers, where this crazy-looking computation is actually correct.)
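The modular-arithmetic shadow of that 2-adic fact is easy to check for yourself: the partial sums 1 + 2 + 4 + ⋯ + 2^(k−1) equal 2^k − 1, which is exactly what −1 looks like modulo 2^k. A quick sketch:

```python
# The partial sums of 1 + 2 + 4 + 8 + ... are 2**k - 1, which agrees
# with -1 modulo ever-larger powers of 2. That congruence is the
# ordinary-arithmetic shadow of the 2-adic statement in the text.
for k in range(1, 20):
    partial = sum(2**i for i in range(k))  # 1 + 2 + ... + 2**(k-1)
    assert partial == 2**k - 1
    assert partial % 2**k == (-1) % 2**k
print("congruent to -1 mod 2**k for k = 1 .. 19")
```

In the 2-adic world, "agreeing with −1 modulo higher and higher powers of 2" is precisely what it means to converge to −1.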

More craziness: What is the value of the infinite sum

1 – 1 + 1 – 1 + 1 – 1 + …

One might first observe that the sum is

(1-1) + (1-1) + (1-1) + … = 0 + 0 + 0 + …

And argue that the sum of a bunch of zeroes, even infinitely many, has to be 0. Or you could rewrite the sum as

1 - (1 - 1) - (1 - 1) - (1 - 1) - … = 1 – 0 – 0 – 0 …

which seems to demand, in the same way, that the sum is equal to 1! So which is it, 0 or 1? Or is it somehow “0 half the time and 1 half the time”? It seems to depend where you stop. But infinite sums never stop!

Don’t decide yet, because it gets worse. Suppose T is the value of our mystery sum:

T = 1 – 1 + 1 – 1 + 1 – 1 + …

Taking the negative of both sides gives you

-T = −1 + 1 - 1 + 1 - …

But the sum on the right-hand side is precisely what you get if you take the original sum defining T and cut off that first 1, thus subtracting 1. In other words

-T = −1 + 1 - 1 + 1 - … = T - 1

So -T = T - 1, an equation concerning T which is satisfied only when T is equal to 1/2. Can a sum of infinitely many whole numbers somehow magically become a fraction?

If you say no, you have the right to be at least a little suspicious of slick arguments like this one. But note that some people said yes, including Guido Grandi, after whom the series 1 - 1 + 1 - 1 + 1 - 1 + … is usually named. In a 1703 paper, he argued that the sum of the series is 1/2, and moreover that this miraculous conclusion represented the creation of the universe from nothing. (Don’t worry, I don’t follow that step either.) Other leading mathematicians of the time, like Leibniz and Euler, were on board with Grandi’s strange computation, if not his interpretation.

But in fact, the answer to the 0.999… riddle (and to Zeno’s paradox, and Grandi’s series) lies a little deeper. You don’t have to give in to my algebraic strong-arming. You might, for instance, insist that 0.999… is not equal to 1, but rather 1 minus some tiny infinitesimal number. And, for that matter, that 0.333… is not *exactly* equal to 1/3, but also falls short by an infinitesimal quantity. This point of view requires some stamina to push through to completion, but it can be done. There’s a whole field of mathematics that specializes in contemplating numbers of this kind, called *nonstandard analysis*. The theory, developed by Abraham Robinson in the mid-20th century, finally made sense of the notion of the infinitesimal. The price you have to pay (or, from another point of view, the reward you get to reap) is a profusion of novel kinds of numbers—not only infinitely small ones, but infinitely large ones, a huge spray of them in all shapes and sizes.

But we’re no closer to settling our dispute. What is 0.999… *really*? Is it 1? Or is it some number infinitesimally less than 1, a crazy kind of number that hadn’t even been discovered 100 years ago?

The right answer is to unask the question. What is 0.999… really? It appears to refer to a kind of sum:

0.9 + 0.09 + 0.009 + 0.0009 + …

But what does that mean? That pesky ellipsis is the real problem. There can be no controversy about what it means to add up two, or three, or 100 numbers. This is just mathematical notation for a physical process we understand very well: Take 100 heaps of stuff, mush them together, see how much you have. But infinitely many? That’s a different story. In the real world, you can never have infinitely many heaps. What is the numerical value of an infinite sum? It doesn’t have one—*until we give it one*. That was the great innovation of Augustin-Louis Cauchy, who introduced the notion of *limit* into calculus in the 1820s.

The great number theorist G.H. Hardy, in his book *Divergent Series*, explains it best:

[I]t does not occur to a modern mathematician that a collection of mathematical symbols should have a “meaning” until one has been assigned to it by definition. It was not a triviality even to the greatest mathematicians of the 18th century. They had not the habit of definition: it was not natural to them to say, in so many words, “by X we *mean* Y.” … It is broadly true to say that mathematicians before Cauchy asked not, “How shall we *define* 1 - 1 + 1 - 1 + …?” but “What *is* 1 - 1 + 1 - 1 + …?”, and that this habit of mind led them into unnecessary perplexities and controversies which were often really verbal.

This is not just loosey-goosey mathematical relativism. Just because we *can* assign whatever meaning we like to a string of mathematical symbols doesn’t mean we should. In math, as in life, there are good choices and there are bad ones. In the mathematical context, the good choices are the ones that settle unnecessary perplexities without creating new ones.

The sum 0.9 + 0.09 + 0.009 + … gets closer and closer to 1 the more terms you add. And it never gets any further away. No matter how tight a cordon we draw around the number 1, the sum will eventually, after some finite number of steps, penetrate it, and never leave. Under those circumstances, Cauchy said, we should simply *define* the value of the infinite sum to be 1. And then he worked very hard to prove that this choice of definition didn’t cause horrible contradictions to pop up elsewhere.
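For this particular series you can verify the convergence by hand, or let exact rational arithmetic do it for you: after n terms the sum falls short of 1 by exactly 1/10^n, a gap that shrinks past any cordon you care to draw. A sketch:

```python
from fractions import Fraction

def partial_sum(n):
    """0.9 + 0.09 + 0.009 + ..., cut off after n terms, computed exactly."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# The gap below 1 after n terms is exactly 1/10**n: it never hits zero,
# but it shrinks past any threshold you name, which is all Cauchy asks.
for n in [1, 5, 50]:
    assert Fraction(1) - partial_sum(n) == Fraction(1, 10**n)
print("gap after n terms is exactly 1/10**n")
```

Using `Fraction` rather than floating point matters here: floats would round the gap to zero long before the mathematics does.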

As for Grandi’s 1 - 1 + 1 - 1 + …, it is one of the series outside the reach of Cauchy’s theory—that is, one of the *divergent series* that formed the subject of Hardy’s book. In the famous words of Lindsay Lohan, the limit does not exist!

The Norwegian mathematician Niels Henrik Abel, an early fan of Cauchy’s approach, wrote in 1828, “Divergent series are the invention of the devil, and it is shameful to base on them any demonstration whatsoever.” Hardy’s view, which is our view today, is more forgiving. There are some divergent series to which we ought to assign values and some to which we ought not, and some to which we ought or ought not depending on the context in which the series arises. Modern mathematicians would say that if we are to assign the Grandi series a value, it should be 1/2, because, as it turns out, all interesting theories of infinite sums either give it the value 1/2 or decline, like Cauchy’s theory, to give it any value at all. The series 1 + 2 + 3 + 4 + … has a similar status; it’s divergent, and Cauchy would have said it has no value. But if you *have* to assign it a value, -1/12 is probably the best choice.
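Cesàro summation (take the running average of the partial sums, instead of the partial sums themselves) is one of those interesting theories, and it's simple enough to watch it land on 1/2 exactly:

```python
from fractions import Fraction

# The partial sums of Grandi's series oscillate forever:
partial = [sum((-1)**i for i in range(n)) for n in range(1, 9)]
print(partial)  # alternates 1, 0, 1, 0, ...

# Cesaro summation averages the partial sums instead. For an even
# number of terms the average is exactly 1/2.
def cesaro_mean(n):
    sums = [sum(Fraction((-1)**i) for i in range(k)) for k in range(1, n + 1)]
    return sum(sums) / n

print(cesaro_mean(1000))  # exactly 1/2
```

The partial sums spend half their time at 1 and half at 0, so their average settles at 1/2, the same value the T = −T + 1... wait, the -T = T - 1 manipulation produced. That agreement between two very different methods is why 1/2 is the consensus value.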

Why is the 0.999… problem so controversial? Because it brings our intuitions into conflict. We would like the sum of an infinite series to play nicely with arithmetic manipulations like the ones we carried out above, and this seems to demand that the sum equal 1. On the other hand, we would like each number to be represented by a unique string of decimal digits, which conflicts with the claim that the same number can be called either 1 or 0.999… as we like. We can’t hold on to both desires at once; one must be discarded. In Cauchy’s approach, which has amply proved its worth in the two centuries since he invented it, it’s the uniqueness of the decimal expansion that goes out the window.

We’re untroubled by the fact that the English language sometimes uses two different strings of letters (i.e., two words) to refer synonymously to the same thing in the world. In the same way, it’s not so bad that two different strings of digits can refer to the same number. Is 0.999… equal to 1? It is, but only because we’ve collectively *decided* that 1 is the right thing for that repeated decimal to mean.

# How to Lie With Negative Numbers

A recent working paper by economists Michael Spence and Sandile Hlatshwayo painted a striking picture of job growth in the United States. It’s traditional and pleasant to think of America as an industrial colossus, whose factories run furiously night and day producing the goods the world demands. Contemporary reality is rather different. Between 1990 and 2008, the U.S. economy gained a net 27.3 million jobs. Of those, 26.7 million, or 98 percent, came from the “nontradable sector”—the part of the economy including things like government, health care, retail, and food service, which can’t be outsourced and which don’t produce goods to be shipped overseas.

That number tells a powerful story about recent American industrial history, and it was repeated everywhere from the *Economist* to Bill Clinton’s latest book. But you have to be careful about what it means. Ninety-eight percent is really, really close to 100 percent. So does the study say growth is as concentrated in the nontradable part of the economy as it could possibly be? That’s what it sounds like—but that’s not quite right.

Jobs in the tradable sector grew by a mere 620,000 between 1990 and 2008. But it could have been worse—they could have declined! That’s what happened between 2000 and 2008; the tradable sector lost about 3 million jobs, while the nontradable sector added 7 million. So the nontradable sector accounted for 7 million jobs out of the total gain of 4 million, or 175 percent!

The slogan to live by here is: *Don’t talk about percentages of numbers when the numbers might be negative.*

This may seem overly cautious. Negative numbers are numbers, and as such they can be multiplied and divided like any others. But even this is not as trivial as it first appears. To our mathematical predecessors, it wasn’t even clear negative numbers were numbers at all—they do not, after all, represent quantities in exactly the same way as positive numbers do! I can have seven apples in my hand, but not negative seven.

The great 16th-century algebraists, like Cardano and Vieta, argued furiously about whether a negative times a negative equaled a positive. Or rather, they understood that consistency seemed to demand that this be so, but there was real dissent about whether this had been proved factual or was only a notational expedient. Cardano, when an equation he was studying had a negative number among its solutions, had the habit of calling the offending solution *ficta*, or fake.

The mathematical arguments of the Italian Renaissance can at times seem as recondite and irrelevant to us as their theology. But they weren’t wrong that there’s something about the combination of negative quantities and arithmetic operations like percentage that short-circuits one’s intuition. When you disobey the slogan I gave you, all sorts of weird incongruities start to bubble up.

For example, say I run a coffee shop. And people, sad to say, are not buying my coffee—last month, I lost $500 on that part of my business. Fortunately, I had the prescience to install a pastry case and a CD rack, and those two operations made a $750 profit each.

In all, I made $1,000 last month. Seventy-five percent of that amount came from my pastry case, which sounds like the pastry case is what’s really moving my business right now; almost all my profit is croissant-driven. Except that it’s just as correct to say that 75 percent of my profits came from the CD rack. And imagine if I’d lost $1,000 more on coffee—then my total profits would be zero, infinity percent of which would be coming from pastry! Seventy-five percent sounds like it means “almost all,” but when you’re dealing with numbers that could be either positive or negative, like profits, it might mean something very different.
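The whole pathology fits in a few lines of arithmetic. A sketch of the coffee-shop ledger:

```python
# The ledger from the text: a loss on coffee, matching profits on
# pastry and CDs. "Percent of total profit" goes haywire because the
# total mixes positive and negative entries.
profits = {"coffee": -500, "pastry": 750, "cds": 750}
total = sum(profits.values())  # 1,000 in all

pct = {k: 100 * v / total for k, v in profits.items()}
print(pct)  # pastry and CDs are each "75 percent of profits"
```

Two different line items each account for 75 percent of the same total, and coffee accounts for negative 50 percent; the shares still sum to 100, but no single share means "share of the whole" anymore.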

This problem never arises when you study numbers that are constrained to be *positive*, like expenses, revenues, or populations. If 75 percent of Americans think Paul McCartney was the cutest Beatle, then it’s not possible that another 75 percent give the nod to Ringo Starr. Ringo, George, and John have to split the remaining 25 percent between them.

One can’t object very much to what Spence and Hlatshwayo wrote. It’s true, the total job growth in an aggregate of hundreds of industries *can* be negative, but in a normal economic context over a reasonably long time interval, it's extremely likely to be positive. The population keeps growing, after all, and, absent total disaster, that tends to drag the absolute number of jobs along with it.

But other percentage-flingers are not so careful. In June 2011, the Republican Party of Wisconsin issued a news release touting the job-creating record of Gov. Scott Walker. It had been another weak month for the U.S. economy as a whole, which added only 18,000 jobs nationally. But the state employment numbers looked much better: a net increase of 9,500 jobs. “Today,” the statement read, “we learned that over 50 percent of U.S. job growth in June came from our state.” The talking point was picked up and distributed by GOP politicians, like Rep. Jim Sensenbrenner, who told an audience in a Milwaukee suburb, “The labor report that came out last week had an anemic 18,000 created in this country, but half of them came here in Wisconsin. Something we are doing here must be working."

This is a perfect example of the soup you get into when you start reporting percentages of numbers, like net job gains, that might be either positive or negative. Wisconsin added 9,500 jobs, which is good; but neighboring Minnesota, under Democratic Gov. Mark Dayton, added more than 13,000 in the same month. Texas, California, Michigan, and Massachusetts also outpaced Wisconsin’s job gains. Wisconsin had a good month, that’s true, but it didn’t contribute as many jobs as the rest of the country put together, as the Republican messaging suggested. In reality, job losses in other states almost exactly balanced out the jobs created in places like Wisconsin, Massachusetts, and Texas. That’s how Wisconsin’s governor could claim his state accounted for half the nation’s job growth, and Minnesota’s governor, if he’d cared to, could have said that his own state was responsible for 70 percent of it, and they could both, in this technically correct but fundamentally misleading way, be right.
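The dueling talking points can be reproduced with the article's rounded figures. In this sketch the "everyone else" number is inferred, chosen so the national total nets out to +18,000:

```python
# Illustrative state-level job changes for June 2011 (rounded figures
# from the article; the rest-of-country figure is inferred so that the
# national total comes to +18,000 jobs).
wisconsin = 9_500
minnesota = 13_000
everyone_else = 18_000 - wisconsin - minnesota  # a net LOSS of 4,500 jobs

us_total = wisconsin + minnesota + everyone_else  # 18,000

print(100 * wisconsin / us_total)  # ~53%: "over half of U.S. job growth"
print(100 * minnesota / us_total)  # ~72%: Minnesota's equally "correct" claim
```

Because the rest of the country's net change is negative, each state's share of the national figure can exceed its honest share of the jobs actually created.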

Percentages of negative numbers are especially perilous when you start thinking about inequality, as Felix Salmon explained in an April article about why it makes no sense to say the 85 richest people on Earth hold 50 percent of the world’s wealth. Or take this 2012 *New York Times* op-ed by Steven Rattner, which used the work of economists Thomas Piketty and Emmanuel Saez to argue that the current economic recovery is unequally distributed among Americans:

New statistics show an ever-more-startling divergence between the fortunes of the wealthy and everybody else—and the desperate need to address this wrenching problem. Even in a country that sometimes seems inured to income inequality, these takeaways are truly stunning.

In 2010, as the nation continued to recover from the recession, a dizzying 93 percent of the additional income created in the country that year, compared to 2009—$288 billion—went to the top 1 percent of taxpayers, those with at least $352,000 in income. … The bottom 99 percent received a microscopic $80 increase in pay per person in 2010, after adjusting for inflation. The top 1 percent, whose average income is $1,019,089, had an 11.6 percent increase in income.

The article comes packaged with a handsome infographic that breaks the income gains up even further: 37 percent to the ultra-rich members of the top 0.01 percent, with 56 percent to the rest of the top 1 percent, leaving a meager 7 percent for the remaining 99 percent of the population. You can make a little pie chart:

Recall that the top 1 percent scored 93 percent of income gains. Now let’s slice the pie one more time, and ask about the people who are in the top 10 percent, but *not* the top 1 percent. Here you’ve got the family doctors, the non-elite lawyers, the engineers, and the upper-middle managers. How big is their slice? You can get this from Piketty and Saez’s data, which they’ve helpfully put online. (Warning: Clicking that link will download a large spreadsheet.) And you find something curious. This group of Americans had an average income of about $159,000 in 2009, which increased to a little more than $161,000 in 2010. That’s a modest gain compared with what the richest percentile racked up, but it still accounts for 17 percent of the total income gained between 2009 and 2010.

Try to fit a 17 percent slice of the pie in with the 93 percent share held by the 1 percenters and you find you’ve got more pie than plate.

Ninety-three percent and 17 percent add up to more than 100 percent; how does this make sense? It makes sense because the bottom 90 percent actually had *lower* average income in 2010 than they did in 2009, recovery or no recovery. Put negative numbers in the mix, and percentages get wonky.
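The over-100-percent arithmetic can be checked with stand-in numbers. In this sketch the dollar amounts are invented; only the shares are taken from the article, and they work out precisely because the bottom group's change is negative:

```python
# Illustrative income gains by group, in billions of dollars (made-up
# amounts chosen so the shares match the article: the top 1 percent
# captures 93 percent of total gains and the next 9 percent captures
# 17 percent, possible only because the bottom 90 percent lost ground).
gains = {"top_1": 93, "next_9": 17, "bottom_90": -10}

total = sum(gains.values())  # 100

shares = {k: 100 * v / total for k, v in gains.items()}
print(shares)  # 93%, 17%, and -10%: the positive slices overfill the pie
```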

None of which is to deny that morning in America comes a little earlier in the day for the richest Americans than it does for the middle class. But it does put a slightly different spin on the story. It’s not that the 1 percent are benefiting while the rest of America languishes. The people in the top 10 percent but not the top 1 percent—a group that includes, not to put too fine a point on it, many readers of the *New York Times* opinion page—are doing fine, too, capturing more than twice as much as the 7 percent share that the pie chart appears to allow them. It’s the *other* 90 percent of the country whose tunnel still looks dark at the end.

# Why Are Handsome Men Such Jerks?

Julian Barnes’ *The Sense of an Ending* is a good novel. We know it’s a good novel because lots of people like it, and because it won the Man Booker, one of the biggest prizes in English-language literature. But here’s the funny thing. After the book won the prize, people didn’t like it as much! Its rating on the site Goodreads took a sudden plunge. And it wasn’t the only book to suffer that fate. A recent paper by sociologists Balázs Kovács and Amanda J. Sharkey studied a group of 32 English-language novels that won major literary awards. After the prize, their ratings on Goodreads dropped from an average of just under 4 to about 3.75. A group of comparably rated novels that were short-listed for prizes, but didn’t win, showed no such diminution.

When a book wins a Booker, that ought to make us think it’s good. Every sociologist—OK, every human being over the age of 12—knows we like things more when we hear that other people like them. So what explains the Booker backlash?

At least in part, it’s a quirk of statistics called Berkson’s fallacy. If you know one thing about correlation, it’s that correlation is not the same as causation. Two variables, like height and math scores in school kids, may be correlated, even though being good at math doesn’t make you taller, or vice versa. What’s going on is that older kids are both taller and better at math. Correlation can arise from a common cause that drives both variables in the same direction.

But that’s not the only way misleading correlations can pop up. Joseph Berkson, the longtime head of the medical statistics division at the Mayo Clinic, observed in 1938 that correlations can also arise from a common *effect*. Berkson’s research was about medical data in hospitals, but it’s easier to explain the phenomenon in terms of the Great Square of Men.

Suppose you’re a person who dates men. You may have noticed that, among the men in your dating pool, the handsome ones tend not to be nice, and the nice ones tend not to be handsome. Is that because having a symmetrical face makes you cruel? Does it mean that being nice to people makes you ugly? Well, it could be. But it doesn’t have to be.

Behold the Great Square of Men. (And I'd like to note that you can find more stunning hand-drawn illustrations just like this one in *How Not to Be Wrong*.)

Now, let’s take as a working hypothesis that men are in fact equidistributed all over this square. In particular, there are nice handsome ones, nice ugly ones, mean handsome ones, and mean ugly ones, in roughly equal numbers.

But niceness and handsomeness have a common effect: They put these men in the group of people that you notice. Be honest—the mean uglies are the ones you never even consider. So inside the Great Square is a Smaller Triangle of Acceptable Men:

Now the source of the phenomenon is clear. The handsomest men in your triangle, over on the far right, run the gamut of personalities, from kindest to (almost) cruelest. On average, they are about as nice as the average person in the whole population, which, let’s face it, is not that nice. And by the same token, the nicest men are only averagely handsome. The ugly guys you like, though—they make up a tiny corner of the triangle, and they are pretty darn nice. They have to be, or they wouldn’t be visible to you at all. The negative correlation between looks and personality in your dating pool is absolutely real. But the relation isn’t causal. If you try to improve your boyfriend’s complexion by training him to act mean, you’ve fallen victim to Berkson’s fallacy.
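The selection effect is easy to reproduce in a quick simulation. In this sketch (my own illustration, not from the book), looks and niceness are generated independently, yet restricting attention to an "acceptable" pool manufactures a negative correlation out of nothing:

```python
import random

# Berkson's fallacy in miniature: looks and niceness are independent
# uniform scores, and the "dating pool" keeps only men whose combined
# score clears a threshold (the Smaller Triangle of Acceptable Men).
random.seed(0)
men = [(random.random(), random.random()) for _ in range(100_000)]
pool = [(looks, nice) for looks, nice in men if looks + nice > 1.2]

def correlation(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

print(round(correlation(men), 2))   # near 0: no correlation overall
print(round(correlation(pool), 2))  # clearly negative within the pool
```

No causal arrow from looks to niceness is needed; conditioning on the common effect (being noticed) does all the work.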

The fallacy works, too, as a driver of literary snobbery. Why are popular novels so terrible? It’s not because the masses don’t appreciate quality. It’s because the novels you read are the ones in the Acceptable Triangle, which are either popular or good. So within that group, the good ones are less likely to be popular, for the same reason the handsomer men are bigger jerks. If you force yourself to read unpopular novels chosen essentially at random—I’ve been on a jury for a literary prize, so I’ve actually done this—you find that most of them, just like the popular ones, are pretty bad. And I imagine if you dated men chosen completely at random from OkCupid, you’d find that the less attractive men were just as jerky as the chiseled hunks. But that’s an experiment I can’t recommend, not even for the sake of mathematical enlightenment.

And now what happened to Julian Barnes is pretty clear. There are two reasons you might have read *The Sense of an Ending* and rated it on Goodreads. It might be because it’s exactly the kind of novel you’re apt to like. Or it might be because it won the Booker Prize. When a book wins a prize, then its audience expands beyond the core group of fans already predisposed to love it. That’s what every author dreams of, but more frequently read inevitably means less universally liked.

# Stephen Colbert Thinks “Number Sentences” Are Silly. They’re Not.

People who teach math, like me, hate it when students ask us, “When am I going to use this?” We don’t hate it because it’s a bad question. We hate it because it’s a *really good* question, and one that our curriculum isn’t really set up to answer. And that’s a problem.

In my day job I’m a pure mathematician, specializing in the most abstract parts of number theory and geometry. But I’ve also been writing here at *Slate* for more than a decade about the connections between the mathematical world and the things we think about every day. The foreignness of mathematical language, from arithmetic to calculus, can create among outsiders the misapprehension that these spheres of thought are totally alien and contrary to how most people navigate the world around them.

That’s exactly wrong. Math is built on our natural ability to reason. Despite the power of mathematics, and despite its sometimes forbidding notation and abstraction, the actual mental work involved doesn’t differ much at all from the way we think about more down-to-earth problems. Rather, mathematics is like an atomic-powered prosthesis that you attach to your common sense, vastly multiplying its reach and strength.

In my new book, *How Not to Be Wrong*, I write about the many ways math is woven into our thinking, covering everything from lottery schemes to the obesity apocalypse to the Supreme Court’s view of crime and punishment to the existence of God. It’s not the kind of book where math is a big floating blob to be admired (or feared) from afar. We get right up next to it and get our hands dirty. Because that’s what math is about.

This week, I’ll be math-blogging at *Slate*—sometimes about stuff from the book, other times about math that’s in the news right now. Let’s get started right now, because I have something to get off my chest about mathematics education.

* * *

We are angry about the way math is taught. We have always been angry about the way math is taught. One day Pythagoras started talking up his theorem, and the next day, people were saying, *When I was a kid we just measured the hypotenuse*. Easy peasy. Why do you have to make things so complicated, Pythagoras?

Nobody’s expressed this anger more precisely or more hilariously than Tom Lehrer did in his song “New Math” (from the early 1960s, in case you’re under the mistaken impression the math wars are a contemporary development). “But in the new approach, as you know,” Lehrer says, “the important thing is to understand what you’re doing, rather than to get the right answer.”

At the moment, the anger is centered on the Common Core, a fairly mild-mannered suite of goals and standards that’s become a stand-in in the popular mind for everything from overreliance on standardized tests (if you’re a Democrat) to jackbooted federal intrusion into local culture (if you’re a Republican). A Florida legislator said it would turn our kids gay. Louis C.K. said it made his kid cry.

And then there’s the “number sentence.” OK, number sentences aren’t actually mentioned anywhere in the Common Core, and they’ve been part of the math curriculum for decades. They sound weird and unfamiliar, though, which makes them fair game for schools-these-days tsk-tsk-ery. A New York principal argued in the *Washington Post* that the number sentence concept was too sophisticated for young children. And Stephen Colbert mocked the phrase, suggesting that students should also have the option of using “a word equation or formula paragraph.”

The phrase “number sentence” was new to me, too, when my 8-year-old son brought it home from school this year. But I didn’t make fun of it. I cheered it! When we call an equation like

2 + 3 = 5

a “number sentence” we’re not dumbing down. We’re telling it like it is. That equation *is* a sentence. It’s a sequence of symbols that makes an assertion about the world. “Dave” is not a sentence; it’s a proper noun. “Vsdsgs” is not a sentence; it’s a meaningless glurg. But “Dave is my brother” is a sentence. It has a subject (“Dave”), a verb (“is”), and an object (“my brother”). Just like 2 + 3 = 5: the subject is “2 + 3,” the verb is “=,” and the object is “5.”

What’s wrong with calling 2 + 3 = 5 a good old “equation”? First of all, not all number sentences are equations. A number sentence is a sentence about numbers, which could be an equation, like 2 + 3 = 5, but also an inequality, like 5 > 3.
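The point that a number sentence is a claim with a truth value is exactly how programming languages treat it. A minimal Python illustration:

```python
# Number sentences evaluated as claims about the world: each one is
# either True or False, just like "Dave is my brother."
print(2 + 3 == 5)   # True: a true number sentence (an equation)
print(2 + 3 == 6)   # False: perfectly grammatical, but false
print(5 > 3)        # True: an inequality is a number sentence too
```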

Worse, “equation” has been around so long that it’s lost contact with its literal meaning, “something that equates.” Students in my courses routinely refer to “5 > 3” as an equation. The same with a mathematical expression like “*x*^2 + *y*^2,” which is no more an equation than “Dave” is a sentence.

When we call 2 + 3 = 5 a “sentence” we engage in the radical act of insisting that mathematics has meaning. That *shouldn’t* be a radical act. But, too often, we teach our students that “doing mathematics” means “manipulating clusters of digits according to rules presented to us by the teacher.”

That’s not math. And when we teach our students to do that, and only that, we are training them to be slow, buggy versions of Excel. What’s the point?

If a student doesn’t truly grasp that “2 + 3 = 5” is a sentence, a statement about the world that might be true or false, it’s hard to see how, when algebra comes around, they can grasp that

*x*^2 + 3 = 5

is a sentence, too, one which is true for precisely two values of *x* (namely, the positive and negative square roots of 2) and false for all the rest.
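A hypothetical sketch of that sentence as Python code, treating it as a function of *x*. (A tolerance is needed because the floating-point square of √2 is not exactly 2.)

```python
import math

def sentence(x, tol=1e-12):
    """Is the number sentence x**2 + 3 = 5 true for this x?

    Compared within a tolerance, since floating-point arithmetic makes
    math.sqrt(2) ** 2 land a hair away from exactly 2.
    """
    return math.isclose(x**2 + 3, 5, rel_tol=0, abs_tol=tol)

roots = [math.sqrt(2), -math.sqrt(2)]
print(all(sentence(x) for x in roots))          # True: both roots work
print(any(sentence(x) for x in [0, 1, 2, -3]))  # False: everything else fails
```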

One solution, of course, is to double down, addressing algebra, too, in a purely algorithmic way. You have an equation involving *x*; certain modifications of the equation are allowed (because the teacher says they’re allowed), and when you get to something that has just “*x* =” on one side, you’ve won the game. If you can do this, you can get an A on a typical algebra test. But can you do algebra? I’m not so sure.

My kids really liked a tablet game called DragonBox, which, according to its promotional material, “secretly teaches algebra to your children.” Kind of like the way you can secretly feed kale to your children by grinding it up and hiding it in a meatball. But what DragonBox actually teaches are the *rules* of algebra: that you’re allowed to add the same symbol to both sides, that multiplying a sum by a symbol requires you to multiply each summand by the same symbol, and so on. Getting these rules in muscle memory is what you need if you want to get fluent in algebraic computations.

There’s just one thing missing, but it’s a big thing: the fact that algebra is made of sentences, that it means something, that it refers to something outside itself. An algebraic statement isn’t just a string of symbols with an *x* stuck in there somewhere. It’s an assertion about a relationship between quantities (or, when you get to more advanced algebra, between functions, or operations, or even other assertions.) Without that animating idea, algebra is a dead and empty exercise.

To be fair, Jean-Baptiste Huynh, who created DragonBox, gets this. He says in an interview that “DragonBox does 50 percent of the job. We need to teach the rest.” Fifty percent is not such a small number, and it sounds about right to me. Computation is important! We lose just as badly if we generate students who have some wispy sense of mathematical meaning but who can’t work examples swiftly and correctly. A math teacher’s least favorite thing to hear from a student is “I get the concept, but I couldn’t do the problems.” Though the student doesn’t know it, this is shorthand for “I don’t get the concept.” The ideas of mathematics can sound abstract, but they make sense only in reference to concrete computations. William Carlos Williams put it crisply: *no ideas but in things*.

Because math is about things. Not, despite how it sometimes looks, about itself, and not about getting a good score on the test. It’s about which things—which *sentences*—are right, and which are wrong, and how to tell the difference.

**Update, June 12, 2014:** I should have said that my thoughts about DragonBox, formalism, and algebra were touched off by this 2013 John Holbo post at Crooked Timber and the associated comment thread, whose existence I’d forgotten about when I was writing the piece.