
“Hot Hands” in Basketball Are Real

Experts have been arguing about these statistics for decades; now we know why.

“Hot hands”—perhaps not a myth after all.

Photo illustration by Slate. Photos by Brett Carlsen/Getty Images and Shutterstock.

Do basketball players actually have brief runs of superhuman “hot hands,” when the basket looks 5 feet wide and sinking a free throw is as easy as opening a door? Or are basketball shots more or less a random sequence, on which our pattern-seeking monkey brains impose illusory order after the fact?

The latter view has been conventional wisdom among psychologists and math-y types ever since a famous paper by Gilovich, Vallone, and Tversky showed that the so-called hot hand, if it existed at all, was too faint an effect to show up in statistical tests.

Now the hot hand is back in the news with a new finding by economists Joshua Miller and Adam Sanjurjo threatening that consensus. They’re not the first to go heterodox on the hot hand. (In Deadspin last year, I rounded up some of the reasons the hot hand may have been buried too hastily.) But they’ve found something truly new—a serious mathematical flaw in the Gilovich-Vallone-Tversky study, missed by the many scientists, me included, who’ve combed through the paper in the 30 years since it came out. That’s impressive.

So what’s the math snafu?

To understand it, let’s start with a simple example.

Think about all the families with kids in Nashua, New Hampshire. What’s the average ratio of boys to girls?

It’s natural to suppose the answer is 1, or pretty close. Boys and girls are born in just about equal numbers, so shouldn’t the ratios average out to about 1?

But that’s wrong. I know the answer to the question on the nose, and I’ve never even been to Nashua. It’s not 1. It’s infinity. And it’s infinity for a kind of stupid reason: If there’s even one family with sons and no daughters, the boy-to-girl ratio for that family is infinite, and when you average a bunch of quantities, one of which is infinite, the average has to be infinite, too.

Of course, the average number of boys per family is the same as the average number of girls per family—how could it be otherwise? The ratio between those two averages is 1. But the ratio of the averages is not necessarily the average of the ratios. That’s just a fact of mathematical life. (It’s also the key to unlocking a much puzzled-over Google interview question.)
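If the distinction feels slippery, a few lines of Python make it concrete. The family counts below are invented for illustration:

    # Hypothetical (boys, girls) counts for three families.
    families = [(2, 1), (0, 2), (1, 0)]  # the last family has sons only

    avg_boys = sum(b for b, _ in families) / len(families)   # 1.0
    avg_girls = sum(g for _, g in families) / len(families)  # 1.0
    print(avg_boys / avg_girls)  # ratio of averages: 1.0

    ratios = [b / g if g else float("inf") for b, g in families]
    print(sum(ratios) / len(ratios))  # average of ratios: inf

One all-boy family is enough to drag the average of the ratios to infinity, even though boys and girls are exactly balanced overall.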

Or how about this? You roll out a new standardized math test in your school district. In Central High School, 60 out of 120 students, or 50 percent, pass. Meanwhile, Outlying Charter School has only two students, and both pass—a 100 percent rate. An enterprising educrat might crow, “Our average school passing rate is 75 percent.” And that is, in a sense, true! But this average percentage conveys a completely wrong impression of how well kids are doing.
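Pooling the students instead gives 62 passes out of 122 test-takers, or about 51 percent. A quick Python sketch, using the enrollment figures from the example, shows both calculations:

    passed = [60, 2]     # passing students at Central High, Outlying Charter
    enrolled = [120, 2]  # test-takers at each school

    rates = [p / n for p, n in zip(passed, enrolled)]
    print(sum(rates) / len(rates))      # average of school rates: 0.75
    print(sum(passed) / sum(enrolled))  # pooled rate: 0.508...

The tiny school gets the same weight as the big one, which is how two students manage to drag the “average rate” 24 points away from reality.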

Gilovich, Vallone, and Tversky looked at strings of shots taken by 26 Cornell basketball players. They computed each player’s shooting percentage in two different contexts: after three straight hits, and after three straight misses. If players’ shooting tended to run hot and cold, as the average fan believes, you’d expect a shooter to be more likely to hit a shot after three straight hits than after three straight misses. But the authors found no such effect. The average percentage after three straight hits was just about the same as the average percentage after three straight misses.

Given what happened with the children of Nashua, those words average percentage should produce a vague sense of unease. A percentage is a kind of ratio, and averaging ratios can yield screwy results, even when infinity doesn’t come into play, as we saw with the standardized math test. Miller and Sanjurjo found just such a wrinkle in Gilovich, Vallone, and Tversky’s method. The new study showed that if there’s no hot hand—if shots are utterly random and independent of each other—the average shooting percentage following three straight misses will, strange as it seems, be higher than the average shooting percentage following three straight hits. So the original data, showing that the two averages are roughly the same, is actually evidence that players shoot better after they make a few hits. In other words, the study turns its coat, providing evidence for the hot hand instead of against it!
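You can watch the bias surface in a simulation. The sketch below uses the simplest version of the quirk, one Miller and Sanjurjo themselves use to illustrate the point: sequences of four fair coin flips, with “heads right after a heads” standing in for “hit after three straight hits.” The sequence length and trial count are just illustrative choices:

    import random

    def average_rate_after_heads(n_flips=4, n_trials=200_000):
        """Average, across many short sequences, of each sequence's
        own proportion of heads that immediately follow a head."""
        rates = []
        for _ in range(n_trials):
            flips = [random.random() < 0.5 for _ in range(n_flips)]
            after = [flips[i + 1] for i in range(n_flips - 1) if flips[i]]
            if after:  # skip sequences with no heads to condition on
                rates.append(sum(after) / len(after))
        return sum(rates) / len(rates)

    print(average_rate_after_heads())  # about 0.40, not the naive 0.50

The coin is perfectly fair, yet averaging the within-sequence rates lands around 0.40. A real shooter whose averages come out equal is therefore beating what pure chance would produce.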

I know this sounds weird. Another example will help; it’s not exactly the same as the hot-hand study but close enough to display the same math quirk. Go back to Nashua. Are boys in those families more likely to have little sisters or little brothers? Surely the likelihoods are equal. But take a look at the eight possibilities for a three-child family, each one of which ought to be equally prevalent:

BBB
BBG
BGB
BGG
GBB
GBG
GGB
GGG

For each family that has a boy, we can calculate the percentage of boys who have a younger brother and the percentage of boys who have a younger sister. (Some boys, like the oldest boy in the second family, have both, and some boys—the youngest ones in their families—have neither, so these percentages don’t have to add up to 100.) For instance, in that second family, 100 percent of boys have a younger sister, while only 50 percent have a younger brother. In the first family, 67 percent of boys have a younger brother, and 0 percent have a younger sister.

If you average over all families, you find what looks like asymmetry: The average percentage of boys with a younger sister is 50 percent, while the average percentage of boys with a younger brother is only 31 percent.
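If you’d rather not do the bookkeeping by hand, a brute-force enumeration in Python reproduces both numbers:

    from itertools import product

    brother_rates, sister_rates = [], []
    for family in product("BG", repeat=3):  # all 8 equally likely families
        boys = [i for i, child in enumerate(family) if child == "B"]
        if not boys:  # GGG has no boys to ask about
            continue
        brother_rates.append(sum("B" in family[i + 1:] for i in boys) / len(boys))
        sister_rates.append(sum("G" in family[i + 1:] for i in boys) / len(boys))

    print(sum(sister_rates) / len(sister_rates))    # 0.50
    print(sum(brother_rates) / len(brother_rates))  # 0.3095...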

Does that mean having a boy makes it more likely there are female babies to follow? Nope—it just means that averaging percentages, as Gilovich, Vallone, and Tversky did, is a terrible, horrible, no good, very bad idea.

Can we believe in the hot hand again? The case seems pretty good. Gilovich has a response: “Because our samples were fairly large, I don’t believe this changes the original conclusions about the hot hand,” he told the New York Times. But that isn’t compelling; Miller and Sanjurjo show that the sample of 100 shots in the original study is definitely small enough for the bias to show its face.

And in another paper, Miller and Sanjurjo go back and reanalyze the data from just about every hot-hand study ever done. Their method is charming: Take a real-life sequence of hits and misses and rearrange it completely at random. If there’s no hot hand, the resulting sequence should be no more or less streaky than the original. Miller and Sanjurjo find the opposite: Real-life data is consistently streakier than its random rearrangements, suggesting that shooters really do run hot and cold. It may be time to put the myth of the myth of the hot hand to rest.
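For the curious, that rearrangement test is easy to sketch in Python. Here “hit rate immediately after a hit” stands in as one simple streakiness measure (Miller and Sanjurjo use more refined statistics), and shots is any hypothetical sequence of makes and misses:

    import random

    def rate_after_hit(shots):
        """Proportion of shots made immediately after a made shot."""
        after = [shots[i + 1] for i in range(len(shots) - 1) if shots[i]]
        return sum(after) / len(after) if after else 0.0

    def shuffle_test(shots, n_perms=10_000):
        """Fraction of random rearrangements at least as streaky as the
        original sequence. Small fractions hint at a real hot hand."""
        observed = rate_after_hit(shots)
        copy = list(shots)
        exceed = 0
        for _ in range(n_perms):
            random.shuffle(copy)
            if rate_after_hit(copy) >= observed:
                exceed += 1
        return exceed / n_perms

    # e.g., shuffle_test([1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1])

Because every rearrangement has exactly the same number of hits and misses as the original, the comparison sidesteps the averaging trap entirely.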