# The Psychology of Statistics

## In the coin flip paradox, you have to know what the coin flipper is thinking.

Toby and Marla are playing a game with coins, because that’s what people in math problems do. Toby flips a fair coin three times, out of Marla’s view. “Did you get any heads?” Marla asks.

“Yes,” Toby says. “For instance, the second coin came up heads.” (Because that’s how people in math problems talk.)

“I’ll bet you the next flip after that came up tails,” Marla says.

Is this a good bet?

That doesn’t *sound* like a hard problem. A coin is a coin and a flip is a flip. Coins have no memory; the third coin doesn’t know or care what happened to the second one. So Marla’s bet isn’t good or bad—the odds must be 50-50, heads or tails.

And yet this problem is the viral math conundrum of the moment, thanks to Joshua Miller and Adam Sanjurjo. They are the authors of the “hot hand” paper I wrote about last week showing that basketball players shooting free throws really do go on hot streaks.* And they say, against all common sense, that Marla has the advantage in this bet.

The worst part is, they are right.

How can the second coin being heads possibly make the third coin want to land tails?

It helps to consider some variants on our story. Call the story we started with Story A. And now consider Story B: Toby flips a fair coin three times, out of Marla’s view. “Did the second coin come up heads?” Marla asks. “Yes,” Toby says. Should Marla bet on the third coin to come up tails?

In Story B, Marla has the exact same information she has in Story A: that the second coin landed heads. But now she should be indifferent about the bet; it’s equally likely the third coin is heads or tails.

Perhaps you have already gotten started on your tweet explaining what an idiot I am. If so, take five deep breaths and step with me through the argument.

Marla, before she hears anything at all, has to rate all 8 possible three-coin sequences of heads (H) and tails (T) as equally likely. That part’s the same in both stories: TTT, TTH, HTT, HTH, HHH, HHT, THH, THT.

In Story B, Marla asks whether the second coin came up heads. When Toby says “yes,” four of the possibilities are ruled out: Only HHH, HHT, THH, THT remain, and each is equally likely. In two out of four cases, the third coin is tails. So Marla’s chances are even.
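The Story B arithmetic is easy to verify by enumerating all eight sequences. A minimal Python sketch (the variable names are my own, not from the column):

```python
from itertools import product

# All 8 equally likely three-flip sequences of H and T.
sequences = list(product("HT", repeat=3))

# Story B: Toby's "yes" keeps only the sequences where the second coin is heads.
second_heads = [s for s in sequences if s[1] == "H"]

# Among those, count how often the third coin is tails.
tails_third = sum(1 for s in second_heads if s[2] == "T")

print(len(second_heads), tails_third)  # 4 sequences remain; 2 end in tails
```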

What about Story A? Now there’s a wrinkle: We have to know (and Marla has to know!) how Toby decided to tell Marla about the second coin. Let’s go behind the scenes:

**Story A1.** If any one of Toby’s coins came up heads, he chooses at random one of the coins that came up heads and tells Marla about it.

**Story A2.** If one of Toby’s coins came up heads, he tells Marla about the *earliest one* of his coins to land heads. (For example: if the sequence is HTT, Toby tells Marla the first coin came up heads; if it’s THH, he tells Marla the second coin came up heads.)

**Story A3.** If the second of the three coin tosses came up heads, Toby says, “Yes; for instance, the second coin came up heads.” If another coin came up heads, he just says “Yes,” but doesn’t specify which coin he has in mind.

It might seem like Toby’s thought process shouldn’t be relevant to Marla’s opinion about the third coin. But it is.

Story A1 seems to me the most natural interpretation. Toby’s “yes” rules out the sequence TTT, which leaves seven equally likely possibilities: HHH, HHT, HTH, HTT, THH, THT, TTH.

Now imagine this game played again and again—as if you went to story problem hell and this was your punishment. You play, let’s say, 2,400 games, and each of these seven sequences occurs 300 times. (The other 300 times, the coins fall all tails, and Toby just says, “No heads, sorry.”) How many times will Toby tell Marla the second coin landed heads?

Out of the 300 HHH sequences, that’ll happen just 100 times, since Toby might have mentioned any one of the three coins. In the HHT and THH sequences, Toby has just two choices, and he’ll bring up the second coin 150 times in each case. And if the sequence is THT, Toby will mention the second coin all 300 times. In the other cases, the ones where the second coin fell tails, Toby is sure to stay silent about the second coin, and bring up another one.

Now add it up. In the 2,400 games, Toby tells Marla “Yes; for instance, the second coin came up heads” this many times: 100 (HHH) + 150 (HHT) + 150 (THH) + 300 (THT) = 700 times.

Out of those 700, the third coin is a tail 150 (HHT) + 300 (THT) = 450 times. In other words, the third coin is a tail almost two-thirds of the time!
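That 450-out-of-700 count can also be checked by brute-force simulation. Here is a sketch in Python (the seed and variable names are my own choices, not part of the argument):

```python
import random

random.seed(0)  # fixed only for reproducibility; any seed gives a similar answer
trials = 1_000_000
mentions_second = 0  # games where Toby says the second coin came up heads
third_is_tails = 0   # ...and the third coin landed tails

for _ in range(trials):
    flips = [random.choice("HT") for _ in range(3)]
    heads = [i for i, f in enumerate(flips) if f == "H"]
    if not heads:
        continue  # all tails: "No heads, sorry."
    # Story A1: Toby mentions one of the heads, chosen uniformly at random.
    if random.choice(heads) == 1:  # index 1 is the second coin
        mentions_second += 1
        if flips[2] == "T":
            third_is_tails += 1

print(third_is_tails / mentions_second)  # should hover near 450/700 ≈ 0.643
```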

What about Story A2? Here, Toby’s not making any random choices; what he says is determined by the coins.

HHH, HHT, HTH, HTT: “The first coin landed heads.”

THH, THT: “The second coin landed heads.”

TTH: “The third coin landed heads.”

TTT: “Nope, no heads.”

Adding up these instances out of 2,400 trials, Toby tells Marla the second coin landed heads 600 times (cases THH and THT) and in exactly one-half of those cases (the THT ones) the third coin landed tails; so Marla has even odds.
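Story A2’s bookkeeping can be checked with the same kind of enumeration; no simulation is needed, since Toby’s rule is deterministic. A short Python sketch (again, the names are mine):

```python
from itertools import product

says_second = []  # sequences where Story-A2 Toby mentions the second coin
for seq in product("HT", repeat=3):
    heads = [i for i, f in enumerate(seq) if f == "H"]
    # Story A2: Toby reports the earliest head, if there is one.
    if heads and heads[0] == 1:  # index 1 is the second coin
        says_second.append(seq)

tails_third = sum(1 for s in says_second if s[2] == "T")
print(len(says_second), tails_third)  # 2 sequences (THH, THT); 1 ends in tails
```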

In Story A3, Toby’s responses are:

HHH, HHT, THH, THT: “Yes; for instance, the second coin came up heads.”

HTH, HTT, TTH: “Yes, I did get a head.”

TTT: “No, I didn’t get a head.”

In this scenario, too, the third coin lands tails in exactly one-half of the cases where Toby tells Marla the second coin landed heads, so her odds are even.

The numbers above tell us Marla’s bet is a good one in scenario A1. The computations are, at least for me, convincing but still confusing. It *just feels weird* that Marla has somehow acquired information about the third coin.

But in a sense, it’s obvious that Toby’s internal decision process matters. It matters even for Story B, which seems like a simpler scenario. Remember: In that story, Marla asks specifically about the second coin toss, and Toby tells her it was heads. Given that, she should know there’s a 50-50 chance that the third coin is tails.

But what if the mind of Toby works like this?

**Story B1.** Marla asks Toby, “Did the second coin land heads?” If the second coin landed tails, Toby says “no.” If the second coin and the third coin both landed heads, Toby says “yes.” If the second coin landed heads but the third coin landed tails, Toby says, “I’m not telling. Why should I help you? Who even *are* you?”

If that’s Toby’s internal decision process, and Marla knows it’s Toby’s internal decision process, then Marla, getting a “yes,” ought to bet that the third coin lands on heads; she’ll win every time.
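Story B1 can be sketched the same way, with Toby’s decision rule written out as a function (the function name is my invention) and Marla conditioning on the answer “yes”:

```python
from itertools import product

def toby_b1(seq):
    """Story B1: Toby's answer to 'Did the second coin land heads?'"""
    if seq[1] == "T":
        return "no"
    if seq[2] == "H":
        return "yes"
    return "not telling"

yes_cases = [s for s in product("HT", repeat=3) if toby_b1(s) == "yes"]
heads_third = sum(1 for s in yes_cases if s[2] == "H")
print(len(yes_cases), heads_third)  # 2 "yes" cases (HHH, THH); heads in both
```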

The Toby and Marla problem isn’t just an artificial puzzle. It has real consequences: Treating a Story A like a Story B is exactly the mistake that Miller and Sanjurjo found in the original “hot hand” paper. (Executive summary: The authors found that for most of their shooters, a successful free throw was about as likely after a miss as after a hit. But the method they used to test this fell victim to the screwy counterintuitive bias that appears in Story A, which means their data actually supported the claim that shooting percentage goes up after a made basket.)

It’s also an encounter with a foundational difficulty of probabilistic reasoning. At its heart, Toby and Marla’s story gets at the same issue as some of the other members of the probability paradox hit parade. Like the “son born on Tuesday” paradox: One of Sue’s kids is a son born on Tuesday. What’s the chance her other kid is a boy? (This one has been masterfully unraveled by Tanya Khovanova.) Even better known is the Monty Hall problem: You’re on a game show and behind the doors are either a car or a goat—you know the one.

We would *like* statistics to be a machine that takes observations and turns them into inferences: If we observe result X, we draw conclusion Y. But that’s just wrong. Observations are not enough. Marla in Story A and Marla in Story B have access to the same observations about the coins, namely that the second coin landed heads; but those observations don’t determine how Marla should bet. It matters how the Marlas arrived at these observations. Another way to put it: To do statistics properly, it’s not enough to know what happened. You have to know *what might have happened, but didn’t happen*.

You can’t, of course. That’s why statistics is hard.

**Update, Nov. 3, 2015:** This sentence has been updated to include Adam Sanjurjo as co-author of the “hot hand” paper.