The fallacy that became itself a fallacy

(featured image credit LearningLark CC BY)

Even specialists can fall prey to cognitive errors

I have been tossing a fair coin, and it has come up heads six times in a row. The chance of either heads or tails is 1 in 2, but for this to happen six consecutive times is 1 in 2 to the power 6, or 1 in 64. What is the probability that the next toss will turn up a seventh consecutive head?

The correct answer is of course 1 in 2. Coins don’t have a memory. What happened before cannot influence the current result. We all know that – yet our intuition sometimes leads us to believe differently.
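A quick simulation makes the point concrete. This sketch (plain Python, no dependencies; the million-trial count is an arbitrary choice) estimates both the probability of opening with six heads and the conditional probability that the seventh flip is also heads:

```python
import random

random.seed(42)

N = 1_000_000
streaks = 0   # sequences that open with six heads
sevenths = 0  # of those, sequences whose seventh flip is also heads

for _ in range(N):
    flips = [random.random() < 0.5 for _ in range(7)]
    if all(flips[:6]):
        streaks += 1
        if flips[6]:
            sevenths += 1

print(streaks / N)         # ≈ 1/64: six heads in a row is rare...
print(sevenths / streaks)  # ≈ 0.5: ...but the seventh flip doesn't care
```

The streak itself is rare; the flip that follows it is as fair as any other.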

On 18 August 1913, the ball at one of the roulette tables in the Monte Carlo casino had landed on black nearly 20 times in a row. Several gamblers took an interest and began putting money on red: after such a long streak of black, red was surely due to come up. And still the ball kept falling on black. People put more and more money on red at each successive turn of the wheel – and kept on losing it, as black kept on coming up. Let’s face it: where would you put your money if you happened to be there, having seen black come up 25 times in a row? Eventually, after an unbroken series of 26 blacks (likelihood: 1 in more than 136 million), the ball finally landed on red.
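The arithmetic behind that likelihood is straightforward, assuming a single-zero (European) wheel with 18 black slots out of 37:

```python
p_black = 18 / 37          # single-zero wheel: 18 black, 18 red, 1 zero
p_streak = p_black ** 26   # probability of 26 consecutive blacks
print(f"1 in {1 / p_streak:,.0f}")  # roughly 1 in 137 million
```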

More recently, it was number 53 in the Italian lotto that failed to come up for nearly two years – a drought that apparently not only drove people into bankruptcy, but drove some to their deaths. The phenomenon had been described long before, though, for example by the early 19th-century mathematician Pierre-Simon de Laplace, as Joshua Miller and Andrew Gelman discuss in a fascinating paper.


Red at last! (photo: SalFalko CC BY)

It is gamblers who give this cognitive illusion – that if something happens less frequently than normal for a period, it will occur more frequently in the future (or vice versa) – its name: the gambler’s fallacy.

Let’s not assume that only compulsive gamblers fall for it, however. If your neighbour is pregnant again, having had five girls, what is the chance of the sixth child being a boy? Did you not, for a moment, feel that it ought to be more than 1 in 2? Given the dry and sunny weather we’ve been experiencing in the UK for weeks now, do you think a dry second half of August is more or less likely than normal?

We treat streaks as if they predict the next outcome. Our human brain is ill-equipped to handle the concept of randomness: we easily see patterns where there are none. The figure below[1] shows the outcome of three sequences of 50 roulette wheel spins (ignoring any zeroes). We know that red and black are equally likely, and we intuitively interpret this as “50% of the outcomes should be black”.  When we see long streaks of one colour, we think that is extraordinary. The universe is out of balance, and a reversal is overdue. Our intuition tells us that after the five blacks in A, a red outcome must be more likely to redress the imbalance. A sequence without long streaks (like C) looks more ‘normal’, but in reality, A and B are the result of genuine spins, while C has been manipulated to generate a reversal 3 out of every 4 times.
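To see just how ordinary long streaks are, we can simulate the setup of the figure ourselves. This sketch (plain Python; the 10,000-trial count is an arbitrary choice) measures how often 50 fair red/black spins contain a run of five or more of the same colour:

```python
import random

random.seed(1)

def longest_run(seq):
    """Length of the longest run of identical outcomes in seq."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

trials = 10_000
hits = sum(
    longest_run([random.random() < 0.5 for _ in range(50)]) >= 5
    for _ in range(trials)
)
print(hits / trials)  # typically well over half of all sequences
```

Far from being extraordinary, a streak of five is closer to the norm than the exception in 50 genuine spins.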

[Figure: three sequences (A, B and C) of 50 roulette spins]

The hot hand

There is another phenomenon that is often compared with the gambler’s fallacy. Known as the hot hand fallacy, it was first described in 1985 by Thomas Gilovich, Amos Tversky and Robert Vallone. The ‘hot hand’ refers to a presumed temporary state of a basketball player in which they are more likely to perform better than average, i.e. they will be producing streaks of successful shots. Fans, coaches and players alike widely believe that players are more likely to make a shot after having made the last two or three shots, than after having missed them.

In several studies the researchers failed to find any significant correlation between shots (except for one player). They concluded that the belief in the hot hand in basketball is a cognitive illusion, resulting from the “expectation that random sequences should be far more balanced than they are, and the erroneous perception of a positive correlation between successive shots.” The chance of scoring does not depend on what went on before.

For about 30 years, this conclusion remained largely unquestioned. The hot hand fallacy stood as one of the more robust cognitive errors, even as new research using larger datasets found patterns actually consistent with the hot hand. Issues of measurement and control prevented these studies from determining the magnitude of the hot hand, and so they did not topple the prevailing wisdom.

But in 2016 Joshua Miller and Adam Sanjurjo discovered a fundamental flaw in the reasoning by Gilovich et al. One of their original studies positioned players at various spots along an arc, at a distance from which their shooting percentage was approximately 50%. Each player took 100 shots, and the researchers found no marked difference in the goal percentages after 1, 2 or 3 hits, or after 1, 2 or 3 misses. Intuitively, this is exactly what you would expect to find if you were to replace the outcomes of the players’ shots by a sequence of 100 coin flips. As successive flips are independent, we expect the percentage of heads that follow a streak of heads to be identical to the percentage of tails that follow a streak of heads.

This is incorrect. Bafflingly so, but yes, it truly is incorrect. Meet Jack, a hypothetical character from Miller and Sanjurjo’s paper. Jack tosses a coin 100 times. Every time it comes up heads, he writes down the result of the next flip. He – like Gilovich and co, and most of us – expects approximately 50% of the results he writes down to be heads. But it is less.

To illustrate why, let’s look at what happens when he tosses the coin just three times. There are eight possible outcomes:

Case  Flips  Proportion of heads after heads
1     TTT    –
2     TTH    –
3     THT    0
4     THH    1
5     HTT    0
6     HTH    0
7     HHT    0.5
8     HHH    1

In the first two cases, Jack has nothing to write down. The next four situations each contain one relevant head, so Jack writes down the outcome of the single next flip. The final two feature relevant heads twice, so each time Jack writes down two results. The proportion of heads among the flips Jack writes down is shown in the third column. In three of the six cases (3, 5 and 6), heads is followed by tails, so the proportion is 0. In case 4, heads follows heads, so here the proportion is 1 (or 100%). In the remaining two cases, heads occurs twice. In case 7, one heads is followed by heads, the other by tails, so the proportion is 0.5; and finally, in case 8, heads is followed by heads two times out of two, so another 1.

Now imagine I have flipped the coin three times and calculated the proportion of heads following heads. What would be your best guess for that proportion? A simple calculation shows that the best guess (the expected value) is the average over the six cases: 2.5/6, or 5/12, which corresponds to about 41.67% – considerably less than the 50% we were all expecting.
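The enumeration is small enough to check by brute force. This sketch walks through all eight three-flip sequences and reproduces the 5/12 figure:

```python
from itertools import product

proportions = []
for seq in product("TH", repeat=3):
    # Flips that follow a head (only "relevant" heads have a next flip)
    followers = [seq[i + 1] for i in range(2) if seq[i] == "H"]
    if followers:  # TTT and TTH: Jack writes nothing down
        proportions.append(followers.count("H") / len(followers))

print(proportions)                          # [0.0, 1.0, 0.0, 0.0, 0.5, 1.0]
print(sum(proportions) / len(proportions))  # 0.41666... = 5/12
```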

Miller and Sanjurjo provide mathematical proof that this remains true for larger, finite sequences and for longer streaks: the likelihood that heads turns up after a preceding streak of heads is always less than one-half. The implication of this is clear: a basketball player shooting just as well after a streak of hits as after a streak of misses is shooting about 8 percentage points better than you would expect by chance. This is no small beer. Typically the field goal percentage (the ratio of successful shots to attempts) of a top NBA player is about 10% higher than that of a median player. In other words, a median player with a hot hand gets a boost that is nearly equivalent to her moving to top level!
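The same effect shows up by simulation in the 100-flip case. The sketch below averages, over many simulated sessions, the proportion of heads that follow a streak of three heads (sessions containing no such streak are skipped; the 50,000-session count is an arbitrary choice):

```python
import random

random.seed(7)

def prop_after_streak(flips, k=3):
    """Proportion of flips that follow k consecutive heads and are heads,
    or None if no flip in the sequence follows such a streak."""
    followers = [flips[i + k] for i in range(len(flips) - k)
                 if all(flips[i:i + k])]
    return sum(followers) / len(followers) if followers else None

results = []
for _ in range(50_000):
    flips = [random.random() < 0.5 for _ in range(100)]
    p = prop_after_streak(flips)
    if p is not None:
        results.append(p)

print(sum(results) / len(results))  # around 0.46, not 0.50
```

Averaging the per-session proportions weights individual flips unequally, which is precisely the subtle selection bias Miller and Sanjurjo identified.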

A surprising fallacy

So if the hot hand actually exists, the hot hand fallacy is itself a fallacy. How could it persist for so long?

While I was drafting this article, Dan Kahan, a law professor at Yale University, published a post that reflects pretty much my own thinking. The authors of the original study, and many of those who subsequently took it at face value, assumed that the belief in the hot hand is similar to that which is central to the gambler’s fallacy: the erroneous belief that independent successive events are, in fact, not independent. It has the look and feel of a typical System 1 error – the cognitive system popularized by Daniel Kahneman in Thinking, Fast and Slow: fast but impulsive, and therefore relatively easy to fool.

What better way to show the error in the belief in a hot hand than the conscious, deliberate, systematic, effortful application of System 2 thinking in the 1985 paper? However, hidden in it was the erroneous intuition that Miller and Sanjurjo brought to light. Cognitive errors are generally attributed to an overreliance on System 1. But here we have a situation where the researchers had not only mistakenly assumed such a System 1 bias in believers in the hot hand, but also neglected to verify their own assumptions.


Little hands, but hot hands! (photo: Patrick CC BY)

There is a lesson here for everyone involved in scientific research: doubt your assumptions and your intuitions, even if – or indeed especially if – they look totally self-evident. Verify the hell out of them (and enlist the help of your most critically-minded colleagues).

But there is also a lesson for all of us. Andrew Gelman, a statistician who delights in exposing sloppy statistics in the social sciences, describes the response of Tom Gilovich to the criticism as doubling down on the original conclusions.

It is not in our interest to cling to beliefs that are crumbling under new evidence, let alone to double down on them. There is an array of cognitive explanations for why we tend to do so, from confirmation bias to the sunk cost fallacy – but it doesn’t have to be that way. If we don’t identify with our beliefs – if, as Nick Maggiulli says, we treat them like clothes we can change, rather than tattoos we are stuck with forever (or until painful removal) – we will not find it so hard to replace our old, obsolete beliefs with better ones.

I firmly believe this – at least until I am confronted with credible evidence of the contrary.

 

[1] Thanks to Josh Miller for kindly helping me, in an email exchange, with visualizing the human fallibility in evaluating randomness
