More fun with percentages

(featured image credit: geralt/Pixabay)

Absolute numbers generally give little insight into anything, but even percentages, averages and ratios must sometimes be treated with caution

Earlier this week, the British advertising regulator ASA banned an Amazon advert that promised one-day delivery for Amazon Prime members. The Independent newspaper reported that they did so “after receiving hundreds of complaints from customers”. 280 people had complained, most of them because they had not received their goods within one day of placing their order, around Christmas 2017. Is 280 a lot?

In any case it’s a good example of how the media often use (large) absolute numbers in support of a story. 280 does sound significant, but without knowing how many shipments were made in total, it doesn’t really tell us all that much. Worldwide, Amazon sent 5 billion parcels ordered via Prime in 2017. Actual UK shipments are not reported, but using their turnover in the UK (about £8.8B or $11.4B) and globally ($178B), a first approximation suggests about 6.4% of the Prime deliveries were made in Britain – or about 320 million, so maybe 40 million in December. That puts 280 complaints (0.0007%) into some perspective.
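For what it’s worth, the back-of-the-envelope arithmetic runs like this (public figures as cited above; the one-in-eight December share is my own assumption):

```python
# Back-of-the-envelope estimate of the Amazon complaint rate.
# Public figures as cited in the text; the December share is an assumption.
global_prime_parcels = 5_000_000_000     # worldwide Prime shipments, 2017
uk_turnover = 11.4e9                     # Amazon UK turnover, in USD
global_turnover = 178e9                  # Amazon global turnover, in USD

uk_share = uk_turnover / global_turnover          # roughly 6.4%
uk_parcels = global_prime_parcels * uk_share      # about 320 million
december_parcels = uk_parcels / 8                 # assume ~1/8 ship in December: ~40 million

complaints = 280
complaint_rate = complaints / december_parcels
print(f"{uk_share:.1%} UK share, {complaint_rate:.4%} complaint rate")
```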

Without any reference, we can easily be misled into ascribing some spurious significance to a number, but once a number is expressed as a relevant percentage, it makes more sense (and is often not remotely as sensational).

However, percentages themselves are not necessarily all that enlightening and may themselves need more context. A couple of years ago I already wrote about the capacity of the “%”-sign to fool us in the shops, and in opinion polls and predictions. Here are some more situations where extra care may be needed before drawing a conclusion.

A cancer epidemic, or what?

A few weeks ago, the Independent newspaper reported that, globally, the number of cancer cases had risen by a third in the last 10 years. What could be behind this dramatic epidemic? Was this the reckoning for our carcinogenic lifestyle choices (smoking and diet, exposure to the sun)? In part, for sure, but there is more to the story. Cancer is, by and large, a condition that mostly afflicts older people: half the cancers diagnosed in the UK are in people aged 70 and above. Only just over 10% of cases are in people below the age of 50. So, as the population ages, it is hardly surprising that the total number of cases will go up. A “cancer up by 33%” headline is more informative than “new cancer diagnoses up by 120,000”, but even the ratio of current incidence over historical incidence doesn’t give us the full picture.
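A toy calculation shows the mechanism: hold the age-specific rates fixed and merely shift the age structure, and the total case count still rises. The rates and population figures below are made up purely for illustration:

```python
# Made-up age-specific incidence rates (cases per person per year) that
# stay constant, applied to a population of the same size that merely ages.
rates = {"under 50": 0.001, "50-69": 0.005, "70+": 0.02}

pop_then = {"under 50": 60_000, "50-69": 25_000, "70+": 15_000}
pop_now  = {"under 50": 55_000, "50-69": 25_000, "70+": 20_000}  # same size, older

cases_then = sum(rates[g] * pop_then[g] for g in rates)   # 485 cases
cases_now  = sum(rates[g] * pop_now[g] for g in rates)    # 580 cases
print(cases_then, cases_now)  # ~20% more cases, with identical age-specific rates
```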


Cancer: the older people have it (source)

Economic statistics can also be subject to a strange, and potentially misleading, effect. Household income is a common metric to examine the distribution of economic resources in a society. By looking at which share of income goes to which slice of the household population, we can judge whether a society is equal, and whether it is becoming more or less so. But could it be, for example, that everyone gets richer, but that the middle-class household income nevertheless drops?

In a superb video, economist and host of the Econtalk podcast Russ Roberts shows us, by means of a hypothetical and simplified example, how such a surprising result can come about. Imagine a perfectly egalitarian society, with ten 2-person households. Every citizen earns $50,000, so the average household income in each quintile (a slice of 20%) is $100,000, exactly 1/5 of the total. The economy performs well, and 30 years later, every citizen’s income has doubled. However, through divorce on the one hand, and younger people (replacing those who died in that period) hooking up later on the other, the population, still 20 people, now forms 15 households: 10 singletons and 5 couples.

When these 15 households are divided into quintiles again, we now find that the “richest” fifth represents 30% of the total income, whereas for the ‘middle class’ quintile that is just 15%. We can also see that the richest 20% households have doubled their income and increased their share of the total by 50%; the middle class households’ income, in contrast, has stagnated and their share is down by 25%.
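Roberts’ hypothetical can be checked in a few lines (the household figures are the ones from his example as described above):

```python
# Roberts' hypothetical: incomes double for all 20 citizens, but the
# 10 couples become 10 singletons and 5 couples.
then_households = [100_000] * 10                    # 10 couples, $50k per person
now_households = [100_000] * 10 + [200_000] * 5     # 10 singletons + 5 couples, $100k per person

def quintile_shares(households):
    hh = sorted(households)
    n = len(hh) // 5                                # households per quintile
    total = sum(hh)
    return [sum(hh[i * n:(i + 1) * n]) / total for i in range(5)]

print(quintile_shares(then_households))   # every quintile holds 20%
print(quintile_shares(now_households))    # middle quintile: 15%, top: 30%
```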

Now this is of course a hypothetical situation, but you can see how changes in demographics can influence the figures, and distort our understanding. In real life, consider the effect of an ageing population. Pensioners generally have a lower income than people in work. As everyone lives longer, the proportion of pensioners in society increases, and so will the proportion of low-income households. Similarly, lesser-educated adults tend to have relatively lower incomes than people with a degree, and as the former category is more likely to live in single adult households, they ‘pull the middle down’. Over time, the distribution can therefore show what looks like a shift of income from the poor towards the rich, just because of an increase or decrease of a given type of household.

Hiring bias

Something similar can happen in a variety of other situations. Imagine a hospital that wants to avoid earlier criticism that women are less likely to be offered a job than men. In preparation of the latest annual report, both the head of the clinical staff and the head of the admin staff show that women did better than men in the hiring process. Nearly 87% of female candidates were hired for clinical posts, and more than 45% for the administrative posts; higher figures than those for men (80% and 40% respectively). So all seems well. But when everything is aggregated, to the horror of the CEO it turns out that, for the hospital as a whole, nearly 64% of the women were hired, while the men were almost 5 percentage points more likely to be offered a post, at close to 69%.
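The rates alone don’t show where the reversal comes from. Here is one hypothetical set of applicant counts (my own, chosen to match the quoted percentages) that reproduces it:

```python
# Hypothetical applicant and hire counts (mine, not the article's),
# chosen to reproduce the quoted percentages.
counts = {  # (sex, department): (hired, applicants)
    ("women", "clinical"): (39, 45),   # 86.7%
    ("women", "admin"):    (25, 55),   # 45.5%
    ("men",   "clinical"): (64, 80),   # 80.0%
    ("men",   "admin"):    (12, 30),   # 40.0%
}

def hire_rate(sex, dept=None):
    pairs = [v for (s, d), v in counts.items() if s == sex and dept in (None, d)]
    hired = sum(h for h, _ in pairs)
    applied = sum(a for _, a in pairs)
    return hired / applied

for dept in ("clinical", "admin"):
    assert hire_rate("women", dept) > hire_rate("men", dept)  # women ahead in each department...
assert hire_rate("men") > hire_rate("women")                  # ...yet men ahead overall
print(f"women {hire_rate('women'):.1%}, men {hire_rate('men'):.1%}")  # women 64.0%, men 69.1%
```

The reversal arises because women apply disproportionately for the admin posts, where the success rate is lower across the board.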


This phenomenon is known as Simpson’s paradox, after the British statistician Edward Simpson, who described it in a classic paper in 1951. (Trivia: this was not the first reference to it, though: Karl Pearson in 1899 and Udny Yule in 1903 had identified it earlier, but it was Simpson’s name that stuck.)

The intriguing story of the hospital shows that we need to be careful in interpreting percentages. Are women less likely to be hired (as the aggregate data suggest) or more likely (according to the departmental data)? As so often, we need to try and untangle correlation and causation. Is the cause of the apparent discrimination against women the hiring policy? When we look at the individual data, we see that women apply disproportionately for an administrative role (where there are fewer vacancies, and where the success rate across the board is half that for clinical posts). But just try to capture that nuance in a headline…

Finally, here’s an example of how you can use statistics to satisfy your boss with minimum effort. Zita is the manager of two teams of translators who make subtitles for film and TV. Team A consists of four people: Alice, Bob, Chris and Dan; in Team B there are five: Erik, Fran, Gerry, Hank and Iris. Their long-run daily averages (number of words translated per day) are shown in the table below. Team B clearly performs significantly better than Team A, and Zita’s boss instructs her to crack the whip on Team A, so that their average gets at least to the industry norm of 2500 words per day.


Zita ponders for a moment, and then has a flash of insight: she decides to move Fran to Team A. Hey presto: not only does the average of team A hit the target 2500, but her boss will no doubt be pleased to hear that Team B’s performance is also up!
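The trick works with any numbers in which Fran’s average sits above Team A’s mean yet below Team B’s. A quick sketch with hypothetical figures (my own, not the original table):

```python
# Hypothetical long-run daily averages (the original table isn't shown here);
# Fran's 3500 lies above Team A's mean but below Team B's.
team_a = {"Alice": 2000, "Bob": 2200, "Chris": 2400, "Dan": 2400}
team_b = {"Erik": 3600, "Fran": 3500, "Gerry": 3800, "Hank": 3700, "Iris": 3900}

def mean(team):
    return sum(team.values()) / len(team)

print(mean(team_a), mean(team_b))    # 2250.0 3700.0

team_a["Fran"] = team_b.pop("Fran")  # Zita's move
print(mean(team_a), mean(team_b))    # 2500.0 3750.0 -- both averages rise
```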


If only all management interventions were this easy…

Posted in Economics, Behavioural economics, Psychology

Bending your mind

(featured image credit: Emilio Garcia CC BY)

Our beliefs are not always as solid and stable as we tend to, well, believe they are….

Our view of the world is shaped significantly by our beliefs. We believe that this, that or the other supermarket caters best to our needs, so that’s where we do our weekly shopping. We are convinced that cars from a particular country reflect its reputation, so that determines the makes we drive (or aspire to drive). We believe that this or that politician or party best serves our interests, so that is how we vote.

It would indeed be very hard to navigate the complex choices we need to make every day if we had to rationally weigh up every possibility, and if we couldn’t rely on our stable, persistent beliefs. We barely ever question them, and treat them as though they’re truths. So it’s not surprising that it is unusual for us to modify, let alone reverse our beliefs. For that to happen, something extraordinary is needed.

A surprising lunch

In the spring of this year, Greggs, a chain of sandwich shops in the UK known for decent, but unremarkable food like pasties and sausage rolls, took part in a food festival in Richmond, Southwest London. However, they disguised their identity, appearing as Gregory and Gregory instead. With offerings including Slow roasted tomato and feta pasta salad, and a vegan Mexican Bean wrap, they set out to present their new summer menu to a pretty posh bunch of visitors – undercover.


I can’t believe it’s Greggs – via youtube

And those visitors didn’t half change their mind according to this video. Now of course, it is a publicity campaign, with not even a pretence of scientific validity, and careful editing ensured the right visitors to their stall were shown (it’s called selection bias) – if they were not paid actors, that is. But it chimes with the idea that making people change their minds requires a stark confrontation with a different narrative. It takes a lot to shake up well-entrenched beliefs.

Or does it? How hard would it be to make you believe that purple was blue?

Fooled by prevalence

Recent research by Harvard psychologist David Levari and colleagues shines a remarkable light on the stability of our judgement. Behind the somewhat uncool title “Prevalence-induced concept change in human judgment” of their paper lie fascinating insights derived from seven studies, in which they investigate what happens when the relative occurrence of a particular stimulus is reduced.

In the first few experiments, they presented their subjects with a sequence of 1000 coloured dots, one at a time, chosen from a purple to blue continuum (shown in the image below). For each dot, participants had to indicate whether or not it was blue.


True blue? (image source)

The participants were assigned to one of two conditions. One group saw the dots picked randomly, such that every single one had a 50% chance of being picked from the blue half of the continuum – this was the stable prevalence condition. The other group was subject to the decreasing prevalence condition: the dots from the blue half were chosen with reducing likelihood (50% in the first 200 presentations, then respectively 40%, 28% and 16% in the next three sequences of 50, and finally just 8% for dots 351-1000).
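The decreasing-prevalence schedule can be sketched as follows (my own reconstruction of the protocol as described above, not the authors’ code):

```python
import random

# Probability that a given dot is drawn from the blue half of the
# continuum, in the decreasing prevalence condition.
def blue_probability(trial):          # trial numbers are 1-based
    if trial <= 200: return 0.50
    if trial <= 250: return 0.40
    if trial <= 300: return 0.28
    if trial <= 350: return 0.16
    return 0.08                       # dots 351-1000

random.seed(0)  # for reproducibility
dots = ["blue" if random.random() < blue_probability(t) else "purple"
        for t in range(1, 1001)]
first, last = dots[:200].count("blue"), dots[-200:].count("blue")
print(first, last)   # blue dots become much rarer towards the end
```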

In the first group the decision whether a particular dot was blue or not did not change from the first 200 to the last 200 dots presented. But the second group were more likely to say a given dot was blue in the final 200 dots (where only 8% were actually from the blue side of the continuum), than in the first 200. When the actual blue dots were less prevalent, the participants saw more of them than there really were.


(image source)

The researchers then ran the same experiment again, but this time actually told the second group that the prevalence of blue dots would decrease. Yet again, even with participants knowing the blue dots would occur less and less, they still shifted their judgement. In further follow-up studies, they found the result replicated when they specifically instructed participants to remain consistent, and not be swayed by the prevalence – even with a monetary incentive, and even if the change in prevalence was abrupt and not gradual. When they increased the prevalence of blue dots, the shift happened in the other direction.

Consistent results, but maybe a touch artificial and contrived: it is rare for anyone to face such a task outside a psychology lab. So the researchers tried something more realistic: they showed their participants computer generated faces on a continuum from ‘very threatening’ to ‘not very threatening’. And they found the same phenomenon was happening. Presented with a decreasing prevalence of threatening faces, participants were more likely to identify a given face as a threat when it appeared in the final 200 than in the first 200 presentations.


(image source)

What about non-visual stimuli? One study also looked at whether participants would judge research proposals to be unethical – again chosen from a continuum. On the ‘ethically OK’ side, there were innocent ideas, such as “Participants will make a list of the cities they would most like to visit around the world, and write about what they would do in each one”. The middle contained ambiguous ones, for example “Participants will be given a plant and told that it is a natural remedy for itching. In reality, it will cause itching. Their reaction will be recorded”. At the ‘very unethical’ end were proposals like “Participants will be asked to lick a frozen piece of human faecal matter. Afterwards, they will be given mouthwash. The amount of mouthwash used will be measured.”  And here too, participants in the decreasing prevalence condition were more likely to reject ethically ambiguous proposals that appeared towards the end than at the beginning.

Pessimism rules

The researchers’ conclusion is that we tend to expand our concept of what constitutes blue, a threatening face, or an unethical proposal, as the prevalence of them decreases. What was previously seen as purple becomes blue, faces that were neutral earlier on become threatening, and originally ambiguous research becomes unethical.

One consequence the authors imagine is that people whose mission it is to reduce some social ill will fail to recognize the result of their efforts. As the original problems become less prevalent, situations that used to be perfectly fine will begin to take their place, as the definition widens. This may lead to frustration, and to the misallocation of resources to solving problems that no longer exist.

But these findings are a concern also for those of us who do not have such noble objectives. We already have a tendency to overestimate the frequency of dramatic events like violent crimes, as these feature so prominently in the (social) media. If, thanks to effective interventions, criminal violence becomes less commonplace, we will not necessarily feel safer. Instead we may start judging minor instances as evidence of persistent violent crime.

Perhaps this effect on the general public is even more significant than that on policy makers and implementers. We may not actually change our mind, but we certainly bend it in the face of changing prevalence. We risk seeing the world as darker and worse than it really is. And if this happens to us as individuals, it will probably also be reflected in public opinion. Max Roser and Mohamed Nagdy at Our World in Data devoted a fascinating post to optimism and pessimism. The research by Levari and colleagues provides some explanation of our collective pessimistic nature.

This pessimism inevitably shapes our decision-making. It may make us more fearful than we should be – avoiding risks we overestimate, and being receptive to those playing to those fears, whether they be trying to sell us insurance, or trying to get our vote.

The fact that, even as the world gets better, our mind would seem to bend so easily towards pessimism is itself, sadly, grounds for some pessimism.

Posted in Behavioural economics, Cognitive biases and fallacies, Psychology, Society

Value, imagined

(featured image credit: Ali Eminov CC BY)

Is the value of stuff we buy, from grapefruit to airline seats, in the mind of the beholder?

Magic doesn’t really exist, but in a way, economics comes close. Contrary to what common wisdom maintains, there is something that is pretty much a free lunch: the miracle of trading. Just think about it: I’m going to the greengrocer’s to get my weekly ration of grapefruit, and come back with eight of them (they’re a little cheaper if you buy them four at a time), for just £2.80.

I think that’s tremendous value for money – I would certainly still buy them if they were, hmm… say 50p or even 70p apiece. While I am £2.80 poorer than before the transaction, I now have the fruit, which I think is worth more than the money I had to pay. At the same time, Mr Clarke is happy enough to discount his grapefruit from 39p to 35p if you buy them in fours, so we can presume he is, even at that price, still making money out of selling the golden orbs. We’re both winners in this transaction – isn’t that almost like magic?

Things can get a bit more complicated, though. Not all voluntary transactions feel as if they are a win for the buyer.

Hand made, or mind made?

Imagine you visit a craft fair where you spot a stall with pottery, hand made by African tribeswomen. It is not at all expensive, especially bearing in mind that they are unique pieces, and of course your purchase will support the villagers. So you happily hand over £20 for a nice fruit bowl (suitable for grapefruit). But two days later you see someone else selling very similar handmade items at an ethnic market. One of the items on display looks remarkably like the bowl you bought earlier. At that moment, a customer walks up and buys that exact piece. As soon as she is gone, the seller ducks underneath the table, and produces an identical “unique” piece, putting it on the spot occupied by the one just sold not three minutes ago. You’ve been had. That evening, you go agoogle on the internet and discover that you’re not the only person who’s been taken for a ride: apparently there is a factory in the Far East, churning out this earthenware by the containerload, supplying dodgy vendors all over the world.

Sure, the seller has deceived you, and none of your money is benefiting an African village. But has anything materially changed to the fruit bowl you thought was so pretty before? No – it looks and feels exactly the same. It’s just your knowledge of it that is different. You would perhaps still have bought it, but while £20 felt like a bargain for a hand thrown fruit bowl, it’s way too much for one that came off a conveyor belt.


Which one is the real hand made one? (photo: Jonas de Carvalho CC BY)

Why, though? Somehow, we appear to believe that extra effort merits extra reward, as Dan Ariely illustrates with a by now almost classic anecdote about a locksmith. When he is inexperienced and takes a long time on a job, people praise his work, even if he does additional damage. But he gets criticized for his high fees when, thanks to his experience, he can fix a lock in minutes and without breaking anything. We seem to value a lot of (inefficient) effort with a worse result more highly than a small amount of (efficient) effort with a better outcome.

There is something intriguing about this. A fruit bowl provides clear utility as a container for apples and grapefruit, and can provide additional utility by looking pretty. But why should the way it was produced matter to us? And when we’ve accidentally locked ourselves out, should the price we’d be prepared to pay to get back in depend on how that is achieved? (Arguably a locksmith taking a couple of minutes means we can be inside much more quickly – so why do we think that deserves less compensation than that of a bumbling novice taking an hour for the job?)

Paying the rent

Effort is relevant, sometimes even dominant, in how we value what we buy. If that effort is much less than we assumed, or than feels appropriate, we conclude we have been robbed.

Unsurprisingly, most of us will experience a similar reluctance to pay for things that didn’t cost the provider anything. This suggests that our annoyance may be related to how we experience the concept of economic rent. This term refers to “excessive returns”, over and above what is necessary to cover the factors of production (such as capital, materials and labour).  Its definition is open to some interpretation, but it chimes with the idea of “unearned”, and therefore undeserved income.

The airline industry forms a great example of charging passengers in different ways for pretty much the same thing. Part of what we pay for is the trip from A to B. But if you look at the cheapest economy ticket from London to New York (less than £300) and the dearest first class ticket (at least £3000), it’s obvious that there is more to it. Yes, first class seats are more luxurious, but that is hardly worth £2,700. (You could buy a very nice leather sofa, have a stupendous meal in a three-star restaurant and still have money left over.) They mostly represent status – something that exists, in effect, only in our minds, but that is very important for some. That must be why the people who fly first class generally don’t mind paying a big chunk of economic rent for something that doesn’t really cost the airline much.


No such thing as a free seat (image via Twitter)

Then there is the difference in price between an ordinary economy seat and one with extra legroom. Arguably, the latter costs the airline a bit more, as it takes up more space. The surcharge also ensures efficient allocation of a scarce resource: if you are very tall you can get the legroom you need.

But airlines go further still. There may be such a thing as a free lunch after all, but free seats are a different affair. Many companies now charge passengers for choosing a seat – not just one with extra legroom, but any seat, rather than being allocated a random one. So if you want to sit next to your companion or your family members, pay up. This is getting close to pure rent seeking: the airline is not passing on a cost, but simply charges us because it can.

And if that was not enough, some airlines even limit the number of seats they are willing to allocate randomly for free, as Elspeth Kirkman, the head of the Behavioural Insights Team in the US, experienced. If you’re not quick enough, you’ll have to wait until you get to the airport to know where you’ll be sitting – unless you pay for the privilege.

We are the judge

Infuriating as fake handmade bowls, expensive locksmiths and rent-seeking airlines are, should we really be so bothered about the reasons and motives of a provider to set the price where they do? We can imagine value-enhancing elements (like being handmade), or value-reducing elements (like the short duration of a job), but they are after all just in our mind. We are annoyed at having to pay a tenner to reserve a seat, but would we be equally annoyed if the ticket was £10 more expensive to start with, and came with free seat selection?

If we had a genuine handmade bowl and, while we were not looking, someone replaced it with a machine-made one that looked and felt identical in all material aspects, so that we would never notice the swap, would we really be worse off? Should we be concerned with the question whether it costs the airline anything to allow us to choose our seat in advance, if we perceive that it has value?

Ultimately, only we can be the judge of that. It is not irrational to value pottery that is handmade by real people in a real African village more highly than materially identical pieces made by a factory in China. It is our mind that establishes what a bowl is worth – for its practical and aesthetic utility, and anything else.

But it may well be useful occasionally to question our mind, and check how much we are really prepared to pay for value that is entirely in our imagination.



Posted in Behavioural economics, Cognitive biases and fallacies, Economics, Emotions, Psychology

The incredible flipping preference

What we truly prefer is sometimes not what we choose

There is a new coffee place in town. Monday last week temptation got the better of me and I decided to try it out. The Americano was rather pricey at £3.50, but it tasted so very nice that I went back the next day. The barista is an intriguing person, with an extraordinary memory. Having seen me only once before, she still immediately greeted me by my (admittedly unusual) name. She also has a rather peculiar way of proving that her coffee, costly as it is, is the best value for money.

On that second day, she made me an offer. She would dilute her coffee by 2% and charge me 10p less, and let me compare that with the standard full-strength coffee. If I preferred the newer one, I would be guaranteed to always get that cheaper variant – just 2% diluted, but at 10p less. So I watched her do her magic, and then sipped alternately from the two cups in front of me. Much as I tried, I could not really spot any difference, so, rational dude that I am, I went for the cheaper version.

“I thought you would,” the barista said. I smiled, and worked out that I’d just saved myself £25 on my annual coffee budget.

Keep on diluting

On Wednesday, she made me the same offer: reduce the strength by a further 2%, and another 10p discount, if I preferred this version over Tuesday’s. Once more, I couldn’t notice a difference between today’s and yesterday’s variants, so as before I decided to go for the cheaper coffee. “I thought you would,” she said again, with a twinkle in her eye. I was now going to save £50 a year on my daily coffee. Great!

Thursday, and Friday, the same story.

So the next Monday, one week after my first visit, I was wondering whether she’d propose to reduce the strength further, until I stuck with my last preference. And indeed she asked, “You preferred Friday’s version over all the coffees you tried last week, right?” I nodded. I was now drinking coffee that was imperceptibly less strong, but I was paying 40p less per cup. That would save me more than £100 per year – the price of a very nice meal for two.

“Here it is again, and here’s a cup of my standard coffee to compare it with,” she said, “the one you had on your first visit here. So which do you prefer?” I took a sip of Friday’s variant – it was pretty much what I remembered from before the weekend. Then I tried the original brew again. My goodness, how different it tasted, so much richer and fuller! If I hadn’t seen her prepare both cups, I would have sworn she was playing a trick on me.

“So, what’ll it be? Which is the best value for money?” she asked. There was no way I could envisage myself drinking that weak Friday concoction ever again – and after all, the wonderful, exquisite full-strength brew was barely 40p more. “I’ll have your standard coffee, now, and forever more,” I said. “I thought you would,” she responded.

What just happened violates one of the cornerstones of conventional microeconomics: the principle of transitivity. Rational people should, if they prefer B over A, and C over B, prefer C over A and not the other way round. I had favoured Friday’s coffee over Thursday’s, Thursday’s over Wednesday’s and so on…


…but I had also preferred last Monday’s over Friday’s. And that is, well, not rational.

“Are you by any chance a behavioural economist?” I asked the barista. She just smiled at me and winked*.

Here is a possible explanation for my inconsistent preferences. The difference in quality, resulting from a 2% dilution, between coffees on two consecutive days is insignificant. This makes the other difference, the difference in price, stand out. It is of course rational to go for the cheaper coffee if it appears to have the same quality as the more expensive one. But when the accumulation of quality reductions becomes discernible in comparison with the original coffee, it outweighs the relatively modest saving in cost.
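A quick back-of-the-envelope sketch of that asymmetry: each step is a 2% dilution (imperceptible), but four steps compound to about 8% (very perceptible), against a saving of only 40p.

```python
# Each daily offer dilutes the coffee by a further 2% and knocks 10p off.
strength, price = 1.0, 3.50
for day in ("Tue", "Wed", "Thu", "Fri"):
    strength *= 0.98        # 2% weaker than the day before: imperceptible
    price -= 0.10
print(f"{strength:.3f} of full strength at £{price:.2f}")  # 0.922 of full strength at £3.10
```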

Flipping preferences

We’re not off the hook, though. Even without such a convoluted circular comparison chain, our preferences are far from straightforward and stable.

In 1992, Max Bazerman, George Loewenstein and Sally Blount carried out a set of studies on how justice and fairness are perceived. One of their findings was that “individuals’ preferences […] can reverse, depending on whether potential outcomes are evaluated sequentially or simultaneously.”

Well before this research took place, I had been saving up my pocket money for a portable radio. A local department store had a whole range of them on display, and on the way back from school I often popped in to gaze at the models on offer. One particular type had become the focus of my attention: the Satellit 210 was big and expensive, but it was so impressive, with three times as many wavebands as the other models, that this was the one I wanted! My dad challenged me, though – what on earth would I do with nine shortwave bands? In the end, I saw sense and got myself a radio with just four wavebands, for less than half the price and weighing less than a fifth of the Satellit. (And to be honest, I never once listened to the single shortwave band on this one.)


Tough choices (images: Wikimedia, kbmuseum)

As soon as I stopped comparing radios side by side, and simply considered the options separately on their merits, it was clear that the shortwave bands were a complete red herring, and that an expensive beast of nearly six kilos was not what I needed. My preferences were reversed.

In a recent paper, Nudge co-author Cass Sunstein, the man who writes faster than his shadow (and certainly faster than most of us can read), delves deeper into this phenomenon. Using examples from elsewhere in the literature, he explores how and why our preference across two options might flip, depending on how we consider them. This is not just intriguing, it can also be welfare-reducing. If we have a true, underlying preference, but the context can lead us to believe the opposite, we risk ending up with the worst option.

Evaluating something means judging its features. That is easier to do if we have another option to compare it with: the characteristics in which they differ will then be prominent in our analysis. The problem is that these salient attributes may not be the ones that matter most to us (like the number of wavebands).

In the absence of an alternative, we have less information about the relative position of what is on offer. Even numerical data are often abstract without a reference point. Is a laptop battery life of 6 hours impressive or just mediocre? Is 10,000 entries in a dictionary a lot or a little? With no benchmark to compare it with, we are open to misinterpretation (“10,000 sounds like a lot”).

Cunning salespeople can take advantage of the weaknesses of both situations. If they want you to buy an item with positive features that are generally easy to evaluate on their own, and less positive ones that are harder to assess in isolation, they are better off showing it separately.  In the opposite case, they would enable the comparison with another option, and highlight the attributes where the choice they want you to make is superior – irrespective of the degree to which this matters to you.

But as my radio experience illustrates, we are perfectly capable of misleading ourselves, without the help of a devious shop assistant. The trouble is that neither joint nor separate evaluation is inherently better for establishing our true preference. When we need to decide between options, and choose which will serve us best, we ought to look at the features that matter most. But in practice, we are driven to focus on what stands out, and what is easiest to evaluate. And that tendency can trip us up in both cases.

Being aware of the problem can help, though. If you come to a conclusion using one type of evaluation, try the other. If it remains unchanged, great! If it flips, you know some more thought is needed. Ask yourself (or get someone else to ask you!) what truly matters, and then make your choice.

That’s the way to outsmart yourself (and any crafty salespeople).


*This story is based on an example in Nick Chater’s online course The Mind is Flat.


Posted in Behavioural economics, Cognitive biases and fallacies, Psychology | Tagged | Leave a comment

Surreal choices

When decisions look bizarre, it may be because they involve unfeasible trade-offs

Belgium is often associated with surrealism, not just because the renowned surrealist painter René Magritte was Belgian, but also because of its somewhat idiosyncratic politics. With six governments, several of which have a say in, for example, the ratification of EU free trade agreements, and of course its record of the longest period without an elected government (589 days), it has more than a decent claim to being a surrealist country.

But recently it has received a significant challenge from the UK. Earlier this week, rebels in the Tory party voted against the government, in order to support that very government. So what is going on? Let’s rewind a few weeks.

On Friday 6 July, prime minister Theresa May had pulled a white rabbit out of her hat. In a 12-hour long meeting at Chequers, her official country residence, she had succeeded in getting her unruly, divided cabinet unanimously to sign up to a comprehensive Brexit plan. Yet the triumph was short-lived. Less than three days later, both the Brexit minister, David Davis, and foreign secretary Boris Johnson had resigned, and she was forced to hurriedly reshuffle her government.

Opinions on the Chequers white paper in parliament were of course divided, and both sides had submitted amendments prior to the debate and vote. The pro-Europe side wanted to preserve what were, in their eyes, elements of the present relationship between the UK and the EU that are crucial for the country’s economic prosperity (like a form of customs union enabling frictionless automotive supply chains). Those favouring a hard Brexit wanted to eliminate all influence Europe might still have over the UK (such as the role of the European Court of Justice on VAT arrangements).


Nothing surreal so far (image: UK Parliament CC BY)

The vote on the Chequers plan was widely considered a test of the government’s stability, and of its ability to deliver the divorce by March 2019. Under pressure, it had decided to accept all the amendments made by the Brexiteers, but none of the pro-Europe ones. Despite doom scenarios of what might happen if the government were defeated (a vote of no confidence, followed by new elections which would be won by Labour and lead to PM Jeremy Corbyn!), fourteen Tory MPs voted against the government. They did this because they believed that the Chequers plan, with the amendments, didn’t stand a chance as a negotiation opener. Thanks to a handful of pro-Brexit Labour MPs who defied their own leadership’s voting instructions, and the remarkable absence of two key Liberal Democrat MPs, however, the government narrowly won (by 3 votes).

The scene of members of the governing majority deciding to vote against the government in order to safeguard the carefully crafted original government white paper has indeed something surreal about it. Can we make some sense of it?

For that, we must look at the choices some of the key people in this tragicomedy were (and are) facing.

Tough, tougher, toughest decision-making

Many decisions we make day in, day out are relatively simple. Shall we sacrifice one thing (often money or time) in order to gain another thing (typically a good or a service)? Whether we do or don’t, we’ll have less of one, but more of the other, and so the decision boils down to: is it worth it? That may not be easy, but it is not complex. Even if (or in fact precisely because) we can spend money or time in different ways, we can easily make comparisons, and so establish whether a particular exchange delivers the best value.

But sometimes intangible values play a part, and then things do become more complex. Deciding whether a new pair of trainers or a fancy meal out provides the most value is one thing, but choosing between buying a new smartphone, and keeping your current one for another year while donating the money to a charity close to your heart – that is a different affair.

Yet the politicians on the choppy waters of the Brexit sea are facing even more complicated choices. They have to weigh up multiple intangibles against each other, like political ideology, the economy, and personal integrity. For many, the – for them at least – very tangible matter of the risk to their job as a minister or an MP adds a further dilemma.

Take the challenge for the cabinet members at the Chequers jamboree. Irrespective of which side they’re on, one element was loyalty to the PM (and keeping their ministerial job). But what about their ideology? Would the chance to get the Brexit they wanted (the softest possible for one side, the hardest one for the other) be enhanced by supporting the plan, or by rejecting it? Would their voters back home respond more positively if they stood up for their conviction, but undermined the PM – or vice versa? Which choice would best serve any personal ambitions (perhaps to become party leader and PM oneself)? Loyalty appeared to be the decisive factor at first, but upon reflection other considerations turned out to be more weighty for two key cabinet members.

Mrs May herself doesn’t have an easy job either. She has – perhaps unwisely – drawn, and later reconfirmed, several red lines well before the complexity of Brexit was fully clear. But as the daughter of a vicar, she is a very conscientious person, for whom keeping promises is very important. However, Brexit bulges with trade-offs between sovereignty over laws, money and borders on the one hand, and frictionless trade (without tariffs or paperwork), and a myriad other complications for a range of industry sectors from air travel to nuclear medicine, on the other. More sovereignty may mean damage to trade, and hence to potentially hundreds of thousands of jobs.


The spectre of PM Corbyn (source: YouGov)

On top of that, she is leading a deeply divided party that wouldn’t take much to split, and she has a very slim majority in the Commons. One wrong step could lead not only to her losing her position as party leader and PM, but also to an early election, in which a recent poll suggests Labour might gain many more seats than the Tories (though probably not an absolute majority). Her job, her reputation, the Conservative government and the survival of the Tory party are at stake.

The political trolley problem

Potentially, the choice she faces is very stark indeed. Should she preserve a smooth economic relationship with the EU (representing roughly half of the UK’s international trade) and compromise heavily on her red lines (and thus risk the implosion of her party)? Or should she follow the ideological path to sovereignty, which might just keep the party together (a majority of the grassroots membership is in favour of a hard Brexit), but risk significant economic damage to the country? Screw the party, or screw the country? It’s almost like a politician’s version of the trolley problem.

Within that context it is not entirely surprising to see Tory rebels voting with the opposition in order to support the government’s original proposals, surreal as that may look. But the actual roots of today’s surreal scenes lie not in the decisions that are being taken today, but in those that preceded them, including:

  • triggering Article 50 of the Lisbon Treaty, thus unconditionally fixing the day the UK would drop out of the EU, without having a clear plan
  • pursuing (and promising) mutually incompatible goals, as if there are no trade-offs to be made (also known as having your cake and eating it)
  • holding a referendum without any plan of what happens in case it doesn’t go the way you want it to
  • holding a referendum in which one of the two options is ill-defined, open to widely different interpretations, and with no clarity on what trade-offs it would entail

As a Belgian (whose future status as an inhabitant of the UK is becoming more uncertain by the day) it pains me to say that the UK is well and truly outshining my native country in the surrealism stakes.

Ceci n’est pas un gouvernement.

Posted in Behavioural economics, Economics, politics | Tagged , , | Leave a comment

What’s new(s)?

(featured image credit: Martin Krolikowski CC BY)

The priorities of the items in the news hint at an important factor in our economic decision-making


In the past week, three themes have been dominating the news headlines in the UK: the football world cup, Brexit and the fate of the 12 boys and their football coach trapped in a cave in Thailand. At first sight, this has little to do with economics.

Well, arguably, Brexit has a lot to do with economics, but I am alluding to the decision-making about what to cover and not to cover, how much time or column inches to devote to each item, and what order or page to present them in. And that is very much economics: it is about allocating a precious, scarce resource.

Editors have a tough task. With very little time to prepare – especially those responsible for newspapers’ early editions or morning broadcast news – they need to decide what should be most prominent for their readers, listeners and viewers.

Audience figures matter a lot, so they must shape the front page or the news headlines according to the tastes of their target public. If their decisions don’t chime with their audience and it turns away, they will soon be replaced by someone better at the job. So it is reasonable to assume that editors are at least quite competent at anticipating what is important to us here at the other end.

What matters?

So far so good. But what makes something important to us? You might imagine that the news will reflect events and developments that may affect our wealth and our health – aren’t these top priorities?

Yet strangely, it’s hard to see how these three themes fit that requirement. The football world cup results may have some significance for the vendors of merchandise, but the material impact on the vast majority of the population is very small.

Brexit, in contrast, will most definitely materially affect the wealth and prosperity of many Britons. But that was not what hit the headlines. Even though leaving the EU will hit most people in the wallet, there is little interest in the technical details. No, it was the drama (some might say the pantomime) around the movements in the British government that caught the attention. On 6 July Chequers, the Prime Minister’s country residence, was the stage for the long-anticipated crunch meeting where Theresa May would seek to establish consensus around a comprehensive Brexit plan. That evening, it seemed the entire cabinet had unanimously backed the plan. But barely 48 hours later David Davis, the Secretary of State for Exiting the EU, had resigned. And by Monday afternoon, Foreign Secretary Boris Johnson had also left. Arguably, the resulting changes to the cabinet too may indirectly have material consequences for the electorate (and the expatriates living in the UK but not entitled to vote), but by and large, that was not the focus of the reporting.


Emotional rescue (photo: Wikimedia commons)

Perhaps the strongest indicator of what drives the choice of news items lies in the story about the entrapment of the 12 teenage footballers and their coach in the Great Cave of the Sleeping Lady, and their subsequent rescue. Here, any connection with anyone’s present or future wealth is extremely remote – yet it is a story that has enjoyed, even more than the other two, exceptional global popularity. The idea itself of being trapped underground in pitch darkness for over a week, with water rising, with barely any food or drink, and with no idea whether you will ever be found – it’s the stuff of nightmares. And this was happening, not to rugged, experienced cavers, but to a bunch of boys. It’s not hard to see how it was the emotional connection that captivated millions of people worldwide.

It’s just a small step from there to the emotions involved in the football world cup. Both my native country and the country where I have been spending the second half of my life reached the semi-finals. Am I a great football fan? Not at all. Is football important to me? Not really. Well, I say that, but it does seem to have been important enough to me to join countless other people who otherwise rarely watch football, and spend about two hours of my life watching a game several times in recent days. Sure, I was selective in my interest – because only two teams really engage my emotions – just like so many other people who root for ‘their’ team. No surprise then that news editors have been allocating a lot of their scarce resource to the adventures of the national team.

And emotions abounded just as much in and around the recent episode of the Brexit soap opera. The cliff-hanger of the Chequers meeting, the delayed resignations, the showmanship of Boris Johnson, who invited the press to witness the signing of his resignation letter, the scheming of outraged Brexiteers weighing up their chances to topple Theresa May, and so on – all the way to Donald Trump’s sizeable contribution just this morning. You didn’t even need to choose a side to be engrossed by the spectacle – it had something of a Shakespearean tragedy. But of course for many people in polarized Britain this was either an opportunity to experience a combination of smugness and schadenfreude, or of affront and despair – adding to the emotional gravity.


An emotional farewell letter, for sure (photo via Twitter)

Emotion matters

News editors give more weight to stories that chime with our emotions. And it is apparent that those are, by and large, not stories about what makes us materially better or worse off.

Those preferences for news – so aptly identified by editors – carry over into our wider (economic) decision-making. We most certainly incorporate emotions in the choices we make.

That goes squarely against a certain, narrow interpretation of us people as homo economicus, the rational, utility-maximizing, self-interested character, in which utility only reflects material costs and benefits. Yet it also challenges a widespread perspective in behavioural economics, in which we are all profoundly irrational, riven with biases and prone to fall for fallacies.

But is it truly irrational to prefer a job where you earn $50,000 and your colleagues $25,000 over one where you earn $100,000 and your colleagues $200,000? (That is what about half the participants in a 1997 study by Sara Solnick and David Hemenway chose.) Or is it the blatant unfairness of the second job, compared with the pleasant feeling of superiority of the first that is, apparently, valued at $50,000?

Is it irrational for people to strongly prefer a ‘platinum’ credit card over an ordinary one offering identical benefits? (This is what Leonardo Bursztyn, Bruno Ferman, Stefano Fiorin, Martin Kanz and Gautam Rao found in a 2017 field experiment in Indonesia.) Or does the higher perceived status of the platinum card confer real emotional utility?

When you realize how deeply emotional considerations are involved in the decisions we make, choices that may appear surprising or indeed irrational at first become perfectly understandable. And when you realize that, for different people, different emotions may be at play, that is even more the case. But it would be a mistake to conclude from this that we are not rational, but emotional beings. On the contrary: it is precisely our own, individual emotions that make us rational – at least some of the time.

Posted in Behavioural economics, Cognitive biases and fallacies, Emotions, Psychology | Tagged | Leave a comment

The fallacy that became itself a fallacy

(featured image credit LearningLark CC BY)

Even specialists can fall prey to cognitive errors

I have been tossing a fair coin, and it has come up heads six times in a row. The chance of either heads or tails is 1 in 2, but for this to happen six consecutive times is 1 in 2 to the power 6, or 1 in 64. What is the probability that the next toss will turn up a seventh consecutive head?

The correct answer is of course 1 in 2. Coins don’t have a memory. What happened before cannot influence the current result. We all know that – yet our intuition sometimes leads us to believe differently.

On 18 August 1913, the ball at one of the roulette tables in the Monte Carlo casino had ended up in a black slot nearly 20 times in a row. Several gamblers started taking an interest and began putting money on red: after such a long streak of black, red was surely due to come up. And still the ball kept falling on black. People put more and more money on red at each successive turn of the wheel – and kept on losing it, as black kept on coming up. Let’s face it, where would you have put your money if you had happened to be there, having seen black come up 25 times in a row? Eventually, after an unbroken series of 26 blacks (likelihood: 1 in more than 136 million), the ball finally landed on red.
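
That likelihood is easy to check with a couple of lines of Python (my own back-of-the-envelope sketch; a European wheel has 37 slots, 18 of them black):

```python
# Chance of the ball landing on black 26 times in a row on a European
# roulette wheel: 18 of the 37 slots are black (the zero is neither
# red nor black).
p_black = 18 / 37
odds = (1 / p_black) ** 26  # odds against a 26-black streak
print(f"1 in {odds:,.0f}")  # roughly 1 in 137 million
```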

More recently, it was number 53 in the Italian lotto that had failed to come up for nearly two years and which, apparently, not only drove people bankrupt, but some even to their death. The phenomenon had been described long before, though, for example by the early 19th-century mathematician Pierre-Simon de Laplace, as Joshua Miller and Andrew Gelman discuss in a fascinating paper.


Red at last! (photo: SalFalko CC BY)

It is gamblers who give this cognitive illusion – that if something happens less frequently than normal for a period, it will occur more frequently in the future (or vice versa) – its name: the gambler’s fallacy.

Let’s not assume that only compulsive gamblers fall for it, however. If your neighbour is pregnant again, having had five girls, what is the chance of the sixth child being a boy? Did you not, for a moment, feel that it ought to be more than 1 in 2? Given the dry and sunny weather we’ve been experiencing in the UK for weeks now, do you think a dry second half of August is more or less likely than normal?

We treat streaks as if they predict the next outcome. Our human brain is ill-equipped to handle the concept of randomness: we easily see patterns where there are none. The figure below[1] shows the outcome of three sequences of 50 roulette wheel spins (ignoring any zeroes). We know that red and black are equally likely, and we intuitively interpret this as “50% of the outcomes should be black”.  When we see long streaks of one colour, we think that is extraordinary. The universe is out of balance, and a reversal is overdue. Our intuition tells us that after the five blacks in A, a red outcome must be more likely to redress the imbalance. A sequence without long streaks (like C) looks more ‘normal’, but in reality, A and B are the result of genuine spins, while C has been manipulated to generate a reversal 3 out of every 4 times.
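
How ordinary are such streaks really? A quick simulation (my own sketch, separate from the figure) suggests that a run of five or more of the same colour is the norm rather than the exception in 50 spins:

```python
import random

# How often does a sequence of 50 fair red/black spins contain a streak
# of five or more of the same colour? Intuition says rarely; let's count.
random.seed(0)

def longest_streak(spins):
    """Length of the longest run of identical outcomes in the sequence."""
    best = run = 1
    for prev, cur in zip(spins, spins[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

trials = 20_000
hits = sum(longest_streak([random.choice("RB") for _ in range(50)]) >= 5
           for _ in range(trials))
print(f"{hits / trials:.2f}")  # about 0.8: most sequences have such a streak
```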


The hot hand

There is another phenomenon that is often compared with the gambler’s fallacy. Known as the hot hand fallacy, it was first described in 1985 by Thomas Gilovich, Robert Vallone and Amos Tversky. The ‘hot hand’ refers to a presumed temporary state of a basketball player in which they are more likely to perform better than average, i.e. they will produce streaks of successful shots. Fans, coaches and players alike widely believe that players are more likely to make a shot after having made the last two or three shots than after having missed them.

In several studies the researchers failed to find any significant correlation between shots (except for one player). They concluded that the belief in the hot hand in basketball is a cognitive illusion, resulting from the “expectation that random sequences should be far more balanced than they are, and the erroneous perception of a positive correlation between successive shots.” The chance of scoring does not depend on what went on before.

For about 30 years, this conclusion remained largely unquestioned. The hot hand fallacy stood as one of the more robust cognitive errors, even as new research using larger datasets found patterns actually consistent with the hot hand. Issues of measurement and control prevented these studies from determining the magnitude of the hot hand, and so they did not topple the prevailing wisdom.

But in 2016 Joshua Miller and Adam Sanjurjo discovered a fundamental flaw in the reasoning by Gilovich et al. One of their original studies positioned players on various spots on an arc, at a distance from which their shooting percentage was approximately 50%. Each player had to take 100 shots and they found no marked difference in the goal percentages after 1, 2 or 3 hits, or after 1, 2 or 3 misses. Intuitively, this is exactly what you would expect to find if you were to replace the outcomes of the players’ shots by a sequence of 100 coin flips. As successive flips are independent, we expect the percentage of heads that follow a streak of heads to be identical to the percentage of tails that follow a streak of heads.

This is incorrect. Bafflingly so, but yes, it truly is incorrect. Meet Jack, a hypothetical character from Miller and Sanjurjo’s paper. Jack tosses a coin 100 times. Every time it comes up heads, he writes down the result of the next flip. He – like Gilovich and co, and most of us – expects to see approximately 50% heads among the results he has written down. But it is less.

To illustrate why, let’s look at what happens when he tosses the coin just three times. There are eight possible outcomes:


In the first two cases, Jack has nothing to write down. The next four situations turn up one relevant head, so Jack writes down the outcome of the next flip. The final two feature relevant heads twice, so each time Jack writes down two results. The proportion of heads for each flip is shown in the third column. In three of the six cases (3, 5 and 6) heads is followed by tails, so the proportion is 0. In case 4, heads follows heads, so here the proportion is 1 (or 100%). In the remaining two cases, heads occurs twice. In case 7, one heads is followed by heads, the other by tails, so the proportion is 0.5; and finally, in case 8, heads is followed by heads two times out of two, so another 1.

Now imagine I have flipped the coin 3 times and I have calculated the proportion of heads following heads. What would be your best guess for the proportion? A simple calculation shows that the best guess (the expected value) is 2.5/6 (or 5/12), which corresponds with 41.67% — considerably less than the 50% we were all expecting.
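
Jack’s bookkeeping can be verified exhaustively in a few lines of Python (a sketch of the enumeration above, not code from the paper):

```python
from fractions import Fraction
from itertools import product

# Enumerate all eight equally likely three-flip sequences. For every
# sequence with at least one head in the first two positions, record the
# proportion of heads among the flips that immediately follow a head.
proportions = []
for seq in product("HT", repeat=3):
    follow_ups = [seq[i + 1] for i in range(2) if seq[i] == "H"]
    if follow_ups:  # TTH and TTT give Jack nothing to write down
        proportions.append(Fraction(follow_ups.count("H"), len(follow_ups)))

expected = sum(proportions) / len(proportions)
print(expected)  # 5/12, i.e. about 41.67%
```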

Miller and Sanjurjo provide mathematical proof that this remains true for larger, finite sequences and for longer streaks: the likelihood that heads turns up after a preceding streak of heads is always less than one half. The implication of this is clear: a basketball player shooting just as well after a streak of hits as after a streak of misses is shooting about 8 percentage points better than you would expect by chance. This is no small beer. Typically, the field goal percentage (the ratio of successful shots over attempts) of a top NBA player is about 10% higher than that of a median player. In other words, a median player with a hot hand gets a boost that is nearly equivalent to moving to the top level!
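
To get a feel for that 8-percentage-point figure, here is a Monte Carlo sketch of my own, using 100 fair coin flips as a stand-in for a 50% shooter’s 100 shots:

```python
import random

# In 100 fair flips, compare the proportion of heads right after three
# heads in a row with the proportion of heads right after three tails in
# a row. Every flip is independent, yet the average difference is negative.
random.seed(1)
diffs = []
for _ in range(50_000):
    f = [random.random() < 0.5 for _ in range(100)]
    after_hhh = [f[i + 3] for i in range(97) if f[i] and f[i + 1] and f[i + 2]]
    after_ttt = [f[i + 3] for i in range(97) if not (f[i] or f[i + 1] or f[i + 2])]
    if after_hhh and after_ttt:  # both streak types must occur
        diffs.append(sum(after_hhh) / len(after_hhh)
                     - sum(after_ttt) / len(after_ttt))

avg_diff = sum(diffs) / len(diffs)
print(f"{avg_diff:.3f}")  # around -0.08: the eight-point gap from pure chance
```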

A surprising fallacy

So if the hot hand actually exists, the hot hand fallacy is itself a fallacy. How could it persist for so long?

While I was drafting this article, Dan Kahan, a law professor at Yale University, published a post that reflects pretty much my own thinking. The authors of the original study, and many of those who subsequently took it at face value, assumed that the belief in the hot hand is similar to that which is central to the gambler’s fallacy: the erroneous belief that independent successive events are, in fact, not independent. It has the look and feel of a typical System 1 error – the cognitive system popularized by Daniel Kahneman in Thinking, Fast and Slow: fast but impulsive, and therefore relatively easy to fool.

What better way to show the error in the belief in a hot hand, than the conscious, deliberate, systematic, effortful application of System 2 thinking in the 1985 paper? However, hidden in it was the erroneous intuition that Miller and Sanjurjo brought to light. Cognitive errors are generally attributed to an overreliance on System 1. But here we have a situation where the researchers had not only mistakenly assumed such a System 1 bias in believers in the hot hand, but also neglected to verify their own assumptions.


Little hands, but hot hands! (photo: Patrick CC BY)

There is a lesson here for everyone involved in scientific research: doubt your assumptions and your intuitions, even if – or indeed especially if – they look totally self-evident. Verify the hell out of them (and enlist the help of your most critically-minded colleagues).

But there is also a lesson for all of us. Andrew Gelman, a statistician who delights in exposing sloppy statistics in the social sciences, describes the response of Tom Gilovich to the criticism as doubling down on the original conclusions.

It is not in our interest to cling to beliefs that are crumbling under new evidence, let alone to double down on them. There is an array of cognitive explanations why we tend to do so, from confirmation bias to the sunk cost fallacy – but it doesn’t have to be that way. If we don’t identify with our beliefs – if, as Nick Maggiulli says, we treat them like clothes we can change, rather than tattoos we are stuck with forever (or until painful removal) – we will not find it so hard to replace our old, obsolete beliefs with better ones.

I firmly believe this – at least until I am confronted with credible evidence of the contrary.


[1] Thanks to Josh Miller for kindly helping me, in an email exchange, with visualizing the human fallibility in evaluating randomness

Posted in Behavioural economics, Cognitive biases and fallacies, Psychology | Tagged , , | 1 Comment