Shaping our reality

(Featured image credit: Free-Photos)

Is there an objective reality, and are we capable of observing it?

On the day after Donald Trump officially became the US president, the then White House press secretary stated the crowd “was the largest audience ever to witness an inauguration”. This was not the first time the president or people from his entourage had constructed a reality that was, shall we say, at odds with the facts – or indeed advocated that doing so is a good thing.

Last month an old tweet by his daughter and presidential adviser Ivanka was circulated widely. In it she apparently quotes Albert Einstein:

“If the facts don’t fit the theory, change the facts.”


If that sounds an unlikely thing for the Nobel prize-winning physicist to have said, that is because he never said anything of the kind. Unsurprisingly, there was widespread derision. It fitted perfectly with the mindset of alternative reality (or even ‘alternative facts’, a phrase made popular by Counsellor to the President Kellyanne Conway) that seems to prevail in the White House. But are we perhaps a bit too quick to mock others? How good are we at separating what we want to be true from what is objectively true?

Beliefs and desires

Anyone with a passing interest in behavioural economics (or who occasionally reads my writings) is most likely familiar with the confirmation bias: the tendency to seek out evidence that confirms our prior beliefs, and to ignore or dismiss evidence to the contrary. But Ben Tappin, a psychologist at Royal Holloway University in London, and colleagues suspected there was something else at work beyond the tendency to confirm our prior beliefs. We also appear to assign greater weight to information when it is desirable, irrespective of what we actually believe – the so-called desirability bias.


A record crowd! (Source: Snopes)

Often, our beliefs and our desires or hopes coincide. We believe that we will find a good job after college, that our children will be healthy and smart, that we will climb the career ladder pretty swiftly, and so on. We also wish for all these things. Tappin et al designed an experiment in which they could separate the two, in the context of political beliefs.

In the 2016 US presidential election, they surmised, many Trump supporters believed that Clinton would win. New poll information that gave Clinton the advantage would be simultaneously confirming and undesirable, and polls indicating a win for Trump would be simultaneously desirable and disconfirming. In either case, the desirability bias and the confirmation bias would pull in opposite directions.

By providing the participants in their study with selective, but real poll outcomes, they could control whether they received information that was either consistent or inconsistent with their desire, or their belief of who would win the election.

And they did indeed find a robust desirability bias effect. People tended to incorporate information significantly more readily if it was consistent with their desired outcome, independently of their prior beliefs. For example, people who wanted Trump to win but believed Clinton would win were significantly more confident of a Trump win after they were given poll information boosting Trump’s chances. But given the same information, people who wanted Clinton to win but believed Trump would win made much smaller adjustments to their confidence level.
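The asymmetry can be illustrated with a toy model. This is my own sketch, not the authors’ analysis: the update weights are invented purely to show the shape of the effect, with desirable poll news moving confidence more than undesirable news.

```python
# Toy illustration of the desirability bias (invented weights, not the
# authors' model): the same poll shifts confidence more when its message
# is desirable than when it is undesirable.

def update_confidence(prior, poll_shift, desirable,
                      desirable_weight=1.0, undesirable_weight=0.3):
    """Move confidence toward the poll, weighting desirable news more heavily."""
    weight = desirable_weight if desirable else undesirable_weight
    return max(0.0, min(1.0, prior + weight * poll_shift))

# Two voters see the same pro-Trump poll (a +0.2 shift in Trump's favour):
trump_fan = update_confidence(prior=0.4, poll_shift=0.2, desirable=True)     # ~0.60
clinton_fan = update_confidence(prior=0.4, poll_shift=0.2, desirable=False)  # ~0.46
```

The particular weights are arbitrary; the point is only that the same evidence produces a larger belief shift when it is welcome.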

Unwanted solutions

In another paper, Troy Campbell and Aaron Kay at Duke University explore what could explain the widespread public distrust of the scientific consensus, for instance on climate change. There has been intensive communication of the statistical evidence, and of proposed government policies – Al Gore’s documentary An Inconvenient Truth is a prominent example. How come this failed to resonate with the (largely conservative) climate change ‘deniers’?

Conservatives have been found to be more sensitive to scary information. Perhaps that motivates them to deny climate change, out of a stronger fear of the consequences? But the researchers started from a different hypothesis. They proposed that aversion to the consequences of the proposed solutions, rather than fear of the problem itself, motivates scepticism of the scientific evidence.

In their experiment, they first gave participants the scientific consensus on global temperature increase by the end of the 21st century as held by the Intergovernmental Panel on Climate Change (IPCC). Then the participants were asked to evaluate a proposed policy measure, after which they had to indicate whether or not they agreed with the IPCC’s prediction.

If the proposed solution was new government regulation, like a carbon tax, only 22% of Republican-leaning participants agreed that temperatures would rise by at least the IPCC’s predicted value of 3.2 degrees Fahrenheit. But when the policy option emphasized the role of the free market (e.g. innovative green technology), the proportion of Republicans in agreement with the IPCC was 55%. For Democrat-leaning participants, the choice of policy measure made no significant difference to their stated belief (68% in agreement in both cases).

Is this kind of solution aversion a typically conservative phenomenon? The researchers conducted a similar study, in which they explored people’s perception of the severity of the problem of violent home break-ins. And indeed, here participants with more liberal ideologies (i.e. favouring tighter control on the possession of firearms) were much more likely to downplay the problem if the proposed solution called for looser gun control.


Not that big a problem, actually. (photo: TheDigitalWay)

They concluded that, irrespective of a person’s politics or ideology, the more threatening a solution is to them, the more likely they are to deny the problem.

A reality for everyone

We are often good at noticing and calling out the tendency of others to interpret the world in a subjective way. If we want to be unkind to them, we say they are living in an alternative reality. Even if we don’t say so explicitly, we imply that we ourselves are much better at seeing the world as it really is.

As the research suggests, that is likely to be a case of illusory superiority, though. We are often unduly influenced by what we want to be true (or untrue). We tend to adapt the size of a problem according to how much we like the solution.

Perhaps not all of us go so far as Ivanka Trump suggests, and change the actual facts to fit our assumptions. But we had better be aware that we are shaping our own subjective reality.


Posted in Behavioural economics, Cognitive biases and fallacies, Emotions, Morality, politics

Eggs and chips

(featured image credit: paul sandham/CC)

Why we see two similar health threats very differently

My native country is in the international news again, thanks to the unfolding egg scandal that is now expanding well beyond its borders. Traces of Fipronil, an insecticide used to combat poultry mite among chickens, have been found in eggs, causing millions of them to be taken off the shelves. The initial unrest was put into perspective by claims from both the Belgian agriculture minister and the health minister that the levels were just one tenth of the strict European threshold. But shortly afterwards, that norm appeared to have been breached anyway.

Figures for acceptable limits in mg/kg, and for the number of eggs it was safe to eat in a day, were bandied around (was it fifteen, or just four?). There was no real panic (the Belgians are a sanguine people), but the dedicated call centre set up by the federal food agency received more than 1500 calls on its first day – 10% of what they normally get in a whole year. Clearly there was a good deal of concern.


Which one contains the Fipronil? (credit: annca)

But why so much attention for suspect eggs, and so much less for the acrylamide in the Belgian national dish, the fries? This substance is produced when starch or sugar is heated above 120 degrees (so definitely in the deep fryer), and is regarded by the World Health Organization as a possible carcinogen. Objectively, this problem would appear to be more significant, especially in the long term – wouldn’t it?

Quite likely, but thanks to a range of cognitive quirks, we ordinary mortals find taking such an objective perspective not that easy.


For a start, we regard the prominence of a news item as an indicator of its importance (the so-called salience effect). If something figures in the headlines frequently, and for many days, then it must be a weighty matter. The acrylamide story appeared on a single day in July in the Belgian media, but the eggs have been the first topic in every news bulletin for over a week now.

We also have a tendency to underestimate effects that are far in the future (a phenomenon known as hyperbolic discounting). This is behind our delaying behaviour: exercise, diet, retirement savings – we can start them all next month instead of today, as it won’t make much difference anyway. And yes, perhaps we will get cancer in 30 years’ time from that acrylamide, but hey, that’s such a long time away. These eggs, though, they are toxic right now.
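Hyperbolic discounting has a simple standard form worth spelling out: in the one-parameter model, a payoff (or harm) of size A at delay D is valued at V = A / (1 + kD), where k captures impatience. A quick sketch, with a k value chosen arbitrarily for illustration:

```python
# Hyperbolic discounting sketch: V = A / (1 + k*D).
# The value of k is arbitrary here, chosen only to illustrate the effect.

def hyperbolic_value(amount, delay_years, k=0.2):
    """Perceived present value of `amount` received (or suffered) after `delay_years`."""
    return amount / (1 + k * delay_years)

print(hyperbolic_value(100, 0))             # 100.0 – harm today feels full-sized
print(round(hyperbolic_value(100, 30), 1))  # 14.3 – the same harm 30 years out feels small
```

That steep devaluation of distant consequences is exactly why cancer in 30 years weighs so much less than toxic eggs today.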


Not taking any risk: I’m having both! (image: bob walker/CC)

Furthermore, it seems that we magnify (in our perception) harm that is the consequence of someone’s deliberate actions. Daniel Ames and Susan Fiske, two Princeton University psychologists, investigated precisely how we estimate harm. Their first scenario concerned a fictitious CEO who had made a poor investment, resulting in lower salaries for his employees. Participants who had been told that he did this intentionally rated the damage to the employees as 39% higher than those who thought he did so accidentally.

The second scenario described a real case in which someone had diverted the flow of a river and caused a shortage of water. The participants received a list of damages and had to estimate the magnitude of the losses suffered. Those who were told that the problem had a natural cause (lack of rain) were close to the truth ($2,753 on average, compared to the real $2,862). But those who were told it concerned a deliberate act overestimated considerably ($5,120). The researchers’ analysis showed that blame motivation was behind this magnification. This is another reason why a scandal in which business managers and government carry responsibility is seen as a larger problem than the chemistry of deep-fried potatoes.

…and consciously

These cognitive errors, which lead to the difference in perception of the two threats, are largely unconscious. Nevertheless, if we really took our health seriously in a rational manner, we should be as worried about Fipronil eggs as about acrylamide fries. When we are clearly not doing that, we risk experiencing cognitive dissonance, the psychological stress we feel when we are not acting in accordance with our convictions.

But we have another string to our bow to deal with that: motivated reasoning. We add positive and negative emotions to an argument, as if they have the power of evidence, in order to not just confirm, but also justify the way we see the world.

On the one hand there are the unscrupulous businesses placing profit motives above public health, and government ministers and officials who would rather pass the hot potato (deep fried or not) than intervene. The suspicion of fraudulent behaviour by the disinfection companies, and the arse-covering by the administration, only serve to reinforce our opinion of them. On the other hand, fries have been a delicacy for generations, and few people are known to have died of cancer because they’d eaten too many of them. So this business with this carcinogenic – no, allegedly carcinogenic – substance is probably not that bad anyway.

So, quite a few reasons why one health hazard is not the same as the other one… Cognitive errors that give us a distorted, subjective picture of the world, and that even save us when all that irrationality threatens to trip us up.

We are only human after all.


What do we want? Control!

(image credit: Sarah Ross/CC)

Control and freedom of choice are important to us, and we are prepared to pay real money for it. But things are not always that simple…

‘A la carte’ – a posh French phrase that implies something that is bespoke, something that is tailored to our specific preferences. It means that we can choose. The term stems from the pleasant world of gastronomy, where restaurants often offer a set menu on the one hand, and a wide range of starters, main courses, second courses, desserts and goodness knows what more, on the other hand.


Choice doesn’t come cheap (image: Alpha/CC)

The remarkable observation is that the price of the set menu is almost always a good deal lower than it would cost to order the same courses separately from the general menu. Cynics might argue that this is because the dishes in question are smaller when they are part of the set menu, but I have never seen any evidence to support such a claim. It is not the motivation behind the restaurateur’s pricing approach that is really interesting, though. Far more intriguing is the fact that we appear to be willing to pay more for a menu that we assemble ourselves, than for one that has been constructed for us.

Paying for control

We are willing to pay for control – and not just in restaurants.

Such payment is not always made in money. Just this week, the commemoration took place of the mindboggling battle of Passchendaele. One hundred years ago, in the muddy fields around this tiny village, in the span of just over three months, close to half a million soldiers (just try to imagine such a number) were killed or wounded. War is, in essence, about control. The aggressor wants to gain control over the resources, the land and the people of the occupied country. The other side wants to defend their control and regain what had already been lost. Hundreds of thousands paid the ultimate price in that battle for control.

One recent political event is very overtly about control. In the referendum on the UK’s membership of the EU, the most prominent slogan of Vote Leave was ‘take back control’.

More than a year after the referendum, the discussion gravitates towards the economic consequences of Brexit. There is little clarity about exactly what Brexit will mean, to the growing unease of the British business community. But few doubt that, in the short term at least, it will bring with it a significant economic cost.

That too is in a very real sense the price some people are willing to pay to gain control.


They paid for our control with their lives (image: Stephen Curry/CC)

In a paper published last month, Sebastian Bobadilla-Suarez, Cass Sunstein and Tali Sharot explore this intrinsic value of choice. A fully rational approach would simply consider whether the payoff is likely to be higher with, or without, delegation to someone else. But other elements immediately enter the thought process: how much effort does it take to make the decision? Would we enjoy the reward of a good choice, or suffer the pain of a bad one, more or less if we had delegated it?

Good counsel is worth (more than) money

The researchers wanted to test whether people would make a sacrifice to be the choosers, rather than to delegate a choice in the face of potential losses and gains. The task they set in the experiment was straightforward. Participants were repeatedly presented with a pair of shapes, one of which was ‘better’ than the other. In one version of the task, making the right choice resulted in a gain of £10 and the wrong choice in no gain. In the other version, the right choice produced no loss, and the wrong choice a loss of £10.

In a first stage, all participants executed 60 trials in the gain frame and 60 in the loss frame on their own. This was supposedly to discover the rules behind the supremacy of one shape over another. There were, however, no rules: the game was rigged, producing random outcomes with an overall success rate of 50% for every participant.

After this learning stage, participants could delegate the choice between the two shapes to a (computerized) advisor. At every trial, before deciding whether to delegate, they were shown the historical accuracy of the advisor (between 0% and 100%), as well as the fee that would be charged (between £0 and £10) if the advisor were to select the correct shape. This allowed participants to work out the expected value of delegating. For example, if the advisor’s accuracy was 80% and the charge £2, the expected value in the gain frame would be £10 x 80% – £2 = £6. Their own expected value was, of course, £5 – by definition they got the right answer 50% of the time.

A rational participant would choose to delegate when the advisor’s expected value was higher than £5, and retain control if it was lower. The expected values for the advisor were engineered so that they were higher than £5 50% of the time, so a rational participant would delegate in half the cases.
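The rational rule above boils down to a one-line expected-value comparison. Here is a minimal sketch (my own, not the authors’ code), using the gain frame from the text:

```python
# Rational delegation rule in the gain frame, as described in the text:
# delegate only if the advisor's expected value beats your own chance-level £5.

PRIZE = 10.0           # reward for picking the correct shape
OWN_EV = 0.5 * PRIZE   # participants were right 50% of the time

def should_delegate(advisor_accuracy, fee):
    """True if delegating has a higher expected value than choosing yourself."""
    advisor_ev = advisor_accuracy * PRIZE - fee
    return advisor_ev > OWN_EV

# The worked example from the text: 80% accuracy at a £2 fee gives EV £6 > £5.
assert should_delegate(0.80, 2.0)
# A weaker advisor at the same fee (EV £4 < £5) should be refused.
assert not should_delegate(0.60, 2.0)
```

What the experiment shows is precisely that participants departed from this rule, and did so asymmetrically.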

What happened in practice? Participants delegated just 29% of the time. Even more interestingly, the ‘errors’ they made (delegating when their own performance was better than the advisor’s, and not delegating in the opposite case) were not symmetrical. The participants failed to delegate in around 43% of the cases, while they failed to retain control just 8% of the time. (There was no significant difference between the gain frame and the loss frame.)


The researchers then calculated the ‘control premium’ – the amount people were willing to sacrifice to retain control. This was £2.50 in the gain frame, and £3.53 in the loss frame. Compare this with the £5 expected value of choosing yourself, and it is easy to see the significance: we are prepared to give up 50% or even 70% of the likely outcome of a decision in order to be able to make it ourselves.

Back to reality

Before we project the findings of this study onto the real world, it is important to recognize a significant constraint. The participants made their sacrifices out of future gains – none of them gave up any of their own money in the experiment. Furthermore, even those who ended up with a negative balance at the end were paid a positive amount in compensation.

So what about Brexit? As the actual financial and economic costs of ‘taking back control’ become clear over the next year or so, the trade-off between control and sacrifice will also become more apparent. And here, that sacrifice will not come out of future gains: there is a real chance that people will be worse off than they are now.


Source: YouGov survey, August 20, 2016 – via Eric Kaufmann

Not long after the Brexit referendum a survey by Eric Kaufmann at the London School of Economics revealed a distinctly modest willingness to pay more than a relatively small amount of money to ‘take back control’.

When people realize what the actual price tag is going to be, and how much control will really be taken back, will they still think it is a good bargain?

Time will tell whether the lab and the field experiment tell us the same story…


What is your reputation worth?

(featured image credit: Evanherk/Wikimedia)

We are very protective of how others see us. How protective, and why?

Imagine you said something that you never intended to be or sound racist, but that could easily be interpreted to be blatantly so. You discover someone is willing to share that information within your community: your friends, your family, your colleagues, your neighbours and so on. How would you feel about that? Could you persuade everyone that whatever is being claimed you said was taken out of context, and that you are not in the least racist? Or would you do whatever it took to stop this message getting out in the open? What sacrifice would you make to safeguard your reputation?

Disreputable business

Reputations certainly count for businesses. A famous case from 1990 was the recall of 160 million bottles of Perrier mineral water, after excessive (but by no means harmful) levels of benzene were found in some of the product. The recall and the PR around it reportedly cost the company $250 million. Despite this, Perrier’s market share tumbled from 15% to 9% in the US, and from 49% to 30% in the UK, and five years later its sales were still only half of what they had been at their 1989 peak. Part of the problem was that the episode revealed that Perrier water was not actually naturally sparkling – the CO2 gas was (and is) extracted separately from the source and added to the water in the bottling plant. ‘Deception’, the consumer thought.

More recently, there is the scandal in the automobile industry. In 2015, Volkswagen was discovered to have equipped 11 million of its cars with a ‘defeat device’ to fool emissions testing equipment. And just this week Audi (which is part of the VW group) and Mercedes have joined the dieselgate ranks. The share prices of both companies (which are also implicated in a cartel investigation) have taken a battering. That is entirely due to the reputational damage, which is expected to be reflected in sales figures. People don’t want to be seen driving cars that are cheating society.


That’s what our reputation looks like (image: terimakasih0)

What about our personal reputations? They are pretty important for our social and professional functioning (just think what finding a new job will be like for the teacher who was alleged to have had sex with a pupil in the toilet of an aeroplane). But a good reputation was essential long before there were aeroplanes. For our distant ancestors, losing it might have meant banishment from the tribe, and surviving on your own was not easy back then.

Serious about our reputation

How can we tell how important we find our reputation? Businesses can estimate the effect of a dented reputation on their revenue, but how do you go about it as an individual? This is something Andrew Vonasch, a psychologist at the University of North Carolina at Chapel Hill, and colleagues set out to investigate across several studies.

They found that reputation is a good that transcends nations and cultures. In a survey that ran in 100 countries, more than 150,000 participants rated moral reputation as more important than any non-moral value other than physical security, and well above being rich or having an exciting life.

Our reputation matters, and we’re willing to step up to defend it. In a range of hypothetical settings, the researchers observed that participants, given a choice between…

  • … a year in jail with a clean reputation afterwards, and perpetually being seen as a criminal by their community, 40% of people chose to go to prison
  • … going through life with a prominent swastika tattoo (without being able to explain to everyone taking you for a neo-Nazi why you have it), and having your dominant hand amputated, 70% chose to lose their hand
  • … immediate death, and living to be 90 but with a reputation as a paedophile (that you would be unable to disprove), 53% chose to die at once. (The average age of the participants was 35, so this represented a significant decision.)

Now you could object that this is not entirely realistic. Researchers do indeed have a problem: they cannot wreck a poor participant’s reputation for real, still less cause them permanent physical harm. Nevertheless, the tendencies are revealing, and suggest that there is a rational trade-off being considered here. And real life, outside the safe environment of a psychology lab, sadly also has far too many examples of people committing suicide as a result of a tarnished reputation (deserved or not).

The worm turns

But the researchers came up with an ingenious experiment that would take them closer to reality. 123 undergraduate students were told the university was running an investigation into implicit racism on the campus. This would involve administering the Implicit Association Test and then disseminating the results to all members of the university, listing the names of the students, with the highest scoring ones first. The participants carried out the IAT, but were given rigged test results: either 31 (“not very racist”) or 97 (“extremely racist”).


Thank goodness for these worms! (source: authors’ paper)

At that moment, the participants were given the option to carry out another task: if they submerged their hand in a container filled with superworms for one minute, their results would be removed from the email. 30.4% of participants in the high-racism condition opted for this task, against 3.9% in the low-racism condition. The authors hypothesize that many participants doubted their test results would actually be shared, and that in a real situation even more of them would have opted for a close encounter with the worms. The experiment was repeated with the worms swapped for a tank of ice water. Here 62.8% of participants in the high-racism condition opted to endure pain to avoid reputational damage, against 8.9% in the low-racism condition.

The value of reputation is not only visible in what we are prepared to do to prevent disturbing information from becoming widely known. We also seek to send positive reputational signals. Users of auction sites like eBay rely on feedback from those with whom they have done business, and sellers in particular are anxious to resolve any problems lest a buyer leave a negative mark. On social media sites such as Facebook and Twitter, the number of friends or followers and the number of ‘likes’ are highly salient, quantitative signals of a user’s reputation. And bloggers too seek likes or recommendations, or publicize the number of ‘shares’ of their posts.

This is by no means a recent phenomenon. For centuries, Christian church services have featured collections of alms. It is impossible to pinpoint the exact motivation of the members of the congregation for placing their coins in the basket or on the plate, but the ritual’s highly public nature is surely a clever trick by those in charge to nudge churchgoers into contributing.

We humans are, inherently, traders. We trade goods and services for money, and we trade our time with others and with ourselves.  Our willingness to make sacrifices to preserve our reputation – either by seeking to enhance it, or by defending it when it’s under threat – confirms that like most other things, our reputation has a price too. What is yours?




Make it hard

(featured image credit: Skitterphoto)

Nudging is about ‘making it easy’, but sometimes difficult can be better

We are very much creatures of habit. Contrary to what neoclassical economists would have us believe, most of us are also generally not utility maximizers but satisficers, content with what is good enough. Put these two observations together, and it’s not hard to see the origins of our status quo bias. Satisficing behaviour helps us avoid regret and hence contributes to our general sense of wellbeing, but it is hardly a strong motivation for changing our lives for the better.

Take a look at your kitchen. Many tasks require moving things between the three most important items: the refrigerator, the sink and the cooker. A well laid out kitchen is supposed to rely on the so-called work triangle to minimize the number of wasted steps. But how important is that really? Most of us would barely notice the waste of one or two more steps. So we’re not too bothered if kitchen design pays more attention to aesthetics than to efficiency.

At least as long as we’re fully mobile… and that can change quickly. A few weeks ago my dad, a sprightly 89-year-old dude, slipped on a patch of dewy grass early in the morning and broke his ankle. The fracture means that he should not use his left foot at all, not even to lean on. After a few days in hospital he returned home, and with the aid of two rollators (one upstairs and one on the ground floor) he manages to get about. But covering even the distance of a single step takes a lot more time and effort than it did without several kilos of plaster on his foot. A wasteful step has suddenly become rather costly.

During one of our daily calls he related how he has worked out the most efficient routine for making his breakfast of porridge or fried eggs. This not only involves positioning the rollator at a precise locus from where the fixed points of fridge, worktop and gas hob can be reached within one step, but also ensuring the ingredients, pots, pans, utensils and plates are within easy reach.


Plaster on the leg = nudge for the brain

Now this is unlikely to be of great use to him when, in a few weeks’ time, the plaster is removed and he can fully use his ankle again. But the experience shows the ingenuity of the human mind when the balance of costs and benefits is suddenly changed.

Being aware of wasteful behaviour is not necessarily enough to change it, especially if it is not us directly experiencing the cost of the waste. It’s much easier to pick up a bunch of free bags at the supermarket every week than to bring our own. But introduce a charge per bag, even one so small that it pales into insignificance compared to the cost of a weekly shop, and hey presto, our behaviour changes.

Tongue twisters and budget cuts

There are other interesting examples of how making it harder can help us understand things that we’d otherwise be blind to. Non-native English speaking readers will know very well how hard it can be to express your thoughts accurately in English, and how struggling to find the right words affects your confidence and sense of competence. (Of course, the same is true for other languages as well.)

Many native English speakers have never experienced this (if only because they simply do not speak a foreign language). That means they might find it hard to empathize with colleagues, clients, or people they meet socially who don’t speak English fluently. A simulation game I have used many times in multi-lingual settings tries to address this by making speaking harder for native speakers. The principle of Redundancía is that you need to explain something, or tell a story or anecdote in your native language, with one twist: for every noun or verb (except to be or to have), you need to add a synonym. The principle/idea of Redundancía is that you need/have to explain/clarify something or tell/narrate a story/tale or anecdote/sketch in your native language/tongue, with one twist/variation – you get the idea. Just try it out for yourself and see how clumsy it makes you feel. Perhaps next time you are in conversation with a non-native speaker, you’ll have a better understanding of the challenge they face speaking in your, rather than their language.


Nice colours/tints you have there! (Image: splongo)

Another example comes from a now-retired client who used to be the Chief Technology Officer of a large multinational company. It was not uncommon for development project leaders to judge that the resources – money, manpower, or both – were insufficient to bring a project to a successful conclusion. The obvious thing for them to do was to come and ask him for more people or for a budget increase. And invariably, he would listen carefully to their argument, and thoughtfully reply that he agreed that the resources were not in line with the needs of the project.

The twist came when he clarified that he thought they were excessive – and then promptly cut the overall budget by something like 5%. It gave him much pleasure to relate how the project statistics confirmed that these projects tended to deliver better results. They were less likely to overrun, and the products that came out of them were more commercially successful. It was hardly a scientific experiment, but I have more than a bit of sympathy for his conclusion that making things harder for his engineers and scientists forced them to be more creative and inventive.

The paradox of difficulty

Just as water follows the path of least resistance from the mountain top to the sea, we naturally seek out the easy life. That congenital laziness can be a motivation to come up with clever ideas that allow us to achieve more with less effort: our houses and factories are full of labour-saving devices. But at the same time, it is when we face tough challenges that we are at our best in using our imagination and resourcefulness.

My father’s predicament suggests how a designer breaking an ankle could exploit her misfortune to experience what a difference a more efficient kitchen makes when it really matters. More realistically, she might not wait for such an event, but have a plaster cast fitted to a fully intact leg. Making things harder in that way would provide a direct insight into the challenges of a disabled person in the kitchen – without the pain, but no less dramatically.

In any case, it teaches one additional lesson: no matter how old they are, always be prepared to learn something from your parents.

Posted in Behavioural economics, Cognitive biases and fallacies | Tagged | Leave a comment

Belief without value

Our beliefs often stand in the way of better decisions – because we value them too much

The great economist John Maynard Keynes reportedly once said, ‘When my information changes, I alter my conclusions. What do you do, Sir?’ As so often with such powerful quotes, there are no reliable sources that confirm Keynes really said this. But that doesn’t detract from the profound wisdom in the guidance it offers.

It summarizes very well the basis of the scientific method (not just for the dismal science, for that matter). When a theory no longer fits the observed facts, it is discarded or adapted. Research takes place through hypotheses, which are accepted or rejected. Researchers are agnostic, and don’t care which hypothesis is right or wrong.

Bad science

Unfortunately this is not always the case. Sometimes researchers are motivated by other matters than the pursuit of the truth. The chance that a study is published is a lot higher if it describes a positive result. That leads to publication bias (the world rarely hears about a study that finds absence of an effect), and p-hacking, the massaging of the data to come to a statistically significant result.

Even ideology can be involved in the manipulation of scientific research. Some people are convinced that living near electrical power lines is the cause of serious health problems. So when, in 1992, a Swedish study by Maria Feychting and Anders Ahlbom of the Karolinska Institute found that the incidence of leukaemia among children living within 300m of such a distribution line was four times higher than normal, this was grist to the mill of campaigners.


Hello, leukaemia? (image: Oran Viriynici)

The American PBS show Frontline investigated the phenomenon in the 1995 episode Currents of Fear (the transcript is no longer available on the Frontline site but, as so often, an archived copy comes to the rescue!). The study in question appeared to have fallen foul of the so-called multiple comparisons fallacy. Eh? Well, the Swedish researchers did not just look for a link between proximity to overhead power lines and leukaemia; they simultaneously investigated no fewer than some 800 different risks.

Does that make a difference? Certainly. Consider this example: imagine I want to pay you with a coin, but you suspect it is fake. If a fair coin is tossed 10 times, the chance that it turns up heads at least 9 times is quite small (1.07%). Let’s assume that if this happens with a given coin, it is most likely a fake, so you can use that as a test.

But what if my debt were 100 coins, and you suspect there are fake coins in the purse… would the approach for an individual coin be sensible for a whole bag? The chance that a fair coin turns up heads fewer than 9 times is nearly 99% (100% − 1.07% = 98.93%). The probability that two fair coins both turn up heads fewer than 9 times is a little smaller: 98.93% × 98.93% = 97.86%. For three coins it is a bit smaller still (96.81%), and so it goes on, until for 100 coins it is just under 34%.

This means that the chance that at least one of the coins turns up heads at least nine times is 66% – nearly 2/3.
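These figures are easy to verify; a quick sketch in Python (just the binomial arithmetic, nothing more) reproduces the numbers above:

```python
from math import comb

# Chance that a single fair coin shows at least 9 heads in 10 tosses
p_suspicious = sum(comb(10, k) for k in (9, 10)) / 2**10
print(round(p_suspicious * 100, 2))   # 1.07 (percent)

# Chance that NONE of 100 fair coins shows at least 9 heads
p_none = (1 - p_suspicious) ** 100
print(round(p_none * 100))            # 34 (percent, just under)

# So at least one 'suspicious' coin turns up with probability...
print(round((1 - p_none) * 100))      # 66 (percent) – nearly 2/3
```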

It may then look as if that one coin is a fake, but that is only because we are testing so many coins at the same time. We never indicated upfront which of the coins would be fake. The fact that one coin produces an unusual result is, well, pretty unremarkable, and certainly not proof that I am out to deceive you.

The same happens when you investigate the correlation between living near overhead power lines and 800 different conditions. Even if there is nothing to worry about, a handful out of that set of 800 conditions might produce an elevated frequency. (This phenomenon is sometimes called the Texas Sharpshooter Fallacy, referring to someone who first fires at random at an old barn, and then draws the target around the bullet hole.)
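The same arithmetic can be illustrated with a small simulation (a sketch only: the 1% false-positive threshold and the assumption of 800 independent tests are mine, purely for demonstration). It shows how often at least one ‘elevated’ result appears even when there is truly nothing going on:

```python
import random

random.seed(1)
ALPHA = 0.01       # illustrative per-condition false-positive rate
CONDITIONS = 800   # conditions investigated simultaneously
RUNS = 1000        # simulated repetitions of the whole study

# Under the null hypothesis, each condition still 'lights up' with
# probability ALPHA; count the runs in which at least one condition does.
flagged = sum(
    any(random.random() < ALPHA for _ in range(CONDITIONS))
    for _ in range(RUNS)
)
print(flagged / RUNS)  # close to 1 - (1 - ALPHA)**800 ≈ 0.9997
```

With 800 tests at a 1% threshold you should expect around eight false positives (800 × 0.01) from chance alone – so finding ‘a handful’ of elevated frequencies is exactly what an effect-free world predicts.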

So there was nothing going on in Sweden. But several groups, with good (or not so good) intentions, maintained that something fishy was going on, finding a fertile breeding ground among people who already believed there was something sinister about these power lines.

A thin line between belief and confirmation bias

Confirmation bias is indeed often the culprit in situations like this: we look for information that supports what we believe, rather than for facts that might falsify it. Worse still, when we are confronted with information that should make us question our beliefs, we double down. This is why beliefs like the one that Barack Obama was not born in the USA and is a Muslim, or that EU membership is costing the UK £350M per week have proved impossible to eradicate. In 2005 and 2006, Brendan Nyhan and Jason Reifler studied the tendency of people to reject arguments and evidence that contradicts their beliefs. In several cases, they found that corrections actually reinforced the misperceptions people held, especially among those who were the most committed.

This is known as the backfire effect. A 2016 study by neuroscientists Jonas Kaplan, Sarah Gimbel and Sam Harris placed subjects in an MRI scanner and confronted them with arguments that challenged their strongly held beliefs, political and otherwise. They found that the brain activity they observed was “signaling threats to deeply held beliefs in the same way they might signal threats to physical safety.” As Gimbel and Kaplan explain in an episode of the You Are Not So Smart podcast, the more a subject resisted changing their mind, the stronger the activity was in two areas: the amygdala and the insular cortex. “We find that the response in the brain is very similar to what would happen if you were walking in the forest and came across a bear.”

Of course we cannot switch off this response to threats: it is in our interest to engage protective mechanisms when we are under physical threat from a bear, a mugger or an out-of-control trolley. But we do have a choice in how deeply we hold a belief.

It is the strength of the attachment to a belief that makes us hold on to it, even in the face of unquestionable evidence to the contrary. If our belief is strong enough, we resist changing our mind as much as we would resist having an arm eaten by a wild animal.

Scientists ideally approach a subject from a neutral viewpoint, without emotional pay-off for either the confirmation or the falsification of a hypothesis. The true utility of research is to discover the truth, not to be right. But we ordinary people also sometimes have to choose between getting wiser, and hanging on to an existing belief. If we really want to pursue wisdom, then we must leave behind the idea that giving up a belief comes at an emotional cost. We have to accept that it is a good thing to lose one’s beliefs.

For belief in itself has no value. Only a belief that is not contradicted by the facts can be valuable.

What about the question of whether Keynes really spoke the words at the beginning of this article? Dyed-in-the-wool Keynesians might like to believe so, because the quote fits well with his character. But anyone doing so really ought to be prepared to give up that belief. Even if, one day, someone came up with unequivocal proof that it was actually said by his rival, that other great 20th-century economist, Friedrich von Hayek.


Posted in Behavioural economics, Cognitive biases and fallacies, Emotions, politics | Tagged | Leave a comment

Fairness or efficiency?

(featured image: NeuPaddy)

Yet another instance of the tension between our emotions and economic logic

Imagine a clothing retailer who charges more for an item in a larger size than for an almost identical one in a more mainstream size. While you’re at it, imagine a supermarket increasing the price of food during rush hour, and for good measure, also an energy provider with more expensive electricity at peak times.

You’d be imagining the kind of practices that quickly attract the label of unfairness. In the summer of last year, fashion retailer ASOS was found to be selling a dress in size 18 for £10 more than a very similar one in sizes 6-16. The reactions at the time were predictable. A year on, it featured in the BBC consumer show You and Yours earlier this week, suggesting that the outcry over this kind of ‘fat tax’ has not quite died down.

Unfairness everywhere

A Mail Online article last week headlined: “Supermarkets could use ‘e-pricing’ to jack up the cost of food during lunchtime and after-work rushes”. It supports its prediction that “a shift towards surge pricing is extremely likely” with quotes from a consumer psychologist and a retail strategist. It’s not quite happening yet, but the comments below the article suggest that it would surely not be seen as a fair move. (Uber knows all about the questionable popularity of surge pricing.)


Oversize, overprice (image: sumonpcs)

And the introduction of smart energy meters in the UK enables suppliers to introduce ‘Time of Use’ pricing. This goes well beyond the conventional dual-tariff system, which charges less for electricity used during the night (when demand is low). Greenenergy UK was the first to introduce such a flexible tariff, with a night time rate of 4.99p/kWh between 11pm and 6am, and a standard rate of 12p during the day, but with a peak tariff of 25p between 4pm and 7pm on weekdays. As a paper on so-called cost-reflective pricing by Elisabeth Hobman and colleagues states: “there remains a common public perception it is harmful and unfair. In particular, variable pricing is thought to harm ‘vulnerable groups’, specifically, those segments of the community with limited capacity to reduce energy usage during peak times, e.g., low-income households, those with disabilities or medical/health issues, shift workers, and young families with many children.”
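To see what such a tariff means in practice, here is a back-of-the-envelope sketch (in Python; the rates are those quoted above, but the household’s consumption split is an invented illustration):

```python
# Time-of-Use rates in p/kWh, per the tariff described above
NIGHT, STANDARD, PEAK = 4.99, 12.0, 25.0

def weekday_cost(night_kwh, standard_kwh, peak_kwh):
    """Cost in pence of one weekday's consumption under the ToU tariff."""
    return night_kwh * NIGHT + standard_kwh * STANDARD + peak_kwh * PEAK

# A hypothetical household using 10 kWh a day...
cooking_at_peak = weekday_cost(night_kwh=2, standard_kwh=5, peak_kwh=3)
load_shifted = weekday_cost(night_kwh=4, standard_kwh=5, peak_kwh=1)
print(round(cooking_at_peak), round(load_shifted))  # 145 105
```

Shifting just 2 kWh from the 4pm-7pm peak to the night rate saves around 40p a day – the price signal in action. It also makes clear why households that cannot shift their usage end up paying more.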

Yet the importance of fairness in economic transactions is not a new topic. In Fairness as a Constraint on Profit Seeking: Entitlements in the Market, a paper from 1986, Daniel Kahneman, Jack Knetsch and Richard Thaler describe a classic lab experiment. They read the following scenario to 107 participants: “A hardware store has been selling snow shovels for $15. The morning after a large snowstorm, the store raises the price to $20.” When they asked the participants to judge the action of the store owners, 82% thought that it was unfair or very unfair.

Emotionless economics

Conventional economic theory does not take into account such emotional judgements, focusing instead on efficient allocation of resources. By raising the price of what has become a scarce good, the hardware store owner ensures that the shovels will be bought by those to whom they are most valuable. In other words, the shovels should end up where they provide maximum utility.

Economic efficiency is of course also behind differential pricing for energy. The technical constraints of the way energy is generated mean that it is tricky and costly to ensure supply always matches demand. Peaks and troughs are therefore best avoided, and a dynamic price is a simple and effective way of signalling this to consumers. They can then choose to adapt their consumption accordingly. The same applies to (currently still hypothetical) surge pricing in supermarkets. The need for extra staff during rush hour is expensive. If this cost is not passed on to the peak-time consumer, it raises the overall cost for everyone. So here too the price signal may work as an incentive for consumers to consider shopping at a less busy time.

The argument for more expensive ‘plus’ size clothes is slightly different. Larger items may require a bit more material, but that justification is hard to maintain if the price for sizes 6 and 16 is the same. More plausible is the explanation that demand for outlier sizes is considerably less than for mainstream ones, and therefore also more volatile. This volatility represents a risk for the retailer. The clothes rails at the end-of-season sales tell the story: they tend to contain mostly the less common sizes. As the proportion of unsold items that now need to be discounted is higher, one way to sustain overall profitability is to sell them at a higher normal price. Failing to do so would mean that the other prices would need to be higher in compensation.

Perception versus reality

But we are very sensitive to unfairness – even if it is perceived rather than real. We resist being taken advantage of. We don’t care that there are good economic arguments for surge pricing – we think it is not just unfair, but unethical and unconstitutional.

But we can be a bit selective with our insistence on fairness. We may be happy to demand from businesses that they take a financial hit to treat us and others more fairly, but if it is within our own power to do so, we seem less keen to put our money where our mouth is. For example, the proportion of fair-trade coffee sold in the UK is 20% (just 4% for instant coffee). At the same time, we seem pretty content with dearer train tickets at peak times, and with having to pay more for a cinema ticket on a Saturday night than on a Tuesday afternoon or for a holiday during the school breaks.


How much for that mark?

Maybe the conflict between the real world, with our messy emotions, and that of economic theory, where only the logic of resource allocation counts, is largely a matter of perception. Loss aversion would explain a lot: having to pay more for something that we see as essentially the same as a cheaper option feels like an unjustifiable loss.

If that is the case, it is remarkable indeed that suppliers are not always and everywhere framing the highest prices as the standard, and the lower prices as discounts. A nice story shows how reframing a price when people think they’re being taken for a ride can make them change their perception:


True, the myth-busting site Snopes considers it a legend, but legends often contain important and enduring lessons. Maybe, with the right framing, we may yet see the fairness in market transactions.


Posted in Behavioural economics, Cognitive biases and fallacies, Economics, Emotions | Tagged , | Leave a comment