Ethical liars

(featured image: Will Temple/Flickr CC BY 2.0)

Our morality is not quite so black-and-white as we sometimes like to believe

One of my earliest memories – I must have been four or five – is of the day my mother, to her embarrassment, forgot her hairdresser’s appointment. As she rang to make a new one, she made up an excuse for missing it, involving a wholly fictitious sudden illness of my baby sister. “Isn’t that lying?”, I asked her as she put the phone down. That was the moment my moral compass acquired the notion of the white lie.

I like to think that I have not abused the concept in the many years since that time, but I would be lying if I said I have never used it. (Strangely, I cannot quite recall any particular instance at this very moment, though that may well be a case of motivated amnesia.) Nevertheless, the lesson my parents and other educators tried to instil in my youthful person – that lying is wrong – still dominates my sense of morality, and that view is one I most likely share with the vast majority of my fellow humans throughout the ages, regardless of their nationality or culture.

Lying is wrong, except when it isn’t

It is not hard to see why lying seems to be universally condemned. A community in which lying would be the norm, or more precisely, in which adherence to the truth was optional (so you would never know whether someone was truthful or not) would have a hard time functioning effectively. Trust would be non-existent, and cooperation or long-term plans (for which one needs to be able to rely on promises others make) would be almost impossible. The evolutionary imperative to be trustworthy, and therefore to be truthful, is as obvious as it is strong.

There are bigger ones (image: .Martin./Flickr CC BY ND 2.0)

But at the same time, it is odd that we seem unable to apply this consistently. Psychologist Bella DePaulo found that it is not untypical for people to clock up a couple of lies every single day. These range from white lies like my mother’s and ambiguities like “No, your bum doesn’t look big in this (…well, it depends what you call big, I’ve seen bigger)”, via technically-not-a-lie statements like “Yes, I did go to Harvard (…but as a tourist)” and being economical with the truth, all the way to full-blown lies (“I did not have sexual relations with that woman”). In almost all situations where we – or ‘someone’, let’s not get too personal – resort to deception, there is some kind of justification: extenuating circumstances that take the sharp edges off, and soothe the cognitive dissonance that would otherwise be experienced when breaking one’s moral code. Sometimes this relates to what is being said or implied (perhaps the untruth is not that far from the truth, or it could very well have been true). Sometimes it also relates to the result (another person’s welfare is enhanced or safeguarded).

Unnecessary harm

It seems there is some system to this second justification. Despite broadly condemning deception as a rule, people actually perceive it as ethical when it prevents unnecessary harm, according to new research by Emma Levine, a professor of behavioural science at the University of Chicago. When benevolence and honesty are in conflict, the former seems to have the upper hand, and functions as a get-out-of-jail-free card.

Levine focuses specifically on deception, and takes it out of the laboratory setting and into routine interaction, where things can be much fuzzier. For example, a liar’s motives may not always be clearly either self-serving or benevolent – is false praise inspired by concern for the target, or for the deceiver’s own reputation? The nature of the relationship (if, indeed, there is one) between the deceiver and the deceived is also likely to play a role in the judgement. Everyday lies are found to be well-reasoned, intentional acts, justified when telling the truth would have little or no instrumental value and would harm the target.

To establish how people concretize unnecessary harm in practice, she asked some of the research participants to give examples of situations in which they would prefer to be lied to (e.g., when their dog died through being hit by a negligent driver, being told it died peacefully in its sleep). Others were asked for concrete instances where they themselves would consider lying as ethical (e.g., temporarily withholding distressing information from a friend, to avoid compromising her result on an imminent exam).

Subjective or trivial information? (image via YouTube)

From this input, the author distilled eight distinct community standards over three dimensions (the deceived person, the subject and the context). People consider it acceptable to lie to people who are emotionally fragile, do not have the capacity to understand the truth, or are at the end of their life. Lying is considered as ethical if it concerns subjective or trivial information, or information about what cannot be controlled. And it is also OK to lie if the truth would disrupt special events or moments, or if it would embarrass someone in front of others. The avoidance of unnecessary harm is a common thread across all eight.

Levine then empirically tested the framework in two ways. Presented with prepared scenarios and vignettes, participants took different perspectives to judge the right action (as deceiver – “I should lie”, observer – “Others should lie”, and target – “Others should lie to me”). In addition, she asked participants to relate real situations in which they – perhaps reluctantly – deceived another person, and rate their actions according to the same key variables (whether telling the truth was instrumental, and whether it would cause harm). This confirmed the validity of the unnecessary harm framework.

Consequences matter

Her conclusion is that there are clear conditions under which people systematically endorse (and indeed practise) deception. In those circumstances, people consider deceiving another person as more ethical than honesty, and people prefer to be lied to over being told the truth.

While, superficially, morality may seem to be a matter of somewhat simplistic and rigid rules – like lying is wrong – our personal ethics appear to be much more nuanced and pragmatic. In practice, we are quite capable of considering the consequences of strictly applying those rules, taking into account the trade-offs involved and, where necessary, at the very least bending the rules.

But this research focused on unnecessary harm to others. What of the self-serving white lie from the beginning of this piece? Arguably the very same reasoning is at play here. The instrumental value of revealing that my mother plainly forgot her appointment was very low: it is hard to see what material benefit it would have given the hairdresser. At the same time, telling the truth rather than pretending my sister was ill would have led to uncomfortable embarrassment for her.

Perhaps, deep down, we are all utilitarians.


Important futilities

Sometimes what we do seems to be unduly influenced by what appears to be utterly futile. Is that as unwise as it seems?

Last Saturday was Luka’s 6th birthday. We had some balloons to decorate the house, and for just £1 my wife had bought a self-assembly garland with cardboard flags spelling “Happy Birthday”. But when we opened the pack, all it contained was just enough letters to make the word “pitday”. Return it to the shop and ask for a replacement?

The item was clearly not fit for purpose, but the idea of returning a faulty product bought for the futile sum of £1 felt, well, a bit petty. What if we made up the missing flags ourselves with cardboard, markers, and a pair of scissors? That sounded even crazier, so I set off to the nearby shop anyway to purchase another, hopefully complete, garland, with no mention of the faulty one. It would surely look better than anything we’d concoct ourselves, and well worth the cost of just a pound – a no-brainer, really.

Not all futilities are perceived equal

Earlier that morning I had popped into the local fishmonger’s on my way back from the baker’s shop in the next town (they sell the kind of bread for which I am quite happy to drive the extra mile). Now, the fishmonger’s street is one where you need to pay to park, as a friendly traffic warden reminded me a couple of months ago. I had always known this, of course, but that day I had, as always before, bet that it would be exceedingly unlikely for a traffic warden to turn up in the five minutes it’d take me to collect the order I’d phoned in earlier. I was, very much, the rational criminal that the late, great economist Gary Becker described in Crime and Punishment: An Economic Approach.

Thankfully, the traffic warden on duty that day was a benevolent man, who advised me to put just 5 or 10 pence (about 4-8 euro- or dollar cents) in the meter and avoid a pricey fine. At 5p for three minutes’ worth of parking, it makes sense, even for a rational criminal. Our local fishmonger’s not only supplies excellent seafood, but is also very efficient at serving customers, so since then a 10p parking fee has been the norm for me. If I see there is nobody in the shop before parking up, I can even get away with 5p. (Reader, if you are wondering why on Earth I am going on about such a futility, remember the title. Also, we are not quite done yet.)

What has this got to do with the price of fish? (image: DCRC-UWE/Flickr CC BY-NC 2.0)

Back to last Saturday. I noticed the smallest coin I had on me was 20 pence (small change is scarce these days: since the start of the pandemic everyone, me included, is paying contactless). I could not possibly contemplate paying twice as much as normal – what a waste! I was ready to start the car and drive home, to come back later on foot, when – thankfully – my more rational self woke up.

“Imagine”, it argued, “that someone offers you 1 penny a minute for wasting 20 minutes of your time (roughly the time it takes to walk to the fishmonger’s and back). Would you do it?” Even on a Saturday morning, or perhaps especially on a Saturday morning, that felt like a gross underpayment for my precious time, so no, I wouldn’t do it. “So, it must be worth at least 20p to avoid wasting 20 minutes then, yes?”, my rational self suggested – but I had already got out of the car with my 20p coin, on my way to feed the parking meter.
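My rational self’s argument boils down to a one-line opportunity-cost comparison. A minimal sketch, using only the numbers from the anecdote (the helper function and its name are my own, purely for illustration):

```python
# Overpaying the meter by 10p (20p instead of the usual 10p) versus
# a ~20-minute round trip on foot to come back later.
def worth_paying_to_save_time(extra_cost_pence, minutes_saved,
                              value_of_time_pence_per_minute):
    """True if the extra fee costs less than the time it saves is worth."""
    return extra_cost_pence <= minutes_saved * value_of_time_pence_per_minute

# Valuing time at a mere 1p per minute already settles the question:
print(worth_paying_to_save_time(10, 20, 1))  # True
```

Even the grossly underpriced penny-a-minute valuation makes the 10p surcharge a bargain.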

These two anecdotes illustrate two apparently rather different perspectives on relatively insignificant amounts of money. In the first one, a materialist course of action (getting a refund for a faulty purchase) was rejected for one that took into account emotional motives (not appearing miserly). In the second one the opposite happened: the initial emotional response (I am not paying twice the amount needed!) was displaced by a more reasoned one (make a good time/money trade-off). How can we (or at least I) be so inconsistent?

Ruled by rules and emotions

Perhaps it is not so inconsistent after all. The idea that we act either rationally, or irrationally – perpetuated by a simplistic interpretation of Daniel Kahneman’s bestseller Thinking, Fast and Slow – does not fit reality. Many of our decisions are driven by diverse motives, even if they are made with little or no deliberate, explicit reasoning.

All but the most trivial trade-offs are complex, and would require comparing not just apples with proverbial pears, but with cherries, bananas, celery, yoghurt and washing up liquid. There is no way we can make such comparisons in a consistently analytical manner. Instead, we mainly use rules of thumb that we are taught, that we learn, or that we develop ourselves – sometimes called heuristics.

The first anecdote features at least two such rules. One is “if you are sold a piece of junk, ask for redress”, the other is “if you demand that someone pays you a trifling amount of money, even if it is legitimately owed, you come across as a miser”. Arguably there is even a third rule at play: “if it makes little difference either way, take the course of least resistance/cost/inconvenience etc.”.

In the second anecdote, “If you have to pay more for something than it is worth, walk away” or “Don’t pay more than you need to” would be a good stab at the first one. A second rule might be something like “If you are about to act impulsively, consider the consequences first”.

Many more rules could be playing a role here, for example, “Do what you have always done in situations like this”, “Do what others would do”, “Don’t waste time”, or “Ignore small amounts”. Naturally, your rules may (and probably will) be different from mine – and even if some are the same, we will give different weights to them.

No decision without emotion (image: Wayhomestudio/Freepik)

But some of these simple rules inevitably contradict each other, like “demand a refund” and “don’t come across as a miser” in the first situation. To solve that kind of conflict and get to a decision, our emotions get involved. Antonio Damasio, a Portuguese-American neuroscientist, established that superior intellect is not sufficient to make good decisions. After surgery to remove a brain tumour, one of his patients, pseudonymously named Elliot, had no apparent impairment to his cognitive abilities (his IQ continued to be in the top 3%). Nevertheless, he appeared incapable of making the simplest decisions, such as how to categorize his documents at work – by date, by size, by topic? What Elliot lacked was not intelligence, but emotion: the tumour and its removal had led to permanent damage to his frontal lobe – a part of the brain that is highly important in our emotional processing. Damasio realized that, without emotion, Elliot was no longer able to determine which option was better (or worse) when faced with a choice, and had become profoundly indecisive.

Each rule triggers either positive or negative emotions to some degree, and our amazing brain manages to integrate all this and come up with the option that feels the best. In one situation, that can be the more materialist option, in another it might be the less materialist one. Homo economicus fans may well consider the former as the superior solution in all circumstances, but ultimately it is our emotions that decide how important futilities are to us.

(I have since decided to adopt another mental accounting rule: “treat small amounts as part of a larger one”. When I consider the parking fee as part of the price I pay for the fish – 10 pence is less than 1% of my typical order – it gets lost in the noise. Problem solved!)

Epilogue

And sometimes the right decision unexpectedly turns out to be very much the right decision. As I tried to verify that the “Happy Birthday” garlands in the shop actually contained all the necessary flags, I began to suspect there was in fact nothing wrong with the one at home. I did buy a “replacement”, but as I got back, a quick inspection revealed that my hypothesis had been correct. Had I tried to return the original garland, coming across as a cheapskate would have been the least of my worries as the shop assistant pulled apart the flags that were stuck together to show me that the pack was indeed complete. I would have saved myself £1, but at what emotional cost…


The relativity of mysterious overconfidence

(featured image: kues1/Freepik)

Both doubt and confidence have benefits, but is there anything worthwhile in overconfidence?

It is the season of party conferences in Britain. Over the years I have lived here, my attitude towards them has evolved from interest to irritation, and on to indifference. What has been a constant throughout, though, is the overconfidence the political leaders exhibit at a conference: not a hint of doubt that the proposed policies will be effective, serve everyone and lead to electoral victory.

The conundrum of persistent bad decision-making

Of course, overconfidence is not something that just politicians exhibit. We are all occasionally guilty of being too certain for our own good, discovering later on that the viewpoint we thought was unassailable or the watertight course of action we undertook were not quite that unassailable or watertight as we thought. Overconfidence is a cognitive bias – one of the most pernicious ones, argues Nobel laureate and éminence grise of the behavioural sciences Daniel Kahneman. It is the one, given a magic wand, he would eliminate first. But he entertains no illusions: in the same article, he admits it “is built so deeply into the structure of the mind that you couldn’t change it without changing many other things”. One problem is that overconfidence is widely rewarded: given a choice, audiences generally prefer pundits and commentators who self-assuredly proclaim to know what is going on and what will happen, to colleagues who express more nuanced viewpoints. And naturally the same holds for politicians: you get more votes by being 110% certain (and coming across that way) than by admitting that you’re not entirely sure and that you could be wrong.

“You don’t need to be overconfident to be a polit… actually, you do” (image via YouTube)

Nevertheless, doubt – in moderation at least – is a useful cognitive state, in which we have not (or not yet) decided between belief and disbelief. It stops us jumping to conclusions, prevents us picking the first answer to a question or the first solution to a problem that comes to mind, and encourages us to consider multiple potential options, so we can make better decisions. As we make up our mind, we develop confidence in our conclusion, and that triggers our own internal reward system – no audience needed! Research by neuroscientist Pascal Molenberghs and colleagues found that the more confident people are, the stronger the activity in brain areas associated with reward processing, like the striatum. And that is useful too: once we have a solution or an answer that we feel sufficiently confident about, the reward tells us we can move on.

The trouble is that this can tempt us to imagine we are more confident than we ought to be, just to experience a stronger reward. Ever more confidence can easily lead to overconfidence, and poor decision-making. But if that is so, how come such a maladaptive characteristic has not evolved away, and is instead so ubiquitous? Of course, not all poor decisions are fatal, and it is quite possible to overconfidently survive until a ripe old age. But presumably, at some point we ought to realize that this overconfidence is serving us badly, and learn our lesson. So why don’t we?

Alice Soldà, an economist at Heidelberg University, and colleagues, investigated this apparent mystery. In particular, they focused on whether there are benefits to overconfidence that might outweigh the costs after all.

Central to their study was an experiment in which participants were paired up, and individually answered multiple choice general knowledge questions. Each correct answer earned money for a joint account, which they would be able to divide between them at the end of the experiment. After completing the questionnaire, they were asked to state how many answers they believed they got correct, and to indicate on a scale of 0-100% how likely they thought it was that they had done better than their partner.

Next, the participants were given feedback on their actual performance, but in order to allow participants to develop overconfidence, some uncertainty was introduced. Those who had outperformed their partner received either a green signal with a probability of 75%, or a red signal with a probability of 25%. In the opposite case, this was reversed, with red more likely than green. After receiving this feedback, participants indicated once again the likelihood they had performed better than their partner.
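Under this design, a perfectly rational (Bayesian) participant would update their belief in a precisely defined way, which is what makes any excess confidence measurable against a benchmark. A minimal sketch of that benchmark update (the function is my own illustration, not code from the study):

```python
def posterior_outperformed(prior, signal):
    """Bayesian benchmark for the experiment's noisy feedback: a green
    signal arrives with probability 0.75 if you really outperformed your
    partner and 0.25 if you didn't (and vice versa for red)."""
    p_if_better = 0.75 if signal == "green" else 0.25
    p_if_worse = 1.0 - p_if_better
    joint_better = p_if_better * prior
    return joint_better / (joint_better + p_if_worse * (1.0 - prior))

# A participant who starts at 50/50 should land at 75% after a green
# signal and 25% after a red one; reported beliefs persistently above
# this benchmark would indicate overconfidence.
print(posterior_outperformed(0.5, "green"))  # 0.75
print(posterior_outperformed(0.5, "red"))    # 0.25
```

The deliberate 25% noise is what gives participants the wiggle room to discount an unwelcome red signal.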

Negotiating for a worse outcome… or is it?

Then the bargaining started. Initially, the participants were told that the money earned jointly had been split into two unequal parts: 70% and 30% of the total, and their task was to agree with their partner who would receive which share. To this end, they had to write a message to their partner in which they justified their choice. If the participants chose different parts (and hence immediately agreed who was the better performer), the negotiation was complete, and the money was distributed. If, however, they both chose the larger part (in none of the nearly 150 pairs did both participants choose the smaller part!), they received an additional three minutes to reach an agreement. During this time, they could communicate with each other through a chat facility (with one key condition: they could not reveal which colour signal they had received). If one of the participants switched from claiming the larger part to accepting the smaller part, again the negotiation would be concluded and the money divided. In the absence of agreement by the end of the period, they were granted a further 30 seconds to settle the difference, but with every passing second, the amount in the account was reduced by 1/30 of its original value, so that after 30 seconds nothing was left. As soon as one participant switched to the smaller share, the clock stopped, and the remaining sum was divided 70/30.
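The shrinking pot in that final stage can be expressed as a simple payoff function. The sketch below is my own illustration (the pot size is arbitrary, not the study’s actual stakes):

```python
def final_stage_payoffs(pot, concession_second):
    """Split of the pot if one side concedes t seconds into the final
    30-second stage: the pot shrinks by 1/30 of its original value each
    second, and what remains is divided 70/30 (larger share to the holdout)."""
    t = min(concession_second, 30)
    remaining = pot * (1 - t / 30)
    return 0.7 * remaining, 0.3 * remaining

print(final_stage_payoffs(30.0, 0))   # (21.0, 9.0): immediate concession
print(final_stage_payoffs(30.0, 15))  # (10.5, 4.5): half the pot is gone
print(final_stage_payoffs(30.0, 30))  # (0.0, 0.0): nobody backs down
```

Note the absolute-versus-relative twist: past roughly 17 seconds (where 70% of the shrunken pot drops below 30% of the full pot), the holdout ends up with less money than if they had conceded immediately – yet still more than their partner.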

Just over 6% of the pairs reached an immediate agreement, around 36% agreed who would get the larger share in the second stage (14% in the last 10 seconds!), and nearly 43% did so in the last stage (33% in the first 10 seconds). Nearly 15% of the pairs did not reach an agreement and went home empty handed (!).

Of course, the researchers knew precisely how each participant actually performed, how they thought they did, and how strong their belief was that they outperformed their partner before and after they received the feedback. Thus, they could establish the influence of (over)confidence on the monetary outcomes, mediated through the negotiation process. However, the researchers considered not just the payoff as a percentage of the total amount of money earned, but also as a percentage of the actual amount left, taking into account the situation where the negotiation went against the clock.

“I was a little overconfident too” (image via Hetemeel.com)

This was revealing. While, on average, participants with the highest levels of confidence, who kept their nerve all the way to the last stage, collected less money (and caused their partner to collect less, too, as the clock ticked away), they did on the whole end up with a larger share of the actual prize than their partner.

They lost out absolutely – which supports the idea that overconfidence can lead to poor decisions – but they gained relatively. This can explain why overconfidence persists: we tend to value relative positions more than absolute positions. A telling paper by Sara Solnick and David Hemenway describes how over half the respondents in their study would prefer a child with an IQ of 110, if she is more intelligent than other kids, to a child with an IQ of 130 if her peers were smarter still. More than 50% said they’d prefer to be praised infrequently by their boss, provided their colleagues were praised even less often, over frequent praise when their colleagues received still more – and even that they’d rather earn $50,000 if their peers earned $25,000, than earn $100,000 if their peers took home $200,000.

Overconfidence, in this experiment, clearly comes with an absolute cost – at first sight, it leads to a poor decision. But if we are prepared to pay that cost to obtain the relative benefit of gaining more than someone else, then perhaps the persistence and ubiquity of overconfidence is not so mysterious. Of that, I am 110% certain.


Real prices can be about more than supply and demand

(featured image via Twitter)

How to turn an unfair price rise into a fair one

This last week, the UK has been experiencing a freak petrol* crisis. A handful of filling stations on the south coast had run out of fuel because of minor supply issues. The news led to a vicious circle of spreading panic buying, dozens more stations running dry, even more panic buying with demand up to 5 times its normal level, and so on. Why did such a demand-supply imbalance not push prices up?

The law of supply and demand in economics is much like the laws of thermodynamics in physics: it describes a natural phenomenon. The market price of a good is largely determined by how much there is on offer (the supply) and how much is required (the demand). Whatever its price is right now, if there is more than needed (i.e., supply exceeds demand), it will come down. Some suppliers want to get rid of their surplus, which they need to offer more cheaply in order to attract more buyers, and this raises demand. The price reduction may also lead to some producers halting production, thus reducing supply. Once supply and demand are matched, the price stops moving: every unit produced is sold, and every unit demanded is fulfilled – “the price is market clearing”, economists say.

Conversely, if there is not enough of a good to meet the demand, the price will rise: buyers who value it the most will offer a higher price, at which other potential buyers will drop out, and demand will fall. The price rise may also encourage producers to make more, or new producers to enter the market, thus increasing supply. And again, this will happen until supply and demand are in balance.
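This adjustment process can be sketched numerically. The linear demand and supply curves below are purely illustrative (not from the article); the bisection mirrors the mechanism the text describes: the price rises under a shortage and falls under a surplus until the market clears.

```python
# Purely illustrative linear curves (not from the article).
def demand(price):
    """Units buyers want at this price; falls as the price rises."""
    return max(0.0, 100 - 2 * price)

def supply(price):
    """Units sellers offer at this price; rises with the price."""
    return max(0.0, 4 * price - 20)

def market_clearing_price(low=0.0, high=50.0, tol=1e-6):
    """Bisect on the price until excess demand vanishes."""
    while high - low > tol:
        mid = (low + high) / 2
        if demand(mid) > supply(mid):  # shortage: the price rises
            low = mid
        else:                          # surplus: the price falls
            high = mid
    return (low + high) / 2

p = market_clearing_price()
print(round(p, 2))  # 20.0 – here demand(20) == supply(20) == 60
```

At any price below 20 demand exceeds supply and the price is bid up; above 20 the surplus pushes it down – the “market clearing” point economists refer to.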

The market to the rescue… if we let it

So, whenever something happens that pushes the supply-demand equilibrium out of kilter, markets enable human behaviour to affect the price, which in turn affects behaviour, and so on, until equilibrium is restored. In a splendid video, part of the equally splendid Marginal Revolution University’s course in micro-economics, economist Alex Tabarrok explains that a price (or perhaps more accurately, a price movement) is a signal wrapped up in an incentive. It signals that something becomes more (or less) scarce, and gives an incentive to act upon this signal to both buyers and sellers. Magic.

How would such a ‘natural’ price rise in the case of this specific petrol crisis work out? It is hard to see how it might boost supply. There is enough fuel in the depots, so producing more would not help at all. The problem is that there are not enough drivers to replenish the petrol stations at the rate drivers are filling up. But using a higher price at the pump to increase their pay is obviously also not a practical solution to peak demand. What higher prices would very likely do, though, is dampen demand, and thus facilitate a quicker return to normality. Yet, the prices did not budge.

See what happens when you overcharge customers? (image: Jeffrey Rolinc/Flickr CC BY NC ND 2.0)

This is one of the situations where rationalist economics clashes with real life economics – as economics Nobel laureate and behavioural economist Richard Thaler would say, ‘econs’ (reasoning, emotionless creatures) versus humans (complex bags of contradictory emotions, motives and preferences). The boss of the UK’s Petrol Retailers Association put it pithily: “The one thing we do not condone is profiteering in situations like this.” Customers have long memories, he argued, and would remember how their local filling station offended them by overcharging them. Thaler himself alludes to the problem of profiteering – regardless of how much economic sense it makes – in Misbehaving: together with Jack Knetsch and Daniel Kahneman, he explored how people perceive fairness in economic transactions. One of the questions they asked concerned a hypothetical hardware store selling snow shovels at $15, but the morning after a snowstorm raises the price to $20, as the demand outstrips the supply. Unfair, 82% of the respondents found. (I wrote about this phenomenon a while ago here.)

Still, a situation where supply cannot meet demand and prices cannot rise is not ideal. There are long queues at the pump, yet not everyone needs fuel equally badly. Some people’s tanks are genuinely nearly empty; others are simply topping up, or even bringing along a collection of jerrycans to stockpile. Some people need their car to get to patients, pupils or to their job as an ambulance driver, or to drive to the hospital for important medical consultations; others only to take the dog for a walk in the woods. There are calls to give certain categories of key workers priority to fill up to avoid this problem, but that is easier said than done.

A fair allocation through an unfair price hike?

A market in which the price can freely move to bring demand and supply in line with each other could help. In a recent episode of the Rationally Speaking podcast, host Julia Galef discussed the practice of price gouging (or profiteering) with two economists, Raymond Niles and Amihai Glazer. Along with the other effects of pricing (the signal and incentive), they also considered the related aspect of the allocation of scarce goods: who gets what?

Niles argues that not allowing the market to set the price leads to inefficient allocation: people will buy more than they need, and so inevitably some people end up with less than they need. He also points out that rationing does not ensure that the scarce good goes to those people who value it most. (Here, for example, the person who just wants to fill up their car but leaves it on the drive for days gets as much as the person who needs their car to go to work every day and earn a living.) Glazer observes that misallocation (which can become full-blown hoarding) when prices are kept low is primarily a problem for goods that can practically be stashed – and that is indeed the case (at least in the short term) for petrol.

If there is no practical mechanism to ensure the efficient allocation of a scarce good during short term peak demand other than allowing the price to rise, and at the same time such price rises are considered hopelessly unfair by the consumers, is there nothing that can be done? Richard Thaler hints at a way to circumvent the aversion to apparent profiteering: the perceived fairness of a transaction can depend on how it is framed.

What if a price hike did not, as is assumed by default, take extra money out of the buyer’s pocket only to add to the seller’s profit, and was framed differently? Sure, it would then not act as a signal and an incentive to the supply side, but since that is not where the problem is, that is just fine. It would send a signal and an incentive to the demand side, though: dearer fuel discourages those buyers who value petrol the least, and ensures that what is available can go to the drivers with the highest need. Yes, dearer fuel would also irk people, but arguably, wasting time frantically searching for petrol, and wasting more time queueing when you’ve finally found some is also not without irksomeness.

“Move on, you pillock, so I can fill up with cheap fuel too!” (image: John Greenfield/Flickr CC BY 2.0)

That leaves the question what to do with the surplus income that cannot go to the seller. What if that went to a good cause? In Britain, the national treasure of the NHS would be a most suitable recipient of these funds, and it would make many drivers a lot less irked at having to pay more.

One way to implement this could be a special levy imposed by the government, but my inner curious libertarian would prefer leaving the setting of the price to the market, and limiting the government’s role to creating the legal framework for raising prices in emergencies and funnelling the proceeds. Not only would this ensure it is market forces rather than bureaucrats or, heaven forfend, politicians setting prices, but it would also leave the decision whether or not to engage in this mechanism to the retailers. That might create diversity in the market, and provide drivers with a true choice between cheaper petrol that is hard to find and costly (in time) to obtain, more expensive, “NHS+” fuel… or simply staying put and waiting until things calm down.

What an interesting economic field experiment that would be!

(Between writing this article and publishing it, I noticed that none other than the FT’s undercover economist, Tim Harford, proposed something very similar on Twitter. Modesty stops me from saying that great minds think alike :-).)

* ”gas” if you are in America

Posted in Behavioural economics, Cognitive biases and fallacies, Economics, Psychology

Incentives in the balance

(featured image: Ruslan/Flickr CC BY SA 2.0 and Thamizhpparithi Maari/Wikimedia CC BY SA 3.0)

We respond to incentives, that is true – but not always in the way that might be expected

An insight to which I often return is the following economic observation, from Steven Landsburg’s The Armchair Economist: “People respond to incentives. The rest is commentary.” It has an irresistible simplicity, and it’s not hard to find plenty of evidence for it: positive incentives make a choice more attractive, and negative incentives less so. But things are rarely quite that simple.

Incentives work because we have a built-in mechanism to establish and evaluate costs and benefits. From our most ancient unicellular ancestors all the way to us here and now, only those individuals who reliably made choices that were better for them – so that they lived long and prospered enough to procreate – and who avoided worse choices were able to persistently pass on their genes. This ability allowed all our predecessors, and allows us today, to respond to and navigate nature, seeking out and pursuing what is good for us, and evading what is not.

Human activation of incentives

For almost all of evolutionary history, this was a passive affair, in which nature provided the situations and living things responded to them. But one day, very recently (in historical terms), we humans figured out that we could actively manipulate costs and benefits to others (and to ourselves), and thus influence behaviour: we invented incentives.

Since then, reward and punishment – for desired and undesired behaviour, respectively – have been widely used instruments to change behaviour, from bribes (extra pocket money) and threats (confiscation of smartphone) in the education of our children, to, well, bribes (bonuses and promotions) and threats (sanctions and firing) in the workplace. And of course, we see incentives in the world of commerce, with special offers, discounts, coupons, free pens and whatnot encouraging us to buy cheese or insurance.

You can picture a decision (to either do, or not do, something) as a set of scales on which one side represents the costs and the other side the benefits. What an incentive does is either put extra cost or benefit on the corresponding scale, or take cost or benefit away from it, in order to make it tilt the other way. (Often one and the same intervention can be seen either way. If a coffeeshop charges less for a croissant with a cappuccino than for both items separately, that can appear as an incentive to buy two items rather than one, or a disincentive to buy just one item.)
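The scales metaphor can be sketched in a few lines of Python (a toy illustration of my own; the `chooses` function and the croissant and cappuccino figures are invented for the purpose, not taken from any real coffeeshop):

```python
# A decision as a set of scales: the option is taken when the benefit
# side outweighs the cost side; an incentive adds weight to, or removes
# weight from, one of the two sides.

def chooses(benefit: float, cost: float) -> bool:
    """The scales tip towards 'buy' when the benefit exceeds the cost."""
    return benefit > cost

# Invented figures: the croissant alone is not worth it to this buyer...
croissant_value, croissant_price = 2.00, 2.50
print(chooses(croissant_value, croissant_price))  # False

# ...but an 80p bundle discount with a cappuccino takes cost off the
# scale, and tips the decision the other way.
bundle_discount = 0.80
print(chooses(croissant_value, croissant_price - bundle_discount))  # True
```

Seen this way, it makes no difference whether the 80p is framed as a reward for buying two items or a penalty for buying just one: the scales tip the same way either way.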

Making things cheaper or more costly is an obvious application, which can even be done conditionally, like off-peak prices for public transport or holidays. But practically anything that is important to the decision-maker (convenience, effort, time, risk, timing etc), can be manipulated to alter the balance between one choice and the alternative, and hence act as an incentive.

The miracle of the Passe Sanitaire (via ourworldindata.org)

On 9th August 2021, for example, the French government made the passe sanitaire – proof of immunity to COVID-19 or a recent negative test – compulsory for a wide range of public activities, including eating or drinking in the outside areas of bars and restaurants. This was at least in part intended as an incentive for people to get immunized: it makes failing to do so costly (no vaccine, no fun). It is also coercive in nature: it takes away a previously existing right from people who choose not to get their jab. And it worked: France’s rate of vaccination accelerated remarkably, overtaking the EU curve, which it had been lagging since May of this year.

Stuff the scales

The British government attempted a similar tactic with the personnel of care homes. These house people who are often quite vulnerable through age and various morbidities, even when they are fully vaccinated, and so minimizing the risk of infection is very important. Many employees, however, appear reluctant to get immunized. By the end of July, only 78% of personnel in older adult care homes were vaccinated, and the government itself estimates that some 7% of the 570,000 care home staff (40,000 people) will refuse. Last month, it therefore decreed that anyone working in a care home who is not exempt (for medical reasons) must be fully vaccinated by 11th November.

Like the French pass, this too is a coercive incentive. Hey, if people are prepared to get immunized just to be able to continue to have a beer, they should surely be at least as willing to do so in order to keep their job. That assumption would very likely be valid if jobs were scarce. But there are over a million vacancies in the UK, many of which are for jobs with no more qualification requirements than those in the social care sector, for instance in hospitality. Adding conditions to employment in such a climate will certainly work as an incentive – only not necessarily in the intended direction. Unsurprisingly, the measure is a big worry for care home operators, who already face the most severe acute staff shortages in living memory, according to an open letter to the government issued this week.

A jab to keep my job — but who says I actually want to keep it? (photo: Mufid Majnun/Pixabay)

Exactly how people respond to an alteration to the costs and benefits balance depends on what the alternative is. For government ministers and civil servants with a well-paid job that they enjoy, and who consider vaccination at worst as a minor inconvenience, getting the jab and keeping their job might look like a total no-brainer. But imagine someone in a tough, underpaid job, who is deeply suspicious of a new vaccine that is pushed by people with whom they have little affinity, in a climate with plenty of other low-paid, but less burdensome jobs around. They may well opt for the alternative, choose to check out completely, and move on.

Similarly, the Brussels regional government announced earlier in September that a coronapas will be essential to visit places like gyms or, as in France, bars and restaurants. They hoped this would provide a boost for the vaccination rate in the Belgian capital which, at 51%, limps way behind the national rate of 72%. Now maybe the unvaccinated Bruxellois care little about gyms and bars, or they plan simply to hop across the boundary into nearby Flanders (which surrounds the city) for a beer or a steak frites, where at present no pass is needed. In any case, the compulsory pass unfortunately appears to have had little effect.

Incentives can be a powerful instrument to change people’s behaviour. But it is advisable to take a good look at what their alternative options might be – especially if the incentive assumes they need, or profoundly want, to do something. Above all, it is important to evaluate how the incentive will influence the choice they ultimately make from their perspective, not yours – otherwise they may well tell you to stuff your balance, and leave you alone with your set of scales.

Posted in Behavioural economics, Economics, Psychology

Like me

(featured image: Goran Has/Flickr CC BY 2.0)

Is it irrational to favour people with whom you have something important in common?

Imagine the prime minister of your country is not someone you naturally align with: in fact, you’d rather see him or her defeated at the next election. But then you suddenly learn that your PM is a life-long supporter of the same football team that you have been a fan of since you were a kid. What effect does this new knowledge have on you?

This very thing happened to Samuel Salzer, one of my fellow behavioural practitioners and an ardent Tottenham Hotspur fan, who recently learned that the Swedish Social Democratic premier, Stefan Löfven, has also been a Spurs supporter for decades. While before, he didn’t like his prime minister or, at best, felt neutral about him, he reported that he “instantly noticed how [his] image of him changed,” despite knowing that this is “irrelevant”.

Is this fact really as irrelevant as Samuel writes, and is this reaction, as he suggests, a case of irrational behaviour?

Affinity matters

It seems evident that politics is a matter of policies, and of nothing else. A rational voter – even a hypothetical one – will evaluate the different policies each candidate represents, weigh them up and determine which of the candidates is most likely to pursue a policy set that best matches their wishes.

But that is not quite how we work.

Should he be employed by the private sector? Photo: Kampus Productions/Pexels

In a Danish experiment by political scientists Rune Slothuus and Claes de Vreese, citizens were presented with a constructed newspaper article on two proposed policies: a conflict issue (contracting out the care for the elderly, something on which the two main Danish political parties oppose each other) and a consensus issue (joining a proposed trade agreement, something on which they agree). The respondents were put in four different framing conditions – a pro- and a con-frame, each presented as sponsored by one or the other of the two parties – and had to express their level of support for the opinion in the article. The researchers found that citizens tended to respond more favourably to an issue frame – regardless of whether it was for or against the policy – if it was sponsored by the party they vote for than if it was promoted by the other party.

Party affiliation thus seems to shape people’s view on specific policy proposals. Otherwise put: whether we are for or against a policy is a function not just of its precise content and nature, but also of the nature of the party proposing or objecting to it.

Might there be other factors that influence our political choices?

Consider this thought experiment: you must choose between two politicians who both propose exactly the same policies (to control for party affiliation, we can even assume they are from the same party, for example if both aspire to become party leader). If you genuinely know nothing else about the two candidates, you might as well vote by spinning a coin. But will you still be inclined to do so if you learn some other factoids about each candidate? Would you perhaps have a slight preference for the person who has the same degree as you, or who is known to share your fondness for Kraftwerk, for Harry Potter books or for Stoic philosophy?

While such characteristics – or any other you like to name – are most likely not going to have any material effect on their political actions, they do seem to matter, in some odd way, in shaping your perception of a person. Now, consider a small alteration to the thought experiment: this time, the candidates’ policy proposals differ in some minor, and in your eyes quite insignificant way. Can you imagine still opting for the candidate with whom you feel some affinity – especially if it is a strong, emotional one – even if the other candidate has, albeit marginally, the best fit with your own political views? In other words, might you trade some policy choice for the knowledge that the candidate is in your in-group?

We are more inclined to cooperate with (and tend to have positive feelings towards) members of our in-group (however defined – even in so-called minimal groups, established on the basis of the smallest, trivial choices, e.g., whether or not a hot dog is a sandwich, or how toilet paper should be orientated).  You may have noticed that astute car or kitchen salespeople will seek to boost their chances of a sale by taking advantage of this predisposition. They will ask you some personal questions and quickly bring up a – real or made-up – connection: “That’s an unusual name! Where are you from? Oh, Belgium? I was there two years ago! Lovely country, excellent beer, and Bruges, what a pretty city!” And bingo, as a buyer, you cannot help feeling some degree of connection.

Evolutionary psychology theorizes that in-group favouritism has its roots in the evolution of humans as a cooperative species, capable of surviving and prospering in a wide range of environments. Cooperation requires trust, but indiscriminate trust would be ineffective, and our early ancestors had to be able to identify those who could be trusted to cooperate too. Well-defined in-groups were, and are, a plausible mechanism for this to work. Even though party politics and football were not around when this societal model started to emerge, both clearly map onto the concept of the in-group, alongside numerous other characteristics.

(Not so) irrational preferences

Is it irrational to change your image of someone – even a politician! – because they happen to support the same football team as you? The term ‘irrational’ is widely used to label behaviour judged as illogical, suboptimal and indeed often, frankly, a bit stupid. This is not a very helpful definition, not only because of its implied normative connotations, but also because it ignores the aims and preferences of the supposedly irrational individual.

Our choices are, fundamentally, a consequence of our preferences. When we can choose freely, we go with whatever we prefer. Say there are two slices of pie left, a larger and a smaller one. I really like this pie, and I could easily eat the bigger one. Is it then irrational for me to choose the smaller one and leave the larger one for you? Yes, from a very narrow perspective: I should simply self-interestedly take the larger one. But not when taking a broader view: there are ample reasons why, even if I’d rather have the larger slice, I’d still leave it for you, from genuine generosity and reputational concern to deontological or reciprocity motives. In all these situations, I would certainly be acting in accordance with my preferences.

Spot the irrational mode of transport (photo: Alden Jewell/Flickr CC BY 2.0)

Whether or not a choice is rational is inextricably linked with the chooser’s preferences. It can only be irrational if it goes against them. Alice, who places a high value on a car’s badge, is not irrational for preferring an expensive model when she could have bought a functionally identical model of a less prestigious brand for thousands of pounds (euros, dollars) less. Bob, who prefers to avoid travelling by plane because the very idea terrifies them, is not irrational for opting to drive long distances, even if that increases their chance of becoming a fatality during the journey.

And neither is someone who prefers, all else being equal, to have a member of their in-group as prime minister, and who feels a stronger affinity with the PM the moment they learn he is, like them, a Spurs supporter.

Their more favourable attitude towards the politician is unlikely to be quite enough to also assure him of their vote at the next election. But knowing a political opponent is somehow in the same in-group – especially when, as in this case, it has a pronounced emotional significance – may encourage a more neutral evaluation of his policies than the dogmatic dismissal that is often the typical reaction to the policies of a political adversary. That can’t be bad, can it?  

Posted in Behavioural economics, Emotions, politics, Psychology, Society

Buyer and seller – one market price, two perspectives

(featured image: abby_mix07/Flickr CC BY NC SA 2.0)

Markets enable prices to reach a level that satisfies both sellers and buyers, but that doesn’t necessarily mean that they see that price in the same way

Do you know what a kilo of brown rice costs, a 350g pot of salt, or a pack of four sponge wipes? If you are like me (and I do the weekly shop!), in all likelihood the best you can do is give a pretty wide price range for these products. As long as the price is within that range, you’d put these items in your trolley without giving it more thought.

You trust that the price is reasonable, because you assume the supermarkets compete with each other in an open, free market, where overpricing items in comparison with their rivals will lose them customers. And so do we all. Even for items that we buy more frequently than salt or sponge wipes, and whose price we know well, we rely on the market to tell us what they cost: there is no ‘right’ price for a pint of milk or a loaf of bread. Supply and demand find an equilibrium, and anyone trying to make excessive profits will quickly be undercut by someone who is happy with a more reasonable margin.

The buyer’s perspective

This is the case for low-cost items like these groceries, but also for more expensive goods and services costing tens, or even hundreds of pounds. But our assumption that the market price is a fair price can end up being challenged when we see it broken down into its constituents.

HOW MUCH for cleaning? (photo: via twitter)

A little while ago, an economist of my acquaintance had hoped to get away for one night in celebration of his birthday, and checked out what was available on AirBnB. Even more than for brown rice or dishwasher detergent, the typical consumer can only guess what a fair price for a night’s accommodation would be, dependent as it is not only on the destination city (and where exactly in that city), but also on the date.

He found an option with a pretty decent rental price of $199 (£144, €167) for one night, but that was not the whole picture: on top of that there was the AirBnB service fee ($49), the tax ($41) – and the cleaning fee, which turned out to be an eye-watering $150. Even AirBnB’s helpful notice that this price was still a good deal “less than the average nightly rate for the last 60 days” did not do much to sugar that pill.

Now most economists, and even most behavioural economists, would generally tell you that, in a transaction, it is the total price that matters. If the benefits exceed that amount, then the transaction can go ahead. Its components should not matter at all.

However, the cleaning fee here stood out like a very sore thumb. We make decisions based on more than just the numbers, and that is why we might find the same total price of $439 much more acceptable if it were composed of $299 for the accommodation and $50 for the cleaning (with the same service fee and tax), yet reject the identically priced arrangement above. That’s just how we roll. Nobel economics laureate Richard Thaler puts it succinctly: we are “humans” with idiosyncratic preferences and feelings, and not “econs”, dispassionate creatures that narrowly focus only on material costs and benefits.

Is it irrational to reject a transaction, not because the total price outweighs the benefits, but because it is revealed that a particular component of it is, in your eyes, excessive? For “econs”, yes, but not if you adopt the “human” vantage point: doing business with someone who appears to take advantage of us would be deeply unpleasant, no matter how enjoyable the actual stay might be.

So it seems that acquiring additional information can alter our viewpoint significantly. But does this decomposition tell us everything, or is there still more that can influence our view? How do things look from the seller’s side?

The seller’s perspective

As it happens, an actual AirBnB host responded to the original tweet, explaining that he uses the cleaning fee “to encapsulate all the one-off costs of a booking”. Doing so discourages 1-night stays (which take just as much effort as longer ones), without eliminating the option altogether: someone who is really, really keen still has the option of paying the fee.

This is an interesting, and economically quite plausible, position to take. Pricing almost always comprises both fixed and variable elements, but hospitality is particularly exposed to the imbalances that might arise. In retail, the sales volumes are usually so high that fixed costs can be spread over many sold items and become relatively insignificant – the transportation component of the price of a pot of salt is a small fraction of the cost of the actual salt. Hotel rooms and AirBnB properties, however, can be let for one night to one guest and for seven nights to the next, with the same fixed cost per stay. In a hotel with many rooms, the average duration of a stay will generally be fairly steady, which makes it sensible to allocate the fixed costs on a per-night basis. That doesn’t quite work for a host with a single AirBnB property, though.

And as guests, we don’t immediately realize this challenge. We tend to evaluate the price in light of the value we get in return – a night’s stay – and both the magnitude of the fixed cost and the complexity of recharging it are hidden from us.

Might guests be underestimating the cost of cleaning? (photo: Matilda Wormwood/Pexels)

Is the amount this particular host charged excessive? Disclosing the actual one-off costs might address would-be guests’ suspicion. Of course, not all of these are really out-of-pocket costs that can be itemized, like hiring a cleaner. Some manifest as effort, time and hassle for which the host wants to be compensated. “But hey, is all that not part and parcel of the business of letting, which should simply be incorporated in the price of the stay?”, I can hear you ask.

This may feel intuitively fair, but it makes poor economic sense. By design, this approach would reduce the total cost of short stays, and increase that of longer stays. As a result, the total revenue of a shorter stay would no longer cover its full fixed cost. More longer stays would thus be needed to subsidize that shortfall, but as this approach would make them relatively more expensive, there would be fewer of them. At the same time, the cheaper shorter stays would become relatively more attractive, and hence tilt the balance in the wrong direction – thus further increasing the total shortfall for the host.
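The cross-subsidy argument can be made concrete with a small numerical sketch (the figures are my own assumptions – a $150 one-off cost, a $199 nightly rate and an assumed average stay of four nights – not the host’s actual accounts):

```python
# Illustrative numbers (my own, not from the article): a host incurs a
# fixed per-stay cost (cleaning and other one-offs) on every booking,
# plus a nightly rate covering the per-night costs and margin.

FIXED_PER_STAY = 150.0   # one-off cost per booking, however long the stay
NIGHTLY_RATE = 199.0     # per-night price (variable costs plus margin)

def split_pricing(nights: int) -> float:
    """Charge the fixed cost separately, as a per-stay fee (the host's approach)."""
    return NIGHTLY_RATE * nights + FIXED_PER_STAY

def folded_pricing(nights: int, avg_stay: float = 4.0) -> float:
    """Fold the fixed cost into the nightly rate, spread over an assumed
    average stay length (as a large hotel with steady stays might)."""
    return (NIGHTLY_RATE + FIXED_PER_STAY / avg_stay) * nights

# With the fee folded in, a 1-night stay costs 236.50 instead of 349.00:
# it no longer covers the 150.00 one-off cost. A 7-night stay costs
# 1655.50 instead of 1543.00: the long-stay guest subsidizes the shortfall.
# Short stays become cheaper and more attractive, long stays dearer.
assert split_pricing(1) == 349.00 and folded_pricing(1) == 236.50
assert split_pricing(7) == 1543.00 and folded_pricing(7) == 1655.50
```

Folding the fixed cost in only works when the average stay length is stable, as in a large hotel; for a single property, it shifts cost away from the very guests who cause it, onto those who don’t.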

Even more information, in particular the seller’s angle, does indeed add more nuance: splitting off the fixed cost in this way is both a fairer way of pricing (without subsidy) and economically more sensible. As a guest, you may still think the fixed cost fee is too high, and go elsewhere, just like the host can decide to deter guests who only want short stays.

Yet, even here you can trust the market to regulate matters: a host whose cleaning fees are truly excessive will see his custom melt away, and will be encouraged to lower the charges until supply and demand are back in balance.

Posted in Behavioural economics, Cognitive biases and fallacies, Economics, Emotions, Psychology

A right to entitlement

(featured image: jEd dC/Flickr CC BY NC ND 2.0)

We often assume we are entitled to something, but that is literally only half of the story

Years ago, my (British) colleagues and I had what we thought was a brilliant idea for a practical joke. We were in a French chateau, at a debriefing session concluding a large international project with ten Germans on the team. Our plan was to sneak into the meeting room late that evening with all our spare towels, and place them on the chairs, a reference to a curious German tourist custom.

Well, perhaps not that curious. You may remember that, once upon a time, German citizens holidaying around the Mediterranean had a reputation for getting up at the crack of dawn to place towels on the best-placed loungers around the hotel pool, to assert the temporary ownership they believed themselves entitled to (to the detriment of the not-so-early birds among the guests, naturally). In the event, we never got to see the reaction of our German colleagues at the meeting, since the staff of the venue had themselves got up at the crack of dawn and removed the towels before breakfast. (In hindsight, I guess the joke was not all that funny anyway.)

Visible when there is a conflict

Entitlements permeate social interaction in numerous ways. Some are enshrined in law, for example entitlements to free or subsidized healthcare, education, and to welfare like child benefit or a pension. But most form part of social norms, for example the entitlement to a seat on a crowded public transport vehicle for passengers who find it hard to stand up.

They can also influence our behaviour, as illustrated on poolside sunbeds and in aircraft cabins, where conflicts may arise over the allocation of space related to the position of the seat backs. As the distance between airline seats has gradually shrunk over the last 50 years, reclining one’s backrest has intruded more and more into the space of the person in the seat behind. While one individual may firmly believe they are entitled to sit back and relax, not least because the seat is equipped with a button to allow this to happen, the other may be equally convinced that they are entitled to the space for their knees, or to use a laptop computer on the folding table without having to squeeze their elbows into an anatomically unlikely position.

It is indeed especially when conflicts around entitlements (actual or perceived) arise that they become salient. As long as everyone gets what they believe they are entitled to, they are barely noticed, but when people feel short-changed, tempers can flare up quickly.

Fuzzy definitions and one-sidedness

I *will* park in front of my house, no matter what (image: Marika Lüders/Flickr CC BY NC 2.0)

This is probably in part because of the fuzzy, malleable definition of the concept. What makes us entitled to something? Unsurprisingly, we may well be a little biased towards simply considering whatever is good for us as an entitlement. Who wants to park their car two blocks away? So, we hypothesize some kind of entitlement to park our car right in front of our house. Even if we know that it is not real, it entitles (!) us to grumble when someone has the temerity to leave their car where ours should be. (We used to have a neighbour who complained whenever anyone parked their car in front of his house, in the firm belief that such an entitlement really existed.) Similarly, in a movie theatre, we like to think we are entitled to a seat with an unencumbered view of the screen, and we are annoyed if a tall person – arriving at the 11th hour! – plonks himself down right in the empty seat in front of us.

Imagine you’re in the queue at a café at a tourist spot, and you’ve got your eye on the last slice of strawberry cheesecake in the display cabinet. As you order it, you hear the person behind you sigh that they were hoping to get that piece of cake. But who, in your view, is more entitled to it? You of course, because you were first.

Sometimes being first is not enough, though. I recall one Friday evening, on the last flight home from Amsterdam, a man walking up to the passenger seated across the aisle from me, claiming that was their seat. Both men then produced their boarding passes, which showed the same name and the same seat number, to the bafflement of the cabin crew. It turned out that these two gentlemen had the same surname and initial, and for some reason the staff at the desk (this happened well before online check-in) had issued a duplicate boarding card to the person already seated. Who ended up being entitled to the seat here? Not the Mr Brown who got his boarding pass first, but the Mr Brown who had possession of the seat – in the entitlement stakes, possession is a powerful argument.

Perhaps the most legitimate reason for entitlement in our own eyes, however, is that we have made some kind of sacrifice – we worked or we paid for something: “I slaved all my life, so I am entitled to a decent pension!”, or “I paid £200 for this hotel room, so I am entitled to a sea view rather than one of the car park!”

Two sides to the coin

But all these reasons tend to take a rather self-centred perspective of what an entitlement is, which overlooks something fundamental: there can be no entitlement unless someone else is able and prepared to fulfil it.  Maybe it is precisely this last subset of entitlements that can help us understand this, and abandon the idea that entitlements are only about what we have a right to expect.

Who’ll pay for all that rain? (image via YouTube)

Last Sunday, the Belgian Formula 1 Grand Prix at Spa-Francorchamps was not quite what the more than 70,000 fans (nor the drivers) anticipated: persistent heavy rain first delayed the start of the race by half an hour, then, after just eight minutes, caused the race to be suspended for nearly three hours. When it restarted, the elation of the drenched spectators was short-lived: after barely another two laps it was suspended again, and eventually, a few minutes later, abandoned.

Here, the fans had definitely made a significant sacrifice: some had paid upwards of £500 for a ticket. Surely they were entitled to a refund? That is what many spectators claimed, supported by drivers Lewis Hamilton and Carlos Sainz. But this case shows that an entitlement is one side of a coin, with on the other side an obligation of another party to fulfil it.

It is true that the enjoyment of the F1-fans on that rainy afternoon undoubtedly fell seriously short of their expectations, and it is true that they paid the race organizers significant amounts of money. But does that constitute an entitlement to a refund that the organizers are obliged to meet? They were not responsible for the unusually bad weather, and it is not clear what they could have done differently to give the spectators a satisfactory experience. There was a risk this would happen, but it is by no means obvious that the consequences of the event should be borne by the race organizers.

Of course, it is possible to explicitly reflect in the commercial agreement that the purchase of a ticket represents the obligation to refund spectators in case the race is cancelled because of bad weather, earthquakes, acts of terrorism, all drivers being struck by a severe bout of diarrhoea or whatever (and hence grant them the entitlement). In that case, the organizers would of course have been wise enough to take out insurance… and reflect that extra cost in the price of the ticket.

When we feel we are entitled to something, it is worth checking the other side of the coin: figuring out whose obligation it would be to fulfil this entitlement, and whether that is a legitimate and reasonable expectation. If it is not, our entitlement is just a figment of our imagination.

Posted in Behavioural economics, Psychology, Society

We can fabricate data, but we cannot fabricate the truth

(Featured image: Matt Lemmon/Flickr CC BY SA 2.0)

When a renowned behavioural scientist gets embroiled in a case of fabricated data, there may be some lessons for us all

When a behavioural science paper is discovered to have used fraudulent data, the field understandably experiences some mild, but distinct tremors. If the co-author who was responsible for the data happens to be one of the field’s most famous scientists, the tremor becomes a proper shockwave. What has been going on?

Data detectives at work

It concerns a study from 2012, which suggested that asking individuals to sign a declaration of honesty at the beginning of a self-report form (rather than at the end, as usual) makes them complete it more honestly, based on two lab experiments and a field experiment. In the lab studies, for example, students completed puzzles and could claim money according to how many they had solved, as indicated on a form. The setup was such that it appeared to be possible for participants to overstate their performance without being caught (but in reality, the experimenters could compare their claimed results with their actual performance). In the field study, customers of an insurance company reported the actual odometer reading of their vehicle(s). In both cases, the information provided by the participants indicated that it was more truthful (fewer puzzles solved, more miles driven) in the treatment condition (i.e., when they had signed upfront).

On 17 August, Data Colada, a website devoted to critically evaluating behavioural science research, showed that the data regarding the insurance field experiment were almost certainly fabricated. (Do consult the post for a wonderful account of forensic data scrutiny.) The five authors of the study were asked to comment; four of them did, all agreeing with Data Colada’s findings, while the first author reacted on Twitter. The bombshell was that the co-author responsible for sourcing the data from the insurance company was none other than Dan Ariely, very well-known both in academia and beyond thanks to popular books like Predictably Irrational and, ironically, The Honest Truth About Dishonesty.
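One of the red flags the data detectives reported was the shape of the data: the claimed driving distances were spread almost uniformly (up to around 50,000 miles), whereas real mileage data would cluster around a typical value in a roughly bell-shaped distribution. A minimal sketch of how such a shape check might look – the simulated distributions and the kurtosis test are illustrative assumptions, not the actual analysis from the post:

```python
import random
import statistics

def excess_kurtosis(xs):
    """Excess kurtosis of a sample: roughly 0 for bell-shaped (normal) data,
    roughly -1.2 for uniformly distributed data - a crude 'flatness' detector."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum(((x - m) / s) ** 4 for x in xs) / len(xs) - 3

random.seed(0)
# Plausible mileage: clustered around a typical annual distance (hypothetical numbers)
plausible = [random.gauss(12_000, 4_000) for _ in range(10_000)]
# Suspicious mileage: spread evenly between 0 and 50,000 miles
suspicious = [random.uniform(0, 50_000) for _ in range(10_000)]

print(round(excess_kurtosis(plausible), 2))   # close to 0
print(round(excess_kurtosis(suspicious), 2))  # close to -1.2
```

A single summary statistic like this is of course no proof of fraud on its own; it merely flags a distribution that deserves a closer look, which is exactly how the forensic scrutiny of the insurance data proceeded.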

They should have signed at the beginning (image: PNAS)

An interesting twist is that the data in question were not posted publicly until 2020, when a team including the original five authors published a paper that failed to replicate the 2012 lab experiments, concluding that signing upfront does not decrease dishonesty. The earlier paper will now be retracted, and arguably the practical significance of the fraud is limited anyway, because of the failure to replicate the results. Nevertheless, the affair offers some useful, more general insights – not just for academics, but for all of us.

The work, not the person

The reactions to the discovery were, as could be expected, varied. Alongside nuanced and constructive calls for a thorough investigation into what went wrong and why, so that similar occurrences could be prevented in future, there were less nuanced ones. Dan Ariely is not someone who shuns the limelight, and he is very popular with the members of the rapidly growing community of behavioural scientists and practitioners. That invites schadenfreude, and unsurprisingly, some reactions carried subtle hints of satisfaction that someone whom some might consider a bit too popular for his own good had been taken down a few notches. Others saw in it an opportunity to criticize the entire discipline of behavioural economics, addicted as they believe it is to hype and sensational illustrations of supposed human irrationality. Neither of these reactions is really helpful, and both are themselves examples of the kind of biases we all have to some extent. If we are unsympathetic to an individual or to a cause, we will tend to amplify negative information about them, and see it as confirmation of our prior opinion.

The identity of a person involved in questionable practices should not matter: our judgement of their actions should be independent of who they are. But this is easier said than done. We tend to be lenient towards people we feel affinity with or who, in some way, belong to the same group as we do, and we tend to be much more critical of people whom, for whatever reason, we don’t like. Such emotional connections to a person can cloud our judgement: we jump to conclusions (in favour of, or to the detriment of, the individual concerned), and give credence to superficial speculation that fits our view. We are well advised not to become uncritical of people because they are held in high regard – in behavioural science, someone like Nobel laureate Daniel Kahneman – and we should likewise not uncritically dismiss all the work by people found guilty of scientific misconduct, like Brian Wansink (who had 18 of his papers retracted). We should judge the work, not the person.

People respond to incentives

One particular point that is often made in cases of impropriety in academia is that there are perverse incentives at work. Scientific journals rarely publish null results and all but demand positive ones, and to be a successful academic you need lots of publications. The temptation to massage the figures, to be less than diligent in scrutinizing the data, and sometimes indeed to fabricate data, is real. In addition, sensational results bestow status and fame on researchers, and that too can influence behaviour. Andrew Gelman, a critical statistician, refers to the (Lance) Armstrong principle (after the disgraced American cyclist): “If you push people to promise more than they can deliver, they’re motivated to cheat.”

At least he still has the principle named after him (image: Milan/Flickr CC BY NC ND 2.0)

Given that the way the data were falsified was rather incompetent, it is not very likely that Ariely himself did so; more probably it was someone at the insurance company who, for some reason, was unable to produce the agreed data (but I am speculating here, and at least one person argues differently). However, he did not – in his own words – “test the data for irregularities”. Checking third-party data might have been perceived as a low-priority task that could be skipped, but neglecting it might also be motivated by a desire to avoid jeopardizing the prospect of a successful publication. The anticipation of success is a powerful incentive.

The trouble with a preferred truth

If analysing the data confirms what we already believe to be true, it is a very human thing to be less than critical, both of the data and of the analysis. Alongside confirmation bias, there are plenty of related tendencies that might encourage us to be not as thorough as we could or should be – wishful thinking, selection bias, optimism bias, escalation of commitment and more – all amplifying whatever incentives we may have to navigate towards a positive result, whether in academic research, or in our work or home lives.

But perhaps the most fundamental challenge might be that we have a horse in the race, that we have a preference for a particular result. If we are searching for the truth – and that is the case in science, but in many other endeavours in business and in our private lives too – we should not be concerned with what exactly the truth will turn out to be. We should, literally, not care about the outcome. All we should care about is that we learn the truth, whatever it is – that is where the value resides. If we become attached to a particular theory or claim, the corresponding emotion can make it very hard to remain dispassionate across the entire process – from formulating the problem or research question and designing the experiment or approach, to collecting and analysing the data and drawing conclusions.

We cannot pursue the truth if we have a preference for what the truth should be.


An accidental behavioural economist is on holiday – again

Human behaviour continues to be an inexhaustible source of wonder and fascination – even when on holiday

Unexpectedly, your correspondent had an opportunity to return to his native country at short notice for a brief holiday after skipping the annual tradition last year. And yet again, it provided ample educational entertainment (or entertaining education) for any behavioural economist, accidental or not. Take a seat, and prepare for some anecdotes.

International travel has become a bit more complicated since the beginning of 2020. One of the first tasks – other than booking the ferry crossing between the UK and France – was to figure out what hoops we’d need to jump through to get to our destination. Thanks to constant mentions in the British media, we knew we’d need an antigen test before travelling back home and a PCR test upon return, and those were quickly arranged. The conditions to travel to (or in our case, through) France had recently been relaxed from “Don’t even think about it” to “Vaccinated? Do come in!”, albeit with a déclaration sur l’honneur (a sworn declaration) that we didn’t have a fever, had not been close to a sick person, etc. Easy peasy!

Emotions, irrepressible emotions

Britain may have left the EU, but the rest is still there, so I naturally assumed that Belgium would apply the same relaxed attitude. Or was it wishful thinking, that pernicious cognitive error in which it is our desires, rather than evidence, that shape our beliefs? Anyway, the day after I had booked the ferry ticket and the tests, I discovered that entering Belgium would entail not only a PCR test within two days of arrival, but also self-isolation until the test was confirmed negative. Our plan was to spend three days at the seaside, but we were going to arrive on a Friday evening, so suddenly the prospect loomed of having to stay indoors all weekend, get a test on Monday morning, and then wait another 24 hours for the result. That would shorten our seaside stay to, well, zero. Eek.

A popular holiday attraction these days (image: Dunk/Flickr CC BY 2.0)

Was it still worth it? Had I realized earlier, I’d probably have given up on the idea before making any commitment. But the ticket was non-refundable, and pretty pricey! Pessimism bias in the red corner, sunk cost fallacy in the blue one, two types of loss aversion pitted against one another. Well, my friends, do not diss good old sunk cost fallacy. We decided to persist and not be deterred by the doom scenario – and we’re glad we did. I then found out that the local hospital conducted PCR tests on weekend mornings too (without appointment!). So, on the day after our arrival, we queued up with about 20 others, and by 4pm we had our negative results and our vacation could start for real.

Quirky motives

As we arrived, things looked pretty much how we remembered them from two years ago. One exception, however, struck me at once. The flat where we have been staying during our summer visits to the old country is located close to the beach, where car parking is scarce and expensive. For many years, cheapskates as we are, we have dropped the bags off, and then driven to the outskirts, a good ten minutes’ walk away, to leave the car where parking is free (as loyal readers might remember from four years ago). Alas, even there, free parking had now disappeared, but for a small consideration of 5 euros (about £4.25) I could leave it all day. Time to take my own advice from back then: mental accounting! Bearing in mind the total cost of the trip (not least the cost of all the tests), 20 euros for four days’ parking seemed negligible in comparison – a fine case of motivated reasoning. Since I had no choice but to pay, I might as well reduce the corresponding pain by framing it that way.

Other things had not changed, including the number of dog owners, and a commensurate amount of dog excrement on the pavements. Even on my walk from the car to the flat, I immediately needed to engage scanning mode: while one eye contemplated the idiosyncratic architecture, the other constantly glanced ahead to avoid a smelly encounter. It is not that the local council had not adopted some pedigreed behavioural interventions: there are “poo tubes” all over the place, making it easy for dog owners to dispose of the plastic bags containing their pets’ recent production, with plenty of signs reminding them to do so. These classic nudges are complemented by equally classic incentives: a fine of 105 euros for leaving dog poo during the week, with a 40-euro turd surcharge on Saturdays and Sundays. But it was clear that this combination, even underpinned by remedial instruments in the shape of a fleet of poo vacuums roaming the streets, was, as is often the case, no match for the prevailing social norm among many dog owners. Changing social norms is hard to do, especially in a town where, over the summer season, most dog owners are tourists with little stake in the upkeep of the place. Behavioural interventions, sadly, have their limits.

Behavioural interventions don’t always work (image: Jonathan Davis/Flickr CC BY NC 2.0)

Principles, profits and irrationality

On Tuesday morning, as on the two previous days, we trundled over to the local mini-mart for some fresh breakfast pastries. Retailers in Belgium, as most everywhere else, have had a tough time, so we particularly wanted to support this store. As we approached, we realized the shutters were down, and a notice informed us that the shop was closed on Tuesdays. Now, in Belgium, shops are obliged to close their doors one day per week, but stores in recognized tourist areas are exempt from this rule. The owner of this one had clearly made the voluntary decision to sacrifice roughly one seventh of his weekly turnover. Irrational? Some (behavioural) economists might say so. Your correspondent is not among them, though. Perhaps the weekly closing day is an important principle for the owner; perhaps he feels he must be there whenever the shop is open, yet does not want to work seven days a week. Who are we to question the rationality of the preferences that lead him to close one day per week?

A similar conundrum arose in a discussion I had while having dinner in a somewhat unusual restaurant in town. It is located in a large old shed, with a décor to match: the simplest chairs and tables arranged in long rows seating well over 100 people, much like a school refectory. It also serves only seafood (no meat or vegetarian dishes), AND NO FRIES (everything is served with bread and butter). Doesn’t that put people off, and might making concessions on the menu widen the net, so to speak? For sure. But the place is pretty much packed every evening, even with a large number of outside tables in summer, so trying to attract more diners would be pointless. Might it instead be possible to increase the margins (by raising prices)? Undoubtedly, given that it was full even on a rainy weekday. But here too, other concerns are perhaps at play than pure economics. The restaurant’s challenge is akin to that of the artist who sells out night after night. This leaves money on the table: at least some people would be prepared to pay more, and that would be pure profit (as ticket resellers prove time and time again). But raising prices would tarnish the artist’s reputation – and the same might well be true for this restaurant. Precisely because they are popular, largely thanks to their far-reaching reputation, they can afford to decide what is, and isn’t, on the menu, and maybe that is what is important to the owners.

Like all of us, these business owners tend to do what feels right, and while earning money may well figure in it, it is rarely the only, or even the dominant, motive. And ‘doing what feels right’ would similarly seem to be the driver behind the behaviours in the other observations in this piece: the careless dog owners, my rationalizing of the cost of parking, and decisions made on the basis of wishful thinking or the consideration of sunk cost.

When looking for the reasons for behaviour, perhaps we need look no further than our innermost feelings.
