The limits of transactional thinking

Featured image: mh.xbhd.org/Flickr CC BY 2.0

Markets are instrumental to our health, wealth and wellbeing. They enable the expression of demand to be passed on to prospective suppliers, and allow the goods or services demanded to reach the customers willing to pay for them. Take a look around you wherever you are, and you will see numerous objects that someone (depending on where you are, possibly you yourself) needed, and that someone else supplied through the market. It’s little short of magic. But that near magic can fool us, if we are not careful.

Markets are brilliant

Trading and bartering have been around for thousands of years, likely for longer than records stretch. But we have Adam Smith to thank, if not for being the first person to describe the essence of trade, then certainly for being the first to capture it in the immortal words, “It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages.” We take our money to the butcher, the brewer or the baker, and in return we receive meat, beer or bread – and both they and we are better off. When many suppliers and many more customers can find each other in a market, the whole process of concluding transactions becomes easier still. This is, in part, because markets come with the concept of a market price, the amount paid for a good or service that ensures that the quantity produced precisely matches the quantity demanded.

In Adam Smith’s time customers and suppliers often had some form of relationship beyond an anonymous purchase transaction: the same customers regularly bought goods from the same local butcher, brewer and baker, and likely socialized with them outside the shops too.

A minor problem with the security of supply (image via DALL-E 3)

These days we have access to dozens and dozens of sellers who can offer us not just meat, beer and bread, but also washing up liquid, T-shirts, lipstick, tyres, mattresses and thousands more goods. We can ‘go to market’ at any time of the day or night (literally), and have our expectations met that, whatever it is we desire, there will be several suppliers willing to conclude a transaction at a price that is right for both of us. Our relationship with any of them rarely goes beyond that one single transaction because – well, why would it? The market delivers not just goods and services, it delivers abundance. Whenever we need something, the market can supply. Almost everything has become a commodity, available any time, any place.

Almost.

Unfortunately, the ubiquitous efficiency of the market for just about everything we need has led us sometimes to ignore that little word, and to assume that, to all intents and purposes, what is needed is always abundantly available. And this is not always so.

The value of security

Many countries have been affected by a shortage of drugs to treat ADHD, in part due to rising demand, but also as a result of “manufacturing issues”, according to the UK’s National Health Service. Henry Shelford, co-founder of the charity ADHD UK, said last Monday in a consumer programme on BBC radio that the supplier of the drug in question (Takeda) has only a single factory outside North America, located in Ireland, to supply the rest of the world – and that factory has suffered a catastrophic failure. According to an earlier Sky News report, it appears that there is no second source to supply the UK’s demand.

What used to be abundantly available suddenly no longer is. Abundance is relative: when we can get what we need whenever we need it, we experience abundance. But that may well be because the supply matches exactly the demand – not because there are ample suppliers. When, in a market, a supplier experiences problems, there are usually plenty of others who can step in. Abundance in a market means resilience. But when the apparent abundance is the result of a single supplier steadily meeting a steady demand, that resilience is not there. When the supply is disrupted, the abundance turns out to be a mirage.

A different example of abruptly vanishing abundance occurred last Tuesday evening at the UK’s border control posts, when the 270 electronic passport gates at 15 airports and international railway stations went out of action. These e-gates can handle incoming passengers much more efficiently than the old system in which an agent manually checked every individual’s passport. But in the old system, there was, in effect, a resilient marketplace at work with multiple suppliers (of labour) – the individual border control agents. At peak moments, more of them could be deployed, and if sickness prevented some of them from turning up, qualified staff could be brought in from other duties to increase capacity if needed. An event in which all agents – all the suppliers in the market – would be out of action at the same time was exceedingly unlikely. But the gates act like a single supplier, and an event in which their apparently abundant capacity for handling huge numbers of travellers suddenly and completely disappears is clearly not so improbable.
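The resilience argument can be made concrete with a toy probability calculation (the failure rates below are invented for illustration, not taken from any official figures): many small, independent suppliers only lose all capacity when every one of them fails at once, while a single supplier fails as a whole.

```python
# Toy model of supply resilience (illustrative numbers only).
# Ten independent border agents, each unavailable with probability 0.05,
# versus one e-gate system that fails outright with probability 0.05.

p_fail = 0.05

# Many independent suppliers: capacity only vanishes if all fail at once.
p_all_agents_out = p_fail ** 10      # roughly 1 in 10 trillion

# A single supplier: one failure wipes out all capacity.
p_gates_out = p_fail                 # 1 in 20

print(f"All 10 agents out at once: {p_all_agents_out:.1e}")
print(f"Single e-gate system out:  {p_gates_out:.2f}")
```

The independence assumption is of course idealized (a strike or a pandemic would correlate the agents’ absences), but it captures why a market with many suppliers comes with resilience built in.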

The resilience seems to have left us (photo: BBC TV screenshot)

In both cases, it is technically perfectly possible to make contingency arrangements to add resilience. In the extreme, the supplier of ADHD drugs could add a second, independent production line next door and thus vastly reduce the risk of a major disruption in supply. The UK border control service could likewise set up a parallel, independent system. More cost-effective solutions probably exist, but they require thinking beyond the transactional.

In a market, transactional relationships work fine because resilience, and hence security of supply, is inherent and self-evident. Much of what we buy is provided by markets where, thanks to competition, the prices are low. But we might forget that the resilience that is effectively included for free in the low prices of a market is not automatic when there is no market with an abundance of suppliers.

If security of supply represents a significant part of the value of a purchase, that value is not accurately reflected in the market price; it lies in how bad it would be if the supply were disrupted. Richard III, in Shakespeare’s eponymous play, probably possessed a stable full of horses that had cost him barely more than pocket money. But when he lost his horse in the heat of battle, the true value he put on one was no less than his entire kingdom.

When there is no abundant market ready to provide resilience, a transactional relationship with a supplier is not sufficient. Ensuring resilience requires a more profound relationship between customer and supplier, like a well-oiled machine, reflecting the fact that what the customer buys is not just product, but security of supply.


Suspicious minds

Featured photo: Pixabay

People can be too sceptical, and irrationally turn down offers that are genuine. Or might that not be as irrational as it seems?

Imagine a mad billionaire makes you the following offer: he will pay you an amount of money every time you drink a can of cola, starting at one dollar, increasing by one dollar with every subsequent drink. There is one constraint: as soon as you fail to drink the next can of cola within 24 hours, the payments will stop forever. Would you take the offer? If you even have the slightest hesitation, you are not alone. But why might that be?
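Purely in money terms, the offer snowballs: the n-th can pays n dollars, so after n cans the cumulative payout is n(n+1)/2 dollars. A quick sketch (my own arithmetic, not part of the original thought experiment):

```python
# Cumulative payout of the billionaire's offer: the n-th can of cola
# pays n dollars, so the total after n cans is n*(n+1)/2.

def total_payout(n_cans: int) -> int:
    return n_cans * (n_cans + 1) // 2

print(total_payout(1))    # 1 dollar for a single can
print(total_payout(30))   # 465 dollars after a month of daily cans
print(total_payout(365))  # 66,795 dollars after a year
```

So even a modest commitment pays off handsomely, which only deepens the puzzle of why we hesitate.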

I came across this thought experiment on Twitter a few days ago. At first sight, it is a no-brainer – apparently you can only win if you accept it: even drinking just the one can will earn you a dollar. My first reaction was to do a search to find out whether this was a recasting of a classic thought experiment, but to be honest, I also wanted to know whether there was a catch that I had missed. The thing is: I wasn’t sure that there wasn’t a catch, even though it seemed to be a pretty safe way of making money, potentially a large amount of it.

This hesitation is not risk aversion: the way the case is presented, there are two options, both with certain outcomes: either you decline and gain nothing, or you accept and end up with a gain. What might be behind it is what we might call learned scepticism. We have all heard of (or perhaps even been unlucky enough to experience) deception and fraud, if not from the Nigerian Prince scam, then from Ponzi scheme con artists like Bernie Madoff. But perhaps even deeper sits our conviction that the world is zero-sum. This is not true in general (commercial transactions produce a gain for both buyer and seller), but it is very much true when bad actors are involved. We believe that nobody will give us large amounts of money, unless they can win back even more from us. When someone makes us an offer that is too good to be true, it is clear: they must have ulterior motives, or there must be strings attached.

Cookie? No? And if I pay you $20? Definitely not? (photo: Pixabay/CC0)

Recent research by Andy Vonasch (a psychologist at the University of Canterbury in New Zealand) and colleagues set out to examine this tendency: why, if money is an incentive, is more money not necessarily a larger incentive? Is this evidence that humans are, as some might claim, irrational? The researchers speculated that people believe there is always a reason why others make them implausibly generous offers, because they adopt the “norm of self-interest” – everyone, including those making such offers, seeks to end up with the most money. That reason is that there are hidden “phantom costs” (unspecified downsides – the implausibly low-priced product is of poor quality, the activity for which they will pay us generously is dangerous, or the high-wage job on offer involves very hard labour), so the offer is not actually all that generous. Alongside the norm of self-interest, the authors propose two further criteria for such phantom costs to be perceived: one party violates this norm (by making an offer that significantly exceeds what is considered normal, e.g., offering $100 per hour for a job that would normally pay $20 per hour), and there is no salient explanation for the violation. The suspicion of phantom costs may be attenuated if the deviation is only moderate (e.g., an offer of double the normal pay), but can also be raised if any money is offered for an activity that normally does not involve payment. What happens is that, while the economic value of a transaction increases with increasingly generous offers, the imagined phantom cost reduces its psychological value, making the transaction less appealing, and hence less likely.
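The mechanism can be caricatured in a stylized calculation (my own simplification, not the authors’ formal model): the perceived value of an offer is its face value minus an imagined phantom cost that grows disproportionately once the offer deviates from the norm, so beyond some point a more generous offer actually looks worse.

```python
# Stylized phantom-cost model (illustrative, not from the paper):
# perceived value = offered pay minus an imagined phantom cost that
# grows convexly with the offer's excess over the normal wage.

NORMAL_WAGE = 20.0  # dollars per hour, the assumed market norm

def phantom_cost(offer: float) -> float:
    excess = max(0.0, offer - NORMAL_WAGE)
    return 0.02 * excess ** 2   # arbitrary convex penalty

def perceived_value(offer: float) -> float:
    return offer - phantom_cost(offer)

for offer in (20, 40, 100, 250):
    print(offer, round(perceived_value(offer), 1))
```

With these made-up parameters, a moderately generous offer ($40/hour) still looks better than the norm, but the $250/hour job looks deeply suspect, echoing the pattern the researchers observed.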

This is different from the conventional backfire effect, by which an incentive has the opposite result – for example, by charging parents who collect their children late from daycare to encourage them to be on time, late pick-ups can come to be seen as legitimized, and hence increase rather than decrease. It is also different from the so-called overjustification effect, by which paying people to perform enjoyable tasks decreases their motivation: here it is the offer – accepted or not – and not the payment that reduces interest in the transaction. Moreover, phantom costs require no intrinsic motivation in the transaction, and where it is present, do not affect it (e.g., if you like cola, you might be suspicious if someone gave you a can and offered you money to drink it, but you would not suddenly lose your liking for the drink).

The researchers conducted a range of experiments in three different settings: an offer to do something for money that normally would not require money to change hands, an offer for a job for a higher wage than normal, and an offer to sell a service at a considerable discount – in all but two cases without an adequate explanation for the deviation.

In the first setting, in one experiment people on a university campus were approached by a person claiming to have baked too many biscuits and asked whether they would like one, with or without an extra payment of $2; in three other experiments, participants were given hypothetical vignettes in which they were offered $100 for eating ice cream, for letting a stranger into a lobby, or for being given a ride home from the airport by a fellow traveller. In all four cases, the offer of money significantly reduced the likelihood of the offer being taken up. In the three vignette experiments, the psychological phantom cost was also gauged through the strength of agreement/disagreement with statements like “the person who wants to come in has good intentions” or “I would be scared to take the ride”, and was found to be significantly higher when money was offered.

The second setting used fake job advertisements for a truck driver or a construction worker, with randomly allocated hourly pay that ranged from the typical industry average ($20) to $250 for the first, and from $5 to $960 for the second job (with an industry average of $15). A similar experiment was conducted with Iranian participants, with a job as cleaner offered for a wage ranging from half to 20 times the normal pay. Participants, both in the US and in Iran, were extrinsically motivated by the extremely generous offers, but were put off by the assumed phantom costs (e.g., “the job would be extremely disgusting/dangerous”), so much so that they were less likely to take the highest-paying job than the one at the industry average.

Too good to be true? Or are the seats really uncomfortable? (photo: via Ryanair)

In the final two experiments, the effect of an explanation for the generous offer was examined. In one, a woman in a park recommending a walk along a particular path and offering money to do so reveals to some participants that she is a contestant in a game show and will win if they follow her recommendation. In the other, some participants are informed a flight is unusually cheap because the seats are uncomfortable. In both experiments, the explanation strongly reduces or even eliminates the phantom costs.

What this research illustrates in considerable detail is why and how we tend to be suspicious of offers that look too good to be true. In the absence of a plausible explanation, we imagine some psychological phantom cost, which we mentally subtract from the generous offer. The effect is that, at some point, extra money tends to backfire – and there is nothing irrational about it.

But what if there is no reason at all to imagine any phantom costs? If, in the thought experiment at the top of this piece, we can freely buy the cola ourselves (to make sure it is not tampered with), we are entirely in control, and any suspicion would be unjustified, wouldn’t it? Maybe not. Perhaps our residual reluctance to agree to the deal stems from being suspicious of ourselves. For when would we stop? Perhaps the phantom cost of forever being beholden to a mad billionaire – which we impose on ourselves – is what holds us back.

We have suspicious minds – and rationally so.


Thinking fast and wrong

featured image by Julian Mason/Flickr CC BY 2.0

They say parents live vicariously through their children. Well, last Sunday, my distant (and, frankly, always unrealistic) dream of running the London Marathon was going to be realized by my daughter. She had been entered in the ballot for a place by her husband, and as one of the 20,000 lucky ones (out of 500,000 people applying), she was given a starting time of 11am, at one of the four starting locations (the logistics of handling a record 53,000 runners are quite something). And thanks to modern technology, we would be able to track her both via her own sports watch, and through the official Marathon app.

The app seemed to have some trouble showing her details, but thankfully, the email with the link to her watch data came through and we could see her start off. That is to say, the link we received was actually for her husband’s device. Why might that be?

Jump… to conclusions

I could have come up with at least half a dozen possible reasons, but in the moment, my first thought was that there had been a problem with her watch, and that at the last moment he had given her his. Was that the most likely one? Hard to tell – all of the possible reasons for this puzzling turn of events are highly unlikely since, statistically speaking, the overwhelming majority of runners simply use their own device. But it was plausible: it was compatible with the (very little) information we had, and with our stored knowledge (her husband was with her, had a similar watch, she would not want to run without one, etc.).

This way of thinking is actually using an evolved, adaptive survival instrument. When our distant ancestors were presented with a strange stimulus, they had to determine what it meant, and what – if anything – they needed to do in response to ensure they survived. An individual for whom, of all the different possibilities, a serious threat was way down the list would be eaten before having had the chance to decide that the noise was a sabre-tooth tiger. He or she could clearly not have been an ancestor (otherwise we’d not be here). Moreover, our actual ancestors not only put the most critical threats at the top of the list; they also did not spend much time considering alternative possibilities, and were quick to guess that a suspicious sound was a threat.

A sabre-tooth tiger, or a gust of wind? Better play it safe! (image: DALL-E 3)

Yes, they jumped to conclusions, but even if, more often than not, their first assumption turned out to have been a false positive (mistaking a gust of wind for a predator), picking the first possibility was still a winning approach. One false negative (mistaking a predator for a gust of wind) would be enough to wipe an individual out, so the trade-off was skewed very much towards avoiding false negatives. Today, we, their descendants, are still more likely to pick the first possibility we think of, especially in situations of elevated emotional arousal, which we tend to associate with having no time to cogitate at leisure.
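That skewed trade-off can be expressed as a simple expected-cost comparison (the numbers are invented for illustration): when a false negative is vastly costlier than a false positive, the jumpy strategy wins even though genuine threats are rare.

```python
# Toy expected-cost model of threat detection (numbers invented).
# A false positive (fleeing from wind) wastes a little energy;
# a false negative (ignoring a predator) is catastrophic.

P_THREAT = 0.01             # most strange noises are harmless
COST_FALSE_POSITIVE = 1     # a wasted sprint
COST_FALSE_NEGATIVE = 1000  # being eaten

# Strategy A: always assume a threat -> only false-positive costs.
cost_jumpy = (1 - P_THREAT) * COST_FALSE_POSITIVE

# Strategy B: always assume wind -> only false-negative costs.
cost_calm = P_THREAT * COST_FALSE_NEGATIVE

print(f"jumpy: {cost_jumpy:.2f}, calm: {cost_calm:.2f}")
```

Even with threats at one in a hundred, the expected cost of staying calm is an order of magnitude higher than that of jumping at every noise, which is the asymmetry natural selection would have exploited.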

For related reasons, we tend to look for confirmation rather than disconfirmation. Who has the greater chance of survival of two individuals who both assume there is a threat: the one who maintains that assumption until the evidence that it is wrong is overwhelming, or the one who immediately starts looking for evidence that contradicts it? Right. Not only does that mean we emphasize evidence that supports our initial guess, we also recruit our reasoning skills to interpret the evidence so as to persuade ourselves that we are right. In The Enigma of Reason (summary here), cognitive scientists Hugo Mercier and Dan Sperber argue that our reasoning skills evolved not so much as a tool to think abstract thoughts, but to convince others, as that was a valuable ability for a member of the cooperating species we were in the process of becoming. The same mechanism is also effective at persuading ourselves – perhaps it started evolving in this respect even before it became an instrument for persuading others.

Motivated reasoning for the win

I even thought, at the time, that the swap of devices could explain why the marathon app failed to track the progress of my daughter. Later, I would realize that this was a crazy bit of reasoning (the marathon app used the tag of the runners to track them along the route and has nothing to do with runners’ individual devices). But it only goes to show the power of motivated reasoning to create and sustain beliefs. At the time, it provided sufficient confirmation to increase my confidence in the initial assumption.

Initially, the tracking information I saw showed a fairly quick pace, characteristic for the start of a run with a very dense group of participants. After about a kilometre, the pace dropped to almost walking speed. In other circumstances this might have raised my suspicion, but motivated reasoning still had the upper hand: obviously this was because, as different streams of runners joined up, the route turned into one giant bottleneck. That was the moment I temporarily quit tracking my daughter, as there was not all that much to be seen, and we had her two boys to keep entertained.

About an hour and a half later, I had another look to check her progress, and noticed the device had stopped providing tracking data. Looking at the history, I saw that over the first five kilometres the pace was irregular, and the heart rate data fluctuated between 80 and 160 bpm… and then nothing. Had she given up? She had reported not feeling 100%, with a lingering cold, but nothing serious… but if so, how come she had not let us know? Further inspection of the tracking data revealed more mysteries: the route she had followed contained several dead-end forays into side streets. Surely she had not got lost?

Here, we see the mechanism of confirmation and rebuttal in action: I was hanging on to my original belief, but the contradicting evidence was piling up. Perhaps she was not actually running with another watch after all? I decided to look again at my emails. And yes, a few minutes after the first mail with a link, there was a second one – this one for my daughter’s device. Tappety-tap. Hey presto, there was the moving dot, just passing the 19km marker.

At that point, my wife reported from the kitchen, “The app is working again, and she is coming up to 20 km!” Now, my slow-thinking self was having none of it – enough with the motivated reasoning! This was a complete coincidence, and the reason why the app had failed to show her information for nearly the first half of the run shall forever remain a mystery. Fine. Whatever.

The perfect combination of thinking fast and slow (image via DALL-E 3)

Later in the day, I reflected on my thought process. We are often quick, and sometimes too quick, to jump to a conclusion, and then hang on to it, not letting go until we have strong contradictory evidence. But that fast thinking is the reason why we are all here. Our ancestors’ cousins who didn’t think that way failed to pass on their genes before ending up as some carnivore’s lunch – and so I felt kind of thankful.

It may take us a while before we start questioning our hypotheses and beliefs, but that is something we can control, since that is the slow part of our thinking. If there is no risk of an untimely death and no urgency, we can step in and consider alternative explanations for what we experienced, and indeed, as the case may be, admit that thinking fast got us thinking wrong. The intellectual humility that helps us take that step has probably not had the time yet to evolve as an inherent common trait, but that doesn’t stop us from consciously practising it.

Epilogue: the link to my son-in-law’s watch I had received was the result of an omission. He had participated in the Manchester marathon a week earlier, and he had initially forgotten to switch off the auto-email function – something he eventually noticed half an hour after the start. And my daughter? She finished in 4 hours and 25 minutes and some seconds, a time I can only dream of, but I am happy that half my genes have contributed to it. Isn’t evolution great?


Bans and obligations: a last resort?

(featured image collage with work by LtapsaH/Pixabay)

A decision whether or not to do something is almost always made on the basis of one of two types of judgement: does it deliver a benefit (or avoid harm), or is it the right thing to do? But sometimes, there is a third factor: the law. Does the law oblige us to do something, or prohibit us from doing something? Two recent developments, one in my native country and one in the one where I have lived for many years, caught my attention and made me wonder about the impact of the law on our decisions.

Belgium is one of the few countries where voting (or, more precisely, presenting yourself at a polling station) is mandatory, with a penalty of between 40 and 80 euros (escalating for repeat offenders, with, ironically, a ban on voting for persistent ones). This year, for the first time, 16- and 17-year-olds will be allowed to vote, and there was concern about the legal obligation for these youngest of voters to exercise this democratic right. An amendment to exempt them from this duty was declared null and void by the constitutional court, so every Belgian citizen over 16 who fails to turn up still risks a fine (in theory, since prosecutions have been few and far between). Yet is such legal compulsion a good idea?

In 50 years, anyone smoking under 65 will be underage (photo: Mauricio Fotografia/Flickr CC BY NC ND 2.0)

In the UK, the government is tabling legislation to introduce a phased ban on the sale of tobacco. If it becomes law, it will be illegal for anyone born after 1 January 2009 to purchase tobacco products (or for people born before that date to purchase them on behalf of people born afterwards). Technically, smoking itself will not be banned, but the intended effect is to prevent younger people, even as they get older, from smoking. Is such a legal ban a good idea?

Not so simple

Superficially, obligations and bans seem simple enough: something is either obligatory or not, prohibited or not. But two aspects of such legal constraints make them a much more interesting influence on decision making than their binary appearance suggests. The first is that proscriptions and legal requirements are not absolute: they tend to translate into a particular punishment in response to a breach. The second is that the burden of an obligation or a prohibition only really matters for those whose behaviour deviates from what the law prescribes. If you are a loyal voter in Belgium, the threat of a fine is of no importance to you; if you are a non-smoker (or indeed over the age of 15) in the UK, the law will have no effect on you.

If, as is almost always the case, punishment follows transgression, it can be seen as part of a risky bargain: if you are caught, you pay the price. For those who choose to transgress, this takes it out of the realm of intrinsic motivation, and even of moral obligation: paying up is a transactional cost which clears your moral debt. A classic example of this is the paper A fine is a price, by economists Uri Gneezy and Aldo Rustichini, which describes how the introduction of a fine for parents who collected their children late from daycare institutionalized the behaviour. Instead of reducing the number of late pick-ups, it increased them.

The surprise at this result stems from our tendency to estimate the effectiveness of the threat of the punishment from our own perspective. If we are conscientious parents, the fine will, if anything, make us even more conscientious. Similarly, if we would not dream of shoplifting, we would do so even less if we risk being punished, therefore we believe that this will apply to everyone. We fail to consider that, for some, the penalty for a transgression becomes an invitation to a transaction, negating any (weak) moral hesitation they might have had. Punishment can even normalize the situation – we are all great at adapting to new situations: we don’t like paying but if that’s the price, so be it. Stronger punishment? That may well backfire: research suggests that harsher punishment fails to deter (with only a small fraction of crime reduction being attributable to punishment policy). Some transgressors will simply choose to raise the reward by increasing the frequency or magnitude of the transgression to compensate for the added risk that is imposed on them.

Coercion is generally an expensive way of pursuing behaviour change in any case. If you force people to behave in a way they would not normally do, the game of reward and punishment means enforcement is likely to be needed indefinitely. Several decades of working in organizational change have left me in no doubt that desired change, in which individuals want to adopt different behaviour, is much more effective than imposed change, in which they have to. People will seek, and find, ways around the restrictions and impositions.

Looking for a balance

The fact that laws are often de facto only relevant to a small minority of people is significant too. If a law affects relatively few people, those who remain untouched by it are unlikely to question it. Quite the contrary, perhaps: if behaviour A is the norm, a law prohibiting non-A behaviour will force those violators to conform, won’t it? Maybe. But some people, even if they are conforming now, might flip, an effect known as reactance: when people feel their autonomy is threatened or constrained, they will resist and try to restore their freedom by doing what is not allowed, or failing to do what is obliged.

This phenomenon illustrates how deeply we value our individual freedom. It is obvious that we cannot realistically claim absolute freedom to do whatever we want, regardless of the consequences. Any such scenario would rapidly lead to a dystopian society and eventually its complete breakdown. But we evolved as a social species, capable of balancing our own, personal freedom with that of others. We are considerate of our fellow humans, by and large, because we intrinsically believe that this is the right thing to do, not because we are afraid of being punished if we don’t. Intriguingly, our willingness to comply seems even to increase when our freedom not to comply is explicitly asserted – the opposite of reactance – known as the “But You Are Free” technique, as research by psychologists Nicolas Guéguen and Alexandre Pascual, and a later meta-analysis of 42 studies by psychologist and communication scientist Christopher Carpenter found.

No obligation (photo: RachelH_/Flickr CC BY 2.0)

This is not an argument against all legal bans and obligations, but against their use without consideration of the fundamental value of individual freedom. We should be careful about sacrificing it, and make sure that, when we do, we have a good justification – especially if it is not our freedom, but someone else’s, that is on the line. We should be even more cautious about using legal bans and obligations, and the threat of punishment, as a paternalistic instrument to shape individual behaviour or societal norms. (Remember that nudging is deliberately described as freedom-preserving libertarian paternalism.)

So what of the British anti-smoking legislation and the Belgian mandatory voting? The case for the former is weak. Historically, legal prohibition of alcohol and drugs has not really been a resounding success, and given that smokers already pay through the nose for their nicotine fix, a fine is unlikely to make much difference. Should such personal decisions really be the government’s business, regardless of the detrimental effect on the smoker? (And if the argument is the externality smokers impose on society, the duty paid on tobacco sales is perfectly suited to handle that.) But in Belgium, an unenforced old law does perhaps have its use. The fact that nobody gets fined for not turning up at the polling station is well known in Belgium – functioning much like a “But You Are Free” invitation – with, regardless, more than 90% freely exercising their right (or fulfilling their legal duty).


Beyond value for money: lessons from a broken-down car

(Featured image: cottonbro studio/pexels)

Economic transactions like purchases are often treated as exchanges: you pay someone money and get goods or services in return. But is the only thing that matters that the benefit outweighs the cost – that you get value for money?

I drive an old car. That is to say, for the last four weeks, I didn’t drive anything at all: the car had developed a fault, and was off the road during that time. At the end of that period, the fault was identified and repaired, and my car was mobile again. Yes, it had taken a long time, and the repair was pricey, but I got what I needed in the end. All’s well that ends well?

A journey through unnecessary uncertainty

Not quite. If, when the car was towed into the dealership, I had been told that the whole process would take nearly a month, I might have huffed and puffed and rolled my eyes, but I would have had something that had been all too scarce for most of those four weeks: information. This is an underestimated aspect of interaction – commercial or otherwise. Information allows us to make decisions and choices – even if it is not absolutely certain. It has instrumental value. In addition, offering information is an indirect signal of care: it takes effort to acquire and to communicate it, and the willingness to make that effort tells the recipient that they are being valued.

Being left in the dark, in contrast, leaves one with the sense of not mattering much, and finding it hard to make decisions. Here, there was a lot of being left in the dark: the four weeks of immobility did not happen in one go, but in several successive chunks, each one accentuating the agony.

Left in the dark (image: Semion Krivenko Adamov/Pixabay)

It started with some good facts, though. The car was taken to the garage on a Saturday (let’s call that Day 0), and by the end of Day 2 – the Monday evening – I had received a call confirming the fault had been diagnosed: a defective low-pressure fuel pump. This is a part that rarely breaks, and it had to be ordered, with a delivery delay of 3-4 days. Once the part was in, the car could be brought into the workshop and fixed.

It is important to note that these facts were not remotely incompatible with a duration of four weeks to fix the car, but still, that did not quite occur to me. We don’t reason with raw facts, but we convert them first into meaning – we answer the so what? question. We do that by interpreting them into possible conclusions (about present situations) or predictions (of future situations). But we rarely do this in a comprehensive manner, considering all possibilities. Moreover, to do so, we add some preconceptions and assumptions (and often a good dollop of wishful thinking!), and end up with what we believe is the right, the most plausible, the most likely meaning. And we reason based on that meaning to make decisions.

To me, the facts meant that the part might (and, with added wishful thinking, would) be delivered by Day 5 or 6, and that the car would then immediately be rolled into the workshop and fixed. My expectation was that the car would thus be fixed by the weekend or, at the latest, a few days after (Day 9 or 10).

People abhor uncertainty, and so, eager to get some form of uncertainty-reducing confirmation, I tried to get in touch with the service department several times on Days 6 and 7 to find out whether the part had been delivered, but my calls were never returned. With no new information to update my expectation, I felt very much left in the lurch. No news was definitely not good news to me.

The next Monday (Day 9), the service manager eventually called me with “good news”: the part had been delivered, and they would start work on my car “at the end of the week or next week”.  New information! Ok, it was a week later than I originally had assumed, but now I could surely expect the car to be ready by Day 13, or at the latest early in week 3 (Day 16), right? As the end of week 2 approached, no updates were forthcoming, and my attempts to find out what the status was were once again left unanswered. By the middle of week 3 (Day 18), my mood had changed to dull resignation. I realized that I had not been the only person subject to wishful thinking. My wishful inferences had, in fact, been built on the service manager’s own wishful thinking.

Insights from the waiting game

The saga eventually came to a conclusion. I received a call on the morning of Day 24 with the joyful news that my car was in the workshop. Wishful thinking played its last trump card: surely, I would now be able to collect the car in the evening, at long last? Yet once more, no update came. Wishful thinking folded and pessimism took its place. The mechanics had probably found another fault that would turn out to be too expensive to fix. I could not even muster the energy to make another call to the garage to get confirmation of this doom scenario. Then, late in the afternoon of Day 25, the call came: everything was fine and I could pick up my car the next morning… after 26 days.

A source of information… when it rings (photo: Pixabay/Pexels)

What can be learned from this extended event? One thing is that managing uncertainty is tricky. We need to navigate between what we would like to happen, what is likely to happen, what might happen, and what we fear might happen. Emotions – both the optimism of wishful thinking and the pessimism of the doom scenarios – are a poor pilot to avoid trouble. The only role we should allow them to play is to define the best-case and the worst-case scenarios.

Faced with uncertainty, we fill in the blanks with whatever we can find – assumptions and preconceptions, some informed by facts, others more instances of speculation and conjecture. We do well to try to be more conscious of how we construct meaning from scarce facts, and to keep emotions out of this process. This will help us avoid jumping to conclusions, and shape a more accurate picture of the different scenarios and their relative likelihood (what Annie Duke calls Thinking in Bets). Perhaps even more importantly, it will help us avoid the emotional anguish of repeated cycles of building up hope and soon seeing it quashed.

For anyone dealing with customers, the lessons are, if anything, even clearer. There are three. One is that information matters more than certainty. Even if you are uncertain about what might happen, or when, you can offer information – information that allows customers to make choices, rather than leaving them to their own speculation. Use your judgement, your best- and worst-case estimates, and indicate what is likely and what is possible. Two: don’t give information that is based on wishful thinking. Just be straight. And three: be responsive. A quick call back, an SMS or an email signals to the customer that you care about them. Failing to respond to their calls also sends a signal to the customer: they’re not important enough to deserve being kept informed. You have direct influence on your customers’ ability to make good decisions, and on their mood. That contributes to their satisfaction well beyond offering them value for money.

Posted in Behavioural economics, Cognitive biases and fallacies, Emotions, Psychology | Tagged , | Leave a comment

A craving for certainty

(featured photo: Bill Reynolds/Flickr CC BY 2.0)

Retirement planning. It’s one of the classic topics in behavioural economics: most people realize they should be saving (more) for their retirement, but the utility of spending the money now seems so much higher. Retirement is so far in the future that we find it hard to picture ourselves in that position. The technical terms used, myopia and present bias, exemplify what it is about. If you manage to resist these tendencies and put money away in a pot to fund your eventual retirement, you get some welcome peace of mind. But unfortunately, that is temporary. Once retirement is no longer in the distant future, a new dilemma presents itself – not between spending and saving, but between two very different ways of securing an income from that pot.

Simple and certain

In the UK, until 2015, prospective pensioners could take 25% of their pension savings as a tax-free lump sum, but they had to purchase an annuity with the balance – hand it over to an insurance company, which then pays you a regular amount of money, for as long as you live. Consequently, the advice that was widely given to pension savers was to derisk their portfolio by shifting investment from stocks to bonds in the last few years before retirement, to avoid a sudden stock market crash annihilating your fund just when you’re about to buy an annuity.

Annuities are the perfect solution for the risk-averse person. You know exactly how much you will get, without having to worry about the money running out, or about interest rates or stock market performance. The only remaining worry is inflation, and hey, you can inflation-proof your annuity too.

A cat with an annuity: total peace of mind (photo: jdblack/Pixabay)

But peace of mind does not come for free. Annuity rates are linked to long-term bond rates, and generally, certainly over longer periods, these have been consistently several percentage points below stock market returns. This was not an issue when the purchase of an annuity was mandatory and there was no other choice, but since 2015, this obligation has been lifted. It is now possible to choose to keep your pension invested and draw it down over time, which would give you a more generous retirement income. You draw the market return gains from your investment, and top them up with a slice of capital to fund your living costs. In good years, you may be able to leave the capital untouched, and not deplete it at all. But even if you need to withdraw capital, that will, initially at least, be at a low rate. However, if you are unlucky, and the stock market falls, you may need to take out a sizeable chunk of capital to pay the bills – so much that the absolute return in the next years will never again be enough to give you the income you need, thus forcing you to keep on drawing more and more capital… until it is all gone. Oops.
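This spiral is often called sequence-of-returns risk: once you are making regular withdrawals, the order in which returns arrive matters, even when the returns themselves are identical. A minimal sketch (the pot size, withdrawal and annual returns are purely illustrative assumptions, not figures from any real portfolio):

```python
def pot_after(returns, start=100_000.0, withdrawal=6_000.0):
    """Apply a sequence of annual returns to a pension pot,
    taking a fixed withdrawal at the end of each year."""
    pot = start
    for r in returns:
        pot = max(0.0, pot * (1 + r) - withdrawal)
    return pot

# The very same annual returns, in two different orders
crash_first = [-0.25, -0.15, 0.10, 0.10, 0.10, 0.10, 0.10, 0.10]
crash_last = list(reversed(crash_first))

print(round(pot_after(crash_first)))  # → 46979
print(round(pot_after(crash_last)))   # → 72925
# Without withdrawals the order would make no difference at all;
# it is the withdrawals during the downturn that do the damage.
```

An early crash forces capital to be sold at depressed values, leaving too little to benefit from the subsequent recovery – exactly the predicament described above.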

As a would-be pensioner, you now need to choose how to provide for your ageing self. Until now, my retirement fund has had a moderately high-risk profile, which has served me well, even across a few downturns in the last few decades. I have not de-risked it, as the old advice would have had it, because there is no longer an obligation to buy an annuity. I am well aware of human cognitive biases, and so the drawdown option – leaving the pot invested for the long term and living off the proceeds – would seem the best choice. But I sense some cold feet. The lure of the apparent certainty of the annuity seems irresistible. Why might I have suddenly become more risk averse?

Reasons to be risk averse

Risk aversion seems to be literally in our genes. A recent systematic review by Francisco Molins, a neuropsychologist at the University of Valencia, and colleagues concluded that risk aversion may have a genetic basis. Variations in our DNA that relate to the regulation of two key neurotransmitters, serotonin and dopamine, appear to be associated with different degrees of risk aversion in individuals. In particular, a lower sensitivity to dopamine (associated with, among other things, pleasure and reward, learning, and behaviour and cognition), and higher levels of serotonin (associated with mood regulation, memory and also learning) would be linked to higher risk aversion. But that does not quite explain why anyone would be less risk averse while saving, and more risk averse when choosing how to live off the proceeds.

For that, prospect theory, developed by Amos Tversky and Daniel Kahneman (who recently died) offers several complementary explanations. According to this theory, we are more risk averse for gains than for losses. The decision at the point of retirement is one between a certain gain (the annuity) and an uncertain gain (the draw-down pension). That is a different reference point compared to the person still saving, when contributions not only continually add to the fund, but also, should the overall value drop significantly, would buy more assets, and hence increased potential for more growth over time. Irrespective of the reference level of retirement income we might determine, we are also loss averse. Literally running out of money before we die is pretty catastrophic, and not outweighed by the potential of a more comfortable retirement when we’re lucky. And prospect theory also holds that we overweight small probabilities (such as that of a dramatic fall in the stock market that would mean we outlive our savings).
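These ingredients of prospect theory can be made concrete with the functional forms and median parameter estimates from Tversky and Kahneman’s 1992 cumulative prospect theory paper (α = 0.88, λ = 2.25 and γ = 0.61 are their published estimates; applying them to the annuity dilemma is my own illustration):

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains,
    steeper (loss-averse) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.61):
    """Probability-weighting function: small probabilities
    are overweighted relative to their true value."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(round(value(100), 1))    # a gain of 100 feels like ~57.5
print(round(value(-100), 1))   # an equal loss feels like ~-129.5
print(round(weight(0.01), 3))  # a 1% chance is weighted like ~5.5%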

Another reason for a preference towards the annuity option might be that several decades in which the annuity was the only choice have worked as an anchor, turning it into a default that does not need justifying.

Dueling regrets

Regret, I have just the one, but it’s a big one (photo: derneuemann/Pixabay)

But perhaps the most important underlying cause for the dilemma is the prospect of future regret – of having made the wrong decision. The annuity option avoids ever regretting that a big chunk of your nest egg is wiped away by the kind of stock market slump that followed the dotcom bubble and 9/11 (when, for example, the S&P500 index ended down three years in a row, with a total cumulative loss of over 38%), or the precipitous crash in 2008 (when the S&P500 lost 37% in a single year). But it also embodies the lingering ‘what if’ regret of missed opportunity. Currently, the rate for a joint life annuity, increasing by 3% per year to protect against inflation, is around 4.5%. That is distinctly mediocre compared to the annualized return of the S&P500, which, even including the two major downturns in the last 25 years, was around 7.5%.

These regrets are embroiled in a duel. If I opt for the drawdown pension, and there is a dramatic downturn that slashes my investment, I will regret not having chosen the security of the annuity, even if the income would have been more modest. If I choose the annuity, over time I might find that inflation is eroding my purchasing power, and I will regret missing out on the market returns that would have made my retirement a lot more comfortable.

Perhaps the best way to resolve the dilemma is a hybrid solution: ensure there is a floor underneath any potential losses that eliminates the catastrophic risk of running out of money before I die, and leave another part of my investment in place to capture market upsides and provide protection against inflation.

If only I could get rock solid certainty about how I should split the pension pot between the two options to maximize peace of mind in my old age…

Posted in Behavioural economics, Cognitive biases and fallacies, Economics, Emotions, Psychology | Tagged , | Leave a comment

The amoral universe

(featured image: Hubble ESA/Flickr CC BY 2.0)

Every parent will, sooner or later, experience clashes between their own beliefs and those of their children. The first such clash in my life happened when my father and I disagreed about the make of a particular car.

Throughout the 1950s, Auto Union – later to become Audi, but at the time also known as DKW – had been producing a sequence of very rounded, streamlined cars not unlike the VW beetle. These emitted a characteristic sound, typical for their two-stroke engine, which my four-year-old ears found quite funny. In the early sixties, the newest model, the DKW Junior, however, looked very different (but sounded the same), and my father disbelieved my adamant claim that it was, nonetheless, a DKW. It was not a profound conflict (certainly not compared to the one that would follow a decade later, concerning the length of my hair), but it was still a disagreement. More importantly, it was a disagreement that could be settled – and actually was shortly after, when, on a walk somewhere, we saw one parked up and could substantiate its identity. (My father kept a brave face, but I am not sure he ever really forgot that he lost that argument with me!)

Both are DKWs – I did tell you, didn’t I? (photos: Wikimedia CC BY-SA 3.0)

When beliefs concern facts, conflicts can often be settled through simple observation and verifying which claim corresponds best to the observed facts. Often, but not always. Perhaps an extreme case is that of the Flat Earth beliefs. For most people, there is no doubt that our planet, like most large celestial bodies, is roughly spherical in shape, with plenty of factual evidence to support this claim. But some maintain a different belief (scientific or biblical) that the Earth is a flat disc, with the North Pole at its centre, and a wall of ice (Antarctica) at its edge. While my father accepted the evidence of the badge on the parked DKW Junior, staunch Flat-Earthers refuse to recognize the evidence that the Earth is a spheroid.

So, even beliefs concerning observable facts in the physical world can be contentious. Absolute scientific proof is often elusive. Theories and models are constructed that explain observations and predict outcomes, but it is generally impossible to prove that a theory is the only one that fits the present observations. Belief in a theory is then ultimately a matter of judgement of the credibility of the argumentation and the evidence presented, and the balance of probabilities.

In the immaterial world

But many of our beliefs are not concerned with the material world. We all have beliefs about what is right and wrong – what we call morality. And even though those are sometimes as strong as, if not stronger than, the belief that the earth is a spheroid, there is no objective basis for them. Where do these beliefs come from?

One interesting theory links morality with evolution. In contrast with the popular Moral Foundations Theory, which is – according to its co-originator Jonathan Haidt – ‘ad hoc’, Morality as Cooperation (MAC), developed by Oliver Scott Curry and colleagues, takes the evolution of humans as a cooperative species as its starting point. It views morality as a collection of biological and cultural solutions to a range of problems of cooperation that humans and their ancestors have been facing for 50 million years. Over that period, the best win-win solutions evolved into seven moral domains reflecting different kinds of cooperation, which Scott Curry summarizes thus:

(1) Kin selection explains why we feel a special duty of care for our families, and why we abhor incest. (2) Mutualism explains why we form groups and coalitions (there is strength and safety in numbers), and hence why we value unity, solidarity, and loyalty. (3) Social exchange explains why we trust others, reciprocate favours, feel gratitude and guilt, make amends, and forgive. And conflict resolution explains why we (4) engage in costly displays of prowess such as bravery and generosity, why we (5) express humility and defer to our superiors, why we (6) divide disputed resources fairly and equitably, and why we (7) respect others’ property and refrain from stealing.

This quite advanced mechanism for organizing stable groups of individuals could, however, only develop and take root thanks to the capacity of all living organisms to survive and pass on their genes to the next generation. This requires a fundamental capability to detect threats and take action to avoid them, and to detect nutrition and take action to obtain and consume it. In other words, to establish what is good and what is bad, what is right and what is wrong, and to pursue the former and avoid the latter.

Oh yes, we already had moral rules and beliefs! (image via DALL-E 3)

Many of our present moral beliefs can be traced back to one of the seven domains of MAC, and are still serving us to benefit from cooperation. There are of course different ways to operationalize the rules in these domains and to resolve conflicts between them. That is why moral rules can differ in detail from one culture to the next, and indeed from one person to the next, even though they all ultimately serve cooperation.

Elusive objectivity

However, nothing stops us from repurposing the same mechanism of distinguishing right from wrong for developing or adopting beliefs, and develop rules and social norms, that do not directly stimulate or facilitate cooperation between individuals, but establish a group identity that binds a tribe together and helps to differentiate it from other tribes. Think of religious and secular rituals and obligations (attending religious services on a particular day of the week, celebrating a national holiday), or shared ideologies (socialism, free markets), and even arbitrary beliefs (animal rights, dietary choice, fashion).

The moral rules that directly support cooperation can be argued to have some objective validity, since they evolved and persisted. Yet there is no reason to believe that the present set is the only possible way in which this could have happened, as the diversity in moral frameworks across cultures already illustrates. The more arbitrary social norms, however, lack any objectivity. The only thing that holds such beliefs together is the consensus among the believers.

Nonetheless, we often feel very strongly about what we believe is right and wrong, whether it is that we should not steal each other’s stuff, or that we should not eat meat. What’s more, we tend to believe that our beliefs are the right ones and the others’ wrong, or at least that ours are superior to those of others. When we see people simultaneously sharing some of our rules while violating others, we perceive cognitive dissonance in them – surely they must have come up with some twisted motivated reasoning not to share all of our rules! But perhaps the cognitive dissonance is all ours. We believe that our moral beliefs are the only right ones and apply universally. We seek to resolve the fact that not everyone shares them by concluding that they must be wrong – rather than acknowledging that our beliefs hold no special status, and are as arbitrary as theirs.

We believe (and have reasonable evidence to back this belief) that the laws of physics – which Mr Scott, the Chief Engineer on Star Trek’s Enterprise, famously declared cannot be changed – apply across the known universe. We can say no such thing about our moral rules. The universe is amoral, and cannot give us the moral certainty of universal truths that we crave. We are on our own: we have to work the answers out for ourselves, and accept that there is no single right answer.

Posted in Behavioural economics, Cognitive biases and fallacies, Emotions, Morality, Psychology | Tagged | Leave a comment

Don’t confuse the facts with the truth

(featured image via DALL-E 3)

A good 20 years ago, Frédéric Brochet, then a PhD student in oenology (the study of wine), was cataloguing, for his dissertation, the descriptors wine tasters use. He ran an experiment with 54 undergraduate students at Bordeaux (where else!) University’s Faculty of Oenology, tasting a white (sémillon and sauvignon) and a red (cabernet-sauvignon and merlot) Bordeaux wine. The participants were asked to pick from a list of odour descriptors those they thought corresponded with the wines (they could also provide new descriptors themselves). One week later, the same panel was invited to taste the same wines again, with each participant receiving the full list of descriptors he or she had produced a week earlier, listed alphabetically and without any indication which descriptor had been used for the red or the white wine. For each of the descriptors, they were asked to indicate which of the two wines most intensely presented the character of the descriptor. There was one twist, though: this time the red wine was in fact the white wine, with the addition of a neutral red dye. Nonetheless, they ascribed the descriptors used earlier for the red wine to the fake red wine, as if it were real. They interpreted the fact that the wine was coloured red as a definitive signal that they were genuinely tasting red wine.

The predicting mind

A common model of how our mind functions is that it acts like a prediction machine. It creates a picture of the world based on what we have stored in our memory, and constantly verifies it against input from our senses, adjusting it as required. Because the world more often conforms with what we know about it than deviates from it, we don’t need to reconstruct the entire picture from scratch every millisecond; we only need to process what is different, when something is different. Efficient, or what?

Sometimes this assumption of concordance is so strong that subsequent signals to the contrary have no effect. This is what happened with these wine-expert students. Despite their comparative youth, being French they would have had the chance to taste numerous wines, even before they started their course. Every single time, they will have associated the colour with the nature of the wine. But our minds do not automatically distinguish between observed facts (“this looks like red wine”) and the meaning we associate with them (“this is red wine”).

Are you questioning my legitimate victory? (photo: Tim Reckmann/Flickr CC BY 2.0)

The truth was that they had been given two glasses of white wine; the fact was that one of those glasses was spiked with a red dye. They had interpreted this fact as an unequivocal sign that the glass contained genuine red wine – and that belief was so strong that they ignored the information their nose was supplying.

I was reminded of this remarkable study when, after last weekend’s presidential election in Russia, Vladimir Putin was reported to have gained 87.8% of the vote. This cited result is a fact. Yet while some political leaders, from Chinese president Xi Jinping and Indian prime minister Narendra Modi to Iran’s president Raisi and North Korean leader Kim Jong Un, congratulated Putin on his landslide re-election, in Europe the election was labelled a “pseudo-election” (by German President Steinmeier), “neither legal, free or fair” (by the Polish foreign ministry), an “imitation of elections” (by Ukrainian president Zelenskyy), and “so-called elections” (by German Chancellor Scholz’s spokeswoman Christina Hoffmann). The same fact (87.8% of the vote), but two very different interpretations (a massive signal of trust from the electorate, or massive fraud).

It is tempting to regard one of the two interpretations as the only correct one, and the other one as factually and morally wrong. The problem is that this would apply regardless of what one’s actual position is: of course, your own side is right, and the other side is deluded and malicious. However, neither side can possibly convince their opponents of the error of their ways. Either side can adduce arguments for their case, but these will be immediately rejected by the other side. So, either side sticks to its own, very different truth, based on the same facts.

The commonality of disparate interpretations

And such situations are common, perhaps much more common than we might realize. In the last few days, there has been some commotion around something former US president and current presidential candidate Donald Trump had said. The headlines left little to the imagination: NBC wrote “Trump says there will be a ‘bloodbath’ if he loses the election”, and the Guardian headlined “Trump predicts ‘bloodbath’ if he loses election”. Even given Mr Trump’s reputation for regularly making shocking statements, many people will, for a moment, have wondered, “did he really say that?”. And the accompanying video clips confirmed he really did. Only, the broader context for the statement was often omitted, namely that Trump was talking about the fate of the US car industry, which would be slaughtered unless he was elected and subsequently imposed 100% tariffs on imported Mexican-built Chinese cars. Countless people would, however, have taken the headline at face value, and concluded that the prediction referred to the violence that would erupt if Trump lost the election. Once again, nobody questioned the fact itself (what Trump said), yet the (lack of) context determined the interpretation: Trump was openly inciting violence.

The buses are as regular as clockwork (image: screenshot BBC)

As if to further support my point about the ubiquity of such situations, another example came to my attention this week. A BBC report from Pakistan featured a reporter supposedly standing on a bridge over a busy highway in Islamabad. However, closer inspection of the background shows that the image is looped: the same bus keeps on appearing every 15 seconds or so. So, was the journalist really where she pretended to be? One lone commenter remarked that it was at worst lazy editing, not deception – “standard practice in newsrooms, not meant to deceive, but to represent the location from where the reporter is reporting from”. Many more reactions were less forgiving, and accused the BBC of fraud, fakery and lying. Here too: same fact (the background was not real), different interpretations (ineptitude, or proof of ill intent).

We rarely start with an open mind. More often than not we already have a clear view of what the truth is – Putin is an evil dictator, or a strong, nationalist leader; Trump is a mad authoritarian, or the guy who stands up for American workers; the BBC is tendentious and untrustworthy, or blunders when applying common techniques.

We not only have prior experiences and memories, but we also have preconceptions and prejudices, affinities and dislikes, allegiances and aversions. All of these shape the beliefs we hold, well before any facts make their appearance. When they eventually do, our tendency is to look for signs of confirmation of those beliefs. The facts, almost always beyond dispute, easily become indistinguishable from the truth as it exists in our imagination, and hence unquestionable proof of it. It is hard to resist that tendency, but that doesn’t mean it is not worth trying, and at least considering that other interpretations are possible.

There is an ironic little twist to the wine experiment: it has often been retold without mentioning that the inept ‘experts’ were, in fact, undergraduate students. Might this be because it is more satisfying to take down experts a few notches, or because students don’t belong in a narrative that seeks to mock wine snobs? I rest my case (of Bordeaux).

Posted in Behavioural economics, Cognitive biases and fallacies, Psychology | Tagged | Leave a comment

Overwhelmed by wide horizons

Featured image: Martin Heigan/Flickr CC BY-NC-ND 2.0

In the olden days, we could see little further than the corner of our street. There were a few national or regional radio stations, and some national TV stations that filtered international news and broadcast principally locally produced material, complemented with mostly English and American series. If we wanted to buy a book or a CD, we went to the store and checked what they had on their shelves; if we wanted to meet people, we went to the pub or joined a club or society where we would find likeminded individuals.

Our horizon was limited, and to discover something novel, or to make new friends, we could only explore the small, local pond we were swimming in. Now, borders and distances no longer constrain us. We have access to countless radio stations, on-demand video, books, musical artists and people who share our interests – in numbers many orders of magnitude larger than we once imagined possible. Our horizon has become so wide that we risk being completely overwhelmed. How can we make sure we don’t end up the victim of abundance?

Overload

The challenge of choice overload is a well-known phenomenon (I also wrote about it here). Intuitively, having more options to choose from would seem to be better than having fewer. In practice, however, it depends. Twenty years ago, Barry Schwartz described a classic experiment on this subject in his book The Paradox of Choice. Sheena Iyengar and Mark Lepper offered shoppers in a supermarket a promotion on a range of either six or 24 different jams. They found that more people took up the offer and actually bought the product when only six jams were presented, and that buyers in this condition were more pleased with their choice than in the 24-jams condition. However, later meta-analytic research (joint analyses of multiple studies) provided important nuance: Benjamin Scheibehenne and colleagues found a mean effect size of close to zero, albeit with a large variation, suggesting that the number of options is not the only thing that influences our behaviour. Alexander Chernev and colleagues identified specific factors that influence the degree to which people experience overload (complexity of the choice set, difficulty of the decision task, uncertainty about preferences, and the actual goal of the decision). This explains why, for instance, we are not at all fazed by the typical menu of a Chinese or Indian takeaway, with usually well over 100 options – these are neatly organized in a way that helps us quickly eliminate what we are not interested in. And as Rory Sutherland once said, “If you’ve driven 27 miles to visit a place called World of Jam, you’re probably not going to walk in and go ‘Oh Jesus! There’s just too much jam.’”

What you hope to find when you’re looking for lots and lots of jam (photo: Joanna Poe/Flickr CC BY SA 2.0)

Still, when the number of options increases by orders of magnitude, there can be detrimental consequences that do not depend on the context. Imagine you’re getting into podcasts, and you’re looking to identify some good-to-excellent ones to listen to while you’re working out. You plan to sample some over the next twelve weeks, picking one at random every week. Let us consider two situations: one in which the total number of podcasts on the topics you’re interested in is 24, and one in which it is ten times larger. In both situations, the quality distribution of podcasts is the same: 1 in every 8 is excellent, 1 in 4 is good, 10 in 24 are mediocre and 5 in 24 are terrible. What is the chance that, in your twelve-week period, you would fail to find any of the excellent podcasts? When there are only 24 to choose from, it is about 11%. When there are ten times more, counterintuitively, failing to find a single excellent one is almost twice as likely (19%). Similarly, the chance that you will find at least one excellent and one good podcast in your sample of twelve is close to 90% if there are only 24 to consider, but only just under 73% when there are 240.
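For readers who like to check such figures, the first comparison can be reproduced with a few lines of Python. This is a sketch, assuming the twelve picks are distinct podcasts drawn at random without replacement, which makes it a hypergeometric calculation:

```python
from math import comb

def p_no_excellent(total, excellent, sample):
    """Probability that a random sample of `sample` podcasts contains
    no excellent one: all picks come from the non-excellent remainder."""
    return comb(total - excellent, sample) / comb(total, sample)

# 1 in 8 podcasts is excellent; we sample 12, one per week.
small = p_no_excellent(24, 3, 12)    # 3 excellent among 24  → ≈ 0.11
large = p_no_excellent(240, 30, 12)  # 30 excellent among 240 → ≈ 0.19
print(f"{small:.0%} vs {large:.0%}")  # prints "11% vs 19%"
```

The larger pool nearly doubles the chance of missing every excellent podcast, simply because twelve picks now cover a much smaller fraction of the options.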

Rethinking how to choose

It is easier to find the two good options in a set of ten than to find two good options in a set of 100, even if the latter contains 20 suitable ones: instead of just eight unsuitable options, there are now 80 to reject. The proportion of suitable items, whatever category you’re looking at, is almost always relatively low. So, as the number of options increases, you will need to reject ever more of them, as philosopher Stefan Schubert recently observed in a tweet. Instead of about three radio networks that, collectively, produced a handful of hours of the music I was into when I was a whippersnapper, I can now listen to dozens of stations broadcasting exactly the kind of music I want, every waking hour and more. But those few dozen stations are buried among more than 60,000 stations. Picking the very best ones thus becomes a colossal task.

Before, I didn’t listen to the radio because I couldn’t find anything to listen to, but now I don’t listen to the radio because there’s so much choice I don’t know what to listen to (image via DALL-E 3)

But is that what I really want? When the very best options are, technically at least, abundantly available and just a click away, the temptation to be continually looking for them is real. The fear of missing out on something that I am unaware of – whether it is a radio station, a book, a blog, an artist on Spotify, yet another cool person to follow on Twitter or BlueSky – is ready and waiting to pounce if I am not careful. In the past, the supply of interesting stuff was usually well below what we wanted. Now, it is our capacity to handle the supply that is the limiting factor.

This points towards a possible strategy to avoid being crushed by the innumerable possibilities of an unfeasibly wide horizon. There is no way we can possibly read all the books, listen to all the albums, visit all the blogs and so on that are available. Maybe there are domains in our life where we really do want the very best, and there we should find the most efficient strategy to achieve it – for example, studying the subject matter, or tapping into the recommendations of experts – to find our way through the abundance.

But for anything else, how about turning the framing upside down? Instead of measuring our success by the percentage of what we would, or could, do that we have actually done, and striving for that elusive 100%, what if we looked at it from the viewpoint of what we will never do? No matter what, there will be hundreds, if not thousands, of books we will never read, places we will never visit, songs we will never hear, people we will never meet, because we will never even know that they exist. One more or less is not going to make the slightest dent in that. So, why worry about it?

We could choose to treat the plenitude of an incredibly wide horizon not as an obligation to pursue as many opportunities as we humanly can, but as a store of possibilities that are available to us should we wish. This is where a concept that Herbert Simon, the renowned polymath, called satisficing can serve us much better than optimizing: stop searching when some acceptability threshold is met. If we focus less on the elusive best things we might be missing out on, and more on the good things we can and do enjoy, we are making the best of these wonderful times of plenty. Focus on good, focus on better or different, but resist the seduction of the best.
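Simon’s idea translates directly into a decision rule: take the first option that clears your own acceptability bar, rather than scoring every option in search of the single best one. A minimal sketch in Python – the podcast catalogue, ratings and the threshold of 8 are all hypothetical, invented for illustration:

```python
import random

def satisfice(options, score, threshold):
    """Return the first option whose score clears the threshold,
    examining as few options as possible; None if nothing qualifies."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None

# Hypothetical catalogue: 60,000 podcasts with ratings from 0 to 10.
random.seed(1)
catalogue = [(f"show {i}", random.uniform(0, 10)) for i in range(60_000)]

# An optimizer would scan all 60,000; a satisficer stops at "good enough".
pick = satisfice(catalogue, score=lambda show: show[1], threshold=8.0)
```

The satisficer trades a shot at the global optimum for a search that ends almost immediately – which is exactly the point when the catalogue runs to tens of thousands of entries.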

Posted in Behavioural economics, Emotions, Psychology

Right things

One of my guilty pleasures – well, one that I am happy to share with the world – is that I am a regular listener to a BBC radio soap opera. The Archers, an everyday story of country folk, is the longest-running soap in the world. I can even justify my habit, as it occasionally provides inspiration for an article: the writers are pretty good at capturing human behaviour and decision making. From highbrow literature to pulp fiction, the dramatic value of moral dilemmas is beyond doubt.

Recently, one of the plotlines has been producing rich pickings in this respect. The dramatis personae are Police Constable Harrison, his wife Fallon, their friend Alice, her new boyfriend Harry, and Harrison’s boss, Inspector Norris. Here is how things unfold.

An everyday story of country folk moral dilemmas

Harrison arrests Harry when he attempts to drive while drunk, and discovers he also has a prior driving ban for the same infraction. He is worried for Alice’s and her young daughter’s safety, as they might get into the car with someone who is unfit to drive. Harrison issues an ultimatum: Harry must confess to Alice, or he will tell her himself. Harry refuses, and threatens to report him for breach of police rules of confidentiality. Back home, burdened by the affair, Harrison reluctantly shares his concerns with his wife, and intends to go and warn Alice.

“For simple moral dilemmas, that way, ma’am. For complex ones, just stay here with me” (photo: Michael Summers/Flickr CC BY NC ND 2.0)

Fallon intervenes, instead inventing some story to make Alice avoid a trip with Harry, but Alice becomes suspicious that it has something to do with her budding relationship with Harry. She confronts her friends, demanding to know what is going on. Harrison claims confidentiality and asks Alice to just trust him, but eventually relents and reveals Harry’s drinking behaviour to her. Anticipating that Harry will retaliate, Harrison then confesses his breach to his boss, prompting a disciplinary investigation. Norris says she will support him, but cannot guarantee he will not lose his job.

Despite Fallon’s appeals, Alice – who has broken off the relationship with Harry – approaches Norris to advocate for Harrison. She accidentally reveals that Fallon had learned about Harry’s alcohol abuse first, thus compounding Harrison’s violations. Finally, Harry, angry after being dumped, ponders whether to file a complaint against Harrison.

This intricate scenario, where loyalty, professional rules, and ethical obligations are inextricably intertwined, brings to life the kind of moral complexities people face in everyday life. It also shows why it is often impossible to plot an objectively right route: in a genuine dilemma, you cannot do the right thing without also causing harm.

Five dilemmas for the price of one

For Harrison, the choice is probably the starkest, as he faces a conflict between two powerful moral imperatives: loyalty to a friend, and obedience to the sacred rules of his office. How on earth do you choose between these two? What makes things even more complicated for him is that his violation may cost him his career, so the route he eventually decides to take carries the risk of a significant personal sacrifice. We may come down to the view that he ultimately did the right thing – perhaps because we can imagine the unspeakable agony and regret he would feel if he had said nothing, and Alice and her daughter had travelled with Harry who, unfit to drive, had an accident in which they were maimed or killed. But at the same time, should we not also expect police officers not to decide autonomously whether and when they may break the fundamental rules of their office?

This consideration may have been less salient for Fallon (as it would be for anyone who is not a serving or former police officer). However, lying to a friend is something that, for many of us, will feel like crossing a line, no matter how good a justification we can come up with. Betraying a friend’s confidence is not something we can reason away so easily, and most people will have no trouble understanding why Fallon struggled to keep her secret. She may initially have revealed it only in a veiled manner, yet there is often a slippery slope: once we have broken our silence, revealing a little more of the truth, step by step, becomes much easier. But can Fallon really be sure she did the right thing, now that her husband’s career is on the line, knowing that, had she been a more convincing (and convinced!) liar, things might have turned out differently?

Alice’s well-intentioned decision to plead with Harrison’s boss is also understandable, but illustrates how, when we are certain we have an obligation to right a wrong we have some responsibility for, we might be blind to the consequences. It is a form of action bias, a tendency to do something – anything – to avoid being seen as passive and accepting of a situation, even if our rash action against our better judgment may make matters worse.

Impulsive vengeance… or considered forgiveness? Just another everyday moral dilemma (image via DALL-E 3)

Perhaps Inspector Norris is in the least unenviable position (“most enviable” is not really an appropriate description of facing a moral dilemma). Even more than a lowly PC, an inspector is expected to act in accordance with the professional rules, and it would be ethically entirely unacceptable for her to turn a blind eye to Harrison’s breach. Being able to refer to inviolable statutes may be scant comfort when you face a moral dilemma, but it is comfort nonetheless.

And maybe Harry’s position is the one where formulating an objectively right choice seems the easiest. Acting out of vengeance is something that most of us, certainly when we are not directly involved, would condemn. At the same time, though, the temptation of reacting out of spite is far from alien to us. Punishing people who, in our eyes, have wronged us, is a deeply ingrained impulse: it sends an unambiguous signal that we are not to be messed with.

Moral dilemmas, uniquely human?

Five fictitious but deeply realistic characters, five individual and tough moral dilemmas, illustrating the diverging perspectives on the same issue and the challenges of navigating ethical grey areas. There is no place for simple black-and-white thinking here. There is also no easy route out – neither by adopting a utilitarian viewpoint and weighing up the total harm and good, nor by adopting a deontological one, because the rules contradict each other. Yet the choices we ultimately make reveal to the world what kind of person we are.

Perhaps that perspective can provide us with useful guidance: when we are confronted with a dilemma, we can imagine the different options and ask ourselves what kind of person we want to be. We may think non-human animals have it easy, unburdened by the moral dilemmas that seem to be an exclusively human thing. But then, our capacity to find our way through the inevitable tensions between legitimate but incompatible moral values, trying to do the right thing, is also uniquely human. Maybe the greatest moral virtue we can hope to cultivate is the humility to recognize the complexity of each situation, and the validity of perspectives differing from our own.

(The subject of this article was inspired by a fascinating paper by Daniel Yudkin and colleagues, A Large-Scale Investigation of Everyday Moral Dilemmas.)

Posted in Behavioural economics, Cognitive biases and fallacies, Emotions, Ethics, Morality, Psychology