The challenge of making the best choices

(featured image: duncan c/Flickr CC BY)

Better decision making may mean widening, or narrowing horizons – depending on whether you are a human or a machine

Four years ago, in the small hours of 21st March 2017, the neighbourhood around the Dortmund-Scharnhorst station was shaken by a loud blast. Earlier that night, a 31-year-old man had decided to gain access to the contents of a ticket machine on the platform – money and unused travel permits. He had been drinking in a nearby bar, presumably to stiffen his resolve, before setting off for the halt in the north-eastern suburbs of the German city with a bag of gas canisters, the contents of which he was seen spraying into the machine. When he ignited the gas, the machine exploded, its metal front panel hitting his unprotected head and showering him with shrapnel from its innards. Another person who had also been at the bar recognized the stricken man and alerted the emergency services, but despite resuscitation efforts by the paramedics, he died of his injuries.

This sad story was one of the submissions to the 2017 Darwin Awards. The award is a tongue-in-cheek, albeit somewhat morbid, recognition for people who, by their own ill-considered actions, remove themselves from the human gene pool, either by perishing or by unintentionally sterilizing themselves. While the humour of the concept may be of questionable taste, there is some substance behind it, as the name implies.

Wired for survival

Evolution, the process Charles Darwin first described, is a relentless force that inevitably favours the individuals in any species that are best adapted to their environment. They will survive, prosper, and – most importantly – be more successful in ensuring their genes are passed on to the next generation. Should traits undermining the chances of successful survival and procreation spread, the species itself would eventually become extinct. There is, so far, no known gene associated with a propensity to blow up travel ticket machines (let alone to do so without adequate protection), but the principle is obvious: if maladaptive traits are eliminated, that is beneficial for a species.

Darwin could not have foreseen this

Humans may be a relatively new species to evolve on this planet (depending on where you start counting, we have been around for between about 200,000 and six million years). Of course, humans carry genetic code going back to the very first life forms that emerged billions of years ago. But even the more recent development of human cognitive ability has had at least ten thousand generations over which natural selection could fine-tune our brain circuitry.

The fact that we are here today, surviving, prospering and procreating is largely thanks to the way we think. The wiring in our brain allows us to react to the external world in a way that serves us well, so we can continue to survive, prosper and procreate, and suppresses impulses that would go against it (like blowing up ticket machines).

Our brain wiring, however, also embodies many tendencies that have been hugely beneficial over our existence, and that still are adaptive in many, but not all, circumstances. Because we do not seem to have a systematic, in-built mechanism to distinguish the situations where our biases and heuristics are helpful from those where they are not, they can lead to suboptimal decisions. Sometimes we don’t make the best choices: we might, for example, be better off being less risk averse than we are, or clinging less to the status quo than we do. Behavioural science, and in particular the study of heuristics and biases, has been focusing on this aspect of decision-making for a few decades now.

But human brains have come up with a possible solution: artificial intelligence (AI) and machine learning (ML). It has been a while since humans first built devices that can outperform us by many orders of magnitude at simple cognitive tasks like manipulating numbers (as the etymology of the word ‘computer’ indicates). Current computers can do much more, though. They can learn in ways that are akin to the way small (and large) humans learn: using the power of trial and error. But humans, because our cognitive capacities are limited, and because of the tendencies baked into our cognitive wiring, are inevitably selective in the trials we engage in. Machines are not inherently subject to these constraints, and this is what can make AI systems so powerful. That same freedom can, however, also lead to interesting, and unexpected, situations.

Too clever for their own good?

In 2013, Google officially announced Project Loon (which later became a subsidiary of Alphabet, Google’s parent company), aimed at providing stable internet access to remote areas around the world. This was to be done through high-altitude balloons, floating 18-25 km above the surface. These were not powered, but drifted with the wind, much like conventional hot-air balloons: air moves in layers in the stratosphere, and the balloons, which had access to real-time data about the speed and direction of the wind in each layer, could adjust their altitude using a solar-powered pump to select an appropriate layer.

Outsmarted by a balloon

Occasionally, some of the balloons made surprising moves and deviated from the expected course by zigzagging, or even reversing. At first, the engineers overrode the system, but then they realized that, all by themselves, the balloons had learned a technique well known to sailors, called tacking. Instead of plotting the shortest route and moving up and down between the wind layers to follow this course, they took full advantage of the wind vector information and reached their destination more quickly. Salvatore Candido, the CTO at Loon, summarized it in a blog post: “We quickly realized we’d been outsmarted when the first balloon allowed to fully execute this technique set a flight time record from Puerto Rico to Peru. I had never simultaneously felt smarter and dumber at the same time.”
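The layer-selection mechanism can be illustrated with a toy sketch (with entirely hypothetical wind data, and a deliberately naive rule – not Loon’s actual controller, which used machine learning): at each decision point, score every wind layer by how fast its wind carries the balloon towards the destination, and pick the best. The learned tacking behaviour is precisely what such a greedy rule would miss.

```python
import math

def best_layer(position, destination, layers):
    """Naive greedy altitude choice: pick the wind layer whose
    velocity vector has the largest component towards the destination.
    `layers` maps altitude (km) to an (east, north) wind vector in km/h."""
    dx, dy = destination[0] - position[0], destination[1] - position[1]
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist  # unit vector pointing at the destination
    # score each layer by the dot product of its wind with that direction
    return max(layers, key=lambda alt: layers[alt][0] * ux + layers[alt][1] * uy)

# Hypothetical winds at three altitudes; the destination lies due east
winds = {18: (30, 0), 21: (10, 40), 24: (-20, 10)}
print(best_layer((0, 0), (100, 0), winds))  # -> 18 (that layer blows straight east)
```

A greedy chooser like this always rides the wind that points most directly at the goal; the balloons discovered that temporarily choosing a “worse” layer can pay off overall – something no line of the sketch above allows for.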

The balloons from Project Loon illustrate the power of artificial intelligence and machine learning. Unrestrained by the tendencies that limit what humans would consider, or even be able to consider, AI systems can learn and continually seek to further optimize whatever they have been constructed to optimize, based on whatever reward function they have been programmed with. This can lead to surprising, novel solutions to problems large and small – solutions that are almost by definition unpredictable.

But the biases and heuristics that restrain us humans, and thus give AI the edge, also form part of our instinctive mechanisms that help us survive, prosper and procreate. Over many thousands of generations, these have shaped our own approach to trial and error, discouraging us from engaging in trials that can lead to errors detrimental to surviving, prospering and procreating. Individuals who engaged in them anyway were at a disadvantage compared to others who acted more cautiously, and were deselected by evolution. As a result, we, the long-term survivors, have an inner voice, much like the annoying Harry Enfield character from Harry Enfield and Chums, which warns us “you don’t wanna do that”.

Machines don’t. Unlike homo sapiens sapiens, the ‘species’ of artificial intelligence has not had the benefit of a long succession of generations weeding out the more hare-brained individuals and their ways of thinking. Their trials often lead to superior outcomes we would never in a million years have come up with. But by the same token, the errors they make could be the equivalent of the hapless attempt at blowing up a ticket machine – at a much larger scale.

This leaves us with an interesting symmetry in our fundamental decision-making challenges. On the one hand, we are trying to unshackle our own, human decision-making, and overcome the maladaptive use of otherwise mostly adaptive heuristics and biases. On the other hand, we need to prevent AI systems from becoming supercharged Darwin Award candidates, by restraining their raw capacity to think the unthinkable – and act accordingly, blissfully unaware of the potential unexpected and catastrophic consequences.

It is perhaps not surprising, and certainly quite apt, to find ourselves with one of the signature characteristics of economics, its inherent focus on conflicts, embodied in the “on the one hand/on the other hand” thinking that so infuriated President Harry Truman.

(Behavioural) economics sure has got its work cut out to help both people and machines make the best choices.

Posted in Behavioural economics, Cognitive biases and fallacies, Economics, Psychology

Why rules should not be rulers

(featured image via Pixabay)

When we treat rules, however well-intentioned, as unconditional imperatives, we may end up doing more harm than good

When I was younger, so much younger than today, I joined the Institute of Advanced Motorists, a British organization that aims to increase road safety by improving driving standards. Now, like most people, I was convinced that I was a better driver than average, but at the time I was spending a lot of time in the car and wanted to regain the joy of driving, rather than experience it as a boring chore. So, for several months, every other Sunday, I dutifully went to the group meeting for a check drive, with one of the volunteer observers making constructive comments about my driving, gradually improving until the day I was ready for my test with a police class 1 advanced driving instructor.

The most memorable aspect of the test was not that I passed (obviously!), but the advice my assessor gave me. As I was about to overtake a slow vehicle, he told me that it was safe to briefly exceed the prevailing speed limit (provided the circumstances allowed it, of course), and thus reduce the time driving on the wrong side of the road.

The tension between rules and consequences

This was quite the revelation at the time: to hear a police officer tell me that it was OK to break the rule, for safety’s sake. I am not sure either of us realized the significance of the trade-off – for my assessor, it was most likely the most normal thing to do, while I would only much later recognize the inherent tension between deontological and consequentialist (or utilitarian) choices.

Flooring it for safety’s sake? (image: Tama66/Pixabay)

This week, I came across two further stories that highlight this contrast between strict adherence to rules and pragmatism. One, told in a Twitter thread, is the story of two lost dogs in the Irish Wicklow mountains. A few weeks ago, a family had gone for a hike in the national park, just south of Dublin, with their two dogs. When they spotted a deer in the distance, the dogs shot off towards it, and failed to return. The owners came back to the same location the next day and discovered the younger of the two dogs in the car park. But despite using a drone, and even a basket of unwashed laundry in the hope that the older dog, a golden retriever named Naoise, would catch the scent, they were unable to find her. Two weeks later, two doctors hiking in the same area noticed, near the summit, an emaciated, cold and very weakened but otherwise unharmed dog. Not without difficulty, the couple carried her 10 km over the rugged track back to their car, and took her home, where she was watered and fed. They then contacted the local animal rescue group, and thanks to them, Naoise was soon reunited with her people.

But the story goes on. The two doctors who found the lost dog had been staying in a small hotel at the foot of the mountain, which was offering mental health retreats for health workers in need of a break. But Ireland is under strict lockdown rules, under which people must not go further than 5 km from their home. This prompted a member of the public to report the couple to the police: not only had they taken a non-essential trip further than the prescribed distance from their home, but their hike had also taken them further than that from the hotel.

Technically, the doctors who rescued the dog did breach lockdown rules. Arguably, the fact that they rescued a dog should not even be taken into account in judging their actions – they could not have known that this would be the fortunate outcome of their hike. But if rules are, as they should be, intended to ensure the world is a better place, one can wonder: is the world a better place now that the couple is under investigation, and the hotel is closed pending the conclusion of the enquiry?

The other story is less anecdotal and more illustrative of the profound questions that arise around rules and how much power they have over our choices. In a post by Tyler Cowen, I read that Steven Joffe, a medical doctor with a master’s degree in public health and an ethicist at the University of Pennsylvania, declared that he does not believe clinicians “should be lowering their standards of evidence because we’re in a pandemic.” Cowen remarks, “It is stunning to me that a top researcher at an Ivy League school literally cannot think properly about his subject area at all, and furthermore has no compunction admitting this publicly”.

Exceptions rule

Dr Joffe appears to be not remotely interested in considering the trade-offs that one might encounter in medical decision making. The standards of evidence are what they are, and they should apply, even as the US has seen the total death toll of COVID-19 exceed half a million, and still experiences well over two thousand COVID deaths every day.

“I am Hippocrates and I declare: to follow my rule, you will have to break rules.” (image: virtusincertus/CC BY)

Ironically (if I may use that term), it is a foundational principle in healthcare that can help us see why Dr Joffe’s position is problematic. Primum non nocere – “first do no harm”, linked with the Hippocratic Oath – is the ultimate litmus test by which to evaluate a medical decision. While it is sometimes impossible not to do any harm at all, the principle can reasonably be widened to include “do as little harm as possible”. Can a rule unconditionally ensure that the least possible harm is done? Can we know this without actually explicitly considering the consequences both of maintaining the prevailing standards of evidence, and deviating from them, without, for example, weighing up the statistical (how large an experiment?) and time (how long for?) parameters of the process to collect evidence? Can we know this without applying considered judgement? Can we know dogmatic adherence to a rule does the least harm?

Rules, whether they concern the maximum speed on a road, people’s behaviour during a pandemic lockdown, which standards of evidence to be followed, or whatever, did not emerge out of thin air. They are often rooted in a desire to codify, in a given – although rarely explicitly stated – set of circumstances, the right (or wrong) choices in a simple way. Speed limits, movement restrictions during a pandemic, and standards of evidence all fit that description.

But very few, if any, rules will fulfil this noble intent unconditionally, irrespective of the context and the circumstances. It is widely assumed that the expression “the exception that proves the rule” is ironic – by definition, an exception cannot provide proof that a rule is universally valid. However, it is a translation of the first part of a legal phrase that goes back to Cicero, a lawyer and scholar in ancient Rome. The full phrase is “exceptio probat regulam in casibus non exceptis”, or “the exception proves the rule, in cases not excepted”: the existence of an exception proves that there is a rule in other circumstances.

If every rule has exceptions that prove it, we should reject the idea that they apply universally, and challenge anyone who claims otherwise to argue their case that either the rule does apply without any exceptions, or that the present circumstances do not provide an exception. We should not treat rules as instructions that need to be followed blindly, but use our judgement to verify that they will meet their intended purposes in the present circumstances.

We should treat rules not as dictates but as guidance, and not allow them to be rulers of our lives.

Posted in Behavioural economics, Cognitive biases and fallacies, Ethics, Philosophy, Psychology

Everything is unfair

(featured image: EliasSch via Pixabay)

Ethical concerns are an important factor in policymaking, and fairness often figures prominently in this respect. But should we really be using it in the way we do?

In last week’s post, I referred to the challenges in setting policies for vaccinating people against COVID-19. If the aim is to protect the people who are the most at risk of the disease, then it makes sense to give priority to the elderly and those with underlying conditions, and to work your way down from there to the young and healthy, the people who are unlikely to develop serious symptoms, let alone require hospital admission.

There are clear instrumental motives for such an approach: the risk is not just to the individuals, but also to society as a whole. People who are seriously ill with COVID-19 use up scarce healthcare resources, which are then unavailable to others who might also need them for unrelated reasons. Vaccinating those at risk first takes the pressure off the health system, and is therefore an intervention that serves everyone.

Fairness rules

Nevertheless, there is also an element of fairness entwined in the strategy of distributing scarce resources (the vaccine) first to those whose need is the greatest. But how far should we go to enforce such fairness?

On 29 December 2020, Dr. Hasan Gokal, an emergency physician in Texas, was in charge of an early vaccination event, mostly for emergency workers. As the event drew to a close, one vial remained, with ten of the eleven Moderna vaccine doses it held still unused. They had to be used within six hours, or they would go to waste. Dr. Gokal asked the staff manning the event, but either they had already had a jab, or they declined the offer. He called a colleague whose parents and in-laws he knew were eligible for the vaccine, but they were not available. Meanwhile the clock was ticking. So he called the people in his phone contact list to ask them if they had older relatives or neighbours who needed to be inoculated.

When he got home, two elderly women with underlying health conditions were waiting for him, and he gave them a shot of the vaccine. He then drove to a nearby house where he knew four people, aged from nearly 70 to over 90 and all with health issues, lived. They got a jab, and so did a housebound woman in her late 70s. By now, three doses remained, and three people had agreed to meet the doctor at his home. Two were waiting as he got back from his vaccination tour – a woman in her 50s working as a receptionist in a health centre whom he knew vaguely, and a woman of around 40 he did not know at all, with a child that relies on a ventilator. As time ran out, the last person called to say he would not be coming. So with minutes to go before the vaccine would go to waste, Dr. Gokal turned to his wife who, as she suffered from a chronic lung condition, was also eligible, and gave her the last dose of the vaccine.

Only for vaccinated passengers? (image: Juno Kwon via Pixabay)

The next morning, he completed the paperwork relating to the ten vaccinations he had carried out. A few days later, he was called in by his boss and the HR director, and when he confirmed he had administered the ten doses as described, he was fired. The reason: he had abused his position to place his friends and family in line in front of people who had gone through the lawful process to be there. He had acted unfairly.

Immunization will, we all hope, lead us out of the interminable lockdowns and allow us to return to the economic and social activity that has been so profoundly impaired for nearly a year. If the people who are inoculated were given a vaccination certificate, activities ranging from visiting relatives in care homes to international travel for business or leisure could restart gently, and ramp up as more and more people get a vaccine. Such a document is not new: one is already required for conditions like yellow fever to enter countries like Australia, China and Mexico. But this is on a different scale, and there are naturally concerns about fraud and counterfeiting.

However, a widely raised objection is that it would be unfair. A Wired article invites us to “[i]magine being locked indoors while your immune neighbours galavant (sic) around the park”. It cites Robert West, a health psychologist at University College London, who warns of a loss of social cohesion as a result of sharpened ingroup/outgroup tribalism, and Adam Oliver, a behavioural economist at the London School of Economics, who fears it might undermine the message that “we are all in this together”. In a comment in the journal Nature, Natalie Kofler, a molecular biologist at Yale University, and Françoise Baylis, a bioethicist at Dalhousie University in Halifax, Canada, also argue such documents will lead to social stratification between “immunoprivileged and immunodeprived” people. Forbes quotes the CEO of the World Travel and Tourism Council, Gloria Guevara: “we should not discriminate against those who wish to travel but have not been vaccinated”. Let’s not take the faster route back to normality, because that would be unfair.

How fair is fair?

Fairness appears as a criterion in decision making and in policy setting, because it is an important element in how we experience and judge our (and others’) treatment in society. It is an instinct that develops at a young age in humans, and one we even share with other primates, Maria Konnikova writes in the New Yorker. We don’t like being disadvantaged (or even being advantaged) compared to others, and we don’t like it if others are being disadvantaged (or advantaged).

The Ultimatum Game, a classic, popular instrument in experimental economics, shows we are even willing to make material sacrifices to prevent perceived unfairness. It involves two players: a proposer, who is given a sum of money, and a receiver, with whom the proposer can share the money. The proportion offered to the receiver is entirely at the discretion of the proposer – they can keep all the money, give it all away, or share anything in between. The receiver can decide to accept, or to decline the offer. In the latter case, neither player gets anything. A meta-analysis by Jean-Christian Tisserand, an economist at the Dijon-Bourgogne Business School, of 42 research papers that used 97 instances of the game in total, found that receivers often decline offers of 20% or less of the total sum, and frequently refuse offers between 20 and 40%. This means that receivers literally forgo an amount of up to 40% of the money, just to stop the proposer from getting an unfair amount.

How much to stop unfairness? (image: CC BY)

Avoiding unfairness in decisions or policy making is a deontological choice: it is deferring to an ethical rule, rather than to reasoned consideration of costs and benefits. For ethical rules that are broadly objective and clearly defined, this may be a sensible approach inviting little or no challenge. Few people will question the judgement of, say, a dairy that adopts, for ethical reasons and without a thorough cost-benefit analysis, health and safety practices ensuring neither staff nor customers are poisoned.

But outside the formal simplicity of experimental games, fairness is neither an objective concept, nor clearly defined. It is very much in the eye of the beholder, closely linked with (perceived) entitlement, group affiliation and ideology. Its Wikipedia disambiguation page, even when you ignore the more frivolous possibilities, refers to a range of (mutually incompatible) perspectives on justice, to absence of bias and more.

It is therefore all too easy to invoke fairness in a self-serving manner: if we don’t like something, we can find a way in which it violates some fairness principle. A moment’s reflection will confirm that most non-trivial decisions are disadvantageous to someone, whether they concern a rise in Value Added Tax, the fact that supermarkets can sell clothes during lockdown while clothes shops cannot, or whatever. The same is true for leaving things as they are. For almost any ‘something’, someone can claim that doing it – or not doing it – is unfair to somebody. Everything is unfair.

This makes fairness a practically useless criterion to guide decision-making. Firing a doctor who ensured precious vaccine did not go to waste for violating a fairness rule, and holding back a return to economic and social normality because a key device to enable it would be unfair, are stark illustrations of this absurdity.

If the best argument we can make for or against a decision is that it is unfair, we don’t have an argument.

Posted in Behavioural economics, Ethics, Philosophy, Psychology, Society

Is authenticity incompatible with capitalism?

In a recent blogpost, economist Branko Milanovic riffs on a theme from the end of his book, Capitalism, Alone. Capitalism facilitates (and arguably even requires) the progressive commercialization of activities and relationships that, before, happened as part of general social interaction, e.g., within or between families, and between friends.

In this post, he focuses specifically on the arts. The advantage of capitalism, Branko says, is that you can only make a profit if you satisfy someone else’s need: this aligns, as if by an invisible hand, the profit goal of a producer with the personal needs of their customer. This is fine and dandy when it concerns reproducible goods like shoes: someone who correctly predicts the need for shoes will make money, and a lot of shoe-wearers happy with exactly the shoes they want. But it is not so fine when it concerns the production of art: an inherent, essential quality of art is its uniqueness, its individualism, and its authenticity. An artist who correctly predicts the public’s preference in literature, films or paintings will, just like the successful cobbler, become wealthy, but there will be no authenticity in her art.

He contrasts the films made by Steven Spielberg, various possible endings of which were tested with different audiences to establish the most popular ones, with Franz Kafka’s diaries and Karl Marx’s 1844 manuscripts, which were never intended for publication. The latter are, undoubtedly, authentic; the former, well, few would contest that the approach is lacking in authenticity.

In a capitalist system, artists try to maximize their income by trading their authenticity for popularity. Branko concedes that this is not entirely new, and that artists have long been making commissioned works to please the rich and powerful. But that was “artisanal” commercialization on a small scale compared to right now, when literary agents tell authors what to write, and thus instrumentally stifle their authenticity. Hence, he concludes, the mechanism whereby capitalism ensures an optimum provision of shoes fails to do the same where artistic endeavour is concerned.

There are plenty of examples of art of debatable authenticity, which can be explained by the profit motive and the processes by which profit can be maximized. However, I think Branko too easily blames this on capitalism, and he is too pessimistic regarding the fate of authenticity under capitalism.

A perfectly authentic artist would be driven solely by intrinsic motivation, undistracted by income. But such perfection does not exist, and has probably never existed. Artists, even those lucky enough not to have to worry about an income, still crave the appreciation of their audience. The duo that has, for nearly sixty years, been the artistic engine of the Rolling Stones, Mick Jagger and Keith Richards, are unlikely to be solely, or even primarily driven by a profit motive (their estimated net worth is, respectively, $360 million and $500 million). Being revered by an audience is no less an instance of extrinsic motivation than the royalties from music sales or the revenue of a concert tour.

There is, of course, an uneasy tension between intrinsic and extrinsic motivation, but that is not restricted to artists, by the way. Depending on what we do, where, for whom and so on, we are motivated by more or less of one, and correspondingly less or more of the other. If we invite people for a dinner party, the decisions we make regarding the meal we cook will be in part guided by intrinsic motivation (a proxy for authenticity) – perhaps we are cooking a recipe from our grandmother, or an authentic (!) version of a dish from our native country. But we will also have extrinsic, material motives: we will want to limit the budget for ingredients and the time we are prepared to spend in the kitchen, and we will also want to please, perhaps even impress, our guests. The dinner party is, like most of what we do, a compromise between intrinsic and extrinsic motivations, between being authentic and being practical.

Capitalism does, of course, make it easier for prospective artists lacking in intrinsic motivation to compensate for this deficit with the extrinsic motivation of income (or indeed fame): they produce content with low authenticity but high popular appeal. It may even tempt some more authentic artists to trade a little more of their authenticity for money or glory. But commercial opportunities do not have to mean the demise of authenticity to the extent Branko appears to imply in his post.

In fact, along with amplifying the lure of lucre and thus eroding authenticity, capitalism has, at the same time, given artists an opportunity to increase extrinsic compensation without needing to compromise their authenticity. Technology has made it easier and cheaper for artists both to produce high-quality content and to reach much larger audiences. Someone making authentic, but obscure, music in their bedroom or garage can now reach people across the globe. Perhaps it is no coincidence that the late David Bowie, who can hardly be accused of a lack of authenticity throughout his career, took advantage of capitalism by securitizing his back catalogue to provide him with the independence to continue to make the music he liked, rather than what managers and executives dictated.

Occasionally, Branko Milanovic takes up the defence of capitalism against anti-capitalist attacks. This is not because he is a capitalist groupie: he is well aware of its flaws, but also of its advantages in addressing economic inequality. He is a pragmatist who realizes – as he argues in his book – that capitalism is here to stay, if only because it follows from human nature. It gives me a peculiar kind of pleasure to join Branko in defending capitalism in the same way, in response to what I think is a not entirely justified criticism… even if it is by Branko himself.

Posted in Economics

Consonance is boring, dissonance is hard

(featured image: KylaBorg/Flickr CC BY)

Is there a way to manage cognitive dissonance that doesn’t involve changing what we do or believe, or fooling ourselves?

In music, two or more tones played together can sound pleasant, or unpleasant. The former is referred to as consonance, the latter as dissonance. Like other concepts from music, such as harmony, tempo, and striking a chord, this term too is being used more generally as a metaphor in human behaviour and interaction. A specific case is that of cognitive dissonance, which describes the situation of holding beliefs and values that are in conflict with each other, that are contradicted by facts, or that clash with the way we behave.

The theory around cognitive dissonance was developed by social psychologist Leon Festinger in the middle of the last century. Festinger and colleagues had joined an obscure apocalyptic cult called the Seekers. Following messages its leader claimed to have been receiving from superior beings on the planet Clarion, it had been prophesying that the earth would be destroyed by a flood early in the morning on 21 December 1954. A small band of true believers would, however, be saved by a UFO that would arrive at midnight. The researchers naturally expected (rightly, as it turned out) that no such flood would take place, and that no UFO would appear, but they wanted to observe how the cult members would react when reality and belief diverged.

“They’re coming to take us away, ha ha (or are they?)” (image: Maxime Raynal/Flickr CC BY)

Many of them had said goodbye to their old life, quitting jobs and spouses, and giving away their worldly possessions in preparation for their anticipated departure in the UFO. When, on 21 December, the spacecraft failed to materialize, some of the cult members reluctantly concluded they had been wrong. The most fervent believers, however, following another message apparently received by the leader, doubled down and claimed that it was their group’s faith that had saved the world from the imminent disaster the beings on planet Clarion had warned them about. (The whole story is told in the contemporary book When Prophecy Fails by Festinger and his colleagues.)

A new theory

The findings from this remarkable natural experiment contributed to the formulation of a theory of cognitive dissonance. It hypothesizes that the experience of inconsistency (or dissonance) between beliefs, or between a belief and facts or actions is psychologically uncomfortable. This discomfort then motivates us to try and reduce the dissonance and to avoid situations that increase it.

David McRaney, the host of the excellent You are not so smart podcast, summarizes two possible responses in a recent episode devoted to conspiracy theories. If our interpretation of new evidence conflicts with our beliefs (and with how we act in accordance with them), we experience cognitive dissonance until we either change our beliefs (and our behaviour), or change our interpretation of the facts.

Festinger illustrates this with a smoker who learns of the negative health consequences of his habit. He can either update his prior belief that smoking is not harmful and quit smoking, or engage in the mental manoeuvres necessary to reinterpret the new information in such a way that the belief holds and the behaviour can persist – for example by downplaying it (it is not as bad as is claimed) or acquiring information emphasizing the positive effects of smoking (it helps prevent weight gain).

This shows that cognitive dissonance is not just associated with conspiratorial thinking. Conflicts between beliefs, values and principles can, and do, occur much more widely. A series of recent tweets from Belgian epidemiologist Luc Bonneux provides an interesting example, relating to the priority schemes that are being used for vaccinating people against COVID-19. In most countries, older people and those who are vulnerable because of certain comorbidities (diabetes, obesity etc.) are at the front of the queue. But, as Dr Bonneux remarks, a glaring fact seems to be completely ignored in this approach: the gender difference in patients with COVID-19. For example, research by Jian-Min Jin and colleagues has found that, while the prevalence of the disease is the same for men and women, men are 2.4 times more likely to die of it than women. Dr Bonneux observes that a healthy male between 60 and 64, with no lifestyle adversities, carries a higher risk of death than a 50-54-year-old female with obesity, hypertension and elevated cholesterol. Isn’t this discrimination, he wonders – shouldn’t men be prioritized for vaccination? And shouldn’t the same apply to ultra-orthodox Jews who, for religious reasons, as opposed to Christians, must celebrate weddings in large gatherings, or indeed to smokers, who also run a higher risk of dying of COVID-19 than non-smokers?

It is easy to see how this insight could create cognitive dissonance in the decision-makers, the people who determine the vaccination approach. Consistently applying the principle that those at the highest risk come first quickly clashes with other principles like gender equity and religious freedom.

Dissonance for everyone

And lest you think cognitive dissonance is still something far removed from the lives of ordinary mortals, here’s something to ponder. Most people (and I am presuming this includes you, dear reader) believe a world in which every person has enough food to eat is preferable to one in which some people are dying of starvation. Yet do we actually behave – i.e., are we spending our resources – in accordance with that belief? Are we consistently doing everything we can to realize that ideal?

Who gets to go first? (image: Wilfried Pohnke/Pixabay)

Even the mental gymnastics we need to engage in to avoid cognitive dissonance by reconstructing our interpretation of the facts are uncomfortable: we often know, deep down, that we are only trying to fool ourselves. Ignoring the inconvenient facts allows us to maintain the pretence that we are principled individuals, walking the righteous path – whether we are a vaccination policy maker, or just an ordinary person.

But perhaps there is yet another way to manage cognitive dissonance. We feel it most when we lack nuance in our perception, and try to maintain a simplistic, absolute world view, in which beliefs and values are unconditionally 100% true or false. It is easy and appealing to classify such beliefs, and even more so the people who hold them, into just two categories: good and bad. Easy, appealing, and incompatible with the complexity of the real world.

The tension between good and bad is inherent all around us. Even the simplest transaction contains both: buying a cup of coffee is bad (we have less money) and good (we enjoy it) at the same time, yet we don’t really experience cognitive dissonance. We can choose to acknowledge such tension between good and bad in more complex situations too. An opinion piece in the LA Times by liberal columnist Virginia Heffernan offers a nice vignette: her Trump-devotee neighbours have spontaneously cleared the snow in front of her house. She cites Republican Senator Ben Sasse: “You can’t hate someone who shovels your driveway”. Even someone with ‘bad’ beliefs is capable of doing something ‘good’.

Tension, by the way, is also a term that has a musical connotation. A piece of music that has only consonance from start to finish is very boring. Adding notes that don’t quite fit with the chords, or substituting existing notes to alter harmonies builds (and releases) tension, and that is what makes music interesting.  

And so it is in life more generally.

Perhaps the most important cognitive dissonance is the one between on the one hand a blunt, unsophisticated perspective on the world in which things and people are either good or bad, and on the other hand reality in which they are neither, but a complex mix of both, full of dynamic tension. If we are willing to solve it by updating our perspective, then other instances of cognitive dissonance won’t trouble us all that much, and life can be just as interesting and rewarding as the most engrossing piece of music we know.

Posted in Behavioural economics, Cognitive biases and fallacies, Psychology | Tagged , | 1 Comment

The meaning(lessness) of a number

Numbers can misinform as well as inform – even if they are correct, because they do not (and cannot) carry context that is sometimes crucial

For many years, the product evaluations of consumer magazines like Which? in the UK, and countless equivalents in other countries, have been identifying “Best Buys”: the products that performed best in the tests. But such a binary distinction (a product either is, or is not, a Best Buy) is not necessarily very helpful.

One issue with it is that it doesn’t tell us whether a product that failed to receive the coveted badge only just failed to make the grade, or whether it was mediocre across the board. Another concern is that the boundary between Best and Not-Best is arbitrary, and a further one is that the rating inevitably has to combine many criteria. For, say, a dishwasher, perhaps the energy consumption, the duration of a cycle, the noise level, the ease of loading and unloading, and the capacity would be meaningful yardsticks along which to compare different models. But what relative weight does each of these criteria receive in the overall evaluation? And does this reflect how we personally would assess the performance of the appliance?
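To make the weighting problem concrete, here is a minimal sketch, with invented scores and weights for two hypothetical dishwasher models; it shows how the “best” product flips depending on whose priorities the weights reflect.

```python
# Hypothetical sketch: how the weighting of test criteria determines
# which product comes out on top. All scores and weights are invented
# purely for illustration.

criteria = ["energy", "cycle time", "noise", "loading", "capacity"]

# Test scores out of 100 for two imaginary dishwasher models
scores = {
    "Model A": [90, 60, 55, 80, 85],
    "Model B": [70, 75, 90, 70, 60],
}

def overall(scores, weights):
    """Weighted average score out of 100 for each model."""
    total = sum(weights)
    return {model: sum(s * w for s, w in zip(vals, weights)) / total
            for model, vals in scores.items()}

# The magazine's (arbitrary) equal weighting favours Model A...
print(overall(scores, [1, 1, 1, 1, 1]))

# ...but a buyer who cares mostly about noise, and not at all about
# capacity, would rank Model B first.
print(overall(scores, [1, 1, 5, 1, 0]))
```

The point is not the arithmetic, but that the single published score bakes in one particular set of weights, which may not be yours.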

Can we rank these sauces by a single number? (image: POINTS OF VIEW/Flickr CC BY)

The Best Buy concept is as popular as ever, but it is now complemented by tables that show the relative performance against the chosen criteria, and by an aggregate score (out of 100). This improvement helps us see whether a product is just below the (still arbitrary) Best Buy threshold or way down. But we still cannot tell whether the relative scores of two products reflect what we find important. If we must have a very quiet dishwasher because we spend a lot of time in the kitchen, but capacity is of little importance to us because we’d never fill it up, a lower-ranked appliance may well be the Best Buy for us.

A score out of 100 looks suitable as a measure, but appearances may be deceptive. Its quantitative nature suggests it is a good guide for decision-making, but that is illusory.

Index pointing the wrong way

Such idiosyncratic, untransparent numbers are not just a feature of product evaluations. In a recent blogpost, Branko Milanovic, a prominent economist, took a look at a report entitled “Global Health Security Index”, which aims to establish individual countries’ readiness and capability to handle infectious disease outbreaks. Much like a Which? report on lawnmowers or mayonnaise, it establishes categories against which a country’s preparedness is measured. There are six in total: prevention, detection and reporting, rapid response, health system, compliance with international norms, and risk environment. These in turn are constructed from 34 indicators and 85 subindicators. All of this was combined into one GHS index – a score (you guessed it) out of 100.

The report was published in October 2019, just a few months before the COVID-19 pandemic rapidly took hold of the world. Curious about the accuracy of the report, Dr Milanovic undertook to compare the index with the actual performance in handling the pandemic of selected countries. Not unreasonably, he chose to look at the number of COVID-19 deaths per million population.

His observations are quite remarkable. The GHS index appears to end up predicting the inverse of what it set out to do: the top-3 countries in the report – the US, the UK and the Netherlands with scores of respectively 83.5, 77.9 and 75.6 – are among the worst when it comes to COVID-deaths relative to population (10th, 4th and 38th from bottom, with nearly 1400, more than 1600, and more than 800 deaths per million). Conversely, countries with a low GHS index did much better in practice. Vietnam, for example, is 4th best in the list of COVID-19 deaths with 0.36 deaths per million, but ranked only 50th in the GHS index chart. Thailand and Sweden appear next to each other on 6th and 7th place in the GHS index ranking but the latter recorded more than 1000 times more deaths per million than the former. Belgium, 19th in the GHS list with a score of 61.0, is second worst (behind San Marino) in the COVID deaths ranking with more than 1800 deaths per million.
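A crude way to quantify this kind of mismatch is a rank correlation between index scores and actual outcomes. The sketch below is not a reproduction of Milanovic’s analysis: the country figures are invented (though loosely in the range of those quoted above), and the hand-rolled `spearman` helper is just the textbook formula.

```python
# Hypothetical sketch: Spearman rank correlation between a
# preparedness score and deaths per million. All figures invented
# for illustration only.

def spearman(xs, ys):
    """Spearman rank correlation (assumes no ties), using
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

index_scores   = [84, 78, 76, 61, 49]           # invented "GHS-like" scores
deaths_per_mln = [1400, 1600, 800, 1800, 0.4]   # invented outcomes

# A well-designed index should yield a clearly negative correlation
# (higher preparedness, fewer deaths); with these numbers it is positive.
print(round(spearman(index_scores, deaths_per_mln), 2))
```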

The GHS index clearly fails to live up to its stated purpose. And the discrepancy between prediction and reality not only illustrates how such a single number, aggregated from six categories, 34 indicators and 85 subindicators, can be meaningless. It also shows that, when such a number is patently wrong, we are unable to tell why this is the case.

Efficacy is not everything (and is hardly anything)

As we are talking about COVID-19, there is another example of a single number – again presented as a score out of 100 – that is worth mentioning: the efficacy of a vaccine. The media happily report and discuss this headline number, complete with decimal points (where available). Never mind that few people actually understand what efficacy means (it is the percentage reduction in disease incidence in a vaccinated group, compared to an unvaccinated group under optimal conditions, e.g., a randomized controlled trial – and not, for example, the percentage of people that did not get ill after receiving the vaccine).
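For clarity, that definition can be written out as a small calculation. The trial numbers below are invented for illustration, and the `efficacy` helper is simply the standard incidence-reduction formula.

```python
# Hypothetical sketch of what "efficacy" measures in a trial.
# All numbers are invented for illustration.

def efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Percentage reduction in disease incidence in the vaccinated
    group relative to the unvaccinated (placebo) group."""
    attack_vax = cases_vax / n_vax
    attack_placebo = cases_placebo / n_placebo
    return 100 * (1 - attack_vax / attack_placebo)

# 20,000 people per arm; 10 cases among the vaccinated,
# 100 among the placebo group:
print(round(efficacy(10, 20_000, 100, 20_000), 1))  # prints 90.0

# Note what the number does NOT say: it is not the share of
# vaccinated people who stayed healthy (that would be 99.95%),
# and it says nothing about how severe the 10 breakthrough
# cases were.
```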

Yes, but how efficacious is it? (image: Province of British Columbia/Flickr CC BY)

The issue is that this number says nothing about the effectiveness of a vaccine – its ability to influence outcomes in the real world. Say vaccine A has an efficacy of 70%, while vaccine B has an efficacy of 90%. Uninformed intuition would suggest that, given a choice, we should opt for vaccine B, even to the point of rejecting vaccine A. But we don’t know what happened to the people who, despite being inoculated with either vaccine, still develop the disease. How ill did they get? Did they need to be taken to hospital? Did they need intensive care? And most importantly, did they live or die? Everyone who got ill after receiving vaccine A may well have had very mild symptoms, while some of those who got the disease after receiving vaccine B may have required hospitalization or have died.

The efficacy figure says exactly nothing about this. The good news is that the vaccines that have been widely approved so far all perform excellently with very few or no severe COVID-19 cases in the treatment group, and no deaths from either the virus or the vaccine. This means that, no matter which vaccine you receive, it is highly unlikely you will get seriously ill and will have to go to hospital, and even less likely that you will die.

The bad news is that the inappropriate focus on this one headline figure – the efficacy – by the media, by politicians and their advisers, and by the general public, is influencing key decisions, and not for the better. The suggestion that there is a big difference in what vaccines really mean in practice – that a vaccine with a lower efficacy is somehow a lot worse than one with a higher efficacy – is feeding vaccine hesitancy: who wants to receive an ‘inferior’ vaccine? And that is influencing political leaders and policy makers who don’t want to be seen to be pushing ‘inferior’ vaccines. The inevitable result is that it will take longer to get the COVID-19 pandemic under control, and that more people will die.

Sometimes, numbers are not just meaningless. They can misdirect us and guide us towards poor decisions. It is up to us to question how significant a number truly is, especially if it is bandied around as an authoritative indication of something important.

Posted in Behavioural economics, Cognitive biases and fallacies, Emotions, Psychology | Tagged | Leave a comment

Why does favouritism persist?

(featured image: Paul Sableman/Flickr CC BY)

We tend to be ambivalent towards preferential treatment – kind of OK when we benefit (or we grant it to others), but dislike it when it’s others who gain. Or is it not that simple?

We care more for ourselves than for others. This may sound a tad controversial, but it is in fact not surprising: perhaps our oldest and most profound imperative is to pass on our genes – and not those of strangers. So, when it comes to the crunch, we come first.

Of course, in our sophisticated societies, we don’t act this out blindly and in an extreme fashion. Paying a round in the pub (remember those?) to our mates, doing a colleague a favour, or making a donation to a good cause are not likely to significantly endanger our ability to successfully pass on our genes. Also, while collaboration and prosocial behaviour may be puzzling from a narrow, individual perspective, it makes much more sense from a collective viewpoint: societies in which people collaborate will thrive, and so indirectly enable individual success too.

But even in collaboration, we tend to exhibit favouritism: we are more likely to help out close relatives and friends, colleagues, and members of – political or other – groupings we belong to. Some of this is undoubtedly consciously instrumental and motivated by reciprocity (we are returning a favour, or anticipating someone else will do so in the future). It may also be a manifestation of tribalism, in which we obey moral obligations towards our fellow tribespeople, regardless of reciprocity. (It can be argued that these are two sides of the same coin, as they both serve social cohesion.)

Some cases of favouritism stand out. Last week, on his last day in office, the 45th president of the United States of America completed a series of presidential pardons totalling 143 individuals, many of whom were close friends and allies (including his former campaign manager Paul Manafort, his adviser Roger Stone, and Charles Kushner, the father of his son-in-law). Donald Trump was no exception in this respect, though – Bill Clinton pardoned no fewer than 140 people on his last day in office, including his brother Roger, and Gerald Ford famously pardoned former president Richard Nixon after the Watergate scandal.

Questionable favouritism, but legal favouritism (image: Mike Licht/Flickr CC BY)

This behaviour by American presidents is, while morally perhaps questionable, sanctioned by the constitution. Similar actions to save allies from prosecution, carried out by lesser individuals, are not. I remember hearing, while growing up, plenty of stories about people whose traffic fines somehow disappeared in the system, thanks to a friend or a relative working in the relevant administration who could “take care of it”. Naturally, I never saw actual proof of this, and I suspect things may have been tightened up somewhat since then. But the fact that it has its own particular idiom in Flemish (“verticaal klasseren”, or to file vertically, alluding to the way papers end up in the bin) strongly suggests that it really was a thing at the time.

Malleable judgement

More generally, our judgement – of ourselves, and of others – is malleable. We tend to be more lenient, and come up with potential mitigating elements, justifications and plain excuses for dodgy deeds when it concerns ourselves or people with whom we associate closely.

A recent paper by Corey Cusimano and Tania Lombrozo, two psychologists at Princeton University, provides an interesting angle on this intriguing tendency. The paper describes a set of studies that investigate the dilemma we sometimes face between believing what is supported by an impartial assessment of the evidence, and believing what is morally desirable or expedient – and in particular why it is not uncommon to veer towards the latter. At first sight, this research seems to have little to do with favouritism, but it is in fact just a different framing. When the evidence suggests one thing, but we believe something else and deliberately act accordingly, we are de facto taking a moral stance. If we do believe we are moral beings (something most of us do), then we must believe it is morally preferable to downplay or dismiss the evidence.

A substantial body of research suggests that most people treat impartial, evidence-based reasoning as the only legitimate way to form beliefs. So how do we square that with the relative ease with which we override this, and for example give people the benefit of the doubt, or find justifications for their transgressions?

A central concept in the paper is the discrepancy between what someone ought to believe who was guided solely by the objective assessment of the evidence, and what someone ought to believe who took into account non-evidential considerations (like moral obligations). The authors call this “prescribed motivated reasoning”, as it reflects a prescriptive norm – a permission, if not an obligation, to engage in motivated reasoning.

They investigate this through a series of vignettes, in which the objective evidence favours one belief but some salient moral consideration favours the opposing belief. One of them describes two students, Adam and John. They are old friends, and John is accused of possession of a controlled substance, based on evidence including the fact that cocaine was found in his dorm room (which he does not share with anyone else), and that he has been seen to hang out with known drug dealers. At the same time, John requests that Adam grant him the benefit of the doubt and trust him out of loyalty and friendship.

How much loyalty to outweigh this evidence? (image: NC Dept of Public Safety/Flickr CC BY)

The researchers find considerable support for the notion of prescribed motivated reasoning. Provided there is a salient and strong enough moral norm – for instance, loyalty between friends demands trust – people indicate that another person (e.g., Adam in the vignette) ought to hold a belief that is inaccurate: “the moral benefit of trusting a friend’s testimony over the accumulated evidence engenders an obligation to give the friend the benefit of the doubt.”

They also identify two mechanisms by which this happens. One they call “evidence criterion shifting”, whereby one has a social obligation to discount, or more heavily weight, certain types of evidence, provided it licenses the morally desirable belief. The other is termed “alternative justification”, in which the moral considerations are weighed directly against the evidence (and may outweigh them).

A moral imperative for favouritism?

These findings help explain why favouritism is so persistent. We use the moral rules we have adopted to license us to interpret available evidence in a motivated manner in line with them: we owe our associates the benefit of the doubt, and ought to evaluate evidence in the most positive way. We are advocate and judge at the same time. Furthermore, we even use these rules to form beliefs that are not at all supported by evidence. If someone is a relative, a friend, or shares membership of a morally significant group, they must be a good person (since we would not associate with bad people), and that may outweigh any evidence to the contrary. Perhaps the most striking finding is that this is, by and large, a prescriptive norm we hold: we don’t just adopt it ourselves, but we think others too should discount evidence in favour of moral rules.

When we hold a belief – even if it is morally motivated – acting accordingly is but a small step. This feeds our inclination to protect people to whom we feel a moral obligation of loyalty and trust, sometimes even in the face of overwhelming evidence. And if we have the opportunity to physically intervene – legitimately, or not so legitimately – only an even higher moral power will prevent us from using it.

We may decry the favouritism and clientelism of politicians. But unless we treat politicians from all parties equally, and in the same way we would treat our best friend or closest relative, we are exhibiting the very flavour of morally motivated reasoning that we are criticizing. Human nature is hard to fight…

Posted in Behavioural economics, Cognitive biases and fallacies, Ethics, Morality, Psychology, Society | Tagged , | Leave a comment

The ethics of making it real

(featured image: raj/Flickr CC BY)

Drills and exercises can help prepare people and organizations for the unexpected, but the choices of how to do so may involve tough trade-offs with ethical concerns

I once had the misfortune of experiencing two fire alarms in the same day. The first one was in the middle of the afternoon, at the office of my then employer in West London. As so often when this happens, the rain was pouring down, and many of us got soaked hurrying from the emergency exit to the assembly point in the car park across the road. Later, I made my way to the Isle of Wight for a client meeting the next day. I was about to get ready for bed when the fire alarm in the small Cowes hotel resounded, and the guests, in various states of dress and undress, proceeded to a corner of the car park – not the most pleasant of places on a cold December night.

In both cases it was a genuine, but false, alarm – at least that is what we were told. I always wondered, though. We have probably all had our share of planned fire drills, in which we get told upfront of the time it will take place – invariably noon, 2pm, or some other conveniently precise whole hour, and never at, say, 3:42pm.

To this day, whenever I hear a fire alarm, I automatically check the time. If it ends on “:00”, I worry a lot less that it might be the real thing – not at all, to be honest. And this relaxed attitude seems to be widely shared during such exercises: nobody appears remotely in a hurry to leave the building. True, the instructions advise us to proceed calmly in case of a fire, but I am not sure they mean the kind of leisurely stroll down the emergency stairs that is typical of fire drills.

So, might devious facilities managers perhaps deliberately engineer ‘malfunctions’ at unexpected times, so they can get a more accurate view of how the building’s occupants would behave in more realistic circumstances? In fact, it is doubtful whether even that would make much difference – most such false alarms tend to be little different from the pre-announced fire drills: the assumption is that it is not a real emergency. No, a facilities manager who genuinely wants to find out how evacuation happens when there is a real fire, should come up with a much more realistic scenario. It should not just take place at an awkward time, but there should also be no doubt that it was real – perhaps helped by a small quantity of pyrotechnics.

Little did we know the deception was about to begin… (photo: Fr. Lawrence Lew O.P./Flickr CC BY)

One summer, long ago, I was out camping with a bunch of other 14-year-olds in the wilderness of Belgium’s Far East. We’d had our dinner and, as dusk was slipping into the darkness of night, we were enjoying the sketches each group had prepared to perform around the campfire. Suddenly, one of the leaders emerged all flustered: one of us, while carrying two full jerrycans of drinking water, had slipped on the narrow path on the way back from the spring that supplied our camp. Apparently, he had suffered a serious leg fracture.

You probably guessed that it was all faked, but to us there was no doubt at the time that it was all too real. We hastily organized ourselves, improvised a stretcher, and set out to search for our unfortunate friend. In the darkness, all that was needed to give it the necessary tinge of realism was a torn pair of old jeans and the contents of a bottle of ketchup (it was only afterwards that we realized there had been a strange scent near the ‘accident’ site).

But was deceiving us in that way an ethical thing for the camp leaders to do – and wouldn’t the same question arise if a daring facilities manager staged a fake but realistically scary fire? It is not an easy question: the ethics of a choice, its purpose and the circumstances all play a role. It is a genuinely tough trade-off.

Fighting deception with deception

Perhaps the dilemma is less stark if the purpose of an intervention involving misleading people, and hence debatable ethics, is precisely aimed at combating the criminal use of fakery? GoDaddy is one of the world’s largest web hosting companies, managing well over 70 million domains for more than 20 million customers. In 2019, it suffered an embarrassing security breach, in which the accounts of around 28,000 service users were compromised. Understandably, such a company wants to protect itself against such attacks.

Phishing, a form of computer crime in which the perpetrators approach individuals posing as someone legitimate to try and obtain sensitive data, is not just used to steal money or people’s identities, but also to break into corporate systems. For companies like GoDaddy, it is important to ensure that employees are capable of spotting such attempts, and don’t get taken in.

In December of last year, several hundred of GoDaddy’s staff received an email promising them a one-time $650 holiday bonus. All they had to do was click a link to select their location and provide some other details. About 500 of them received another email a few days later, from GoDaddy’s chief security officer, informing them that they had failed a phishing test.

Good attempt, but unlikely to be totally effective on its own
(photo: Widjaya Ivan/Flickr CC BY)

It is not uncommon for companies to conduct such exercises to test their staff’s susceptibility to phishing. This one plainly sought to activate people’s emotions: in a period where many are experiencing financial hardship, the prospect of a bonus is very appealing. Was it, as some say, cruel and indeed unethical to do so? Or should such an exercise, to be effective, be as realistic as possible and use the same kind of ingenious, sophisticated social engineering techniques criminals use to weaken their targets’ sceptical tendencies?

If the benevolent producers of the test email were able to come up with this kind of cynical deceit, actual scammers can surely do the same thing. But does that justify the undoubted hurt it caused the employees who fell for it? Even if the company does not blame (let alone punish) the employees who were caught out, there may be instrumental considerations to take into account as well. How might this exercise affect employees’ attitude towards their employer? Might they become more suspicious and less trusting about all communications from the company?

Formulating a strategy to counteract a real threat of deception does not seem to make handling the ethical concerns involved any easier. Simplistic thinking, especially when decisions have an ethical dimension, is appealing. The allure of both a pure consequentialist perspective, in which “the end justifies the means”, and a pure deontological one, in which moral rules are not negotiable, is unmistakable. But the former would effectively dismiss any ethical concerns, and the latter would do likewise with any beneficial outcomes.

In the real world of greyscales, both matter. As a decision-maker, we cannot escape having to make a judgement call by hiding behind oversimplified principles. But also as an observer and a potential critic, we should be conscious of the difficulty that inheres in making such trade-offs, before we feel justified outrage.

Posted in Behavioural economics, Ethics, Psychology, Society | Tagged | Leave a comment

The dark side of motivation

(featured image: Jesper Sehested – PlusLexia/Flickr CC BY)

Motivation is what allows us to survive, prosper and reproduce – but it is also behind the worst of polarization and tribalism. We should use it with care, and engage critical thinking

How come we are here? A good few billion years ago, a bunch of chemicals in the primordial soup that sloshed around a young Earth combined to form what we would, much later, call ‘life’ – organisms that somehow possessed two key capacities. They were able to reproduce, and they could distinguish what was beneficial to them from what was detrimental. The motivation of these organisms to reproduce and, in order to do so successfully, to survive long enough by pursuing the beneficial and avoiding the detrimental kicked off an unending evolutionary chain, and the rest, as they say, is history.

That motivation has, since then, become a bit more sophisticated, but in essence, we – and our fellow living organisms – are still driven by strong motives to do what is (or rather, often, what feels) good for us, and avoid what is or feels bad. Our human bodies may be incredibly complex, certainly compared to the simplicity of our oldest ancestors, and equipped with vast cognitive powers, we may be living in incredibly complex societal arrangements, yet our judgement of what (not) to do still reduces to that very simplest of emotions – good, or bad? (It is no coincidence that ‘emotion’ and ‘motive’ share the same root: the Latin verb movere, to move.)

Complex motivations, likes and dislikes

Being as complex as we are, not everything that feels good is actually essential for our survival or for our reproduction. For example, if like your correspondent you are very fond of tomatoes, but have a strong dislike of cucumbers, eating a lot of tomatoes and going out of your way to avoid cucumber is unlikely to make you live longer or be more successful at producing offspring. Not eating tomatoes and occasionally having some cucumber is likewise unlikely to kill you prematurely or make you sterile.

Got motivation, but got no motivated reasoning (image: Los Alamos National Laboratory/Flickr CC BY)

Over time we have developed a vast array of possible likes and dislikes, way beyond a taste or distaste for certain foods. In pretty much any situation in which there are multiple options, there are some we like (or dislike) more than others – where to live, what to do for a living, how to spend the time we are not working, whether or not to have a family, which groups to belong to, how much tax we (and others!) should pay, whether we are for or against the death penalty, the legalization of same-sex marriage, or being a member of the EU. In any of these choices, as long as pursuing what we like and avoiding what we dislike is not materially in conflict with our survival or our ability to reproduce, we should be fine from an evolutionary perspective.

But when likes and dislikes influence how we think, there may be trouble ahead. Hoping that the future will play out as we desire is not problematic in itself, and even a bit of wishful thinking is mostly harmless, provided we don’t take our wishes for reality. However, we move onto shaky ground when we allow our likes and dislikes to influence our reasoning. If we are selective in collecting and evaluating the evidence for an argument or a decision we need to make, guided by what we would like the outcome to be, or based on the fact that we’d rather be right than be proved wrong, we are exhibiting confirmation bias. That is likely to lead us to incorrect conclusions, or weak or false arguments.

More worrying still is the phenomenon of motivated belief. Where confirmation bias is about seeking to support a prior belief, here we actively adopt a belief based on what we would like to be true, rather than what is more likely to be true. We may believe we are clever or handsome, that our spouse loves us and is not cheating on us, or that what we have achieved in life is owed to our hard work and merit, and not to luck. We may believe that property prices will rise (and make us rich), or that vaccines are harmful because all our friends believe so too (and we like to remain friends). Such beliefs feel good, but if they are not accurate, we are deluding ourselves and we may make poor choices as a result.

Closely linked with this, and arguably worst of all, is motivated reasoning. We go through what is, ostensibly, a deliberate process of rational thought, but both in selecting the evidence we use and in applying the rules of effective reasoning, we exhibit a strong bias towards the desired outcome, rather than keeping an open mind. Often this means working backward from the conclusion to the premises, and thus constructing a plausible-enough looking argument. It is the pretence of proper reasoning that makes motivated reasoning such a pernicious phenomenon.

The recent US presidential elections form an interesting illustration. Donald Trump’s belief that he would win by a landslide conflicted with the result of the election. This cognitive dissonance between belief and reported facts was reduced by denying the official results, and reasoning that the only explanation for the reported outcome was premeditated fraud on an epic scale. Trump supporters reported spotting what they saw as anomalies proving fraud, but which were nothing of the kind – there we have confirmation bias. Trump himself was motivated in his belief that he had won the election because that is what he deeply desired and was entitled to; many of his followers shared that belief because they too desperately wanted this to be true.

When what we like feels right

But being motivated by what we like and dislike can take a different shape. One of the things about our individual likes and dislikes is that acting them out may bring us into conflict with people who have different likes and dislikes. To handle these conflicts, any social context, from households to states, is characterized by agreements – principles that set out conditions, obligations, permissions and prohibitions that apply to all, and are aimed at keeping the peace. They can be embodied explicitly, for example in laws and terms and conditions, or implicitly, in social norms.

Terms and conditions ostensibly were at the centre when, following the insurrection at the Capitol on 6 January, Donald Trump was banned from Twitter and other popular social media platforms, and Parler, a social media platform popular among Trump supporters, was removed from the Google and Apple app stores and from Amazon’s cloud service. But these service providers were accused of bias, of being motivated by a dislike of Trump and his supporters, for instance because they had not acted in the same way in the context of the violent Black Lives Matter demonstrations last year.

Legitimate action or censorship? (image: Mike Licht/Flickr CC BY)

Opinions are divided on these actions. While Angela Merkel, who “considers it problematic that the president’s accounts have been permanently suspended”, can hardly be described as a Trump fan, it is striking that opponents of Trump largely approve of the actions, while the other side is largely critical. How come? If we start to regard our own likes and dislikes not as preferences, but as moral imperatives – like = right, dislike = wrong – our judgement and our actions are more likely to be biased. We may be tempted into selectively interpreting the societal principles accordingly, or indeed to reject them outright. That is unlikely with tomatoes and cucumbers: few people (and certainly not your correspondent) have any desire that everyone regard tomatoes as the best food ever and cucumbers as the worst. But when it concerns politics, the step is easily made.

A Twitter thread by the journalist and author Cory Doctorow, commenting on the banning of Trump and Parler contains a hint of how that might happen:

To be clear, he is not in favour of the banning of Parler. But his tweets show how easy it is to stumble into it. There are undoubtedly good moral arguments against making Holocaust jokes (as there are against making jokes about insurrectionists, like the one that one of them got killed by tasering himself in the testicles). But when a personal condemnation of an action on moral grounds is pitted against resistance to outlawing it, the former may well end up winning the tug-of-war.

Let us be honest: if we dislike something or someone, we inherently like the prospect of their being banned. The principle embodied in the saying “I disapprove of what you say, but I will defend to the death your right to say it”, often ascribed to Voltaire, is a fine ideal, but it is really hard to put into practice. If our “like” manifests itself as a moral imperative, it is really, really hard. The sincerity of our beliefs does not preclude them from being motivated.

The power of ‘irrespective’

Our biases are strongest when we are motivated by what we see as absolutely right and wrong. And so is our denial that we are biased – the belief that we (and our confederates) are not biased, because we are right, and they are wrong. It is the denial of our biases, and the kind of motivated reasoning that follows from it that is, I think, a major contributor to polarization and tribalism.

And while the biases are inevitable, the denial is not. We can counteract it by practising critical thinking, keeping an open mind and impartially considering the evidence when analysing a situation. When an issue involves moral preferences, we can remind ourselves of where our sympathies lie, and ensure we are aware of our biases, rather than deny them.

In particular, we can apply what can be called moral algebra. Algebra uses symbols to express general truths. For example, the equation for a straight line expressing the relationship between two variables x and y, y = ax + b, embodies the fact that if y equals 0, x equals -b/a (provided a is not zero), and if x equals 0, y equals b – irrespective of the particular values of a and b.
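As a toy illustration of that ‘irrespective’ property (a sketch in Python, not from the article), we can check numerically that the intercepts hold for arbitrary values of a and b:

```python
# The 'irrespective' property of y = a*x + b:
# whatever (nonzero) slope a and intercept b we pick,
# the line crosses the axes at x = -b/a and y = b.
import random

def intercepts(a, b):
    """Return (x_intercept, y_intercept) of y = a*x + b; a must be nonzero."""
    return -b / a, b

for _ in range(1000):
    a = random.uniform(-10, 10) or 1.0   # guard against the (unlikely) a == 0
    b = random.uniform(-10, 10)
    x0, y0 = intercepts(a, b)
    assert abs(a * x0 + b) < 1e-9        # y really is 0 at x = -b/a
    assert y0 == b                       # y really is b at x = 0
```

The point of the analogy is exactly this: the conclusion holds regardless of which particular values – or, in the moral case, which particular people or causes – are plugged in.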

If we make an argument, come to a conclusion, advocate an action, or pursue a decision – are we doing so irrespective of who or what is involved, irrespective of our likes or dislikes, or are we doing so precisely because of our likes and dislikes? Holding up the mirror to our reasoning can help us detect the degree to which it is motivated, and think again.

Let us keep that mirror well-polished, and within easy reach.

Posted in Behavioural economics, Cognitive biases and fallacies, Emotions, Morality, Psychology, Society

Rules and responsibility

Rules of all kinds help us make good decisions all day long, but how does that affect our responsibility for these decisions?

Decision-making is effortful. Even if we have only two options to choose from, they often both have numerous pluses and minuses that need to be weighed up. Thankfully, we can often rely on rules that act as shortcuts and take much of that hard work away.

Many such rules we develop and adopt ourselves. After using the toilet, we don’t consider the upsides and downsides of washing our hands every time – it is a habit we mindlessly carry out. Neither do we spend a lot of time, every week, working out whether we will do the shopping on Friday evening, Saturday afternoon, or – why not indeed? – on Wednesday after work. Most of us have a routine, same day, same time, that we follow pretty well. And we all make use of heuristics: we associate certain brands of products with the features we value and choose on that basis, rather than evaluating all the alternatives time and time again.

Other rules are imposed on us by others, but we still mostly happily embrace them. If, as is the case for most workers, our employer tells us which days and hours to work, we conform and fit our lives around these rules. Picture the counterfactual, in which we’d have to work out every day first whether to work, and if so, at what time and until when. We’d rather not. Although we could technically wear any clothes at all when we leave the house, rules of decency and appropriateness limit what we can actually choose (and just as well, or we might starve to death in front of the wardrobe). In traffic, we tend to follow the rules of the road, rather than make conscious decisions about our speed, positioning, use of indicators and so on. That is generally much safer (not least because it means less cognitive load and gives us more capacity to deal with the unexpected). And at work too, we are subject to plenty of rules: specific job instructions, procurement guidelines, approval procedures, compliance requirements and many more. All these rules restrict our freedom to act, but in return they reduce the burden, and make our life more efficient.

The perfect purchase to cover your backside (photo: Kevin Walter/Flickr CC BY)

Even moral rules, the foundation of deontological ethics, from the 10 biblical commandments to Kantian philosophy and the variants it spawned, operate in a similar way. They relieve us from the need to evaluate the costs and the benefits of our actions, and in return provide us with simple obligations, permissions and prohibitions.

Replacing our judgement

In essence, rules replace our judgement. They are constructed on the basis of a set of assumptions which, if we just follow them, are not verified. And as long as these assumptions hold sufficiently true, following the rules will produce a satisfactory outcome.

But there is another angle to following rules that replace our judgement, related to our responsibility. In the days when a particular American computer manufacturer dominated the information technology market, there was a popular adage claiming “Nobody ever got fired for buying IBM”. This was the perfect CYA (“cover your backside”) method for IT procurement managers dreading having to spend a lot of time and effort evaluating alternatives, weighing up purchase cost, performance, maintenance, interconnectivity, training and whatnot, and arguing the case for whatever they judged to be the best option. Instead they could save themselves the trouble, buy the kit from IBM, and be assured they would not be responsible for any problems.

Yet we often do have the choice whether or not to follow a rule. We can choose to verify whether the assumptions the rule was based on are valid, or to consider the full consequences of following the rule, and on that basis decide whether or not to overrule the rule and engage our judgement.

This occurred to me as we received the good news in November of last year that several COVID-19 vaccine candidates had successfully completed their final trials and were being submitted for approval. Rules do indeed play a prominent role in the process of authorizing and administering vaccines – both existing and new rules.

Understandably, approvals bodies apply strict rules for evaluating candidate vaccines, governments have rules for authorizing their use, and there are rules for who is qualified to deliver the shots. Furthermore, specific rules would be needed to determine when and where the vaccination will take place, the sequence in which citizens would be inoculated, how to ensure efficient use and avoid waste of vaccine etc. Together, these rules have a significant influence on how quickly herd immunity would be reached, and hence how the number of deaths, and the social, educational and economic damage could be kept as low as possible.

Opportunities to break the rules

And we saw some intriguing differences appear.

Some countries were quick to authorize vaccines, and had embarked on intensive inoculation programmes with many tens of thousands of people receiving the jab each day, while other countries were still in the process of evaluating them – following their prevailing rules. In some countries the vaccination is taking place 24/7, while others seem to be planning to adhere to a more conventional Monday-Friday, 9-5 schedule. Overall vaccination capacity can be increased by engaging volunteers, but some countries are confronting candidates with a heap of bureaucratic requirements for documents and training. Rules, rules, rules.

The UK approved the Pfizer-BioNTech vaccine on 2 December and started vaccination less than a week later. The EU approval happened 19 days after the UK’s, and – in some EU countries at least – the administration also started about a week later. At the time the EU started vaccinating, the UK had already administered the first dose to nearly 800,000 high-risk individuals, or more than 1% of its population. (Meanwhile the UK has also approved the Astrazeneca vaccine and began administering it on 4 January. It has not yet gained approval in the EU.) Imagine the EU had decided to accept the UK medicines agency’s approval of the vaccine, and proceeded in parallel. Nearly six million EU citizens might have had their first shot by the time the EU campaign kicked off.
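The projection above can be checked with back-of-the-envelope arithmetic. This is a sketch; the population figures of roughly 67 million for the UK and 447 million for the EU are my own approximate assumptions, not taken from the article:

```python
# Rough check of the vaccination arithmetic above.
# Population figures are approximate assumptions, not from the article.
uk_population = 67_000_000
eu_population = 447_000_000
uk_first_doses = 800_000   # "nearly 800,000 high-risk individuals"

uk_share = uk_first_doses / uk_population   # just over 1% of the UK population
eu_equivalent = uk_share * eu_population    # the same pace, scaled to the EU

print(f"UK share given a first dose: {uk_share:.1%}")
print(f"EU equivalent at the same pace: {eu_equivalent / 1e6:.1f} million")
```

This lands at a little over five million, in the same ballpark as the article’s “nearly six million”.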

Over there it has been approved, but here you’ll have to wait for another three weeks (photo: Marco Verch/Flickr CC BY)

Israel is vaccinating around the clock, including on the Shabbat, and despite starting its programme later than the UK (on 20 December), more than 18% of Israelis had received their first dose by 6 January, while in the UK the number was about one tenth of that (and most other countries are lagging much further behind). Imagine every country had adopted the intensive approach Israel pursues (in which even religious rules were set aside), and – subject to supply constraints – 10% or more of their population would have been vaccinated in less than a month’s time. The light at the end of the tunnel might be a good deal nearer.

In the UK, the tweet of a retired anaesthetist attempting to volunteer (and another from a general practitioner who has given up) starkly illustrates how rules can hinder, rather than facilitate, efficiency. As Dr Jones says, “I can remember how to do an intramuscular injection”. Yet she has to upload 17 different documents and undergo training on subjects like Fire Safety and Preventing Radicalization before she can help immunize the population. Imagine what capacity level could be reached if, without this burden, plainly qualified people could be brought in to staff up the vaccination programme in a matter of days.

Whether or not we adhere to rules when we make decisions is ultimately our choice and our responsibility. Sometimes following rules is the responsible thing to do, on other occasions acting responsibly is precisely going against the rules.

When COVID-19 is finally under control, and the differences in outcomes – like the number of dead and the damage done to society – are plain for all to see, we will be able, together with the decision makers, to judge whether adhering to the rules will have been worth it.

Posted in Behavioural economics, politics, Psychology, Society