In whose interest?

(featured image credit: gaelx CC BY)

The relationship between charitable giving and nudging is an uneasy one

Charitable giving is a challenging phenomenon for neoclassical economics. What homo economicus – what rational, self-interested, utility-maximizing agent – would willingly give money away? It could even be argued that the very fact that so many people make charitable donations is precisely why homo economicus is illusory.

But perhaps things are not so simple. Sure, giving money away appears to violate the principle of self-interest. Yet would someone who has just put a fiver into a collection box have been just as likely to drop it on the floor, or to set it on fire? Hardly. Even though the material effect of the loss is identical, the different ways of disposing of a 5-pound note are clearly not experienced in the same way.

Reasons for giving

There are underlying reasons to give money away to charity. Donating visibly, whether in plain sight, by supplying your name when donating online, or in some other way, could be a form of virtue signalling – letting others know that you are a good person. Maybe you do so in order to build or maintain a reputation, or to signal your wealth.

Recent research by two economists, Felipe Montaño-Campos at the Universidad San Andres in Argentina and Ricardo Perez-Truglia at UCLA, explored another mode of signalling. They asked participants to complete a cognitive test, much like those used for admission to a graduate school. Half of them (the meritocratic condition) then got awarded a sum of money based on their performance ($40 for the top-25%, $30 for those whose score was in the 50-75% range and so on), while the other half (the random condition) received similar sums, assigned at random.

They were then told they would be able to donate part (or all) of this money to a local charity, after both groups were further split in two: a public condition (in which the participants would receive a list showing each participant’s name and the amount donated), and a private condition (in which they would receive the list of donations in an anonymized form).

In the meritocratic condition, the public subgroup donated 57.14% of their gains, much more than the private subgroup (47.62%), and also a little more than the public subgroup in the random condition (56.85%). The authors conclude there was a tendency to signal intelligence through the magnitude of the donations.


Signal boxes (image: Howard Lake CC BY)

Of course we may also be generous because we feel that is the right thing to do (and in the process signal to ourselves that we are a righteous person). In 1990, economist James Andreoni coined the phrase warm glow for this.

So far so good: when we donate money, we actually do get something we value in return, so we’re not violating any economic principles. But then a new question arises: why do we give the specific amount we give? Do we wish to buy a specific amount of reputation or warm glow? Clearly, the total amount is inevitably limited by our overall discretionary budget (which, in itself, is an elastic concept): few people would forgo essentials or even non-essentials in order to buy some warm glow or do some heavy signalling. We may well operate a mental account for charity, with a budget that limits how much we donate. But within those boundaries we make some kind of trade-off that satisfies our desire for signalling, warm glow, and doing good.

But could we be nudged to donate more than we do?

Donating more, and more, and more

For sure. The identifiable victim effect is an example. But behavioural design firm Ideas42 believes it can be done in a systematic way. In a report entitled Best of Intentions – Using Behavioral Design to Unlock Charitable Giving, they identify three dimensions to achieve this. “Tapping into the generosity” of citizens helps them “increase the amount they donate each year”. Tools that “allow people to plan when and where to give” ensure that donations are more “aligned with their intentions”. And timely, relevant feedback can help establish “informed giving”, with the most impact.

They collaborated with a workplace donation platform using a variety of interventions. One example involved sending people a “year-end review” by email, offering a timely opportunity to reflect on their donations so far. The purpose was to prime donors’ philanthropic identities (“you’re doing well doing good”), make the total social activity more salient (“look how much has already been achieved with your generosity”), and establish a sense of urgency (“the year is running out”). In the group who received this email, 23% of people made an extra contribution (compared to 20.6% in the control). Also, among the 10% smallest accounts, the amount contributed was 63% higher, at nearly $11,000. It seems nudging works.

Ideas42 also believes we should be nudged. A survey they conducted indicated that Americans think that their neighbours should, on average, donate 6.1% of their income to charity. Yet statistics indicate that on average people donate just 3% of their income.

Many among us probably do indeed end up giving less to charity than we think we should if we really, consciously thought about it – much in the same way that we think we ought to snack less, or exercise more. But with charity donations, it’s hard for a third party to figure out how much that would be. (Arguably, it’s just as hard for ourselves to know our own preferences – see this article.) Aggregating people’s estimates of what their neighbours should donate into a target is probably not the most robust approach.


“I have a weak preference for donuts, so WTF am I doing with this banana?” (image: stu_spivack CC BY)

This highlights a more general concern with nudging. The originators of the concept, Richard Thaler and Cass Sunstein, describe it in Nudge as an instrument of ‘libertarian paternalism’, and require that nudges work “without forbidding any options or significantly changing […] economic incentives”, and that they “must be cheap to avoid”. Whoever disagrees with the paternalistic choice has an easy way to opt out of the nudge.

Conflict of interest

However, Pelle Guldborg Hansen, a behavioural scientist at the University of Roskilde in Denmark who has written extensively about nudging, has proposed a tighter definition, which adds a crucial clause, “in their [the nudgee’s] self-declared interests”. Why is this important?

By Thaler and Sunstein’s definition, nudges have a limited range of effectiveness. People with a strong preference either way will not be nudged by a mere manipulation of the choice architecture: if you really must have a donut, you will not be prevented from taking one simply because the fresh fruit sits in a more convenient spot. And if you already prefer an apple to a donut anyway, well, your life has just been made a little easier. That leaves people with a weak preference, some of whom may have a weak preference for donuts. Because it does not manifest itself strongly enough for them to reach for the less conveniently placed sweet, sweet delicacy, they pick a banana instead.

Is this in the subject’s interest? Is it OK to impose a particular norm (fruit is better than donuts) on all the diners with weak preferences in the cafeteria, and nudge some of them against their actual preference?

Nudging like this could be defended on two grounds. First, there is no such thing as a neutral choice architecture. The eventual choice of people with a weak preference will always be strongly determined by the prevailing choice architecture. Without nudging, some people who (weakly) would prefer fruit may end up with a donut simply because it’s within more easy reach, and so their welfare is harmed by this. With nudging, it’s people whose weak preference is for donuts whose welfare is harmed. One choice architecture is not inherently better than another one, so nudging is not worse than not nudging. Second, all else being equal, a healthy population that is not overweight reduces the burden of healthcare on society. While the nudge may harm some individuals’ welfare, it is serving everyone’s welfare.


That loss of welfare to those who prefer donuts and end up taking fruit would seem relatively small (they also have the option of eating a donut later on), and the societal benefit large. Yet the same does not necessarily apply where charitable donations are concerned.

In the absence of any indication (other than a spurious 6.1% of income figure) how much an individual or a household would like to set aside for charitable donations, legitimate questions can be raised about nudging people to give more. Yes, in some cases, people may be donating less than they actually want, because they procrastinate, because they are distracted and forget, or because they don’t realize how much good their money does.

But it is inherently no different from nudging people to, say, buy more soft drinks. We just don’t know whether this different choice is welfare-enhancing for the individual (it is of course income-enhancing for the other party!)

And while the interventions in Ideas42’s report mostly leave a great deal of conscious agency with the subject, that is not so with all nudges. One example is Give More Tomorrow, an initiative of which Cass Sunstein appears to be a strong advocate. It is similar to Save More Tomorrow, a retirement planning scheme pioneered by Richard Thaler and Shlomo Benartzi, in which employees commit to channelling a percentage of all future salary increases into their pension pot. Here, instead, they commit to donating a percentage of these raises. Unlike money invested in a retirement fund, once it has been donated, it cannot be retrieved. Furthermore, inertia will continue to work against the welfare of those who have a weak preference to donate less.

Nudging should therefore be done with great care. Nudgers are generally ignorant of the preferences of the individuals in the target group, certainly when those are not explicitly stated. There will always be people for whom a planned nudge will be welfare-reducing. At the very least, mindful of Pelle Guldborg Hansen’s proposition, nudgers should consider the self-interest of all individuals, and justify whether the welfare enhancement for some compensates for the welfare reduction for others.

Otherwise, the libertarian nature of nudging may be little more than a thin veneer.


Posted in Behavioural economics, Economics, Emotions, Morality, Society

Economics in your thoughts

Economic thinking is not just about objective costs and benefits – it is also about your very thoughts

You’re driving along a road in town, and in the distance you can see a pedestrian crossing, protected by traffic lights. A figure with a pushchair is approaching it and proceeds to press the button. By the time you reach the lights, they have turned to red and you need to stop. The pedestrian is an older man, who smiles at you as he crosses, pushing a small child sleeping peacefully in the pram. The light turns green again and you drive on.

A very common scene, and one which – like so often happens in traffic – conceals economic activity. Road space is a scarce resource, which cannot be used simultaneously by different road users without causing unwelcome collisions. And allocation of scarce resources is a matter of economics.

Rules (like the obligation to stop for a red light) and mechanisms (like the push button at the crossing) make that possible in a relatively smooth manner. But when two users compete for the same resource, the benefit of the use to one of them often represents a cost to the other. In this case, the man with the child, in gaining exclusive access to the crossing, imposes a cost on you: you have to wait until the light turns green.

How big is this cost? You could estimate it by assuming a particular hourly rate (for example, what you are paid at work), and working out how much time you lose by having to wait for the light. Or you could ask yourself: how much would I be prepared to pay to ensure the light doesn’t turn red until after I’ve driven past, so my journey is not interrupted? This is the conventional economics way of looking at it. Perhaps you don’t take a single-transaction view: tomorrow you may be the pedestrian, benefiting from the ability to interrupt the flow of traffic, and imposing a cost on drivers. What is a cost today is a benefit tomorrow – the swings and roundabouts of traffic facilitate efficient interactions, and even things out over time.
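That back-of-the-envelope estimate is easy to sketch in code. The hourly rate and waiting time below are illustrative assumptions, not figures from the text:

```python
# Valuing the time lost at a red light, using an assumed hourly rate.
def waiting_cost(hourly_rate: float, wait_seconds: float) -> float:
    """Cost of waiting, in the same currency as hourly_rate."""
    return hourly_rate * wait_seconds / 3600

# Illustrative numbers: an £18/hour wage and a 45-second wait.
print(f"£{waiting_cost(18.0, 45):.2f}")  # about 23p
```

Even at a generous wage, a single stop costs pennies – which already hints that the annoyance we feel at a “pointless” red light cannot be purely economic.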

All in the mind

There is, of course, more to it. Let’s riff some more on this thought experiment. Imagine the guy pressed the button while you were still 200 metres away, but then judged it was safe to cross at once despite the pedestrian light still being red. By the time you reached the lights, he would be on the other side of the road, continuing his walk, while you would be waiting for a red light – for no obvious good reason. Would your experience be entirely the same? Or would you be somewhat annoyed at the pointlessness of your wait? What if the person had pressed the button but then changed their mind afterwards – perhaps they remembered they still had an errand to do on the same side of the road? Here too, you’d be stopped but there would be nobody benefiting from the transaction. Frustrating, isn’t it?


Not all red traffic lights are created equal (image: Matthias Ripp CC BY)

Or picture this: it’s not an older guy pressing the button, but two young kids who see you coming, wait until you’re stopped at the lights, and then laugh at you and run away – without crossing the road. They’re definitely deriving some benefit from having stopped you, but perhaps not in a way that you approve of.

None of this should make any difference to someone devoid of emotions – or something. Autonomous vehicles would be programmed to stop for red lights, without getting worked up if there is nobody actually making use of the green pedestrian light. In fact, you yourself may not even be that bothered if you need to stop for a red traffic light at a junction operating on a fixed cycle, and there is no traffic in the intersecting road. That’s just how traffic lights work.

But at the pedestrian crossing, being forced to stop may well feel very different in each of these situations. And that is all in your thoughts.

Two kinds of utility

A little while ago I got involved in an interesting exchange with Moshe Hoffman, an economist at Harvard. It started off with this intriguing question*:


We do indeed seem to derive more pleasure from finding money in the road than from receiving our salary into our bank account. That may be explained, at least in part, by the fact that we feel we have worked for our pay, so there’s a clear quid pro quo in that exchange. Finding money is different – there’s a benefit without a corresponding cost. It is rarer too, so the surprise adds to the joy.

This suggests that perhaps there are two components at play here. On the one hand, there is the aspect of pure economic utility, for example the gain of $10 – whether as part of our salary, or by finding it. On the other, there is the effect that the event of obtaining the money produces. In one case it’s pretty meh: we’ve worked for our boss and in return we get some money. In the other, there is a specific pleasure at finding money. Every economic transaction in which goods or services are traded or bartered, but also any human interaction, or indeed a simple event like finding money, could thus have both a strict economic utility (material cost and/or benefit) and a hedonic utility (pleasure and/or pain).

Moshe speaks of ‘pleasure points’ to express the latter utility. These are reminiscent of the hedons and dolors, the units of pleasure and pain in the hedonic calculus proposed more than 200 years ago by the utilitarian philosopher Jeremy Bentham. Economic and hedonic utility are not necessarily related. We can easily imagine events which do not materially affect our wealth, but which provide us with great joy: a smile from a stranger, a compliment from a colleague, a kiss from a lover. But while, say, the monthly child benefit payment increases our spending power, it’s unlikely to produce much specific pleasure.


Price: $10 – or could it really be free? (image: Asbjørn Sørensen Poulsen CC BY)

Yet the curious thing is that, even though the economic utility of an increase in wealth is realized just once, the corresponding hedonic utility seems to be reusable. Finding $10 means “our expected future consumption goes up”, as Moshe puts it – that’s the economic utility. We can spend it once, and then it is gone. Alongside it, we experience joy at the surprise and luck of obtaining money for nothing. But on top of that, buying a tasty lunch with this found money can provide more pleasure than doing so with money for which we’ve had to work. And quite likely, actually eating that free lunch (there obviously is such a thing!) will fill us with additional delight compared to one paid for with hard-earned money.

The money itself seems to be of little or no consequence in this. Imagine Moshe had found an object that had value to him – say a hammer, or a beautiful shell during a walk on the beach. He would feel delight at the find, but he would also experience pleasure every time he used the hammer, or admired the shell on the sideboard in his lounge. Arguably, he would feel more enjoyment than if he had had to buy these objects with his own money.

In charge

Are we double counting anything? I don’t think so – hedonic utility does not seem to follow the same algebraic rigour as economic utility. It is as if the event by which something that provides utility (money, a tool, an aesthetic object) is acquired somehow endows it with extra hedonic power, which can be drawn on potentially indefinitely.

Much of this hedonic stuff happens within our mind, but that doesn’t mean there is no connection with economic utility. We may not get a huge amount of specific pleasure getting paid a salary for a month’s work, but the knowledge that we have more spending power as a result does make us happy (and not getting paid would definitely make us pretty cross). Economic utility generally does not leave us indifferent, so we actually experience it as hedonic utility. In addition, we could figure out how much money would compensate us for the loss of the hammer or the pretty shell, to estimate their hedonic utility.

But while economic utility is objective in nature (a pound in your pocket or your bank account is worth exactly one pound, to you or to any other person), hedonic utility is not. The amount of pleasure we derive from our economic transactions (what we buy and sell – including our time at work), from our interactions with fellow humans (what we do for, and with, others), and from events (finding bank notes, experiencing a sunrise over a misty field) is largely of our own making.

We may not be able to control directly how much economic wealth we have, but we are in control of our subjective hedonic utility. It’s up to us to decide how much joy we get from finding $10, seeing a small child taking its first steps, a hug from a loved one, or the picture of our parents on the wall.

In a very real sense, we are in charge of deciding how rich we are.

*: There is something fishy about an economist finding money on the ground, of course. As the old joke goes, if there was a bank note on the ground, somebody would have already picked it up.

Posted in Behavioural economics, Economics, Psychology

Bags of nudges

(featured image credit: Mitchell Haindfield CC BY)

The British government is planning to double the charge for single-use plastic shopping bags, to reduce their usage further. Will this work?

I am pretty sure that you, like so many environmentally conscious consumers, take your own reusable bags to the shops all the time. Maybe you simply have no choice – in a growing number of countries, disposable bags have simply been banned. Or if they are still available at a price, you just decide not to pay for the bags that used to be given away for free.

There are many ways to reduce the consumption of certain goods. Bans can be pretty effective, though they are still not an entirely watertight approach. People can stockpile goods (as appears to be the case in New Zealand, ahead of an announced ban on plastic bags). And while the sale of, say, hard drugs or radar detectors is illegal, anyone sufficiently motivated and willing to pay the black-market price can still obtain them. But it is hard to imagine sellers hovering around the tills in supermarkets, surreptitiously offering single-use bags at inflated prices. So if a government really wants to cut the use of such bags (and their environmental impact), a ban is probably going to be the most effective.

A bit more libertarian

But maybe bans are a bit dictatorial. Making consumers pay extra for the environmental cost they cause is a much more libertarian approach: pollute if you want, but foot the bill for the damage you cause. Sometimes this can be done in quite a direct manner. In Belgium, for example, the collection and processing of waste electrical and electronic equipment is managed by Recupel, a non-profit organization. Importers and manufacturers have a legal obligation to recover and handle discarded appliances. They can either do this themselves, or outsource it to Recupel in return for a contribution, which is explicitly added to the price the consumer pays. The “polluter” hence pays for the “clean-up” that will be needed several years in the future.

For single-use plastic bags the contribution serves a different role. It does not really directly cover the handling of the waste, but it provides a negative economic incentive. All else being equal, and with the exception of so-called Veblen goods (whose high price makes them more, rather than less attractive), people tend to consume less of something as it becomes more expensive. Hey presto!

At first sight, it looks as if reducing the use of disposable bags, whether through a ban or through a charge, has nothing to do with nudging. The former clearly eliminates a key choice, and the latter provides a material disincentive. But is that tiny cost really the reason why the measure has been so successful (in the UK, usage has dropped by 86% since the charge was introduced)?

Let’s do the sums. In the olden days, I found that I typically needed eight bags for a weekly shop of around £100, so that would cost me 40p (€0.45, $0.50), or about 0.4% of the total amount. That is not remotely a significant economic disincentive, and in any case well worth the convenience of not needing to bring your own bags.
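A quick sketch of those sums, using only the figures quoted above:

```python
# Eight 5p bags against a £100 weekly shop, as in the example above.
bags_per_shop = 8
charge_per_bag = 0.05  # £ (5p)
weekly_shop = 100.0    # £

bag_cost = bags_per_shop * charge_per_bag  # total bag charge per shop
share = bag_cost / weekly_shop             # as a fraction of the bill
print(f"£{bag_cost:.2f} per shop, {share:.1%} of the total")  # £0.40 per shop, 0.4% of the total
```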


Look ma, no bags! (image: Kat Northern Lights Man CC BY)

So there must be something more at play than just the economic cost. If the cost of the bags were absorbed in the price of the 30 most expensive items I bought every week – a penny each – I most likely would not notice. Yet I certainly would notice it if the till operator explicitly rang up the extra charge. That would increase the pain of paying – no matter how trivial the amount (even accumulated over time, it’d run to less than £2 a month, or about £20 per year).

The salience of the cost of the bags functions as a covert nudge, amplifying the profoundly modest contribution so it punches well above its economic weight in terms of the effect. It’s the fact itself that we need to pay an amount – any amount – for the bag that has the power to change our behaviour.

And there are further, almost inadvertent, nudges supporting it. Reusable bags were available at the tills even before the charge was introduced, but only a small minority actually bought and used them – it was simply easier to stick to the default of taking the single-use bags that were handed out freely. Now, almost overnight, that default had changed and a sizeable, and very visible, proportion of customers, growing in numbers by the week, appeared to have embraced them. We are a social animal, and social proof is a strong influencer of our behaviour, so the snowball rolled and rolled.

More of the same?

But – here is a bit of framing for you – if bag use has dropped by 86%, then four years on it still stands at 14% of its 2014 level. On average, each British household still uses more than 30 single-use bags per year. To reduce consumption further without introducing an outright ban, additional measures will be needed.

This is why the UK government is toying with the idea of increasing the charge per bag to 10p. Standard economics would indeed predict a substantial drop in the use of any product whose price is doubled. One might even plausibly expect it to fall by the same factor again as the initial charge achieved: a reduction by a further 86% of current usage would leave us with just 2% of the 2014 levels, or about 150 million bags.
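That extrapolation is simple compounding. A sketch, using only the 86% drop and the 150-million estimate from the text (the 2014 baseline is backed out from those two numbers):

```python
# Two successive 86% reductions leave roughly 2% of the original usage.
drop = 0.86
after_first = 1 - drop                   # 14% of 2014 usage remains today
after_second = after_first * (1 - drop)  # a further 86% drop on top of that

print(f"{after_second:.1%} of 2014 usage")  # 2.0% of 2014 usage

# If 2% corresponds to about 150 million bags, the 2014 baseline was:
baseline = 150e6 / after_second
print(f"roughly {baseline / 1e9:.1f} billion bags")  # roughly 7.7 billion bags
```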

There are reasons to be sceptical, though. A charge of 10p would indeed be a doubling, and framing it that way might reinforce the effect. But anyone who currently happily pays 5p for a bag is more likely to see this doubling as a negligible extra cost of 5p from the current reference point. It’s “pennies a day” (a concept described in depth in 1998 by John Gourville, a marketing professor at Harvard), and we are then more likely to view the increase in absolute rather than relative terms.

To ensure the economic cost really makes current purchasers of single-use bag reconsider, the price would probably need to go up a lot more. It would probably be wiser to turn to behavioural economics for counsel. Nobel laureate Richard Thaler, who wrote Nudge together with Cass Sunstein, often states that his top mantra from the book is “Make it easy” – that is, remove the obstacles toward the desired behaviour.

But that can also be turned on its head: make it more inconvenient or more annoying to do the undesired thing. What if the cost of a bag was set not at 10p but at 6p, payable in cash only? The annoyance of having to carry change, and fishing out the coins is likely to have much more effect than the insignificant cost of adding 5p.


Too cheap and too easy (image: Edinburgh Greens CC BY)

Or imagine that the single-use bags were no longer available at the tills, and you’d need to buy them separately. Having to queue at the customer service desk before you start your shop would be bad enough, but it’d be a lot worse if you forgot, and had to return there to get some bags before you could pack your goods. The irritation of anyone behind you at the till would of course add to the general effect.

An alternative, perhaps even more potent intervention might be to offer the single-use bags at just one till (the furthest one from the exit, naturally). This till would normally be unmanned, and you’d need to press a button to summon a cashier, triggering a recorded message, something like “Colleague announcement. Customer wishing to buy single-use shopping bags at till 45!”. Way to change behaviour.

And if that were still not enough to crank up the social proof of doing the right thing together with, by now, the vast majority of shoppers, how about imprinting the bags with a slogan like “I don’t give a damn about the environment”?

Economic interventions have their use, and it is true to say that people respond to incentives. But it’s not just incentives we respond to. Sometimes a bagful of nudges is just what is needed.

Posted in Behavioural economics, Economics, Psychology

How effective is your altruism?

(featured image credit: Yukiko Matsuoka CC/BY)

We cannot escape profound, inevitable trade-offs when it comes to how we use our resources – including when we give them away. 

Next time you pay for your groceries at the supermarket, have a look at your till receipt. It is a document more valuable than you might think – an empirical reflection of your preferences. Of all the things that are available and that you could have bought for the amount you spent, that particular combination provides you with the maximum utility. In addition, both spending more and spending less would have reduced that utility: the least useful item in your trolley was still worth more to you than the price tag, but anything more would not have been worth the price to you.

At least, that is what conventional economic theory would make from your receipt. Anyone rationally pursuing their self-interest will have done the necessary calculations to work out, not just which of two pots of jam provides the most utility, but also whether a pot of jam provides more utility than a bag of potatoes.

In practice, of course, pretty much nobody shops that way. We don’t explicitly consider how effectively our money is spent in the supermarket. As long as what we get home is good enough and meets our most important needs (imagine you stocked up on so much discounted strawberry jam that you had no money left for toilet paper), overall effectiveness is not really of great concern.

The effect of doing good

The same is not necessarily true for charitable giving. Money we donate literally has the capacity to make a difference between life and death. Should we not be concerned that a pound we give to charity A will save more lives than one given to charity B?

Enter effective altruism, a concept that has gained momentum thanks to the views and activism of philosopher Peter Singer. Singer is perhaps best known for the drowning child thought experiment, first formulated in his paper Famine, Affluence and Morality. It goes something like this: if we walk past a shallow pond in which a small child is drowning, we all have a moral obligation to wade in and save it – even if it would ruin our clothes and shoes. Should we not, therefore, be willing to make an equivalent sacrifice, if it would save the life of a small child thousands of miles away?

The different differences $1000 make (source: The Moral Imperative toward Cost-Effectiveness in Global Health)

The effective altruism movement seeks to channel the resources we are willing to give away in such a way that they produce the biggest bang for the buck. The introduction on the movement’s website shows what $1000 can achieve, depending on how it is used. The objective basis for this is a measure called the DALY, or Disability-Adjusted Life Year, used by the World Health Organization. One DALY represents the loss of one year of “healthy” life. Why this qualification? It recognizes that it is more worthwhile to prevent the premature death of a younger and/or healthier person than of an older and/or sicker person.
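The underlying comparison is straightforward: divide a fixed budget by each intervention’s cost per DALY averted. The cost figures below are purely illustrative placeholders, not the numbers from the WHO or the chart above:

```python
# DALYs averted per $1,000 donated, for hypothetical interventions.
# Cost-per-DALY figures are illustrative placeholders only.
cost_per_daly = {
    "intervention A": 5_000,  # assumed $ per DALY averted
    "intervention B": 250,
    "intervention C": 20,
}

budget = 1_000  # $
for name, cost in cost_per_daly.items():
    print(f"{name}: {budget / cost:.1f} DALYs averted")
```

The ratios can span orders of magnitude, which is the movement’s core claim: where the money goes can matter far more than how much is given.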

This is a very clinical, rationalist, accountant-like approach to charity, and it attracts a fair amount of criticism. BBC radio broadcast a programme on the topic a little while ago (recommended listening, still available on the Radio Player), which explored the objections to effective altruism. For example, altruism is supposed to be an emotional affair, not like a company board choosing the investment proposal that produces the highest return on investment. Calculating which lives are worth saving – let alone treating older people, or those with a disability, differently – feels, well, wrong.

Still feel there is some merit in seeking effectiveness, and in making sure your resources are spent such that the world improves the most? How about this thought: is it OK to spend more on your own children, who, by any account, already have a vastly better life than children in the most deprived areas of the world? Can you justify getting them a new bike while, a few thousand miles away, children are dying of starvation or malaria?

The science of giving

An interesting recent paper by Jim Everett, a social and moral psychologist at Oxford University, and colleagues, sheds some light on this stark tension. Effective altruism is consequentialist in nature: the moral righteousness of one’s choices is judged by their consequences. This means that the well-being of every individual – your child, or a child in Mali – must be treated the same. Resources should be allocated to strangers, rather than to a family member, if in doing so the total benefit to the strangers would be larger than the benefit to the family member.


It may take more than $2000 to fix, but every little helps (image: photobeppus CC/BY)

The researchers carried out four studies, in which they asked participants to report their perception of a hypothetical protagonist making a strongly consequentialist or non-consequentialist decision in impartiality dilemmas, in which either a close relative or a number of strangers were favoured. The participants were then asked to indicate whether they would see the protagonist as suitable in roles including a spouse, a friend, a boss or a political leader. One of the dilemmas featured Janet, an engineer, who either spent a weekend cheering up her lonely mother, or instead helped families rebuild houses damaged in recent flooding. In another, Susan, a grandmother, had won a $2000 prize, and either donated it to a charity providing mosquito nets to families in the developing world to protect them from malaria, or gave it to her grandson so he could fix his car.

The results from the studies suggest that impartial consequentialists would not be popular as a spouse, friend or boss. The researchers conclude that we actually expect partiality from people with whom we want to enter into such a close personal relationship, and are put off by impartiality.

Another recent paper by Jonathan Berman, an economist and marketing professor at the London Business School, and colleagues, looks at what might prevent donating to the most effective cause. The researchers examined various aspects of charitable giving, for example the trade-off between personal preference for a given cause and the consequences on welfare, or how others are judged based on whether they choose an effective or ineffective option.

They found that people consider it to be quite appropriate to be led by subjective preferences, rather than by how much effect a donation will have – even if there are clearly more effective options available. The emotional connection outweighs the amount of good that is done.

Against the current

So it looks as if the effective altruists are swimming against a strong current. We don’t want people who are impartial in the allocation of their resources in our social circle, and we think it is better to support a cause that is close to our heart than one that saves more lives.

And yet, perhaps the argument against effective altruism is largely a straw man argument. Portraying the effective allocation of resources as something that is important in an absolute way, in which every pound, euro or dollar we are about to spend is analysed to death, is a bit disingenuous.


Not all the funds go to life-saving cancer treatment (source: andreas160578)

Consider the trade-offs facing healthcare professionals and policy makers. Given the limited resources available, they are obliged to make economic evaluations. They use a measure closely related to the DALY: the QALY, Quality-Adjusted Life Year. Does that mean that there are no funds available for physiotherapy when you sprain your ankle, because all the money goes to life-saving cancer treatment? Of course not.

Ensuring effectiveness, whether it is in healthcare, in our household budget, or in how we channel our charity, means comparing intelligently – for example, between two different cancer treatments. We don’t need to compare wine with laundry detergent, but we can choose between Chilean wine and French wine. (This is much like mental accounting, an important behavioural economics concept.) As the philosopher John Gray remarks in the BBC radio programme: we can choose between spending time comforting a dying person, and spending time working so we can donate the money to an effective charity. There is no right answer.

We can decide how much we give to charity, and how much we reserve for our family. And within our charity mental account, we can decide how much we give to dementia research, to supporting the homeless, to fighting parasitic worm infection in the children of the world, and to stray cats.

But within each category, we must choose whether we want to support the cause for which we have a subjective preference, or the one that maximizes the welfare in the world. And if we choose the former, we should be aware of the consequences – whether we are consequentialists or not.


Posted in Behavioural economics, Economics, Ethics, Morality, Philosophy, Psychology

(Behavioural) economics through sunglasses

(featured image credit: Jennifer Stahn BY/CC)

When you’ve embraced economic thinking, even a summer break from work doesn’t stop you from seeing what profoundly economic beings we humans truly are

This summer I have once again been wearing my trusted economics sunglasses. As before they supplied some interesting observations, confirming how good economics is at describing and explaining a lot of our behaviour.

1. The weather

In my home town (and much of England), we would on average get 8 rainy days per month in June and July. That’s two a week. Of the other days, not all are warm and sunny, so really nice days are generally quite scarce. Not so this year: we had 41 consecutive dry days between the middle of June and the end of July. Most of these were genuine summer days (30 with a temperature above 25 degrees C). In June we had 75 hours more sunshine than average, and in July that was nearly 100 extra hours.

Scarcity is a core concept in economics: when a resource becomes relatively scarcer, market prices tend to go up. This helps ensure that the resource is allocated to those who value it the most, and encourages suppliers to increase their supply. We cannot buy or sell good weather, of course. But scarcity is also an important factor in behavioural economics: we tend to be attracted to, and overvalue what is scarce (and undervalue what is abundant). Shrewd suppliers exploit this, for example by telling us only a few hotel rooms remain available at a bargain price, and thus making us buy.


How boring can the weather get?

Scarcity and abundance in weather patterns influence our behaviour too. During an ordinary summer, a forecast of a couple of warm, dry and sunny days would easily encourage us to plan a day out, to maximize the enjoyment of a relatively rare opportunity. But this year every day was a summer day… and as a result we didn’t go away even once. This shows how a sense of abundance can make us more likely to be wasteful. If you can get away any day, in the end you don’t get away at all, and so all the nice days are lost.

The unusual weather has also reset the benchmark against which we rate summer days (in a process similar to anchoring). It’s late August now and while the temperatures are still above normal, and should otherwise be making me very happy, I can’t help feeling a tad disappointed. Value is relative, in the weather as in the shop.

2. The shop

Differential pricing is the holy grail for retailers. When you offer your goods or services at the same price to everyone, you exclude every prospective customer whose willingness to pay (WTP) is below that price. Moreover, you’re giving it away too cheaply to everyone willing to pay more than your price.

Some industries have become very good at targeting different customers and pitching their prices close to the WTP in the different categories. Airlines are a prime example: identical seats on the same flight, costing the company exactly the same, are offered (and bought) at very different prices.

This is not so easy to achieve in groceries, but the astute owner of the local shop where we were staying (who has featured before in this blog) successfully introduced differential pricing this year. How does he manage to charge 15% more for one bottle of mineral water than for another, identical one? Presumably it was the weather that brought inspiration to the crafty shopkeeper. Thirsty people would prefer cooled drinks, he must have thought – and be willing to pay for it. So he simply added another 20 cents to the price of a bottle of water from the fridge, compared to one from the ordinary shelf.

Admittedly, this is not entirely cost free to him: chilling the water needs energy, and it displaces some other stuff to make space for the bottles. But I have no doubt that it pales into insignificance compared to the extra 20 cents he makes on each one during the summer of 2018. When you make the perceived value salient, the WTP goes up, and so do your profits.

3. The rubbish

Rubbish collection in a seaside town with lots of holiday makers over the summer is a complex affair. Most municipalities in Belgium operate a ‘polluter pays’ principle: inhabitants buy rolls of refuse bags which carry a rubbish levy, and only those bags will be taken by the collectors. The more rubbish you produce, the more you therefore pay. Fair enough, and environmentally sensible.

But each town runs its own scheme, so as a tourist renting a place for maybe a week or two, you’re unlikely to want to buy a whole roll. And even if you did, you’d still have to find a way of disposing of the waste at the end of your holiday, as the next collection may not be for several days.


Just you try to squeeze a refuse bag through that hole!

So, many tourists started resorting to surreptitiously sneaking out after dark, and dumping their rubbish in the public bins. The municipalities retaliated by placing bins with small holes, to prevent entire rubbish bags being inserted. Undeterred, the tourists then began to transport their rubbish to the ubiquitous bins in small enough quantities to fit the holes – several times per day.

The next move from the town was radical: remove the public bins altogether. With no bins next to the benches and along the streets, people with small bits of waste – an empty bottle of cold water from the grocer’s, say, or the paper bag in which they brought fruit or wonderful Belgian pastries – have two choices: take it home, or just drop it. It doesn’t take many people to follow the latter course to create a real problem. And so every morning, a large team of street cleaners must turn out in force, embarking on the Sisyphean task of sweeping up the litter left lying around in the previous 24 hours. (The story now begins to resemble the old song, There’s a hole in my bucket.)


Cause (L) and consequence (R)

This provides us with a minor economics treasure trove, illustrating several concepts. The tragedy of the commons is the phenomenon where people acting selfishly (not bothering to take their litter home and simply leaving it behind) spoil the collective welfare (leading to a dirty environment for everyone). Rubbish collection could be treated as a public good. Funded out of the public purse (perhaps by a tourist tax), it could be unconditionally accessible to everyone (it’s non-excludable), and use by one person does not reduce the availability to another one (it’s non-rivalrous). But this raises the possibility of free-riders: people who either benefit without contributing to the cost, or who overuse the resource (and thus add to the overall cost). It is this free-rider problem that the municipalities want to eliminate. But in doing so they’ve created the need for more of another public good: street cleaning. Time will tell whether this system is optimum.

We also see how the conscious choices behind our behaviour are often a precarious balance. On the one hand, we have the extrinsic motivations of our neoclassical economic pursuit of self-interest. All else being equal, we seek to follow the path of least effort, of the lowest cost, and of maximum benefit. On the other hand, we experience the intrinsic motivation of our personal ethics and morality, urging us to do what is right. Going by the amount of litter on the pavement every morning, it would seem not everyone strikes this balance in the same way.

As in earlier years, I return from holiday with a reinforced sense of the profound economic nature of how we choose and how we act. Human behaviour is extremely complex and varied, but (behavioural) economics offers insights that can enhance our understanding. And that understanding can only be helpful if we want to make our lives, and the world a little better.

Or perhaps all of this is just my confirmation bias having a ball while my System 2 is on holiday…

Posted in Behavioural economics, Cognitive biases and fallacies, Economics, Ethics, Psychology, Society

More fun with percentages

(featured image credit: geralt/Pixabay)

Absolute numbers generally give little insight to anything, but even percentages, averages and ratios must sometimes be treated with caution

Earlier this week, the British advertising regulator ASA banned an Amazon advert that promised one-day delivery for Amazon Prime members. The Independent newspaper reported that they did so “after receiving hundreds of complaints from customers”. 280 people had complained, most of them because they had not received their goods within one day of placing their order, around Christmas 2017. Is 280 a lot?

In any case it’s a good example of how the media often use (large) absolute numbers in support of a story. 280 does sound significant, but without knowing how many shipments were made in total, it doesn’t really tell us all that much. Worldwide, Amazon sent 5 billion parcels ordered via Prime in 2017. Actual UK shipments are not reported, but using their turnover in the UK (about £8.8B or $11.4B) and globally ($178B), a first approximation suggests 6.5% of the Prime deliveries were made in Britain – or about 320 million, so maybe 40 million in December. That puts 280 complaints (0.0007%) into some perspective.
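
This back-of-envelope estimate can be replayed in a few lines of Python. All inputs are the figures quoted above; the UK/global revenue ratio is only a proxy for the share of Prime parcels delivered in Britain, and the assumption that December accounts for roughly one-eighth of annual volume is a rough guess.

```python
# Replaying the back-of-envelope estimate. The revenue ratio stands in
# for the (unreported) share of Prime parcels delivered in the UK.
global_prime_parcels = 5_000_000_000          # worldwide, 2017
uk_revenue, global_revenue = 11.4e9, 178e9    # annual turnover, USD

uk_share = uk_revenue / global_revenue        # share of global revenue
uk_parcels = global_prime_parcels * uk_share  # ~320 million a year
december_parcels = uk_parcels / 8             # rough guess for the festive peak

complaint_rate = 280 / december_parcels
print(f"{uk_share:.1%} of revenue, ~{uk_parcels / 1e6:.0f}M UK parcels a year")
print(f"complaint rate: {complaint_rate:.4%}")
```

Whatever the exact inputs, the order of magnitude is clear: a few hundred complaints against tens of millions of deliveries.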

Without any reference, we can easily be misled into ascribing some spurious significance to a number, but once a number is expressed as a relevant percentage, it makes more sense (and is often not remotely as sensational).

However, percentages themselves are not necessarily all that enlightening and may themselves need more context. A couple of years ago I already wrote about the capacity of the “%”-sign to fool us in the shops, and in opinion polls and predictions. Here are some more situations where extra care may be needed before drawing a conclusion.

A cancer epidemic, or what?

A few weeks ago, the Independent newspaper reported that, globally, the number of cancer cases had risen by a third in the last 10 years. What could be behind this dramatic epidemic? Was this the reckoning for our carcinogenic lifestyle choices (smoking and diet, exposure to the sun)? In part, for sure, but there is more to the story. Cancer is, by and large, a condition that mostly afflicts older people: half the cancers diagnosed in the UK are in people aged 70 and above. Only just over 10% of cases are in people below the age of 50. So, as the population ages, it is hardly surprising that the total number of cases will go up. A “cancer up by 33%” headline is more informative than “new cancer diagnoses up by 120,000”, but even the ratio of current incidence over historical incidence doesn’t give us the full picture.


Cancer: the older people have it (source)

Economic statistics can also be subject to a strange and potentially misleading effect. Household income is a common metric to examine the distribution of economic resources in a society. By looking at which share of income goes to which slice of the household population, we can judge whether a society is, and is becoming, more or less equal. But could it be, for example, that everyone gets richer, but that the middle-class household income nevertheless drops?

In a superb video, economist and host of the Econtalk podcast Russ Roberts shows us, by means of a hypothetical and simplified example, how such a surprising result can come about. Imagine a perfectly egalitarian society, with ten 2-person households. Every citizen earns $50,000, so the average household income in each quintile (a slice of 20%) is $100,000, exactly 1/5 of the total. The economy performs well, and 30 years later, every citizen’s income has doubled. However, through divorce on the one hand, and younger people (replacing those who died in that period) hooking up later on the other, the population, still 20 people, now forms 15 households: 10 singletons and 5 couples.

When these 15 households are divided into quintiles again, we now find that the “richest” fifth represents 30% of the total income, whereas for the ‘middle class’ quintile that is just 15%. We can also see that the richest 20% households have doubled their income and increased their share of the total by 50%; the middle class households’ income, in contrast, has stagnated and their share is down by 25%.
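
Roberts’s arithmetic is easy to replay. A minimal sketch, using the household compositions given above (incomes per household, in dollars):

```python
# Replaying Russ Roberts's hypothetical: ten $100k couples at the start;
# five $200k couples and ten $100k singletons thirty years later.
then = [100_000] * 10
later = [200_000] * 5 + [100_000] * 10

def quintile_shares(incomes):
    """Share of total income held by each quintile, richest first."""
    ranked = sorted(incomes, reverse=True)
    q = len(ranked) // 5                    # households per quintile
    total = sum(ranked)
    return [sum(ranked[i * q:(i + 1) * q]) / total for i in range(5)]

print(quintile_shares(then))   # [0.2, 0.2, 0.2, 0.2, 0.2]
print(quintile_shares(later))  # [0.3, 0.25, 0.15, 0.15, 0.15]
```

Every individual’s income has doubled, yet the middle quintile’s share has fallen from 20% to 15% – purely because the composition of the households changed.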

Now this is of course a hypothetical situation, but you can see how changes in demographics can influence the figures, and distort our understanding. In real life, consider the effect of an ageing population. Pensioners generally have a lower income than people in work. As everyone lives longer, the proportion of pensioners in society increases, and so will the proportion of low-income households. Similarly, lesser-educated adults tend to have relatively lower incomes than people with a degree, and as the former category is more likely to live in single adult households, they ‘pull the middle down’. Over time, the distribution can therefore show what looks like a shift of income from the poor towards the rich, just because of an increase or decrease of a given type of household.

Hiring bias

Something similar can happen in a variety of other situations. Imagine a hospital that wants to avoid earlier criticism that women are less likely to be offered a job than men. In the preparation of the latest annual report, both the head of the clinical staff and the head of the admin staff show that women did better than men in the hiring process: nearly 87% of female candidates were hired for clinical posts, and more than 45% for the administrative posts; higher figures than those for men (80% and 40% respectively). So all is well – until the figures are aggregated. To the horror of the CEO, for the hospital as a whole it turns out that nearly 64% of the women were hired, but that the men were almost 5 percentage points more likely to be offered a post, at close to 69%.
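
The post gives only the percentages, so the applicant counts below are hypothetical – but they are one set that reproduces those percentages, and they show how the reversal arises when the departments are pooled:

```python
# Hypothetical counts consistent with the hospital's reported rates.
# Key: department -> (hired, applicants).
women = {"clinical": (39, 45), "admin": (25, 55)}
men   = {"clinical": (72, 90), "admin": (14, 35)}

def rate(hired, applicants):
    return hired / applicants

for dept in ("clinical", "admin"):
    print(dept, f"women {rate(*women[dept]):.1%}  men {rate(*men[dept]):.1%}")

# Pooling the departments reverses the comparison.
w_total = rate(sum(h for h, _ in women.values()), sum(a for _, a in women.values()))
m_total = rate(sum(h for h, _ in men.values()), sum(a for _, a in men.values()))
print(f"overall: women {w_total:.1%}  men {m_total:.1%}")
```

The reversal comes from the mix: most women applied for the admin posts, where hiring rates are much lower across the board, while most men applied for the clinical ones.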


This phenomenon is known as Simpson’s paradox, after the British statistician Edward Simpson, who described it in a classic paper in 1951. (Trivia: this was not the first reference to it, though: it was identified earlier by Karl Pearson in 1899 and Udny Yule in 1903, but it was Simpson’s name that stuck.)

The intriguing story of the hospital shows that we need to be careful in interpreting percentages. Are women less likely to be hired (as the aggregate data suggest) or more likely (according to the departmental data)? As so often, we need to try and untangle correlation and causation. Is the cause of the apparent discrimination against women the hiring policy? When we look at the individual data, we see that women apply disproportionately for an administrative role (where there are fewer vacancies, and where the success rate across the board is half that for clinical posts). But just try to capture that nuance in a headline…

Finally, here’s an example of how you can use statistics to satisfy your boss with minimum effort. Zita is the manager of two teams of translators who make subtitles for film and TV. Team A consists of four people, Alice, Bob, Chris and Dan; in Team B are five: Erik, Fran, Gerry, Hank and Iris. Their long-run daily averages (number of words translated per day) are shown in the table below. Team B clearly performs significantly better than Team A, and her boss instructs her to crack the whip on Team A, so that their average gets at least to the industry norm of 2500 words per day.


Zita ponders for a moment, and then has a flash of insight: she decides to move Fran to Team A. Hey presto: not only does the average of team A hit the target 2500, but her boss will no doubt be pleased to hear that Team B’s performance is also up!
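
The underlying table hasn’t survived here, so the daily averages below are invented – but they reproduce the effect described. Because Fran’s output is above Team A’s average and below Team B’s, moving her raises both averages (an instance of what statisticians call the Will Rogers phenomenon):

```python
# Hypothetical long-run daily averages (words per day) chosen to
# reproduce the trick: Fran sits between the two team averages.
team_a = {"Alice": 2200, "Bob": 2400, "Chris": 2500, "Dan": 2700}
team_b = {"Erik": 2750, "Fran": 2700, "Gerry": 2850, "Hank": 2900, "Iris": 3000}

avg = lambda team: sum(team.values()) / len(team)
print(avg(team_a), avg(team_b))   # 2450.0 2840.0 - Team A below the 2500 norm

team_a["Fran"] = team_b.pop("Fran")  # Zita's intervention
print(avg(team_a), avg(team_b))   # 2500.0 2875.0 - both averages are up
```

Not a single translator works any harder, yet both teams look better on paper.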


If only all management interventions were this easy…

Posted in Behavioural economics, Economics, Psychology

Bending your mind

(featured image credit: Emilio Garcia CC BY)

Our beliefs are not always as solid and stable as we tend to, well, believe they are….

Our view of the world is shaped significantly by our beliefs. We believe that this, that or the other supermarket caters best to our needs, so that’s where we do our weekly shopping. We are convinced that cars from a particular country reflect its reputation, and that determines the makes we drive (or aspire to drive). We believe that this or that politician or party best serves our interests, so that is how we vote.

It would indeed be very hard to navigate the complex choices we need to make every day if we had to rationally weigh up every possibility, and if we couldn’t rely on our stable, persistent beliefs. We barely ever question them, and treat them as though they’re truths. So it’s not surprising that it is unusual for us to modify, let alone reverse our beliefs. For that to happen, something extraordinary is needed.

A surprising lunch

In the spring of this year, Greggs, a chain of sandwich shops in the UK known for decent, but unremarkable food like pasties and sausage rolls, took part in a food festival in Richmond, Southwest London. However, they disguised their identity, appearing as Gregory and Gregory instead. With offerings including Slow roasted tomato and feta pasta salad, and a vegan Mexican Bean wrap, they set out to present their new summer menu to a pretty posh bunch of visitors – undercover.


I can’t believe it’s Greggs – via youtube

And those visitors didn’t half change their mind according to this video. Now of course, it is a publicity campaign, with not even a pretence of scientific validity, and careful editing ensured the right visitors to their stall were shown (it’s called selection bias) – if they were not paid actors, that is. But it chimes with the idea that making people change their minds requires a stark confrontation with a different narrative. It takes a lot to shake up well-entrenched beliefs.

Or does it? How hard would it be to make you believe that purple was blue?

Fooled by prevalence

Recent research by Harvard psychologist David Levari and colleagues shines a remarkable light on the stability of our judgement. Behind the somewhat uncool title “Prevalence-induced concept change in human judgment” of their paper lie fascinating insights derived from seven studies, in which they investigate what happens when the relative occurrence of a particular stimulus is reduced.

In the first few experiments, they presented their subjects with a sequence of 1000 coloured dots, one at a time, chosen from a purple to blue continuum (shown in the image below). For each dot, participants had to indicate whether or not it was blue.


True blue? (image source)

The participants were subjected to two conditions. One group saw the dots picked randomly, such that every single one had 50% chance of being picked from the blue half of the continuum – this was the stable prevalence condition. The other group was subject to the decreasing prevalence condition. The dots from the blue half were chosen with reducing likelihood (50% in the first 200 presentations, then respectively 40%, 28%, and 16% in the next three sequences of 50, and finally just 8% for dots 351-1000).
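
The stimulus schedule just described can be sketched in a few lines. This simulates only the sequence of dots shown, using the phase lengths and percentages given above, not the participants’ responses:

```python
import random

# Decreasing-prevalence condition: (number of trials, probability that
# a dot is drawn from the blue half) for each phase of the 1000 dots.
schedule = [(200, 0.50), (50, 0.40), (50, 0.28), (50, 0.16), (650, 0.08)]

random.seed(42)  # arbitrary seed, for reproducibility
trials = []
for n, p_blue in schedule:
    trials += [random.random() < p_blue for _ in range(n)]

print(len(trials))                            # 1000 dots in total
print(sum(trials[:200]), sum(trials[-200:]))  # far fewer blue dots at the end
```

By the last 200 trials, only about 8% of the dots are genuinely from the blue half – yet, as described below, participants kept reporting far more of them.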

In the first group the decision whether a particular dot was blue or not did not change from the first 200 to the last 200 dots presented. But the second group were more likely to say a given dot was blue in the final 200 dots (where only 8% were actually from the blue side of the continuum), than in the first 200. When the actual blue dots were less prevalent, the participants saw more of them than there really were.


(image source)

The researchers then ran the same experiment again, but this time actually told the second group that the prevalence of blue dots would decrease. Yet again, even with participants knowing the blue dots would occur less and less, they still shifted their judgement. In further follow-up studies, they found the result replicated when they specifically instructed participants to remain consistent, and not be swayed by the prevalence – even with a monetary incentive, and even if the change in prevalence was abrupt and not gradual. When they increased the prevalence of blue dots, the shift happened in the other direction.

Consistent results, but maybe a touch artificial and contrived: it is rare for anyone to face such a task outside a psychology lab. So the researchers tried something more realistic: they showed their participants computer generated faces on a continuum from ‘very threatening’ to ‘not very threatening’. And they found the same phenomenon was happening. Presented with a decreasing prevalence of threatening faces, participants were more likely to identify a given face as a threat when it appeared in the final 200 than in the first 200 presentations.


(image source)

What about non-visual stimuli? One study also looked at whether participants would judge research proposals to be unethical – again chosen from a continuum. On the ‘ethically OK’ side, there were innocent ideas, such as “Participants will make a list of the cities they would most like to visit around the world, and write about what they would do in each one”. The middle contained ambiguous ones, for example “Participants will be given a plant and told that it is a natural remedy for itching. In reality, it will cause itching. Their reaction will be recorded”. At the ‘very unethical’ end were proposals like “Participants will be asked to lick a frozen piece of human faecal matter. Afterwards, they will be given mouthwash. The amount of mouthwash used will be measured.”  And here too, participants in the decreasing prevalence condition were more likely to reject ethically ambiguous proposals that appeared towards the end than at the beginning.

Pessimism rules

The researchers’ conclusion is that we tend to expand our concept of what constitutes blue, a threatening face, or an unethical proposal, as the prevalence of them decreases. What was previously seen as purple becomes blue, faces that were neutral earlier on become threatening, and originally ambiguous research becomes unethical.

One consequence the authors imagine is that people whose mission it is to reduce some social ill will fail to recognize the result of their efforts. As the original problems become less prevalent, situations that used to be perfectly fine will begin to take their place, as the definition widens. This may lead to frustration, and to the misallocation of resources to solving problems that no longer exist.

But these findings are a concern also for those of us who do not have such noble objectives. We already have a tendency to overestimate the frequency of dramatic events like violent crimes, as these feature so prominently in the (social) media. If, thanks to effective interventions, criminal violence becomes less commonplace, we will not necessarily feel safer. Instead we may start judging minor instances as evidence of persistent violent crime.

Perhaps this effect on the general public is even more significant than that on policy makers and implementers. We may not actually change our mind, but we certainly bend it in the face of changing prevalence. We risk seeing the world as darker and worse than it really is. And if this happens to us as individuals, it will probably also be reflected in public opinion. Max Roser and Mohamed Nagdy at Our World in Data devoted a fascinating post to optimism and pessimism. The research by Levari and colleagues provides some explanation of our collective pessimistic nature.

This pessimism inevitably shapes our decision-making. It may make us more fearful than we should be – avoiding risks we overestimate, and being receptive to those who play on those fears, whether they are trying to sell us insurance, or trying to get our vote.

The fact that, even as the world gets better, our mind would seem to bend so easily towards pessimism is itself, sadly, grounds for some pessimism.

Posted in Behavioural economics, Cognitive biases and fallacies, Psychology, Society