(featured image credit: gaelx CC BY)
The relationship between charitable giving and nudging is an uneasy one
Charitable giving is a challenging phenomenon for neoclassical economics. What homo economicus, what rational, self-interested, utility-maximizing agent would willingly give money away? It could even be argued that the very fact that so many people make charitable donations is proof that the homo economicus is illusory.
But perhaps things are not so simple. Sure, giving money away appears to violate the principle of self-interest. Yet would someone who has just put a fiver into a collection box have been just as likely to drop it on the floor, or set it on fire? Not likely. Even though the material effect of the loss is identical, the different ways of disposing of a five-pound note are clearly not experienced in the same way.
Reasons for giving
There are several underlying reasons why people give money to charity. Donating visibly, whether in plain sight, by supplying your name when donating online, or in some other way, could be a form of virtue signalling: letting others know that you are a good person. Maybe you do so to build or maintain a reputation, or to signal your wealth.
Recent research by two economists, Felipe Montaño-Campos at the Universidad San Andres in Argentina and Ricardo Perez-Truglia at UCLA, explored another mode of signalling. They asked participants to complete a cognitive test, much like those used for graduate school admissions. Half of them (the meritocratic condition) were then awarded a sum of money based on their performance ($40 for the top 25%, $30 for those scoring in the 50–75% range, and so on), while the other half (the random condition) received similar sums, assigned at random.
They were then told they would be able to donate part (or all) of this money to a local charity. Both groups were further split in two: a public condition (in which participants would receive a list showing each participant’s name and the amount donated) and a private condition (in which they would receive the list of donations in anonymized form).
In the meritocratic condition, the public subgroup donated 57.14% of their gains, much more than the private subgroup (47.62%), and also a little more than the public subgroup in the random condition (56.85%). The authors conclude there was a tendency to signal intelligence through the magnitude of the donations.
Of course, we may also be generous because we feel it is the right thing to do (and in the process signal to ourselves that we are righteous). In 1990, economist James Andreoni coined the term warm glow for this.
So far so good: when we donate money, we actually get something we value in return, so we are not violating any economic principles. But then a new question arises: why do we give the specific amount we give? Do we wish to buy a specific amount of reputation or warm glow? Clearly, the total amount is limited by our overall discretionary budget (which is itself an elastic concept): few people would forgo essentials, or even non-essentials, in order to buy some warm glow or do some heavy signalling. We may well operate a mental account for charity, with a budget that limits how much we donate. But within those boundaries we make some kind of trade-off that satisfies our desires for signalling, warm glow, and doing good.
But could we be nudged to donate more than we do?
Donating more, and more, and more
For sure. The identifiable victim effect is one example. But behavioural design firm Ideas42 believes it can be done in a systematic way. In a report entitled Best of Intentions – Using Behavioral Design to Unlock Charitable Giving, they identify three dimensions along which this can be achieved. “Tapping into the generosity” of citizens helps them “increase the amount they donate each year”. Tools that “allow people to plan when and where to give” ensure that donations are more “aligned with their intentions”. And timely, relevant feedback can help establish “informed giving”, with the most impact.
They collaborated with a workplace donation platform on a variety of interventions. One example involved sending people a “year-end review” by email, offering a timely opportunity to reflect on their donations so far. The purpose was to prime donors’ philanthropic identities (“you’re doing well doing good”), make the total social activity more salient (“look how much has already been achieved with your generosity”), and establish a sense of urgency (“the year is running out”). In the group who received this email, 23% of people made an extra contribution (compared to 20.6% in the control). Also, among the smallest 10% of accounts, the amount contributed was 63% higher, at nearly $11,000. It seems nudging works.
Ideas42 also believes we should be nudged. A survey they conducted indicated that Americans think their neighbours should, on average, donate 6.1% of their income to charity. Yet statistics indicate that, on average, people donate just 3% of their income.
Many among us probably do indeed end up giving less to charity than we think we should if we really, consciously thought about it – much in the same way that we think we ought to snack less, or exercise more. But with charity donations, it’s hard for a third party to figure out how much that would be. (Arguably, it’s just as hard for us to know our own preferences – see this article.) Aggregating people’s estimates of what their neighbours should donate into a target is probably not the most robust approach.
This highlights a more general concern with nudging. The originators of the concept, Richard Thaler and Cass Sunstein, describe it in Nudge as an instrument of ‘libertarian paternalism’, requiring that nudges are not “forbidding any options or significantly changing […] economic incentives”, and that they “must be cheap to avoid”. Anyone who disagrees with the paternalistic choice can easily opt out of the nudge.
Conflict of interest
However, Pelle Guldborg Hansen, a behavioural scientist at the University of Roskilde in Denmark who has written extensively about nudging, has proposed a tighter definition, which adds a crucial clause, “in their [the nudgee’s] self-declared interests”. Why is this important?
By Thaler and Sunstein’s definition, nudges have a limited range of effectiveness. People with a strong preference either way will not be nudged by a mere manipulation of the choice architecture: if you really must have a donut, you will not be prevented from taking one simply because the fresh fruit sits at a more convenient spot. And if you already prefer an apple to a donut anyway, well, your life has just been made a little easier. That leaves people with a weak preference, some of whom may have a weak preference for donuts. Because it does not manifest itself strongly enough for them to reach for the less conveniently placed sweet, sweet delicacy, they pick a banana instead.
Is this in the subject’s interest? Is it OK to impose a particular norm (fruit is better than donuts) on all the diners with weak preferences in the cafeteria, and nudge some of them against their actual preference?
Nudging like this could be defended on two grounds. First, there is no such thing as a neutral choice architecture. The eventual choice of people with a weak preference will always be strongly determined by the prevailing choice architecture. Without nudging, some people who (weakly) prefer fruit may end up with a donut simply because it’s within easier reach, and so their welfare is harmed. With nudging, it’s people whose weak preference is for donuts whose welfare is harmed. One choice architecture is not inherently better than another, so nudging is no worse than not nudging. Second, all else being equal, a healthy population that is not overweight reduces the burden of healthcare on society. While the nudge may harm some individuals’ welfare, it serves the welfare of society as a whole.
That loss of welfare to those who prefer donuts and end up taking fruit would seem relatively small (they also have the option of eating a donut later on), and the societal benefit large. Yet the same does not necessarily apply where charitable donations are concerned.
In the absence of any indication (other than a spurious 6.1%-of-income figure) of how much an individual or a household would like to set aside for charitable donations, legitimate questions can be raised about nudging people to give more. Yes, in some cases, people may be donating less than they actually want, because they procrastinate, because they are distracted and forget, or because they don’t realize how much good their money does.
But nudging people to donate more is inherently no different from nudging people to, say, buy more soft drinks. We just don’t know whether this different choice is welfare-enhancing for the individual (though it is of course income-enhancing for the other party!).
And while the interventions in Ideas42’s report mostly leave a great deal of conscious agency with the subject, that is not so with all nudges. One example is Give More Tomorrow, an initiative for which Cass Sunstein appears to be a strong advocate. It is similar to Save More Tomorrow, a retirement planning scheme pioneered by Richard Thaler and Shlomo Benartzi, in which employees commit to channeling a percentage of all future salary increases into their pension pot. Here, instead, they decide to donate a percentage of these raises. Unlike money invested in a retirement fund, once it has been donated, it cannot be retrieved. Furthermore, inertia will continue to work against the welfare of those who have a weak preference to donate less.
Nudging should therefore be done with great care. Nudgers are generally ignorant of the preferences of the individuals in the target group, certainly when they are not explicitly stated. There will always be people for whom a planned nudge is welfare-reducing. At the very least, mindful of Pelle Guldborg Hansen’s proposition, nudgers should consider the self-declared interests of all individuals and justify whether the welfare enhancement for some compensates for the welfare reduction for others.
Otherwise, the libertarian nature of nudging may be little more than a thin veneer.