(featured image: Ian Kennedy CC BY)
A simple instrument to influence behaviour has a huge potential for unintended consequences
Imagine you are a dentist (unless you actually are one, of course). To a large extent, your income is paid for by people who have bad teeth, and this tends to be the consequence of inadequate dental hygiene. Would you advise your patients how to take better care of their pearly whites?
Standard economics would discourage you from doing so. The better they look after their teeth, the fewer root canal treatments, crowns and extractions, or even ordinary fillings they will need, and hence the less you will earn. You are, in effect, incentivized against encouraging people to take better care of their oral cavity.
I have quoted the economist Steven Landsburg more than once, and I am sure it will happen again. ‘Most of economics can be summarized in four words: “People respond to incentives.” The rest is commentary,’ he writes on the first page of The Armchair Economist. Despite everything you hear from behavioural economics, it is true: in many cases, a very effective way to influence someone’s behaviour is to pay them for doing something we want, or to fine them for doing something we don’t.
Incentives are all around us. Discounts – whether when buying a car or washing powder – are a form of payment to make us buy something we otherwise wouldn’t. To encourage us to obey the rules, there are penalties for parking our car where we shouldn’t (or for longer than we should), for dropping litter, for smoking where it is prohibited, and for numerous other kinds of transgressions.
Even a salary is an important incentive to get up in the morning and go to work. Yet that is all it does. It generally does not vary directly with our performance: we don’t get extra money for a day when we have worked especially hard, and neither do we receive less if we were to (purely hypothetically, naturally) slack a little.
For that, there exist other incentives – for example a bonus for achieving goals, or a commission on sales. And why indeed not? If an incentive can make us do something, surely a bigger incentive can make us do more of it, or do it better or quicker?
An incentive is almost always linked to a single, easily measured indicator of the performance of the individual, or of the value that they are delivering. A straightforward example is a salesperson’s commission. And yet, the deceptive simplicity of incentives may end up more like simplistic deceptiveness.
Many years ago, a client of mine supplying electronic consumer goods discovered that such a simple incentive scheme overlooked something rather important. Their sales representatives were rewarded for the total order value – fine at first sight. But they happily gave all customers the same 24-hour delivery conditions, regardless of whether it concerned a truckload of kit for a large retail chain, or a single clock radio for a countryside mom-and-pop store. You can just imagine the lorry pulling up in front of a shop, the driver walking to the back, operating the tail lift, opening the door, locating the item, securing the door again, restoring the tail lift, carrying the item into the shop, and dealing with the paperwork before finally setting off again to the next destination. It would be nothing short of a miracle if there were any profit left for the firm after the salesperson’s commission.
Incentivizing people to achieve goals or meet targets can likewise have unintended effects, if the bigger picture is not taken into account. Paying an employee to devote more attention to a given objective is surely aligned with the employer’s intention. But attention is a finite resource, and paying more attention to, say, timely achievement of milestones means less attention somewhere else – perhaps corners are being cut in quality assurance or in filing data in accordance with mandatory rules. There is a trade-off to be made – and that is not reflected in the incentive. As Goodhart’s Law says: “When a measure becomes a target, it ceases to be a good measure”.
Worse still, if it is specific targets that are the basis for incentives, people can be tempted to abuse the system. I wrote about the cobra effect before: in colonial times, Delhi experienced a snake plague, and the governor decided to incentivize the inhabitants to combat it by paying a bounty for each cobra skin that was brought in. Unfortunately for him, people started breeding cobras so they could collect their money without actually having to go and hunt. And when the bounty scheme was stopped, the surplus cobras were released, leaving the city with an even bigger cobra problem.
Do negative incentives fare better? They certainly do in the case of the plastic bag charge, which has been introduced in many countries over the last few years (I wrote about it here). The cost is comparatively insignificant (around half a percent of the total shopping cost), and the reduction in usage is disproportionately large.
Healthcare provides another example. In my adoptive country (the UK), a visit to the doctor is free of charge, while in my native country of Belgium it currently costs 5 euro (£4.60, $6), or just 1.50 euro for people in receipt of benefits. This minor payment is unlikely to prevent most people from seeking help from their GP when they need it, but it helps prevent the overuse of the service for minor ailments experienced in the UK. There, in recent years, it has sometimes become so difficult to get a GP appointment that many patients now go straight to the A&E department of their local hospital.
But disincentives do not always work as intended either. Many Belgian municipalities require that household waste is put in their own specific bin bags, which must be purchased at a price that includes the cost of collection and treatment. This is intended to encourage recycling and, more generally, to reduce the amount of trash. However, the bags are expensive, and the system also leads to increased fly tipping, or (slightly more civilized) abuse of public bins (I wrote about it here).
So what are the issues with incentives? We certainly respond to them, and alter our behaviour accordingly – but not necessarily in the anticipated manner, or with the anticipated results.
A first problem is that incentives are a form of micromanagement, zooming in on one behaviour that must be encouraged (or discouraged), and indeed on one consequential aspect of it. Even if the desired behavioural change is realized, it may have unwanted results (the elevated shipping costs that arise when all sales are encouraged as if they were equally valuable), or it may come at the expense of other important behaviour (e.g. compliance is sacrificed to meet a deadline).
A second problem is that incentives inevitably reduce the behaviour to a single measure. This allows enterprising, but perhaps not so scrupulous, people (like the Delhi cobra breeders) to appear to do the right thing, while in reality doing nothing of the kind, or indeed the opposite.
Yet perhaps the most important concern is that incentives interfere with our intrinsic sense of what is the right thing to do. Most of our behaviour takes place in the absence of explicit incentives, and we are quite capable of balancing multiple positive and negative motives to the best of our abilities. When one particular aspect is highlighted with a specific incentive, that balance is disturbed. This one thing then becomes much more important, and everything else much less so. Furthermore, incentives risk crowding out any motives connected to our values, and reframe our behaviour as economic transactions. We behave in a certain way because of the rewards or the penalties, rather than because it is the right thing to do.
Introducing incentives is easy. Doing so while avoiding unintended consequences is not.