(credit: Michael Warmby CC BY)
Should we avoid using heuristics, or instead prefer them over ‘rational’ methods?
Behavioural economics is the branch of the dismal science that is concerned with how real people behave in a way that deviates from the neoclassical assumptions of the so-called homo economicus. Not surprisingly, over the years it has on occasion run into conflicts with its neoclassical relative, although more recently the antagonism seems to have been turning into a rapprochement between the two sides. But that is not the only area of tension in which behavioural economics is involved.
In another argument we find, in the opponent’s corner, Gerd Gigerenzer, a German psychologist (as well as an accomplished banjo player). Throughout most of his career, he has been studying decision-making under uncertainty, and he is a long-standing critic of behavioural economics, or at least of some of it.
Gigerenzer is a vocal advocate of the use of heuristics in preference to conventional analytical and probabilistic calculations. He argues that simple rules of thumb are often (and demonstrably) a more parsimonious way of making ‘good enough’ decisions, and it is this that set (and kept) him on a collision course with, in particular, Daniel Kahneman and Amos Tversky (two pioneers in behavioural economics). But what are heuristics really, and why are they so controversial?
Less is more
The Encyclopaedia Britannica defines a problem-solving heuristic as “an informal, intuitive, speculative procedure that leads to a solution in some cases but not in others.” They are “cognitive shortcuts”, simple mechanisms that transform a given input to an output, without the baggage of a raft of additional data, or complicated calculations. They enable us quickly, and with modest effort, to find an answer or a solution that is most likely good enough. Heuristics are a key instrument in what Nobel laureate and polymath Herbert Simon called satisficing.
A powerful example Gigerenzer often refers to is the gaze heuristic for catching a ball. Most people are perfectly capable of performing this feat, despite the fact that it would, in principle, require simultaneously solving a quadratic equation and a linear equation – the trajectory of the ball, and that of the catcher (assuming a constant speed). This is obviously not what we do in practice. Instead, we look at the ball in the air, and try to keep the angle at which we see it constant. If it reduces, we must speed up, and if it increases, we have to slow down. Just picture it (and if that doesn’t convince you, dogs do the same, and they definitely do not solve quadratic equations).
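The control loop described above is simple enough to sketch in a few lines. This is a purely illustrative toy, not Gigerenzer's formulation; the function name and the size of the speed adjustment are my own assumptions.

```python
def gaze_step(angle_now, angle_before, speed, adjustment=0.5):
    """One step of the gaze heuristic: compare the current gaze angle
    to the previous one and adjust running speed accordingly.
    (Illustrative sketch only; the adjustment size is an assumption.)"""
    if angle_now < angle_before:
        # Angle shrinking: the ball is getting ahead of us -> speed up.
        return speed + adjustment
    elif angle_now > angle_before:
        # Angle growing: we are overrunning the ball -> slow down.
        return max(0.0, speed - adjustment)
    # Angle constant: we are on an intercept course.
    return speed
```

Note what is absent: no ball trajectory, no wind, no gravity. One observable variable, compared over time, is enough to guide the catcher.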
Gigerenzer cites numerous other examples of heuristics with more economic relevance. In his introduction to the 2016 Behavioral Economics Guide, he refers to the hiatus rule, used in the airline industry to distinguish active from inactive customers: If a customer has not made a purchase for nine months or longer, classify him/her as inactive, otherwise as active. Can this rule of thumb, using a single variable, outperform sophisticated probabilistic calculations, which use lots of salient user data (the total value purchased, the number of items bought, the number of orders placed, their gender and age, their postcode and so on)?
Yes, it seems: a 2008 study by Markus Wübben and Florian von Wangenheim at the Munich Business School compared the performance of a complex prediction model and that of the hiatus rule in three contexts: airlines, fashion, and CDs (back then more common than now). They found that, for CDs, both methods performed equally well (at 77% accuracy), but for the airline and fashion businesses, the heuristic outperformed the model (77% vs 74%, and 83% vs 75%, respectively). Less (effort) is indeed more (accuracy), it seems.
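The whole hiatus rule fits in a one-variable function. A minimal sketch, assuming a month can be approximated as 30 days (the function name and signature are mine, not the study's):

```python
from datetime import date

HIATUS_MONTHS = 9  # the nine-month threshold from the hiatus rule

def is_active(last_purchase: date, today: date,
              hiatus_months: int = HIATUS_MONTHS) -> bool:
    """The hiatus rule: a customer is 'active' if they have made a
    purchase within the last `hiatus_months` months, 'inactive' otherwise.
    A month is approximated as 30 days for simplicity."""
    months_since = (today - last_purchase).days / 30
    return months_since < hiatus_months
```

Compare that single input against the shopping list of variables the probabilistic model needs: total value purchased, number of items, number of orders, gender, age, postcode. The heuristic ignores all of it, and still wins in two of the three markets.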
Two kinds of logic
I doubt many behavioural economists would dispute this, though, and this is not really where the main bone of contention lies. For that, we need to turn to a classic Tversky and Kahneman paper from 1983, featuring a hypothetical young woman named Linda. The participants were given some facts about her (“She is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.”) They then had to rank particular statements about Linda according to the degree she fit the profile, including the following:
- Linda is active in the feminist movement.
- Linda is a bank teller.
- Linda is a bank teller and is active in the feminist movement.
The purpose of the experiment was to see how people use initial information as a heuristic to predict other facts. 85% of participants thought Linda fit the feminist profile the best, followed by the more specific category of feminist bank tellers. But they also judged that she was more likely to be a feminist bank teller than just a bank teller.
This is illogical (the cognitive error is known as the conjunction fallacy). It is like saying that it is more likely that I drive a red Ferrari, than that I drive a Ferrari.
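The underlying rule is that a conjunction can never be more probable than either of its conjuncts: P(A and B) ≤ P(A). A quick enumeration over a toy population makes this concrete (the headcounts below are invented purely for illustration):

```python
# A toy population of 100 people, tagged (job, outlook).
# The counts are invented; only the inequality matters.
population = (
    [("teller", "feminist")] * 5 +
    [("teller", "not_feminist")] * 15 +
    [("other", "feminist")] * 50 +
    [("other", "not_feminist")] * 30
)

n = len(population)
p_teller = sum(1 for job, _ in population if job == "teller") / n
p_teller_and_feminist = sum(
    1 for job, view in population if job == "teller" and view == "feminist"
) / n

# Every feminist bank teller is, by definition, also a bank teller,
# so the conjunction can never be the more probable statement.
assert p_teller_and_feminist <= p_teller
```

However you shuffle the numbers, the feminist bank tellers are a subset of the bank tellers, so the assertion can never fail. That is the sense in which the participants' ranking was illogical.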
Not so fast, say Gerd Gigerenzer and his then student Ralph Hertwig, in a paper from 1999. It all depends on the context. When people think about probability, they do not necessarily do so in a mathematical sense. Everyday communication uses different standards, including something called the relevance maxim: the person speaking is giving clear and relevant information, and avoids obscurity and ambiguity. In this case: if a person specifically asks about the conjunction, it is not illogical to assume there is a good reason – namely that it is more likely that Linda is a feminist bank teller.
Does Gigerenzer have a point? I am inclined to say so, but then so do Tversky and Kahneman. Presented with the three statements in isolation, few people would fall for the conjunction fallacy and think it is more likely that Linda is a feminist bank teller than just a bank teller. It is the information given upfront, used as a heuristic, which can mislead.
My impression is that the ongoing debate between the Gigerenzer and the Kahneman camps (Tversky died in 1996) is kept alive by purist positions and straw men. Heuristics can backfire if we use them where or when they don't apply, but they can also be a very useful instrument for quickly finding a good enough (or even a superior) solution to a problem.
Keeping our heuristics to ourselves
We should certainly disabuse ourselves of the idea that heuristics are inherently bad. We use them all the time, from making sure we wear decent clothes on days we go into the office, to predicting that drivers of certain German cars will not use their indicators.
It is even tempting to outsource heuristic problem solving to automated tools, especially if the execution is effortful or difficult. Satellite navigation is a good example. When we ask it for the quickest route from A to B, it does exactly what we used to do ourselves: it anticipates that we will, on average, make faster progress on main roads than on back streets, and even more so on motorways. It just does it much faster and more comprehensively than we can. But is the motorway always the best route?
Rory Sutherland makes a splendid counterargument. On the way home from the airport, he is happy to follow the satnav’s advice and take the motorway. On the way out however, when there is a plane to catch, the expected travel time may well be shorter via the fastest route, but if there is a major incident on the motorway you end up stuck, with no escape. So, on the way to the airport, Rory ignores the satnav’s recommendation and takes the A-road. His average speed may be lower, but the variance is smaller when taking the scenic route, because in case of congestion he can use the backroads to avoid the hold-up. Such sophisticated heuristics come naturally to us humans, but are beyond simplistic heuristic automation.
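The trade-off Rory is exploiting is between a lower mean and a lower variance. A toy simulation shows why, with a hard deadline, the "slower" route can be the safer bet. All the numbers here (journey times, jam probability, deadline) are invented for illustration:

```python
import random

random.seed(42)

def motorway_time():
    """Fast on average, but a 10% chance of a 90-minute jam (invented numbers)."""
    return 90 if random.random() < 0.10 else 45

def a_road_time():
    """Slower on average, but the journey time barely varies (invented numbers)."""
    return random.uniform(55, 65)

deadline = 75  # minutes until check-in closes
trials = 100_000
motorway_misses = sum(motorway_time() > deadline for _ in range(trials)) / trials
a_road_misses = sum(a_road_time() > deadline for _ in range(trials)) / trials

# The motorway's *average* journey is shorter, yet it misses the
# deadline roughly one trip in ten; the A-road never does.
```

The satnav optimises expected travel time; Rory optimises the probability of catching the plane. Same data, different objective, and only the human knows which one applies today.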
Amy Webb, a professor at NYU Stern School of Business and a quantitative futurist, was recently a guest on the Econtalk podcast, and she raised another, altogether more sinister risk of outsourcing heuristics to devices and systems. Amazon, in its relentless drive to automate the shit out of our homes, has introduced a $60 microwave oven, equipped with its Alexa voice recognition software. Sounds crazy – who needs a microwave you can speak to? Lazy Americans can’t even push the buttons to pop their popcorn, right?
But she points at Amazon’s true intent: selling us stuff, including popcorn. Our own heuristic for purchasing this delicacy is to look at the shelf in the pantry, and if the space devoted to it is (nearly) empty, we put popcorn on the shopping list – unless we forget. No longer, though: thanks to this amazing microwave oven, Amazon can know exactly when we are about to run out of popcorn, and make sure we have enough popcorn at all times.
But wait: Amazon can know much more about us. It can check our smartwatch to see how much exercise we’ve been doing, and it can work out how much popcorn we’ve been eating lately. And our fancy microwave may well judge that now is not the time to have more popcorn, and refuse to do the popping, in true “I’m sorry, Dave, I’m afraid I can’t do that” fashion.
The heuristics we use are imperfect. We cannot possibly work out ourselves what the expected travel time to the airport is, and we overlook the fact that we binged on popcorn the day before yesterday and forget to buy some more. But they are also sophisticated: we are not only interested in the estimated travel time, we also want to catch our plane. And we remain in control, and can override them if we wish.
Let’s keep on using heuristics, and refine them as we go along. Let’s understand better how they work, and more importantly when and where they work. But let us not outsource our own heuristic decision-making completely to supposedly intelligent systems that fail to recognize the complexity of what we really need, or that purport to know better than we do what is good for us.
Yes, sometimes we make the wrong call with our heuristics, but it is our call.