(featured image: Mussi Katz/Flickr CC BY)
The bounds to our rationality are sometimes within our control and even self-imposed – an easy opportunity to improve our decisions?
How irrational are we really? Book titles like Predictably Irrational and The Upside of Irrationality would seem to suggest we are beyond redemption. Numerous scientific experiments appear to confirm that we are subject to dozens upon dozens of biases, which mess with our decisions. It is true that we don’t always make the best possible decisions, and it is true that sometimes these biases are implicated. But mostly, our decisions are pretty good, and when they’re not, it is not necessarily because we are irrational.
A good while before Behavioural Economics became a thing, Herbert Simon challenged the assumptions of neoclassical economics with two theories, which arguably provide better explanations for human decision-making in the wild than irrationality. Simon is often tagged as an economist (and he did win the Nobel Prize in that discipline in 1978), but his degree and his doctorate were in political science. He was a polymath with interests ranging from administrative behaviour, decision making and psychology to artificial intelligence, mathematics and logic. He saw real people as quite different from the stereotype of homo economicus – the self-interested, rational, optimizing being – but not necessarily because their imperfections are due to biases. Simon argued that, foremost, we are subject to bounded rationality, and that we satisfice rather than optimize.
Decisions in the real world
When it comes to decision making, our tendency to satisfice (the term is a combination of satisfy and suffice) is actually a feature, rather than a bug. Choosing the optimum option from all the available ones could take a lot of effort and time. But unless we are extremely picky, we can settle for ‘good enough’ much more quickly and easily. We trade perfection for scarce resources. In a sense, satisficing is optimizing, across not just the features of whatever it is we’re choosing, but also across the time and effort of making the choice.
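The contrast between the two strategies can be sketched in a few lines of code. This is only an illustrative toy (the "jam jar" scores and the 0.9 threshold are invented for the example, not from the text): the optimizer must examine every option, while the satisficer stops at the first one that clears a "good enough" bar.

```python
import random

def optimize(options, score):
    """Examine every option and return the best one - thorough but costly."""
    return max(options, key=score)

def satisfice(options, score, good_enough):
    """Return the first option whose score clears the threshold - fast,
    but only 'good enough', not guaranteed to be the best."""
    for option in options:
        if score(option) >= good_enough:
            return option
    return None  # nothing met the threshold

# Toy example: choosing among 1000 jars of jam by a made-up "taste" score.
random.seed(1)
jams = [random.random() for _ in range(1000)]

best = optimize(jams, lambda j: j)       # scans all 1000 jars
ok = satisfice(jams, lambda j: j, 0.9)   # stops at the first jar scoring >= 0.9
```

The satisficer trades a little quality for a lot of search effort saved, which is the sense in which satisficing is itself a kind of optimizing over time and effort.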
Bounded rationality, however, is a different affair. We don’t have access to all information, and we don’t have unlimited cognitive capabilities, but within these limits we can still be making rational decisions based on sound reasoning. Suboptimal decisions are then the result of these limits, not of faulty reasoning. Or is it not quite that simple?
If we lack information, that may be because it is unobtainable or unavailable to us, but also because we do not realize it exists, because we ignore it, or because we choose not to acquire it. We may refrain from effective reasoning about the decision, not because the reasoning is beyond our capability, but because we have underlying motives to, for example, follow a rule rather than weigh up the options. When our rationality is bounded in this way, we can do something about it (if we wish). By challenging ourselves we can test to what extent the limits to the facts we use, and to the way we use them, are imposed or self-imposed. Let us look at four ways of loosening the bounds of our rationality.
Down with vagueness
A first thing to do is to verify whether the purpose of our decision is clear and precise. Vague goals are a sign of lazy thinking and lead to poor decisions that cannot be evaluated properly. Yet vague goals are surprisingly common. If we go to the supermarket “to do the shopping” without a specific purpose, what we bring home will certainly be “shopping”, but it may not be quite what we need.
This applies on larger scales too. Governments implementing pandemic measures should be definite about what they are striving for. If it is unclear what the restrictions on hospitality, education, retail and socializing are meant to achieve, it is impossible to determine whether the measures are effective, and when the goal is met and they can be relaxed. Likewise, to establish who should be prioritized in the vaccination process, the aim of the immunization strategy must be clear. Vague goals too often mean arbitrary and incoherent decisions. (Feel free to judge for yourself how your own government is doing.)
Next, if it is clear what must be achieved (or avoided), watch out for vagueness about the sacrifices needed to achieve it. That is a particular risk when the goal is of great importance, tempting us to overemphasize it. Tell-tale signs are strong or absolute terms without nuance to describe the objective (“at any cost”, “unacceptable”), and elusiveness, fuzziness and lack of precision about the cost (in money, effort, time etc). If we are going to renovate our house, we may be seduced into thinking only about the features of the new bathroom suite and kitchen we want, and about the extra space knocking through that interior wall will create. But if we don’t also pay attention to the budget and the calendar, we cannot work out whether the benefits justify the costs and the disruption. Making that trade-off well is at the core of a good decision. Again, on a larger scale, governments too ought to consider what the financial and non-financial impact of their pandemic measures will be for citizens and taxpayers, and be able and willing to reveal and justify the trade-offs they make.
Forward and backward
Talking of impact: a third point of attention is to check for any blind spots in the reasoning leading to the decision. Have we considered all the consequences? Do we understand how, and to what extent, others will be affected by our decision? Will they benefit to the same extent as we do, and bear similar costs to us, or do they face a very different, and perhaps less favourable, cost-benefit ratio? Do we know who will gain or lose? And do we care? We might be very keen on spending a week travelling through the Burgundy region of France, from one vineyard to the next. But fun as it would be for us, perhaps it would be a bad choice for a family holiday if we have a nine-year-old and a teenager in the party. Searching for blind spots too is relevant on a national scale – let’s pick another pandemic measure as an example: support for self-employed people unable to work during lockdowns. A scheme that pays them 80% of their equivalent monthly income, averaged over the last three years, may be OK for the majority, who had stable earnings over that period. But woe unto you if you have not been in business for that long, or were on maternity or sick leave for part of it. If the purpose is to support a group of people, perhaps there are more important criteria than administrative simplicity.
Finally, decisions almost always involve an element of uncertainty about the future. When we make our choice, we need to speculate, and sometimes we get it wrong. Looking back at past decisions and comparing them with the counterfactual – how would a different outcome have come about? – can be instructive. What we should not do, though, is confuse what we know now, after the event, with what we could (or could not) have known back then. Instead, we ought to focus on counterfactual thought: how did we reason when we made the decision, and how else could we have reasoned? Did we follow a rule or a principle instead of weighing up options? Did we give too much weight to the expected benefits and not enough to the downsides (or vice versa)? Was there information available that we neglected? If such questions have ‘yes’ for an answer, they can be most enlightening. Exploring why we did what turns out to have been suboptimal can help us avoid being bounded in that way in the future.
None of these potential flaws in our decision making is inherently the result of biases and irrationality. They are themselves the consequences of choices we make – deliberately or unthinkingly.
It is within our power to choose differently, and unbound our rationality.