(featured image: kues1/Freepik)
Both doubt and confidence have benefits, but is there anything worthwhile in overconfidence?
It is the season of party conferences in Britain. Over the years I have lived here, my attitude towards them has evolved from interest to irritation, and on to indifference. What has been a constant throughout, though, is the overconfidence political leaders exhibit at their conference: not a hint of doubt that the proposed policies will be effective, serve everyone and lead to electoral victory.
The conundrum of persistent bad decision-making
Of course, overconfidence is not something that only politicians exhibit. We are all occasionally guilty of being too certain for our own good, discovering later on that the viewpoint we thought was unassailable, or the watertight course of action we undertook, was not quite as unassailable or watertight as we thought. Overconfidence is a cognitive bias – one of the most pernicious ones, argues Nobel laureate and éminence grise of the behavioural sciences Daniel Kahneman. It is the one he would eliminate first, given a magic wand. But he entertains no illusions: in the same article, he admits it “is built so deeply into the structure of the mind that you couldn’t change it without changing many other things”. One problem is that overconfidence is widely rewarded: given a choice, audiences generally prefer pundits and commentators who self-assuredly claim to know what is going on and what will happen over colleagues who express more nuanced viewpoints. And naturally the same holds for politicians: you get more votes by being 110% certain (and coming across that way) than by admitting that you’re not entirely sure, and that you could be wrong.
Nevertheless, doubt – in moderation at least – is a useful cognitive state, in which we have not (or not yet) decided between belief and disbelief. It stops us from jumping to conclusions, prevents us from picking the first answer to a question or the first solution to a problem that comes to mind, and encourages us to consider multiple potential options, so we can make better decisions. As we make up our mind, we develop confidence in our conclusion, and that triggers our own internal reward system – no audience needed! Research by neuroscientist Pascal Molenberghs and colleagues found that the more confident people are, the stronger the activity in brain areas associated with reward processing, like the striatum. And that is useful too: once we have a solution or an answer that we feel sufficiently confident about, the reward tells us we can move on.
The trouble is that this can tempt us to imagine we are more confident than we ought to be, just to experience a stronger reward. Ever more confidence can easily lead to overconfidence, and to poor decision-making. But if that is so, how come such a maladaptive characteristic has not evolved away, and is instead so ubiquitous? Of course, not all poor decisions are fatal, and it is quite possible to overconfidently survive to a ripe old age. But presumably, at some point we ought to realize that this overconfidence is serving us badly, and learn our lesson. So why don’t we?
Alice Soldà, an economist at Heidelberg University, and colleagues, investigated this apparent mystery. In particular, they focused on whether there are benefits to overconfidence that might outweigh the costs after all.
Central to their study was an experiment in which participants were paired up and individually answered multiple-choice general knowledge questions. Each correct answer earned money for a joint account, which they would be able to divide between them at the end of the experiment. After completing the questionnaire, they were asked to state how many answers they believed they had got correct, and to indicate on a scale of 0-100% how likely they thought it was that they had done better than their partner.
Next, the participants were given feedback on their actual performance, but in order to allow participants to develop overconfidence, some uncertainty was introduced. Those who had outperformed their partner received either a green signal with a probability of 75%, or a red signal with a probability of 25%. In the opposite case, this was reversed, with red more likely than green. After receiving this feedback, participants indicated once again the likelihood they had performed better than their partner.
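To see how much room that noisy signal leaves for self-serving beliefs, here is a quick Bayesian sketch (my own illustration, not part of the study): even a fully rational participant should update their belief only partially after one signal.

```python
def updated_belief(prior, signal):
    """Bayesian update of the belief 'I outperformed my partner', given
    the noisy feedback described above: a green signal appears with
    probability 0.75 if you really did better, and with probability 0.25
    if you didn't (and vice versa for red).
    Illustrative sketch only - not the authors' analysis code."""
    p_signal_if_better = 0.75 if signal == "green" else 0.25
    p_signal_if_worse = 1 - p_signal_if_better
    evidence = p_signal_if_better * prior + p_signal_if_worse * (1 - prior)
    return p_signal_if_better * prior / evidence

# Starting from a 50/50 prior, one green signal only justifies 75% certainty:
updated_belief(0.5, "green")  # -> 0.75
```

Any confidence beyond that 75% is exactly the overconfidence the design deliberately makes possible.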
Negotiating for a worse outcome… or is it?
Then the bargaining started. Initially, the participants were told that the money earned jointly had been split into two unequal parts – 70% and 30% of the total – and that their task was to agree with their partner who would receive which share. To this end, they had to write a message to their partner in which they justified their choice. If the two participants chose different parts (and hence immediately agreed who was the better performer), the negotiation was complete, and the money was distributed. If, however, they both chose the larger part (in none of the nearly 150 pairs did both participants choose the smaller part!), they received an additional three minutes to reach an agreement. During this time, they could communicate with each other through a chat facility (with one key condition: they could not reveal which colour signal they had received). If one of the participants switched from claiming the larger part to accepting the smaller part, again the negotiation was concluded and the money divided. In the absence of agreement by the end of this period, they were granted a further 30 seconds to settle the difference, but with every passing second, the amount in the account was reduced by 1/30 of the original total, so that after 30 seconds nothing was left. As soon as one participant switched to the smaller share, the clock stopped, and the remaining sum was divided 70/30.
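The shrinking-pie rule of that final stage can be sketched as follows; the function name and the $30 pot are my own illustrative choices, not the researchers' materials:

```python
def payoffs(total, concession_second):
    """Split of the joint account if one side concedes at
    `concession_second` into the final 30-second stage
    (0 = immediately, 30 = never).

    The pot shrinks by 1/30 of the total per second; whatever
    remains is split 70/30. Illustrative sketch of the rules
    described above, not the authors' code."""
    if concession_second >= 30:
        return (0.0, 0.0)  # no agreement: both go home empty-handed
    remaining = total * (1 - concession_second / 30)
    return (0.7 * remaining, 0.3 * remaining)  # (winner, conceder)

# A pair entering the final stage with $30 jointly earned:
payoffs(30, 0)    # concede at once: (21.0, 9.0)
payoffs(30, 15)   # half the pot burnt: (10.5, 4.5)
payoffs(30, 30)   # stalemate: (0.0, 0.0)
```

Every second of stubbornness costs both parties money, which is what makes holding out such a clean measure of (over)confidence.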
Just over 6% of the pairs reached an immediate agreement, around 36% agreed who would get the larger share in the second stage (14% in the last 10 seconds!), and nearly 43% did so in the last stage (33% in the first 10 seconds). Nearly 15% of the pairs did not reach an agreement and went home empty-handed (!).
Of course, the researchers knew precisely how each participant had actually performed, how they thought they had done, and how strongly they believed they had outperformed their partner, both before and after the feedback. Thus, they could establish the influence of (over)confidence on the monetary outcomes, mediated through the negotiation process. Importantly, the researchers considered the payoff not just as a percentage of the total amount of money earned, but also as a percentage of the amount actually left, taking into account the situations where the negotiation went against the clock.
This was revealing. While, on average, participants with the highest levels of confidence, who kept their nerve all the way to the last stage, collected less money (and caused their partner to collect less, too, as the clock ticked away), they did on the whole end up with a larger share of the actual prize than their partner.
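To make that trade-off concrete, here is a back-of-the-envelope sketch with hypothetical numbers of my own (the study reports averages, not this particular scenario):

```python
# A pair with $30 in the joint account enters the final 30-second
# stage, where the pot shrinks by 1/30 of the total per second
# until someone concedes. Numbers are illustrative, not the study's.
total = 30.0

# Option 1: concede at once - the full pot survives, but you take 30%.
concede_now = 0.3 * total            # $9.00, i.e. 30% of what is paid out

# Option 2: hold out 22.5 seconds until your partner yields - three
# quarters of the pot is burnt, but you take 70% of what is left.
remaining = total * (1 - 22.5 / 30)  # $7.50 left in the account
hold_out = 0.7 * remaining           # $5.25, i.e. 70% of what is paid out

# Absolutely worse off (5.25 < 9.00), relatively better off (70% > 30%):
# the pattern the study found among the most confident negotiators.
```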
They lost out absolutely – which supports the idea that overconfidence can lead to poor decisions – but they gained relatively. This can explain why overconfidence persists: we tend to value relative positions more than absolute ones. A telling paper by Sara Solnick and David Hemenway describes how over half the respondents in their study would prefer a child with an IQ of 110 who is more intelligent than the other kids, to a child with an IQ of 130 whose peers are smarter still. More than 50% said they’d prefer infrequent praise from their boss over more frequent praise, as long as they got more than their colleagues, and even that they’d rather earn $50,000 if their peers earned $25,000, than earn $100,000 if their peers took home $200,000.
Overconfidence, in this experiment, clearly comes with an absolute cost – at first sight, it leads to a poor decision. But if we are prepared to pay that cost to obtain the relative benefit of gaining more than someone else, then perhaps the persistence and ubiquity of overconfidence is not so mysterious. Of that, at least, I am 110% certain.