(featured image: duncan c/Flickr CC BY)
Better decision making may mean widening or narrowing horizons – depending on whether you are a human or a machine
Four years ago, in the small hours of 21st March 2017, the neighbourhood around the Dortmund-Scharnhorst station was shaken by a loud blast. Earlier, a 31-year-old man had decided to gain access to the contents of a ticket machine on the platform – money and unused travel permits. He had been drinking in a nearby bar, presumably to stiffen his resolve, before setting off for the halt in the north-eastern suburbs of the German city with a bag of gas canisters, the contents of which he was seen spraying into the machine. When he ignited the gas, the machine exploded, its metal front panel hitting his unprotected head and showering him with shrapnel from its innards. Another patron of the bar recognized the badly injured man and alerted the emergency services, but despite the paramedics’ resuscitation efforts, he died of his injuries.
This sad story was one of the submissions to the 2017 Darwin Awards – a tongue-in-cheek, albeit somewhat morbid, recognition for people who, by their own ill-considered actions, remove themselves from the human gene pool, either by perishing or by unintentionally sterilizing themselves. While the humour of the concept may be of questionable taste, there is, as the name implies, some substance behind it.
Wired for survival
Evolution by natural selection, the theory that was Charles Darwin’s brainchild, describes a relentless force that inevitably favours the individuals in any species that are best adapted to their environment. They will survive, prosper and – most importantly – be more successful in ensuring their genes are passed on to the next generation. Should traits undermining the chances of successful survival and procreation spread, the species itself would eventually become extinct. No gene associated with a propensity to blow up travel ticket machines (let alone to do so without adequate protection) has so far been identified, but the principle is obvious: eliminating maladaptive traits is beneficial for a species.
Humans may be a relatively new species on this planet (depending on where you start counting, we have been around for between about six million and 200,000 years). Of course, we carry genetic code going back to the very first life forms that emerged billions of years ago. But even the more recent development of human cognitive ability has been subject to at least ten thousand generations of natural selection fine-tuning our brain circuitry.
The fact that we are here today, surviving, prospering and procreating, is largely thanks to the way we think. The wiring in our brain allows us to react to the external world in ways that serve us well – so we can continue to survive, prosper and procreate – and suppresses impulses that would undermine those goals (like blowing up ticket machines).
Our brain wiring, however, also embodies many tendencies that have been hugely beneficial over our existence, and that are still adaptive in many, but not all, circumstances. Because we do not seem to have a systematic, built-in mechanism to distinguish the situations where our biases and heuristics help from those where they do not, they can lead to suboptimal decisions. Sometimes we don’t make the best choices: we might, for example, be better off being less risk averse than we are, or clinging less to the status quo than we do. Behavioural science, and in particular the study of heuristics and biases, has focused on this aspect of decision-making for several decades now.
But human brains have come up with a possible solution: artificial intelligence (AI) and machine learning (ML). It has been a while since humans developed devices that can outperform us by many orders of magnitude at simple cognitive tasks like manipulating numbers (as the etymology of the word ‘computer’ indicates). Current computers can do much more, though. They can learn in ways that are akin to how small (and large) humans learn: through the power of trial and error. But humans – because our cognitive capacities are limited, and because of the tendencies baked into our cognitive wiring – are inevitably selective in the trials we engage in. Machines are not inherently subject to these constraints, and this is what can make AI systems so powerful. It can, however, also lead to interesting and unexpected situations.
Too clever for their own good?
In 2013, Google officially announced Project Loon (later a subsidiary of Alphabet, Google’s parent company), aimed at providing stable internet access to remote areas around the world. This was to be done through high-altitude balloons, floating 18-25 km above the surface. These were not powered, but drifted with the wind, much like conventional hot-air balloons: in the stratosphere, air moves in layers with different speeds and directions, and the balloons, which had access to real-time data about each layer, could select an appropriate one by adjusting their altitude using a solar-powered pump.
Occasionally, some of the balloons made surprising moves, deviating from the expected course by zigzagging or even reversing. At first, the engineers overrode the system, but then they realized that, all by themselves, the balloons had learned a technique well known to sailors: tacking. Instead of plotting the shortest route and moving up and down between the wind layers to follow it, they took full advantage of the wind vector information and got to their destination more quickly. Salvatore Candido, Loon’s CTO, summarized it in a blogpost: “We quickly realized we’d been outsmarted when the first balloon allowed to fully execute this technique set a flight time record from Puerto Rico to Peru. I had never simultaneously felt smarter and dumber at the same time.”
The balloons from Project Loon illustrate the power of artificial intelligence and machine learning. Unrestrained by the tendencies that limit what humans would consider, or would even be able to consider, AI systems can learn and continually seek to further optimize whatever they were constructed to optimize, based on whatever reward function they have been programmed with. This can lead to surprising, novel solutions to problems large and small – solutions that are almost by definition unpredictable.
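The logic can be sketched in a toy model – every number, layer name and heuristic below is invented purely for illustration, and has nothing to do with Loon’s actual navigation software:

```python
# Toy model: a balloon must cover 400 km eastward. Three wind layers,
# bottom to top, with (invented) eastward speeds in km/h. The middle
# layer is nearly stagnant, so a plausible human heuristic - "never
# switch into a slower layer" - would rule out climbing through it.
LAYERS = ["low", "mid", "high"]
SPEED = {"low": 20, "mid": 5, "high": 80}
DISTANCE = 400     # km to the destination
TRANSIT = 10       # km that must be covered in each layer passed through
SWITCH_COST = 1.0  # hours lost per altitude change

def travel_time(path):
    """Hours to cover DISTANCE following `path`, a sequence of layers
    starting at 'low'; every layer except the last is only transited."""
    hours = SWITCH_COST * (len(path) - 1)
    remaining = DISTANCE
    for layer in path[:-1]:
        hours += TRANSIT / SPEED[layer]
        remaining -= TRANSIT
    return hours + remaining / SPEED[path[-1]]

# The machine has no preconceptions: it evaluates every feasible climb
# from the bottom layer and keeps whichever its reward (time) favours.
candidates = [tuple(LAYERS[: n + 1]) for n in range(len(LAYERS))]
machine_plan = min(candidates, key=travel_time)

# The human heuristic rejects the slow middle layer, so it stays low.
human_plan = ("low",)

print(machine_plan, travel_time(machine_plan))  # ('low', 'mid', 'high') 9.25
print(human_plan, travel_time(human_plan))      # ('low',) 20.0
```

The optimizer’s plan climbs through the near-stagnant middle layer – a move the heuristic rules out on sight – and arrives in 9.25 hours against the cautious plan’s 20. Nothing in the reward function encodes the human’s intuition, which is exactly why the machine can find the better route, and exactly why its choices can surprise us.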
But the biases and heuristics that restrain us humans, and thus give AI the edge, also form part of the instinctive mechanisms that help us survive, prosper and procreate. Over many thousands of generations, these have shaped our own approach to trial and error, discouraging us from engaging in trials that can lead to errors detrimental to surviving, prospering and procreating. Individuals who did engage in such trials were at a disadvantage compared to their more cautious peers, and were deselected by evolution. As a result, we, the long-term survivors, have an inner voice, much like Harry Enfield’s annoying know-it-all character, who warns us “you don’t wanna do that”.
Machines don’t. Unlike Homo sapiens sapiens, the ‘species’ of artificial intelligence has not had the benefit of a long succession of generations weeding out the more hare-brained individuals and their ways of thinking. Its trials often lead to superior outcomes that we would never in a million years have come up with. But by the same token, its errors could be the equivalent of the hapless attempt at blowing up a ticket machine – only at a much larger scale.
This leaves us with an interesting symmetry in our fundamental decision-making challenges. On the one hand, we are trying to unshackle our own, human decision-making, and overcome the maladaptive use of otherwise mostly adaptive heuristics and biases. On the other hand, we need to prevent AI from becoming a supercharged Darwin Award candidate, by restraining its raw capacity to think the unthinkable – and to act accordingly, blissfully unaware of the potentially unexpected and catastrophic consequences.
It is perhaps not surprising, and certainly quite apt, that we find ourselves facing one of the signature characteristics of economics: its inherent focus on trade-offs, embodied in the “on the one hand/on the other hand” thinking that so infuriated President Harry Truman.
(Behavioural) economics sure has got its work cut out to help both people and machines make the best choices.