(featured image: PublicDomainPictures via Pixabay)
Much of what we think we know is built on the powdery sands of conjecture and assumptions
Imagine you are joining a new company. Together with 15 other newcomers (about whom you know nothing), you are invited to an induction course. On the first evening, before dinner, you will be able to spend half an hour chatting with just one of them. You receive a list with photos of each participant, from which you can either choose one, or indicate that you have no preference. What would you decide?
I conducted this exercise with a group of clients a while ago, using synthetic, photorealistic portrait images. There is little doubt that someone’s face provides no meaningful information about whether they share any interests with you, whether they would be entertaining company, or any other characteristic that would make the half hour of 1:1 conversation pleasant. On that basis, people would be expected to express no preference.
Yet, in my experiment, only about one in three of the respondents gave that answer. Some of the fictitious new colleagues seemed remarkably popular, with one even collecting nearly 40% of the votes. The preferences may not have been strong (that was not gauged), but the very fact that two in three participants indicated a preference in a rather inconsequential situation is food for thought.
A little knowledge is not going to stop us
We often know very little about people, but that doesn’t stop us inferring all manner of things from that limited knowledge. First impressions can dominate our judgement of someone. If we need an electrician, will we be more inclined to contact the one whose van we saw last week parked up in a nearby street, looking all spick and span, or the one whose vehicle was covered in such a thick layer of muck that a merciful passer-by had inscribed in it the exhortation to the owner “Clean me, please!” (with the last word underlined)?
Arguably there may be a connection between how much care a tradesperson shows for their vehicle and how well they do their job (although a super-diligent electrician might be so devoted to their customers that they have no time to wash their vehicle).
Yet our propensity to assume one characteristic from the presence (or absence) of another one just as easily links characteristics that are quite unrelated. A study from 1974 by psychologists David Landy and Harold Sigall found that people evaluated the quality of an essay more highly if the writer was good-looking than if she was unattractive, particularly if the actual “objective” quality of the writing was poor. This extrapolation of one positive characteristic to another one is known as the halo effect. We may do the opposite too, and for example assume that someone who speaks with a strong regional or foreign accent is less educated or intelligent than a person with a standard, neutral accent (for undesirable attributes this is referred to as the horn effect). Similarly, research by cognitive scientists Nikolaas Oosterhof and Alexander Todorov suggests that we infer characteristics like dominance and trustworthiness from other people’s facial traits. Experiments by psychologists Barry Schlenker and Mark Leary suggest that, in the absence of any performance information, we consider confidence as a sign of competence.
Our predisposition to infer one characteristic from another is not limited to people: we may judge a product, about which we know nothing other than its price or its provenance, as high in quality because it is more expensive than others, or made in Germany or Switzerland rather than in Vietnam. We assume a product is more ‘fresh’ if the dominant colour of the packaging is blue.
This urge to project whatever limited information we have, regardless of its relevance or reliability, to complete the picture of someone or something is quite remarkable. If an unknown object or person is shown in the presence of other objects or people that are known, we will even assume they share salient characteristics: a stranger in the company of people we like is seen as likeable, for example (this phenomenon is known as evaluative conditioning).
Why do we do this so frequently? In a way, it is inevitable: many characteristics, especially people’s traits, are hard, sometimes even almost impossible, to establish. So we look for tell-tale signals – a well-groomed person wearing expensive clothes or jewellery is probably wealthy, someone turning up at a job interview with a badly knotted tie and dirty fingernails is probably not the most perfectionist candidate, a sign stating that a shop has been a family business in the same place since 1965 is probably an indication that we can trust them not to screw their customers over.
But not all such signals are relevant or reliable, and yet we rely on them all the same. One likely reason is that we are uneasy with uncertainty. We don’t like the unknown. So, from just a simple photo, we will immediately form a view of whether the unknown person portrayed is intelligent, competent, trustworthy, or indeed good company for a half-hour conversation. From the clutter on someone’s desk we will conjecture that they are disorganized by nature. From the shiny paintwork and chrome of the car on the used car dealer’s forecourt, and the new-car smell inside, we will surmise that it is in very good condition.
That pursuit of certainty is in part self-serving. In our own eyes, not being certain is a weakness. So, our – potentially imagined – ability to correctly infer characteristics from what we observe confers competence and wisdom upon us: we can confidently tell from their face that a person is clever and diligent, and that this used car is truly a great bargain.
A bit of scrutiny is desirable
We often make all these inferences without giving them much thought – subconsciously, even. And because we are motivated to establish certainty, we are rarely critical about the inferences we make. Jan De Houwer (a psychologist at the University of Ghent) and colleagues developed a conceptual framework to describe and analyse such inferences, which may be able to help us. It refers to the characteristic we assume as the ‘target’ and the characteristic on which we base this assumption as the ‘source’, and it also recognizes that the source and target may belong to different objects (for example, when we infer that an old man must be rich because he is accompanied by an attractive, much younger woman). The resulting 2×2 model allows us to pinpoint the mechanics of our inference.
How did we arrive at our conclusion? Say we are judging the quality of a watch we plan to buy. Is our conclusion based on indicators that are actually predictive of quality, like the material of the casing or the glass (cell a)? Or are we inferring its quality from other characteristics, like its design, its price or eye-catching features (cell b)? Are we perhaps being influenced by the celebrity wearing it in an advert (cell d)? Understanding how we arrived at an inference makes it a lot clearer to explore why we made it, and whether it is valid.
Might an incorrect inference not easily be rectified once we obtain more accurate information? Not necessarily. Recent research by Duarte Gonçalves (an economist at University College London) and colleagues suggests that we do not fully disregard information when it is found to be incorrect. In other words, the original knowledge we acquired – in this case, the inferred characteristic – unfortunately continues to inform our beliefs. It would seem better, therefore, to avoid making wrong inferences in the first place.
Much of what we know, or think we know, we know through inference rather than through direct verification of the facts. Much of that knowledge is probably correct, but not quite all. And some of those incorrect inferences can lead us to costly mistakes – trusting, on the basis of their looks, someone we should not have trusted, or purchasing an expensive washing machine with a German-sounding name, wrongly assuming that it would therefore be reliable, for example.
Certainty may be hard to obtain, but it may be wise to at least scrutinize some of our more questionable inferences, even if that means that we remain uncertain.