(Featured image: Matt Lemmon/Flickr CC BY SA 2.0)
When a renowned behavioural scientist gets embroiled in a case of fabricated data, there may be some lessons for us all
When a behavioural science paper is discovered to have been using fraudulent data, the field understandably experiences some mild but distinct tremors. If the co-author who was responsible for the data happens to be one of the field's most famous scientists, the tremors become a proper shockwave. What has been going on?
Data detectives at work
It concerns a study from 2012, which suggested that asking individuals to sign a declaration of honesty at the beginning of a self-report form (rather than at the end, as usual) makes them complete it more honestly, based on two lab experiments and a field experiment. In the lab studies, for example, students completed puzzles and could claim money according to how many they had solved, as indicated on a form. The setup made it appear possible for participants to overstate their performance without being caught (but in reality, the experimenters could compare their claimed results with their actual performance). In the field study, customers of an insurance company reported the actual odometer reading of their vehicle(s). In both cases, the information provided by the participants was more truthful (fewer puzzles claimed as solved, more miles reported as driven) in the treatment condition, i.e., when they had signed upfront.
On 17 August, Data Colada, a website devoted to critically evaluating behavioural science research, showed that the data from the insurance field experiment were almost certainly fabricated. (Do consult the post for a wonderful account of forensic data scrutiny.) The five authors of the study were asked to comment on the findings; four of them did, all agreeing with Data Colada's conclusions, while the first author reacted on Twitter. The bombshell was that the co-author responsible for sourcing the data from the insurance company was none other than Dan Ariely, very well-known both in academia and beyond thanks to popular books like Predictably Irrational and, ironically, The Honest Truth About Dishonesty.
An interesting twist is that the data in question were not posted publicly until 2020, alongside a paper by a team including the original five authors that failed to replicate the 2012 lab experiments and concluded that signing upfront does not decrease dishonesty. The earlier paper will now be retracted, and arguably the practical significance of the fraud is limited anyway, given that failure to replicate. Nevertheless, the affair offers some useful, more general insights – not just for academics, but for all of us.
The work, not the person
The reactions to the discovery were, as could be expected, varied. Alongside nuanced and constructive calls for a thorough investigation into what went wrong and why, so that similar occurrences could be prevented in future, there were less nuanced ones. Dan Ariely is not someone who shuns the limelight, and he is very popular with the members of the rapidly growing community of behavioural scientists and practitioners. That invites schadenfreude, and unsurprisingly, some reactions contained subtle hints of satisfaction that someone who some might see as a bit too popular for his own good got taken down a few notches. Others saw in it an opportunity to criticize the entire discipline of behavioural economics, which they regard as addicted to hype and sensational illustrations of supposed human irrationality. Neither of these is really helpful, and both are themselves examples of the kind of biases we all have to some extent. If we are unsympathetic to an individual or to a cause, we will tend to amplify negative information about them, and see it as confirmation of our prior opinion.
The identity of a person involved in questionable practices should not matter: our judgement of their actions should be independent of who they are. But this is easier said than done. We tend to be lenient towards people we feel affinity with, or who in some way belong to the same group as we do, and we tend to be much more critical of people whom, for whatever reason, we don't like. Such emotional connections to a person can cloud our judgement: we jump to conclusions (in favour or to the detriment of the individual concerned), and give credence to superficial speculation that fits our view. We are well advised not to become uncritical of people simply because they are held in high regard – in behavioural science, someone like Nobel laureate Daniel Kahneman – and we should likewise not uncritically dismiss all the work of people found guilty of scientific misconduct, like Brian Wansink (who had 18 of his papers retracted). We should judge the work, not the person.
People respond to incentives
One particular point that is often made in cases of impropriety in academia is that there are perverse incentives at work. Scientific journals rarely publish null results and all but demand positive ones, and to be a successful academic you need lots of publications. The temptation to massage the figures, to be less than diligent in scrutinizing the data, and sometimes indeed to fabricate data, is real. In addition, sensational results bestow status and fame on researchers, and that too can provide an incentive that influences behaviour. Andrew Gelman, a critical statistician, refers to the (Lance) Armstrong principle (after the disgraced American cyclist): "If you push people to promise more than they can deliver, they're motivated to cheat."
Given how incompetently the data were falsified, it is not very likely that Ariely himself did so; more probable is that it was someone at the insurance company who, for some reason, was unable to produce the agreed data (but I am speculating here, and at least one person argues differently). However, he did not – in his own words – "test the data for irregularities". Checking third-party data might have been perceived as a low-priority task that could be skipped, but neglecting it might also be motivated by a desire to avoid jeopardizing the prospect of a successful publication. The anticipation of success is a powerful incentive.
The trouble with a preferred truth
If analysing the data confirms what we already believe to be true, it is a very human thing to be less than critical, both of the data and of the analysis. Alongside confirmation bias, there are plenty of related tendencies that might encourage us to be not as thorough as we could or should be – wishful thinking, selection bias, optimism bias, escalation of commitment and more – all amplifying whatever incentives we may have to navigate towards a positive result, whether in academic research, or in our work or home lives.
But perhaps the most fundamental challenge might be that we have a horse in the race, that we have a preference for a particular result. If we are searching for the truth – and that is the case in science, but in many other endeavours in business and in our private lives too – we should not be concerned with what exactly the truth will turn out to be. We should, literally, not care about the outcome. All we should care about is that we learn the truth, whatever it is – that is where the value resides. If we become attached to a particular theory or claim, the corresponding emotion can make it very hard to remain dispassionate across the entire process – from formulating the problem or research question and designing the experiment or approach, to collecting and analysing the data and drawing conclusions.
We cannot pursue the truth if we have a preference for what the truth should be.