Questionable research practices, including testing increasing numbers of participants until a result is found, are the “steroids of scientific competition, artificially enhancing performance”. That’s according to Leslie John and her colleagues, who’ve found evidence that such practices are worryingly widespread among US psychologists. The results are currently in press at the journal Psychological Science and they arrive at a time when the psychological community is still reeling from the fraud of a leading social psychologist in the Netherlands. Psychology is not alone. Previous studies have raised similar concerns about the integrity of medical research.
John’s team quizzed 6,000 academic psychologists in the USA via an anonymous electronic survey about their use of 10 questionable research practices including: failing to report all dependent measures; collecting more data after checking if the results are significant; selectively reporting studies that “worked”; and falsifying data.
As well as declaring their own use of questionable research practices and rating how defensible those practices were, the participants were asked to estimate the proportion of other psychologists engaged in those practices, and the proportion of those psychologists who would likely admit to this in a survey.
For the first time in this context, the survey also incorporated an incentive for truth-telling. Some survey respondents were told, truthfully, that a larger charity donation would be made by the researchers if they answered honestly (based on a comparison of a participant’s self-confessed research practices, the average rate of confession, and averaged estimates of such practices by others). Just over two thousand psychologists completed the survey. Those who received the truth incentive admitted to more questionable practices than those who didn’t, suggesting the incentive worked.
Averaging across the psychologists’ reports of their own and others’ behaviour, the alarming results suggest that one in ten psychologists has falsified research data, while the majority has: selectively reported studies that “worked” (67 per cent), not reported all dependent measures (74 per cent), continued collecting data to reach a significant result (71 per cent), reported unexpected findings as expected (54 per cent), and excluded data post-hoc (58 per cent). Participants who admitted to more questionable practices tended to claim that they were more defensible. Thirty-five per cent of respondents said they had doubts about the integrity of their own research. Breaking the results down by sub-discipline, relatively higher rates of questionable practice were found among cognitive, neuroscience and social psychologists, with fewer transgressions among clinical psychologists.
John and her colleagues said that many of the iffy methods they’d investigated were in a “grey-zone” of acceptable practice. “The inherent ambiguity in the defensibility of research practices may lead researchers to, however inadvertently, use this ambiguity to delude themselves that their own dubious research practices are ‘defensible’.” It’s revealing that a follow-up survey that asked psychologists about the defensibility of the questionable practices, but without asking about their own engagement in those practices, led to far lower defensibility ratings.
John’s team think the findings of their survey could help explain the “decline effect” in psychology and other sciences – that is, the tendency for effect sizes to decline with replications of previous results. Perhaps this is because the original, large effect size was obtained via questionable practices.
The current study also complements a recent paper published in Psychological Science by Joseph Simmons and colleagues that used simulations and a real experiment to show how toying with dependent variables, sample sizes and other factors (the kind of practices explored in the current study) can massively increase the risk of a false-positive finding – that is, claiming a positive effect where there is none.
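To see why “collecting more data after checking if the results are significant” inflates false positives, here is a minimal simulation sketch (not the published paper’s actual code; the sample sizes, step size and test are illustrative assumptions). It compares a fixed-sample test against optional stopping, where the researcher peeks at the data after every batch of participants and stops as soon as p < .05 – even though every dataset is drawn from a null effect:

```python
import math
import random

random.seed(42)

def significant(data):
    # Two-sided one-sample z-test against mu = 0 with known sigma = 1;
    # reject at alpha = .05 when |z| > 1.96.
    n = len(data)
    z = (sum(data) / n) * math.sqrt(n)
    return abs(z) > 1.96

def run_fixed(n=50):
    # Decide the sample size in advance, test once.
    return significant([random.gauss(0, 1) for _ in range(n)])

def run_optional_stopping(start=20, step=10, max_n=50):
    # Peek after every batch; stop and "publish" as soon as p < .05.
    data = [random.gauss(0, 1) for _ in range(start)]
    while True:
        if significant(data):
            return True
        if len(data) >= max_n:
            return False
        data.extend(random.gauss(0, 1) for _ in range(step))

trials = 20_000
fixed_rate = sum(run_fixed() for _ in range(trials)) / trials
peek_rate = sum(run_optional_stopping() for _ in range(trials)) / trials
print(f"fixed-N false-positive rate:      {fixed_rate:.3f}")  # near nominal .05
print(f"optional-stopping false positives: {peek_rate:.3f}")  # clearly inflated
```

Because each interim look is a fresh chance to cross the .05 threshold on the same accumulating data, even this modest scheme (four looks between n = 20 and n = 50) roughly doubles the nominal error rate – the same mechanism Simmons and colleagues demonstrated.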
“[Questionable research practices] … threaten research integrity and produce unrealistically elegant results that may be difficult to match without engaging in such practices oneself,” John and her colleagues concluded. “This can lead to a ‘race to the bottom’, with questionable research begetting even more questionable research.”
Leslie John, George Loewenstein, and Drazen Prelec (In Press). Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychological Science
Pulled from the comments: Psychfiledrawer is a repository for non-replications of published results.
Post written by Christian Jarrett for the BPS Research Digest.