Four years ago we were the first to break the disconcerting news that a survey of thousands of US psychologists had found their use of “questionable research practices” was commonplace – practices such as failing to report all the measures they’d taken, or collecting more data after checking whether their results were significant.
The story went viral, further darkening the storm cloud hanging over the discipline at the time (it wasn’t long since one of social psychology’s most prolific professors had been found guilty of fraud). But now, in an ironic development, two leading psychologists have published a damning critique of the “questionable research practices” survey, raising concerns about the methods that were used and the way the findings were interpreted. “Claims about violations of the standards of good science deserve to be held to the high standards they endorse,” they write, “not the least in light of the damage that misleading inferences can cause.”
Klaus Fiedler at the University of Heidelberg and Norbert Schwarz at the University of Southern California in Los Angeles point out that many of the survey items were hopelessly vague and ambiguous. For example, the survey asked whether respondents had “failed to report all of a study’s dependent measures”. Fiedler and Schwarz say it would be unrealistic for any psychologist to always report every single thing they measure. Really, they argue, the question should have asked whether respondents had failed to report all of a study’s dependent measures that were relevant for a particular finding. The pair go on to highlight similar concerns with other items in the survey.
Another issue they highlight is that for a respondent to demonstrate 100 per cent innocence (in terms of their use of questionable research practices), they would need to answer “No” repeatedly to all 10 items on the survey. When people complete surveys, they tend to show an aversion to always giving the same answer, so a well-constructed survey should be designed so that scores on a given construct or characteristic are based on a mix of “Yes” and “No” answers.
In terms of interpreting the survey, Fiedler and Schwarz argue that a fundamental error was made by the authors of the survey and in media reports of its findings. The original survey asked if respondents “had ever” engaged in the questionable practices, which speaks only to the proportion of the sample who’d ever committed a given research “sin”, but the authors and media went beyond this to make assumptions about the prevalence of these behaviours. Fiedler and Schwarz liken this logical error to making inferences about rates of church attendance based on the proportion of people who have ever entered a church.
Fiedler and Schwarz go on to report the findings of their own “questionable research practices” survey, which they gave to 1138 members of the German Psychological Association. Their survey contained the same 10 items that were used in the original 2011 survey, but with the wording modified to be less ambiguous. They also included a measure of prevalence, asking their respondents not only if they’d ever committed the dubious practices but also in what proportion of their published work they had done so.
The new survey finds firstly that admission rates for ever having committed questionable practices were lower than in the 2011 survey – this could be because of the tightened wording, or because this was a sample of psychologists from a different culture. Secondly, and more importantly, Fiedler and Schwarz argue that once the information they collected about prevalence is factored in, the survey outcomes drop by an order of magnitude. For example, the new survey found that 47 per cent of respondents admitted to at least once claiming to have predicted an unexpected finding. Yet the average prevalence figure for this practice was just 10 per cent (i.e. respondents on average said they did this for 10 per cent of their published work).
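The gap between the two figures can be made concrete with a toy simulation. The sketch below uses entirely hypothetical micro-data (the per-researcher fractions are made up for illustration; Fiedler and Schwarz did not publish data in this form) to show how a 47 per cent “ever did it” admission rate can coexist with only about 10 per cent of the published literature being affected:

```python
import random

random.seed(0)

# Hypothetical micro-data: for each of 1,000 researchers, the fraction of
# their published papers in which they used a questionable practice.
# Roughly 53% never use it; the rest use it in a minority of their papers.
researchers = []
for _ in range(1000):
    if random.random() < 0.53:
        frac = 0.0                          # never committed the practice
    else:
        frac = random.uniform(0.05, 0.35)   # committed it occasionally
    researchers.append(frac)

# "Ever" admission rate: share of researchers with at least one such paper.
ever_rate = sum(f > 0 for f in researchers) / len(researchers)

# Prevalence: average share of papers affected, across all researchers.
prevalence = sum(researchers) / len(researchers)

print(f"Admitted at least once: {ever_rate:.0%}")   # roughly 47%
print(f"Average prevalence:     {prevalence:.0%}")  # roughly 10%
```

The same headline "admission rate" is compatible with very different amounts of the literature being tainted, which is exactly the inferential gap Fiedler and Schwarz highlight.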
Fiedler and Schwarz agree it is important to address issues of scientific misconduct, but they worry that the misinterpretation of a poorly executed survey risks spreading a harmful message – the idea that questionable research practices are rife, which could encourage more people to follow suit, thinking to themselves “everybody else is doing it, why shouldn’t I?”.
Fiedler, K., & Schwarz, N. (2015). Questionable research practices revisited. Social Psychological and Personality Science. DOI: 10.1177/1948550615612150