Often when we discuss the replication crisis in psychology, the main focus is on what it means for the research community — how do research practices need to change, for instance, or which sub-disciplines are most affected? These are all important questions, of course. But there’s another that perhaps receives less attention: what do the general public think about the field of psychology when they hear that supposedly key findings are not reproducible?
So a new paper in Social Psychological and Personality Science should make for concerning reading. Across a series of studies involving a total of almost 1,400 participants, the researchers find that not only do low rates of reproducibility decrease public trust in the field, but it may also be tricky to build that trust back up again.
In their first study, Tobias Wingen at the University of Cologne and colleagues gave participants a description of the Reproducibility Project, a large 2015 study in which researchers attempted to replicate the results of 100 psychology findings. That project found that while 97% of the original results showed a statistically significant effect, only 36% of the replications did. But the participants in the new study were not told about these results — instead, they were asked to guess how many studies had successfully replicated. They also rated their trust in psychology, answering questions like “I trust the psychological science community to do what is right”.
On average, participants guessed that about 61 studies successfully replicated — much higher than the actual results of the project. But there was also a significant relationship between participants’ predictions and their trust in psychology: the lower their estimates of the number of successful replications, the lower their trust in the field.
Of course, these data are only correlational — but a subsequent study suggested that learning about poor reproducibility actually leads to lower trust. Participants again read descriptions of the Reproducibility Project, but this time they were told that a low, medium, or high number of the studies had replicated (39, 61 or 83 studies respectively). Those in the low reproducibility group subsequently rated their trust in psychology as significantly lower than those in the high reproducibility group.
Can this trust be regained? In further studies, the team didn’t have much luck on this front. In one, participants were given reasons for the low reproducibility rate: they were either told that it was due to researchers using questionable research practices like only publishing surprising results, or that it was because research is hard and small differences in studies can have a big impact on the findings. These participants didn’t rate their trust in psychology any differently from those who received no explanations.
Nor was trust any higher when participants read that research practices had improved since the original project that showed low reproducibility. In the final two studies, some participants were told about ways that the field has recently become more open and transparent. In one study they were even given the (pretend) results of a new project, which supposedly showed that 83% of recent papers could now be successfully replicated. But these participants still didn’t rate their trust in psychology any higher than those who only read about the original project.
It perhaps won’t come as much of a surprise that replication failures can erode trust in psychology. But the more worrying finding is that this trust seems hard to repair. The researchers are at pains to highlight that their study doesn’t necessarily mean there is no benefit in telling people the reasons behind poor reproducibility, or in explaining the ways that the field is becoming more transparent, as there could be small effects that their study didn’t pick up on. Still, they conclude, it also doesn’t provide any evidence that these strategies are effective.
Of course, reading a few sentences about research practices may not be sufficient to rebuild trust: perhaps more in-depth explanation and education could be useful. But on the other hand, most members of the public probably do get their psychology education from short snippets in news reports (and blogs!), similar to those used by the researchers.
So what’s the solution? We could just not tell people about failures to replicate — but as the authors rightly point out, “covering up low replicability is neither an ethical nor an effective way to handle the problem”. Instead, they suggest the first step may simply be to continue working on improving replicability in the field. Although it’s not yet clear whether this will have immediate effects on people’s perceptions, the researchers note that their study did show that the higher the rate of successful replication in the first place, the greater people’s trust. “Thus, if replicability is constantly high, public trust in psychology might rise.”