As the reality of the coronavirus pandemic set in back in March, we looked at the work of psychologists attempting to understand how the crisis is affecting us and to inform our response to it. A few months later, hundreds of studies have been conducted or are in progress, examining everything from the spread of conspiracy theories to the characteristics that make people more likely to obey lockdown measures.
However, some researchers have raised the alarm. They’re worried that many of these rapid new studies are falling prey to methodological issues which could lead to false results and misleading advice. Of course, these aren’t new problems: the pandemic comes at the end of a decade in which the field’s methodological crises have been thrust firmly into the spotlight. But is the coronavirus pandemic causing researchers to fall back on bad habits — or could it lead to positive change for the field?
A methodological crisis…
The past decade has been a turbulent one for psychology. Researchers have come to realise that a lot of psychological research rests on rather shaky foundations. A pivotal 2015 study, for instance, attempted to replicate the findings of 100 psychology studies published in three influential journals, finding a significant effect for just 36 of the 97 studies that had originally found a positive outcome. Other replication attempts have cast doubt on well-known findings that appear in many introductory textbooks.
And it’s not all about reproducibility: even when findings do hold up, they don’t necessarily generalise beyond the narrow group in which they were found. Most work focuses on participants who are Western, Educated and from Industrialised, Rich and Democratic countries — an issue that has become known as psychology’s WEIRD problem.
Of course, none of this will be news if you follow psychology research (or read this blog). And things have undoubtedly improved. Researchers are more aware than ever of the underlying causes of these problems. Increasingly, they are pre-registering their studies so that their methods and hypotheses are out in the open before they even begin collecting their data. Replication studies abound, weeding out those findings that fail to replicate. Large collaborations are popping up to study psychological phenomena with huge numbers of participants across multiple countries.
… meets a health crisis
Enter the COVID-19 pandemic. On the surface, psychology seems like it should have a lot to contribute in a crisis whose management relies on getting people to act in certain ways. But many researchers point out that the methodological issues that have come to the fore in the past ten years are all the more serious when it comes to an actual health emergency.
Some critics say that evidence from past psychological research is simply too flawed or opaque to inform decisions that involve life-or-death situations — an argument perhaps best exemplified by a recent preprint from Hans IJzerman from Université Grenoble Alpes and colleagues, which questions whether psychology is really “crisis ready”. The team explicitly call out the lack of generalisability and reproducibility in past work. How can we be sure that studies based on small, “WEIRD” pools of participants in very specific circumstances apply to a broad population in the midst of a crisis? Andrew Przybylski, Director of Research at the Oxford Internet Institute and co-first-author on that preprint, likens the situation to “a high stakes version of Groundhog Day”. Just because we’re trying to figure out how to respond to a crisis doesn’t mean that all of those issues that have come to light in the past ten years have just gone away, he says.
Of course, others have expressed more optimism about the role of psychology. This preprint was partly a reaction to a review published in Nature Human Behaviour, in which Jay Van Bavel and colleagues outlined the ways that psychology could support the response to the coronavirus pandemic. While acknowledging that evidence is limited, the team make a number of suggestions based on past work, such as taking into account that people have an “optimism bias”, believing bad things are more likely to happen to others than themselves. And over at The Psychologist, you can read dozens of perspectives from psychologists about how the field can help right now. Still, the debate about whether or not the field is “crisis ready” continues.
Letting our guard down
And for many, that sense of repeating past mistakes also permeates much of the rapid research that psychologists have produced since the crisis began. Writing at The 100% CI in late March, Anne Scheel from Eindhoven University of Technology expressed worries that new studies attempting to understand the pandemic are being rushed out with major flaws: they may have small samples and so be statistically underpowered, for instance. And there’s still the question of generalisability. Yes, these studies are being conducted in the context of the pandemic, so are at least more relevant to the current situation. But they’re still conducted in an artificial environment — usually online — and often consist of surveys that may not tap into how people think or behave in their daily life. And that WEIRD problem is not going anywhere.
In other words, in “crisis mode”, researchers may be falling for the same old pitfalls that psychology has been trying to move on from in the past decade. “[I]t feels as if we’ve put our guard down rather than up,” Scheel writes. Three months after typing those words, does she feel like things have improved? “My long-story-short answer would be: I’ve rarely felt so grimly vindicated,” says Scheel.
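To put the “underpowered” worry in concrete terms, here is a quick back-of-the-envelope calculation (ours, not Scheel’s) of how many participants a study needs to reliably detect the kinds of effects psychology typically deals in. It uses the standard normal-approximation formula for a two-group comparison; the function name and example effect sizes are purely illustrative.

```python
import math
from statistics import NormalDist  # standard library, Python 3.8+


def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants needed per group to detect a given
    standardised effect size (Cohen's d) in a two-sided, two-group test,
    using the normal approximation to statistical power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)


# A "small" effect (d = 0.2) needs hundreds of participants per group —
# far more than many rapid online surveys recruit.
print(n_per_group(0.2))  # 393 per group under this approximation
print(n_per_group(0.5))  # a "medium" effect needs only 63 per group
```

The asymmetry is the point: the smaller the true effect, the more dramatically the required sample grows, which is why rushed studies with a few dozen participants are so vulnerable to false or inflated results.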
Meanwhile, new scales to tap into attitudes and behaviours related to coronavirus have cropped up — but here, too, many researchers have expressed scepticism about the assumptions and methodologies used to develop them. Take the “Fear of COVID-19 Scale”, for instance, a seven-item scale that has already been translated into many languages and which asks questions like “I cannot sleep because I am worried about getting coronavirus-19”. The implication is that higher scores on the scale are bad; indeed, the authors write that the scale will “be useful in providing valuable information on fear of COVID-19 so as to facilitate public health initiatives on allaying public’s fears”.
But another group found that people who scored higher on the scale were more likely to practise positive public health behaviours like social distancing. They suggest that rather than measuring some pathological “fear”, the scale may actually be tapping into adaptive negative emotions that help us respond to dangerous situations. Again, concerns about the validity of measures aren’t new (in fact, the same team recently made a similar argument that scales of “social media addiction” are really tapping into normal, rather than pathological, social behaviour, as we reported in April). But these issues are arguably more problematic in a crisis situation.
None of this is to imply there aren’t many psychologists doing useful research right now — indeed, you can read about some of the important work being done in areas like mental health in our previous post. However, it’s clear that the crisis has highlighted — and potentially exacerbated — many of the field’s existing problems and limitations.
A way forward
But there have also been a number of suggestions for how to improve the research response. With calls for data sharing and collaboration, and the increased use of open science practices like Registered Reports, there are some bright spots, says Przybylski.
The preprint that claimed psychology is not crisis-ready had one unlikely proposal: draw from rocket science. NASA uses a system of “technology readiness levels” to determine whether a new piece of technology is ready to be deployed. The first level indicates that researchers have reliably observed some principle that could help develop a technology; at later levels they have tested a piece of technology in an appropriate environment; and at the highest level they have successfully used the complete system in a mission.
The team writes that most psychological research findings haven’t even passed that first level: it’s still unclear whether many of the effects researchers have found are reliable, let alone whether they can be used in real-world interventions. So they suggest a psychological equivalent of NASA’s framework, called “Evidence Readiness Levels”, to guide the development and implementation of psychological research. Interestingly, they’re not the only ones who have suggested a NASA-inspired rating system. Another preprint from Kai Ruggeri and colleagues (including NASA chief scientist James Green) proposes a similar rating system that could allow decision makers to quickly assess the quality of evidence.
Others have argued that the actual infrastructure by which psychologists communicate and share information needs to be adapted to better suit the crisis. Along with several colleagues, Ulrike Hahn at Birkbeck, University of London has set up an initiative that aims to “reconfigure behavioural science for crisis knowledge management”. It’s a kind of meta-science project, says Hahn: “In a sense [it’s] trying to use extant tools to do something like build what the internet would be like, if the internet was still run by scientists, for scientists”. To that end, the team have set up subreddits where behavioural scientists can talk to each other and to policy makers and journalists, as well as a growing database of resources on psychology and the coronavirus. Although the project is still in a proof-of-concept stage, the team hopes that these sorts of forums can facilitate the kind of speedy, transparent discussion and early evaluation of ideas and results that is necessary in a crisis, while also making information readily available to policy makers and the public.
Researchers have also highlighted the urgent need for more large-scale, collaborative studies. In an April paper in Lancet Psychiatry, for instance, a multidisciplinary team outlined priorities for mental health research during the pandemic. Alongside various recommendations, the authors call for work to be conducted at scale with multiple research groups and networks, warning against “the current uncoordinated approach with a plethora of underpowered studies and surveys”. And at least some funding bodies and research councils are beginning to recognise the need to consolidate efforts in this way, notes Przybylski, who was also a co-author on this paper. Having large consortiums working together is “the kind of thing that I hope will eventually move psychology into the domain of ‘real science’ along with physics and chemistry,” he says.
The idea that improvements in our response to the pandemic could also lead to improvements in psychology more generally underlies many of these suggestions. Przybylski says that the notion of evidence readiness levels, for instance, “has been implicit in a lot of our thinking for a long time”, well before coronavirus emerged. Similarly, infrastructure that facilitates communication between researchers and builds an online community is always going to be helpful, says Hahn. “That kind of ‘ideal internet for science’ won’t stop being useful once the crisis has passed”.
Of course, whether or not the crisis does end up leading to long-term change remains to be seen. Some studies have already been published following rapid review, but most of the larger scale research will only hit the journals months or years down the line. Will we then look back at the pandemic as a time when psychological research held a strong line of defence and response, or when we let our guard down and allowed poor practices to spread?