Whether we’re learning a new language, prepping for a job interview or simply trying to remember what we went into the kitchen for, many of us are keen to cultivate a better memory. And often strategies that add an element of effort or difficulty can help: drawing things rather than writing them down, for example, or generating questions about study material rather than simply reading it.
So in 2018, there was much fanfare when a team from Australia’s RMIT University developed a difficult-to-read font, Sans Forgetica, that they said could boost memory through such a “desirable difficulty”.
One of the biggest political challenges of this era is getting powerful people to take the threat of climate change seriously. The most straightforward way to do that would be with bottom-up pressure: if the people who vote demand that their leaders take assertive action against climate change, then politicians will have no choice but to do so (at least if they want to get into office, or to stay there). The major challenge to this, in turn, has been the lingering influence of climate denialism: disbelief in the reality that humans are the cause of climate change, or in the seriousness of the problem.
What can be done to combat climate denialism? Back in 2011, the researchers Jonathon P. Schuldt, Sara H. Konrath, and Norbert Schwarz published an article in Public Opinion Quarterly which suggested one possible partial remedy: framing the issue a bit differently. They found that 75.0% of Americans expressed belief in “climate change,” but only 67.7% in “global warming.” It was Republicans driving this effect: among this more politically conservative subset of Americans, the difference was 60.2% versus 44.0%.
Those findings suggested that environmental campaigns and policy initiatives might do better if they refer to “climate change” rather than “global warming”, write Alistair Raymond Bryce Soutter and René Mõttus in a new paper in the Journal of Environmental Psychology. But while some follow-up studies had been conducted on this issue, with fairly mixed results, no one had yet carried out a direct, pre-registered replication. So Soutter and Mõttus attempted to both replicate the original result and expand it to two other countries: the United Kingdom and Australia. (This gave them a total sample size of 5,717, about double that of the original study.)
Do you often spend time with your friends in order to forget about personal problems? Do you think about your friends even when you’re not with them? Have you even gone so far as to ignore your family in order to spend time with your friends?
If you answered yes to these questions, you might fit the criteria for “offline friend addiction”, according to a new scale described in a preprint on PsyArXiv. Except, of course, that this notion is ridiculous. How can we be addicted to socialising, the fulfilment of one of our basic human needs?
Well, that’s pretty much the point of the new paper, written with tongue firmly in cheek. But behind it is a serious argument: although a scale for offline friend addiction is clearly absurd, there’s another, similar concept for which such scales have already been developed — social media addiction.
If you follow mainstream science coverage, you have likely heard by now that many scientists believe that the differences between liberals and conservatives aren’t just ideological, but biological or neurological. That is, these differences are driven by deeply-seated features of our bodies and minds which exist prior to any sort of conscious evaluation of a given issue.
Lately, though, follow-up research has been poking some holes in this general theory. In November, for example, Emma Young wrote about findings which undermined past suggestions that conservatives are more readily disgusted than liberals. More broadly, as I wrote in 2018, there’s a burgeoning movement in social and political psychology to re-evaluate some of the strongest claims about liberal-conservative personality differences, with at least some evidence to suggest that the nature and magnitude of these differences have been overblown by shoddy or biased research.
Now, a new study set to appear in the Journal of Politics, and available as a preprint, suggests that another key claim about liberal-conservative differences may be less sturdy than it appears.
Often when we discuss the replication crisis in psychology, the main focus is on what it means for the research community — how do research practices need to change, for instance, or which sub-disciplines are most affected? These are all important questions, of course. But there’s another that perhaps receives less attention: what do the general public think about the field of psychology when they hear that supposedly key findings are not reproducible?
As most observers of psychological science recognise, the field is in the midst of a replication crisis. Multiple high-profile efforts to replicate past findings have turned up some dismal results — in the Open Science Collaboration’s 2015 report in Science, for example, just 36% of the evaluated studies showed statistically significant effects the second time around. The results of Many Labs 2, published last year, weren’t quite as bad, but were still pretty dismal: just 50% of studies replicated during that effort.
Some of these failed replications don’t come across as all that surprising, at least in retrospect, given the audacity of the original claims. For example, a study published in Science in 2012 claimed that subjects who looked at an image of The Thinker had, on average, a 20-point lower belief in God on a 100-point scale than those who looked at a supposedly less analytical statue of a discus thrower, leading to the study’s headline finding that “Analytic Thinking Promotes Religious Disbelief.” It’s an astonishing and unlikely result given how tenaciously most people cling to (non)belief — it defies common sense to think that simply looking at a statue could have such an effect. “In hindsight, our study was outright silly,” the lead author admitted to Vox after the study failed to replicate. Plenty of other psychological studies have made similarly bold claims.
In light of this, an interesting, obvious question is how much stock we should put into this sort of intuition: does it actually tell us something useful when a given psychological result seems unlikely on an intuitive level? After all, science is replete with real discoveries that seemed ridiculous at first glance.
To win a medal of any kind at the Olympic Games takes years of training, hard work and sacrifice. Standing on an Olympic podium is widely regarded as the pinnacle of an athlete’s career. Nonetheless, only one athlete can win gold, leaving the two runner-up medallists to ponder what might have been. Intriguingly, a seminal study from the 1992 Olympic Games suggested that this counterfactual thinking was especially painful for silver medallists, who appeared visibly less happy than bronze medallists. The researchers speculated that this may have been because of the different counterfactual thinking the two groups engaged in, with bronze medallists being happy that they didn’t come fourth while silver medallists felt sad that they didn’t win gold.
However, subsequent research based on the 2000 Olympic Games did not replicate this finding: this time silver medallists were found to be happier than bronze medallists. To further muddy the waters, a study from the 2004 Games was consistent with the seminal research, finding that straight after competition, gold and bronze medallists were more likely to smile than silver medallists, with these smiles being larger and more intense.
Now further insight into the psychology of coming second or third comes via Mark Allen, Sarah Knipler and Amy Chan of the University of Wollongong, who have released their findings based on the 2016 Olympic Games. These latest results, published in the Journal of Sports Sciences, again challenge that initial eye-grabbing result suggesting that bronze medallists are happier than silver medallists, but they support the idea that the nature of counterfactual thinking differs depending on whether athletes come second or third.
There’s no simple explanation for why psychology has been hit so hard by the replication crisis – it’s the result of a complicated mix of professional incentives, questionable research practices, and other factors, including the sheer popularity of the sorts of sexy, counterintuitive findings that make for great TED Talk fodder.
But that might not be the entire story. Some have also posited a more sociological explanation: political bias. After all, psychology is overwhelmingly liberal. Estimates vary and depend on the methodology used to generate them, but among professional psychologists the ratio of liberals to conservatives is something like 14:1. A new PsyArXiv preprint first-authored by Diego Reinero at New York University – and involving an “adversarial collaboration” in which “two sets of authors were simultaneously testing the same question with different theoretical commitments” – has looked for evidence to support this explanation, and found that while liberal bias per se is not associated with research replicability, highly politically biased findings of either slant (liberal or conservative) are less robust.
Now researchers have reproduced the results of another highly-cited study. Back in 2002, Emily Pronin and colleagues first described the “bias blind spot”, the finding that people believe they are less biased in their judgments and behaviour than the general population – that is, they are “blind” to their own cognitive biases. And while that study kick-started a whole line of related research, no one had attempted to directly replicate the original experiments.
But in a preregistered preprint published recently to ResearchGate, Prasad Chandrashekar, Siu Kit Yeung and colleagues report successfully reproducing the original findings, first in a small group of Hong Kong undergraduates, and then in two larger samples of 303 and 621 Americans who completed online surveys.
As the list of failed replications continues to build, psychology’s reproducibility crisis is becoming harder to ignore. Now, in a new paper that seems likely to ruffle a few feathers, researchers suggest that even many apparently successful replications in neuroimaging research could be standing on shaky ground. As the paper’s title bluntly puts it, the way imaging results are currently analysed “allows presenting anything as a replicated finding.”
The provocative argument is put forward by YongWook Hong from Sungkyunkwan University in South Korea and colleagues, in a preprint posted recently to bioRxiv. The fundamental problem, say the researchers, is that scientists conducting neuroimaging research tend to make and test hypotheses with reference to large brain structures. Yet neuroimaging techniques, particularly functional magnetic resonance imaging (fMRI), gather data at a much more fine-grained resolution.
This means that strikingly different patterns of brain activity could produce what appears to be the same result. For example, one lab might find that a face recognition task activates the amygdala (a structure found on each side of the brain that’s involved in emotional processing). Later, another lab apparently replicates this finding, showing activation in the same structure during the same task. But the amygdala contains hundreds of individual “voxels”, the three-dimensional pixels that form the basic unit of fMRI data. So the second lab could have found activity in a completely different part of the amygdala, yet it would appear that they had replicated the original result.
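The logic of the problem can be made concrete with a toy sketch. The voxel counts and activation patterns below are invented purely for illustration (they are not from the paper), but they show how a region-level summary can declare two labs’ results a match even when their voxel-level findings don’t overlap at all:

```python
# Toy illustration of region-level vs voxel-level replication.
# Assume (hypothetically) the "amygdala" mask contains 300 voxels, indexed 0-299.
AMYGDALA_VOXELS = set(range(300))

# Lab 1 finds activation in voxels 0-49; Lab 2 in voxels 200-249.
lab1_active = set(range(0, 50))
lab2_active = set(range(200, 250))

def region_level_result(active_voxels, region=AMYGDALA_VOXELS):
    """Region-level summary: report the region as 'active' if any of its
    voxels show activation (a deliberate simplification of thresholded
    region-of-interest reporting)."""
    return bool(active_voxels & region)

# Both labs would report "amygdala activation"...
print(region_level_result(lab1_active))  # True
print(region_level_result(lab2_active))  # True

# ...even though not a single activated voxel is shared between them.
print(len(lab1_active & lab2_active))    # 0
```

At the region level the two results look identical; at the voxel level they are entirely different findings, which is the ambiguity the authors are warning about.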