Hallucinations are a common symptom of schizophrenia and related disorders, but mentally well people experience them, too. In fact, research suggests that 6-7% of the general population hear voices that don’t exist. However, exactly what predisposes well people to experience them has not been clear. Now a major new study of 1,394 people from 46 different countries, led by Peter Moseley at Northumbria University, provides support for two hypotheses from earlier, smaller studies — namely, that a history of childhood trauma and a propensity to hear non-existent speech among background noise are both associated with experiencing hallucinations — but does not support three others.
“In terms of reproducibility, these results may be a cause for concern in hallucinations research (and cognitive and clinical psychology more broadly),” the team writes in their paper in Psychological Science. Still, by firming up a few ideas, the work does help to clarify which aspects of cognition and past experience are — and are not — linked to being more prone to hallucinations.
“The pen is mightier than the keyboard”… in other words, it’s better to take lecture notes with a pen and paper rather than a laptop. That was the hugely influential conclusion of a paper published in 2014 by Pam Mueller and Daniel Oppenheimer. The work was picked up by media around the world, and has received extensive academic attention; it’s been cited more than 1,100 times and, as the authors of a new paper (also in Psychological Science) point out, it often features in discussions among educators about whether or not to ban laptops from classrooms. However, when Heather Urry at Tufts University, US, and her colleagues ran a direct replication of that original study, their findings told a different story. And it’s one that the team’s additional mini meta-analysis of other directly comparable replications supported: when later quizzed on the contents of a talk, participants who’d taken notes with a pen and paper did no better than those who’d used a laptop.
“…Do not fear, for I am with you; do not be dismayed, for I am your God. I will strengthen you and help you.”
This passage, pulled from Isaiah 41.10, is just one example of the Bible’s many references to God’s power to protect. And this protective persona might affect you much more than you think. At least that’s what emerged in 2015, when researchers from Stanford University published a string of studies finding that people prompted to think of God made significantly riskier decisions — whether or not they were religious.
The scientists’ explanation, promptly picked up by the media, was that thinking of God makes risk-taking less intimidating because it primes us to expect divine protection. Recently, however, this narrative has not stood up to scrutiny. The first pre-registered replication of this study, published in the journal Psychological Science, suggests that the effect was probably no more than an exciting false positive.
Whether we’re learning a new language, prepping for a job interview or simply trying to remember what we went into the kitchen for, many of us are keen to cultivate a better memory. And often strategies that add an element of effort or difficulty can help: drawing things rather than writing them down, for example, or generating questions about study material rather than simply reading it.
So in 2018, there was much fanfare when a team from Australia’s RMIT University developed a difficult-to-read font, Sans Forgetica, that they said could boost memory through such a “desirable difficulty”.
One of the biggest political challenges of this era is getting powerful people to take the threat of climate change seriously. The most straightforward way to do that would be with bottom-up pressure: if the people who vote demand that their leaders take assertive action against climate change, then politicians will have no choice but to do so (at least if they want to get into office, or to stay there). The major challenge to this, in turn, has been the lingering influence of climate denialism: disbelief in the reality that humans are the cause of climate change, or in the seriousness of the problem.
What can be done to combat climate denialism? Back in 2011, the researchers Jonathon P. Schuldt, Sara H. Konrath, and Norbert Schwarz published an article in Public Opinion Quarterly which suggested one possible partial remedy: framing the issue a bit differently. They found that 75.0% of Americans expressed belief in “climate change,” but only 67.7% in “global warming.” It was Republicans driving this effect: among this more politically conservative subset of Americans, the difference was 60.2% versus 44.0%.
Those findings suggested that environmental campaigns and policy initiatives might do better if they refer to “climate change” rather than “global warming”, write Alistair Raymond Bryce Soutter and René Mõttus in a new paper in the Journal of Environmental Psychology. But while some follow-up studies had been conducted on this issue, with fairly mixed results, no one had yet carried out a direct, pre-registered replication. So Soutter and Mõttus attempted to both replicate the original result and expand it to two other countries: the United Kingdom and Australia. (This gave them a total sample size of 5,717, about double that of the original study.)
Do you often spend time with your friends in order to forget about personal problems? Do you think about your friends even when you’re not with them? Have you even gone as far as ignoring your family to spend time with your friends?
If you answered yes to these questions, you might fit the criteria for “offline friend addiction”, according to a new scale described in a preprint on PsyArXiv. Except, of course, that this notion is ridiculous. How can we be addicted to socialising, the fulfilment of one of our basic human needs?
Well, that’s pretty much the point of the new paper, written with tongue firmly in cheek. But behind it is a serious argument: although a scale for offline friend addiction is clearly absurd, there’s another, similar concept for which such scales have already been developed — social media addiction.
If you follow mainstream science coverage, you have likely heard by now that many scientists believe that the differences between liberals and conservatives aren’t just ideological, but biological or neurological. That is, these differences are driven by deep-seated features of our bodies and minds which exist prior to any sort of conscious evaluation of a given issue.
Lately, though, follow-up research has been poking some holes in this general theory. In November, for example, Emma Young wrote about findings which undermined past suggestions that conservatives are more readily disgusted than liberals. More broadly, as I wrote in 2018, there’s a burgeoning movement in social and political psychology to re-evaluate some of the strongest claims about liberal-conservative personality differences, with at least some evidence to suggest that the nature and magnitude of these differences have been overblown by shoddy or biased research.
Now, a new study set to appear in the Journal of Politics and available in preprint here suggests that another key claim about liberal-conservative differences may be less sturdy than it appears.
Often when we discuss the replication crisis in psychology, the main focus is on what it means for the research community — how do research practices need to change, for instance, or which sub-disciplines are most affected? These are all important questions, of course. But there’s another that perhaps receives less attention: what do the general public think about the field of psychology when they hear that supposedly key findings are not reproducible?
As most observers of psychological science recognise, the field is in the midst of a replication crisis. Multiple high-profile efforts to replicate past findings have turned up some dismal results — in the 2015 Open Science Collaboration project, published in Science, for example, just 36% of the evaluated studies showed statistically significant effects the second time around. The results of Many Labs 2, published last year, weren’t quite as bad, but were still pretty dismal: just 50% of studies replicated during that effort.
Some of these failed replications don’t come across as all that surprising, at least in retrospect, given the audacity of the original claims. For example, a study published in Science in 2012 claimed that subjects who looked at an image of The Thinker had, on average, a 20-point lower belief in God on a 100-point scale than those who looked at a supposedly less analytical statue of a discus thrower, leading to the study’s headline finding that “Analytic Thinking Promotes Religious Disbelief”. It’s an astonishing and unlikely result given how tenaciously most people cling to (non)belief — it defies common sense to think that simply looking at a statue could have such an effect. “In hindsight, our study was outright silly,” the lead author admitted to Vox after the study failed to replicate. Plenty of other psychological studies have made similarly bold claims.
In light of this, an interesting, obvious question is how much stock we should put into this sort of intuition: does it actually tell us something useful when a given psychological result seems unlikely on an intuitive level? After all, science is replete with real discoveries that seemed ridiculous at first glance.
To win a medal of any kind at the Olympic Games takes years of training, hard work and sacrifice. Standing on an Olympic podium is widely regarded as the pinnacle of an athlete’s career. Yet only one athlete can win gold, leaving the two runner-up medallists to ponder what might have been. Intriguingly, a seminal study from the 1992 Olympic Games suggested that this counterfactual thinking was especially painful for silver medallists, who appeared visibly less happy than bronze medallists. The researchers speculated that this may have been because of the different counterfactual thinking the two groups engaged in, with bronze medallists being happy that they didn’t come fourth while silver medallists felt sad that they didn’t win gold.
However, subsequent research based on the 2000 Olympic Games did not replicate this finding: this time silver medallists were found to be happier than bronze medallists. To further muddy the waters, a study from the 2004 Games was consistent with the seminal research, finding that straight after competition, gold and bronze medallists were more likely to smile than silver medallists, with these smiles being larger and more intense.
Now further insight into the psychology of coming second or third comes via Mark Allen, Sarah Knipler and Amy Chan of the University of Wollongong, who have released their findings based on the 2016 Olympic Games. These latest results, published in the Journal of Sports Sciences, again challenge that initial eye-grabbing finding that bronze medallists are happier than silver medallists, but they support the idea that the nature of counterfactual thinking differs depending on whether athletes come second or third.