In Part One, published yesterday, we reported the views of active research psychologists on the state of their field, as surveyed by Matt Motyl and his colleagues at the University of Illinois at Chicago. Researchers reported a cautious optimism: research practices hadn’t been as bad as feared, and are in any case improving.
But is their optimism warranted? After all, several high-profile replication projects have found that, more often than not, re-running previously successful studies produces only null results. Defenders of the state of psychology argue, however, that replications fail for many reasons, including flaws in the replication attempts and differences in samples, so the implications aren’t settled.
To get closer to the truth, Motyl’s team complemented their survey findings with a forensic analysis of published data, uncovering results that seem to bolster their optimistic position. In Part Two of our coverage, we look at these findings and why they’re already proving controversial.
The field of social psychology is reeling from a series of crises that call into question the everyday scientific practices of its researchers. The fuse was lit by statistician John Ioannidis in 2005, in a review that outlined why, thanks particularly to what are now termed “questionable research practices” (QRPs), over half of all published research in the social and medical sciences might be invalid. Kaboom. The blast shook a large swathe of science, but the fires continue to burn especially fiercely in social and personality psychology, fields that marshalled their response through a 2012 special issue of Perspectives on Psychological Science that brought these concerns fully out in the open, discussing replication failure, publication bias, and how to reshape incentives to improve the field. The fire flared up again in 2015 with the publication of Brian Nosek and the Open Science Collaboration’s high-profile attempt to replicate 100 studies in these fields, which succeeded in only 36 per cent of cases. Meanwhile, and to its credit, efforts to institute better safeguards, such as registered reports, have gathered pace.
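For the curious, the core of Ioannidis’s argument is simple arithmetic about how many “significant” findings we should expect to be true. Here is a minimal sketch in Python – the input numbers are illustrative assumptions of ours, not figures from his paper:

```python
# A minimal sketch of the positive-predictive-value arithmetic behind
# Ioannidis's claim; the inputs are illustrative assumptions, not his figures.

def ppv(prior_odds, power, alpha=0.05):
    """Chance that a statistically significant finding is a true effect.

    prior_odds: ratio of true to false hypotheses being tested
    power: probability a study detects a real effect
    alpha: false-positive rate of the significance test
    """
    true_positives = power * prior_odds   # real effects correctly detected
    false_positives = alpha               # null effects wrongly "detected"
    return true_positives / (true_positives + false_positives)

# If only 1 in 10 tested hypotheses is true and studies run at 35% power,
# under half of the significant results in the literature are real:
print(f"{ppv(prior_odds=0.1, power=0.35):.0%}")  # ~41%
```

Add publication bias and the flexible analyses covered by the QRP label, and the proportion of trustworthy findings falls further still – which is the heart of his case.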
So how bad did things get, and have they really improved? A new article, in pre-print at the Journal of Personality and Social Psychology, tries to tackle the issue from two angles: first by asking active researchers what they think of the past and present state of their field, and how they now go about conducting psychology experiments; and second by analysing features of published research to estimate the prevalence of questionable practices more objectively.
The paper comes from a large group of authors at the University of Illinois at Chicago. The work was conducted under the guidance of Linda Skitka, a distinguished social psychologist who helped create the journal Social Psychological and Personality Science and sits on the editorial boards of many more social psych journals, and was led by Matt Motyl, a social and personality psychologist who has published with Nosek in the past, including on the issue of improving scientific practice.
Psychology research is the air that we breathe at the Digest, so it’s crucial that we understand its quality. In this two-part series, we explore the issues raised in the University of Illinois at Chicago paper to see if we can make sense of the state of social psychology, beginning in this post with the findings from Motyl et al’s survey of approximately 1,200 social and personality psychologists, from graduate students to full professors, mainly based in the US, Europe and Australasia.
Psychology is overly dependent on student samples, but you might assume there is at least one advantage to comparing across student samples: that you can rule out the influence of complicating background factors, such as differences in average personality profile. In fact, writing in the Journal of Personality, a team of US researchers led by Katherine Corker at Kenyon College has challenged this assumption: their findings suggest that if you test a group of students at one university, it’s not safe to assume that their average personality profile will match that of a sample of students from a university elsewhere in the same country.
Biologists have their fruit flies and rats; psychologists have students. An overwhelming amount of behavioural science is conducted with young people at universities on the assumption that it’s safe to generalise from this species of human to people more generally. There are some common-sense reasons for thinking this might be a problem, and also some more specific issues we’ve documented before, such as the possibility that burnt-out students are skewing the findings.
Now a recent study in PLOS One shows that the ways students differ from the public vary depending on which country you’re in, making it extra complicated to figure out if and when it’s appropriate to extrapolate student-based findings to people as a whole.
The human mind has been so successful in transforming the material world that it is easy to forget that it too is subject to its own constraints. From biases in our judgment to the imperfection of our memory, psychology has done useful work mapping out many of these limits, yet when it comes to the human imagination, most of us still like to see it as something boundless. But new research in the journal Cognition, on the capacity of our visual imagination, suggests that we soon hit its limits.
If the courts wanted to know if a suspected sex offender was attracted to children, they could ask him or her, or they could ask experts to measure signs of the suspect’s sexual arousal while he or she looked at different images. But a devious suspect would surely lie about their interests, and they could distract themselves to cheat the physical test.
Brain scans offer an alternative strategy: research shows that when we look at images that we find sexually attractive, our brains show distinct patterns of activation. But of course, the same issues of cheating and deliberate distraction could apply.
Unless, that is, you could somehow prevent the suspect from knowing what images they were looking at, by using subliminal stimuli that can’t be consciously perceived. Then you could see how their brain responds to different types of image without the suspect even realising what they’d been shown.
This is the essence of a strategy tested in a new paper in Consciousness and Cognition. Martina Wernicke at the Asklepios Forensic Psychiatric Hospital of Göttingen and her colleagues have provided a partial proof of principle that it might one day be possible to use subliminally presented images in a brain scanner to provide a fraud-proof test of a person’s sexual interests. It’s a potentially important breakthrough for crime prevention – given that deviant sexual interest is one of the strongest predictors of future offences – but it also raises important ethical questions.
Most people who undertake psychotherapy seem to benefit from it. How do we know? Arguably, the most important evidence comes from meta-analyses that combine the results from many – sometimes hundreds – of randomised controlled trials. On this basis, it’s been estimated that psychotherapy is effective for about 80 per cent of people (meanwhile, between five and 10 per cent of clients may suffer adverse effects).
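Where does a figure like “80 per cent” come from? It is usually a translation of a pooled standardised effect size into plain language. The sketch below assumes the classic estimate of roughly d = 0.8 – an illustrative value of ours, and one that varies across meta-analyses – to show the arithmetic:

```python
# A minimal sketch of how a pooled effect size is commonly translated into
# an "effective for about 80 per cent of people" claim. d = 0.8 is an
# assumed, illustrative value, not a figure from the papers discussed here.
from statistics import NormalDist

d = 0.8  # assumed pooled effect size (standardised mean difference)

# Assuming normally distributed outcomes, this is the proportion of
# untreated controls that the average treated client ends up better off than:
print(f"{NormalDist().cdf(d):.0%}")  # ~79%
```

The headline statistic, in other words, is only as trustworthy as the meta-analyses that produce the effect size behind it – which is where the next finding bites.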
But now the more concerning news: a team of researchers led by Evangelos Evangelou at the University of Ioannina, Greece, has assessed the quality of 247 of these psychotherapy meta-analyses, and they report in Acta Psychiatrica Scandinavica that many of them have serious methodological shortcomings.
Coincidentally, a separate research group led by Brent Roberts at the University of Illinois at Urbana-Champaign has just published, in the Journal of Personality, some of the first observational data on how people’s personalities change after undertaking psychotherapy. In contrast to what’s been found in the clinical literature, they report that people who’ve been in therapy seem to show negative changes in personality and other psychological outcomes.
Racism and prejudice are sometimes blatant, but often manifest in subtle ways. The current emblem of these subtle slights is the “microaggression”, a concept that has generated a large programme of research and launched itself into the popular consciousness – prompting last month’s decision by Merriam-Webster to add it to their dictionary. However, a new review in Perspectives on Psychological Science by Scott Lilienfeld of Emory University argues that core empirical and conceptual questions about microaggressions remain unaddressed, meaning the struggle against them takes place on a confusing battlefield, one where it’s hard to tell friend from foe.
When a good doctor encounters research comparing the effectiveness of drugs A and B, she knows to beware the fact that drug B was created by the people paying the researchers’ salaries. Pharmaceutical industry funding can be complex, but the general principle of declaring financial conflicts of interest is now embedded in medical research culture. Unfortunately, research into psychological therapies doesn’t yet seem to have got its house in order in an equivalent way. That’s according to a new open-access article in the journal BMJ Open, which suggests that, while financial conflicts of interest are less of a risk in this field, researchers may be particularly vulnerable to non-financial biases, a problem that hasn’t been adequately acknowledged until now.
It would be very concerning if “girls as young as six years old believe that brilliance is a male trait”, as The Guardian reported last week, especially if “this view has consequences”, as was argued in The Atlantic. Both stories implied girls’ beliefs about gender could be part of the explanation for why relatively few women are found working in fields such as maths, physics, and philosophy. These news stories, widely shared on social media, were based on a new psychology paper by Lin Bian at the University of Illinois at Urbana-Champaign and colleagues, published in Science, entitled “Gender stereotypes about intellectual ability emerge early and influence children’s interests”. The paper reported four studies, which at first appear to have simple, clear-cut conclusions. But a closer look at the data reveals that the results are rather weak, and the researchers’ interpretation goes far beyond what their studies have shown.