Psychology is overly dependent on student samples, but you might assume that comparing one student sample with another at least lets you rule out the influence of complicating background factors, such as differences in average personality profile. In fact, writing in the Journal of Personality, a team of US researchers led by Katherine Corker at Kenyon College has challenged this assumption: their findings suggest that if you test a group of students at one university, it’s not safe to assume that their average personality profile will match that of a sample of students from a university elsewhere in the same country.
Biologists have their fruit flies and rats; psychologists have students. An overwhelming amount of behavioural science is conducted with young people at universities on the assumption that it’s safe to generalise from this species of human to people more generally. There are some common-sense reasons for thinking this might be a problem, and also some more specific issues, which we’ve documented before, such as that burnt-out students could be skewing the findings.
Now a recent study in PLOS One shows that the ways students differ from the wider public vary depending on which country you’re in, meaning it’s extra complicated to figure out if and when it’s appropriate to extrapolate student-based findings to people as a whole.
By Alex Fradera
The human mind has been so successful in transforming the material world that it is easy to forget that the mind itself is subject to constraints. From biases in our judgment to the imperfections of our memory, psychology has done useful work mapping out many of these limits, yet when it comes to the human imagination, most of us still like to see it as something boundless. But new research in the journal Cognition, on the capacity of our visual imagination, suggests that we soon hit its limits.
If the courts wanted to know if a suspected sex offender was attracted to children, they could ask him or her, or they could ask experts to measure signs of the suspect’s sexual arousal while he or she looked at different images. But a devious suspect would surely lie about their interests, and they could distract themselves to cheat the physical test.
Brain scans offer an alternative strategy: research shows that when we look at images that we find sexually attractive, our brains show distinct patterns of activation. But of course, the same issues of cheating and deliberate distraction could apply.
Unless, that is, you could somehow prevent the suspect from knowing what images they were looking at, by using subliminal stimuli that can’t be seen at a conscious level. Then you could see how their brain responds to different types of image without the suspect ever being consciously aware of them.
This is the essence of a strategy tested in a new paper in Consciousness and Cognition. Martina Wernicke at the Asklepios Forensic Psychiatric Hospital of Göttingen and her colleagues have provided a partial proof of principle that it might one day be possible to use subliminally presented images in a brain scanner to provide a fraud-proof test of a person’s sexual interests. It’s a potentially important breakthrough for crime prevention – given that deviant sexual interest is one of the strongest predictors of future offences – but it also raises important ethical questions.
Most people who undertake psychotherapy seem to benefit from it. How do we know? Arguably, the most important evidence comes from meta-analyses that combine the results from many – sometimes hundreds – of randomised controlled trials. Based on this, it’s been estimated that psychotherapy is effective for about 80 per cent of people (meanwhile, between five and 10 per cent of clients may suffer adverse effects).
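As an aside, the pooling at the heart of such meta-analyses is typically an inverse-variance weighted average of each trial’s effect size: more precise trials get more weight. Here is a minimal fixed-effect sketch in Python – the trial effect sizes and standard errors are made up for illustration, not taken from any of the papers discussed here:

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Inverse-variance weighted (fixed-effect) pooled estimate.

    effects: per-study effect sizes (e.g. standardised mean differences)
    std_errors: their standard errors
    """
    weights = [1 / se ** 2 for se in std_errors]  # precision of each trial
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))       # standard error of the pool
    return pooled, pooled_se

# Hypothetical results from three small psychotherapy RCTs
effects = [0.45, 0.30, 0.60]
std_errors = [0.20, 0.15, 0.25]
d, se = fixed_effect_meta(effects, std_errors)
# 95% confidence interval for the pooled effect
ci = (d - 1.96 * se, d + 1.96 * se)
```

Note how the pooled estimate sits closest to the trial with the smallest standard error – which is also why the quality of the individual trials fed into a meta-analysis matters so much.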
But now the more concerning news: a team of researchers led by Evangelos Evangelou at the University of Ioannina, Greece, has assessed the quality of 247 of these psychotherapy meta-analyses, and they report in Acta Psychiatrica Scandinavica that many of them have serious methodological shortcomings.
Coincidentally, a separate research group led by Brent Roberts at the University of Illinois, Urbana-Champaign has just published in the Journal of Personality some of the first observational data on how people’s personalities change after undertaking psychotherapy. In contrast to what’s been found in the clinical literature, they report that people who’ve been in therapy seem to show negative changes in personality and other psychological outcomes.
By Alex Fradera
Racism and prejudice are sometimes blatant, but often manifest in subtle ways. The current emblem of these subtle slights is the “microaggression”, a concept that has generated a large programme of research and launched itself into the popular consciousness – prompting last month’s decision by Merriam-Webster to add it to their dictionary. However, a new review in Perspectives on Psychological Science by Scott Lilienfeld of Emory University argues that core empirical and conceptual questions about microaggressions remain unaddressed, meaning the struggle against them takes place on a confusing battlefield, one where it’s hard to tell friend from foe.
By Alex Fradera
When a good doctor encounters research comparing the effectiveness of drugs A and B, she knows to beware the fact that B was created by the people paying the researchers’ salaries. Pharmaceutical industry funding can be complex, but the general principle of declaring financial conflicts of interest is now embedded in medical research culture. Unfortunately, research into psychological therapies doesn’t yet seem to have got its house in order in an equivalent way. That’s according to a new open access article in the journal BMJ Open which suggests that, while there is less risk in this field of financial conflicts of interest, researchers may be particularly vulnerable to non-financial biases, a problem that hasn’t been adequately acknowledged until now.
By guest blogger Stuart Ritchie
It would be very concerning if “girls as young as six years old believe that brilliance is a male trait”, as The Guardian reported last week, especially if “this view has consequences”, as was argued in The Atlantic. Both stories implied girls’ beliefs about gender could be part of the explanation for why relatively few women are found working in fields such as maths, physics, and philosophy. These news stories, widely shared on social media, were based on a new psychology paper by Lin Bian at the University of Illinois at Urbana-Champaign and colleagues, published in Science, entitled “Gender stereotypes about intellectual ability emerge early and influence children’s interests”. The paper reported four studies, which at first appear to have simple, clear-cut conclusions. But a closer look at the data reveals that the results are rather weak, and the researchers’ interpretation goes far beyond what their studies have shown.
Between 1837 and 1860 Charles Darwin kept a diary of every book he read, including An Essay on the Principle of Population, Principles of Geology and Vestiges of the Natural History of Creation. There were many others: 687 English non-fiction titles alone, meaning that he averaged one book every ten days. After Darwin finished each one, how did he decide what to read next? In this decision, a scientist like Darwin was confronted with a problem similar to that afflicting a squirrel in search of nuts. Is it better to search one area (or topic) thoroughly, or to continually jump to new areas (topics)? Foraging, whether for nuts or information, comes down to a choice between exploitation and exploration. In a new paper in Cognition, a team led by Jaimie Murdock has analysed the contents of the English non-fiction books Darwin read, and the order in which he read them, to find out his favoured information-gathering approach and how it changed over time.
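The exploit-versus-explore trade-off described here is the same one formalised in “multi-armed bandit” problems, where a simple epsilon-greedy rule captures the dilemma: usually return to the most rewarding option so far, but occasionally sample something new. The following Python sketch is purely illustrative – the topic names and payoff numbers are hypothetical and not from the Cognition paper:

```python
import random

def epsilon_greedy_reader(topic_payoffs, epsilon=0.2, steps=1000, seed=42):
    """Pick which 'topic' to read next, over many reading sessions.

    With probability epsilon, explore a random topic; otherwise exploit the
    topic with the best average information payoff seen so far.
    """
    rng = random.Random(seed)
    totals = {t: 0.0 for t in topic_payoffs}
    counts = {t: 0 for t in topic_payoffs}
    for _ in range(steps):
        if rng.random() < epsilon or not any(counts.values()):
            topic = rng.choice(list(topic_payoffs))  # explore a random topic
        else:
            # exploit: best average payoff so far
            topic = max(totals, key=lambda t: totals[t] / max(counts[t], 1))
        reward = rng.gauss(*topic_payoffs[topic])    # noisy payoff this session
        totals[topic] += reward
        counts[topic] += 1
    return counts

# Hypothetical topics with (mean, sd) information payoffs per session
counts = epsilon_greedy_reader({"geology": (0.5, 0.1),
                                "population": (0.8, 0.1),
                                "natural history": (0.6, 0.1)})
```

With a low epsilon the reader settles on one topic quickly (exploitation); raising epsilon produces the restless, topic-hopping pattern of an explorer – the two ends of the spectrum the researchers looked for in Darwin’s reading record.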
During the ongoing “replication crisis” in psychology, in which new attempts to reproduce previously published results have frequently failed, a common claim by the authors of the original work has been that those attempting a replication have lacked sufficient experimental expertise. Part of their argument, as explained recently by Shane Bench and his colleagues in the Journal of Experimental Social Psychology, is that “just as master chess players and seasoned firefighters develop intuitive expertise that aids their decision making, seasoned experimenters may develop intuitive expertise that influences the ‘micro decisions’ they make about study selection … and data collection.”
To see if there really is any link between researcher expertise and the chances of replication success, Bench and his colleagues have analysed the results of the recent “Reproducibility Project”, in which 270 psychologists attempted to replicate 100 previous studies, managing a success rate of less than 40 per cent. Bench’s team found that the expertise of the replicating teams, as measured by the number of prior publications of the first and senior authors, was indeed correlated with the size of the effect obtained in the replication attempt, but there’s more to the story.