Psychology is overly dependent on student samples, but you might assume this has at least one advantage: comparing across student samples should rule out the influence of complicating background factors, such as differences in average personality profile. In fact, writing in the Journal of Personality, a team of US researchers led by Katherine Corker at Kenyon College has challenged this assumption: their findings suggest that if you test a group of students at one university, it’s not safe to assume that their average personality profile will match that of a sample of students from a university elsewhere in the same country.
During the ongoing “replication crisis” in psychology, in which new attempts to reproduce previously published results have frequently failed, a common claim by the authors of the original work has been that those attempting a replication have lacked sufficient experimental expertise. Part of their argument, as explained recently by Shane Bench and his colleagues in the Journal of Experimental Social Psychology, is that “just as master chess players and seasoned firefighters develop intuitive expertise that aids their decision making, seasoned experimenters may develop intuitive expertise that influences the ‘micro decisions’ they make about study selection … and data collection.”
To see if there really is any link between researcher expertise and the chances of replication success, Bench and his colleagues have analysed the results of the recent “Reproducibility Project” in which 270 psychologists attempted to replicate 100 previous studies, managing a success rate of less than 40 per cent. Bench’s team found that the expertise of the replication research team, as measured by the first and senior authors’ numbers of prior publications, was indeed correlated with the size of effect obtained in the replication attempt, but there’s more to the story.
It’s one of the simplest, most evidence-backed pieces of advice you can give to someone who’s looking to attract a partner – wear red. Many studies, most of them involving men rating women’s appearance, have shown that wearing red clothing increases attractiveness and sex appeal. The reasons are thought to be traceable to our evolutionary past – red displays in the animal kingdom also often indicate sexual interest and availability – complemented by the cultural connotations of red with passion and sex.
But nothing, it seems, is straightforward in psychology any more. A team of Dutch and British researchers has just published three attempts to replicate the red effect in the open-access journal Evolutionary Psychology, including testing whether the effect is more pronounced in a short-term mating context, which would be consistent with the idea that red signals sexual availability. However, not only did the research uncover no effect of mating context, all three experiments also failed to demonstrate any effect of red on attractiveness whatsoever. Continue reading “Wardrobe malfunction – three failed attempts to replicate the finding that red increases attractiveness”
Can we trust psychological studies? We speak to Brian Earp, of Oxford University and Yale University, about how to respond when we’re told repeatedly that the veracity of eye-catching findings, or even cherished theories, has come under scrutiny. Brian also talks about his own experience of publishing a failed replication attempt – a must-listen for any researchers who are fearful of publishing their own negative findings. Find Brian on Twitter @BrianDavidEarp
By Alex Fradera
“Reading is the sole means by which we slip, involuntarily, often helplessly, into another’s skin, another’s voice, another’s soul.” So said Joyce Carol Oates, and many more of us suspect that reading good fiction gives us insight into other people.
Past research backs this up, for example providing evidence that people with a long history of reading tend to be better at judging the mental states of others. But this work has always been open to the explanation that sensitive people are drawn to books, rather than books making people more sensitive. However, in 2013 a study came along that appeared to change the game: researchers David Kidd and Emanuele Castano showed that exposure to a single passage of literary fiction actually improved readers’ ability to identify other people’s feelings.
This finding sent ripples through popular media, even prompting some to suggest strategies for everyday life like leafing through a book before you go on a date. But since then, as is the usual pattern in psychology these days, a struggle has ensued to establish the robustness of the eye-catching 2013 result. Continue reading “Three labs just failed to replicate the finding that a quick read of literary fiction boosts your empathy”
Every now and again a psychology finding is published that immediately grabs the world’s attention and refuses to let go – often it’s a result with immediate implications for how we can live more happily and peacefully, or it says something profound about human nature. Said finding then enters the public consciousness, endlessly recycled in pop psychology books and magazine articles.
Unfortunately, sometimes when other researchers have attempted to obtain these same influential findings, they’ve struggled. This replication problem doesn’t just apply to famous findings, nor does it only affect psychological science. And there can be relatively mundane reasons behind failed replications, such as methodological differences from the original or cultural changes since the original was conducted.
But given the public fascination with psychology, and the powerful influence of certain results, it is arguably in the public interest to summarise in one place a collection of some of the most famous findings that have proven tricky to repeat. This is not a list of disproven or dodgy results. It’s a snapshot of the difficult, messy process of behavioural science. Continue reading “Ten Famous Psychology Findings That It’s Been Difficult To Replicate”
The great American psychologist William James proposed that bodily sensations – a thumping heart, a sweaty palm – aren’t merely a consequence of our emotions, but may actually cause them. In his famous example, when you see a bear and your pulse races and you start running, it’s the running and the racing pulse that makes you feel afraid.
Consistent with James’ theory (and similar ideas put forward even earlier by Charles Darwin), a lot of research has shown that the expression on our face seems not only to reflect, but also to shape how we’re feeling. One of the most well-known and highly cited pieces of research to support the “facial feedback hypothesis” was published in 1988 and involved participants looking at cartoons while holding a pen either between their teeth, forcing them to smile, or between their lips, forcing them to pout. Those in the smile condition said they found the cartoons funnier.
But now an attempt to replicate this modern classic of psychology research, involving 17 labs around the world and a collective subject pool of 1894 students, has failed. “Overall, the results were inconsistent with the original result,” the researchers said. Continue reading “No reason to smile – Another modern psychology classic has failed to replicate”
Being watched encourages us to be nicer people – what psychologists call behaving “pro-socially”. Recent evidence has suggested this effect can even be driven by artificial surveillance cues, such as eyes pictured on-screen or painted on a donations jar. If true, this would offer up some simple ways to reduce low-level crime and, well, to encourage us all to treat each other a little better. But unfortunately, a new article in Evolution and Human Behavior calls this into question. Continue reading “Two meta-analyses find no evidence that “Big Brother” eyes boost generosity”
Pick up any introductory psychology textbook and under the “developmental” chapter you’re bound to find a description of “groundbreaking” research into newborn babies’ imitation skills. The work, conducted in the 1970s, will typically be shown alongside black and white images of a man sticking his tongue out at a baby, and the tiny baby duly sticking out her tongue in response.
The research was revolutionary because it appeared to show that humans are born with the power to imitate – a skill crucial to learning and relationships – and it contradicted the claims of Jean Piaget, the grandfather of developmental psychology, that imitation does not emerge until babies are around nine months old.
Today it may be time to rewrite these textbooks. A new study in Current Biology, more methodologically rigorous than any previous investigation of its kind, has found no evidence to support the idea that newborn babies can imitate.
Janine Oostenbroek and her colleagues tested 106 infants four times: at one week of age, then at three weeks, six weeks, and nine weeks. Data from 64 of the infants was available at all four time points. At each test, the researcher performed a range of facial movements, actions or sounds for 60 seconds each. There were 11 of these displays in total, including tongue protrusions, mouth opening, happy face, sad face, index finger pointing and mmm and eee sounds. Each baby’s behaviour during these 60-second periods was filmed and later coded according to which faces, actions or sounds, if any, he or she performed during the different researcher displays.
Whereas many previous studies have compared babies’ responses to only two or a few different adult displays, this study was much more robust because the researchers checked to see if, for example, the babies were more likely to stick out their tongues when that’s what the researcher was doing, as compared with when the researcher was doing any of the 10 other displays or sounds. Unlike most prior research, this new study also looked to see how any signs of imitation changed over time, at the different testing sessions. According to the researchers, this makes theirs “the most comprehensive, longitudinal study of neonatal imitation to date”.
Following these more robust standards, Oostenbroek and her team found no evidence that newborn babies can reliably imitate faces, actions or sounds. Take tongue protrusions, for example. Averaged across the different testing time points, the babies were no more likely to stick out their tongue when the researcher did so than when the researcher opened her mouth, pulled a happy face or pulled a sad face. In fact, across all the different displays, actions and sounds, there was no situation in which the babies consistently performed a given facial display, gesture or sound more when the researcher specifically did that same thing than when the researcher was doing anything else.
Based on their results, the researchers said that the idea of “innate imitation modules” and other such concepts founded on the idea of neonatal imitation “should be modified or abandoned altogether”. They said the truth may be closer to what Piaget originally proposed and that imitation probably emerges from around six months.
Oostenbroek, J., Suddendorf, T., Nielsen, M., Redshaw, J., Kennedy-Costantini, S., Davis, J., Clark, S., & Slaughter, V. (2016). Comprehensive longitudinal study challenges the existence of neonatal imitation in humans. Current Biology. DOI: 10.1016/j.cub.2016.03.047
Top image is part of a figure that appears in Oostenbroek et al. 2016.
10 surprising things babies can do
Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!
After some high-profile and at times acrimonious failures to replicate past landmark findings, psychology as a discipline and scientific community has led the way in trying to find out more about why some scientific findings reproduce and others don’t, including instituting reporting practices to improve the reliability of future results. Much of this endeavour is thanks to the Center for Open Science, co-founded by the University of Virginia psychologist Brian Nosek.
Today, the Center has published its latest large-scale project: an attempt by 270 psychologists to replicate findings from 100 psychology studies published in 2008 in three prestigious journals that cover cognitive and social psychology: Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory and Cognition.
The Reproducibility Project is designed to estimate the “reproducibility” of psychological findings and complements the Many Labs Replication Project, which published its initial results last year. The new effort aimed to replicate many different prior results to try to establish the distinguishing features of replicable versus unreliable findings: in this sense it was broad and shallow, looking for general rules that apply across the fields studied. By contrast, the Many Labs Project involved many different teams all attempting to replicate a smaller number of past findings – in that sense it was narrow and deep, providing more detailed insights into specific psychological phenomena.
The headline result from the new Reproducibility Project report is that whereas 97 per cent of the original results showed a statistically significant effect, this was reproduced in only 36 per cent of the replication attempts. Some replications found the opposite effect to the one they were trying to recreate. This is despite the fact that the Project went to incredible lengths to make the replication attempts true to the original studies, including consulting with the original authors.
Just because a finding doesn’t replicate doesn’t mean the original result was false – there are many possible reasons for a replication failure, including unknown or unavoidable deviations from the original methodology. Overall, however, the results of the Project are likely indicative of the biases that researchers and journals show towards producing and publishing positive findings. For example, a survey published a few years ago revealed the questionable practices many researchers use to achieve positive results, and it’s well known that journals are less likely to publish negative results.
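The publication-bias mechanism described above can be sketched with a small simulation. This is purely illustrative: the true effect size, sample size, and study count below are assumed values, not figures from the Project. The point it makes is that when underpowered studies of a small true effect are filtered so that only statistically significant results get “published”, the published record looks almost uniformly positive with inflated effects, while direct replications at the same sample size succeed far less often.

```python
import math
import random
import statistics

random.seed(42)

# Illustrative, assumed parameters -- not figures from the Reproducibility Project.
TRUE_D = 0.2      # small true effect (in standard-deviation units)
N = 20            # participants per group (underpowered for an effect this small)
STUDIES = 2000    # number of original studies run
SE = math.sqrt(2 / N)  # standard error of the group-mean difference (sd = 1)

def run_study():
    """One two-group study: return (observed effect, was it 'significant'?)."""
    a = [random.gauss(0.0, 1.0) for _ in range(N)]
    b = [random.gauss(TRUE_D, 1.0) for _ in range(N)]
    diff = statistics.fmean(b) - statistics.fmean(a)
    return diff, abs(diff / SE) > 1.96  # two-tailed z-test at p < .05

# Journals "publish" only the significant results...
published = [d for d, sig in (run_study() for _ in range(STUDIES)) if sig]

# ...so the published record is 100% significant and its average effect
# is inflated well above the true effect.
print(f"mean published effect: {statistics.fmean(published):.2f}")

# Direct replications at the same sample size face the true (small) effect,
# so most of them fail to reach significance.
replications = [run_study()[1] for _ in range(len(published))]
print(f"replication success rate: {sum(replications) / len(replications):.0%}")
```

Run as-is, the sketch typically produces a mean published effect several times larger than the true one, and a replication success rate well below half, echoing the broad pattern the Project reports without modelling its studies specifically.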
The Project found that studies that initially reported weaker or more surprising results were less likely to replicate. In contrast, the expertise of the original research team and of the replication research team was not related to the chances of replication success. Meanwhile, social psychology replications were less than half as likely to achieve a significant finding compared with cognitive psychology replication attempts, but in terms of declines in effect size, both fields showed the same average reduction from original study to replication attempt, to less than half (cognitive psychology studies started out with larger effects, which is why more of the replications in this area retained statistical significance).
Among the studies that failed to replicate was research on loneliness increasing supernatural beliefs; conceptual fluency increasing a preference for concrete descriptions (e.g. if I prime you with the name of a city, that increases your conceptual fluency for the city, which supposedly makes you prefer concrete descriptions of that city); and research on links between people’s racial prejudice and their response times to pictures showing people from different ethnic groups alongside guns. A full list of the findings that the researchers attempted to replicate can be found on the Reproducibility Project website (as can all the data and replication analyses).
This may sound like a disappointing day for psychology, but in fact the opposite is true. Through the Reproducibility Project, psychology and psychologists are blazing a trail, helping shed light on a problem that afflicts all of science, not just psychology. The Project, which was backed by the Association for Psychological Science (publisher of the journal Psychological Science), is a model of constructive collaboration showing how original authors and the authors of replication attempts can work together to further their field. In fact, some investigators on the Project were in the position of being both an original author and a replication researcher.
“The present results suggest there is room to improve reproducibility in psychology,” the authors of the Reproducibility Project concluded. But they added: “Any temptation to interpret these results as a defeat for psychology, or science more generally, must contend with the fact that this project demonstrates science behaving as it should” – that is, being constantly sceptical of its own explanatory claims and striving for improvement. “This isn’t a pessimistic story”, added Brian Nosek in a press conference for the new results. “The project shows science demonstrating an essential quality, self-correction – a community of researchers volunteered their time to contribute to a large project for which they would receive little individual credit.”
Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science.
How did it feel to be part of the Reproducibility Project?
A replication tour de force
Do psychology findings replicate outside the lab?
A recipe for (attempting to) replicate existing findings in psychology
A special issue of The Psychologist on issues surrounding replication in psychology.
Serious power failure threatens the entire field of neuroscience
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.