What’s in a smile? According to a widely reported 2010 study of US major league baseball players, which we covered here at BPS Research Digest, one important answer is: an indication of how long the smiler will live.
By analysing official individual photos of players from the 1952 baseball season, and then looking at subsequent death records, Ernest Abel and Michael Kruger at Wayne State University, Detroit, concluded that players who’d smiled like they meant it – with full “Duchenne smiles”, which involve muscles around the eyes as well as the mouth – lived on average seven years longer than players who’d posed with less convincing grins.
The result was taken to support existing evidence that happier people tend to live longer. It also seemed to show that smiles in posed photos – even on just one occasion – are a fairly reliable signal of people’s underlying emotional disposition and therefore their likely longevity.
But a new replication and extension of the baseball photo study has produced very different results. This is important, because the idea that happier people live longer is widely promoted, and has implications both for individuals and for policy-makers.
There’s been a lot of talk of the crisis in psychology. For decades, and often with the best of intentions, researchers have engaged in practices that have made it likely their results are “false positives” or not real. But that was in the past. The crisis is ending. “We do not call the rain that follows a long drought a ‘water crisis’,” write Leif Nelson at UC Berkeley and Joseph Simmons and Uri Simonsohn at the University of Pennsylvania. “We do not call sustained growth following a recession an ‘economic crisis’”.
In their paper, due for publication in the Annual Review of Psychology, the trio observe that had any psychologists been in hibernation for the last seven years, they would not recognise their field today. The full disclosure of methods and data, the pre-registration of studies, the publication of negative findings, and replication attempts – all of which help reduce risk of false positives – have increased immeasurably. “The improvements to our field have been dramatic,” they write. “This is psychology’s renaissance.”
As well as giving the field of psychology a pep talk, their paper provides a useful review of how we got to this point, the reasons things are getting better, and the ongoing controversies.
The failure to reproduce established psychology findings on renewed testing, including some famous effects, has been well-publicised and has led to talk of a crisis in the field. However, psychology is a vast topic and there’s a possibility that the findings from some sub-disciplines may be more robust than others, in the sense of replicating reliably, even in unfavourable circumstances, such as when the participants have been tested on the same effect before.
A new paper currently available as a preprint at PsyArXiv has tested whether this might be the case for nine key findings from cognitive psychology, related to perception, memory and learning. Rolf Zwaan at Erasmus University Rotterdam and his colleagues found that all nine effects replicated reliably. “These results represent good news for the field of psychology,” they said.
In Part One, published yesterday, we reported the views of active research psychologists on the state of their field, as surveyed by Matt Motyl and his colleagues at the University of Illinois at Chicago. Researchers reported a cautious optimism: research practices hadn’t been as bad as feared, and were in any case improving.
But is their optimism warranted? After all, several high-profile replication projects have found that, more often than not, re-running previously successful studies produces only null results. But defenders of the state of psychology argue that replications fail for many reasons, including defects in the reproduction and differences in samples, so the implications aren’t settled.
To get closer to the truth, Motyl’s team complemented their survey findings with a forensic analysis of published data, uncovering results that seem to bolster their optimistic position. In Part Two of our coverage, we look at these findings and why they’re already proving controversial.
The field of social psychology is reeling from a series of crises that call into question the everyday scientific practices of its researchers. The fuse was lit by statistician John Ioannidis in 2005, in a review that outlined why, thanks particularly to what are now termed “questionable research practices” (QRPs), over half of all published research in the social and medical sciences might be invalid. Kaboom. This shook a large swathe of science, but the fires continue to burn especially fiercely in the fields of social and personality psychology, which marshalled their response through a 2012 special issue in Perspectives on Psychological Science that brought these concerns fully out in the open, discussing replication failure, publication biases, and how to reshape incentives to improve the field. The fire flared up again in 2015 with the publication of Brian Nosek and the Open Science Collaboration’s high-profile attempt to replicate 100 studies in these fields, which succeeded in only 36 per cent of cases. Meanwhile, and to its credit, efforts to institute better safeguards like registered reports have gathered pace.
So how bad did things get, and have they really improved? A new article in pre-print at the Journal of Personality and Social Psychology tries to tackle the issue from two angles: first by asking active researchers what they think of the past and present state of their field, and how they now go about conducting psychology experiments, and second by analysing features of published research to estimate the prevalence of broken practices more objectively.
The paper comes from a large group of authors at the University of Illinois at Chicago under the guidance of Linda Skitka, a distinguished social psychologist who participated in the creation of the journal Social Psychological and Personality Science and who is on the editorial board of many more social psych journals, and led by Matt Motyl, a social and personality psychologist who has published with Nosek in the past, including on the issue of improving scientific practice.
Psychology research is the air that we breathe at the Digest, making it crucial that we understand its quality. So in this two-part series, we’re going to explore the issues raised in the University of Illinois at Chicago paper, to see if we can make sense of the state of social psychology, beginning in this post with the findings from Motyl et al’s survey of approximately 1,200 social and personality psychologists, from graduate students to full professors, mainly from the US, Europe and Australasia.
Psychology is overly dependent on student samples, but on the plus side, you might assume that one advantage of comparing across student samples is that you can rule out the influence of complicating background factors, such as differences in average personality profile. In fact, writing in the Journal of Personality, a team of US researchers led by Katherine Corker at Kenyon College has challenged this assumption: their findings suggest that if you test a group of students at one university, it’s not safe to assume that their average personality profile will match that of a sample of students from a university elsewhere in the same country.
During the ongoing “replication crisis” in psychology, in which new attempts to reproduce previously published results have frequently failed, a common claim by the authors of the original work has been that those attempting a replication have lacked sufficient experimental expertise. Part of their argument, as explained recently by Shane Bench and his colleagues in the Journal of Experimental Social Psychology, is that “just as master chess players and seasoned firefighters develop intuitive expertise that aids their decision making, seasoned experimenters may develop intuitive expertise that influences the ‘micro decisions’ they make about study selection … and data collection.”
To see if there really is any link between researcher expertise and the chances of replication success, Bench and his colleagues have analysed the results of the recent “Reproducibility Project” in which 270 psychologists attempted to replicate 100 previous studies, managing a success rate of less than 40 per cent. Bench’s team found that replication researcher team expertise, as measured by first and senior author’s number of prior publications, was indeed correlated with the size of effect obtained in the replication attempt, but there’s more to the story.
It’s one of the simplest, most evidence-backed pieces of advice you can give to someone who’s looking to attract a partner – wear red. Many studies, most of them involving men rating women’s appearance, have shown that wearing red clothing increases attractiveness and sex appeal. The reasons are thought to be traceable to our evolutionary past – red displays in the animal kingdom also often indicate sexual interest and availability – complemented by the cultural connotations of red with passion and sex.
“Reading is the sole means by which we slip, involuntarily, often helplessly, into another’s skin, another’s voice, another’s soul.” So said Joyce Carol Oates, and many more of us suspect that reading good fiction gives us insight into other people.
Past research backs this up, for example providing evidence that people with a long history of reading tend to be better at judging the mental states of others. But this work has always been open to the explanation that sensitive people are drawn to books, rather than books making people more sensitive. However, in 2013 a study came along that appeared to change the game: researchers David Kidd and Emanuele Castano showed that exposure to a single passage of literary fiction actually improved readers’ ability to identify other people’s feelings.
Every now and again a psychology finding is published that immediately grabs the world’s attention and refuses to let go – often it’s a result with immediate implications for how we can live more happily and peacefully, or it says something profound about human nature. Said finding then enters the public consciousness, endlessly recycled in pop psychology books and magazine articles.
Unfortunately, sometimes when other researchers have attempted to obtain these same influential findings, they’ve struggled. This replication problem doesn’t just apply to famous findings, nor does it only affect psychological science. And there can be relatively mundane reasons behind failed replications, such as methodological differences from the original or cultural changes since the original was conducted.
But given the public fascination with psychology, and the powerful influence of certain results, it is arguably in the public interest to summarise in one place a collection of some of the most famous findings that have proven tricky to repeat. This is not a list of disproven or dodgy results. It’s a snapshot of the difficult, messy process of behavioural science.