Category: Replications

Personality differences uncovered between students at different US universities

Students at more expensive institutions tended to score higher in trait Neuroticism

By Christian Jarrett

Psychology is overly dependent on student samples, but you might assume this brings at least one advantage: that when comparing across student samples you can rule out complicating background factors, such as differences in average personality profile. In fact, writing in the Journal of Personality, a team of US researchers led by Katherine Corker at Kenyon College has challenged this assumption: their findings suggest that if you test a group of students at one university, it’s not safe to assume that their average personality profile will match that of a sample of students from a university elsewhere in the same country.


Replication success correlates with researcher expertise (but not for the reasons you might think)

By Christian Jarrett

During the ongoing “replication crisis” in psychology, in which new attempts to reproduce previously published results have frequently failed, a common claim by the authors of the original work has been that those attempting a replication have lacked sufficient experimental expertise. Part of their argument, as explained recently by Shane Bench and his colleagues in the Journal of Experimental Social Psychology, is that “just as master chess players and seasoned firefighters develop intuitive expertise that aids their decision making, seasoned experimenters may develop intuitive expertise that influences the ‘micro decisions’ they make about study selection … and data collection.”

To see if there really is any link between researcher expertise and the chances of replication success, Bench and his colleagues have analysed the results of the recent “Reproducibility Project”, in which 270 psychologists attempted to replicate 100 previous studies, managing a success rate of less than 40 per cent. Bench’s team found that the expertise of the replication team, as measured by the first and senior authors’ number of prior publications, was indeed correlated with the size of effect obtained in the replication attempt, but there’s more to the story.
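To make that kind of analysis concrete, here is a minimal sketch in Python of a rank correlation between team expertise and replication effect size; the publication counts and effect sizes below are invented for illustration and are not Bench et al.’s data.

```python
# Minimal sketch: rank-correlate a replication team's publication count
# with the effect size their replication attempt obtained.
# All numbers are hypothetical -- this is not Bench et al.'s dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
publications = rng.integers(1, 200, size=100)                  # prior publications per team
effects = 0.001 * publications + rng.normal(0, 0.2, size=100)  # replication effect sizes

rho, p = stats.spearmanr(publications, effects)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```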


Wardrobe malfunction – three failed attempts to replicate the finding that red increases attractiveness

By Christian Jarrett 

It’s one of the simplest, most evidence-backed pieces of advice you can give to someone who’s looking to attract a partner – wear red. Many studies, most of them involving men rating women’s appearance, have shown that wearing red clothing increases attractiveness and sex appeal. The reasons are thought to be traceable to our evolutionary past – red displays in the animal kingdom also often indicate sexual interest and availability – complemented by the cultural associations of red with passion and sex.

But nothing, it seems, is straightforward in psychology any more. A team of Dutch and British researchers has just published three attempts to replicate the red effect in the open-access journal Evolutionary Psychology, including testing whether the effect is more pronounced in a short-term mating context, which would be consistent with the idea that red signals sexual availability. However, not only did the research fail to uncover an effect of mating context, all three experiments also failed to demonstrate any effect of red on attractiveness whatsoever.

Three labs just failed to replicate the finding that a quick read of literary fiction boosts your empathy

By Alex Fradera

“Reading is the sole means by which we slip, involuntarily, often helplessly, into another’s skin, another’s voice, another’s soul.” So said Joyce Carol Oates, and many more of us suspect that reading good fiction gives us insight into other people.

Past research backs this up, for example providing evidence that people with a long history of reading tend to be better at judging the mental states of others. But this work has always been open to the explanation that sensitive people are drawn to books, rather than books making people more sensitive. However in 2013 a study came along that appeared to change the game: researchers David Kidd and Emanuele Castano showed that exposure to a single passage of literary fiction actually improved readers’ ability to identify other people’s feelings.

This finding sent ripples through popular media, even prompting some to suggest strategies for everyday life, like leafing through a book before you go on a date. But since then, as is the usual pattern in psychology these days, a struggle has ensued to establish the robustness of the eye-catching 2013 result.

Ten Famous Psychology Findings That It’s Been Difficult To Replicate

By Christian Jarrett

Every now and again a psychology finding is published that immediately grabs the world’s attention and refuses to let go – often it’s a result with immediate implications for how we can live more happily and peacefully, or it says something profound about human nature. Said finding then enters the public consciousness, endlessly recycled in pop psychology books and magazine articles.

Unfortunately, sometimes when other researchers have attempted to obtain these same influential findings, they’ve struggled. This replication problem doesn’t just apply to famous findings, nor does it only affect psychological science. And there can be relatively mundane reasons behind failed replications, such as methodological differences from the original or cultural changes since the original was conducted.

But given the public fascination with psychology, and the powerful influence of certain results, it is arguably in the public interest to summarise in one place a collection of some of the most famous findings that have proven tricky to repeat. This is not a list of disproven or dodgy results. It’s a snapshot of the difficult, messy process of behavioural science.

No reason to smile – Another modern psychology classic has failed to replicate

Image via Quentin Gronau/Flickr showing how participants were instructed to hold the pen

By Christian Jarrett

The great American psychologist William James proposed that bodily sensations – a thumping heart, a sweaty palm – aren’t merely a consequence of our emotions, but may actually cause them. In his famous example, when you see a bear and your pulse races and you start running, it’s the running and the racing pulse that make you feel afraid.

Consistent with James’ theory (and similar ideas put forward even earlier by Charles Darwin), a lot of research has shown that the expression on our face seems not only to reflect, but also to shape how we’re feeling. One of the most well-known and highly cited pieces of research to support the “facial feedback hypothesis” was published in 1988 and involved participants looking at cartoons while holding a pen either between their teeth, forcing them to smile, or between their lips, forcing them to pout. Those in the smile condition said they found the cartoons funnier.

But now an attempt to replicate this modern classic of psychology research, involving 17 labs around the world and a collective subject pool of 1894 students, has failed. “Overall, the results were inconsistent with the original result,” the researchers said.

Two meta-analyses find no evidence that “Big Brother” eyes boost generosity

Being watched encourages us to be nicer people – what psychologists call behaving “pro-socially”. Recent evidence has suggested this effect can even be driven by artificial surveillance cues, such as eyes pictured on-screen or painted on a donations jar. If true, this would offer up some simple ways to reduce low-level crime and, well, to encourage us all to treat each other a little better. But unfortunately, a new article in Evolution and Human Behavior calls this into question.

This is what happened when psychologists tried to replicate 100 previously published findings

While 97 per cent of the original results showed a statistically significant effect, this was reproduced in only 36 per cent of the replications

After some high-profile and at times acrimonious failures to replicate past landmark findings, psychology as a discipline and scientific community has led the way in trying to find out more about why some scientific findings reproduce and others don’t, including instituting reporting practices to improve the reliability of future results. Much of this endeavour is thanks to the Center for Open Science, co-founded by the University of Virginia psychologist Brian Nosek.

Today, the Center has published its latest large-scale project: an attempt by 270 psychologists to replicate findings from 100 psychology studies published in 2008 in three prestigious journals that cover cognitive and social psychology: Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory and Cognition.

The Reproducibility Project is designed to estimate the “reproducibility” of psychological findings and complements the Many Labs Replication Project, which published its initial results last year. The new effort aimed to replicate many different prior results to try to establish the distinguishing features of replicable versus unreliable findings: in this sense it was broad and shallow, looking for general rules that apply across the fields studied. By contrast, the Many Labs Project involved many different teams all attempting to replicate a smaller number of past findings – in that sense it was narrow and deep, providing more detailed insights into specific psychological phenomena.

The headline result from the new Reproducibility Project report is that whereas 97 per cent of the original results showed a statistically significant effect, this was reproduced in only 36 per cent of the replication attempts. Some replications found the opposite effect to the one they were trying to recreate. This is despite the fact that the Project went to incredible lengths to make the replication attempts true to the original studies, including consulting with the original authors.

Just because a finding doesn’t replicate doesn’t mean the original result was false – there are many possible reasons for a replication failure, including unknown or unavoidable deviations from the original methodology. Overall, however, the results of the Project are likely indicative of the biases that researchers and journals show towards producing and publishing positive findings. For example, a survey published a few years ago revealed the questionable practices many researchers use to achieve positive results, and it’s well known that journals are less likely to publish negative results.

The Project found that studies that initially reported weaker or more surprising results were less likely to replicate. In contrast, neither the expertise of the original research team nor that of the replication team was related to the chances of replication success. Meanwhile, social psychology replications were less than half as likely to achieve a significant finding compared with cognitive psychology replication attempts, although both fields showed the same average decline in effect size from original study to replication attempt, to less than half the original (cognitive psychology studies started out with larger effects, which is why more of the replications in this area retained statistical significance).
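The statistical logic behind this pattern can be demonstrated with a toy simulation. The sketch below is illustrative only – it is not the Project’s data or method – and shows how selecting original studies for statistical significance inflates their effect sizes, so that faithful replications regress toward the true effect and often miss significance.

```python
# Toy simulation of publication bias: 'publish' only significant positive
# results, then re-run each published study once as a faithful replication.
# Parameters (true effect, sample size) are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n, sims = 0.2, 30, 10_000   # small true effect, 30 per group

def one_study():
    treat = rng.normal(true_d, 1, n)
    ctrl = rng.normal(0, 1, n)
    _, p = stats.ttest_ind(treat, ctrl)
    pooled_sd = np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
    return (treat.mean() - ctrl.mean()) / pooled_sd, p   # Cohen's d, p value

studies = [one_study() for _ in range(sims)]
published = [(d, p) for d, p in studies if p < 0.05 and d > 0]
replications = [one_study() for _ in published]          # no selection this time

print(f"mean published d:   {np.mean([d for d, _ in published]):.2f}")
print(f"mean replication d: {np.mean([d for d, _ in replications]):.2f}")
print(f"replications significant: {np.mean([p < 0.05 for _, p in replications]):.0%}")
```

With these assumed numbers, the “published” effects come out at more than twice the true effect, and only a small minority of replications reach significance – the same qualitative pattern the Project observed.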

Among the studies that failed to replicate was research on loneliness increasing supernatural beliefs; conceptual fluency increasing a preference for concrete descriptions (e.g. if I prime you with the name of a city, that increases your conceptual fluency for the city, which supposedly makes you prefer concrete descriptions of that city); and research on links between people’s racial prejudice and their response times to pictures showing people from different ethnic groups alongside guns. A full list of the findings that the researchers attempted to replicate can be found on the Reproducibility Project website (as can all the data and replication analyses).

This may sound like a disappointing day for psychology, but in fact the opposite is true. Through the Reproducibility Project, psychology and psychologists are blazing a trail, helping shed light on a problem that afflicts all of science, not just psychology. The Project, which was backed by the Association for Psychological Science (publisher of the journal Psychological Science), is a model of constructive collaboration showing how original authors and the authors of replication attempts can work together to further their field. In fact, some investigators on the Project were in the position of being both an original author and a replication researcher.

“The present results suggest there is room to improve reproducibility in psychology,” the authors of the Reproducibility Project concluded. But they added: “Any temptation to interpret these results as a defeat for psychology, or science more generally, must contend with the fact that this project demonstrates science behaving as it should” – that is, being constantly sceptical of its own explanatory claims and striving for improvement. “This isn’t a pessimistic story”, added Brian Nosek in a press conference for the new results. “The project shows science demonstrating an essential quality, self-correction – a community of researchers volunteered their time to contribute to a large project for which they would receive little individual credit.”
_________________________________

Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349 (6251). DOI: 10.1126/science.aac4716

further reading
How did it feel to be part of the Reproducibility Project?
A replication tour de force
Do psychology findings replicate outside the lab?
A recipe for (attempting to) replicate existing findings in psychology
A special issue of The Psychologist on issues surrounding replication in psychology.
Serious power failure threatens the entire field of neuroscience 

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


The trouble with tDCS? Electrical brain stimulation may not work after all

By guest blogger Neuroskeptic

A widely-used brain stimulation technique may be less effective than previously believed.

Transcranial Direct Current Stimulation (tDCS) is an increasingly popular neuroscience tool. tDCS involves attaching electrodes to the scalp, through which a weak electrical current flows. The idea is that this current modulates the activity of the brain tissue underneath the electrode – safely and painlessly.

Outside of the neuroscience lab, tDCS is also used by hobbyists looking to boost their own brain power and a number of consumer stimulation devices are now being sold. The technique regularly makes the news, under headlines such as “Zapping your brain could help you lose weight”.

However, according to Australian neuroscientists Jared Horvath, Jason Forte and Olivia Carter, a single session of tDCS may have no detectable effect on cognitive function in most people. In a new paper published in the journal Brain Stimulation, Horvath and colleagues reviewed the published evidence on tDCS. They performed a meta-analysis of the data on how tDCS influences cognitive functions such as memory, language, and mental arithmetic.

For example, in experiments investigating language function, neuroscientists generally place the active tDCS electrode over the left frontal lobe of the volunteers. This ensures that the electrode is near to Broca’s area, a part of the brain known to be involved in language production. Then, the current is switched on and the volunteer is asked to do a linguistic task such as verbal fluency, in which the goal is to think of as many words beginning with a certain letter (say “p”) as possible within one minute. The performance of the volunteers given tDCS is compared to the performance of people given “sham” tDCS, in which the electrodes are attached but no current is applied.
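In statistical terms, the comparison at the heart of such an experiment is typically a simple between-groups test. Here is a minimal sketch with made-up fluency scores – the group sizes and means are assumptions, not values from any study in the review.

```python
# Hypothetical active-vs-sham comparison for a verbal fluency task:
# an independent-samples t-test on words produced in one minute.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
active = rng.poisson(lam=14, size=20)   # word counts, tDCS group (hypothetical)
sham = rng.poisson(lam=13, size=20)     # word counts, sham group (hypothetical)

t, p = stats.ttest_ind(active, sham)
print(f"active mean = {active.mean():.1f}, sham mean = {sham.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.3f}")      # small samples rarely detect a one-word difference
```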

Horvath et al. found that overall, there was no statistically significant difference between active and sham tDCS on any of the cognitive tasks that they examined. They say that:

Of the 59 analyses undertaken, tDCS was not found to generate a significant effect on any. Taken together, the evidence does not support the assertion that a single-session of tDCS has a reliable effect on cognitive tasks in healthy adult populations.

That seems pretty clear-cut. However, Horvath et al. acknowledge that their analysis did not include any of the studies that have been conducted on individuals with brain diseases or on the elderly, and they note that tDCS might be more effective in such cases.

What’s more, Horvath et al.’s meta-analysis didn’t use all of the studies on healthy people. The authors decided to include only results that had at least one published independent replication attempt. In other words, a study measuring the effects of tDCS on a given cognitive task was included only if more than one research group had published papers using that task. Even if one team of scientists had published several studies all showing that tDCS does influence some aspect of cognition, those results weren’t included unless at least one other team of researchers had published tDCS results using that same task. One hundred and seventy-six articles were excluded as a result.

Horvath et al. explain their decision not to consider those studies by saying that:

We chose to exclude measures that have only been replicated by a single research group to ensure all data included in and conclusions generated by this review accurately reflect the effects of tDCS itself, rather than any unique device, protocol, or condition utilized in a single lab.

However, this is a slightly unusual restriction to use on a meta-analysis. It might be interesting to see whether including these additional studies would have changed the results.
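For readers unfamiliar with the mechanics, a meta-analysis pools each study’s effect size weighted by its precision; under a random-effects model, a between-study variance term is added to each weight. Below is a minimal sketch using the standard DerSimonian-Laird estimator – the effect sizes and variances are invented, and this is not a re-analysis of Horvath et al.’s data.

```python
# Minimal random-effects meta-analysis (DerSimonian-Laird estimator).
# Effect sizes and variances below are hypothetical illustrations.
import numpy as np

d = np.array([0.40, 0.10, -0.05, 0.30, 0.02])   # per-study effect sizes
v = np.array([0.04, 0.05, 0.06, 0.04, 0.05])    # per-study sampling variances

w = 1 / v                                        # fixed-effect (inverse-variance) weights
d_fixed = (w * d).sum() / w.sum()
q = (w * (d - d_fixed) ** 2).sum()               # Cochran's Q (heterogeneity)
c = w.sum() - (w ** 2).sum() / w.sum()
tau2 = max(0.0, (q - (len(d) - 1)) / c)          # estimated between-study variance

w_re = 1 / (v + tau2)                            # random-effects weights
d_pooled = (w_re * d).sum() / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"pooled d = {d_pooled:.2f}, "
      f"95% CI = [{d_pooled - 1.96 * se:.2f}, {d_pooled + 1.96 * se:.2f}]")
# An overall tDCS effect would require this interval to exclude zero.
```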

This is the second time Horvath, Forte and Carter have published a sceptical meta-analysis of tDCS. In November last year they reviewed studies on the neurophysiological effects of tDCS and concluded that tDCS has virtually no measurable effects on brain function. So Horvath et al. seem to have comprehensively shown that tDCS essentially has no impact in healthy people, either on a biological or on a cognitive level.

However, I spoke to Dr Nick Davis, Lecturer in Psychology at Swansea University who has published several papers about tDCS. Davis says that:

This is a really useful review, as it helps us to think about the way we talk about the effects of tDCS.

However I believe that the way the analysis was conducted may have obscured some of the very real effects of tDCS. The authors have made a judgement about which studies can be pooled together and which studies cannot be pooled. One always has to make these kinds of decisions and I am not sure I would have made the same decisions given the same choices.

tDCS is still a developing technology. I think that with more principled methods of targeting the current flow to the desired brain area, we will see tDCS become one of the standard tools of cognitive neuroscience, just as EEG and fMRI have become.

_________________________________

Horvath, J., Forte, J., & Carter, O. (2015). Quantitative Review Finds No Evidence of Cognitive Effects in Healthy Populations from Single-Session Transcranial Direct Current Stimulation (tDCS). Brain Stimulation. DOI: 10.1016/j.brs.2015.01.400

Post written for the BPS Research Digest by Neuroskeptic, a British neuroscientist who blogs for Discover Magazine.

further reading
It’s shocking – How the press are hyping the benefits of electrical brain stimulation
Read this before zapping your brain
Bloggers behind the blogs: Neuroskeptic

A replication tour de force

In his famous 1974 lecture, Cargo Cult Science, Richard Feynman recalls his experience of suggesting to a psychology student that she should try to repeat a previous experiment before attempting a novel one:

“She was very delighted with this new idea, and went to her professor. And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time. This was in about 1947 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happened.”

Despite the popularity of the lecture, few took his comments about lack of replication in psychology seriously – and least of all psychologists. Another 40 years would pass before psychologists turned a critical eye on just how often they bother to replicate each other’s experiments. In 2012, US psychologist Matthew Makel and colleagues surveyed the top 100 psychology journals since 1900 and estimated that for every 1000 papers published, just two sought to closely replicate a previous study. Feynman’s instincts, it seems, were spot on.

Now, after decades of the status quo, psychology is finally coming to terms with the idea that replication is a vital ingredient in the recipe of discovery. The latest issue of the journal Social Psychology reports an impressive 15 papers that attempted to replicate influential findings related to personality and social cognition. Are men really more distressed by infidelity than women? Does pleasant music influence consumer choice? Is there an automatic link between cleanliness and moral judgements?


Several phenomena replicated successfully. An influential finding by Stanley Schachter from 1951 on ‘deviation rejection’ was successfully repeated by Eric Wesselmann and colleagues. Schachter had originally found that individuals whose opinions persistently deviate from a group norm tend to be disempowered by the group and socially isolated. Wesselmann replicated the result, though the effect was smaller than originally supposed.

On the other hand, many supposedly ‘classic’ effects could not be found. For instance, there appears to be no evidence that making people feel physically warm promotes social warmth, that asking people to recall immoral behaviour makes the environment seem darker, or for the Romeo and Juliet effect.

The flagship of the special issue is the Many Labs project, a remarkable effort in which 50 psychologists located in 36 labs worldwide collaborated to replicate 13 key findings, across a sample of more than 6000 participants. Ten of the effects replicated successfully.

Adding further credibility to this enterprise, each of the studies reported in the special issue was pre-registered and peer reviewed before the authors collected data. Study pre-registration ensures that researchers adhere to the scientific method and is rapidly emerging as a vital tool for increasing the credibility and reliability of psychological science.

The entire issue is open access and well worth a read. I think Feynman would be glad to see psychology leaving the cargo cult behind and, for that, psychology can be proud too.

– Further reading: A special issue of The Psychologist on issues surrounding replication in psychology.

_________________________________

Klein, R., Ratliff, K., Vianello, M., Adams, Jr., R., Bahník, Š., Bernstein, M., Bocian, K., Brandt, M., Brooks, B., Brumbaugh, C., Cemalcilar, Z., Chandler, J., Cheong, W., Davis, W., Devos, T., Eisner, M., Frankowska, N., Furrow, D., Galliani, E., Hasselman, F., Hicks, J., Hovermale, J., Hunt, S., Huntsinger, J., IJzerman, H., John, M., Joy-Gaba, J., Kappes, H., Krueger, L., Kurtz, J., Levitan, C., Mallett, R., Morris, W., Nelson, A., Nier, J., Packard, G., Pilati, R., Rutchick, A., Schmidt, K., Skorinko, J., Smith, R., Steiner, T., Storbeck, J., Van Swol, L., Thompson, D., van ’t Veer, A., Vaughn, L., Vranka, M., Wichman, A., Woodzicka, J., & Nosek, B. (2014). Data from Investigating Variation in Replicability: A “Many Labs” Replication Project. Journal of Open Psychology Data, 2 (1). DOI: 10.5334/jopd.ad

Post written for the BPS Research Digest by guest host Chris Chambers, senior research fellow in cognitive neuroscience at the School of Psychology, Cardiff University, and contributor to the Guardian psychology blog, Headquarters.