
Do you do voodoo?


14 January 2009

By Christian Jarrett

They are beloved by prestigious journals and the popular press, but many recent social neuroscience studies are profoundly flawed, according to a devastating critique – Voodoo Correlations in Social Neuroscience – in press at Perspectives on Psychological Science.

The studies in question have tended to claim astonishingly high correlations between localised areas of brain activity and specific psychological measures. For example, in 2003, Naomi Eisenberger at the University of California, Los Angeles, and her colleagues published a paper purporting to show that levels of self-reported rejection correlated at r=.88 (1.0 would be a perfect correlation) with levels of activity in the anterior cingulate cortex.

According to Hal Pashler and his band of methodological whistle-blowers, if Eisenberger’s study and others like it were accurate, this “would be a milestone in understanding of brain-behaviour linkages, full of promise for potential diagnostic and therapeutic spin-offs.” Unfortunately, Pashler’s group argue that the findings from many of these recent studies are virtually meaningless.

The suspicions of Pashler and his colleagues – Ed Vul (lead author), Christine Harris and Piotr Winkielman – were aroused when they realised that many of the correlations cited in social neuroscience were impossibly high given the reliabilities of brain activity measures and of psychological measures, such as rejection. To investigate further they conducted a literature search and surveyed the authors of 54 studies claiming significant brain-behaviour correlations. The search wasn’t exhaustive but was thought to be representative, with a slight bias towards higher-impact journals.
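The arithmetic behind that suspicion comes from classical test theory: the correlation observable between two noisy measures cannot exceed the geometric mean of their reliabilities. As a minimal sketch (the reliability figures below are the rough ballpark estimates the critique works with, not exact values from any one study):

```python
import math

# Attenuation ceiling from classical test theory: an observed correlation
# cannot exceed the geometric mean of the two measures' reliabilities.
reliability_fmri = 0.7       # rough test-retest reliability of fMRI measures
reliability_behaviour = 0.8  # rough reliability of personality/emotion scales

ceiling = math.sqrt(reliability_fmri * reliability_behaviour)
print(f"Maximum observable correlation: {ceiling:.2f}")  # about .74
```

On those numbers the ceiling sits at around .74, which is why reported correlations of .88 and higher looked suspicious on their face.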

Pashler and his team found that 54 per cent of the studies had used a seriously biased method of analysis, a problem that probably also undermines the findings of fMRI studies in other fields of psychology. These researchers had first identified the small volume elements of the brain scan (called voxels) whose activity varied with the experimental condition of interest (e.g. being rejected or not), and had then focused on just those voxels whose correlation with the psychological measure of interest (e.g. feeling rejected) exceeded a given threshold. Finally, they had arrived at their published brain-behaviour correlation figures by taking the average correlation from among just this select group of voxels, or in some cases from just one “peak voxel”. Pashler’s team contend that, by following this procedure, it would have been nearly impossible for the studies not to find a significant brain-behaviour correlation.
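The inflation is easy to demonstrate. Here is a minimal simulation (not the authors’ code; the sample size, voxel count and threshold are illustrative assumptions) in which the “brain” data are pure noise, yet the non-independent procedure still returns an impressive-looking correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects = 16    # in the range typical of the surveyed studies
n_voxels = 10_000  # voxels tested across the brain
threshold = 0.6    # illustrative selection threshold

# Pure noise: by construction there is NO brain-behaviour relationship.
behaviour = rng.standard_normal(n_subjects)
voxels = rng.standard_normal((n_voxels, n_subjects))

def corr_with_behaviour(data, scores):
    """Pearson correlation of every voxel (row) with the behavioural scores."""
    data_z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    scores_z = (scores - scores.mean()) / scores.std()
    return data_z @ scores_z / len(scores)

r = corr_with_behaviour(voxels, behaviour)

# The non-independent step: keep only voxels whose correlation clears the
# threshold, then report the average correlation among the survivors.
selected = r[np.abs(r) > threshold]
print(f"{selected.size} voxels survive the threshold")
print(f"Mean |r| among selected voxels: {np.abs(selected).mean():.2f}")
```

Because only voxels that already cleared the threshold enter the average, the reported figure cannot come out below the threshold, however random the underlying data: with these settings the simulation typically selects on the order of a hundred noise voxels and reports a mean correlation of around .65.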

By analogy with a purely behavioural experiment, imagine the author of a new psychometric measure claiming that his test correlated with a target psychological construct, when in fact he had arrived at his significant correlation only after first restricting the analysis to just those items that correlated with the construct. Indeed, Pashler and his collaborators speculated that the editors and reviewers of mainstream psychology journals would routinely pick up on the kind of flaws seen in imaging-based social neuroscience, but that the novelty and complexity of this new field has meant such mistakes have slipped through the net.

‘…[I]n half of the studies we surveyed, the reported correlation coefficients mean almost nothing, because they are systematically inflated by the biased analysis,’ Pashler’s team wrote. Perhaps unsurprisingly, among the papers they surveyed, those that used this flawed approach tended to report the highest correlations. ‘…[W]e suspect that while in many cases the reported relationships probably reflect some underlying relationship (albeit a much weaker relationship than the numbers in the articles implied), it is quite possible that a considerable number of relationships reported in this literature are entirely illusory.’

On a more positive note, Pashler’s team say there are ways to analyse social neuroscience data without bias, and that it should be possible for the authors of many of the studies they’ve criticised to re-analyse their data. One approach is to identify voxels of interest by anatomical region in advance, before testing whether their activity levels correlate with a target psychological factor. An alternative is to use separate sets of data for the separate steps of the analysis: for example, using one run in the scanner to identify the voxels that correlate with a psychological measure, and then a second, independent run to assess how highly that subset of voxels correlates with the same measure. “We urge investigators whose results have been questioned here to perform such analyses and to correct the record by publishing follow-up errata that provide valid numbers,” Pashler’s team said.
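A minimal sketch of that second, cross-validation-style remedy, continuing the noise simulation above (again with illustrative assumptions, not the authors’ pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

n_subjects, n_voxels, threshold = 16, 10_000, 0.6

behaviour = rng.standard_normal(n_subjects)
# Two independent scanner runs; as before, pure noise with no true effect.
run1 = rng.standard_normal((n_voxels, n_subjects))
run2 = rng.standard_normal((n_voxels, n_subjects))

def corr_with_behaviour(data, scores):
    """Pearson correlation of every voxel (row) with the behavioural scores."""
    data_z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    scores_z = (scores - scores.mean()) / scores.std()
    return data_z @ scores_z / len(scores)

# Step 1: select voxels using run 1 only.
selected = np.abs(corr_with_behaviour(run1, behaviour)) > threshold

# Step 2: estimate the correlation for those voxels on run 2, which
# played no part in the selection, so the estimate is not inflated by it.
unbiased = corr_with_behaviour(run2[selected], behaviour)
print(f"{selected.sum()} voxels selected on run 1")
print(f"Mean r on the independent run 2: {unbiased.mean():.2f}")
```

Because run 2 plays no part in choosing the voxels, the second estimate is free of selection bias: with no true effect in the data it hovers near zero, instead of the .65 or so the non-independent procedure reported.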

Matthew Lieberman, a co-author on Eisenberger’s social rejection study, told us that he and his colleagues have drafted a robust reply to these methodological accusations, which will be published in Perspectives on Psychological Science alongside the Pashler paper (now available online). In particular, he stressed that concerns over multiple comparisons in fMRI research are not new and not specific to social neuroscience, and that the methodological approach of the Pashler group, done correctly, would lead to similar results to those already published. “There are numerous errors in their handling of the data that they reanalyzed,” he argued. “While trying to recreate their [most damning] Figure 5, we went through and pulled all the correlations from all the papers. We found around 50 correlations that were clearly in the papers Pashler’s team reviewed but were not included in their analyses. Almost all of these overlooked correlations tend to work against their hypotheses.”

Update: Lead author of the Pashler group’s Voodoo critique, Ed Vul, answers some questions about the paper here. He also responds to a rebuttal here. Picked up from the comments, Lieberman’s rebuttal is now also available online in full.

Further reading

Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Voodoo correlations in social neuroscience. Perspectives on Psychological Science. In press.