Widely Used Neuroimaging Analyses Allow Almost Any Result To Be Presented As A Successful Replication, Paper Claims

Of 135 surveyed fMRI papers that contained claims of replicating previous findings, over 40 per cent did not even compare the coordinates of peak activity within brain regions – a flawed approach that allows almost any result to be claimed as a successful replication (from YongWook Hong et al, 2019)

By Matthew Warren

As the list of failed replications continues to build, psychology’s reproducibility crisis is becoming harder to ignore. Now, in a new paper that seems likely to ruffle a few feathers, researchers suggest that even many apparently successful replications in neuroimaging research could be standing on shaky ground. As the paper’s title bluntly puts it, the way imaging results are currently analysed “allows presenting anything as a replicated finding.”

The provocative argument is put forward by YongWook Hong from Sungkyunkwan University in South Korea and colleagues, in a preprint posted recently to bioRxiv. The fundamental problem, say the researchers, is that scientists conducting neuroimaging research tend to make and test hypotheses with reference to large brain structures. Yet neuroimaging techniques, particularly functional magnetic resonance imaging (fMRI), gather data at a much more fine-grained resolution. 

This means that strikingly different patterns of brain activity could produce what appears to be the same result. For example, one lab might find that a face recognition task activates the amygdala (a structure found on each side of the brain that’s involved in emotional processing). Later, another lab apparently replicates this finding, showing activation in the same structure during the same task. But the amygdala contains hundreds of individual “voxels”, the three-dimensional pixels that form the basic unit of fMRI data. So the second lab could have found activity in a completely different part of the amygdala, yet it would appear that they had replicated the original result. 
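To make the voxel-versus-region point concrete, here is a minimal sketch (NumPy only, with made-up numbers – the voxel counts and activation values are illustrative assumptions, not figures from the paper) of how two very different voxel-level patterns can produce the same region-level summary:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 300  # illustrative size for a small region's voxel mask at standard fMRI resolution

# "Original" study: activation concentrated in one half of the region.
original = np.zeros(n_voxels)
original[:150] = rng.normal(1.0, 0.2, 150)

# "Replication": activation concentrated in the *other* half of the region.
replication = np.zeros(n_voxels)
replication[150:] = rng.normal(1.0, 0.2, 150)

# The region-average activity is essentially identical in both "studies"...
print(original.mean(), replication.mean())        # both ~0.5

# ...but the voxel-wise patterns disagree almost completely.
print(np.corrcoef(original, replication)[0, 1])   # strongly negative correlation
```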

The authors’ arguments are not simply theoretical: in a series of experiments, Hong and colleagues demonstrate that replications are not always what they seem. The team used an imaging dataset they’d collected and published in a previous study, which examined the similarity of the brain’s responses to physical pain and social rejection. First, they divided the data into two groups, containing 30 and 29 participants respectively. For the sake of their demonstration, they treated these groups as two separate cohorts: the first as the “original” cohort, and the second as a cohort used for a replication attempt.

Following the kind of approach typically used in many neuroimaging experiments and replication attempts, they then examined the brain activity exhibited in response to rejection and pain in each of these groups, focusing on an area in the front of the brain called the dorsal anterior cingulate cortex (dACC). The first group showed increased activity in the dACC to both rejection and pain – and so did the second, suggesting that the initial result had been successfully replicated.

But when the team looked within the dACC to see where in the structure activation was strongest, they found that the groups differed considerably. For the social rejection condition in particular, the location of this “peak” activation for the second group was a whopping 43mm away from the peak for the first group, at the opposite end of the dACC.  
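For readers curious what a peak-coordinate comparison actually involves, here is a small illustrative calculation (the coordinates below are hypothetical, not the ones reported in the preprint): each peak is just a 3D point in millimetre space, and the separation between them is the Euclidean distance.

```python
import numpy as np

# Hypothetical peak coordinates (x, y, z in mm) for illustration only.
peak_original    = np.array([2, 22, 30])
peak_replication = np.array([4, -18, 44])

distance_mm = np.linalg.norm(peak_replication - peak_original)
print(f"Distance between peaks: {distance_mm:.1f} mm")
```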

These results might seem to suggest that the location of peak activation could provide a better indication of whether a finding has replicated than looking at average activity levels across a brain area. But in a subsequent experiment, the authors found that the location of peak activation bore little relationship to the overall pattern of activity: much of the information in the underlying distribution of activity is missed by looking at the peaks alone.

The authors say that pattern-based analysis methods are a better option for figuring out whether or not a result has been replicated. Rather than simply looking at the average activity across a brain structure or where peak activation is, this kind of analysis uses information about the distribution of activity across many voxels and brain regions. This would allow researchers to directly compare the similarity of overall patterns of activity between studies. Alongside other improved methods of analysis, the authors say, “these practices will provide researchers with more robust spatial tests, helping us move one step towards resolving the current replication crisis in neuroimaging studies.”
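As a rough illustration of the pattern-based idea (a sketch with simulated data, not the authors’ actual analysis pipeline), one simple version is to correlate the full voxel-wise activation maps from the two studies, rather than comparing region averages or peak locations:

```python
import numpy as np

def pattern_similarity(map_a, map_b):
    """Pearson correlation between two vectorised activation maps
    restricted to the same set of voxels."""
    return float(np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1])

rng = np.random.default_rng(1)
shared_signal = rng.normal(0, 1, 500)  # a common underlying pattern over 500 voxels

original_map    = shared_signal + rng.normal(0, 0.5, 500)  # noisy measurement of it
replication_map = shared_signal + rng.normal(0, 0.5, 500)  # another noisy measurement
unrelated_map   = rng.normal(0, 1, 500)                    # no shared signal at all

print(pattern_similarity(original_map, replication_map))   # high (~0.8): looks replicated
print(pattern_similarity(original_map, unrelated_map))     # near zero: does not
```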

But they may be facing an uphill battle. In a survey of 135 previously published papers that claimed to have successfully replicated previous findings, Hong and his colleagues found that 85.3 per cent based their conclusions merely on finding activity in the same brain region as the original study, and more than 40 per cent didn’t even compare peak co-ordinates. A meagre 7.4 per cent conducted the kind of pattern-based analysis that Hong’s team recommends. This suggests that it may be a while before the authors see their recommendations put into practice.

— False-positive neuroimaging: Undisclosed flexibility in testing spatial hypotheses allows presenting anything as a replicated finding [this paper is a preprint, meaning that it has not yet been subjected to peer review, and the final version published in a journal may differ from the version on which this report was based]

Matthew Warren (@MattbWarren) is Staff Writer at BPS Research Digest
