This time there’s no explicit naming and shaming, and the title may not be as colourful, but a new study out today in prestige journal Nature Neuroscience echoes many of the same concerns voiced earlier this year in the leaked paper “Voodoo Correlations in Social Neuroscience” (since renamed as “Puzzlingly High Correlations …”). And the new paper’s implications are surely just as profound for the cognitive neuroscience community.
Nikolaus Kriegeskorte and colleagues analysed all the fMRI studies published in 2008 in Nature, Science, Nature Neuroscience, Neuron and the Journal of Neuroscience, and found that 42 per cent of these 134 papers were guilty of performing at least one non-independent selective analysis – what Kriegeskorte’s team dub “double dipping”.
This is the procedure, also condemned by the Voodoo paper, in which researchers first perform a whole-brain analysis to find one or more regions that respond to the condition of interest, before going on to test their hypothesis on data collected from just those regions. The cardinal sin is that the same data are used in both stages.
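A toy simulation makes the selection bias concrete. The numbers here (200 “voxels”, 40 “trials”, a top-10 selection) are arbitrary illustrative assumptions, not the paper’s procedure – the point is simply that selecting and testing on the same noise manufactures an effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure noise standing in for fMRI data: 200 "voxels" x 40 "trials",
# the first 20 trials from condition A, the last 20 from condition B.
n_voxels = 200
data = rng.standard_normal((n_voxels, 40))
cond_a, cond_b = data[:, :20], data[:, 20:]

# Step 1: pick the 10 voxels where A most exceeds B.
diff = cond_a.mean(axis=1) - cond_b.mean(axis=1)
selected = np.argsort(diff)[-10:]

# Circular ("double-dipped"): measure the A-vs-B effect in those voxels
# on the SAME data that chose them.
circular_effect = cond_a[selected].mean() - cond_b[selected].mean()

# Independent: select voxels using the first half of the trials,
# then measure the effect on the held-out second half.
half_diff = data[:, :10].mean(axis=1) - data[:, 20:30].mean(axis=1)
selected_half = np.argsort(half_diff)[-10:]
independent_effect = (data[selected_half, 10:20].mean()
                      - data[selected_half, 30:40].mean())

print(f"circular: {circular_effect:.2f}, independent: {independent_effect:.2f}")
```

Even though the “data” are pure noise, the circular estimate comes out clearly positive, while the independent estimate sits near zero – the selection step alone has created the apparent effect.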
A similarly flawed approach can be seen in brain imaging studies that claim to be able to discern a presented stimulus from patterns of activity recorded in a given brain area. These are the kind of studies that lead to “mind reading” headlines in the popular press. In this case, the alleged statistical crime is to use the same data for the training phase of pattern extraction and the subsequent hypothesis testing phase.
Kriegeskorte’s claim is not that all the studies guilty of this procedure are invalid, but that their results will have been distorted to varying degrees. “To decide which neuroscientific claims hold, the community needs to carefully consider each particular case, guided by both neuroscientific and statistical expertise,” they wrote.
To support their case, Kriegeskorte’s team performed two “mock” experiments of the “region of interest” and “pattern extraction” types. In each case they showed how double-dipping can drastically distort results. For example, in a mock pattern-information analysis they achieved a significant result with double-dipping even after feeding purely random data into the analysis.
The ramifications of these statistical observations don’t end with brain imaging. They also have implications for work with electroencephalography, in which researchers are prone to use the same data for selecting relevant channels and testing hypotheses, and for research using single-cell recording.
“A circular analysis is one whose assumptions distort its results,” the authors concluded. “We have demonstrated that practices that are widespread in neuroimaging are affected by circularity.”
UPDATE: A freely available PDF of supplementary info, including how to spot circular analyses and a proposed policy for preventing distortion of data, is now available at Nature Neuroscience.
Kriegeskorte, N., Simmons, W.K., Bellgowan, P.S.F., & Baker, C.I. (2009). Circular analysis in systems neuroscience: the dangers of double dipping. Nature Neuroscience. In press.
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.