By guest blogger Simon Oxenham
Historically, the false memories that psychologists have induced in volunteers have been relatively mundane. For example, a seminal study used leading questions and encouragement to confabulate to apparently implant in participants the memory of getting lost in a shopping mall as a child. This reliance on mundane false memories has been problematic for experts who believe that false memories have critical real-world consequences, from criminal trials involving false murder confessions to memories of child abuse “recovered” during therapy using controversial techniques.
The discrepancy between psychologists’ lab results and their real-world claims vanished abruptly in 2015, when Julia Shaw (then at the University of Bedfordshire) and Stephen Porter (University of British Columbia) shocked the memory research community with a staggering finding: over several interview sessions, and using false accounts purportedly from the participants’ own caregivers, they had successfully implanted false memories of having committed a crime as a teenager, ranging from theft to assault with a weapon, in 70 per cent of their participants. But now other experts have raised doubts about these claims.
When first released, Shaw’s and Porter’s findings, published in Psychological Science, received widespread coverage, perhaps most prominently in the US Public Broadcasting Service (PBS) documentary Memory Hackers (see clip above). I also contributed to the media reaction, covering the study in detail on my blog at Big Think, writing that “the risk of forming shocking false memories … may be greater than previously thought”.
However, the famous 2015 findings have now been called into question by a reanalysis of the data, published (also in Psychological Science) by Kimberley Wade at the University of Warwick and her colleagues. According to the reanalysis, most of the false memories Shaw and Porter produced weren’t really false memories at all. Rather than using trained judges to determine, following established criteria, whether participants had truly recalled committing a crime, Shaw and Porter had devised new criteria of their own for what constitutes a false memory. These included:
- answering “yes” to the question “Did you believe that you had forgotten the event and that it actually happened?”
- citing 10 critical false details presented by the researchers
- providing a basic account of the false event in response to the instruction “tell me everything you remember from start to finish”
“The problem with the [false details] criterion,” Wade explains, “is that subjects in false memory studies frequently speculate and imagine, and they talk out loud about what the suggested experience could have been like. They may say ‘I think my classmate was wearing a red t-shirt, he wore that a lot, and it must have happened on my street, I guess, near my house’. But in the next sentence, they might say ‘Well, I can imagine it happening but I just don’t remember it at all’. In a case like this, Shaw and Porter would say that the subject recalled 6 details in this first sentence alone (i.e. classmate + red + t-shirt + wore a lot + my street + near house).” According to Wade et al., this approach fails to distinguish between people who merely speculated about the possibility that they committed a crime and people who appeared to genuinely remember doing so. This may have led to a wildly inflated false memory statistic, they argue.
When Wade’s team recoded Shaw and Porter’s data using the latter’s own scheme, they replicated the startling 70 per cent result. However, when they used a more traditional coding scheme, they found that only 30 per cent of subjects met the criteria for false memories, with a further 43 per cent showing evidence of holding false beliefs (believing they’d committed the made-up crime, but not remembering it). This still makes Shaw and Porter’s 2015 paper groundbreaking, given the extreme nature of the false events. However, the recalculated stats are more in keeping with previous false-memory findings.
To test whether laypeople would share their doubts about Shaw and Porter’s interpretation, Wade’s team presented transcripts of Shaw and Porter’s data to 300 participants recruited via Amazon’s Mechanical Turk survey website. In cases that Shaw and Porter had classified as a false memory but that Wade’s team reclassified as a false belief, the participants expressed low confidence that the reports showed evidence of remembering.
Shaw, who is now affiliated with UCL, has issued a rebuttal to Wade et al.’s critique, arguing that existing coding strategies are inadequate. “My argument is that I coded my data (intentionally) differently,” says Shaw. “One could argue that across all currently accepted types of coding it is agreed that a significant number of individuals in this study came to believe they committed a crime that never happened.” Shaw also questions the use of laypeople as judges, saying “There is no reason to assume that laypeople would be any good at identifying or defining false memories”.
The debate raises an interesting question: if someone can be convinced that a false event is real, does it matter whether they remember it or not? Shaw says that “such a distinction was not seen as particularly relevant for the false confessions literature to which the study was intended to contribute”. Regardless of the true nature of the memories in her study, I believe the coding method she produced undoubtedly led to a truly groundbreaking discovery about false beliefs, and that in this respect her approach has been further validated by Wade et al.’s reanalysis.
Further muddying the waters, in an entirely separate development, Nicholas Brown (a PhD candidate in health psychology at UMC Groningen) and James Heathers (an applied physiologist at Northeastern University in Boston) recently found a series of statistical errors in the Shaw and Porter paper. However, the errors do not alter the headline findings. Shaw has issued a correction that’s due for publication and she says the errors were simply caused by rounding values imprecisely using Microsoft Excel and missing cells in the spreadsheet when conducting calculations. Brown and Heathers discovered the errors while developing their GRIM technique for detecting statistical anomalies in psychology research. When they applied their technique to 71 recent studies, including the Shaw and Porter paper, they found half appeared to contain at least one inconsistent result.
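The core idea behind GRIM (Granularity-Related Inconsistency of Means) is simple arithmetic: if data are integers (such as Likert-scale responses), then for a given sample size only certain means are possible, so a reported mean can be checked for consistency. The sketch below is my own minimal illustration of that idea in Python, not Brown and Heathers’ actual code, and the example means are hypothetical:

```python
def grim_consistent(reported_mean: str, n: int) -> bool:
    """GRIM check: could a mean reported to this many decimal places
    arise from n integer-valued observations?

    reported_mean is passed as a string so we can count the decimal
    places it was reported to (e.g. "2.57" -> 2 decimals).
    """
    decimals = len(reported_mean.split(".")[1]) if "." in reported_mean else 0
    mean = float(reported_mean)
    # The integer total closest to mean * n, plus neighbours to cover
    # rounding ambiguity at the boundary.
    candidate = round(mean * n)
    for total in (candidate - 1, candidate, candidate + 1):
        if total >= 0 and round(total / n, decimals) == mean:
            return True
    return False


# With n = 7 integer scores, possible means are multiples of 1/7:
# 18/7 = 2.571... rounds to 2.57, so that mean is achievable...
print(grim_consistent("2.57", 7))   # True
# ...but no integer total over 7 observations rounds to 2.50.
print(grim_consistent("2.50", 7))   # False
```

Running a check like this over every mean in a paper is cheap, which is how Brown and Heathers could screen dozens of studies at once; an inconsistent mean doesn’t prove misconduct, only that something (often rounding or a spreadsheet slip, as in Shaw’s case) went wrong in the reporting.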
Post written by Simon Oxenham for the BPS Research Digest. Simon covers psychology and neuroscience critically in his Brain Scanner column at New Scientist. Follow @simoxenham on Twitter, Facebook and Google+, RSS or on his mailing list.