Why We Continue to Believe False Information Even After We’ve Learned It’s Not True


By guest blogger Rhi Willmot

Is your mental library a haven of accurate and well-informed facts, or are there mistruths hiding on the shelves? It’s natural to assume that we update our beliefs in line with the most recent and well-established evidence. But what really happens to our views when a celebrity endorses a product that becomes discredited by science, or when a newspaper publishes a story which is later retracted?

A recent paper in the Journal of Consumer Psychology presents a novel take on this topic by investigating the continued influence effect. Anne Hamby and colleagues suggest that our likelihood of continuing to believe retracted information depends on whether or not it helps us to understand the cause-and-effect structure of an event. Crucially, the team proposes, we would rather have a complete understanding of why things happen than a perspective which is more accurate, but less complete.

In the first study, participants read a scenario in which an unwell character takes medication that fails to cure him. Whilst one group was informed the drug was ineffective because the character took it at the wrong time, the other group was given no explanation. Both groups were told the character took the medication with a glass of lemonade, and were then informed the drug is ineffective if consumed with citrus-based drinks. Later, all participants were notified that this last fact was untrue.

A day later, participants were asked to recall why the drug had been ineffective. Those who received no explanation for the drug failure in the original anecdote were more likely to incorrectly reference the lemonade drink as the reason why the medication didn’t work — even though most remembered that this information had later been retracted. This suggests information we later discover to be false is harder to ignore when it fills an explanatory “gap” in a story.

This idea was further supported by a second study, in which participants read the following extract from a successful poker player: “I reach down and pull out my bottle of kombucha which I like to drink at poker matches. I take a long deep swig from the bottle. And I have clarity of mind. I fold.” One group of participants was informed kombucha supports mental performance whilst another group was told the drink increases muscular function. This allowed those in the mental performance group to infer a link between kombucha drinking and poker success, whereas the muscular performance group could not. Later, all individuals were told the link between kombucha and mental performance or muscular function was untrue.

Participants who read that kombucha was related to mental performance were more likely to recall the false information as true, and more likely to attribute it to the poker player’s win. When offered a selection of beverages to drink after the experiment, this group was also more likely to select kombucha. Again, it seems that when information supports our understanding of a story’s cause-and-effect structure, it is particularly “sticky”, even when we are told it isn’t true.

In a final study, the team repeated the original failed-medication scenario, but gave half the sample a positive ending to the story, in which the patient changes other aspects of his routine and improves. The other half were given a negative ending, in which the patient is unable to receive further treatment and does not get better. The researchers found the retracted information about citrus drinks had less of a continued influence on those who read the negative ending. They suggest this is because we are more motivated to accurately understand what leads to negative outcomes, as this will help us survive similar experiences in future.

As a whole, it seems we have a general bias toward creating a complete mental picture of an event, rather than one that is factually accurate but lacking a cause-and-effect explanation. However, Hamby and colleagues also indicate this bias can be overcome when we recognise that prioritising accuracy over completeness will help us in future, or when a retraction offers an alternative explanation for the cause-and-effect structure of an event. Research such as this therefore makes us more aware of the chinks in our mental armour — and better equipped to defend them.

How Stories in Memory Perpetuate the Continued Influence of False Information

Post written by Rhi Willmot (@rhi_willmot) for BPS Research Digest. Rhi is a psychologist with an interest in wellbeing, and has explored how topics from positive psychology influence healthy lifestyle behaviour. As a keen runner, Rhi is also interested in the relationship between psychology and optimal performance. She has published internationally, and worked on a number of transdisciplinary programmes, including an initiative to reduce food waste via altering perceptions of “ugly” fruit and vegetables, and a project to enhance quality of life in deprived areas of Mexico.

At Research Digest we’re proud to showcase the expertise and writing talent of our community. Click here for more about our guest posts.


12 thoughts on “Why We Continue to Believe False Information Even After We’ve Learned It’s Not True”

  1. It is well known in physics education that students are resistant to changing from an Aristotelian view of motion to a Galilean one. This work might provide a clue as to why. If students believe they already understand motion, they may be less likely to accept a new view which, as part of a larger educational agenda necessarily constrained by time, is incomplete and thus offers less understanding. Perhaps this is an argument for metacognition in education: talking with students about their own learning, and about how difficult it is to accept a new understanding because of tendencies like those described in this article.

  2. I always find it hard to be fully convinced by psychology experiments that involve low or non-existent stakes. As it didn’t really matter to the participants whether they correctly remembered the given ‘facts’, is it of any significance that they misremembered them? What would happen if there were real stakes involved (‘get it right and we give you £10,000’; ‘get it wrong and we shoot your dog’)? Research budgets and those pesky ethics committees might get in the way, but without real stakes involved I’m not sure how much importance to accord it in explaining real-world events.

    1. This article intrigues me as it relates to how people believe incorrect news articles and “research” but seem to miss the retractions and counter research. In these cases, consumers aren’t facing life and death or high dollar decisions (in most cases) but could be making political or social choices. This research frightens me as it seems clear how it could be used to manipulate.

      1. Good point, though I feel like I already know that people are far from rational or well-informed when it comes to political views and voting. And in general people will have already been exposed to an endless stream of contradictory ‘information’, bunking, debunking and rebunking claims, so I’m not sure it is usually reducible to this simple case.

        I guess one could imagine it being relevant if there’s a particular big story/claim on the eve of a major vote (at least one recent example comes to mind).

  3. I can only surmise that the ‘kombucha’ experiment would also have to be influenced by a person’s belief, or lack thereof, in claims about the latest ‘superfood’. As far as I know kombucha only potentially improves gut bacteria, and would therefore be extremely unlikely to produce the effects claimed in the experiment.

    Just this small point makes me cynical as to whether the design of the experiment could produce useful data.
