Millions of people around the world spend time and money on healthcare remedies that mainstream science considers ineffective (in the sense of being no more effective than a placebo), like homeopathy and acupuncture. A study published recently in Psychology and Health investigated how to address this issue in the context of multivitamins, which evidence suggests provide no benefit for healthy people – and may even cause harm in some contexts.
Despite this research evidence, huge numbers of healthy people take multivitamins because the pills appear to work. Scientists refer to this as the “illusion of causality”: when someone takes a vitamin and then their cold goes away, for example, they may believe it was the vitamin that cured them, even though they would have recovered just as quickly anyway. Past research has shown that simply giving people the raw outcomes of clinical trials that show remedies to be ineffective does not necessarily combat this problem, perhaps because the data can involve large numbers and complex findings, which are difficult for the public to interpret.
Douglas MacFarlane and colleagues from the University of Western Australia have explored how to better inoculate people against this illusion. The researchers report that people need to be told clearly about the proportion of people who benefit from the remedy versus the proportion who benefit from a placebo – and this data has to be accompanied by a scientific explanation for why the remedy is ineffective.
The researchers recruited 245 undergrad participants and split them into several groups, each of which was given a different level of information about the effectiveness of a multivitamin. At one extreme, a control group received no information about the number of people who benefited from the remedy, and was simply told that there was insufficient evidence to make a recommendation for or against using it. At the other extreme, another group was told that 3 out of 4 people benefited from the pills, but that 3 out of 4 people also benefited from a placebo, so the evidence showed that the remedy doesn’t have any health benefit. This group was also given a scientific explanation for why this should be the case: healthy people already receive enough vitamins in their diet.
There were also various intermediate groups, such as one that received information about the proportion of people who benefit from the pills, but no information on the effects of a placebo, and another that received information about both the pills and the placebo, but no scientific explanation.
Afterwards, all participants were asked how much they would be willing to pay for a tube of the multivitamins. The only group that showed a reduced willingness to pay for the pills was the one given full information on the pills’ efficacy compared with a placebo, together with the scientific explanation of why they are not effective.
The results suggest that simplified frequencies showing how many people have and haven’t benefited from a remedy compared with placebo could help prevent the illusion of causality. This strategy could be used by health authorities to help people make healthcare decisions, say the authors. At the moment, they write, bodies like the National Institutes of Health in the United States often frame disclaimers about ineffective remedies in diplomatic language about “insufficient clinical evidence”, rather than giving people this kind of simple, explicit data.
Importantly though, the new findings suggest these frequencies are only useful when given alongside a scientific explanation. “This component may serve to fill the mental gap created when a prior belief is challenged by scientific evidence,” write the authors.
This conclusion comes with a hefty caveat, however. Because only one group was actually given the scientific explanation, it remains unclear based on the current results whether the explanation and the simple frequencies are both necessary. A scientific explanation about why a remedy is ineffective could, in theory, reduce a person’s willingness to pay regardless of how much they know about the underlying data. Of course, scientific explanations alone can be pretty unsuccessful at changing people’s minds, so this possibility seems unlikely – but it would be nice to see this question incorporated into the design of a future study.