Most people who undertake psychotherapy seem to benefit from it. How do we know? Arguably, the most important evidence comes from meta-analyses that combine the results from many – sometimes hundreds – of randomised controlled trials. Based on this, it’s been estimated that psychotherapy is effective for about 80 per cent of people (meanwhile, between five and 10 per cent of clients may suffer adverse effects).
But now the more concerning news: a team of researchers led by Evangelos Evangelou at the University of Ioannina, Greece, has assessed the quality of 247 of these psychotherapy meta-analyses and they report in Acta Psychiatrica Scandinavica that many of them have serious methodological shortcomings.
Coincidentally, a separate research group led by Brent Roberts at the University of Illinois, Urbana-Champaign, has just published in the Journal of Personality some of the first observational data on how people’s personalities change after undertaking psychotherapy. In contrast to what’s been found in the clinical literature, they report that people who’ve been in therapy seem to show negative changes in personality and other psychological outcomes.
In their “umbrella review” of the psychotherapy literature, Evangelou and his team chose to include published meta-analyses covering any form of psychotherapy and almost any kind of target mental health condition, but with the proviso that each meta-analysis had itself combined the results from a minimum of ten studies. Their comprehensive search resulted in 247 unique meta-analyses which collectively synthesised data from over five thousand randomised controlled trials.
Overall, 80 per cent of the published psychotherapy meta-analyses had reported a significant and positive benefit of whatever form of psychotherapy was their focus. This sounds impressive at first, but after applying “state-of-the-art” tests of their robustness, Evangelou and his colleagues report that just 16 of the 247 meta-analyses had provided “convincing evidence”.
The researchers identified a number of issues:
- Many meta-analyses showed a statistically significant amount of heterogeneity between the trials that they’d combined. The worry is that too many meta-analyses are comparing apples and oranges, although there is scholarly debate about what level of heterogeneity is unacceptable.
- The researchers found many instances of the “small study bias”, which is the tendency for smaller, less robust studies to report larger effects.
- They found evidence of “excess significance bias”, which is when an over-abundance of trials report positive findings, given what we know so far about psychotherapy’s effectiveness. This suggests that negative findings are going unpublished for whatever reason.
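To give a flavour of what two of these checks involve, here is a minimal sketch using entirely hypothetical trial effect sizes (the function names and numbers are illustrative, not from the paper): Cochran’s Q and the I² statistic quantify heterogeneity between trials, and Egger’s regression probes small-study bias by asking whether less precise (smaller) trials systematically report larger standardised effects.

```python
# Illustrative sketch only: hypothetical effect sizes and variances,
# not data from the Evangelou et al. review.
import numpy as np

def i_squared(effects, variances):
    """Cochran's Q and the I-squared heterogeneity statistic (in %)."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)        # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

def egger_test(effects, variances):
    """Egger's regression: standardised effect vs precision.
    An intercept well away from zero suggests small-study bias."""
    se = np.sqrt(np.asarray(variances, dtype=float))
    z = np.asarray(effects, dtype=float) / se      # standardised effects
    precision = 1.0 / se
    # ordinary least squares: z = intercept + slope * precision
    intercept, slope = np.polynomial.polynomial.polyfit(precision, z, 1)
    return intercept, slope

# Hypothetical pattern: the smallest trials (largest variances)
# report the largest effects.
effects = [1.2, 0.9, 0.6, 0.3, 0.2]
variances = [0.20, 0.15, 0.08, 0.03, 0.02]

q, i2 = i_squared(effects, variances)
intercept, _ = egger_test(effects, variances)
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%, Egger intercept = {intercept:.2f}")
```

On this made-up data the positive Egger intercept flags exactly the small-study pattern described above; real meta-analyses apply these diagnostics (and excess-significance tests) across dozens of trials.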
Evangelou and his team conclude that the field of psychotherapy research needs to work harder to ensure that negative results are published as well as good news results, especially given the findings of another recent paper suggesting that the field has a problem with undeclared researcher allegiance to particular therapeutic approaches. One way round this is to ensure all trials and meta-analyses are preregistered before they are conducted, alongside information on the statistical tests that are planned.
For the reader undertaking psychotherapy or who knows someone who is, it is worth keeping some perspective: this is just one critique and the weight of evidence still suggests that psychotherapy is, more often than not, beneficial.
For scholars, the methodological concerns raised by this new paper feed into an already contentious field. Some experts believe that, high-quality or not, randomised controlled trials (and by extension, meta-analyses based on those trials) are not really an appropriate way to gauge the effectiveness of psychotherapy because of the myriad complex factors involved in the dynamic between a client and his or her therapist.
An alternative approach is to look at observational data. Rather than signing people up to a controlled trial, with all the contrivances that entails (such as standardising the delivery of therapy as much as possible), this approach is less hands-on and involves looking instead at the outcomes of people who happen to have been in therapy and those who haven’t.
Brent Roberts and his colleagues found two sources of this kind of data: hundreds of students in Tübingen who were enrolled in a long-term personality study and who’d completed measures twice across four years; and a group of thousands of older Americans who similarly had completed personality and other measures twice across four years.
Crucially, both these longitudinal surveys included a question about whether the participants had undertaken psychotherapy in the intervening period between the two data collection points. One hundred and twenty-eight of the Tübingen students had completed some therapy and, compared to the other students, they showed negative changes in their personality: higher neuroticism, lower extraversion and conscientiousness, as well as reduced self-esteem, increased depression, and lower life satisfaction. It was a similar story for the older American adults who’d been in therapy: they showed negative changes in personality and other psychological outcomes.
There are problems with how to interpret these findings – an obvious shortcoming of observational data of this kind is that it’s less controlled than an experimental trial. For example, perhaps undertaking therapy was a consequence of these unwelcome psychological changes rather than the cause (although this still wouldn’t explain why the therapy hadn’t been more helpful, more often). Whatever the explanation, however, the results stand in contrast to the findings from controlled psychotherapy trials, which have pointed overwhelmingly to positive personality changes arising from therapy. As Roberts and his team conclude, “the gravity of the issue necessitates that researchers investigate the apparent discrepancy between these findings and those from well-controlled trials”.