A key facet of cognitive behavioural therapy is challenging “cognitive distortions”: inaccurate thought patterns that often affect people with depression. Such distortions include jumping to conclusions, catastrophising, black-and-white thinking, and self-blame, and they can cause genuine distress to those experiencing them.
But how do we track cognitive distortion in those with depression outside of self-reporting? A new study, published in Nature Human Behaviour, explores cognitive distortions online, finding that those with depression have higher levels of distortion in the language they use on social media.
Krishna Bathina from Indiana University Bloomington and colleagues looked at the language of over 6 million tweets from 7,349 Twitter accounts, some of whom had previously tweeted that they had a diagnosis of depression and others who were randomly selected. The researchers were specifically interested in how often these tweets contained 241 phrases which they considered to be the “building blocks” of various cognitive distortions associated with depression. For instance, the phrase “everyone believes” was taken to be part of the “mindreading” distortion, in which people think that they know what others are thinking. Other cognitive distortions include catastrophising, overgeneralisation, and discounting positive experiences.
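The counting step described above can be pictured with a minimal sketch (not the authors' code): the phrase lists below are tiny illustrative samples, whereas the study used 241 phrases spread across its distortion types.

```python
# Illustrative sketch of counting cognitive-distortion phrases in a tweet.
# The phrase lists are invented examples, not the study's actual lexicon
# (apart from "everyone believes", quoted in the article).
import re

DISTORTION_PHRASES = {
    "mindreading": ["everyone believes", "they think i"],
    "catastrophising": ["will never end", "will be a disaster"],
    "overgeneralising": ["nobody ever", "everything always"],
}

def count_distortions(tweet: str) -> dict:
    """Return per-distortion counts of phrase matches in one tweet."""
    text = tweet.lower()
    counts = {}
    for distortion, phrases in DISTORTION_PHRASES.items():
        counts[distortion] = sum(
            len(re.findall(r"\b" + re.escape(p) + r"\b", text))
            for p in phrases
        )
    return counts

print(count_distortions("Everyone believes I'm a failure and it will never end."))
```

In practice a study at this scale would also need tokenisation, deduplication, and filtering of retweets, none of which is shown here.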
Results showed that those who had tweeted about a diagnosis of depression used significantly more of the cognitive distortion phrases than those in the random condition. This was true for nearly all of the 12 cognitive distortion types, except for fortune-telling (predicting a negative outcome), mind reading and catastrophising, which were not significantly different between the two groups. The distortion types that were most prevalent in the depression cohort compared to the control group were personalising (taking things personally) and emotional reasoning (mistaking a feeling for a fact).
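A comparison of phrase prevalence between two cohorts like the one reported above is often done with a two-proportion z-test; the following sketch uses entirely made-up counts, not figures from the study.

```python
# Illustrative two-proportion z-test of the kind that could compare how
# often distortion phrases appear in two cohorts. The counts passed in
# below are hypothetical placeholders, not data from the paper.
from math import sqrt, erf

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Return (z, two-sided p) for the difference between two proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal tail.
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Hypothetical: 1,200 of 10,000 tweets in cohort A contain a phrase,
# versus 800 of 10,000 in cohort B.
z, p = two_proportion_z(1200, 10_000, 800, 10_000)
```

With counts this large even a modest difference in proportions yields a very small p-value, which is why studies like this one can detect effects across millions of tweets.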
This suggests that depression could be tracked via language, particularly online, where people may be more open about what they’re thinking and feeling. The team also suggests that the findings could influence the way cognitive behavioural therapy is delivered: how certain types of language are used can reflect specific cognitive distortions, and therefore give better insight for targeted, relevant treatment.
But although all data was anonymised, the team acknowledges that scraping data on mental health from users of social platforms throws up tricky ethical issues; some users have expressed serious discomfort at their personal information being used without consent. The team suggests that in the future “automated interventions” for depression could target people using such language on social media — but when so many people use social platforms to express themselves and find a space and community to support them, such interventions may be unwanted.
There are also questions about the self-disclosure of those in the depression condition. Firstly, the researchers had no way to verify the diagnoses; secondly, many in the random condition may have had diagnoses they simply never thought, or wanted, to share. Although jokey tweets were filtered out (e.g. “That Game of Thrones episode has given me a diagnosis of depression”), the language of mental illness is often co-opted or used in an exaggerated fashion, after political defeats, for example. Examining the subtleties of online communication may therefore yield different and interesting answers.