Emojis have become part of our everyday communication online, allowing us to succinctly communicate how we’re feeling in a way that written language cannot. Psychologists are even beginning to use emojis in research, to allow children or other participants to respond without the need for traditional questionnaires.
But is the library of emojis that is available to us truly representative of the range of emotions that we feel? A new study in Scientific Reports suggests that, broadly, it is — but that there are some important gaps too.
This is Episode 29 of PsychCrunch, the podcast from the British Psychological Society’s Research Digest, sponsored by Routledge Psychology. Download here.
Why do people share false information? In this episode, our presenters Ginny Smith and Jon Sutton explore the psychology of misinformation. They hear about the factors that make people more or less likely to share misinformation, discuss strategies to correct false information, and learn how to talk to someone who is promoting conspiracy theories.
Our guests, in order of appearance, are Tom Buchanan, Professor of Psychology at the University of Westminster, and Briony Swire-Thompson, senior research scientist at Northeastern University’s Network Science Institute.
It can be hard to know what’s going to go viral — or even what’s going to get you just a few more likes. For many, however, expressing an outraged opinion on politics has been a good way of garnering interactions, even if it doesn’t always have the intended effect.
A new study, published in Science Advances and authored by William Brady and colleagues from Yale University, looks more closely at how outrage spreads on social media. It finds that likes and shares garnered by outrage act as a reward that “teaches” us to express more of the same.
The impact of technology on young people is an oft-debated topic in the media. Is increased screen time having a serious impact on their mental health? Or have we exaggerated the level of risk young people face due to their use of tech?
According to a new study, published in Humanities and Social Sciences Communications, we could be asking the wrong questions. A team led by Nastasia Griffioen at Radboud University Nijmegen suggests that rather than looking at screen time in a binary way, researchers should explore the nuances of smartphone use: how young people are using their phones, rather than the fact they’re using them at all.
What makes something go viral online? A lot of work has highlighted the role of emotion: social media posts that express strong emotions — and particularly negative emotions — tend to spread further.
Now a study in PNAS has identified another factor which seems to have an even greater effect on how often posts are shared. Steve Rathje from the University of Cambridge and colleagues find that tweets and Facebook posts that contain more language referring to political opponents get more shares. These posts may be so popular, the team finds, because they appeal to feelings of anger and outrage towards the political out-group.
A key facet of cognitive behavioural therapy is challenging “cognitive distortions”: inaccurate thought patterns that often affect those with depression. Such distortions can include jumping to conclusions, catastrophising, black-and-white thinking, and self-blame — and can cause genuine distress to those experiencing them.
But how do we track cognitive distortion in those with depression outside of self-reporting? A new study, published in Nature Human Behaviour, explores cognitive distortions online, finding that those with depression have higher levels of distortion in the language they use on social media.
This article contains discussion of suicide and self-harm
In 2014, the Samaritans launched what seemed like an innovative new project: Radar. Designed to provide what the charity described as an “online safety net”, users could sign up to Radar to receive updates on the content of other people’s tweets, with emails sent out based on a list of key phrases meant to detect whether someone was feeling distressed.
In principle, this meant people could keep an eye on friends who were vulnerable: if they missed a tweet where somebody said they felt suicidal or wanted to self-harm, for example, Radar would send it on, in theory increasing the likelihood that someone might get help or support.
In practice, however, things weren’t so simple. Some pointed out that the app could be used for stalking or harassment, allowing abuse to be targeted during someone’s lowest point. There were false positives, too — “I want to kill myself”, for example, is often used as hyperbole by people who aren’t actually distressed at all. And others felt it was an invasion of privacy: their tweets might be on a public platform, they argued, but they were personal expression. They hadn’t consented to being used as part of a programme like Radar, no matter how well meaning it was.
Samaritans shut down Radar just a week after launch. But since then, the use of social media data in mental health research — including tweets, Facebook and Instagram posts, and blogs — has only increased. Researchers hope that the volume of data social media offers will bring important insights into mental health. But many users worry about how their data is being used.
Over the last few years, memes have played an increasingly important part in online political discussion: the Washington Post dubbed the 2016 presidential election “the most-memed election in U.S. history”, and CNN has already christened the 2020 race “the meme election”.
But politicians may want to pause for thought before they hit send on that jokey tweet. New research in Communication Research Reports, from Ohio State University’s Olivia Bullock and Austin Huber, suggests that humour doesn’t always go down well online — and that this can affect what voters think of particular candidates, and potentially how they vote.
Is it really believable that Hillary Clinton operated a child sex ring out of a pizza shop — or that Donald Trump was prepared to deport his wife, Melania, after a fight at the White House? Though both these headlines seem obviously false, they were shared millions of times on social media.
The sharing of misinformation — including such blatantly false “fake news” — is of course a serious problem. According to a popular interpretation of why it happens, when deciding what to share, social media users don’t care if a “news” item is true or not, so long as it furthers their own agenda: that is, we are in a “post-truth” era. One recent study suggested, for example, that knowing something is false has little impact on the likelihood of sharing. However, a new paper by a team of researchers from MIT and the University of Regina in Canada further challenges that bleak view.
The studies reported in the paper, available as a preprint on PsyArXiv, suggest that in fact, social media users do care whether an item is accurate or not — they just get distracted by other motives (such as wanting to secure new followers or likes) when deciding what to share. As part of their study, the researchers also showed that a simple intervention that targeted a group of oblivious Twitter users increased the quality of the news that they shared. “Our results translate directly into a scalable anti-misinformation intervention that is easily implementable by social media platforms,” they write.
Over the last few years, so-called “fake news” — deliberately false information spread online — has become more and more of a concern. From extensive media coverage of the issue to government committees being set up to investigate it, fake news is at the top of the agenda — and more often than we’d like, at the top of our newsfeeds.
But how does exposure to misinformation impact the way we respond to it? A new study, published in Psychological Science, suggests that the more we see it, the more we’re likely to spread it. And considering the fact that fake news is more likely to go viral than real news, this could have worrying implications.