The impact of technology on young people is an oft-debated topic in the media. Is increased screen time having a serious impact on their mental health? Or have we exaggerated the level of risk young people face due to their use of tech?
According to a new study, published in Humanities and Social Sciences Communications, we could be asking the wrong questions. A team led by Nastasia Griffioen at Radboud University Nijmegen suggests that rather than looking at screen time in a binary way, researchers should explore the nuances of smartphone use: how young people are using their phones, rather than the fact they’re using them at all.
What makes something go viral online? A lot of work has highlighted the role of emotion: social media posts that express strong emotions — and particularly negative emotions — tend to spread further.
Now a study in PNAS has identified another factor which seems to have an even greater effect on how often posts are shared. Steve Rathje from the University of Cambridge and colleagues find that tweets and Facebook posts that contain more language referring to political opponents get more shares. These posts may be so popular, the team finds, because they appeal to feelings of anger and outrage towards the political out-group.
A key facet of cognitive behavioural therapy is challenging "cognitive distortions", inaccurate thought patterns that often affect those with depression. Such distortions could include jumping to conclusions, catastrophising, black and white thinking, or self-blame — and can cause serious distress to those experiencing them.
But how do we track cognitive distortion in those with depression outside of self-reporting? A new study, published in Nature Human Behaviour, explores cognitive distortions online, finding that those with depression have higher levels of distortion in the language they use on social media.
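As a rough illustration of the general idea, distortion-laden language can be approximated by counting characteristic phrases in a post. The marker list and scoring below are hypothetical simplifications for illustration only, not the lexicon or method used in the Nature Human Behaviour study:

```python
# Illustrative sketch: count "cognitive distortion" marker phrases in text.
# The phrase list is a small hypothetical sample, not the study's lexicon.
DISTORTION_MARKERS = [
    "i always",            # overgeneralisation
    "i never",             # overgeneralisation
    "everyone thinks",     # jumping to conclusions / mind reading
    "it's all my fault",   # self-blame
    "complete failure",    # black and white thinking
]

def distortion_score(post: str) -> float:
    """Return marker phrases per word: a crude rate of distorted language."""
    text = post.lower()
    hits = sum(text.count(marker) for marker in DISTORTION_MARKERS)
    words = len(text.split())
    return hits / words if words else 0.0

print(distortion_score("I never get anything right, it's all my fault."))
print(distortion_score("Had a lovely walk this morning."))
```

Comparing such scores between groups of users is one simple way a per-post language measure could sidestep self-reporting, though real analyses would need far more careful phrase selection and normalisation.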
This article contains discussion of suicide and self-harm
In 2014, the Samaritans launched what seemed like an innovative new project: Radar. Designed to provide what the charity described as an “online safety net”, users could sign up to Radar to receive updates on the content of other people’s tweets, with emails sent out based on a list of key phrases meant to detect whether someone was feeling distressed.
In principle, this meant people could keep an eye on friends who were vulnerable: if they missed a tweet where somebody said they felt suicidal or wanted to self-harm, for example, Radar would send it on, in theory increasing the likelihood that someone might get help or support.
In practice, however, things weren’t so simple. Some pointed out that the app could be used for stalking or harassment, allowing abuse to be targeted during someone’s lowest point. There were false positives, too — “I want to kill myself”, for example, is often used as hyperbole by people who aren’t actually distressed at all. And others felt it was an invasion of privacy: their tweets might be on a public platform, they argued, but they were personal expression. They hadn’t consented to being used as part of a programme like Radar, no matter how well meaning it was.
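The false-positive problem is easy to see in a sketch. Assuming (hypothetically) that Radar worked by simple substring matching against a phrase list, a flagger like the one below has no way to distinguish hyperbole from genuine distress:

```python
# Hypothetical sketch of key-phrase flagging; the phrase list and logic are
# illustrative assumptions, not the Samaritans' actual implementation.
DISTRESS_PHRASES = [
    "want to kill myself",
    "hate myself",
    "self-harm",
]

def flags_tweet(tweet: str) -> bool:
    """Flag a tweet if it contains any key phrase; no context is checked."""
    text = tweet.lower()
    return any(phrase in text for phrase in DISTRESS_PHRASES)

# Literal matching flags hyperbole and genuine distress alike:
print(flags_tweet("Monday mornings make me want to kill myself lol"))
print(flags_tweet("Feeling great about the new job!"))
```

Because the match is purely lexical, the joking tweet and a genuinely distressed one produce identical alerts, which is exactly the failure mode critics pointed to.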
Samaritans shut down Radar just a week after launch. But since then, the use of social media data in mental health research — including tweets, Facebook and Instagram posts, and blogs — has only increased. Researchers hope that the volume of data social media offers will bring important insights into mental health. But many users worry about how their data is being used.
Over the last few years, memes have played an increasingly important part in online political discussion: the Washington Post dubbed the 2016 presidential election "the most-memed election in U.S. history", and CNN has already christened the 2020 race "the meme election".
But politicians may want to pause for thought before they hit send on that jokey tweet. New research in Communication Research Reports, from Ohio State University's Olivia Bullock and Austin Huber, suggests that humour doesn't always go down well online — and that this can impact what voters think of particular candidates and potentially how they vote.
Is it really believable that Hillary Clinton operated a child sex ring out of a pizza shop — or that Donald Trump was prepared to deport his wife, Melania, after a fight at the White House? Though both these headlines seem obviously false, they were shared millions of times on social media.
The sharing of misinformation — including such blatantly false “fake news” — is of course a serious problem. According to a popular interpretation of why it happens, when deciding what to share, social media users don’t care if a “news” item is true or not, so long as it furthers their own agenda: that is, we are in a “post-truth” era. One recent study suggested, for example, that knowing something is false has little impact on the likelihood of sharing. However, a new paper by a team of researchers from MIT and the University of Regina in Canada further challenges that bleak view.
The studies reported in the paper, available as a preprint on PsyArXiv, suggest that in fact, social media users do care whether an item is accurate or not — they just get distracted by other motives (such as wanting to secure new followers or likes) when deciding what to share. As part of their study, the researchers also showed that a simple intervention that targeted a group of oblivious Twitter users increased the quality of the news that they shared. “Our results translate directly into a scalable anti-misinformation intervention that is easily implementable by social media platforms,” they write.
Over the last few years, so-called "fake news" — purposely false information spread online — has become more and more of a concern. From extensive media coverage of the issue to government committees being set up for its investigation, fake news is at the top of the agenda — and more often than we'd like, on top of our newsfeeds.
But how does exposure to misinformation impact the way we respond to it? A new study, published in Psychological Science, suggests that the more we see it, the more we’re likely to spread it. And considering the fact that fake news is more likely to go viral than real news, this could have worrying implications.
From digital detoxes to the recent Silicon Valley fad of “dopamine fasting”, it seems more fashionable than ever to attempt to abstain from consuming digital media. Underlying all of these trends is the assumption that using digital devices — and being on social media in particular — is somehow unhealthy, and that if we abstain, we might become happier, more fulfilled people.
But is there any truth to this belief? When it comes to social media, at least, a new paper in Media Psychology suggests not. In one of the few experimental studies in the field, researchers have found that quitting social media for up to four weeks does nothing to improve our well-being or quality of life.
What is it about social media that makes discussions about controversial topics so caustic and unpleasant? A variety of reasons have been put forward — such as the tendency for outrage to self-perpetuate, as we reported earlier this week. But now a new study, published in PLoS One, implicates a concept so far explored in philosophy rather than psychology. This is “moral grandstanding” — publicly opining on morality and politics to impress others, and so to seek social status.
Spend any amount of time online and you’re likely to see the same patterns repeat themselves over and over again: somebody says something offensive or controversial on social media, they’re met with anger and disgust, and they either apologise or double down.
For some, this cycle has become something of a career, with the garnering of outrage forming the backbone of their (often incredibly tedious) public personas. But does responding to such toxic or offensive remarks, especially en masse, actually work? Or does it simply increase sympathy for the offender, no matter how bigoted their remarks were to begin with?
According to research published in Social Psychological and Personality Science, the latter is more likely. The paper examined how viral outrage shapes observers' judgements of an offender's blameworthiness — and found that as outrage increased, observers believed it was "more normative" to express condemnation, but simultaneously judged the outrage to be excessive and felt more sympathy for the offender.