The impact of technology on young people is an oft-debated topic in the media. Is increased screen time having a serious impact on their mental health? Or have we exaggerated the level of risk young people face due to their use of tech?
According to a new study, published in Humanities and Social Sciences Communications, we could be asking the wrong questions. A team led by Nastasia Griffioen at Radboud University Nijmegen suggests that rather than looking at screen time in a binary way, researchers should explore the nuances of smartphone use: how young people are using their phones, rather than the fact they’re using them at all.
What makes something go viral online? A lot of work has highlighted the role of emotion: social media posts that express strong emotions — and particularly negative emotions — tend to spread further.
Now a study in PNAS has identified another factor which seems to have an even greater effect on how often posts are shared. Steve Rathje from the University of Cambridge and colleagues find that tweets and Facebook posts that contain more language referring to political opponents get more shares. These posts may be so popular, the team finds, because they appeal to feelings of anger and outrage towards the political out-group.
It’s become something of a truism that you shouldn’t believe everything you see on social media. Where someone’s life looks perfect, we’re often reminded, there are probably a handful of problems sitting quietly just out of shot. Nobody’s life is as shiny, flawless, or enviable as it might appear in their carefully curated feed.
But presenting ourselves more authentically on social media — ditching those things we want to believe are true about ourselves in favour of those that are — could be good for our wellbeing, according to a new paper in Nature Communications by Erica R. Bailey from Columbia University and colleagues.
This article contains discussion of suicide and self-harm
In 2014, the Samaritans launched what seemed like an innovative new project: Radar. Designed to provide what the charity described as an “online safety net”, users could sign up to Radar to receive updates on the content of other people’s tweets, with emails sent out based on a list of key phrases meant to detect whether someone was feeling distressed.
In principle, this meant people could keep an eye on friends who were vulnerable: if they missed a tweet where somebody said they felt suicidal or wanted to self-harm, for example, Radar would send it on, in theory increasing the likelihood that someone might get help or support.
In practice, however, things weren’t so simple. Some pointed out that the app could be used for stalking or harassment, allowing abuse to be targeted during someone’s lowest point. There were false positives, too — “I want to kill myself”, for example, is often used as hyperbole by people who aren’t actually distressed at all. And others felt it was an invasion of privacy: their tweets might be on a public platform, they argued, but they were personal expression. They hadn’t consented to being used as part of a programme like Radar, no matter how well meaning it was.
Samaritans shut down Radar just a week after launch. But since then, the use of social media data in mental health research — including tweets, Facebook and Instagram posts, and blogs — has only increased. Researchers hope that the volume of data social media offers will bring important insights into mental health. But many users worry about how their data is being used.
In basic terms, online status indicators convey availability: whether someone is on or offline, or when they last logged into a particular app. But if you’ve ever anxiously awaited a response from a prospective partner or suspected your friend might be ignoring you, you’ll be painfully aware of just how much weight that indicator can actually hold.
Do you often spend time with your friends in order to forget about personal problems? Do you think about your friends even when you’re not with them? Have you ever gone so far as to ignore your family in order to spend time with your friends?
If you answered yes to these questions, you might fit the criteria for “offline friend addiction”, according to a new scale described in a preprint on PsyArXiv. Except, of course, that this notion is ridiculous. How can we be addicted to socialising, the fulfilment of one of our basic human needs?
Well, that’s pretty much the point of the new paper, written with tongue firmly in cheek. But behind it is a serious argument: although a scale for offline friend addiction is clearly absurd, there’s another, similar concept for which such scales have already been developed — social media addiction.
Breaking up is never easy, particularly when you’re confronted with memories of happier times. A smell, an old photograph, a note somebody left you — weeks or even months after a break-up and you can still be reminded of your ex-partner, whether you like it or not.
On social media, this can be even worse. If you’re still friends with your ex, you’re likely to still see their posts on your feed; if you’re not, you can still rub salt into the wound by checking their profile anyway. ‘On this Day’ features are also notoriously bad for bringing up unhappy memories at the worst possible time.
According to a new study published in Proceedings of the ACM on Human-Computer Interaction, we also see our exes so much because of the so-called “social periphery” — the networks of people we know tangentially through our ex-partners. So why not design an algorithm that causes us less pain? The new work suggests that this could be the answer to our online break-up woes.
Scrolling through Facebook or Instagram, it can be easy to feel drawn in by the people you follow. Whether it’s the brands they’re buying, the things they’re doing or what they’re wearing, it’s not uncommon to want to follow suit — they’re called “influencers” for a reason, after all.
However, the visibility of these features is poor at best — and it remains unclear if the public even wants them in the first place. Now a study in JMIR Mental Health has asked whether the general public would be happy for tech companies to use their social media posts to look for signs of depression. The study found that although the public sees the benefit of using algorithms to identify at-risk individuals, privacy concerns still surround the use of this technology.
Is it really believable that Hillary Clinton operated a child sex ring out of a pizza shop — or that Donald Trump was prepared to deport his wife, Melania, after a fight at the White House? Though both these headlines seem obviously false, they were shared millions of times on social media.
The sharing of misinformation — including such blatantly false “fake news” — is of course a serious problem. According to a popular interpretation of why it happens, when deciding what to share, social media users don’t care if a “news” item is true or not, so long as it furthers their own agenda: that is, we are in a “post-truth” era. One recent study suggested, for example, that knowing something is false has little impact on the likelihood of sharing. However, a new paper by a team of researchers from MIT and the University of Regina in Canada further challenges that bleak view.
The studies reported in the paper, available as a preprint on PsyArXiv, suggest that in fact, social media users do care whether an item is accurate or not — they just get distracted by other motives (such as wanting to secure new followers or likes) when deciding what to share. As part of their study, the researchers also showed that a simple intervention that targeted a group of oblivious Twitter users increased the quality of the news that they shared. “Our results translate directly into a scalable anti-misinformation intervention that is easily implementable by social media platforms,” they write.