“There is but one truly serious philosophical problem and that is suicide,” the French author and philosopher Albert Camus stated. But it is not only philosophers who are moved by this issue. Psychologists are seeking ways of preventing this tragic death, and health care organisations are sounding the alarm. Around a million people die by their own hand every year, which makes suicide the tenth most common cause of death. Additionally, for every completed suicide there are 10 to 40 survived attempts, which means that in the USA alone 650,000 people each year are taken to emergency rooms following an attempt on their own life. Yet what is most disturbing is that the number of suicides is continually rising. The WHO reports that since the 1960s this number has grown by over 60 per cent.
Is psychology capable of identifying the risk factors that can push people to take their own lives? Joseph Franklin at Florida State University and his research team at the Technology and Psychopathology (TAP) Lab have provided an answer, but it is a disappointing one. Our capacity to predict whether someone will make a suicide attempt is no better than chance. What is worse, we have made no progress in this area in the last half-century. These striking conclusions come from a meta-analysis of 365 studies into suicide risk conducted over the last 50 years and published recently in Psychological Bulletin.
Glastonbury 1997, the 2002 Winter Olympics in Salt Lake City, the pilgrimage to Lourdes in 2008: what do they have in common? All three were the backdrop to outbreaks of communicable disease, and so of interest to doctors working in mass gathering medicine. The goal of this relatively young field is to address the specific health problems associated with mass events, but two British psychologists now claim that this can only be done effectively by understanding the psychological transformation that people undergo when they join a crowd.
If you want to maximise a person’s intellectual potential, the general consensus for a long time has been that you need to start young. According to this traditional view, early childhood offers a precious “window of opportunity” or “sensitive period” for learning that closes slowly as we reach adolescence. It’s the reason that toddlers find it easier to master the accent of a foreign language, for instance.
Sarah-Jayne Blakemore at University College London has spent the last decade overturning some of these assumptions, showing that the adolescent brain is still remarkably flexible as it undergoes profound anatomical changes. “The idea that the brain is somehow fixed in early childhood, which was an idea that was very strongly believed up until fairly recently, is completely wrong,” she told Edge in 2012. The transformation is particularly marked in the prefrontal lobes (located behind the forehead) and the parietal lobes (underneath and just behind the top of your head): two regions that are involved in abstract thought.
The upshot is that teenagers may go through a second sensitive period, in which they are particularly responsive to certain kinds of intellectual stimulation. A new paper from Blakemore’s lab, published in Psychological Science, builds on this idea, showing that our ability to learn certain kinds of analytical skills doesn’t diminish after childhood, but actually increases through adolescence and into early adulthood.
Television programmes portraying ordinary people in unexpected situations are almost as old as the medium of television itself. First aired in 1948, Candid Camera is often seen as a prototype of the reality show. Its premise was simple – unsuspecting people were confronted with unusual, funny situations and filmed with hidden cameras. However, the genre exploded as a phenomenon in the late 1990s and 2000s with the global success of series such as Survivor, Idol, and Big Brother, and to this day many people continue to set aside their own activities to peer into the lives of others.
Reality shows have not only amassed incredible popularity but have also become an object of severe, wide-ranging criticism. Among the most serious complaints is the allegation that the shows rely on viewers’ enjoyment of the humiliation and degradation of participants. It is quite difficult to find an individual who is indifferent to such programmes. We either hate reality shows or we watch them, quite often without considering why.
Up until now, scholarly opinion on the subject has been divided. Some maintain that the shows’ appeal constitutes an extension of fictional drama, and is thus driven by positive feelings like empathy and compassion. Others claim that reality TV viewers are driven by a voyeuristic desire to intrude on others and to see them in their most private and embarrassing moments. Michal Hershman Shitrit and Jonathan Cohen from the University of Haifa in Israel recently tested these contrasting perspectives in a study in the Journal of Media Psychology.
Besides problems with social interactions, it has been known for a while that many people with autism experience sensory difficulties, such as hypersensitivity to sounds, light or touch. With sensory impairment now officially included in diagnostic manuals, researchers have been trying to see if there’s a link between the sensory and social symptoms. Such a link would make intuitive sense: for instance, it is easy to imagine that someone who experiences sensory stimuli more strongly would shun social interactions because of their complexity. More specifically, you would expect them to struggle to filter out and make sense of social cues against the backdrop of sensory overload.
Past research has suggested that tactile hyper-responsiveness in particular may be relevant. The correct processing of tactile information plays an important role in differentiating yourself from others (so-called “self-other discrimination”), a crucial requirement for social cognition. In fact, touch may be unique among the senses because there is a clear difference in the tactile feedback received when you touch something compared to when you see someone else touch something. Now a study in Social Cognitive and Affective Neuroscience has used recordings of participants’ brain waves to provide more evidence that tactile sensations are processed differently in people with autism and that this may contribute to their social difficulties.
Crisps, coke, and chocolate bars. What might be a special treat for some of us is now a multi-billion pound industry and a staple of many people’s diets. Advertising campaigns from the snack food companies, often starring sports stars, send the message that we can offset any adverse effects of consuming their products simply by getting more physical exercise. But you can’t really “run off” a burger – recent studies show that a lack of exercise is not to blame for rising obesity rates; bad diets are the real driver.
Interventions to help reduce junk food consumption are especially important for children and adolescents – prevention is better than cure in this context because obesity is so difficult to treat. Unfortunately, while health education in the classroom has shown some success among young children, adolescents have been notoriously hard to reach.
But now a large-scale study published in PNAS has tried an innovative approach to change teenagers’ attitudes towards healthy eating, and the results are promising. The researchers, led by Christopher Bryan at the University of Chicago and David Yeager at the University of Texas at Austin, argued that previous interventions have probably been unsuccessful because of a major flaw: they focused on a future, healthier you and assumed that this would be enough motivation for adolescents. In contrast, the new intervention cleverly exploits teenagers’ instinct for rebelliousness and autonomy, and the value they place on social justice.
Imagine if we could capture the words of an angry dog owner holding a chewed-up shoe – “How could you? You terrible dog!” – and digitally alter the tone to sound praising. Would the dog be oblivious to the reprimanding content of the message? I should admit that, until quite recently, I thought that the answer was yes – that no matter how chastising the words you used, you could convince a dog that it is being showered in praise, simply by adopting an affectionate tone. But a recent study published in Science indicates that many of us might be vastly underestimating canine listening skills. The findings reveal that dogs do not rely exclusively on intonation when judging the reward value of human speech, but that they also recognise the meanings that we assign to words.
For decades, we’ve known from twin studies that psychological traits like intelligence and personality are influenced by genes. That’s why identical twins (who share all their genes) are not just more physically similar to each other than non-identical twins (who share half their genes), but also more similar in terms of their psychological traits. But what twin studies can’t tell us is which particular genes are involved. Frustratingly, this has always left an ‘in’ for the incorrigible critics of twin studies: they’ve been able to say “you’re telling me these traits are genetic, but you can’t tell me any of the specific genes!” But not any more.
When I was 13, I once dreamt that a beautiful woman was sensuously stroking the palm of my hand, as a family of fridges hummed in the background. In reality, a huge, buzzing wasp had landed on my right hand. It idly walked around for a bit, then stung me. After the shock had worn off, I was puzzled as to why my dreaming brain had stopped me from waking up to this potential danger. Contrast this with six years ago, when even my deepest sleep would be broken by the first sounds of my newborn baby daughter’s cries. How do our brains decide whether or not to wake us up, based on what’s going on in the world? And why does this policy change depending on whether we’re dreaming or in some other sleep state?
Scientists are still struggling to understand the causes of autism. Difficulty bonding with others is one of the core symptoms, and it has been the focus of several theories that try to explain exactly why these deficits come about.
One of the more prominent examples, the “broken mirror hypothesis”, suggests that impaired development of the mirror neuron system (MNS) is to blame. First observed in monkeys, mirror neurons fire both when you perform a certain action and when you see someone else perform the same action – for example, when you smile and when you see someone else smile.
This “mirroring” has been hypothesised to help us understand what others are feeling by sharing their emotional states, although this is disputed. Another behaviour that is thought to depend on an intact mirror neuron system is facial mimicry – the way that people spontaneously and unconsciously mimic the emotional facial expressions of others.
Interestingly, studies have shown that people with autism do not spontaneously mimic others’ facial expressions, which could explain why they often struggle to “read” people’s emotions or have trouble interacting socially. Some experts have claimed these findings lend support to “broken” mirroring in autism, but this has remained controversial. Now a study in Autism Research has used a new way to measure facial mimicry and the results cast fresh doubt on the idea that autism is somehow caused by a broken mirror neuron system.