Thursday, 18 September 2014

Why is poverty associated with mental health problems for some people, but not others?

By guest blogger Peter Kinderman
“I’ve been rich and I’ve been poor. Believe me, rich is better” (Mae West).  
Critiques of the rather discredited "disease-model" of mental illness are commonplace, but we also need to articulate the alternative. New research by Sophie Wickham and colleagues helps do that, by providing support for the idea that we learn, as a consequence of our experiences in life, a framework of appraising, understanding and responding to new challenges. This psychological schema then shapes our emotional and behavioural responses to future events.

Wickham and her colleagues used data from over 7000 people and, based on a composite measure of each person’s neighbourhood (including data on income, health, education, and crime), they found that participants living in more deprived neighbourhoods had much higher levels of both depression and paranoia.

But the researchers did not merely correlate social deprivation with mental health. They also looked at a range of psychological mediators. They found that, if people reported low levels of stress, high levels of trust in others and high levels of social support, then social deprivation was no longer associated with more depression. The same was partially true in the case of paranoia – when people reported low levels of stress and high levels of trust, social deprivation had a greatly reduced association with levels of paranoia.
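The logic of mediation described here – an association between deprivation and depression that shrinks once a mediator such as stress is taken into account – can be illustrated with a toy simulation. The numbers below are entirely invented for illustration; they are not the study's data or its actual analysis:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical variables: deprivation raises stress, and stress (not
# deprivation directly) raises depression, i.e. stress fully mediates.
deprivation = rng.normal(size=n)
stress = 0.6 * deprivation + rng.normal(size=n)
depression = 0.7 * stress + rng.normal(size=n)

def ols(X, y):
    """Least-squares regression coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Total effect: regress depression on deprivation alone
total_effect = ols(deprivation[:, None], depression)[1]

# Direct effect: regress depression on deprivation AND the mediator
direct_effect = ols(np.column_stack([deprivation, stress]), depression)[1]

print(round(total_effect, 2))   # close to 0.6 * 0.7 = 0.42
print(round(direct_effect, 2))  # close to 0: deprivation adds nothing once stress is known
```

When the mediator fully carries the effect, as in this simulated example, the direct path falls to roughly zero – mirroring the paper's finding that deprivation was no longer associated with depression among people reporting low stress, high trust and high social support.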

In one sense this is a relatively conventional correlational study using secondary data (the Adult Psychiatric Morbidity Survey) to look at a rather well-established link between social factors and mental health. But I think there’s more to it than that.

There are many ways to be overly simplistic about mental health issues. A simple "disease-model" of mental health problems doesn’t really help us much, but an equally simplistic model of social causation is equally reductionist – reducing people to mechanistic pawns, pushed around by social pressures. Instead, a more elegant psychosocial model might suggest that the emotional (and behavioural) impact of life events is at least in part a consequence of how we appraise and respond to those events… and in turn that our appraisals and responses have been learned over time as a consequence of the events to which we have been exposed.

Wickham and colleagues have gone some way in exploring that hypothesis. Their data suggest that people’s appraisals of their circumstances – their perceived stress, perceived trust and perceived social support – mediate the impact of social deprivation on depression and paranoia. It is also interesting to note that these relationships appeared specific to depression and paranoia; they did not apply to auditory hallucinations or hypomania, the rates of which were not associated with poverty in this study.

If these kinds of findings are replicated in future research, the implications could be important and far-reaching. But first, it seems important to replicate this work, exploring the specific combinations of social circumstances, and mediating psychological processes, that lead to different emotional and behavioural outcomes. Wickham and colleagues speculate about what some of these may be – for example, whereas the combination of deprivation and a person’s appraisal of their situation was associated with more depression and paranoia, perhaps a combination of childhood trauma and a perceptual source monitoring problem (e.g. misattributing one’s own thoughts to a third party) might be associated with auditory hallucinations.

The researchers also speculate about the implications of their findings – how we might intervene at a population level, with social and psychological interventions targeted at specific risk factors and psychological mechanisms. Long-term political and social policies could address issues of population-level social disadvantage, deprivation and inequity. Similarly, social interventions and targeted welfare packages might be effective in addressing social risk factors at an individual or family level. And, unsurprisingly given that they are psychologists, Wickham and her colleagues also point out that psychological interventions such as cognitive-behaviour therapy (CBT) and interpersonal psychotherapy could help individuals develop more effective psychological responses to the inevitable social stressors that accompany social deprivation.

One study cannot possibly explore all these issues. But, by examining both the social deprivation that is known to contribute to mental health problems and the psychological mechanisms that mediate the impact of this social stress on the individual, Wickham and colleagues offer a model for an elegant approach to understanding and, ultimately, intervening to improve psychological health and well-being. These ideas are important and new, but they are also evidence of the growing maturity and power of psychosocial explanations in mental health. I discuss these ideas further in my book, A Prescription for Psychiatry: Why We Need a Whole New Approach to Mental Health and Wellbeing, and in my new, free, online course.

_________________________________ ResearchBlogging.org

Wickham S, Taylor P, Shevlin M, & Bentall RP (2014). The impact of social deprivation on paranoia, hallucinations, mania and depression: the role of discrimination, social support, stress and trust. PLoS ONE, 9(8). PMID: 25162703

--further reading--
Worry and self-blame as the "final common pathway" towards poor mental health.

Post written by Peter Kinderman (@peterkinderman), Professor of Clinical Psychology at the University of Liverpool, UK. His research activity and clinical work concentrate on understanding and helping people with serious and enduring mental health problems, and on how psychological science can assist public policy in health and social care. His new book, A Prescription for Psychiatry: Why We Need a Whole New Approach to Mental Health and Wellbeing (Palgrave Macmillan) is available now.

Wednesday, 17 September 2014

There's a problem with assuming the most intelligent candidates make the best employees

Workplace research through the 20th century suggested that selecting for intelligence is the best way to identify good performers. General mental ability (GMA), a popular recruitment measure that maps closely to the colloquial meaning of "intelligence", is strongly correlated with on-the-job performance, well ahead of any other single measure.

This consistent finding came from studies that mostly defined job performance as carrying out the duties expected in that role. Although intuitive, this neglects two types of "extra-role" behaviours identified and studied in more recent years: citizenship behaviours, such as volunteering time or treating colleagues with courtesy; and counter-productive work behaviours, such as spreading rumours, shirking, or theft. Now a new meta-analysis suggests that GMA isn't the best predictor of these crucial aspects of performance. In fact, intelligence may be of little use in predicting who will behave badly at work - although it may predict who can get away with it.

The meta-analysis winnowed the available literature down to 35 relevant studies that looked at citizenship and counterproductive behaviours in real organisations. Intelligence (GMA) was correlated with engaging in more citizenship behaviours, but the association was far weaker than that between intelligence and traditional task-based measures of performance. The researchers, led by Erik Gonzalez-Mulé, then compared their results with previous meta-analyses focused on personality, and concluded that personality and GMA each account for about half the variance in citizenship behaviours. Put another way, you're just as likely to do good because you're inclined that way as because you're smart.
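At its simplest, pooling correlations across studies in a meta-analysis like this amounts to a sample-size-weighted average of each study's correlation (the bare-bones Hunter-Schmidt approach, before any corrections for measurement artefacts). The figures below are invented for illustration and are not taken from the paper:

```python
def weighted_mean_r(rs, ns):
    """Sample-size-weighted mean correlation across studies: each study's
    r counts in proportion to how many participants it tested."""
    return sum(r * n for r, n in zip(rs, ns)) / sum(ns)

# Hypothetical GMA-citizenship correlations from three imaginary studies
rs = [0.10, 0.25, 0.20]
ns = [150, 300, 550]

pooled = weighted_mean_r(rs, ns)
print(round(pooled, 3))  # 0.2
```

Larger studies pull the pooled estimate towards their result, which is why a meta-analysis of 35 studies can give a more stable picture than any single workplace sample.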

Turning to counterproductive workplace behaviours, the authors predicted a relationship with intelligence/GMA based on evidence from criminology that the ability to foresee the consequences of one's actions inhibits aberrant behaviour. In fact, the new analysis found no association between intelligence and aberrant behaviour. It's possible that this discrepancy with the criminology findings reflects a difference in samples: low-intelligence individuals who are more disposed to malfeasance may be underrepresented in workplaces because of adolescent anti-social issues, such as truancy or criminal behaviour. Meanwhile, personality – particularly the trait of agreeableness, but also conscientiousness and openness to experience – was strongly associated with unhelpful behaviours at work.

An interesting footnote: when self-ratings of counterproductive behaviour were removed from the analysis (leaving only third-party ratings), the results showed a significant relationship between higher intelligence and fewer unhelpful workplace behaviours. This suggests that smarter people report engaging in just as much bad behaviour as the rest of us, but that others, such as work supervisors, notice less of it.

In summary, while GMA is the undisputed king of predicting better task performance, it holds equal footing with personality in predicting helpful, altruistic work behaviour, and cedes the ground almost entirely to personality for bad behaviour. Looking at performance as a composite of these three areas, Gonzalez-Mulé's team conclude that when it comes to workplace selection, GMA still has a prominent role, but a much diminished one.

_________________________________

Gonzalez-Mulé E, Mount MK, & Oh IS (2014). A Meta-Analysis of the Relationship Between General Mental Ability and Nontask Performance. The Journal of Applied Psychology. PMID: 25133304

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Tuesday, 16 September 2014

Forgive yourself for relaxing in front of the TV and the couch time might actually do you some good

There's a snobbishness about relaxation time. Tell someone your hobby is watching TV and chances are they'll look at you with derision. Mention meditation, reading or yoga and you're far more likely to attract nods of approval.

And yet there is substantial evidence that time spent watching TV or playing video games can have a powerful restorative effect – just what many of us need after a hard day. This benefit isn't found for everyone, however, and in a new paper, Leonard Reinecke and his collaborators propose that a key reason has to do with guilt.

The researchers think that it is people who are mentally exhausted who are most likely to experience guilt after vegging out with a box set or video game. This is because, in their depleted state, these people see their behaviour as procrastination. This leads to the paradoxical situation in which the people who could most benefit from the restorative effects of lounge-based downtime are the least likely to enjoy them.

To test their ideas, Reinecke's team surveyed nearly 500 people in Germany and Switzerland. Participants were recruited via a gaming website and through psychology and communication classes. Specifically, the participants answered questions about the previous day, including how much work or study they'd done (answers ranged from half an hour to 16 hours), how depleted they felt after work or college, how much time they'd spent watching TV or playing video games (around two hours, on average), whether they viewed that time as procrastination, whether they felt guilty, and how recharged they felt afterwards.

The key finding is that the more depleted people felt after work (agreeing with statements like "I felt like my willpower was gone"), the more they tended to view their TV or gaming as procrastination, the more guilt they felt, and the less restored they said they felt afterwards. The same pattern held for both TV and video games.

"Rather than diminishing the beneficial potential of entertaining media," the researchers said, "we believe that the results of this study may ultimately help to optimise the well-being outcomes of entertaining media use by extending our knowledge of the mechanisms furthering and hindering media-induced recovery and general well-being." If the researchers are correct, then if you cut yourself some slack when you watch TV after a hard day, you're more likely to feel rejuvenated afterwards.

Unfortunately, as the researchers admit in their paper, their methodological approach has several limitations. Above all, this wasn't an experimental study (with people allocated randomly to different interventions). This means the data can be interpreted in many different ways. One alternative reading of the results is that when TV or gaming fails to have a restorative effect, this leads people to view the time as wasteful procrastination, thus causing them to feel guilty.

_________________________________

Reinecke, L., Hartmann, T., & Eden, A. (2014). The Guilty Couch Potato: The Role of Ego Depletion in Reducing Recovery Through Media Use. Journal of Communication, 64(4), 569-589. DOI: 10.1111/jcom.12107

--further reading--
Do television and video games impact on the wellbeing of younger children?
A psychological problem with snacking in front of the telly.

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Monday, 15 September 2014

Pupils benefit from praise, but should teachers give it to them publicly or privately?

There's a best-practice guide for teachers, produced by the National Association of School Psychologists in the US, that states praise is best given to pupils in private. This advice is not based on experimental research – there hasn't been any – but on surveys of student preferences, and on the rationale that pupils could be embarrassed by receiving praise in public.

Now, in the first study of its kind, John Blaze and his colleagues have systematically compared the effect of public and private praise (also known as "loud" and "quiet" praise) on classroom behaviour. They found that praise had a dramatic beneficial effect on pupils' behaviour, and it didn't matter whether the praise was private or public.

The research was conducted in four public high-school classrooms in the rural south-eastern United States (public schools being the equivalent of state schools in the UK). The classes were mixed-sex, with mostly Caucasian and African American pupils, and between 16 and 25 pupils in each class. The children were aged 14 to 16. Three of the teachers were teaching English; the other taught Transition to Algebra.

The teachers were given training in appropriate praise: it must be contingent on good behaviour; specific, making clear to the pupil why they are being praised; immediate; and focused on effort. During the test teaching sessions, the teachers carried a buzzer on their belt that prompted them, once every two minutes, to deliver praise to one of their pupils – either loudly, so the whole class could hear (the loud condition), or discreetly, with a whisper in the ear or a pat on the shoulder, so that ideally only the child knew they were being praised (the quiet condition). For comparison, there were also baseline sessions in which the teachers simply taught in their usual style.

Trained observers, stationed in the classrooms for 20-minute sessions, monitored the teachers' praise-giving and the pupils' behaviour across the different conditions. They found that frequent praise increased pupils' on-task behaviours, such as reading or listening to the teacher, by 31 per cent compared with baseline, and this improvement didn't vary according to whether the praise was private or public. Frequent praise of either kind also reduced disruptive behaviours by nearly 20 per cent.

Blaze and his team said the debate over praise will likely continue, but their results, they stated, are clear: "both loud and quiet forms of praise are effective tools that can have dramatic effects at the secondary level." A weakness of the study is that the researchers didn't monitor the teachers' use of reprimands, which likely declined as they spent more time delivering praise.

_________________________________

Blaze JT, Olmi DJ, Mercer SH, Dufrene BA, & Tingstrom DH (2014). Loud versus quiet praise: A direct behavioral comparison in secondary classrooms. Journal of School Psychology, 52(4), 349-60. PMID: 25107408

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Saturday, 13 September 2014

Link feast

Our pick of the best psychology and neuroscience links from the past week:

“Cyranoids”: Stanley Milgram’s Creepiest Experiment
Milgram is most famous for his obedience experiments, but Neuroskeptic reports on new research into another of Milgram's ideas - that our speech can be fed to us by someone else (so we become a Cyranoid) without anyone realising anything is amiss.

Rising star of business psychology, Professor Adam Grant, has launched a new, free newsletter called "Granted"
The author of NYT Bestseller Give and Take promises to send you videos and articles about work and psychology.

America's New Bedlam
This disturbing radio documentary from BBC World Service investigates the treatment of the more than one million US prisoners who are mentally ill.

A Social Visit with Hallucinated Voices
Over at the PLOS Neuroscience Community blog, Vaughan Bell explores the social side of hearing voices - the fact that most voice-hearers have relationships with their voices, and perceive their voices as having identities. This is a hugely under-researched area, he says.

We Are Entering the Age of Alzheimer's
"Alzheimer’s ... is more than a disease of the brain. It is a pandemic of selfhood," writes Kent Russell in this moving portrait of a man coping with his father's dementia.

How to Be Alone: An Antidote to One of the Central Anxieties and Greatest Paradoxes of Our Time
At Brain Pickings, Maria Popova shares highlights from How to Be Alone by Sara Maitland.

Hundreds Report Waking up During Surgery
NHS Choices takes a calm look at the study that led to some alarming headlines.

The Truth About Truthiness 
Megan McArdle at Bloomberg View takes issue with the idea (based largely on this study and disseminated by this Slate article) that low effort thought gives rise to conservative ideology.

A to Z of the Human Condition: N is for Natural Curiosity
Over at the Wellcome Collection blog, Mark Rapoza contemplates our unending curiosity about the natural world (alongside readers' nature photos).
_________________________________

Post compiled by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Friday, 12 September 2014

Psychologists compare the mental abilities of Scrabble and crossword champions

Completed Scrabble (left) and crossword grids (image from Toma et al 2014).
Every year, hundreds of word lovers arrive from across the US to compete in the American Crossword Puzzle tournament. They solve clues (e.g. "caught some Z's") and place the answers (e.g. "slept") in a grid. Meanwhile, a separate group of wordsmiths gather regularly to compete at Scrabble, the game that involves forming words out of letter tiles and finding a suitable place for them on the board.

Both sets of players have exceptional abilities, but how exactly do they differ from each other and from non-players of matched academic ability? Some answers are provided by Michael Toma and his colleagues, who have performed the first detailed comparison of the mental skills of the most elite crossword and Scrabble players in the US. Previous studies on gaming expertise have mostly involved chess players, so this is a refreshing new research angle.

Toma's team recruited 26 elite Scrabble players (in the top two per cent of competitive players, on average; 20 men) and 31 elite crossword players (in the top seven per cent, on average; 22 men) to complete several tests of working memory – the kind of memory we use to hold and manipulate information over short time-scales.

For example, there was a visuospatial task that involved judging whether images were symmetrical, while also remembering highlighted locations in a series of grids that always appeared after each symmetry image. Another challenge was the reading span task (a test of verbal working memory), which involved judging the grammatical sense of sentences, while also remembering the order of individual letters that were flashed on-screen after each grammatical challenge.

The researchers anticipated that the Scrabble players would outperform the crossworders on visuospatial working memory, whereas they thought the crossword players might be superior on verbal working memory. These predictions were based on the contrasting skills demanded by the two games. Scrabble players often spend hours learning lists of words that are legal in the game, but unlike crossword players, they don't need to know their meaning. In fact many Scrabble players admit to not knowing the meaning of many of the words they play. On the other hand, Scrabble players need skills to rearrange letters and to find a place for their words on the board (a visuospatial skill), whereas crossword players do not need these skills so much because the grid is prearranged for them.

The researchers actually uncovered no group differences on any of the measures of visuospatial and verbal working memory. However, in line with predictions, the crossword competitors outperformed the Scrabble players on an analogies-based word task - identifying a pair of words that have the same relation to each other as a target pair - and the crossworders also had higher (self-reported) verbal SAT scores than the Scrabble players (SAT is a standardised school test used in the US). The two groups also differed drastically in the most important strategies they said they used during game play - for instance, mental flexibility was far more important for crossworders, whereas anagramming was important for Scrabble players but not mentioned by crossworders.

Both expert groups far outperformed a control group of high-achieving students on all measures of verbal and visuospatial working memory. This was despite the fact the students had similar verbal SAT levels to the expert players. So it seems the elite players of both games have highly superior working memory relative to controls, but this enhancement is not tailored to their different games.

Toma and his team said that by looking beyond chess and studying experts in cognitively demanding verbal games, their research "helps to build a more general understanding of the cognitive mechanisms that underlie elite performance." From a theoretical perspective, their finding of no working memory differences between Scrabble and crossword competitors is supportive of a domain general account of working memory - the idea that there exists a single mechanism that supports the processing of verbal and visuospatial information.

_________________________________

Toma, M., Halpern, D., & Berger, D. (2014). Cognitive Abilities of Elite Nationally Ranked SCRABBLE and Crossword Experts. Applied Cognitive Psychology. DOI: 10.1002/acp.3059

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Thursday, 11 September 2014

The illusion that gives you sensations in a rubber tongue

Our sense of where our bodies begin and end usually feels consistent and reliable. However, psychologists have been having fun for decades exposing just how malleable the body concept can be.

You may have heard of the "rubber hand illusion" (video). By visibly stroking a rubber hand in time with the stroking of a participant's hidden real hand, you can induce in them the feeling of sensation in the rubber hand.

The rubber hand illusion is thought to occur because the brain tends to bind together information arising from different sensory modalities. The stroking sensation arrives at one's real hand, but the stroking is seen occurring at the same time at the rubber hand. The brain binds these two experiences and the visual modality wins, transferring the felt sensation to the rubber appendage.

Because the rubber hand illusion depends on the dominance of vision, Charles Michel and his colleagues wondered whether a similar illusion would occur for the tongue – one of our own body parts that we feel but rarely see. The researchers purchased a fake tongue from a magic shop (see pic), and for forty seconds they stroked this tongue with a cotton bud (Q-tip) at the same time as they stroked each participant's real tongue. The participants could see the stroking of the fake tongue, but the stroking of their own tongue was hidden from view.

Averaged across the 32 student participants, responses indicated an overall sense of being stroked on the rubber tongue, "thus demonstrating," the researchers said, "visual capture over the felt position of the tongue for the very first time." Further evidence for an illusory effect came from the fact that sensation in the rubber tongue was stronger when the stroking of the fake and real tongues was performed in synchrony rather than out of time. Synchronous stroking also led to more agreement from the students that they felt as though they could move the fake tongue, and that the fake tongue was their own.

Next, the researchers shone a laser pointer on the fake tongue as the participants watched. Twenty-two of the participants said that this triggered sensations in their real tongue - some said it felt cold, others warm, tactile and/or tingly. "I felt vibrations on my tongue moving in synchrony with the light movement," one student said.

The researchers say their results show that an illusion similar to the rubber hand illusion can be experienced with the tongue. "We call this 'the Butcher's Tongue Illusion'," they said.

_________________________________

Michel, C., Velasco, C., Salgado-Montejo, A., & Spence, C. (2014). The Butcher's Tongue Illusion. Perception, 43(8), 818-824. DOI: 10.1068/p7733

-Further reading-
Psychologist magazine interview with cognitive neuroscientist and ‘master of illusion’ Henrik Ehrsson.

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
