Wednesday, 1 July 2015

What kind of a person volunteers for a free brain scan?

When psychologists scan the brains of a group of people, they usually do so in the hope that the findings will generalise more widely. For example, if they find that there are correlations between localised brain shrinkage and mental performance in a group of healthy older participants, they will usually infer that such correlations apply in healthy older people more generally. But there's an important problem with this logic (one that applies to other fields of psychology): what if the people who volunteer for brain scans are systematically different from those who don't?

To explore this issue, Mary Ganguli and her colleagues turned to 1,982 older adults (aged 65+) who were taking part in a large, long-running study of ageing. This study excluded people who were severely mentally impaired or in long-term care.

Helpfully, the researchers already had a good deal of data from all the participants, including their demographics, health and mental skills. Next they asked the participants if they'd be interested in taking part in a free brain scan study at their local hospital in return for a cash incentive.

Nearly half the sample (46.2 per cent) stated flat out that they would not be interested. The others gave answers ranging from definitely to maybe. Those who expressed an interest in volunteering for a brain scan differed from those who were definitely not keen in many ways: the willing were more likely to be younger, male, better educated, married, employed, free from depressive symptoms, mentally fitter, subjectively healthier, on fewer meds and living unsupervised. There were no differences between the groups in terms of subjective memory concerns or ethnicity.

Next, the researchers conducted an actual brain scan on 48 of the participants who'd earlier expressed an interest. This revealed the expected correlations between grey matter volume in specific brain areas and cognitive performance.

Now the researchers made some adjustments so that the results from each brain scan participant were weighted according to how similar they were to the averaged group of 1,982 participants involved in the larger ageing study. This was a proof of principle, to see if it's possible to correct for the bias introduced by relying on volunteers rather than truly random samples. The adjustment certainly made a difference to the findings – now grey matter volume in fewer regions showed correlations with cognitive test scores, which the researchers attributed to a reduction in bias.
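In spirit, the adjustment amounts to computing the brain–behaviour statistics with per-participant weights, so that volunteers who resemble the wider cohort count for more. Here's a minimal sketch of a weighted correlation in Python – purely an illustration of the principle, with made-up numbers, not the authors' actual procedure:

```python
import math

def weighted_corr(x, y, w):
    """Weighted Pearson correlation: participants whose weights are
    larger (because they resemble the full cohort) count for more."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / sw
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / math.sqrt(vx * vy)

# Hypothetical data: grey matter volumes, cognitive scores, and
# similarity-based weights for five scan volunteers.
grey_matter = [1.2, 0.9, 1.1, 0.8, 1.0]
cognition = [30, 22, 28, 20, 25]
weights = [0.5, 1.5, 0.8, 1.4, 0.8]
print(round(weighted_corr(grey_matter, cognition, weights), 3))
```

With equal weights this reduces to the ordinary correlation; skewing the weights towards the more "representative" volunteers can strengthen or weaken the apparent brain–behaviour link, which is the kind of shift the researchers observed.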

This isn't the most exciting brain scan study you'll read about this year, and the specific findings might only apply to older adults, but it addresses an important issue in neuroimaging and contributes to the gradual refining of psychological methods, helping our science become more reliable by avoiding biased results.


Ganguli, M., Lee, C., Hughes, T., Snitz, B., Jakubcak, J., Duara, R., & Chang, C. (2015). Who wants a free brain scan? Assessing and correcting for recruitment biases in a population-based sMRI pilot study. Brain Imaging and Behavior, 9(2), 204-212. DOI: 10.1007/s11682-014-9297-9

--further reading--
Just how representative are the people who volunteer for psychology experiments?
Beware the "super well" - why the controls in psychology research are often too healthy
How burnt-out students could be skewing psychology research

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Tuesday, 30 June 2015

What the textbooks don't tell you about psychology's most famous case study

Image: Photograph by Jack Wilgus of
a daguerreotype of Phineas Gage
in the collection of Jack and Beverly Wilgus.
It's a remarkable, mythical tale with lashings of gore – no wonder it's a favourite of psychology students the world over. I'm talking about Phineas Gage, the nineteenth-century railway worker who somehow survived the passing of a three-foot-long tamping iron through the front of his brain and out the top of his head. What happened to him next?

If you turn to many of the leading introductory psychology textbooks (American ones, at least), you'll find the wrong answer, or a misleading account. Richard Griggs, Emeritus Professor of Psychology at the University of Florida, has just analysed the content of 23 contemporary textbooks (either released or updated within the last couple of years), and he finds most of them contain distortions, omissions and inaccuracies.

It needn't be so. Thanks to painstaking historical analysis of primary sources (by Malcolm Macmillan and Matthew Lena) – much of it published between 2000 and 2010 – and the discovery during the same time period of new photographic evidence of post-accident Gage (see image, right), it is now believed that Gage made a remarkable recovery from his terrible injuries. He ultimately emigrated to Chile where he worked as a horse-coach driver, controlling six horses at once and dealing politely with non-English speaking passengers. The latest simulations of his injury help explain his rehabilitation – it's thought the iron rod passed through his left frontal lobe only, leaving his right lobe fully intact.

Image: From Van Horn et al 2012
Yet the textbooks mostly tell a different story. Of the 21 that cover Gage, only four mention the years he worked in Chile. Only three detail his mental recovery. Fourteen of the books describe the first research that attempted to identify the extent of his brain injuries, but just four give the results of the most technically advanced effort, published in 2004, which first suggested his brain damage was limited to the left frontal lobe. Only nine of the books feature either of the two photos of Gage to have emerged in recent times.

So the textbooks mostly won't tell you about Gage's rehabilitation, or provide you with the latest evidence on his injuries. Instead, you might hear how he never worked again and became a vagrant, or that he became a circus freak for the rest of his life, showing off the holes in his head. "The most egregious error," says Griggs, "seems to be that Gage survived for 20 years with the tamping iron embedded in his head!"

Does any of this matter? Griggs argues strongly that it does. There are over one and a half million students enrolled in introductory psychology courses in the US alone, and most of them are introduced to the subject via textbooks. We know from past work that psychology textbook coverage of other key cases and studies is also often distorted and inaccurate. Now we learn that psychology's most famous case study is also misrepresented, potentially giving a misleading, overly simplistic impression about the effects of Gage's brain damage. "It is important to the psychological teaching community to identify inaccuracies in our textbooks so that they can be corrected, and we as textbook authors and teachers do not continue to 'give away' false information about our discipline," Griggs concludes.


Griggs, R. (2015). Coverage of the Phineas Gage Story in Introductory Psychology Textbooks: Was Gage No Longer Gage? Teaching of Psychology, 42(3), 195-202. DOI: 10.1177/0098628315587614

--further reading--
Phineas Gage - Unravelling the Myth
Coverage of Phineas Gage in "Great Myths of the Brain"
Foundations of Sand - the lure of academic myths and their place in classic psychology.

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Monday, 29 June 2015

We're more likely to cheat when we think it's our last chance to do so

Imagine spending your school half-term week with a forgetful relative who always leaves money scattered around the house. Would you pinch any? If so, when, and why? A new paper suggests that we are most likely to “cheat at the end”, and uses a neat method to find out why.

A number of theories predict we are likelier to cheat later than earlier. Perhaps we award ourselves moral credits for being good earlier, and later spend them like Catholic indulgences for guilt-free sin. Or maybe the struggle with temptation wears down our self-control, or we become desensitised to the thought of cheating. The job of psychological science is to distinguish between explanations, and Daniel Effron’s team developed a method that argues against these, in favour of an alternative that’s based on “anticipatory regret” or the fear of missing out.

Participants sat alone in a room and tossed a coin 13 times, supposedly as part of an experiment on psychokinesis (the ability to control objects with the mind). Before each toss, participants predicted whether the coin would land heads or tails and then they recorded the outcome themselves using a computer. They were told each “correct” toss would earn them 10 cents. Crucially, the experimenters made it clear that they were depending on the participants to honestly report their successes. So, cheating was both possible and profitable.

The researchers looked for signs of cheating, not in any one individual, but by examining average performance across all 847 participants. For any given toss, when the group’s average success rate exceeded the 50/50 success rate you’d expect based on chance, this was taken as a sign that cheating was at play.
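That group-level logic is, in essence, a one-sided binomial test: how surprising would this many reported successes be if everyone reported honestly? A hypothetical sketch (illustrative numbers, not the authors' data or their exact analysis):

```python
from math import comb

def binomial_p_value(successes: int, n: int, p: float = 0.5) -> float:
    """One-sided exact binomial test: the probability of seeing at
    least `successes` correct calls out of `n` if each report were
    honest and success truly had probability `p`."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

# Hypothetical: suppose 470 of 847 participants reported a correct
# prediction on a given toss. Is that more than chance would allow?
p_val = binomial_p_value(470, 847)
print(round(p_val, 4))  # a small p-value hints at group-level over-reporting
```

A reported success rate hovering around 50 per cent yields a large p-value (no evidence of cheating), while an excess of "successes" across hundreds of participants quickly becomes implausible under honest reporting.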

Effron’s team were particularly interested in the success rate on the seventh coin toss. They’d told some participants they would have 13 tosses in total: their seventh toss had a similar success rate to the previous six, at around chance. This suggests that the first six tosses hadn’t eroded willpower, or built up moral credits ready to be cashed in. By contrast, the researchers had told other participants they would only have seven tosses of the coin. What was striking was that these participants appeared to cheat more on the seventh toss, collectively achieving significantly more successes than would be expected based on chance. This result suggests it wasn’t the build-up of prior events that mattered, but the fact that this seemed to be the final opportunity… and if they didn’t act now, they never could.

Indeed, when these cheating participants were informed there would in fact be more tosses to follow, their honesty suddenly popped back up on toss eight and onwards. This suggests their willpower hadn’t been used up, nor were they desensitised to cheating. The researchers also conducted a meta-analysis taking in data from this experiment, a further replication, and other work, with the overall results suggesting that we are three times more likely to cheat at what we believe to be the final opportunity than at any other time.

This may remind some of you of research using the "Prisoner's Dilemma" economic game, which shows that "defection" or mistreatment of others rises towards the end of a period of interaction. But as Effron's team notes, that pattern is due to the to-and-fro of the game: if I swindle you at the start of our interaction, I can expect you to swindle me back at every subsequent opportunity. Here, there was no ongoing interaction, and so no reason why cheating on trial one or seven should have different consequences. So this "cheating at the end" effect isn't about how others treat you, but how you expect to feel about yourself.

The authors conclude that knowing when cheating is likely to occur – on the last day of a period where a work supervisor is absent, for instance – could be useful in organising the timing and targeting of anti-cheating strategies, such as reminding people of moral standards just before a “peak time” period.


Effron, D., Bryan, C., & Murnighan, J. (2015). Cheating at the End to Avoid Regret. Journal of Personality and Social Psychology. DOI: 10.1037/pspa0000026

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Saturday, 27 June 2015

Link feast

Our pick of the week's 10 best psychology and neuroscience links:

Psychology: Heaven and Hell
This December in London we're putting on a little party to celebrate 10 years of the Research Digest blog. Come join us!

It Pays to Be Nice
… even when other people are screwing you over.

Scientists Just Published Ambitious New Guidelines for Conducting Better Research
Jesse Singal reports on the recommendations produced by University of Virginia psychologist Brian Nosek and others.

A conference held in Manchester this week promised to bring together the worlds of neuroscience and business. As delegates began tweeting soundbites full of neurobunk, skeptics had a field day with the conference hashtag.

Oliver Sacks, Antonio Damasio and Others Debate Christof Koch on the Nature of Consciousness
A group of neuroscience heavy-weights discuss a theory of consciousness that sees a key role for cell membranes.

I Once Tried to Cheat Sleep, and For a Year I Succeeded
For one year, Akshat Rathi managed to keep up the Everyman Sleep Schedule: 3.5 hours at night and 3 x 20-min naps in the day.

Please, Corporations, Experiment on Us
A psychologist and an ethicist argue that it's better for powerful people and corporations to test what actually works in our best interests than to rely on their gut instincts.

Clinical Psychologists Launch Guidelines on Hoarding 
Information and recommendations from the British Psychological Society's Division of Clinical Psychology.

What's Happening in Your Brain When You Can't Stay Awake?
Over at Science of Us, I looked at a brain scan study of the moment the battle to stay awake is lost.

The Hard Science of Oxytocin
It's been dubbed the "cuddle hormone" because of its role in love and bonding, but new findings show this is a gross oversimplification. Helen Shen reports for Nature.

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Friday, 26 June 2015

Is dyslexia associated with exceptional visual-spatial abilities?

Image: Jose.Stuefer / Flickr
Children and adults with dyslexia have reading skills that are weak relative to their overall intelligence. That's why dyslexia is often referred to as a "specific learning disability". But what if such a profile also tended to be associated with exceptional strengths in other areas, such as visual skills? That's certainly what some experts have proposed, based, for example, on the observation that people with dyslexia are over-represented in fields that involve visual-spatial abilities, such as art and architecture.

Now a team led by Mirela Duranovic has tested 40 children (19 boys), aged 9-11 and diagnosed with dyslexia, on a range of tests of imagery and visual memory. The children with dyslexia performed similarly to 40 age-matched, non-dyslexic controls (19 boys) on most tests, including the mental rotation of shapes; copying a complex, abstract figure (the so-called Rey-Osterrieth Figure); and following the beginning of a line to the end, through a tangle of other lines from the left to right of a page.

On memory for simple geometric shapes there was a tendency for the dyslexic children to underperform. And on one test, the children with dyslexia clearly performed worse than the controls: this was drawing the Rey-Osterrieth Figure from memory.

However, on yet another test, the dyslexic children excelled, outperforming the controls. This was the Paper Folding Test, which requires looking at a depiction of how a piece of paper is folded and where a hole is punched through it, and then judging which one of several illustrations correctly depicts how the paper will look once unfolded again (see below; the correct answer is C).

The superior performance of the dyslexic children on the Paper Folding Test is intriguing – this test is arguably more challenging and complex than simple mental rotation tasks, and involves a larger sequence of mental steps to complete.

This new study adds to a complicated, contradictory literature on visual-spatial skills in dyslexia, filled with studies that have variously reported no differences between dyslexic people and controls, deficits in dyslexic groups, and advantages in dyslexia.

More research is now needed to explore why the currently reported dyslexia advantage was observed: what is it about the mental processes involved in the Paper Folding Task that meant the dyslexic children performed better than controls? Also, will the finding replicate, and will it generalise to other tasks that require the same mental processes?

"Connecting dyslexia to talent leads us in a more optimistic direction than only associating dyslexia with a deficit," the researchers concluded. "The revelation of talent in individuals with dyslexia opens a door to more effective educational strategies and for choosing professions in which individuals with dyslexia can be successful."


Duranovic, M., Dedeic, M., & Gavrić, M. (2014). Dyslexia and Visual-Spatial Talents. Current Psychology, 34(2), 207-222. DOI: 10.1007/s12144-014-9252-3

--further reading--
The enigma of dyslexic musicians
Most genes that influence maths ability also affect reading

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Thursday, 25 June 2015

Here's a technique that helps self-critical people build confidence from a taste of success

The directed abstraction technique acts as a springboard,
allowing the timid to gain confidence from initial success
Last week Kathleen finally put aside her fears about public speaking to give a presentation… and it went pretty well! But when you caught her at lunch today and asked if she wanted future opportunities to present, you found she was as pessimistic about her ability as ever.

This story reflects an unfortunate truth: people with low self-belief are liable to hold onto negative assumptions about themselves despite concrete evidence to the contrary; that is, they fail to "generalise from success". Thankfully, in a new paper, psychologist Peter Zunick and his colleagues describe a technique, called directed abstraction, that can help the self-critical change their mindsets.

Directed abstraction means stopping to consider how a specific success may have more general implications – this is the abstraction part – and also ensuring this thinking is directed towards how personal qualities were key to the success. Let’s see what this means in practice.

In a first study, 86 students guessed the number of dots flashed up on screen, and were given fake but convincing positive feedback on their performance. Half the students were then asked to explain how they completed the task, which kept their thoughts on a very concrete, specific level. The other half were prompted to engage in directed abstraction by completing the sentence: “I was able to score very high on the test because I am: ... ” This query is not about how, but why – a more abstract consideration – and also focuses on the individual’s own qualities.

Engaging in directed abstraction appeared to give a particular boost to those participants who’d earlier reported believing they have low competence day to day: afterwards, they not only had more confidence in their estimation ability (than similarly self-critical control participants), they also believed they would do better at similar tasks (like guessing jelly beans in a jar) that they faced in the future.

In another experiment, Zunick’s research team sifted through hundreds of students to find 59 with low faith in their public speaking skills. Each of them was given a few minutes to prepare and then make a speech to camera on the topic of transition to college life, a fairly easy one to tackle. Each participant then watched themselves on video, with the experimenter offering reassuring feedback and implying that they did surprisingly well.

The same participants then engaged in directed abstraction (or the control "how" query) before being thrown once more into the breach with a second speechmaking experience, this time on a tough topic, with no coddling feedback afterward – this was the real deal. Did the directed abstraction participants gain confidence from their early success that could survive a rockier second round? They did, reporting more confidence for future public speaking than their peers.

The technique seems appropriate for a range of settings, although obviously it should only be used following an event that can reasonably be seen as a success; otherwise it could backfire. And it’s simple to use to help a friend or yourself: just take the time after a success to think through what it owed to your personal qualities. Then confidence can follow.


Zunick, P. V., Fazio, R. H., & Vasey, M. W. (2015). Directed abstraction: Encouraging broad, personal generalizations following a success experience. Journal of Personality and Social Psychology, 109(1), 1-19. PMID: 25984786

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Wednesday, 24 June 2015

New research challenges the idea that willpower is a "limited resource"

A popular psychological theory says that your willpower is
a "limited resource" like the fuel in your car, but is it wrong?
When we use willpower to concentrate or to resist temptation, does it leave us depleted so that we have less self-control left over to tackle new challenges? This is a question fundamental to our understanding of human nature and yet a newly published investigation reveals that psychologists are in open disagreement as to the answer.

The idea that willpower is a limited resource, much like the fuel in your car, is popular in academic psychology and supported by many studies. In their recent report What You Need To Know About Willpower: The Psychological Science of Self-control, the American Psychological Association states "A growing body of research shows that resisting repeated temptations takes a mental toll. Some experts liken willpower to a muscle that can get fatigued from overuse."

This view was backed by an influential meta-analysis published in 2010 [pdf] that looked at the results from nearly 200 published experiments. But now a team led by Evan Carter at the University of Miami has argued that the 2010 study was seriously flawed and they've published their own series of meta-analyses, the findings of which undermine the limited resource theory (also known as the theory of ego depletion).

Many psychology studies on willpower follow a similar format: one group of participants is first asked to perform an initial challenging task designed to drain their willpower, before completing a second "outcome" task that also requires willpower. For comparison, a control group of participants performs the outcome task without the first challenge. Superior performance by the control participants (on the outcome task) is taken as evidence that the willpower of the first group was left depleted by the initial challenge, thus supporting the theory that willpower is a limited resource.

The new meta-analyses and the 2010 effort both consider the combined results from many studies following this format, but the new analyses are far stricter: they only include studies that used tasks well-established in the literature as ways to challenge willpower (such as suppressing emotional reactions to videos and resisting tempting food), and that also used established outcome measures (such as persistence on impossible anagrams, food consumption, and standardised academic tests like the graduate record exam). The 2010 analysis, by contrast, included a far wider range of studies, some of which stretched the definition of a willpower challenge to its limits – darts playing and purely hypothetical temptations, for example.

Another key difference between the 2010 study and the new analyses is that Carter and his team trawled conference reports to find unpublished studies on willpower. This is important because in this scientific field, as with most others, it's likely there has been a bias in the literature towards publishing positive results (in this case, those consistent with the popular idea that willpower becomes depleted with repeated use).

When Carter's team analysed the evidence from the 68 relevant published and 48 relevant unpublished studies that they identified, they found very little overall support for the idea that willpower is a limited resource. The one exception was when the outcome measure involved a standardised test – here performance did appear to be diminished by a prior self-control challenge.

But for other outcome tasks such as resisting food, the combined data from published and unpublished experiments either pointed to no effect of a prior self-control challenge, or there was worrying evidence of a publication bias for positive results, as was the case, for example, when the outcome challenge involved impossible anagrams or tests of working memory. The new meta-analyses even found some support for the idea that self-control improves through successive challenges, a result that's consistent with rival theories such as "learned industriousness".
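For readers curious about the mechanics, the heart of any such meta-analysis is pooling effect sizes weighted by their precision. A toy fixed-effect (inverse-variance) sketch with made-up numbers – the authors' actual analyses were considerably more sophisticated:

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance pooling: precise studies (small variance)
    get more weight in the combined effect estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect
    return pooled, se

# Hypothetical depletion effect sizes (d) and their variances from
# four imaginary studies, some published, some not.
effects = [0.30, 0.05, -0.10, 0.20]
variances = [0.04, 0.02, 0.05, 0.03]
d, se = fixed_effect_pool(effects, variances)
print(round(d, 3), round(se, 3))
```

The key point the sketch illustrates: once low-precision or unpublished null results enter the pool, a pooled effect that looked robust can shrink towards zero, which is broadly what Carter's team reported for most outcome tasks.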

This new series of meta-analyses should not be taken as the end of the theory of willpower as a limited resource. Proponents of that theory will likely respond with their own counter-arguments, including questioning the new study's use of unpublished work. However, the results certainly give pause. "We encourage scientists and non-scientists alike to seriously consider other theories of when and why self-control might fail," Carter and his team conclude. It's worth noting too that this message comes after the recent doubts raised about a related idea in willpower research – specifically, the notion that depleted self-control is caused by a lack of sugar in the body.

Carter, E., Kofler, L., Forster, D., & McCullough, M. (2015). A Series of Meta-Analytic Tests of the Depletion Effect: Self-Control Does Not Seem to Rely on a Limited Resource. Journal of Experimental Psychology: General. DOI: 10.1037/xge0000083

--further reading--
Self-control – the moral muscle. Roy F. Baumeister outlines intriguing and important research into willpower and ego depletion

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.