Saturday, 20 September 2014

Link feast

Our pick of the best psychology and neuroscience links from the past week:

What’s Up With That: Why Do All My Friends Like the Same Music?
Nick Stockton at WIRED speaks to Petr Janata, a psychologist who studies music and the brain at UC Davis.

Conspiracy Theories Used To Be Just For Eccentrics. Now Sensible People Are Getting Carried Away With Them Too
In one poll, nearly half of Scots said they believed the government was hiding an oil field. This, says Dorian Lynskey, is just the latest example of how belief in conspiracy theories is becoming more widespread.

Are Dolphins Cleverer Than Dogs?
Justin Gregg for the BBC surveys the evidence and concludes this is really the wrong question.

Have You Fallen Victim to the Guru Effect?
Neurobonkers sympathises with Michael Billig's (author of Learn to Write Badly: How to Succeed in the Social Sciences) lament about the lack of transparent writing in the social sciences.

Amnesia - The Reality: Each Day a Blank Slate For the Man With No Memory
With the new Hollywood film Before I Go To Sleep presenting a rather misleading view of amnesia, the Independent profiles the real-life struggles of amnesiac John Mills.

Should We All Take a Bit of Lithium?
Communities with more trace lithium in their drinking water have lower suicide rates. Psychiatrist Anna Fels wonders whether we should consider adding more of it to our diets.

At 24, Woman Discovers She Was Born Without A Key Brain Structure, The Cerebellum
Neurosurgeons in China have reported the case of a young woman who went to hospital complaining of dizziness only to discover that she'd been born without a cerebellum.

Resilience: How To Train a Tougher Mind
Emma Young at BBC Future looks at the science of mental resilience.

Runs In The Family
"Cricketing dynasties seem to imply that talent is genetic," writes David Papineau at Aeon Magazine. "Yet the evidence from other sports queers the pitch".

Should Policy Makers and Financial Institutions Have Access to Billions of Brain Scans?
The Neurocritic discusses the possible implications of a new brain imaging study that linked risk propensity with grey matter volume in the parietal lobe.

Post compiled by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Friday, 19 September 2014

The 10 most controversial psychology studies ever published

Controversy is essential to scientific progress. As Richard Feynman said, "science is the belief in the ignorance of experts." Nothing is taken on faith; all assumptions are open to further scrutiny. It's a healthy sign, therefore, that psychology studies continue to generate great controversy. Often the heat is created by arguments about the logic or ethics of the methods; other times it's because of disagreements about the implications of the findings for our understanding of human nature. Here we digest ten of the most controversial studies in psychology's history. Please use the comments to have your say on these controversies, or to highlight provocative studies that you think should have made it onto our list.

1. The Stanford Prison Experiment
Conducted in 1971, Philip Zimbardo's experiment had to be aborted when students allocated to the role of prison guards began abusing students who were acting as prisoners. Zimbardo interpreted the events as showing that certain situations inevitably turn good people bad, a theoretical stance he later applied to the acts of abuse that occurred at the Abu Ghraib prison camp in Iraq from 2003 to 2004. This situationist interpretation has been challenged, most forcefully by the British psychologists Steve Reicher and Alex Haslam. The pair argue, on the basis of their own BBC Prison Study and real-life instances of prisoner resistance, that people do not yield mindlessly to toxic environments. Rather, in any situation, power resides in the group that manages to establish a sense of shared identity. Critics also point out that Zimbardo led and inspired his abusive prison guards; that the Stanford Prison Experiment (SPE) may have attracted particular personality types; and that many guards did behave appropriately. The debate continues, as does the influence of the SPE on popular culture, so far inspiring at least two feature-length movies.

Zimbardo, P. G. (1972). Comment: Pathology of imprisonment. Society, 9(6), 4-8. Google Scholar Citations: 324.
Haney, C., Banks, W. C., & Zimbardo, P. G. (1973). Study of prisoners and guards in a simulated prison. Naval Research Reviews, 9, 1-17. Google Scholar Citations: 216.

2. The Milgram "Shock Experiments"
Stanley Milgram's studies conducted in the 1960s appeared to show that many people are incredibly obedient to authority. Given instructions by a scientist, many participants applied what they thought were deadly levels of electricity to an innocent person. Milgram ran not one study but several, and his research has inspired many imitations, including in virtual reality and in the form of a French TV show. The original studies have attracted huge controversy, not only because of their ethically dubious nature, but also because of the way they have been interpreted and used to explain historical events such as the supposedly blind obedience to authority in the Nazi era. Haslam and Reicher have again been at the forefront of counter-arguments. Most recently, based on archived feedback from Milgram's participants, the pair argue that the observed obedience was far from blind - in fact many participants were pleased to have taken part, so convinced were they that their efforts were making an important contribution to science. It's also notable that many participants in fact disobeyed instructions, and in such cases, verbal prompts from the scientist were largely ineffective.

Milgram, S. (1963). Behavioral study of obedience. The Journal of Abnormal and Social Psychology, 67(4), 371. Google Scholar Citations: 3474

3. The "Elderly-related Words Provoke Slow Walking" Experiment (and other social priming research)
One of the experiments in a 1996 paper published by John Bargh and colleagues showed that when people were exposed to words that pertained to being old, they subsequently walked away from the lab more slowly. This finding is just one of many in the field of "social priming" research, all of which suggest our minds are far more open to influence than we realise. In 2012, a different lab tried to replicate the elderly words study and failed. Professor Bargh reacted angrily. Ever since, the controversy over his study and other related findings has only intensified. Highlights of the furore include an open letter from Nobel Laureate Daniel Kahneman to researchers working in the area, and a mass replication attempt of several studies in social psychology, including social priming effects. Much of the disagreement centres around whether replication attempts in this area fail because the original effects don't exist, or because those attempting a replication lack the necessary research skills, make statistical errors, or fail to perfectly match the original research design.

Bargh, J. A., Chen, M., & Burrows, L. (1996). Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action. Journal of personality and social psychology, 71(2), 230. Google Scholar Citations: 3276

4. The Conditioning of Little Albert
Back in 1920 John Watson and his future wife Rosalie Rayner deliberately induced fears in an 11-month-old baby. They did this by exposing him to a particular animal, such as a white rat, at the same time as banging a steel bar behind his head. The research is controversial not just because it seems so unethical, but also because the results have tended to be reported in an inaccurate and overly simplified way. Many textbooks claim the study shows how fears are easily conditioned and generalised to similar stimuli; they say that after being conditioned to fear a white rat, Little Albert subsequently feared all things that were white and fluffy. In fact, the results were far messier and more inconsistent than that, and the methodology was poorly controlled. Over the last few years, controversy has also developed around the identity of poor Little Albert. In 2009, a team led by Hall Beck claimed that the baby was in fact Douglas Merritte. They later claimed that Merritte was neurologically impaired, which if true would only add to the unethical nature of the original research. However, a new paper published this year by Ben Harris and colleagues argues that Little Albert was actually a child known as Albert Barger.

Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3(1), 1. Google Scholar Citations: 2031

5. Loftus' "Lost in The Mall" Study
In 1995, Elizabeth Loftus and Jacqueline Pickrell documented how easy it was to implant in people a fictitious memory of having been lost in a shopping mall as a child. The false childhood event is simply described to a participant alongside true events, and over a few interviews it soon becomes absorbed into the person's true memories, so that they think the experience really happened. The research and other related findings became hugely controversial because they showed how unreliable and suggestible memory can be. In particular, this cast doubt on so-called "recovered memories" of abuse that originated during sessions of psychotherapy. This is a highly sensitive area and experts continue to debate the nature of false memories, repression and recovered memories. One challenge to the "lost in the mall" study was that participants may really have had the childhood experience of having been lost, in which case Loftus' methodology was recovering lost memories of the incident rather than implanting false memories. This criticism was refuted in a later study (pdf) in which Loftus and her colleagues implanted in people the memory of having met Bugs Bunny at Disneyland. Cartoon aficionados will understand why this memory was definitely false.

Loftus, E. F., & Pickrell, J. E. (1995). The formation of false memories. Psychiatric annals, 25(12), 720-725. Google Scholar Citations: 677
Loftus, E. F. (1993). The reality of repressed memories. American psychologist, 48(5), 518. Google Scholar Citations: 1413

6. The Daryl Bem Pre-cognition Study
In 2010 social psychologist Daryl Bem attracted huge attention when he claimed to have shown that many established psychological phenomena work backwards in time. For instance, in one of his experiments, he found that people performed better at a memory task for words they revised in the future. Bem interpreted this as evidence for pre-cognition, or psi - that is, effects that can't be explained by current scientific understanding. Superficially at least, Bem's methodology appeared robust, and he took the laudable step of making his procedures readily available to other researchers. However, many experts have since criticised Bem's methods and statistical analyses (pdf), and many replication attempts have failed to support the original findings. Further controversy came from the fact that the journal that published Bem's results refused at first to publish any replication attempts. This prompted uproar in the research community and contributed to what's become known as the "replication crisis" or "replication wars" in psychology. Unabashed, Bem published a meta-analysis this year (an analysis that collated results from 90 attempts to replicate his 2010 findings) and he concluded that overall there was solid support for his earlier work. Where will this controversy head next? If Bem's right, you probably know the answer already.

Bem, D. J. (2011). Feeling the future: experimental evidence for anomalous retroactive influences on cognition and affect. Journal of personality and social psychology, 100(3), 407. Google Scholar Citations: 276

7. The Voodoo Correlations in Social Neuroscience study
Released online ahead of print, this paper initially bore the provocative title "Voodoo correlations in social neuroscience". Voodoo in this sense meant non-existent or spurious. Ed Vul and his colleagues had analysed over 50 studies that linked localised patterns of brain activity with specific aspects of behaviour or emotion, such as one that reported feelings of rejection were correlated highly with activity in the anterior cingulate cortex. Vul and his team said the high correlations reported in these papers were due to the use of inappropriate analyses - a form of "double-dipping" in which researchers took two or more steps: first identifying a region, or even a single voxel, linked with a certain behaviour, and then performing further analyses on just that area. The paper caused great offence to the many brain imaging researchers in social neuroscience whose work had been targeted. "Several of [Vul et al's] conclusions are incorrect due to flawed reasoning, statistical errors, and sampling anomalies," said the authors of one rebuttal paper. However, concerns about the statistical analyses used in imaging neuroscience haven't gone away. For example, in 2012 Joshua Carp wrote a paper claiming that most imaging papers fail to provide enough methodological detail to allow others to attempt replications.
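The statistical point is easy to demonstrate with a toy simulation (an illustration of the circularity the Vul paper described, not the authors' actual analysis; all numbers here are invented). If you correlate a behavioural measure with thousands of voxels of pure noise and then report only the best-scoring voxel, the selected correlation looks impressively high even though no true brain-behaviour relationship exists:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 20, 5000  # small sample, many voxels - typical fMRI proportions

# Pure noise: by construction there is no real brain-behaviour relationship
behaviour = rng.normal(size=n_subjects)
voxels = rng.normal(size=(n_subjects, n_voxels))

# Step 1 of the "double dip": correlate behaviour with every voxel...
r = np.array([np.corrcoef(behaviour, voxels[:, v])[0, 1] for v in range(n_voxels)])

# ...then report the winning voxel's correlation as if it were an independent finding
print(f"max |r| among {n_voxels} noise voxels: {np.max(np.abs(r)):.2f}")
```

With 20 subjects and 5,000 voxels the winning correlation typically lands around 0.7 despite the true effect being exactly zero - which is why selecting a region and then computing statistics on that same region inflates the reported values.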

Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspectives on psychological science, 4(3), 274-290. Google Scholar Citations: 688.

8. The Kirsch Anti-Depressant Placebo Effect Study
In 2008 Irving Kirsch, a psychologist who was then based at the University of Hull in the UK, analysed all the trial data on anti-depressants, published and unpublished, submitted to the US Food and Drug Administration. He and his colleagues concluded that for most people with mild or moderate depression, the extra benefit of anti-depressants versus placebo is not clinically meaningful. The results led to headlines like "Depression drugs don't work" and provided ammunition for people concerned about the overprescription of antidepressant medication. But there was also a backlash. Other experts analysed Kirsch's dataset using different methods and came to different conclusions. Another group made similar findings to Kirsch, but interpreted them very differently - as showing that drugs are more effective than placebo. Kirsch is standing his ground. Writing earlier this year, he said: "Instead of curing depression, popular antidepressants may induce a biological vulnerability making people more likely to become depressed in the future."

Kirsch, I., Deacon, B. J., Huedo-Medina, T. B., Scoboria, A., Moore, T. J., & Johnson, B. T. (2008). Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS medicine, 5(2), e45. Google Scholar Citations: 1450.

9. Judith Rich Harris and the "Nurture Assumption"
You could fill a library or two with all the books that have been published on how to be a better parent. The implicit assumption, of course, is that parents play a profound role in shaping their offspring. Judith Rich Harris challenged this idea with a provocative paper published in 1995 in which she proposed that children are shaped principally by their peer groups and their experiences outside of the home. She followed this up with two best-selling books: The Nurture Assumption and No Two Alike. Writing for the BPS Research Digest in 2007, Harris described some of the evidence that supports her claims: "identical twins reared by different parents are (on average) as similar in personality as those reared by the same parents ... adoptive siblings reared by the same parents are as dissimilar as those reared by different parents ... [and] ... children reared by immigrant parents have the personality characteristics of the country they were reared in, rather than those of their parents' native land." Harris has powerful supporters, Steven Pinker among them, but her ideas also unleashed a storm of controversy and criticism. "I am embarrassed for psychology," Jerome Kagan told Newsweek after the publication of Harris' Nurture Assumption.

Harris, J. R. (1995). Where is the child's environment? A group socialization theory of development. Psychological review, 102(3), 458. Google Scholar Citations: 1535

10. Libet's Challenge to Free Will

Your decisions feel like your own, but Benjamin Libet's study using electroencephalography (EEG) appeared to show that preparatory brain activity precedes your conscious decisions of when to move. One controversial interpretation is that this challenges the notion that you have free will. The decision of when to move is made non-consciously, so the argument goes, and then your subjective sense of having willed that act is tagged on afterwards. Libet's study and others like it have inspired deep philosophical debate. Some philosophers, Daniel Dennett among them, believe that neuroscientists have overstated the implications of these kinds of findings for people's conception of free will. Other researchers have pointed out flaws in Libet's research, such as people's inaccuracy in judging the instant of their own will. However, the principle of non-conscious neural activity preceding conscious will has been replicated using fMRI, and influential neuroscientists like Sam Harris continue to argue that Libet's work undermines the idea of free will.

Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act. Brain, 106(3), 623-642. Google Scholar Citations: 1483

Where do you stand on the implications and interpretations of these 10 psychology studies/theories? Which controversial studies do you think should have made it onto our list?

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Thursday, 18 September 2014

Why is poverty associated with mental health problems for some people, but not others?

By guest blogger Peter Kinderman
“I’ve been rich and I’ve been poor. Believe me, rich is better” (Mae West).  
Critiques of the rather discredited "disease-model" of mental illness are commonplace, but we also need to articulate the alternative. New research by Sophie Wickham and colleagues helps do that, by providing support for the idea that we learn, as a consequence of our experiences in life, a framework of appraising, understanding and responding to new challenges. This psychological schema then shapes our emotional and behavioural responses to future events.

Wickham and her colleagues used data from over 7000 people and, based on a composite measure of each person’s neighbourhood (including data on income, health, education, and crime), they found that participants living in more deprived neighbourhoods had much higher levels of both depression and paranoia.

But the researchers did not merely correlate social deprivation with mental health. They also looked at a range of psychological mediators. They found that, if people reported low levels of stress, high levels of trust in others and high levels of social support, then social deprivation was no longer associated with more depression. The same was partially true in the case of paranoia – when people reported low levels of stress and high levels of trust, social deprivation had a greatly reduced association with levels of paranoia.

In one sense this is a relatively conventional correlational study using secondary data (the Adult Psychiatric Morbidity Survey) to look at a rather well-established link between social factors and mental health. But I think there’s more to it than that.

There are many ways to be overly simplistic about mental health issues. A simple "disease-model" of mental health problems doesn’t really help us much, but an equally simplistic model of social causation is equally reductionist – reducing people to mechanistic pawns, pushed around by social pressures. Instead, a more elegant psychosocial model might suggest that the emotional (and behavioural) impact of life events is at least in part a consequence of how we appraise and respond to those events… and in turn that our appraisals and responses have been learned over time as a consequence of the events to which we have been exposed.

Wickham and colleagues have gone some way in exploring that hypothesis. Their data suggest that people’s appraisals of their circumstances – their perceived stress, perceived trust and perceived social support – mediate the impact of social deprivation on depression and paranoia. It is also interesting to note that these relationships appeared specific to depression and paranoia; they did not apply to auditory hallucinations or hypomania, the rates of which were not associated with poverty in this study.
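The mediation logic behind these findings can be sketched with fabricated data (a minimal illustration of a standard mediation check, not the authors' actual model, variables, or effect sizes). If perceived stress really does mediate the deprivation-depression link, the association between deprivation and depression should shrink sharply once the mediator is statistically controlled:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7000  # roughly the sample size of the survey described above

# Fabricated data in which deprivation acts on depression mostly *via* stress
deprivation = rng.normal(size=n)
stress = 0.6 * deprivation + rng.normal(size=n)               # hypothetical mediator
depression = 0.5 * stress + 0.05 * deprivation + rng.normal(size=n)

def slopes(y, predictors):
    """Least-squares regression coefficients of y on the given predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = slopes(depression, [deprivation])[0]               # total effect of deprivation
direct = slopes(depression, [deprivation, stress])[0]      # effect with mediator controlled
print(f"total effect: {total:.2f}, direct effect controlling for stress: {direct:.2f}")
```

The direct effect comes out far smaller than the total effect, which is the signature of mediation; the real study used this kind of logic (with formal mediation tests) across several candidate mediators.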

If these kinds of findings are replicated in future research, the implications could be important and far-reaching. But first, it seems important to replicate this work, exploring the specific combinations of social circumstances, and mediating psychological processes, that lead to different emotional and behavioural outcomes. Wickham and colleagues speculate about what some of these may be – for example, whereas the combination of deprivation and a person’s appraisal of their situation was associated with more depression and paranoia, perhaps a combination of childhood trauma and a perceptual source monitoring problem (e.g. misattributing one’s own thoughts to a third party) might be associated with auditory hallucinations.

The researchers also speculate about the implications of their findings - how we might intervene at a population level, with social and psychological interventions targeted at specific risk factors and psychological mechanisms. Long term political and social policies could address issues of population-level social disadvantage, deprivation and inequity. Similarly, social interventions and targeted welfare packages might be effective in addressing social risk factors at an individual or family level. And, unsurprisingly given that they are psychologists, Wickham and her colleagues also point out that psychological interventions such as cognitive-behaviour therapy (CBT) and interpersonal psychotherapy could help individuals develop more effective psychological responses to the inevitable social stressors that accompany social deprivation.

One study cannot possibly explore all these issues. But, by examining both the social deprivation that is known to contribute to mental health problems and the psychological mechanisms that mediate the impact of this social stress on the individual, Wickham and colleagues offer a model for an elegant approach to understanding and, ultimately, intervening to improve psychological health and well-being. These ideas are important, and new, but are also evidence of the growing maturity and power of psychosocial explanations in mental health. I discuss these ideas further in my book, A Prescription for Psychiatry: Why We Need a Whole New Approach to Mental Health and Wellbeing, and in my new, free, online course.


Wickham S, Taylor P, Shevlin M, & Bentall RP (2014). The impact of social deprivation on paranoia, hallucinations, mania and depression: the role of discrimination, social support, stress and trust. PLoS ONE, 9(8). PMID: 25162703

--further reading--
Worry and self-blame as the "final common pathway" towards poor mental health.

Post written by Peter Kinderman (@peterkinderman), Professor of Clinical Psychology at the University of Liverpool, UK. His research activity and clinical work concentrate on understanding and helping people with serious and enduring mental health problems, and on how psychological science can assist public policy in health and social care. His new book, A Prescription for Psychiatry: Why We Need a Whole New Approach to Mental Health and Wellbeing (Palgrave Macmillan) is available now.

Wednesday, 17 September 2014

There's a problem with assuming the most intelligent candidates make the best employees

Workplace research through the 20th Century suggested that selecting for intelligence is the best way to identify good performers. General mental ability (GMA), a popular recruitment measure that maps closely to the colloquial meaning of "intelligence", is strongly correlated with on-the-job performance, well ahead of any other single measure.

This consistent finding came from studies that mostly defined job performance as carrying out the duties expected in that role. Although intuitive, this neglects two types of "extra-role" behaviours identified and studied in more recent years: citizenship behaviours, such as volunteering time or treating colleagues with courtesy; and counter-productive work behaviours, such as spreading rumours, shirking, or theft. Now a new meta-analysis suggests that GMA isn't the best predictor of these crucial aspects of performance. In fact, intelligence may be of little use in predicting who will behave badly at work - although it may predict who can get away with it.

The meta-analysis winnowed the available literature down to 35 relevant studies that looked at citizenship and counterproductive behaviours in real organisations. Intelligence (GMA) was correlated with engaging in more citizenship behaviours, but the association was far weaker than between intelligence and traditional task-based measures of performance. The researchers, led by Erik Gonzalez-Mulé, then cross-compared their results with previous meta-analyses focused on personality, and concluded that personality and GMA each account for about half the variance in citizenship behaviours. Put another way, you're just as likely to do good because you're inclined that way as you are because you're smart.

Turning to counterproductive workplace behaviours, the authors predicted a relationship here with intelligence/GMA based on evidence from criminology showing that helping people see the consequences of their actions has an inhibitory effect on aberrant behaviour. In fact, the new analysis found no association between intelligence and aberrant behaviour. It's possible that this discrepancy with the criminology findings is because of differences in samples: there may be low-intelligence individuals who are more disposed to malfeasance, but they are underrepresented in workplaces because of adolescent anti-social issues, such as truancy or criminal behaviour. Meanwhile, personality, particularly the trait of agreeableness (but also conscientiousness and openness to experience), was strongly associated with performing fewer unhelpful behaviours at work.

An interesting footnote - when self-ratings of counterproductive behaviour were removed from the analysis (leaving only third-party ratings), the results showed a significant relationship between intelligence and (fewer) unhelpful workplace behaviours. This means that smarter people report engaging in just as much bad behaviour as the rest of us, but others, such as work supervisors, notice less of it.

In summary, while GMA is the undisputed king of predicting better task performance, it holds equal footing with personality in predicting helpful, altruistic work behaviour, and cedes the ground almost entirely to personality for bad behaviour. Looking at performance as a composite of these three areas, Gonzalez-Mulé's team conclude that when it comes to workplace selection, GMA still has a prominent role, but a much diminished one.


Gonzalez-Mulé E, Mount MK, & Oh IS (2014). A Meta-Analysis of the Relationship Between General Mental Ability and Nontask Performance. Journal of Applied Psychology. PMID: 25133304

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Tuesday, 16 September 2014

Forgive yourself for relaxing in front of the TV and the couch time might actually do you some good

There's a snobbishness about relaxation time. Tell someone your hobby is watching TV and chances are they'll look at you with derision. Mention meditation, reading or yoga and you're far more likely to attract nods of approval.

And yet there is substantial evidence that time watching TV or playing video games can have a powerful restorative effect - just what many of us need after a hard day. This benefit isn't found for everyone, and in a new paper Leonard Reinecke and his collaborators propose that a key reason has to do with guilt.

The researchers think that it is people who are mentally exhausted who are most likely to experience guilt after vegging out with a box set or video game. This is because, in their depleted state, these people see their behaviour as procrastination. This leads to the paradoxical situation in which it is the people who could most benefit from the restorative effects of lounge-based downtime who are the least likely to do so.

To test their ideas, Reinecke's team surveyed nearly 500 people in Germany and Switzerland. Participants were recruited via a gaming website and through psychology and communication classes. Specifically, the participants answered questions about the previous day, including how much work or study they'd done (answers ranged from half an hour to 16 hours), how depleted they felt after work or college, how much TV they'd watched or video games they'd played (this averaged around two hours), whether they viewed it as procrastination, whether they felt guilty, and how recharged they felt afterwards.

The key finding is that the more depleted people felt after work (agreeing with statements like "I felt like my willpower was gone"), the more they tended to view their TV or gaming as procrastination, the more guilt they felt, and the less likely they were to say they felt restored afterwards. The same pattern held for both TV and video games.

"Rather than diminishing the beneficial potential of entertaining media," the researchers said, "we believe that the results of this study may ultimately help to optimise the well-being outcomes of entertaining media use by extending our knowledge of the mechanisms furthering and hindering media-induced recovery and general well-being." If the researchers are correct, then if you cut yourself some slack when you watch TV after a hard day, you're more likely to feel rejuvenated afterwards.

Unfortunately, as the researchers admit in their paper, their methodological approach has several limitations. Above all, this wasn't an experimental study (with people allocated randomly to different interventions). This means the data can be interpreted in many different ways. One alternative reading of the results is that when TV or gaming fails to have a restorative effect, this leads people to view the time as wasteful procrastination, thus causing them to feel guilty.


Reinecke, L., Hartmann, T., & Eden, A. (2014). The Guilty Couch Potato: The Role of Ego Depletion in Reducing Recovery Through Media Use. Journal of Communication, 64(4), 569-589. DOI: 10.1111/jcom.12107

--further reading--
Do television and video games impact on the wellbeing of younger children?
Can relationships with fictional characters aid our self development?
A psychological problem with snacking in front of the telly.

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Monday, 15 September 2014

Pupils benefit from praise, but should teachers give it to them publicly or privately?

There's a best practice guide for teachers, produced by the Association of School Psychologists in the US, that states praise is best given to pupils in private. This advice is not based on experimental research - there hasn't been any - but on surveys of student preferences, and on the rationale that pupils could be embarrassed by receiving praise in public.

Now, in the first study of its kind, John Blaze and his colleagues have systematically compared the effect of public and private praise (also known as "loud" and "quiet" praise) on classroom behaviour. They found that praise had a dramatic beneficial effect on pupils' behaviour, and it didn't matter whether the praise was private or public.

The research was conducted in four public high-school classrooms in the rural south-eastern United States (public schools being the equivalent of state schools in the UK). The classes were mixed-sex, with a mixture of mostly Caucasian and African American pupils, and between 16 and 25 pupils in each class. The children were aged 14 to 16. Three of the teachers were teaching English; the other taught Transition to Algebra.

The teachers were given training in appropriate praise: it must be contingent on good behaviour; specific, making clear to the pupil why they are being praised; immediate; and effort-based. During the test sessions of teaching, the teachers carried a buzzer on their belt that prompted them, once every two minutes, to deliver praise to one of their pupils, either loudly so the whole class could hear (the loud condition) or discreetly, by a whisper in the ear or pat on the shoulder, so that hopefully only the child knew they were being praised (the quiet condition). For comparison, there were also baseline teaching sessions in which the teachers simply carried out their teaching in their usual style.

Trained observers, stationed in the classrooms for 20-minute sessions, monitored the teachers' praise-giving and the behaviour of the pupils across the different conditions. They found that frequent praise increased pupils' on-task behaviours, such as reading or listening to the teacher, by 31 per cent compared with baseline, and this improvement didn't vary according to whether the praise was private or public. Frequent praise of either kind also reduced naughty behaviours by nearly 20 per cent.

Blaze and his team said that the debate over praise will likely continue, but they stated their results are clear: "both loud and quiet forms of praise are effective tools that can have dramatic effects at the secondary level." A weakness of the study is that the researchers didn't monitor the teachers' use of reprimands, which probably declined as they spent more time delivering praise.


Blaze, J. T., Olmi, D. J., Mercer, S. H., Dufrene, B. A., & Tingstrom, D. H. (2014). Loud versus quiet praise: A direct behavioral comparison in secondary classrooms. Journal of School Psychology, 52(4), 349-360. PMID: 25107408

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Saturday, 13 September 2014

Link feast

Our pick of the best psychology and neuroscience links from the past week:

“Cyranoids”: Stanley Milgram’s Creepiest Experiment
Milgram is most famous for his obedience experiments, but Neuroskeptic reports on new research into another of Milgram's ideas - that our speech can be fed to us by someone else (so we become a Cyranoid) without anyone realising anything is amiss.

Rising star of business psychology, Professor Adam Grant, has launched a new, free newsletter called "Granted"
The author of NYT Bestseller Give and Take promises to send you videos and articles about work and psychology.

America's New Bedlam
This disturbing radio documentary from BBC World Service investigates the treatment of the more than one million US prisoners who are mentally ill.

A Social Visit with Hallucinated Voices
Over at the PLOS Neuroscience Community blog, Vaughan Bell explores the social side of hearing voices - the fact that most voice-hearers have relationships with their voices, and perceive their voices as having identities. This is a hugely under-researched area, he says.

We Are Entering the Age of Alzheimer's
"Alzheimer’s ... is more than a disease of the brain. It is a pandemic of selfhood," writes Kent Russell in this moving portrait of a man coping with his father's dementia.

How to Be Alone: An Antidote to One of the Central Anxieties and Greatest Paradoxes of Our Time
At Brain Pickings, Maria Popova shares highlights from How to Be Alone by Sara Maitland.

Hundreds Report Waking up During Surgery
NHS Choices takes a calm look at the study that led to some alarming headlines.

The Truth About Truthiness 
Megan McArdle at Bloomberg View takes issue with the idea (based largely on this study and disseminated by this Slate article) that low-effort thought gives rise to conservative ideology.

A to Z of the Human Condition: N is for Natural Curiosity
Over at the Wellcome Collection blog, Mark Rapoza contemplates our unending curiosity about nature (alongside readers' nature photos).

Post compiled by Christian Jarrett (@psych_writer) for the BPS Research Digest.