Monday, 2 March 2015

"I did it for the team" – How outsiders cheat in pursuit of popularity

If you would do anything to stay popular with your team-mates, what might follow? Bending the rules? Cheating? Sabotage of rivals? An international team led by Stefan Thau of INSEAD investigated “pro-group” unethical behaviours, and their findings suggest that the people most likely to connive to boost the team are those at its margins, fearful of exclusion.

The experiment gave participants an easy opportunity to cheat at an anagram task: they reported their own scores, with no apparent way for these to be checked. (Conveniently, the experimenters did in fact have an easy way to verify whether success had been over-reported: the ten anagrams were entirely unsolvable.)

In the key condition, participants were told that if they scored better than their “Red Team” competitor sitting in another room, then the other members of their own (Blue) team would all get a cash reward. The Blue Team had met and chatted at the start of the experiment, and just before the anagram task, they voted provisionally on which member should be excluded from a final group task, with a final vote to follow once the anagram contest results were made public.

The provisional vote was rigged so half of the participants had the impression that they were likely to be excluded. These at-risk individuals reported solving more of the impossible anagrams than their safe peers. They broke the rules to do a good turn for their group, in the hope that it wouldn’t go unrewarded. And the cheating was even higher for those participants who, in a questionnaire, described having a high “need to belong”.

In another condition, anagram victory generated a personal reward, not one shared with team-mates. Neither risk of exclusion nor the need to belong had any effect on cheating in this condition. This suggests that being under threat doesn’t simply increase unethical behaviour but encourages targeted actions aimed at raising standing.

Thau’s team showed that the effect generalised to other behaviours using a survey of 228 working adults. People who felt excluded – endorsing heartbreaking statements such as “I feel like it is likely that my workgroup members will not invite me for lunch” – were more likely to withhold information from non-team members or discredit another workgroup, all to make their own group look better.

Supporting your in-group in this way can only hurt the organisation in the longer term, and can have profoundly damaging effects – the article gives the example of a detective who framed people to raise the arrest rate for his colleagues. There is no more chilling excuse for the inexcusable than “but I did it all for you!”


Thau, S., Derfler-Rozin, R., Pitesa, M., Mitchell, M., & Pillutla, M. (2015). Unethical for the sake of the group: Risk of social exclusion and pro-group unethical behavior. Journal of Applied Psychology, 100 (1), 98-113. DOI: 10.1037/a0036708

--further reading--
Are children from collectivist cultures more likely to say it's okay to lie for the group?

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Saturday, 28 February 2015

Link feast

Our pick of the best psychology and neuroscience links from the past week or so:

The Science of Why No One Agrees on the Colour of This Dress
The internet is abuzz with talk of the dress that some people see as white and gold, others as blue and black. Adam Rogers at WIRED provides an explanation.

Hard Feelings: Science’s Struggle to Define Emotions
"While it's possible for researchers to study facial expressions, brain patterns, behavior, and more," writes Julie Beck at The Atlantic, "each of these is only part of a more elusive whole".

Words and Sorcery
Simon Oxenham and Jon Sutton at The Psychologist consider the causes and consequences of bad writing in psychology.

Five Things Alice in Wonderland Reveals About the Brain
"All of us can learn something about ourselves from Alice in Wonderland – if only we look in the right way," says David Robson at BBC Future.

Do Blind People Really Experience Complete Darkness?
No. "For me," says Damon Rose at BBC News, who is completely blind, "dark has come to signify quiet, and because my built-in fireworks never go away I describe what I've got as a kind of visual tinnitus."

Why Are Men Committing Suicide?
"We are living in an epidemic of male suicide," writes Marc Judge at Acculturated.

The Elastic Brain
"A child’s brain can master anything from language to music," argues Rebecca Boyle at Aeon. "Can neuroscience extend that genius across the lifespan?"

Confessions Of A Disordered Eater
"After a lifetime struggling with compulsive, secretive, and restrictive eating, I’m still figuring out how to have a healthy relationship with food," says Anita Badejo at Buzzfeed.

Brain-controlled Drone Shown Off by Tekever in Lisbon
"... one aviation expert told the BBC he thought the industry would be unlikely to adopt such technology due to a perception of being potentially unsafe."

Why Reading and Writing on Paper Can be Better For Your Brain
"I can’t imagine teaching my son to read in a house without any physical books, pens or paper. But I can’t imagine denying him the limitless words and worlds a screen can bring to him either," says Tom Chatfield at The Guardian.

Post compiled by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Friday, 27 February 2015

What do clients think of psychotherapy that doesn't work?

Psychotherapy works for most people, but there's a sizeable group for whom it's ineffective, or worse still, harmful. A new study claims to be the first to systematically investigate what the experience of therapy is like for clients who show no improvement after therapy, or who actually deteriorate.

Andrzej Werbart and his colleagues conducted in-depth interviews with 20 non-improved clients (out of a larger client group of 134) who were enrolled in individual or group psychoanalytic psychotherapy at the former Institute of Psychotherapy in Stockholm. Seventeen of these clients showed no symptom improvement after an average of 22 months of therapy, and three showed deterioration. The clients had an average age of 22 at the start of treatment, and 17 of them were female. Their problems included mood disorders, relationship problems and self-reported personality disorders. The interviews took place at the end of the course of therapy, and then again one and a half years later.

The researchers transcribed the interviews and identified a key central theme: "spinning one's wheels" as exemplified by this client quote:
"When I think back on the therapy, I get the feeling that I often sat and talked; sometimes something important came up, but often it felt like it was pretty much just spinning my wheels."
What other messages were distilled from the interviews? The clients had largely positive views of their therapists, but they saw them as distant and not fully committed. A recurring issue for the clients was feelings of uncertainty over the goals of therapy and the methods to achieve those goals. Many had expected a more challenging, confrontational, structured style of therapy.

The researchers said the 16 therapists (10 female; average age 53) who'd worked with these non-improved clients, many of them highly experienced, may have been guilty of sticking too rigidly to traditional psychoanalytic technique:
"The patients' descriptions of therapists' silence and passivity together with a focus on childhood experiences and deep roots of presented problems resemble a caricature of psychoanalytic psychotherapy, but unfortunately the picture may be accurate," they said.   
The researchers urged therapists to address their clients' treatment preferences and expectations – such reflection could have led to the realisation that a more "directive, task and action-oriented" form of therapy may have been more appropriate for these clients (conversely, other research has found that dissatisfied CBT clients tend to say they would prefer an approach with more emphasis on reflection and understanding). Clients need to be involved in setting the goals of therapy and educated about what the process will entail, the researchers added. But also, "the therapist needs to learn to be the unique patient's therapist."

Previous research has already established that therapists are poor at identifying when therapy is not working. Werbart and his team said that "formalised feedback" based on client surveys during therapy "can be a less threatening way to start discussions on negative and hindering therapy experiences."

On a positive note, between the end of therapy and later follow-up, more than half the non-improved clients showed beneficial decreases in their symptoms. Such ongoing change was not observed for clients who showed more immediate improvements after therapy, suggesting these changes were not a mere consequence of maturing. "Rather, the conclusion is that non improvement at [therapy] termination does not imply lasting symptoms," the researchers said.


Werbart, A., von Below, C., Brun, J., & Gunnarsdottir, H. (2014). “Spinning one's wheels”: Nonimproved patients view their psychotherapy. Psychotherapy Research, 1-19. DOI: 10.1080/10503307.2014.989291

--further reading--
When therapy causes harm
The mistakes that lead therapists to infer psychotherapy was effective, when it wasn't
What clients think CBT will be like and how it really is

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Thursday, 26 February 2015

Some student-professor pairings lead to "unusually effective teaching" (and it's possible to predict which ones)

Video trailers can be used to predict which lecturers are the best teachers, and which students they are especially suited to.
In the near future, students could be presented with a series of video trailers of different professors at their university. Based on their ratings of these videos, the students will be paired with the professors who provide the best fit. The outcome will be superior learning, and greater student satisfaction.

That's the promise of a new study that asked 145 psychology undergrads to rate 6-minute teaching videos of 10 different professors, and then to rate their experience of an actual 40-minute live lecture delivered by those same professors several weeks later. The students were also quizzed on the content of those lectures to see how well they'd learned.

Jennifer Gross and her colleagues explain that student evaluations of professors are made up of three key factors: each professor's actual ability (this component tends to correlate across ratings given by different students); each student's rating bias (this component correlates across the ratings given by the same student to different professors – for example, some students are more lenient in their ratings than others); and relationship effects.

This last component is one of the key points of interest in the new study. It pertains to the specific fit, or not, between a professor and a student. When there is a good fit, this leads to unusually high ratings by that student for the professor, above and beyond what you'd expect given the student's usual rating bias, and given the level of ratings the professor usually attracts.

To zoom in on these relationship effects simply requires factoring out each student's rating bias, and each professor's average rating across students.
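This factoring-out step can be sketched numerically. The following minimal illustration (with entirely hypothetical ratings, not data from the study) treats the relationship effect as what remains of each rating after subtracting the student's mean rating and the professor's mean rating, then adding back the grand mean:

```python
# Hypothetical student-by-professor ratings matrix (rows: students, cols: professors).
# Illustrative numbers only - not data from the study.
ratings = [
    [7.0, 5.0, 6.0],
    [9.0, 6.0, 6.0],
    [8.0, 4.0, 6.0],
]

n_students = len(ratings)
n_profs = len(ratings[0])

# Each student's rating bias (row mean) and each professor's average rating (column mean).
grand_mean = sum(sum(row) for row in ratings) / (n_students * n_profs)
student_means = [sum(row) / n_profs for row in ratings]
prof_means = [sum(ratings[i][j] for i in range(n_students)) / n_students
              for j in range(n_profs)]

# Relationship effect: the part of a rating not explained by the student's
# general leniency or the professor's general popularity.
relationship = [
    [ratings[i][j] - student_means[i] - prof_means[j] + grand_mean
     for j in range(n_profs)]
    for i in range(n_students)
]
```

In this sketch, a positive residual for a given student-professor pair would flag an unusually good fit for that pairing, over and above what either person's averages predict; each row and column of residuals sums to zero by construction.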

The exciting finding is that the researchers were able to use the students' ratings of, and their mood during, the 6-minute trailers to forecast how they later rated the actual lectures, including predicting which professors got the highest average ratings after the lectures, and predicting relationship effects.

This result is important, the researchers explained, because the students' memory for material taught in a given lecture was independently related both to that lecturer's average ratings (some lecturers are better than others) and to the specific relationship effects (i.e. whether the student in question had given that lecturer unusually high ratings – the sign of a good professor/student fit).

"These findings support the possibility of developing online systems that would provide personalised recommendations that specific students take courses from specific professors," the researchers said.

However, they acknowledged that their results need to be replicated, and they also outlined some limitations of the study. These included the fact that they'd carefully compiled the 6-minute trailers to showcase each professor's teaching style (a time-consuming endeavour); that the live evaluations involved just one lecture rather than an entire course; and that the professors' teaching skills were confounded with the topics they taught.

Gross, J., Lakey, B., Lucas, J., LaCross, R., Plotkowski, A. R., & Winegard, B. (2015). Forecasting the student-professor matches that result in unusually effective teaching. British Journal of Educational Psychology, 85 (1), 19-32. DOI: 10.1111/bjep.12049

--further reading--
Engaging lecturers can breed overconfidence
Is it time to rethink the way university lectures are delivered?

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Wednesday, 25 February 2015

The six forms of resistance shown by participants in Milgram's notorious "obedience studies"

When discussing Milgram's notorious experiments, in which participants were instructed to give increasingly dangerous electric shocks to another person, most commentators take a black or white approach.

Participants are categorised as obedient or defiant, and the headline result is taken as the surprising number of people – the majority – who obeyed by going all the way and administering the maximum 450-volt shock.

A new study takes a different stance by looking at the different acts of resistance shown by Milgram's participants, regardless of whether they ultimately completed the experiment. This isn't the first time researchers have explored defiance in the Milgram paradigm (for example, see this 2011 study, and last year's reinterpretation of the findings), but it's the most comprehensive analysis of resistance as revealed through the dialogue in Milgram's original studies.

Sociology doctoral researcher Matthew Hollander has obtained and transcribed audio recordings of 117 of Milgram's participants taken from different versions of the seminal 1960s research. He has carefully analysed the three-way conversational interactions between the experimenter, each participant playing the role of "teacher", and the "learner" (actually an actor) who was subjected to the shocks and cried out in pain and protest. From these interactions, Hollander has identified six different forms of resistance, three implicit and three explicit.

The three implicit forms of resistance were: silences and hesitations (e.g. after the experimenter has instructed the participant to continue with the process); imprecations (often in response to cries from the learner); and laughter. The claim about laughter is controversial because earlier commentators have interpreted laughter by Milgram's participants as a worrying sign of sadism. Hollander is interested in those specific instances when participant laughter followed commands from the experimenter – this laughter, he believes, was an act of resistance because it was intended to show the participant's ability to cope with the difficult situation.

The three explicit forms of resistance were: addressing the learner (e.g. asking him if he's happy to continue); prompting the experimenter (e.g. either querying whether it's necessary to continue, or telling him that the learner is in pain); and finally "stop tries", in which the participant stated he or she did not want to continue.

Comparing participants who ultimately obeyed all the way to the highest shock, and those who refused to complete the experiment, there are some revealing similarities and differences in the forms of resistance they used along the way.

Most participants – both those who completed the experiment and those who refused – used the implicit "wait and see" resistance strategies, which Hollander says were designed to delay the continuation of the experiment, presumably in the hope that the experimenter would halt proceedings. But only the participants who, at some stage, refused to complete the experiment used the explicit strategy of addressing the learner – effectively granting him the authority to dictate whether the process should continue. These defiant participants also used more "stop tries" – 98 per cent used at least one, compared with just 19 per cent of the participants who ultimately completed the experiment.

Hollander said his conversation-analytic approach promised to "open up new perspectives on an old experiment whose legacy lives on." What's more, he believes the same approach could usefully be applied to other settings. By improving our understanding of the interpersonal dynamics of authority and the resistance to authority, such research "could save lives and empower potential victims," he said.

Hollander, M. (2015). The repertoire of resistance: Non-compliance with directives in Milgram's ‘obedience’ experiments. British Journal of Social Psychology. DOI: 10.1111/bjso.12099

--further reading--
More on Stanley Milgram in the Research Digest archive.

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Tuesday, 24 February 2015

Recruiters think they can tell your personality from your resume. They can't

Recruiters are poor at inferring an applicant’s personality from their resume, but that doesn’t stop them from jumping to conclusions on the back of their flawed assumptions. That’s according to a new study that involved over a hundred professional recruiters evaluating pairs of resumes.

The US-based recruiters estimated applicant personality from the limited information in short two-page resumes. Their estimates were poorly correlated with the self-ratings made by the MBA students who’d written the resumes. But the recruiters appeared to rely heavily on these flawed estimates when drawing conclusions on hireability, as their personality estimates accounted for almost half of the variance in their decision-making. Meanwhile the students’ self-ratings – a more reliable source of information on true personality – were a poor predictor of whether the recruiters would favour them.

Another experiment involved 266 participants recruited online and asked to play the role of recruiter. This time, the set of resumes was broken down into its component parts, revealing that a range of elements can provoke personality judgments, from the look and feel of the resume (setting off recruiter inferences about conscientiousness), to mentions of voluntary activities (triggering assumptions of extraversion and agreeableness) and computer skills (interpreted as a sign of openness to experience).

The participants in this experiment were most likely to form conclusions about conscientiousness, extraversion and openness to experience, and, like the professional recruiters, the more they saw an element as saying something about personality, the more relevant they considered it for assessing hireability.

As a resume is often the first impression an applicant gives to a potential employer, it’s worth understanding the assumptions they make, argue Gary Burns and his colleagues, who conducted these experiments. The researchers suggest taking time to give a fair impression of yourself, and recommend some less obvious take-aways such as giving detailed information about your education, describing your extracurricular activities, and steering clear of unusual fonts.


Burns, G., Christiansen, N., Morris, M., Periard, D., & Coaster, J. (2014). Effects of applicant personality on resume evaluations. Journal of Business and Psychology, 29 (4), 573-591. DOI: 10.1007/s10869-014-9349-6

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Monday, 23 February 2015

The “Backfire Effect”: Correcting false beliefs about vaccines can be surprisingly counterproductive

Nearly half of the US population wrongly believes the flu vaccine can give you flu, but correcting this error has the opposite of the desired effect.
By guest blogger Simon Oxenham

According to a new study, 43 per cent of the US population wrongly believes that the flu vaccine can give you flu. In actual fact this is not the case – any adverse reaction, besides a temperature and aching muscles for a short time, is rare. It stands to reason that correcting this misconception would be a good move for public health, but the study by Brendan Nyhan and Jason Reifler published in Vaccine found that debunking this false belief had a seriously counterproductive effect.

The researchers looked at 822 US adults who were selected to reflect the general population in terms of their mix of age, gender, race and education. About a quarter of this sample were unduly concerned about the side effects of the flu vaccine. It was amongst these individuals that attempting to correct the myth that the flu vaccine gives you flu backfired. The researchers showed participants information from the Centers for Disease Control and Prevention (CDC), which was designed to debunk the myth that the flu vaccine can give you flu. This resulted in a fall in people's false beliefs but, among those concerned with vaccine side effects, it also resulted in a paradoxical decline in their intentions to actually get vaccinated, from 46 per cent to 28 per cent. The intervention had no effect on intentions to get vaccinated amongst people who didn't have high levels of concern about vaccine side effects in the first place.

Why is it that as false beliefs went down, so did intentions to vaccinate? The explanation suggested by the researchers is that the participants who had "high concerns about vaccine side effects brought other concerns to mind in an attempt to maintain their prior attitude when presented with corrective information". A psychological principle that might explain this behaviour is motivated reasoning: we are often open to persuasion when it comes to information that fits with our beliefs, while we are more critical of, or even outright reject, information that contradicts our world view.

This is not the first time that vaccine safety information has been found to backfire. Last year the same team of researchers conducted a randomised controlled trial comparing messages from the CDC aiming to promote the measles, mumps and rubella (MMR) vaccine. The researchers found that debunking myths about MMR and autism had a similarly counterproductive result – reducing some false beliefs but also ironically reducing intentions to vaccinate.

Taken together, the results suggest that in terms of directly improving vaccination rates, we may be better off doing nothing than using the current boilerplate CDC information on misconceptions about vaccines to debunk false beliefs. If this is the case then the ramifications for public health are huge, but before we can decide whether this conclusion is accurate we'll have to wait to see if the finding can be replicated elsewhere. History has taught us that when it comes to vaccines, acting on scant evidence can have catastrophic consequences.

The studies do have their limitations: both looked at intentions to vaccinate rather than actual vaccination rates, which may be different in practice. Furthermore, in both sets of experiments, only the official US CDC vaccine safety messages were used. It is possible that if the experiments were repeated with other wordings, perhaps those used by the NHS in the UK for example, we would see different results.

If the backfire effect is replicated in future studies, how are we to proceed? Research into the backfire effect can provide some tentative suggestions. To begin with, it is likely we should avoid restating myths wherever possible, and when we must restate them, we should try to precede the myth with a warning that misleading information is coming up. This can help prevent myths from growing in our minds through mere familiarity. When we debunk myths we should also try to offer an alternative explanation for false beliefs, to fill the gap left by misinformation. We should also try to keep our explanations brief, which can help counter the imbalance that often occurs between simple, memorable myths and the more complicated reality. What is clear from the recent findings regarding beliefs about vaccines, and from the recent outbreaks of vaccine-preventable diseases in the UK, the US and elsewhere, is that what we are currently doing to try to convince people to get vaccinated may no longer be working.


Nyhan, B., & Reifler, J. (2015). Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine, 33 (3), 459-464. DOI: 10.1016/j.vaccine.2014.11.017

Nyhan, B., Reifler, J., Richey, S., & Freed, G. L. (2014). Effective messages in vaccine promotion: A randomized trial. Pediatrics, 133 (4). DOI: 10.1542/peds.2013-2365d

Lewandowsky, S., Ecker, U., Seifert, C., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13 (3), 106-131. DOI: 10.1177/1529100612451018

--Post written by Simon Oxenham for the BPS Research Digest. Simon Oxenham covers the best and the worst of the world of psychology and neuroscience on his Neurobonkers blog at the Big Think. Follow @Neurobonkers on Twitter, Facebook, Google+, RSS or join the mailing list.

--further reading--
Strong reassurances about vaccines can backfire
Scary health messages can backfire
Apocalyptic climate change warnings can be counter-productive