Category: Methods

Researchers Tried To Explore Why “Stereotype Threat” Harms Performance, But Found It Didn’t Harm Performance At All

Is stereotype threat too context-dependent to matter all that much?

By Jesse Singal

Stereotype threat is a very evocative, disturbing idea: Imagine if simply being reminded that you are a member of a disadvantaged group, and that stereotypes hold that members of your group are bad at certain tasks, led to a self-fulfilling prophecy in which you performed worse on such tasks than you would otherwise.

That’s been the claim of stereotype threat researchers since the concept was first introduced in the mid-1990s, and it’s spread far and wide. But as seems to be the case with so many strong psychological claims of late, in recent years the picture has become murkier. “A recent review suggested that stereotype threat has a robust but small-to-medium sized effect on performance,” wrote Alex Fradera here at the BPS Research Digest in 2017, “but a meta-analysis suggests that publication bias may be a problem in this literature, inflating the apparent size of the effect.” Adding to the confusion are results that run exactly opposite to the theory’s predictions, like the one Fradera was reporting on: in that study, female chess players performed better, not worse, against male opponents.
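
That publication-bias worry is easy to appreciate in miniature. The simulation below is purely illustrative (it is not the method of the meta-analysis Fradera cited): it generates many small studies of a modest true effect and shows how averaging only the “significant” ones inflates the apparent effect size:

```python
import random, statistics

random.seed(1)
TRUE_D, N = 0.2, 25  # modest true effect, small per-study sample

def one_study():
    """Simulate one small study; report its observed effect and whether
    it clears the conventional significance bar."""
    sample = [random.gauss(TRUE_D, 1) for _ in range(N)]
    mean, sd = statistics.mean(sample), statistics.stdev(sample)
    t = mean / (sd / N ** 0.5)       # one-sample t statistic
    return mean, abs(t) > 2.064      # two-tailed p < .05 cutoff for df = 24

results = [one_study() for _ in range(2000)]
print("all studies:      d =", round(statistics.mean(m for m, _ in results), 2))
print("significant only: d =", round(statistics.mean(m for m, sig in results if sig), 2))
```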

Now, another study is poised to complicate things yet further. In a paper to be published in the European Journal of Social Psychology, and available as a preprint, a team led by Charlotte Pennington of UWE Bristol recruited female participants to test two mechanisms (reduced effort and working memory disruption) that have been offered to explain the supposed adverse performance effects of gender-related stereotype threat. They also compared different ways of inducing stereotype threat. Interesting questions, you might think, but in all cases the researchers came up empty.

Continue reading “Researchers Tried To Explore Why “Stereotype Threat” Harms Performance, But Found It Didn’t Harm Performance At All”

Psychology research is still fixated on a tiny fraction of humans – here’s how to fix that

Nearly 95 per cent of participant samples in a leading psychology journal were from Western countries

By guest blogger Jesse Singal

For a long time, some psychologists have understood that their field has an issue with WEIRDness. That is, psychology experiments disproportionately involve participants who are Western, Educated, and hail from Industrialised, Rich Democracies, which means many findings may not generalise to other populations, such as, say, rural Samoan villagers.

In a new paper in PNAS, a team of researchers led by Mostafa Salari Rad decided to zoom in on a leading psychology journal to better understand the field’s WEIRD problem, evaluate whether things are improving, and come up with some possible changes in practice that could help spur things along.

Continue reading “Psychology research is still fixated on a tiny fraction of humans – here’s how to fix that”

Psychology’s favourite tool for measuring implicit bias is still mired in controversy

A new review aims to move on from past controversies surrounding the implicit association test, but experts can’t even agree on what they’re arguing about

By guest blogger Jesse Singal

It has been a long and bumpy road for the implicit association test (IAT), the reaction-time-based psychological instrument that its co-creators, Mahzarin Banaji and Anthony Greenwald (among others in their orbit), claimed measures test-takers’ levels of unconscious social bias and their propensity to act in a biased and discriminatory manner, whether via racism, sexism, ageism, or some other prejudice, depending on the context. The test’s advocates presented this as a revelatory development, not least because the IAT supposedly captures aspects of an individual’s bias that the individual is not even consciously aware of.

As I explained in a lengthy feature published on New York Magazine’s website last year, many doubts have emerged about these claims. They range from what the IAT is really measuring (can a reaction-time difference measured in milliseconds really be considered, on its face, evidence of real-world-relevant bias?), to the algorithms used to generate scores, to (perhaps most importantly, given that the IAT has become a mainstay of a wide variety of diversity training and educational programmes) whether the test really does predict real-world behaviour.
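
For readers wondering what those scoring algorithms involve: the conventional approach converts the latency gap between the test’s “compatible” and “incompatible” pairing blocks into a standardised D-score. Below is a minimal sketch of that idea, with hypothetical reaction times; the published algorithm also applies trial exclusions and error penalties that are omitted here:

```python
import statistics

def d_score(compatible_ms, incompatible_ms):
    """Simplified IAT D-score: mean latency difference between the
    incompatible and compatible blocks, scaled by the standard deviation
    of all trials pooled. Sketch only: the published algorithm also
    drops extreme latencies and penalises error trials."""
    diff = statistics.mean(incompatible_ms) - statistics.mean(compatible_ms)
    sd_all = statistics.stdev(compatible_ms + incompatible_ms)
    return diff / sd_all

# Hypothetical reaction times in milliseconds
compatible = [612, 587, 654, 598, 631, 605]
incompatible = [701, 688, 745, 672, 719, 694]
print(round(d_score(compatible, incompatible), 2))  # positive = slower on the 'incompatible' pairing
```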

On that last key point, there is surprising agreement. In 2015 Greenwald, Banaji, and their coauthor Brian Nosek stated that the psychometric issues associated with various IATs “render them problematic to use to classify persons as likely to engage in discrimination”. Indeed, these days IAT evangelist and critic alike mostly agree that the test is too noisy to usefully and accurately gauge people’s likelihood of engaging in discrimination – a conclusion supported by a series of meta-analyses showing unimpressive correlations between IAT scores and behavioural outcomes (mostly measured in labs). According to an important meta-analysis, available as a preprint and co-authored by Nosek, race IAT scores appear to account for only about 1 per cent of the variance in measured behavioural outcomes. (That meta-analysis also looked at IAT-based interventions, finding that while implicit bias as measured by the IAT “is malleable… changing implicit bias does not necessarily lead to changes in explicit bias or behavior.”)
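
That 1 per cent figure is just the arithmetic of squaring a correlation: variance explained equals r squared, so a correlation in the region of r = 0.10 accounts for about 1 per cent of the variance in an outcome:

```python
# Illustrative arithmetic only: r is chosen to match the ballpark figure
# reported for correlations between race IAT scores and behavioural outcomes.
r = 0.10
print(f"variance explained: {r ** 2:.0%}")  # -> 1%
```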

So where does this leave the IAT? In a new paper in Current Directions in Psychological Science called “The IAT Is Dead, Long Live The IAT: Context-Sensitive Measures of Implicit Attitudes Are Indispensable to Social and Political Psychology”, John Jost, a social psychologist at New York University and a leading IAT researcher, seeks to draw a clear line between the “dead” diagnostic version of the IAT and what he sees as the test’s real-world version – a sensitive, context-specific measure that shouldn’t be used for diagnostic purposes, but which has potential in various research and educational contexts.

Does this represent a constructive manifesto for the future of this controversial psychological tool? Unfortunately, I don’t think it does – rather, it contains many confusions, false claims, and strawman arguments (as well as a misrepresentation of my own work). Perhaps most frustratingly, Jost joins a lengthening line of IAT researchers who, when faced with the fact that the IAT appears to have been overhyped for a long time by its creators, its most enthusiastic proponents, and journalists, respond with an endless variety of counterclaims that don’t quite address the core issue, or that pretend those initial claims were never made in the first place.

Continue reading “Psychology’s favourite tool for measuring implicit bias is still mired in controversy”

It’s getting increasingly difficult for psychology’s replication-crisis sceptics to explain away failed replications

The Many Labs 2 project managed to successfully replicate only half of 28 previously published significant effects

By guest blogger Jesse Singal

Replicating a study isn’t easy. Just knowing how the original was conducted isn’t enough. Just having access to a sample of experimental participants isn’t enough. As psychological researchers have known for a long time, all sorts of subtle cues can affect how individuals respond in experimental settings. A failure to replicate, then, doesn’t always mean that the effect being studied isn’t there – it can simply mean the new study was conducted a bit differently.

Many Labs 2, a project of the Center for Open Science in Charlottesville, Virginia, embarked on one of the most ambitious replication efforts in psychology yet – and did so in a way designed to address these sorts of critiques, which have in some cases hampered past efforts. The resultant paper, a preprint of which can be viewed here, is lead-authored by Richard A. Klein of the Université Grenoble Alpes. Klein and his very, very large team – it takes almost four pages of the preprint just to list all the contributors – “conducted preregistered replications of 28 classic and contemporary published findings with protocols that were peer-reviewed in advance to examine variation in effect magnitudes across sample and setting.”
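
To get a feel for what “variation in effect magnitudes across sample and setting” means in practice, here is a toy version of the kind of heterogeneity check replication projects rely on. The per-lab effect sizes and variances below are invented for illustration; this is a sketch of the general technique, not the Many Labs 2 analysis:

```python
import numpy as np

# Invented per-lab standardised effect sizes and sampling variances
effects = np.array([0.42, 0.10, 0.35, -0.05, 0.28, 0.15])
variances = np.array([0.02, 0.03, 0.02, 0.04, 0.03, 0.02])

weights = 1 / variances
pooled = (weights * effects).sum() / weights.sum()  # inverse-variance pooled estimate
Q = (weights * (effects - pooled) ** 2).sum()       # Cochran's Q heterogeneity statistic
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100                   # share of variability beyond sampling error

print(f"pooled d = {pooled:.2f}; Q = {Q:.1f} on {df} df; I2 = {I2:.0f}%")
```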

Continue reading “It’s getting increasingly difficult for psychology’s replication-crisis sceptics to explain away failed replications”

Many undergrad psych textbooks do a poor job of describing science and exploring psychology’s place in it

By guest blogger Tomasz Witkowski

Psychology as a scientific field enjoys a tremendous level of popularity throughout society, a fascination that could even be described as religious. This is likely the reason why it is one of the most popular undergraduate majors in European and American universities. At the same time, it is not uncommon to encounter the firm opinion that psychology in no way qualifies as a science. Such extremely critical opinions are often borrowed from authorities – after all, it was none other than the renowned physicist and Nobel laureate Richard Feynman who, in his famous 1974 commencement address, compared the social sciences, and psychology in particular, to a cargo cult. Scepticism toward psychological science can also arise from encounters with the commonplace simplifications and myths spread by pop-psychology, or from a failure to understand what science is and how it resolves its dilemmas.

According to William O’Donohue and Brendan Willis of the University of Nevada, these issues are further compounded by undergraduate psychology textbooks. Writing recently in Archives of Scientific Psychology, they argue that “[a] lack of clarity and accuracy in [psych textbooks] in describing what science is and psychology’s relationship to science are at the heart of these issues.” The authors based their conclusions on a review of 30 US and UK undergraduate psychology textbooks, most updated in the last few years (general texts plus others covering abnormal, social and cognitive psych), checking each for its treatment of 18 key contemporary issues in the philosophy of science.

Continue reading “Many undergrad psych textbooks do a poor job of describing science and exploring psychology’s place in it”

The in-vogue psychological construct “Grit” is an example of redundant labelling in personality psychology, claims new paper

By Christian Jarrett

Part of the strength of the widely endorsed Big Five model of personality is its efficient explanatory power – in the traits of Extraversion, Neuroticism, Openness, Agreeableness and Conscientiousness, it removes the redundancy of more fine-grained approaches and manages to capture the most meaningful variance in our habits of thought and behaviour.

So what to make then of the popular proposal that what marks out high achievers from the rest is that they rank highly on another trait labelled as “Grit”?

Is the recognition of Grit, and the development of a scale to measure it, a breakthrough in our understanding of the psychology of success? Or is it a reinvention of the wheel, a redundant addition to the taxonomy of personality psychology?

In 2016, the US-based authors of a meta-analysis on the topic concluded “that Grit as currently measured is simply a repackaging of Conscientiousness”. Now a different research team, based in Germany and Switzerland, has taken a more intricate look at the links between Grit and Conscientiousness, this time including a focus on their respective facets (or sub-traits). Writing in the European Journal of Personality, Fabian Schmidt and his colleagues conclude that “Grit represents yet another contribution to the common problem of redundant labelling of constructs in personality psychology.”

Continue reading “The in-vogue psychological construct “Grit” is an example of redundant labelling in personality psychology, claims new paper”

Philosophise this – psychology research by philosophers is robust and replicates better than other areas of psychology

Experimental philosophy or X-Phi takes the tools of contemporary psychology and applies them to unravelling how people think about major topics from Western philosophy

By guest blogger Dan Jones

Amid all the talk of a “replication crisis” in psychology, here’s a rare good news story – a new project has found that a sub-field of the discipline, known as “experimental philosophy” or X-phi, is producing results that are impressively robust.

The current crisis in psychology was largely precipitated by a mass replication attempt published by the Open Science Collaboration (OSC) project in 2015. Of 100 previously published significant findings, only 39 per cent replicated unambiguously, rising to 47 per cent on more relaxed criteria.

Now a paper in Review of Philosophy and Psychology has trained the replicability lens on the burgeoning field of experimental philosophy. Born roughly 15 years ago, X-Phi takes the tools of contemporary psychology and applies them to unravelling how people think about many of the major topics of Western philosophy, from the metaphysics of free will and the nature of the self, to morality and the problem of consciousness. 

Continue reading “Philosophise this – psychology research by philosophers is robust and replicates better than other areas of psychology”

After analysing the field’s leading journal, a psychologist asks: Is social psychology still the science of behaviour?

By Alex Fradera

Part of my role at the Digest involves sifting through journals looking for research worth covering, and I’ve sensed that modern social psychology generates plenty of studies based on questionnaire data, but far fewer that investigate the kind of tangible behavioural outcomes illuminated by the field’s classics, from Asch’s conformity experiments to Milgram’s research on obedience to authority. A new paper in Social Psychological Bulletin examines this apparent change systematically. Based on his findings, Dariusz Doliński at the SWPS University of Social Sciences and Humanities in Poland asks the bleak question: is psychology still a science of behaviour?

Continue reading “After analysing the field’s leading journal, a psychologist asks: Is social psychology still the science of behaviour?”

Psychology’s favourite moral thought experiment doesn’t predict real-world behaviour

By Christian Jarrett

Would you wilfully hurt or kill one person so as to save multiple others? That’s the dilemma at the heart of moral psychology’s favourite thought experiment and its derivatives. In the classic case, you must decide whether or not to pull a lever to divert a runaway mining trolley so that it avoids killing five people and instead kills a single individual on another line. A popular theory in the field states that, to many of us, so abhorrent is the notion of deliberately harming someone that our “deontological” instincts deter us from pulling the lever; on the other hand, the more we intellectualise the problem with cool detachment, the more likely we will make a utilitarian or consequentialist judgment and divert the trolley.

Armed with thought experiments of this kind, psychologists have examined all manner of individual and circumstantial factors that influence the likelihood of people making deontological vs. utilitarian moral decisions. However, there’s a fatal (excuse the pun) problem. A striking new paper in Psychological Science finds that our answers to the thought experiments don’t match up with our real-world moral decisions.

Continue reading “Psychology’s favourite moral thought experiment doesn’t predict real-world behaviour”

Are tweets a goldmine for psychologists or just a lot of noise? Researchers clash over the meaning of social media data

By guest blogger Jon Brock

Johannes Eichstaedt was sitting in a coffee shop by Lake Atitlan in Guatemala when he received a Slack message about a tweet about a preprint. In 2015, the University of Pennsylvania psychologist and his colleagues had published a headline-grabbing article linking heart disease to the language used on Twitter. They’d found that tweets emanating from US counties with high rates of heart disease mortality tended to exhibit high levels of negative emotions such as anger, anxiety, disengagement, aggression, and hate. The study, published in Psychological Science, has proven influential, already accruing over 170 citations. But three years later, a preprint by Nick Brown and James Coyne of the University of Groningen claimed to identify “numerous conceptual and methodological limitations” in the work. Within the month, Eichstaedt and his colleagues issued a riposte, publishing their own preprint that, they argued, provided further evidence for their original conclusions.
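
The basic county-level logic of the original study (score each county’s tweets for negative-emotion language, then correlate those scores with heart-disease mortality) can be sketched in a few lines. The data below are invented, and this is not the authors’ actual pipeline:

```python
import pandas as pd

# Invented county-level data: a negative-emotion language score derived
# from each county's tweets, alongside heart-disease mortality rates.
df = pd.DataFrame({
    "county": ["A", "B", "C", "D", "E", "F"],
    "neg_emotion": [0.12, 0.31, 0.22, 0.40, 0.18, 0.27],          # share of anger/anxiety words
    "chd_mortality": [152.0, 201.5, 170.3, 215.8, 160.1, 188.9],  # deaths per 100,000
})
print(df["neg_emotion"].corr(df["chd_mortality"]))  # Pearson r across counties
```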

As recent revelations surrounding Facebook and Cambridge Analytica have highlighted, corporations and political organisations attach a high value to social media data. But, Eichstaedt argues, that same data also offers rich insights into psychological health and well-being. With appropriate ethical oversight, social media analytics could promote population health and perhaps even save lives. That at least is its promise. But with big data come new challenges – as Eichstaedt’s “debate” with Brown and Coyne illustrates.

Continue reading “Are tweets a goldmine for psychologists or just a lot of noise? Researchers clash over the meaning of social media data”