By guest blogger Jesse Singal
For a long time, some psychologists have understood that their field has an issue with WEIRDness. That is, psychology experiments disproportionately involve participants who are Western and Educated, and who hail from Industrialised, Rich Democracies, which means many findings may not generalise to other populations, such as, say, rural Samoan villagers.
In a new paper in PNAS, a team of researchers led by Mostafa Salari Rad decided to zoom in on a leading psychology journal to better understand the field’s WEIRD problem, evaluate whether things are improving, and come up with some possible changes in practice that could help spur things along.
For their paper, nicely titled, “Toward a psychology of Homo sapiens: Making psychological science more representative of the human population,” Rad and his colleagues pulled two samples of articles published in Psychological Science: all articles published in 2014, and the last three issues from 2017. Unfortunately for the field of psychology, they found little evidence to suggest that Psychological Science, published by the US-based Association for Psychological Science, has addressed the WEIRD problem.
Looking at the participant groups in the subset of the 2014 articles in which authors included demographic information, “57.76% were drawn from the US, 71.25% were drawn from English-speaking countries (including the US and UK), and 94.15% … sampled Western countries (including English-speaking countries, Europe, and Israel).” The 2017 numbers weren’t much better.
So there’s clearly a problem. But, Rad’s team added, “[p]erhaps the most disturbing aspect of our analysis was the lack of information given about WEIRDness of samples, and the lack of consideration given to issues of cultural diversity in bounding the conclusions”. That is, the articles they examined all too often omitted information that could help other researchers note WEIRDness when it occurs, and all too often explicitly over-extrapolated findings drawn from WEIRD samples. Summing up these problems in the 2014 sample, Rad and his colleagues said: “Over 72% of abstracts contained no information about the population sampled, 83% of studies did not report analysis of any effects of the diversity of their sample (e.g., gender effects), over 85% of studies neglected to discuss the possible effects of culture and context on their findings, and 84% failed to simply recommend studying the phenomena concerned in other cultures, implying that the results indicated something generalizable to humans outside specific cultural contexts.”
The authors don’t just grumble about the problem – they offer some concrete potential fixes based on their findings:
“Required Reporting of Sample Characteristics” — It’s already the norm to report the gender breakdowns of experimental samples; Rad and his colleagues think that authors should be “required to report [other characteristics including] age, SES, ethnicity, religion, and nationality,” when it is practical and realistic to do so.
“Explicitly Tie Findings to Populations” — Less “We discovered X about people”, and more “We discovered X about a small group of undergraduates with the following demographic characteristics in New Haven, Connecticut.”
“Justify the Sampled Population” — Authors should have to explain why they chose the population they chose – and sometimes, as Rad et al note, yes, the answer will be convenience. That’s fine, within reason: The problem isn’t that college students are studied sometimes, it’s that they’re studied far too often.
“Discuss Generalisability of the Finding” – Similar to the point about populations above, the idea is simply that authors should explicitly discuss whether they expect a given finding will generalise beyond the experimented-upon population, and why.
“Analytical Investigation of Existing Diversity” – Even WEIRD samples often have some degree of diversity to them along certain dimensions, so here Rad and his colleagues are suggesting that authors check for the presence of diversity-related moderators, both gender (which is usually already reported) and other characteristics such as race (which often are not). In other words, even if your experimental sample is mostly WEIRD, it could be informative to check whether, for example, the small handful of black participants produced different data than the rest of the group.
Recommendations for editors and reviewers
“Non-WEIRD = Novel and Important” – “Journal editors should instruct reviewers to treat non-WEIRDness as a marker of the interest and importance of a paper.”
“Diversity Badges” – Some journals already award “badges” when authors pre-register or engage in other open-science best practices. Badges for research centred on under-studied populations could be a nice little incentive-nudge.
“Diversity Targets” – It would be reasonable, argue the authors, to have at least 50 per cent of published papers analyse non-WEIRD populations. Ideally, the target would be higher, but given that, as the numbers above show, the situation at the moment is pretty dire, 50 per cent would be a major improvement.
The above suggestions provide a solid jumping-off point for solving the WEIRD problem: any one of them could be debated, discussed, and potentially modified or implemented. The next step, then, will be to see whether journals – the most important deciders when it comes to scientific standards – will take up the mantle.
At the risk of getting overly meta-psychological – discussing the psychological science of how psychological science is conducted – a great deal of human behaviour can be boiled down to the path of least resistance and to incentives. Often it’s not a deep-seated bias or a lack of concern about other groups that causes researchers to overlook non-WEIRD samples (though I’m sure both are sometimes factors), but rather it’s because college students are right there. As in, literally in the same buildings as the ones where most psych researchers work. You can just put up some flyers, and boom, you have a group you can experiment on! It’s all too tempting. And it’ll take some incentive-shifting – shifts to editors’ behaviour, or the possibility of earning badges, or whatever else – to get researchers out of their WEIRD rut.
Post written by Jesse Singal (@JesseSingal) for the BPS Research Digest. Jesse is a contributing writer at BPS Research Digest and New York Magazine. He is working on a book about why shoddy behavioral-science claims sometimes go viral for Farrar, Straus and Giroux.