If you’re planning to take off weight in the new year and it suddenly seems like food is everywhere – and is especially enticing – that’s probably your mind playing a particularly unhelpful trick on you. Thinking about food, even in terms of trying to avoid it, can actually make it more likely that you’ll notice food in your environment, especially if you’re already overweight or obese.
That’s according to a recent study in the International Journal of Obesity that compared how overweight and healthy weight people pay attention to food. Food cues – sights, smells, advertisements and social contexts like parties – are everywhere these days, so understanding why some people find it harder to ignore them could be key to designing weight loss programmes.
For a long time, some psychologists have understood that their field has an issue with WEIRDness. That is, psychology experiments disproportionately involve participants who are Western, Educated, and hail from Industrialised, Rich Democracies, which means many findings may not generalise to other populations, such as, say, rural Samoan villagers.
In a new paper in PNAS, a team of researchers led by Mostafa Salari Rad decided to zoom in on a leading psychology journal to better understand the field’s WEIRD problem, evaluate whether things are improving, and come up with some possible changes in practice that could help spur things along.
It has been a long and bumpy road for the implicit association test (IAT), the reaction-time-based psychological instrument that its co-creators, Mahzarin Banaji and Anthony Greenwald – among others in their orbit – claimed measures test-takers’ levels of unconscious social bias and their propensity to act in a biased and discriminatory manner, be that via racism, sexism, ageism, or some other category, depending on the context. The test’s advocates claimed this was a revelatory development, not least because the IAT supposedly measures aspects of an individual’s bias beyond what that individual is consciously aware of themselves.
As I explained in a lengthy feature published on New York Magazine’s website last year, many doubts have emerged about these claims, ranging from the question of what the IAT is really measuring (as in, can a reaction-time difference measured in milliseconds really be considered, on its face, evidence of real-world-relevant bias?) to the algorithms used to generate scores to, perhaps most importantly (given that the IAT has become a mainstay of a wide variety of diversity training and educational programmes), whether the test really does predict real-world behaviour.
On that last key point, there is surprising agreement. In 2015 Greenwald, Banaji, and their coauthor Brian Nosek stated that the psychometric issues associated with various IATs “render them problematic to use to classify persons as likely to engage in discrimination”. Indeed, these days IAT evangelists and critics alike mostly agree that the test is too noisy to usefully and accurately gauge people’s likelihood of engaging in discrimination — a finding supported by a series of meta-analyses showing unimpressive correlations between IAT scores and behavioural outcomes (mostly in labs). Race IAT scores appear to account for only about 1 per cent of the variance in measured behavioural outcomes, reports an important meta-analysis available in preprint, co-authored by Nosek. (That meta-analysis also looked at IAT-based interventions, finding that while implicit bias as measured by the IAT “is malleable… changing implicit bias does not necessarily lead to changes in explicit bias or behavior.”)
So where does this leave the IAT? In a new paper in Current Directions in Psychological Science called “The IAT Is Dead, Long Live the IAT: Context-Sensitive Measures of Implicit Attitudes Are Indispensable to Social and Political Psychology”, John Jost, a social psychologist at New York University and a leading IAT researcher, seeks to draw a clear line between the “dead” diagnostic version of the IAT, and what he sees as the test’s real-world version – a sensitive, context-specific measure that shouldn’t be used for diagnostic purposes, but which has potential in various research and educational contexts.
Does this represent a constructive manifesto for the future of this controversial psychological tool? Unfortunately, I don’t think it does – rather, it contains many confusions, false claims, and strawman arguments (as well as a misrepresentation of my own work). Perhaps most frustrating, Jost joins a lengthening line of IAT researchers who, when faced with the fact that the IAT appears to have been overhyped for a long time by its creators, its most enthusiastic proponents, and journalists, respond with an endless variety of counterclaims that don’t quite address the core issue itself, or that pretend those initial claims were never made in the first place.
How important is your country, really? It’s a pointed question, especially with Brexit looming and the reinvigoration of nationalistic movements in the U.S. and EU. So it feels like a fitting time to look at a creative study that evaluated differences in, well, national self-importance.
Perhaps no concept has been more important to social psychology in recent years — for good and ill — than “social priming”, or the idea, as the science writer Neuroskeptic once put it, that “subtle cues can exert large, unconscious influences on human behaviour.” This subgenre of research has produced a steady drumbeat of interesting findings, but unfortunately, an increasing number of them are failing to replicate – including modern classics, like the idea that exposure to ageing-related words makes you walk more slowly, or that thinking about money increases your selfishness.
The so-called “Macbeth effect” is another classic example of social priming that gained mainstream recognition and acceptance from psychologists and laypeople alike. The term was first introduced by the psychologists Chen-Bo Zhong and Katie Liljenquist, who reported in a 2006 paper in Science that “a threat to one’s moral purity induces the need to cleanse oneself”.
This claim is such an interesting, provocative example of the connection between body and mind that it’s little wonder it has spread far and wide — there aren’t many social-priming findings with their own Wikipedia page (it was also covered here at the Research Digest). But is the effect as strong as everyone thinks? For a recent paper in Social Psychology, the psychologists Jedediah Siev, Shelby Zuckerman, and Joseph Siev decided to find out by conducting a meta-analysis of the available papers on the Macbeth effect to date.
Outrage: It’s absolutely everywhere. Today’s world, particularly the version of it blasted into our brains by social media, offers endless fodder, from big, simmering outrages (climate change and many powerful institutions’ refusal to do anything about it) to smaller quotidian ones (every day, someone, somewhere does something offensive that comes to Twitter’s attention, leading to a gleeful pile-on).
In part because of rising awareness of the adverse consequences of unfettered digital-age outrage, and of journalistic treatments like So You’ve Been Publicly Shamed by Jon Ronson (which I interviewed him about here), outrage has become a particularly potent dirty word in recent years. Outrage, the thinking goes, is an overly emotional response to a confusing world, and drives people to nasty excesses, from simple online shaming to death threats or actual violence.
But a new paper argues that the concept of outrage has gotten too bad a rap and that its upsides, especially as a motivator of collective action and costly helping, have been overlooked. Writing in Trends in Cognitive Sciences, the psychologists Victoria Spring, Daryl Cameron and Mina Cikara detail important questions about outrage that have yet to be answered, and they highlight how certain findings – especially from the “intergroup relations” literature, in contrast to the mostly negative findings from moral psychology – suggest it can serve a useful purpose.
In 2016, the unexpected outcome of two votes shook the world: the UK voting to leave the European Union, and the US electing President Donald Trump. Even the pollsters got it wrong – for example, based on the latest polling data, the New York Times gave Clinton an 85 per cent chance of winning just the day before the election.
Accurate polling is important for a number of reasons. Poll results influence politicians’ campaign strategies and fundraising efforts; affect market prices and business forecasts; and can sway voters’ perceptions and even turnout. So when the polls are wide of the mark – as they were so badly in 2016 – all of these decisions are led astray by misleading information.
But polling is not as simple as just asking a lot of people who they intend to vote for. Polls are often biased by who is motivated enough to respond, and people can be overly optimistic about the likelihood that they will actually vote.
Another factor, outlined by Andy Brownback and Aaron Novotny of the University of Arkansas in their recent paper in the Journal of Behavioral and Experimental Economics, is people feeling the need to conceal their true voting intentions.
Let’s start with a quick multiple-choice test about multiple-choice tests: when designing them, should you a) avoid using complex questions, b) have lots of potential answers for each question, c) all of the above or d) none of the above? The correct answer is (a), though as we’ll see, this was not a very well-crafted multiple-choice question.
The issue of how best to design multiple-choice questions is important since they have been popular in both education and business settings for many years now, because they are quick to administer and easy to mark and grade. Furthermore, many students report preferring them over other test formats.
As well as being a useful assessment tool, if they are well-designed they can also aid learning. This is because of the Testing Effect – the way that retrieving knowledge helps consolidate it in memory.
Thankfully, in a recent paper in the Journal of Applied Research in Memory and Cognition, Andrew Butler of Washington University in St. Louis has reviewed the parallel literatures on how best to design multiple-choice tests for learning and assessment, and from this he’s recommended six evidence-based tips:
Psychology as a scientific field enjoys a tremendous level of popularity throughout society – a fascination that could even be described as quasi-religious. This is likely why it is one of the most popular undergraduate majors in European and American universities. At the same time, it is not uncommon to encounter the firm opinion that psychology in no way qualifies as a science. Such extremely critical opinions about psychology are often borrowed from authorities – after all, it was none other than the renowned physicist and Nobel laureate Richard Feynman who, in a famous interview in 1974, compared the social sciences, and psychology in particular, to a cargo cult. Scepticism toward psychological science can also arise from encounters with the commonplace simplifications and myths spread by pop-psychology, or from a failure to understand what science is and how it solves its dilemmas.
According to William O’Donohue and Brendan Willis of the University of Nevada, these issues are further compounded by undergraduate psychology textbooks. Writing recently in Archives of Scientific Psychology, they argue that “[a] lack of clarity and accuracy in [psych textbooks] in describing what science is and psychology’s relationship to science are at the heart of these issues.” The authors based their conclusions on a review of 30 US and UK undergraduate psychology textbooks, most updated in the last few years (general texts and others covering abnormal, social and cognitive psych), in which they looked for 18 key contemporary issues in philosophy of science.
Towards the end of the Disney film Aladdin, our hero’s love rival, the evil Jafar, discovers Aladdin’s secret identity and steals his magic lamp. Jafar’s wish to become the world’s most powerful sorcerer is soon granted and he then uses his powers to banish Aladdin to the ends of the Earth.
What follows is a lingering close-up of Jafar’s body. He leans forward, fists clenched, with an almost constipated look on his face. He then explodes in uncontrollable cackles that echo across the landscape. For many millennials growing up in the 1990s, it is the archetypal evil laugh.
A recent essay by Jens Kjeldgaard-Christiansen in the Journal of Popular Culture asks what the psychology behind this might be. Kjeldgaard-Christiansen is well placed to provide an answer, having previously used evolutionary psychology to explain the behaviours of heroes and villains in fiction more generally.