If you want to know about the special relationship between human and canine you need only watch a dog owner slavishly feed, cuddle and clean up after her furry companion, day after day after day. But is this unique cross-species relationship also reflected at a deeper level, in the workings of the canine brain? A recent study in Learning and Behavior suggests so, finding that highly trained dogs have a dedicated neural area for processing human faces, separate from the area involved in processing the faces of other dogs.
The researchers, led by Andie Thompkins at Auburn University, say their results are of theoretical importance (in relation to the evolutionary origin of cognitive abilities) and could have practical use too, potentially paving the way to using brain scans to validate the expertise of trained dogs.
If you Google “holding a warm cup of coffee can” you’ll get a handful of results all telling the same story based on social priming research (essentially the study of how subtle cues affect human thoughts and behavior). “Whether a person is holding a warm cup of coffee can influence his or her views of other people, and a person who has experienced rejection may begin to feel cold,” notes a New York Times blog post, while a Psychology Today article explains that research shows that “holding a warm cup of coffee can make you feel socially closer to those around you.”
These kinds of findings are most often associated with John Bargh, a Yale University professor and one of the godfathers of social priming. In his 2017 book Before You Know It: The Unconscious Reasons We Do What We Do, Bargh goes further, even suggesting – based on social priming studies and a small study that found two hours of “hyperthermia” treatment with an infrared lamp helped depressed in-patients – that soup might be able to treat depression. “After all,” he writes, “it turns out that a warm bowl of chicken soup really is good for the soul, as the warmth of the soup helps replace the social warmth that may be missing from the person’s life, as when we are lonely or homesick.” He continues, “These simple home remedies are unlikely to make big profits for the pharmaceutical and psychiatric industries, but if the goal is a broader and more general increase in public mental health, some research into their possible helpfulness could pay big dividends for individuals currently in distress, and for society as a whole.”
Psychologists, philosophers and poets have devoted many years to reflecting on the meaning of love for another. A less-explored question – the focus of a study to appear in the January 2019 issue of the Journal of Social and Personal Relationships – is what makes us feel loved by others.
More specifically, the study investigated whether there is widespread agreement about the everyday experiences, romantic and non-romantic, that lead us (or US citizens, at least) to feel loved. Some of the results are obvious – many participants agreed that making love, being hugged, receiving compliments and gifts, make us feel loved. But there was even stronger agreement that mundane yet touching gestures make us feel loved, such as our pets being happy to see us, a child snuggling up to us, or someone showing us compassion.
When’s the best time of day to give someone bad news: first thing in the morning, or early evening? If it’s in the morning, they have longer to work out what to do about it, but you might be better off plumping for the evening: according to a new study, published open-access in Neuropsychopharmacology, people are likely to suffer less of a physiological stress response at that time of day.
“Learning styles” – there can be few ideas that have created such a stark disconnect between the experts on the ground and the evidence published in scholarly journals. Endorsed by the overwhelming majority of teachers, yet dismissed by most psychologists and educational neuroscientists as a “neuromyth”, the basis of learning styles is that people learn better when taught via their preferred learning modality, usually (but not always) described as either visual, auditory or kinaesthetic.
Many studies have already uncovered serious problems with the learning styles concept, such as that measures of learning styles are invalid and that students do not in fact learn better via their preferred modality. Now further evidence against learning styles comes from Greece, in one of the first investigations on the topic to involve primary school pupils.
Writing in Frontiers in Education, Marietta Papadatou-Pastou and her colleagues report that teachers and pupils did not agree on the pupils’ preferred learning modality – a significant blow for the learning styles concept since “teachers typically adopt learning styles within a classroom context by relying on their own assessment of students’ learning styles.”
For a long time, some psychologists have understood that their field has an issue with WEIRDness. That is, psychology experiments disproportionately involve participants who are Western and Educated, and who hail from Industrialised, Rich, Democratic societies, which means many findings may not generalise to other populations, such as, say, rural Samoan villagers.
In a new paper in PNAS, a team of researchers led by Mostafa Salari Rad decided to zoom in on a leading psychology journal to better understand the field’s WEIRD problem, evaluate whether things are improving, and come up with some possible changes in practice that could help spur things along.
While there’s still a debate about whether we have free will or not, most researchers at least agree that we feel as if we do. That perception is often considered to have two elements: a sense of having decided to act – called “volition”; and feeling that that decision was our own – having “agency”.
Now in a paper in PNAS, Ryan Darby at Vanderbilt University Medical Center and colleagues have used a new technique – lesion network mapping – to identify for the first time the brain networks that underlie our feelings of volition and of agency. “Together, these networks may underlie our perception of free will, with implications for neuropsychiatric diseases in which these processes are impaired,” the researchers write.
It has been a long and bumpy road for the implicit association test (IAT), the reaction-time-based psychological instrument that its co-creators, Mahzarin Banaji and Anthony Greenwald — among others in their orbit — claimed measures test-takers’ levels of unconscious social bias and their propensity to act in a biased and discriminatory manner, be that via racism, sexism, ageism, or some other category, depending on the context. The test’s advocates claimed this was a revelatory development, not least because the IAT supposedly measures aspects of an individual’s bias beyond even what that individual is consciously aware of.
As I explained in a lengthy feature published on New York Magazine’s website last year, many doubts have emerged about these claims, ranging from the question of what the IAT is really measuring (as in, can a reaction-time difference measured in milliseconds really be considered, on its face, evidence of real-world-relevant bias?) to the algorithms used to generate scores to, perhaps most importantly (given that the IAT has become a mainstay of a wide variety of diversity training and educational programmes), whether the test really does predict real-world behaviour.
On that last key point, there is surprising agreement. In 2015 Greenwald, Banaji, and their coauthor Brian Nosek stated that the psychometric issues associated with various IATs “render them problematic to use to classify persons as likely to engage in discrimination”. Indeed, these days IAT evangelist and critic alike mostly agree that the test is too noisy to usefully and accurately gauge people’s likelihood of engaging in discrimination — a finding supported by a series of meta-analyses showing unimpressive correlations between IAT scores and behavioural outcomes (mostly in labs). Race IAT scores appear to account for only about 1 per cent of the variance in measured behavioural outcomes, reports an important meta-analysis available in preprint, co-authored by Nosek. (That meta-analysis also looked at IAT-based interventions, finding that while implicit bias as measured by the IAT “is malleable… changing implicit bias does not necessarily lead to changes in explicit bias or behavior.”)
So where does this leave the IAT? In a new paper in Current Directions in Psychological Science called “The IAT Is Dead, Long Live the IAT: Context-Sensitive Measures of Implicit Attitudes Are Indispensable to Social and Political Psychology”, John Jost, a social psychologist at New York University and a leading IAT researcher, seeks to draw a clear line between the “dead” diagnostic version of the IAT and what he sees as the test’s real-world version – a sensitive, context-specific measure that shouldn’t be used for diagnostic purposes, but which has potential in various research and educational contexts.
Does this represent a constructive manifesto for the future of this controversial psychological tool? Unfortunately, I don’t think it does – rather, it contains many confusions, false claims, and strawman arguments (as well as a misrepresentation of my own work). Perhaps most frustrating, Jost joins a lengthening line of IAT researchers who, when faced with the fact that the IAT appears to have been overhyped for a long time by its creators, its most enthusiastic proponents, and journalists, respond with an endless variety of counterclaims that don’t quite address the core issue, or that pretend those initial claims were never made in the first place.
Replicating a study isn’t easy. Just knowing how the original was conducted isn’t enough. Just having access to a sample of experimental participants isn’t enough. As psychological researchers have known for a long time, all sorts of subtle cues can affect how individuals respond in experimental settings. A failure to replicate, then, doesn’t always mean that the effect being studied isn’t there – it can simply mean the new study was conducted a bit differently.
Many Labs 2, a project of the Center for Open Science at the University of Virginia, embarked on one of the most ambitious replication efforts in psychology yet – and did so in a way designed to address these sorts of critiques, which have in some cases hampered past efforts. The resultant paper, a preprint of which can be viewed here, is lead-authored by Richard A. Klein of the Université Grenoble Alpes. Klein and his very, very large team – it takes almost four pages of the preprint just to list all the contributors – “conducted preregistered replications of 28 classic and contemporary published findings with protocols that were peer-reviewed in advance to examine variation in effect magnitudes across sample and setting.”
After a trauma many people have the sense it has changed them for the better, such as granting them a new appreciation for life or improving their relationships. This has given rise to the appealing notion that there is such a thing as “post-traumatic growth”. However, the majority of investigations into this phenomenon have relied on asking people whether they believe they have changed; very few have assessed people prior to a trauma and then re-assessed them afterwards to see if positive changes have actually occurred.
A new study in the Journal of Social and Personal Relationships is the first to apply this kind of “prospective design” in the context of relationship breakups in young adults, and – unfortunately for anyone who found comfort and inspiration in the principle of post-traumatic growth – the authors Meghan Owenz and Blaine Fowers say their findings are more consistent with the idea that such growth is mostly illusory, the result of a positive re-appraisal of the breakup and one’s current situation.