Many millions of people around the world have taken the “implicit association test” (IAT) hosted by Harvard University. By measuring the speed of your keyboard responses to different word categories (using keys previously paired with a particular social group), it purports to show how much subconscious or “implicit” prejudice you hold towards various groups, such as different ethnicities. You might think that you are a morally good, fair-minded person free from racism, but the chances are your IAT results will suggest that you harbour racial prejudices that lie outside of your awareness.
What is it like to receive this news, and what do the public think of the IAT more generally? To find out, a team of researchers, led by Jeffery Yen at the University of Guelph, Ontario, analysed 793 reader comments on seven New York Times articles (op-eds and science stories) about the IAT published between 2008 and 2010. The findings appear in the British Journal of Social Psychology.
It’s common for psychologists to use the terms “self-control” and “cognitive control” interchangeably. Consider the introduction to a review paper published recently in Trends in Cognitive Sciences on whether our self-control is limited or not (I’ve added the emphases): “Whereas cognitive control relies on at least three separate (yet related) executive functions – task switching, working-memory, and inhibition – at its heart, self-control is most clearly related to inhibitory cognitive control …”
When scholars do make a distinction, they mostly use self-control to refer to the ability to delay immediate gratification in the service of a longer-term goal, whereas they use the term cognitive control to refer to the related ability to ignore distracting information or stimuli. Defined this way, do self-control and cognitive control essentially involve the same mental processes? According to a new study by Stefan Scherbaum at Technische Universität Dresden and his colleagues in Acta Psychologica, they do not.
There’s been a lot of talk of the crisis in psychology. For decades, and often with the best of intentions, researchers have engaged in practices that have made it likely their results are “false positives” – that is, not real. But that was in the past. The crisis is ending. “We do not call the rain that follows a long drought a ‘water crisis’,” write Leif Nelson at UC Berkeley and Joseph Simmons and Uri Simonsohn at the University of Pennsylvania. “We do not call sustained growth following a recession an ‘economic crisis’.”
In their paper, due for publication in the Annual Review of Psychology, the trio observe that had any psychologists been in hibernation for the last seven years, they would not recognise their field today. The full disclosure of methods and data, the pre-registration of studies, the publication of negative findings, and replication attempts – all of which help reduce the risk of false positives – have increased enormously. “The improvements to our field have been dramatic,” they write. “This is psychology’s renaissance.”
As well as giving the field of psychology a pep talk, their paper provides a useful review of how we got to this point, the reasons things are getting better, and the ongoing controversies.
“Figures such as Princess Diana, Oprah Winfrey, Mahatma Gandhi, Ronald Reagan, and Adolf Hitler share this triumphant, mysterious, and fascinating descriptor”, write the authors of a new paper on charisma. And yet, they add, “the empirical study of charisma is relatively young and sparse, and no unifying conceptualization of charisma currently exists”. The research and theorising that has been done has focused on charismatic leadership, they explain, neglecting the everyday variety. In their paper in the Journal of Personality and Social Psychology, the University of Toronto researchers describe how they developed their new six-item measure, the “General Charisma Inventory” (GCI), and they show how scores on the GCI are associated with people’s persuasiveness and likability.
When I was at primary school, we used to type out the word “BOOBIES” using upside-down digits on our electronic calculators and we thought it was hilarious. This was an all-boys school in the late 80s, so cut us some slack. And anyway, maybe we weren’t so daft. The word (although spelt differently, as “Booby”) was among the top three funniest words identified in a new paper in Behavior Research Methods, which is the first in-depth investigation of the perceived funniness of individual English words.
Among the roughly 5,000 words that were studied, Booty was rated the funniest of all, scoring 4.32 on average on a scale from 1 (not funny at all) to 5 (most funny). The lowest-scoring word was Rape, with an average of 1.18. The researchers, Tomas Engelthaler and Thomas Hills at the University of Warwick, England, hope their findings will provide a useful resource, a “highly rudimentary ‘fruit fly’ version” of humour, for researchers studying the psychology of what makes us laugh.
Up and down the land, parents and teenagers are engaged in tense negotiation and diplomacy in an effort to maintain domestic peace. Some households are finding more success than others. Their secret, according to a new paper in NeuroImage, is a literal meeting of minds – synchronised neural activity seems to foster emotional harmony. Moreover, when parents and their teenagers display this “neural similarity”, write Tae-Ho Lee and his colleagues, “this promotes youths’ psychological adjustment”.
These are intriguing findings – in fact, the researchers claim this is the first time that anyone has compared the brain activity of parent-child dyads with their interpersonal relations. However, sceptics will baulk at the rampant neuro-reductionism and at the paper’s repeated claims of brain-based causation on the basis of purely correlational evidence.
The failure to reproduce established psychology findings on renewed testing, including some famous effects, has been well-publicised and has led to talk of a crisis in the field. However, psychology is a vast topic and there’s a possibility that the findings from some sub-disciplines may be more robust than others, in the sense of replicating reliably, even in unfavourable circumstances, such as when the participants have been tested on the same effect before.
A new paper currently available as a preprint at PsyArXiv has tested whether this might be the case for nine key findings from cognitive psychology, related to perception, memory and learning. Rolf Zwaan at Erasmus University Rotterdam and his colleagues found that all nine effects replicated reliably. “These results represent good news for the field of psychology,” they said.
In Part One, published yesterday, we reported the views of active research psychologists on the state of their field, as surveyed by Matt Motyl and his colleagues at the University of Illinois at Chicago. Researchers reported a cautious optimism: research practices hadn’t been as bad as feared, and are in any case improving.
But is their optimism warranted? After all, several high-profile replication projects have found that, more often than not, re-running previously successful studies produces only null results. Yet defenders of the state of psychology argue that replications fail for many reasons, including flaws in the replication attempts themselves and differences in samples, so the implications aren’t settled.
To get closer to the truth, Motyl’s team complemented their survey findings with a forensic analysis of published data, uncovering results that seem to bolster their optimistic position. In Part Two of our coverage, we look at these findings and why they’re already proving controversial.
The field of social psychology is reeling from a series of crises that call into question the everyday scientific practices of its researchers. The fuse was lit by statistician John Ioannidis in 2005, in a review that outlined why, thanks particularly to what are now termed “questionable research practices” (QRPs), over half of all published research in the social and medical sciences might be invalid. Kaboom. This shook a large swathe of science, but the fires continue to burn especially fiercely in social and personality psychology, which marshalled its response through a 2012 special issue of Perspectives on Psychological Science that brought these concerns fully out in the open, discussing replication failure, publication biases, and how to reshape incentives to improve the field. The fire flared up again in 2015 with the publication of Brian Nosek and the Open Science Collaboration’s high-profile attempt to replicate 100 studies in these fields, which succeeded in only 36 per cent of cases. Meanwhile, and to the field’s credit, efforts to institute better safeguards, such as registered reports, have gathered pace.
So how bad did things get, and have they really improved? A new article, available as a preprint at the Journal of Personality and Social Psychology, tries to tackle the issue from two angles: first, by asking active researchers what they think of the past and present state of their field, and how they now go about conducting psychology experiments; and second, by analysing features of published research to estimate the prevalence of broken practices more objectively.
The paper comes from a large group of authors at the University of Illinois at Chicago. It was led by Matt Motyl, a social and personality psychologist who has previously published with Nosek, including on the issue of improving scientific practice, and conducted under the guidance of Linda Skitka, a distinguished social psychologist who helped create the journal Social Psychological and Personality Science and who sits on the editorial boards of many more social psych journals.
Psychology research is the air that we breathe at the Digest, making it crucial that we understand its quality. So in this two-part series, we’re going to explore the issues raised in the University of Illinois at Chicago paper, to see if we can make sense of the state of social psychology, beginning in this post with the findings from Motyl et al.’s survey of approximately 1,200 social and personality psychologists, from graduate students to full professors, mainly from the US, Europe and Australasia.
Psychology is overly dependent on student samples, but you might assume this has at least one advantage: when comparing across student samples, you can rule out the influence of complicating background factors, such as differences in average personality profile. In fact, writing in the Journal of Personality, a team of US researchers led by Katherine Corker at Kenyon College has challenged this assumption: their findings suggest that if you test a group of students at one university, it’s not safe to assume that their average personality profile will match that of a sample of students at a university elsewhere in the same country.