When I was at primary school, we used to type out the word “BOOBIES” using upside-down digits on our electronic calculators and we thought it was hilarious. This was an all-boys school in the late 80s, cut us some slack. And anyway, maybe we weren’t so daft. The word (although spelt differently, as “Booby”) was among the three funniest words identified in a new paper in Behavior Research Methods, which is the first in-depth investigation of the perceived funniness of individual English words.
Among the 5,000 words that were studied, Booty was rated the funniest of all, scoring 4.32 on average on a scale from 1 (not funny at all) to 5 (most funny). The lowest-scoring word was Rape, with an average of 1.18. The researchers, Tomas Engelthaler and Thomas Hills at the University of Warwick, England, hope their findings will provide a useful resource, a “highly rudimentary ‘fruit fly’ version” of humour, for researchers studying the psychology of what makes us laugh.
What can we tell about someone from their face? Their favoured facial expressions can hint at their temperament, the weathering of their skin at their life history, their facial hair and makeup at their aesthetic taste. But now, new research in the Journal of Personality and Social Psychology: Attitudes and Social Cognition suggests that we can also intuit their names, because a person’s given name influences their facial appearance in adult life. On the face of it (sorry) this is hard to believe, but the case, made across eight studies, is based on plenty of careful evidence, and also proposes a plausible explanation.
Avid readers of novels know that they often take the perspective of the characters they read about. But just how far does this mental role-playing go? A new paper in the Journal of Memory and Language has provided a clever demonstration of how readily we simulate the thoughts of fictional characters. Borrowing a method from research into the psychology of deliberate forgetting, the researchers at Binghamton University, USA, show that when a story character needs to focus on remembering one series of words rather than another, the reader simulates this same memory process in their own mind. The character’s mental experience becomes the reader’s mental experience.
To lead a good life, we need to make good decisions: manage our health and financial affairs, invest in appropriate relationships, and avoid serious lapses like falling for online scams. What equips us to do this? One candidate is IQ: after all, people who score higher on intelligence tests tend to go on to do better academically and in their careers. But many of us know intellectual titans who still make grave errors of judgment in their lives. Book-smart doesn’t necessarily make you life-smart, and a new article in the journal Thinking Skills and Creativity examines the utility of IQ in navigating existence, and how another mental ability may put it in the shade.
Three years ago, the film Lucy came out starring Scarlett Johansson as the eponymous heroine who is implanted with drugs that allow her to use the full capacity of her brain rather than the mere 10 per cent that the rest of us supposedly use. In response I wrote an article for WIRED, “All you need to know about the 10 per cent brain myth in 60 seconds”. Soon afterwards I received an angry, acerbic 1,200-word email from a reader: “I am obviously not going to insist you take your article down since that isn’t my place,” she wrote, “but you should certainly not feel proud to be spreading such misinformed information to the public”.
What particularly shocked me was not just the tone of the correspondence, but the fact that this email, endorsing the 10 per cent brain myth, came from a Master’s student in neuroscience at Yale. But perhaps this wasn’t such an odd occurrence. A new US survey published in Frontiers in Psychology finds that belief in brain myths remains widespread, and moreover, that extensive education in neuroscience seems to provide little protection from such beliefs.
In a ranking of genuinely important YouTube videos to have gone viral, this one (see above) from 2014 places high: it shows over 100 instances of harassment endured by a woman wearing a hidden camera as she walked around New York City for ten hours, including comments, stares, winks and whistles.
The video was posted in 2014 by the anti-harassment activist group Hollaback! to highlight the prevalence of this kind of behaviour. As individual testimony, it was powerful. But, critics could argue, it was just one woman, on just one day. This is an argument they cannot use about the results of a new study, published in the British Journal of Social Psychology, which the researchers, led by Elise Holland at the University of Melbourne in Australia, believe is the first to capture just how common sexual harassment and “objectification” are in the daily lives of young women – and to show the possible impact on how women think about themselves.
No sooner had the American Psychological Association released their 2015 task force report supposedly confirming that violent video games make players aggressive than criticisms of the report, alleging bias and bad practice, started pouring in. On the issue of whether violent games breed real-world aggression, there’s not much that you can say for certain except that there’s a lot of disagreement among experts. So of course, one more study is not going to settle this long-running debate.
But what a new paper in Brain Imaging and Behavior does do is provide a good test of a key argument made by the “violent games cause aggression” camp, namely that over time, excessive violent gameplay desensitises the emotional responsiveness of players. Using brain scanning to look for emotional desensitisation at a neural level, Gregor Szycik at Hannover Medical School and his colleagues in fact found no evidence that excessive players of violent video games are emotionally blunted.
Cognitive performance fluctuates throughout the day. Depending on their “chronotype”, some people are sharpest in the morning (“larks”), while others generally prefer the later hours of the day (“owls”). For obvious reasons, this is mirrored in our preferred sleep routines: larks tire earlier in the evening and, as a consequence, also wake up earlier, while owls show the opposite pattern. Your chronotype is not something that you’re stuck with for the rest of your life; it changes with age. In fact we’re most likely to show an owl-like chronotype during adolescence, which might at least partly explain why teenagers often stay up late and arrive at school with eyes bloodshot thanks to a hefty sleep debt.
But if late chronotypes are so common in adolescents, why does school start so early (usually well before 9am in the UK, the Netherlands and Germany)? Doesn’t that mean that many students are likely to be constantly sleep-deprived and not assessed during their biological peaks? Yes, it does! In fact, there is a lot of evidence to suggest that school kids who have a late chronotype achieve lower grades. But besides reduced sleep time, there are several alternative explanations for why a late chronotype could be associated with lower academic performance (such as absenteeism, for instance), as explored in a new paper in Scientific Reports. The researchers, led by Giulia Zerbini at the University of Groningen, say their findings might help us understand the chronotype/school grade link and how we can fix it.
I confess, I’ve tried having an alcoholic drink before giving a public speech, telling myself that it would take the edge off my nerves. But I’m going to think twice before doing so again: a new study in Behaviour Research and Therapy carefully monitored the effects of moderate alcohol intake on the speech-giving performance of socially anxious and control participants, and while the alcohol made the nervous folk feel more relaxed, it actually harmed their performance.
Studying people who have brain damage or illness has been hugely important to progress in psychology. The approach is akin to reverse engineering: study how things go wrong when particular regions of the brain are compromised and it provides useful clues as to how those regions usually contribute to healthy mental function.
As a result, some neuropsychological conditions, such as Broca’s aphasia (speech deficits), prosopagnosia (a difficulty recognising faces, also known somewhat misleadingly as “face blindness”) and Alien Hand syndrome (a limb seeming to act of its own volition) have become extremely well-known – at least in psychological circles – and extensively studied. However, others are virtually unheard of, even though their importance to our understanding of the brain is significant.
Neuropsychologist Alfredo Ardila at Florida International University has just published in the journal Psychology and Neuroscience an overview of four of these little-known conditions, “so rare that they are not even mentioned in basic neuropsychology textbooks”: central achromatopsia, Bálint’s syndrome, pure word deafness, and aphasia of the supplementary motor area. This follows a paper he published last year covering four other rare but important neuropsychological syndromes: somatoparaphrenia, akinetopsia, reduplicative paramnesia, and autotopagnosia.
“In neuropsychology … there are some unusual syndromes that are found very sporadically,” he writes. “But their rarity does not diminish their importance in the fundamental understanding about the brain organisation of cognition, as well as in clinical analysis of patients with brain pathologies.”
Here’s a brief breakdown of what Ardila has to say about these rare conditions and why they’re important.