As the father of an 18-month-old toddler, I would love to know exactly what my son is thinking. Like many parents, one of the ways I try to find out is to ask him questions of the form “Do you want X or Y?” But does his answer to this type of question actually reveal his preference, or is it more a reflection of a quirky cognitive bias that is more powerful in children than in adults?
Two competing effects influence how adults respond to binary choices. The first is called the “Primacy Effect”, which describes the way that the first option we hear tends to stick in our minds. For example, one study found that adults are more likely to choose “heads” when asked whether a coin toss is going to be “heads or tails”. The second effect, which sometimes competes with the Primacy Effect, is called the “Recency Effect”. This captures the way that the last thing we hear or experience can also carry more weight in our memory (this is why popstars end their concerts with their best songs, so that everyone leaves thinking the whole gig was great). In adults, neither the Primacy Effect nor the Recency Effect is always the more pronounced, with evidence suggesting that personality type, familiarity with the information, and how controversial the topic is all play a mediating role.
Keen to test if the same is true in young children, or if one of the effects is more dominant, a team led by Emily Sumner conducted two experiments and published their findings in a recent paper in PLOS One fantastically titled “Cake or Broccoli: Recency Biases Children’s Verbal Responses”.
The novelist David Foster Wallace famously told a story of two young fish swimming in the sea. An older fish glides by and asks, “How’s the water?”, at which the two look at each other in puzzlement and say, “What’s water?” The central point of the parable is that we are constantly immersed in contexts to which we give little thought or consideration, but which nevertheless influence us profoundly. Among the most powerful of such contexts is language. A century of research on the linguistic relativity hypothesis (LRH; also known as the Sapir-Whorf hypothesis) has shown that the language we speak profoundly affects our experience and understanding of life, impacting everything from our perception of time and space to the construction of our self-identity.
What might the implications of the LRH be for psychology itself? As a science, the field generally aims to be neutral and objective, and to discover universal truths about the human mind. Yet it is surely consequential that the field mostly conducts its business in English, this being the default language in international journals and conferences. For instance, if a phenomenon has not been identified in English – even if it has in other languages – it is unlikely to be a topic of concern, and may not even “exist” for English-speaking scholars at all.
One way that the field has sought to address this limitation is by “borrowing” words from other languages and cultures. To ascertain the extent of this cross-cultural borrowing, I analysed a sample of words in psychology and recently published my results in the Journal of Positive Psychology.
Socrates famously declared that “the unexamined life is not worth living” and that “knowing thyself” was the path to true wisdom. But is there a right and a wrong way to go about such self-reflection?
Simple rumination – the process of churning your concerns around in your head – isn’t the answer. It’s likely to cause you to become stuck in the rut of your own thoughts and immersed in the emotions that might be leading you astray. Certainly, research has shown that people who are prone to rumination also often suffer from impaired decision-making under pressure and are at substantially increased risk of depression.
Instead, the scientific research suggests that you should adopt an ancient rhetorical method favoured by the likes of Julius Caesar and known as “illeism” – speaking about yourself in the third person (the term was coined by Samuel Taylor Coleridge from the Latin ille, meaning “he, that”). If I were considering an argument that I’d had with a friend, for instance, I might start by silently thinking to myself “David felt frustrated that…” The idea is that this small change in perspective can clear your emotional fog, allowing you to see past your biases.
A body of research has already shown that this kind of third-person thinking can temporarily improve decision making. Now a preprint at PsyArXiv finds that it can also bring long-term benefits to thinking and emotional regulation. It is, according to the authors, “the first evidence that wisdom-related cognitive and affective processes can be trained in daily life and of how to do so.”
Looking at the latest epidemiological data, it could be argued that we are in the midst of a pandemic of mental illness of dimensions never before seen in human history. The WHO estimates that over 350 million people around the world are presently suffering from depression – roughly 5 per cent of the global population. At its extreme, depression may lead to suicide, which is estimated to claim around 1 million lives every year. And the numbers continue to grow. Faced with this rising tide of illness, it is impossible to overestimate the importance of hard facts and data indicating the paths researchers and clinicians may follow in search of ways to help. Sometimes, as suggested by a meta-analysis of 50 years of studies on indicators that help predict suicide attempts, we seem entirely helpless. In other cases, like the recent meta-analysis of the neural correlates of the changes brought about by psychotherapy in depressed brains, study results do bring us hope.
The first systematic review and meta-analysis of biological markers evaluated in randomized trials of psychological treatments for depression, published in Neuroscience and Biobehavioral Reviews, is another attempt at understanding methods of treating this terrifying illness. The authors – Ioana A. Cristea, Eirini Karyotaki, Steven D. Hollon, Pim Cuijpers and Claudio Gentili – quite rightly point out that understanding how psychological interventions impact, or are impacted by, biological variables has important implications. For many people, depression co-occurs with a bodily illness, such as cancer, diabetes, heart disease, or immune system and neurological disorders, and at times is a consequence of that illness. Although we still know little about the reciprocal cause-and-effect mechanisms between psychological and somatic symptoms, some studies have suggested that psychological interventions not only change mood, but also normalise the functioning of the autonomic nervous system, with a therapeutic effect on physical conditions such as heart disease. But is this really true?
To win a medal of any kind at the Olympic Games takes years of training, hard work and sacrifice. Standing on an Olympic podium is widely regarded as the pinnacle of an athlete’s career. Yet only one athlete can win gold, leaving the two runners-up to ponder what might have been. Intriguingly, a seminal study from the 1992 Olympic Games suggested that such counterfactual thinking was especially painful for silver medallists, who appeared visibly less happy than bronze medallists. The researchers speculated that this may have been because of the different counterfactual thinking the two groups engaged in, with bronze medallists being happy that they didn’t come fourth, while silver medallists felt sad that they didn’t win gold.
However, subsequent research based on the 2000 Olympic Games did not replicate this finding: this time silver medallists were found to be happier than bronze medallists. To further muddy the waters, a study from the 2004 Games was consistent with the seminal research, finding that straight after competition, gold and bronze medallists were more likely to smile than silver medallists, with these smiles being larger and more intense.
Now further insight into the psychology of coming second or third comes via Mark Allen, Sarah Knipler and Amy Chan of the University of Wollongong, who have released their findings based on the 2016 Olympic Games. These latest results, published in the Journal of Sports Sciences, again challenge that initial eye-grabbing result suggesting that bronze medallists are happier than silver medallists, but they support the idea that the nature of counterfactual thinking differs depending on whether athletes come second or third.
In case you hadn’t noticed, there is an ongoing debate about the existence of differences between women’s and men’s brains, and the extent to which these might be linked to biological or to cultural factors. In this debate, a real game-changer of a study would involve the identification of clear-cut sex differences in foetal brains: that is, in brains that have not yet been exposed to all the different expectations and experiences that the world might offer. A recent open-access study published in Developmental Cognitive Neuroscience by Muriah Wheelock at the University of Washington and her colleagues, including senior researcher Moriah Thomason at New York University School of Medicine, claims to have done just that. It was hailed by the researchers themselves as “confirmation that sexual dimorphism in functional brain systems emerges during human gestation”, and in various ways by the popular press – for example, by The Times of London’s headline: “Proof at last: women and men are born to be different”.
Does this study live up to the claims made by its authors and, more excitedly still, by those passing the message on? I think not.
It’s now well known that many of us overestimate our own brainpower. In one famous study, for instance, more than 90 per cent of US college professors claimed to be better than average at teaching – which would be highly unlikely. Our egos blind us to our own flaws.
But do we have an even more inflated view of our nearest and dearest? It seems we do – that’s the conclusion of a new paper published in the journal Intelligence, which shows that we consistently view our romantic partners as being much smarter than they really are.
Philosophers have long debated what constitutes genuine altruism. Some have argued that any acts, however charitable, that benefit the actor as well as the recipient are altruistically “impure”, and thus can’t qualify as genuinely selfless. For example, volunteering at a soup kitchen would no longer be considered altruistic if we received a hot meal in return for our efforts.
However, other scholars have argued that the act remains altruistic if the benefits of prosocial behaviour are an unintended consequence. From this perspective, if the meal is unexpected, our actions are still deemed selfless.
In a recent paper in the Journal of Experimental Social Psychology, Ryan Carlson and Jamil Zaki shed light on these questions by investigating what the general population thinks of different prosocial acts, depending on their motives and consequences.
Understanding popular perceptions of prosocial behaviour can not only help resolve the altruism debate, but also reveal how our own behaviour might be viewed by others, and whether our personal opinions on selflessness match the general belief. For example, why might we perceive the supposedly altruistic behaviour of a public figure differently from that of our friends, and is social media really the right place to publicise prosocial acts?
Interventions like cognitive behavioural therapy help people better control their emotions by teaching them new ways of thinking. A recent study published in NeuroImage suggests this approach could be augmented by using “neurofeedback” to help regulate activity in a key brain structure – the amygdala.
The profession of “criminal profiler” is one shrouded in secrecy, even giving off a hint of danger. When the American psychiatrist James A. Brussel began profiling a particular suspect in the 1950s, law enforcement officers were initially reluctant to trust him. Yet Brussel turned out to have accurately predicted the suspect’s height, clothing and even religion. This spectacular success marked the beginning of the profiling profession. The FBI formed its Behavioral Science Unit in 1974 to study serial predators, and since then the art and craft of criminal profiling have become the subject of numerous books, TV shows and iconic films such as The Silence of the Lambs. Criminal profilers are not, however, just characters created to make interesting films and books – in the real world, the accuracy of their expert opinions is often key to protecting the safety and lives of others.
Now, 40 years after the job of offender profiling (OP) was established, can we say that this profession is a craft worthy of trust, one whose practitioners make use of tried and tested tools? Or would it be more accurate to describe it as an art form grounded in intuition that supplies us with foggy, uncertain predictions? Answers to these questions come from Bryanna Fox of the University of South Florida and David P. Farrington of the University of Cambridge, who in the December edition of Psychological Bulletin present a systematic review and meta-analysis of 426 publications on OP from 1976 through 2016.