Philosophers have long debated what constitutes genuine altruism. Some have argued that any act, however charitable, that benefits both the actor and the recipient is altruistically “impure”, and thus can’t qualify as genuinely selfless. For example, volunteering at a soup kitchen would no longer be considered altruistic if we received a hot meal in return for our efforts.
However, other scholars have argued that the act remains altruistic if the benefits of prosocial behaviour are an unintended consequence. From this perspective, if the meal is unexpected, our actions are still deemed selfless.
In a recent paper in the Journal of Experimental Social Psychology, Ryan Carlson and Jamil Zaki shed light on these questions by investigating how the general population views different prosocial acts, depending on their motives and consequences.
Understanding popular perceptions of prosocial behaviour can not only help resolve the altruism debate, but also tell us how our own behaviour might be viewed by others, and whether our personal opinions on selflessness match the general consensus. For example, why might we perceive the supposedly altruistic behaviour of a public figure differently from that of our friends, and is social media really the right place to publicise prosocial acts?
Perhaps no concept has been more important to social psychology in recent years — for good and ill — than “social priming”, or the idea, as the science writer Neuroskeptic once put it, that “subtle cues can exert large, unconscious influences on human behaviour.” This subgenre of research has produced a steady drumbeat of interesting findings, but unfortunately, an increasing number of them are failing to replicate – including modern classics, like the idea that exposure to ageing-related words makes you walk more slowly, or that thinking about money increases your selfishness.
The so-called “Macbeth effect” is another classic example of social priming that gained mainstream recognition and acceptance from psychologists and laypeople alike. The term was first introduced by the psychologists Chen-Bo Zhong and Katie Liljenquist, who reported in a 2006 paper in Science that “a threat to one’s moral purity induces the need to cleanse oneself”.
This claim is such an interesting, provocative example of the connection between body and mind that it’s little wonder it has spread far and wide — there aren’t a lot of social-priming findings with their own Wikipedia page (it was also covered here at the Research Digest). But is it as strong as everyone thinks? For a recent paper in Social Psychology, the psychologists Jedediah Siev, Shelby Zuckerman, and Joseph Siev decided to find out by conducting a meta-analysis of the papers on the Macbeth effect published to date.
Outrage: It’s absolutely everywhere. Today’s world, particularly the version of it blasted into our brains by social media, offers endless fodder, from big, simmering outrages (climate change and many powerful institutions’ refusal to do anything about it) to smaller quotidian ones (every day, someone, somewhere does something offensive that comes to Twitter’s attention, leading to a gleeful pile-on).
In part because of rising awareness of the adverse consequences of unfettered digital-age outrage, and of journalistic treatments like Jon Ronson’s So You’ve Been Publicly Shamed (I interviewed him about the book here), outrage has become a particularly potent dirty word in recent years. Outrage, the thinking goes, is an overly emotional response to a confusing world, and drives people to nasty excesses, from simple online shaming to death threats or actual violence.
But a new paper argues that outrage has been given an unfairly bad rap, and that its upsides, especially as a motivator of collective action and costly helping, have been overlooked. Writing in Trends in Cognitive Sciences, the psychologists Victoria Spring, Daryl Cameron and Mina Cikara detail important questions about outrage that have yet to be answered, and they highlight how certain findings – especially from the “intergroup relations” literature, in contrast to the mostly negative findings from moral psychology – suggest it can serve a useful purpose.
From an evolutionary perspective, altruistic behaviour is still a bit of a mystery to psychologists, especially when it comes with a hefty cost to the self and is aimed at complete strangers.
One explanation is that altruism is driven by empathy – experiencing other people’s distress in much the same way as we experience our own. However, others have criticized this account – most notably the psychologist Paul Bloom, author of Against Empathy: The Case for Rational Compassion. The critics’ reasons are many, but among them is the fact that our empathy tends to be greatest for people who are most similar to us, which would argue against empathy driving the kind of altruism that involves the giver making personal sacrifices for strangers.
Hindering research into this topic is the challenge of measuring empathy objectively and devising a reliable laboratory measure of altruism (including one that overcomes most volunteers’ natural inclination to want to present themselves as morally good).
A new study in Psychological Science overcomes these obstacles by using a neural measure of empathy and by testing a rare group of people whose altruistic credentials are second to none: individuals who have donated one of their kidneys to a complete stranger.
There’s a popular idea in psychology that among the important factors shaping our honesty and generosity is our belief in the concept of free will. Believe more strongly in free will, so the theory goes, and you will be more inclined to prosocial behavior. Supporting this, studies that have momentarily undermined people’s belief in free will – for instance, by giving them a text to read about genetic determinism, or about how neuroscience shows our decisions are out of conscious control – have found that this increases people’s propensity for cheating and selfishness.
Such an effect seems understandable – after all, the notion that humans can choose whether to behave well or badly is fundamental to how we think about moral responsibility. It’s plausible that if you portray free will as an illusion then you provide people with a ready-made excuse for bad, selfish behavior, thus increasing the temptation for them to act that way.
As ever, however, reality is refusing to conform to a simple, intuitively appealing story. Recent attempts to replicate the effect of altering people’s free will beliefs on their subsequent moral behavior have failed, or have found effects only for certain groups of people and not others.
Now a series of four large studies conducted on Amazon’s survey website, each involving hundreds of people, has failed to find a correlation between people’s beliefs about free will and either their generosity toward charities or their inclination to cheat. Writing up their findings in Social Psychological and Personality Science, Damien Crone and Neil Levy at the University of Melbourne and Macquarie University said “… we believe there is good reason to doubt that free will beliefs have any substantial implications for everyday moral behaviors.”
Many commentators considered President Obama’s reversal on same-sex marriage an act of courage. But this isn’t how the public usually perceives moral mind-changers, according to a team led by Tamar Kreps at the University of Utah. Their findings in the Journal of Personality and Social Psychology suggest that leaders who shift from a moral stance don’t appear brave – they just look like hypocrites.
Would you wilfully hurt or kill one person so as to save multiple others? That’s the dilemma at the heart of moral psychology’s favourite thought experiment and its derivatives. In the classic case, you must decide whether or not to pull a lever to divert a runaway mining trolley so that it avoids killing five people and instead kills a single individual on another line. A popular theory in the field states that, to many of us, so abhorrent is the notion of deliberately harming someone that our “deontological” instincts deter us from pulling the lever; on the other hand, the more we intellectualise the problem with cool detachment, the more likely we will make a utilitarian or consequentialist judgment and divert the trolley.
Armed with thought experiments of this kind, psychologists have examined all manner of individual and circumstantial factors that influence the likelihood of people making deontological vs. utilitarian moral decisions. However, there’s a fatal (excuse the pun) problem. A striking new paper in Psychological Science finds that our answers to the thought experiments don’t match up with our real-world moral decisions.
When, in Shakespeare’s Julius Caesar, Mark Antony delivers his funeral oration for his fallen friend, he famously says “The evil that men do lives after them; the good is oft interred with their bones.”
Antony was talking about how history would remember Caesar, lamenting that doing evil confers greater historical immortality than doing good. But what about literal immortality?
While there’s no room for such a notion in the scientific worldview, belief in an immortal afterlife was common throughout history and continues to this day across many cultures. Formal, codified belief systems like Christianity have a lot to say about the afterlife, including how earthly behaviour determines our eternal fate: the virtuous among us will apparently spend the rest of our spiritual days in paradise, while the wicked are condemned to suffer until the end of time. Yet, according to Christianity and many other formal religions, there’s no suggestion that anyone – good, bad or indifferent – gets more or less immortality, which is taken to be an all-or-nothing affair.
This is not how ordinary people think intuitively about immortality, though. In a series of seven studies published in Personality and Social Psychology Bulletin, Kurt Gray at the University of North Carolina at Chapel Hill and colleagues found that, whether religious or not, people tend to think that those who do good or evil in their earthly lives achieve greater immortality than those who lead more morally neutral lives. What’s more, the virtuous and the wicked are seen to achieve different kinds of immortality.
“Lower your music, you’re upsetting other passengers.” Without social sanction, society frays at the edges. But what drives someone to intervene against bad behaviour? One cynical view is that it appeals to those who want to feel better about themselves by scolding others. But research putting this to the test in the British Journal of Social Psychology has found that interveners are rather different in character.
Last year, so few people contracted measles in England and Wales that the disease was declared technically “eliminated”. The national MMR (measles, mumps and rubella) vaccination programme is to thank. But set against this welcome news were some imperfect stats: in England in 2016/17, only 87.6 per cent of children had received both of the required doses of the vaccine by their fifth birthday – a drop compared with the previous two years. At least part of the reason was a reluctance among some parents to have their children vaccinated. This is a problem that affects other countries, and other vaccines, too. And it’s troubling, because clusters of unvaccinated or under-vaccinated children are more susceptible to disease outbreaks – indeed, a measles outbreak in Leeds and Liverpool just last year affected unprotected children, a reminder of why all children should be vaccinated.
In a new paper, published in Nature Human Behaviour, a team led by Avnika Amin at Emory University, US, reveal a previously overlooked explanation for “vaccine hesitancy”, as it’s called – and it’s to do with parents’ basic moral values.