Imagine that tomorrow a catastrophe wipes out 99% of the world’s population. That’s clearly not a desirable scenario — we would all agree that a peaceful, continued existence is preferable. Now imagine that the disaster kills everyone, wiping out the human race. Most of us would rate that as an even worse occurrence.
But how do we judge the relative severity of these different possibilities? Is there a bigger difference between nothing happening and 99% of people dying, or between 99% and 100% of people being wiped out?
This thought-experiment was first posed by the philosopher Derek Parfit, who thought most people would believe the first difference is greater — after all, going from business-as-usual to almost total annihilation is a big step. He, on the other hand, felt the second difference was greater by far: even if just a tiny fraction of humans survive, civilisation could continue for millions of years, but if humanity is wiped from the face of the Earth, then it’s all over.
Now a new study in Scientific Reports has found that, as Parfit predicted, most people don’t seem to share his view of human extinction as a “uniquely bad” catastrophe — until they are forced to go beyond their gut feeling and reflect on what extinction really means in the long term.
If you saw a stranger break into someone’s house in the middle of the night, you’d probably call the police. But what if it was a friend or family member who was committing the crime? A new study in Personality and Social Psychology Bulletin looks at the tension between wanting to punish people who commit immoral acts and protecting those with whom we have close relationships. And it turns out that if someone close to us behaves immorally, we tend to err on the side of protecting them — even if their crime is especially egregious.
Imagine seeing a photograph of a suffering child in the war-torn region of Darfur, in Sudan. Most of us would feel compassion towards that child. Now imagine seeing a photo of a group of eight children in the same terrible predicament. You’d feel correspondingly more compassion towards this larger group… right?
Well, probably not. Plenty of studies have demonstrated what’s known as the “numeracy bias” in compassion — that people’s feelings of compassion do not tend to increase in response to greater numbers of people in distress. This “leads people frequently to experience a disproportionate amount of compassion towards a single suffering individual relative to scores of suffering victims that are part of a larger tragedy,” write Daniel Lim and David DeSteno at Northeastern University, in their new paper, published in the journal Emotion. However, they’ve now found that people who have experienced adversity in their own lives are resistant to this bias — and they have some suggestions for how the rest of us might avoid it.
Philosophers have long debated what constitutes genuine altruism. Some have argued that any act, however charitable, that benefits the actor as well as the recipient is altruistically “impure”, and thus can’t qualify as genuinely selfless. For example, volunteering at a soup kitchen would no longer be considered altruistic if we received a hot meal in return for our efforts.
However, other scholars have argued that the act remains altruistic if the benefits of prosocial behaviour are an unintended consequence. From this perspective, if the meal is unexpected, our actions are still deemed selfless.
For their recent paper in the Journal of Experimental Social Psychology, Ryan Carlson and Jamil Zaki have shed light on these questions by investigating what the general population thinks of different prosocial acts, depending on their motives and consequences.
Understanding popular perceptions of prosocial behavior can not only help resolve the altruism debate, but also provide information about how our behaviour might be viewed by others, and whether our personal opinions on selflessness match up with the general belief. For example, why might we perceive the supposedly altruistic behaviour of a public figure differently to our friends, and is social media really the right place to publicise prosocial acts?
Perhaps no concept has been more important to social psychology in recent years — for good and ill — than “social priming”, or the idea, as the science writer Neuroskeptic once put it, that “subtle cues can exert large, unconscious influences on human behaviour.” This subgenre of research has produced a steady drumbeat of interesting findings, but unfortunately, an increasing number of them are failing to replicate – including modern classics, like the idea that exposure to ageing-related words makes you walk more slowly, or that thinking about money increases your selfishness.
The so-called “Macbeth effect” is another classic example of social priming that gained mainstream recognition and acceptance from psychologists and laypeople alike. The term was first introduced by the psychologists Chen-Bo Zhong and Katie Liljenquist, who reported in a 2006 paper in Science that “a threat to one’s moral purity induces the need to cleanse oneself”.
This claim is such an interesting, provocative example of the connection between body and mind that it’s little wonder it has spread far and wide — there aren’t a lot of social-priming findings with their own Wikipedia page (it was also covered here at the Research Digest). But is it as strong as everyone thinks? For a recent paper in Social Psychology, the psychologists Jedediah Siev, Shelby Zuckerman, and Joseph Siev decided to find out by conducting a meta-analysis of the papers on the Macbeth effect available to date.
Outrage: It’s absolutely everywhere. Today’s world, particularly the version of it blasted into our brains by social media, offers endless fodder, from big, simmering outrages (climate change and many powerful institutions’ refusal to do anything about it) to smaller quotidian ones (every day, someone, somewhere does something offensive that comes to Twitter’s attention, leading to a gleeful pile-on).
In part because of rising awareness of the adverse consequences of unfettered digital-age outrage, and in part because of journalistic treatments like So You’ve Been Publicly Shamed by Jon Ronson (I interviewed Ronson about it here), outrage has become a particularly potent dirty word in recent years. Outrage, the thinking goes, is an overly emotional response to a confusing world, and drives people to nasty excesses, from simple online shaming to death threats or actual violence.
But a new paper argues that the concept of outrage has gotten too bad a rap and that its upsides, especially as a motivator of collective action and costly helping, have been overlooked. Writing in Trends in Cognitive Sciences, the psychologists Victoria Spring, Daryl Cameron and Mina Cikara detail important questions about outrage that have yet to be answered, and they highlight how certain findings – especially from the “intergroup relations” literature, in contrast to the mostly negative findings from moral psychology – suggest it can serve a useful purpose.
From an evolutionary perspective, altruistic behaviour is still a bit of a mystery to psychologists, especially when it comes with a hefty cost to the self and is aimed at complete strangers.
One explanation is that altruism is driven by empathy – experiencing other people’s distress the same way as, or similar to, how we experience our own. However, others have criticized this account – most notably psychologist Paul Bloom, author of Against Empathy: The Case for Rational Compassion. Their reasons are many, but among them is the fact that our empathy tends to be greatest for people who are most similar to us, which would argue against empathy driving the kind of altruism that involves the giver making personal sacrifices for strangers.
Hindering research into this topic is the challenge of measuring empathy objectively and of devising a reliable laboratory measure of altruism — one that overcomes most volunteers’ natural inclination to present themselves as morally good.
A new study in Psychological Science overcomes these obstacles by using a neural measure of empathy and by testing a rare group of people whose altruistic credentials are second to none: individuals who have donated one of their kidneys to a complete stranger.
There’s a popular idea in psychology that among the important factors shaping our honesty and generosity is our belief in the concept of free will. Believe more strongly in free will, so the theory goes, and you will be more inclined to prosocial behavior. Supporting this, studies that have momentarily undermined people’s belief in free will – for instance, by giving them a text to read about genetic determinism, or about how neuroscience shows our decisions are out of conscious control – have found that this increases people’s propensity for cheating and selfishness.
Such an effect seems understandable – after all, the notion that humans can choose whether to behave well or badly is fundamental to how we think about moral responsibility. It’s plausible that if you portray free will as an illusion then you provide people with a ready-made excuse for bad, selfish behavior, thus increasing the temptation for them to act that way.
As ever, however, reality is refusing to conform to a simple, intuitively appealing story. Recent attempts to replicate the influence of changing people’s free will beliefs on their subsequent moral behavior have failed, or have found the effect only for specific groups of people and not others.
Now a series of four large studies conducted on Amazon’s survey website, each involving hundreds of people, has failed to find a correlation between people’s beliefs about free will and either their generosity toward charities or their inclination to cheat. Writing up their findings in Social Psychological and Personality Science, Damien Crone and Neil Levy at the University of Melbourne and Macquarie University said “… we believe there is good reason to doubt that free will beliefs have any substantial implications for everyday moral behaviors.”
Many commentators considered President Obama’s reversal on same-sex marriage an act of courage. But this isn’t how the public usually perceives moral mind-changers, according to a team led by Tamar Kreps at the University of Utah. Their findings in the Journal of Personality and Social Psychology suggest that leaders who shift from a moral stance don’t appear brave – they just look like hypocrites.
Would you wilfully hurt or kill one person so as to save multiple others? That’s the dilemma at the heart of moral psychology’s favourite thought experiment and its derivatives. In the classic case, you must decide whether or not to pull a lever to divert a runaway mining trolley so that it avoids killing five people and instead kills a single individual on another line. A popular theory in the field states that, to many of us, so abhorrent is the notion of deliberately harming someone that our “deontological” instincts deter us from pulling the lever; on the other hand, the more we intellectualise the problem with cool detachment, the more likely we will make a utilitarian or consequentialist judgment and divert the trolley.
Armed with thought experiments of this kind, psychologists have examined all manner of individual and circumstantial factors that influence the likelihood of people making deontological vs. utilitarian moral decisions. However, there’s a fatal (excuse the pun) problem. A striking new paper in Psychological Science finds that our answers to the thought experiments don’t match up with our real-world moral decisions.