Would you wilfully hurt or kill one person so as to save multiple others? That’s the dilemma at the heart of moral psychology’s favourite thought experiment and its derivatives. In the classic case, you must decide whether or not to pull a lever to divert a runaway mining trolley so that it avoids killing five people and instead kills a single individual on another line. A popular theory in the field states that, to many of us, so abhorrent is the notion of deliberately harming someone that our “deontological” instincts deter us from pulling the lever; on the other hand, the more we intellectualise the problem with cool detachment, the more likely we will make a utilitarian or consequentialist judgment and divert the trolley.
Armed with thought experiments of this kind, psychologists have examined all manner of individual and circumstantial factors that influence the likelihood of people making deontological vs. utilitarian moral decisions. However, there’s a fatal (excuse the pun) problem. A striking new paper in Psychological Science finds that our answers to the thought experiments don’t match up with our real-world moral decisions.
Dries Bostyn and his colleagues at Ghent University recruited nearly 300 participants. All answered several hypothetical moral dilemmas derived from the classic trolley dilemma – for instance, in a building on fire, they had to say whether they would push a man through a locked window to his death in order to make an exit for the five children trapped inside. The participants also completed several questionnaires tapping psychological factors, such as psychopathy and “need for cognition”, previously identified as being associated with being more utilitarian in one’s moral decisions.
A fortnight later, just under 200 of the participants were invited to the psych lab, one at a time, to take part in a real-life moral dilemma involving live mice. The participants saw two cages – one housing one mouse, the other housing five – each wired to an electroshock machine. They were told that in 20 seconds, if they did nothing, the machine would deliver a very painful but nonlethal shock to the cage containing five mice. However, if the participants pressed a button in front of them, they could divert the electric shock to the cage containing one mouse, thus saving the other five from pain (in reality the set-up was an illusion, and all participants were later informed that no mice were shocked or harmed in the study).
The remaining participants went to the psych lab but performed a hypothetical version of the mouse decision. They heard a description of the same two-cage set-up faced by the others and had to say whether or not they would press the button.
The participants who performed the real-life mouse task behaved differently from those who made a purely hypothetical decision – they were less than half as likely to let the five mice get shocked (16 per cent of them left the button unpressed, compared with 34 per cent of the hypothetical group). In other words, faced with a real-life dilemma, the volunteers were more consequentialist/utilitarian; that is, more willing to inflict harm for the greater good.
But the most important finding – at least for the validity of moral psychology, which so often relies on thought experiments – is that the participants’ preference for deontological vs. utilitarian responding in their answers to the earlier battery of 10 hypothetical moral dilemmas bore no relation to their decision in the real-life mouse task (in contrast, the decisions of participants in the hypothetical mouse group were related to their answers to the earlier moral dilemmas). What is more, none of the psychological factors, such as psychopathy or need for cognition, was related to decision-making in the real-life moral dilemma.
For so long, moral psychology has relied on the notion that you can extrapolate from people’s decisions in hypothetical thought experiments to infer something meaningful about how they would behave morally in the real world. These new findings challenge that core assumption of the field.
That is not to say people’s hypothetical decisions are meaningless. Although participants’ responses to the earlier moral thought experiments did not predict their later real moral decisions (i.e. whether or not to press the button to divert the electric charge), they were not totally unrelated. Among those who pressed the button in the real-life task, if they’d also earlier shown a preference for utilitarian decisions in the thought experiments then they tended to press the button more quickly; they also expressed less doubt and discomfort about their decision.
An obvious criticism of this research is that the trolley problem and its derivatives involve humans, whereas the real-life moral dilemma used in this study involved mice. However, the researchers believe this is not a critical issue since the moral conflict (deliberately harming the few to save the many) is the same in both cases. They also note that they used a questionnaire to measure their participants’ levels of empathy for animals, and how participants scored made no difference to the pattern of findings (meaning it’s unlikely that participants’ level of concern, or lack of it, for the mice explains the results).
Bostyn and his team don’t know why people’s judgments on the moral thought experiments didn’t predict their choice in the real-life moral task. Current theory – based on the idea that emotional responding leads to more deontological decisions and rational thinking to more utilitarian decisions – isn’t much help, because it would actually predict more deontological decisions in the more vivid and emotive real-life task, which is the opposite of what was found. The researchers speculate that perhaps people are more inclined to virtue-signal when answering in the hypothetical (i.e. signalling that they couldn’t possibly choose to deliberately harm another, even to save the majority), but one could just as easily invoke virtue-signalling to explain the opposite result.
“Future research will have to investigate these and other possibilities,” the researchers concluded. “… [W]e advance the argument that we will be able to bridge the gap between moral judgment and moral behaviour only by exploring new research paradigms that bring more decision making into the real world.”