How accurately could you tell from a person’s display of behaviour and emotions what just happened to them? Dhanya Pillai and her colleagues call this “retrodictive mindreading” and they say it’s a more realistic example of how we perceive emotions in everyday life, as compared with the approach taken by traditional psychological research, in which volunteers name the emotions displayed in static photos of people’s faces.
In Pillai’s study, the task of a group of 35 male and female participants wasn’t to look at pictures and name the facial expression. Instead, the participants watched clips of people reacting to a real-life social scenario and they had to deduce what scenario had led to that emotional display.
Half the challenge Pillai and her colleagues faced was to create the stimuli for this research. They recruited 40 men and women who thought they were going to be doing the usual thing and categorising emotional facial expressions. In fact, it was their own responses that were to become the stimuli for the study proper.
While these volunteers were sitting down ready for the “study” to start, one of four scenarios unfolded. The female researcher either told them a joke (“why did the woman wear a helmet at the dinner table? She was on a crash diet”); told them a story about a series of misfortunes she’d encountered on the way to work; paid them a compliment (e.g. “you’ve got really great hair, what shampoo do you use?”); or made them wait 5 minutes while she had a drink and did some texting. In each case the volunteers’ emotional responses were recorded on film and formed the stimuli for the real experiment.
The researchers ended up with 40 silent clips, lasting 3 to 9 seconds each, comprising ten clips for each of the four scenarios. The real participants for the study proper were first shown footage of the researcher in the four scenarios and how these were categorised as joke, story, compliment or waiting. Then these observer participants watched the 40 clips of the earlier volunteers, and their task in each case was to say which scenario the person in the video was responding to.
The observing participants’ performance was far from perfect – they averaged 60 per cent accuracy – but it was far better than the 25 per cent you’d expect from guessing alone. They were by far most skilled at recognising when a person was responding to the waiting scenario (90 per cent accuracy); for the other three scenarios, accuracy was roughly even at around 50 per cent. They achieved this success despite the huge variety in the way different volunteers responded to the same scenarios. “From observing just a few seconds of a person’s reaction, it appears we can gauge what kind of event might have happened to that individual with considerable success,” the researchers said.
A surprising detail came from recordings of the observing participants’ eye movements: they focused more on the mouth region than on the eyes. Based on past research (much of it using static facial displays), Pillai and her colleagues had expected that greater accuracy would go hand-in-hand with more attention paid to the eye region of the targets’ faces. In fact, for three of the scenarios (all except the joke), the opposite was true. This may be because focusing on the eye region is more beneficial when naming specific mental states, as opposed to the “retrodictive mindreading” challenge involved in the current study.
In contrast to much of the existing psychology literature, Pillai and her team concluded that theirs was an important step towards devising tasks “that closely approximate how we understand other people’s behaviour in real life situations.”
Pillai, D., Sheppard, E., &amp; Mitchell, P. (2012). Can people guess what happened to others from their reactions? PLoS ONE, 7(11): e49859. DOI: 10.1371/journal.pone.0049859