Category: Faces

Job recruiters may be swayed by signs of our sexuality revealed in our faces

By Alex Fradera

Vacant job roles should be filled on the basis of the candidate’s skills, experience and knowledge, not their identity. But that means dodging our deeply held stereotypes, such as men being a natural fit for decision-making roles like management and women for care-giving professions. Evidence suggests this also applies to sexual orientation: CVs that indicate the candidate is homosexual (by mentioning college experience in a group promoting gay rights, for example) are likely to be seen by recruiters as a better match for care-giving roles. New research in the Journal of Applied Psychology adds to this, suggesting that merely looking gay is enough for a candidate to be treated in a biased way by recruiters.


Think you’re good with faces? In fact, you probably don’t know much about your own face-recognition skills

By Christian Jarrett

Life would be awfully confusing if we weren’t able to recognise familiar faces. It’s a skill most of us take for granted, and we rarely stop to consider the impressive cognitive wizardry involved. But some of us are better at it than others: in the last decade or so it’s become apparent that around two per cent of the population are born with a severe face-recognition impairment (known as congenital prosopagnosia), that there is a similar proportion of “super-recognisers” with unusually exceptional face-recognition skills, and that the rest of us are on a spectrum in between.

Where do you think your abilities lie? A new study in the Quarterly Journal of Experimental Psychology suggests that, unless you are severely impaired at face recognition, you probably don’t have much insight into this question. When confronted with the question “Overall, from 1-‘very poor’ to 9-‘very good’, how would you describe your general ability to recognise faces?”, most participants gave answers that bore no relation to their performance on a range of lab-based face-recognition tests.


Bad news for passport control: face-matching is harder than we realised

By Alex Fradera

Experiments suggest that telling whether two unfamiliar faces are the same or different is no easy task. Such research has sometimes presented participants with full body shots, and more commonly with cropped shots of people’s heads, but it has almost never placed the faces in a formal context, such as on a photographic ID card. Yet these are the situations in which face-to-photo matching matters most: when a shop assistant squints at a driving licence before selling alcohol to a twitchy youth, or an emigration official scrutinises passports before their holders pass ports. Moreover, it’s plausible that the task is harder when juggling extra information, something already found in the realm of fingerprint matching, where biographical information can lead to more erroneous matches because it triggers observer prejudices. A new article in Applied Cognitive Psychology confirms these fears, suggesting that our real-world capacity to spot fakes in their natural setting is even worse than imagined.


The Metropolitan Police’s elite super-recognisers are the real deal

By Alex Fradera

Identifying people from video and photographs is a core task for a modern police force, and London – which led the world in implementing and using CCTV – has attempted to meet this need by developing a pool of 140 “police identifiers”: Metropolitan Police officers with a strong track record of making IDs from photographs. But who are these individuals? Are they really super-recognisers, as the Met has claimed, or did they just catch a string of lucky breaks in identifying so many suspects? True super-recognisers are usually identified by formal tests, and their dramatic ability to recognise human faces outstrips typical performance to the same extent that prosopagnosics (people with face-blindness) lag behind it. Addressing this through a battery of neuropsychological tests, Josh Davis and his UK-university collaborators scrutinise the scrutinisers in a new paper in Applied Cognitive Psychology.

No reason to smile – Another modern psychology classic has failed to replicate

Image via Quentin Gronau/Flickr showing how participants were instructed to hold the pen

By Christian Jarrett

The great American psychologist William James proposed that bodily sensations – a thumping heart, a sweaty palm – aren’t merely a consequence of our emotions, but may actually cause them. In his famous example, when you see a bear and your pulse races and you start running, it’s the running and the racing pulse that makes you feel afraid.

Consistent with James’ theory (and similar ideas put forward even earlier by Charles Darwin), a lot of research has shown that the expression on our face seems not only to reflect, but also to shape how we’re feeling. One of the most well-known and highly cited pieces of research to support the “facial feedback hypothesis” was published in 1988 and involved participants looking at cartoons while holding a pen either between their teeth, forcing them to smile, or between their lips, forcing them to pout. Those in the smile condition said they found the cartoons funnier.

But now an attempt to replicate this modern classic of psychology research, involving 17 labs around the world and a collective subject pool of 1894 students, has failed. “Overall, the results were inconsistent with the original result,” the researchers said.

No, autistic people do not have a “broken” mirror neuron system – new evidence

By guest blogger Helge Hasselmann

Scientists are still struggling to understand the causes of autism. Difficulty bonding with others is one of the core symptoms, and it has been the focus of several theories that try to explain exactly why these deficits come about.

One of the more prominent examples, the “broken mirror hypothesis”, suggests that an impaired development of the mirror neuron system (MNS) is to blame. First observed in monkeys, mirror neurons fire both when you perform a certain action and when you see someone else engage in the same behaviour – for example, when you smile and when you see someone else smile.

This “mirroring” has been hypothesised to help us understand what others are feeling by sharing their emotional states, although this is disputed. Another behaviour that is thought to depend on an intact mirror neuron system is facial mimicry – the way that people spontaneously and unconsciously mimic the emotional facial expressions of others.

Interestingly, studies have shown that people with autism do not spontaneously mimic others’ facial expressions, which could explain why they often struggle to “read” people’s emotions or have trouble interacting socially. Some experts have claimed these findings lend support to “broken” mirroring in autism, but this has remained controversial. Now a study in Autism Research has used a new way to measure facial mimicry and the results cast fresh doubt on the idea that autism is somehow caused by a broken mirror neuron system.

Facial expressions of intense joy and pain are indistinguishable

Eyes shut tight, face contorted into a grimace. Are they ecstatic or anguished? Ignorant of the context, it can be hard to tell. Recent research that involved participants looking at images of the facial expressions of professional tennis players supported this intuition – participants naive to the context were unable to tell the difference between the winners and losers.

From a scientific perspective, the problem with the tennis study is that the findings might have been affected by the players’ physical exertion or their awareness of being on public display. To test the similarity of facial expressions of joy and pain more robustly, a new study in the journal Emotion has used videos taken from a much wider range of contexts.

Sofia Wenzler and her colleagues began by finding online videos of the ecstatic relatives of soldiers who’d just made a surprise return home. For comparison they found videos of witnesses caught up in real life terror attacks who were expressing intense negative emotion (none were actually harmed themselves).

Example stimuli taken from Wenzler et al 2016. 1 = positive, 2 = negative.

The researchers took stills of the moment of peak emotional facial expression from the joyful and negative videos and presented them to 28 undergrad students. Naive to the context of the facial expressions, the students’ task was to rate them from 1 “most negative” to 9 “most positive”. On average, they rated the intense joy and intense anguish facial expressions negatively and to a similar extent. In other words, the students couldn’t tell the difference between the facial displays of intense pleasure and pain.

A second experiment involving children’s facial expressions produced largely similar results. This time, for the negative emotional displays, the researchers took stills from pranks shown on the Jimmy Kimmel late-night TV show, such as when children woke to discover their parents had eaten all their sweets earned through trick-or-treating. For children’s facial expressions of intense joy, the researchers found online videos of children receiving surprise treats, such as tickets to see their favourite pop star in concert.

Again, students naive to the context looked at and rated still images of the children’s facial expressions and again they rated intense joy negatively, although in this case not as negatively as intense pain (this might be because the contexts used in this experiment were not as momentous as those used in the first experiment that featured adults).

The findings from the two experiments contradict mainstream psychological theories of emotion, which predict that facial expressions of emotion should be most distinguishable at the opposite ends of the positive/negative spectrum. One explanation for this contradiction considered by the researchers is that in moments of extreme joy, people are actually experiencing negative emotion, for example through the evocation of negative memories. Another is that extreme joy prompts the expression of negative emotion as a way to restore emotional equilibrium. However, Wenzler and her team find both these possibilities unconvincing – for one thing, the equilibrium account predicts incorrectly that negative emotion should manifest in facial expressions of joy.

A matter on which the researchers remain silent is why, from an evolutionary perspective, humans have developed a tendency to express intense joy in a way that is perceived as indistinguishable from intense pain. This is pure speculation, but perhaps it is because for our ancestors, intense joy, like pain, was typically a moment of vulnerability, and it was adaptive for its facial expression to signal a need for support and protection.

_________________________________

Wenzler, S., Levine, S., van Dick, R., Oertel-Knöchel, V., & Aviezer, H. (2016). Beyond Pleasure and Pain: Facial Expression Ambiguity in Adults and Children During Intense Situations. Emotion. DOI: 10.1037/emo0000185

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Psychologists have identified the length of eye contact that people find most comfortable

It’s a dilemma extremely familiar to anyone with social anxiety – for how long to make eye contact before looking away? The fear is that if you only ever fix the other person’s gaze for very brief spells then you’ll look shifty. If you lock on for too long, on the other hand, then there’s the risk of seeming creepy. Thankfully a team of British researchers has now conducted the most comprehensive study of what people generally regard as a comfortable length of eye contact.

For the research, published in Royal Society Open Science, Nicola Binetti and his colleagues recruited nearly 500 visitors to the London Science Museum from 56 nations – 224 of them were male, and their ages ranged from 11 to 79 years, with an average of 30.

The participants’ main task was to sit close to a monitor and watch a series of video clips of the same actor or actress making eye contact with them for various durations between 100ms (a tenth of a second) and 10,300ms (just over ten seconds). The participants’ pupil dilation was recorded while they watched the brief clips, and after each clip they had to say whether the length of eye contact felt too long or too short for comfort. Each participant watched 40 clips of the same actor or actress, but there were eight actors and actresses used in the study, all of them Caucasian. The participants also filled out a personality questionnaire and they rated the actor or actress who’d appeared in their clips for various characteristics including attractiveness and threat.

On average, the participants were most comfortable with eye contact that lasted just over three seconds. Looking at the distribution of preferences, the vast majority of participants preferred a duration between two and five seconds. No-one preferred eye contact durations of less than a second or longer than nine seconds.

The actors and actresses appeared against a green background and glanced downwards between episodes of eye contact.

Surprisingly, there were no links between the participants’ personality profiles and their preferences for length of eye contact. There were also no major effects of participant age or gender – the only exception being that among male participants looking at clips featuring an actress, the older the man, the more likely he was to prefer longer eye contact. In terms of the participant ratings of the actor or actress, only threat was relevant, with participants who rated their actor or actress as more threatening tending to prefer shorter eye contact durations.

The researchers were interested in the participants’ pupil dilation because it’s a marker of physiological arousal. They found that participants who showed greater pupil dilation in response to the video clips tended to prefer longer eye contact. The meaning of this finding is unclear – negative emotions usually elicit more arousal, so we might have expected the opposite result. The researchers speculated that the greater physiological arousal in this context might be traceable to a rapid, automatic form of face processing that takes place in subcortical areas of the brain, and that “activity within this early eye contact processing stage is enhanced in participants who favour longer periods of direct gaze and who presumably feel more comfortable in engaging in a communicative link.”

A major issue with the study is of course that it used pre-recorded video clips rather than a live interaction. Readers will rightly wonder just how fair it is to extrapolate from this setup to the real-life situation of two people in conversation, where both parties are involved in the dance of eye contact. In fact, the lack of realism in the study may explain why no association was found between personality and preferred gaze duration. However, the researchers point out that their finding of an average preferred eye contact length of 3.3 seconds tallies with some preliminary studies published in the 1970s that did involve participants interacting in pairs.

_________________________________

Binetti, N., Harrison, C., Coutrot, A., Johnston, A., & Mareschal, I. (2016). Pupil dilation as an index of preferred mutual gaze duration. Royal Society Open Science, 3(7). DOI: 10.1098/rsos.160086

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


When we draw a face, why do most of us put the eyes in the wrong place?

Go ahead, sketch a face on your note paper. Use a photo of someone as a guide if you want. Unless you’re a trained artist, the chances are that you’ve made an elementary error: placing the eyes too far up the head, when in fact they should sit halfway down. Research suggests about 95 per cent of us non-artists tend to make this mistake, and in a new study in Psychology of Aesthetics, Creativity, and the Arts, psychologists in America have attempted to find out why. The answer, it turns out, is rather complicated and concerns both our lack of knowledge and basic biases in the way that we pay attention to faces and to space in general.

Justin Ostrofsky and his colleagues asked 75 psychology undergrads to draw two faces shown on a computer screen – both were identical except one had hair and one was bald. Crucially, half the participants were told that the eyes on a human face typically appear halfway down the head, whereas the other participants weren’t given this information.

Overall the participants made the usual error that people make when drawing faces and placed the eyes too far up the head, even though they had the model faces to guide them. But this error wasn’t as extreme in the participants who were given the specific guidance about eye position. This tells us that at least part of the reason that non-artists place the eyes too high is because we don’t know (or we’ve never noticed) their precise schematic location in a face.

However, the fact that the participants given this information still placed the eyes too high suggests that there is more to this than a lack of schematic knowledge. Another factor seems to be that when looking at faces, we tend to ignore the forehead region (this has been shown by prior research that’s tracked people’s gaze while they look at faces). Instead, we pay more attention to the parts of the face that contain features. The relevance of this to drawing was shown by the fact the participants made a smaller error with eye position when drawing the face that had hair than the face that was bald. The researchers explained: “When drawing the bald model, the absence of the hair line creates a larger forehead region to ignore and attenuate, resulting in the eyes drawn even further up the head in the bald model.”

Yet another relevant factor seems to be our natural bias towards ignoring the upper end of vertical space. This is easy to demonstrate by asking people to mark the mid-point of a vertical line – most of us place the mid-point too high, which in neuropsychological jargon is a sign of “altitudinal neglect”, meaning that we neglect to attend to higher space.

In the current study, the researchers asked their participants to perform a vertical line bisection and they found that the greater their altitudinal neglect (marking the line midpoint higher), the higher they tended to place the eyes on the faces they drew. But intriguingly this association was only true for the participants who were given the factual information about the eyes being midway down a human face. It seems being given this schematic knowledge improves our drawing, but only to a point – ultimately we’re still led astray by a basic attentional bias (presumably artists learn to overcome this bias).

It’s amazing that a simple drawing task can reveal so many quirks of the human mind, but it’s not the first time. For instance, last year researchers exposed the foibles of human memory by demonstrating that most people are poor at drawing the Apple logo, even though many of us are exposed to it every day.

_________________________________

Ostrofsky, J., Kozbelt, A., Tumminia, M., & Cipriano, M. (2016). Why Do Non-Artists Draw the Eyes Too Far Up the Head? How Vertical Eye-Drawing Errors Relate to Schematic Knowledge, Pseudoneglect, and Context-Based Perceptual Biases. Psychology of Aesthetics, Creativity, and the Arts. DOI: 10.1037/a0040368

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


After learning to identify with someone else’s face, do people think their appearance has changed?

Past research has shown that it’s possible to hack our sense of our own bodies in bewildering ways, such as perceiving another person’s face as our own by stroking both in synchrony. These body illusions can alter our sense of self at a psychological level too. For example, embodying a child-sized body in a virtual reality environment leads people to associate themselves with child-like concepts. Can such effects also operate in the opposite direction, from the psychological to the physical? A new paper published in the Quarterly Journal of Experimental Psychology aimed to find out by seeing if shifting people’s sense of self at a psychological level warped their sense of their facial appearance.

Sophie Payne’s team at Royal Holloway, University of London manipulated their participants’ sense of self by repeatedly presenting them with a black-and-white cropped photo of a gender-appropriate face labelled “self”, along with two other face images labelled “friend” and “stranger”. To consolidate these associations, the researchers then tested the participants, repeatedly showing them one of the earlier faces together with either its correct label or a wrong one, and the participants had to say each time whether the label matched the face.

As the test went on, the participants became especially quick at spotting when the “self face” was correctly labelled as “self”, just as the researchers hoped would happen. This suggests that the previously unknown face had been incorporated into their self concept, at least temporarily. Think of it as a weaker version of the way we are particularly sensitive to any sounds that resemble our name, even against the hubbub of a cocktail party.

Having incorporated this face into their self-concept, did the participants view their facial appearance any differently? To address this, the researchers presented the participants with 100 faces and asked them to rate how similar each face was to their own. Fifty of the faces were blends of their own real face with the “stranger” face from earlier, and another 50 blended their real face with the “self face” paired earlier with their self concept.

The participants had actually completed this resemblance task earlier, before they’d learned to associate the “self face” with their self concept. The crucial test was whether, now that they’d learned to associate themselves with the “self face”, they would see themselves as resembling that face physically, more so than they had done earlier. Payne’s team predicted that they would, but in fact the results showed that this hadn’t happened. Identifying themselves with the face hadn’t made them believe that they looked like the face.

Payne’s prediction was credible partly because we know the psychological self is malleable, body perception is malleable, and changes to body perception usually result in shifts in sense of self. Furthermore, and making this new result extra surprising, psychological influences have already been shown to affect our judgments about the physical appearance of our own face.

For example, a study from 2014 showed that people were more likely to say that they resembled a face that reflected a blend of their own face with someone else’s, when that other face belonged to a trustworthy partner in an earlier trading task rather than a cheat. Essentially, that result showed that the lines between self and other can be easily blurred, unlike in the current study. What gives?

The non-significant result in the current study may have uncovered the limits to these kinds of blurring effects. The findings suggest that it may be quite easy to adapt our self-concept, for example attuning us to identify with a new nickname or onscreen avatar, but that for this process to go deeper and influence how we perceive our own physical appearance, we need a more motivated, involving, and perhaps social context, like being betrayed or treated loyally.

The new hypothesis, then, is that we are engineered to perceptually link ourselves to – or distance ourselves from – those who have helped or wronged us, and that the heat of social emotion is the soldering iron that fixes these connections fast. Further research will tell.

_________________________________

Payne, S., Tsakiris, M., & Maister, L. (2016). Can the self become another? Investigating the effects of self-association with a new facial identity. The Quarterly Journal of Experimental Psychology, 1-13. DOI: 10.1080/17470218.2015.1137329

Further reading
Embodying another person’s face makes it easier to recognise their fear

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.
