Research into first impressions is a well-established area. Hundreds of studies have been published with the goal of understanding how the subtleties of facial features cue assumptions about those we meet. Often, the stimuli used are tightly controlled, with some sets using faces whose features are digitally manipulated to be larger or smaller by tiny degrees; the effect of minuscule alterations to the width of the eyes, for example, can be isolated and analysed without changing any other aspect of the face. By eliminating as many extraneous variables as possible, research teams hope to get a reading of exactly what specific features contribute to the first impressions we form.
While on the surface this sounds like a reasonable and solid scientific approach, it does tend to create one pressing problem in particular. If you’ve ever participated in or run a study using face stimuli, it’s likely that the faces used were exclusively White.
Authors typically give no explicit reasoning for this choice within their published work. Richard Cook of Birkbeck, University of London and Harriet Over of the University of York believe that there are four broad reasons for this common decision. In their recent paper in Royal Society Open Science, the two deconstruct the assumptions behind each possible rationale, and examine the limitations imposed on the field by avoiding non-White face stimuli.
If you’re preparing to receive a flu vaccine — or even a COVID-19 vaccine — this winter, you’ll be interested in the results of a new study that investigates whether it’s better to smile or grimace your way through the pain of an injection.
The idea that manipulating our facial expressions can affect our emotions has a long and storied history. There are many advocates of this “facial feedback hypothesis”, and many critics, too. Indeed, one of the classic findings in the field — that people find cartoons funnier if they hold a pen between their teeth, inducing a smile — recently failed to replicate. This mixed research background was well known to Sarah D. Pressman and Amanda M. Acevedo at the University of California, Irvine, who led the new work, published in Emotion.
A witness to a crime has to describe the offender’s face in as much detail as they can before they work with a police expert to create a visual likeness — a “facial composite”, sometimes called a photo-fit, or e-fit. But the way this is typically handled in police stations could be reducing the accuracy of these images, according to a new paper published in the Journal of Experimental Psychology: Applied.
There have been concerns that the process of describing facial features might create a so-called “verbal overshadowing” that interferes with the visual memories of the offender. Recent work had suggested that waiting half an hour before starting on the composite should allow this predicted overshadowing to fade away, and so make for a better composite. However, the new research, led by Charity Brown at the University of Leeds, has found that in more real-world situations, a delay actually makes things worse.
A newborn baby knows almost nothing about the world they come into. To make sense of the onslaught of incoming sensory information, they must start to notice meaningful patterns and categorise them: that particular combination of visual data signifies a “face”, for example, while that noise is a “voice”. As the authors of a new paper in Developmental Science point out, “without this fundamental categorisation function, our nervous systems would be overwhelmed by the sheer diversity of our experience.”
It had been thought that infants form these categories using information from just one sense, whichever is the most relevant. Following this account, the category of “faces” results from an accumulation of visual information about what faces look like. However, an intriguing new study, involving four-month-old infants and their mothers’ smelly t-shirts, suggests that babies’ early acquisition of the faces category is a truly multi-sensory process.
We’re taught from an early age that it is polite and assertive to look people in the eyes when we’re talking to them. Psychology research backs this up: people who make plenty of eye contact (as long as it’s not excessive) are usually perceived as more competent, trustworthy and intelligent. If you want to make a good impression, then, it’s probably a good idea to meet the gaze of the person you’re talking to. However, following this advice is not necessarily straightforward for everyone. It’s well-documented that mutual gaze can be emotionally intense and distracting, even uncomfortably so for some.
If this is your experience, you may welcome a study published recently in the journal Perception that documents a phenomenon known as the “eye contact illusion” – put simply, we are not that good at telling whether an interlocutor is looking us in the eye or not. In fact, we tend to think they are, even when they’re not (a bias that is magnified after we’ve been rejected). Thanks to this illusion, you can give the impression of making eye contact simply by ensuring you are looking in the general direction of your conversant’s face.
If you want to know about the special relationship between human and canine you need only watch a dog owner slavishly feed, cuddle and clean up after her furry companion, day after day after day. But is this unique cross-species relationship also reflected at a deeper level, in the workings of the canine brain? A recent study in Learning and Behavior suggests so, finding that highly trained dogs have a dedicated neural area for processing human faces, separate from the area involved in processing the faces of other dogs.
The researchers, led by Andie Thompkins at Auburn University, say their results are of theoretical importance (in relation to the evolutionary origin of cognitive abilities) and could have practical use too, potentially paving the way to using brain scans to validate the expertise of trained dogs.
We make all kinds of snap decisions about a person based on their facial appearance. How trustworthy we think they are is one of the most important, as it can have many social and financial consequences, from whether to lend someone money to which Airbnb property to book.
However, as the authors of a new study, published in the British Journal of Psychology, note, “Although facial impressions of trustworthiness are formed automatically, they are not especially accurate predictors of trustworthy behaviour.” People who are less susceptible to forming these impressions could, then, be at an advantage. And, as Jasmine Hooper at the University of Western Australia and colleagues now report, men with high levels of autistic traits fall into this category.
Update: On Twitter, some researchers argued, reasonably in my view, that I wasn’t quite sceptical enough in relating these findings. See the update at the end of this post for more details.
If you wanted a poster child for the replication crisis and the controversy it has unleashed within the field of psychology, it would be hard to do much better than Fritz Strack’s findings. In 1988, the German psychologist and his colleagues published research that appeared to show that if your mouth is forced into a smile, you become a bit happier, and if it’s forced into a frown, you become a bit sadder. He pulled this off by asking volunteers to view a set of cartoons (paper ones, not animated) while holding a pen in their mouth, either with their teeth (forcing their mouth into a smile), or with their lips (forcing a frown), and to then use the pen in this position to rate how amused they were by the cartoons. The smilers were more amused, and the frowners less so – and best of all, they mostly didn’t discern the true purpose of the experiment, eliminating potential placebo-effect explanations.
This basic idea, that our facial expressions can feed back into our psychological state and behaviour, goes back at least as far as Darwin and William James, but “facial feedback”, as it is known, had never been demonstrated in such an elegant and rigorous-seeming manner. Over time, this style of experiment was replicated and expanded upon, and soon it came to be considered a true blockbuster, so famous it found its way into psychology textbooks, as well as popular books and articles citing it as an example of the unexpectedly subtle ways our bodies and environments can affect us psychologically. Often, facial feedback has been popularised along the lines of “Maybe you can smile your way to happiness!”, which added an irresistible self-help element that likely helped spread the idea. Either way, it seemed like a genuinely safe and solid psychological finding. That changed rather abruptly in 2016.
Now a team led by Sarah Ketay at the University of Hartford have shown how this absorption of friends into our self-concept can manifest at a visual level, affecting our ability to distinguish their faces from our own. Writing in the Journal of Social and Personal Relationships, Ketay’s team said “The present research supports the idea that close others are processed preferentially and may overlap with the self.”
You’re at a ten-pin bowling alley with some friends, you bowl your first ball – and it’s a strike. Do you instantly grin with delight? Not according to a study of bowlers, who smiled not at the moment of triumph but rather when they pivoted in their lanes to look at their fellow bowlers.
That study provided the earliest evidence for a controversial hypothesis, the Behavioural Ecology View (BECV) of facial displays, outlined in detail in a new opinion piece in Trends in Cognitive Sciences. Carlos Crivelli at De Montfort University, Leicester, UK and Alan Fridlund at the University of California, Santa Barbara, put forward the case that facial displays are not universal, “pre-wired” expressions of emotion – a concept supported by 80 per cent of emotion researchers in a recent poll – but are flexible tools for influencing the behaviour of other people.