Experiments suggest that telling whether two unfamiliar faces are the same or different is no easy task. Such research has sometimes presented participants with full body shots, has more commonly used cropped shots of people’s heads, but has almost never placed the faces in a formal context, such as on a photographic ID card. Yet these are the situations in which face-to-photo matching is most relevant, when a shop assistant squints at a driver’s license before selling alcohol to a twitchy youth, or an emigration official scrutinises passports before their holders pass ports. Moreover, it’s plausible that the task is harder when juggling extra information, something already found in the realm of fingerprint matching, where biographical information can lead to more erroneous matches because it triggers observer prejudices. A new article in Applied Cognitive Psychology confirms these fears, suggesting that our real-world capacity to spot fakes in their natural setting is even worse than imagined.
Besides problems with social interactions, it has been known for a while that many people with autism experience sensory difficulties, such as hypersensitivity to sounds, light or touch. With sensory impairment now officially included in diagnostic manuals, researchers have been trying to see if there’s a link between the sensory and social symptoms. Such a link would make intuitive sense: for instance, it is easy to imagine that someone who experienced sensory stimuli more strongly would shun social interactions because of their complexity. More specifically, you would expect them to struggle to filter out and make sense of social cues against the backdrop of sensory overload.
Past research has suggested that tactile hyper-responsiveness in particular may be relevant. The correct processing of tactile information plays an important role in differentiating yourself from others (so-called “self-other discrimination”), a crucial requirement for social cognition. In fact, touch may be unique among the senses because there is a clear difference in the tactile feedback received when you touch something compared to when you see someone else touch something. Now a study in Social Cognitive and Affective Neuroscience has used recordings of participants’ brain waves to provide more evidence that tactile sensations are processed differently in people with autism and that this may contribute to their social difficulties.
Eye movements have a profound influence on our conscious experience. Our vision has high acuity only at the centre, so we see in detail only those things that we shift our eyes to focus on. Also, each move of the eyes – known as a saccade – has massive consequences for visual processing in the brain: incoming information is suppressed during the eye movement (to prevent the experience of blurring) and, on settling gaze on a new location, millions of neurons in our visual cortex must update to reflect the new slice of the visual world that they are now responsible for processing. Given all this, you’d think we’d have a good idea of where we’ve been pointing our eyes. In fact, as shown across three experiments published in The Quarterly Journal of Experimental Psychology, our insight into our own eye movements is virtually non-existent.
Anyone who’s been on a treadmill at the gym has probably had that strange perceptual experience afterwards – once you start to walk on stable ground again, it feels for a time as though you’re moving forward more quickly than you really are. The illusion, which is especially striking for treadmill newbies, was first documented scientifically in a Nature paper 20 years ago. Since then psychologists have come to better understand what’s going on and the ways the effects can manifest.
You may have heard of face-blindness (known formally as prosopagnosia), a particular difficulty recognising familiar faces. The condition was first noticed in brain-damaged soldiers, and for a long time psychologists thought it was extremely rare and primarily caused by brain damage. But in recent years they’ve discovered that it’s actually a relatively common condition that approximately two per cent of otherwise healthy people are born with. Now research on the related condition of phonagnosia – an impairment in recognising familiar voices – is catching up. A new survey reported in Brain and Language, the largest of its kind published to date, estimates that just over three per cent of the population are born with phonagnosia, many of them probably without even realising it.
A new study shows that immediately after we’ve been shunned, our brains engage a subtle mechanism that alters our sense of whether other people are making eye contact with us, so that we think it more likely that they are looking our way. As friendly encounters often begin with a moment of joint eye contact, the researchers, writing in The Quarterly Journal of Experimental Psychology, think this “widening of the cone of gaze”, as they call it, could help the ostracised to spot opportunities for forging new relationships.
If you were sitting in a dark room and the lights flickered off every few seconds, you’d definitely notice. Yet when your blinks make the world go momentarily dark – and bear in mind most of us blink around 12 to 15 times every minute – you are mostly oblivious. It certainly doesn’t feel like someone is flicking the lights on and off. How can this be?
A new study in Journal of Experimental Psychology: Human Perception and Performance has tested two possibilities – one is that after each blink your brain “backdates” the visual world by the duration of the blink (just as it does for saccadic eye movements, giving rise to the stopped clock illusion); the other is that it “fills in” the blanks created by blinks using a kind of perceptual memory of the visual scene. Neither explanation was supported by the findings, which means that the illusion of visual continuity that we experience through our blinks remains a mystery.
One experiment involved students making several judgments about how long a letter ‘A’ was presented on a computer screen (the actual durations were between 200ms and 1600ms; 1000ms equals 1 second). Sometimes the ‘A’ appeared at the beginning or end of a voluntary eye blink, other times it appeared during a period when the participant did not blink. If we backdate visual events that occur during blinks, then the ‘A’s that appeared at the beginning or end of a blink should have been backdated to the onset of the blink, giving the illusion that they’d been presented longer than they actually had, as compared with ‘A’s that appeared when there was no blink. In fact, the researchers found no evidence that the students overestimated the duration of ‘A’s that appeared during blinks.
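The backdating prediction boils down to simple arithmetic. The durations below are hypothetical, chosen purely to illustrate the hypothesis rather than taken from the study:

```python
# Sketch of the "backdating" (temporal antedating) prediction: if an 'A'
# appears at the end of a blink, its perceived onset would be shifted back
# to the start of the blink, inflating its judged duration by the blink's
# length. All numbers here are illustrative, not the study's data.
def predicted_judged_duration_ms(actual_ms, blink_ms):
    """Judged duration if the stimulus onset is antedated to blink onset."""
    return actual_ms + blink_ms

# A 300 ms 'A' appearing at the end of a 150 ms blink should feel like
# 450 ms under the hypothesis.
print(predicted_judged_duration_ms(300, 150))
```

It is exactly this kind of systematic overestimation that the researchers looked for and failed to find.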
Another experiment involved students making a voluntary blink while a letter ‘A’ was already onscreen and judging how long the ‘A’ was visible, and also making judgments about the duration of other ‘A’s that were onscreen during non-blink periods. If backdating or perceptual “filling in” occurs during blinks, then the students should have judged the time onscreen of an ‘A’ of a given duration as the same whether they blinked during its appearance or not. But this isn’t what the researchers found – rather, the students consistently underestimated the duration of ‘A’s if they blinked during their appearance.
We do know from past research that the brain to some extent shuts down visual processing during blinks – a study from the 80s shone a light up through people’s mouths and found their ability to detect changes in its brightness was reduced during blinks, even though the blinks obviously didn’t impede the light source. What the new research shows, though, is that it remains unclear how the brain weaves the loss of visual input during blinks into a seamless perceptual experience.
Summing up, the University of Illinois researchers David Irwin and Maria Robinson said the brain seems to ignore the perceptual consequences of blinks, but they’re not sure how this is done. “Having ruled out the temporal antedating and perceptual maintenance hypotheses,” they said, “the question still remains: Why does the visual world appear continuous across eye blinks?”
_________________________________ Irwin, D., & Robinson, M. (2016). Perceiving a Continuous Visual World Across Voluntary Eye Blinks. Journal of Experimental Psychology: Human Perception and Performance. DOI: 10.1037/xhp0000267
Go ahead, sketch a face on your note paper. Use a photo of someone as a guide if you want. Unless you’re a trained artist, the chances are that you’ve made an elementary error, placing the eyes too far up the head, when in fact they should be halfway down. Research suggests about 95 per cent of us non-artists tend to make this mistake, and in a new study in Psychology of Aesthetics, Creativity, and the Arts, psychologists in America have attempted to find out why. The answer, it turns out, is rather complicated and concerns both our lack of knowledge and basic biases in the way that we pay attention to faces and to space in general.
Justin Ostrofsky and his colleagues asked 75 psychology undergrads to draw two faces shown on a computer screen – both were identical except one had hair and one was bald. Crucially, half the participants were told that the eyes on a human face typically appear halfway down the head, whereas the other participants weren’t given this information.
Overall the participants made the usual error that people make when drawing faces and placed the eyes too far up the head, even though they had the model faces to guide them. But this error wasn’t as extreme in the participants who were given the specific guidance about eye position. This tells us that at least part of the reason that non-artists place the eyes too high is because we don’t know (or we’ve never noticed) their precise schematic location in a face.
However, the fact that the participants given this information still placed the eyes too high suggests that there is more to this than a lack of schematic knowledge. Another factor seems to be that when looking at faces, we tend to ignore the forehead region (this has been shown by prior research that’s tracked people’s gaze while they look at faces). Instead, we pay more attention to the parts of the face that contain features. The relevance of this to drawing was shown by the fact that participants made a smaller error with eye position when drawing the face that had hair than the face that was bald. The researchers explained: “When drawing the bald model, the absence of the hair line creates a larger forehead region to ignore and attenuate, resulting in the eyes drawn even further up the head in the bald model.”
Yet another relevant factor seems to be our natural bias towards ignoring the upper end of vertical space. This is easy to demonstrate by asking people to mark the mid-point of a vertical line – most of us place the mid-point too high, which in neuropsychological jargon is a sign of “altitudinal neglect”, meaning that we neglect to attend to higher space.
In the current study, the researchers asked their participants to perform a vertical line bisection and they found that the greater their altitudinal neglect (marking the line midpoint higher), the higher they tended to place the eyes on the faces they drew. But intriguingly this association was only true for the participants who were given the factual information about the eyes being midway down a human face. It seems being given this schematic knowledge improves our drawing, but only to a point – ultimately we’re still led astray by a basic attentional bias (presumably artists learn to overcome this bias).
_________________________________ Ostrofsky, J., Kozbelt, A., Tumminia, M., & Cipriano, M. (2016). Why Do Non-Artists Draw the Eyes Too Far Up the Head? How Vertical Eye-Drawing Errors Relate to Schematic Knowledge, Pseudoneglect, and Context-Based Perceptual Biases. Psychology of Aesthetics, Creativity, and the Arts. DOI: 10.1037/a0040368
When your waistband feels tighter than usual, or the scales say you’ve put on a few pounds, it’s easy to blame the news on clothes shrinkage or an uneven carpet, especially if your body looks just the same in the mirror. And that lack of visual evidence for weight gain (or loss) is especially a problem for more obese people, according to a new paper in the British Journal of Health Psychology.
The researchers asked female participants to estimate the weight of 120 differently sized women (their weights ranged from 28.2 to 104.9 kg; roughly 4.5 to 16.5 stones). The heavier the women in the photos were, the more the participants tended to underestimate their weight – on average, “an observer who judges the weight of a 100 kg woman will underestimate her weight by ~10 kg” the researchers said.
In a second study, participants had to judge whether pairs of real or CGI women had the same or a different BMI (body mass index). When the women in the pictures had a higher BMI, the difference in their respective BMIs had to be greater for participants to notice a difference (this is actually an example of a basic perceptual phenomenon known as Weber’s law).
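Weber's law says the smallest detectable change in a stimulus (the "just-noticeable difference") grows in proportion to the stimulus's magnitude. A minimal sketch of that idea, using a hypothetical Weber fraction rather than any value reported in the study:

```python
# Weber's law: the just-noticeable difference (JND) is a constant fraction
# of the stimulus magnitude. The Weber fraction k = 0.1 below is a made-up
# value for illustration only, not one estimated by the researchers.
def jnd(magnitude, k=0.1):
    """Smallest detectable change for a stimulus of the given magnitude."""
    return k * magnitude

# The BMI gap needed before a difference is noticed grows with BMI:
for bmi in (20, 30, 40):
    print(f"reference BMI {bmi}: a gap of about {jnd(bmi):.1f} units is needed")
```

Under this law, two heavier bodies must differ more in BMI than two lighter ones before an observer can tell them apart, which is the pattern the participants showed.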
“Our results clearly point to the potential for perceptual factors contributing to problems with detecting obesity and weight increase,” concluded the researchers led by Katri Cornelissen at Northumbria University. As people get heavier they will find it more difficult to detect extra weight gain, and conversely they will also struggle to detect when they have lost weight, which may undermine dieting efforts.
Our brains are wired such that we pay extra attention to anything that seems to be alive. This makes sense from an evolutionary point of view – after all, other living things might be about to eat us, or maybe we could eat them.
Consistent with this evolutionary perspective, prior research has shown that at a very basic level, we pay more attention to images of animals and people than we do to cars and trucks, even though in modern life, it is cars and trucks that are more of an everyday threat than animals. But now a study in the Canadian Journal of Experimental Psychology has shown that this bias for processing living things extends to LEGO people, despite the fact that they are inanimate and were obviously never encountered by our distant ancestors.
Mitchell LaPointe and his colleagues tested dozens of undergrad students across several studies. The basic challenge was similar throughout. On each experimental trial, the participants looked at a pair of black and white, static images of LEGO scenes that alternated rapidly on a computer screen (each scene appeared for a quarter of a second before it flicked to the other scene in the pair and back again), and they had to indicate as fast as possible when they’d spotted the difference between the two scenes, and which side of the screen the change was on. After the participant responded, the next pair of images appeared and began flicking back and forth until a response was made.
The two LEGO scenes within each image pair were identical but for one small difference, which was either the addition of an extra LEGO person or of some other feature, such as a tree or a small tower of LEGO blocks of similar size to a LEGO person. The main finding was that participants were significantly quicker, by two or more seconds on average, at spotting scene changes that involved a LEGO person as compared with some other LEGO element. They were also more accurate at reporting where the changes had occurred when they involved LEGO people.
Variations in the methodology showed that the attentional bias for LEGO people was not due to their having faces (the advantage remained even when these were blurred). Even rotating the scenes 180 degrees, or blurring the entire scene, failed to fully eliminate the participants’ superior performance for spotting changes involving LEGO people.
The researchers who conducted the new research said, “it is clear that our participants treated LEGO people differently than LEGO nonpeople. The explanation that we favour for this difference in performance is that the animate category was generalised to the LEGO people, perhaps because the LEGO people contain some feature overlap with animate objects.” In other words, your brain thinks the little LEGO characters are alive!
What’s not clear from this research is if experience with LEGO figures is required for the attentional bias for LEGO people to be observed (no detail is given in the study on whether or how much the participants had played with LEGO as children, or adults). We also don’t know if these results say something special about LEGO people or if a similar effect would be found for other toy figures.
_________________________________ LaPointe, M., Cullen, R., Baltaretu, B., Campos, M., Michalski, N., Sri Satgunarajah, S., Cadieux, M., Pachai, M., & Shore, D. (2016). An Attentional Bias for LEGO® People Using a Change Detection Task: Are LEGO® People Animate? Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale. DOI: 10.1037/cep0000077