Friday, 27 May 2016

Why do some of us avoid being kind to ourselves?

People who are kinder to themselves tend to be happier and healthier, and to cope better when bad things happen. There's also some evidence that training to be more self-compassionate is beneficial. Overall, self-compassion seems to be a sensible practice, so why are some of us averse to it?

In their new study in Self and Identity, researchers from Canada, Germany and the USA predicted that people averse to self-compassion think it will make them feel bad about themselves – for example, that they'll feel more selfish – and also that they hold different values from their more self-compassionate peers, such as believing more strongly in the importance of success. They'd probably agree with motivational speaker Zig Ziglar, who said: “When you are tough on yourself, life is going to be infinitely easier on you.”

Kelly Robinson and her colleagues surveyed 161 young adults about their tendency to be self-compassionate and about the importance they ascribed to different values, from prosperity to equality. They then asked them to imagine two scenarios of personal failure: one in which they treated themselves with self-compassion and forgiveness, and one in which they were hard on themselves and self-critical. Finally, the participants said how they'd feel about themselves after these two scenarios, based on 18 different character dimensions.

The less self-compassionate participants tended to hold much the same values as their self-compassionate peers, and they also agreed that self-compassion is good for well-being. But the less self-compassionate said they'd see themselves differently after showing care and tenderness towards themselves. Specifically, they said they would feel less industrious, ambitious, responsible, modest, careful, and competitive as compared with the participants who practised more self-compassion in their lives. Also, after being self-critical, the less self-compassionate participants said they would feel stronger and more responsible.

Overall, the results suggest that people who differ in self-compassion are just as interested in success and achievement; it's just that the less self-compassionate think that being kind to themselves will hinder their ability to achieve, because they associate self-kindness with being weak and less responsible and ambitious. The findings have implications for self-care interventions – those of us who struggle with self-compassion don't just need to learn ways to be kind to ourselves, we also need help challenging the negative assumptions we have about showing ourselves a little TLC.

--Resisting self-compassion: Why are some people opposed to being kind to themselves?

_________________________________
   
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Psychologists have devised a test for measuring one-year-olds' creativity

The study found that creative parents tended to have creative toddlers
A team of psychologists in England say they've developed a reliable way to measure divergent thinking in one-year-old infants. Divergent thinking is a form of creativity that involves uncovering new ideas or ways of doing things. The finding, published in Child Development, opens up the possibility of exploring the early factors that lead one infant to be more creative than another, and potentially of intervening to foster creativity extremely early in a child's life.

Elena Hoicka and her colleagues filmed 29 toddlers (average age of 19 months) as they played freely on their own with a specially designed box that was paired for 90 seconds at a time with one of five unusual objects, including a wire egg cup and a plastic hook.

Later, the researchers watched the videos back and counted how many unique actions each child performed with each object. To be counted as a new action, the child had to do something different with the object, or perform the same action with the object but on a different part of the box. The box featured various compartments, steps, shelves, holes and strings, offering a multitude of ways to play. The greater the number of different actions that the toddlers performed with the objects and box, the higher the divergent thinking score they received.

The researchers found that there was a wide spread of scores on the test, showing its ability to differentiate between children. What's more, when the same toddlers performed the test two weeks later, they tended to achieve very similar scores the second time around. In psychological jargon, this is a sign of "test-retest reliability", which suggests the test is measuring a persistent trait of divergent thinking, rather than the influence of momentary factors such as mood or fatigue. Also, most toddler actions performed during the second test were new, so it wasn't just that higher scoring toddlers were remembering their actions from the first session.
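
For readers who like to see the statistics concretely, here is a minimal sketch in Python of how a test-retest check of this kind works. The scores are invented, and the use of a Pearson correlation is my assumption rather than necessarily the authors' exact analysis:

import numpy as np
from scipy.stats import pearsonr

# Hypothetical divergent-thinking scores: one row per toddler, giving the
# count of unique object-box actions coded at session 1 and session 2.
scores = np.array([
    [12, 14],
    [5, 6],
    [20, 18],
    [9, 11],
    [15, 13],
    [7, 8],
])

# Test-retest reliability: correlate session-1 scores with session-2 scores.
# A high positive r is consistent with the test tapping a stable trait
# rather than momentary factors such as mood or fatigue.
r, p = pearsonr(scores[:, 0], scores[:, 1])
print(f"test-retest r = {r:.2f}, p = {p:.3f}")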

Another aspect of the study was that the researchers asked each toddler's mother or father to complete an adult test of divergent thinking that involved completing partially drawn images in imaginative ways. The parents' creativity scores showed a moderate to high correlation with their toddlers' scores. This could be because creativity is partly inherited through genes, or it could be due to toddlers learning from their parents' creativity. Another intriguing possibility raised by the researchers is that creative toddlers may influence their parents' creativity. "It is possible," they write, "that if a parent has a child who tends to explore, parents may be influenced by this and also explore more."

_________________________________

Hoicka, E., Mowat, R., Kirkwood, J., Kerr, T., Carberry, M., & Bijvoet-van den Berg, S. (2016). One-Year-Olds Think Creatively, Just Like Their Parents. Child Development. DOI: 10.1111/cdev.12531

--further reading--
Cultivating little scientists from the age of two
The jokes that toddlers make
Pre-schoolers can tell abstract expressionist art from similar works by children and animals

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Thursday, 26 May 2016

You're not addicted to your phone, it's just that you have an anxious attachment style

If you can barely put your phone down for a minute, and you get all panicky when your juice runs out, past psychology research might describe you as somehow addicted or dependent, or as having a new condition, "nomophobia" – literally, no-mobile-phone phobia.

But writing this week in Computers in Human Behavior, a team of researchers from Hungary say this language of extremity or disorder is probably the wrong approach – after all, most people experience nomophobia. Instead, they argue we should view our relationship with our phones in terms of attachment theory. Specifically, they've tested the idea that we all have a certain kind of attachment to our phones, but that people who are anxiously attached in their human relationships (that is, people who are afraid of being abandoned) are also likely to show anxious attachment towards their phones.

For their exploratory study, Veronika Konok and her team asked 142 young Hungarian adults (aged 19 to 25) to complete measures of their attachment style towards humans – whether they are anxiously attached, anxiously avoidant, or secure – and their attachment style towards their phones. This last measure included questions about phone checking and phone separation anxiety, and the ways the participants use their phone.

The results provided partial support for the researchers' predictions. For example, people with an anxious attachment style said they tended to get more stressed than others if they couldn't reach someone on their phone or couldn't answer a call. Anxiously attached people also described using their phones more for accessing social networking sites.

However, against the researchers' predictions, anxiously attached people did not report greater stress when they were separated from their phones. But there's a simple methodological explanation for this null result – nearly all of the participants, regardless of their attachment style, said they felt bad when they were apart from their phones.

"Some features of [people's] attachment to the phone are influenced by their interpersonal attachment style," the researchers concluded. "Specifically, anxiously attached people need more contact through the phone, and perhaps because of this they use the phone more for smart phone functions." The researchers hope more research in a similar vein will now follow, for example using neuroimaging to see if attachment to mobile phones "co-opts the same neuronal circuits as infant-mother or romantic attachment".

--Humans' attachment to their mobile phones and its relationship with interpersonal attachment style

_________________________________
   
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

It's easy to implant false childhood memories, right? Wrong, says a new review

During the 1990s, groundbreaking work by psychologists demonstrated that human memory is flexible and vulnerable and that it’s very easy for people to experience “false memories” that feel real, but which are actually a fiction. One major implication of this was in the evaluation of adults’ accounts of how they’d been abused in childhood. In a recent journal editorial, for instance, one of the pioneers of false memory research argued that the same techniques used by therapists to recover repressed memories of abuse have been shown in the lab to “produce false memories in substantial numbers of research participants”.

But there are some experts who believe the false memory researchers have gone too far. Chris Brewin and Bernice Andrews are two British psychologists with these concerns. In their new systematic review in Applied Cognitive Psychology they have taken a hard look at all the evidence, and they argue that we need to rethink the idea that false memories are so easily induced.

Key to this reevaluation is the question of what exactly is a false memory. It seems fair to require that it involves a person recollecting an experience, including sensory details, and believing confidently that this is a memory (rather than a dream or imagining) of something that really happened. But studies in this area that involve inducing false memories often adopt a more liberal definition, such as a subtle shift in a person’s  willingness to believe that a given event described to them could possibly have happened in their childhood.

Consider one key experimental technique known as “imagination inflation”, which aims to provoke false memories in participants simply by asking them to write about fictitious events as if they had really happened.

As a first step participants are surveyed about a range of things that might happen in a typical childhood, and then they are asked to use their imagination to write about one of these events that they believe didn’t actually happen in their own childhood. After this writing task, participants are asked again to rate how likely it is that they actually experienced this event in their own childhood.

Overall, after completing the imaginative writing task, most people tend to shift their beliefs, to think that it's more plausible that they may actually have experienced the event they wrote about. But in 13 of the 14 published datasets that Brewin and Andrews reviewed where this technique was used, belief only changed by one point or less on an eight-point scale (from strongly believing it didn't happen at one end of the scale, to strongly believing it did at the other). As these shifts in belief often weren't enough to tip participants over the scale's half-way point, this supposed induction of "false memories" involved the sowing of doubt but not the creation of a new memory – most participants still considered that the events they'd written about hadn't happened to them, it's just that they were less confident in that belief.
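
To make the stricter criterion concrete, here is a minimal sketch showing the difference between a mean belief shift and actually crossing the scale's midpoint. The ratings are invented for illustration, not data from the review:

# Illustrative pre/post belief ratings on an 8-point scale
# (1 = strongly believe it didn't happen, 8 = strongly believe it did).
# These numbers are invented, not taken from the review.
pre_post = [(2, 3), (1, 2), (3, 4), (2, 2), (4, 5), (1, 1)]

MIDPOINT = 4.5  # half-way point of a 1-8 scale

shifts = [post - pre for pre, post in pre_post]
crossed = [post > MIDPOINT for _, post in pre_post]

print(f"mean belief shift: {sum(shifts) / len(shifts):.2f} points")
print(f"crossed the midpoint: {sum(crossed)} of {len(crossed)} participants")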

Another issue with these research paradigms is that, especially for studies that provided fairly general childhood events (e.g. "you gave someone a gift for no special reason"), the imagination writing task may actually have triggered genuine memories that didn’t pop up the first time around, which are then misclassified by researchers as false.

Another technique to induce false memories is to use an authority figure, such as a therapist, to apply social pressure to a participant that an event was likely to have happened in their childhood. There are problems with this method too. Again, the data mostly involve participants’ increased belief in the possibility of a past event having occurred, rather than any changes in their recollection per se. Again, in 14 of 15 previously published datasets, belief did not shift beyond the midpoint.

Looking at only the most susceptible participants, in some cases there were strong changes in belief, but these tended to be for low-intensity events, such as eating and disliking a particular foodstuff; for more specific events, beliefs were less committed.

Five previously published datasets also looked at actual changes in remembering. At the high end, new recollection (after hearing testimony from an authority figure that an event had occurred in the participants' childhood) was seen in around 30 per cent of the sample – again for fictitious food-related events – but at the lowest end, just two per cent of participants reported new recollections.

The most powerful technique used to induce false memories is memory implantation. This approach involves parents and authority figures conniving over multiple sessions to persuade a participant that an event really happened in their childhood, going as far in some cases as doctoring photographs to produce incontrovertible proof. These studies often produce new recollections of some kind – up to 78 per cent of participants report new, false memories when doctored photographs are used – but Brewin and Andrews show that when an even more stringent definition of a false memory is used – that it must involve mental images – then this rate of new recollection drops to 25 per cent, and regarding memories that the participant is actually confident in, to only 15 per cent.

Overwhelmingly, most participants in these studies disbelieve the childhood event ever happened, and they doubt any apparently new memories that arise, despite the pressure to think otherwise. Tellingly, when studies have collected ratings of the strength of any new memories from both the participants and the researchers, the researchers’ ratings are routinely higher. After hearing their parents’ stories, the participants typically become better able to narrate a plausible and even elaborate account that persuades the researcher a memory has been created. But often the participants themselves aren't buying it, and they can draw the distinction between memory-like content and a true memory.

It’s clear that false memory paradigms can shift how we evaluate past events, and can for a minority of participants provoke memory-like experiences. But the rates are very low and the effects variable, and the one that produces the strongest effect – memory implantation – is also the most invasive, and least likely to match the experiences of people in normal life or within a therapy session. Brewin and Andrews suggest their review “indicates that the majority of participants are resistant to the suggestions they are given” and that the rhetoric that false beliefs are easy to instil should be re-examined.

_________________________________

Brewin, C., & Andrews, B. (2016). Creating Memories for False Autobiographical Events in Childhood: A Systematic Review. Applied Cognitive Psychology. DOI: 10.1002/acp.3220

--further reading--
Fresh doubt cast on memories of abuse recovered in therapy
Negative false memories are more easily implanted in children's minds than neutral ones
Test how much you know about the reliability of memory
False memories have an upside
Mindfulness meditation increases people's susceptibility to false memories

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Wednesday, 25 May 2016

Minimalist, anonymous rooms are probably not a good place to do teamwork

According to the philosophy of "lean space management", a minimalist workspace shorn of clutter is distraction-free and ideal for productivity. But this philosophy turns out to have slim empirical foundations. And since promoting a sense of identity at work, including personalising the workspace, generally leads to better outcomes, there's reason to expect richer, more characterful workplaces to be beneficial. A new article in the Journal of Personnel Psychology builds on this past work, showing that rich and meaningful workplace decor produces better team performance than lean spaces, even in surprising contexts.

Katherine Greenaway and her colleagues asked 54 students (45 women) to form teams of three or four members. The researchers then explained to each team that there were Red teams and Blue teams, and that theirs was a Red team. This was a ruse: in reality, every team was told it was a Red team. To stoke a sense of competition, the researchers added that the participants' team performance, and that of the other Red teams, would be compared against the rival Blue teams. The participants then had a chance to get to know their team-mates and to personalise their own team room with a poster that they made together and with red decorations.

But the teams couldn’t enjoy this for long, as a contrived double booking meant they were cast out from their room into a new work environment that they were told had recently housed another team. Some teams were rehoused in a lean, undecorated room; others in a room that had clearly been used by a Red team; and the remainder in a room that was dressed up as Blue territory.

In this new environment, the teams had to complete a task: finding words in a grid, and then using them to construct sentences. The researchers found that teams moved to a friendly Red room or an unfriendly Blue room performed better than those placed in a lean room.

Remember, the decorations were based on the arbitrary, colour-themed team allocation process, so their specifics couldn't have been profoundly inspiring. Nor could they represent a shared and personal endeavour: in all cases, the poster the teams had made and the decorations they had chosen were out of sight in another room.

In the case of those teams rehoused in a different Red room, some insight into their better performance comes from an attitude survey the participants took after the word task. They tended to give higher ratings to items like “I identify with the group that was in this room before us”. It seems the room triggered or sustained a general feeling of “Reds together” and the data suggested this identification drove their better performance.

What about the finding of superior team performance in a Blue room? The researchers had predicted that being in enemy territory might spark competitive feelings that would boost performance, at least in the short-term. The teams placed in a Blue room did indeed feel more competitive, but there was no sign in the data that this was linked with superior performance, so there's still a question mark over this part of the study.

All in all, the research suggests that workspaces with a rich character are more supportive of team performance than those built for anonymity. As the authors conclude: meaning beats leaning.

_________________________________

Greenaway, K., Thai, H., Haslam, S., & Murphy, S. (2016). Spaces That Signal Identity Improve Workplace Productivity. Journal of Personnel Psychology, 15 (1), 35-43. DOI: 10.1027/1866-5888/a000148

--further reading--
Why it's important that employers let staff personalise their workspaces
The supposed benefits of open-plan offices do not outweigh the costs

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Tuesday, 24 May 2016

Study of firefighters shows our body schema isn't always as flexible as we need it to be

The results could help explain some of the many injuries incurred by firefighters each year
Your brain has a representation of where your body extends in space. It's how you know whether you can fit through a doorway or not, among other things. This representation – the "body schema" as some scientists call it – is flexible. For example, if you're using a grabbing tool or swinging a tennis racquet, your sense of how far you can reach is updated accordingly. But there are limits to the accuracy and speed with which the body schema can be adjusted, as shown by an intriguing new study in Ecological Psychology about the inability of firefighters to adapt to their protective clothing.

Indeed, the researchers at the University of Illinois at Urbana-Champaign and the Illinois Fire Service Institute believe their findings may help explain some of the many injuries sustained by firefighters (of which there were over 65,000 in 2013 alone), and that they could have implications for training.

The participants were 24 firefighters (23 men) with an average age of 29 and an average of six years' experience in the job, all of whom were recruited through the University of Illinois Fire Service. The researchers, led by Matthew Petrucci, asked the participants to don the full protective kit, including a bunker-style coat, helmet and breathing apparatus. As well as the weight and bulk of the gear affecting the participants' ability to move freely, it also changed their physical dimensions – for instance, the helmet added 21cm to their height, and the breathing apparatus added 21cm of depth to their body.

The researchers created three main obstacles designed to simulate situations in a real-life fire: a horizontal bar that the firefighters had to go under, a bar that they had to go over, and a vertical gap between a mock door and wall that they had to squeeze through. All of these were adjustable, and the participants' first task was to estimate what height bar they could manoeuvre over, what height they could manoeuvre under, and what width gap they could squeeze through. To make these judgments, the researchers adjusted the obstacles in height or width, and for each setting the firefighters said whether they thought they could safely pass the obstacle.

For the next stage, the firefighters actually attempted to manoeuvre over, under or through the different obstacles, which were adjusted to make them progressively harder to complete. The idea was to find the lowest, highest and narrowest settings that the firefighters could pass through safely and quickly. To count as a safe passage, the firefighters had to avoid knocking off the delicately balanced horizontal bar for the over and under obstacles, and avoid touching their hands to the floor, or dumping their gear.

Despite the firefighters' many years' experience of wearing protective gear and breathing apparatus, the results showed that there was little correspondence between their judgments about the dimensions of the obstacles they could safely pass under, over or through, and their actual physical performance. In psychological jargon, the firefighters made repeated "affordance judgment errors", misperceiving the movements "afforded" to them by different environments.

The participants' judgments were most awry for passing under a horizontal bar – on average they thought they could pass under a bar that was 15cm lower than the height they could actually go under. Errors related to the over obstacle were a mix of over- and underestimations, and for the through obstacle 80 per cent of participants underestimated their ability by four to five cm – in other words, they thought they couldn't pass through, when actually they could. In a real life situation, this could lead to time wasting or unnecessary danger as they sought a more circuitous route.

The results suggest that the firefighters struggled to adjust their body schemas to account for their gear, and it's easy to see how this problem could lead to accidents in a burning building. It seems strange that they hadn't learnt to take account of their gear through experience, but in fact the opposite was true – the more experienced firefighters made more errors. The researchers propose several explanations for this, including that specific experiences may be needed to recalibrate the body schema to specific obstacles. Also, the firefighters' training in manoeuvring in their gear mostly comes at the start of their career, so the benefits may have faded. Refresher training may be helpful, especially for learning one's changing capabilities with ageing.

The researchers said that their results were important because "affordance judgment errors made on a fireground could contribute to injuries attributed to contact with ceilings, doors, structural components of buildings, and other objects with slips, trips, and falls."

_________________________________

Petrucci, M., Horn, G., Rosengren, K., & Hsiao-Wecksler, E. (2016). Inaccuracy of Affordance Judgments for Firefighters Wearing Personal Protective Equipment. Ecological Psychology, 28 (2), 108-126. DOI: 10.1080/

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Friday, 20 May 2016

A classic finding about newborn babies' imitation skills is probably wrong

Pick up any introductory psychology textbook and under the "developmental" chapter you're bound to find a description of "groundbreaking" research into newborn babies' imitation skills. The work, conducted in the 1970s, will typically be shown alongside black and white images of a man sticking his tongue out at a baby, and the tiny baby duly sticking out her tongue in response.

The research was revolutionary because it appeared to show that humans are born with the power to imitate – a skill crucial to learning and relationships – and it contradicted the claims of Jean Piaget, the grandfather of developmental psychology, that imitation does not emerge until babies are around nine months old.

Today it may be time to rewrite these textbooks. A new study in Current Biology, more methodologically rigorous than any previous investigation of its kind, has found no evidence to support the idea that newborn babies can imitate.

Janine Oostenbroek and her colleagues tested 106 infants four times: at one week of age, then at three weeks, six weeks, and nine weeks. Data from 64 of the infants was available at all four time points. At each test, the researcher performed a range of facial movements, actions or sounds for 60 seconds each. There were 11 of these displays in total, including tongue protrusions, mouth opening, happy face, sad face, index finger pointing and mmm and eee sounds. Each baby's behaviour during these 60-second periods was filmed and later coded according to which faces, actions or sounds, if any, he or she performed during the different researcher displays.

Whereas many previous studies have compared babies' responses to only two or a few different adult displays, this study was much more robust because the researchers checked to see if, for example, the babies were more likely to stick out their tongues when that's what the researcher was doing, as compared with when the researcher was doing any of the 10 other displays or sounds. Unlike most prior research, this new study also looked to see how any signs of imitation changed over time, at the different testing sessions. According to the researchers, this makes theirs "the most comprehensive, longitudinal study of neonatal imitation to date".

Following these more robust standards, Oostenbroek and her team found no evidence that newborn babies can reliably imitate faces, actions or sounds. Take tongue protrusions: averaged across the different testing time points, the babies were no more likely to stick out their tongue when the researcher did so than when the researcher opened her mouth, pulled a happy face or pulled a sad face. In fact, across all the different displays, actions and sounds, there was no situation in which the babies consistently performed a given facial display, gesture or sound more when the researcher specifically did that same thing than when the researcher was doing anything else.

Based on their results, the researchers said that the idea of "innate imitation modules" and other such concepts founded on the premise of neonatal imitation "should be modified or abandoned altogether". They said the truth may be closer to what Piaget originally proposed and that imitation probably emerges from around six months.

_________________________________

Oostenbroek, J., Suddendorf, T., Nielsen, M., Redshaw, J., Kennedy-Costantini, S., Davis, J., Clark, S., & Slaughter, V. (2016). Comprehensive Longitudinal Study Challenges the Existence of Neonatal Imitation in Humans. Current Biology. DOI: 10.1016/j.cub.2016.03.047

--further reading--
10 surprising things babies can do

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Wednesday, 18 May 2016

Why do so many people dislike the word "moist"?

By guest blogger Richard Stephens

A few years ago the New Yorker ran a social media campaign asking what word should be deleted from the English language. Nominations ranged from the political (Obama) to the superfluous (actually) and from the expression of hyperbole (awesome) to an outdated word for trousers (slacks). Intriguingly, the most popular suggestion – the so-called “runaway un-favourite” – might surprise a few people and especially those who enjoy baking.

Psychologist Paul H. Thibodeau from Oberlin College in the US has taken it upon himself to delve deeper. His amusingly titled research paper published recently in PLoS One, “A Moist Crevice for Word Aversion”, has made a case study of “Moist” – the word that the New Yorker found people most love to hate.

Thibodeau asked hundreds of volunteers, recruited online in the US, to rate how aversive they found various words, including moist. Verifying the results of the New Yorker campaign, he found that 20 per cent of them disliked this specific word. The people who were averse to "moist" gave it a 24 per cent higher unpleasantness rating than people who were not averse to it; to put this into context, that difference is similar in size to the gap between how aversive people rated "fuck" and "delicious".

Thibodeau tested several possible reasons for moist's unusual unpopularity by seeing what other words were unpopular among the moist-haters. One idea is that people are averse to the word "moist" because of how it sounds. If true, then people should also be averse to similar-sounding words like "hoist" and "foist", but they weren't. This isn't too much of a surprise given that the sounds that make up a language tend to be arbitrary, apart from a smattering of onomatopoeic words (words that convey sounds) like "splash".

Another clue comes from the observation that “moist” can be very good in some contexts, such as when it describes the texture of the slice of cake we’ve just been served, but can be very bad in others, for example when it refers to the condition of the armpit of the person crammed next to us on the London tube. So, perhaps the word “moist” is seen as aversive because there is conflict in many people’s minds between these simultaneous strong positive and negative connotations.

Thibodeau tested this possibility by assessing how people rated "moist" on a 5-point positivity scale (from "Not at all positive" to "Very positive") and also on a similar 5-point negativity scale (from "Not at all negative" to "Very negative"). In fact, "moist" tended to be rated around the middle of both scales rather than very high on both, as the conflicting-connotations explanation would require.

Yet another possibility is that “moist” is aversive because it brings to mind unsavoury associations, such as sexual words or words connected with non-sexual bodily functions. This is actually the most promising explanation because people who were averse to the word “moist” also tended to be averse to bodily function words like “phlegm” or “puke”. But note, these same people were not usually averse to sexual words like “horny” or “pussy” suggesting that their aversion to the word “moist” was driven not by sexual prudishness but a dislike of more mundane bodily functions.

Other intriguing insights come from when Thibodeau asked people to explain their dislike of the word "moist" – whether it was the sound, the meaning, or both. Most often people indicated it was the sound of the word, at odds with the earlier finding that "hoist" and "foist" were not unpopular. That people averse to the word "moist" tended to misattribute their aversion to the sound of the word may indicate a general tendency for us not to notice when disgust colours our opinions.

On the evidence of this psychological case study of the word “moist”, aversion to certain words is unlikely to be due to the word sound even though people may mistakenly suggest otherwise. This is another example of a psychology research finding contradicting “common sense”. There was also no evidence that word aversion arises from ambiguous and conflicting word connotations. It all came down to semantics – it was the meaning of the word and its associations with other words that underlay the negative evaluation. That aversion to the word “moist” was correlated with aversion to certain revolting bodily functions, like coughing up phlegm and vomiting, suggests that we are most likely to be averse to words that are linked to unsavoury associations.  

Marketeers might want to take note, especially brands that use English names for products on sale in non-English speaking countries. Customers might be wary of Soup For Sluts instant noodles, for example, or Pee Cola fizzy drink and Deeppresso instant coffee!

_________________________________

Thibodeau, P. (2016). A Moist Crevice for Word Aversion: In Semantics Not Sounds. PLOS ONE, 11 (4). DOI: 10.1371/journal.pone.0153686

--further reading--
Why do people find some nonsense words like "finglam" funnier than others like "sersice"

Post written by Richard Stephens for the BPS Research Digest. You can read more of Richard's work in his critically acclaimed popular science book, Black Sheep: The Hidden Benefits of Being Bad, available from all good book stores and online. Richard is a Senior Lecturer in Psychology at Keele University and Chair of the Psychobiology Section of the British Psychological Society.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Monday, 16 May 2016

Sorry to say, but your pilot's decisions are likely just as irrational as yours and mine

Flying a plane is no trivial task, but adverse weather conditions are where things get seriously challenging. Tragically, a contributing factor to many fatal accidents is the pilot misjudging the appropriateness of the flying conditions. Now, in a somewhat worrying paper in Applied Cognitive Psychology, Stephen Walmsley and Andrew Gilbey of Massey University have shown that pilots' judgment of weather conditions, and their decisions on how to respond to them, are coloured by three classic cognitive biases. What's more, expert flyers are often the most vulnerable to these mental errors.

The researchers first addressed the “anchoring effect”, which is when information we receive early on has an undue influence on how we subsequently think about a situation. Nearly 200 pilots (a mix of commercial, transport, student and private pilots) were given the weather forecast for the day and then they looked at visual displays that showed cloud cover and horizontal visibility as if they were in a cockpit, and their task was to quantify these conditions by eye.

The pilots tended to rate the atmospheric conditions as better – higher clouds, greater visibility – when they’d been told earlier that the weather forecast was favourable. Essentially, old and possibly irrelevant information was biasing the judgment they were making with their own eyes. Within the sample were 56 experts with over 1000 hours of experience, and these pilots were especially prone to being influenced by the earlier weather forecast.
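
As an illustration only, here is how an anchoring test of this kind might be analysed. The estimates are invented, and the independent-samples t-test is my assumption about the analysis rather than the authors' reported method:

import numpy as np
from scipy.stats import ttest_ind

# Hypothetical visibility estimates (in km) from pilots who judged the same
# visual displays after reading either a favourable or an unfavourable
# forecast. The numbers are invented for illustration.
favourable = np.array([8.5, 9.0, 7.8, 8.2, 9.4, 8.8])
unfavourable = np.array([7.0, 6.5, 7.4, 6.8, 7.9, 7.1])

# If the forecast anchors perception, the group means should differ even
# though everyone judged identical displays.
t, p = ttest_ind(favourable, unfavourable)
print(f"t = {t:.2f}, p = {p:.3f}")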

Next, hundreds more pilots read about scenarios where a pilot needed to make an unplanned landing. An airstrip was nearby, but the conditions for the route were uncertain. Each participant had to solve five of these landing dilemmas, deciding whether to head for the strip or re-route. For each scenario they were given two statements that were reassuring about heading for the strip (e.g. another pilot had flown the route minutes ago) and one that was problematic (e.g. the visibility was very low). In each case, the participants had to say which piece of information was most important for deciding whether or not to land at the nearby airstrip.

Across the scenarios, the participants showed no real preference for one type of statement over another. This might sound sensible, but actually it’s problematic. When you want to test a hypothesis, like "it seems safe to land", you should seek out information that disproves your theory. (No matter how many security guards, alarms and safety certificates a building possesses, if it’s on fire, you don’t go in.) So pilots should be prioritising the disconfirming evidence over the others, but in fact they were just as likely to rely on reassuring evidence, which is an example of what’s known as “the confirmation bias”.

In a final experiment more pilot volunteers read decisions that other pilots had made about whether to fly or not and the information they’d used to make their decisions. Sometimes the flights turned out to be uneventful, but other times they resulted in a terrible crash. Even though the pilots in the different scenarios always made their decisions based on the exact same pre-flight information, the participants tended to rate their decision making much more harshly when the flight ended in disaster than when all went well.

It concerns Walmsley and Gilbey that pilots are vulnerable to this error – an example of the “outcome bias” – because pilots who decide to fly in unwise weather and get lucky could be led by this bias to see their decisions as wise, and increasingly discount the risk involved. Note that both the confirmation and outcome experiments also contained an expert subgroup, and in neither case did they make better decisions than other pilots.

The use of cognitive heuristics and shortcuts – “thinking fast” in Daniel Kahneman’s memorable phrase – is enormously useful, necessary for helping us surmount the complexities of the world day-to-day. But when the stakes are high, whether it be aviation or areas such as medicine, these tendencies need to be countered. Simply raising awareness that these biases afflict professionals may be one part of the solution. Another may be introducing work processes that encourage slower, more deliberative reasoning. That way, when pilots scan the skies, they might be more likely to see the clouds on the horizon.

_________________________________

Walmsley, S., & Gilbey, A. (2016). Cognitive Biases in Visual Pilots' Weather-Related Decision Making. Applied Cognitive Psychology. DOI: 10.1002/acp.3225

--further reading--
Just two questions predict how well a pilot will handle an emergency
If your plane gets lost you'd better hope there's an orienteer on board

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Friday, 13 May 2016

We all differ in our ability to cope with contradictions and paradoxes. Introducing the "aintegration" test

Life is full of paradoxes and uncertainty – good people who do bad things, and questions with no right or wrong answer. But the human mind abhors doubt and contradictions, which provoke an uncomfortable state of "cognitive dissonance". In turn, this motivates us to see the world in neat, black and white terms. For example, we'll decide the good person must really have been bad all along, or conversely that the bad thing they did wasn't really so bad after all. But a pair of researchers in Israel point out that some of us are better than others at coping with incongruence and doubt – an ability they call "aintegration", for which they've concocted a new questionnaire. The full version, together with background theory, is published in the Journal of Adult Development.

If you want to hear what the researchers found out about who copes best with uncertainty, skip past the two example items coming up next.

Jacob Lomranz and Yael Benyamini's test begins: This questionnaire explores the way people think and feel about various attitudes. In the following pages you will be presented with attitudes held by different people. Please read each attitudinal position carefully and use the ratings scale to state your general and personal reaction as to such attitudes.

The test then features 11 items similar to these two:
EXAMPLE ITEM 1
There are people who will avoid making decisions under conditions of uncertainty and ambiguity. In contrast, other people would make decisions even under conditions of uncertainty and ambiguity.
(a) In general, to what extent do you think it is possible to make decisions under conditions of uncertainty and ambiguity? (rate 1 to 5, where 1 = not at all and 5 = to a very great extent)
(b) Assuming someone does make decisions under conditions of uncertainty and ambiguity, to what extent do you think this would cause her/him discomfort? (rate 1 to 5)
(c) To what extent do you make decisions under conditions of uncertainty and ambiguity? (rate 1 to 5)
(d) Assuming you made a decision under conditions of uncertainty and ambiguity, to what extent would that cause you discomfort? (rate 1 to 5)

EXAMPLE ITEM 2
There is an opinion that in every relationship between couples there are contradictory feelings; on the one hand, the individual benefits from the relationship (for example, love) and on the other hand loses from the relationship (for example, loss of independence).
- Some people claim that even when the couple has contradictory feelings about their relationship, a good relationship can still exist.
- In contrast, there are those who claim that when there are contradictory feelings about the couple relationship, it is impossible to maintain a good relationship.
(a) In general, to what extent do you think it is possible to have a good relationship when a couple has contradictory feelings about that relationship? (rate 1 to 5, where 1 = not at all and 5 = to a very great extent)
(b) Assuming someone persists with a relationship about which they have contradictory feelings, to what extent do you think this would cause her/him discomfort? (rate 1 to 5)
(c) To what extent do you have contradictory feelings about your relationship(s)? (rate 1 to 5)
(d) Assuming you have contradictory feelings, to what extent would that cause you discomfort? (rate 1 to 5)

Higher scores for (a) and (c) questions and lower scores for (b) and (d) questions mean that you have higher aintegration – that is, that you are better able to cope with uncertainty and contradictions.
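
As a rough illustration of that scoring rule, here is a minimal sketch in Python that reverse-scores the (b) and (d) ratings and sums everything. This is my own reconstruction for illustration, not the authors' published scoring key:

def aintegration_score(items):
    """Sum a simple aintegration score across questionnaire items.

    Each item is a tuple of four 1-5 ratings: (a, b, c, d).
    (a) and (c) count directly; the discomfort ratings (b) and (d)
    are reverse-scored, because less discomfort means higher aintegration.
    """
    total = 0
    for a, b, c, d in items:
        total += a + c              # perceived possibility and own behaviour
        total += (6 - b) + (6 - d)  # reverse-score discomfort: 1<->5, 2<->4
    return total

# One hypothetical respondent's ratings for the two example items above.
responses = [(4, 2, 4, 1), (5, 2, 3, 2)]
print(aintegration_score(responses))  # higher = more at ease with contradiction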

To road test their questionnaire, the researchers gave the full version with 11 items to hundreds of people across three studies and they found that it had high levels of "internal reliability" – that is, people who scored high for aintegration on one item tended to do so on the others.
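
Internal reliability of this kind is commonly indexed with Cronbach's alpha. The paper's exact statistic isn't specified here, so treat the following as a generic sketch, assuming a respondents-by-items matrix of item scores:

import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents x 4 items (invented for illustration).
data = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5],
        [3, 3, 2, 3], [4, 4, 4, 5], [1, 2, 1, 2]]
print(f"alpha = {cronbach_alpha(data):.2f}")  # values near 1 = high consistency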

Lomranz and Benyamini also found some evidence that older people (middle-aged and up), divorcees, the highly educated and the less religious tended to score higher on aintegration. So too did people who had experienced more positive events in life, and those who saw their negative experiences in more complex terms, as having both good and bad elements. Moreover, higher scorers on aintegration reported experiencing fewer symptoms of trauma after negative events in life.

This last finding raises the possibility that aintegration may grant resilience to hardship, although longer-term research is needed to test this (an alternative possibility is that finding a way to cope with trauma promotes aintegration).

Higher scores on aintegration also tended to correlate negatively with the established psychological construct of "need for structure".

The researchers said their paper was just a "first step" in establishing the validity of aintegration and that the concept could help inform future research especially with people "who dwell in states of transitions or 'betweenness', for example, struggling with national identities, cultural adjustment or conflicting values."

_________________________________

Lomranz, J., & Benyamini, Y. (2015). The Ability to Live with Incongruence: Aintegration—The Concept and Its Operationalization. Journal of Adult Development, 23 (2), 79-92. DOI: 10.1007/s10804-015-9223-4

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Thursday, 12 May 2016

After learning to identify with someone else's face, do people think their appearance has changed?

Past research has shown that it’s possible to hack our sense of our own bodies in bewildering ways, such as perceiving another person’s face as our own by stroking both in synchrony. These body illusions can alter our sense of self at a psychological level too. For example, embodying a child-sized body in a virtual reality environment leads people to associate themselves with child-like concepts. Can such effects also operate in the opposite direction, from the psychological to the physical? A new paper published in the Quarterly Journal of Experimental Psychology aimed to find out by seeing if shifting people’s sense of self at a psychological level warped their sense of their facial appearance.

Sophie Payne’s team at Royal Holloway, University of London manipulated their participants’ sense of self by repeatedly presenting them with a black and white cropped photo of a gender-appropriate face that was labelled "self", and with two other face images that were labelled as "friend" and "stranger". To consolidate these associations, the researchers then tested the participants, repeatedly showing them one of the earlier faces together with the correct label used earlier or the wrong label, and the participants had to say each time whether the label matched the face or not.

As the test went on, the participants became especially quick at spotting when the “self face” was correctly labelled as “self”, just as the researchers hoped would happen. This suggests that the previously unknown face had been incorporated into their self concept, at least temporarily. Think of it as a weaker version of the way we are particularly sensitive to any sounds that resemble our name, even against the hubbub of a cocktail party.

Having incorporated this face into their self-concept, did the participants view their facial appearance any differently? To address this, the researchers presented the participants with 100 faces and asked them to rate how similar each face was to their own. Fifty of the faces were blends of their own real face with the "stranger" face from earlier, and another 50 blended their real face with the “self face” paired earlier with their self concept.

The participants had actually completed this resemblance task earlier, before they’d learned to associate the “self face” with their self concept. The crucial test was whether, now that they'd learned to associate themselves with the “self face”, they would see themselves as resembling that face physically, more so than they had done earlier. Payne’s team predicted that they would, but in fact the results showed that this hadn’t happened. Identifying themselves with the face hadn't made them believe that they looked like the face.

Payne’s prediction was credible partly because we know the psychological self is malleable, body perception is malleable, and changes to body perception usually result in shifts in sense of self. Furthermore, and making this new result extra surprising, psychological influences have already been shown to affect our judgments about the physical appearance of our own face.

For example, a study from 2014 showed that people were more likely to say that they resembled a face that reflected a blend of their own face with someone else’s, when that other face belonged to a trustworthy partner in an earlier trading task rather than a cheat. Essentially, that result showed that the lines between self and other can be easily blurred, unlike in the current study. What gives?

The non-significant result in the current study may have uncovered the limits to these kinds of blurring effects. The findings suggest that it may be quite easy to adapt our self-concept, for example attuning us to identify with a new nickname or onscreen avatar, but that for this process to go deeper and influence how we perceive our own physical appearance, we need a more motivated, involving, and perhaps social context, like being betrayed or treated loyally.

The new hypothesis, then, is that we are engineered to perceptually link – or distance ourselves – from those who have helped or wronged us, and that the heat of social emotion is the soldering iron that fixes these connections fast. Further research will tell.

_________________________________

Payne, S., Tsakiris, M., & Maister, L. (2016). Can the self become another? Investigating the effects of self-association with a new facial identity. The Quarterly Journal of Experimental Psychology, 1-13. DOI: 10.1080/17470218.2015.1137329

--further reading--
Embodying another person's face makes it easier to recognise their fear

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Wednesday, 11 May 2016

Why do women do so much better at university than their school test scores predict?

Picture an American high school staff-room, late in the academic year, where a teacher called Alice is listening to her colleagues ride their favourite hobby horse: picking out which students have the most promise.

Eventually Alice leans forward and taps her laptop. “Less talk, guys, more data. If you want to know how a student will do when they get to college, look at their aptitude test scores.” Betty throws her a look. “That won’t work,” she says, “girls go on to do better than their test scores predict. Those tests are faulty.” Charles, the faculty provocateur, snorts. “Faulty? Not at all. Girls are only getting better grades because they pick softer subjects with easier marking.”

As her older colleagues tear into each other, Alice reflects on a third possibility: that succeeding at university depends on much more than the cognitive abilities measured by the SATs and ACTs (standard tests taken at American high schools), and that women might be better prepared in these other departments. But to resolve this staff-room squabble, who can tell these explanations apart?

Psychologists from the University of Minnesota-Twin Cities, that’s who. In a new paper in the Journal of Applied Psychology, Heidi Keiser’s team examine Alice and Charles’ rival explanations for why high school aptitude tests under-predict girls’ later success at university.

The new research first compared the university grades of 2000 students from a single institution with their high school aptitude scores. Women scored better on their course than you would expect based on an earlier aptitude test, but once the researchers took account of the female students’ higher average trait conscientiousness, 20 per cent of their grade surplus disappeared – a finding that replicates earlier research.
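
One standard way of "taking account of" a trait like conscientiousness is to compare the gender coefficient in regression models with and without it. Here is a minimal sketch on synthetic data; the variable names, effect sizes and model are illustrative assumptions, not the study's actual analysis:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Synthetic data loosely mimicking the setup: an aptitude score, gender,
# and conscientiousness (higher on average for women), predicting GPA.
# A small direct gender effect is included so the covariate absorbs only
# part of the gap, as in the study.
female = rng.integers(0, 2, n)
aptitude = rng.normal(0, 1, n)
conscientiousness = rng.normal(0.3 * female, 1)
gpa = 0.5 * aptitude + 0.4 * conscientiousness + 0.1 * female + rng.normal(0, 1, n)

# Model 1: aptitude + gender. The gender coefficient is the "grade surplus"
# that the aptitude test fails to explain.
X1 = sm.add_constant(np.column_stack([aptitude, female]))
gap_before = sm.OLS(gpa, X1).fit().params[2]

# Model 2: add conscientiousness and see how much of the surplus it absorbs.
X2 = sm.add_constant(np.column_stack([aptitude, female, conscientiousness]))
gap_after = sm.OLS(gpa, X2).fit().params[2]

print(f"gender gap shrinks by {100 * (1 - gap_after / gap_before):.0f}%")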

The researchers then decomposed the students' degree course into elements, reasoning that if conscientiousness has a role in the gender gap, its effect should be greatest when grades depend heavily on discretionary effort, like participating in discussion or research, and smallest when grades depend on raw smarts. The data showed that high school aptitude scores underestimated female performance on these effort-sensitive course elements, but were no worse at estimating women's success on quizzes and tests than they were for men. Overall, this supports Alice's perspective – that women do better than expected at university because of their greater effort and conscientiousness.

In a second study, the researchers tested Charles’ counter argument that women perform surprisingly well because they pick easier courses. The data, from huge historical datasets comprising nearly 400,000 students, showed that the courses men tended to take were significantly meaner (that is, male and female students on these courses tended to achieve worse grades than expected given their academic history) and were also more likely to be populated with high-achieving students competing for grades. And these factors, primarily course meanness, did explain a little of the overall tendency for female over-performance… but not more than nine per cent, much less than the effect of conscientiousness. A weak score, then, to Charles.

What about our other teacher, Betty? Right from the start, she said the aptitude tests, measuring cognitive ability, simply weren't doing it right. She could still have a case: the hidden variables found in this study – conscientiousness and course selection – together accounted for less than thirty per cent of the gender gap, and possibly much less, if the two effects are not independent of each other.

However, it’s also plausible that aptitude tests are doing a reasonable job, it’s just that there are many non-cognitive factors critical to university success, and that conscientiousness is just one slice of this pie (an exploratory look at other personality variables by Keiser’s team suggests as much). If so, it’s not that school tests and exams need to be improved, but that they give us just one part of what higher education requires. We must not lose sight of the wider attributes, found particularly in female students, that travel from classrooms and school projects into our seminar rooms and lecture halls, and beyond.

_________________________________

Keiser, H., Sackett, P., Kuncel, N., & Brothen, T. (2016). Why women perform better in college than admission scores would predict: Exploring the roles of conscientiousness and course-taking patterns. Journal of Applied Psychology, 101 (4), 569-581. DOI: 10.1037/apl0000069

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!
