Category: Methods

Concerning study says psychotherapy research has a problem with undeclared researcher bias

By Alex Fradera

When a good doctor encounters research comparing the effectiveness of drugs A and B, she knows to beware the fact that B was created by the people paying the researchers’ salaries. Pharmaceutical industry funding can be complex, but the general principle of declaring financial conflicts of interest is now embedded in medical research culture. Unfortunately, research into psychological therapies doesn’t yet seem to have got its house in order in an equivalent way. That’s according to a new open access article in the journal BMJ Open, which suggests that, while there is less risk of financial conflicts in this field, researchers may be particularly vulnerable to non-financial biases – a problem that hasn’t been adequately acknowledged until now.

Continue reading “Concerning study says psychotherapy research has a problem with undeclared researcher bias”

Was that new Science paper hyped and over-interpreted because of its liberal message?

By guest blogger Stuart Ritchie

It would be very concerning if “girls as young as six years old believe that brilliance is a male trait”, as The Guardian reported last week, especially if “this view has consequences”, as was argued in The Atlantic. Both stories implied girls’ beliefs about gender could be part of the explanation for why relatively few women are found working in fields such as maths, physics, and philosophy. These news stories, widely shared on social media, were based on a new psychology paper by Lin Bian at the University of Illinois at Urbana-Champaign and colleagues, published in Science, entitled “Gender stereotypes about intellectual ability emerge early and influence children’s interests”. The paper reported four studies, which at first appear to have simple, clear-cut conclusions. But a closer look at the data reveals that the results are rather weak, and the researchers’ interpretation goes far beyond what their studies have shown.

Continue reading “Was that new Science paper hyped and over-interpreted because of its liberal message?”

How did Darwin decide which book to read next?

A new study published in Cognition blends information theory, cognitive science and personal history

By Christian Jarrett

Between 1837 and 1860 Charles Darwin kept a diary of every book he read, including An Essay on the Principle of Population, Principles of Geology and Vestiges of the Natural History of Creation. There were many others: 687 English non-fiction titles alone, meaning that he averaged one book every ten days. After Darwin finished each one, how did he decide what to read next? In this decision, a scientist like Darwin was confronted with a problem similar to that afflicting the squirrel in search of nuts. Is it better to thoroughly search one area (or topic), or to continually jump to new areas (topics)? Foraging, whether for nuts or information, comes down to a choice between exploitation and exploration. In a new paper in Cognition, a team led by Jaimie Murdock has analysed the contents of the English non-fiction books Darwin read, and the order he read them in, to find out his favoured information-gathering approach and how it changed over time.
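The team’s analysis draws on information theory. As a rough illustration of how such “foraging” could be quantified – a minimal sketch, not necessarily the authors’ exact method – the Kullback-Leibler (KL) divergence between consecutive books’ topic distributions captures how surprising each jump is: low divergence looks like exploitation, high divergence like exploration. The topic mixes below are hypothetical:

```python
import math

def kl_divergence(p, q):
    """KL divergence D(p || q) between two probability distributions,
    e.g. topic proportions for consecutive books. Low values suggest
    exploitation (staying on-topic); high values suggest exploration."""
    # Assumes q is strictly positive wherever p is.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

geology_book = [0.7, 0.2, 0.1]    # hypothetical topic proportions
economics_book = [0.1, 0.3, 0.6]
print(kl_divergence(geology_book, economics_book))  # a big topical jump
```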

Continue reading “How did Darwin decide which book to read next?”

Replication success correlates with researcher expertise (but not for the reasons you might think)

By Christian Jarrett

During the ongoing “replication crisis” in psychology, in which new attempts to reproduce previously published results have frequently failed, a common claim by the authors of the original work has been that those attempting a replication have lacked sufficient experimental expertise. Part of their argument, as explained recently by Shane Bench and his colleagues in the Journal of Experimental Social Psychology, is that “just as master chess players and seasoned firefighters develop intuitive expertise that aids their decision making, seasoned experimenters may develop intuitive expertise that influences the ‘micro decisions’ they make about study selection … and data collection.”

To see if there really is any link between researcher expertise and the chances of replication success, Bench and his colleagues analysed the results of the recent “Reproducibility Project”, in which 270 psychologists attempted to replicate 100 previous studies, managing a success rate of less than 40 per cent. Bench’s team found that the expertise of the replicating teams, measured by the number of prior publications of each team’s first and senior authors, did indeed correlate with the size of effect obtained in the replication attempt – but there’s more to the story.

Continue reading “Replication success correlates with researcher expertise (but not for the reasons you might think)”

The evidence for the psychological benefits of animals is surprisingly weak

By Christian Jarrett

To see a man’s face light up as he strokes a dog, to hear a child’s laughter as her hamster tickles her skin, it just seems obvious that animals are good for our state of mind. Let’s hope so because not only do millions of us own pets, but also animals are being used therapeutically in an increasing number of contexts, from residential care homes to airports, prisons, hospitals, schools and universities. Unfortunately, as detailed by psychologist Molly Crossman in her new review in the Journal of Clinical Psychology, the research literature has simply not kept pace with the widespread embrace of animal contact as a form of therapy in itself, or as a therapy adjunct. In short, we don’t know whether animal contact is psychologically beneficial, and if it is, we have no idea how.

Continue reading “The evidence for the psychological benefits of animals is surprisingly weak”

No reason to smile – Another modern psychology classic has failed to replicate

Image via Quentin Gronau/Flickr showing how participants were instructed to hold the pen

By Christian Jarrett

The great American psychologist William James proposed that bodily sensations – a thumping heart, a sweaty palm – aren’t merely a consequence of our emotions, but may actually cause them. In his famous example, when you see a bear and your pulse races and you start running, it’s the running and the racing pulse that makes you feel afraid.

Consistent with James’ theory (and similar ideas put forward even earlier by Charles Darwin), a lot of research has shown that the expression on our face seems not only to reflect, but also to shape how we’re feeling. One of the most well-known and highly cited pieces of research to support the “facial feedback hypothesis” was published in 1988 and involved participants looking at cartoons while holding a pen either between their teeth, forcing them to smile, or between their lips, forcing them to pout. Those in the smile condition said they found the cartoons funnier.

But now an attempt to replicate this modern classic of psychology research, involving 17 labs around the world and a collective subject pool of 1894 students, has failed. “Overall, the results were inconsistent with the original result,” the researchers said.

Continue reading “No reason to smile – Another modern psychology classic has failed to replicate”

Are the benefits of brain training no more than a placebo effect?

If you spend time playing mentally taxing games on your smartphone or computer, will it make you more intelligent? A billion-dollar “brain training” industry is premised on the idea that it will. Academic psychologists are divided – the majority view is that by playing brain training games you will only improve at those games; you won’t become smarter. But some scholars believe in the wider benefits of computer-based brain training, and some reviews support their position, such as the 2015 meta-analysis that combined findings from 20 prior studies to conclude that “short-term cognitive training on the order of weeks can result in beneficial effects in important cognitive functions”.

But what if those prior studies supporting brain training were fundamentally flawed by the presence of a powerful placebo effect? That’s the implication of a new study in PNAS that suggests the advertising used to recruit participants into brain training research fosters expectations of mental benefits.

Cyrus Foroughi and his colleagues produced two different recruitment adverts (see image below) to attract participants into a brain training study – one explicitly stated that the study was about brain training and that such training can lead to cognitive enhancement; the other was neutral and simply stated that participants were needed for a study. Nearly all previously published brain training research has used an overt, suggestive style of recruitment advertising.

Nineteen young men and 31 young women signed up in response to the two ads, with no gender or age differences between those who responded to each ad. Next, they completed baseline intelligence tests before spending an hour on a task that features in many commercial brain training programmes – the so-called dual n-back task, which involves listening to one stream of numbers or letters and watching another, and spotting whenever the latest item in one of the streams is a repeat of one presented “n” number of items earlier in that stream. As participants improve, “n” is increased, making the task more difficult. The next day, the participants completed more intelligence tests. They also answered questions about their beliefs in the possibility for people’s intelligence to increase.
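To make the n-back rule concrete, here is a minimal sketch of the repeat-detection logic (a single stream is shown for simplicity; the function name and example stream are illustrative, not from the study):

```python
def nback_targets(stream, n):
    """Return the indices at which the current item matches the item
    presented n positions earlier in the stream."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

letters = ["C", "A", "C", "A", "B", "A"]
print(nback_targets(letters, 2))  # [2, 3, 5] – the 2-back matches
```

In the dual version, participants track an auditory and a visual stream simultaneously, responding to matches in either one.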

The participants who’d responded to the overt, suggestive advert showed gains in intelligence after completing just one hour of brain training – a length of training too short to plausibly produce any genuine benefit from the training itself. In contrast, the participants who responded to the neutral ad showed no intelligence gains. This difference emerged even though the two groups performed equally well on the training task, suggesting no group differences in motivation or ability. Also, the group who’d responded to the suggestive ad reported stronger beliefs in the malleability of intelligence. This could be because people with these beliefs were more likely to respond to the suggestive ad, or because they’d been influenced by the claims of the ad – either way, it shows how unsubtle recruitment advertising could be distorting research in this area.

The researchers said they’d provided “strong evidence that placebo effects from overt and suggestive recruitment can affect cognitive training outcomes”. They added that future brain training research should aim to better reduce or account for these placebo effects, for example by avoiding hinting to participants what the goals of the study are, or what outcomes are expected. Their call comes after a group of psychologists warned in 2013 that intervention studies in psychology are afflicted by a “pernicious and pervasive” problem, namely the failure to adequately control for the placebo effect.

Placebo effects in cognitive training

_________________________________
   
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


We all differ in our ability to cope with contradictions and paradoxes. Introducing the "aintegration" test

Life is full of paradoxes and uncertainty – good people who do bad things, and questions with no right or wrong answer. But the human mind abhors doubt and contradictions, which provoke an uncomfortable state of “cognitive dissonance”. In turn, this motivates us to see the world in neat, black and white terms. For example, we’ll decide the good person must really have been bad all along, or conversely that the bad thing they did wasn’t really so bad after all. But a pair of researchers in Israel point out that some of us are better than others at coping with incongruence and doubt – an ability they call “aintegration”, for which they’ve concocted a new questionnaire. The full version, together with background theory, is published in the Journal of Adult Development.

If you want to hear what the researchers found out about who copes best with uncertainty, skip past the two example items coming up next.

Jacob Lomranz and Yael Benyamini’s test begins: “This questionnaire explores the way people think and feel about various attitudes. In the following pages you will be presented with attitudes held by different people. Please read each attitudinal position carefully and use the ratings scale to state your general and personal reaction as to such attitudes.”

The test then features 11 items similar to these two:

EXAMPLE ITEM 1: There are people who will avoid making decisions under conditions of uncertainty and ambiguity. In contrast, other people would make decisions even under conditions of uncertainty and ambiguity.

(a) In general, to what extent do you think it is possible to make decisions under conditions of uncertainty and ambiguity?
1, 2, 3, 4, or 5 (where 1 = not at all and 5 = to a very great extent)

(b) Assuming someone does make decisions under conditions of uncertainty and ambiguity, to what extent do you think this would cause her/him discomfort?
1, 2, 3, 4, or 5

(c) To what extent do you make decisions under conditions of uncertainty and ambiguity?
1, 2, 3, 4, or 5

(d) Assuming you made a decision under conditions of uncertainty and ambiguity, to what extent would that cause you discomfort?
1, 2, 3, 4, or 5

EXAMPLE ITEM 2: There is an opinion that in every relationship between couples there are contradictory feelings; on the one hand, the individual benefits from the relationship (for example, love) and on the other hand loses from the relationship (for example, loss of independence).

– Some people claim that even when the couple has contradictory feelings about their relationship, a good relationship can still exist.
– In contrast, there are those who claim that when there are contradictory feelings about the couple relationship, it is impossible to maintain a good relationship.

(a) In general, to what extent do you think it is possible to have a good relationship when a couple has contradictory feelings about that relationship?
1, 2, 3, 4, or 5 (where 1 = not at all and 5 = to a very great extent)

(b) Assuming someone persists with a relationship about which they have contradictory feelings, to what extent do you think this would cause her/him discomfort?
1, 2, 3, 4, or 5

(c) To what extent do you have contradictory feelings about your relationship(s)?
1, 2, 3, 4, or 5

(d) Assuming you have contradictory feelings, to what extent would that cause you discomfort?
1, 2, 3, 4, or 5

Higher scores for (a) and (c) questions and lower scores for (b) and (d) questions mean that you have higher aintegration – that is, that you are better able to cope with uncertainty and contradictions.
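Here is a minimal sketch of that scoring logic, assuming the (b) and (d) ratings are reverse-coded on the 1-5 scale and all ratings then averaged – an illustrative assumption on our part; the published scoring procedure is detailed in the paper itself:

```python
def aintegration_score(responses):
    """responses: dict mapping sub-item letter ("a" to "d") to a list
    of 1-5 ratings, one per questionnaire item."""
    scored = []
    for letter, ratings in responses.items():
        for r in ratings:
            # Higher (a)/(c) and lower (b)/(d) indicate higher aintegration,
            # so reverse-code the (b) and (d) ratings.
            scored.append(r if letter in ("a", "c") else 6 - r)
    return sum(scored) / len(scored)

example = {"a": [4, 5], "b": [2, 1], "c": [4, 4], "d": [2, 2]}
print(aintegration_score(example))  # higher = more at ease with contradiction
```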

To road test their questionnaire, the researchers gave the full version with 11 items to hundreds of people across three studies and they found that it had high levels of “internal reliability” – that is, people who scored high for aintegration on one item tended to do so on the others.
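Internal reliability of this kind is typically quantified with Cronbach’s alpha. A minimal sketch, with made-up data (the paper may well report additional statistics):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2D array, rows = respondents, columns = items."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Three respondents answering four items on a 1-5 scale (hypothetical)
print(cronbach_alpha([[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5]]))
```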

Lomranz and Benyamini also found some evidence that older people (middle-aged and up), divorcees, the highly educated and the less religious tended to score higher on aintegration. So too did people who had experienced more positive events in life, and those who saw their negative experiences in more complex terms, as having both good and bad elements. Moreover, higher scorers on aintegration reported experiencing fewer symptoms of trauma after negative events in life.

This last finding raises the possibility that aintegration may grant resilience to hardship, although longer-term research is needed to test this (an alternative possibility is that finding a way to cope with trauma promotes aintegration).

Aintegration also tended to correlate negatively with the established psychological construct of “need for structure” – the higher people’s aintegration, the lower their need for structure.

The researchers said their paper was just a “first step” in establishing the validity of aintegration and that the concept could help inform future research especially with people “who dwell in states of transitions or ‘betweenness’, for example, struggling with national identities, cultural adjustment or conflicting values.”

_________________________________

Lomranz, J., & Benyamini, Y. (2015). The Ability to Live with Incongruence: Aintegration—The Concept and Its Operationalization. Journal of Adult Development, 23(2), 79-92. DOI: 10.1007/s10804-015-9223-4

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


Psychologists don’t REALLY think their field is in crisis (but finding fails to replicate)

Update: This was an April Fools’ joke. (Check out our April Fools’ articles from previous years).

In the wake of recent failures to repeat some of psychology’s most famous findings, not to mention a few cases of outright research fraud, it’s been claimed that psychological science is in a bit of a state. Many psychologists have responded by proposing ways to improve research practices, such as making data freely available online and preregistering planned methods to avoid data tinkering later on. However, not everyone agrees that psychology is in crisis – at least some psychologists take a rosier view and think the replication problem has been overblown, as illustrated by a recent ebullient opinion piece published in Science.

We hear a lot of commentary about all this from just a few high-profile individuals, but no one really knows what the average psychology researcher thinks. To find out, a team of psychologists in the UK recruited hundreds of psychology researchers around the world to complete a version of the “implicit association test”, tailored to reveal subconscious attitudes towards psychology. Also, in keeping with the growing awareness of the importance of replicability in science, it was planned in advance that a second team in the USA would subsequently perform the same test with hundreds more international researchers. Dr Cass Andra, a reforming psychologist and leader of the UK arm, said she expected to find that psychologists are in denial about the crisis in their field.

The test involved psychologists pressing one of two keyboard keys as fast as possible whenever they saw different categories of word on-screen – words pertaining to psychology or other sciences, and positive and negative words. On some trials, the same key was allocated to psychology terms (e.g. “social psychology”) and positive words (“robust”), with the other key allocated to other sciences and negative words. On other trials, the set-up was switched.
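The study itself is fictional, but this is how real IATs work: the implicit bias is read off from response-time differences between the two key arrangements, often summarised as a D-score. A simplified sketch (the names and numbers are illustrative; real IAT analyses involve further steps):

```python
import statistics

def iat_d_score(compatible_rts, incompatible_rts):
    """Response times (ms): compatible = psychology terms share a key with
    positive words; incompatible = psychology terms share a key with
    negative words. Positive D = implicit positive bias toward psychology."""
    pooled_sd = statistics.stdev(compatible_rts + incompatible_rts)
    return (statistics.mean(incompatible_rts)
            - statistics.mean(compatible_rts)) / pooled_sd

print(iat_d_score([620, 650, 600, 640], [780, 760, 800, 750]))
```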

The main finding in the UK arm of the research is that psychology researchers showed an implicit positive bias towards psychology research – that is, they showed their fastest response times when the same key was allocated to psychology terms and positive words, suggesting that they see psychology research in a positive light. Andra and her colleagues said this was as they expected but also extremely worrying – suggesting that deep down psychologists are confident in their discipline and do not see any need for reform.

However, the American replication attempt failed. These researchers, who also recruited psychology researchers from around the world, made the exact opposite finding – in this case, psychologists were particularly slow to respond when the same key was allocated to psychology terms and positive words, and much quicker when the same key was used for responding to psychology terms and negative words. Professor Polly Anna, who is sceptical about the idea of a replication crisis and a well-known figure through her popular TED talks, said she was disappointed by this result – “Firstly, it’s disappointing from a purely methodological point of view that we failed to replicate the first phase, but also I’m sorry to see that psychologists seem to believe deep-down that their field is in trouble. I think this shows the harm to morale that’s been done by all the talk of a replication crisis.”

Unfortunately, despite the initial collaborative spirit, the two teams are now in dispute. In a surprise move, the American team, led by Professor Anna, has written a letter to The International Journal of Psychological Research calling for their own replication attempt to be retracted. Anna and her colleagues acknowledged that they might not have sufficient training in the implicit association test, and that it’s possible their own anxieties influenced their participants, thus invalidating their results. “Our finding that psychology researchers think psychology is in crisis is questionable – it can take skill and creativity to get the right results sometimes, and hand on heart, we might have lacked those things here,” Anna and her team told us. “We think the British finding, showing positive views among psychologists toward psychology, should stand, and we want our own replication attempt removed from the record.”

But in turn, the British researchers have written a letter to the journal calling for a retraction of the American’s retraction letter. “While we would normally hope for a successful replication attempt,” the letter states, “we actually welcome the US finding because it helps to show once again the difficulty of conducting replicable psychological science. It may well be the case that their finding that psychologists think psychology is not robust is more robust than our finding that psychologists think psychology is robust. Either way, we hope the message gets through that we need to work together to make psychology more robust.”
_________________________________

Andra, C. et al. (2016). Implicit attitudes toward psychology held by psychological scientists. International Journal of Psychological Research, 1-9. DOI: 10.1090/02699931.2015.1129413

Anna, P. et al. (2016). An attempt to replicate the finding of implicit positive bias toward psychology held by psychology researchers. International Journal of Psychological Research, 10-19. DOI: 10.1080/027249931.2015.1129313

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


How trustworthy is the data that psychologists collect online?

The internet has changed the way that many psychologists collect their data. It’s now cheap and easy to recruit hundreds of people to complete questionnaires and tests online, for example through Amazon’s Mechanical Turk website. This is a good thing in terms of reducing the dependence on student samples, but there are concerns about the quality of data collected through websites. For example, how do researchers know that the participants have read the questions properly or that they weren’t watching TV at the same time as they completed the study?

Good news about the quality of online psychology data comes from a new paper in Computers in Human Behavior. Sarah Ramsey and her colleagues at Northern Illinois University first asked hundreds of university students to complete two questionnaires on computer – half of them did this on campus in the presence of a researcher, the others did it remotely, off campus.

The questions were about a whole range of topics, from sex to coffee. The researchers started off leading the participants to believe that the study was really about their attitudes to these topics. But when the students started the second questionnaire they were told the real test was to spot how many of its questions were repeats from the first. The idea was to see whether the students had really been paying attention to the questions – if they hadn’t, they wouldn’t be very good at spotting duplicates in the second questionnaire.

In fact, both groups of students – those supervised on campus and those who could do the questionnaire anywhere – performed well at spotting when questions were repeated. This suggests that even those who’d completed the questionnaires at home, or out and about, had been paying attention – good news for any researchers who like to collect data online.

A follow-up study was similar, but this time there were three participant groups: students on-campus, students off-campus, and 246 people recruited via Amazon’s Mechanical Turk. Also, the researchers added a trick to see if the participants had read the questionnaire instructions properly – they did this by making an unusual request for how participants should indicate the time they completed the questionnaires.

In terms of the participants’ paying attention to the questionnaire items, the results were again promising – all groups did well at spotting duplicate items. Regarding the reading of instructions, the results were more disappointing in general, but actually the Turkers performed the best. Just under 15 per cent of students on-campus appeared to have read the instructions closely compared with 8.5 per cent of off-campus students and 49.6 per cent of Turkers. Perhaps users of sites like Amazon’s Mechanical Turk are actually more motivated to pay attention than students because they have an inherent interest in participating whereas students might just be fulfilling their course requirements.

Of course this paper has only looked at two specific aspects of conducting psychology research online, both relating to the use of questionnaires. However, the researchers were relatively upbeat – “These results should increase our confidence in data provided by crowdsourced participants [those recruited via Amazon and other sites]” they said. But they also added that their findings raise general concerns about how closely participants read task instructions. There are easy ways round this though – for example, instructions can include a compliance test that must be completed before the proper questionnaire or other task begins, or researchers could try using audio to provide spoken instructions.
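For instance, here is a minimal sketch of the kind of embedded compliance check the researchers have in mind (the item wording and field names are hypothetical, not from the paper):

```python
# Participants who miss the instruction embedded in the item are excluded
# before analysis.
COMPLIANCE_ITEM = ("To show that you have read these instructions, "
                   "please answer 'None of the above' to this question.")

def passed_compliance(answers):
    return answers.get("compliance_item") == "None of the above"

participants = [
    {"id": 1, "compliance_item": "None of the above"},
    {"id": 2, "compliance_item": "Strongly agree"},  # failed the check
]
retained = [p for p in participants if passed_compliance(p)]
print([p["id"] for p in retained])  # [1]
```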

_________________________________

Ramsey, S., Thompson, K., McKenzie, M., & Rosenbaum, A. (2016). Psychological research in the internet age: The quality of web-based data. Computers in Human Behavior, 58, 354-360. DOI: 10.1016/j.chb.2015.12.049

Further reading
What are participants really up to when they complete an online questionnaire?
Anonymity may spoil the accuracy of data collected through questionnaires

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
