Category: Methodological

Psychology Research In The Coronavirus Era: A “High Stakes Version Of Groundhog Day”?

By Matthew Warren

As the reality of the coronavirus pandemic took hold in March, we looked at the work of psychologists attempting to understand how the crisis is affecting us and to inform our response to it. A few months on, hundreds of studies have been conducted or are in progress, examining everything from the spread of conspiracy theories to the characteristics that make people more likely to obey lockdown measures.

However, some researchers have raised the alarm. They’re worried that many of these rapid new studies are falling prey to methodological issues that could lead to false results and misleading advice. Of course, these aren’t new problems: the pandemic comes at the end of a decade in which the field’s methodological crises have been thrust into the spotlight. But is the coronavirus pandemic causing researchers to fall back on bad habits — or could it lead to positive change for the field? Continue reading “Psychology Research In The Coronavirus Era: A “High Stakes Version Of Groundhog Day”?”

Are You Addicted To Spending Time With Your Friends? Study Satirises Measures Of Social Media Addiction

By Matthew Warren

Do you often spend time with your friends in order to forget about personal problems? Do you think about your friends even when you’re not with them? Have you even gone as far as ignoring your family to spend time with your friends?

If you answered yes to these questions, you might fit the criteria for “offline friend addiction”, according to a new scale described in a preprint on PsyArXiv. Except, of course, that this notion is ridiculous. How can we be addicted to socialising, the fulfilment of one of our basic human needs?

Well, that’s pretty much the point of the new paper, written with tongue firmly in cheek. But behind it is a serious argument: although a scale for offline friend addiction is clearly absurd, there’s another, similar concept for which such scales have already been developed — social media addiction.

Continue reading “Are You Addicted To Spending Time With Your Friends? Study Satirises Measures Of Social Media Addiction”

Conservatives Might Not Have A More Potent Fear Response Than Liberals After All

By guest blogger Jesse Singal

If you follow mainstream science coverage, you have likely heard by now that many scientists believe that the differences between liberals and conservatives aren’t just ideological, but biological or neurological. That is, these differences are driven by deep-seated features of our bodies and minds that exist prior to any sort of conscious evaluation of a given issue.

Lately, though, follow-up research has been poking some holes in this general theory. In November, for example, Emma Young wrote about findings which undermined past suggestions that conservatives are more readily disgusted than liberals. More broadly, as I wrote in 2018, there’s a burgeoning movement in social and political psychology to re-evaluate some of the strongest claims about liberal-conservative personality differences, with at least some evidence to suggest that the nature and magnitude of these differences have been overblown by shoddy or biased research.

Now, a new study set to appear in the Journal of Politics and available in preprint here suggests that another key claim about liberal-conservative differences may be less sturdy than it appears.

Continue reading “Conservatives Might Not Have A More Potent Fear Response Than Liberals After All”

Public Belief In “Memory Myths” Not Actually That Widespread, Study Argues

By Emma Young

The general public has a pretty poor understanding of how memory works — and lawyers and clinical psychologists can be just as bad. At least, this is what many researchers have asserted, notes a team at University College London in a new paper, published in the Journal of Experimental Psychology. However, their research reveals that the idea that most people ignorantly subscribe to “memory myths” is itself a myth.

The wording of earlier studies, and discrepancies in how memory experts and the general public tend to interpret the meaning of statements about memory, have painted a bleaker picture of public understanding than is actually the case, according to a series of studies led by Chris Brewin. This has important implications for cases in which ideas about memory are highly relevant — among jurors in a courtroom, for example.

Continue reading “Public Belief In “Memory Myths” Not Actually That Widespread, Study Argues”

How do you prove that reading boosts IQ?

By guest blogger Stuart Ritchie.

A recent study on whether reading boosts intelligence attracted global media attention: “Reading at a young age makes you smarter,” announced the Daily Mail. “Early reading boosts health and intelligence,” said The Australian.

In the race for eye-catching headlines, this mainstream media coverage arguably missed the more fascinating story of the hunt for cause and effect. Here lead author Dr Stuart Ritchie explains the science:

“Causality, it turns out, is really difficult to prove. Correlational studies, while interesting, don’t give us information about causation one way or another. The randomised controlled trial is the ‘gold standard’ method of telling whether a manipulation has an effect on an outcome. But what if a randomised experiment isn’t possible, for practical or ethical reasons? Thankfully, there is an entire toolkit of study designs that go beyond correlation, and can be used to take steps up the ladder closer to causation.

Say you wanted to find interventions that cause intelligence to increase. Since childhood intelligence test scores are so powerfully predictive of later educational success, as well as health and wealth, it’s of great importance to find out how they might be improved. All sorts of nutritional supplements and training programmes have been tried, but all have failed (so far) to reliably show benefits for IQ. However, one factor that has been convincingly shown to cause improvements in intelligence test scores is education. It wouldn’t exactly be ethical to remove some children from school at random and see how they do in comparison to their educated peers. But in a step up the aforementioned causal ladder, researchers in 2012 used a ‘natural experiment’ in the Norwegian education system (where compulsory years of education were increased in some areas but not others) to show that each year’s worth of extra education added 3.6 IQ points.

What is it about education that’s driving these effects? Could it be that a very basic process like learning to read is causing the improvements in IQ? Keith Stanovich and colleagues showed, in a number of studies in the 1990s, that earlier levels of reading interest (though not ability) were predictive of later levels of verbal intelligence, even after controlling for children’s initial verbal intelligence. In a 1998 review, they concluded that “reading will make [children] smarter”.

On the ladder of causation, a control for pre-existing ability in a non-experimental design is important, but problems remain. For instance, since we know that common genes contribute to reading and intelligence, any study that fails to measure or control for genetic influences can’t rule out the possibility that the early reading advantage and the later intelligence benefit are due simply to a shared genetic basis that is, say, expressed at different times in different areas of the brain. If only there were a way of cloning children – comparing one “baseline” version of each child against a second version with improved reading ability, and then seeing if the better reading translated to higher intelligence later in development…

This sounds like a far-fetched fantasy experiment. But in a recent study, my colleagues and I did just that, though we left it to nature to do the cloning. Tim Bates, Robert Plomin, and I analysed data from 1,890 pairs of identical twins who were part of the Twins Early Development Study (TEDS). The twins had their reading ability and intelligence tested on multiple measures (averaged into a composite) at ages 7, 9, 10, 12, and 16. For each twin pair at each age, we calculated the difference between one twin and the other on both variables. Since each pair was near-100 per cent identical genetically, and was brought up in the same family, these differences must have been caused purely by the ‘non-shared environment’ (that is, environmental influences experienced by one twin but not the other).

We found that twins who had an advantage over their co-twin on reading at earlier points in their development had higher intelligence test scores later on. Because this analysis controls for initial IQ differences, as well as genetics and socioeconomic circumstances, it is considerably more compelling than previous results that used less well-controlled designs. It’s important to note that we found associations between earlier reading ability and later nonverbal intelligence, as well as later verbal intelligence. So, beyond the not-particularly-surprising finding that being better at reading might help with a child’s vocabulary, we made the pretty-surprising finding that it might also help with a child’s problem solving and reasoning ability. Why?

We now enter the realm of speculation. It might be that reading allows children to practise the skills of assimilating information and abstract thought that are useful when completing IQ tests. The process of training in reading may also help teach children to concentrate on tasks—like IQ tests—that they’re asked to complete. Our research doesn’t shed light on these mechanisms, but we hope future studies will.

One should not give our study a criticism-free ride just because it tells a cheery, ‘good news’ story. A step up toward causation is not causation. Could there have been alternative explanations for our findings? Certainly. It is possible that, for instance, teachers spot a child with a reading advantage and give them additional attention, raising their intelligence ‘without’, as we say in the paper, ‘reading doing the causal “work”‘. It may also have been that our controls were inadequate – as I said above, identical twins are nearly genetically identical, but a small number of unique genetic mutations might occur within each pair. The largest lacuna in our study, though, was the cause of the initial within-pair reading differences. Whether these were caused by teaching, peers, pure luck, or some other process, we couldn’t tell, and it’s of great interest to find out.

We hope that our study encourages researchers in three ways. First, in the eternal quest for intelligence-boosters, instead of looking to flashy new brain-training games or the like, they might wish to examine, and maximise, the potentially IQ-improving effects of ‘everyday’ education. Second, they could attempt to answer the questions raised by our study. Why do identical twins differ in reading, and are the reasons under a teacher’s control? What are the specific mechanisms that might lead from literacy to intelligence? Third, and more generally, we hope it will inspire them to consider new methods, including the twin-differences design, that edge further up the causal ladder, away from the basic correlational study. The data are, of course, far harder to collect, but the stronger inferences found there are well worth the climb.”
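
To make the twin-differences logic described above concrete, here is a minimal sketch in Python. It is purely illustrative: the data file, column names and model are invented placeholders, not the authors’ actual TEDS variables or analysis code.

```python
# Illustrative sketch of a twin-differences analysis (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

twins = pd.read_csv("twin_pairs.csv")  # one row per identical twin pair (hypothetical file)

# Within-pair differences: twin A minus twin B on each measure.
# Genes and shared family environment are the same within a pair,
# so these differences reflect only the non-shared environment.
twins["reading_diff_age7"] = twins["reading_a_age7"] - twins["reading_b_age7"]
twins["iq_diff_age7"] = twins["iq_a_age7"] - twins["iq_b_age7"]
twins["iq_diff_age16"] = twins["iq_a_age16"] - twins["iq_b_age16"]

# Does an earlier within-pair reading advantage predict a later within-pair
# IQ advantage, over and above the initial within-pair IQ difference?
model = smf.ols("iq_diff_age16 ~ reading_diff_age7 + iq_diff_age7", data=twins).fit()
print(model.summary())
```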

_________________________________

Ritchie, S., Bates, T., & Plomin, R. (2014). Does Learning to Read Improve Intelligence? A Longitudinal Multivariate Analysis in Identical Twins From Age 7 to 16. Child Development. DOI: 10.1111/cdev.12272

Post written by Stuart J. Ritchie, a Research Fellow in the Centre for Cognitive Ageing and Cognitive Epidemiology at the University of Edinburgh. Follow him on Twitter: @StuartJRitchie

The mistakes that lead therapists to infer psychotherapy was effective, when it wasn’t

How well can psychotherapists and their clients judge from personal experience whether therapy has been effective? Not well at all, according to a paper by Scott Lilienfeld and his colleagues. The fear is that this can lead to the continued practice of ineffective, or even harmful, treatments.

The authors point out that, like the rest of us, clinicians are subject to four main biases that skew their ability to infer the effectiveness of their psychotherapeutic treatments. These include the mistaken belief that we see the world precisely as it is (naive realism), and our tendency to pursue evidence that backs our initial beliefs (the confirmation bias). The other two are illusory control and illusory correlations – thinking we have more control over events than we do, and assuming the factors we’re focused on are causally responsible for observed changes.

These features of human thought lead to several specific mistakes that psychotherapists and others commit when they make claims about the effectiveness of psychological therapies. Lilienfeld’s team call these mistakes “causes of spurious therapeutic effectiveness” or CSTEs for short. The authors have created a taxonomy of 26 CSTEs arranged into three categories.

The first category includes 15 mistakes that lead to the perception that a client has improved, when in fact he or she has not. These include palliative benefits (when the client feels better about their symptoms without actually showing any tangible improvement); confusing insight with improvement (when the client better understands their problems, but does not actually show recovery); and the therapist’s office error (confusing a client’s presentation in-session with their behaviour in everyday life).

The second category consists of errors that lead therapists and their clients to infer that symptom improvements were due to the therapy, and not some other factor, such as natural recovery that would have occurred anyway. Among these eight mistakes are a failure to recognise that many disorders are cyclical (periods of recovery interspersed with phases of more intense symptoms); ignoring the influence of events occurring outside of therapy, such as an improved relationship or job situation; and the influence of maturation (disorders seen in children and teens can fade as they develop).

The third and final category comprises errors that lead to the assumption that improvements are caused by unique features of a therapy, rather than factors that are common to all therapies. Examples here include not recognising placebo effects (improvements stemming from expectations) and novelty effects (improvements due to initial enthusiasm).

To counter the many CSTEs, Lilienfeld’s group argue we need to deploy research methods including using well-validated outcome measures, taking pre-treatment measures, blinding observers to treatment condition, conducting repeated measurements (thus reducing the biasing impact of irregular everyday life events), and using control groups that are subjected to therapeutic effects common to all therapies, but not those unique to the treatment approach under scrutiny.

“CSTEs underscore the pressing need to inculcate humility in clinicians, researchers, and students,” conclude Lilienfeld and his colleagues. “We are all prone to neglecting CSTEs, not because of a lack of intelligence but because of inherent limitations in human information processing. As a consequence, all mental health professionals and consumers should be sceptical of confident proclamations of treatment breakthroughs in the absence of rigorous outcome data.”

_________________________________

Lilienfeld, S., Ritschel, L., Lynn, S., Cautin, R., & Latzman, R. (2014). Why Ineffective Psychotherapies Appear to Work: A Taxonomy of Causes of Spurious Therapeutic Effectiveness. Perspectives on Psychological Science, 9(4), 355-387. DOI: 10.1177/1745691614535216

–further reading–
When therapy causes harm

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

What the textbooks don’t tell you – one of psychology’s most famous experiments was seriously flawed

Zimbardo speaking in ’09

By Christian Jarrett

Conducted in 1971, the Stanford Prison Experiment (SPE) has acquired a mythical status and provided the inspiration for at least two feature-length films. You’ll recall that several university students allocated to the role of jailor turned brutal and the study had to be aborted prematurely. Philip Zimbardo, the experiment’s lead investigator, says the lesson from the research is that in certain situations, good people readily turn bad. “If you put good apples into a bad situation, you’ll get bad apples,” he has written.

The SPE was criticised back in the 70s, but that criticism has noticeably escalated and widened in recent years. New details to emerge show that Zimbardo played a key role in encouraging his “guards” to behave in tyrannical fashion. Critics have pointed out that only one third of guards behaved sadistically (this argues against the overwhelming power of the situation). Question marks have also been raised about the self-selection of particular personality types into the study. Moreover, in 2002, the social psychologists Steve Reicher and Alex Haslam conducted the BBC Prison Study to test the conventional interpretation of the SPE. The researchers deliberately avoided directing their participants as Zimbardo had his, and this time it was the prisoners who initially formed a strong group identity and overthrew the guards.

Given that the SPE has been used to explain modern-day atrocities, such as at Abu Ghraib, and given that nearly two million students are enrolled in introductory psychology courses in the US, Richard Griggs, professor emeritus at the University of Florida, says “it is especially important that coverage of it in our texts be accurate.”

So, have the important criticisms and reinterpretations of the SPE been documented by key introductory psychology textbooks? Griggs analysed the content of 13 leading US introductory psychology textbooks, all of which have been revised in recent years, including: Discovering Psychology (Cacioppo and Freberg, 2012); Psychological Science (Gazzaniga et al, 2012); and Psychology (Schacter et al, 2011).

Of the 13 analysed texts, 11 dealt with the Stanford Prison Experiment, providing between one and seven paragraphs of coverage. Nine included photographic support for the coverage. Five provided no criticism of the SPE at all. The other six provided only cursory criticism, mostly focused on the questionable ethics of the study. Only two texts mentioned the BBC Prison Study. Only one text provided a formal scholarly reference to a critique of the SPE.

Why do the principal psychology introductory textbooks, at least in the US, largely ignore the wide range of important criticisms of the SPE? Griggs didn’t approach the authors of the texts so he can’t know for sure. He thinks it unlikely that ignorance is the answer. Perhaps the authors are persuaded by Zimbardo’s answers to his critics, says Griggs, but even so, the criticisms should be mentioned and referenced. Another possibility is that textbook authors are under pressure to shorten their texts, but surely they are also under pressure to keep them up-to-date.

It would be interesting to compare coverage of the SPE in European introductory texts. Certainly there are contemporary books by British psychologists that do provide more in-depth critical coverage of the SPE.

Griggs’ advice for textbook authors is to position coverage of the SPE in the research methods chapter (instead of under social psychology), and to use the experiment’s flaws as a way to introduce students to key issues such as ecological validity, ethics, demand characteristics and subsequent conflicting results. “In sum,” he writes, “the SPE and its criticisms comprise a solid thread to weave numerous research concepts together into a good ‘story’ that would not only enhance student learning but also lead students to engage in critical thinking about the research process and all of the possible pitfalls along the way.”

_________________________________

Griggs, R. (2014). Coverage of the Stanford Prison Experiment in Introductory Psychology Textbooks. Teaching of Psychology, 41(3), 195-203. DOI: 10.1177/0098628314537968

further reading
Foundations of sand? The lure of academic myths and their place in classic psychology
Tyranny and The Tyrant, From Stanford to Abu Ghraib (pdf; Phil Banyard reviews Zimbardo’s book The Lucifer Effect).

Image credit: Jdec/Wikipedia

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest

How your mood changes your personality

Participants scored higher on neuroticism & lower on extraversion when they were sad

Except in extreme cases of illness or trauma, we usually expect each other’s personalities to remain stable through life. Indeed, central to the definition of personality is that it describes pervasive tendencies in a person’s behaviour and ways of relating to the world. However, a new study highlights the reality – your personality is swayed by your current mood, especially when you’re feeling down.

Jan Querengässer and Sebastian Schindler twice measured the personality of 98 participants (average age 22; 67 per cent female), with a month between each assessment. Before one of the assessments, the participants watched a ten-minute video designed to make them feel either sad or happy. The sad clip was from the film Philadelphia, with Barber’s Adagio for Strings added into the mix. The happy video showed families reunited after the fall of the Berlin Wall, together with Mozart’s Eine kleine Nachtmusik. Before their other personality assessment, the participants watched a neutral video about people with extreme skills.

When participants answered questions about their personality in a sad state, they scored “considerably” higher on trait neuroticism, and “moderately” lower on extraversion and agreeableness, as compared with when they completed the questionnaire in a neutral mood state. There was also a trend for participants to score higher on extraversion when in a happy mood, but this didn’t reach statistical significance. The weaker effect of happy mood on personality may be because people’s supposed baseline mood (after the neutral video) was already happy. Alternatively, perhaps sad mood really does have a stronger effect on personality scores than happiness. This would make sense from a survival perspective, the researchers said, because sadness is usually seen as a state to be avoided, while happiness is a state to be maintained. “Change is more urgent than maintenance,” they explained.

These results complement previous research suggesting that a person’s personality traits are associated with more frequent experience of particular emotions. For example, there’s evidence that high scorers on extraversion experience more happiness than lower scorers. However, the new data highlight how the relationship can work both ways – with current emotional state also influencing personality (or the measurement of personality, at least). We are familiar with this in our everyday lives – even our most vivacious friends can seem less friendly and sociable when they’re down. With strangers though, it’s easy to forget these effects and assume that their behaviour derives from fixed personality rather than temporary mood.

Although this research appears to challenge the notion of personality as fixed, the results, if heeded, could actually help us drill down to a person’s underlying long-term traits. As Querengässer and Schindler explained, “becoming aware of participants’ emotional state and paying attention to the possible implications on testing could lead to a notable increase in the stability of assessed personality traits.”
_________________________________

Querengässer, J., & Schindler, S. (2014). Sad but true? How induced emotional states differentially bias self-rated Big Five personality traits. BMC Psychology, 2(1). DOI: 10.1186/2050-7283-2-14

–further reading–
Why are extraverts happier?
Situations shape personality, just as personality shapes situations

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Facebook mood manipulation study – the outcry and counter reaction (link feast special)

There’s been an outcry after Facebook manipulated the news feeds of nearly 700,000 of its users, as part of a newly published investigation into online emotional contagion. Here we bring you a handy round-up of some of the ensuing commentary and reaction.

The Furore
The Psychologist magazine brings us up to speed with the main findings and fallout from the affair (Wired’s detailed coverage is also good).

The “Apology”
Lead author on the paper, in-house Facebook researcher Adam Kramer, took to Facebook on June 29 to apologise. “…our goal was never to upset anyone,” he writes. Kramer’s co-authors were researchers at Cornell University.

The Statement
Cornell University claim that their researchers analysed data collected by Facebook, and had no part in data collection themselves. “Cornell University’s Institutional Review Board concluded that he [co-author Professor Jeffrey Hancock] was not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required.”

The Evasion
Princeton University psychologist Susan Fiske – editor at PNAS, where the research was published – told Guardian blogger Chris Chambers that she didn’t have time to answer all his questions about the study. Retorts Chambers: “In what version of 2014 is it acceptable for journals, universities and scientists to offer weasel words and obfuscation in response to simple questions about research ethics?”

The Bad Research Methods
John Grohol at World of Psychology pointed out that the text analysis used in the research was flawed. For example, “I am not having a great day” and “I am not happy” would be rated positive because of the presence of the words “great” and “happy”.
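
The problem is easy to demonstrate. Below is a toy sketch of word-count sentiment scoring of the kind Grohol criticises; the word lists are invented for illustration and are not the actual dictionaries used in the Facebook study, but the negation failure is the same.

```python
# Toy word-count sentiment scorer: counts positive and negative words,
# ignoring negation, so negated sentences are misclassified.
POSITIVE = {"great", "happy", "good"}
NEGATIVE = {"sad", "awful", "terrible"}

def naive_sentiment(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

for sentence in ["I am not having a great day", "I am not happy"]:
    # Both score +1 (positive) because "great" and "happy" are counted
    # while the word "not" is never taken into account.
    print(sentence, "->", naive_sentiment(sentence))
```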

The Outcry
Over at ZDNet, tech blogger Steven J. Vaughan-Nichols said he knew all Facebook users were guinea pigs, but not that they were lab rats. “Stop it, Facebook. Stop it now. And, never, ever do anything like this again.”

The Outcry II
NPR blogger Linda Holmes: “I speak here as a Facebook user and straight from the heart: It’s gross. It’s gross.”

The Counter Reaction
“Facebook users more or less get what they should expect — and what they deserve, given that they use Facebook’s service for free”. This is the opinion of California Polytechnic ethicist Patrick Lin, as paraphrased by the Wall Street Journal.

The Counter Reaction II
Calm down, says Alice Park at Time, “what Facebook did was scientifically acceptable, ethically allowable and, let’s face it, probably among the more innocuous ways that you’re being manipulated in nearly every aspect of your life.” (Similar sentiments from a Forbes technology writer and from Tal Yarkoni at New Scientist.)

The Solution?
“Until Facebook changes its practices,” says Selena Larson at ReadWrite.com, “there’s only one way to assuredly remove yourself as a candidate for a scientific experiment: Delete your Facebook account.”


_________________________________

Post compiled by Christian Jarrett (@psych_writer) for the BPS Research Digest.

How burnt-out students could be skewing psychology research

It’s well known that psychology research relies too heavily on student volunteers. So many findings are assumed to apply to people in general, when they could be a quirk unique to undergrads. Now Michael Nicholls and his colleagues have drawn attention to another problem with relying on student participants – those who volunteer late in their university term or semester lack motivation and tend to perform worse than those who volunteer early.

A little background about student research participants. Psychology students often volunteer for numerous studies throughout a semester. Usually, they’re compelled to do this at least once in return for course credits that count towards their degree. Other times they receive cash or other forms of compensation. When in the semester they opt to volunteer for course credit is usually at their discretion. To over-generalise, conscientious students tend to volunteer early in the semester, whereas less disciplined students leave it until the last minute, when time is short and deadlines are pressing.

Nicholls’ team first recruited 40 student participants (18 men) at Flinders University during the third week of a 14-week semester. Half of them were first years who’d chosen to volunteer early in return for course credits. The other half of the participants, who hailed from various year groups, had chosen the option to receive $10 compensation. The challenge for both groups of students was the same – to perform 360 trials of a sustained attention task. On each trial they had to press a button as fast as possible if they saw any number between 1 and 9, except for the number 3, in which case they were to withhold responding.
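
For readers unfamiliar with go/no-go tasks, the scoring rule is simple enough to write in a couple of lines. The sketch below is purely illustrative; it is not the presentation software or scoring code used in the study.

```python
# Go/no-go scoring: respond to any digit from 1 to 9 except 3,
# and withhold the response when a 3 appears.
def trial_correct(digit: int, responded: bool) -> bool:
    """A trial is correct if the participant responded to a non-3 digit,
    or withheld their response when the digit was 3."""
    return responded != (digit == 3)

assert trial_correct(7, responded=True)       # responding to a 7 is correct
assert not trial_correct(3, responded=True)   # responding to a 3 is an error
assert trial_correct(3, responded=False)      # withholding on a 3 is correct
```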

At this early stage of the semester there was no difference in the performance (based on speed and accuracy) of the students who volunteered for course credit or for money. There was also no difference in their motivation levels, as revealed in a questionnaire.

Later in the semester, between weeks 9 and 12, the researchers repeated the exercise, with 20 more students who’d enrolled for course credit and 20 more who’d applied to participate in return for cash compensation. Now the researchers found a difference between the groups. Those participants receiving financial payment outperformed those who had volunteered in return for course credit. The latter group also showed more variability in their performance than the course-credit participants tested at the start of the semester had done, and they reported having lower motivation.

These results suggest that students who wait to volunteer for course credit until late in the semester lack motivation and their performance suffers as a result. Nicholls and his colleagues explained that their findings have serious implications for experimental design. “A lack of motivation and/or poorer performance may introduce noise into the data and obscure effects that may have been significant otherwise. Such effects become particularly problematic when experiments are conducted at different times of semester and the results are compared.”
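
A quick simulation illustrates the point about noise. This is a toy example under invented parameters, not the authors’ analysis: it simply shows that the same true group difference is detected less often when participants’ scores become more variable, as they apparently do late in the semester.

```python
# Toy power simulation: extra variability makes a fixed true effect
# harder to detect with a standard two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n_per_group, n_runs = 0.5, 20, 2000

def detection_rate(noise_sd):
    hits = 0
    for _ in range(n_runs):
        control = rng.normal(0.0, noise_sd, n_per_group)
        treatment = rng.normal(true_effect, noise_sd, n_per_group)
        if stats.ttest_ind(treatment, control).pvalue < 0.05:
            hits += 1
    return hits / n_runs

print("Detection rate, low variability (early semester):", detection_rate(1.0))
print("Detection rate, high variability (late semester):", detection_rate(1.8))
```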

One possible solution for researchers planning to compare findings across experiments conducted at different ends of a semester is to ensure that they only test paid participants. Unlike participants who are volunteering for course credit, those who are paid seem to have consistent performance and motivation across the semester.

_________________________________

Nicholls, M., Loveless, K., Thomas, N., Loetscher, T., & Churches, O. (2014). Some participants may be better than others: Sustained attention and motivation are higher early in semester. The Quarterly Journal of Experimental Psychology, 1-19. DOI: 10.1080/17470218.2014.925481

further reading
The use and abuse of student participants
Improving the student participant experience

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.