There are various issues on which there is a scientific consensus but great public controversy, such as anthropogenic climate change and the safety of vaccines. One previously popular explanation for this mismatch was that an information deficit among the public was to blame: give people all the facts, according to this perspective, and the public will catch up with the scientists. Yet time and again, that simply hasn’t happened.
A new paper in Thinking and Reasoning explores the roots of this problem further. Emilio Lobato and Corinne Zimmerman asked 244 American university students and staff whether they agreed with the scientific consensus on climate change, vaccines, genetically modified (GMO) foods and evolution; to give their reasons; and to say what would convince them to change their position.
Past research has already done a good job of identifying the individual characteristics – such as having an analytical thinking style and being non-religious – that tend to correlate with accepting the scientific consensus, but this is the first time that researchers have systematically studied people’s open-ended reasoning about controversial scientific topics. The results show that for many people, there are certain issues for which the truth is less about facts and more about faith and identity.
Let’s start with a quick multiple-choice test about multiple-choice tests: when designing them, should you a) avoid using complex questions, b) have lots of potential answers for each question, c) all of the above or d) none of the above? The correct answer is (a), though as we’ll see, this was not a very well-crafted multiple-choice question.
The question of how best to design multiple-choice questions matters because they have been popular in both education and business settings for many years: they are quick to administer, easy to mark and grade, and many students report preferring them over other test formats.
As well as being useful assessment tools, well-designed multiple-choice tests can also aid learning, thanks to the Testing Effect – the way that retrieving knowledge helps consolidate it in memory.
Thankfully, in a recent paper in the Journal of Applied Research in Memory and Cognition, Andrew Butler of Washington University in St. Louis has reviewed the parallel literatures on how best to design multiple-choice tests for learning and assessment, and from this he recommends six evidence-based tips:
Psychology as a scientific field enjoys tremendous popularity throughout society – a fascination that could almost be described as religious. This is likely why it is one of the most popular undergraduate majors in European and American universities. At the same time, it is not uncommon to encounter the firm opinion that psychology in no way qualifies as a science. Such extremely critical opinions about psychology are often borrowed from authorities – after all, it was none other than the renowned physicist and Nobel laureate Richard Feynman who, in his famous 1974 commencement address, compared the social sciences, and psychology in particular, to a cargo cult. Scepticism toward psychological science can also arise from encounters with the commonplace simplifications and myths spread by pop-psychology, or from a failure to understand what science is and how it resolves its dilemmas.
According to William O’Donohue and Brendan Willis of the University of Nevada, these issues are further compounded by undergraduate psychology textbooks. Writing recently in Archives of Scientific Psychology, they argue that “[a] lack of clarity and accuracy in [psych textbooks] in describing what science is and psychology’s relationship to science are at the heart of these issues.” The authors based their conclusions on a review of 30 US and UK undergraduate psychology textbooks, most updated in the last few years (general texts and others covering abnormal, social and cognitive psych), in which they looked for 18 key contemporary issues in philosophy of science.
A lot of us use what we consider normal behaviour – based on how we think most other people like us behave – to guide our own judgments and decisions. When these perceptions are wide of the mark (known as “pluralistic ignorance”), this can affect our behaviour in detrimental ways. The most famous example concerns students’ widespread overestimation of how much their peers drink alcohol, which influences them to drink more themselves.
Now a team led by Steven Buzinski at the University of North Carolina at Chapel Hill has investigated whether students’ pluralistic ignorance about how much time their peers spend studying for exams could be having a harmful influence on how much time they devote to study themselves. Reporting their findings in Teaching of Psychology, the team did indeed find evidence of pluralistic ignorance about study behaviour, but some of its effects were the direct opposite of what they expected.
Given how important maths skills are in everyday life, it is vital that we develop ways to reliably identify those children with particular learning difficulties related to maths (known as “specific learning disorder in mathematics”/SLDM or dyscalculia) so that they can be provided with appropriate support. Unfortunately, maths-related learning problems are far less understood and recognised compared with similar problems related to reading and language.
A recent study in the British Journal of Psychology highlights this issue, being the first to estimate the prevalence of SLDM/dyscalculia in primary-school-aged children using contemporary criteria (as outlined by the American Psychiatric Association in the latest version of its diagnostic manual). The results provide much-needed data on this topic, revealing some worrying facts as well as useful insights for policy.
Educational neuromyths include the idea that we learn more effectively when taught via our preferred “learning style”, such as auditory or visual or kinesthetic (hear more about this in our recent podcast); the claim that we use only 10 per cent of our brains; and the idea we can be categorised into left-brain and right-brain learners. Belief in such myths is rife among teachers around the world, according to several surveys published over the last ten years. But does this matter? Are the myths actually harmful to teaching? The researchers who conducted the surveys believe so. For instance, reporting their survey results in 2012, Sanne Dekker and her colleagues concluded that “This [belief in neuromyths] is troublesome, as these teachers in particular may implement wrong brain-based ideas in educational practice”. (Full disclosure: I’ve made similar arguments myself.)
But now this view has been challenged by a team at the University of Melbourne, led by Jared Horvath, who have pointed out that this is merely an assumption: “Put simply,” they write in their new paper in Frontiers in Psychology, “there is no evidence to suggest neuromyths have any impact whatsoever on teacher efficacy or practice”.
Horvath’s team tested the assumption that belief in neuromyths harms teaching by comparing belief in the neuromyths among 50 award-winning teachers from the UK, USA and Australia with the belief in these same myths shown by hundreds of trainee and non-award-winning teachers (as recorded in the earlier surveys). The logic: if belief in neuromyths has an adverse effect on teaching, then presumably the award-winning teachers will endorse the myths at significantly lower rates than their less celebrated counterparts.
This is Episode 13 of PsychCrunch, the podcast from the British Psychological Society’s Research Digest, sponsored by Routledge Psychology. Download here.
Can psychology help us to learn better? Our presenter Christian Jarrett discovers the best evidence-backed strategies for learning, including the principle of spacing, the benefits of testing yourself and teaching others. He also hears about the perils of overconfidence and the lack of evidence for popular educational ideas like “learning styles” and “brain gym”.
Stimulants available on prescription such as Adderall improve cognitive functioning as well as attention in people with ADHD, but many students without this condition also take them, believing that they will act as “smart drugs” and boost their cognition, and so their academic performance. The limited research to date into whether this is actually the case has produced mixed results. A new double-blind pilot study of healthy US college students, published in Pharmacy, found that though Adderall led to minor improvements in attention, it actually impaired working memory.
Teaching, it has often been said, is the one profession that creates all other professions. It is therefore important that we learn how to do it well. How teachers learn from each other is likely to be an important part of this, especially how they discern each other’s expertise and whether they are inclined to seek advice and help from the most able.
A team led by James Spillane at Northwestern University has published a study in Educational Evaluation and Policy Analysis that looks into these teacher behaviours. The researchers employed a mixed-method approach that spanned five years and involved staff from fourteen different primary schools in the US. This included surveys and interviews to explore how maths teachers conceptualised expert teaching, followed by an analysis of student test scores alongside teachers’ self-reported interactions with their colleagues, to assess whether expert teachers behave differently from their peers.
We all know someone who is convinced their opinion is better than everyone else’s on a topic – perhaps, even, that it is the only correct opinion to have. Maybe, on some topics, you are that person. No psychologist would be surprised that people who are convinced their beliefs are superior think they are better informed than others, but this fact leads to a follow-up question: are people actually better informed on the topics for which they are convinced their opinion is superior? This is what Michael Hall and Kaitlin Raimi set out to test in a series of experiments published in the Journal of Experimental Social Psychology.