The Challenger disaster, the Bay of Pigs fiasco, the botched invasion of Iraq … all these historical calamities have in common that they’ve been blamed on dud group decision making. Bang heads together, it seems, and you dull people’s minds. And yet there’s the almost-magic “Wisdom of Crowds” effect – average people’s verdicts together and you’ll arrive at a more accurate answer than any one person would have achieved on their own. How to solve this paradox? A new series of intriguing studies by Asher Koriat provides part of the answer, highlighting the roles played by people’s confidence and the type of problem they’re tackling.
Across five studies Koriat tasked dozens of participants with answering a mix of forced-choice questions – some were to do with visual attention (e.g. which of two displays of patterns includes an odd-one-out?); others were general knowledge (e.g. which of two European cities has the larger population?); and there were visual judgement questions (e.g. which of two squiggly lines is longer?). The participants were also asked to say how confident they were in each of their answers.
For each round of questions, Koriat paired up the participants "virtually". That is, the partners in a pair never actually interacted. But for each pair, Koriat followed the same rule, always taking the answer from whichever partner was more confident.
Over a series of questions, Koriat found that always taking the answer from the most confident partner in a pair led to superior performance for that series (69.88 per cent correct on average in one study) compared with always taking the answer from whichever individual had the most impressive overall performance (67.82 per cent correct). In other words, the more confident of two heads working together nearly always outperformed the most proficient individual working on their own. In the first study using visual patterns, this was true for 18 of the 19 dyads. In further analysis, taking the most confident answer from a virtual group of three led to even more impressive performance.
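The maximum-confidence rule described above can be sketched in code. This is an illustrative simulation, not Koriat's data or method: it assumes (as the results for typical questions suggest) that higher confidence tends to go along with a higher chance of being correct, and compares the dyad rule against a single partner answering alone.

```python
import random

# Illustrative sketch (not Koriat's actual data or procedure).
# Assumption: for typical questions, a partner's probability of
# being correct rises with their confidence.

def simulate(n_questions=1000, seed=0):
    rng = random.Random(seed)
    correct_dyad = 0    # questions the confidence rule gets right
    correct_alone = 0   # questions partner 0 gets right unaided
    for _ in range(n_questions):
        # Each virtual partner reports a confidence and an answer;
        # accuracy is tied to confidence by assumption.
        answers = []
        for _ in range(2):
            confidence = rng.uniform(0.5, 1.0)
            is_correct = rng.random() < confidence
            answers.append((confidence, is_correct))
        # The rule: always take the more confident partner's answer.
        correct_dyad += max(answers)[1]
        # Baseline: always take the first partner's own answer.
        correct_alone += answers[0][1]
    return correct_dyad / n_questions, correct_alone / n_questions
```

Under these assumptions the confidence-picked answer comes out ahead of the lone individual over a long series, echoing the pattern in the paper; the exact margin depends entirely on the assumed confidence–accuracy link.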
The strategy even worked for people working alone if they were given two chances, a week apart, to provide answers to a series of questions, as well as rating their confidence. Always taking the more confident of their answers led to superior performance overall and was more effective than simply averaging their two answers (see earlier Digest item: Unleash the crowd within).
But here’s the all-important caveat. This strategy of taking the answer of the most confident partner only worked for questions for which most people, “the crowd”, tend to get the answer right. When the questions were tricky and wrong-footed most people, then the rule was reversed. Take the example of “Which city has the larger population – Zurich or Bern?”. Most people get this question wrong – they think it’s Bern because that’s the capital city, but the correct answer is Zurich. For questions like this, the most effective strategy is actually to always take the answer of the dyad partner who is least confident (doing so beats the average score of the individual with the overall best performance).
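The reversal for wrong-footing questions can be sketched the same way. Again this is a hypothetical toy model, not Koriat's analysis: here confidence is assumed to track the intuitive-but-wrong answer (like "Bern" in the capital-city trap), so accuracy falls as confidence rises.

```python
import random

# Illustrative sketch (not Koriat's actual data or procedure).
# Assumption: for "trap" questions, more confident partners are
# MORE likely to give the popular-but-wrong answer.

def simulate_trap(n_questions=1000, seed=1):
    rng = random.Random(seed)
    least_conf_right = 0   # picking the LESS confident partner
    most_conf_right = 0    # picking the MORE confident partner
    for _ in range(n_questions):
        answers = []
        for _ in range(2):
            confidence = rng.uniform(0.5, 1.0)
            # Accuracy declines as confidence rises on trap questions.
            is_correct = rng.random() < (1.5 - confidence)
            answers.append((confidence, is_correct))
        least_conf_right += min(answers)[1]
        most_conf_right += max(answers)[1]
    return least_conf_right / n_questions, most_conf_right / n_questions
```

With the confidence–accuracy link flipped, deferring to the less confident partner wins over a long series, mirroring the reversed rule the paper reports for questions that wrong-foot the crowd.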
Reflecting on these new results, Ralph Hertwig at the University of Basel said there were two important, tantalising questions for future research – is it possible to categorise problems somehow into those that tend to wrong-foot the crowd, and those that don’t? Similarly, are there any cues that can be used to recognise in advance whether a problem is of the kind that the crowd gets right (in which case it’s best to go with the most confident team member) or wrong (if so, go with the least confident member)?
Koriat, A. (2012). When are two heads better than one and why? Science, 336(6079), 360-362. doi:10.1126/science.1216549