By Alex Fradera
Imagine contemplating which treatment to undertake for a health problem. Your specialist explains there are two possibilities, and strongly endorses one as right for you. But when you discuss it with a friend, she suggests that based on what she’s heard, the other would be better. Another friend, the same. And another. Does there come a point where the friends outweigh the expert? Given enough information – the expert’s past accuracy, the degree to which the public have any insight on the issue – you can in theory “solve” this question mathematically with a probabilistic model. In fact, according to new research published in Thinking and Reasoning, that’s exactly what we do intuitively, and with a high degree of accuracy.
In a collaboration between Radboud University Nijmegen and University College London, Jos Hornikx’s team asked 146 Dutch adults – most educated to graduate level, with an average age of 31 – to evaluate five scenarios, each involving a public policy decision, such as whether creating car-free zones in a city would affect the number of shop customers. In each scenario, an expert (in this case, a professor of retail marketing) had given a recommendation for or against the proposal, and participants were told their track record – e.g. they had been correct in 80 per cent of previous zoning decisions.
Each participant was also told that a subset of the public – for example, local road workers (with no formal training in the topic) – had disagreed with the expert view. Participants were told how much these non-experts had insight into the relevant issues – this varied from participant to participant, with some told the non-experts had been correct only 51 per cent of the time, others as high as 60 per cent. Participants were then asked how many non-experts would need to hold a dissenting view to outweigh the single expert.
The team built a probabilistic model using Bayes’ theorem of belief revision: this shows how much you should shift your initial opinion in the face of new contradictory information. In the current context, the model specified exactly how many laypeople it should take to counterbalance an expert, at each combination of accuracy levels. The researchers then evaluated whether human intuition came close to this ideal. It turns out it does.
For instance, in the zoning scenario, participants judged that when laypeople had only the barest insight – they’d been right just 51 per cent of the time in the past – you would need a group of 40 expressing the same dissenting opinion to counter a single expert with an 80 per cent accuracy record, a judgment in the ballpark of what the statistical model recommended. When the lay group was said to be more accurate, at 55 or 60 per cent, participants judged that just ten in agreement would outweigh the expert – again, in line with the ideal model. The only place participants really deviated from the ideal was a scenario in which the expert was said to have extremely high accuracy (99.99 per cent), which theoretically outmatches even a few hundred low-insight laypeople. Here participants judged that 50 non-experts would be enough, suggesting we find it hard to grasp just how much evidence such near-certainty represents.
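The logic behind these break-even numbers can be sketched in a few lines. The snippet below is a minimal illustration, not the authors’ actual model: it assumes each opinion counts as an independent piece of evidence, with a source of accuracy p contributing a log-likelihood ratio of log(p / (1 − p)) toward the option it endorses; the function name is my own.

```python
import math

def laypeople_to_outweigh(expert_acc: float, lay_acc: float) -> int:
    """Smallest number of independent dissenting laypeople whose combined
    evidence just outweighs one expert's recommendation, via Bayes' theorem."""
    # Evidence (log-likelihood ratio) contributed by each source's verdict.
    expert_llr = math.log(expert_acc / (1 - expert_acc))
    lay_llr = math.log(lay_acc / (1 - lay_acc))
    # Dissenters needed so their combined evidence exceeds the expert's.
    return math.ceil(expert_llr / lay_llr)

# An 80%-accurate expert vs 51%-accurate laypeople: the model asks for
# roughly 35 dissenters, close to the 40 that participants estimated.
print(laypeople_to_outweigh(0.80, 0.51))    # 35

# A 99.99%-accurate expert takes a couple of hundred low-insight dissenters.
print(laypeople_to_outweigh(0.9999, 0.51))  # 231
```

Note how quickly the required group shrinks as lay accuracy rises: at 60 per cent accuracy, only a handful of dissenters suffice against the 80 per cent expert, consistent with the participants’ judgments described above.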
The findings suggest we should stop talking about the public and experts as if we have to put our faith in one or the other. Given reasonable context and space to think, we can weigh both in our judgments. It may be rhetorically convenient for advocates to trash experts, or pour scorn on the public, but we can sidestep those claims and draw on both everyday and specialised wisdom to inform our decisions.
Image: The then Justice Secretary Michael Gove gives a speech at the “Vote Leave” campaign headquarters in Westminster on June 8, 2016 in London, England (Photo by Jack Taylor/Getty Images).