As a futurist, I’m always interested in forecasts and in how many people, especially time-poor senior managers, rely on the opinions of experts whose outlooks are often inaccurate. So I was very interested to read about the work of Philip Tetlock on the Long Now site, and an excellent article by Paul Monk on it.
Tetlock is a psychologist who has assessed the predictions of experts. Essentially, he has demonstrated that increased expertise does not correlate with improved forecasting accuracy. Who the experts were and what they thought did not matter. What mattered was how they thought – their style of reasoning, their judgement.
Tetlock frames this with the analogy of foxes (who are sceptical and circumspect) and hedgehogs (who have one grand theory and the conviction that it will apply across many domains). Foxes avoid many of the hedgehogs’ big mistakes and are somewhat more accurate in shorter-term forecasts. Hedgehogs are prone to hubris and closed-mindedness, dismissing dissonant possibilities too quickly. But foxes don’t get off scot-free; they risk falling into the cognitive chaos of excessive open-mindedness.
I liked his view of scenario planning, the stage of depth beyond forecasting for futurists. He notes that scenario exercises often fail to open the minds of the hedgehogs, yet succeed in confusing the already open-minded foxes! It may be that we are better off acknowledging the uncertainty and hedging against it, rather than mapping it into detailed scenarios.
A key aspect, then, to overcoming the limitations of both foxes and hedgehogs is good judgement. This requires high cognitive skills and cultivating the art of overhearing – eavesdropping on the mental conversations we have with ourselves. This is incredibly difficult, and it needs a commitment to thinking about how we think – to examining our worldview and rethinking our assumptions.
Finally, both the Monk and Brand articles stress the importance of keeping score – using the scientific method to test the results, see what went wrong and learn from it. It is simply practising double-loop learning. But it rarely gets done.
Tetlock’s work also reminds me of that of Scott Armstrong, whom I heard present in Scotland in 2004. He noted that forecasts are more accurate when the people making them use simulated interactions (which work better with experts) or structured analogies (which work better without experts). His conclusion was that structured methods like these work far better than focus groups or simply gathering expert opinions.
The critical thing in this for me is to give myself permission to listen to that inner questioning voice, and to seek out others to question my assumptions. One of my major learnings came from a class that gave us permission to learn from each other, to highlight shortcomings and to freely question assumptions. The trusted conversation was rich and very empowering.