Last week, I posted about Daniel Kahneman’s NYT article explaining how confidence and accuracy appear to have little correlation when it comes to forecasting. Kahneman noted that his forecasts of soldiers’ leadership ability, generated from the personal observations of his assessment team, were only slightly more accurate than random guessing.
Kahneman’s notion echoes the research of Dr. Philip Tetlock, author of Expert Political Judgment and the basis for much of Dan Gardner’s book Future Babble. Over more than 20 years, Dr. Tetlock surveyed over 100 experts on a host of different issues, building a database of more than 27,000 predictions. Armed with this data, Tetlock conducted a thorough analysis of expert opinion and, like Kahneman, generally found that highly confident experts commonly cited in the media were less accurate than random guessing on any given prediction. Tetlock labeled these confident but off-base forecasters “Hedgehogs.” Meanwhile, Tetlock found that the more accurate predictors of future outcomes tended to have lower confidence in their predictions. Tetlock labeled these less confident but more accurate experts “Foxes.” Dan Gardner explains in Future Babble that “Foxes”:
“had no template. Instead, they drew information and ideas from multiple sources and sought to synthesize it. They were self-critical, always questioning whether what they believed to be true really was. And when they were shown they had made mistakes, they didn’t try to minimize, hedge, or evade. They simply acknowledged they were wrong and adjusted their thinking accordingly. Most of all, these experts were comfortable seeing the world as complex and uncertain—so comfortable that they tended to doubt the ability of anyone to predict the future.”
I believe Tetlock’s research provides valuable perspective for both policymakers and policy advisers. Policymakers often seek the counsel of experts and routinely put faith in expert analysis depending on the level of confidence expressed by the adviser. Yet, by Kahneman’s admission and Tetlock’s research, those advisers most confident in their predictions and prescriptions may in fact be less accurate than random guessing. Likewise, policy advisers (so-called experts) often feel pressured to appear aggressively confident when making their predictions in order to earn the respect of policymakers and to sustain their status amongst other experts. Essentially, when policymakers turn to experts, they are seeking certainty about an expert prediction as much as or more than the content of the prediction itself.
I’ve lamented many times on this blog my disdain for “Hedgehogs” vaguely predicting every potential scenario with high confidence. I’ll follow up soon with a part 3 related to the polling conducted here in May. Meanwhile, FORA hosts a great series of segments in which Tetlock presents some of his findings, and I’ll embed his introduction below.