Everybody's an Expert: Putting Predictions to the Test

Philip Tetlock’s book Expert Political Judgment: How Good Is It? received a fascinating review in the New Yorker. The implications are fun to think about, and I think his thesis extends to "expert" opinion in the entrepreneurial world too.

Some of the highlights:

  • Tetlock reviewed some 83,000 expert predictions about political and economic trends; an astonishing number were wrong.
  • The expert-forecasting game devalues accuracy. After all, the more novel the prediction, the greater the forecaster's cachet.
  • "We are not natural falsificationists: we would rather find more reasons for believing what we already believe than look for reasons that we might be wrong. In the terms of Karl Popper’s famous example, to verify our intuition that all swans are white we look for lots more white swans, when what we should really be looking for is one black swan."
  • "Experts violate a fundamental rule of probabilities by tending to find scenarios with more variables more likely. If a prediction needs two independent things to happen in order for it to be true, its probability is the product of the probability of each of the things it depends on. If there is a one-in-three chance of x and a one-in-four chance of y, the probability of both x and y occurring is one in twelve. But we often feel instinctively that if the two events “fit together” in some scenario the chance of both is greater, not less."
  • "Plausible detail makes us believers. When subjects were given a choice between an insurance policy that covered hospitalization for any reason and a policy that covered hospitalization for all accidents and diseases, they were willing to pay a higher premium for the second policy, because the added detail gave them a more vivid picture of the circumstances in which it might be needed."
  • "Low scorers look like hedgehogs: thinkers who “know one big thing,” aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it,” and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess."
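The conjunction point above is easy to check numerically. Here is a minimal sketch (my own illustration, not from the book or review) that computes the product rule for the 1/3 and 1/4 example and confirms it with a quick Monte Carlo simulation:

```python
import random

# Conjunction of two independent events: P(x and y) = P(x) * P(y).
# Probabilities taken from the example in the excerpt above.
p_x, p_y = 1 / 3, 1 / 4
analytic = p_x * p_y  # 1/12, i.e. about 0.0833

# Monte Carlo check: draw both events independently and count how
# often they occur together.
random.seed(0)
trials = 100_000
joint = sum(
    1 for _ in range(trials)
    if random.random() < p_x and random.random() < p_y
)
print(f"analytic: {analytic:.4f}, simulated: {joint / trials:.4f}")
```

The simulated frequency lands near 1/12, well below either individual probability; a scenario requiring both events can never be more likely than its least likely part, however vividly the two "fit together."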