Wednesday, July 21, 2021

Dart-throwing monkeys

I don't particularly care for monkeys.  I remember a trip our family took to Bali when I was on active duty in the military.  We visited several local temples, and I vividly remember the monkeys at each temple that tried to steal sunglasses, cameras, food, and pretty much anything they could get their hands on quickly (which was everything).  Even to this day, my wife and I are haunted (perhaps a bit of an exaggeration) by our short hike through the Ubud Monkey Forest.  We were by ourselves, and there were literally hundreds of monkeys up in the trees following us as we walked along the path.  So yes, the monkey is not my favorite kind of animal, but wow, do they make great metaphors for leadership!

I've written about "The Tale of the Five Monkeys" (about the infamous "the way we do things around here" problem in organizations), lessons on motivation from monkey experiments, "The Invisible Gorilla" (which talks about inattentional blindness, with lessons on multi-tasking), "Chimpanzee Politics" (which involves group dynamics), and even about the anthropologist Jane Goodall and "The Power of One".  Just a few days ago, I expanded my previous discussion of invisible gorillas in my post, "Monkey Business".  Perhaps not too surprisingly, then, today I want to go back to my monkey metaphor to discuss the nature of expertise.

The psychologist Philip Tetlock has written two books (Expert Political Judgment and Superforecasting) on the science of prediction and forecasting.  Tetlock claims that experts are no better at predicting future events than dart-throwing chimpanzees!  His study was simple, yet elegant (and incidentally, for those of us in academia who have to deal with tenure, he once claimed, "The project dates back to the year I gained tenure and lost my generic excuse for postponing projects that I knew were worth doing, worthier than anything I was doing back then, but also knew would take a long time to come to fruition...").  Bryan Caplan summarized Tetlock's experimental plan as follows:

1. Ask a large, diverse sample of political experts to make well-defined predictions about the future.
2. Wait for the future to become the past.
3. See how accurate the experts' predictions were; and figure out what, if anything, predicts differences in experts' degree of accuracy.

Tetlock selected 284 people whose job called for "commenting or offering advice on political and economic trends" and started asking them to make different kinds of predictions.  These predictions generally involved changes in nations' borders, economies, or political leadership; economic growth (measured by changes in the Gross Domestic Product); stock-market closing prices; and exchange rates.  After these experts made their predictions, Tetlock simply waited.

By the end of his 20-year study in 2003, Tetlock's group of experts had made over 82,000 predictions.  He evaluated the accuracy of their predictions against three alternative outcomes - the status quo, more of something (e.g., a rise in GDP), or less of something (e.g., a fall in GDP).  The results were not very impressive.  Tetlock concluded that experts are relatively poor forecasters.  In fact, his so-called experts actually performed worse than if they had simply assigned an equal probability to each outcome (e.g., GDP stays the same 1/3 of the time, increases 1/3 of the time, or decreases 1/3 of the time).  In other words, his group of experts performed no better than a group of chimpanzees throwing darts at a board displaying the different outcomes (don't worry, he didn't actually make chimpanzees throw darts!).  He published his study results in his book, Expert Political Judgment.
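To get a feel for that "dart-throwing" baseline, here is a minimal simulation sketch (my own illustration, not Tetlock's actual scoring method, which used probability scores rather than simple hit rates).  It assumes a simplified world in which each of the three outcomes is equally likely, and shows that blind random guessing among them converges on a one-in-three hit rate - the benchmark the experts failed to beat:

```python
import random

random.seed(42)

OUTCOMES = ["status quo", "more", "less"]  # e.g., GDP unchanged, rises, falls
TRIALS = 100_000

# Simulate a "dart-throwing chimp": guess one of the three outcomes
# uniformly at random, in a world where each outcome is equally likely.
hits = 0
for _ in range(TRIALS):
    actual = random.choice(OUTCOMES)
    guess = random.choice(OUTCOMES)
    hits += (guess == actual)

print(f"Random-guess accuracy: {hits / TRIALS:.3f}")  # converges toward 1/3
```

The point of the comparison is that any forecaster with genuine insight should beat this floor comfortably; Tetlock's finding was that, on average, his experts did not.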

Tetlock also found an inverse relationship between the accuracy of an expert's predictions and his or her self-confidence, renown, and depth of knowledge.  He consistently found a point of diminishing returns, beyond which increasing expertise actually made the predictions worse.  In other words, the greater the expertise (either real, or imagined in the expert's own head), the worse the prediction!  This finding seems counter-intuitive, but it is actually not new.  Scott Armstrong calls it his "seer-sucker theory" of forecasting ("No matter how much evidence exists that seers do not exist, suckers will pay for the existence of seers").  Expertise beyond a minimal level is of limited value in making predictions about the world.

I will spend the next couple of blog posts talking about why experts fail to accurately predict the future. Tetlock also found that some experts are better than others at making predictions, which I will also discuss.  For now, let's just give the chimpanzees their day in the sun.  
