Philip E. Tetlock, Expert Political Judgment: How Good Is It? How Can We Know?
Why are professional prognosticators so consistently bad at predicting the future? We know this phenomenon most clearly from basic cable news, where credentialed experts prognosticate about how good, bad, or volatile the near future will be, usually in ways that support their ideology. But it manifests in other environments too: business professionals who fail to forecast economic trends, legislators who let lush opportunities slip away, inventors pushing questionable technology.
Canadian-American psychologist Philip Tetlock, currently at the University of Pennsylvania, asked himself this question immediately after Operation Iraqi Freedom. Mass-media oracles predicted either swift, easy victory, or else nigh-apocalypse. The reality reflected neither partisan extreme, but instead descended into the same quotidian brutality that has characterized American intervention since WWII. Why, Tetlock asks, were both sides so wrong, and why has nobody paid for their overconfidence?
He favors the metaphor of foxes and hedgehogs, which he pinches from Isaiah Berlin, though it’s far older. Foxes know many things, Tetlock writes; hedgehogs know one big thing. Mass-media operators love highly credentialed experts, especially on economics and world affairs. But those experts’ predictions are often only marginally better than those of committed dilettantes who read newspapers daily and remain informed. Further, the more advanced one’s credentials, the more marginal the gains.
So far, so good. Tetlock’s description essentially accords with our recent experiences of camera-friendly experts reliably whiffing their predictions. My problem arises when Tetlock transitions from describing to explaining. A consummate scholar, Tetlock is reluctant to say anything which he cannot support strictly from quantifiable evidence. And holy moly, does Tetlock have extensive and thoroughly documented evidence to deploy.
Let’s make something clear: despite his praise (often muted) for well-informed dilettantes, he writes for scholarly audiences motivated by deep research. He fortifies his prose with histograms, p-values, and confidence intervals. He spends several column inches breaking down the mathematical modeling which supports his conclusions, and he seldom goes beyond the evidence. He dedicates an entire chapter to anticipating and transcribing his critics’ likely counterarguments.
Philip E. Tetlock, Ph.D.
Tetlock briefly acknowledges, but doesn’t expand much upon, the reality of who receives attention. TV pundits, hero CEOs, civil rights activists, and tech bros all broadly favor certainty, volume, and swagger. Reliable predictors, drawing on diverse backgrounds and intellectual caution, can look timid on Sunday talk shows or in corporate board meetings. Put another way, saying wrong things confidently is more telegenic than trading in likelihoods, conditionals, and caution.
Unfortunately, Tetlock himself demonstrates this. He refuses to offer opinions without sourced evidence, and he refuses to offer evidence without lengthy discursions on mathematical variance. Because his status relies on measurable outcomes—what he terms “reputational bets”—he refuses to place everything on one spin of the roulette wheel. The product he thus creates is more likely to be accurate, but less compelling in a media-saturated “attention economy.”
He also omits something I consider vitally important. The principle of homophily means we’re more likely to spend time around people like ourselves. Scholars congregate with other scholars; journalists chill with media professionals; lawmakers drink with lawyers. We see this particularly in economics, the scholarly field least likely to cite sources from other disciplines: our environment discourages seeking differing influences, disconfirming evidence, or even a diverse friend network.
Invested dilettantes make reliable predictions, perhaps, because they see how hypothetical outcomes postulated in scholarly journals actually unfold in daily life. Unfortunately, to calculate his confidence intervals on reliable predictions, Tetlock generates a core sample of prognosticators who are, like himself, flush with academic credentials. If military historians predict one outcome for war, and generals predict another, maybe consult the enlisted men carrying weapons, not more historians and generals.
Rereading what I’ve written, I feel I’ve misrepresented Tetlock’s product. I like his thesis, that intellectual diversity trumps depth in creating reliable forecasts. Later chapters on public accountability are particularly promising, if underdeveloped. Especially in subsequent years (Tetlock’s first edition appeared in 2005), we’ve seen public experts become increasingly hostile to criticism or disconfirming sources. Doubt has become, not the precursor to better thinking, but a sign of disloyalty. Unsurprisingly, experts have become more likely to be wrong.
Considering my doubts, and new evidence since 2005, we could perhaps read this volume as a prolegomenon to further research. Tetlock himself co-wrote a subsequent volume, Superforecasting, which I’ve already resolved to read. But I feel it actually serves Tetlock’s thesis to suggest that future research should come from an interdisciplinary source, perhaps a public-private partnership. The future of the forecasting business is too valuable to entrust only to other forecasters.