Sunday, December 15, 2019

Experts' predictions and dart-throwing chimpanzees

Niall Ferguson, the American-based British television historian, wrote in The Sunday Times of December 8th that he feared a hung British parliament "despite the opinion polls," because the Labour Party had a stronger social media presence.

Philip Tetlock, professor of psychology and management at the University of Pennsylvania, saw a 10% chance that Ferguson was right. Tetlock has been studying forecasting since the 1980s, and in his 2005 book, 'Expert Political Judgment: How Good Is It? How Can We Know?', he reported on a study in which he had tracked 284 people who made their living "commenting or offering advice on political and economic trends" and who made 82,361 forecasts over a 20-year period.

The study found that the average expert was only slightly more accurate than a dart-throwing chimpanzee; many experts would have done better making random guesses. Prof Tetlock noted that the Greek poet Archilochus had observed some 2,700 years ago, "The fox knows many things, but the hedgehog knows one big thing."

In his famous 1953 essay “The Hedgehog and the Fox,” Isaiah Berlin contrasted hedgehogs, who “relate everything to a single, central vision,” with foxes, who “pursue many ends connected...if at all, only in some de facto way.” It was an early portrait of specialists and generalists at work.

Louis Menand, the Harvard academic, noted in his review of Tetlock's book in The New Yorker that "people who make prediction their business — people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables — are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be."

In June 2016, The Washington Post wrote that during an 8-day span in April of that year, 602 pundits appeared on the Fox News, MSNBC and CNN cable networks in the US.

Pundit chancers on TV can get away with bullshit while Wall Street analysts hide behind the "40% rule."

For example, Citigroup analyst Jim Suva wrote that there’s a 40% chance Apple buys Netflix, and Nomura Holdings economist Lewis Alexander said there’s a 40% chance Nafta gets ripped up.

Peter Tchir, a market strategist at Academy Securities, explained the appeal of the hedge to The Wall Street Journal in 2018, commenting on a prediction that the Dow Jones Industrial Average had a 40% chance of hitting 30000 before year-end.

“Get it right and you can say ‘See, I was telling everyone it could happen,’ ” he said. “Get it wrong and you can weasel your way out: ‘I didn’t say it was likely, I just said it was a strong possibility.’ ”
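Part of what makes the 40% figure so slippery is that no single probabilistic call can ever be shown wrong; only a long track record can. A minimal Python sketch, with made-up data, of how such a record could be checked for calibration:

```python
# Hypothetical illustration: one "40% chance" call can never be falsified,
# but a long record of such calls can be checked for calibration.
from collections import defaultdict

# (stated probability, did the event actually happen?) -- invented data
track_record = [
    (0.4, False), (0.4, True), (0.4, False), (0.4, False), (0.4, True),
    (0.7, True), (0.7, True), (0.7, False), (0.1, False), (0.1, False),
]

# Group outcomes by the probability the forecaster stated
buckets = defaultdict(list)
for prob, happened in track_record:
    buckets[prob].append(happened)

# Compare stated probability with observed frequency in each bucket
for prob in sorted(buckets):
    outcomes = buckets[prob]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"said {prob:.0%}: happened {hit_rate:.0%} of the time ({len(outcomes)} calls)")
```

A well-calibrated forecaster's 40% calls come true about 40% of the time; the weasel room disappears once many predictions are scored together.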

Superforecasting

In his most recent book, 'Superforecasting: The Art and Science of Prediction,' published in 2015 and written with the Canadian journalist Dan Gardner, Prof Tetlock and his team focused on how forecasters can improve their accuracy. Tetlock secured the cooperation of the Intelligence Advanced Research Projects Activity (IARPA), a research agency within the US intelligence community, for a forecasting tournament that ran between 2011 and 2015.

Tetlock's team comprised more than 2,000 curious, non-expert volunteers under the banner of the Good Judgment Project. Working in teams and individually, the volunteers were asked to forecast the likelihood of various events of the sort intelligence analysts try to predict every day ("Will Saudi Arabia agree to OPEC production cuts by the end of 2014?" was one example).

The results, drawn from over 20,000 forecasts, were surprising.

The Good Judgment Project's predictions were 60% more accurate in the first year than those of a control group of intelligence analysts, and results in the second year were even better, with an almost 80% improvement over the control group.
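Tetlock's tournaments measured this kind of accuracy with the Brier score, the mean squared difference between forecast probabilities and actual outcomes (0 is perfect; lower is better). A minimal sketch with hypothetical forecasts:

```python
# Brier score: mean squared error between forecast probabilities and
# outcomes (1 = the event happened, 0 = it did not). Lower is better.
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 0, 1]                  # what actually happened (made up)
cautious = [0.6, 0.3, 0.4, 0.7]          # hedged but well-directed forecasts
bold     = [0.9, 0.8, 0.1, 0.2]          # confident, and often wrong

print(brier_score(cautious, outcomes))   # 0.125
print(brier_score(bold, outcomes))       # 0.325
```

The hedged but well-directed forecaster scores far better than the confident one who is often wrong, the same asymmetry that lets cautious foxes beat bold hedgehogs.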

Two per cent of the participants stood out from the group as spectacularly more prescient. They included Bill Flack, a retired US Department of Agriculture employee from Nebraska.

The book describes the cognitive traps that most people fall victim to all the time. A common forecasting mistake that superforecasters never make is substituting an easy question for a difficult one: asked, say, whether a regime will fall within a year, most of us quietly answer the easier question of whether we disapprove of the regime.

Prof Tetlock wrote that superforecasters are naturally aware of the tricks the mind can play to make itself “certain” of something, as the human brain is wont to do. They are also ready to change their forecasts when new evidence emerges, without any fear of damaging their egos.
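One standard way to formalize that habit of small, frequent updates (the book discusses Bayesian belief updating) is Bayes' rule in odds form: multiply the prior odds by the likelihood ratio of the new evidence. A small Python sketch with hypothetical numbers:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# All numbers are invented, for illustration only.
def update(prior_prob, likelihood_ratio):
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.10            # initial forecast: 10% chance of the event
p = update(p, 3.0)  # evidence judged 3x likelier if the event is coming
print(round(p, 3))  # 0.25
p = update(p, 0.8)  # mildly contrary evidence nudges the forecast back down
print(round(p, 3))  # 0.211
```

The point is not the arithmetic but the temperament: each piece of news moves the number a little, rather than triggering a wholesale change of story.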

To be sure, the superforecasters were found to be generally more knowledgeable than the general population, and they have higher-than-average IQs. Still, they were neither much more knowledgeable nor much more intelligent than forecasters not of the “super” variety. Instead, they were simply more apt to keep testing ideas, remaining in what Tetlock calls “perpetual beta” mode. They are careful without being spineless or indecisive, two sins for any self-respecting leader in today’s culture.

“The humility required for good judgment is not self-doubt — the sense that you are untalented, unintelligent or unworthy,” Tetlock writes in the book. “It is intellectual humility. It is a recognition that reality is profoundly complex, that seeing things clearly is a constant struggle, when it can be done at all, and that human judgment must therefore be riddled with mistakes.”

The superforecasters performed about 30% better than intelligence community analysts with access to classified information. The very best of the superforecasters were even able to beat prediction markets, where traders bet real money on futures contracts based on the same questions, by as much as 40%.

Why an Open Mind Is Key to Making Better Predictions: a transcript of a Knowledge@Wharton interview with Prof Tetlock, excerpted below.

Hedgehogs and foxes:

"Knowledge@Wharton: How do we get people to listen to the foxes when they might not be the sexiest or the most prominent or the people who we want to look at or listen to all the time?

Tetlock: That’s a bit of a dilemma. Imagine you are a producer for a major television show, and you have a choice between someone who’s going to come on the air and tell you … something decisive and bold and interesting — the eurozone is going to melt down in the next two years or the Chinese economy is going to melt down or there’s going to be a jihadist coup in Saudi Arabia. He’s got a big, interesting story to tell, and the person knows quite a bit and can mobilize a lot of reasons to support the doom-and-gloom prediction, say, on the eurozone or China or Saudi Arabia…. The person is charismatic and forceful and can generate a lot of reasons why he or she is right.

As opposed to someone who comes on and says, 'Well, on the one hand there’s some danger the eurozone is going to melt down. But on the other hand there are these countervailing forces. On balance, probably nothing dramatic is going to happen in the next year or so, but it’s possible that this could work.' Who makes better television? To ask the question is to answer it.

There is a preference for hedgehogs in part because hedgehogs generate better sound bites."

The Peculiar Blindness of Experts (The Atlantic): Credentialed authorities are comically bad at predicting the future. But reliable forecasting is possible. The article opens with Stanford biologist Paul R. Ehrlich, whose 1968 best seller, 'The Population Bomb,' insisted that it was too late to prevent a doomsday apocalypse resulting from overpopulation.