One of the interesting things I've witnessed in the last couple of years is the effort of futurists and other professional forecasters, particularly financial forecasters, to incorporate behavioral economics and the findings of Expert Political Judgment into their work. The "predictably irrational" quality of decision-making, and the fundamental problem that our normal, popular assumptions about the relationship between expertise and utility break down when dealing with the future, have been hard to work through. Many futurists still ignore the work, or argue that they don't really predict, so the rationality or irrationality of decision-making doesn't affect their work.

A recent piece by investment advisor Mitch Tuchman stands as a good example of how financial professionals are struggling with the work: "in most pursuits where dynamic and multiple variables determine what will happen," he notes, "experts are not good at predicting the future. And to make matters worse, those who predict are rarely held accountable for their prognostications." He concludes by defending index investing as the only really trustworthy form of financial prediction (one can reasonably think of investments as predictions), and by dismissing attempts to forecast more precisely as "expensive and distracting noise."

Tuchman also points to an article by Christina Fang and Jerker Denrell, which argued that people who successfully forecast an extreme economic event tend to do worse as general forecasters than the average. As their abstract explains,

Successfully predicting that something will become a big hit seems impressive. Managers and entrepreneurs who have made successful predictions and have invested money on this basis are promoted, become rich, and may end up on the cover of business magazines. In this paper, we show that an accurate prediction about such an extreme event, e.g., a big hit, may in fact be an indication of poor rather than good forecasting ability.

As the Financial Times gloomily summarizes,

economists who tend to predict near the consensus are, by definition, unlikely to anticipate extreme events, while those who correctly predict the occasional Black Swan tend to get everything else wrong (or most everything else).

Unfortunately, when it comes to economic forecasting, there’s really nowhere to turn, as the consensus view tends to miss even cyclical, non-Black Swan recessions.
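To see why this selection effect can arise, here is a minimal, purely illustrative simulation in Python. It is not Fang and Denrell's own model; the parameters (the crash threshold, the number of crash periods, the spread of forecaster "boldness") are arbitrary assumptions chosen only to make the effect visible. The idea: forecasters who make bolder, higher-variance calls are far more likely to have "called" a crash at some point, yet their average accuracy across all periods is worse than that of forecasters who stay near the consensus.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FORECASTERS = 10_000
N_PERIODS = 40
EXTREME = -3.0  # threshold (in std units) below which an outcome counts as a "crash"

# Each forecaster has a fixed "boldness": how far their calls stray from the consensus.
# These bounds are arbitrary, chosen only for illustration.
boldness = rng.uniform(0.2, 3.0, size=N_FORECASTERS)

# Realized outcomes: mostly ordinary noise around the consensus expectation of zero,
# with three periods forced to be crashes so that extreme events definitely occur.
outcomes = rng.standard_normal(N_PERIODS)
outcomes[:3] -= 5.0

# Forecasts: consensus (zero) plus noise scaled by each forecaster's boldness.
forecasts = rng.standard_normal((N_FORECASTERS, N_PERIODS)) * boldness[:, None]

# Average absolute forecast error per forecaster, across all periods.
abs_error = np.abs(forecasts - outcomes).mean(axis=1)

# A forecaster "called" a crash if they predicted below the threshold
# in a period where the outcome actually fell below it.
crash_periods = outcomes < EXTREME
called_crash = ((forecasts < EXTREME) & crash_periods).any(axis=1)

print("mean abs error, called a crash: ", abs_error[called_crash].mean())
print("mean abs error, never called one:", abs_error[~called_crash].mean())
```

Under these assumptions, the forecasters who correctly called a crash show a noticeably higher average error than those who never did, simply because calling a crash at all selects for the bold, high-variance forecasters.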

The open question is whether an industry in which forecasting can be immensely important can develop incentives to improve forecasting practice. Financial advisors' decisions are pretty well-documented, and there are clear incentives for making advisors' track records as accessible as baseball players' statistics. Of course, there are clear disincentives as well. But forecasting is a site in which questions of expertise, transparency, and fairness converge, and so arguably it's as important to have good expert advice, and to know the value of that expertise, as it is to have good standards agencies or regulators who are immune to cognitive regulatory capture.