David Orrell, whose work I find very stimulating (and whom I had the pleasure of meeting on my last trip to Oxford), has an essay in the January/February Literary Review of Canada. In theory it's a review of Florin Diacu's Megadisasters: The Science of Predicting the Next Catastrophe, but like many a good review, it uses the book as a launching point to talk about the bigger issues the book raises.

In this case, it's an especially useful summary of Orrell's basic– and among scientists, somewhat controversial– argument about the limits of prediction.

[E]quations are useful tools for describing and understanding extreme events such as earthquakes or tsunamis (a worthy goal in itself), [but] as far as I can see, none of the scientific models can reliably predict them….

This points to a basic problem with the Newtonian approach to prediction: despite its eminent logic, it just does not seem to work very well when applied to complex systems of the type we really want to know about, such as weather, the economy or our own health…

In economics, the inability to predict the future was explained away in the 1960s by the efficient market hypothesis. It saw the market as a kind of deity whose short-term motions no one can anticipate, but held that the long-term risk could still be modelled by equations. The flaws in this theory became increasingly obvious, as the risk models missed even the chance of the credit crunch, and in fact played a large role in making it happen.

In weather forecasting, lack of prediction was explained by chaos theory and by the “butterfly effect.” According to this theory, the atmosphere is so unstable and chaotic in the short term that a butterfly flapping its wings can later cause a hurricane on the other side of the world; but again, long-term prediction of the climate is assumed to be possible. However, while the atmosphere certainly has some unstable dynamics, experiments show they are hardly its defining feature.

In my opinion, both the efficient market and the butterfly effect are fig leaves that explain away forecast error, while allowing scientists to retain some of their oracular authority for longer-term predictions, which are safe because they are for the distant future. The real reason for our lack of forecast ability in all these areas, I believe, is simply that our traditional modelling approach does not work when applied to complex organic systems. These systems tend to be dominated by emergent properties, which by definition cannot be modelled or predicted from knowledge of the components.

I'm not nearly enough of a mathematician to assess the validity of Orrell's more technical claims, but I find several elements of his critique of scientific prediction especially challenging.

One is that we're not fooled by randomness, but by emergence. Nassim Taleb's arguments about black swans may not be right– or at least, we need to settle whether the big, unexpected phenomena we want to understand are the result of randomness (and thus completely unpredictable), or the product of emergent phenomena (which are beyond our capacity to reliably model, but which, as I understand it, are in theory computationally tractable). It doesn't make much difference in the moment whether a crisis is caused by randomness or complexity, but over the long run you want to figure these things out.
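To make the emergence half of that distinction concrete, here's a toy sketch of my own (nothing from Orrell or Taleb): Conway's Game of Life, a system whose component rules are simple, local and fully known, yet whose collective behavior, like gliders crawling across the grid, is something you essentially have to simulate to discover.

```python
# A toy illustration (mine, not Orrell's or Taleb's): Conway's Game of Life.
# Every rule is local, deterministic, and fully known, yet structures like
# the glider "emerge" -- the practical way to learn what a configuration
# does is to run it step by step.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A glider: five cells whose collective behaviour (it crawls diagonally,
# repeating every 4 steps) is nowhere to be found in the rules themselves.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for generation in range(8):
    state = step(state)

# After 8 generations the same five-cell shape reappears, shifted by (2, 2).
shifted = {(x + 2, y + 2) for (x, y) in glider}
print(state == shifted)  # True
```

There's no randomness anywhere in that system, and yet knowing the rule for a single cell tells you almost nothing about gliders; that's the sense in which emergence, rather than randomness, may be what fools us.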

The second is that there appears to be an inverse relationship between a model's power to explain the past and its power to predict the future. The more time you spend tweaking a model to fit the bumps in historical data, and the better the model becomes at reproducing the past, the weaker it becomes as a predictive tool. In contrast, relatively simple models that do a poor job of reproducing past data can sometimes do a better job of prediction. This has the potential to undermine a whole structure of argument among futurists– and among historians, come to think of it– that contends that historical understanding is useful for making sense of the future. It may be that our models for understanding the past aren't complex enough to fall prey to Orrell's Paradox (I coined the Nunberg Error, so I might as well try again), but it makes me wonder whether there are ways to refine how we use historical thinking and models to make sense of the future that avoid this problem.
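Here's a toy version of that trade-off, again my own sketch rather than anything in Orrell or Diacu: fit the same noisy "history" with a simple model and with one flexible enough to chase every bump, then ask both to call the part of the series they haven't seen.

```python
# A toy illustration (mine, not Orrell's): the better a flexible model fits
# the "past", the worse it tends to do on the "future" it was meant to predict.
import numpy as np

rng = np.random.default_rng(42)

# A noisy history: a simple upward trend plus random bumps.
x = np.linspace(0.0, 1.0, 60)
y = 2.0 * x + rng.normal(0.0, 0.2, x.size)

past = x < 0.7    # the data we get to see and tweak against
future = ~past    # the data we claim to be forecasting

def rmse(model, mask):
    """Root-mean-square error of a fitted model on part of the series."""
    return float(np.sqrt(np.mean((model(x[mask]) - y[mask]) ** 2)))

# A simple model (straight line) vs. one tuned to every historical bump.
simple = np.polynomial.Polynomial.fit(x[past], y[past], deg=1)
wiggly = np.polynomial.Polynomial.fit(x[past], y[past], deg=9)

print(f"simple: past RMSE {rmse(simple, past):.3f}, future RMSE {rmse(simple, future):.3f}")
print(f"wiggly: past RMSE {rmse(wiggly, past):.3f}, future RMSE {rmse(wiggly, future):.3f}")
# Typically: the wiggly model wins on the past and loses badly on the future.
```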

The third question this raises is: how do you talk about big future problems like climate change without leading people to believe that, since the future is unknowable, we don't need to think about it or act in ways to improve it? I can see the argument for creating an alternative to the IPCC's work, for example– given the sensitivity of the models to initial conditions, and so on– but with environmental issues we no longer seem to have a way to think about things that scientists don't fully understand, but which still appear to be happening and need to be dealt with. It's almost like arguing that because the efficient market hypothesis doesn't work, we're not in a global recession.

None of this makes me think we should give up forecasting– the ability to think about the future is one of the things that makes us human, as Daniel Gilbert put it (though that may not be true)– but we need to think about how to improve it; how to do so without imitating (and reproducing) the errors of scientific prediction; and how to do so in ways that don't ultimately work to the detriment of our audiences by encouraging passivity in the face of uncertainty.