Alex Soojung-Kim Pang, Ph.D.

I study people, technology, and the worlds they make

Tag: future (page 1 of 2)

“The rise of long-term robots may be upon us”

We all know that computerized and automated trading has had a huge impact on short-term trading, simply by exploiting the speed of computers to react to market changes faster than humans can (or other, slightly slower computers); but could artificial intelligence have an effect on long-term trading?

AIs offer a new form of investment technology that, for the first time in 4,000 years, could give investors a truly game-changing edge that doesn’t rely so heavily on speed.

AIs will be able to consider — just as AlphaGo did for Go — deeper risks and even uncertainties of future markets, future growth and future scenarios and, in turn, provide investors with tools that can dramatically increase long-term returns. Looking out over a longer time horizon, an AI could learn to unearth factors that are truly material for an underlying company or asset and then base investment recommendations on the performance of those characteristics. It may take time for these performances to be realized, but that’s not a problem for long-term investors.
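
To make that idea a little more concrete, here is a toy sketch in Python (mine, not the article's): a model is trained on multi-year forward returns for a set of imaginary firms and then asked which candidate characteristics it found material. The factor names, the five-year horizon, and the random forest are all illustrative assumptions.

```python
# Toy sketch only: rank made-up candidate factors by how well they explain
# multi-year forward returns. The data, factor names, and model choice are
# all illustrative assumptions, not anything from the article.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_firms = 500
factors = ["rd_intensity", "insider_ownership", "leverage", "news_sentiment"]

X = rng.normal(size=(n_firms, len(factors)))
# Pretend world: only the first two factors are material at a 5-year horizon.
five_year_return = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=1.0, size=n_firms)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X, five_year_return)

for name, importance in sorted(zip(factors, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name:18s} importance = {importance:.2f}")
```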

Imagine then that these inferential tools become so powerful that asset managers stop relying on trading technologies for their edge and begin to rely on inferential technologies that extend the average holding periods of investments. Crazy? Well, as my postdoc at Stanford, Dane Rook, occasionally reminds me, we are surely nearing a hard limit in the speed of data transmission: Data can’t travel faster than the speed of light. However, is there any such equivalent upper bound to the inferential depth that is possible? It’s hard to say, but with enough data it may not be so!

The rise of long-term robots may be upon us, and that could be a catalyst for investment time horizons to reverse their current downward trend. That’d be a very good thing for the future of finance and, indeed, capitalism.

Source: Rise, Long-Term Robots. Rise! | Institutional Investor

Quote of the Day: John Kay on aggregating expert opinion

This is a couple years old, but still well worth reading:

The study of business is afflicted by confusion between the results of a survey of what people think about the world and a survey of what the world is really like. At another recent meeting I heard a platform speaker announce that 40 per cent of books would be electronically published by 2020. A pesky academic asked exactly what this number meant and what evidence it was based on. The speaker assured the audience that the number had been obtained in a survey by eminent consultants of the opinions of the industry’s thought leaders.

I imagine most of the thought leaders had no more idea than anyone else what the question implied, or what the answer was, and did not devote more than the briefest consideration to their response, so I am not surprised that the median answer was close to a half. If you want to know the future of publishing, you will learn more by peering into a crystal ball. It will at least give you time to think.
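
Kay's point is easy to reproduce with a small simulation (my illustration, not anything he ran): if the "thought leaders" have no real information and answer a percentage question more or less at random, the median of their answers will sit near a half no matter what the future of publishing actually holds.

```python
# Illustration of Kay's point, not his data: if respondents have no real
# information and guess a percentage more or less at random, the median
# answer lands near a half regardless of what publishing actually does.
import random

random.seed(1)

def survey(n_respondents=40):
    guesses = sorted(random.uniform(0, 100) for _ in range(n_respondents))
    return guesses[n_respondents // 2]  # the median "thought leader" answer

medians = [survey() for _ in range(1000)]
print(f"typical median answer: {sum(medians) / len(medians):.1f}%")  # ~50%
```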

[To the tune of Thom Yorke, “Analyse,” from the album The Eraser (a 3-star song, imo).]

Trip to Oxford

Last week I went to Oxford for a few days. I was giving a talk and had to be back for my daughter’s school play, so it was just a quick trip. I hope to make it back for a longer trip before too long.

Fortunately, Oxford was no longer buried under the show-stopping two inches of snow that had assaulted the nation the week before. By the time I got there the place was back to normal, so I was able to get around without any trouble.

[Image: Oxford, via flickr]

I arrived on Sunday afternoon, worked on my talk for most of the day, then went to a Lebanese restaurant for dinner and walked around afterwards. The restaurant was great, and doubtless I’ll go back there, but it has a bit of an Eastern Promises feel to it: I got the sense that there were plenty of things going on besides grilling lamb and making hummus (which was excellent, don’t get me wrong).

[Image: hummus appetizer, via flickr]

And I was by far the least swarthy person in the restaurant, which for me is an unusual state of affairs.

I stayed at the Royal Oxford, which was fine as always, though my room looked out at the central courtyard and the ventilation system was about two feet away from my window. But it was a pretty big room, so I guess it was an acceptable trade-off. My feelings about the bathroom design still hold, though: they fell down on the job during the renovation, made the bathtubs too tall, and made it hard to get in and out in a way that feels safe.

Monday was work, so after breakfast I spent most of the rest of the day actually doing what I went there to do. Monday night I had dinner at a rather nice French restaurant in Jericho, one of the neighborhoods of Oxford. I met up with David Orrell, the author of The Future of Everything and someone whose work I find quite interesting.

When I looked it up, it sounded like Jericho was a suburb of Oxford, and I imagined having to take a bus out there; but it turns out to be about a 5-minute walk from the center of town to the edge of the neighborhood. Apparently it started out as a working-class area (Oxford was actually a manufacturing center for a long time, in addition to being a university town), and recently has been gentrified.

[Image: Brasserie Blanc, via flickr]

Orrell is a very interesting character, a physicist who did some really interesting work on model error in meteorology, and now works in synthetic biology. We spent a couple hours at dinner, talking about prediction, futures, computer and mathematical models, and economics. One of the more interesting things he talked about was how simple models often do a poorer job of explaining the past than elaborate models (that to some degree are tailored to fit historical data), but do a better job of predicting the future. I’ve been turning over in my mind whether it’s possible to apply this to the kind of futures that I do. I’m usually sensitive to the complexity and contingency of human action and decisions, and that tends to make me assume that you can’t simply model human behavior in a usefully predictive way– that people’s interactions with scientific ideas and technologies aren’t quantifiable and computationally tractable.
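
Orrell's observation maps onto the familiar fitting-versus-forecasting trade-off, and a small numerical sketch makes it visible (synthetic data and my own toy example, not anything from his work): an elaborate model tailored to historical data explains the past better, while the simple model predicts the held-out "future" better.

```python
# Synthetic illustration of the point, not Orrell's work: tailor an elaborate
# model to the "past" and compare it with a simple one on the held-out "future".
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 40)
y = 3.0 * t + rng.normal(scale=0.5, size=t.size)   # noisy, basically linear history

past, future = slice(0, 30), slice(30, 40)          # fit on the past, test on the future

for degree, label in [(1, "simple (straight line)"),
                      (10, "elaborate (degree-10 polynomial)")]:
    coeffs = np.polyfit(t[past], y[past], degree)
    prediction = np.polyval(coeffs, t)
    past_mse = np.mean((prediction[past] - y[past]) ** 2)
    future_mse = np.mean((prediction[future] - y[future]) ** 2)
    print(f"{label:34s} past MSE = {past_mse:.3f}   future MSE = {future_mse:.3f}")
# The elaborate model explains the past better and predicts the future worse.
```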

Maybe this observation helps explain Bruce Bueno De Mesquita’s success. His method does well because of its formality and relative simplicity: he claims to be able to predict the outcomes of political negotiations or corporate power struggles with a pretty limited, specific amount of information. Of course, he also succeeds because he recognizes the limits of his model, and doesn’t push it into areas where it seems likely to fail. I’d like to think that there are no good models for predicting scientific and technological change because they’re too complex. But maybe I’m not looking hard enough for the simplicity.
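
For what it's worth, published descriptions of Bueno de Mesquita's method suggest it runs on a handful of inputs per actor: a position on an issue scale, influence, and salience. Even a deliberately crude caricature of that, which is all the sketch below is (the model and numbers are invented, not his), shows how little information such a forecast needs.

```python
# A deliberately crude caricature (not Bueno de Mesquita's actual model) of how
# little input such a forecast needs: each actor gets a position on a 0-100
# issue scale, an influence score, and a salience score. The names and numbers
# below are invented.
actors = [
    # (name, position, influence, salience)
    ("Ministry",   20, 0.9, 0.8),
    ("Opposition", 80, 0.6, 0.9),
    ("Industry",   55, 0.7, 0.5),
    ("Regulator",  40, 0.4, 0.7),
]

weights = [influence * salience for _, _, influence, salience in actors]
positions = [position for _, position, _, _ in actors]
forecast = sum(w * p for w, p in zip(weights, positions)) / sum(weights)
print(f"predicted settlement on the 0-100 issue scale: {forecast:.0f}")
```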

I don’t know if I’m on a lucky streak, or if I tend to gravitate unconsciously to books written by pleasant and generous people instead of self-righteous jerks– Andrew Parker was really a great person to have breakfast with– but David maintained my streak of having interesting meals with people I basically cold-call when I’m in Europe. One of the virtues of being American is that you can deploy a level of extroversion (or intrusiveness) when you travel and, so long as you don’t go overboard with it, people will forgive you for it. (I suspect that one of the keys to living abroad is figuring out when you really have to fit in with the local culture, and when you can get away with things because of Where You’re From.)

[Image: Oxford, via flickr]

After dinner I walked around a little, as is my custom when I’m on the road; but since I had to pack and be up very early to catch the bus to Heathrow, I decided not to stop at any of the fifty or so pubs I’d passed that inspired an “oh, that looks good, I’ll have to have a drink there sometime” reaction. Next time. And the time after that.

[Image: Oxford, via flickr]

Tuesday morning I was up at a punishingly early hour to get home. I’ve gotten in the habit of falling asleep to movies or music when I travel, and the night before I had for some reason put on a playlist of Michael Mann movies; so I drifted in and out of sleep to the sound of gunfire and a vague apprehension of beautifully-illuminated but sinister cityscapes. Then I got the X70 bus to Heathrow, had breakfast in the Red Carpet Club, and got on my plane.

[To the tune of Rob Dougan, “Clubbed to Death 2,” from the album Furious Angels (a 4-star song, imo).]

New blog: Future 2.0

I’ve started a new blog on the future of futures, called Future 2.0. Essentially the blog covers the territory I explored in my Futures 2.0 article (which is due out any day in the next issue of Foresight, by the way) and am still very interested in– basically, how we can use Web 2.0 technologies, ubiquitous computing, and science to make futures more perceptive and persuasive. It’s also the nucleus of a new enterprise I’m starting in order to realize some of the projects I suggested in the article.

The blog includes material I previously posted here and on IFTF’s Future Now (and its Typepad predecessor), but from here on will be the main place I post about futures-related subjects. Given that Signtific seems to be on indefinite hiatus, I didn’t want the stuff I wrote for Future Now to disappear down the memory hole and be lost (most of all to me!).

[To the tune of Howlin’ Wolf, “Moanin’ At Midnight,” from the album Chess Blues 1947-1967 (a 3-star song, imo).]

Bruce Sterling on EFG2WD

Bruce Sterling points out the parallels between the instructions I provide in the “Evil Futurists’ Guide to World Domination” (henceforth EFG2WD) and religion. Of course he’s right. I should have thought of it earlier.

It’s a little odd that Pang doesn’t seem to realize that he is describing religion here. His “evil futurist” is a morally-certain holy prophet with a scripture. Social figures of this sort carry out practically every tactic that Pang describes, and that scheme’s been working grandly for millennia.

But on the upside, this’ll be good for another dozen really dense footnotes citing works in the psychology of religion and apocalyptic prophecy literature. Win!

The Evil Futurists’ Guide to World Domination

The Evil Futurists’ Guide to World Domination: How to be Successful, Famous, and Wrong

You want to be a futurist, but something’s holding you back. Maybe you’re afraid of being wrong. Maybe you don’t have any ideas. Don’t worry. After years of exhaustive study, I’ve brought together ideas drawn from behavioral economics and neuroscience that will help you succeed without having to be right. All you have to do is follow the simple principles laid out below.

It’s more important to be certain than to be right! People love certainty. They crave it. In experiments, psychologists have shown that “[w]e tend to seek advice from experts who exhibit the most confidence – even when we know they haven’t been particularly accurate in the past.” We just can’t resist certainty.

Further, confidence and certainty aren’t things you arrive at after logical deliberation and reasoning: as UCSF neurologist Robert Burton argues in his book On Being Certain, certainty is a feeling, an emotion, and it has a lot less to do with logic than we realize. So go ahead and feel certain; if other people mistake that for being right, that’s their problem.

So no matter what you do, no matter what you believe, be certain. As Tetlock put it, in this world “only the overconfident survive, and only the truly arrogant thrive.”

Finally, for the moralist or logician in you, here’s this: even if you don’t believe what you’re saying, you could wrongly believe you’re wrong, and actually be right. Stranger things have happened.

Claim to be an expert: it makes people’s brains hurt! In a remarkable new study, Jan Engelmann and colleagues used fMRI to observe the brains of people who received expert advice during a financial simulation. They found that subjects thought differently about their decisions when they received the advice– even if it was bad advice– than when they worked on their own. As the researchers put it, “one effect of expert advice is to ‘offload’ the calculation of value of decision options from the individual’s brain.” Put another way, “the advice made the brain switch off (at least to a great extent) processes required for financial decision-making.”

No expertise? No problem! It’ll actually make your work more accurate if you say you’re an expert– if you’re certain that you’re an expert– but you actually aren’t.

Sounds counterintuitive, right? (Ed.: This is how you know I’m a successful futurist. I said what you didn’t expect. Now I’ll quote some Science to make my point.) In fact, as J. Scott Armstrong has shown over the last twenty or so years, advanced degrees and deep knowledge don’t make you a better forecaster or expert. Statistically, experts are hardly better at predicting the future than chimps throwing darts at a board. As Louis Menand put it, “The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge.”

At the same time, it’s perfectly natural to suffer from what Nassim Taleb calls “epistemic arrogance.” In all sorts of areas, we routinely overestimate our own certainty and breadth of knowledge, and underestimate what we don’t know. If you do that, you’re just like everyone else.

So knowing you’re not an expert should make you more confident in your work. And confidence is everything.

Have one simple idea that you apply to EVERYTHING! The future is complex, but you shouldn’t be. Philip Tetlock explained in Expert Political Judgment that there are two kinds of forecasting personalities: foxes, who tend to appreciate contingency and don’t make big claims, and hedgehogs, who have a hammer and see the whole world as a giant nail. Guess who wins. Having a single big theory, even if it’s totally outrageous, makes you sound more credible. Having a Great Idea also makes it easier for you to seem like a Great Visionary, capable of seeing things that others cannot.

Get prizes for being outrageous! It’s important to get quoted in the media. Being a futurist isn’t like being a doctor or lawyer: there are no pesky state boards, no certification tests, none of that. So how do potential clients figure out who to hire? Media attention is one way. As a resident scholar at a think-tank told Tetlock, “I woo dumb-ass reporters who want glib sound bites.”

So you need to set yourself apart from the pack, differentiate yourself from the competition. If you’re not beautiful, or already famous, the easiest way is to be counterintuitive, or go against the grain. Dissent is always safe, because journalists understand what to do with someone who’s critical of the conventional wisdom, and always want someone who can provide an Alternative View For Balance. There are few more secure places in a reporter’s Rolodex than that of the Reliably Unpredictable Contrarian.

There’s a success hiding in every failure! Let’s say you predicted that something would happen, and it hasn’t. Is your career over? Of course not. Tetlock found that after a certain point, expertise becomes a hindrance to effective forecasting, because experts are better able to construct erudite-sounding (or erudite-feeling) rationalizations for their failure. Here’s how to benefit from this valuable talent.

  • Make predictions that are hard to verify. Be fuzzy about timing: it’s always safest to say that something will happen in your lifetime, because by definition, you’re never around to take flak if you’re wrong.
  • Find similar events. Maybe you predicted that we’d all watch TV on our watches. Instead, we watch YouTube on our computers. That’s pretty close, right? Point proved.
  • Say reality came very close to your prediction. Canada almost went to war with Denmark. It was just the arrival of winter that prevented them from attacking each other over competing claims to the North Pole.
  • Those damned externalities. Your prediction would have come true if it hadn’t been for the economic downturn, which really messed up everything. (The beauty of this is that economic downturns now come with enough regularity to provide cover for just about everything– yet they’re still unpredictable.)
  • The future is just a little slow. Instead of derailing it, maybe that (unpredictable!) economic downturn has just put off the future you predict. The underlying dynamics are solid, it’s just that the timing is off (because of something you couldn’t have foreseen). Everything will get back on track once the Dow climbs above 20,000 again.
  • False positives show you care. If you’re working in an area where the stakes are high, it would be irresponsible NOT to be extreme. Take WMD in Iraq, for example. If experts hadn’t predicted that there were chemical weapons in Iraq, and there had been, the consequences would have been unthinkable. Better to be safe than sorry.

Regardless of which of these reasons you use, just remember this. You weren’t wrong. The world failed to live up to your prediction.

Don’t remember your failures. No one else will! We don’t remember our own failures because, well, in retrospect they weren’t failures. Experts retroactively assign greater certainty to forecasts they made that came true, and retroactively downgrade their assessments of competing forecasts. (Put another way, experts tend to suffer more from hindsight bias than average people, not less.) When we’re right, we get smarter, and other people get dumber.

Last but not least, remember that everybody has a track record, but no one knows what it is. As Tetlock put it, “We seek out experts who promise impossible levels of accuracy, then we do a poor job keeping score.” Make this work for you. And good luck.

For a while now, I’ve been working on a think-piece on what futures would look like if it started now: if instead of starting during the Cold War, in the middle of enthusiasm for social engineering, computer programming, and rationalistic visions of future societies, futures was able to draw on neuroscience and neuroeconomics, behavioral psychology, simulation, and other fields and tools.

One of the things I’ve kept coming back to is that, if you take seriously the criticisms or warnings of people like Nassim Taleb on the impossibility of prediction, Philip Tetlock and J. Scott Armstrong on the untrustworthiness of expert opinion, Robert Burton on the emotional bases of certainty, Gary Marcus and Daniel Gilbert on the mind, etc., you could end up with a radically skeptical view of the whole enterprise of futures and forecasting. Or, read another way, you end up with a primer for how to be an incredibly successful futurist, even while you’re a shameless fraud, and always wrong.

I’ve finished a draft of the serious article [PDF], so now it’s time for the next project: The Evil Futurists’ Guide to World Domination: How to be Successful, Famous, and Wrong. It would be too depressing to write a book-length study, so I’ll just post it here.

(This exercise is, by the way, an illustration of Pang’s Law, that the power of an idea can be measured by how outrageously– yet convincingly– it can be misused. Think of Darwin’s ideas morphing into Social Darwinism or being appropriated by the Nazis, or quantum physics being invoked by New Age mystics. And yes, I know Pang’s Law will never be as cool as the Nunberg Error, but I do what I can.)

Full essay in the extended post.

The citations are all real. But no, I don’t really mean a single word of it. Yet, I wonder….

The Evil Futurists' Guide to World Domination: How to be Successful, Famous, and Wrong

You want to be a futurist, but you're afraid of being wrong. Don't worry. Everyone has that concern at first. But here, I've brought together ideas drawn from a number of books and articles that will help you succeed without having to be right. All you have to do is follow the simple principles laid out below.

Be certain, not right. People love certainty. They crave it. In experiments, psychologists have shown that "[w]e tend to seek advice from experts who exhibit the most confidence – even when we know they haven’t been particularly accurate in the past." We just can't resist certainty.

Further, confidence and certainty aren't things you arrive at after logical deliberation and reasoning: as UCSF neurologist Robert Burton argues in his book On Being Certain, certainty is a feeling, an emotion, and it has a lot less to do with logic than we realize. So go ahead and feel certain; if other people mistake that for being right, that's their problem. But before too long, people who listen to you will become invested in believing that you're really an authority and know what you're talking about, and will defend your reputation to salvage their own beliefs.

So no matter what you do, no matter what you believe, be certain. As Tetlock put it, in this world "only the overconfident survive, and only the truly arrogant thrive."

Finally, for the moralist or logician in you, here's this: even if you don't believe what you're saying, you could wrongly believe you're wrong, and actually be right. Stranger things have happened.

Claim to be an expert: it makes people's brains hurt. In a remarkable new study, Jan Engelmann and colleagues used fMRI to observe the brains of people who received expert advice during a financial simulation. They found that subjects thought differently about their decisions when they received the advice– even if it was bad advice– than when they worked on their own. As the researchers put it, "one effect of expert advice is to 'offload' the calculation of value of decision options from the individual’s brain." Put another way, "the advice made the brain switch off (at least to a great extent) processes required for financial decision-making."

No expertise, no problem. It'll actually make your work more accurate if you claim to be an expert– if you're certain that you're an expert– but you actually aren't.

Sounds counterintuitive, right? (Ed.: This is how you know I'm a successful futurist. I said what you didn't expect. Now I'll quote some Science to make my point.) In fact, as J. Scott Armstrong has shown over the last twenty or so years, advanced degrees and deep knowledge don't make you a better forecaster or expert. Statistically, experts are hardly better at predicting the future than chimps throwing darts at a board. As Louis Menand put it, "The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge."

And it's perfectly natural to suffer from what Nassim Taleb calls "epistemic arrogance." In all sorts of areas, we routinely overestimate our own certainty and breadth of knowledge, and underestimate what we don't know. If you do that, you're just like everyone else.

So knowing you're not an expert should make you more confident in your work. And confidence is everything.

One simple idea may be one too many. The future is complex, but you shouldn't be. Philip Tetlock explained in Expert Political Judgment that there are two kinds of forecasting personalities: foxes, who tend to appreciate contingency and don't make big claims, and hedgehogs, who have a hammer and see the whole world as a giant nail. Guess who wins. Having a single big theory, even if it's totally outrageous, makes you sound more credible. Having a Great Idea also makes it easier for you to seem like a Great Visionary, capable of seeing things that others cannot.

Get prizes for being outrageous. It's important to get quoted in the media. Being a futurist isn't like being a doctor or lawyer: there are no pesky state boards, no certification tests, none of that. So how do potential clients figure out who to hire? Media attention is one way. As a resident scholar at a think-tank told Tetlock, "I woo dumb-ass reporters who want glib sound bites."

So you need to set yourself apart from the pack, differentiate yourself from the competition. If you're not beautiful, or already famous, the easiest way is to be counterintuitive, or go against the grain. Dissent is always safe, because journalists understand what to do with someone who's critical of the conventional wisdom, and always want someone who can provide an Alternative View For Balance. There are few more secure places in a reporter's Rolodex than that of the Reliably Unpredictable Contrarian.

There's a success hiding in every failure. Let's say you predicted that something would happen, and it hasn't. Is your career over? Of course not. Tetlock found that after a certain point, expertise becomes a hindrance to effective forecasting, because experts are better able to construct erudite-sounding (or erudite-feeling) rationalizations for their failure. Here's how to benefit from this valuable talent.

  • Make predictions that are hard to verify. Be fuzzy about timing: it's always safest to say that something will happen in your lifetime, because by definition, you're never around to take flak if you're wrong.
  • Find similar events. Maybe you predicted that we'd all watch TV on our watches. Instead, we watch YouTube on our computers. That's pretty close, right? Point proved.
  • Say reality came very close to your prediction. Canada almost went to war with Denmark. It was just the arrival of winter that prevented them from attacking each other over competing claims to the North Pole.
  • Those damned externalities. Your prediction would have come true if it hadn't been for the economic downturn, which really messed up everything. (The beauty of this is that economic downturns now come with enough regularity to provide cover for just about everything– yet they're still unpredictable.)
  • The future is just a little slow. Instead of derailing it, maybe that (unpredictable!) economic downturn has just put off the future you predict. The underlying dynamics are solid, it's just that the timing is off (because of something you couldn't have foreseen). The future will get back on track once the Dow climbs above 20,000 again.
  • False positives show you care. If you're working in an area where the stakes are high, it would be irresponsible NOT to be extreme. Take WMD in Iraq, for example. If experts hadn't predicted that there were chemical weapons in Iraq, and there had been, the consequences would have been unthinkable. Better to be safe than sorry.

Don't remember your failures. No one else will. We don't remember our own failures because, well, in retrospect they weren't failures.

Experts retroactively assign greater certainty to forecasts they made that came true, and retroactively downgrade their assessments of competing forecasts. (Put another way, experts tend to suffer more from hindsight bias than average people, not less.) When we're right, we get smarter, and other people get dumber.

Last but not least, remember that everybody has a track record, but no one knows what it is. As Tetlock put it, "We seek out experts who promise impossible levels of accuracy, then we do a poor job keeping score." Make this work for you. And good luck.

Future 2.0: Rethinking the Discipline

In Outliers Malcolm Gladwell writes that it takes about 10,000 hours to master something– computer programming, classical violin, tennis, what have you. I've been working as a futurist for almost a decade; I don't know if I've done 10,000 hours of decent work, but I have some feel for how the field works, and what we're good at.

About a year ago– okay, more like two years ago– Angela Wilkinson, a friend who runs the scenario planning master classes at the Saïd Business School, invited me to write a think-piece about the field. I took it as an occasion to run a thought experiment: if you were to start with a clean sheet of paper– if there was no Global Business Network, no IFTF, no organized or professionalized efforts to forecast the future– what would the field look like? What kinds of problems would it tackle? What kinds of science would it draw on? And how would it try to make its impact felt?

As I got into it, I concluded that a new field would look very different from the one I've worked in for the last decade. This essay (it's a PDF, about 260kb) is a first draft of an effort to explain where I think we could go. Lots of what I talk about will be familiar to my colleagues, and indeed to anyone reasonably well-read; but I think there's utility in synthesis and summary, if only to see connections between literatures and chart one's next steps.

All the usual caveats apply: it's unpublished, it's unfinished, it doesn't reflect the thinking of any of the various institutions I'm associated with, all the errors are mine, there are plenty of things I could have talked about but didn't. But so does the usual invitation to comment on it. I could keep tinkering with it, but at this stage I think it's more useful for me to take a step back, work on some other things, and return to it with fresh eyes.

Angela had in mind something quick, short, and provocative. I definitely missed the first two. Angela, I'm sorry to have kept you waiting.

Update, 22 July 2009: I've posted a slightly updated version of the essay, and also reproduced the introduction below the jump.

What is the future of futures?

This essay is a thought experiment. It asks, if the field of futures were invented today, what would it look like? What would be its intellectual foundations? Who would it serve and influence? And how would its ideas and insights be put into practice? A brand-new field that concerned itself with the future—call it Future 2.0 for simplicity's sake—would have four notable features. It would be designed to deal with problems characterized by great complexity, contingency, uncertainty and urgency—properties shared by the critical problems of the 21st century. It would draw on experimental psychology and neuroscience to counter the systematic biases that affect our ability to think about and act upon the future.  It would incorporate tools like social software, prediction markets, and choice architecture into its research methods. Finally, it would seek to lengthen "the shadow of the future" of everyday choices, and influence the future by encouraging small cumulative changes in the behaviors of very large numbers of people over the course of years or decades.
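
Since prediction markets are one of the tools mentioned here, it may help to see how small the core machinery is. The following is a minimal sketch of one standard mechanism, Hanson's logarithmic market scoring rule; the two-outcome question and the trade are invented for illustration, and nothing in it is specific to the essay's proposals.

```python
# Minimal sketch of one standard prediction-market mechanism, Hanson's
# logarithmic market scoring rule (LMSR). The market and trade are invented;
# this is an illustration of the tool, not anything proposed in the essay.
import math

def lmsr_prices(shares, b=100.0):
    """Implied probability of each outcome, given shares sold so far."""
    exps = [math.exp(q / b) for q in shares]
    return [e / sum(exps) for e in exps]

def lmsr_cost(shares, b=100.0):
    """Market maker's cost function; a trade costs the difference in this value."""
    return b * math.log(sum(math.exp(q / b) for q in shares))

# Two-outcome question. Traders buy 30 shares of "yes".
before = [0.0, 0.0]
after = [30.0, 0.0]
print("implied P(yes):", round(lmsr_prices(after)[0], 2))
print("price paid for the trade:", round(lmsr_cost(after) - lmsr_cost(before), 2))
```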

To be clear, my purpose here is not to create a scorecard for evaluating current experiments with new methods or technologies, or to provide a roadmap for the field based on current work. Nor am I arguing that scenarios, forecasts, and other familiar tools—or decades of craft knowledge and experience with creating and using them—should be abandoned. It may seem odd (or even unfair) to omit references to current futures work. But my approach is inspired by engineers' "clean slate" exercises that look for radical implications of new science and innovative new technologies by imagining how they would build new systems like the Internet from scratch. By thinking about the potential utility of behavioral economics, neuroscience, and new technologies to futures work without regard to current practices, I hope to spot opportunities or questions that might be overlooked in a more incremental or evolutionary exercise. My approach is further inspired by James Martin's The Meaning of the 21st Century, which argued that if we could learn to deal with global problems ranging from climate change to terrorism to food shortages, mankind would develop tools that would allow us to thrive for centuries to come. The tools of Future 2.0 could be central to creating Martin's future; but conceiving and designing them will require a radical, clean-slate approach.

As a result, the proposals outlined here probably will not seem completely unfamiliar or implausible; readers are likely to see pieces of them in the form of exploratory essays, prototype projects, and emerging practices at various consultancies, research centers, and think-tanks. Since the behavioral economics and neuroeconomics literatures are outside the normal range of most futurists' readings, the essay may well provide additional rationale or justification for these efforts; I trust my readers to make those connections. But my hope is that these ad hoc experiments can be drawn together in a single program that provides a theoretical grounding for their integration, explains how they can be extended in the future, and shows how they might bring otherwise-unexpected benefits to the field. This essay attempts to provide that grounding.

Future 2.0 would be based on four premises. First, the most pressing problems confronting us in the 21st century are quite different than those we faced in the 20th. Second, the range of actors who shape the future has grown dramatically. Third, humans are ill-equipped to think rationally about long-term futures. Finally, expert knowledge is a less reliable guide to understanding the future than we realize.

Richard Posner in the Chronicle of Higher Education

Richard Posner writes in this week’s Chronicle of Higher Education (maybe accessible if you don’t have a subscription, but probably not) about the current financial crisis, and why experts didn’t take early warnings about it seriously.

The financial crisis, when it finally struck the nation full-blown in September 2008, caught the government, the financial community, and the economics profession unawares.

We can get help in understanding the blindness of experts to warning signs from the literature on surprise attacks. Before the Japanese attack on Pearl Harbor, there were many warnings that Japan planned to attack Western possessions in Southeast Asia, and an attack on the U.S. fleet in Hawaii, known to be within range of Japan’s large carrier fleet, was a logical measure, on Japan’s part, for protecting the eastern flank of its attack on the Dutch East Indies, Burma, and Malaya. The warnings were disregarded because of preconceptions (including the belief that Japan would not attack the United States because it was too weak to have a reasonable chance of prevailing), the cost and difficulty of taking effective defensive measures against an uncertain danger, and the absence of a mechanism for aggregating, sifting, and analyzing warning information flowing in from many sources and for pushing it up to the decision-making level of government.

Similar factors made it difficult to heed the warning signs of the 2008 financial crisis. Preconceptions played an especially large role. It is tempting, indeed irresistible under conditions of uncertainty, to base policy to a degree on theoretical preconceptions, on a worldview, an ideology. But shaped as they are by past experiences, preconceptions can impede reactions to novel challenges. Most economists, and the kind of officials who tend to be appointed by Republican presidents, are heavily invested in the ideology of free markets, which teaches that competitive markets are, on the whole, self-correcting. Those officials and the economists to whom they turn for advice don’t like to think of the economy as a kind of epileptic, subject to unpredictable, strange seizures.

Posner also makes the important point that the failure to respond quickly enough to the crisis made it worse– that the absence of effective contingency planning doesn’t just weaken your response to a crisis, it can deepen the crisis itself:

By September 2008, however, the probability of a very severe recession was high enough to warrant the government’s undertaking costly efforts to try to prevent the risk from materializing. Yet the officials dithered. They dithered because they were surprised by the crisis and had no contingency plans for dealing with it. Dithering in response to a financial crisis is especially costly because of the adverse feedback involved in a depression. Once a spiral of falling demand, layoffs, a further fall in demand, more layoffs, and so on begins, it feeds on itself; it requires no external source of nourishment, no further shock to the economy.

So what does he propose to do?

Most people, even most experts, were especially unlikely to be persuaded by prophets of doom in the absence of a machinery for aggregating and analyzing information bearing on large-scale economic risk. Little bits of knowledge about the shakiness of the U.S. and global financial systems were widely dispersed among the staffs of banks, other financial institutions, and regulatory bodies and among academic economists, financial consultants, accountants, actuaries, rating agencies, mortgage brokers, real-estate agents, and business journalists. There was no financial counterpart to the CIA to assemble an intelligible mosaic from the scattered pieces. Much of the relevant information was proprietary; investment banks, hedge funds, and other financial firms conceal information about business strategies that might help competitors, and they soft-pedal adverse information about the firm’s prospects. Even the regulatory agencies lacked access to much crucial information about the financial system, because of limitations on their authority that were thought appropriate in an era of deregulation. Lacking authority to regulate new derivative securities such as credit-default swaps, financial regulators could not force disclosure of information that might have revealed how risky the financial system had become.

A focus of reform, therefore, should be the creation of a centralized, unitary financial-intelligence apparatus in government that would have complete and continuous access to the books of all financial institutions. This sounds simple but the details would be complex, and in my view consideration of all nonemergency reform measures should be deferred until the current economic emergency ends, or at least until the recovery has begun. Until then, it is important that the financial sector be spared additional uncertainty concerning its regulatory environment — uncertainty that would exacerbate the tendency of the banks and other financial intermediaries to freeze and hoard in the present unsettled economic conditions — and that the designers of reform not be distracted by the urgencies of responding to the current crisis.

On the unreliability of expert political judgment

I’ve been working on a think-piece on the future of futures work. (It’s an expansion of questions I started asking in my piece on design and futures.) It’s organized around a simple question: If you were to invent a discipline of futures and forecasting today, organized to deal with today’s problems, and drawing on current science, what would it look like? Would it be just like the field today? Would it look for weak signals, produce roadmaps and scenarios, and seek to influence strategy and policy?

I suspect the answer is no. No, I’m confident– using the term as Robert Burton would warn it should be used– that the answer is no. Now I’m trying to explain where I think the field will go, or ought to go.
One of the things I’m thinking through is the role of expert knowledge and accountability in futures work. We claim to be experts about a bunch of things, most notably about how to think about the future in ways that can better inform the present. But the work of Philip Tetlock (which I’ve mentioned before) suggests that claims of expert knowledge, particularly when it comes to dealing with the future, are highly suspect.

Tetlock’s argument is nicely summarized by Louis Menand in a New Yorker review:

It is the somewhat gratifying lesson of Philip Tetlock’s new book, “Expert Political Judgment: How Good Is It? How Can We Know?” (Princeton; $35), that people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be. The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote. Our system of expertise is completely inside out: it rewards bad judgments over good ones.

Tetlock got a statistical handle on his task by putting most of the forecasting questions into a “three possible futures” form. The respondents were asked to rate the probability of three alternative outcomes: the persistence of the status quo, more of something (political freedom, economic growth), or less of something (repression, recession). And he measured his experts on two dimensions: how good they were at guessing probabilities (did all the things they said had an x per cent chance of happening happen x per cent of the time?), and how accurate they were at predicting specific outcomes. The results were unimpressive. On the first scale, the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes—if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices.

Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.” And the more famous the forecaster the more overblown the forecasts. “Experts in demand,” Tetlock says, “were more overconfident than their colleagues who eked out existences far from the limelight.”
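
The scoring Tetlock uses can be made concrete with a short sketch (my illustration, not his exact procedure): probability forecasts over three possible futures, scored with a quadratic, Brier-style rule, comparing an overconfident expert who has no real information with the dart-throwing baseline that gives each outcome a one-in-three chance.

```python
# Sketch of the scoring idea described above (not Tetlock's exact procedure):
# forecasts over three possible futures, scored with a quadratic (Brier-style)
# rule, comparing an overconfident "expert" with the uniform one-in-three baseline.
import random

random.seed(7)
OUTCOMES = 3

def brier(forecast, outcome):
    """Lower is better: squared error between stated probabilities and reality."""
    return sum((forecast[k] - (1.0 if k == outcome else 0.0)) ** 2
               for k in range(OUTCOMES))

def average_score(forecaster, n_questions=2000):
    total = 0.0
    for _ in range(n_questions):
        outcome = random.randrange(OUTCOMES)   # a world the forecaster cannot predict
        total += brier(forecaster(), outcome)
    return total / n_questions

def overconfident_expert():
    pick = random.randrange(OUTCOMES)
    return [0.8 if k == pick else 0.1 for k in range(OUTCOMES)]

def uniform_baseline():
    return [1 / 3] * OUTCOMES

print("confident expert:", round(average_score(overconfident_expert), 3))   # ~0.99
print("uniform baseline:", round(average_score(uniform_baseline), 3))       # ~0.67
```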

The obvious questions are: how relevant is this work to what we futurists do? And are our usual explanations– that no, we can’t predict the future, but our work is still valuable– sufficient in the light of work like Tetlock’s?

Terrorism, scenarios, and fiction

An article in the New Republic takes a critical look at the growing use of writers and creative types in counterterrorism work. Authors and screenwriters are now a regular fixture in brainstorming exercises in which counterterrorism officials develop scenarios for everything from attacks on critical infrastructure to a 21st-century caliphate. I don’t know how common this really is– the Institute doesn’t do classified work– but the article does point out a couple challenges to using fiction in futures.

First, a bit of background:

Our adversaries, the thinking goes, are tougher to understand and predict than in conflicts past. During the cold war, for instance, it was relatively easy to gauge Soviet intentions and capabilities. Not only did we have better human intelligence, but there was a visible political-military apparatus to watch. We could see their missiles and know which ones were pointed at us. Beyond Pentagon red teams that tried to anticipate Soviet responses to U.S. moves, there wasn’t much need to speculate about the Soviet mind.

Radical Islam, by contrast, is a much shadier world. Although jihadists are prolific communicators, issuing videotapes and conversing in Internet chatrooms, it’s difficult to tap into the mind-sets and motives behind the propaganda. So policymakers have increasingly turned to fiction as a way to better understand the enemy, as well as to shake up the intelligence system and fill in knowledge gaps. As Jon Nowick, director of the DHS’s red team program, told The Washington Post, “We paint a picture where there are no dots to connect.” Or, in the more colorful language of the National Intelligence Council’s Robert Hutchings: “[L]inear analysis will get you a much-changed caterpillar, but it won’t get you a butterfly. For that, you need a leap of imagination.”

But there are two big problems with such exercises. First,

they assume a level of organization and strategy that may not exist. Fawaz Gerges, a professor at Sarah Lawrence College and author of Journey of the Jihadist: Inside Muslim Militancy, says his interviews with convicted terrorists reveal a surprising lack of strategic sophistication…. It’s possible to strategize as your enemy would when it’s one military analyzing another and there’s a fixed chain of command. Terrorism, though, is not like that. The bumbling, the spontaneity, the role of chance aren’t easily captured by red-teaming.

But a bigger problem is not

that a lack of creativity will produce bad fiction; it’s that an excess of creativity will yield unrealistic scenarios…. Former Clinton National Security Council staffer Steve Simon, now at the Council on Foreign Relations, concurs. “These exercises are like Rorschach tests,” he says. “Somebody shows you a blot, and you project onto it all your anxieties and all your fevered dreams and fears.” This points to a logical flaw in the idea that the less we understand about our enemies, the more we should use our imagination. In fact, the fewer facts we have to work with, the more likely it is that our imagination will take us in the wrong direction. And there’s a real possibility that wrong direction will attract the attention of policymakers and draw resources away from bigger risks.

One thing that’s made the Institute an exciting place to work in the last few years– and I think, a more interesting place for clients– is that our work has become increasingly visual and interactive. Rather than producing big white papers, we create a mix of shorter articles, maps, interactive CDs, and wikis.

But we don’t do very much fiction. Why?

Artifacts and maps have been useful both as research tools and communications media: they’re instruments that help us both think about the future in a more systematic way, and share those ideas with audiences in ways that will offer something at once compelling and useful. But while fiction may be helpful as a way of communicating ideas about the future, it hasn’t been that useful as a thinking tool. Further, I’ve often seen workshops stumble when participants were asked to do something obviously fictional and came up with things that were too funny or frivolous, often because they extrapolated some current trend to an amusing extreme.
