Alex Soojung-Kim Pang, Ph.D.

I study people, technology, and the worlds they make

Tag: neuroscience

Chocolate and “likes” activate the same parts of the teenage brain

The same brain circuits that are activated by eating chocolate and winning money are activated when teenagers see large numbers of “likes” on their own photos or the photos of peers in a social network, according to a first-of-its-kind UCLA study that scanned teens’ brains while they used social media.

The 32 teenagers, ages 13-18, were told they were participating in a small social network similar to the popular photo-sharing app, Instagram. In an experiment at UCLA’s Ahmanson-Lovelace Brain Mapping Center, the researchers showed them 148 photographs on a computer screen for 12 minutes, including 40 photos that each teenager submitted, and analyzed their brain activity using functional magnetic resonance imaging, or fMRI. Each photo also displayed the number of likes it had supposedly received from other teenage participants — in reality, the number of likes was assigned by the researchers. (At the end of the procedure, the participants were told that the researchers decided on the number of likes a photo received.)

“When the teens saw their own photos with a large number of likes, we saw activity across a wide variety of regions in the brain,” said lead author Lauren Sherman, a researcher in the brain mapping center and the UCLA branch of the Children’s Digital Media Center, Los Angeles.

Source: Teenage brain on social media: Study sheds light on influence of peers and much more — ScienceDaily

“Free will is not an illusion”

This 2007 Raymond Tallis essay declaring that “free will is not an illusion” can join the Chabris and Simons piece as an argument against neuro-determinism, or more generally against arguments of the form “because fMRI shows that our brains do X when we’re doing this thing that I’m interested in/think is bad, this thing/bad thing is really important”:

There are several strands of thought woven into neuro-determinism. The first is that we are essentially our brains: our consciousness, our belief in ourselves as free agents, and so on, is neural activity in certain parts of the brain. Secondly, these brains have evolved in such a way as to maximise the likelihood of our genetic material being able to replicate…. Thirdly, for a brain to work effectively, it is not necessary for us to be aware of what it is doing. Cognitive psychologists have, over the last few decades, particularly since the advent of neuro-imaging which reveals activity in the living brain, shown how we are unconscious of many things that influence what is going on in our brain and, it is inferred, the perceptions we form and the decisions we make….

[But] Neuro-determinism, though seemingly self-evident, is also wrong.

The first line of attack is to remove the hype from the neuroscience of consciousness and remind ourselves how little we know…. [T]here is not even the beginning of an explanation of our fundamental sense that we are subjects transcended by objects that are ‘out there’, that exist independently of us and have their own intrinsic properties. From its simplest to its most elaborated forms, intentionality – the property of consciousness of being ‘about’ something – remains mysterious….

Secondly, we should question the focus on the stand-alone brain. The world we live in is not one of sparks of isolated sentience cast amid a rubble of material objects. We live in a world that is collectively constructed. Our consciousness is collectivised…. It is no use, therefore, looking for human being, and its free actions, in isolated brains…. We also need a body (which, too, lights up in different ways when we are presented with stimuli); and that body has to be environed; and the environment consists not of bare, material objects but of nexuses of signification that have two kinds of temporal depth – that which comes from personal memory and the explicit sense of our private past; and that which comes from our collective history, insofar as we have internalised it. As Ortega y Gasset said, unlike other animals ‘Man is an inheritor, not a mere descendent’.

PowerPoint doesn’t make you stupid, and LOLcats doesn’t rewire your brain

Via Duke professor Cathy Davidson, I just came across this L. A. Times piece by Christopher Chabris and Daniel Simons. (They’re the authors of The Invisible Gorilla.) The essay takes aim at “digital alarmism,” the argument that the Internet is making us stupider by “trap[ping] us in a shallow culture of constant interruption as we frenetically tweet, text and e-mail,” both leaving us less time to read Proust and rewiring our brains so we’re incapable of paying serious attention to… anything.

More at Contemplative Computing.

Cognitive neuroscience and literature

Interesting article in the New York Times on the use of brain science in literature:

Literature, like other fields including history and political science, has looked to the technology of brain imaging and the principles of evolution to provide empirical evidence for unprovable theories.

Interest has bloomed during the last decade. Elaine Scarry, a professor of English at Harvard, has since 2000 hosted a seminar on cognitive theory and the arts. Over the years participants have explored, for example, how the visual cortex works in order to explain why Impressionist paintings give the appearance of shimmering. In a few weeks Stephen Kosslyn, a psychologist at Harvard, will give a talk about mental imagery and memory, both of which are invoked while reading.

While this is very interesting, the practice of drawing on the sciences (particularly cognitive science) to inform the humanities is less new than the article suggests: E. H. Gombrich’s classic Art and Illusion opens with a discussion of the latest findings on perception and cognition (from the 1950s, obviously) and how they should be applied to art history and criticism.

[To the tune of Tabla Beat Science, “Tala Matrix,” from the album Live In San Francisco At Stern Grove [Disc 2] (a 3-star song, imo).]

Brain science’s last tribute to H. M.

This, pardon the phrase, is kind of mind-blowing. The Brain Observatory, a UCSD lab, is slicing the brain of amnesiac patient H.M., one of the most-studied people in the whole history of science, into 2,500 sections– and the process is being broadcast live.

We are slicing the brain of the amnesic patient H.M. into giant histological sections. The whole brain specimen has been successfully frozen to -40C and will be sectioned during one continuous session that we expect will last approximately 30 hours (+ some breaks and some sleep in between). The procedure was designed for the safe collection of all tissue slices of the brain and for the acquisition of blockface images throughout the entire block.

It’s really worth checking out, first as a kind of morbid wonder (“oh my god, that’s really a brain!”), then as a technically fascinating event (“it’s sort of like one of those meat cutters at the deli– and is that a sumi-e brush they’re using to take each slice?”). Where you go from there is up to you. Me, I find it kind of an amazing tribute to someone who contributed a lot to our understanding of the neurological foundations of memory.

The man was named Henry Molaison, though before he died last year he was only publicly known as H.M. According to the Times, he “lost the ability to form new memories after a brain operation in 1953, and over the next half century he became the most studied patient in brain science.”

Before H.M., scientists thought that memory was widely distributed throughout the brain, not dependent on any one area. But by testing Mr. Molaison, researchers in Montreal and Hartford soon established that the areas that were removed — in the medial temporal lobe, about an inch deep in the brain level with the ear — are critical to forming new memories. One organ, the hippocampus, is especially crucial and is now the object of intense study.

In a series of studies, Mr. Molaison soon altered forever the understanding of learning by demonstrating that a part of his memory was fully intact. A 1962 paper by Dr. Brenda Milner of the Montreal Neurological Institute described a landmark study in which she had Mr. Molaison try to trace a line between two five-point stars, one inside the other.

Each time he tried the experiment, it seemed to him an entirely new experience. Yet he gradually became more proficient — showing that there are at least two systems in the brain for memory, one for events and facts and another for implicit or motor learning, for things like playing a guitar or riding a bicycle.

I keep coming back to the Web site, and looking at the computer-driven slicer taking off another section in one window; the control panel of the microtome in another; and some grad students or techs in a third window. Computers and people, workplace and wonder, and the brain– at once intensely human, and seen this way very alien.

[To the tune of Stereolab, “The Man With 100 Cells,” from the album Margerine Eclipse (I give it 1 star).]

Internet use and brain function among elders

HealthDay News reports on a study of the impact of Internet use on the brains of elders:

Surfing the Internet just might be a way to preserve your mental skills as you age.

Researchers found that older adults who started browsing the Web experienced improved brain function after only a few days.

"You can teach an old brain new technology tricks," said Dr. Gary Small, a psychiatry professor at the Semel Institute for Neuroscience and Human Behavior at the University of California, Los Angeles, and the author of iBrain. With people who had little Internet experience, "we found that after just a week of practice, there was a much greater extent of activity particularly in the areas of the brain that make decisions, the thinking brain — which makes sense because, when you're searching online, you're making a lot of decisions," he said. "It's interactive."…

"We found a number of years ago that people who engaged in cognitive activities had better functioning and perspective than those who did not," said Dr. Richard Lipton, a professor of neurology and epidemiology at Albert Einstein College of Medicine in New York City and director of the Einstein Aging Study. "Our study is often referenced as the crossword-puzzle study — that doing puzzles, writing for pleasure, playing chess and engaging in a broader array of cognitive activities seem to protect against age-related decline in cognitive function and also dementia."…

For the research, 24 neurologically normal adults, aged 55 to 78, were asked to surf the Internet while hooked up to an MRI machine. Before the study began, half the participants had used the Internet daily, and the other half had little experience with it.

After an initial MRI scan, the participants were instructed to do Internet searches for an hour on each of seven days in the next two weeks. They then returned to the clinic for more brain scans.

"At baseline, those with prior Internet experience showed a much greater extent of brain activation," Small said.

Doubtless some readers will recognize this as an updated version of the Proust and the Squid argument, which relies in part on fMRI studies indicating that the brains of literate people have specialized sections for quickly recognizing letters. What's interesting here is that you get a similar kind of stimulation with the elderly.

[To the tune of John Coltrane, "A Love Supreme, Part II – Resolution," from the album The Classic Quartet – The Complete Impulse! Studio Recordings (I give it 1 star).]

I hope this doesn’t describe what I do….

I love the reflexivity of this study (described by Ben Goldacre):

[A] set of experiments from the March 2008 edition of the Journal of Cognitive Neuroscience… elegantly show that people will buy into bogus explanations much more readily when they are dressed up with a few technical words from the world of neuroscience. Subjects were given descriptions of various psychology phenomena, and then randomly offered one of four explanations for them: the explanations either contained neuroscience, or didn’t; and they were either good explanations or bad ones (bad ones being, for example, simply circular restatements of the phenomenon itself)….

[T]he bogus neuroscience information had a particularly strong effect on people’s judgments of bad explanations. As quacks are well aware, adding scientific-sounding but conceptually uninformative information makes it harder to spot a dodgy explanation.

An interesting question is why. The very presence of neuroscience information might be seen as a surrogate marker of a good explanation, regardless of what is actually said. As the researchers say, “something about seeing neuroscience information may encourage people to believe they have received a scientific explanation when they have not.”…

More clues can be found in the extensive literature on irrationality. People tend, for example, to rate longer explanations as being more similar to “experts’ explanations”. There is also the “seductive details” effect: if you present related (but logically irrelevant) details to people, as part of an argument, that seems to make it more difficult for them to encode and later recall the main argument of a text, because attention is diverted.

But any meaningless filler, not just scientific jargon, can change behaviour: studies have found, for example, that people respond positively more often to requests with uninformative “placebo” information in them: office warriors will be interested to hear that “Can I use the photocopier? I have to make some copies,” is more successful than the simple “Can I use the photocopier?”

I hope that my Future 2.0 piece doesn’t fall in this category. Of course, if it does, I just need to sound extra-confident and certain, and throw around some more scientific-sounding terms.

[To the tune of The Cranberries, “Zombie,” from the album The Cranberries: Stars – The Best of 1992-2002 (I give it 2 stars).]

Intuition and danger

Really interesting piece in the New York Times on studies the military is conducting on why some people have a better sense for danger than others.

The study complements a growing body of work suggesting that the speed with which the brain reads and interprets sensations like the feelings in one’s own body and emotions in the body language of others is central to avoiding imminent threats.

“Not long ago people thought of emotions as old stuff, as just feelings — feelings that had little to do with rational decision making, or that got in the way of it,” said Dr. Antonio Damasio, director of the Brain and Creativity Institute at the University of Southern California. “Now that position has reversed. We understand emotions as practical action programs that work to solve a problem, often before we’re conscious of it. These processes are at work continually, in pilots, leaders of expeditions, parents, all of us.”…

So what are the factors that seem to affect the ability to detect problems early?

Experience matters, of course: if you have seen something before, you are more likely to anticipate it the next time. And yet, recent research suggests that something else is at work, too.

Small differences in how the brain processes images, how well it reads emotions and how it manages surges in stress hormones help explain why some people sense imminent danger before most others do.

Studies of members of the Army Green Berets and Navy Seals, for example, have found that in threatening situations they experience about the same rush of the stress hormone cortisol as any other soldier does. But their levels typically drop off faster than less well-trained troops, much faster in some cases….

The men and women who performed best in the Army’s I.E.D. detection study had the sort of knowledge gained through experience, according to a preliminary analysis of the results; but many also had superb depth perception and a keen ability to sustain intense focus for long periods. The ability to pick odd shapes masked in complex backgrounds — a “Where’s Waldo” type of skill that some call anomaly detection — also predicted performance on some of the roadside bomb simulations….

Veterans say that those who are most sensitive to the presence of the bombs not only pick up small details but also have the ability to step back and observe the bigger picture: extra tension in the air, unusual rhythms in Iraqi daily life, oddities in behavior.

Of course, this is futures thinking at a very different scale from the kind I do, but I always wonder whether there are things we futurists can take away from such studies that would improve our work, or help us heighten its impact.

[To the tune of Peter Gabriel, “The Feeling Begins,” from the album Passion: Music For The Last Temptation Of Christ (I give it 4 stars).]

The Evil Futurists’ Guide to World Domination

For a while now, I’ve been working on a think-piece on what the futures field would look like if it started now: if, instead of starting during the Cold War, in the middle of enthusiasm for social engineering, computer programming, and rationalistic visions of future societies, futures were able to draw on neuroscience and neuroeconomics, behavioral psychology, simulation, and other fields and tools.

One of the things I’ve kept coming back to is that, if you take seriously the criticisms or warnings of people like Nassim Taleb on the impossibility of prediction, Philip Tetlock and J. Scott Armstrong on the untrustworthiness of expert opinion, Robert Burton on the emotional bases of certainty, Gary Marcus and Daniel Gilbert on the mind, etc., you could end up with a radically skeptical view of the whole enterprise of futures and forecasting. Or, read another way, you end up with a primer for how to be an incredibly successful futurist, even while you’re a shameless fraud, and always wrong.

I’ve finished a draft of the serious article [PDF], so now it’s time for the next project: The Evil Futurists’ Guide to World Domination: How to be Successful, Famous, and Wrong. It would be too depressing to write a book-length study, so I’ll just post it here.

(This exercise is, by the way, an illustration of Pang’s Law, that the power of an idea can be measured by how outrageously– yet convincingly– it can be misused. Think of Darwin’s ideas morphing into Social Darwinism or being appropriated by the Nazis, or quantum physics being invoked by New Age mystics. And yes, I know Pang’s Law will never be as cool as the Nunberg Error, but I do what I can.)

Full essay in the extended post.

The citations are all real. But no, I don’t really mean a single word of it. Yet, I wonder….

The Evil Futurists' Guide to World Domination: How to be Successful, Famous, and Wrong

You want to be a futurist, but you're afraid of being wrong. Don't worry. Everyone has that concern at first. But here, I've brought together ideas drawn from a number of books and articles that will help you succeed without having to be right. All you have to do is follow the simple principles laid out below.

Be certain, not right. People love certainty. They crave it. In experiments, psychologists have shown that "[w]e tend to seek advice from experts who exhibit the most confidence – even when we know they haven’t been particularly accurate in the past." We just can't resist certainty.

Further, confidence and certainty aren't things you arrive at after logical deliberation and reasoning: as UCSF neurologist Robert Burton argues in his book On Being Certain, certainty is a feeling, an emotion, and it has a lot less to do with logic than we realize. So go ahead and feel certain; if other people mistake that for being right, that's their problem. But before too long, people who listen to you will become invested in believing that you're really an authority and know what you're talking about, and will defend your reputation to salvage their own beliefs.

So no matter what you do, no matter what you believe, be certain. As Tetlock put it, in this world "only the overconfident survive, and only the truly arrogant thrive."

Finally, for the moralist or logician in you, here's this: even if you don't believe what you're saying, you could wrongly believe you're wrong, and actually be right. Stranger things have happened.

Claim to be an expert: it makes people's brains hurt. In a remarkable new study, Jan Engelmann and colleagues used fMRI to observe the brains of people who received expert advice during a financial simulation. They found that subjects thought differently about their decisions when they received the advice– even if it was bad advice– than when they worked on their own. As the researchers put it, "one effect of expert advice is to 'offload' the calculation of value of decision options from the individual’s brain." Put another way, "the advice made the brain switch off (at least to a great extent) processes required for financial decision-making."

No expertise, no problem. It'll actually make your work more accurate if you claim to be an expert– if you're certain that you're an expert– but you actually aren't.

Sounds counterintuitive, right? (Ed.: This is how you know I'm a successful futurist. I said what you didn't expect. Now I'll quote some Science to make my point.) In fact, as J. Scott Armstrong has shown over the last twenty or so years, advanced degrees and deep knowledge don't make you a better forecaster or expert. Statistically, experts are hardly better at predicting the future than chimps throwing darts at a board. As Louis Menand put it, "The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge."

And it's perfectly natural to suffer from what Nassim Taleb calls "epistemic arrogance." In all sorts of areas, we routinely overestimate our own certainty and breadth of knowledge, and underestimate what we don't know. If you do that, you're just like everyone else.

So knowing you're not an expert should make you more confident in your work. And confidence is everything.

One simple idea may be one too many. The future is complex, but you shouldn't be. Philip Tetlock explained in Expert Political Judgment that there are two kinds of forecasting personalities: foxes, who tend to appreciate contingency and don't make big claims, and hedgehogs, who have a hammer and see the whole world as a giant nail. Guess who wins. Having a single big theory, even if it's totally outrageous, makes you sound more credible. Having a Great Idea also makes it easier for you to seem like a Great Visionary, capable of seeing things that others cannot.

Get prizes for being outrageous. It's important to get quoted in the media. Being a futurist isn't like being a doctor or lawyer: there are no pesky state boards, no certification tests, none of that. So how do potential clients figure out who to hire? Media attention is one way. As a resident scholar at a think-tank told Tetlock, "I woo dumb-ass reporters who want glib sound bites."

So you need to set yourself apart from the pack, differentiate yourself from the competition. If you're not beautiful, or already famous, the easiest way is to be counterintuitive, or go against the grain. Dissent is always safe, because journalists understand what to do with someone who's critical of the conventional wisdom, and always want someone who can provide an Alternative View For Balance. There are few more secure places in a reporter's Rolodex than that of the Reliably Unpredictable Contrarian.

There's a success hiding in every failure. Let's say you predicted that something would happen, and it hasn't. Is your career over? Of course not. Tetlock found that after a certain point, expertise becomes a hindrance to effective forecasting, because experts are better able to construct erudite-sounding (or erudite-feeling) rationalizations for their failure. Here's how to benefit from this valuable talent.

  • Make predictions that are hard to verify. Be fuzzy about timing: it's always safest to say that something will happen in your lifetime, because by definition, you're never around to take flak if you're wrong.
  • Find similar events. Maybe you predicted that we'd all watch TV on our watches. Instead, we watch YouTube on our computers. That's pretty close, right? Point proved.
  • Say reality came very close to your prediction. Canada almost went to war with Denmark. It was just the arrival of winter that prevented them from attacking each other over competing claims to the North Pole.
  • Those damned externalities. Your prediction would have come true if it hadn't been for the economic downturn, which really messed up everything. (The beauty of this is that economic downturns now come with enough regularity to provide cover for just about everything– yet they're still unpredictable.)
  • The future is just a little slow. Instead of derailing it, maybe that (unpredictable!) economic downturn has just put off the future you predict. The underlying dynamics are solid, it's just that the timing is off (because of something you couldn't have foreseen). The future will get back on track once the Dow climbs above 20,000 again.
  • False positives show you care. If you're working in an area where the stakes are high, it would be irresponsible NOT to be extreme. Take WMD in Iraq, for example. If experts hadn't predicted that there were chemical weapons in Iraq, and there had been, the consequences would have been unthinkable. Better to be safe than sorry.

Regardless of which of these reasons you use, just remember this. You weren't wrong. The world failed to live up to your prediction.

Don't remember your failures. No one else will. We don't remember our own failures because, well, in retrospect they weren't failures.

Experts retroactively assign greater certainty to forecasts they made that came true, and retroactively downgrade their assessments of competing forecasts. (Put another way, experts tend to suffer more from hindsight bias than average people, not less.) When we're right, we get smarter, and other people get dumber.

Last but not least, remember that everybody has a track record, but no one knows what it is. As Tetlock put it, "We seek out experts who promise impossible levels of accuracy, then we do a poor job keeping score." Make this work for you. And good luck.

Future 2.0: Rethinking the Discipline

In Outliers Malcolm Gladwell writes that it takes about 10,000 hours to master something– computer programming, classical violin, tennis, what have you. I've been working as a futurist for almost a decade; I don't know if I've done 10,000 hours of decent work, but I have some feel for how the field works, and what we're good at.

About a year ago– okay, more like two years ago– Angela Wilkinson, a friend who runs the scenario planning master classes at the Saïd Business School, invited me to write a think-piece about the field. I took it as an occasion to run a thought experiment: if you were to start with a clean sheet of paper– if there was no Global Business Network, no IFTF, no organized or professionalized efforts to forecast the future– what would the field look like? What kinds of problems would it tackle? What kinds of science would it draw on? And how would it try to make its impact felt?

As I got into it, I concluded that a new field would look very different from the one I've worked in for the last decade. This essay (it's a PDF, about 260kb) is a first draft of an effort to explain where I think we could go. Lots of what I talk about will be familiar to my colleagues, and indeed to anyone reasonably well-read; but I think there's utility in synthesis and summary, if only to see connections between literatures and chart one's next steps.

All the usual caveats apply: it's unpublished, it's unfinished, it doesn't reflect the thinking of any of the various institutions I'm associated with, all the errors are mine, there are plenty of things I could have talked about but didn't. But so does the usual invitation to comment on it. I could keep tinkering with it, but at this stage I think it's more useful for me to take a step back, work on some other things, and return to it with fresh eyes.

Angela had in mind something quick, short, and provocative. I definitely missed the first two. Angela, I'm sorry to have kept you waiting.

Update, 22 July 2009: I've posted a slightly updated version of the essay, and also reproduced the introduction below the jump.

What is the future of futures?

This essay is a thought experiment. It asks, if the field of futures were invented today, what would it look like? What would be its intellectual foundations? Who would it serve and influence? And how would its ideas and insights be put into practice? A brand-new field that concerned itself with the future—call it Future 2.0 for simplicity's sake—would have four notable features. It would be designed to deal with problems characterized by great complexity, contingency, uncertainty and urgency—properties shared by the critical problems of the 21st century. It would draw on experimental psychology and neuroscience to counter the systematic biases that affect our ability to think about and act upon the future.  It would incorporate tools like social software, prediction markets, and choice architecture into its research methods. Finally, it would seek to lengthen "the shadow of the future" of everyday choices, and influence the future by encouraging small cumulative changes in the behaviors of very large numbers of people over the course of years or decades.

To be clear, my purpose here is not to create a scorecard for evaluating current experiments with new methods or technologies, or to provide a roadmap for the field based on current work. Nor am I arguing that scenarios, forecasts, and other familiar tools—or decades of craft knowledge and experience with creating and using them—should be abandoned. It may seem odd (or even unfair) to omit references to current futures work. But my approach is inspired by engineers' "clean slate" exercises that look for radical implications of new science and innovative new technologies by imagining how they would build new systems like the Internet from scratch. By thinking about the potential utility of behavioral economics, neuroscience, and new technologies to futures work without regard to current practices, I hope to spot opportunities or questions that might be overlooked in a more incremental or evolutionary exercise. My approach is further inspired by James Martin's Meaning of the 21st Century, which argued that if we could learn to deal with global problems ranging from climate change to terrorism to food shortages, mankind would develop tools that would allow us to thrive for centuries to come. The tools of Future 2.0 could be central to creating Martin's future; but conceiving and designing them will require a radical, clean-slate approach.

As a result, the proposals outlined here may not seem completely unfamiliar or implausible; readers are likely to see pieces of them in the form of exploratory essays, prototype projects, and emerging practices at various consultancies, research centers, and think-tanks. Since the behavioral economics and neuroeconomics literatures are outside the normal range of most futurists' readings, the essay may well provide additional rationale or justification for these efforts; I trust my readers to make those connections. But my hope is that these ad hoc experiments can be drawn together in a single program that provides a theoretical grounding for their integration, explains how they can be extended in the future, and shows how they might bring otherwise-unexpected benefits to the field. This essay attempts to provide that grounding.

Future 2.0 would be based on four premises. First, the most pressing problems confronting us in the 21st century are quite different than those we faced in the 20th. Second, the range of actors who shape the future has grown dramatically. Third, humans are ill-equipped to think rationally about long-term futures. Finally, expert knowledge is a less reliable guide to understanding the future than we realize.

© 2017 Alex Soojung-Kim Pang, Ph.D.
