Alex Soojung-Kim Pang, Ph.D.

I study people, technology, and the worlds they make


Dunkirk and different forms of heroism

My family and I saw Dunkirk late last week, and I spent the next couple days turning it over in my head. It is, of course, a really great movie, as you would expect from Nolan, and one that does a certain amount of time-bending and bobbing and weaving with narrative. The performances are terrific, and the end of the film is just wonderfully bold.

But Dunkirk also defies virtually every wartime movie convention. There’s no movie reel explication, no character has their backstory detailed in conversations over rations or a bottle of wine found in an abandoned farmhouse, the politicians and generals are completely absent; all that’s left is people, many of whom look pretty similar (uniforms and haircuts will do that), trying to escape the machinery of war.

And, as Guardian columnist Zoe Williams points out in her essay “Dunkirk offers a lesson – but it isn’t what Nigel Farage thinks,”* trying to help each other escape the machinery of war:

the emotional heart of the event has nothing to do with battle – give or take a bit of dogfighting – and everything to do with generosity; unarmed sailors saving strangers for no better reason than that they needed to be saved…. Up close, all you can see in a thousand small boats, defenceless against the skies, is what Thatcher dismissed as the “soft virtues”: humility, gentleness, sympathy. Of her “vigorous virtues” – self-sufficiency, independence, rectitude – almost none.

Indeed, that is the immediate legacy of war: that self-reliance is revealed as not just a myth but a peculiarly unattractive one, thin and tasteless against the richness of fellowship. The mood of postwar Britain was the one that built the NHS, created social housing and signed up to the UN refugee convention. If anything is ever learned from bloodshed, and it would be better if we didn’t have to learn it repeatedly, it’s that there is no fit memorial to those who gave their lives but near infinite generosity between those who didn’t.

This reminds me a lot of Harry Leslie Smith’s writing about the legacy of World War II, and how at fantastic cost his generation built a postwar world that was not only peaceful, but incomparably fairer, more secure, and more prosperous for everyone, most especially people who had grown up poor during the Depression (as he had). Dunkirk, unlike most war movies, isn’t mainly about action and killing; it’s mainly about saving people. Even the dogfights are about shooting down German planes that otherwise would strafe and bomb British soldiers and ships, which makes it more like the action on the ground.

It also reminds me of some reading I’ve been doing on heroism, and how to think about it. In a series of essays (this one, on “The Banality of Heroism,” is easily accessible), Stanford psychologist Philip Zimbardo argues that people who act heroically voluntarily risk life and limb, or reputation or honor (as whistleblowers and reformers do); that they often have to actively navigate and overcome obstacles to undertake those acts; and that they do so without expectation of reward, or even much expectation that their acts will be remembered. Heroism isn’t just something you exhibit on the battlefield or fighting criminals; it’s a quality people exhibit when standing up for justice, or opposing popular but wrong points of view, or rescuing stranded soldiers and allies.

So you go into Dunkirk expecting military heroism, and witness a very different sort: the kind of heroism exhibited by the first responders at the World Trade Center who risked life and limb to help people get out, or the heroism of people who help rescue strangers during an earthquake or flash flood. But better than most movies, Dunkirk makes the case that both varieties of heroism deserve our respect.

* (Of course, “It’s not what Nigel Farage thinks” is one of those lines that typesetters would be able to set in their sleep; they wouldn’t even need to think about where in their trays they’d need to reach for the correct letters, they’d done it so often.)

The rubber-hand illusion can be produced by smartphones

It seems to be the stuff of pure fantasy: a hand made of rubber feels as if it belongs to the owner’s body. Although it is hardly conceivable, it is an illusion which is in fact well known in the field of psychology – and one that can be produced in skillful experimental setups. Psychologists have now shown for the first time how test subjects can also integrate their own smartphones into their bodily selves. This means that whether an object is felt to belong to the owner’s own body does not only depend on…

Source: My Smartphone and I: Scientists show that rubber-hand illusion can be produced by smartphones: Experience in using it is important — ScienceDaily

PowerPoint doesn’t make you stupid, and LOLcats doesn’t rewire your brain

Via Duke professor Cathy Davidson, I just came across this L.A. Times piece by Christopher Chabris and Daniel Simons (they’re the authors of The Invisible Gorilla). The essay takes aim at “digital alarmism,” the argument that the Internet is making us stupider by “trap[ping] us in a shallow culture of constant interruption as we frenetically tweet, text and e-mail,” both leaving us less time to read Proust and rewiring our brains so we’re incapable of paying serious attention to… anything.

More at Contemplative Computing.

Mindfulness and contemplation in weight loss, futures, and computing

Over the last couple years I’ve lost about fifty pounds. As nerdy as this will sound, while I was a fat kid and spent my adult life overweight, it was only in the last two years, when 1) I started to worry that it was now or never– that my condition in my 40s would determine how long I would live and what kind of life I would have, and 2) I could make it into as much a cerebral challenge as a physical one, that I managed to take off the weight.

By cerebral I mean this: in order to get past the various things that had kept me from losing weight in the past, it was necessary for me to read a lot about nutrition and dieting, dive into the literature on obesity and satiety, and think about how what I’d learned from behavioral economics could be applied to weight loss. At a certain point, I realized that the challenge of losing weight was a classic futures problem: complex, uncertain, requiring all kinds of near-term tradeoffs for long-term benefits, and hard to sustain. Maybe, I wondered, could my training as a futurist help me lose weight? Conversely, could I learn something about futures problems through the experience of losing weight?

I think the answer to both is yes, and I’ve written an article– available as a PDF— that explains those answers in detail.

The piece is also kind of personal because it’s a bit of an intellectual pivot. On one hand, it’s the first article that draws on my reading on mindfulness and contemplative practices, and tries to apply that work to futures. There are lots of futurists who have been interested in meditation and Eastern religions– it’s at least as common among Bay Area futurists as 5.11 Tactical shirts— but not much explicit use of the idea of mindfulness as a tool for thinking about the future. Partly, I think, it reflects a certain suspicion that writers on contemplative practice display toward thinking about the future, a suspicion that I try to argue is misplaced. But I’ve come to believe that mindfulness and attention to the now is an essential starting-point for seeing how the future could unfold.

On the other hand, mindfulness and contemplation are a big part of what I’m going to be working on next year at Microsoft Research. I’m going there to start a project on contemplative computing, a form of computing that doesn’t fracture your attention and capacity to think long thoughts, but protects and supports them. It’s become clear that, in our headlong rush to become more connected and accessible, we’re accidentally eroding our capacity to think about complicated problems for long periods. For stockbrokers, pundits, ER doctors, elementary school teachers, and other people whose lives are all about speed and instant reaction, this may not be an issue at all; but for people who are creative for a living, the destruction of our ability to concentrate is a great loss.

Some people have tried to deal with the problem by going off Facebook, taking “digital sabbaths,” and otherwise taking a break from digital devices and the digital world. While I certainly understand the impulse, I don’t like it, for a few reasons. First, in the long run it’s impractical: a movement designed to give us a break from our mobile devices and laptops is going to have trouble dealing with a hyperconnected world of pervasive computing. Second, I actually like being connected, and don’t want to live without my digital augmentarium. Third, while I’m as much in danger of being distracted by the Web and Facebook as anyone, there are also times when I can use devices to be creative and reach that mental state of “flow.” Finally, the digital sabbath movement implicitly accepts the idea that information technologies have to be this way, and that humans and tools are opposites. In contrast, I buy Andy Clark’s idea that we’re natural born cyborgs, and my instinct is that the future will offer great opportunities to design information technologies that are better able to support concentration and contemplation– in other words, to learn how to create tools that help us be better, more focused cyborgs. Figuring out what those tools could look like, and how to design them, is the big task I’ll be taking up in Cambridge.

Weight loss and the challenges of reaching long-term future goals

As I've mentioned a couple times, over the last couple years I've lost about fifty pounds, and am in the best physical condition of my entire life. For someone who grew up as a fat kid and fluctuated between being kind of overweight and really needing to take some serious weight off, and who had a stereotypical academic's contempt for all things seriously athletic, this is no small feat.

Of course, for me it was both a physical endeavor, and an extremely cerebral one: in order to get past the various things that had kept me from losing weight in the past, it was necessary for me to read a lot about nutrition and dieting, dive into the literature on obesity and satiety, and think about how what I'd learned from behavioral economics could be applied to weight loss.

At a certain point, I realized that the challenge of losing weight was a classic futures problem: complex, uncertain, requiring all kinds of near-term tradeoffs for long-term benefits, and hard to sustain. So could what I learned as a futurist help me lose weight? And could the experience of losing weight teach me anything about dealing with futures-related problems?

I think the answer to both is yes, and I've laid out my answers in an article that I just sent into one of those frighteningly efficient online editorial systems. We'll see if the piece is accepted– it may be too first-person to qualify as serious research– but in the meantime I've put a copy of the draft online, and it's available as a PDF. The introduction is in the extended post.

Naturally, comments are welcome.

Introduction, Using Futures 2.0 to Manage Intractable Futures

Since its emergence several decades ago, the discipline of futures has concerned itself with describing the forces shaping the future, while also revealing the future's contingency and open-endedness. We futurists have devoted less energy to studying how futures are actually made: how people act on ideas about the future in the present—or just as interesting, why people or organizations fail to act on them. There are several reasons for this. Few of us have opportunities to follow our ideas into client organizations and see how they’re used. We want to avoid the appearance of advocating for particular futures, and thus compromising our objectivity. Finally, we have assumed that people are rational actors, who, when presented with a variety of future choices, can be counted on to make a self-interested decision. This is a default assumption among financial planners, policymakers, and others who advise on long-term strategic issues, and it reflects and complements the self-perception of our clients, who usually see themselves this way.

In this world-view, implementation isn’t unimportant; it’s just not very interesting. But research in behavioral economics and neuroeconomics has shown that clear-eyed, calculating rationality is in short supply outside economics textbooks and treatises on Realpolitik. What this literature teaches us is that there are deep, interesting reasons why people fail to act in their own long-term self-interest. For futurists, this work presents both a challenge and an opportunity. The challenge is to understand how a behavioral economics understanding of decision-making should inform futures research; this is the subject I took up in a previous article. The opportunity is to expand the domain of futures out of research and facilitation, and to help clients design tools that help them act in the present with the future in mind.

That opportunity is the subject of this article. It focuses on applying behavioral economics and tools to personal futures, a subject that has attracted several writers. In the futures community, Jessica Charlesworth has explored the future of self-knowledge and personal futures. Jarno Koponen has described the architecture of a "personal future simulation system." Verne Wheelwright has advocated applying scenario planning and other traditional forecasting techniques to individuals. There is also work on personal futures outside the futures world. Alexandra Carmichael, Kevin Kelly, and Gary Wolf and others have advocated self-monitoring as a tool for improving personal health. Disabilities advocates use a collaborative process of "personal futures planning" to "develop strategies for success for a person with disabilities… [and] take action to accomplish positive changes for the person."

For the sake of clarity, I will explore the opportunity through a case study involving a simple personal futures-oriented challenge. The case is an example of an intractable future: it is difficult but not impossible to realize, it requires persistent effort for an extended period, and it can be subverted by biases, instincts, and our willingness to let rationalization trump rationality. The case reveals how we can design tools to counter them, and what intellectual instruments we can use when doing so. This intractable future also has the virtue of being exceptionally easy to describe and familiar to many readers.

My case is weight loss. I have lost about 50 pounds (22.7 kilograms) over the last two years; taken up running, cycling and weightlifting; and today am in the best physical shape of my life. For a profession accustomed to thinking about big issues and megatrends like nanotechnology, global warming, and Peak Oil, losing weight may seem trivial and beneath its interest. But it shouldn't be, for two reasons. First, by any objective measure, in much of the developed world obesity is a substantial public health problem: it affects the lives of tens of millions of people, increases chronic diseases like hypertension and diabetes, and costs governments hundreds of billions of dollars. Second, despite the inevitable specificities of personal experience, weight loss illustrates at a human scale the kinds of complex, interconnected problems that characterize life in the 21st century, and with which we are poorly adapted to deal.

Why do people vote? “A sticker and a 0% chance of changing the results of the election.”

Via Daily Dish, a very interesting article about why people vote.

On Tuesday, 42% of registered voters took time out of their day to travel to their assigned polling location, wait in line, exchange niceties with a grumpy volunteer, and fill in some bubbles with a Sharpie. What did they receive in return? A sticker and a 0% chance of changing the results of the election.

Political scientists have tried to calculate the probability that one vote will make a difference in a Presidential election. They estimate that the chances are roughly 1 in 10 million to 1 in 100 million, depending on your state. This does not give an individual much incentive to vote. In a YouGov survey, we asked respondents to estimate the same probability. “If you vote in 2012, what are the chances that your vote will determine the winner of the Presidential election?” Some of the responses are illuminating.

Not surprisingly, Americans vastly overestimate the chances that their vote will make a difference. Our median respondent felt that there is a 1 in 1000 chance that their vote could change the outcome of a Presidential election, missing the true chance by a factor of 10,000. However, this dramatic overestimation does not explain the prevalence of turnout, because those who actually vote know that this probability is low. Over 40% of regular voters know that the chances of a pivotal vote are less than 1 in a million. Amazingly, turnout is negatively correlated with the perceived chances that one vote will make a difference—meaning the less likely you are to think your vote will actually matter, the more likely you are to vote [emphasis added].
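The "factor of 10,000" claim above is just the ratio of the median respondent's perceived probability to the political scientists' estimated true probability. A minimal sketch of that arithmetic (using the 1-in-10-million end of the estimated range):

```python
# Perceived vs. estimated true chance of casting the deciding vote
perceived = 1 / 1_000         # median survey respondent's estimate
true_chance = 1 / 10_000_000  # political scientists' estimate (upper end of range)

# How many times too high the median respondent's guess is
overestimate_factor = perceived / true_chance
print(round(overestimate_factor))  # 10000
```

Using the 1-in-100-million end of the range instead would make the overestimate a factor of 100,000.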

This reminds me of a study that showed a complicated relationship between knowledge about climate change and a willingness to act on it. As I explained in my article "Futures 2.0,"

the presence of expertise about the future may encourage people to be less engaged in shaping their own futures. A study of popular responses to climate change suggests that a higher degree of confidence in the reality of climate change and the reliability of climate science can promote passivity and a sense that experts will deal with the problem, rather than inspire people to change their lives (Kellstedt et al., 2008; Swim et al., 2009). In another remarkable study, Jan Engelmann and colleagues used fMRI to observe the brains of people who received expert advice during a financial simulation. They found that subjects thought differently about their decisions when they received expert advice – even bad advice – than when they worked on their own. As the researchers put it, "one effect of expert advice is to ‘offload’ the calculation of value of decision options from the individual’s brain" (Engelmann et al., 2009). Put another way, "the advice made the brain switch off (at least to a great extent) processes required for financial decision-making" (Nir, 2009). In an era in which ordinary people play a bigger role in shaping the future, the prospect of an inverse relationship between how much confidence they place in expert opinion about complex problems, and how responsible they feel for acting to solve it, presents a substantial conundrum for futurists.

Clearly just giving people information about the future, or about the choices before them, and assuming they'll then act in a rational (or even straightforward, self-interested) manner doesn't quite work. We like to think we're rational, and we like to think other people are rational; but it's not quite so. As the voting example shows, sometimes that's a good thing; more often, though, it's not, and we need to better deal with that fact.

The Tetlock Gambit

A few years ago, I coined the term Nunberg Error, in honor of Geoffrey Nunberg and his observation about our tendency when forecasting to overestimate the impact of technological change while underestimating social change. It's time now to coin a new term, just in time for the avalanche of punditry around the midterms: the Tetlock Gambit.

Briefly, the Tetlock Gambit (named in honor of Philip Tetlock, author of the fantastic book Expert Political Judgment) is a kind of pundit's hedge: it's an outrageous prediction, made in the hope of a big payoff if it comes true, and with the knowledge that there'll be no penalty if it's false. So you can't be a true believer in, say, the idea that we'll use nanotechnology to rewire our brains, and forecast the same; you must make such a prediction self-consciously and cynically.

The example that inspires all this? Penn professor Justin Wolfers:

The Democrats will retain control of the House and the Senate. And I’m the only person in D.C. insightful enough to make this brave forecast.

If I’m right? Well you can bet that I’ll beat the drums loudly and tell everyone in sight that I called it. I’ll blog it all week. I’ll write an op-ed explaining my insights. I’ll go on to Jon Stewart’s show to explain the fine art of psephology. Hopefully you’ll be calling me the Nouriel Roubini of political punditry. I’ll go on to a new life of lucrative speaking engagements and big book advances, while I beat back my coterie of devoted followers.

And if I’m wrong? We both know there won’t be any real consequences. I’ll be sure to sell some clever story. You know, there was weather on election day (hot or cold, wet or dry — it all works!) and this messed with turnout. Or perhaps, This Time Was Different, and my excellent forecast was knocked off course by our first black president, by rising cellphone penetration or a candidate who may not be a witch. I’ll remind you how I nailed previous elections. (Follow the links, you’ll see I’m doing it already!) I’ll bluster and use long words like sociotropic, or perhaps heteroskedastic. And I’ll remind you that my first name is Professor, and I went to a prestigious school. More to the point, if I’m wrong, I’m sure we’ll all have forgotten by the time the 2012 election rolls around. Shhhh… I won’t tell if you won’t.

As he confesses at the end of his prediction,

[Y]es, my forecast is more about the marketplace for punditry than it is about this election. I’m influenced strongly by my Penn colleague Philip Tetlock, who has spent decades pointing out just how bad expert political judgment is. Given these market failures, I would be a fool not to go for the gold.

It was inevitable that someone would read Tetlock as a manual for how to succeed as a pundit, rather than as a caution against trusting pundits, much as Michael Lewis' Liar's Poker was read by some college students as a how-to manual for success on Wall Street, not a caution against going into finance.

No wait, someone has already done it: I did, in my "Evil Futurists' Guide to World Domination."

Calendars, concentration, and creativity

Via Lifehacker, a nice little essay on “the chokehold of calendars,” and how we’ve accidentally (or thoughtlessly) designed them to kill our productivity and concentration:

The idea of a calendar as a public fire hydrant for colleagues to mark is ludicrous. The time displayed on your calendar belongs to you, not to them….

The problem with calendars is that they are additive rather than subtractive. They approach your time as something to add to rather than subtract from. Adding a meeting is innocuous. You’re acting on a calendar. A calendar isn’t a person. It isn’t even a thing. It’s an abstraction. But subtracting an hour from the life of another human being isn’t to be taken lightly. It’s almost violent. It’s certainly invasive. Shared calendars are vessels you fill by taking things away from other people.

“I’m adding a meeting” should really be “I’m subtracting an hour from your life.”

Amen to that….

Resisting prediction in medicine

A few weeks ago I came across this article in Slate about how physicians don't do a good job of estimating how long terminally ill people have to live:

Doctors prefer not to prognosticate for three reasons: We don't like to be wrong; we don't want to take away hope for survival or good quality of life in the time that remains; and we just aren't adequately trained to do it. And our reluctance to make such guesses means that when we do try to predict the future, we're pretty lousy at it….

Since doctors typically avoid making predictions, these tools are infrequently dusted off and put to use. Our collective reluctance to offer patients a prognosis makes us less accurate in the rare instances we actually do it.

In his seminal book Death Foretold: Prophecy and Prognosis in Medical Care, Nicholas Christakis, a medical doctor and sociologist, argues that medical science has given the processes of diagnosis and treatment disproportionate emphasis in the educational curricula of doctors…. [A]voiding prognosis is a professional norm for doctors at all levels of training. In our research, teaching, and communication, we focus almost exclusively on the ever-expanding sciences of diagnosis and treatment, leaving prognosis almost entirely to the side.

Making predictions about the lives of… cancer patients can be particularly tricky. William Dahut, clinical director of the Center for Cancer Research at the National Cancer Institute, blames "a general lack of understanding of the specific biology of the cancer as well as a general lack of understanding of the biology of the individual." Doctors and scientists often refer to an individual's biology as "host factors," making allowances for the fact that patients are indeed different—in immunity, resilience, and attitude. The difficulty in accounting for such differences is another reason that predictive accuracy is so low….

Christakis argues that studying and delivering prognoses to patients is part of the ethical obligation of doctors to their patients. "Furthermore," he writes, "physicians should legitimate discussions regarding prognosis not only with their patients but with each other." As such, doctors would recast the professional norm to include open and frank discussion of prognosis in medical care.

In so doing, we need to strive for honesty and avoid "hanging crepe," the idea of delivering a poor prognosis simply to combat our tendency to be overly optimistic and to keep our hands clean: If the patient dies, I predicted it and therefore appear accurate; if the patient outlives my prediction, everyone is pleasantly surprised and thus I'm not held accountable.

Thanks to Philip Tetlock, we know how expert political judgment works. It would be interesting to look at a variety of different professions or disciplines that are under pressure to make predictions or forecasts, and see if there are interesting differences in the ways physicians, meteorologists, financial researchers, intelligence analysts, and others handle those demands.

Dan Ariely on the paradox of productivity tools

Dan Ariely has a good post about why our current “productivity tools” generate time-wasting or addictive behavior: he looks to B. F. Skinner’s work on “schedules of reinforcement” that found that random rewards inspired more work than predictable rewards. (It got more work out of rats, anyway. Come to think of it, it also works for graduate students.)

Ariely comments that Skinner’s work

gives me a better understanding of my own e-mail addiction, and more important, it might suggest a few means of escape from this Skinner box and its variable schedule of reinforcement. One helpful approach I’ve discovered is to turn off the automatic e-mail-checking feature. This action doesn’t eliminate my checking email too often, but it reduces the frequency with which my computer notifies me that I have new e-mail waiting (some of it, I would think to myself, must be interesting, urgent, or relevant). Another way I am trying to wean myself from continuously checking email (a tendency that only got worse for me when I got an iPhone), is by only checking email during specific blocks of time. If we understand the hold that a random schedule of reinforcement has on our email behavior, maybe, just maybe we can outsmart our own nature.

There’s also this observation of Skinner’s own work habits.

Skinner had a trick to counterbalance daily distractions: As soon as he arrived at his office, he would write 800 words on whatever research project he happened to be working on—and he did this before doing anything else. Granted, 800 words is not a lot in the scheme of things but if you think about writing 800 words each day you would realize how this small output can add up over time.
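To see how that small daily output adds up, here's a back-of-the-envelope sketch (the 250 working days per year is my assumption, not Skinner's):

```python
# How Skinner's 800 words before anything else compounds over a year
words_per_day = 800
working_days_per_year = 250  # rough assumption for illustration

words_per_year = words_per_day * working_days_per_year
print(words_per_year)  # 200000 -- roughly the length of two substantial books
```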

This is something I try to do, but I need to be more disciplined about it. There aren’t THAT many e-mails waiting for me in the morning that require my immediate attention, and I suspect that I’m actually more likely to lose track of tasks or not reply to a message if I read it, think to myself “I’ll deal with this later,” then set it aside. For me, the in-box is not nearly as effective a place to stack tasks as, say, a physical pile (or even better, a written list in my little Moleskine notebook).

[To the tune of The Fixx, “Secret Separation,” from the album The Best of the Fixx (a 3-star song, imo).]

© 2019 Alex Soojung-Kim Pang, Ph.D.
