Alex Soojung-Kim Pang, Ph.D.

I study people, technology, and the worlds they make

How disclosure can make professional judgment worse

An article in the Boston Globe describes some new research by Daylian Cain, George Loewenstein, and Don Moore, looking at the effects of conflict of interest disclosures on expert advice.

In just about any profession – medicine or real estate, accounting or academia – people giving information and advice may carry agendas that bias their judgments, or find themselves in situations where duty and personal benefit clash….

One of the most popular – and least costly – solutions is disclosure. The notion is that requiring experts to put everything on the table should give them an incentive to behave ethically and avoid tarnishing their reputation: Transparency begets honesty. But work by Cain, in collaboration with Don Moore at the University of California Berkeley and George Loewenstein at Carnegie Mellon University, finds that disclosure can have the opposite effect….

By assuming that disclosure is always a benefit, he and his colleagues argue, regulators may be failing to address the real problems caused by conflicts of interest. In fact, biases are rooted deep in our psychology, and can’t be dispelled with a simple confession. Policies of disclosure, far from being a panacea, may be drawing attention away from the much harder work of removing conflicts and making sure that people’s advice and their interests align.

The experiment itself was pretty straightforward:

Cain, Loewenstein, and Moore conducted a series of experiments meant to mimic a situation in which a person in authority – such as a doctor, consultant, or real estate broker – is giving advice that influences another person’s decision. Certain study participants were required to make an estimate – evaluating the prices of houses, for instance. Meanwhile, other participants were selected to serve as experts: They were given additional information with which to advise the estimators. When these experts were put in a conflicted situation – they were paid according to how high the estimator guessed – they gave worse advice than if they were paid according to the accuracy of the estimate.

No surprise there: People with a conflict gave biased advice to benefit themselves. But the twist came when the researchers required the experts to disclose this conflict to the people they were advising. Instead of the transparency encouraging more responsible behavior in the experts, it actually caused them to inflate their numbers even more. In other words, disclosing the conflict of interest – far from being a solution – actually made advisers act in a more self-serving way.

"We call it moral licensing," Moore says. "After having behaved honestly and virtuously, you then feel licensed to indulge in being a little bit bad."… [I]t appeared that disclosing a conflict of interest gave people a green light to behave unethically, as if they were absolved from having to consider others’ interests…. In effect, what the experts were doing was passing the buck on managing their bias to the people they were advising.

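To make the incentive structure concrete, here's a toy simulation of the setup in Python (my own sketch, not the authors' actual protocol). An adviser sees a noisy signal of a house's true value and reports it under three pay schemes; the inflation and discounting amounts are invented assumptions, meant only to show how pay-for-high-estimates plus an under-discounted disclosure can leave the estimator worse off.

```python
# A toy simulation of the incentive structure described above (not the
# authors' actual protocol). The inflation and discount amounts are
# illustrative assumptions, chosen only to show how the pay scheme shapes
# the advice the estimator ends up acting on.
import random

TRUE_VALUE = 300_000      # "true" price of the house being appraised
ADVISER_NOISE = 10_000    # the adviser sees a noisy but informative signal

def adviser_report(signal, condition):
    if condition == "paid_for_accuracy":
        return signal                  # honest report
    if condition == "conflict_undisclosed":
        return signal + 30_000         # hypothetical inflation
    if condition == "conflict_disclosed":
        return signal + 50_000         # "moral licensing": inflate even more
    raise ValueError(condition)

def estimator_guess(report, condition):
    # Estimators discount a disclosed conflict, but (per the study) not enough.
    discount = 20_000 if condition == "conflict_disclosed" else 0
    return report - discount

def mean_overestimate(condition, trials=10_000):
    errors = []
    for _ in range(trials):
        signal = TRUE_VALUE + random.gauss(0, ADVISER_NOISE)
        guess = estimator_guess(adviser_report(signal, condition), condition)
        errors.append(guess - TRUE_VALUE)
    return sum(errors) / trials

for cond in ("paid_for_accuracy", "conflict_undisclosed", "conflict_disclosed"):
    print(f"{cond:22s} mean overestimate: {mean_overestimate(cond):>8.0f}")
```
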
RIP Olaf Helmer

The World Futures Society reports that Olaf Helmer, a pioneer of futures and co-founder of Institute for the Future, has died.

A mathematician who helped bring scientific rigor to speculation about the future, Olaf Helmer died in Oak Harbor, Washington, on April 14, less than two months from his 101st birthday.

Helmer (ranked number 37 on the Encyclopedia of the Future’s list of the world's 100 most influential futurists) is best known as the co-inventor of the Delphi forecasting methodology — the systematic polling of experts in multiple rounds to create an authoritative consensus about some aspect of the future.

He was a "legendary futurist," notes Paul Saffo, president of the Institute for the Future, which Helmer co-founded after departing the RAND Corporation in 1968.

It was important, Helmer believed, to use this new methodology for the public good and not exclusively for military strategy. Many of Helmer's early papers on Delphi polling and other futures work are available at RAND.
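
The core of the Delphi procedure is simple enough to sketch. Below is a minimal, purely numeric version (my own illustration, not Helmer's original protocol): each expert answers a forecast question, sees the panel's median, and revises partway toward it over several rounds. Real Delphi studies also circulate anonymized written rationales between rounds, which this omits.

```python
# A minimal sketch of a Delphi-style polling loop, assuming a purely numeric
# forecast question. Real Delphi studies also circulate anonymized written
# rationales between rounds; that part is not modeled here.
import random
import statistics

def delphi(initial_estimates, rounds=4, pull=0.3):
    """Iteratively nudge each expert's estimate toward the group median.

    `pull` is a hypothetical parameter for how strongly experts revise
    their answers after seeing the panel's feedback.
    """
    estimates = list(initial_estimates)
    for r in range(rounds):
        median = statistics.median(estimates)
        # Each expert revises partway toward the median, keeping some
        # independent judgment (plus a little idiosyncratic noise).
        estimates = [e + pull * (median - e) + random.gauss(0, 0.5)
                     for e in estimates]
        spread = statistics.pstdev(estimates)
        print(f"round {r + 1}: median={median:.1f}, spread={spread:.2f}")
        if spread < 1.0:              # stop once the panel has converged
            break
    return statistics.median(estimates)

# e.g. "In what year will X happen?"  (made-up initial expert answers)
print("consensus:", round(delphi([2030, 2042, 2035, 2050, 2038])))
```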

Rating pundits

Interesting if somewhat impressionistic study from Hamilton College:

A Hamilton College class and their public policy professor analyzed the predictions of 26 pundits — including Sunday morning TV talkers — and used a scale of 1 to 5 to rate their accuracy….

The Hamilton students sampled the predictions of 26 individuals who wrote columns in major print media and who appeared on the three major Sunday news shows – Face the Nation, Meet the Press, and This Week – and evaluated the accuracy of 472 predictions made during the 16-month period. They used a scale of 1 to 5 (1 being “will not happen,” 5 being “will absolutely happen”) to rate the accuracy of each, and then divided them into three categories: The Good, The Bad, and The Ugly.

The students found that only nine of the prognosticators they studied could predict more accurately than a coin flip. Two were significantly less accurate, and the remaining 14 were not statistically any better or worse than a coin flip.

The top prognosticators – led by New York Times columnist Paul Krugman – scored above five points and were labeled “Good,” while those scoring between zero and five were “Bad.” Anyone scoring less than zero (which was possible because prognosticators lost points for inaccurate predictions) was put into “The Ugly” category. Syndicated columnist Cal Thomas came up short and scored the lowest of the 26.
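
For the curious, here is one plausible way to implement a scoring scheme like the one described. The point values, the treatment of toss-up predictions, and the coin-flip comparison are my assumptions for illustration, not the Hamilton class's actual formula.

```python
# One plausible implementation of a scoring scheme like the one described
# above. The point values, the handling of toss-up predictions, and the
# coin-flip test are assumptions for illustration, not the class's formula.
from math import comb

def score_pundit(predictions):
    """predictions: list of (confidence 1-5, came_true bool) pairs.

    Confident correct calls earn points and confident misses lose them,
    so a pundit's total can go negative ("The Ugly").
    """
    total = 0
    for confidence, came_true in predictions:
        weight = confidence - 3          # map 1..5 onto -2..+2
        total += weight if came_true else -weight
    return total

def beats_coin_flip(predictions, alpha=0.05):
    """Crude one-sided binomial test: is the hit rate better than 50%?"""
    directional = [(c, t) for c, t in predictions if c != 3]   # ignore toss-ups
    hits = sum(1 for c, t in directional if (c >= 4) == t)
    n = len(directional)
    p = sum(comb(n, k) for k in range(hits, n + 1)) / 2 ** n
    return p < alpha

sample = [(5, True), (4, True), (2, False), (1, False), (5, False), (4, True)]
print("score:", score_pundit(sample), "| beats coin flip:", beats_coin_flip(sample))
```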

Thanassis Cambanis on “The Examined Spy”

Boston Globe columnist Thanassis Cambanis has a long article on IARPA's efforts "to transform America’s massive data-collection effort into much more accurate analysis and predictions":

[The intelligence network has] a key role in informing decisions of war and peace, and the near impossible task of preventing another terrorist attack on American soil. With so much at stake, you would assume the intelligence community rigorously tests its methods, constantly honing and adjusting how it goes about the inherently imprecise task of predicting the future in a secretive, constantly shifting world.

You’d be wrong.

In a field still deeply shaped by arcane traditions and turf wars, when it comes to assessing what actually works — and which tidbits of information make it into the president’s daily brief — politics and power struggles among the 17 different American intelligence agencies are just as likely as security concerns to rule the day.

What if the intelligence community started to apply the emerging tools of social science to its work? What if it began testing and refining its predictions to determine which of its techniques yield useful information, and which should be discarded? Director of National Intelligence James R. Clapper, a retired Air Force general, has begun to invite this kind of thinking from the heart of the leviathan. He has asked outside experts to assess the intelligence community’s methods; at the same time, the government has begun directing some of its prodigious intelligence budget to academic research to explore pie-in-the-sky approaches to forecasting….

The Intelligence Advanced Research Projects Activity, or IARPA — a two-year-old agency that funds experimental ideas — is already trying a novel way to generate imaginative new steps to make predictions better. It is funding an unusual contest among academic researchers, a forecasting competition that will pit five teams using different methods of prediction against one another. If they come up with new methods that work better than the old, intelligence analysts could adopt them.
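
The article doesn't say how the five teams will be judged against one another; forecasting tournaments like this are often scored with a proper scoring rule such as the Brier score. A minimal sketch, on that assumption:

```python
# The Brier score (mean squared error of probabilistic forecasts) is a
# common yardstick in forecasting tournaments; whether IARPA's contest
# uses it is an assumption here, not something the article states.
def brier_score(forecasts, outcomes):
    """forecasts: probabilities assigned to 'event happens'; outcomes: 0 or 1.
    Lower is better; always saying 0.5 scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes = [1, 0, 0, 1, 1]                    # what actually happened
teams = {
    "team_a": [0.9, 0.2, 0.1, 0.7, 0.8],      # sharp and well calibrated
    "team_b": [0.6, 0.4, 0.5, 0.5, 0.6],      # hedges everything toward 0.5
}
for name in sorted(teams, key=lambda t: brier_score(teams[t], outcomes)):
    print(f"{name}: Brier = {brier_score(teams[name], outcomes):.3f}")
```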

Working on other projects

This is just a note to confirm the obvious: for the next several months I'm working exclusively on my next book, Taming the Digital Monkey: From Perpetual Distraction to Contemplative Computing (forthcoming with the fabulous Little, Brown & Company), and so will not be posting about futures-related things.

I will, though, be writing about the book on my contemplative computing blog, which like this blog I use as a kind of open commonplace notebook.

“Futures 2.0: Rethinking the discipline” now available as free download (?)

My article, "Futures 2.0: Rethinking the discipline," won a 2011 Emerald Literati Network Award for Excellence, and (if you can get to the site) apparently is now available as a free download.

On the other hand, since every browser on my machine routes academic Web sites through the Stanford proxy server, I can never tell what things like this look like to the rest of the world.

Rankings of most and least accurate pundits

More of this, please:

A Hamilton College class and their public policy professor analyzed the predictions of 26 pundits — including Sunday morning TV talkers — and used a scale of 1 to 5 to rate their accuracy. After Paul Krugman, the most accurate pundits were Maureen Dowd, former Pennsylvania Governor Ed Rendell, U.S. Senator Chuck Schumer (D-NY), and former House Speaker Nancy Pelosi. “The Bad” list includes Thomas Friedman, Clarence Page, and Bob Herbert….

Hamilton students sampled the predictions of 26 individuals who wrote columns in major print media and who appeared on the three major Sunday news shows – Face the Nation, Meet the Press, and This Week – and evaluated the accuracy of 472 predictions made during the 16-month period. They used a scale of 1 to 5 (1 being “will not happen,” 5 being “will absolutely happen”) to rate the accuracy of each, and then divided them into three categories: The Good, The Bad, and The Ugly.

The students found that only nine of the prognosticators they studied could predict more accurately than a coin flip. Two were significantly less accurate, and the remaining 14 were not statistically any better or worse than a coin flip.

Larry Swedroe on “Why Experts Fail Us”

CBS MoneyWatch blogger Larry Swedroe glosses David Freedman's Wrong and Philip Tetlock's Expert Political Judgment to explain why expert advice goes wrong, and why experts have incentives to oversell their certainty and expertise:

Most of us want certainty, even when we know, logically, that it doesn’t exist. With investing, it’s a desire to believe that there’s someone who can protect us from bear markets and the devastating losses that can result. However, we’ve seen on numerous occasions that experts simply aren’t that expert. Of course, the next question is: “Why?”

Predictive policing

From Slate:

Police departments have long been in the data game, with such efforts as CompStat. But there's a new twist: They're not just using statistics to assess the past. Now they're trying to predict the future. In November 2009, the National Institute of Justice held a symposium on "predictive policing," to figure out the best ways to use statistical data to predict micro-trends in crime. The Los Angeles Police Department then won a $3 million grant from the Justice Department to finance a trial run in predictive methodology. (The grant, like the rest of the 2011 federal budget, is pending congressional approval.) Other police departments are giving predictive policing a shot, too, from Santa Cruz, which recruited a Santa Clara University professor to help rejigger their patrol patterns, to Chicago, which has created a new "criminal forecasting unit" to predict crime before it happens….

Predictive policing is based on the idea that some crime is random—but a lot isn't. For example, home burglaries are relatively predictable. When a house gets robbed, the likelihood of that house or houses near it getting robbed again spikes in the following days. Most people expect the exact opposite, figuring that if lightning strikes once, it won't strike again. "This type of lightning does strike more than once," says [UCLA anthropology professor Jeffrey] Brantingham. Other crimes, like murder or rape, are harder to predict. They're more rare, for one thing, and the crime scene isn't always stationary, like a house. But they do tend to follow the same general pattern. If one gang member shoots another, for example, the likelihood of reprisal goes up….

Data-driven law enforcement shows that the criminal mind is not the dark, complex, and ultimately unknowable thing of Hollywood films. Instead, it's depressingly typical—driven by supply, demand, cost, and opportunity. "We have this perception that criminals are a breed apart, psychologically and behaviorally," says Brantingham. "That's not the case."
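
The near-repeat burglary pattern Brantingham describes lends itself to a simple risk score: each past burglary adds risk to nearby locations, and that added risk decays over days and over distance. The kernel and parameters below are illustrative assumptions, not any department's actual model.

```python
# A minimal sketch of a "near repeat" risk score like the burglary pattern
# described above: each past burglary adds risk to nearby locations, and
# that added risk decays over time and with distance. The exponential
# kernel and its parameters are illustrative assumptions.
import math

DECAY_DAYS = 7.0       # elevated risk fades over about a week (assumed)
DECAY_METERS = 200.0   # and over a couple of blocks (assumed)

def risk(location, now, past_burglaries):
    """past_burglaries: list of (x, y, day) tuples; location: (x, y) in meters."""
    x, y = location
    total = 0.0
    for bx, by, day in past_burglaries:
        age = now - day
        if age < 0:
            continue                   # ignore events that haven't happened yet
        dist = math.hypot(x - bx, y - by)
        total += math.exp(-age / DECAY_DAYS) * math.exp(-dist / DECAY_METERS)
    return total

history = [(100, 100, 0), (120, 90, 2), (900, 900, 1)]   # x, y, day
print("risk near the recent cluster:", round(risk((110, 95), 3, history), 3))
print("risk far from any burglary: ", round(risk((500, 500), 3, history), 3))
```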
