For years I’ve been fascinated by the “extended minds” thesis, the claim that we should regard our minds not as confined to our brains, but as including brains, bodies, and technologies. (Andy Clark, author of Natural Born Cyborgs, is one influential exponent of the concept.) It’s an idea that guided my book The Distraction Addiction: my contention that we shouldn’t regard technologies as inherently dehumanizing, but instead should see the best of them as tools we use to become better versions of ourselves, builds on the idea of extended minds.
So I clicked pretty quickly when I saw an article titled “Does a Spider Use Its Web Like You Use Your Smartphone?” on The Atlantic’s Web site. It turns out that for almost a decade, Brazilian biologist Hilton Japyassú has been conducting experiments on spiders, learning how they use their webs to sense the world and solve unfamiliar problems. He and a colleague now argue that “a spider’s web is at least an adjustable part of its sensory apparatus, and at most an extension of the spider’s cognitive system.”
The whole article, which touches on octopus cognition, other spider species, and Haller’s Rule, is worth reading.
And here’s the abstract from the essay “Extended Spider Cognition” by Hilton Japyassú and Kevin Laland:
There is a tension between the conception of cognition as a central nervous system (CNS) process and a view of cognition as extending towards the body or the contiguous environment. The centralised conception requires large or complex nervous systems to cope with complex environments. Conversely, the extended conception involves the outsourcing of information processing to the body or environment, thus making fewer demands on the processing power of the CNS. The evolution of extended cognition should be particularly favoured among small, generalist predators such as spiders, and here, we review the literature to evaluate the fit of empirical data with these contrasting models of cognition. Spiders do not seem to be cognitively limited, displaying a large diversity of learning processes, from habituation to contextual learning, including a sense of numerosity. To tease apart the central from the extended cognition, we apply the mutual manipulability criterion, testing the existence of reciprocal causal links between the putative elements of the system. We conclude that the web threads and configurations are integral parts of the cognitive systems. The extension of cognition to the web helps to explain some puzzling features of spider behaviour and seems to promote evolvability within the group, enhancing innovation through cognitive connectivity to variable habitat features. Graded changes in relative brain size could also be explained by outsourcing information processing to environmental features. More generally, niche-constructed structures emerge as prime candidates for extending animal cognition, generating the selective pressures that help to shape the evolving cognitive system.
“Nobody knows anything…. Not one person in the entire motion picture field knows for a certainty what’s going to work. Every time out it’s a guess and, if you’re lucky, an educated one.” (William Goldman, screenwriter)
Sums up my feelings this morning.
Via The Reformed Broker
Still going to try to do some writing today, but this captures the mood of a lot of my friends:
Nassim Taleb has a short but very worthwhile piece on the Farnam Street Blog about signal and noise, and how thanks to always-on connectivity and real-time data we tend to consume a lot more of the second than the first. The big idea:
In business and economic decision-making, data causes severe side effects —data is now plentiful thanks to connectivity; and the share of spuriousness in the data increases as one gets more immersed into it. A not well discussed property of data: it is toxic in large quantities —even in moderate quantities.
Eric Garland has a great piece in The Atlantic about quitting his job as a futurist.
I am not quitting this industry for lack of passion…. The problem is, the market for intelligence is now largely about providing information that makes decision makers feel better, rather than bringing true insights about risk and opportunity. Our future is now being planned by people who seem to put their emotional comfort ahead of making decisions based on real — and often uncomfortable — information. Perhaps one day, the discipline of real intelligence will return triumphantly to the world's executive suites. Until then, high-priced providers of "strategic intelligence" are only making it harder for their clients — for all of us — to adapt by shielding them from painful truths….
So what's gone wrong? The consolidation of industries and increased power of the state means that the future is driven less by market trends or new technologies, and more by the internal politics of big corporations and regulatory agencies. But in addition,
Strategic intelligence is more and more like reading the Harvard Business Review through a fun house mirror. Sure, people use the words strategy, future, and foresight, but they mean something quite different.
In my experience, and based on what my colleagues in the field tell me, executives today do not do well when their analysts confront them with challenging, though often relatively benign, predictions. Confusion, anger, and psychological transference are common responses to unwelcome analysis….
For too many business and government executives, foresight is a luxury that is hardly necessary in this new "hypercompetitive" post-crisis world. Perhaps it's always been superfluous, we just didn't notice. The study of the future used to be easier to sell, maybe because the analysis usually predicted the growth of the consumer economy or the next great gadget. But the future is no longer nearly as palatable, and the customers are less interested. That's too bad, because companies and governments still need help planning for the future. But it takes discomfort, courage and humility to face that future, and who wants to pay for bad news?
Clearly my book on The Future: What We Can Know, What We'll Never Know, and What We Don't Want to Know, should be next on my agenda.
Government Computer News has a brief article about the ACE program. It focuses mainly on Applied Research Associates' Forecasting Ace, but it still gives a good overview of what the program is trying to achieve.
Johns Hopkins neuroscientist David Linden explains "the brain science behind gambling with the debt ceiling" on Reuters' Great Debate blog. It draws on, among other things, Barbara Mellers' work investigating how circumstances affect how people assess financial gains and losses.
The debt ceiling debate is raging in Washington. But what’s going on in the minds of the politicians working on the seemingly intractable problem? Barack Obama, Mitch McConnell, John Boehner and Eric Cantor are all taking calculated risks — bets — that they can win the standoff and get more out of the deal than the other side can. Their strategies are rooted in their political beliefs and theories on how government should operate, but their tactics come from the part of the brain that covets social acceptance and individual rewards.
Hans Breiter and his coworkers addressed these issues in some clever human brain-scanning experiments. Initially each subject received an account containing $50 worth of credit. They were instructed that they were working with real money and that they would be paid the balance of their account in cash at the end of the experiment. In the brain scanner, they watched a video screen that showed one of three wheels, each of which was divided into three pie-shaped segments labeled with a monetary outcome. The “bad” wheel had only negative or neutral outcomes (-$6.00, -$1.50, or $0), an “intermediate” one had mixed results (+$2.50, -$1.50, $0), and a final “good” wheel primarily had rewards (+$10.00, +$2.50, $0). After a particular wheel type was presented on the screen, the subject would push a button that would initiate rotation of an animated pointer. The pointer would spin for about five seconds and then come to rest, seemingly randomly, on one of the three possible outcomes, where it would remain for five more seconds.
The design of this experiment makes it possible to measure brain activation during both an anticipation phase (while the pointer is spinning) and an outcome phase (after the pointer has stopped). Of course, the software running the pointer is controlled by the experimenters so that it can deliver all of the possible monetary outcomes in a balanced manner.
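The payoff structure makes the three wheels easy to compare numerically. Here is a minimal sketch, assuming each of a wheel's three segments is equally likely to come up (the excerpt doesn't specify the segment probabilities), that computes the expected value of one spin on each wheel:

```python
# Wheel payoffs from the Breiter et al. design described above.
# Assumption: the three pie segments are equally probable; the
# excerpt does not say how the segment sizes were chosen.
wheels = {
    "bad":          [-6.00, -1.50, 0.00],
    "intermediate": [ 2.50, -1.50, 0.00],
    "good":         [10.00,  2.50, 0.00],
}

def expected_value(payoffs):
    """Mean payoff of one spin under equal segment probabilities."""
    return sum(payoffs) / len(payoffs)

for name, payoffs in wheels.items():
    print(f"{name}: EV = ${expected_value(payoffs):+.2f} per spin")
# bad comes out around -$2.50 per spin, good around +$4.17.
```

Under this equal-probability assumption the "good" wheel is worth roughly $6.67 more per spin than the "bad" one, which is why graded anticipation responses across the three wheels are a meaningful finding.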
The main finding was that key regions of the brain’s pleasure circuit were activated during both the anticipation phase and the outcome phase, when the outcomes were positive. The anticipation phase responses were graded according to the possible outcome: There was greater activity while the “good” wheel’s pointer was spinning than when that of the “intermediate” or “bad” wheel. And finally, during the outcome phase with the “good” wheel, greatest activation was seen for the largest monetary rewards. Thus even anticipation and experience of an abstract reward, like money, can activate the human pleasure circuit—we’re hardwired to catch a buzz from gambling and to catch the biggest buzz when the most is at stake.
This experiment was also designed to test another hypothesis about monetary reward in gambling. Using a related task, Barbara Mellers and coworkers demonstrated that people regard a $0 outcome on the “good” wheel as a loss but a $0 outcome on the “bad” wheel as a win. If our minds were completely rational, we would value these outcomes the same way, but we don’t. We are influenced by the counterfactual possibility of “what might have been.” Was this irrational belief reflected in brain activation? The response strength to the $0 outcome on the “good” wheel was lower than that for the “bad” wheel. However, the responses to the $0 outcome on the “intermediate” wheel did not fall between the levels for the good and the bad $0 responses, as would be predicted. The theory that counterfactual comparison modulates brain pleasure circuit activation is therefore possible, but remains unproven.
Jon Baron points out a new article on widening subjective confidence intervals:
Subjective probabilistic judgments are inevitable in many real life domains. A common way to obtain such judgments is to assess fractiles or confidence intervals. However, such judgments tend to be systematically overconfident. For example, 90% confidence intervals for future uncertain quantities (e.g., future stock prices) are likely to capture only 50-60% of the actual realizations. Furthermore, it has proved particularly difficult to de-bias forecasts and improve the calibration of expressed subjective uncertainty. This paper proposes a simple process that systematically leads to wider assessed confidence intervals than is normally the case, thus potentially improving calibration and hence reducing overconfidence. Using a series of lab and field experiments with professionals forecasting in their domain of expertise, we show that unpacking the distal future into intermediate more proximal futures has a substantial effect on subjective forecasts. For example, simply making it salient that between now and three months from now there is one month from now and two months from now increases the uncertainty assessors have in their three month forecasts, which helps mitigate the overconfidence in those forecasts. We refer to this phenomenon as the time unpacking effect and find that it is robust to different elicitation formats. We also address the possible reasons for the time unpacking effect and propose future research directions.
An article in the Boston Globe describes some new research by Daylian Cain, George Loewenstein, and Don Moore, looking at the effects of conflict-of-interest disclosures on expert advice.
In just about any profession– medicine or real estate, accounting or academia– people giving information and advice may carry agendas that bias their judgments, or find themselves in situations where duty and personal benefit clash….
One of the most popular– and least costly– solutions is disclosure. The notion is that requiring experts to put everything on the table should give them an incentive to behave ethically and avoid tarnishing their reputation: Transparency begets honesty. But work by Cain, in collaboration with Don Moore at the University of California Berkeley and George Loewenstein at Carnegie Mellon University, finds that disclosure can have the opposite effect….
By assuming that disclosure is always a benefit, he and his colleagues argue, regulators may be failing to address the real problems caused by conflicts of interest. In fact, biases are rooted deep in our psychology, and can’t be dispelled with a simple confession. Policies of disclosure, far from being a panacea, may be drawing attention away from the much harder work of removing conflicts and making sure that people’s advice and their interests align.
The experiment itself was pretty straightforward:
Cain, Loewenstein, and Moore conducted a series of experiments meant to mimic a situation in which a person in authority– such as a doctor, consultant, or real estate broker– is giving advice that influences another person’s decision. Certain study participants were required to make an estimate– evaluating the prices of houses, for instance. Meanwhile, other participants were selected to serve as experts: They were given additional information with which to advise the estimators. When these experts were put in a conflicted situation– they were paid according to how high the estimator guessed– they gave worse advice than if they were paid according to the accuracy of the estimate.
No surprise there: People with a conflict gave biased advice to benefit themselves. But the twist came when the researchers required the experts to disclose this conflict to the people they were advising. Instead of the transparency encouraging more responsible behavior in the experts, it actually caused them to inflate their numbers even more. In other words, disclosing the conflict of interest– far from being a solution– actually made advisers act in a more self-serving way.
"We call it moral licensing," Moore says. "After having behaved honestly and virtuously, you then feel licensed to indulge in being a little bit bad."… [I]t appeared that disclosing a conflict of interest gave people a green light to behave unethically, as if they were absolved from having to consider others’ interests…. In effect, what the experts were doing was passing the buck on managing their bias to the people they were advising.
The World Future Society reports that Olaf Helmer, a pioneer of futures research and co-founder of the Institute for the Future, has died.
A mathematician who helped bring scientific rigor to speculation about the future, Olaf Helmer died in Oak Harbor, Washington, on April 14, less than two months from his 101st birthday.
Helmer (ranked number 37 on the Encyclopedia of the Future’s list of the world's 100 most influential futurists) is best known as the co-inventor of the Delphi forecasting methodology — the systematic polling of experts in multiple rounds to create an authoritative consensus about some aspect of the future.
He was a "legendary futurist," notes Paul Saffo, president of the Institute for the Future, which Helmer co-founded after departing the RAND Corporation in 1968.
It was important, Helmer believed, to use this new methodology for the public good and not exclusively for military strategy. Many of Helmer's early papers on Delphi polling and other futures work are available at RAND.