Alex Soojung-Kim Pang, Ph.D.

I study people, technology, and the worlds they make

Tag: future2.0

Weight loss and the challenges of reaching long-term future goals

As I've mentioned a couple times, over the last couple years I've lost about fifty pounds, and am in the best physical condition of my entire life. For someone who grew up as a fat kid and fluctuated between being kind of overweight and really needing to take some serious weight off, and who had a stereotypical academic's contempt for all things seriously athletic, this is no small feat.

Of course, for me it was both a physical endeavor, and an extremely cerebral one: in order to get past the various things that had kept me from losing weight in the past, it was necessary for me to read a lot about nutrition and dieting, dive into the literature on obesity and satiety, and think about how what I'd learned from behavioral economics could be applied to weight loss.

At a certain point, I realized that the challenge of losing weight was a classic futures problem: complex, uncertain, requiring all kinds of near-term tradeoffs for long-term benefits, and hard to sustain. So could what I learned as a futurist help me lose weight? And could the experience of losing weight teach me anything about dealing with futures-related problems?

I think the answer to both is yes, and I've laid out my answers in an article that I just sent into one of those frighteningly efficient online editorial systems. We'll see if the piece is accepted– it may be too first-person to qualify as serious research– but in the meantime I've put a copy of the draft online, and it's available as a PDF. The introduction is in the extended post.

Naturally, comments are welcome.

Introduction, Using Futures 2.0 to Manage Intractable Futures

Since its emergence several decades ago, the discipline of futures has concerned itself with describing the forces shaping the future, while also revealing the future's contingency and open-endedness. We futurists have devoted less energy to studying how futures are actually made: how people act on ideas about the future in the present—or just as interesting, why people or organizations fail to act on them. There are several reasons for this. Few of us have opportunities to follow our ideas into client organizations and see how they’re used. We want to avoid the appearance of advocating for particular futures, and thus compromising our objectivity. Finally, we have assumed that people are rational actors, who when presented with a variety of future choices can be counted on to make a self-interested decision. This is a default assumption among financial planners, policymakers, and others who advise on long-term strategic issues, and it reflects and complements the self-perception of our clients, who usually see themselves this way.

In this world-view, implementation isn’t unimportant; it’s just not very interesting. But research in behavioral economics and neuroeconomics has shown that clear-eyed, calculating rationality is in short supply outside economics textbooks and treatises on Realpolitik. What this literature teaches us is that there are deep, interesting reasons why people fail to act in their own long-term self-interest. For futurists, this work presents both a challenge and an opportunity. The challenge is to understand how a behavioral economics understanding of decision-making should inform futures research; this is the subject I took up in a previous article. The opportunity is to expand the domain of futures out of research and facilitation, and to help clients design tools that help them act in the present with the future in mind.

That opportunity is the subject of this article. It focuses on applying behavioral economics ideas and tools to personal futures, a subject that has attracted several writers. In the futures community, Jessica Charlesworth has explored the future of self-knowledge and personal futures. Jarno Koponen has described the architecture of a "personal future simulation system." Verne Wheelwright has advocated applying scenario planning and other traditional forecasting techniques to individuals. There is also work on personal futures outside the futures world. Alexandra Carmichael, Kevin Kelly, Gary Wolf, and others have advocated self-monitoring as a tool for improving personal health. Disabilities advocates use a collaborative process of "personal futures planning" to "develop strategies for success for a person with disabilities… [and] take action to accomplish positive changes for the person."

For the sake of clarity, I will explore the opportunity through a case study involving a simple personal futures-oriented challenge. The case is an example of an intractable future: it is difficult but not impossible to realize, it requires persistent effort for an extended period, and it can be subverted by biases, instincts, and our willingness to let rationalization trump rationality. The case reveals how we can design tools to counter them, and what intellectual instruments we can use when doing so. This intractable future also has the virtue of being exceptionally easy to describe and familiar to many readers.

My case is weight loss. I have lost about 50 pounds (22.7 kilograms) over the last two years; taken up running, cycling and weightlifting; and today am in the best physical shape of my life. For a profession accustomed to thinking about big issues and megatrends like nanotechnology, global warming, and Peak Oil, losing weight may seem trivial and beneath its interest. But it shouldn't be, for two reasons. First, by any objective measure, in much of the developed world obesity is a substantial public health problem: it affects the lives of tens of millions of people, increases chronic diseases like hypertension and diabetes, and costs governments hundreds of billions of dollars. Second, despite the inevitable specificities of personal experience, weight loss illustrates at a human scale the kinds of complex, interconnected problems that characterize life in the 21st century, and with which we are poorly adapted to deal.

My talk in Malaysia

I've put a copy of my Malaysia talk, which I delivered last week, up on SlideShare.

The version below shows you the slides themselves, but not the notes, which include the text of the talk. As you'll see, I prefer to use slides as, well, illustrations; I know many presenters like to read their talks off their slides, but I find that doesn't work for me for two reasons.

First, I rewrite my talks, so changing text slows this process down considerably. Second, when I have slides that are too complicated, I end up spending more time interacting with them than I do with my audience. So simpler presentations are fresher and more interesting, both for my audiences and for me.

The very long shadow of the history of technology

A confession: when it comes to thinking about the future, I hold two views. On one hand, I find the black swan work of Nassim Taleb– the argument that the speed and complexity of the modern world have left it vulnerable to more, and more unpredictable, crises– pretty convincing. (Call this the New View.)

On the other, I also believe that much of what we claim is novel about this modern age is not so new. Many facets of globalization– the importance of migration, global trade, etc.– are actually as old as civilization. I also believe that other things, like a belief in greater vulnerability to epidemics and financial panics (or just as worrying, the belief that we are now immune from such things), are a product of a relatively short-term view of history. You can better understand our current world, and think more clearly about the future, if you stretch your view of the past from the last 50 years to the last 500 or 5,000. (Call this the Long View.)

Obviously, the New View and the Long View are contradictory. I get around that by not thinking about both of them at the same time. But I'm trying to construct a framework that fits them together.

This morning I ran across another data-point in the Long View: a new piece by Diego Comin, Erick Gong, and William Easterly looking at very long-term trends in technology and economic development. As Easterly explains,

We collected crude but informative data on the state of technology in various parts of the world in 1000 BC, 0 AD, and 1500 AD.

1500 AD technology is a particularly powerful predictor of per capita income today. 78 percent of the difference in income today between sub-Saharan Africa and Western Europe is explained by technology differences that already existed in 1500 AD – even BEFORE the slave trade and colonialism.

From the abstract (pdf):

The emphasis of economic development practitioners and researchers is on modern determinants of per capita income such as quality of institutions to support markets, economic policies chosen by governments, human capital components such as education and health, or political factors such as violence and instability.

Could this discussion be missing an important, much more long-run dimension to economic development?… Is it possible that history as old as 1500 AD or older also matters significantly for today’s national economic development? A small body of previous growth literature also considers very long run factors in economic development…. This paper explores these questions both empirically and theoretically. To this end, we assemble a new dataset on the history of technology over 2,500 years of history prior to the era of colonization and extensive European contacts…. We detect signs of technological differences between the predecessors to today’s modern nations as long ago as 1000 BC, and we find that these differences persisted and/or widened to 0 AD and to 1500 AD (which will be the three data points in our dataset, with 1500 AD estimated from a different collection of sources than 1000 BC and 0 AD). The persistence of technological differences from one of these three “ancient history” data points to the next is high, as well as robust to controlling for continent dummies and other geographic factors.

Our principal finding is that the 1500 AD measure is a statistically significant predictor of the pattern of per capita incomes and technology adoption across nations that we observe today.

Of course, one can get into how this is a different set of forces than most futurists are interested in– but to the degree that it serves as a corrective to the tacit view held by some futurists that history doesn't matter at all– a kind of social science version of transhumanism, in which thanks to technology (or migration or whatever) we're able to ignore the past and its gravitational pull– it's worth reading and pondering.

Complexity, complication, and the nature of futures

A while ago, courtesy of Malcolm Gladwell, I came across a distinction between puzzles and mysteries.

The national-security expert Gregory Treverton has famously made a distinction between puzzles and mysteries. Osama bin Laden’s whereabouts are a puzzle. We can’t find him because we don’t have enough information. The key to the puzzle will probably come from someone close to bin Laden, and until we can find that source bin Laden will remain at large.

The problem of what would happen in Iraq after the toppling of Saddam Hussein was, by contrast, a mystery. It wasn’t a question that had a simple, factual answer. Mysteries require judgments and the assessment of uncertainty, and the hard part is not that we have too little information but that we have too much.

This distinction speaks to the difficulty at the heart of futures, I think, and it came to mind recently when I read David Segal's piece in the New York Times on complexity and complication:

Complexity used to signify progress — it was the frisson of a new gadget, the riddle of some advance in technology. Now complexity lurks behind the most expensive and intractable issues of our age. It’s the pet that grew fangs and started eating the furniture….

What we need, suggests Brenda Zimmerman, a professor at Schulich School of Business in Ontario, is a distinction between the complicated and the complex. It’s complicated, she says, to send a rocket to the moon — it requires blueprints, math and a lot of carefully calibrated hardware and expertly written software. Raising a child, on the other hand, is complex. It is an enormous challenge, but math and blueprints won’t help. Performing hip replacement surgery, she says, is complicated. It takes well-trained personnel, precision and carefully calibrated equipment. Running a health care system, on the other hand, is complex. It’s filled with thousands of parts and players, all of whom must act within a fluid, unpredictable environment. To run a system that is complex, it’s not enough to get the right people and the ideal equipment. It takes a set of simple principles that guide and shape the system. For instance: Teach everyone the best practices of doctors who are really good at hip replacement surgery.

“We get seduced by the complicated in Western society,” Ms. Zimmerman says. “We’re in awe of it and we pull away from the duty to ask simple questions, which we do whenever we deal with matters that are complex.”

I begin to think, after reading Treverton, Nassim Taleb, David Orrell, Donald Michael, and others, that the First Principle of Futures 2.0 ought to be: to map as clearly as we can what is fundamentally unknowable about the future– not because it's hard, or because it's just complicated, or because knowledge of potential futures has the capacity to affect the future (a "problem" that I'm coming to believe is tractable), but because the Nature of Things makes it impossible. Once you have that, you have a very firm foundation upon which to develop all your other tools, to measure your success, and to know how you can improve.

Designing an ECAST: How to bring citizens into science policy

Darlene Cavalier has a great piece in Discover about citizen science and reimagining the Office of Technology Assessment. As she explains,

What originally began as Science Cheerleader’s effort to help reopen the Congressional Office of Technology Assessment (an agency, shut down in the 90’s, that helped Congress better understand policy implications of complex science issues), has evolved into this reincarnation.

Why? It became apparent after two years-worth of numerous discussions with a variety of stakeholders, that reopening the “old” OTA would leave little, if any, opportunities to invoke contemporary applications critical to 21st century governing: decentralized expertise (tapping the knowledge of scientists across the nation) and citizen engagement, to name but two….

Government policymakers, businesses, non-governmental organizations, and citizens rely on analysis to capably navigate the technology-intensive world in which we now live. The new model, described in the report, would provide opportunities to generate input from a diverse public audience, while promoting societal discussions and public education.

This redefines the technology assessment model by recommending the formation of a first-of-its-kind U.S. network to implement the recommendations: Expert and Citizen Assessment of Science and Technology (ECAST).

I'm very interested in systems like this, so I want to take a quick shot at outlining a couple properties that an ECAST would actually have to have to work.

First is a philosophical question. Does this kind of knowledge about the potential impacts of science and technology simply exist somewhere? Or does it need to be created?

Put another way, if you assume the former, your task is to find the person– the orthogonal thinker in a dorm room, the visionary at the startup– who can share a cool insight. If it's the second, then your task is to bring together interesting people, and get them to think together about the future of science and technology.

I've had some clients who were firm believers in the first approach. They wanted me to find the undiscovered visionary: one client more or less told me that my mission was to find the 16 year-old who could become another Steve Jobs, and to find him in China. Wrapped up in this mandate are a couple assumptions: that there's someone out there who sees the future really clearly, and we just need to find them; that such people are the ones who make history (and the future); and that we'll know this person when we find them.

I think each of these three assumptions is faulty. History isn't made by visionaries who spend a lifetime pursuing One Single Vision: I'm not sure that Steve Jobs had a vision for the iPhone that I could have extracted from him in 1973, when he was still– well, before he was Steve. Further, great technologies just aren't made by single people: like all creative endeavours, they're collaborative efforts. Finally, I'm not sure how you'd sort out crackpot from genius ideas about the future in any over-the-transom process.

But this is not to say that a simple process that taps the raw "wisdom of the crowds"– say polling people, or opening up a wiki about the implications of science and technology– is a substitute. My experience trying to get experts to contribute to an open future of science platform makes me skeptical that you'll get useful results just by throwing open the doors, however nice they are. (One of the Discover commenters makes this point, too.)

Rather, you need a process that has several properties.

First, it needs to be accessible to just about anyone who wants to participate, though with some modest barriers to discourage people who just want to advocate for their products or talk about how putting microchips in our food will make us all super-geniuses.

Second, it should combine open-ended scanning with events that have clear dates. You need the former because innovation and other interesting things happen all the time; you need the latter because you need mechanisms to encourage concentration and innovative thinking (and hard deadlines and urgency have been shown to stimulate more out-of-the-box thinking than leisure and freedom– a fact that many an academic has discovered the hard way).

Third, the system should thoughtfully draw on the wide varieties of expertise that can be brought together in a virtual platform. Personally I think talking in terms of "citizens" and "experts" threatens to obscure something important, namely that "expertise" about exceptionally complex phenomena is highly distributed and localized. If you want an opinion about the value of Lie groups in Garrett Lisi's theoretical physics, there are about a dozen people in the world you want to talk to (mainly this guy); if you want to think about the broad implications of synthetic biology, you want Rob Carlson, but you also want a lot of other people who can contribute expertise in law, engineering, manufacturing, policy, etc. etc. As the history of science shows, sometimes the people least likely to see the long-term implications of ideas or inventions are the scientists and engineers most intimately involved with their creation.

Fourth, you need some real-world events. Virtual meetings can be great– they generally suck, but they can be designed to be great (I make part of my living designing them)– but face-to-face interactions still produce things that you don't get through online interactions. Even better are events that combine virtual and real interactions and spaces: if properly designed, you get the best of rich social interactions that, as primates, we're so good at, and the virtues of digital scribing and recording and sharing.

Finally, the exercise has to have an obvious payoff. This means two things. First, if it can be designed to provide some immediate benefits to participants– class credit for students, data for grad students, citations for professors, networking opportunities for entrepreneurs, a thousand new Facebook friends for the rest of us– so much the better. Second, it should be clear that people from NIH (or Merck or CIA or NSF) are actually paying attention to the results of Cubesat Day or Synthetic Biology Week. That raises the stakes, creates more of a sense of urgency, and makes everyone take the event more seriously.

Now, what kind of technology platform would you use?

My answer for now is, try a little of everything. Unless you get caught in the trap of outsourcing the whole project to some soul-sucking systems contractor who'll charge you $37 billion and not really ever deliver what you want, you could do a lot of cheap experiments, in lots of cities; so long as you document well and pay close attention, pretty soon you'll see what works and what doesn't, and you can transplant successful efforts to other places. Don't think in terms of a system, in other words: think in terms of an ecosystem, in which you provide some minimal nutrition (seed funding), encourage rapid evolution, have lots of plasmids and transfer RNA around, and quickly reward success. Maybe that sounds like a cop-out, but it's the best way to get a system that's as flexible and interesting as its subject.

Or am I missing something?

Paper Spaces: Visualizing the Future

Years ago, I read Richard Harper and Abigail Sellen's Myth of the Paperless Office. For me, it's like Annie Hall or Houses of the Holy or David Brownlee's modern architecture class: it's one of those works that blows you away when you first encounter it, and still resonates years later. Almost immediately after reading the book, I started thinking about how paper media and their affordances are used– usually quite unself-consciously– by futurists in expert workshops.

The result is an article titled "Paper Spaces: Visualizing the Future." Like many of my articles, it's taken an unseemly amount of time to get into press, but it's finally coming out this spring in World Futures Review. A PDF of the latest draft is available here.

Here's the big argument, from the introduction:

We tend to think of space as irrelevant in creative work, or at best only indirectly influential: for example, architects may use a mix of open office plans, natural lighting, and bold colors to create stimulating, useful workspaces. But for workshops, and for the kinds of visual processes that many futurists use, the relationship between space, ideas, and creativity is much more intimate. Ideas are embodied in materials; they become cognitive and physical spaces that literally surround groups; and the process of creating those spaces can promote a sense of group identity and common vision for the future.

I use the term "paper spaces" to describe these environments, and to highlight several things. First, we're used to thinking of things made of paper as physical objects whose qualities help shape the experience of reading, but it's useful to pay attention to their spatial and architectural qualities as well. Large visuals aren't just things: they're spaces that possess some of the qualities of desks or offices. Workshops exploit their scale and physicality to promote social activity between workshop participants. In this case, the spatiality of paper is fairly self-evident; but many of our interactions with paper, books, and writing have a spatial quality. Scholars could gain much by analyzing print media using conceptual tools from architecture, design, and human-computer interaction, as well as literary theory and book history.

Second, it warns us against taking too passive or formal a view of visual tools in business, of treating them like paintings on a wall. In the way users interact with them– they're annotated, extended, argued over, and played with– they're more like Legos than landscapes. The process of creating maps, and the maps themselves, both reflect a set of attitudes about how to understand and prepare for the future, one that emphasizes user involvement, and the need for actors to develop and possess shared visions of the future. (Ironically, there may be more studies of large interactive displays and other digital media than of the old media they're meant to displace.)

Third, the term "paper spaces" highlights their hybrid, ephemeral quality. They work because they're simultaneously interactive media and workspace, but their lives are brief and easy to overlook: they are designed to support idea- and image-making, but leave little trace of themselves…. [Despite this, though,] paper spaces are ubiquitous: most of our interactions with texts and other media have a spatial dimension that affects the ways we read, think, and create.

Donald Michael and the problem of retrospection in futures

Recently I came across a discarded copy of a pamphlet by Donald Michael, Cybernation: The Silent Conquest. Michael was part of that generation of American social scientists that created things like the Center for the Study of Democratic Institutions and the Ad Hoc Committee on the Triple Revolution (if I ever start a band that's what I'm going to name it).

Cybernation is a pretty fascinating historical document, because the arguments it makes about the coming revolution in automation sound like the same ones we make today about robotics, the Web, etc.

Computers are being used rather regularly to analyze market portfolios for brokers; compute the best combinations of crops and livestock for given farm conditions; design and 'fly' under typical and extreme conditions rockets and airplanes before they are built… write music, translate tolerably if not perfectly from one language to another, and simulate some logical brain processes…. Also, computers are programmed to play elaborate 'games' by themselves or in collaboration with human beings. Among other reasons, these games are played to understand and plan more efficiently for the conduct of wars and the procedures for industrial and business aggrandizement. Through such games, involving a vast number of variables, and contingencies within which these variables act and interact, the best or most likely solutions to complex problems are obtained. (Cybernation, p. 7)

The National Association of Manufacturers' filmstrip voice-over tone aside, this paragraph from 1962 sounds like a pretty good list of the cool things futurists are still highlighting as Revolutionary Uses of Computers.

This theme of the– what, institutional amnesia?– appeared explicitly tonight, when I came across a retrospective piece Michael published in 1985. Again it inspired a little deja vu:

How is it that, when I reflect on over 23 years of sharing thoughts about the future, I really cannot convince myself that I know why I was right sometimes and wrong other times? Indeed, often I cannot clearly decide whether I have been right or wrong! Inadequate documentation contributes to this but there are other far more profound reasons for my retrospective malaise. (Donald N. Michael, "With both feet planted firmly in mid-air: Reflections on thinking about the future," Futures (April 1985), p. 94.)

I also found this interesting, in a slightly disquieting way:

The pronouncements of experts are useful, when thinking about the future, not because their information is based on esoteric and valid knowledge about social change, though that occasionally may be so (but how is one to know?), but because, by virtue of the authority with which they are endowed, i.e. as experts, they are able to influence the definition of social reality others hold. Their expertness resides not in a prescience their logic engenders but in the 'psychologic' that logic activates: the authority of logic and, therefore, of the expert as a practitioner of logic, is what carries weight. This source of authority legitimizes the stories they tell. But the source also tends to subvert the story-tellers’ own recognition that they are telling stories. Their own belief in their authority, i.e. the authority of logic, leads them to believe they are doing something very different from 'merely' telling stories.

Over the years these insights and learnings have led me less and less to the doing of futures studies and more and more to questions and understandings regarding the functions futures studies perform, or could perform. (ibid., 96)

Really, the whole thing could have been subtitled, "why we need social scanning."

Prediction markets can now use real money

Late last week the New York Times had a short note on something that could be huge.

Think that this spring’s “Robin Hood” movie will be a blockbuster at the box office? Next week you will be able to put your money on it.

Cantor Futures Exchange, a subsidiary of Cantor Fitzgerald, expects to open an online futures market next month that will allow studios, institutions and moviegoers to place bets on the box-office revenue of Hollywood’s biggest releases. Last week, the company learned from regulators that customers could start putting money into their accounts on March 15.

If I read this right, it means that in the United States the legal precedent is being set to treat prediction markets as the ideal versions of futures markets, rather than as a form of gambling. This removes one of the obstacles to using prediction markets as a funding mechanism in science (something Tom Bell has talked about in an excellent article), and opens the door to futurists getting more serious about using prediction markets (something I advocate in my article "Futures 2.0").

The benefits of social scanning

In earlier posts, I made an argument for turning scanning into a more social activity; drew some lessons from my experience scanning at IFTF; and outlined how a system drawing on the community's use of Web 2.0 might work. Here, I talk about what such a system could deliver: in particular, functionalities that would deliver intellectual benefits; and the professional benefits that the system could deliver over time.

Intellectual benefits first. What could such a system deliver to practitioners that would help them improve their work in the near term? I can envision a couple things.

Heat Maps of the Future. This content could be presented in a variety of ways, at several time scales. A list of the most popular subjects or citations from the last 24 hours, akin to the default lists on Technorati or Digg, would have the virtue of simplicity and familiarity. Citations and references in today's datastream can tell you what futurists think is interesting right now; but looking at the datastream over longer time periods– weeks or months, say– would give users a clearer picture of what issues are of enduring interest. New product announcements, elections, or disasters generate a frenzy of postings and repostings that die off quickly. In contrast, articles that are still cited after weeks or months are likely to deal with issues of more enduring importance. Looking at a longer stretch of the datastream will also help identify people who are good at spotting important trends early, and who can do so consistently. It will note who first identified the event, who subsequently picked it up, and what chains of influence connect people together.
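To make the time-scale point concrete, here's a minimal sketch of how such a heat map might distinguish one-day buzz from enduring interest. Everything here is hypothetical– the datastream entries, the function name, the window sizes– it's just one way of counting citations over trailing time windows of different lengths.

```python
from collections import Counter
from datetime import datetime, timedelta

def heat_map(datastream, now, window_days):
    # Count citations within a trailing window. A 1-day window surfaces
    # today's frenzy; a 90-day window surfaces topics of enduring interest.
    cutoff = now - timedelta(days=window_days)
    return Counter(url for ts, url in datastream if ts >= cutoff)

# Hypothetical community datastream: (timestamp, cited item) pairs.
now = datetime(2010, 3, 1)
stream = [
    (datetime(2010, 2, 28), "gadget-launch"),    # yesterday's spike
    (datetime(2010, 2, 28), "gadget-launch"),
    (datetime(2010, 2, 28), "robotics-review"),  # cited steadily for months
    (datetime(2010, 1, 15), "robotics-review"),
    (datetime(2010, 1, 2), "robotics-review"),
]

daily = heat_map(stream, now, 1)       # "gadget-launch" dominates
quarterly = heat_map(stream, now, 90)  # "robotics-review" dominates
```

The same counting logic, run per-contributor rather than community-wide, is what would let the system spot who consistently cites important items early.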

Weak Signals. These heat maps would provide the background for what many people are really interested in: weak signals of disruptive change. Embedding the search for weak signals in social scanning would improve it greatly, by providing a standard against which the uniqueness of any signal can be measured. Today, the search for weak signals is pretty intuitive, and what counts as a weak signal is personal and subjective: my weak signal may be someone else's conventional wisdom, and vice versa. Aggregating signals from across the futurists' community would help individuals tune their intuition by letting them see whether their weak signals are genuinely novel or are actually well-known to people in other countries or experts in other specialties; and it would help the discipline as a whole by nudging the search for weak signals into something more rigorous and systematic.
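A crude way to operationalize that standard: score a signal's novelty by how few scanners in the community have already cited it. The function name, the counts, and the community size below are all invented for illustration; a real system would presumably also slice by country and specialty, so "novel to me but conventional wisdom in Seoul" becomes visible.

```python
def novelty(item, community_counts, total_scanners):
    # Fraction of scanners who have NOT yet cited the item:
    # 1.0 means genuinely novel to the whole community;
    # near 0.0 means it is already conventional wisdom.
    seen_by = community_counts.get(item, 0)
    return 1.0 - seen_by / total_scanners

# Invented aggregate counts from a community of 100 scanners.
community_counts = {"peak-oil": 95, "diy-bio": 3}

high = novelty("diy-bio", community_counts, 100)   # close to 1: a genuine weak signal
low = novelty("peak-oil", community_counts, 100)   # close to 0: conventional wisdom
```

My private sense that something is a weak signal then gets checked against the number, rather than against my own (subjective, local) reading habits.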

Additional Functionalities. Identifying heat maps, trending topics, and weak signals would be basic functions of a social scanning system. Of course, it would be possible to develop additional functionalities based on this content. You could create tools for professional forecasters to benchmark and improve their practice, by showing users how their interests compare to those of the field as a whole; how often they identified weak signals that were later cited by others; and how important the things they rated highly turned out to be over time.

Other tools could be used by groups. Top-rated topics could be flagged in a prediction markets system whose participants could bet more explicitly on the importance or timing of disruptions or future developments. Yet others could be used with clients. For example, interactive roadmaps built from the system's content and presented in an online presentation tool like Prezi could be used in strategic planning workshops.

But there are larger, longer-term professional benefits that social scanning could provide. It would facilitate better scanning by converting private work into public goods. Social scanning would provide a social platform connecting the field together. The system would identify people who are good broad scanners, who are good at seeing trends early, who can spot weak signals, or who don't know each other but share research interests. Finally, social scanning could improve the profession of futures by giving practitioners incentives to share their work and systematically improve their forecasting.

Social scanning would be better scanning. It would generate a continuously-updated, community-wide and collective view of what trends are shaping the future, and what signals suggest the emergence of new trends. It would let us see what various futurists (somewhat independently) consider important, by comparing input from multiple sources. In other words, our collective reading patterns may reveal insights that we could not create individually. At the organizational level, it would reduce the work of starting new scanning platforms for projects; instead, researchers could draw on existing, automatically-updated scans, augmenting them with additional work when necessary.

It would make scanning more efficient at an individual level, too. Today there's a lot of repetition in scanning, since futurists don't have a way to systematically share the work of scanning. If we could pool the results of our work, and trust the whole community to keep up with the most popular (and, one hopes, most critical) trends, individuals would have more time to spend looking through specialized or offbeat sources– a diversification which would enrich the discipline as a whole– as well as working on synthetic, interpretive activities. To draw a parallel to the academic world, most scholars focus their own energies and writing on specialized subjects, and work with colleagues to evolve new approaches, schools of thought, etc. This latter work doesn't always happen formally: it emerges through literature reviews, thematic essays, conferences, and conversations– a whole infrastructure for producing collective knowledge that futurists haven't really replicated.

Social scanning would encourage useful specialization. Social scanning would allow practitioners to build professional reputations for more kinds of work and insight. Today the fastest way for a futurist to build professional capital is to make flamboyant public pronouncements; doing the more mundane work of identifying less flashy trends, or assembling evidence that others can use, receives virtually no credit. There are currently no mechanisms for recognizing researchers who are terrific scanners but lousy forecasters, or who have a brilliant eye for weak signals but no public presence. Awarding users points for each item they contribute to the datastream (i.e., writing posts on their blogs, adding bookmarks to their del.icio.us account, etc.) and additional points for work they do within the system (e.g., tagging content, associating different pieces of content, or rating contributions) would quickly make it possible to identify people who are community-minded and generous with their ideas. Some of these users may turn out to be well-known names in the field; others may not. (Because the system can also analyze the importance of contributions, it could distinguish people whose work is defined by quantity rather than quality.) But by making it public, the system would give scanning and sharing the recognition they deserve.
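A point scheme like the one described could be as simple as a lookup table. The point values and action names below are purely hypothetical, chosen only to show how the same ledger can surface a quiet curator alongside a prolific sharer:

```python
# Hypothetical point schedule: contributions to the datastream, plus
# curation work done inside the system itself.
POINTS = {
    "blog_post": 2, "bookmark": 1,         # contributions
    "tag": 1, "associate": 2, "rate": 1,   # in-system curation
}

def score(actions):
    """Total reputation points for a user's counted actions."""
    return sum(POINTS.get(kind, 0) * n for kind, n in actions.items())

prolific_sharer = {"blog_post": 10, "bookmark": 50}
quiet_curator = {"tag": 40, "associate": 15, "rate": 20}

sharer_score = score(prolific_sharer)   # 10*2 + 50*1 = 70
curator_score = score(quiet_curator)    # 40*1 + 15*2 + 20*1 = 90
```

Under this (invented) schedule the curator, who never publishes anything flashy, outscores the sharer, which is precisely the kind of contribution the text argues currently goes unrecognized.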

This in turn would enrich the professional ecology, by making it possible for practitioners to build social capital from a wider variety of intellectual and professionally constructive activities. This would make futures more like better-developed and -organized disciplines like physics, where people can specialize in particular subjects (high-energy physics, cosmology, condensed matter, etc.), but also make careers as theorists, experimentalists, instrument designers, or computational experts. Some of these specialties are higher-profile than others, but what matters is that the field has mechanisms for recognizing and rewarding all kinds of contributions to science. This is missing in futures, but there is an opportunity here, thanks to the fact that very few futurists make any money from scanning, but instead make money from the things that scanning enables. Turning this largely invisible private activity into a public good would raise the overall quality of scanning, and recognize and reward good scanners for their contributions to the field.

Social scanning could bring gentle coordination to the discipline. The field lacks the centralized, gatekeeping institutions– a few dominant graduate training programs, a strong professional society, government certification– that give shape to other professions like law and medicine. Nor does it have the canonical literature, moral codes, and daily practices that define members of religious orders. Futurists are spread across corporations, government agencies, consulting companies, one- or two-person groups, and academia, and most of us spend much more time talking to clients than to each other. As a result, the field is physically dispersed and intellectually decentered. Social scanning would help build a more cohesive sense of identity by making the community's interests visible to itself, allowing far-flung practitioners who share common interests to find each other, and letting them build on each other's work in ways we cannot now.

Social scanning would raise the quality of the discipline. It would provide clear benchmarks for practitioners: it would let me compare what I've been reading to what my colleagues are reading. Social scanning would also contribute to the development of more solid and rational professional standards. Today, the market rewards the most public futurists for being provocative more than for being useful or right. The upside to analytical rigor and correctness is low, and the downside to being wrong is even lower. Social scanning would begin to shift the economics of professional reputation, and provide a system that ignored flamboyance, gave less credit to single dead-on predictions, and rewarded less spectacular but more consistent performance.

Social scanning would be a lightweight infrastructure. A social scanning platform would do all this without requiring something as elaborate as a World Brain (appealing though that idea might be), or requiring all futurists to adopt common software packages. Like all good knowledge tools (as Mike Love and I argued in a 2008 IFTF report), it lets people do what they're best at, and computers do what they're best at. It can be easily adapted by users and integrated into their existing workflows and habits. We can harvest work that people are already sharing. Nobody who already has a blog or thousands of del.icio.us bookmarks has to switch systems, learn a new tool, or abandon legacy content. They just keep doing what works best for them.

[This is extracted from a longer essay on social scanning. A PDF of the entire piece is available.]

Building new scanning capabilities

Today, futurists using Twitter, Delicious, Digg and other Web 2.0 services publish a flow of content that is probably already too large for any person to follow, and is growing rapidly.

For example, Twitter publishes roughly 600-700 tweets per day marked with the #future hashtag. The futurists I follow post 70-80 tweets per day (though some of those posts are personal or auto-generated by other agents). Futures-oriented lists on Twitter follow anywhere from a dozen to three hundred people, and almost all of those lists are available via RSS.

Other systems generate equally substantial bodies of content. Users on Delicious, the oldest social bookmarking service, post about 350 bookmarks per day with the tag "future." My network (which includes a select few futurists) posts about 220 bookmarks per year. That translates into about 1120 separate data-points per day, or over 400,000 signals per year — just from three services. Futurists' blogs publish between 100 and 200 posts per week.
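The annual projection above follows directly from the daily total, as a quick sanity check (365 days assumed, figures taken from the text):

```python
signals_per_day = 1120                # three services combined, per the text
signals_per_year = signals_per_day * 365
# 408,800 -- consistent with "over 400,000 signals per year"
```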

Casting one's net wider, one can rapidly capture an enormous number of potential signals. Consider Tweet the Future, a Web site that monitors Twitter for tweets containing the word "future." It finds about 30 tweets every minute– over 40,000 a day– though the vast majority of these tweets have nothing to do with futures or forecasting.

So many, if not most, futurists, consulting companies, and futures-oriented nonprofits are using one or more of these systems. Most of these datastreams are real-time reflections of what people are reading. These datastreams represent a vast but untapped resource that could be used to build a picture of the collective attention of the futures community, and detect weak signals: indeed, they could largely replace the kind of commissioned content that fed Delta Scan and Signtific. We no longer have to work alone to find interesting things. Instead, we can detect patterns in our and our colleagues' datastreams.

How would a social scanning platform work? Here's what I imagine a very simple but useful system doing.

Its core functionality would be an engine that gathers signals from the free and nearly real-time content produced by futurists and subject-matter experts on blogs, Twitter, and other social media platforms; analyzes this content to find subjects and citations that are of greatest interest to the futures community; and clusters together material that shares unusual terms, keywords, or links to common references. This would let us identify both popular subjects and outlying wild cards, and create a body of data that could support other tools or services.
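The clustering step, in its simplest form, is just grouping posts by the references they cite: any two posts that link to the same article probably discuss the same subject. A toy sketch, with invented post ids and URLs:

```python
from collections import defaultdict

def cluster_by_reference(posts):
    """Group post ids by cited reference; keep only clusters where more
    than one post shares a link (a shared subject of interest)."""
    clusters = defaultdict(list)
    for post in posts:
        for ref in post["links"]:
            clusters[ref].append(post["id"])
    return {ref: ids for ref, ids in clusters.items() if len(ids) > 1}

# Hypothetical harvested items, already reduced to id + outbound links.
posts = [
    {"id": "blog-1",  "links": ["http://example.com/grid-storage"]},
    {"id": "tweet-7", "links": ["http://example.com/grid-storage"]},
    {"id": "blog-2",  "links": ["http://example.com/one-off"]},
]

clusters = cluster_by_reference(posts)
```

A production system would also cluster on shared unusual terms and keywords, but shared citations are the cheapest and least ambiguous signal to start from.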

The system would harvest RSS feeds from blogs, Twitter, del.icio.us, Digg and other services, drawn from a list maintained by the system's managers. The list would have some simple metadata about sources, most notably their authors; it would also record metadata from its sources, particularly the publication date and time of posts and articles, and whatever tags attach to the content.

What would the system do with this datastream? The first key task would be to filter it. By gathering information about the author of each feed, it would be able to associate multiple feeds with the same author. If the same author has several different sources that the system is following, the system would look across those and filter out repeats. For example, if I have a blog and del.icio.us account, and both automatically push updates to a Twitter account, the system knows to look for cross-posts between those services, and count a blog post that generates a Tweet only once.
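The cross-post filter amounts to keying each item on (author, content) rather than on the service it arrived from. A minimal sketch, assuming items reduce to (author, service, link) tuples where the link identifies the underlying content (the names below are invented):

```python
def dedupe_cross_posts(items):
    """Count each (author, content) pair once, even when auto-posting
    spreads the same item across several services."""
    seen, unique = set(), []
    for author, service, link in items:
        key = (author, link)
        if key not in seen:
            seen.add(key)
            unique.append((author, service, link))
    return unique

# Hypothetical feed items: a blog post auto-pushed to Twitter, plus an
# unrelated bookmark.
items = [
    ("pang", "blog",      "http://example.com/post-1"),
    ("pang", "twitter",   "http://example.com/post-1"),  # auto-pushed tweet
    ("pang", "delicious", "http://example.com/other"),
]

unique = dedupe_cross_posts(items)
```

The blog post and its auto-generated tweet collapse to a single signal; the bookmark survives as a second one.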

The second key piece of filtering involves associating multiple hits on the same subject. Different people may talk about the same event but reference articles published in different places, or the same article published in multiple places– a wire service article that appears in several newspapers, or an article that is reblogged. The system would also need to be able to identify different URLs as pointing to the same article—e.g., the full URL of an article and a bit.ly shortened URL. Identifying these sources could be done by software, by users, or both. So while repetition by an individual would be controlled for, multiple citations and references would be recorded. The former is noise in the system, but the latter is signal: the more people who tag or blog about a subject, the more important it is. (Citation and referencing also filters out non-professional noise. Many Twitter users combine references to major new articles with announcements like "I am eating a sandwich"; the latter are far less likely to be referenced by others than the former.)
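Both halves of this filter can be sketched together: normalize each URL to a canonical form, then count distinct authors per canonical article, so repeats by one person are noise but independent citations accumulate as signal. The short-URL table and article URLs below are invented; in practice such a table would be built by following HTTP redirects.

```python
# Hypothetical table of resolved short URLs (in practice, built by
# following each short URL's redirect chain).
SHORT_URLS = {"http://bit.ly/abc123": "http://example.com/2010/wire-story"}

def canonical(url):
    """Resolve known short URLs and normalize trivial variants."""
    return SHORT_URLS.get(url, url).rstrip("/").lower()

def citation_counts(citations):
    """Distinct authors per canonical article: repeats by one author are
    noise, independent citations are signal."""
    citers = {}
    for author, url in citations:
        citers.setdefault(canonical(url), set()).add(author)
    return {url: len(who) for url, who in citers.items()}

citations = [
    ("alice", "http://example.com/2010/wire-story"),
    ("bob",   "http://bit.ly/abc123"),                 # same article, shortened
    ("alice", "http://example.com/2010/wire-story/"),  # repeat by same author
]

counts = citation_counts(citations)
```

The three raw hits collapse to one article with two independent citers: Alice's repeat is absorbed, but Bob's shortened link counts as a second vote.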

In Delta Scan and Signtific, contributors or community members were supposed to formally rate the importance of different trends. In this system, we can simply assume that if someone takes the time to share a link to an article, they consider that article to be worth their attention. More links, especially links over time, indicate the emergence of a group consensus that a link points to a trend worth watching.

This kind of filtering could be done automatically, and improved by users. People may be able to identify associations between articles that automated systems don't. They could group together content from the data stream by adding tags to specific pieces of content; and they could add tags or identify synonymous terms (e.g., ubiquitous computing, ubicomp, ubic, and ubiq all mean the same thing). My experience with Delta Scan and Signtific suggests, however, that this system needs to be kept as simple as possible. People generally don't classify things unless there are clear incentives and immediate rewards. Even then there are huge variations in the use of hashtags, keywords, etc. between users and across systems, and little chance that people can be induced to adopt standard taxonomies or ontologies. However, when you're working with highly social knowledge, and information that by its nature exists at the boundaries of the human corpus, it's important to maintain a high degree of ontological flexibility.
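A lightweight way to get synonym-merging without imposing a taxonomy is a user-maintained mapping from variant tags to a preferred term; unknown tags pass through untouched, preserving the ontological flexibility described above. A sketch using the ubiquitous-computing example from the text:

```python
# Hypothetical user-maintained synonym table; anything not listed is
# left as-is rather than forced into a standard taxonomy.
SYNONYMS = {
    "ubicomp": "ubiquitous computing",
    "ubic": "ubiquitous computing",
    "ubiq": "ubiquitous computing",
}

def normalize_tags(tags):
    """Fold synonymous tags into one preferred term, case-insensitively."""
    return {SYNONYMS.get(t.lower(), t.lower()) for t in tags}

merged = normalize_tags(["Ubicomp", "ubiq", "energy"])
```

Because the table only grows when a user bothers to record an equivalence, it rewards exactly the kind of low-friction curation the paragraph argues is the most people will do.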

Rewarding people for doing this kind of tagging and associating would send the important signal that community-oriented work deserves to be recognized and encouraged. This kind of work has traditionally been essential for high-quality scholarly and professional activity (think of the legal profession's vast corpus of precedents and codes, the medical profession's reference works, the scientific world's gigantic structures for sharing everything from raw data to polished research) but has either been done largely by professionals– librarians, catalogers, and others– with little professional visibility, or by organizations that extract high rents for their work. By rewarding users for improving the system and contributing to the professional good, we can harvest some of the benefits of that organizational work without incurring its costs.

[This is extracted from a longer essay on social scanning. A PDF of the entire piece is available.]

© 2017 Alex Soojung-Kim Pang, Ph.D.
