From 2004 to 2009 I worked on a number of content management systems designed to support research at the Institute for the Future. The largest were two public systems: Delta Scan, a project for the British government's Horizon Scanning Centre, which collected over a hundred long forecasts on the future of science and technology to 2050; and Signtific, a National Academy of Sciences-funded project that collected several thousand signals on trends in global science, technology and innovation between 2007 and 2009. Both projects followed a similar workflow. Experts were contracted to contribute short pieces on current trends or on-the-horizon developments, and longer forecasts that discussed the implications of those trends. In-house researchers then used the content to develop topic maps, and worked with clients or other outside organizations to apply the content to their strategic planning or policy processes.

Both systems proved useful, but we also learned some important lessons that could inform social scanning.

Getting experts to participate for long periods on futures-related projects is hard. In both Delta Scan and Signtific we recruited graduate students and postdocs as contributors, thinking that they would be easier to hire, have a good sense of their fields, and have a strong incentive to think about the future of their disciplines. But personal career interest didn't translate easily into the kind of broad perspective futurists have, nor did it guarantee high participation in the system: thinking about your next professional move isn't the same thing as thinking like a futurist about your discipline as a whole. (It may also be the case that if you're the only one who sees the Next Big Thing, the potential career rewards of keeping that knowledge secret are greater than any incentive we could offer to make it public.)

Even throwing more money at the problem wasn't enough to engender investment in and commitment to the project. On Signtific, we had a corps of experts who received a substantial monthly honorarium and were expected to write a certain number of short pieces and longer forecasts each month. But busy people with research to conduct, grant applications to write, conferences to attend, and lives to lead found it hard to spend even a few hours a month writing for Signtific. The problem was not that it was too large a commitment: it was that it was too easy to defer.

It did help to make the contributions less formalized and formulaic, particularly once it became clear that most contributors don't like thinking about or creating metadata. In Delta Scan experts were required to estimate the likelihood, impact, time frame, and geographical scope of each forecast. A number of them, however, questioned whether these dimensions could be forecast at all. For scientists accustomed to looking for the right answer, talking about long-term trends seemed too much like pure speculation. In a public venue there was little upside to being right, and it was easy to expose yourself to ridicule for being wrong.

In response, in Signtific we made two changes. First, we reduced the number of factors to two: likelihood and impact. Second, we made it possible for anyone to vote on these factors, much in the same way people can vote on articles on Slashdot. Had it gone well, this system would have let us map signals or trends that were low-likelihood but high-impact (and thus wild cards), and compare how users in different fields or parts of the world viewed the same trends. Even with the simpler format, however, it proved difficult to get readers to rate content.
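To make the two-factor scheme concrete, here is a minimal sketch (in Python, with hypothetical names and cutoff values rather than anything from the actual Signtific codebase) of how votes on likelihood and impact can be aggregated to surface wild cards:

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class Vote:
    """One reader's rating of a signal on the two retained factors."""
    likelihood: int       # e.g. 1 (very unlikely) .. 5 (very likely)
    impact: int           # e.g. 1 (negligible) .. 5 (transformative)
    region: str = ""      # optional: where the voter is based
    discipline: str = ""  # optional: the voter's field


@dataclass
class Signal:
    """A short piece of content plus the votes readers have cast on it."""
    title: str
    votes: list = field(default_factory=list)

    def mean_likelihood(self):
        return mean(v.likelihood for v in self.votes) if self.votes else None

    def mean_impact(self):
        return mean(v.impact for v in self.votes) if self.votes else None

    def is_wild_card(self, likelihood_cutoff=2.5, impact_cutoff=3.5):
        # Wild cards: rated unlikely but, if they happen, highly consequential.
        if not self.votes:
            return False
        return (self.mean_likelihood() <= likelihood_cutoff
                and self.mean_impact() >= impact_cutoff)


# Readers in different regions rate the same signal.
signal = Signal("Desktop fabrication goes mainstream")
signal.votes += [Vote(2, 5, region="EU"), Vote(1, 4, region="US")]
print(signal.mean_likelihood(), signal.mean_impact(), signal.is_wild_card())
```

Grouping the same votes by region or discipline would support the cross-field comparisons described above; the sticking point, as noted, was getting readers to cast votes at all.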

Some of the same challenges hindered broader community-building. We gave users the ability to contribute their own content or rate the importance and likelihood of existing forecasts, and assumed they would participate out of intellectual interest or for public recognition; neither was a powerful draw. Experts hired as freelancers or contractors, in contrast, had a clear understanding of both the scope and the limits of their obligations. It's hard to contract out community participation.

On the other hand, we did find things that worked well: they focused expert contributors' attention and labor, and reduced the amount of work necessary to edit and maintain the database. Most notably, we found that workshops, properly structured and supported with the right electronic tools, could yield a tremendous amount of useful content. (As one participant put it, it was easier to get more done in four focused hours than in four distracted months.) Some were writing workshops, in which people wrote signals; in other cases I brought together experts to analyze the current state of the database and develop scenarios or forecasts based on existing signals. Many of these were one-day events, but eventually I was able to design a half-day workshop format that was still quite productive. The key to making them work was to bring people together physically, and to provide the group with a good technical framework and process for capturing their insights. Structuring the work this way allowed participants to focus their attention, compare their work with that of others, and see that they had made a tangible contribution to the project.

But despite our best efforts, we never quite managed to foster a self-sustaining online community that would create and rate content, update and enrich the database, and help us identify trends or disruptions we would never have found ourselves.

While we struggled with this challenge, futurists discovered Web 2.0. And an unexpected solution to our problem, along with a whole host of new opportunities, presented itself.

[This is extracted from a longer essay on social scanning. A PDF of the entire piece is available.]