Alex Soojung-Kim Pang, Ph.D.

I study people, technology, and the worlds they make

Tag: social software (page 1 of 2)

Social Scanning article now officially published

I know I've personally sent copies to all fourteen people who are interested in the article, but my piece on social scanning (cleverly subtitled, in Shakespearean fashion, "or, Finally a Use for Twitter") is formally, officially published in Futures. It's part of a special issue on "Global Mindset Change."

Odds are unless you're behind a university paywall you can't actually get to the article, but here's a draft that lays out the argument reasonably well.

The full citation is "Social scanning: Improving futures through Web 2.0; or, finally a use for twitter," Futures v. 42 no. 10 (December 2010), pp. 1222-1230.

Social bike sharing

Another example of bicycles becoming smarter and more social: Social Bikes.

For those who aren't familiar with how these resource-sharing services typically work, check out our story about the technology behind Zipcar. In a nutshell, there are little car lots (or in the case of B-Cycle, a company that will soon deploy shared bikes in Chicago, bike stations) located all over a city that are locked when not in use. A user can make a reservation online for a car or bike and then pick it up at the designated time.

There is no human interaction required: once the mode of transportation is reserved, the user identifies him or herself to the car or bike either by RFID (Zipcar) or PIN at the cycle station (B-Cycle), which then unlocks the car/bike. When the user is done, he or she returns the vehicle to the same lot so that others can make use of the car. For B-Cycle, users can return bikes to any B-Cycle station, not necessarily the one they rented from.

The SoBi system follows a similar path, but the technology is a bit more advanced than that of services like B-Cycle…. For one, there are no cycle stations: SoBi bikes are parked all over the city (starting in New York City) at regular old bike racks. This means that bikes could, in fact, be anywhere at any given time, and not just at a designated station that could be blocks away. You can pick up any bike that's not already reserved, and drop it off anywhere without having to hunt down a drop-off station….

Like a Zipcar, each SoBi bike is equipped with its own "lockbox" that communicates wirelessly with the SoBi servers via GPS and a cellular receiver (an H-24 module from Motorola, Rzepecki told Ars). When you make a reservation online or via smartphone, you see a map of all the bikes in the area based on their GPS data and are given the option to unlock a specific bike when you click on it….

Since the lockbox contains a GPS module, a cell chip, and a lock that works with a PIN pad, there has to be some way to power it. The SoBi team is still working out the kinks in power consumption, but plans to power the devices with a hub dynamo on the bike's rear wheel. The lockbox is essentially powered by your pedaling—no charging stations required.

[thanks, Heather]

Designing an ECAST: How to bring citizens into science policy

Darlene Cavalier has a great piece in Discover about citizen science and reimagining the Office of Technology Assessment. As she explains,

What originally began as Science Cheerleader’s effort to help reopen the Congressional Office of Technology Assessment (an agency, shut down in the 90’s, that helped Congress better understand the policy implications of complex science issues), has evolved into this reincarnation.

Why? It became apparent after two years-worth of numerous discussions with a variety of stakeholders, that reopening the “old” OTA would leave little, if any, opportunities to invoke contemporary applications critical to 21st century governing: decentralized expertise (tapping the knowledge of scientists across the nation) and citizen engagement, to name but two….

Government policymakers, businesses, non-governmental organizations, and citizens rely on analysis to capably navigate the technology-intensive world in which we now live. The new model, described in the report, would provide opportunities to generate input from a diverse public audience, while promoting societal discussions and public education.

This redefines the technology assessment model by recommending the formation of a first-of-its-kind U.S. network to implement the recommendations: Expert and Citizen Assessment of Science and Technology (ECAST).

I'm very interested in systems like this, so I want to take a quick shot at outlining a couple of properties that an ECAST would actually need to have in order to work.

First is a philosophical question. Does this kind of knowledge about the potential impacts of science and technology simply exist somewhere? Or does it need to be created?

Put another way, if you assume the former, your task is to find the person– the orthogonal thinker in a dorm room, the visionary at the startup– who can share a cool insight. If it's the second, then your task is to bring together interesting people, and get them to think together about the future of science and technology.

I've had some clients who were firm believers in the first approach. They wanted me to find the undiscovered visionary: one client more or less told me that my mission was to find the 16 year-old who could become another Steve Jobs, and to find him in China. Wrapped up in this mandate are a couple assumptions: that there's someone out there who sees the future really clearly, and we just need to find them; that such people are the ones who make history (and the future); and that we'll know this person when we find them.

I think each of these three assumptions is faulty. History isn't made by visionaries who spend a lifetime pursuing One Single Vision: I'm not sure that Steve Jobs had a vision for the iPhone that I could have extracted from him in 1973, when he was still– well, before he was Steve. Further, great technologies just aren't made by single people: like all creative endeavours, they're collaborative efforts. Finally, I'm not sure how you'd sort out crackpot from genius ideas about the future in any over-the-transom process.

But this is not to say that a simple process that taps the raw "wisdom of the crowds"– say polling people, or opening up a wiki about the implications of science and technology– is a substitute. My experience trying to get experts to contribute to an open future of science platform makes me skeptical that you'll get useful results just by throwing open the doors, however nice they are. (One of the Discover commenters makes this point, too.)

Rather, you need a process that has several properties.

First, it needs to be accessible to just about anyone who wants to participate– though with some light barriers to entry, enough to discourage people who just want to advocate for their products or talk about how putting microchips in our food will make us all super-geniuses.

Second, it should combine open-ended scanning with events that have clear dates. You need the former because innovation and other interesting things happen all the time; you need the latter because you need mechanisms to encourage concentration and innovative thinking (and hard deadlines and urgency have been shown to stimulate more out-of-the-box thinking than leisure and freedom– a fact that many an academic has discovered the hard way).

Third, the system should thoughtfully draw on the wide varieties of expertise that can be brought together in a virtual platform. Personally I think talking in terms of "citizens" and "experts" threatens to obscure something important, namely that "expertise" about exceptionally complex phenomena is highly distributed and localized. If you want an opinion about the value of Lie groups in Garrett Lisi's theoretical physics, there are about a dozen people in the world you want to talk to (mainly this guy); if you want to think about the broad implications of synthetic biology, you want Rob Carlson, but you also want a lot of other people who can contribute expertise in law, engineering, manufacturing, policy, etc. etc. As the history of science shows, sometimes the people least likely to see the long-term implications of ideas or inventions are the scientists and engineers most intimately involved with their creation.

Fourth, you need some real-world events. Virtual meetings can be great– they generally suck, but they can be designed to be great (I make part of my living designing them)– but face-to-face interactions still produce things that you don't get through online interactions. Even better are events that combine virtual and real interactions and spaces: if properly designed, you get the best of rich social interactions that, as primates, we're so good at, and the virtues of digital scribing and recording and sharing.

Finally, the exercise has to have an obvious payoff. This means two things. First, if it can be designed to provide some immediate benefits to participants– class credit for students, data for grad students, citations for professors, networking opportunities for entrepreneurs, a thousand new Facebook friends for the rest of us– so much the better. Second, it should be clear that people from NIH (or Merck or CIA or NSF) are actually paying attention to the results of Cubesat Day or Synthetic Biology Week. That raises the stakes, creates more of a sense of urgency, and makes everyone take the event more seriously.

Now, what kind of technology platform would you use?

My answer for now is, try a little of everything. Unless you get caught in the trap of outsourcing the whole project to some soul-sucking systems contractor who'll charge you $37 billion and not really ever deliver what you want, you could do a lot of cheap experiments, in lots of cities; so long as you document well and pay close attention, pretty soon you'll see what works and what doesn't, and you can transplant successful efforts to other places. Don't think in terms of a system, in other words: think in terms of an ecosystem, in which you provide some minimal nutrition (seed funding), encourage rapid evolution, have lots of plasmids and transfer RNA around, and quickly reward success. Maybe that sounds like a cop-out, but it's the best way to get a system that's as flexible and interesting as its subject.

Or am I missing something?

Car cost-sharing: finally around the corner?

Back in 2004, when I was a columnist for Red Herring, I wrote a piece about what would happen when reputation systems make their way into the world— that is, when they stop being things that we only consult in online transactions, and become things we can consult easily in real-world transactions. I talked about how they could jump-start car-sharing systems.

Today, I saw an article about RelayRides, a

person-to-person car-sharing service, which will be launching soon in Baltimore. Unlike fleet-based services—Zipcar, City CarShare, I-GO, and others—which maintain their own vehicles, RelayRides relies on individual car owners to supply the vehicles that other members will rent.

There are a couple other services like this, including Divvycar, but there seems to be a sense that these systems are ready to take off. So "why are peer-to-peer car-sharing services emerging now?"

Part of the answer might lie in the way online and offline services like Zipcar, Prosper, Netflix, and others are training us to share our stuff—people are simply getting used to the idea. “‘Zip’ has become a verb to the point that we could ‘zip’ anything—they just happened to start it with cars. Close on their heels was Avelle (formerly Bag, Borrow Or Steal) and now SmartBike for bikes on demand. The next step seems to be a crowd-sourced version of Zipcar,” says Freed.

Another part of the answer might be found in our response to the ecological and economic crises Americans are facing. As Clark explains, “You just think of the number of cars on the road, and the resource that we have in our own communities is so massive… what the peer-to-peer model does is it really allows us to leverage that instead of starting from scratch and building our own fleet.”

From an individual’s perspective, peer-to-peer sharing is a means for owners to monetize their assets during times when they don’t require access to them. But peer-to-peer models can also be understood to utilize existing resources more efficiently—ultimately, to reduce the number of cars on the road—through shifted mentalities about ownership, the intelligent organization of information and, increasingly, through real-time technologies.

Since peer-based car-sharing companies don’t bear the overhead costs of owning and maintaining their own fleets, they don’t require the high utilization rates for vehicles that Zipcar and similar programs do—the result is comparatively fewer limitations for the size and scale of peer-to-peer operations.

Always satisfying for a futurist to see the future actually start to arrive.

Article on social scanning now available

Last week on Future2 I put up a series of posts on social scanning, my experience with scanning tools, the design of a simple scanning system, and its potential individual and professional benefits. They were extracted from a longer think-piece on social scanning, a PDF of which is now available.

In the next few weeks I’ll flesh it out with footnotes, respond to a few points that people have already brought up, and send it off to a journal. The feedback I’ve gotten suggests that the field could definitely benefit from such a system– if built to be simple, social and scalable (qualities my friend Mike Love and I argued are the hallmarks of good knowledge tools).

[To the tune of Charlie Parker, “Night In Tunisia,” from the album The Charlie Parker Story [Disc 3] (a 3-star song, imo).]

PDF of social scanning piece

I've posted a PDF that pulls together my social scanning argument into a single document.

I'll add footnotes to every third word and send it to a journal in the near future.

The benefits of social scanning

In earlier posts, I made an argument for turning scanning into a more social activity; drew some lessons from my experience scanning at IFTF; and outlined how a system drawing on the community's use of Web 2.0 might work. Here, I talk about what such a system could offer: in particular, functionalities that would provide intellectual benefits in the near term, and the professional benefits the system could deliver over time.

Intellectual benefits first. What could such a system deliver to practitioners that would help them improve their work in the near term? I can envision a couple things.

Heat Maps of the Future. This content could be presented in a variety of ways, at several time scales. A list of most popular subjects or citations from the last 24 hours, akin to the default lists on Technorati or Digg, would have the virtue of simplicity and familiarity. Citations and references in today's datastream can tell you what futurists think is interesting right now; but looking at the datastream over longer time periods– weeks or months, say– would give users a clearer picture of what issues are of enduring interest. New product announcements, elections, or disasters generate a frenzy of postings and repostings that die off quickly. In contrast, articles that are still cited after weeks or months are likely to deal with issues of more enduring importance. Looking at a longer stretch of the datastream will also help identify people who are good at spotting important trends early, and who can do so consistently. It will note who first identified the event, who subsequently picked it up, and what chains of influence connect people together.
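As a minimal sketch of the idea, here is one way the same datastream could be counted over a 24-hour window and a 90-day window; the URLs, dates, and window lengths are all hypothetical, chosen only to show how a short-lived frenzy and an enduring topic separate:

```python
from collections import Counter
from datetime import datetime, timedelta

def heat_map(datastream, now, window_days):
    """Count how often each citation appears within the last `window_days` days."""
    cutoff = now - timedelta(days=window_days)
    return Counter(url for ts, url in datastream if ts >= cutoff)

now = datetime(2010, 6, 30)
stream = [
    (datetime(2010, 6, 30), "example.com/gadget-launch"),   # one-day frenzy
    (datetime(2010, 6, 30), "example.com/gadget-launch"),
    (datetime(2010, 6, 30), "example.com/synbio-report"),   # enduring topic,
    (datetime(2010, 5, 15), "example.com/synbio-report"),   # cited for months
    (datetime(2010, 4, 2),  "example.com/synbio-report"),
]

daily = heat_map(stream, now, window_days=1)
quarterly = heat_map(stream, now, window_days=90)
# The gadget launch tops the 24-hour list, but the synthetic-biology
# report dominates the 90-day view: a sign of enduring interest.
```

The same per-author timestamps would also support the influence-chain analysis: whoever cited a URL first, and who followed, falls straight out of sorting the stream by date.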

Weak Signals. These heat maps would provide the background for what many people are really interested in: weak signals of disruptive change. Embedding the search for weak signals in social scanning would improve it greatly, by providing a standard against which the uniqueness of any signal can be measured. Today, the search for weak signals is pretty intuitive, and what counts as a weak signal is personal and subjective: my weak signal may be someone else's conventional wisdom, and vice versa. Aggregating signals from across the futurists' community would help individuals tune their intuition by letting them see whether their weak signals are genuinely novel, or are actually well-known to people in other countries or experts in other specialties; and it would help the discipline as a whole by nudging the search for weak signals into something more rigorous and systematic.
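The novelty test could be as simple as comparing my candidate signals against the aggregated citation counts. This sketch is purely illustrative: the item names, the counts, and the threshold of three citations are all made up for the example:

```python
def classify_signals(my_flags, community_counts, novelty_threshold=3):
    """Label each item I flagged as a weak signal: 'novel' if few others in
    the aggregated datastream have cited it, 'conventional' otherwise."""
    return {
        item: ("novel" if community_counts.get(item, 0) < novelty_threshold
               else "conventional")
        for item in my_flags
    }

# Hypothetical citation counts aggregated from the community's shared links.
community = {"cheap-desktop-sequencing": 41, "algae-based-jet-fuel": 1}
labels = classify_signals(
    ["cheap-desktop-sequencing", "algae-based-jet-fuel"], community)
# My "weak signal" about desktop sequencing turns out to be conventional
# wisdom; the algae jet fuel item really is novel.
```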

Additional Functionalities. Identifying heat maps, trending topics, and weak signals would be basic functions of a social scanning system. Of course, it would be possible to develop additional functionalities based on this content. You could create tools for professional forecasters to benchmark and improve their practice, by showing users how their interests compare to those of the field as a whole; how often they identified weak signals that later were cited by others; and how important things they rated highly turned out to be over time.
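One simple way to show users how their interests compare to the field's is to treat each person's tag frequencies as a profile and measure the overlap; cosine similarity is a common choice. The profiles below are invented for illustration:

```python
import math

def cosine_similarity(profile_a, profile_b):
    """Similarity between two tag-frequency profiles (1.0 = identical mix,
    0.0 = no overlap at all)."""
    keys = set(profile_a) | set(profile_b)
    dot = sum(profile_a.get(k, 0) * profile_b.get(k, 0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in profile_a.values()))
    norm_b = math.sqrt(sum(v * v for v in profile_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical tag counts: the field as a whole vs. two individuals.
field = {"synthetic biology": 120, "energy": 80, "geoengineering": 40}
me = {"synthetic biology": 12, "energy": 8, "geoengineering": 4}
specialist = {"quantum computing": 30}
# My reading mix mirrors the field's almost exactly; the specialist's
# doesn't overlap with it at all.
```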

Other tools could be used by groups. Top-rated topics could be flagged in a prediction markets system whose participants could more explicitly bet on the importance or timing of disruptions or future developments. Yet others could be used with clients. For example, interactive roadmaps that pull content from the system into an online presentation tool like Prezi could be used in strategic planning workshops.

But there are larger, longer-term professional benefits that social scanning could provide. It would facilitate better scanning by converting private work into public goods. Social scanning would provide a social platform connecting the field together. The system would identify people who are good broad scanners, who are good at seeing trends early, who can spot weak signals, or who don't know each other but share research interests. Finally, social scanning could improve the profession of futures by giving practitioners incentives to share their work and systematically improve their forecasting.

Social scanning would be better scanning. It would generate a continuously-updated, community-wide and collective view of what trends are shaping the future, and what signals suggest the emergence of new trends. We can see what various futurists (somewhat independently) consider important, by comparing input from multiple sources. In other words, our collective reading patterns may reveal some insights that we could not create individually. At the organizational level, it would reduce the work of starting new scanning platforms for projects; instead, researchers could draw on existing, automatically-updated scans, augmenting them with additional work when necessary.

It would make scanning more efficient at an individual level, too. Today there's a lot of repetition in scanning, since futurists don't have a way to systematically share the work of scanning. If we could pool the results of our work, and trust the whole community to keep up with the most popular (and, one hopes, most critical) trends, individuals would have more time to spend looking through specialized or offbeat sources– a diversification which would enrich the discipline as a whole– as well as working on synthetic, interpretive activities. To draw a parallel to the academic world, most scholars focus their own energies and writing on specialized subjects, and work with colleagues to evolve new approaches, schools of thought, etc. This latter work doesn't always happen formally: it emerges through literature reviews, thematic essays, conferences, and conversations– a whole infrastructure for producing collective knowledge that futurists haven't really replicated.

Social scanning would encourage useful specialization. Social scanning would allow practitioners to build professional reputations for more kinds of work and insight. Today the fastest way for a futurist to build professional capital is to make flamboyant public pronouncements; doing the more mundane work of identifying less flashy trends, or assembling evidence that others can use, receives virtually no credit. There are currently no mechanisms for recognizing researchers who are terrific scanners but lousy forecasters, or who have a brilliant eye for weak signals but no public presence. By awarding users points for each item they contribute to the datastream (i.e., writing posts on their blogs, adding bookmarks to their account, etc.) and additional points for work they do within the system (e.g., tagging content, associating different pieces of content, or rating contributions), it would quickly become possible to identify people who are community-minded and generous with their ideas. Some of these users may turn out to be well-known names in the field; others may not. (Because the system can also analyze the importance of contributions, it could distinguish people whose work is defined by quantity rather than quality.) But by making it public, the system would give scanning and sharing the recognition they deserve.
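A point scheme along those lines could be very small. The weights below are entirely hypothetical; choosing them (and guarding against gaming) would be the real design work:

```python
# Hypothetical point values -- the actual weights would be a design decision.
POINTS = {
    "post": 5,         # writing a blog post that enters the datastream
    "bookmark": 2,     # sharing a bookmark
    "tag": 1,          # tagging content inside the system
    "association": 2,  # linking related pieces of content
    "rating": 1,       # rating someone else's contribution
}

def score(activity):
    """Total contribution score for a list of activity events."""
    return sum(POINTS.get(kind, 0) for kind in activity)

# A generous scanner who mostly bookmarks, tags, and associates content:
generous_scanner = ["bookmark"] * 10 + ["tag"] * 8 + ["association"] * 3
print(score(generous_scanner))  # 10*2 + 8*1 + 3*2 = 34
```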

This in turn will enrich the professional ecology, by making it possible for practitioners to build social capital from a wider variety of intellectual and professionally constructive activities. This would make futures more like better-developed and -organized disciplines like physics, where people can specialize in particular subjects (high-energy physics, cosmology, condensed matter, etc.), but also make careers as theorists, experimentalists, instrument designers, or computational experts. This is not to say that some of these specialties aren't higher-profile than others, but what matters is that the field has mechanisms for recognizing and rewarding all kinds of contributions to science. This is missing in futures, but there is an opportunity here, thanks to the fact that very few futurists make any money from scanning, but instead make money from the things that scanning enables. Turning this largely invisible private activity into a public good would raise the overall quality of scanning, and recognize and reward good scanners for their contributions to the field.

Social scanning could bring gentle coordination to the discipline. The field lacks the centralized, gatekeeping institutions– a few dominant graduate training programs, a strong professional society, government certification– that give shape to other professions like law and medicine. Nor does it have the canonical literature, moral codes, and daily practices that define members of religious orders. Futurists are spread across corporations, government agencies, consulting companies, one- or two-person groups, and academia, and most of us spend much more time talking to clients than to each other. As a result, the field is physically dispersed and intellectually decentered. Social scanning would help build a more cohesive sense of identity by making the community's interests visible to itself; allow far-flung practitioners who share common interests to find each other; and let them build on each other's work in ways we cannot now.

Social scanning would raise the quality of the discipline. It would provide clear benchmarks for practitioners: it would let me compare what I've been reading to what my colleagues have been reading. Social scanning would also contribute to the development of more solid and rational professional standards. Today, the market rewards the most public futurists for being provocative more than for being useful or right. The upside to analytical rigor and correctness is low, and the downside to being wrong is even lower. Social scanning would begin to shift the economics of professional reputation, and provide a system that ignored flamboyance, gave less credit to single dead-on predictions, and rewarded less spectacular but more consistent performance.

Social scanning would be a lightweight infrastructure. A social scanning platform would do all this without requiring something as elaborate as a World Brain (appealing though that idea might be), or requiring all futurists to adopt common software packages. Like all good knowledge tools (as Mike Love and I argued in a 2008 IFTF report), it lets people do what they're best at, and computers do what they're best at. It can be easily adapted by users and integrated into their existing workflows and habits. We can harvest work that people are already sharing. Nobody who already has a blog or thousands of bookmarks has to switch systems, learn a new tool, or abandon legacy content. They just keep doing what works best for them.

[This is extracted from a longer essay on social scanning. A PDF of the entire piece is available.]

Building new scanning capabilities

Today, futurists using Twitter, Delicious, Digg and other Web 2.0 services publish a flow of content that is probably already too large for any person to follow, and is growing rapidly.

For example, Twitter publishes roughly 600-700 tweets per day marked with the #future hash tag. The futurists I follow post 70-80 tweets per day (though some of those posts are personal or auto-generated by other agents). Futures-oriented lists on Twitter follow anywhere from a dozen to three hundred people, and almost all of those lists are available via RSS.

Other systems generate equally substantial bodies of content. Users on Delicious, the oldest social bookmarking service, post about 350 bookmarks per day with the tag "future." My network (which includes a select few futurists) posts about 220 bookmarks per day. That translates into about 1120 separate data-points per day, or over 400,000 signals per year — just from three services. Futurists' blogs publish between 100 and 200 posts per week.
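The back-of-the-envelope arithmetic is easy to check; taking the combined daily figure quoted above:

```python
signals_per_day = 1120                      # combined daily flow quoted above
signals_per_year = signals_per_day * 365
print(signals_per_year)                     # 408800 -- "over 400,000 signals per year"
```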

Casting one's net wider, one can rapidly capture an enormous number of potential signals. Consider Tweet the Future, a Web site that monitors Twitter for tweets containing the word "future." It finds about 30 tweets every minute– over 40,000 a day– though the vast majority of these tweets have nothing to do with futures or forecasting.

So many if not most futurists, consulting companies, and futures-oriented nonprofits are using one or more of these systems. Most of these datastreams are real-time reflections of what people are reading. These datastreams represent a vast but untapped resource that could be used to build a picture of the collective attention of the futures community, and detect weak signals: indeed, they could largely replace the kind of commissioned content that fed Delta Scan and Signtific. We no longer have to work alone to find interesting things. Instead, we can detect patterns in our and our colleagues' datastreams.

How would a social scanning platform work? Here's what I imagine a very simple but useful system doing.

Its core functionality would be an engine that gathers signals from the free and nearly real-time content produced by futurists and subject-matter experts on blogs, Twitter, and other social media platforms; analyzes this content to find subjects and citations that are of greatest interest to the futures community; and clusters together material that shares unusual terms, keywords, or links to common references. This would let us identify both popular subjects and outlying wild cards, and create a body of data that could support other tools or services.

The system would harvest RSS feeds from a list of blogs, Twitter, Digg and other services maintained by the system's managers. The list would have some simple metadata about sources, most notably their authors; it would also record metadata from its sources, particularly the publication date and time of posts and articles, and whatever tags attach to the content.

What would the system do with this datastream? The first key task would be to filter it. By gathering information about the author of each feed, it would be able to associate multiple feeds with the same author. If the same author has several different sources that the system is following, the system would look across those and filter out repeats. For example, if I have a blog and a bookmarking account, and both automatically push updates to a Twitter account, the system knows to look for cross-posts between those services, and count a blog post that generates a Tweet only once.
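Cross-post filtering can be sketched as keeping one item per (author, link) pair; the authors, services, and URL below are invented for the example:

```python
def filter_crossposts(items):
    """Keep only the first item per (author, link): a blog post auto-pushed
    to the same author's Twitter feed should count once, not twice."""
    seen = set()
    kept = []
    for item in items:
        key = (item["author"], item["link"])
        if key not in seen:
            seen.add(key)
            kept.append(item)
    return kept

datastream = [
    {"author": "pang", "service": "blog",    "link": "example.com/scanning"},
    {"author": "pang", "service": "twitter", "link": "example.com/scanning"},
    {"author": "kim",  "service": "twitter", "link": "example.com/scanning"},
]
unique = filter_crossposts(datastream)
# Pang's cross-post is collapsed to one item, but Kim's independent
# citation of the same article survives -- that repetition is signal.
```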

The second key piece of filtering involves associating multiple hits on the same subject. Different people may talk about the same event but reference articles published in different places, or the same article published in multiple places– a wire service article that appears in several newspapers, or an article that is reblogged. The system would also need to be able to identify different URLs as pointing to the same article—e.g., the full URL of an article and a shortened URL. Identifying these sources could be done by software, by users, or both. So while repetition by an individual would be controlled for, multiple citations and references would be recorded. The former is noise in the system, but the latter is signal: the more people who tag or blog about a subject, the more important it is. (Citation and referencing also filters out non-professional noise. Many Twitter users combine references to major new articles with announcements like "I am eating a sandwich;" the latter are far less likely to be referenced by others than the former.)
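URL association could start with simple canonicalization: lower-case the host, strip "www." and trailing slashes, drop tracking parameters, and expand shortened URLs. The short-URL table below is a stand-in; a real system would resolve shorteners by following their HTTP redirects. All the URLs are hypothetical:

```python
from urllib.parse import urlparse, parse_qsl, urlencode

# Stand-in for real short-URL expansion (which would follow HTTP redirects).
SHORT_URL_MAP = {
    "bit.ly/abc123": "example.com/future-of-synbio",
}

def canonicalize(url):
    """Reduce different spellings of the same article's URL to one key."""
    parsed = urlparse(url if "//" in url else "//" + url, scheme="http")
    netloc = parsed.netloc.lower()
    if netloc.startswith("www."):
        netloc = netloc[4:]
    path = parsed.path.rstrip("/")
    # Drop tracking parameters that vary between copies of the same link.
    query = urlencode([(k, v) for k, v in parse_qsl(parsed.query)
                       if not k.startswith("utm_")])
    key = netloc + path
    key = SHORT_URL_MAP.get(key, key)
    return key + ("?" + query if query else "")

# The full URL (with tracking cruft) and the shortened URL now collide:
a = canonicalize("http://www.example.com/future-of-synbio/?utm_source=rss")
b = canonicalize("bit.ly/abc123")
```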

In Delta Scan and Signtific, contributors or community members were supposed to formally rate the importance of different trends. In this system, we can simply assume that if someone takes the time to share a link to an article, they consider that article to be worth their attention. More links, especially links over time, indicate the emergence of a group consensus that a link points to a trend worth watching.

This kind of filtering could be done automatically, and improved by users. People may be able to identify associations between articles that automated systems don't. They could group together content from the datastream by adding tags to specific pieces of content, and they could edit tags or identify synonymous terms (e.g., ubiquitous computing, ubicomp, ubic, and ubiq all mean the same thing). My experience with Delta Scan and Signtific suggests, however, that this system needs to be kept as simple as possible. People generally don't classify things unless there are clear incentives and immediate rewards. Even then there are huge variations in the use of hash tags, keywords, etc. between users and across systems, and little chance that people can be induced to adopt standard taxonomies or ontologies. However, when you're working with highly social knowledge, and with information that by its nature exists at the boundaries of the human corpus, it's important to maintain a high degree of ontological flexibility.
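User-identified synonyms could feed a simple normalization table like this one; the synonym entries are the illustrative ones from the example above, and a real table would grow out of users' edits rather than being fixed in code:

```python
# A user-maintained synonym table; the entries are illustrative.
SYNONYMS = {
    "ubicomp": "ubiquitous computing",
    "ubic": "ubiquitous computing",
    "ubiq": "ubiquitous computing",
}

def normalize_tags(tags):
    """Lower-case each tag and collapse known synonyms onto one term."""
    return {SYNONYMS.get(tag.lower(), tag.lower()) for tag in tags}

print(normalize_tags(["Ubicomp", "ubiq", "synthetic biology"]))
# Unknown tags pass through unchanged; synonyms collapse to one term.
```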

Rewarding people for doing this kind of tagging and associating would send the important signal that community-oriented work deserves to be recognized and encouraged. This kind of work has traditionally been essential for high-quality scholarly and professional activity (think of the legal profession's vast corpus of precedents and codes, the medical profession's reference works, the scientific world's gigantic structures for sharing everything from raw data to polished research) but has either been done largely by professionals– librarians, catalogers, and others– with little professional visibility, or by organizations that extract high rents for their work. By rewarding users for improving the system and contributing to the professional good, we can harvest some of the benefits of that organizational work without incurring its costs.

[This is extracted from a longer essay on social scanning. A PDF of the entire piece is available.]

Lessons about scanning from IFTF

From 2004 to 2009 I worked on a number of content management systems designed to support research at the Institute for the Future. The largest were two public systems: Delta Scan, a project for the British government's Horizon Scanning Centre, which collected over a hundred long forecasts on the future of science and technology to 2050; and Signtific, a National Academy of Sciences-funded project that collected several thousand signals on trends in global science, technology and innovation between 2007 and 2009. Both projects followed a similar workflow. Experts were contracted to contribute short pieces on current trends or on-the-horizon developments, and longer forecasts that discussed the implications of those trends. In-house researchers then used the content to develop topic maps, and worked with clients or other outside organizations to apply the content to their strategic planning or policy processes.

Both systems proved useful, but we also learned some important lessons that could be incorporated into social scanning.

Getting experts to participate for long periods on futures-related projects is hard. In both Delta Scan and Signtific we recruited graduate students and postdocs as contributors, thinking that they would be easier to hire, have a good sense of their fields, and have a strong incentive to think about the future of their disciplines. But personal career interest didn't translate easily into the kind of broad perspective futurists have, nor did it guarantee high participation in the system: thinking about your next professional move isn't the same thing as thinking like a futurist about your discipline as a whole. (It may also be the case that if you're the only one who sees the Next Big Thing, the potential career rewards of keeping that knowledge secret are greater than any incentives we could offer to make it public.)

Even throwing more money at the problem wasn't enough to engender investment in and commitment to the project. On Signtific, we had a corps of experts who received a substantial monthly honorarium and were expected to write a certain number of short pieces and longer forecasts per month. But it proved difficult for busy people with research to conduct, grant applications to write, conferences to attend, and lives to lead to spend a few hours a month writing for Signtific. The problem was not that it was too large a commitment: it was that it was too easy to defer.

It did help to make the contributions less formalized or formulaic, particularly once it became clear that most contributors don't like thinking about or creating metadata. In Delta Scan experts were required to estimate the likelihood, impact, time frame, and geographical scope of each forecast. A number, however, challenged the possibility of forecasting these dimensions. For scientists accustomed to looking for the right answer, talking about long-term trends seemed too much like pure speculation. In a public venue there was no upside to being right, while it would be easy to expose yourself to ridicule.

In response, in Signtific we made two changes. First, we reduced the number of factors to two: likelihood and impact. Second, we made it possible for anyone to vote on these factors, much in the same way people can vote on articles on Slashdot. Had it gone well, this system would have let us map signals or trends that were low-likelihood but high-impact (and thus wild cards), and compare how users in different fields or parts of the world viewed the same trends. Even with the simpler format, however, it proved difficult to get readers to rate content.
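The mapping described above can be sketched in a few lines. This is a hypothetical reconstruction of the idea, not Signtific's actual code; the 1-5 rating scales, signal IDs, and cutoff thresholds are all assumptions for illustration:

```python
# Sketch: aggregate reader votes on likelihood and impact (assumed 1-5
# scales) and flag low-likelihood / high-impact signals as wild cards.
from statistics import mean

# Hypothetical votes: signal id -> list of (likelihood, impact) pairs.
votes = {
    "sig-01": [(1, 5), (2, 5), (1, 4)],
    "sig-02": [(4, 2), (5, 3)],
}

def wild_cards(votes, likelihood_cut=2.5, impact_cut=3.5):
    """Return signals whose average likelihood is low but impact is high."""
    flagged = []
    for signal, vs in votes.items():
        avg_likelihood = mean(v[0] for v in vs)
        avg_impact = mean(v[1] for v in vs)
        if avg_likelihood <= likelihood_cut and avg_impact >= impact_cut:
            flagged.append(signal)
    return flagged

# wild_cards(votes) flags "sig-01": unlikely but high-impact.
```

Comparing the same aggregation across user groups (by field or geography, as the text suggests) would just mean keeping separate vote dictionaries per group and diffing the averages.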

Some of the same challenges hindered broader community-building. We gave users the ability to contribute their own content or rate the importance and likelihood of existing forecasts, and assumed they would participate out of intellectual interest or for public recognition; neither was a powerful draw. Experts hired as freelancers or contractors, in contrast, had a clear understanding of both the scope and the limits of their obligations. It's hard to contract out community participation.

On the other hand, we did find other things that worked well, focused expert contributors' attention and labor, and reduced the amount of work necessary to edit and maintain the database. Most notably, we found that workshops, properly structured and supported with the right electronic tools, could yield a tremendous amount of useful content. (As one participant put it, it was easier to get more done in four focused hours than four distracted months.) Some were writing workshops, in which people wrote signals; in other cases I brought together experts to analyze the current state of the database, and develop scenarios or forecasts based on existing signals. Many of these were one-day events, but eventually I was able to design a half-day workshop format that was still quite productive. The key to making them work is to bring people together physically, and provide the group with a good technical framework and process for capturing their insights. Structuring the work this way allowed them to focus their attention, compare their work with others, and get a better sense that they had made a tangible contribution to the project.

But despite our best efforts, we never quite managed to encourage the development of a self-sustaining online community that would create and rate content, update and enrich the database, and help us identify trends or disruptions we never would have found ourselves.

But while we struggled with this challenge, futurists discovered Web 2.0. And an unexpected solution to our problem – and a whole host of new opportunities – presented themselves.

[This is extracted from a longer essay on social scanning. A PDF of the entire piece is available.]

More on the Facebook as time machine

John Boudreau reports that “the Internet is reconnecting long-lost sweethearts,” while Scott Harris writes about Facebook as a time machine (gee, that sounds familiar).


Not long ago, such rekindlings were largely relegated to once-a-decade school reunions, those awkward gatherings that tend to be more about sizing up past rivals than reconnecting with former sweethearts. But the Internet is now profoundly altering some people’s links to the past and sometimes upending their lives in unexpected ways. For some, the outcome is a blissful recoupling; for others, the reignited embers burn down the house….

[T]he Internet, and now social-networking sites such as Facebook, make relinking easier and more common. And people are doing it at a much younger age — instead of an uncomfortable phone call to her parents, all he has to do is run a Google search for her name.


Many people tell of reuniting with cherished, long-lost friends, or reviving meaningful social circles that had frayed over the years. I’ve met a couple who were high school sweethearts but had been out of touch for 23 years. Now they credit Facebook for reconnecting them — and the romance is fully rekindled. …

It’s interesting how Facebook has connected a little social network of my high school friends — some close, some not so close. When I couldn’t find an address for a friend whose father had died, I contacted one of her classmates through Facebook. She had the e-mail address.

Why is that?

Unlike predecessors Friendster and MySpace, Facebook succeeded by creating a culture of authenticity — not a dodgy realm of alter egos, but a place where people feel comfortable showing off photos of their children to their friends.

I would say that it didn’t create that culture of authenticity: it set some initial conditions that allowed users to create it.

[To the tune of Django Reinhardt, “It Don’t Mean a Thing (If It Ain’t Got That Swing),” from the album The Best of Django Reinhardt (I give it 1 star).]

© 2019 Alex Soojung-Kim Pang, Ph.D.
