Via Duke professor Cathy Davidson, I just came across this L.A. Times piece by Christopher Chabris and Daniel Simons. (They’re the authors of The Invisible Gorilla.) The essay takes aim at “digital alarmism,” the argument that the Internet is making us stupider by “trap[ping] us in a shallow culture of constant interruption as we frenetically tweet, text and e-mail,” both leaving us less time to read Proust, and rewiring our brains so we’re incapable of paying serious attention to… anything.
Back in 2004, when I was a columnist for Red Herring, I wrote a piece about what would happen when reputation systems make their way into the world— that is, when they stop being things that we only consult in online transactions, and become things we can consult easily in real-world transactions. I talked about how they could jump-start car-sharing systems.
RelayRides is a person-to-person car-sharing service, which will be launching soon in Baltimore. Unlike fleet-based services—Zipcar, City CarShare, I-GO, and others—which maintain their own vehicles, RelayRides relies on individual car owners to supply the vehicles that other members will rent.
There are a couple of other services like this, including Divvycar, and there seems to be a sense that these systems are ready to take off. So why are peer-to-peer car-sharing services emerging now?
Part of the answer might lie in the way online and offline services like Zipcar, Prosper, Netflix, and Kiva.org are training us to share our stuff—people are simply getting used to the idea. “‘Zip’ has become a verb to the point that we could ‘zip’ anything—they just happened to start it with cars. Close on their heels was Avelle (formerly Bag, Borrow Or Steal) and now SmartBike for bikes on demand. The next step seems to be a crowd-sourced version of Zipcar,” says Freed.
Another part of the answer might be found in our response to the ecological and economic crises Americans are facing. As Clark explains, “You just think of the number of cars on the road, and the resource that we have in our own communities is so massive… what the peer-to-peer model does is it really allows us to leverage that instead of starting from scratch and building our own fleet.”
From an individual’s perspective, peer-to-peer sharing is a means for owners to monetize their assets during times when they don’t require access to them. But peer-to-peer models can also be understood to utilize existing resources more efficiently—ultimately, to reduce the number of cars on the road—through shifted mentalities about ownership, the intelligent organization of information and, increasingly, through real-time technologies.
Since peer-based car-sharing companies don’t bear the overhead costs of owning and maintaining their own fleets, they don’t require the high utilization rates for vehicles that Zipcar and similar programs do—the result is comparatively fewer limitations for the size and scale of peer-to-peer operations.
Always satisfying for a futurist to see the future actually start to arrive.
[T]he knowledge economy is less about knowledge management than about knowledge creation— creativity and innovation. Creativity is fast becoming the competitive advantage for leading-edge companies.
As creativity becomes more important, one of the most powerful tools is still the human brain, with its ability to hold tacit knowledge—to synthesize information, see patterns, derive insight, and create something new. This ability has yet to be programmed into a computer system. What’s more, we are discovering that it’s not just the lone human brain working on its own that creates the best innovation these days, but the human brain in collaboration with other human brains, sometimes with many thousands or millions of others, in social networks enabled by the Internet. In other words, there’s a social aspect to knowledge, creativity, and innovation that we are just learning to tap. It is this social aspect of knowledge that the new knowledge tools are designed to leverage.
They don’t consist of a single device or system but an array of devices, systems, methodologies, and services sometimes called the “intelligent web.” The new tools are applications that exploit things like semantic Web functions, microformats, natural
language searching, data-mining, machine learning, and recommendation agents to provide a more productive and intuitive experience for the user. In other words, the new knowledge tools aren’t meant to replace humans, they are meant to enable humans to do what they do best—creativity and innovation—without having to do the heavy lifting of brute information processing….
Organizations are in the middle of a paradigm shift from machine-heavy knowledge management tools designed to maximize efficiency and standardize organizational practices to technically lightweight, human-centered instruments that facilitate creativity and collaboration. It is this human creativity that will differentiate businesses in the future.
Today’s generation of knowledge tools—interrelational databases like Freebase and DBPedia, social networks like OpenSocial, information accessing tools like Snapshots—are flexible and relatively easy for individuals and groups to learn, and thus can serve as “outboard” brains. The result is a kind of human–machine symbiosis in which processing-heavy tasks are offloaded onto software, leaving users to collaborate more freely with each other in search of insight, creativity, and experience.
Even as this new generation of knowledge tools evolves, traditional knowledge management will continue to matter, just as agriculture and manufacturing still have a place in service economies. Companies will continue to track resources, process payroll, maintain centralized databases, and manage IT infrastructures. But the new leading edge will be organized not mainly around management but around discovery: using systems to augment human imagination and creativity.
Computers + communities = best.
That’s pretty much it. There’s also some philosophical (or sociological, depending on which team you play for or root for) stuff about the nature of knowledge, and the degree to which knowledge tools have to be social; the different types of intelligence that humans and computers exhibit (something I’ve also written about on The End of Cyberspace); and the future of memory.
It also lays out an argument for why simple, social, and symbiotic knowledge tools are triumphing over complex ones– why the mouse conquered the world, while the chord keyboard and the rest of Doug Engelbart’s system languish in obscurity.
Update: It occurs to me that this is one of a number of pieces that I wrote or co-authored that are available on the IFTF Web site. Others include:
I'm supposed to be taking some of the summer off, finishing the book and a couple articles, but like Michael Corleone in The Godfather, every time I think I'm out they pull me back in. I was at the first day of SciBarCamp today, playing local host / fixer / keeping an eye on the furniture. Sean Mooney (who in addition to being a former professor at Indiana University, was a World Wrestling Federation announcer) gave a very interesting talk about current challenges in bioinformatics.
A fair amount of Sean's talk dealt with the technical challenges of creating federated databases; the differing demands of bench scientists and funders– the former want tools for managing and analyzing data for today's problems, while the latter want to attack Big Questions– and the issues involved in getting people to share their data. The issues aren't so much philosophical or competitive as practical: people believe in sharing data, and once they're done with it they're generally willing to share it, so long as doing so doesn't put a burden on them.
But as Sean was talking about how different labs used different procedures for similar experiments, and how those differences manifested themselves in the ways they produced and consumed data (at least, this is what I took away from his talk– he might have meant something completely different), a thought came to me. Projects intended to let scientists share data assume that data can be converted into something like the reagents or instruments labs buy from suppliers– a commodity that you don't have to think about, you just use. But what if data can't be black-boxed this way? Or, more specifically, what if only really uninteresting data– the kind that everyone understands very well, the kind that's solidly in the realm of normal science– can be cleaned up, repackaged, commodified and standardized, and put online into generally usable databases?
On one hand, this idea might seem stupid. After all, science is science: data is data, and facts about nature are true no matter where they're created. That makes them scientific. On the other hand, if you buy the argument of people like Harry Collins, scientific research is as much a craft as a– well, a science. Databases tend to reflect the specific, local interests of researchers, working on particular problems. This tends to work against the generalizability of data: the more it's a product of craft, and an object tailored to a particular job, the harder it'll be to make it useful to other people.
So depending on how much databases are expressions of craftwork and problem-solving and bricolage, and how much they reflect a timeless, placeless crystallization of nature's order, they're going to be less or more easily poured into big projects to reuse data.
Yesterday at Project M lab you drew a doodle that read, “Remind me to keep an open mind.”
It’s so easy for us to be a victim of our own orthodoxy and synaptic connections. I’ve often thought about giving Project M’ers t-shirts that they have to wear the whole time that read, “Please remind me to keep an open mind.” That’s why I wear this stupid little bracelet that says “Live Wrong”: it’s always a reminder to me to think wrong.
How do we actively keep an open mind?
I try to surround myself with people that encourage that. If you’re just sitting in your cabin in the woods, it’s very easy to get wound up in your own thoughts and they reinforce each other…. The biggest thing is having people to play with, who get it, who are challenging and who keep the conversation activated like that.
Everyone loves groups. What's better (in America at least) than being part of a "team"? Collaboration is cool. (Is there a word that's been rehabilitated more completely than "collaboration"? Fifty years ago, someone who "collaborated" wasn't a good person, but a traitor.) Collective intelligence is the solution to the world's problems. Smart mobs are… mobbish, perhaps, but also smart, and that's what matters.
Groups are powerful… but for all their power, they're also fragile. University of Washington academics Will Felps and Terence Mitchell constructed a very interesting experiment to show just how fragile they are, by demonstrating the effect of "bad apples" on the effectiveness of small groups.
Groups of four college students were organized into teams and given 45 minutes to complete a set of basic management decisions. To motivate the teams, they were told that whichever team performed best would be awarded $100 per person. What they didn't know, however, was that in some of the groups, the fourth member of the team wasn't a student. He was an actor hired to play a bad apple, one of these personality types:
The Depressive Pessimist will complain that the task that they're doing isn't enjoyable, and make statements doubting the group's ability to succeed.
The Jerk will say that other people's ideas are not adequate, but will offer no alternatives himself. He'll say "you guys need to listen to the expert: me."
The Slacker will say "whatever", and "I really don't care."
The conventional wisdom in the research on this sort of thing is that none of this should have had much effect on the group at all. Groups are powerful. Group dynamics are powerful. And so groups dominate individuals, not the other way around. There's tons of research, going back decades, demonstrating that people conform to group values and norms.
But Felps found the opposite.
Invariably, groups that had the bad apple would perform worse. And this despite the fact that there were people in some groups who were very talented, very smart, very likeable. Felps found that the bad apple's behavior had a profound effect — groups with bad apples performed 30 to 40 percent worse than other groups.
A paper describing the experiment, "How, When, and Why Bad Apples Spoil the Barrel: Negative Members and Dysfunctional Groups," is available as a PDF.
Elsewhere, she writes about the problems of isolation in scholarship that echo things I noticed at AHA:
Although I consider part of what I do scholarship, I don’t think many others would consider me a scholar. I’m not sure I want to be a scholar, at least not as it’s currently conceived–the isolated individual hunched over books (or maybe more contemporarily, the screen)….
Here’s what I learned, or what I’m chewing on right now. The real work of scholarship takes place in isolation and through individual work. From that isolated position, isolated works get created, and those works are read by only a few people. There are exceptions to this, of course, and the sciences are much more collaborative than other disciplines, although they also are at greater risk of being scooped than humanities faculty, for example. In my field, instructional technology, much of the thinking and work that looks like scholarship happens online, via blogs, wikis, podcasts, etc. And that’s one of the things that draws me to the field. I like thinking out loud with others. I feel more comfortable moving the thinking and scholarship that happens online within the ed tech community into formal publication than I would going from online to formal within the rhetoric and composition field.
Partly, of course, it’s because I’ve lost touch with that scholarly community, and from what I know of it through reading and the contact I’ve had with people in the field, it’s both going in directions that interest me and in directions that really don’t. Honestly, I think to some extent I’m skeptical of scholarship in many (most) fields. I find some of it very valuable, but the way that scholarship is produced and the reasons it’s produced (for the sake of getting tenure and promotion, maybe to forward the field, maybe to say something new) tend to make it less valuable to me personally.
The assumption that “the real work of scholarship takes place in isolation and through individual work” is one that many humanists, and a fair number of social scientists would recognize; certainly for historians it’s the default. We may have colloquia and seminars and dissertation reading groups, but the core of historical work– the work in the archives, and especially the work of writing– is done alone.
In contrast, my work at the Institute is, at its best, effervescently collaborative. One of our most important research tools is the expert workshop, which is what it sounds like, but is actually more fun. We write just about all our important stuff using Google Docs, sitting around in a space, talking through suggested revisions in real time, batting opening lines and transitions back and forth. (For someone who spends so much time writing, the opportunity to do it in a fundamentally new way is really refreshing.)
Today, working on things like the end of cyberspace book is a pleasant break from that ultra-social routine, a chance to step inside the venerable courts for a few hours. (Not that we should completely abandon older ways of working. The ability to make creative use of solitude is something that psychologists have recognized is quite valuable; and good work often requires moving between collaborative and contemplative modes.)
I’m not sure I’d be happy going back to a life in which I was mainly working by myself. But I’m also curious how long a life like that– one lived, in its most essential parts, alone– will be available to professors of history and literature and the like. Already they’re mild anachronisms on campus: for most departments, I’m willing to bet, collaboration– or at least joint authorship, and a lot of work in which your research projects plug clearly into someone else’s research agenda– is the rule rather than the exception. (It would be interesting to compare the number of jointly-authored articles in leading history journals to the number in, say, economics and sociology.) The infrastructure for more intimately collaborative forms of historical research and writing is emerging, at least for a few specialties. Will it be very long before the students in first-year methods classes all have to work together on joint projects, and before people who have become accustomed to working with others no longer feel that that capability makes them different from scholars?
[To the tune of Howlin’ Wolf, “Goin’ Down Slow,” from the album Chess Blues 1947-1967 (a 3-star song, imo).]