Alex Soojung-Kim Pang, Ph.D.

I study people, technology, and the worlds they make


This is how you review a book

Charles Pierce’s review of Ross Douthat’s Bad Religion (shorter version: the Sixties sucked) is a master class in how to take apart a book in a manner that respects the subject, but gives the author the flogging they deserve. This may be my favorite part:

[N]owhere does Douthat so clearly punch above his weight class as when he decides to correct the damage he sees as having been done by the historical Jesus movement, the work of Elaine Pagels and Bart Ehrman and, ultimately, Dan Brown’s novels. Even speaking through Mark Lilla, it takes no little chutzpah for a New York Times op-ed golden child to imply that someone of Pagels’s obvious accomplishments is a “half-educated evangelical guru.” Simply put, Elaine Pagels has forgotten more about the events surrounding the founding of Christianity, including the spectacular multiplicity of sects that exploded in the deserts of the Middle East at the same time, than Ross Douthat will ever know, and to lump her work in with the popular fiction of The Da Vinci Code is to attempt to blame Galileo for Lost in Space.

Fantastic. As good as Adam Gopnik’s epic takedown of The Matrix Reloaded. It’s made all the more impressive by the sense that Pierce really knows what he’s talking about. Here are two very different passages, each illuminating in its own way:

He describes the eventual calcification of the sprawling Jesus movement into the Nicene Creed as “an intellectual effort that spanned generations” without even taking into account the political and imperial imperatives that drove the process of defining Christian doctrine in such a way as to not disturb the shaky remnants of the Roman empire. The First Council of Nicaea, after all, was called by the Emperor Constantine, not by the bishops of the Church. Constantine — whose adoption of the Christianity that Douthat so celebrates would later be condemned by James Madison as the worst thing that ever happened to both religion and government  — demanded religious peace. The council did its damndest to give it to him. The Holy Spirit works in mysterious ways, but Constantine was a doozy. Douthat is perfectly willing to agree that early Christianity was a series of boisterous theological arguments as long as you’re willing to believe that he and St. Paul won them all….

[Douthat is] yearning for a Catholic Christianity triumphant, the one that existed long before he was born, the Catholicism of meatless Fridays, one parish, and no singing with the Methodists. I lived those days, Ross. That wasn’t religion. It was ward-heeling with incense.

Blogging elsewhere

I realized I’ve not been writing much here, but I have been doing more on my professional blogs. So here’s a list of recent posts on Future2:

And on End of Cyberspace:

Just don’t want to seem like a slacker…

[To the tune of Michael Nyman Band, “An Eye For Optical Theory (from The Draughtsman’s Contract),” from the album The Essential Michael Nyman Band (a 1-star song, imo).]

Google’s cloudy Web clipboard

One of the things this project has taught me is that metaphor is really important: talking about the Internet as a place had very real impacts on copyright law, user interface design, and our expectations about how the Internet would shape the future. So shifts in metaphors matter too.

One of the things I've been paying attention to is the growing popular use of the term "cloud" to describe the Web. Usually this comes up in the context of some service that has migrated from the desktop to the Web, and the implication is that said service– your address book, word processor, calendar, what have you– is no longer chained to your desktop, but is accessible from any device through "the cloud."

Today I noticed that Google Docs doesn't have a clipboard; instead, it has a "Web clipboard."

[Screenshot: the Web clipboard icon in Google Docs]

Notice that the Web clipboard isn't a conventional clipboard icon, but a clipboard with a cloud in front of it.

Now, there's a lot of incongruity in this icon. The combination of cloud + clipboard is not exactly consistent: you can't attach a cloud to a clipboard, nor do you normally see clipboards rising into the sky.

Yet if you know that cloud = Web, it makes sense. Cloud + clipboard = "Web clipboard." But in order for it to work, you need to be reasonably familiar with the idea of the Web as a cloud. Not a place, but a cloud– something that floats around in the sky, visible from anywhere. Google's icon designers are assuming that people are familiar enough with the cloud = Web equation to make its use uncontroversial. Another step away from cyberspace as place.

Car cost-sharing: finally around the corner?

Back in 2004, when I was a columnist for Red Herring, I wrote a piece about what would happen when reputation systems made their way into the world— that is, when they stopped being things we consult only in online transactions and became things we could consult easily in real-world ones. I talked about how they could jump-start car-sharing systems.

Today, I saw an article about RelayRides, a person-to-person car-sharing service, which will be launching soon in Baltimore. Unlike fleet-based services—Zipcar, City CarShare, I-GO, and others—which maintain their own vehicles, RelayRides relies on individual car owners to supply the vehicles that other members will rent.

There are a couple other services like this, including Divvycar, but there seems to be a sense that these systems are ready to take off. So "why are peer-to-peer car-sharing services emerging now?"

Part of the answer might lie in the way online and offline services like Zipcar, Prosper, Netflix, and Kiva.org are training us to share our stuff—people are simply getting used to the idea. “‘Zip’ has become a verb to the point that we could ‘zip’ anything—they just happened to start it with cars. Close on their heels was Avelle (formerly Bag, Borrow Or Steal) and now SmartBike for bikes on demand. The next step seems to be a crowd-sourced version of Zipcar,” says Freed.

Another part of the answer might be found in our response to the ecological and economic crises Americans are facing. As Clark explains, “You just think of the number of cars on the road, and the resource that we have in our own communities is so massive… what the peer-to-peer model does is it really allows us to leverage that instead of starting from scratch and building our own fleet.”

From an individual’s perspective, peer-to-peer sharing is a means for owners to monetize their assets during times when they don’t require access to them. But peer-to-peer models can also be understood to utilize existing resources more efficiently—ultimately, to reduce the number of cars on the road—through shifted mentalities about ownership, the intelligent organization of information and, increasingly, through real-time technologies.

Since peer-based car-sharing companies don’t bear the overhead costs of owning and maintaining their own fleets, they don’t require the high utilization rates for vehicles that Zipcar and similar programs do—the result is comparatively fewer limitations for the size and scale of peer-to-peer operations.
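To make that utilization point concrete, here's a back-of-the-envelope sketch. Every number in it is invented for illustration; none is drawn from Zipcar's or RelayRides' actual economics.

```python
# Hypothetical break-even arithmetic for fleet vs. peer-to-peer car-sharing.
# All figures are made up for illustration only.

fleet_monthly_cost = 700.0  # owning, insuring, and parking one fleet car ($/month)
rental_rate = 8.0           # revenue per rented hour ($/hour)

# A fleet car has to rent this many hours a month before it earns a cent.
breakeven_hours = fleet_monthly_cost / rental_rate
print(f"Fleet car break-even: ~{breakeven_hours:.0f} rented hours/month")

# A peer-owned car is already bought and parked; the service only covers
# marginal costs (extra wear, an insurance rider), so low utilization still pays.
peer_marginal_cost = 1.5    # $/hour
print(f"Peer car nets ${rental_rate - peer_marginal_cost:.2f}/hour from the first rental")
```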

Always satisfying for a futurist to see the future actually start to arrive.

Laptops, classrooms, and discussion

An article in the Washington Post (via the Volokh Conspiracy) on the mixed value of laptops in the classroom:

A generation ago, academia embraced the laptop as the most welcome classroom innovation since the ballpoint pen. But during the past decade, it has evolved into a powerful distraction. Wireless Internet connections tempt students away from note-typing to e-mail, blogs, YouTube videos, sports scores, even online gaming — all the diversions of a home computer beamed into the classroom to compete with the professor for the student's attention.

This isn't just confined to colleges and graduate schools (law schools figure prominently in the article): I encounter a similar issue in workshops that I run. Especially here in the Valley, within ten minutes at least one person in a group of fifteen is going to have their BlackBerry in their lap, checking their messages. It's so common I no longer take it personally, and I find it doesn't work very well to ask people to turn things off, or to remind them that they should be paying attention. People know they should be paying attention. They haven't forgotten.

Instead, I take it as a challenge to be more creative and engaging. And I'm not the only one:

José A. Bowen, dean of the Meadows School of the Arts at Southern Methodist University, is removing computers from lecture halls and urging his colleagues to "teach naked" — without machines. Bowen says class time should be used for engaging discussion, something that reliance on technology discourages.

I think this is good advice. I prefer not to use PowerPoint in talks or lectures, because I find that I spend more time interacting with the technology than actually talking to students. But more fundamentally, Bowen's advice challenges what you might call the information delivery model of teaching– the idea that the point of being in the classroom is to work through a more-or-less formal set of exercises in order to master a body of information. Everyone has better things to do in the classroom: there are more intensive and more social kinds of learning that you can practice when you're with other people, and that you can't practice when you're alone or online.

How Flickr changes my view of the world

In a recent article on experiments using automatic digital photography to improve the memories of Alzheimer's patients, I was struck by these paragraphs:

When researchers began exploring it as a memory aid a few years ago, they had patients and caregivers look at all the pictures together.

Although the exercise helped improve retention of an experience, it was evident that a better way would be to focus on a few key images that might unlock the memories related to it. The interactive nature of that approach would give patients a greater sense of control over their recollections, and allow them to revisit past experiences rather than simply know they had happened.

They soon realized that the capriciousness of memory made answers elusive. For one subject, a donkey in the background of a barnyard photo brought back a flood of recollections. For another, an otherwise unremarkable landscape reminded the subject of a snowfall that had not been expected.

The idea that "the capriciousness of memory" would make it difficult to automatically generate summaries of events mirrors my own experience: I have entire trips that I recall through a couple of apparently random things– the look of a hotel room, what I had for dinner. Likewise, looking at an entire album of pictures doesn't necessarily do much to help me remember more of an event.

I wonder if the scientists have tried getting their subjects to consciously manipulate those records afterwards– to make a photo album, for example– to see if that process of sorting improves recall. I remember trips much better if I write about them, or choose pictures to put online, much as I remember books better when I take notes on them. In fact, it's safe to say that the ritual of going through pictures, tagging them, and uploading them has both made it easier for me to remember these places and changed my view of the world.

Let me explain.

One of the Web services I use a lot is the photo sharing site Flickr (if you don't believe me, just go to my account and see for yourself). I'm a fairly obsessive photographer, mainly because I like good pictures, but I'm not a very good one. With a film camera, you really pay for artistic mediocrity or technical clumsiness: you have to throw the same amount of money at a bad picture as at a good one. With digital cameras, on the other hand, you can play the lottery: take enough pictures, and some of them will accidentally be good. I'm also a doting father whose children aren't old enough to put up a serious fight when I get out the camera. And finally, digital cameras are small enough to fit in a pocket, so my Canon PowerShot is always handy. I don't have to plan to carry a camera with me: it's one of the things I always have when I walk out the door.

One of my favorite features in Flickr is its mapper, which lets you tell Flickr where in the world your picture was taken. Essentially, you put a digital pin in an online map, much as you would in a real map. Flickr and Yahoo! Maps got together to provide the service in 2006, and since then I've become a slightly fanatical geotagger. It started out as pure geekdom: I'd written stuff about the future of geolocation services and information, so it seemed a good chance to play with a future I had already described. But now I do it because it's a way to help me remember my pictures, and where I took them.
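Incidentally, the same pin-dropping is exposed through Flickr's public API, via the flickr.photos.geo.setLocation method. Here's a minimal sketch in Python, assuming the third-party flickrapi package; the key, secret, and photo ID below are placeholders, not working values:

```python
# Geotag a photo through the Flickr API instead of the map's drag-and-drop UI.
# Requires the third-party "flickrapi" package; key, secret, and photo_id
# are placeholders.
import flickrapi

flickr = flickrapi.FlickrAPI('YOUR_API_KEY', 'YOUR_API_SECRET')
flickr.authenticate_via_browser(perms='write')  # one-time OAuth handshake

# Pin the photo to Trafalgar Square; accuracy 16 is Flickr's street-level zoom.
flickr.photos.geo.setLocation(photo_id='1234567890',
                              lat='51.5080', lon='-0.1281',
                              accuracy='16')
```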

When I'm in a place, I like to walk. I want to know enough to stay out of bad neighborhoods, to find interesting ones, and to be aware of significant landmarks. I don't want to miss the big attractions, but I also want the freedom to happen upon that perfect little cafe and pastry shop, or the brilliant bookstore that's not in any of the guidebooks. (How many travelers define themselves as people who want to escape the boundaries of the guidebooks?) This style of wandering is one reason I absolutely love certain cities. In London, for example, you can't go three blocks without coming upon something grand and historic, a charming little square, or an interesting piece of street life. You can never be sure which you'll find. It's one reason Samuel Johnson could say that when you're tired of London, you're tired of life. Likewise, Singapore and Budapest reward walking, though for different reasons: Singapore is a kind of life-sized scenario of what a prosperous, benevolently authoritarian, multicultural Asian Century could be like, with amazing food. Budapest is a wonderful Old European city, alternating twisty streets, grand boulevards, the magnificent Danube, and faded (but rapidly renovating) buildings and apartment blocks, with great coffee on every block.

So I like to wander. But once I'm back in my room, and have uploaded my pictures from the day, I want to reconstruct my path, and figure out where I've been. I used to do this on maps, tracing out my route with a highlighter. This wasn't always very successful. It required remembering street names, knowing how many blocks it had been since I'd turned left last, or estimating how far I'd walked on the boulevard or embankment before stopping to take those pictures. Given that I often walk at night– my days are taken up with work– all this was tough. Putting that information onto a map that often was in an unfamiliar language didn't make things easier, either.

But what turned me into a Flickr map fanatic? And what bigger lesson could that possibly hold?

The act of putting pictures on the Flickr map combines three different kinds of knowledge. First, it draws on your physical memory of travel and picture-taking. Second, it draws on your visual memory. And third, it connects those two kinds of knowledge and memory to a formal system, the logic of the map. Putting these together helps you connect your personal, street-level view of a place with a higher-level, abstract understanding of it.

Consider picture-taking first. Like all forms of knowledge-creation, picture-taking is a physical activity as well as an intellectual or technical one, and that physicality can be something that helps fix in your memory the event of taking the picture. I have pictures of Waimea Canyon, on the island of Kauai, that I can't look at without being reminded of a long drive, and the pleasant contrast between the warmth of the coast and the chilly interior. I'd probably have long forgotten those sensations without the picture, and without the sensations I'd have a harder time placing the picture; but both memories live together and reinforce each other. Often the order of pictures in a photo stream can be used to reconstruct an evening's path. Something in the distance in one picture is in the center of another, or a corner in one photo is turned in the next. With the visual cues that the photographs provide, combined with a few memories of turning down this street and that boulevard, and a couple landmarks as reference points, I can reconstruct my steps pretty accurately.
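That ordering is really just the photos' timestamps, so the first pass can even be automated. A small sketch of the idea in Python, assuming the Pillow imaging library is installed; "budapest_day3" is a hypothetical folder of one evening's JPEGs:

```python
# Sort an evening's photos by the time they were taken, to retrace the walk.
from pathlib import Path
from PIL import Image

def taken_at(path):
    exif = Image.open(path).getexif()
    # DateTimeOriginal (36867) lives in the Exif sub-IFD; fall back to DateTime (306).
    return exif.get_ifd(0x8769).get(36867) or exif.get(306) or ''

# EXIF timestamps ("YYYY:MM:DD HH:MM:SS") sort correctly as plain strings.
for photo in sorted(Path('budapest_day3').glob('*.jpg'), key=taken_at):
    print(taken_at(photo), photo.name)
```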

Flickr lets you put pictures on an ordinary street map, which is just a grid with street names, rivers, train lines, and the occasional park. Sometimes that's enough information; but when it's not, I switch to the satellite mode, which overlays aerial photographs atop the street map. I find that the satellite photographs let me establish much more precisely just where I was, what this photograph shows, and where it should go on the map. Without them, I can place pictures on the right block; with them, I can get to within a few feet.

Of course, that requires knowing how to decode satellite photographs, and how to relate that information to my own experience. Figuring out how to connect what you see in your photograph to what's on a satellite picture is a skill that we didn't have to learn before. Unless you worked for the CIA or had a particularly sadistic geography teacher, you never had to make that connection; and until recently satellite photos weren't easy for ordinary people to get. You could think of the Flickr mapping tool as a giant machine that gives people the chance to learn how to read satellite pictures. Maybe it's a cartographic Ender's Game, training a generation of open-source spooks who twenty years from now won't be fooled by doctored military recon photos or what's really scant evidence of wrongdoing.

Translating the ground-eye view of a landmark or city grid into an aerial view isn't that hard, but it does need to be learned. London's Trafalgar Square becomes a set of long shadows (Nelson's column) with a few shapes (the lions around it, the fountains nearby); Leicester Square, trees and park paths bordered by the blocky shapes of theatres. Sometimes you learn how big something really is ("Boy, Suntec City really is HUGE"); when I'm trying to find someplace I've reached by taxi or subway, the satellite photos are the only way to find it. I've walked some parts of Copenhagen, for example, but there are some things– the new Information Technology University among them– that I've only driven to; I don't know the ITU address, but because I know the shape of the building and have a pretty good sense of the buildings around it, I can find it on a satellite map.

Finally, putting the pictures on the map is a way to relate the personal experience and first person view to the formal, high-level view. They're my memories, organized; and organizing my memories builds my knowledge of– and arguably my understanding of– the place and how it's laid out. Given that I may post 500 pictures from a trip, and geocode almost all of them, the simple repetition of the exercise does a lot to fix in my mind what buildings are where, how places relate to each other, and what route I took when walking, say, from the Elizabeth Bridge to St. Stephen's Church in downtown Budapest.

Right now this kind of mapping is mainly fun (believe it or not) and educational, but it will really pay off in a couple of years, when I can go back to a city with my e-paper travel journal, equipped with wifi and GPS. So equipped, I'll be able to call up those pictures in situ: see what Piccadilly Circus looked like the last time I was there, or see exactly where in Singapore I had those rice noodles so memorable I Flickred them. And I can see where I haven't been, since pictures serve as visual crumbs, dropped on the map to mark my earlier travels.

This is why “cyberspace” matters

It's a powerful conceptual metaphor, to borrow a term from Lakoff and Johnson. Venkatesh Rao explains how metaphors structure our thinking about technology, and can hinder innovation:

As much as we focus on developing new technologies, it is also essential that we break free of certain metaphors that bind and restrict our thinking about what these technologies can ultimately achieve. The familiar “document” metaphor, among others, has cast a long shadow on how we think about the web, and is standing in the way of some innovation.

Consider these terms: page, scroll, file, folder, trash can, bookmark, inbox, email, desktop, library, archive and index. They are all part of the document metaphor, a superset of the “desktop” metaphor. Some elements, such as scroll, desktop and library pre-date the printing press, but all are based on some sort of “marks on paper-like material” reference.

I think you could add to this list a similar set of metaphors that have shaped social media, and in some ways limited it. Think of the use of the terms "friend" and "follower," as applied by Facebook and Twitter, respectively. Facebook and other social networking sites have been accused of collapsing a wide variety of social connections into a flat category of "friend," making it hard to distinguish between people you're actively socializing with in the real world, people you were friendly with in high school but haven't seen in 25 years, people you don't really care about but don't want to offend, coworkers or superiors, and your family. "Followers" has a sound I find alternately amusing and creepy, as if I were either a cult leader or the target of stalkers.

Back to Rao:

It is important to understand that the document metaphor is more than a UI metaphor. It is in fact a fundamental way of understanding one domain in terms of another. For better or worse, we continue to understand the web in relation to how we understand documents. Unlike figurative metaphors, such as “he was a lion in battle,” which are simple rhetorical statements, conceptual metaphors (a notion introduced in the classic “Metaphors We Live By” by Lakoff and Johnson) like document-ness are pre-linguistic, and quietly ubiquitous. They infiltrate how we think about things on a much more basic level….

It is much easier to create technology that conforms to dominant metaphors. What we need to do as we enter the third decade of the web, however, is consider what we want the web to be rather than awkwardly fitting that vision into older descriptive paradigms.

Easier said than done, of course, but it's essential. Perhaps this is one of the reasons user co-creation or reinvention has become such a thing: users may be more likely to engage in this conceptual reframing than inventors and marketers, who spend a lot of time defining products.

Finally, it's worth noting that the whole industry of strategic marketing, as envisioned by people like Regis McKenna and Geoffrey Moore, was intended to define the conceptual metaphors in ways that would help people decide to buy products.

Anthony Townsend on Jane Jacobs and Facebook

I ran across a post written a couple of years ago by my friend Anthony Townsend about Jane Jacobs, Facebook, and urban neighborhoods:

If the physical form of a neighborhood is conducive to community, so is its virtual form. But the other striking thing about the list was that all the neighborhoods were in a state of change—gentrifying or recently gentrified. It’s certainly demographic: a neat and obvious alignment of hipster and blogger. But it also means that the newly emerging character of these places is being forged, at least in part, online. These are incontrovertibly real-world neighborhoods, but their community is as virtual as it is physical. With each year, we get better at navigating between the two.

Facebook and MySpace have begun to show how textured online group interactions can be. It’s easy to think of social networking in terms of Hudson Street, and easy to think of Hudson Street in terms of social networking. Both are at their best when they can successfully balance the public and the private.

The whole thing is worth reading. (So is Richard Florida's comment.)

Of course, there was a time when we talked about urban and online communities as mutually exclusive: remember when virtual communities were going to make cities obsolete? In contrast, today Anthony can assign Jane Jacobs or Christopher Alexander in a class on IT, and nobody is confused.

The Onion on cyberstalking your children

This is just brilliant:


Facebook, Twitter Revolutionizing How Parents Stalk Their College-Aged Kids

I kind of worry that I’ll turn into one of those parents.

[To the tune of The Police, “Every Breath You Take,” from the album Message In A Box: The Complete Recordings (Disc 4) (I give it 4 stars).]

This is your brain on multimedia

Ed Yong at Not Exactly Rocket Science covers a new study on media multitasking and its impact on cognitive control:

You might think that this influx of media would make the heaviest of users better at processing competing streams of information. But Eyal Ophir from Stanford University thinks otherwise. From a group of 262 students, Ophir identified two sets of 'light' and 'heavy' multimedia multi-taskers from the extreme ends of the group. The heavy users were more likely to spend more time reading, watching TV, surfing the web, sending emails or text messages, listening to music and more, and more likely to do these at the same time.

The heavy group also fared worse at tasks designed to test their ability to filter out irrelevant information or, surprisingly, to switch from one task to another. In short, they show poorer "cognitive control", a loosely grouped set of abilities that include allocating attention and blocking out irrelevancy in the face of larger goals. They're more easily distracted by their many incoming streams of data, or less good at shining the spotlight of their attention on a single goal, even though they are similar to the light group in terms of general intelligence, performance on creativity tests, basic personality types, and proportion of women to men….

The key question here is whether heavy multimedia use is actually degrading the ability to focus, or whether people who are already easily distracted are more likely to drown themselves in media. "This is really the next big question," says Ophir. "Our study makes no causal claims; we have simply shown that media multitaskers are more distractable." The next step is to follow a group of people with different media habits over time to see how their mental abilities shift, and that's something that Ophir is working to set up.

Nonetheless, as ever-larger computer screens support more applications (Google Wave, anyone?), and social norms shift towards more immediate responses, it seems that multitasking is here to stay and perhaps merely in its infancy. It's important to understand if these technologies will shift our portfolio of mental skills, or equally if people who are naturally easy to distract will gravitate towards this new media environment, and encounter difficulties because of it.
