Alex Soojung-Kim Pang, Ph.D.

I study people, technology, and the worlds they make

Tag: ubicomp

Amazing: stealing SIM cards from smart traffic lights

This is going on in Johannesburg:

Hundreds of [traffic] lights have been damaged by thieves targeting the machines' sim cards, which are then used to make mobile phone calls worth millions of South African rand.

More than two-thirds of 600 hi-tech lights have been affected over the past two months, according to the Johannesburg Roads Agency, causing traffic jams, accidents and frustration for motorists.

The traffic lights use SIM cards and modems, sending and receiving information over GPRS, a system intended to save time and manpower by alerting the road agency's head office when any lights malfunction. According to Thulani Makhubela, a spokesman for the agency, the robberies have been "systematic and co-ordinated", possibly by a syndicate. An internal investigation has now been launched.

"They know which signals to target," Makhubela added. "They clearly have information."

Wow. Real world, meet ubicomp!

The Internet of Things, amusement edition

Caught this on Failbooking.

Sensecam, Alzheimer’s, and memory

An article in the New York Times on experiments using what sounds like Gordon Bell’s MyLifeBits technology “to help people with Alzheimer’s and other memory disorders.”

The concept was simple: using digital pictures and audio to archive an experience like a weekend visit from the grandchildren, creating a summary of the resulting content by picking crucial images, and reviewing them periodically to awaken and strengthen the memory of the event….

In Pittsburgh, researchers had Mr. Reznick go on three excursions with a Sensecam around his neck, a voice recorder in his shirt pocket, and a GPS unit. On one trip, he went to an exhibition of glass sculptures with his wife, Sylvia, his son and a granddaughter.

The Sensecam takes hundreds of pictures in a short period. When researchers began exploring it as a memory aid a few years ago, they had patients and caregivers look at all the pictures together.

I’m not surprised that scientists would be using this technology for Alzheimer’s patients. If today’s early adopters are twentysomethings, in the next couple decades we’re going to see a shift: the most augmented, information technology-intensive Americans are likely to be the elderly, who will be using these technologies (often embedded in more ordinary-looking everyday devices) to battle memory loss, stay independent, and of course post pictures to their Facebook accounts.

[To the tune of Emerson Lake & Palmer, “Tarkus,” from the album Live at the Wheeling Civic Center (November 12 1977) (a 2-star song, imo).]

Daniel Lyons on the iTablet

From Newsweek:

For those of us who carry iPhones, this shift to a persistent Internet has already happened, and it's really profound. The Internet is no longer a destination, someplace you "go to." You don't "get on the Internet." You're always on it. It's just there, like the air you breathe.

[To the tune of Future Sound of London, "Room 208," from the album Lifeforms (I give it 2 stars).]

Game features in the real world

When I've spoken about the end of cyberspace, and the displacement of the idea of cyberspace as a Platonic plane of information, separate from and superior to the real world, someone has almost always asked, "But what about Second Life?" (or World of Warcraft, or Everquest, depending on what year we're talking about). The idea is that these kinds of games and game-worlds represent a continuation of the vision of cyberspace as alternate world.

My response has been twofold. First, despite claims about the utility (or potential utility) of Second Life to business, or the number of hours devoted players spend in World of Warcraft, so far as I can tell, nobody argues that these constitute alternatives to physical reality that will lead to the death of the office or the transformation of travel. They have their appeal, but their appearance is not a sign that the tectonic plates of reality are starting to rumble. Second, it looks more likely that with the coming of ubiquitous computing, some of the kinds of interactions and feedback that make games compelling are going to migrate into the real world, but with serious social and economic implications.

This evening I ran across a piece by Brett McCallon on the growing pervasiveness of games in everyday life that echoes this last point:

"Lexulous", and the game's incredible popularity on Facebook, does say something about the way that gaming is infiltrating the experience of seemingly non-gaming-related activities. As gaming becomes more mainstream, and as designers learn to use gaming mechanics to enhance our work, education and relaxation, we can envision a time in which nearly every experience offers the possibility, if not the requirement, for play….

Exercise is only one of the non-gaming areas into which gaming has intruded in recent years. Games that teach foreign languages, cooking and other skills are also becoming increasingly popular…. Even such mundane activities as household chores can be made less onerous through the addition of gaming mechanics. A free, web-based game called "Chore Wars" lets players apply traditional role-play game rules to their laundry, dishwashing and vacuuming duties. For each completed task, players are granted "experience", "gold", etc, which helps their characters advance through imagined quests. It's a fairly basic system, but as a means of motivating lazy spouses and housemates to pull their weight, it could be quite helpful.
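
The mechanic McCallon describes is simple enough to sketch in a few lines of Python. This is a toy illustration of the idea (points per completed chore, accumulating into "experience," "gold," and levels), not Chore Wars' actual rules or code:

```python
from dataclasses import dataclass, field

# Hypothetical point values for chores -- illustrative, not Chore Wars' real numbers.
CHORES = {"laundry": 20, "dishes": 10, "vacuuming": 15}

@dataclass
class Character:
    name: str
    xp: int = 0
    gold: int = 0
    completed: list = field(default_factory=list)

    @property
    def level(self) -> int:
        # One level per 100 XP: an arbitrary but typical progression curve.
        return 1 + self.xp // 100

    def complete(self, chore: str) -> None:
        points = CHORES[chore]
        self.xp += points          # "experience" for the finished task
        self.gold += points // 2   # a small "gold" reward on top
        self.completed.append(chore)

player = Character("lazy housemate")
for chore in ["dishes", "laundry", "vacuuming", "laundry"]:
    player.complete(chore)
print(player.name, "is level", player.level, "with", player.gold, "gold")
```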

I think McCallon's argument is inaccurate but in a revealing way. It's inaccurate in the sense that while we are going to see the growth of feedback and incentive systems around everyday activities, they're not going to really be games. They may borrow some bits and pieces from games– familiar visual tropes, rewards, and the like– but they won't turn housework into a game, any more than my offering my son a quarter to clean his room turns my family into a labor market.

But what's revealing about the piece is that it suggests how likely we are to embrace the language of games when thinking about, and interacting with, these technologies. I saw something of this when I was interviewing people about the impact of the Prius MPG estimator on driver behavior. As I wrote in 2008,

Interestingly, many drivers describe efforts to boost their fuel efficiency as a kind of game. One driver, a former Silicon Valley tech executive and car aficionado, recalls that "When I got my Prius, it absolutely felt like I was piloting a large, rolling video game, seeing how to optimize the mileage." Another, a Valley educator, reports that driving her Prius has "become a game for me. I always try to improve the mpg over the last trip." When I gave my end of cyberspace talk at IDEO last week, I brought up the Prius MPG estimator, and one person immediately said, "It's like a game!" Game designer Amy Jo Kim recalled, "When I first got my Prius 4 years ago, I was completely transfixed by the real-time MPG display. Multi-scale feedback! I could see my mileage per tank, in 5-minute increments, and moment-to-moment. I experimented with my driving style, trying to beat my "high score" each day." A 2006 Cnet article described the Prius as "a mobilized video game… surely the most expensive, biggest gaming machine built… so far."
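
To make the "multi-scale feedback" Kim describes concrete, here is a minimal Python sketch of a display that reports fuel economy moment-to-moment, over a rolling five-minute window, and per tank. It assumes a hypothetical stream of per-second (miles, gallons) samples from the car's sensors; it illustrates the feedback idea, not the Prius's actual software:

```python
from collections import deque

class MPGFeedback:
    """Toy multi-scale fuel-economy display: instantaneous, 5-minute, and per-tank."""

    def __init__(self, window_seconds: int = 300):
        self.window = deque(maxlen=window_seconds)  # rolling five-minute window of samples
        self.tank_miles = 0.0
        self.tank_gallons = 0.0

    def add_sample(self, miles: float, gallons: float) -> None:
        """Record one second of driving: distance covered and fuel burned."""
        self.window.append((miles, gallons))
        self.tank_miles += miles
        self.tank_gallons += gallons

    @staticmethod
    def _mpg(miles: float, gallons: float) -> float:
        return miles / gallons if gallons > 0 else float("inf")

    def instantaneous(self) -> float:
        miles, gallons = self.window[-1]          # most recent sample only
        return self._mpg(miles, gallons)

    def five_minute(self) -> float:
        return self._mpg(sum(m for m, _ in self.window),
                         sum(g for _, g in self.window))

    def per_tank(self) -> float:
        return self._mpg(self.tank_miles, self.tank_gallons)
```

The "game" in the drivers' accounts is just this: the same quantity reported at three time scales, each one a score to beat.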

This may sound like a distinction without a difference, but think about how many times we borrow bits and pieces of phrasing from one realm and apply it to another, and how those borrowings have but a limited influence. We talk about business as war, or coworkers as teams, but we understand that these metaphors don't mean we should bomb a competitor's offices. Doubtless we'll be able to learn some things from game designers about how to improve the interfaces for, say, home energy monitoring systems, but it's not clear that creating an entire game– complete with characters, more elaborate rules, goals, etc.– would be necessary or even desirable to achieve substantial energy savings.

[To the tune of Peter Gabriel, "Intruder (Live)," from the album Plays Live (I give it 2 stars).]

Zeroing, Twitter, and ambient awareness

A few weeks ago a friend of mine announced that she was taking a break from Web 2.0.* She was going to prune her Twitter feeds, reduce her time on Facebook, and cut back on her time on IM. She needed to pay more attention to her real life, and to real relationships. Recollecting friends from high school and college was interesting for a while (Web 2.0 is a time machine for my generation, after all), but a large volume of acquaintances can't provide the same satisfaction and support as a handful of friends you can see– or who can take the kids out to the park for an hour. Getting Tweets on her cell phone was also a poor combination of intrusiveness and minutiae. And there was laundry to be done.

As one of the digital lemmings who pushed her over the edge, I got to thinking about my own habits. Why do I Tweet? After thinking about it for a while, I've come to the conclusion that while it's certainly popular with lots of my friends, I have a couple of serious questions about Twitter, as a writer and a reader.

First, I have to admit that my regular life isn't interesting enough to justify throwing out real-time updates about it. Nobody needs to know that I've just convinced the kids to make their own breakfasts, or have come back from lunch at Zao Noodles, or am trying to decide where to go on this weekend's hike. The exception is when I'm on the road or doing something else unusual: at those times, my life– or my world– might get interesting enough to document in detail.

There's also the problem that I'm not sure what I get out of my own tweets. One of the signal features of Web 2.0, I think, is that it's not just broadcasting: it's self-documentation. Some of my friends use Twitter to jot down little notes about what they're reading. But for me, the absence of tags in Twitter makes it hard for me to find things I've looked at long enough to know I should look for them again later, or to keep track of citations; del.icio.us is still the better tool for that. (I suppose you could replicate a little of that functionality with #tags, but that's a workaround, and there's no auto-complete….) And I'm not sure I've gone back and looked at my own Twitter stream, ever. My regular blog is valuable because it's a way to keep track of my own life; this one has been invaluable for recording and trying out ideas for my book; my kids' blog has been a place where I could store huge amounts of detail about my kids' childhoods– those pictures of them doing cute but ordinary things, or saying wonderful things, or just growing up. Tossing out tweets feels like shooting sparks from a wheel: the sparks may be entertaining, but it's the object you're shaping with the wheel that's really valuable.

Finally, as a reader, I find that seeing the raw feed of even a few people's lives can quickly become overwhelming. In the last 24 hours, a relatively quiet time after Thanksgiving, I got 34 tweets; during a busy time– when people are traveling or at SXSW– I can get several times that, easily. There's an argument to be made, as Clive Thompson has done, that the minutiae of tweets resolve into ambient awareness… but as it's currently designed, the system still puts big demands on readers, who have to constantly read their friends' Twitter streams, develop a sense of the rhythm of their posting, and build up a model of their real-world state from their online behavior. In a world in which the challenge is not to broadcast a lot of information, but to generate a lot of meaning, the stream-of-existence quality of tweeting makes it easy to mistake detail for intimacy, and quantity of tweets for quality of expression or depth of understanding. As a preview of the world of ubiquitous computing and ambient awareness, Twitter is an interesting experiment (an experiment that's being conducted by hundreds of thousands of people on themselves and their friends).

This is actually not a bad lesson for designers. Creating ambient devices isn't about pushing information; presence isn't just about connection. Connecting people virtually is as much about quality and meaning in the digital world as it is in the real world.

Which is not to say that Twitter is hopeless. Twitter is strongest as a platform for conversation and reportage. It's easy to share a rapid fire of short notes at conferences, for example, and the final result– assuming people are listening and paying attention– can be useful. (I wonder if there are examples of Twitter being used by students in lecture classes?) A couple of the people I follow use it as much for pinging friends as for talking about what they're doing: for them, Twitter is a cross between the Facebook wall and a chat room. And I find Twitter useful for getting reactions to news events: I stopped watching the presidential debates this fall, for example, after I realized that most of my friends were tweeting their reactions to them.

So what do I do with my Twitter stream? I'm not going to shut it down, because there are times when I'll want to provide moment-by-moment updates about what I'm doing ("Just cleared customs in Kai Tak! Where's the cab line?" "Have now been in Victoria Stations on four continents…."). But for me, when I do use it, the challenge will be to figure out how to write the Web 2.0 equivalent of Zen koans: to fit meaning into 140 characters, rather than to fight the limitations of the medium by posting a lot.

*After I started working on this piece, I got interested in what other people had written upon getting fed up with some service, technology, or channel. Turns out that the "declaration of zeroing" is almost a literary genre. I first became aware of it through David Levy (whose book I reviewed in the L.A. Times, and who gave a brilliant talk about this stuff a couple years ago), and his ideas of a digital sabbath and information environmentalism. A couple samples:

Edward Vielmetti on Twitter:

The basic idea is that in systems where there is an infinite capacity for the world to send messages to get your attention, the only reasonable queue that you can leave between visits to the system is zero, because if you get behind you will never, ever, ever catch up gradually. Never. No matter how much time you put into it, there will always be more to do, and you will lose sleep over it.

Carmen Joy King, after quitting Facebook:

The amount of time I spent on Facebook had pushed me into an existential crisis. It wasn’t the time-wasting, per se, that bothered me. It was the nature of the obsession – namely self-obsession. Enough was enough. I left Facebook.

Donald Knuth on email:

I have been a happy man ever since January 1, 1990, when I no longer had an email address. I'd used email since about 1975, and it seems to me that 15 years of email is plenty for one lifetime.

Email is a wonderful thing for people whose role in life is to be on top of things. But not for me; my role is to be on the bottom of things. What I do takes long hours of studying and uninterruptible concentration. I try to learn certain areas of computer science exhaustively; then I try to digest that knowledge into a form that is accessible to people who don't have time for such study.

On the other hand, I need to communicate with thousands of people all over the world as I write my books. I also want to be responsive to the people who read those books and have questions or comments. My goal is to do this communication efficiently, in batch mode — like, one day every three months.

Mark Bittman on his "secular Sabbath:"

I do believe that there has to be a way to regularly impose some thoughtfulness, or at least calm, into modern life — or at least my version. Once I moved beyond the fear of being unavailable and what it might cost me, I experienced what, if I wasn’t such a skeptic, I would call a lightness of being. I felt connected to myself rather than my computer. I had time to think, and distance from normal demands. I got to stop.

And of course there's at least one blog about turning off all electronics one night a week. "Because of course," Ariel Stallings writes, "I can’t unplug without blogging about it! (Irony, is that you?)"

Wifi wine glasses, medical monitoring, and sociability of aging in place

The New Scientist has an article about a research project that turns drinking glasses into communication devices:

Long-distance lovers can still drink together

Could glowing, Wi-Fi wine glasses let people in long-distance relationships feel more in touch with their other half?… [MIT Media Lab researchers] Jackie Lee and Hyemin Chung… have incorporated a variety of coloured LEDs, liquid sensors and wireless (GPRS or Wi-Fi) links into a pair of glass tumblers….

When either person picks up a glass, red LEDs on their partner’s glass glow gently. And when either puts the glass to their lips, sensors make white LEDs on the rim of the other glass glow brightly, so you can tell when your other half takes a sip. Following tests in separate labs, Lee says the wireless glasses really do “help people feel as if they are sharing a drinking experience together”.

The technology could also be used to check that hospital patients or elderly people are drinking enough water, Lee says.
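
The interaction is simple enough to sketch. Below is a minimal, hypothetical Python model of the paired-glass behavior the article describes: lifting one glass makes the partner glass's red LEDs glow gently, and taking a sip makes the white LEDs on the partner's rim glow brightly. The class and event names are illustrative assumptions, not code from Lee and Chung's prototype:

```python
from dataclasses import dataclass

@dataclass
class Glass:
    """Toy model of one networked tumbler; the real glasses pair over GPRS or Wi-Fi."""
    owner: str
    partner: "Glass" = None
    red_leds: str = "off"    # base LEDs: glow gently when the partner lifts their glass
    white_leds: str = "off"  # rim LEDs: glow brightly when the partner takes a sip

    def lift(self) -> None:
        # A tilt or handling sensor fires on this glass; the event is relayed to the partner.
        self.partner.red_leds = "gentle glow"

    def sip(self) -> None:
        # A liquid/lip sensor fires; the partner's rim lights up brightly.
        self.partner.white_leds = "bright glow"

    def set_down(self) -> None:
        self.partner.red_leds = "off"
        self.partner.white_leds = "off"

alice, bob = Glass("Alice"), Glass("Bob")
alice.partner, bob.partner = bob, alice

alice.lift()   # Bob's glass glows red: Alice has picked hers up
alice.sip()    # Bob's rim glows white: Alice is taking a sip
print(bob.red_leds, "/", bob.white_leds)
```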

There are a couple interesting things here. First, it draws on real-world social activity: it doesn’t require users to learn (or learn how to interpret) some new behavior, but piggybacks on very familiar ones. As the article notes, “communal drinking is an important social interaction that helps bind friendships and relationships.” This sort of social mimicry has long been part of GUI design– desktops and trash cans jumped from the real world to computer screens more than twenty years ago– but we can now start to think about getting rid of the middleman: eliminating the intermediate step of creating visual metaphors, and putting the electronics and interactivity directly in things.

The second thing is the last paragraph: that this system has medical monitoring or aging-in-place applications. Our work suggests that aging in place could be the first major market for smart homes and smart devices: elders who want to continue to live independently in their own homes will be the early adopters for these systems. This is a significant market for two reasons: first, there's lots of evidence that retiring Boomers will be willing to spend money to stay healthy and maintain their independence, so potentially it's a lucrative market; and second, the emotional and psychological value of these systems will be far greater in this market than elsewhere. Technology that lets your refrigerator talk to your local grocer is nice; but it doesn't hold a candle to technology that lets your 80-year-old mother live safely in the same house she's been in for the last thirty years.

There’s a deeper connection between these two things. Ten years ago, as I understand it, the future of in-home elder care would have focused on automation: creating robot nurses, or other systems that do things for you. Today, the focus is more on systems that enable and encourage good behavior, and aim to keep elderly people active and connected to others. The first person who notices that you’ve been sleeping a lot more, and eating very little, shouldn’t necessarily be your doctor; it should be a sibling, or your son or daughter. More and more, these systems become tools for both gathering information about user patterns– making sure someone’s drinking enough– and communication channels linking elders with friends and family.
