Alex Soojung-Kim Pang, Ph.D.

I study people, technology, and the worlds they make

Category: Devices (page 1 of 4)

The Copenhagen Wheel

For a long time, I've been interested in getting an electric bike, especially after I saw the Optibike at the California Academy of Sciences. Via the Daily Dish, I came across an MIT hybrid bicycle project that looks like just the thing: the Copenhagen Wheel. Check out the video:

It's not completely clear from the video exactly how it works, but I like how elegantly it attaches to a bicycle (some bike motors look like real kludges), and that it's also a smart device:

Dyson Award-winning design:

Smart, responsive and elegant, it transforms existing bicycles quickly into hybrid electric-bikes with regeneration and real-time sensing capabilities. Its sleek red hub not only contains a motor, batteries and an internal gear system – helping cyclists overcome hilly terrains and long distances – but also includes environmental and location sensors that provide data for cycling-related mobile applications. Cyclists can use this data to plan healthier bike routes, to achieve their exercise goals or to create new connections with other cyclists. Through sharing their data with friends or their city, they are also contributing to a larger pool of information from which the whole community can benefit.

It's called the Copenhagen Wheel because the bike-friendly city wants to increase the number of people who cycle, and worked with the team

to investigate how small amounts of technology could improve the cycling experience and how the four main obstacles to getting people on bikes – distance, topography, infrastructure and safety – could be overcome. What has resulted is the Copenhagen Wheel: a new type of electric smart-bike which utilizes a technical solution for overcoming distance and topography (a motor and batteries with regeneration capabilities that can provide riders with a boost when needed) and a real-time data network and series of applications to support infrastructure creation and foster a sense of safety.

Trading intelligence for resources; encouraging mergers of people and devices on human terms rather than device terms; bringing information to users in context – all great examples of an end-of-cyberspace device.

The Internet of Things, amusement edition

This was caught on Failbooking.

More than just a calendar

Virginia Heffernan laments the demise of datebooks like the Filofax:

It’s hard to remember, surveying my dull Google version (“parents in town,” “book club”), that a Filofax was also a place for plot arcs, self-invention and self-regulation. It was, in every sense, a diary — a forward-running record, unlike backward-running blogs. The quality of the paper stock, the slot for the pen, the blank but substantial cover, the hints of grand possibilities that came with the inserts — all of these inspired not just introspection but also the joining of history: the mapping of an individual life onto the grand old Gregorian-calendar template….

[N]ow that I’ve shelved my Filofax in favor of a calendar program that seems somehow to flatten existence, I realize that another year is passing without my building up the compact book of a year’s worth of Filofax pages that, every December, I used to wrap in a rubber band and put on a shelf, just as my new refills came in the mail.

If there is one thing we've discovered about print media, especially in the wake of the disappearance of some artifact (card catalogs, the encyclopedia, etc.), it is that readers and users don't treat print media merely as inefficient carriers of information that wanted to be digital (or free, or expensive, as Stewart Brand put it), but developed all kinds of other uses for print that increased its utility, were taken for granted, and tended to be overlooked by engineers. Engineers looked at the Filofax and saw a digital calendar-in-waiting; in Heffernan's hands, in contrast, it was "a place for plot arcs, self-invention and self-regulation. It was, in every sense, a diary — a forward-running record, unlike backward-running blogs."

We don't just act on information or media; we interact with it, and the character of those interactions, as much as the information itself, defines our relationships with media. One reason I still prefer printed books to digital is that it's difficult to annotate digital books in a way I find satisfactory: when I'm reviewing a book, or using it in my work, I need to be able to underline, annotate, add Post-Its, and make notes – to document my dialogue with or reflections on the book. (This goes far beyond the kind of annotations you can make on ebooks today, and is a world away from leaving comments on blogs, or hitting the "Like" button on a Web page.) This kind of reading is more like a martial art than the quiet, interior activity that many people think of when they think about "reading." And while I don't do it with everything – I never got the calendar bug, for example – there are a few activities in which the affordances of print media support practices and interactions that electronic media cannot.

Car cost-sharing: finally around the corner?

Back in 2004, when I was a columnist for Red Herring, I wrote a piece about what would happen when reputation systems make their way into the world — that is, when they stop being things that we only consult in online transactions, and become things we can consult easily in real-world transactions. I talked about how they could jump-start car-sharing systems.

Today, I saw an article about RelayRides, a

person-to-person car-sharing service, which will be launching soon in Baltimore. Unlike fleet-based services—Zipcar, City CarShare, I-GO, and others—which maintain their own vehicles, RelayRides relies on individual car owners to supply the vehicles that other members will rent.

There are a couple other services like this, including Divvycar, but there seems to be a sense that these systems are ready to take off. So "why are peer-to-peer car-sharing services emerging now?"

Part of the answer might lie in the way online and offline services like Zipcar, Prosper, Netflix, and Kiva.org are training us to share our stuff—people are simply getting used to the idea. “‘Zip’ has become a verb to the point that we could ‘zip’ anything—they just happened to start it with cars. Close on their heels was Avelle (formerly Bag, Borrow Or Steal) and now SmartBike for bikes on demand. The next step seems to be a crowd-sourced version of Zipcar,” says Freed.

Another part of the answer might be found in our response to the ecological and economic crises Americans are facing. As Clark explains, “You just think of the number of cars on the road, and the resource that we have in our own communities is so massive… what the peer-to-peer model does is it really allows us to leverage that instead of starting from scratch and building our own fleet.”

From an individual’s perspective, peer-to-peer sharing is a means for owners to monetize their assets during times when they don’t require access to them. But peer-to-peer models can also be understood to utilize existing resources more efficiently—ultimately, to reduce the number of cars on the road—through shifted mentalities about ownership, the intelligent organization of information and, increasingly, through real-time technologies.

Since peer-based car-sharing companies don’t bear the overhead costs of owning and maintaining their own fleets, they don’t require the high utilization rates for vehicles that Zipcar and similar programs do—the result is comparatively fewer limitations for the size and scale of peer-to-peer operations.
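The overhead argument in the quote can be sketched with purely hypothetical numbers (none of these figures appear in the article): a fleet operator must cover the full cost of owning each car out of rental revenue, while a peer-listed car only has to cover the platform's much smaller per-car cost.

```python
# Break-even rental hours per week, fleet model vs. peer-to-peer model.
# All dollar figures are hypothetical, for illustration only.
ownership_cost_per_week = 150.0  # lease, insurance, parking (fleet bears this)
platform_fee_per_week = 15.0     # what a peer platform must recoup per car
rental_rate_per_hour = 8.0

fleet_hours = ownership_cost_per_week / rental_rate_per_hour
peer_hours = platform_fee_per_week / rental_rate_per_hour
print(f"fleet car breaks even at {fleet_hours:.2f} h/week")  # 18.75
print(f"peer car breaks even at {peer_hours:.2f} h/week")    # 1.88
```

With these made-up numbers a fleet car must rent ten times as many hours as a peer-listed one just to break even, which is the "high utilization rates" constraint the quote describes.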

Always satisfying for a futurist to see the future actually start to arrive.

Digital devices and embodied energy

In one of John Thackara's Doors of Perception reports, I came across the concept of digital devices as "embodied energy." The term is used by Kris De Decker in a piece on "the monster footprint of digital technology," and it's intended to call attention to the very large amount of energy consumed in manufacturing electronics. But it is also a data point in how we no longer think of digital devices as portals to another world, and instead think more about their connections to this one.

The energy used to produce electronic gadgets is considerably higher than the energy used during their operation. For most of the 20th century, this was different; manufacturing methods were not so energy-intensive.

An old-fashioned car uses many times more energy during its lifetime (burning gasoline) than during its manufacture. The same goes for a refrigerator or the typical incandescent light bulb…. Advanced digital technology has turned this relationship upside down. A handful of microchips can have as much embodied energy as a car. And since digital technology has brought about a plethora of new products, and has also infiltrated almost all existing products, this change has vast consequences.

Not only do electronics use more energy in manufacturing than in use, they require a LOT more energy per unit of material to manufacture.

[W]hile the ratio of fossil fuel use to product weight is 2 to 1 for most manufactured products (you need 2 kilograms of fuel for 1 kilogram of product), the ratio is 12 to 1 for a computer (you need 12 kilograms of fuel for 1 kilogram of computer). Considering an average life expectancy of 3 years, this means that the total energy use of a computer is dominated by production (83% or 7,329 megajoule) as opposed to operation (17%). Similar figures were obtained for mobile phones….

The energy needed to manufacture microchips is disproportional to their size. MIT-researcher Timothy Gutowski compared the material and energy intensity of conventional manufacturing techniques [machining, injection molding and casting] with those used in semiconductor and in nanomaterial production (a technology that is being developed for use in all kinds of products including electronics, solar panels, batteries and LEDs)…. While there are significant differences between configurations, all these manufacturing methods require between 1 and 10 megajoule of electricity per kilogram of material. This corresponds to 278 to 2,780 watt-hour of electricity per kilogram of material. Manufacturing a one kilogram plastic or metal part thus requires as much electricity as operating a flat screen television for 1 to 10 hours (if we assume that the part only undergoes one manufacturing operation).

The energy requirements of semiconductor and nanomaterial manufacturing techniques are much higher than that: up to 6 orders of magnitude (that's 10 raised to the 6th power) above those of conventional manufacturing processes (see figure below, source, supporting information). This comes down to between 1,000 and 100,000 megajoules per kilogram of material, compared to 1 to 10 megajoules for conventional manufacturing techniques.
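The quoted figures hang together arithmetically; a quick sanity check (my arithmetic, not from the article) on the production share, the megajoule-to-watt-hour conversion, and the gap between the two manufacturing regimes:

```python
# Sanity checks on the quoted embodied-energy figures.
MJ_TO_WH = 1e6 / 3600  # 1 MJ = 1,000,000 J; 1 Wh = 3,600 J

# 1. If manufacturing a computer takes 7,329 MJ and that is 83% of its
#    lifetime energy, ~3 years of operation accounts for the remaining 17%.
production_mj = 7329
total_mj = production_mj / 0.83
operation_mj = total_mj - production_mj
print(f"lifetime: {total_mj:.0f} MJ, operation: {operation_mj:.0f} MJ "
      f"({operation_mj / total_mj:.0%})")

# 2. Conventional manufacturing at 1-10 MJ/kg is roughly 278-2,780 Wh/kg,
#    the flat-screen-TV comparison in the quote.
print([round(mj * MJ_TO_WH) for mj in (1, 10)])  # [278, 2778]

# 3. Semiconductor and nanomaterial processes, at 1,000-100,000 MJ/kg,
#    sit several orders of magnitude above the conventional 1-10 MJ/kg range.
print(f"up to {100_000 // 1:,}x the energy per kilogram")
```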

It would be interesting to compare the amount of intellectual energy that goes into the design of, say, a car versus a microchip. I've long thought of digital devices as being knowledge-intensive and resource-light – a laptop computer embodies a lot more intelligence than an iron bar – but this has always been a conceptual thing, not something that I tried to measure.

Daniel Lyons on the iTablet

From Newsweek:

For those of us who carry iPhones, this shift to a persistent Internet has already happened, and it's really profound. The Internet is no longer a destination, someplace you "go to." You don't "get on the Internet." You're always on it. It's just there, like the air you breathe.

[To the tune of Future Sound of London, "Room 208," from the album Lifeforms (I give it 2 stars).]

Internet use and brain function among elders

HealthDay News reports on a study of the impact of Internet use on the brains of elders:

Surfing the Internet just might be a way to preserve your mental skills as you age.

Researchers found that older adults who started browsing the Web experienced improved brain function after only a few days.

"You can teach an old brain new technology tricks," said Dr. Gary Small, a psychiatry professor at the Semel Institute for Neuroscience and Human Behavior at the University of California, Los Angeles, and the author of iBrain. With people who had little Internet experience, "we found that after just a week of practice, there was a much greater extent of activity particularly in the areas of the brain that make decisions, the thinking brain — which makes sense because, when you're searching online, you're making a lot of decisions," he said. "It's interactive."…

"We found a number of years ago that people who engaged in cognitive activities had better functioning and perspective than those who did not," said Dr. Richard Lipton, a professor of neurology and epidemiology at Albert Einstein College of Medicine in New York City and director of the Einstein Aging Study. "Our study is often referenced as the crossword-puzzle study — that doing puzzles, writing for pleasure, playing chess and engaging in a broader array of cognitive activities seem to protect against age-related decline in cognitive function and also dementia."…

For the research, 24 neurologically normal adults, aged 55 to 78, were asked to surf the Internet while hooked up to an MRI machine. Before the study began, half the participants had used the Internet daily, and the other half had little experience with it.

After an initial MRI scan, the participants were instructed to do Internet searches for an hour on each of seven days in the next two weeks. They then returned to the clinic for more brain scans.

"At baseline, those with prior Internet experience showed a much greater extent of brain activation," Small said.

Doubtless some readers will recognize this as an updated version of the Proust and the Squid argument, which relies in part on fMRI studies indicating that the brains of literate people have specialized sections for quickly recognizing letters. What's interesting here is that you get a similar kind of stimulation with the elderly.

[To the tune of John Coltrane, "A Love Supreme, Part II – Resolution," from the album The Classic Quartet – The Complete Impulse! Studio Recordings (I give it 1 star).]

Augmented reality contact lenses

IEEE Spectrum has a very interesting article about a University of Washington project to create "a contact lens with simple built-in electronics" that's an early prototype of more sophisticated augmented reality vision technology.

These lenses don’t give us the vision of an eagle or the benefit of running subtitles on our surroundings yet. But we have built a lens with one LED, which we’ve powered wirelessly with RF. What we’ve done so far barely hints at what will soon be possible with this technology.

Conventional contact lenses are polymers formed in specific shapes to correct faulty vision. To turn such a lens into a functional system, we integrate control circuits, communication circuits, and miniature antennas into the lens using custom-built optoelectronic components. Those components will eventually include hundreds of LEDs, which will form images in front of the eye, such as words, charts, and photographs. Much of the hardware is semitransparent so that wearers can navigate their surroundings without crashing into them or becoming disoriented. In all likelihood, a separate, portable device will relay displayable information to the lens’s control circuit, which will operate the optoelectronics in the lens.

These lenses don’t need to be very complex to be useful. Even a lens with a single pixel could aid people with impaired hearing or be incorporated as an indicator into computer games. With more colors and resolution, the repertoire could be expanded to include displaying text, translating speech into captions in real time, or offering visual cues from a navigation system. With basic image processing and Internet access, a contact-lens display could unlock whole new worlds of visual information, unfettered by the constraints of a physical display.

But how do you make an image generated on a contact lens visible?

[Y]ou’re probably wondering how a person wearing one of our contact lenses would be able to focus on an image generated on the surface of the eye. After all, a normal and healthy eye cannot focus on objects that are fewer than 10 centimeters from the corneal surface… [so] the image must be pushed away from the cornea. One way to do that is to employ an array of even smaller lenses placed on the surface of the contact lens. Arrays of such microlenses have been used in the past to focus lasers and, in photolithography, to draw patterns of light on a photoresist. On a contact lens, each pixel or small group of pixels would be assigned to a microlens placed between the eye and the pixels. Spacing a pixel and a microlens 360 micrometers apart would be enough to push back the virtual image and let the eye focus on it easily. To the wearer, the image would seem to hang in space about half a meter away, depending on the microlens.
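The article doesn't spell out the optics, but a simple thin-lens model (my assumption, using only the quoted 360-micrometer spacing and half-meter image distance) shows why that geometry works:

```python
# Thin-lens sketch of the microlens geometry (an assumption; the article
# gives only the 360 um pixel spacing and the ~0.5 m apparent distance).
# For a virtual image, the thin-lens equation gives 1/f = 1/d_o - 1/d_i.
d_o = 360e-6  # pixel-to-microlens spacing, metres
d_i = 0.5     # desired apparent (virtual) image distance, metres

f = 1 / (1 / d_o - 1 / d_i)
# The pixel sits just inside the focal length, so the microlens forms a
# magnified virtual image far enough away for the eye to focus on it.
print(f"required microlens focal length: {f * 1e6:.2f} um")  # ~360.26 um
```

The focal length comes out a hair longer than the 360 um spacing itself, which is the classic magnifying-glass configuration: object just inside the focal point, virtual image pushed out to arm's length.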

There's also the problem of power.

Like all mobile electronics, these lenses must be powered by suitable sources, but among the options, none are particularly attractive. The space constraints are acute. For example, batteries are hard to miniaturize to this extent, require recharging, and raise the specter of, say, lithium ions floating around in the eye after an accident. A better strategy is gathering inertial power from the environment, by converting ambient vibrations into energy or by receiving solar or RF power. Most inertial power scavenging designs have unacceptably low power output, so we have focused on powering our lenses with solar or RF energy.

You could also use contact lenses as medical sensors.

We’ve built several simple sensors that can detect the concentration of a molecule, such as glucose. Sensors built onto lenses would let diabetic wearers keep tabs on blood-sugar levels without needing to prick a finger. The glucose detectors we’re evaluating now are a mere glimmer of what will be possible in the next 5 to 10 years. Contact lenses are worn daily by more than a hundred million people, and they are one of the only disposable, mass-market products that remain in contact, through fluids, with the interior of the body for an extended period of time. When you get a blood test, your doctor is probably measuring many of the same biomarkers that are found in the live cells on the surface of your eye—and in concentrations that correlate closely with the levels in your bloodstream. An appropriately configured contact lens could monitor cholesterol, sodium, and potassium levels, to name a few potential targets.

I find this whole project really fascinating.

[To the tune of The Police, "Contact," from the album Message In A Box: The Complete Recordings (Disc 2) (I give it 1 star).]

Hands-up displays

John Murrell on augmented reality:

As we know from extensive science fiction research, one day we will be equipped with unobtrusive and tastefully designed technology that will project before our eyes a heads-up display of information related to whatever real-life scene we're looking at. That level of augmented reality, however, is a ways down the road, and unfortunately that road is likely to be strewn with the broken bodies of early adopters.

Thanks to the growth in smartphones equipped with large screens, cameras, compasses and GPS, location- and marker-based augmented reality (AR) is in the early stages of a hype cycle. Companies like Layar are building browser apps that look where you're looking and pull in layers of data from reference sources and social media. Startup Wikitude on Wednesday launched a new update of its software for Android handsets that integrates social tagging of physical locations, and an iPhone version is on the way. Apple's App Store recently got its first AR offering when an app called Metro Paris Subway added a feature that superimposes labels for station locations and points of interest over the view through your iPhone.

At this early stage in AR evolution, however, the displays are not heads-up, but hands-up, and that means we will be seeing a new class of situational zombies roaming our streets. We’ve already grown used to dodging around the people with heads bowed over their phones in the texting prayer position and the distracted pedestrians engrossed in conversation with their invisible companions over their Bluetooth headsets. Soon we'll be seeing more folks shuffling around with their smartphone screen held up in their line of vision, absorbed in their augmented reality data, and we'll be faced with a dilemma: keep a watchful eye on these people and tackle them before they wander into traffic or fall into a manhole, or just allow the Darwinian process to cull the herd.

While part of the point of some augmented reality research is to avoid exactly that kind of zombie state, by creating technologies that layer information on top of views (or display it on things, or what have you), I suspect Murrell is onto something. I got my iPhone a few months ago, and quickly found that I couldn't check my e-mail and walk at the same time: listening to music is no problem, and I can even usually stay alert while listening to The Bugle; but e-mail was different. Not because the interface is so incredibly compelling, but because I was so accustomed to tuning out the rest of the world when I checked my mail: my brain had trained itself to go into a kind of tunnel-vision mode, which meant I couldn't trust my body to avoid potholes or streetlights while my phone downloaded messages.

[To the tune of Jean Sibelius, "Finlandia, Op. 26," from the album Finlandia/Tone Poems (I give it 3 stars).]

Facebook, Twitter Revolutionizing How Parents Stalk Their College-Aged Kids

[To the tune of The Police, "Every Breath You Take," from the album Message In A Box: The Complete Recordings (Disc 4) (I give it 4 stars).]

Texting while driving: Still unsafe, stupid, and unaccountably popular

From the Good Morning Silicon Valley blog:

Simple common sense should tell us that trying to text while driving is as stupid and dangerous as trying to crochet. We shouldn’t need a bunch of studies calculating and quantifying the risk to goad us into a response, but if that’s what it takes, here’s the latest. A Virginia Tech study that outfitted the cabs of long-haul trucks with video cameras found that when the drivers were texting, their collision risk was 23 times greater than when they had their attention on the road — a figure far higher than the estimates coming out of lab research and a rate by far more dangerous than other driving distractions. And at the University of Utah, research on college students using driving simulators showed texting raised the crash risk by eight times. The variance in the figures is beside the point. “You’re off the charts in both cases,” said Utah professor David Strayer. “It’s crazy to be doing it.”

And the heck of it is, people already know that and they keep doing it anyway.

This is a near-perfect example of how most humans are geniuses at rationalization: yes, I know it's dangerous, but I'll be careful and do it just this time, because I really need to let the office know where that file is. Oh wait, they've got another question. Well, it would be more dangerous to wait and put the phone down, so I'll just– dammit, can't the kids find anything by themselves? Okay, now I'll make up for it by really focusing on the road.

It's also a nice example of the kinds of dissonance created when we take practices and technologies designed for one use context and move them into another – a phenomenon that mobile technologies make increasingly common. It was hard to take a Macintosh SE or IBM PCjr on the road; a smartphone, on the other hand, is a perfect storm of transportable, always-on, and just usable enough when you're doing other things to be dangerous.

[To the tune of Jean Sibelius, "Tapiola, Op. 112," from the album Finlandia/Tone Poems (I give it 2 stars).]

© 2017 Alex Soojung-Kim Pang, Ph.D.
