Alex Soojung-Kim Pang, Ph.D.

I study people, technology, and the worlds they make

Category: Reviews

My new camera and Last Great Things

Yesterday I bought a new camera, a Fujifilm X-E1. I've been coveting it since it was announced: it looks like the rangefinder cameras my dad had when we lived in Brazil in the late 1960s and early 1970s, the specs are fabulous, and the reviews have been pretty ecstatic. My wife and I went to the camera store, checked out a couple different models, and after some deliberation, we took the plunge.


[Photo via Flickr]

We thought about a Nikon D7000, because we already have a D5000 and are quite happy with it. But while the D7000 gets great reviews, I felt that the X-E1 would be better for the kinds of professional uses I expect to put a camera to in the coming years– lots of street photography and observations of people using devices– and it'll be very easy to travel with. The D7000 is fabulous, and feels equally professional, but it's a much heavier camera, both physically and visually. This one will be less obtrusive.

Though I've had it for only about 18 hours (during 8 of which I've been asleep), and have mainly taken pictures of the dog (who I don't photograph enough) and my son and his friend (who are having a sleepover), I think it's going to be a camera I can spend years working with.


[Photo via Flickr]

As you can see, it's got a very retro, Leica rangefinder aesthetic, though it has an electronic viewfinder rather than an optical one (or the cool hybrid that the X-Pro1 has). Of course, you can set everything to adjust itself automatically; but shutter speed, aperture, and focus all have dedicated manual controls on the camera or lens, and the ISO can be accessed from the Fn button just beside the shutter button.

Dive into the options menus, and there are tons of other things you can adjust, custom profiles you can create (that'll be next on my to-do list), and special effects– film simulations that mimic the distinctive color profiles of different Fuji films, a couple of black-and-white films, and so on.


[Photo via Flickr]

The other two things about it that I think I'm going to love are that it's very light, and it's surprisingly small.

The pictures don't really give you a good sense of how small the camera is. The body is about a quarter inch longer than an iPhone, and perhaps a quarter inch taller, so it's Not Large At All. And the body weighs about 12 ounces (350 g), which is Really Light.

So while it's meant to be a two-handed camera, you can comfortably carry it in one hand.


[Photo via Flickr]

My talks feature all my own pictures, and so having a good camera is a professional necessity; it's an important part of the Brand of Me, and helps me get my ideas across to my audiences.

More than that, though, I feel like this is the kind of device I could spend a decade working with. These days, as specs constantly improve and costs drop, it's easy to convince yourself that the Next Cool Thing will make you a better photographer, or writer, or golfer, or guitarist. Of course, there is a marginal truth to that, but it's a lot more important to learn how to use a device to improve your own ability to see, or your voice.

That doesn't mean NOT taking advantage of technology. It means not relying on its improvement alone, and being thoughtful about how you can both exploit it and improve yourself. (There are things I've almost completely outsourced to devices. In the last ten years I've memorized only the phone numbers of my wife and kids; all the others I entrust to my iPhone.)

There's one other calculation for me. As I get older and more reflective, I think less about how many more turns of Moore's Law I can consume, and how many cool devices I can acquire. The challenge isn't to get the Next Great Thing, but the Last Great Thing: as much as possible, to choose things that, whether I live another five years or another fifty, will last; serve me well; constantly give me pleasure; and help me consciously extend or augment my own abilities. This requires a level of thoughtfulness and self-understanding, and frankly a certain amount of money: a $1400 camera is a lot more likely to fall into this category than a $300 one.

So we'll see if I made the right choice.

This is how you review a book

Charles Pierce’s review of Ross Douthat’s Bad Religion (shorter version: the Sixties sucked) is a master class in how to take apart a book in a manner that respects the subject, but gives the author the flogging they deserve. This may be my favorite part:

[N]owhere does Douthat so clearly punch above his weight class as when he decides to correct the damage he sees as having been done by the historical Jesus movement, the work of Elaine Pagels and Bart Ehrman and, ultimately, Dan Brown’s novels. Even speaking through Mark Lilla, it takes no little chutzpah for a New York Times op-ed golden child to imply that someone of Pagels’s obvious accomplishments is a “half-educated evangelical guru.” Simply put, Elaine Pagels has forgotten more about the events surrounding the founding of Christianity, including the spectacular multiplicity of sects that exploded in the deserts of the Middle East at the same time, than Ross Douthat will ever know, and to lump her work in with the popular fiction of The Da Vinci Code is to attempt to blame Galileo for Lost in Space.

Fantastic. As good as Adam Gopnik’s epic takedown of The Matrix Reloaded. It’s all the more impressive because you get the sense that Pierce really knows what he’s talking about. Here are two very different lines, each illuminating in its own way:

He describes the eventual calcification of the sprawling Jesus movement into the Nicene Creed as “an intellectual effort that spanned generations” without even taking into account the political and imperial imperatives that drove the process of defining Christian doctrine in such a way as to not disturb the shaky remnants of the Roman empire. The First Council of Nicaea, after all, was called by the Emperor Constantine, not by the bishops of the Church. Constantine — whose adoption of the Christianity that Douthat so celebrates would later be condemned by James Madison as the worst thing that ever happened to both religion and government  — demanded religious peace. The council did its damndest to give it to him. The Holy Spirit works in mysterious ways, but Constantine was a doozy. Douthat is perfectly willing to agree that early Christianity was a series of boisterous theological arguments as long as you’re willing to believe that he and St. Paul won them all….

[Douthat is] yearning for a Catholic Christianity triumphant, the one that existed long before he was born, the Catholicism of meatless Fridays, one parish, and no singing with the Methodists. I lived those days, Ross. That wasn’t religion. It was ward-heeling with incense.

New category: Reviews

Because I have nothing better to do, I took a break and created a new blog category: book reviews. Given that I've written plenty of them, and work hard on them, I decided they deserved their own category.

I was also motivated to do so because a bunch of my older reviews no longer seem to be available online, and in the interests of historic preservation I've copied them here. You're welcome.

Review of Steven Johnson, “Where Good Ideas Come From”

My review of Steven Johnson's new book, Where Good Ideas Come From: The Natural History of Innovation, is now available on the Los Angeles Times Web site. (Interestingly, they publish some reviews online first, then run them in the newspaper.)

More at Contemplative Computing.

Update 9 December 2011: Here's the full text of the review:

The author explores the history of innovation, which is firmly rooted in collective efforts and learning things the hard way.

Steven Johnson's "Where Good Ideas Come From: The Natural History of Innovation" is misnamed. Natural history was pioneered by 18th century naturalist Gilbert White, and its blend of scientific fieldwork, travel writing, physical geography and anthropology was meant to convey the majesty and intricate interdependency of God's creation. The time-traveling Johnson overshot his mark by a couple of centuries. "Where Good Ideas Come From" reveals hidden relationships between disparate realms, decodes ancient mysteries, argues that we all have untapped powers and shows how to turn everyday materials into valuable ones. In short, it's a Renaissance alchemical guide.

Granted, the everyday materials Johnson writes about in his fluid, accessible book are not lead or dross, but people, places and very tiny animals. But today's alchemist wouldn't be interested in materials. Recently, Facebook was estimated to be worth about $33 billion, and gold was selling for nearly $1,400 an ounce; that means the social networking company was worth more than 700 tons of gold. We live in a world in which Farmville is worth a lot more than Sutter's Mill.

So what's the philosopher's stone for creativity, the elixir for making innovative places?

A "series of shared properties and patterns recur again and again in unusually fertile environments," Johnson argues, be they companies, cities or coral reefs. Good ideas, whether expressed as patents or paintings or DNA, flourish in liquid networks stocked with old ideas and physical resources that can be cannibalized, recycled and repurposed. Liquid networks give creative groups the chance to explore the "adjacent possible," the new functions or capabilities opened up by incremental innovations; discover new uses for old ideas; and explore potentially fruitful errors.

Finally, they serve as a proving ground for ideas, making it easier to experiment, fail quickly and cheaply and iterate faster. (Maddeningly, though, it's not clear how liquid networks select good ideas. In nature, species thrive when they fit their environments; but good ideas aren't inherently good — they can be counterintuitive and perverse — and "Where Good Ideas Come From" never quite explains whether markets are better than patrons, or tastemakers better than crowds, at identifying them.)

What emerges is a vision of innovation and ideas that is resolutely social, dynamic and material. Despite its trendiness, Johnson's perspective is at times wonderfully, subtly contrarian. Ideas don't spring from the minds of solitary, Galtian geniuses: They may start with smart people, but they're refined, extended and finished by creative cultures that are shaped by their physical environments.

But good ideas also don't emerge magically from crowdsourcing and promiscuous networking; they're slow hunches that "fade into view" during years of reflection, tinkering and exploring dead ends. Creative ferment may be accelerated by the Internet, but place still matters. And innovation is driven much less by competition than by obvious and subtle forms of cooperation: Even the most radical-looking invention builds on old ideas and recycled parts.

Like all of Johnson's books, "Where Good Ideas Come From" is fluidly written, entertaining and smart without being arcane. But is it any more successful than Renaissance recipes for turning lead into gold? "The more we embrace these patterns" in innovative spaces, Johnson says, "the better we will be at tapping our extraordinary capacity for innovative thinking."

I'm not sure it's that easy. Fish might not mind artificial reefs, but humans sure seem to. Efforts to create innovative spaces still yield results that feel like computer animations: bright, sharp and unreal. For example, Frank Gehry designed the Stata Center at MIT to encourage serendipitous connection and intellectual cross-fertilization among computer scientists.

But people are most innovative when they make their own creative spaces and connections, not inhabit someone else's. It's hard to do the kind of appropriation and reinvention of space that supports real innovation when you're working in a building that reflects a creative vision as distinctive as Gehry's.

The surrounding Cambridge neighborhood, on the other hand, is a bricolage of old houses, small factories and warehouses set on streets blazed by cows in the 1600s. It's flexible and can be repurposed endlessly — and it works brilliantly.

In other words, Cambridge (like Hollywood or Silicon Valley) is itself a good idea, the product of serendipitous connections, slow hunches and rich trial and error. If this is so, then creative environments can only be described, not designed. For all its promise to reveal the elixir of innovation, maybe "Where Good Ideas Come From" is a natural history after all.

The long-unpublished review of “What the Dormouse Said”

Last year, I sent the L. A. Times a review of John Markoff's What the Dormouse Said. Unfortunately, it arrived in between editorial regimes, and it never got published. So, I'm posting it here.

I don't mention it in the review, but Peninsula School, where my kids go, makes several appearances in the book. I hadn't realized that the school had this subterranean connection to the origins of personal computing, but it does. Really a fascinating place.

[To the tune of Gil Evans, "Little Wing," from the album "& British Orchestra / BBC".]


John Markoff, What the Dormouse Said: How the 60s Counterculture Shaped the Personal Computer. New York: Viking, 2005.

"Revolutionary:" is there a more over-used adjective for describing computers? Today, Silicon Valley press releases spin any new product or upgrade into the Second Coming of the printing press. But thirty years ago, the personal computer really was revolutionary. The first generation of PCs—the Apple II, TRS 80, Commodore 64, and their kin—were underpowered and nearly useless. Still, to many who had worked with time-sharing systems—who, in effect, had had their own virtual computers, and yearned for the real thing—they were artifacts of a New World, full of wonders and tricks and infinite potential. The PC—or microcomputer, as it was first called—was a Promethean technology, taking fire that had previously been reserved for a technical priesthood cloistered in academia or working for the military, and giving it to the people.

Indeed, for the small group that designed, programmed, and evangelized the PC in the 1970s, the notion that computers were revolutionary wasn't just colorful talk; for them, the personal computer was a child of the Sixties. The provocative thesis that the PC was political radicalism poured into silicon and magnetic memory, and reflected the spirit of the age every bit as much as tie-dyed shirts or the Grateful Dead's American Beauty, has been advanced in short essays by Theodore Roszak (author of the 1969 classic The Making of a Counter Culture) and Whole Earth Catalog founder Stewart Brand, but it has never been explored extensively. John Markoff's new book, What the Dormouse Said, shows in detail how a blend of high technology, cultural innovation, political radicalism, and—shall we say—pharmacological experimentation spun together in Silicon Valley in the 1960s to yield the personal computer. In so doing, Markoff greatly enriches our understanding of the origins of the personal computer, the history of the Sixties, and the nature of innovation.

A casual visitor to the Peninsula in the early 1960s—it would only come to be called "Silicon Valley" a decade later—would have seen a quintessentially Cold War American tableau. The electronics and aerospace industries were expanding at a ferocious pace, made rich by exploding consumer demand and military contracts. Employers like Lockheed, NASA, Hewlett-Packard and Stanford, and pleasant towns like Palo Alto and Menlo Park, attracted thousands of well-educated, middle-class families. But tucked away in some neighborhoods was a long bohemian and radical tradition: artists, writers, and misfits congregated in Menlo Park's Willows and the hills above Stanford, attracted by cheap housing and beautiful surroundings. The two worlds coexisted but didn't interact until the early 1960s, when several things brought them together.

First, industry insiders were marveling at the spectacular growth in computing power—a phenomenon, Markoff makes clear, that was known to many but best codified by Intel co-founder Gordon Moore. The prospect that computing power would grow exponentially for decades gave some far-sighted researchers the freedom to think about the future of computing in outsized, science-fiction terms, and to assume that the technology would deliver whatever capabilities they would need to make their dreams reality. Designing for a radical future became the norm, and Silicon Valley began its addiction to ever-cheaper, ever-more-powerful chips.

Second, the area around Stanford attracted two remarkable but very different computer scientist-evangelists. Douglas Engelbart arrived at Menlo Park's Stanford Research Institute to build computer systems that could augment human intelligence and multiply the power of collaborators, not just automate services or business processes. At the same time, John McCarthy founded the Stanford Artificial Intelligence Laboratory in Palo Alto; his group didn't want to augment human intelligence, but to reproduce it electronically. Engelbart's Augmentation Research Center (ARC) and McCarthy's SAIL brought together Stanford Ph.D.s, hardware wizards, software hackers, and even high school-age hangers-on—most famously, future Apple co-founders Steve Wozniak and Steve Jobs.

Finally, a small coterie of engineers and scientists, led by Ampex engineer Myron Stolaroff, discovered LSD. The drug would later become notorious, thanks to the Merry Pranksters and the Acid Tests; but at Stolaroff's International Foundation for Advanced Study, LSD was a tool for stimulating creativity. The foundation sought out scientists and engineers who had been working on some difficult technical problem for at least three months; until LSD was outlawed in 1967, several hundred of the region's best minds dropped acid, many under the guidance of psychologist Jim Fadiman. For this group, LSD was the ultimate high tech, a road to Nobels and patents rather than Nirvana and inner peace.

Out of this brew, Markoff argues, came the personal computer. Moore's Law suggested that today's expensive minicomputer would eventually be affordable to hobbyists. ARC and SAIL invented many of the key technologies of the digital age, sketched out what a future with abundant computers could be like, and inspired a generation of engineers to want machines of their own. Finally, the tech-savvy counterculture experimented with computers as tools for community organization, activism, and personal growth. As Markoff puts it, "The idea of personal computing was born in the sixties; only later, when falling costs and advancements in technology made it feasible, would the box itself arrive."

Ironically, neither ARC nor SAIL would create that box. Despite Engelbart's brilliance and charisma, the world showed little interest in ARC's powerful but complex system, and the "strong AI" dream of creating intelligent machines collapsed in the 1970s. When Xerox decided to fund its blue-sky Palo Alto Research Center, it rapidly attracted young ARC and SAIL veterans eager to do their own work on personal computing, networks, user interface design, and graphics. A more informal but equally important platform for exploring computers opened soon after. The Homebrew Computer Club attracted a mix of antiwar activists, Whole Earth Catalog veterans, and computer scientists; ultimately, two dozen companies, including Apple, would come out of Homebrew.

Some of this story is already familiar. Homebrew is remembered with great affection in Paul Freiberger and Michael Swaine's Fire in the Valley and Steven Levy's Hackers. More recently, Michael Hiltzik's Dealers of Lightning mapped the complex legacy of Xerox PARC. Douglas Engelbart has become the Roy Orbison of computers, his contributions finally remembered and respected in his later years. Markoff tells these familiar stories in greater depth than usual, but he also does three other important things.

First, Markoff makes a compelling case that Palo Alto and Menlo Park should figure as prominently as San Francisco and Berkeley in the history of the counterculture. Most narratives of the Sixties focus on the Free Speech Movement, anti-Vietnam protests, People's Park, the Black Panthers, and other events in San Francisco and the East Bay. Palo Alto, in contrast, has looked about as revolutionary as an Atlanta suburb.

But over the long run, what happened on the Peninsula was infinitely more important than the Summer of Love. People's Park may loom larger in popular memory than the People's Computer Center, but the former is still an undeveloped block in Berkeley (local activists and the university have fought to a stalemate over it). The latter helped spawn a new attitude to information technology.

Second, Markoff shows just how much the personal computer was the creation of a community, rather than a few individuals. Engelbart and McCarthy were dismissive of the idea of personal computers, and even Xerox PARC's leadership thought the first PCs were little more than toys (which was exactly right) with no potential (which was exactly wrong). No one in Markoff's story has the entire vision of what the PC would become, but everyone contributes something.

Finally, Markoff shows that even the most familiar, universal technologies are products of local environments. The personal computer that we use today—the inexpensive machine that we buy at the local electronics or big-box store—is a miracle of globalization and commodification. Were Adam Smith alive today, he wouldn't talk about the pin factory as a model of efficiency; he'd talk about contract manufacturer Flextronics, supply chains that link Taiwan, Mexico, Malaysia, and Ireland, and global brands like Samsung and Apple.

But the specific blend of establishment and counterculture, Markoff makes clear, is what made Silicon Valley in the 1960s great. Other places had one or the other: San Francisco had an abundance of hippies, Princeton and Cambridge a surplus of computer companies, but only on the Peninsula did the two worlds intersect so completely. Buttoned-up engineers did acid; hippies worked at Hewlett-Packard; Engelbart's and McCarthy's labs were full of antiwar protesters whose salaries were paid by military and government grants.

This mash-up of countercultural and corporate worlds may seem contradictory, but what matters is that it was immensely productive, and it remained so for decades. Steve Jobs liked to tell the designers of the Macintosh, the computer that brought the graphical user interface to the masses, "It's better to be a pirate than join the navy." But they weren't pirates; they were privateers, operating independently but always supported—and funded—by the corporate navy. It's why they succeeded.

But there's one piece of the vision of computing in the 1960s that we're only now starting to fulfill. Engelbart didn't see his work as just enhancing individual creativity; he wanted to boost the collective intelligence of groups, to improve collaboration, group memory, and problem-solving ability.

For a long time, that dream was dumbed-down into things like knowledge management and corporate intranets, and largely ignored by PC pioneers. Now, with the rise of the social software movement, and a deeper theoretical understanding of how groups can use technology to self-organize and cooperate, we're starting to see tools that can make groups smarter, and offer the prospect of putting diverse ideas and practices to good common use.

It's a movement that, for all its technical savvy, puts people and creativity ahead of gadgets. Making it all work will require the combined talents of humanists, designers, geeks, and anthropologists. This new mash-up may be coming together in Silicon Valley. If it does, and if it delivers, journalists will write books like What the Dormouse Said about Palo Alto in the 2010s. If it doesn't, the town will become as tranquil as it looks, and the real action will move elsewhere.

Noyce biography

I don't know how long it's been out, but I just stumbled on my review of Leslie Berlin's The Man Behind the Microchip: Robert Noyce and the Invention of Silicon Valley, in the November-December 2005 issue of American Scientist. Doesn't look like there's a password required to get to the piece.

For those who don't know about him, here's a brief bio:

Noyce, the son of a minister, attended Grinnell College and then got a doctorate in physics from MIT. Two years later, in 1955, he moved to California to join a company in Palo Alto that had just been started by William Shockley, one of the coinventors of the transistor. Shockley's remarkable eye for talent was exceeded only by his gift for mismanagement. Less than two years later, eight of the men he had hired—the "Traitorous Eight," he called them—were dissatisfied enough to strike out and found their own company: Fairchild Semiconductor. There Noyce invented the integrated circuit (at about the same time that Texas Instruments engineer Jack Kilby also produced one), and he quickly rose to the rank of general manager. A decade later, Noyce and Gordon Moore left Fairchild to start a second company, Intel, which became a leader in the semiconductor industry in the 1970s and 1980s.

So he's well worth a biography. And Leslie Berlin's is a good one.

[To the tune of Zero, "Little Wing," from the album "1990-05-26 – Sweetwater Saloon".]


It’s out! (Michael Chorost’s Rebuilt)

My review of Michael Chorost's Rebuilt: How Becoming Part Computer Made Me More Human is on the L.A. Times Web site. I think it's also the lead review, but I haven't seen hard copy yet; I've got a couple copies reserved at my local bookstore.

Unlike with my review of More Than Human: Embracing the Promise of Biological Enhancement, I actually found this one myself.

Update, 9 December 2011: Since the review doesn't seem to be available on the L. A. Times Web site, I've posted it below.

Human meaning in a world of ones and zeroes

Michael Chorost, Rebuilt: How Becoming Part Computer Made Me More Human. Houghton Mifflin: 224 pp., $24

By Alex Soojung-Kim Pang, a research director at the Institute for the Future, a Silicon Valley think tank, and the author of "Empire and the Sun: Victorian Solar Eclipse Expeditions."

In July 2001, Michael Chorost, a writer and researcher at SRI International, a Silicon Valley research center, became deaf after a viral illness. That fall, he became a cyborg. "I can feel the compact bulk of the implant in my skull," Chorost writes about the surgery. "It is strange to think that by now its two computer chips will have warmed up to exactly the temperature of my body."

Chorost's hearing had been impaired since childhood. When he was 36, while on a business trip to Nevada, he noticed that both his hearing aids were failing. Nothing he did — a change of batteries, adjusting ear molds and tubes — made a difference. In a local emergency room, a specialist told him what was happening to his hearing. "I've always been hard of hearing. I can't go deaf," he thought.

In a normally functioning ear, sound waves are translated into electrical impulses by thousands of tiny hairs in the cochlea, a spiral organ linking the ear to the brain. The hairs in Chorost's cochleas no longer functioned, but the nerves that connected them to the brain were undamaged. Thus, he was a candidate for a cochlear implant, an electronic device that digitally processes sounds and transmits them to 16 electrodes connected to the cochlear nerves.

"Rebuilt: How Becoming Part Computer Made Me More Human" is Chorost's account of learning to hear again. Funny and thoughtful, the book is an extended meditation on the nature of perception, the human brain and the relationship between technology and humanity.

The term "cyborg" is familiar from science fiction. Think of TV's Steve Austin, "the bionic man," or characters on "Star Trek." Some even use the term when referring to any combination of humans and machines: By this standard, someone wearing glasses or even using a toothbrush is a cyborg. Wrong, Chorost argues: To be a cyborg, you need technology that "exerts control of some kind over the body." In his case, the cochlear implant mediates between him and the world of sound; he hears through a tangle of silicon, wire and code.

The implant is the size and thickness of two quarters: Packed into its ceramic casing are a microprocessor, a radio transmitter and a magnet that attaches to an external headpiece worn by Chorost. An insulated wire from the microprocessor is threaded into the cochlea. "The computer's playing my ear," he writes about the time it was first activated. Sixteen electrodes don't produce the same signals that thousands of hairs do. Autumn leaves tinkle rather than crunch. Leaf blowers and toilets sound like explosions. And the implant seems to make little distinction between sounds that are near and far. Chorost must learn to interpret the new signals.

Some of this learning is conscious: Conversations are easier if he doesn't concentrate too hard on them. Much of it happens at the neural level, as Chorost's brain learns to associate new signals with familiar sounds like friends' voices or the rustling of paper. Male and female voices at first sound the same, muffled and indistinct. But then they begin to sound different; eventually, "male voices sounded deep" once more and "women sounded like women again." What happened?

"My brain had somehow reinterpreted a huge frequency change back into a semblance of normality," he explains.

Incredibly, the implant uses two software programs, which interpret sound in very different ways, to control the electrodes. One program makes the world sound "big, blocky and fuzzy," he writes, while the other makes the world sound quiet, metallic and tinny. Different programs produce different auditory worlds, and different realities.

"My hearing had not been restored," he writes. "It had been replaced, with an entirely new system that had entirely new rules…. I would have to become emotionally open to what I heard, instead of fighting against it."

The challenge of making human meaning in a world of zeroes and ones colors other parts of Chorost's life too, including his attitude toward technology. He signs up for online dating services but finds that the reduction of people to a set of formal coordinates — height, eye color, profession — may suggest precise matchmaking when it actually short-circuits the slow, serendipitous processes that normally bring and bind people together. (Ellen Ullman's classic memoir, "Close to the Machine: Technophilia and Its Discontents," also describes the effect of technological obsession on private passion.)

In one brilliant passage, Chorost notes that "[s]ocial norms are not taught, they are overheard, but the one thing even the most skilled deaf people cannot do is overhear." And yet, the deaf community in America is a group whose "warmth, intimacy, and cohesiveness … are legendary," and whose members have turned American Sign Language into a remarkably expressive medium. (Chorost was never quite part of that world: He responded to hearing aids, and his parents were determined to raise him in the hearing world rather than have him learn to sign.) Cochlear implants are available to many young deaf children today. Those who get them won't hear perfectly but they probably won't learn enough sign language to be full members of the deaf community: In other words, they'll be like Chorost. In the long run, the author fears, cochlear implants will probably spell the deaf community's end — yet another community whose bonds will be weakened, or destroyed, by technology.

Four years after getting the implant, Chorost has decamped suburban Silicon Valley for urban San Francisco, and he continues his search for the ideal mate. He also hears pretty well. But even more important, as he reflects on the equipment and the software that runs it, he says, "my bionic hearing made me more human, because I was constantly aware that my perception of the universe was provisional, the result of human decisions that would be revised time and again. [I]t was my task as a human being to strive to connect ever more complexly and deeply with the people and places of my life."

"Rebuilt" may be the first of a new genre: the cyborg memoir. It certainly won't be the last. Every era has its characteristic trials and transformations. Thanks to a blend of demographics and technological advance, we can expect an avalanche of stories of transformations that are equal parts technological, neurological and social. Futurist Theodore Roszak argues in "Longevity Revolution" that aging baby boomers will embrace a stunning variety of implants and genetic modifications to remain healthy and independent.

Medical advances are turning formerly life-threatening conditions brought on by advancing age or injury into treatable ones. What was once a rare or easily marginalized experience may become a rite of passage as millions of us, entering our last decades, learn to walk, see or hear again. The challenge will not just be to recover but to draw what wisdom we can from the experience.

Chorost shows us the way. His awareness of life's fragility, gained after making a determined effort to overcome its challenges, strikes me as the perfect answer to opponents of implants and genetic modification who worry about the effect of such tinkerings on our selves and souls. Memoirs such as "Rebuilt" will be invaluable guides in this new territory.

[To the tune of Barbie, "How Can I Refuse?," from the album "Barbie Sings!: The Princess Movie Song Collection".]


Hey, it’s out! (Ramez Naam, More Than Human)

My review of Ramez Naam's book More Than Human is up on the LA Times Web site. I don't think registration is required. I hate to admit it, but I completely missed its publication. I only found out because Naam e-mailed me about it– and took my criticisms of the book in stride, it sounds like. (Though when you've been a project manager at Microsoft and reported to Bill Gates, you've probably survived far worse than anything that a book critic can dish out.) Which is good: while I had some quibbles with the book– techno-libertarians will find it a great read, but someone who's opposed to the kinds of technological modifications of bodies that Naam talks about won't come away thinking that their position is wrong– I did enjoy it. The one thing I regret is a line that violates an informal rule I've tried to follow when reviewing books. The essence of the review boils down to these lines:

More Than Human is a terrific survey of current work and future possibilities in gene therapy, neurotechnology and other fields. Naam doesn't shy away from technical detail, but his enthusiasm keeps the science from becoming intimidating. But he's less successful in making the case for "embracing the promise of biological enhancement." Yes, people are greedy, regulations are often ineffective and the war on drugs has not gone well. But none of these facts is likely to change the minds of people who oppose gene therapy on moral or theological grounds.

The problem comes in the third sentence. My informal rule is that you should praise the author, but criticize the book: it gives you some critical (as it were) distance separating the author and the book, and keeps an author from taking criticism too personally. Unfortunately, I slipped in the middle of the paragraph. Or maybe the editor changed it, in which case I'll have to track him down and beat him up. Just kidding, Nick! Seriously, you're the best.

Here's the whole review:

Era of souped-up humans is coming

Scientific and medical advances in the last 150 years have doubled average life spans in advanced countries; made historical curiosities of fearsome epidemic diseases; eliminated childhood scourges; turned fatal adult diseases into chronic illnesses to be "managed"; and changed the way we think about aging. But if you think these changes have pushed at our sense of what it means to be human, just wait for what will happen in the next 20 years.

Gene therapy could eliminate genetically based diseases; designer drugs could combat neurological or brain disease, improve intelligence or sculpt personality. A variety of therapies could affect life at its beginning and end, allowing parents to modify the genes that shape an unborn child's mind and physique, or elders to dramatically slow the aging process. Brain implants already let us use thought to control prostheses and robotic devices. In a few years, they could evolve into machine-mediated brain-to-brain connection — Internet-enabled telepathy and mind reading.

Authors as different as Bill McKibben in "Enough" and Francis Fukuyama in "Our Posthuman Future" argue that technologies could so dramatically alter our bodies, or challenge our capacity for self-determination and free will, that we should be wise enough to refuse — even ban — them.

Stop worrying, Ramez Naam says in "More Than Human." He argues that efforts to ban such enhancements are either folly or futile for several reasons. Prohibition wouldn't destroy the markets for life-extending therapies or genetic redesign of human embryos, he says; it would just drive them abroad or underground. Banning technologies and therapies also constrains the freedoms of individuals and markets. The Declaration of Independence declared that "Life, liberty and the pursuit of happiness" are inalienable rights: Denying someone access to cortical implants hits the trifecta.

Further, Naam argues, "scientists cannot draw a clear line between healing and enhancing." Banning the latter would inevitably cripple the former. Finally and most provocatively, "far from being unnatural, the drive to alter and improve on ourselves is a fundamental part of who we humans are." This turns the argument of bioethicists like Leon Kass (head of the President's Council on Bioethics, which has been famously conservative in its recommendations) upside down. Our limits don't define us, Naam says; our desire to overcome them does.

"More Than Human" is a terrific survey of current work and future possibilities in gene therapy, neurotechnology and other fields. Naam doesn't shy away from technical detail, but his enthusiasm keeps the science from becoming intimidating. But he's less successful in making the case for "embracing the promise of biological enhancement."

Yes, people are greedy, regulations are often ineffective and the war on drugs has not gone well. But none of these facts is likely to change the minds of people who oppose gene therapy on moral or theological grounds. Many religions see the body as a prison, not a temple, and illness and death as part of life's natural course. Indeed, the Pontifical Academy of Life recently decried the Western world's "health-fiend madness," arguing that it takes money away from simpler but more potent public health measures — and denies us the hard-won wisdom that suffering can bring.

But in today's borderless high-tech world, if gene therapies and neural implants are banned in the U.S., they'll probably be available somewhere else. Medical tourism is already a growth industry in parts of Latin America and Asia that have low labor costs, attractive locations and good facilities. One can only imagine the money a small tropical nation could make restoring youth to the elderly. Rather than focus on banning them, we'd be better off making sure these therapies are not available only to the super-rich and figuring out how their availability could affect the future.

Those efforts might be helped by realizing that "More Than Human" describes two different technologies. Life-extending therapies, despite their likely popularity, probably wouldn't dramatically change our sense of what it means to be human. In contrast, neurotechnologies that allow a prosthetic device to feel like a part of our bodies, or let us directly share thoughts and senses with others, would scramble our basic notions of body and mind, self and other, individual and community.

I tend to agree with Naam that the desire to prolong life, acquire new physical powers and extend the mind does not risk making us less human. There's more to life than trying to recapture lost youth, but no one who defends the humanity of the weak, the disabled and the very old should deny the humanity of those who seek to re-engineer their bodies or minds. "More Than Human" maps some of this future, but it probably won't help you decide whether you want to really go there.

Review of Tom Misa, Leonardo to the Internet

My review of Tom Misa’s Leonardo to the Internet came out in the latest American Scholar. Since the Scholar doesn’t have a Web site, I’m taking the liberty of publishing the review here. (The editors at AS, all of whom are leaving or are being forced out– a fact that I still have a hard time getting my mind around– made some improvements to it; this is really the draft.)

[To the tune of The Beatles, “Let It Be,” from the album 1.]

Thomas J. Misa, Leonardo to the Internet: Technology and Culture from the Renaissance to the Present. xxii + 324 pp., illus., tables. Baltimore: Johns Hopkins University Press, 2004.

It is a fact universally acknowledged that technology looms large in today’s world. There’s no pressing issue that isn’t influenced, sometimes decisively, by technology. Globalization as we know it would be impossible without computer and telephone networks, international financial systems, low-cost shipping, jet air travel, and cheap electronics; nor would the anti-globalization movement survive long without the Internet to publicize its criticisms and coordinate cross-border campaigns, or cell phones and SMS (short message service) to mobilize quickly at demonstrations. Global warming, agriculture, education, cultural divides, political activism, even our sense of what it means to be human are all affected by technology.

It is thus surprising that technology gets short shrift in elite culture. Science is recognized as a critical part of modern life, thanks to the efforts of generations of literate scientists like Jacob Bronowski and Stephen Hawking. (The growing opaqueness of contemporary elite culture has helped, too. If you need a graduate degree to make sense of John Cage and Matthew Barney, quantum physics and molecular biology seem a little less far out.) Educated people should know a little science—or at least feel guilty that they don’t—but no one who can’t read a circuit diagram feels uncultured.

Why is this? Several things work against technology’s public image. In contrast to science, engineering is an analytical activity rather than an imaginative one; like accounting, it isn’t supposed to be creative. Technology is either just applied science, a byproduct of what happens in the laboratory, or inspired tinkering done by visionary (and often solitary) geniuses. Finally, technologies strictly follow the rules of economics: the winners are always the cheapest or most efficient, and they make us more productive.

These assumptions drive historians of technology nuts. Thomas Misa’s Leonardo to the Internet gently aims to demolish each one, and to give readers a more nuanced view of technology’s role in history. The best historians of technology have recognized that their underdog position forces them to be more explicit about technology’s role in history, and to make bold claims on their subject’s behalf. Thomas Hughes, the elder statesman of the history of technology (and Misa’s dissertation advisor), argued in his 1989 American Genesis that the achievements of American engineers were equal to those of Greek philosophers, Florentine artists, or Stuart playwrights. Technology isn’t just a part of American history, Hughes argues; it is American history. Leonardo to the Internet follows Hughes’ model of combining an engaging historical narrative with deeper lessons about technology. Technology, Misa argues, has played a central role in Western history since the Renaissance. But that influence is not uniform: different eras have distinct technological styles, each with its own logic and imperatives.

In some cases, political interests shaped technological styles. Renaissance inventors and engineers, like Leonardo da Vinci and Francesco di Giorgio, worked for patrons who “were not much concerned with labor-saving industrial technologies or with profit-spinning commercial ones.” (32) For all their ruthless efficiency in politics and statecraft, figures like Ludovico Sforza and Cosimo de’ Medici were interested in devices that would bring them success in battle, reflect and enhance their glory, or be entertaining: a court engineer might have to work simultaneously on siege engines, water-driven automata, and theatre equipment. Similarly, nineteenth-century European engineers who “addressed the unparalleled problems of far-flung overseas empires” (97) were less concerned with economic development than with political control. Railroads accelerated the integration of colonial possessions into imperial economies, and telegraph systems kept metropolitan powers informed of events in the colonies; the economic benefits were less important to designers than the military and political concerns. More recently, the Cold War-era “military-industrial complex” (as President Dwight Eisenhower described it) placed economic concerns far behind strategic goals and tactical needs.

In other cases, economics have been more important in shaping technology, but economic imperatives, and the technological styles they produced, varied substantially. The seventeenth-century Dutch Republic, for example, was the center of a vibrant, globe-spanning trading network supported by a variety of technologies. A substantial percentage of European commerce moved through Amsterdam and Rotterdam, the Dutch East India Company inserted itself into the Asian silk and spice trade, while its West Indian counterpart moved Swedish copperware to Africa, African slaves to Brazil, and Brazilian sugar to Europe. Supporting these activities were the factory-like herring busses and globetrotting cargo fluyts, which lowered transportation costs enough to make such global traffic economically viable, and a variety of manufacturing innovations aimed at producing high-quality finished goods. The distinctive Dutch technological style, Misa argues, was capitalist but not industrialist, focused on “carrying goods cheaply [and] processing them profitably.” (34) A century later, in contrast, English engineers in London and Manchester focused on technologies that cut costs and allowed high-volume production. Steam power, large centralized manufacturing centers (most notably London breweries and Manchester cotton mills), and dense networks of skilled workers and secondary industries supporting large enterprises were the defining features of British industry. In the late nineteenth century, engineers working in large enterprises like General Electric and IG Farben directed their energies to extending electrical grids, chemical plants, and other large systems. Their style traded the “creative destruction” of innovation for incremental improvements that would yield technological and commercial stability– industrial but not capitalist.

Science, in contrast, plays a less important role than economics or politics in Misa’s story. The notion of science as an activity distinct from technology or philosophy is a recent one; indeed, the term “scientist” is a nineteenth-century invention. A better understanding of physical and chemical laws helped spur the rise of the electrical and chemical industries, but by the late 1800s and early 1900s, scientific research itself was being industrialized and rationalized. Industrial research laboratories at General Electric, IG Farben, Bell Labs, and elsewhere were idea factories, organizing large groups of scientists to create knowledge that satisfied the needs of corporate sponsors. In other words, science itself becomes a kind of technology.

Culture can also play a guiding role in technological styles. In the early twentieth century, advances in the mass production of steel and glass allowed the likes of Filippo Marinetti, J. J. P. Oud, and Walter Gropius to create a new, explicitly modern style in housing and industrial design. This style rejected the historical references and ornament of academic, Beaux Arts design (Viennese architect Adolf Loos famously declared that “ornament is crime”), and sought to create new architectural forms appropriate to urban industrial civilization. More recently, national and regional cultures have adapted global technologies to suit local needs. Japanese and Finnish teens have created subcultures around the mobile phone, a technology originally aimed at busy executives. McDonald’s, the exemplar of homogenizing global culture, is being subtly remade as it moves into Asia: Chinese families use it as a cheap banquet hall, while students in Hong Kong and Singapore spend hours there studying. So much for fast food.

Wait. What’s McDonald’s doing in a history of technology? There’s a deep point here: fast food is a triumph of technology, a highly efficient system for producing and delivering low-cost goods to mass markets. (Critics would say that like any factory system, it is also ecologically destructive and socially disruptive.) Hamburgers and fries aren’t the first industrialized food: in the nineteenth century, the British brewing industry was so enamored of steam power, large-scale production, and centralized facilities that, according to Misa, porter “deserves full recognition as a prototypical industrial-age product alongside cotton, iron, and coal.” (65) More broadly, a view of technology that focuses exclusively on the most visible products of industrial civilization—hardware, factories, computers—is profoundly incomplete. Instead, one must think in terms of technological systems, which can include hardware, facilities and production techniques, people, technical standards, knowledge, and financial and legal institutions. Financial systems help define how large technological systems grow; copyright and patent law plays a huge role in structuring high-tech marketplaces; users often reinvent technologies, finding uses for them that their creators never intended.

The interdependency that defines systems helps explain why technological changes have unintended consequences. For years, economists puzzled over why technologies like the dishwasher and washing machine didn’t dramatically reduce the amount of time women spent cleaning house. It turns out that even as the time required to do any specific task—wash a shirt, go to the store—declined, other changes in the household absorbed those savings. Standards of cleanliness rose, so clothes were washed more often; stores stopped home delivery; household chores that had been done by men devolved to women. Likewise, the computer hasn’t led to the paperless office, because many users find hard copy easier to work with and preserve, and because electronic documents haven’t had the legal status that printed ones do. The interdependency of artifacts, actors, and institutions also means that any problem can have a legal, economic, or technological solution. This is why the record industry, shocked by the popularity of file trading systems like Kazaa, is suing downloaders, lobbying Congress to tighten copyright, and pressuring electronics companies to build digital rights management technology into their products.

Thinking in terms of systems rather than individual artifacts also reveals the degree to which technologies are products of historical contingency, and technological change is susceptible to influence by users. While this is important for understanding the history of technology, it is absolutely critical for shaping its future. For example, the contrast between Dutch and English technological styles suggests that the recent movement to create low-polluting “green” technologies and businesses that work with rather than exploit nature may be onto something. Creating such alternatives will not be easy—a systems view shows that it will require everything from low-energy manufacturing methods to accounting techniques that can measure (and hence make visible) the social and environmental costs and benefits of business decisions—but it is possible. As Misa puts it, “if technologies come from within society and are products of on-going social processes, we can, in principle alter them… even as they change us.” (xi) This, ultimately, is the strongest reason for taking technology as seriously as science or culture: not because it influences us, but because we can influence it.

Latest L. A. Times review (Edward Tenner, Our Own Devices)

My review of Edward Tenner’s Our Own Devices came out in yesterday’s Los Angeles Times Book Review.

I haven’t actually seen it myself: since I’m not a subscriber, I can’t get to the online version, and I was at a school function all day yesterday and didn’t get to Kepler’s, where I usually go to satisfy my periodical-purchasing needs. Oh well. They’ll send me reprints.

It was a fairly positive review overall: Tenner deserves credit for trying to do some pretty demanding things in the book, and recognizing the importance of a whole class of artifacts that normally get very little attention among historians of technology.

[To the tune of The Who, “My Wife,” from the album The Ultimate Collection (Disc 1).]

The gadgets of our lives

Political campaigns and wars generate plentiful records that can be spun into entertaining stories. The lives of familiar objects, in contrast, are scantily documented (if at all) and rarely follow a straight narrative line. Yet, as Henry Petroski and other historians of technology have demonstrated, it is possible to write histories of mundane artifacts — the pencil, the paper clip, the grocery bag — that are anything but mundane.

Edward Tenner seeks to take these kinds of studies to a new, more intimate level in “Our Own Devices: The Past and Future of Body Technology,” which examines how the human body is changed by “body technologies” — the things we wear, sit in and manipulate. (The forthcoming paperback edition’s subtitle, “How Technology Remakes Humanity,” is clearer and more ambitious.) Writing the history of something as common as the flip-flop sandal is difficult enough; explaining how it has changed the way people around the world walk and perceive their environments is much harder.

It’s one thing to explain how to ride a bicycle; imagine tracing how cycling techniques have changed over the last century as bicycle design, road design, racing strategy, even shoes and clothing evolved. That gives you a sense of the challenge Tenner set out for himself.

In “Our Own Devices,” Tenner explores not only what engineers intend objects to do but how people actually use them. For users are great innovators, finding purposes for devices that their creators never imagined. This interplay between technologies and “technique,” the skills and uses that define a device’s life in the real world, is the book’s organizing point, connecting stories of baby bottles, sandals, running shoes, office seating, recliners, musical and typewriter keyboards, eyeglasses and helmets.

One might think that as technologies advance, technique would become less important, since new technologies can render older skills irrelevant; in fact, they often create a demand for new ones. Thanks to cable TV, we no longer have to fiddle with antennas; but today’s remote controls virtually require graduate degrees in engineering to understand. Computer users may no longer have to write their own programs, but they must create and manage systems that connect CPUs, monitors, MP3 players, digital cameras and other equipment. (This has given rise to “cable management” consultants, who bring order to their clients’ desks.) Now, as scientists work to perfect implants, prostheses and drugs to enhance concentration and memory, we seem poised to take body technology to a new threshold; studying its history might help us anticipate how these new technologies might “bite back,” as Tenner put it in his 1997 book, “Why Things Bite Back.”

The interplay between technology and technique is different in each chapter. In a few cases, Tenner shows that science drives design innovation: Office chairs, running shoes and helmets have been revolutionized in the last 50 years by ergonomics. Technology and technique can also inhibit or accelerate each other’s development. In some cases, the investment of time and energy required to learn a new technique inhibits the technical innovations that would make old ones obsolete. (For example, typists who are fluent users of the traditional qwerty keyboard are reluctant to learn an entirely new system, even if it promises to be faster.) At other times, users demand or generate innovations.

Users also have spurred innovation. Nineteenth-century pianists (most notably Ludwig van Beethoven and Franz Liszt) pushed the technical limits of the pianoforte in search of “new volume, range, and dynamic shadings in their music,” and drove manufacturers to create the modern grand piano, with its cast-iron frame and eight-octave range. Many 20th-century athletic shoe innovations came from mountaineers and yachtsmen, who created specialized types to meet the demands of their terrain; soccer players developed shoes to accommodate a faster, more fluid game that required better control and ball-handling skills.

Tenner also finds big issues in the histories of his small technologies. The worldwide spread of sandals exemplifies how globalization can absorb and then destroy local cultural forms. Zoris, which have two straps that meet between the big and second toes, originated in Japan; Japanese workers took the design to Hawaii, whence they spread to the American mainland, South America — Brazilians call them “Havaianas,” or Hawaiians — and Australia. Before World War II, they were handmade from local materials. Now they are mass-produced from plastic and PVC and symbolize “the replacement of the frugal environmental ethic of early Japan with a worldwide throwaway society.” Facsimile machines and computer-aided design programs have made it possible for athletic-shoe manufacturers to quickly develop and transfer complex visual designs, then move the skilled manufacturing jobs that have produced them from Germany and America to Taiwan and South Korea.

The book’s strongest sections deal with technologies that require conscious learning and have a professional or occupational component. Users know they have to work at learning how to type, play the piano, or master a sport; walking and sitting, in contrast, are learned unconsciously, even if they are shaped by culture and technology. It makes their histories far more elusive and, in some ways, less interesting.

History affords many examples of body technologies that are innovative and offer clear improvements in performance — and are controversial precisely because they make difficult things easier. Golf clubs and oversized tennis racquets that make long drives and power serves easier, ballet slippers that provide better support for dancers’ feet and piano keyboards that make playing in different keys effortless have all been rejected because their improvements violated tradition and were seen as unacceptable shortcuts or cheating. Tenner contends that controversies like these are valuable because they force people to explain positions and assumptions that normally remain unspoken and highlight the ways we seek to balance old and new technologies and techniques. Competing lounge chair designs don’t become controversial in the same illuminating way.

Had “Our Own Devices” focused more rigorously on cases from the worlds of sports, music and work, instead of pursuing its head-to-toe coverage of body technologies, the result could have been a tighter, more closely argued book. It also might have provided a more coherent framework for considering the future of body technology, which, though part of the book’s title, is dismissed in a mere six pages. Still, the book is engaging, rather like an actor turned politician: occasionally hard to understand but consistently entertaining and optimistic, if vague, about the future. Maybe it has flaws, but you vote for it anyway.
