First published in the New York Times Magazine on April 22, 2001.

As I drive my rental car across Silicon Valley under a cloudless and starry sky, it is fitting that the electronic navigation device on the dashboard should be talking to me. “Approaching left turn,” says Helga (as I call her). “Left turn in point five miles.” Headlights rush past us, exit signs loom and are gone, and now it occurs to me that this freeway doesn’t even have left turns. Helga is trying to show me something on a tiny, color-coded, icon-studded moving-map display at the edge of my peripheral vision. Up in the real world, we hurtle under an overpass. That wasn’t my left turn, I hope. But yes, apparently Helga lacks perfect knowledge of California cloverleaf topography. “Calculating route,” she chirps, as if we can simply begin again with no memory of the past. I am mindful of the German motorist who drove his BMW into the Havel River one night because he put too much trust in his dashboard navigator.

Still, we can’t get lost. We are too well connected, Helga and I. She listens constantly to at least four of the two dozen satellites of the Global Positioning System: orbiting atomic clocks that bathe the globe in their precisely intermingled time signals, enabling any device skilled in trigonometry (and these days what device isn’t?) to reckon its exact location. We are not alone here. My cellular phone, as long as it is on, parleys silently with the network, giving and receiving information about when and where we are. My hand-held Palm-type computer cum wireless modem has already pulled in directions by e-mail and can download new maps in real time. I could plug my laptop computer into the cell phone, or vice versa, and be online that way. (I haven’t felt the need to give all these devices names; most of them aren’t talking to me.)
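
For the curious, here is a rough sketch, in Python, of the trigonometry Helga’s receiver is doing behind the scenes: given the positions of at least four satellites and the travel times of their signals, solve for the receiver’s location and its own clock error. The satellite coordinates and the simple Gauss-Newton loop below are purely illustrative, not a description of any particular navigation chip.

    import numpy as np

    C = 299_792_458.0  # speed of light, in metres per second

    def locate(sat_positions, pseudoranges, iterations=10):
        """Estimate receiver position (x, y, z) and clock bias from four or more satellites."""
        guess = np.zeros(4)  # x, y, z in metres; receiver clock bias in seconds
        for _ in range(iterations):
            pos, bias = guess[:3], guess[3]
            ranges = np.linalg.norm(sat_positions - pos, axis=1)
            residuals = pseudoranges - (ranges + C * bias)
            # Each Jacobian row: the unit vector from that satellite toward the guess,
            # plus a column for the receiver clock term.
            J = np.hstack([(pos - sat_positions) / ranges[:, None],
                           np.full((len(ranges), 1), C)])
            guess += np.linalg.lstsq(J, residuals, rcond=None)[0]
        return guess[:3], guess[3]

    # Synthetic check: four invented satellites and a hidden "true" receiver.
    sats = np.array([[15e6, 0.0, 21e6], [-15e6, 5e6, 21e6],
                     [5e6, 18e6, 20e6], [0.0, -17e6, 22e6]])
    true_pos, true_bias = np.array([1.2e6, -4.5e6, 4.0e6]), 3e-6
    measured = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias
    print(locate(sats, measured))  # recovers true_pos and true_bias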


The network knows where we are. The network is there, all around us, a ghostly electromagnetic presence, pervasive and salient, a global infrastructure taking shape many times faster than the Interstate highway or the world’s railroads. This is different from the radio-spectrum Babel that defined the 20th century: the broadcast era. We aren’t expected merely to tune in and listen. This network is push and pull, give and take. It broadens our reach. If we lock our keys in the car, the network can unlock it for us from thousands of miles away — just a few bytes through the ether.

To play in this game, we must equip ourselves with gadgets. Communicative gadgets: mobile phones, pocket computers, radio-synchronizing wristwatches, remote car keys, smart cards and smart tags, microchips and antennas sewn into our hems and lapels. Two thousand one is shaping up to be the year of the wireless device — the threshold year — just as 1994 was the year of the internet and 1987 the year of the fax machine. Never mind the dismal sounds from Wall Street; the share prices of Palm and Nokia are not leading indicators on this matter. Mobile phones are nearing ubiquity: teenagers depend on them and frenetically instant-message their pals; couples stroll together engaged in parallel telecommunication; the New York/Washington shuttle before departure is one big tubular telephone lounge. But the phone is just the obvious part. I.B.M. is preparing Digital Jewelry: earrings, bracelets, chokers, microphones, cameras, tiny brains, all with minuscule batteries, all communicating wirelessly. We are to lodge these items nightly in their Digital Jewelry Box, where they will recharge their spirits and swap data. We children go to sleep; our toys stay up and play.

So the editors have sent me forth, equipped. I have joined what the Japanese are calling the oyayubizoku (the thumb tribe), named for the organ we so compulsively poke at our tiny keypads. I am meant to be the Compleat Geek. My hip vibrates with each incoming e-mail message because of the BlackBerry two-way pager clipped there, just like Al Gore’s. I have the i-O Display Systems i-glasses, pixels glistening before my eyeballs, one step short of pumping virtual reality directly into my optic nerve. I feel reluctant to wear this item — so sleek and cumbersome, so fashion-forward and yet so retro — out in Times Square. I share my misgivings with the editors. Their reply comes by e-mail: “We advise you to look at the fine print in your contract, which specifically addresses the type of headgear we can force you to wear in public.”

Oh, fine. I’m connected.


Information everywhere, at light speed, immersing us — is this what we want? We seem unsure. We are the species that defines itself in terms of information: homo sapiens sapiens. We are knowledge connoisseurs. We are being promised some approximation of All Previous Text (and music and pictures) in our pockets. Then again, we didn’t evolve in a world with so much data and buzz. Our sense organs tuned into one slow channel at a time. Now we tune in and out. The dream of perfect ceaseless information flow can slip so easily into a nightmare of perfect perpetual distraction.

Our technologies don’t just empower us: they also harass us, and they change us — for better and for worse. None more than the computer. “Other inventions alter the conditions of human existence,” writes Richard Powers in his novel Plowing the Dark. “The computer alters the human. It’s our complement, our partner, our vindication. The goal of all the previous stopgap inventions. It builds us an entirely new home.” All the more so when the computer is . . . everywhere.

But a long and bumpy path lies between promise and reality. “Wireless” is still a relative term. The cable and plug industry need not panic. Heading out to Silicon Valley with my wireless devices, I find myself gathering the following accessories:

  1. For my laptop, a three-pin AC adapter plus power cord.
  2. For my cell phone, a power cord, plus a cable to the laptop. Plus a hands-free headset — earplug and microphone in one — so I can walk along giving the impression that I’m talking to myself. I do not yet have a wearable cellular phone armband, rapid charger kit, holster or leather case.
  3. For my hand-held PC, a docking cradle. A power cord and adapter. And a detachable cable for synchronizing the data in the hand-held with another PC. The hand-held also has a modem attachment, which has its own power cord.
  4. For my digital-music player, I have plug-type earphones, although my otorhinolaryngologist disapproves of the attempt to apply sound right to the eardrum. Another cable connects the music player to my laptop, for loading the music in the first place.
  5. A different (but indistinguishable) cable connects my digital camera to the laptop, for unloading the pictures. My digital voice recorder, too, has a unique cable. No power cord; it runs on AAA batteries.
  6. Batteries.
  7. Manuals and warning placards. “To satisfy FCC RF exposure compliance requirements,” says one, “the user should generally maintain a separation distance of 4 cm between the person’s body and the device and its antenna.” But relax, we can make an exception for hands, “because they are extremities.”

There are old-style connectors: serial ports, with 9 pins or 25. There are new-style connectors: USB and FireWire. I try to coil some of the wires. They came with twist ties for this purpose. If I were better organized, I would have a box just for the twist ties. Another for the belt clips. My wife watches dolefully: “You’re setting up the Mir space station?”

In the imaginations of the gadget makers, these cables have already vanished. A new wireless standard called Bluetooth (after Harald Bluetooth, first-millennium Viking king) is meant to replace them. Every gadget gets a Bluetooth chip, with its own radio transceiver. All these Bluetooth-enabled devices sense one another’s presence, trade stories and keep one another up to date. They create spontaneous personal networks, where devices can act simultaneously as master and slave to other devices: ad hoc “scatternets,” “personal area networks,” networks within networks. Your Bluetooth mobile phone may obey instructions from a concert hall to switch to vibrate-only mode. Your Bluetooth headset will presumably know, at any given moment, whether to keep playing some song you have downloaded or switch to an incoming phone call or alert you to an impending thunderstorm.
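To make the jargon a little more concrete, here is a toy model, in Python, of the topology being described: a piconet with one master and up to seven active slaves (the commonly cited Bluetooth limit), and a scatternet that forms when one device belongs to two piconets at once, slave in one and master in the other. The device names are invented for illustration; no real Bluetooth stack looks like this.

    from dataclasses import dataclass, field

    MAX_ACTIVE_SLAVES = 7  # the commonly cited ceiling for active slaves in one piconet

    @dataclass
    class Device:
        name: str
        piconets: list = field(default_factory=list)  # every piconet this device belongs to

    @dataclass
    class Piconet:
        master: Device
        slaves: list = field(default_factory=list)

        def __post_init__(self):
            self.master.piconets.append(self)

        def join(self, device):
            if len(self.slaves) >= MAX_ACTIVE_SLAVES:
                return False  # an eighth active device would have to wait its turn
            self.slaves.append(device)
            device.piconets.append(self)
            return True

    phone, palmtop, headset, laptop = (Device(n) for n in
                                       ("phone", "palmtop", "headset", "laptop"))
    pocket = Piconet(master=phone)   # the phone runs the pocket piconet
    pocket.join(palmtop)
    desk = Piconet(master=palmtop)   # the palmtop: slave above, master here
    desk.join(headset)
    desk.join(laptop)
    # Any device sitting in more than one piconet stitches them into a scatternet.
    bridges = [d.name for d in (phone, palmtop, headset, laptop) if len(d.piconets) > 1]
    print(bridges)  # ['palmtop']
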

Into the same virtual space, the electromagnetic spectrum, comes a wholly different wireless standard, called 802.11b. (Say “eight oh two dot eleven bee.”) The proponents of this standard are pushing a friendlier name, Wi-Fi, for wireless fidelity. Apple wires all its new laptops for 802.11b, and other manufacturers are following suit. If you install a small base station somewhere on your home network, you can carry laptops from room to room, basement to kitchen counter, and never go off line. By the end of this year, thousands of hotels, airport lounges and coffee shops will be filling their airspace with this same invisible radiation field: information and connectivity all around. Microsoft and Starbucks are teaming up to deploy it. One can imagine grocery stores and department stores beaming real-time information to their gadget-toting customers. One can even imagine properly functional motor-vehicles offices and polling places.

Wi-Fi is “the next big thing,” asserts J. William Gurley, a Menlo Park, Calif., venture capitalist and online columnist. “The history of technology has proven again and again that if a certain open architecture gains escape velocity, there is no turning back.” He’s right. Whether he’s right specifically about 802.11b doesn’t matter. It might be Wi-Fi, or it might be Bluetooth, or it might be a combination of those and something else besides.

Stock-market watchers have their own special perspective, of course, and lately they have been glum. Yet people in the world’s Silicon Valleys are still showing up for work and planning our future. The recent spells of market euphoria and market despair, at their most extreme, have been illusions, with little relation to the real breadth of technological change. We do tend to take our illusions seriously when they involve large sums of money. For that matter, the rising volatility on Wall Street flows directly from the dense, high-velocity interconnectedness of our information sources. When everyone hears the same “news” at the same time and everyone tries to buy or sell the same stocks, any hope of market equilibrium vanishes. We are learning to live with whole new species of mass hysteria.

In other ways, too, these developments pose challenges to the life of the polity. More than ever, our ability to participate in the basic processes of our information-rich culture — commerce, education, entertainment — will depend on technology. The internet has been a democratizing force worldwide, knocking down walls, creating new voices, redistributing knowledge — sometimes, redistributing the kind of knowledge that brings wealth. But there are barriers to entry. Like our other core infrastructures — roads and bridges, the electric power grid, the phone system — the wired and wireless network is being built out largely by private companies, yet the public needs universal access. If laptops and Internet connections and Web-aware mobile phones remain tokens of privilege, then the gap between rich and poor will grow. Digital Jewelry, indeed.


The lexicographers of the Oxford English Dictionary have an open file on the word “network.” Some of the file is virtual: bits living in the network. Some of it exists in more traditional, detached form, on 4-by-6-inch slips of paper, which, at the moment I inquire, happen to be out on someone’s desk. I’m wondering whether they have tracked this new sense of the word: “the network,” or even “the Network,” meaning a global entity, bigger than the internet. The totality of the world’s computers, databases and communications channels. Maybe the network can be said to possess knowledge and even behaviors.

Sure enough, they are keeping an eye on “the network.” “It would seem that the new sense you mention is closely — maybe inextricably — tied up with a usage which goes back well before the internet came into existence,” says Peter Gilliver, an OED associate editor. Not the original sense, of course (“work in which threads, wires or similar materials are arranged in the fashion of a net”), but something connoting the totality of all information networks — and something we tend to personify (“the network listens”; “the network knows”). Gilliver checks science and science fiction without prejudice. “As long ago as 1970,” he notes, “the network was clearly used in very much your sense, the only difference being that in 1970 that ‘totality’ was pretty limited.”

They might also want to take a new look at the word “pervasive.” I’m hearing it all over Silicon Valley, and without the usual pejorative overtones. Pervasive computing is both a buzzword and a new field of study within computer science. It means computers in the walls, in tables and chairs, in your clothing. Computers in the air, when engineers can figure that one out. (A group at Berkeley is working on “Smart Dust,” financed by the Defense Advanced Research Projects Agency.) Computers fading into the environment.

Computer scientists, embracing this vision, see their discipline as a new branch of social science. They look back over their shoulders at the humans in the picture, and sometimes they sound surprised. “Individuals within the space are doing things other than interacting with the computer,” declares a recent research report, “coming and going, and perhaps most strikingly, interacting with each other — not just with the computer.”

Pervasive Computing is also a new division and “strategic initiative” at IBM, spreading across several of the company’s headquarters and research laboratories. Helga has guided me past San Jose, around some hills and dales, to the astoundingly bucolic Almaden Research Center, about 700 scientists in four hypermodern buildings hidden in a field dotted with cows. I head for the User Lab, the place where they are supposedly thinking about us humans and where we fit in. The head of the User Lab is Daniel M. Russell, a lanky, mischievous-looking man with a trim white beard.

He begins by announcing, “I fundamentally don’t care about computers.” But he led computer-research groups at Apple and at Xerox, and he has computers on the desk and computers on the wall, and as his staff members wander in and out, they pretty much all have computers in their pockets, and even that skateboard in the corner happens to be a computer on wheels, so clearly there’s some kind of subtle distinction coming.

“I care about computing,” Russell says. “I care about what you can do with this thing, this magical property, this thing we’ve imbued into our devices. This lab is about computing as a medium for people, a medium of expression and a medium for work and so on.”

He has cognitive psychologists, mechanical engineers and industrial designers. He has a working machine shop. All around are bits of gadgeteer detritus: broken-up pagers and wristwatches, eyeglass frames and limbs from department-store mannequins.

“One of the pieces of what we’re doing is thinking about how can we make devices smaller and smaller and smaller,” he says. “You can imagine where all this leads, right? The obvious terminal point is you implant them, which brings up its own set of issues.” (The jokes about our bionic and cyborg future fly freely around here.) “Or you turn it into jewelry.” Left earring talks wirelessly to right earring. Pendants become annunciators; rings become pointing devices or alarms.

“I want my ring to shine red when my daughter gets home,” Russell says. “Or flash green when I have an urgent message. Or the stock price shoots above 200. So now the question becomes: Once you’ve got rings that talk to your computer and cell phones that are in your ears, how do you get them to work together? How do you dial someone?”

Cameron Miner runs the group’s design lab for working on such questions. “We’re seeing a usability gap emerge with these devices,” he says — devices constantly shrinking while adding new functions. “My eyes are not getting any better. My fingers are not getting any smaller.”

Tiny keyboards are just frustrating. Voice recognition is everyone’s dream, but understanding human speech is one of those fundamental capabilities that continue to elude machines. It’s a hard problem.

These researchers share certain articles of faith, though. One is that their world marches to the steadfast drumbeat of Moore’s Law. However tiny and however powerful this year’s devices are, next year’s will be tinier and more powerful. They can bank on it. So they’re planning ahead. They also believe that no matter how Luddite we feel, deep down we are data addicts, suckers for information. Resistance is futile. With all the stuff being thrown against our walls, some of it has to stick.
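
The drumbeat is easy to quantify. Here is the back-of-the-envelope arithmetic, assuming the often-quoted doubling of transistor density every 18 months (the precise period varies with the telling); it is a rule of thumb, not a law of physics:

    # Rough arithmetic for the drumbeat: assume transistor density doubles every 18 months.
    def moore_multiple(years, doubling_years=1.5):
        return 2 ** (years / doubling_years)

    for horizon in (1, 3, 5, 10):
        print(f"after {horizon:2d} year(s): roughly {moore_multiple(horizon):.1f}x the density")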

It may as well be a law of modern life. Once it was true of machines, as they began infiltrating the fabric of our existence, and now it is particularly true of the technologies of computing and communication. First we disdain them and despise them; then we depend on them. In between, we hardly notice a transition.

Our first wearable information appliance established itself long ago. The wristwatch industry has never been healthier, though some stalwart souls still affect not to wear such a thing. “We find that people look at their watches four dozen times a day,” Miner says. “And at no time do you realize that more than when you forget your watch. It’s not just that you don’t know what time it is; you feel all out of sorts. The rhythm of your day is all thrown off.

“So we’re thinking, What other kinds of information can you push into that peripheral channel? Contacts and schedules and things like that are good. But what about your stock-market portfolio? Or biometric information about your loved ones, so you can see how your parents are doing, just to know whether they’re having a good day or a bad day.”

We wear other devices too. We have cheerfully sported lenses in front of our eyes for several hundred years. They could be smarter. Russell has prototype eyeglasses that translate signs from Japanese into English, displaying the translation as a caption a half-inch from your retina. Now, translation is another of those hard artificial-intelligence problems, but still. “Even if the translation is terrible,” Russell says, “I don’t read any Japanese at all, so for me, this is a lot better than that.”

Pervasive computing isn’t just about gadgets to carry and wear, though. These researchers are thinking about our whole environment. They have rooms that use tiny cameras to watch people’s eyes and keep track of what we’re looking at. They are conducting studies of how we behave, and how we feel about it, when we can glance at an appliance and say, “Turn it on.” They assume that entire houses will be ensembles of hidden computers.

The head of IBM Pervasive Computing is Michel Mayer, a product of the École Supérieure d’Électricité in Paris and a company veteran. “It’s going to be more and more machines talking to machines, things talking to things, without human interaction,” he says. “We’re already there. The infrastructure, although it’s boring and more remote and in the background, is increasingly important. It’s going to be your fridge, your car, your tools, your clothes, doing all those little microelements of tasks. It’s going to be your dishwasher negotiating with your utility company over what the best rate is and when.”

The average American house already contains more than 40 computers embedded in various items. A typical electric toothbrush runs on about 3,000 lines of code. Last year alone, eight billion new microprocessors came into the world. “These are mostly brain dead right now,” Russell says. “They’re tiny, four-bit processors and so on. But you know where our world is going.”

Even here, in this bastion of cheery futurism, they don’t assume this is unalloyed good news. Science-fiction writers have been warning for years about this sort of world, painting scary pictures of a human race dependent on technology that runs amok or just breaks down.

“Well, yeah!” Russell says. “We read that stuff, too! How many times has your cell phone crashed on you? Mine crashed last night. When your house crashes, how do you restart your house?” This question doesn’t have a good answer, although with smaller gadgets we have learned to find the reset button or yank the batteries to cycle the power. And pray it won’t be necessary to phone for technical support.

With new possibilities come new anxieties. How much smarter do you want your house to be, when you still haven’t mastered setting the time on your VCR, your stove and your coffee maker? Then, if the devices learn to reset their own clocks remotely, will you trust them? One central modern fear is that as machines grow too complex to understand and repair easily, we grow helplessly reliant on them. We become their slaves. This was the main argument of Ted Kaczynski, the Unabomber, but that doesn’t mean it is completely insane.

It is certainly time to worry about privacy and personal autonomy. If your truck is GPS-equipped or your car has an electronic toll-paying tag, the network is already capable of keeping track of your whereabouts, so you may not care to implant a tracking chip under your skin. But you could. Your employer may already be testing electronic tags and badges for this purpose.

And every new channel of information is a potential intruder with a sales pitch. Maybe we have become used to advertisements next to magazine articles. Maybe we can even handle billboards in public airspace, and commercials at movie theaters where we’ve paid for seats, and telephone promotion from companies keeping us on hold. It’s going to get worse, quickly. You will soon notice lots of little screens beaming messages at you. On airplane seat backs: Improved Data Speed!!! . . . Turn Your E-Mail Into Voice Mail . . . Nasdaq /31.38 . . . Real-Time Stock Quotes . . . Was Weather Something You Planned For? Select Weather Channel. . . .

“When displays become essentially free, they’re all going to be subject to sale,” Russell says. “I have a little display in my home thermostat. Believe me, the thermostat company’s going to want to put in animated graphics, and if they can possibly sell that space to the heating-oil company, they’ll do it. So one of the questions about ubiquitous technology becomes: Who owns your attention? Who owns the right to push inside your personal environment? When you walk past a store, your cell phone could say: ‘Come in! Ten percent off!’ How do you screen that stuff? How do you anti-spam-filter your life?”


The Pervasive-Computing people are breaking the computer apart. Every function — speak, locate, photograph, read, remember — can be detached. They think of it as the constellation model of computing. Your devices form a constellation. They all talk together, and they don’t need much transmission power because they only have to cover the distance from ear to pocket, say. Displays, processors, memory and power can all be separated. It is efficient. We already carry amazing amounts of spare processing power, in our cell phones if nowhere else.

Turbulent crosscurrents here. We see the computer splitting into its constituent parts, which can float more or less freely. At the same time, we see all the different components combining and recombining. Combo digital camera and digital music players are hitting the market. A cell phone available in Hong Kong doubles as an ovulation clock and calorie counter. A Global Positioning System chip can meld with almost anything. The building blocks of electronic life are suddenly . . . building blocks, and manufacturers want to try one permutation after another. We consumers, meanwhile, exhibit signs of craving the single perfect gadget, the Swiss Army knife of digital devices. So which will it be? Free-floating specialization or all-functions-in-one?

This is one of Jeff Hawkins’s favorite problems. If any one person can be said to have invented the Palm Pilot — the hand-held computer that defined the entire product category — it is Hawkins, a loose-limbed, perennially boyish electrical engineer who carved his first prototype from a block of wood and carried it around pretending to scribble on it. He and Donna Dubinsky founded Palm Computing in 1992, sold a million Pilots in a year and a half, sold the company to US Robotics and left in 1998 to found another Silicon Valley start-up called Handspring. Handspring makes hand-held computers much like Palm Pilots, called Visors. They have instantly grabbed 25 percent of the domestic market.

“There’s a yin and yang going on here,” Hawkins says. “I think people would like to have one thing. Sometimes people come to me pleading! Please can you make it one thing. The other side of that is that today maybe we can’t make a Good One Thing. Maybe when you try to combine them, you end up with not-so-good products. The best voice recorder” — he’s gesturing at my digital recorder, which is barely bigger than a cigarette lighter — “might be compromised if you try to make it into a cell phone and a hand-held computer.”

Nonetheless, Handspring has started shipping its combo cell phone and hand-held computer, the VisorPhone. The telephone is actually a module that pops in and out of the Visor’s expansion slot, so when you are finished with a call, you can pop in a different module to turn the gadget into a camera or satellite navigator.

“It’s all temporary, in my mind,” Hawkins says. “But in the mobile-device space, my bet is on the singular device. You know about Bluetooth? Everyone says all these little things will talk to one another, and wow, it’ll be great. I don’t buy that. It’s too complex. I’m not so much into body wear and smart clothing and so on. No. You can carry something. I don’t think we’re going to see the retinal implant or the 3-D glasses or whatever. We will have beautiful color displays. Of course, it’s all going to be wireless.”

We should begin taking it for granted that we will all have high-bandwidth connections in our pockets. “What’s really interesting, and I don’t think most people have understood,” Hawkins says, “is that it will be free.”

Maybe I look dubious. Hawkins continues: “People say, ‘Well, how can it be free?’ All right, it won’t be completely free, but it will be free like local telephone service is free. Yeah, I pay my eight bucks a month, but I don’t think twice when I make a call.

“It’s costing billions of dollars to build it all out, but the incremental cost of adding a customer is very, very small. It’s all virtual; it’s just more bits going one way versus the other way. The cost for both voice and data will be virtually free in the whole wireless connected world.”

Information is convenience, and information is power — that’s a given around here. “There’s a fundamental premise here that communications and computing technology is a net benefit for people,” Hawkins says. “Not in all cases, but for the vast majority.”

This means that the most fundamental social processes are poised for transformation. Voice communication, taken as a whole, is “crude.” Voice mail is essentially “broken.” Money is up for grabs.

“The whole concept of writing checks!” Hawkins says. “This is bizarre!” He is excited now. Really, the old unwired world was so baroque. “You know, I have this book of papers, and I’ve got to order them, and I’ve got to write them in and rip them out. I’ve got to log them. I have to take out my calculator. Month to month, they send them back to me, people handling them all along the way! This is a system that’s ripe to be replaced!

“Exactly how it gets replaced, I don’t know. But I would argue that it will be a mobile device that’s wirelessly connected.”

Is this getting scary . . . at all? How much of the planet do I want connected to my checkbook at any given moment?

“Of course, it has to be secure,” Hawkins says. With a flourish, he yanks a wallet from his back pocket. “This thing isn’t very secure, either. People say: ‘Oh, I would never carry around a device that has, you know, access to my personal information. What if I lose it?’”

He puts his wallet back. “I think they’ll get over that.”


Somewhere in the same family tree, along with the hand-held computers and the mobile phones, sits a seemingly different gadget, the remote control. We take our remote controls for granted these days, but they continue to creep deeper into our lives. It can feel disconcerting — oddly passive or powerless — to watch television without a remote control in hand. Some new homes come with remote controls for dimming the lights. They lengthen our arms. The modern car key is a remote control and, for that matter, a wireless computer. You need not actually touch the car to unlock it, start the climate control or reset the alarm. We seem to grow fond of waving a wand, or thumbing one, and seeing things happen at a distance.

My own car key is so complex — three buttons, which can be pressed or “pressed and held” — that I need to keep the instructions close by. In theory, at least, I can open and close windows. I can make the seat and mirrors return to a specific position. All from across the street. This saves me the trouble of pressing a button on the seat. To get it all set up, though, required a long visit to the dealer’s service department. The car spent an hour linked to a computer, getting some sort of data transfusion. I spent the hour in the waiting room, thinking about the time savings that would accrue over the years to come, one millisecond after another. I couldn’t stop wondering whether we are like some species of deer that is going to evolve bigger and bigger antlers until we are the best antlered animals around — but we can no longer lift our heads out of the mud, so we go extinct.

Most wireless gadgetry isn’t quite ready for mass consumption. Most of it works only sporadically and only in certain places. All of it comes with hidden costs not listed on the boxes: time that the consumer must invest in reading manuals, managing batteries, coiling the supposedly nonexistent wires and generally learning new skills. At its best, browsing the internet on a Web-enabled phone feels like looking through the wrong end of a telescope. No wonder some people assert almost religiously that they will never use a cell phone or a hand-held computer or a stereoscopic 3-D optical headset with optional immersion visor.

But five years ago some of the same people felt no need for e-mail or call waiting. The clumsiness and inelegance will pass, as with all new technologies. Some people will adapt after all. Some of us will just die off and be replaced by the next cohort, young enough not to remember a world without e-mail. Hawkins laughs when I say this: “I was kinder. I said ‘generational changes.’ Yeah, I’m amazed at how quickly young people adapt.”

It is an industry truism that children most readily learn the necessary new styles and habits. They know to power-cycle gadgets that crash, and they instantly acquire the most esoteric special typing skills. “They’re mutants!” says Michel Mayer at I.B.M. “They’re cyborgs! I don’t know how they do it!” He doesn’t seem all that unhappy about it.

Another lesson of the television remote control is that no one can predict how we are going to use new technologies, much less tell us how to use them. The inventors of the remote control believed consumers would use them several times an evening — to turn the set on, to change the channel when a program ended, to turn the set off. No one imagined that we would become remote-control virtuosos, personal entertainment maestros, creators of our own nightly medleys. Even when we feel deluged and assailed by technology, even when we suspect marketers of foisting useless gizmos upon us, we tend to make our own choices. Perhaps hand-held gadgets will offer the kind of power over the rest of our experience that the remote control gives us over TV: the power to edit and jump — instant access, fluid montage, snippets and shards. For better and for worse.

Technologists tend to believe that we are actually smarter for having these gadgets, and that as they permeate the texture of modern life, we will grow smarter still. That’s a collective, grand, slightly murky we. Bernardo Huberman, scientific director of the Sand Hill Labs — a new Hewlett-Packard research center — talks about harnessing social knowledge, “studying the whole Internet ecosystem and designing novel mechanisms and institutions so that we can harvest the distributed knowledge that such a gigantic social mind is producing.”

We don’t have to become neurons in the New World brain to feel that we’re already gaining something. I have noticed that the mobile-gadget wielder develops the odd sensation of being entitled to all sorts of facts. You get in the habit of knowing things, or at least of being able to find out. It’s as if there’s a permanent mental hotline to the information specialists at the public library. Can’t quite identify Bob Dole’s running mate in 1996 or that actor up on the screen or a science-fiction story encountered 10 years ago? You get a twitchy feeling that you ought to push a button and pop up the answer.

But Huberman has more in mind than facts and trivia. His research consistently finds informal communities making better decisions than any of their members, knowing more and thinking better than experts. “We now know that society can work better than any individual,” he says. “There is this notion of a collective mind, a social mind, and today the internet allows us to tap that.” We are distributing intelligence. We are creating social organisms that carry out continuous computation.

It may be true, even if we don’t see it moment by moment. So I drive along, giving Helga another chance, wondering whether to check the e-mail that is even now tingling at my hip. If I can manage this, with one hand, I’ll see a message from “the commander and head of the Secret Unit in charge of Diamond dealing for the Revolutionary United Front (R.U.F.) of Sierra Leone.” It seems the commander would like my help in disposing of a “large quantity of diamond and about US$12,500,000.00 that is in cash with my Wife who does not have the know-how to launder this money.” I need only send along my bank information. Memo to self: add commander to anti-spam filter.

It would be fair to wonder, meanwhile, how well I’m focused on driving the car. I have learned that people with dashboard navigation systems often keep an eye on the moving map even when they know perfectly well where they’re going. Why? For entertainment? To keep from being . . . bored? We work hard to avoid being alone in our heads. Yet things happen when we are alone in our heads, and they are things we can’t live without: contemplation, reflection, focus. As consumers of wireless gadgets, we will need to insist on a Disconnect button. My BlackBerry has one, mainly for use on airplanes. Helga must have one, but I haven’t found it yet.

After a while, I also catch myself wondering whether — if I had to — I could still unfold a paper map and find my own way. Satellite navigation really works, and I could easily learn to depend on it. “Yeah, you can regret that you don’t know how to read maps anymore,” Mayer says. “But all in all, I think it is progress somehow. Mankind is hungry for new capabilities.”

So we engage in this ritual — the never-ending, reluctant dance with the invader. Technology encroaches, and we resist. Our aversion is sensible and honorable. And then, later, we give in. In this case, we connect.


© James Gleick 2001