Latest Posts

Your Whereabouts, Revealed

A couple of British software engineers have just discovered that your iPhone (if, you know, you happen to have one) keeps a permanent detailed record of your movements. Whenever you sync your phone with a computer, the record goes there, too. They’ve written some quick and dirty software to demonstrate. In a matter of seconds, you can see every place you’ve been:


This particular map isn’t me; it’s one of them. I suddenly feel a little queasy about showing everyone where I’ve been. Which is, of course, the point.

They are Alasdair Allan, an astronomer at the University of Exeter, and Pete Warden, formerly of Apple and now living in Boulder. They happened to be collaborating on some projects for visualizing location data—for example, making maps of radiation levels in Japan—when one of them stumbled across a hidden … Read More

Where Are They Now: Bell Labs

[pullquote align="right"] Claude Shannon’s managers were willing to leave him alone, even though they did not understand exactly what he was working on. AT&T at mid-century did not demand instant gratification from its research division. It allowed detours into mathematics or astrophysics with no apparent purpose.
The Information[/pullquote]

Information theory was born at Bell Labs; so was the transistor. Bell Labs scientists laid foundations for radio astronomy and the laser. When I first visited, in 1993, Arno Penzias was running the place as Chief Scientist; he was just one of the laboratory’s many winners of the Nobel Prize in Physics, for his discovery of the cosmic black-body radiation echoing across the universe from the Big Bang.

Not many corporate research labs have ever operated with such far-sighted freedom from the bottom line. Now hardly any do.

Claude Shannon did his great work in a cubbyhole in this 1900 building, the old Bell Labs headquarters in New York, the Hudson River to the west, Greenwich Village to the east. That’s the High Line running through it. The building is still there: an artists’ cooperative.

AT&T spun off most of Bell Labs into the new Lucent Technologies in 1996; now it’s a French-owned company, Alcatel-Lucent. They still boast about what they now call Alcatel-Lucent Bell Labs. But basic science, physics, and mathematics are gone. In 2008, the company issued this magnificent specimen of business-speak: “In the new innovation model, research needs to keep addressing the need of the mother company.”

A smaller chunk of the labs remains with the parent company, and AT&T Labs, too, continues to lay claim to a proud tradition. Here is a tribute page to Shannon, headlined “Juggling Genius Claude Shannon Launched the Digital Age.” (Juggling genius? Really?)

So I was particularly glad to get … Read More

Face Direction of Travel

I’m just back from a short trip to England to talk about The Information. There was a lot of tweeting.

For example, while I was speaking early one afternoon at the Royal Society for the Encouragement of Arts, Manufactures and Commerce, some in the audience were surreptitiously thumbing their little devices. Or not even surreptitiously—there was an official hashtag. One listener tweeted in real time:

James Gleick talk at the RSA “The Information”. Interesting nuggets, but I’m not really getting the big picture.

I entirely sympathize. Two days later, at the British Library, I interrupted myself and asked whether anyone was tweeting. I didn’t see any hands go up. I hope I didn’t sound confrontational about it. I could read their tweets afterward.

The BBC correspondent Nick Higham interviewed me at the Science Museum for his program, Meet the Author, and immediately tweeted as follows:

I guess not. I’m trying, though.

Information is How We Know

When Kevin Kelly interviewed me about The Information for Wired, he asked me to define the word, and I was unprepared. I did some hemming and hawing (which he mercifully omitted). I see it continues to trouble him. Others have asked me the same question, and I continue to hem and haw. You might think I would have it figured out by now.

The problem of definition runs as a minor thread throughout my book. The very idea that a word has a definition is surprisingly new—barely 400 years old. You might think it is obvious, but it is not. People managed to use words for millennia without worrying too much. John Locke felt it necessary to explain in his Essay Concerning Human Understanding:

Definition being nothing but making another understand by Words, what Idea the term defined stands for.

In the very first English dictionary, Robert Cawdrey’s Table Alphabeticall in 1604, we see that defining words is not so easy. I quote a few of my favorite Cawdrey definitions (in their entirety):

crocodile, [kind of] beast.
vapor, moisture, ayre, hote breath, or reaking.
theologie, divinitie, the science of living blessedly for ever.

Read More

Autocorrect, Unexpurgated

I mention a certain writer in an email, and the reply comes back: “Comcast McCarthy??? Phoner novelist???”

Oops. Did I really call him “Comcast”?

No. The great god Autocorrect has struck again.

It is an impish god. I try retyping the name on a different device. This time the letters reshuffle themselves into “Format McCarthy.” Welcome to the club, Format. Meet the Danish astronomer Touchpad Brahe and the Franco-American actress Natalie Portmanteau.
In past times we were responsible for our own typographical errors. Now Autocorrect has taken charge. This is no small matter. It is a step in our new evolution—the grafting of silicon into our formerly carbon-based species, in the name of collective intelligence. Or unintelligence, as the case may be.

A few months ago the police in Hall County, Ga., locked down the West Hall schools for two hours after someone received a text message saying, “gunman be at west hall today.” The texter had tried to type “gunna,” but Autocorrect had a better idea.

Who’s the boss of our fingers? Cyberspace is awash with outrage. Even if hardly anyone knows exactly how it works or where it is, Autocorrect is felt to be haunting our cell phones or watching from the cloud. Peter Sagal, the host of NPR’s “Wait Wait … Don’t Tell Me,” complains via Twitter: “Autocorrect changed ‘Fritos’ to ‘frites.’ Autocorrect is effete. Pass it on.”

Its cultural status can be judged from the websites and blogs devoted to it, from the stream of whinging on Twitter, and from the appearance of the New Yorker’s first Autocorrect cartoon. (A hotdog vendor dashes to the pitcher’s mound; the manager looks at his handheld device and says: “Oh, I see what happened. Autocorrect changed ‘southpaw’ to ‘sauerkraut.’”)
Tweets the actor and author Stephen Fry: “Just typed ‘better than hanging around the house rating bisexuals’ to a friend. Thanks, autocorrect. Meant ‘eating biscuits.’”

We are collectively peeved. People blast Autocorrect for mangling their intentions. And they blast Autocorrect for failing to unmangle them. “Why so coy, iPhone?” asks the English writer Scarlett Thomas. “I type ‘fuckung’ and you really can’t think of any suggestions? Not one?”

I try to type “geocentric” and discover that I have typed “egocentric”; is Autocorrect making a sort of cosmic joke? I want to address my tweeps (a made-up word, admittedly, but that’s what people do). No: I get “twerps.” Some pairings seem far apart in the lexicographical space. “Cuticles” becomes “citified.” “Catalogues” turns to “fatalities” and “Iditarod” to “radiator.” What is the logic?
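How far apart, exactly? Edit distance—the standard way of measuring the gap between two strings—counts the single-character insertions, deletions, and substitutions needed to turn one word into the other. A quick sketch of the measure (my own toy code, not anything a phone vendor actually runs):

```python
# Levenshtein edit distance: the minimum number of single-character
# insertions, deletions, or substitutions turning one word into another.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete from a
                            curr[j - 1] + 1,            # insert into a
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

for pair in [("cuticles", "citified"), ("catalogues", "fatalities"),
             ("Iditarod", "radiator")]:
    print(pair, edit_distance(*pair))
```

Run it on those pairings and the distances come out considerably larger than the one- or two-keystroke slips a correction algorithm usually bets on.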

The logic is hard to discern, and consistency is for hobgoblins. Sometimes “Capistrano” may become “vapid tramp”; next time maybe “campus tramp.” Kathryn Schulz, the author of Being Wrong, tweets in verse:

Super fans
sweaty fans
sweaty dreams
sweet dreams.
Autocorrect train wreck over here.

Actually, an assortment of competing algorithms is at work. Autocorrect is not a single entity but a hodgepodge, from different vendors, chief among them Apple, Google and Microsoft. All their algorithms start with the low-hanging fruit. They know what to do when you type “hte.” After that, their goals vary, and so do their capabilities.

On mobile phones, where our elephant thumbs tramp across tiny keypads, the idea is to free us from backtracking and drudgery. The iPhone’s Autocorrect loves to insert apostrophes. You can rely on it: type “dont” and get “don’t.” Type “cant” and get “can’t”—but is that what you wanted? Autocorrect is just playing the odds. Even “ill” turns to “I’ll” and “id” to “I’d” (sorry, Dr. Freud).
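Playing the odds can be caricatured in a few lines: always pick the expansion that is most common overall, never mind what this particular typist meant. The word list and frequency counts below are invented for illustration; no vendor’s actual tables are this small or this blunt:

```python
# Toy odds-playing autocorrector: given a bare token, return whichever
# known expansion is most frequent overall.  Counts are invented.
CANDIDATES = {
    "dont": {"don't": 1_000_000},                  # unambiguous
    "cant": {"can't": 900_000, "cant": 8_000},     # "cant" is a real word
    "ill":  {"I'll": 700_000, "ill": 90_000},
    "id":   {"I'd": 500_000, "id": 20_000},        # sorry, Dr. Freud
}

def autocorrect(token: str) -> str:
    """Return the highest-frequency candidate, or the token unchanged."""
    options = CANDIDATES.get(token.lower())
    if not options:
        return token
    return max(options, key=options.get)

print(autocorrect("dont"))  # don't
print(autocorrect("ill"))   # I'll -- even if you really were ill
```

The losing senses—the beggar’s cant, the sickbed’s ill—simply never get typed, as far as this model is concerned.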

When Autocorrect can reach out from the local device or computer to the cloud, the algorithms get much, much smarter. I consulted Mark Paskin, a longtime software engineer on Google’s search team. Where a mobile phone can check typing against a modest dictionary of words and corrections, Google uses no dictionary at all.

“A dictionary can be more of a liability than you might expect,” Paskin says. “Dictionaries have a lot of trouble keeping up with the real world, right?” Instead Google has access to a decent subset of all the words people type—“a constantly evolving list of words and phrases,” he says; “the parlance of our times.”

If you type “kofee” into a search box, Google would like to save a few milliseconds by guessing whether you’ve misspelled the caffeinated beverage or the former Secretary General. It uses a probabilistic algorithm with roots in work done at AT&T Bell Labs in the early 1990s. The probabilities are based on a “noisy channel” model, a fundamental concept of information theory. The model envisions a message source—an idealized user with clear intentions—passing through a noisy channel that introduces typos by omitting letters, reversing letters, or inserting letters …

“We’re trying to find the most likely intended word given the word that we see,” Paskin says. “Coffee” is a fairly common word, so with its vast corpus of text the algorithm can assign it a far higher probability than “Kofi.” On the other hand, the data show that spelling “coffee” with a K is a relatively low-probability error. The algorithm combines these probabilities. It also learns from experience and gathers further clues from the context.
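The procedure Paskin describes is essentially the classic noisy-channel corrector: propose candidates within an edit or two of what was typed, then pick the one with the highest probability. A toy sketch of the idea, with a three-word “corpus” standing in for Google’s, and a crudely uniform error model so that the prior alone decides (this is emphatically not Google’s code):

```python
from collections import Counter

# Toy noisy-channel spelling correction: choose the candidate that
# maximizes P(candidate) * P(typo | candidate).  The corpus counts
# below are invented; Google's corpus is, more or less, the web.
corpus = Counter({"coffee": 5_000_000, "kofi": 40_000, "covfefe": 12})

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def one_edit_away(word):
    """All strings one omission, reversal, substitution, or insertion away."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    transposes = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
    replaces = {a + c + b[1:] for a, b in splits if b for c in LETTERS}
    inserts = {a + c + b for a, b in splits for c in LETTERS}
    return deletes | transposes | replaces | inserts

def correct(typed):
    # Candidates: the word itself if known, else known words one edit
    # away, else two edits away, else give up and return the input.
    candidates = ({typed} & corpus.keys()) \
        or (one_edit_away(typed) & corpus.keys()) \
        or {w for e in one_edit_away(typed) for w in one_edit_away(e)} & corpus.keys() \
        or {typed}
    # Uniform error model: every candidate's P(typo | word) treated as
    # equal, so the prior P(word) decides -- the crudest form of the idea.
    return max(candidates, key=lambda w: corpus[w])

print(correct("kofee"))  # coffee
```

With the counts above, “coffee” swamps “kofi,” just as Paskin says; a real system would also weight the error model, so that a low-probability slip like spelling coffee with a K counts against a candidate.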

The same probabilistic model is powering advances in translation and speech recognition, comparable problems in artificial intelligence. In a way, to achieve anything like perfection in one of these areas would mean solving them all; it would require a complete model of human language.

But perfection will surely be impossible. We’re individuals. We’re fickle; we make up words and acronyms on the fly, and sometimes we scarcely even know what we’re trying to say.

One more thing to worry about: the better Autocorrect gets, the more we will come to rely on it. It’s happening already. People who yesterday unlearned arithmetic will soon forget how to spell. One by one we are outsourcing our mental functions to the global prosthetic brain.

I can live with that. We do it with memory, we do it with navigation; what the he’ll, let’s do it with spelling.

First published—slightly shorter—in the New York Times, August 4, 2012

The Google Books Settlement, R.I.P.

Many people, including some I greatly respect, are gleeful about the demise of the arduously worked out settlement of the lawsuits brought by the Authors Guild and book publishers against Google. Not me.

It certainly wasn’t perfect. It involved some messy compromises, as settlements tend to do. It couldn’t satisfy everyone.

In creating a vast and widely accessible digital library, bringing back to life many forgotten books, it seemed to give Google, a private corporation, too much power over what, in an ideal world, should be a public resource. (“Public” most emphatically not being a synonym for “free.”)

So now what? I fear that many people underestimate the difficulties that lie ahead. The New York Times editorial page does, and it botches the law by saying, “Google’s loss means that, for now, its search results will show only snippets of text from books that are under copyright but out of print.”

Quite the contrary. Judge Denny Chin stated clearly that Google was not entitled to copy these books onto its servers in the first place: “Google engaged in wholesale, blatant copying, without first obtaining copyright permissions.” The settlement would have authorized Google’s storage and search of the books. That is no longer permitted.

It’s going to be hard to find a way of letting Google keep its illicitly obtained copies and fairly compensate copyright holders, because, for one thing, there are so many of them.

We’re back to a messy real world now. Perhaps the stars are finally aligned for Congress to create a National Digital Library, assembling and preserving all these books, making them searchable, and sharing them with readers in a way that fairly compensates the rightsholders. This Congress seems pretty dysfunctional, but who knows? The settlement, now defunct, at least provides a well thought-out framework for how it might be done—with or without Google.

Ta! Ra! Ra! Boom De Yay!

Poetry or doggerel? Oh, who cares. John Horgan has unearthed and now presents some verse written by Claude Shannon in 1981, at the height of the Rubik’s Cube craze. Shannon was, of course, the creator of what is now called information theory; he is the central figure in my new book, where I mention that he liked game-playing and never lost his childlike sense of fun.

Case in point: “A Rubric on Rubik Cubics.” Shannon includes footnotes, in the spirit, he says, of T. S. Eliot’s “The Waste Land.” He advises, “this may be either read as a poem or sung to ‘Ta! Ra! Ra! Boom De Yay!’ with an eight-bar chorus.” One of the verses turns (and this, too, is entirely characteristic) to the subject of human vs. machine intelligence:

The issue’s joined in steely grip:
Man’s mind against computer chip.
With theorems wrought by Conway’s eight
‘Gainst programs writ by Thistlethwait.
Can multibillion-neuron brains
Beat multimegabit machines?
The thrust of this theistic schism—
To ferret out God’s algorism!

For the whole poem, with footnotes, back story, and entertaining commentary, see Horgan’s Scientific American blog. (Horgan, coincidentally, reviewed The Information for the Wall Street Journal.)

Now Chaos Is “Enhanced”

“Enhanced” is the word of the day for e-books. It strikes fear into the hearts of some authors, and maybe some readers, too. There is the question of hyperlinks. Let’s say my book begins this way:

The police in the small town of Los Alamos, New Mexico, worried briefly in 1974 …

One doesn’t want the reader yanked away to a page listing the Great Luxury Hotels of Los Alamos. Or to any page. One wants the reader to get sucked into the book, there to remain.

Yet e-books have new possibilities, and authors are beginning to explore them. The very creative people at Open Road Media have now published two of my books, Chaos and Genius, in electronic form, for all devices.

The enhanced Chaos gave us a chance to illustrate some of the ideas and the science in ways that break through the limitations of the printed page. Strange attractors are not, after all, static two-dimensional objects; with videos and applets, we can present them as they were meant to be seen all along. We can fly around phase space and zoom into fractals. The Koch snowflake and the Sierpinski gasket come to life.
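The Koch snowflake, for instance, is pure recursion: replace every segment of a triangle with four segments a third as long, the middle third bumped outward. A sketch of that construction for illustration only (it has nothing to do with Open Road’s actual implementation):

```python
import math

# Koch snowflake: start from a triangle; at each step, replace every
# segment with four segments one third as long, middle bumped outward.
def koch_step(points):
    new = []
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
        a = (x1 + dx, y1 + dy)            # one third along the segment
        c = (x1 + 2 * dx, y1 + 2 * dy)    # two thirds along
        # Rotate the middle third by -60 degrees to form the outward bump.
        bx = a[0] + dx * math.cos(-math.pi / 3) - dy * math.sin(-math.pi / 3)
        by = a[1] + dx * math.sin(-math.pi / 3) + dy * math.cos(-math.pi / 3)
        new.extend([(x1, y1), a, (bx, by), c])
    return new

triangle = [(0, 0), (1, 0), (0.5, math.sin(math.pi / 3))]
curve = triangle
for _ in range(4):
    curve = koch_step(curve)
print(len(curve))  # 3 * 4**4 = 768 vertices
```

Each pass quadruples the number of segments while the perimeter grows by a third—which is exactly why the figure wants animation rather than a static page.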

The publisher made a serious investment, sending film crews to interview me and several of the book’s … Read More