Latest Posts

Secret No More: Google and Power

Just last month, in an essay for the New York Review, I wrote the following sentence about Google and secrecy:

None of these books can tell you how many search queries Google fields, how much electricity it consumes, how much storage capacity it owns, how many streets it has photographed, how much e-mail it stores; nor can you Google the answers, because Google values its privacy.

As of today, that’s out of date. Google has decided to reveal the answer to two of those questions. James Glanz reports in the New York Times that the company’s data centers worldwide consume just under 260 million watts of electricity and field something over a billion searches a day.

This works out (Google says) to about three-tenths of a watt-hour per search. Google had given out that per-search figure before, in hopes of quieting people who wildly estimated that a single search consumes as much energy as bringing half a kettle of water (however much that is) to boil, or running a 100-watt light bulb for an hour.

Now, 0.3 watt-hours isn’t nothing, but it isn’t much. It sounds worse in joules: about a thousand. You yourself, if you are doing nothing more strenuous than reading this item, dissipate that much energy every twenty seconds or so. Google points out (with considerable justice, in my opinion) that any one search has the potential to save vast amounts of energy—a gasoline-powered trip to the library, for example.
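
(For anyone who wants to check the arithmetic, here is a minimal sketch of the conversion; the sixty-watt figure for a seated reader is my own round-number assumption, not Google’s.)

```python
# Back-of-the-envelope check of Google's per-search energy figure.
ENERGY_PER_SEARCH_WH = 0.3      # watt-hours per search, the figure Google cites
JOULES_PER_WATT_HOUR = 3600.0   # 1 watt-hour = 3600 joules

energy_per_search_j = ENERGY_PER_SEARCH_WH * JOULES_PER_WATT_HOUR
print(f"one search: about {energy_per_search_j:.0f} joules")  # ~1080, "about a thousand"

# Assume a person at rest gives off heat at roughly 60 watts (60 joules per second).
RESTING_HEAT_W = 60.0
seconds_to_match = energy_per_search_j / RESTING_HEAT_W
print(f"a seated reader radiates that much in about {seconds_to_match:.0f} seconds")  # ~18
```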

It may feel as though there’s something apples-and-orangish about all this. Energy and information. I hear an echo of something I noted in The Information: that at the dawn of the computer era, in 1949, John von Neumann came up with an estimate for the minimal amount of heat that must be dissipated “per elementary act of information, that is per elementary decision of a two-way alternative and per elementary transmittal of one unit of information.” It was a tiny number; he wrote it as kT ln 2 joules per bit.
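
(To see just how tiny, plug in numbers; the room-temperature value of 300 kelvin is my choice for illustration.)

```latex
kT\ln 2 \;\approx\; (1.38\times10^{-23}\ \mathrm{J/K})\,(300\ \mathrm{K})\,(0.693) \;\approx\; 2.9\times10^{-21}\ \text{joules per bit}
```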

Oh, and by the way, von Neumann was wrong; Charles H. Bennett and Rolf Landauer have explained why. But energy and information are tightly bound. Of that, at least, there is no doubt.

 

Twitter Postscript: Earthquake!

Sitting at one’s desk in New York, one feels a tremor. Dreaming? Naturally one turns to cyberspace.

The U.S. Geological Survey is reporting an earthquake just moments ago, but it’s in Virginia. That’s 300 miles from here—impossible. Or is it?

The real-time seismograph from the Lamont-Doherty observatory is not responding. That in itself seems like a sign.

Then there’s Twitter. Sure enough! Markos says it’s 5.8 in the DC area. Aaron Stewart-Ahn says he felt it in Brooklyn. Irfon-Kim Ahmad says he felt it in Toronto. Colson Whitehead is right in there.

Several of my followers respond to a query within minutes, including Ismet Berkan, in Turkish: “sen o kadar bilim kitabi yaz, sonra da bunu sor” (“you write all those science books, and then you ask this”). Andy Borowitz reassures his followers that Justin Bieber is unharmed. And Maria Popova sums up: “Yep. We’ve just had an earthquake. And tweets about it travel faster than seismic waves.”

 

Why Am I on Twitter?

I am not “on” Twitter—what a loathsome expression. Now and then I may be on time or on my way or on a roll or on the phone; I am fortunately not on crack or on the dole or on the rag or on the wagon. But I am not on Twitter (or Facebook or the internet).

I do, however, use Twitter. Occasionally I dispatch tweets of my own, but mostly I just listen. I follow a small number of people. (Very small: less than .000001 percent of the people available to be followed. That’s an important fact about Twitter. No one can sample more than the minutest fraction; everyone is taking droplets of the ocean.)

Last night, for a few excited minutes, I was reminded of why. Something important was happening far away, and I was able to check in, not on the reality, not on the facts, but on my tiny chosen slice of the global consciousness.

Tweets are not facts; they are not news. They are not to be trusted.

The real news will come more slowly, from brave and talented reporters working for the few great news organizations still able to afford them—such as Kareem Fahim (in Tripoli yesterday) and David D. Kirkpatrick (in Zintan) for the New York Times. Yet, considering what passes for news on cable TV these days, it’s not totally silly to speak of getting one’s news from Twitter.

Some of the people I follow (my followees? my leaders?) are friends and acquaintances; some are just people I admire. At least two are imposters: one (Samuel Pepys) entirely faithful to the original; the other, not so much.

The last time I relied this much on my Twitter feed was when the Murdochs père et fils were testifying before Parliament. On such occasions one feels connected to others who care.

How Google Dominates Us (2011)

Tweets Alain de Botton, philosopher, author, and now online aphorist:

The logical conclusion of our relationship to computers: expectantly to type “what is the meaning of my life” into Google.

You can do this, of course. Type “what is th” and faster than you can find the e, Google is sending choices back at you: what is the cloud? what is the mean? what is the american dream? what is the illuminati? Google is trying to read your mind. Only it’s not your mind. It’s the World Brain. And whatever that is, we know that a twelve-year-old company based in Mountain View, California, is wired into it like no one else.

Google is where we go for answers. People used to go elsewhere or, more likely, stagger along not knowing. Nowadays you can’t have a long dinner-table argument about who won the Oscar for that Neil Simon movie where she plays an actress who doesn’t win an Oscar; at any moment someone will pull out a pocket device and Google it. If you need the art-history meaning of “picturesque,” you could find it in The Book of Answers, compiled two decades ago by the New York Public Library’s reference desk, but you won’t. Part of Google’s mission is to make the books of answers redundant (and the reference librarians, too). “A hamadryad is a wood-nymph, also a poisonous snake in India, and an Abyssinian baboon,” says the narrator of John Banville’s 2009 novel, The Infinities. “It takes a god to know a thing like that.” Not anymore.

The business of finding facts has been an important gear in the workings of human knowledge, and the technology has just been upgraded from rubber band to nuclear reactor. No wonder there’s some confusion about Google’s exact role in that—along with increasing fear about its power and its intentions.

Most of the time Google does not actually have the answers. When people say, “I looked it up on Google,” they are committing a solecism. When they try to erase their embarrassing personal histories “on Google,” they are barking up the wrong tree. It is seldom right to say that anything is true “according to Google.” Google is the oracle of redirection. Go there for “hamadryad,” and it points you to Wikipedia. Or the Free Online Dictionary. Or the Official Hamadryad Web Site (it’s a rock band, too, wouldn’t you know). Google defines its mission as “to organize the world’s information,” not to possess it or accumulate it. Then again, a substantial portion of the world’s printed books have now been copied onto the company’s servers, where they share space with millions of hours of video and detailed multilevel imagery of the entire globe, from satellites and from its squadrons of roving street-level cameras. Not to mention the great and growing trove of information Google possesses regarding the interests and behavior of, approximately, everyone.

When I say Google “possesses” all this information, that’s not the same as owning it. What it means to own information is very much in flux.

In barely a decade Google has made itself a global brand bigger than Coca-Cola or GE; it has created more wealth faster than any company in history; it dominates the information economy. How did that happen? It happened more or less in plain sight. Google has many secrets but the main ingredients of its success have not been secret at all, and the business story has already provided grist for dozens of books. Steven Levy’s new account, In the Plex, is the most authoritative to date and in many ways the most entertaining. Levy has covered personal computing for almost thirty years, for Newsweek and Wired and in six previous books, and has visited Google’s headquarters periodically since 1999, talking with its founders, Larry Page and Sergey Brin, and, as much as has been possible for a journalist, observing the company from the inside. He has been able to record some provocative, if slightly self-conscious, conversations like this one in 2004 about their hopes for Google:

“It will be included in people’s brains,” said Page. “When you think about something and don’t really know much about it, you will automatically get information.”

“That’s true,” said Brin. “Ultimately I view Google as a way to augment your brain with the knowledge of the world. Right now you go into your computer and type a phrase, but you can imagine that it could be easier in the future, that you can have just devices you talk into, or you can have computers that pay attention to what’s going on around them….”

…Page said, “Eventually you’ll have the implant, where if you think about a fact, it will just tell you the answer.”

In 2004, Google was still a private company, five years old, already worth $25 billion, and handling about 85 percent of Internet searches. Its single greatest innovation was the algorithm called PageRank, developed by Page and Brin when they were Stanford graduate students running their research project from a computer in a dorm room. The problem was that most Internet searches produced useless lists of low-quality results. The solution was a simple idea: to harvest the implicit knowledge already embodied in the architecture of the World Wide Web, organically evolving.

The essence of the Web is the linking of individual “pages” on websites, one to another. Every link represents a recommendation—a vote of interest, if not quality. So the algorithm assigns every page a rank, depending on how many other pages link to it. Furthermore, all links are not valued equally. A recommendation is worth more when it comes from a page that has a high rank itself. The math isn’t trivial—PageRank is a probability distribution, and the calculation is recursive, each page’s rank depending on the ranks of pages that depend…and so on. Page and Brin patented PageRank and published the details even before starting the company they called Google.
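
(For the curious, a toy version of that recursion fits in a few lines. This is only a sketch of the idea, not Google’s production code; the 0.85 damping factor comes from Page and Brin’s published paper, and the four-page web is invented for illustration.)

```python
# Toy PageRank: rank flows along links, and each page's rank depends
# recursively on the ranks of the pages that link to it.
links = {            # an invented four-page web; every link is a "vote"
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
damping = 0.85       # chance of following a link rather than jumping at random
rank = {p: 1.0 / len(pages) for p in pages}   # start from a uniform distribution

for _ in range(50):  # iterate until the distribution settles down
    new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share        # a vote weighted by the voter's own rank
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # "C", with the most in-links, comes out on top
```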

Most people have already forgotten how dark and unsignposted the internet once was. A user in 1996, when the Web comprised hundreds of thousands of “sites” with millions of “pages,” did not expect to be able to search for “Olympics” and automatically find the official site of the Atlanta games. That was too hard a problem. And what was a search supposed to produce for a word like “university”? AltaVista, then the leading search engine, offered up a seemingly unordered list of academic institutions, topped by the Oregon Center for Optics.

Levy recounts a conversation between Page and an AltaVista engineer, who explained that the scoring system would rank a page higher if “university” appeared multiple times in the headline. AltaVista seemed untroubled that the Oregon center did not qualify as a major university. A conventional way to rank universities would be to consult experts and assess measures of quality: graduation rates, retention rates, test scores. The Google approach was to trust the Web and its numerous links, for better and for worse.

PageRank is one of those ideas that seem obvious after the fact. But the business of Internet search, young as it was, had fallen into some rigid orthodoxies. The main task of a search engine seemed to be the compiling of an index. People naturally thought of existing technologies for organizing the world’s information, and these were found in encyclopedias and dictionaries. They could see that alphabetical order was about to become less important, but they were slow to appreciate how dynamic and ungraspable their target, the internet, really was. Even after Page and Brin flipped on the light switch, most companies continued to wear blindfolds.

The internet had entered its first explosive phase, boom and then bust for many ambitious startups, and one thing everyone knew was that the way to make money was to attract and retain users. The buzzword was “portal”—the user’s point of entry, like Excite, Go.com, and Yahoo—and portals could not make money by rushing customers into the rest of the internet. “Stickiness,” as Levy says, “was the most desired metric in websites at the time.” Portals did not want their search functions to be too good. That sounds stupid, but then again how did Google intend to make money when it charged users nothing? Its user interface at first was plain, minimalist, and emphatically free of advertising—nothing but a box for the user to type a query, followed by two buttons, one to produce a list of results and one with the famously brash tag “I’m feeling lucky.”

The Google founders, Larry and Sergey, did everything their own way. Even in the unbuttoned culture of Silicon Valley they stood out from the start as originals, “Montessori kids” (per Levy), unconcerned with standards and proprieties, favoring big red gym balls over office chairs, deprecating organization charts and formal titles, showing up for business meetings in roller-blade gear. It is clear from all these books that they believed their own hype; they believed with moral fervor in the primacy and power of information. (Sergey and Larry did not invent the company’s famous motto—“Don’t be evil”—but they embraced it, and now they may as well own it.)

As they saw it from the first, their mission encompassed not just the internet but all the world’s books and images, too. When Google created a free e-mail service—Gmail—its competitors were Microsoft, which offered users two megabytes of storage of their past and current e-mail, and Yahoo, which offered four megabytes. Google could have trumped that with six or eight; instead it provided 1,000—a gigabyte. It doubled that a year later and promised “to keep giving people more space forever.”

They have been relentless in driving computer science forward. Google Translate has achieved more in machine translation than the rest of the world’s artificial intelligence experts combined. Google’s new mind-reading type-ahead feature, Google Instant, has “to date” (boasts the 2010 annual report) “saved our users over 100 billion keystrokes and counting.” (If you are seeking information about the Gobi Desert, for example, you receive results well before you type the word “desert.”)

Somewhere along the line they gave people the impression that they didn’t care for advertising—that they scarcely had a business plan at all. In fact it’s clear that advertising was fundamental to their plan all along. They did scorn conventional marketing, however; their attitude seemed to be that Google would market itself. As, indeed, it did. Google was a verb and a meme. “The media seized on Google as a marker of a new form of behavior,” writes Levy.

Endless articles rhapsodized about how people would Google their blind dates to get an advance dossier or how they would type in ingredients on hand to Google a recipe or use a telephone number to Google a reverse lookup. Columnists shared their self-deprecating tales of Googling themselves…. A contestant on the TV show Who Wants to Be a Millionaire? arranged with his brother to tap Google during the Phone-a-Friend lifeline….And a fifty-two-year-old man suffering chest pains Googled “heart attack symptoms” and confirmed that he was suffering a coronary thrombosis.

Google’s first marketing hire lasted a matter of months in 1999; his experience included Miller Beer and Tropicana and his proposal involved focus groups and television commercials. When Doug Edwards interviewed for a job as marketing manager later that year, he understood that the key word was “viral.” Edwards lasted quite a bit longer, and now he’s the first Google insider to have published his memoir of the experience. He was, as he says proudly in his subtitle to I’m Feeling Lucky, Google employee number 59. He provides two other indicators of how early that was: so early that he nabbed the e-mail address doug@google.com; and so early that Google’s entire server hardware lived in a rented “cage.”

Less than six hundred square feet, it felt like a shotgun shack blighting a neighborhood of gated mansions. Every square inch was crammed with racks bristling with stripped-down CPUs [central processing units]. There were twenty-one racks and more than fifteen hundred machines, each sprouting cables like Play-Doh pushed through a spaghetti press. Where other cages were right-angled and inorganic, Google’s swarmed with life, a giant termite mound dense with frenetic activity and intersecting curves.

Levy got a glimpse of Google’s data storage a bit later and remarked, “If you could imagine a male college freshman made of gigabytes, this would be his dorm.”

Not anymore. Google owns and operates a constellation of giant server farms spread around the globe—huge windowless structures, resembling aircraft hangars or power plants, some with cooling towers. The server farms stockpile the exabytes of information and operate an array of staggeringly clever technology. This is Google’s share of the cloud (that notional place where our data live) and it is the lion’s share.

How thoroughly and how radically Google has already transformed the information economy has not been well understood. The merchandise of the information economy is not information; it is attention. These commodities have an inverse relationship. When information is cheap, attention becomes expensive. Attention is what we, the users, give to Google, and our attention is what Google sells—concentrated, focused, and crystallized.

Google’s business is not search but advertising. More than 96 percent of its $29 billion in revenue last year came directly from advertising, and most of the rest came from advertising-related services. Google makes more from advertising than all the nation’s newspapers combined. It’s worth understanding precisely how this works. Levy chronicles the development of the advertising engine: a “fantastic achievement in building a money machine from the virtual smoke and mirrors of the internet.” In The Googlization of Everything (and Why We Should Worry), a book that can be read as a sober and admonitory companion, Siva Vaidhyanathan, a media scholar at the University of Virginia, puts it this way: “We are not Google’s customers: we are its product. We—our fancies, fetishes, predilections, and preferences—are what Google sells to advertisers.”

The evolution of this unparalleled money machine piled one brilliant innovation atop another, in fast sequence:

1. Early in 2000, Google sold “premium sponsored links”: simple text ads assigned to particular search terms. A purveyor of golf balls could have its ad shown to everyone who searched for “golf” or, even better, “golf balls.” Other search engines were already doing this. Following tradition, they charged according to how many people saw each ad. Salespeople sold the ads to big accounts, one by one.

2. Late that year, engineers devised an automated self-service system, dubbed AdWords. The opening pitch went, “Have a credit card and 5 minutes? Get your ad on Google today,” and suddenly thousands of small businesses were buying their first Internet ads.

3. From a short-lived startup called GoTo (renamed Overture and bought by Yahoo in 2003) came two new ideas. One was to charge per click rather than per view. People who click on an ad for golf balls are more likely to buy them than those who simply see an ad on Google’s website. The other idea was to let advertisers bid for keywords—such as “golf ball”—against one another in fast online auctions. Pay-per-click auctions opened a cash spigot. A click meant a successful ad, and some advertisers were willing to pay more for that than a human salesperson could have known. Plaintiffs’ lawyers seeking clients would bid as much as fifty dollars for a single click on the keyword “mesothelioma”—the rare form of cancer caused by asbestos.

4. Google—monitoring its users’ behavior so systematically—had instant knowledge of which ads were succeeding and which were not. It could view “click-through rates” as a measure of ad quality. And in determining the winners of auctions, it began to consider not just the money offered but the appeal of the ad: an effective ad, getting lots of clicks, would get better placement.

Now Google had a system of profitable cycles in place, positive feedback pushing advertisers to make more effective ads and giving them data to help them do it and giving users more satisfaction in clicking on ads, while punishing noise and spam. “The system enforced Google’s insistence that advertising shouldn’t be a transaction between publisher and advertiser but a three-way relationship that also included the user,” writes Levy. Hardly an equal relationship, however. Vaidhyanathan sees it as exploitative: “The Googlization of everything entails the harvesting, copying, aggregating, and ranking of information about and contributions made by each of us.”
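
(A crude sketch of the auction logic in points 3 and 4, for the curious: advertisers bid per click, and the ranking weighs each bid by how often users actually click. The numbers and the simple bid-times-click-rate scoring are my illustration, not Google’s actual formula.)

```python
# Toy keyword auction: advertisers bid per click, and the ranking weighs
# each bid by the ad's observed click-through rate.
ads = [
    {"advertiser": "Acme Golf",      "bid_per_click": 0.40, "click_rate": 0.050},
    {"advertiser": "Birdie Balls",   "bid_per_click": 0.90, "click_rate": 0.015},
    {"advertiser": "Par-3 Pro Shop", "bid_per_click": 0.25, "click_rate": 0.090},
]

def score(ad):
    # Expected revenue per impression: a well-liked ad that earns clicks
    # can outrank a higher bid that nobody clicks on.
    return ad["bid_per_click"] * ad["click_rate"]

for slot, ad in enumerate(sorted(ads, key=score, reverse=True), start=1):
    print(slot, ad["advertiser"], f"expected {score(ad):.4f} per impression")
```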

By 2003, AdWords Select was serving hundreds of thousands of advertisers and making so much money that Google was deliberately hiding its success from the press and from competitors. But it was only a launching pad for the next brilliancy.

5. So far, ads were appearing on Google’s search pages, discreet in size, clearly marked, at the top or down the right side. Now the company expanded its platform outward. The aim was to develop a form of artificial intelligence that could analyze chunks of text—websites, blogs, e-mail, books—and match them with keywords. With two billion Web pages already in its index and with its close tracking of user behavior, Google had exactly the information needed to tackle this problem. Given a website (or a blog or an e-mail), it could predict which advertisements would be effective.

This was, in the jargon, “content-targeted advertising.” Google called its program AdSense. For anyone hoping to—in the jargon—“monetize” their content, it was the Holy Grail. The biggest digital publishers, such as The New York Times, quickly signed up for AdSense, letting Google handle growing portions of their advertising business. And so did the smallest publishers, by the millions—so grew the “long tail” of possible advertisers, down to individual bloggers. They signed up because the ads were so powerfully, measurably productive. “Google conquered the advertising world with nothing more than applied mathematics,” wrote Chris Anderson, the editor of Wired. “It didn’t pretend to know anything about the culture and conventions of advertising—it just assumed that better data, with better analytical tools, would win the day. And Google was right.” Newspapers and other traditional media have complained from time to time about the arrogation of their content, but it is by absorbing the world’s advertising that Google has become their most destructive competitor.

Like all forms of artificial intelligence, targeted advertising has hits and misses. Levy cites a classic miss: a gory New York Post story about a body dismembered and stuffed in a garbage bag, accompanied on the Post website by a Google ad for plastic bags. Nonetheless, anyone could now add a few lines of code to their website, automatically display Google ads, and start cashing monthly checks, however small. Vast tracts of the Web that had been free of advertising now became Google partners. Today Google’s ad canvas is not just the search page but the entire Web, and beyond that, great volumes of e-mail and, potentially, all the world’s books.

Search and advertising thus become the matched edges of a sharp sword. The perfect search engine, as Sergey and Larry imagine it, reads your mind and produces the answer you want. The perfect advertising engine does the same: it shows you the ads you want. Anything else wastes your attention, the advertiser’s money, and the world’s bandwidth. The dream is virtuous advertising, matching up buyers and sellers to the benefit of all. But virtuous advertising in this sense is a contradiction in terms. The advertiser is paying for a slice of our limited attention; our minds would otherwise be elsewhere. If our interests and the advertisers’ were perfectly aligned, they would not need to pay. There is no information utopia. Google users are parties to a complex transaction, and if there is one lesson to be drawn from all these books it is that we are not always witting parties.

Seeing ads next to your e-mail (if you use Google’s free e-mail service) can provide reminders, sometimes startling, of how much the company knows about your inner self. Even without your e-mail, your search history reveals plenty—as Levy says, “your health problems, your commercial interests, your hobbies, and your dreams.” Your response to advertising reveals even more, and with its advertising programs Google began tracking the behavior of individual users from one Internet site to the next. They observe our every click (where they can) and they measure in milliseconds how long it takes us to decide. If they didn’t, their results wouldn’t be so uncannily effective. They have no rival in the depth and breadth of their data mining. They make statistical models for everything they know, connecting the small scales with the large, from queries and clicks to trends in fashion and season, climate and disease.

It’s for your own good—that is Google’s cherished belief. If we want the best possible search results, and if we want advertisements suited to our needs and desires, we must let them into our souls.

The Google corporate motto is “Don’t be evil.” Simple as that is, it requires parsing.

It was first put forward in 2001 by an engineer, Paul Buchheit, at a jawboning session about corporate values. “People laughed,” he recalled. “But I said, ‘No, really.’” (At that time the booming tech world had its elephant-in-the-room, and many Googlers understood “Don’t be evil” explicitly to mean “Don’t be like Microsoft”; i.e., don’t be a ruthless, take-no-prisoners monopolist.)

Often it is misquoted in stronger form: “Do no evil.” That would be a harder standard to meet.

Now they’re mocked for it, but the Googlers were surely sincere. They believed a corporation should behave ethically, like a person. They brainstormed about their values. Taken at face value, “Don’t be evil” has a finer ring than some of the other contenders: “Google will strive to honor all its commitments” or “Play hard but keep the puck down.”

“Don’t be evil” does not have to mean transparency. None of these books can tell you how many search queries Google fields, how much electricity it consumes, how much storage capacity it owns, how many streets it has photographed, how much e-mail it stores; nor can you Google the answers, because Google values its privacy.

It does not have to mean “Obey all the laws.” When Google embarked on its program to digitize copyrighted books and copy them onto its servers, it did so in stealth, deceiving publishers with whom it was developing business relationships. Google knew that the copying bordered on illegal. It considered its intentions honorable and the law outmoded. “I think we knew that there would be a lot of interesting issues,” Levy quotes Page as saying, “and the way the laws are structured isn’t really sensible.”

Who, then, judges what is evil? “Evil is what Sergey says is evil,” explained Eric Schmidt, the chief executive officer, in 2002.

As for Sergey: “I feel like I shouldn’t impose my beliefs on the world. It’s a bad technology practice.” But the founders seem sure enough of their own righteousness. (“‘Bastards!’ Larry would exclaim when a blogger raised concerns about user privacy,” recalls Edwards. “‘Bastards!’ they would say about the press, the politicians, or the befuddled users who couldn’t grasp the obvious superiority of the technology behind Google’s products.”)

Google did some evil in China. It collaborated in censorship. Beginning in 2004, it arranged to tweak and twist its algorithms and filter its results so that the native-language Google.cn would omit results unwelcome to the government. In the most notorious example, “Tiananmen Square” would produce sightseeing guides but not history lessons. Google figured out what to censor by checking China’s approved search engine, Baidu, and by accepting the government’s supplementary guidance.

Yet it is also true that Google pushed back against the government as much as any other American company. When results were blocked, Google insisted on alerting users with a notice at the bottom of the search page. On balance Google clearly believed (and I think it was right, despite the obvious self-interest) that its presence benefited the people of China by increasing information flow and making clear the violation of transparency. The adventure took a sharp turn in January 2010, after organized hackers, perhaps with government involvement, breached Google’s servers and got access to the e-mail accounts of human rights activists. The company shut down Google.cn and now serves China only from Hong Kong—with results censored not by Google but by the government’s own ongoing filters.

So is Google evil? The question is out there now; it nags, even as we blithely rely on the company for answers—which now also means maps, translations, street views, calendars, video, financial data, and pointers to goods and services. The strong version of the case against Google is laid out starkly in Search & Destroy, by a self-described “Google critic” named Scott Cleland. He wields a blunt club; the book might as well have been titled Google: Threat or Menace?! “There is evidence that Google is not all puppy dogs and rainbows,” he writes.

Google’s corporate mascot is a replica of a Tyrannosaurus Rex skeleton on display outside the corporate headquarters. With its powerful jaws and teeth, T-Rex was a terrifying predator. And check out the B-52 bomber chair in Google Chairman Eric Schmidt’s office. The B-52 was a long range bomber designed to deliver nuclear weapons.

Levy is more measured: “Google professed a sense of moral purity…but it seemed to have a blind spot regarding the consequences of its own technology on privacy and property rights.” On all the evidence Google’s founders began with an unusually ethical vision for their unusual company. They believe in information—“universally accessible”—as a force for good in and of itself. They have created and led teams of technologists responsible for a golden decade of genuine innovation. They are visionaries in a time when that word is too cheaply used. Now they are perhaps disinclined to submit to other people’s ethical standards, but that may be just a matter of personality. It is well to remember that the modern corporation is an amoral creature by definition, obliged to its shareholder financiers, not to the public interest.

The Federal Trade Commission issued subpoenas in June in an antitrust investigation into Google’s search and advertising practices; the European Commission began a similar investigation last year. Governments are responding in part to organized complaints by Google’s business competitors, including Microsoft, who charge, among other things, that the company manipulates its search results to favor its friends and punish its enemies. The company has always denied that. Certainly regulators are worried about its general “dominance”—Google seems to be everywhere and seems to know everything and offends against cherished notions of privacy.

The rise of social networking upends the equation again. Users of Facebook choose to reveal—even to flaunt—aspects of their private lives, to at least some part of the public world. Which aspects, and which part? On Facebook the user options are notoriously obscure and subject to change, but most users share with “friends” (the word having been captured and drained bloodless). On Twitter, every remark can be seen by the whole world, except for the so-called “direct message,” which former Representative Anthony Weiner tried and failed to employ. Also, the Library of Congress is archiving all tweets, presumably for eternity, a fact that should enter the awareness of teenagers, if not members of Congress.

Now Google is rolling out its second attempt at a social-networking platform, called Google+. The first attempt, eighteen months ago, was Google Buzz; it was an unusual stumble for the company. By default, it revealed lists of contacts with whom users had been chatting and e-mailing. Privacy advocates raised an alarm and the FTC began an investigation, quickly reaching a settlement in which Google agreed to regular privacy audits for the next twenty years. Google+ gives users finer control over what gets shared with whom. Still, one way or another, everything is shared with the company. All the social networks have access to our information and mean to use it. Are they our friends?

This much is clear: We need to decide what we want from Google. If only we can make up our collective minds. Then we still might not get it.

The company always says users can “opt out” of many of its forms of data collection, which is true, up to a point, for savvy computer users; and the company speaks of privacy in terms of “trade-offs,” to which Vaidhyanathan objects:

 Privacy is not something that can be counted, divided, or “traded.” It is not a substance or collection of data points. It’s just a word that we clumsily use to stand in for a wide array of values and practices that influence how we manage our reputations in various contexts. There is no formula for assessing it: I can’t give Google three of my privacy points in exchange for 10 percent better service.

This seems right to me, if we add that privacy involves not just managing our reputation but protecting the inner life we may not want to share. In any case, we continue to make precisely the kinds of trades that Vaidhyanathan says are impossible. Do we want to be addressed as individuals or as neurons in the world brain? We get better search results and we see more appropriate advertising when we let Google know who we are. And we save a few keystrokes.

 

First published in the New York Review of Books, August 18, 2011.

Touching History

I got a thrill in December 1999 in the Reading Room of the Morgan Library in New York when the librarian, Sylvie Merian, brought me, after I had completed an application with a letter of reference and a photo ID, the first, oldest notebook of Isaac Newton. First I was required to study a microfilm version. There followed a certain amount of pomp. The notebook was lifted from a blue cloth drop-spine box and laid on a special padded stand. I was struck by how impossibly tiny it was—58 leaves bound in vellum, just 2¾ inches wide, half the size I would have guessed from the enlarged microfilm images. There was his name, “Isacus Newton,” proudly inscribed by the 17-year-old with his quill, and the date, 1659.

“He filled the pages with meticulous script, the letters and numerals often less than one-sixteenth of an inch high,” I wrote in Isaac Newton a few years later. “He began at both ends and worked toward the middle.”

Apparently historians know the feeling well—the exhilaration that comes from handling the venerable original. It’s a contact high. In this time of digitization, it is said to be endangered. The Morgan Notebook of Isaac Newton is online now (thanks to the Newton Project at the University of Sussex). You can browse it.

The raw material of history appears to be heading for the cloud. What once was hard is now easy. What was slow is now fast.

Is this a case of “be careful what you wish for”?

The British Library has announced a project with Google to digitize 40 million pages of books, pamphlets and periodicals dating to the French Revolution. The European Digital Library, Europeana.eu, well surpassed its initial goal of 10 million “objects” last year, including a Bulgarian parchment manuscript from 1221 and the Rök runestone from Sweden, circa 800, which will save you trips to, respectively, the St. Cyril and St. Methodius National Library in Sofia and a church in Östergötland.

Reporting to the European Union in Brussels, the Comité des Sages (sounds better than “Reflection Group”) urged that essentially everything—all the out-of-copyright cultural heritage of all the member states—should be digitized and made freely available online. It put the cost at approximately $140 billion and called this vision “The New Renaissance.”

Inevitably comes the backlash. Where some see enrichment, others see impoverishment. Tristram Hunt, an English historian and member of Parliament, complained in The Observer that “techno-enthusiasm” threatens to cheapen scholarship. “When everything is downloadable, the mystery of history can be lost,” he wrote. “It is only with MS in hand that the real meaning of the text becomes apparent: its rhythms and cadences, the relationship of image to word, the passion of the argument or cold logic of the case.”

I’m not buying this. I think it’s sentimentalism, and even fetishization. It’s related to the fancy that what one loves about books is the grain of paper and the scent of glue.

Some of the qualms about digital research reflect a feeling that anything obtained too easily loses its value. What we work for, we better appreciate. If an amateur can be beamed to the top of Mount Everest, will the view be as magnificent as for someone who has accomplished the climb? Maybe not, because magnificence is subjective. But it’s the same view.

Another worry is the loss of serendipity—as Mr. Hunt says, “the scholar’s eternal hope that something will catch his eye.” When you open a book Newton once owned, which you can do (by appointment) in the library of Trinity College, Cambridge, you may see notes he scribbled in the margins. But marginalia are being digitized, too. And I find that online discovery leads to unexpected twists and turns of research at least as often as the same time spent in archives.

“New Renaissance” may be a bit of hype, but a profound transformation lies ahead for the practice of history. Europeans seem to have taken the lead in creating digital showcases; maybe they just have more history to work with than Americans do. One brilliant new resource among many is the London Lives project: 240,000 manuscript and printed pages dating to 1690, focusing on the poor, including parish archives, records from workhouses and hospitals, and trial proceedings from the Old Bailey.

Storehouses like these, open to anyone, will surely inspire new scholarship. They enrich cyberspace, particularly because without them the online perspective is so foreshortened, so locked into the present day. Not that historians should retire to their computer terminals; the sights and smells of history, where we can still find them, are to be cherished. But the artifact is hardly a clear window onto the past; a window, yes, clouded and smudged like all the rest.

It’s a mistake to deprecate digital images just because they are suddenly everywhere, reproduced so effortlessly. We’re in the habit of associating value with scarcity, but the digital world unlinks them. You can be the sole owner of a Jackson Pollock or a Blue Mauritius but not of a piece of information — not for long, anyway. Nor is obscurity a virtue. A hidden parchment page enters the light when it molts into a digital simulacrum. It was never the parchment that mattered.

Oddly, for collectors of antiquities, the pricing of informational relics seems undiminished by cheap reproduction — maybe just the opposite. In a Sotheby’s auction three years ago, Magna Carta fetched a record $21 million. To be exact, the venerable item was a copy of Magna Carta, made 82 years after the first version was written and sealed at Runnymede. Why is this tattered parchment valuable? Magical thinking. It is a talisman. The precious item is a trick of the eye. The real Magna Carta, the great charter of human rights and liberty, is available free online, where it is safely preserved. It cannot be lost or destroyed.

An object like this—a talisman—is like the casket at a funeral. It deserves to be honored, but the soul has moved on.

First published in The New York Times, 17 July 2011.

Touching History: Addendum

In a little essay in The Times (which you can read here or there) I muse about the differences between the artifacts of history—the tangible, venerable manuscripts and notebooks and other touchstones—and their new digital counterparts. I try to push back against what I see as a little bit of sentimentalizing.

But nothing I say—and nothing I’m pushing back against—is as eloquent as a comment almost thirty years ago, long before the digitization began, by the great historian and biographer Richard Holmes. So let me just quote it here. It’s from his classic book Footsteps: Adventures of a Romantic Biographer.

The past does retain a physical presence for the biographer—in landscapes, buildings, photographs, and above all the actual trace of handwriting on original letters or journals. Anything a hand has touched is for some reason peculiarly charged with personality—Thomas Hardy’s simple steel-tipped pens, each carved with a novel’s name; Shelley’s guitar, presented to Jane Williams; Balzac’s blue china coffee-pot … It is as if the act of repeated touching, especially in the process of daily work or creation, imparts a personal “virtue” to an inanimate object, gives it a fetichistic power in the anthropological sense, which is peculiarly impervious to the passage of time….

And then Holmes adds this wise caveat:

But this physical presence is none the less extremely deceptive. The material surfaces of life are continually breaking down, sloughing off, changing almost as fast as human skin.

 

And then there were eight

[Originally published January 2011]

Now comes news that the “dwarf planet” Eris is no longer the ninth largest object orbiting the sun. New measurements show (so it is claimed) that Eris is the tenth largest. Which is to say, it is a tiny bit smaller than Pluto.

You remember Pluto. Mercury Venus Earth Mars Jupiter Saturn Uranus Neptune Pluto. If you learned your Solar System any time between 1930 and 2006, you learned that it was the ninth planet. Nineteen thirty is when Clyde Tombaugh discovered it. As for 2006—that story is best told in a smart and funny new book, How I Killed Pluto and Why It Had It Coming, by Mike Brown, the Caltech astronomer who discovered Eris.

Surely discovering planets was a one-way street. Maybe we could find new ones, but how could we lose any?

By tightening standards. Raising the bar. Redefining the word planet.

The International Astronomical Union did that, after months of struggle and infighting and philosophical confusion, at a tumultuous meeting in Prague in the summer of 2006—speeches and resolutions and amendments and footnotes and points of clarification. At one point the great Jocelyn Bell Burnell, codiscoverer of pulsars, declared, “Resolutions are nonlinear and small changes have big effects,” and reached under her table to brandish a beach ball, an umbrella, and a stuffed dog. (Pluto—get it?)

Staying in Sync with the Cosmos

First published in the New York Times Magazine, Dec. 31, 1995. Eventually it led to Faster.

I AM IN THE DIRECTORATE OF TIME. Naturally, I am running late. I hurry past a climate-controlled vault in which the world’s No. 1 clock is silently assembling each second from nine billion parts. I am ushered into the presence of Gernot M. R. Winkler, who recently retired as director of the Directorate of Time. (The Government has not been able to find a replacement.) He looks across his desk—sharp blue eyes, craggy features, white hair—and says, “We have to be fast.”

In the era of the nanosecond, timekeeping is serious business. The Directorate, a division of the United States Naval Observatory, has scattered its atomic clocks across a colorfully manicured hilltop near the Potomac River in Washington—the Master Clock and 53 others. This ensemble, consulting continuously with counterparts overseas, has achieved a precision in measurement that surpasses anything else in science. The seconds pass here with a margin of error each day that is smaller than a hairbreadth in the distance from the Earth to the Sun. In a million years, the Master Clock might gain or lose a second.

Time used to be fixed by astronomical reference points — Earth spins once, call it a day. By consensus among scientists and military officials, the absolute reference frame has shifted from the stars to the atomic clocks in their vaults. Stars drift, and the Earth shivers ever so slightly — generally its rotation slows each year. Because the clocks, not the Earth, now provide the ultimate authority, the Earth gets out of sync.

To compensate, the official clocks will all perform a quick two-step tonight, New Year’s Eve, in unison, adding a leap second to the world’s calendar. That makes this, by one second, the longest day of the year. The New Year will click in somewhat sneakily: 11:59:58 P.M., 11:59:59, 11:59:60, 12:00:00 A.M., 12:00:01….

Leap seconds are growing more common. Eventually—in the distant future—there will be at least one every year, and then two, and so on, as the Earth continues to slow. It didn’t have to be that way—in fact, until 1970, the second was always one-86,400th of a real day. The atomic clocks were retuned. The second lengthened a tiny bit each year. This did not trouble most of us, but it did start to annoy physicists: Come on, guys, a second is a second—give me a real SECOND.

That the exactitude of modern timekeepers defies anything in human experience is cheerfully acknowledged here at the Directorate. When events occur within thousandths of a second, we cannot tell the past from the future. “I tell you, it wasn’t on a human scale when we were measuring time to a millisecond, and now we are down to a fraction of a nanosecond,” Winkler says. “Can you miss a plane by a millisecond? Of course not.” He pauses to think. “I missed one by five seconds once.”

Still, humans seem to crave the precision that is available. Internet users can set their computers to update their clocks according to the Directorate’s time signal—and the Directorate now fields more than 300,000 electronic queries each day. By pinging back and forth across the network, the software can correct for delays along the phone lines between the clocks and your PC. The truly time-obsessed used to keep their watches accurate to within seconds; now they keep their computers accurate to within milliseconds.
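
(The correction works on the same principle as the standard network time protocols: four timestamps from one round trip let the software estimate both the delay on the wire and the error of the local clock. A minimal sketch, with invented timestamps.)

```python
# One round trip, four timestamps (seconds):
# t0 = query leaves my machine, t1 = server receives it,
# t2 = server sends its reply,  t3 = reply arrives back here.
t0, t1, t2, t3 = 100.000, 100.062, 100.063, 100.021   # invented numbers

round_trip_delay = (t3 - t0) - (t2 - t1)       # time actually spent in transit
clock_offset = ((t1 - t0) + (t2 - t3)) / 2.0   # how far my clock is from the server's

print(f"network delay: about {round_trip_delay * 1000:.0f} ms")   # ~20 ms
print(f"my clock is off by about {clock_offset * 1000:.0f} ms")   # ~52 ms behind
```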

Nanosecond precision is needed for worldwide communications systems and for navigation by Global Positioning System satellite signals, where an error of a billionth of a second means an error of a foot—the distance light travels in that time. Cellular phone networks and television transmitters need fine timing to squeeze more and more channels of communication into precisely tuned bandwidths. The military, especially, finds ways to use superprecise timing. It is no accident that the Directorate of Time belongs to the Department of Defense. Knowing the exact time is an essential aspect of delivering airborne explosives to exact locations—individual buildings or parts of buildings—thus minimizing one of the department’s crucial euphemisms, collateral damage.
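
(The foot-per-nanosecond conversion at the top of that paragraph is simply the speed of light at work.)

```latex
c \times 1\,\mathrm{ns} \;=\; (3.0\times10^{8}\ \mathrm{m/s}) \times (10^{-9}\ \mathrm{s}) \;\approx\; 0.30\ \mathrm{m} \;\approx\; 1\ \text{foot}
```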

“This is extremely important,” Winkler says, the accent of his native Austria breaking through. He slashes his hand through the air in a karate gesture. “We want to be exact.”

It is often said ironically here, and Winkler says it again, that if you have only one clock, you automatically have perfect time. In the real world, no single clock can serve as a reference. The Directorate’s official cesium atomic and hydrogen maser clocks—54 now, though clocks can come off the bench or get sent down to the minors as their performance warrants—serve as independent witnesses, consulting one another electronically every 100 seconds.

“Different power sources, diesel generators, individual batteries,” Winkler says. “Backup monitor stations. In case we are wiped out by some major disaster, we can carry on.” Official time comes not from the Master Clock alone but from the group, statistically merged. The United States’ statistical contribution to worldwide official time is 40 percent, by far the largest share, with the rest coming from time agencies in other nations.
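
(What “statistically merged” means, in caricature: a weighted average of the clocks’ readings, with the steadier clocks counting for more. This toy illustration is mine, not the Observatory’s actual algorithm.)

```python
# Toy "paper clock": a weighted average of several clocks' readings,
# with weight going to the clocks that have historically scattered least.
readings  = {"cesium-07": 0.4e-9, "cesium-22": -1.1e-9, "maser-03": 0.1e-9}  # seconds off nominal
stability = {"cesium-07": 2.0e-9, "cesium-22": 5.0e-9,  "maser-03": 0.5e-9}  # typical scatter, seconds

weights = {name: 1.0 / stability[name] ** 2 for name in readings}   # steadier clock, larger weight
total = sum(weights.values())
ensemble = sum(weights[name] * readings[name] for name in readings) / total

print(f"ensemble time offset: {ensemble * 1e9:.2f} nanoseconds")    # pulled toward the quiet maser
```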

Few scientific institutions are so intensely focused on so pure a goal. Keeping the right time brings together an assortment of technologies and sciences. The Directorate’s astronomers study the most distant quasars—not for the theoretical subtleties that interest astrophysicists, but for their fixedness in the sky. The stars may wander, but the Directorate’s favorite 462 quasars provide as rigid a frame as can be found. Earth scientists study the slowing rotation and the occasional wobble—a problem that comes down to watching the weather, because nothing affects the planet’s spin in any given year as much as wind blowing on mountains.

And atomic scientists continue to perfect those clocks.

“The clock is a machinery which repeats the same process over and over again,” Winkler says. “Now, the ‘same process’ means ‘undisturbed from the outside.’ The observation itself is a disturbance and we must keep that to a minimum. Magnetic fields, humidity. It’s really technology driven to the utmost perfection in respect to control of a process.”

I cannot resist asking a few questions about the director’s psychological motivation. He cooperates: “Accuracy, precision, control—this is something which is to me esthetically pleasing.”

Are you a punctual person?

“I try to be.”

What kind of watch do you wear?

“None.”

Why is that?

“I don’t need to. This would be an admission of defeat.”

Nevertheless, there is a reasonably accurate clock on the wall just behind my left shoulder, and I see Winkler glance at it. My half-hour is up.