The Pentagon’s Brain

  The idea was met with laughter. Scientists on the General Advisory Committee were appalled. In the only surviving record of the meeting, one committee member, Dr. James Whitman, expresses shock and says that a 10,000-megaton bomb would “contaminate the earth.” Teller defended his idea, boasting that Lawrence had already approached the Air Force, and the Air Force was interested. Rabi called the idea “a publicity stunt,” and plans for a 10,000-megaton bomb were shelved. But Livermore was allowed to keep its doors open after all.

  Decades later, Herb York explained why he and Edward Teller had felt it necessary to design a 10,000-megaton bomb when the United States had, only months earlier, achieved supremacy over the Soviets with the 15-megaton Castle Bravo bomb. The reason, York said, was that in order to maintain supremacy, American scientists must always take new and greater risks. “The United States cannot maintain its qualitative edge without having an aggressive R&D [research and development] establishment that pushes against the technological frontiers without waiting to be asked,” York said, “and that in turn creates a faster-paced arms race. That is the inevitable result of our continuing quest for a qualitative edge to offset the other side’s quantitative advantage.”

  For Herb York, the way for America to maintain its position as the most militarily powerful country in the world was through the forward march of science. To get the most out of an American scientist was to get him to compete against equally brilliant men. That was what made America great, York said. This was the American way of war. And this was exactly the kind of vision the Department of Defense required of its scientists as it struggled for survival against the Soviet communists. The age of thermonuclear weapons had arrived. Both sides were building vast arsenals at a feverish pace. There was no turning back. The only place to go was ahead.

  It was time to push against technological frontiers.

  CHAPTER TWO

  War Games and Computing Machines

  On the California coast, in the Santa Monica sunshine, the defense scientists at the RAND Corporation played war games during lunchtime. RAND, an acronym for “research and development,” was the Pentagon’s first postwar think tank, the brains behind U.S. Air Force brawn. By day, during the 1950s, analysts inside RAND’s offices and conference rooms churned out reports, mostly about nuclear weapons. Come lunchtime they moved outdoors, spreading maps of the world across tabletops, taking game pieces from boxes and playing Kriegspiel, a chess variant once favored by the powerful German military.

  Competition was valued and encouraged at RAND, with scientists and analysts always working to outdo one another. Lunchtime war games included at least one person in the role of umpire, which usually prevented competitions from getting out of hand. Still, tempers flared, and sometimes game pieces scattered. Other times there was calculated calm. Lunch could last for hours, especially if John von Neumann was in town.

  In the 1950s, von Neumann was the superstar defense scientist. No one could compete with his brain. At the Pentagon, the highest-ranking members of the U.S. armed services, the secretary of defense and the Joint Chiefs of Staff, all saw von Neumann as an infallible authority. “If anyone during that crucial period in the early and middle-fifties can be said to have enjoyed more ‘credibility’ in national defense circles than all the others, that person was surely Johnny,” said Herb York, von Neumann’s close friend.

  Born in 1903 to a well-to-do Hungarian Jewish family, John von Neumann had been a remarkable child prodigy. In the first grade he was solving complex mathematical problems. By age eight he had mastered calculus, though his talents were not limited to math. By the time von Neumann graduated from high school, he spoke seven languages. He could memorize hundreds of pages of text, including long numbers, after a single read-through. “Keeping up with him was impossible,” remarked the mathematician Israel Halperin. “The feeling was you were on a tricycle chasing a racing car.”

  “Johnny was the only student I was ever afraid of,” said his former teacher George Pólya, himself a famous mathematician. “If in the course of a lecture I stated an unsolved problem, the chances were he’d come to me at the end of the lecture with the complete solution scribbled on a slip of paper.”

  By all accounts, von Neumann was gentle and kind, beloved for his warm personality, his courtesy, and his charm. “He was pleasant and plump, smiled easily and often, enjoyed parties and other social events,” recalled Herb York. He loved to drink, play loud music, attend parties, and collect toys. He always wore a three-piece banker’s suit with a watch chain stretched across his plump belly. There exists a photograph of von Neumann traveling down into the Grand Canyon on a donkey’s back, outfitted in the legendary three-piece suit. It is said that the only things von Neumann carried in his pants pockets were unsolvable Chinese puzzles and top secret security clearances, of which he had many.

  To his core, von Neumann believed that man was violent, belligerent, and deceptive, and that he was inexorably prone to fighting wars. “I think the USA-USSR conflict will very probably lead to an armed ‘total’ collision and that a maximum rate of armament is therefore imperative,” von Neumann wrote to Lewis Strauss, head of the Atomic Energy Commission, three years before the Castle Bravo bomb exploded—a weapon that von Neumann helped engineer.

  Only in rare private moments would “the deeply cynical and pessimistic core of his being” emerge, remarks his daughter Marina von Neumann Whitman, a former economic advisor to President Nixon. “I was frequently confused when he shifted, without warning…. [O]ne minute he would have me laughing at his latest outrageous pun and the next he would be telling me, quite seriously, why all-out atomic war was almost certainly unavoidable.” Did war stain him? During World War II, when his only daughter was a little girl, John von Neumann helped decide which Japanese civilian populations would be targeted for atomic bombing. But far more revealing is that it was von Neumann who performed the precise calculations that determined at what altitude over Hiroshima and Nagasaki the atomic bombs had to explode in order to achieve the maximum kill rate of civilians on the ground. He determined the height to be 1,800 feet.

  At the RAND Corporation, von Neumann served as a part-time consultant. He was hired by John Davis Williams, the eccentric director of RAND’s Mathematics Division, on unusual terms: Von Neumann was to write down his thoughts each morning while shaving, and for those ideas he would be paid $200 a month—the average salary of a full-time RAND analyst at the time. Von Neumann lived and spent most of his time working in New Jersey, where he had been a faculty member at the Institute for Advanced Study in Princeton since the early 1930s, alongside Albert Einstein.

  To the RAND scientists playing lunchtime war games, less important than beating von Neumann at Kriegspiel was watching how his mind analyzed game play. “If a mentally superhuman race ever develops, its members will resemble Johnny von Neumann,” Edward Teller once said. “If you enjoy thinking, your brain develops. And that is what von Neumann did. He enjoyed the functioning of his brain.”

  John von Neumann was obsessed with what he called parlor games, and his first fascination was with poker. There was strategy involved, yes, but far more important was that the game of poker was predicated on deception: to play and to win, a man had to be willing to deceive his opponent, to make him believe that something false was true. Second-guessing was equally imperative to a winning strategy. A poker player needed to predict what his opponent thought he might do.

  In 1926, when von Neumann was twenty-three years old, he wrote a paper called “Theory of Parlor Games.” The paper, which examined game playing from a mathematical point of view, contained a soon-to-be famous proof, called the minimax theorem. Von Neumann wrote that when two players are involved in a zero-sum game—a game in which one player’s losses equal the other player’s gains—each player will work to minimize his own maximum losses while at the same time working to maximize his minimum gains. During the war, von Neumann collaborated with the Princeton economist Oskar Morgenstern to explore this idea further. In 1944 the two men co-authored a 673-page book on the subject, Theory of Games and Economic Behavior. The book was considered so groundbreaking that the New York Times carried a page one story about its contents the day it was published. But von Neumann and Morgenstern’s book did more than just revolutionize economic theory. It placed game theory on the world stage, and after the war it caught the attention of the Pentagon.
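  In modern notation, the minimax theorem can be sketched as follows; the symbols are illustrative rather than von Neumann’s own. For a two-player zero-sum game with payoff matrix A, where x and y range over the players’ mixed strategies (probability weightings of their possible moves),

    \[ \max_{x}\,\min_{y}\; x^{\top} A\, y \;=\; \min_{y}\,\max_{x}\; x^{\top} A\, y \]

  In words, the most the first player can guarantee himself against the second player’s best reply equals the least the second player can hold him to, so each side has a strategy that simultaneously minimizes its maximum loss and maximizes its minimum gain.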

  By the 1950s, von Neumann’s minimax theorem was legendary at RAND, and to engage von Neumann in a discussion about game theory was like drinking from the Holy Grail. It became a popular pastime at RAND to try to present to von Neumann a conundrum he could not solve. In the 1950s, two RAND analysts, Merrill Flood and Melvin Dresher, came up with an enigma they believed was unsolvable, and they presented it to the great John von Neumann. Flood and Dresher called their quandary the Prisoner’s Dilemma. It was based on a centuries-old dilemma tale. A contemporary rendition of the Prisoner’s Dilemma involves two criminal suspects faced with either prison time or a plea deal.

  The men, both members of a criminal gang, are believed to have participated in the same crime. They are arrested and put in different cells. Separated, the two men have no way of communicating with each other, so they can’t learn what the other man is being offered by way of a plea deal. The police tell each man they don’t have enough evidence to convict either of them individually on the criminal charges they were brought in for. But the police do have enough evidence to convict each man on a lesser charge, parole violation, which carries a prison sentence of one year. The police offer each man, separately, a Faustian bargain. If he testifies against the other man, he will go free and the partner will do ten years’ prison time. There is a catch. Both men are being offered the same deal. If both men take the plea deal and testify against each other, each will serve five years instead of ten. If both men refuse the deal, they will each serve only one year in jail for the parole violation—taken together, clearly the best way for the pair to minimize maximum losses and maximize minimum gains. But the deal is on the table for only a finite amount of time, the police say.
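  As a minimal sketch of the arithmetic described above, the payoffs can be written out and compared directly; the sentences mirror the numbers in the text, while the names and structure of the code are purely illustrative (Python):

    # Prisoner's Dilemma sentences from the text, in years of prison time.
    # Each entry maps (my choice, partner's choice) -> my sentence.
    SENTENCE = {
        ("refuse", "refuse"): 1,    # both stay silent: one year each on the lesser charge
        ("refuse", "testify"): 10,  # I stay silent, my partner testifies: I serve ten years
        ("testify", "refuse"): 0,   # I testify, my partner stays silent: I go free
        ("testify", "testify"): 5,  # both testify: five years each
    }

    def worst_case(my_choice):
        """My longest possible sentence for a given choice, whatever my partner does."""
        return max(SENTENCE[(my_choice, partner)] for partner in ("refuse", "testify"))

    for choice in ("refuse", "testify"):
        print(choice, "-> worst case:", worst_case(choice), "years")

  Testifying has the smaller worst case (five years rather than ten), which is why narrow self-interest points toward betrayal even though mutual refusal, at one year apiece, is the better joint outcome. That tension is the heart of the paradox.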

  Von Neumann could not “solve” the Prisoner’s Dilemma. It is an unsolvable paradox. It does not fit the minimax theorem, which applies only to zero-sum games. There is no single answer; the outcome of the dilemma game differs from player to player. Dresher and Flood posed the Prisoner’s Dilemma to dozens of RAND colleagues and also to other test subjects outside RAND. While no one could “solve” the Prisoner’s Dilemma, the RAND analysts learned something unexpected from the results. The outcome of the Prisoner’s Dilemma seemed to depend on the human nature of the individual game players involved—whether the player was guided by trust or distrust. Dresher and Flood discovered that the participants’ responses also revealed their philosophical outlook, which generally correlated with a political disposition. In interviewing RAND analysts, almost all of whom were political conservatives, Dresher and Flood discovered that the majority chose to testify against their criminal partner. They did not trust that partner to set aside self-preservation, gamble against his own immediate interests, and refuse to talk. Five years in prison was better than ten, the RAND analysts almost universally responded. By contrast, Dresher and Flood found that the minority of game players who refused to testify against their criminal partner were almost always of the liberal persuasion. These individuals were willing to put themselves at risk in order to get the best possible outcome for both themselves and a colleague—just a single year’s jail time.

  Dresher and Flood saw that the paradox of the Prisoner’s Dilemma could be applied to national security decisions. Take the case of Robert Oppenheimer, for example, a liberal. As chairman of the General Advisory Committee, Oppenheimer had appealed to Secretary of State Dean Acheson to try to persuade President Truman not to go forward with the hydrogen bomb. To show restraint, Oppenheimer said, would send a clear message to Stalin that America was offering “limitations on the totality of war and thus eliminating the fear and raising the hope of mankind.” Acheson, a conservative, saw the situation very differently. “How can you persuade a paranoid adversary to ‘disarm by example?’” he asked.

  Von Neumann became interested in the Prisoner’s Dilemma as a means for examining strategic possibilities in the nuclear arms race. The Prisoner’s Dilemma was a non–zero-sum game, meaning one player’s losses were not equal to the other player’s gains. From von Neumann’s perspective, even though two rational people were involved—or, in the case of national security, two superpower nations—they were far less likely to cooperate to gain the best deal, and far more likely to take their chances on a better deal for themselves. The long-term implications of applying the Prisoner’s Dilemma to the nuclear arms race were profound, suggesting that it would forever be a game of one-upmanship.

  In addition to game theory and nuclear strategy, the RAND Corporation was interested in computer research, a rare and expensive field of study in the 1950s. The world’s leading expert in computers was John von Neumann. While no one person can accurately claim credit for the invention of the computer, von Neumann is often seen as one of the fathers of modern computers, given the critical role he played in their early development. His work on computing machines goes back to World War II, a time when “computer” was the name for a person who performed numerical calculations as part of a job.

  During the war, at the Army’s Aberdeen Proving Ground in Maryland, scores of human computers worked around the clock on trajectory tables, trying to determine more accurate timing and firing methods for various battlefield weapons. Bombs and artillery shells were being fired at targets with ever-increasing speed, and the human computers at Aberdeen simply could not keep up with the trajectory tables. The work was overwhelming. Von Neumann, one of the nation’s leading experts on ballistics at the time and a regular presence at Aberdeen, got to talking with one of the proving ground’s best “computers,” Colonel Herman Goldstine, about this very problem. Goldstine was an Army engineer and former mathematics professor, and still he found computing to be grueling work. Goldstine explained to von Neumann that on average, each trajectory table he worked on contained approximately three thousand entries, all of which had to be multiplied. Performed with paper and pencil, each set of three thousand calculations took a man like Goldstine roughly twelve hours to complete and another twelve hours to verify. The inevitability of human error was what slowed things down.

  Von Neumann told Colonel Goldstine that he believed a machine would one day prove to be a better computer than a human. If so, von Neumann said, this could profoundly impact the speed with which the Army could perform its ballistics calculations. As it so happened, Colonel Goldstine was cleared for a top secret Army program that involved exactly the kind of machine von Neumann was theorizing about. Goldstine arranged to have von Neumann granted clearance, and the two men set off for the University of Pennsylvania. There, inside a locked room at the Moore School, engineers were working on a classified Army-funded computing machine—the first of its kind. It was called the Electronic Numerical Integrator and Computer, or ENIAC.

  ENIAC was huge and cumbersome: one hundred feet long, ten feet high, and three feet deep. It had 17,468 vacuum tubes and weighed sixty thousand pounds. Von Neumann was fascinated. ENIAC was “the first complete automatic, all-purpose digital electronic computer” in the world, von Neumann declared. He was certain ENIAC would spawn a revolution, and that, indeed, computers would no longer be men but machines.

  Von Neumann began developing ideas for creating an electronic computer of his own. Borrowing ideas from the ENIAC construct, and with help from Colonel Goldstine, he drew up plans for a second classified electronic computer, called the Electronic Discrete Variable Automatic Computer, or EDVAC. Von Neumann saw great promise in a redesign of the ENIAC computer’s memory. He believed there was a way to turn the computer into an “electronic brain” capable of storing not just data and instructions, as was the case with ENIAC, but additional information that would allow the computer to perform a myriad of computational functions on its own. This was called a stored-program computer, and it “broke the distinction between numbers that mean things and numbers that do things,” writes von Neumann’s biographer George Dyson, adding, “Our universe would never be the same.” These “instructions” that von Neumann imagined were the prototype of what the world now knows as software.
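  A loose modern illustration of that idea, with no pretense of matching EDVAC’s actual order code, is a toy machine whose instructions and data sit side by side in one memory as plain numbers (Python):

    # Toy stored-program machine: instructions and data share one memory.
    # The opcodes are invented for illustration, not EDVAC's instruction set.
    memory = [
        1, 6,   # opcode 1: load memory[6] into the accumulator
        2, 7,   # opcode 2: add memory[7] to the accumulator
        0, 0,   # opcode 0: halt
        40, 2,  # data: the "numbers that mean things"
    ]

    accumulator, pc = 0, 0
    while True:
        op, arg = memory[pc], memory[pc + 1]
        if op == 0:          # halt
            break
        elif op == 1:        # load
            accumulator = memory[arg]
        elif op == 2:        # add
            accumulator += memory[arg]
        pc += 2              # step to the next stored instruction

    print(accumulator)       # prints 42

  Because the program is itself data held in memory, a machine built this way can be handed new instructions, or even alter its own, without being rewired.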

  Von Neumann believed that this computer could theoretically speed up atomic bomb calculations being performed by his fellow Manhattan Project scientists at Los Alamos, in New Mexico. He and the team at the Moore School proposed that the Army build a second machine, the one he called EDVAC. But the atomic bomb was completed and successfully tested before EDVAC was finished, and after the war, EDVAC was orphaned.

  Von Neumann still wanted to build his own computer from scratch. He secured funding from the Atomic Energy Commission to do so, and in November 1945, John von Neumann began building an entirely new computer in the basement of Fuld Hall at the Institute for Advanced Study in Princeton. Colonel Goldstine arrived to assist him in the winter of 1946, and with help from a small staff of engineers, von Neumann first constructed a machine shop and a laboratory for testing computer components. Officially the machine was called the Electronic Computing Instrument; von Neumann preferred to call it the Mathematical and Numerical Integrator and Computer, or MANIAC.

  MANIAC was smaller and much more advanced than ENIAC, which weighed thirty tons. ENIAC was rife with limitations; gargantuan and cumbersome, it sucked power, overheated, and constantly needed to be rewired whenever a new problem came along. ENIAC technicians spent days unplugging and replugging tangled cables just to set up a numerical problem that then took only minutes to compute. MANIAC was compact and efficient, a single six-foot-high, eight-foot-long machine that weighed only a thousand pounds. But the most significant difference between ENIAC and MANIAC was that von Neumann designed his computer to be controlled by its own instructions. These were housed inside the machine, like a brain inside a human being.