  It is on the small scale that we can already see the limits to Moore's Law, at least as it applies to classical computers. Doubling at regular intervals—whether the interval is a year, eighteen months or some other time step—is an exponential process which cannot continue indefinitely. The classic example of runaway exponential growth is the legend of the man who invented chess. The story tells us that the game was invented in India during the sixth century by a man named Sissa ben Dahir al-Hindi to amuse his king, Sihram. The king was so pleased with the new game that he allowed Sissa to choose his own reward. Sissa asked for either 10,000 rupees or 1 grain of corn for the first square of the chess board, two for the second square, four for the third square and so on, doubling the number for each square. The king, thinking he was getting off lightly, chose the second option. But the number of grains Sissa had requested amounted to 18,446,744,073,709,551,615—enough, Sissa told his king, to cover the whole surface of the Earth “to the depth of the twentieth part of a cubit.” Alas, that's where the story ends, and we don't know what became of Sissa, or even if the story is true. But either way, the numbers are correct; and the point is that exponential growth cannot continue indefinitely or it would consume the entire resources not just of the Earth but of the Universe. So where are the limits to Moore's Law?
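  (For anyone who wants to check Sissa's arithmetic, here is a minimal sketch in Python, my own addition rather than part of the story, that adds up the grains square by square.)

    # Total grains of corn on a 64-square chess board, starting with one
    # grain on the first square and doubling on each successive square.
    total = sum(2**square for square in range(64))   # 2^0 + 2^1 + ... + 2^63
    print(total)                   # 18446744073709551615
    assert total == 2**64 - 1      # closed form of the geometric sum, the figure quoted above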

  At the beginning of the twenty-first century, the switches that turned individual transistors on microchips on and off—the equivalent of the individual electromechanical relays in the Zuse machines or the individual valves in Colossus—involved the movement of a few hundred electrons. Ten years later, it involved a few dozen. We are rapidly approaching the stage where an individual on-off switch in a computer, the binary 0 or 1 at the heart of computation and memory storage, is controlled by the behavior of a single electron, sitting in (or on) a single atom; indeed, in 2012, while this book was in preparation, a team headed by Martin Fuechsle, of the University of New South Wales, announced that they had made a transistor from a single atom. This laboratory achievement is only a first step towards putting such devices on your smartphone, but it must herald a limit to Moore's Law as we have known it, simply because miniaturization can go no further; there is nothing smaller than an electron that could do the job in the same way. If there is to be future progress in the same direction, it will depend on something new, such as using photons to do the switching: computers based on optics rather than on electricity.

  There is, though, another reason why the use of single-electron switches takes us beyond the realm of classical computing. Electrons are quintessentially quantum entities, obeying the rules of quantum mechanics rather than the rules of classical (Newtonian) mechanics. They sometimes behave like particles, but sometimes behave like waves, and they cannot be located at a definite point in space at a definite moment of time. And, crucially, there is a sense in which you cannot say whether such a switch is on or off—whether it is recording a 1 or a 0. At this level, errors are inevitable, although they can be tolerated provided they occur rarely enough. Even a “classical” computer using single-electron switches would have to be constructed to take account of the quantum behavior of electrons. But as we shall see, these very properties themselves suggest a way to go beyond the classical limits into something completely different, making quantum indeterminacy an asset rather than a liability.

  In December 1959, Richard Feynman gave a now-famous talk with the title “There's Plenty of Room at the Bottom,” in which he pointed the way towards what we now call nanotechnology, the ultimate forms of miniaturization of machinery. Towards the end of that talk, he said:

  When we get to the very, very small world—say circuits of seven atoms—we have a lot of new things that would happen that represent completely new opportunities for design. Atoms on a small scale behave like nothing on a large scale, for they satisfy the laws of quantum mechanics. So, as we go down and fiddle around with the atoms down there, we are working with different laws, and we can expect to do different things. We can manufacture in different ways. We can use, not just circuits, but some system involving the quantized energy levels, or the interactions of quantized spins, etc.

  As I have mentioned, just half a century later we have indeed now got down to the level of “circuits of seven atoms”; so it is time to look at the implications of those laws of quantum mechanics. And there is no better way to look at them than through the eyes, and work, of Feynman himself.

  [Figure: The “interference pattern” built up by electrons passing one at a time through “the experiment with two holes.” How do they know where to go?]

  Richard Phillips Feynman was born on May 11, 1918, and grew up in Far Rockaway in the borough of Queens, New York. By the time he went to MIT, in 1935, the “quantum revolution” of the 1920s was complete, and von Neumann had already written his influential book The Mathematical Foundations of Quantum Mechanics, although at that time it had not been translated into English. To Feynman's generation, and later students, quantum mechanics was (and is) the received wisdom, not some startling new discovery, and that is the spirit in which I approach it here.

  Feynman's father, Melville, had a fascination with science, especially natural history; he was intelligent and had wanted to become a doctor, but as the son of poor Lithuanian Jewish immigrants could not afford a college education. He ended up in the uniform business. Melville deliberately set out to encourage scientific interest in his son, buying him a set of the Encyclopedia Britannica, taking him on trips to the American Museum of Natural History, and encouraging him to solve puzzles for himself rather than expecting to be given the answers. It turned out that Richard needed little encouragement, and had a natural aptitude for mathematics and, later, mathematical physics. The family was not affluent, but neither were they poor, surviving the Depression in relative comfort. At school, Richard was outstanding academically (at least in science and math), often helping older students out with their assignments, but hopeless at ball games and self-conscious about his lack of what were perceived as “manly” skills. He built radio receivers, repaired them for other people, learned to dance so that he could meet girls (he later said that as a teenager he was interested in only two things, math and girls), and graduated from high school in the summer of 1935 with top honors.

  Even so, the passage into college wasn't straightforward. Feynman applied to Columbia and MIT, but was rejected by Columbia because they still operated a quota on Jewish students, and had filled this already. MIT had a different hoop that had to be jumped through—applicants had to have a recommendation from an MIT graduate before they would be considered. Melville persuaded an acquaintance to provide the endorsement; Feynman later described this system as “evil, wrong, and dishonest.”

  MIT

  Feynman's reputation as a budding scientist preceded him to MIT, where he was the subject of rivalry between the only two Jewish fraternities, Phi Beta Delta and Sigma Alpha Mu, each eager to add him to its number. Although Feynman had no religious beliefs, his family background meant that he had to join one or other of these fraternities; so he settled on Phi Beta Delta, partly because two older members had advised him that as an outstanding student he would be allowed to take examinations on arrival at MIT which, once passed, would enable him to skip the first-year math lectures and start with the second-year course. As this shows, the frats were not all about partying, but provided a mutual support system for members. For example, the more academic students were expected to help the more social animals with their work, and in return the social types helped the academics to come out of their shells and learn the social graces. Feynman described it as “a good balancing act” from which he benefited, losing the self-consciousness that had handicapped him in high school.

  Feynman benefited in another way from living in the frat house. Two of the senior students there were taking an advanced course in physics which included the latest developments in quantum mechanics. Through conversations with them, Feynman decided to switch from mathematics to physics, and signed up for the same physics course (intended for seniors and graduate students) at the start of his second (sophomore) year. Even in this advanced company, Feynman stood out. For the first semester, the course was taught by a young professor, Julius Stratton, who later became President of MIT, but in 1936 was sometimes careless about preparing his lectures. Whenever he got stuck, he would turn to the audience and ask, “Mr. Feynman, how did you handle this problem?” and Richard would take over. Nobody else in the class was ever singled out in this way.

  Along with advanced physics, during his time as an undergraduate Feynman took regular courses in chemistry, metallurgy, experimental physics and optics, and signed up for another advanced course in nuclear physics. He sailed through anything scientific. But he passed the compulsory courses in English, history and philosophy, which he regarded as “dippy” subjects, only with the aid of the fraternity support system. He published two scientific papers before he had even graduated, and wanted to stay on at MIT to work for a PhD, but was told it would be better for his scientific development to go somewhere else. Somewhat grudgingly, he complied, moving in 1939 from MIT to Princeton; he later acknowledged that his teachers were right, and that the move was the right thing for him at that time.

  FROM PRINCETON TO LOS ALAMOS

  Princeton had been alerted that something special was coming their way, but even so nearly turned him down when they saw his grades. He had scored 100 percent in physics, and almost as high in math. Both were the best scores the Princeton Graduate Admissions Committee had ever seen. But they had never admitted anybody with such low scores as Feynman had achieved (if that is the right word) in English and history. In the end, he was offered a research assistantship, which meant that he worked for a senior scientist and actually got paid while doing his own PhD research. The scientist Feynman worked with was John Wheeler, later famous for his investigations of the physics of black holes. But “senior” is a relative term: when they met, Wheeler was twenty-eight and Feynman twenty-one. They became good friends, and Wheeler also acted as Feynman's thesis adviser.

  Feynman's thesis was entitled “The Principle of Least Action in Quantum Mechanics,” and dealt with a way of describing how quantum entities such as electrons travel from A to B. This led to the so-called “path integral approach,” and to the work for which Feynman would later receive the Nobel Prize. I shall explain all this shortly; but Feynman's career was interrupted, just at the point he was finishing his thesis in 1941, by the involvement of the United States in the Second World War.

  Even before the attack on Pearl Harbor, like many of his contemporaries Feynman had realized that war was inevitable, and in the summer of 1941 had been working at the Frankford Arsenal in Philadelphia on a mechanical predictor for anti-aircraft gunnery. He made such an impression that he was offered a full-time job at the head of his own design team, but went back to Princeton to finish his PhD; had his decision gone the other way, he might well have become a leading light in the early development of electronic computers. Feynman was initially recruited to war work in December 1941, at first on the problem of separating the fissile isotope uranium-235 from the far more abundant uranium-238. This was before he had completed his thesis, but he took a few weeks’ leave in the spring of 1942 to write it up. The oral examination, held on June 3, 1942, was a formality, and he received the degree the same month. Before June was out, Richard also married his childhood sweetheart, Arline Greenbaum, even though she was seriously ill (indeed, hospitalized) with tuberculosis. Later that year, the uranium enrichment project that Feynman was involved with was dropped, in favor of a more successful method, and he was moved, along with other members of the team, to Los Alamos, where, among other things, he worked with the IBM machines needed to help von Neumann with his calculations. Arline also moved west, to a hospital as close as possible to Los Alamos, where she died in 1945.

  Before picking up the threads of Feynman's work on quantum physics after the war, and in particular his prescient ideas about computing and quanta, we can put all this in context by looking at the kind of quantum physics he had been taught as an undergraduate—the kind of quantum physics espoused by von Neumann in his book. In its most widely used form, this was based on an equation discovered by the Austrian Erwin Schrödinger.

  SCHRÖDINGER AND HIS EQUATION

  One of the peculiarities of quantum physics is that although we have very good, reliable equations to describe what is going on in the subatomic world, we do not have a single clear understanding of what it is those equations describe. The problem is not that we have no picture of what is going on, but that we have many, equally valid, pictures. There are several different ways of interpreting the equations in terms of the behavior of quantum entities such as electrons, all of them equally valid in the sense that they are described by equations which allow physicists to make accurate and correct predictions about the outcome of experiments. I have gone into the details of all this in my book Schrödinger's Kittens; here, I shall mention just one (but the most profound) aspect of this intriguing puzzle.

  In the middle of the 1920s, two completely different ways of understanding the quantum world were developed independently, at almost the same time. The first approach, stemming from the work of Werner Heisenberg, treated electrons as particles, whose behavior could be described with great precision by a certain set of equations and mathematical rules. At one level, this matches the idea most of us have of electrons as tiny, subatomic particles, like little billiard balls, each carrying a certain amount of electric charge. To be sure, there were some oddities about the rules, not least that the “particles” could jump from one place to another instantaneously, without crossing the space in between. But the equations worked. The second approach, developed initially by Schrödinger, treated electrons as waves. This meant that they could be described in terms of the rules of wave behavior, which physicists were confident they knew all about from studying things like ripples in water. To be sure, there were some oddities about the picture, not least the puzzle of how the electric charge of the electron could be carried by a wave. But the equations worked.

  Very soon, several people (most notably Paul Dirac) proved that both these versions of quantum mechanics (and, indeed, all versions of quantum mechanics) are mathematically equivalent to one another, not unlike the way in which a book like this might be written in English and also in German or some other language but still contain, and convey, the same message. This meant that in a sense it didn't matter which version you chose to work with, since they all gave the same answers. Because physicists were already familiar with the idea of waves and wave equations, Schrödinger's version of quantum mechanics quickly became the most popular, and was developed into a standard version which became known as the Copenhagen Interpretation, because one of the leading proponents of the idea, Niels Bohr, worked in Copenhagen. This is the version that I am going to tell you about now, the version Richard Feynman learned as a student; but you should not imagine for one moment that it is the ultimate truth about quantum physics, or that electrons “really are” waves. If you only want to do calculations about the outcome of experiments involving subatomically tiny things like electrons it works fine; and for half a century hardly anybody worried about what the quantum world is “really” like. In a memorable phrase coined by John Bell (of whom more shortly), the quantum world behaves “for all practical purposes” (FAPP) as if electrons were waves obeying the Schrödinger equation as interpreted by the Copenhagen school.

  The fundamental feature of the Copenhagen Interpretation is that a quantum entity such as an electron can be represented by a wave, described by a mathematical expression called a wave function, which evolves according to Schrödinger's wave equation. This wave occupies a large volume of space (potentially, an infinitely large volume). The wave function has a value at every point in space, and the size of this value (strictly, the square of its magnitude) is interpreted, following a suggestion made by Max Born, as representing the probability of finding the electron at that point. In some places, the wave is, in a sense, strong (the number associated with the wave function is large), and there is a high probability that if we look for the electron we will find it in one of those places; in other places the wave is weak, and there is a small probability of finding the electron in one of those places. But when we look for the electron we do find it in a definite place, like a particle, not as a spread-out wave. The wave function is said to “collapse” onto that place. But as soon as the experiment is over, the wave starts spreading out across the Universe. It is this combination of waves, probability and collapse which makes up the Copenhagen Interpretation, and which von Neumann wrapped up in one neat package in his book.
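  In the standard textbook notation (which the passage above deliberately avoids, but which may help some readers), the wave function ψ(x, t) of a single electron moving in one dimension evolves according to Schrödinger's equation, and Born's rule turns its squared magnitude into a probability:

    i\hbar\,\frac{\partial\psi(x,t)}{\partial t}
        = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\psi(x,t)}{\partial x^{2}} + V(x)\,\psi(x,t),
    \qquad
    P(a \le x \le b,\,t) = \int_{a}^{b} \lvert\psi(x,t)\rvert^{2}\,dx,
    \qquad
    \int_{-\infty}^{\infty} \lvert\psi(x,t)\rvert^{2}\,dx = 1.

  Here m is the electron's mass, V(x) is the potential energy it moves in, and ħ is Planck's constant divided by 2π. The “strength” of the wave at a point is |ψ|², and the collapse described above amounts to replacing a spread-out ψ by one sharply peaked at the place where the electron was found.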

  It's worth spelling this out in detail. Imagine that we have a single electron confined in a large box. According to the Copenhagen Interpretation, the wave function fills the box evenly—the chance of finding the electron at any point in the box is the same as the chance of finding it anywhere else in the box. Now, we take a measurement to detect the electron. We find it, looking just like a little particle, at a definite point in the box. But as soon as we stop monitoring the electron, the wave function immediately spreads out from the point where we discovered it. If we quickly take another measurement, there is a high probability of finding the electron close to the place where we last saw it. That matches common sense—but there is still some quantifiable probability of finding it anywhere in the box. The longer we wait, the more the wave function develops, and the chances of finding the electron anywhere in the box even out. That's weird, but not completely crazy.
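  The book gives no numbers for how fast this spreading happens, but a standard result gives a feel for it: a free electron that starts out as a Gaussian wave packet of width σ₀ (a deliberate simplification of the particle-in-a-box picture above, and my own illustration rather than the author's) spreads so that its width grows as σ(t) = σ₀·sqrt(1 + (ħt/(2mσ₀²))²). A short Python sketch of that formula:

    import math

    HBAR = 1.054_571_817e-34   # reduced Planck constant, in joule-seconds
    M_E = 9.109_383_7e-31      # electron mass, in kilograms

    def packet_width(sigma0: float, t: float) -> float:
        """Width of a free-electron Gaussian wave packet after t seconds."""
        return sigma0 * math.sqrt(1.0 + (HBAR * t / (2.0 * M_E * sigma0**2))**2)

    sigma0 = 1e-9   # electron initially localized to about a nanometre
    for t in (1e-15, 1e-12, 1e-9):
        print(f"after {t:.0e} s the packet is roughly {packet_width(sigma0, t):.1e} m wide")

  Run it and the packet grows from a nanometre to tens of micrometres within a nanosecond, which is why, left unobserved, the chances of finding the electron soon even out across any realistic box.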

  But it is just the beginning. Richard Feynman was fond of presenting what he called “the central mystery” of quantum mechanics by applying the Copenhagen Interpretation to a description of what happens to an electron (or any other quantum entity) when it passes through what he called “the experiment with two holes.” It “has in it,” he said, “the heart of quantum mechanics.”
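  As a pointer to where that discussion is heading, in the same standard notation as before (again my addition, not the author's): if ψ₁ and ψ₂ are the amplitudes for the electron to reach a given point on the screen by way of hole 1 or hole 2, the Copenhagen rules give the probability of detecting it there as

    P = \lvert\psi_{1} + \psi_{2}\rvert^{2}
      = \lvert\psi_{1}\rvert^{2} + \lvert\psi_{2}\rvert^{2} + 2\,\mathrm{Re}\!\left(\psi_{1}^{*}\,\psi_{2}\right),

  and it is the final cross term, which can be positive or negative from place to place, that builds up the interference pattern described in the figure caption above, even when the electrons pass through the apparatus one at a time.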