Working at von Neumann's direction, Goldstine took von Neumann's notes and the letters they had exchanged and prepared a document just over 100 pages long, called “First Draft of a Report on the EDVAC,” which was reproduced in limited numbers and distributed to a select few at the end of June 1945. The ideas discussed in this report were less advanced than those of Alan Turing described in the previous chapter, but were much more influential, because of the way they were promoted.
But the way they were promoted did not please everyone. The snag was that the report appeared with only one name on it as author: that of John von Neumann. Adding injury to insult, the document was later regarded as a publication in the legal sense of the term, placing the ideas in the public domain and preventing them from being patented. And compounding the injury, it transpired that in May 1945 von Neumann had signed a lucrative consultancy deal with IBM, in exchange for which he assigned (with a few exceptions) all the rights to his ideas and inventions to them. Hardly surprising that Eckert later complained that “he sold all our ideas through the back door to IBM.”14
Von Neumann now decided that he really wanted a computer at the IAS, where he was based, and with breathtaking insouciance asked the Moore School team to join him there. Goldstine took up the offer; Mauchly and Eckert left the academic world and formed the Electronic Control Company, which became a pioneering and successful commercial computer business, achieving particular success with their UNIVAC machine (Universal Automatic Computer). The EDVAC project staggered on without them, but by the time it was completed, in 1951, it was largely outdated, made obsolescent by other machines built using the architecture described in the “First Draft”—machines conforming to what was now widely known as the von Neumann architecture, bottleneck and all.
The scene was now set for decades of development of those ideas, with valves giving way to transistors and chips, machines getting smaller, faster and more widely available, but with no essential change in the logical structure of computers. The Turing machine in your pocket owes as much to von Neumann (who always acknowledged his debt to “On Computable Numbers”) as to Turing, but is no more advanced in terms of its logical structure than EDVAC itself.
There is no need to go into details about the development of faster and more powerful computers in the second half of the twentieth century. But I cannot resist mentioning one aspect of that story. In a book published as late as 1972, Goldstine commented: “It is however remarkable that Great Britain had such vitality that it could immediately after the war embark on so many well-conceived and well-executed projects in the computer field.”15 Goldstine had been at the heart of the development of electronic computers, but the veil (actually, more like an iron curtain) of secrecy surrounding the British codebreaking activities was such that a quarter of a century after the developments described here, he was unaware of the existence of Colossus, and thought that the British had had to start from scratch on the basis of the “First Draft.” What they had really done, as we have seen, was even more remarkable.
SELF-REPLICATING ROBOTS
This isn't quite the end of the story of von Neumann and the machines. Like Turing, von Neumann was fascinated by the idea of artificial intelligence, although he had a different perspective on the rise of the robot. But unlike Turing, he lived long enough (just) to begin to flesh out those ideas.
There were two parts to von Neumann's later work. He was interested in the way that a complex system like the brain can operate effectively even though it is made up of fallible individual components, neurons. In the early computers (and many even today), if one component, such as a vacuum tube, failed, the whole thing would grind to a halt. Yet in the human brain it is possible for the “hardware” to suffer massive injuries and continue to function adequately, if not quite in the same way as before. And he was also interested in the problem of reproduction. Jumping off from Turing's idea of a computer that could mimic the behavior of any other computer,16 he suggested, first, that there ought to be machines that could make copies of themselves and, second, that there could be a kind of universal replicating machine that could make copies of itself and also of any other machine. Both kinds of mimic, or copying machine, come under the general heading “automata.”
Von Neumann's interests in working out how workable devices can be made from parts prone to malfunction, and in how complex a system would have to be in order to reproduce itself, both began to grow in 1947. This was partly because he was moving on from the development of computers like the one then being built at the IAS and other offspring of EDVAC, but also because he became involved in the pressing problem for the US Air Force in the early 1950s of how to develop missiles controlled by “automata” that would function perfectly, if only during the brief flight time of the rocket.
Von Neumann came up with two theoretical solutions to the problem of building near-infallible computing machines out of fallible, but reasonably accurate, components. The first is to set up each component in triplicate, with a means to compare automatically the outputs of the three subunits. If all three results, or any two results, agree, the computation proceeds to the next step, but if none of the subunits agrees with any other, the computation stops. This “majority voting” system works pretty well if the chance of any individual subunit making a mistake is small enough. It is even better if the number of “voters” for each step in the calculation is increased to five, seven, or even more. But this has to be done for every step of the computation (not just every “neuron”), vastly (indeed, exponentially) increasing the amount of material required. The second technique involves replacing single lines for input and output by bundles containing large numbers of lines—so-called multiplexing. The data bit (say, 1) from the bundle would only be accepted if a certain proportion of the lines agreed that it was correct. This involves complications which I will not go into here;17 the important point is that although neither technique is practicable, von Neumann proved that it is possible to build reliable machines, even brains, from unreliable components.
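To see how the majority-voting scheme behaves, here is a minimal sketch in Python; it is my own illustration rather than von Neumann's construction, and the choice of a NAND gate, the 1 percent error rate and the function names are all assumptions made for the example. It runs the same unreliable gate in triplicate and accepts the majority verdict:

import random

def unreliable_nand(x, y, error_rate=0.01):
    # A NAND gate that gives the wrong answer with a small probability.
    correct = not (x and y)
    if random.random() < error_rate:
        return not correct
    return correct

def majority_vote(x, y, copies=3, error_rate=0.01):
    # Run the same unreliable gate in several independent copies and
    # accept whichever answer more than half of them agree on.
    votes = [unreliable_nand(x, y, error_rate) for _ in range(copies)]
    return sum(votes) > copies // 2

# The correct answer to NAND(True, True) is False, so every True is an error.
trials = 100_000
single_wrong = sum(unreliable_nand(True, True) for _ in range(trials))
voted_wrong = sum(majority_vote(True, True) for _ in range(trials))
print(f"single gate wrong about {single_wrong / trials:.4f} of the time")  # roughly 0.01
print(f"3-way vote wrong about {voted_wrong / trials:.4f} of the time")    # roughly 0.0003

With a 1 percent error rate, the vote only goes wrong when at least two of the three copies fail together, which happens a few times in ten thousand trials; adding more voters, or voting at every step of a long computation, drives the error rate down further, at the cost in extra hardware described above.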
As early as 1948, von Neumann was lecturing on the problem of reproduction to a small group at Princeton.18 The biological aspects of the puzzle were very much in the air at the time, with several teams of researchers looking for the mechanism by which genetic material is copied and passed from one generation to the next; it would not be until 1953 that the structure of DNA was determined. And it is worth remembering that von Neumann trained as a chemical engineer, so he understood the subtleties of complex chemical interactions. So it is no surprise that von Neumann says that the copying mechanism performs “the fundamental act of reproduction, the duplication of the genetic material.” The surprise is that he says this in the context of self-reproducing automata. It was around this time that he also surmised that up to a certain level of complexity automata would only be able to produce less complicated offspring, while above this level not only would they be able to reproduce themselves, but “syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself.” He made the analogy with the evolution of living organisms, pointing out that “today's organisms are phylogenetically descended from others which were vastly simpler.” How did the process begin? Strikingly, von Neumann pointed out that even if the odds are against the existence of beings like ourselves, self-reproduction only has to happen once to produce (given time and evolution) an ecosystem as complex as that on Earth. “The operations of probability somehow leave a loophole at this point, and it is by the process of self-reproduction that they are pierced.”
By the early 1950s, von Neumann was working on the practicalities of a cellular model of automata. The basic idea is that an individual component, or cell, is surrounded by other cells, and interacts with its immediate neighbors. Those interactions, following certain rules, determine whether the cell reproduces, dies, or does nothing. At first, von Neumann thought three-dimensionally. Goldstine:
[He] bought the largest box of “Tinker Toys” to be had. I recall with glee his putting together these pieces to build up his cells. He discussed this work with [Julian] Bigelow and me, and we were able to indicate to him how the model could be achieved two-dimensionally. He thereupon gave his toys to Oskar Morgenstern's little boy Karl.
The two-dimensional version of von Neumann's model of cellular automata can be as simple as a sheet of graph paper on which squares are filled in with a pencil, or rubbed out, according to the rules of the model. But it is also now widely available in different forms that run on computers, and is sometimes known as the “game of life.” With a few simple rules, groups of cells can be set up that perform various actions familiar in living organisms. Some just grow, spreading as more cells grow around the periphery; others pulsate, growing to a certain size, dying back and growing again; others move, as new cells are added on one side and other cells die on the opposite side; and some produce offspring, groups of cells that detach from the main body and set off on their own. In his discussion of such systems, von Neumann also mentioned the possibility of arbitrary changes in the functioning of a cell, equivalent to mutations in living organisms.
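For readers who want to try the graph-paper version on a computer, here is a minimal sketch in Python of a two-dimensional cellular automaton of this general kind. It uses the simple rules popularized as Conway's “Game of Life” (a dead cell comes alive when it has exactly three live neighbors; a live cell survives with two or three), not von Neumann's far richer original rule set, so treat it purely as an illustration; the starting pattern is the familiar “glider,” one of the traveling groups of cells mentioned above:

def neighbors(cell):
    # The eight cells surrounding a given cell on the grid.
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live_cells):
    # One generation: a cell is alive in the next generation if it has exactly
    # three live neighbors, or if it is alive now and has exactly two.
    candidates = set(live_cells)
    for cell in live_cells:
        candidates |= neighbors(cell)
    next_generation = set()
    for cell in candidates:
        live_around = len(neighbors(cell) & live_cells)
        if live_around == 3 or (live_around == 2 and cell in live_cells):
            next_generation.add(cell)
    return next_generation

# The "glider": a five-cell group that crawls across the grid,
# like the moving patterns described in the text.
pattern = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(8):
    pattern = step(pattern)
print(sorted(pattern))  # the same shape, shifted two squares along a diagonal

After eight generations the same five-cell shape reappears two squares away along a diagonal, exactly the kind of moving pattern described above.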
Von Neumann did not live long enough to develop these ideas fully. He died of cancer on February 8, 1957, at the age of fifty-three. But he left us with the idea of a “universal constructor,” a development of Turing's idea of a universal computer—a machine which could make copies of itself and of any other machine: that is, a self-reproducing robot. Such devices are now known as von Neumann machines, and they are relevant to one of the greatest puzzles of our, or any other, time—is there intelligent life elsewhere in the Universe? One form of a von Neumann machine would be a space-traveling robot that could move between the stars, stopping off whenever it found a planetary system to explore it and build copies of itself to speed up the exploration, while sending other copies off to other stars. Starting with just one such machine, and traveling at speeds well within the speed of light limit, it would be possible to explore every planet in our home Milky Way galaxy in a few million years, an eyeblink as astronomical timescales go. The question posed by Enrico Fermi (Why, if there are alien civilizations out there, haven't they visited us?) then strikes with full force.
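That “few million years” is easy to check with a back-of-the-envelope sum. The figures below are assumptions of my own, chosen purely for illustration; the key point is that because every probe builds fresh copies of itself at each stop, the frontier of exploration advances at roughly the cruising speed of the probes themselves:

# Back-of-the-envelope check of the "few million years" figure, on assumed
# numbers: probes cruising at a tenth of the speed of light, and a Galaxy
# roughly 100,000 light years across.
galaxy_diameter_light_years = 100_000
probe_speed_as_fraction_of_light = 0.1
crossing_time_years = galaxy_diameter_light_years / probe_speed_as_fraction_of_light
print(f"about {crossing_time_years:,.0f} years to sweep the Galaxy")  # about 1,000,000

Even at a hundredth of the speed of light the sweep takes only of order ten million years, still a sliver of the Galaxy's lifetime, which is why Fermi's question bites so hard.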
There's one other way to spread intelligence across the Universe, of which von Neumann was also aware. A universal constructor would operate by having blueprints, in the form of digitally coded instructions, which we might as well call programs, telling it how to build different kinds of machines. It would be far more efficient to spread this information across the Universe in the form of a radio signal traveling at the speed of light than in a von Neumann machine pottering along more slowly between the stars. If a civilization like ours detected such a signal, it would surely be copied and analyzed on the most advanced computers available, ideal hosts for the program to come alive and take over the operation of the computer. In mentioning this possibility, George Dyson makes an analogy with the way a virus takes over a host cell; he seems not to be aware of the entertaining variation on this theme discussed back in 1961 by astrophysicist Fred Hoyle in his fictional work A for Andromeda,19 where the interstellar signal provides the instructions for making (or growing) a human body with the mind of the machine. Hoyle, though, was well aware of the work of Turing and von Neumann.
There is something even more significant that Turing and von Neumann left us to ponder. How does our kind of intelligence “work” in the first place? Each of them was convinced that an essential feature of the human kind of intelligence is the capacity for error. In a lecture he gave in February 1947, Turing said:
…fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words, then, if a machine is expected to be infallible, it cannot also be intelligent.20
Wrapped up in this passage are two of Turing's ideas about our kind of intelligence. One is the process of learning by trial and error, the way, say, that a baby learns to walk and talk. We make mistakes, but we learn from the mistakes and make fewer errors of the same kind as time passes. His dream was to have a computer prepared in a blank state, capable of learning about its environment and growing mentally as it did so. That dream is now becoming a reality, at least in a limited sense—for example, with robots that learn how their arms move by watching their own image in a mirror. The second idea concerns intuition, and the way humans can sometimes reach correct conclusions on the basis of limited information, without going through all the logical steps from A to Z. A computer programmed to take all those logical steps could never make the leap if some steps were missing.
Von Neumann shared this view of the importance of errors. In lectures he gave at Caltech in 1952, later published as a contribution to a volume edited by John McCarthy and Claude Shannon,21 he said:
Error is viewed, therefore, not as an extraneous and misdirected or misdirecting accident, but as an essential part of the process.
If the capacity to make mistakes of the kind just discussed is what distinguishes the human kind of intelligence from the machine kind of intelligence, would it ever be possible to program a classical computer, based on the principles involved in the kind of machines discussed so far, to make deliberate mistakes and become intelligent like us? I think not, for reasons that will become clear in the rest of this book, but basically because I believe that the mistakes need to be more fundamental—part of the physics rather than part of the programming. But I also think it will indeed soon be possible to build non-classical machines with the kind of intelligence that we have, and the capacity for intellectual growth that Turing dreamed of.
Two questions that von Neumann himself raised are relevant to these ideas, and to the idea of spacefaring von Neumann machines:
Can the construction of automata by automata progress from simpler types to increasingly complicated types?
and
Assuming some suitable definition of efficiency, can this evolution go from less efficient to more efficient automata?
That provides plenty of food for thought about the future of computing and self-reproducing robots. But I'll leave the last word on Johnny von Neumann to Jacob Bronowski, no dullard himself, who described him as “the cleverest man I ever knew, without exception…but not a modest man.” I guess he had little to be modest about.
In the decades since EDSAC calculated the squares of the numbers from 0 to 99, computers have steadily got more powerful, faster and cheaper. Glowing valves have been replaced by transistors and then by chips, each of which contains the equivalent of many transistors; data storage on punched cards has been superseded by magnetic tape and discs, and then by solid state memory devices. Even so, the functioning of computers based on all of these innovations would be familiar to the pioneers of the 1940s, just as the functioning of a modern airplane would be familiar to the designers of the Hurricane and Spitfire. But the process cannot go on indefinitely; there are limits to how powerful, fast and cheap a “classical” computer can be.
One way of getting a handle on these ideas is in terms of a phenomenon known as Moore's Law, after Gordon Moore, one of the founders of Intel, who pointed it out in 1965. It isn't really a "law" so much as a trend. In its original form, Moore's Law said that the number of transistors on a single silicon chip doubles every year; with another half-century of observation of the trend, today it is usually quoted as a doubling every eighteen months. And to put that in perspective, the number of transistors per chip has now passed the billion mark. That's like a billion-valve Manchester Baby or EDVAC on a single chip, occupying an area of a few hundred square millimeters.1 At the same time, the cost of individual chips has plunged, while they have become more reliable and their use of energy has become more efficient.
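To put a doubling every eighteen months into more familiar terms, here is a line or two of arithmetic in Python (my own illustration of the growth rate only, not a fit to the real history of chips):

# What "a doubling every eighteen months" amounts to as pure arithmetic.
doubling_time_years = 1.5
for years in (3, 10, 20):
    factor = 2 ** (years / doubling_time_years)
    print(f"after {years:2d} years: growth by a factor of about {factor:,.0f}")
# after  3 years: growth by a factor of about 4
# after 10 years: growth by a factor of about 102
# after 20 years: growth by a factor of about 10,321

Roughly speaking, then, a doubling every eighteen months means a hundredfold increase every decade.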
But there are problems at both ends of the scale. Although the cost of an individual chip is tiny, the cost of setting up a plant to manufacture chips is huge. The production process involves using lasers to etch patterns on silicon wafers on a tiny scale, in rooms which have to be kept scrupulously clean and free from any kind of contamination. Allowing for the cost of setting up such a plant, the cost of making a single example of a new type of chip is in the billions of dollars; but once you have made one, you can turn out identical chips at virtually no unit cost at all.
There's another large-scale problem that applies to the way we use computers today. Increasingly, data and even programs are stored in the Cloud. “Data” in this sense includes your photos, books, favorite movies, e-mails and just about everything else you have “on your computer.” And “computer,” as I have stressed, includes the Turing machine in your pocket. Many users of smartphones and tablets probably neither know nor care that this actually means that the data are stored on very large computers, far from where you or I are using our Turing machines. But those large computer installations have two problems. They need a lot of energy in the form of electricity; and because no machine is 100 percent efficient they release a lot of waste energy, in the form of heat. So favored locations for the physical machinery that represents the ephemeral image of the Cloud are places like Iceland and Norway, where there is cheap electricity (geothermal or just hydroelectric) and it is cold outside. Neither of these large-scale problems is strictly relevant to the story I am telling here, but it is worth being aware that there must be limits to such growth, even if we cannot yet see where those limits are.