## All Perfectly Logical

---

**The Universal Computer: The Road from Leibniz to Turing**

By Martin Davis

W.W. Norton & Co.; 237 pp. $26.95

**The Computer and the Brain**

By John von Neumann

Yale University Press; 112 pp. $9.95

Every morning I would sit down before a blank sheet of paper. Throughout the day, with a brief interval for lunch, I would stare at the blank sheet. Often when evening came it was still empty … [T]he two summers of 1903 and 1904 remain in my mind as a period of complete intellectual deadlock … [I]t seemed quite likely that the whole of the rest of my life might be consumed in looking at that blank sheet of paper.

That is from Bertrand Russell's autobiography. What was stumping him was the attempt to find a definition of "number" in terms of pure logic. What does "three," for example, actually mean? The German logician Gottlob Frege had come up with an answer: "three" is merely the set of all threesomes, the set of all those sets whose members can be exhaustively paired off with Larry, Curly and Moe.
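Frege's idea can be caricatured in a few lines of Python. This is a sketch only, and my own invention rather than anything in the books under review: for finite sets, a bijection with the Stooges exists exactly when the sizes match, so size-matching stands in here for Frege's explicit pairing-off.

```python
# A toy sketch (not Frege's actual formalism): "three" as the property
# of being pairable one-for-one with {Larry, Curly, Moe}.

STOOGES = {"Larry", "Curly", "Moe"}

def is_threesome(s):
    """A set is a threesome iff its members can be exhaustively paired
    off with the Stooges. For finite sets, a bijection exists exactly
    when the two sets have the same number of members."""
    return len(s) == len(STOOGES)

print(is_threesome({"Athos", "Porthos", "Aramis"}))  # True
print(is_threesome({"Romulus", "Remus"}))            # False
```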

However, if the concept "the set of all sets with a given property" can be used indiscriminately, as Frege used it, then we can construct the set *W* of all sets that are not members of themselves. The set of all turtles is not a member of itself, since it is a set, not a turtle. It is therefore a member of *W*. But the set of all things that can be defined in fewer than a hundred words is a member of itself, and therefore not a member of *W*. Now pose the question: is *W* a member of *W*? If it is, it isn't, by definition; and if it isn't, it is. This contradiction is named "Russell's antinomy," and, until a way round it could be found, the enterprise that both Frege and Russell were engaged upon — the derivation of mathematics from logic — was dead in the water.

If you had asked Russell, during those summers of frustration, whether his perplexities were likely to
lead to any practical application, he would have hooted with laughter. This was the purest of pure
intellection, to the degree that even Russell, a pure mathematician by training, found himself wondering what
the point was: "It seemed unworthy of a grown man to spend his time on such trivialities …
" So little can we tell where disinterested inquiry will lead! In fact, Russell's work brought forth
*Principia Mathematica*, a key advance in one of the strangest and most unexpected enterprises of the
modern age. Among the fruits of that enterprise have been, so far, victory in World War Two (or at any rate,
victory at a lower cost than would otherwise have been possible) and machines like the one on which I am
writing this review.

*The Universal Computer* tells this story in eight chapters, each concentrating on a key figure
in the story: Leibniz, Boole, Frege, Cantor, Hilbert, Gödel, Turing and von Neumann. Some of those names
will be familiar to any educated person; some have even escaped into the larger culture. Turing was the subject
of a rather good play by Hugh Whitemore, *Breaking the Code* (1986). He and Gödel both turn up as
characters in Apostolos Doxiadis's 1992 novel *Uncle Petros and Goldbach's Conjecture*, of which the
English translation was published last year to considerable success.

The strength of this book is in its tracing the continuous chain of events from Leibniz's early attempt at a calculus of propositions all the way through to the stored-program computers of our own time. To follow the stored-program concept backwards: developed by John von Neumann, it rests on the idea that code (the instructions that tell a computer how to act) and data (the stuff that is to be acted upon) can be represented in just the same way in a computer's memory.
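To make the idea concrete, here is a toy machine in Python, with a two-opcode instruction set invented for this review (it is nobody's real architecture): program and data sit side by side in one memory of plain integers, and only the machine's behavior distinguishes them.

```python
# A toy stored-program machine. Opcodes (invented for illustration):
#   1 = ADD src1 src2 dest   (mem[dest] = mem[src1] + mem[src2])
#   9 = HALT
# Instructions and data are both just integers in the same memory.

memory = [
    1, 7, 8, 7,   # ADD: mem[7] = mem[7] + mem[8]
    9, 0, 0,      # HALT
    10, 32,       # data: the two numbers being added
]

pc = 0  # program counter
while memory[pc] != 9:
    if memory[pc] == 1:
        src1, src2, dest = memory[pc + 1], memory[pc + 2], memory[pc + 3]
        memory[dest] = memory[src1] + memory[src2]
        pc += 4

print(memory[7])  # 42
```

Because the program lives in the same memory as its data, an instruction could just as well overwrite another instruction; a program that treats code as data is the stored-program concept in miniature.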

Von Neumann got this idea from Turing, whose imaginary "Turing machines" encoded both
instructions and data as arbitrary numbers. This in turn followed Gödel, who was able to prove important
theorems about symbolic logic by assigning numbers to the symbols. Both Turing and Gödel were inspired by
Hilbert's program to encompass both logic and mathematics in a common symbolism, more elegant and waterproof
than *Principia Mathematica*. After all, Russell had spotted the contradictions in Frege's system by
chance. Who could be sure there were not similar contradictions lurking undetected in the *Principia?*
Some more rigorous method was needed for scrutinizing the propositions generated by a symbolic system. Hilbert
developed such a method, which he called "metamathematics."
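Gödel's arithmetization can itself be sketched in a few lines of Python. The symbol codes below are arbitrary stand-ins, not Gödel's actual assignment; the point is only that the encoding is reversible by prime factorization, so statements *about* formulas become statements about ordinary numbers.

```python
# A simplified sketch of Goedel numbering: encode a string of symbols as
# a single integer by raising successive primes to the symbols' codes.
# The symbol codes here are hypothetical, chosen for illustration.

PRIMES = [2, 3, 5, 7, 11, 13]
CODES = {"0": 1, "=": 2, "S": 3}

def godel_number(symbols):
    n = 1
    for prime, sym in zip(PRIMES, symbols):
        n *= prime ** CODES[sym]
    return n

# "0=0" becomes 2^1 * 3^2 * 5^1 = 90. Since prime factorization is
# unique, the original formula can be recovered from the number.
print(godel_number("0=0"))  # 90
```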

The *Principia*, as I noted above, was an attempt to finish the work Frege had begun, the
derivation of mathematics from logic. Both Russell and Frege employed Cantor's set theory to define numbers,
and it was by contemplating Cantor's work on infinite sets that Russell uncovered those lethal contradictions
in Frege's system. Frege was trying to remove the circularity inherent in Boole's work: if, as Frege believed,
mathematics derives from logic, how can you reduce logic to a branch of applied mathematics, as Boole claimed
he had done? Boole seems not to have been aware of Leibniz's fragmentary researches, none of which had then
been published, and can fairly be given the honor of having breathed new life into the subject of logic.

There are quite a lot of books now attempting to explain advanced mathematical ideas to a general
educated public. It is a noble enterprise, but I cannot help wondering what proportion of those books that are
bought are ever finished. Davis is a decently good writer and has been intimately involved in this topic for
over half a century. He does his best with the material, but still there are parts where the reader will need
paper and pencil to follow the argument. Probably the ideal reader for *The Universal Computer* would be
someone who got a passing acquaintance with modern logic at college, and wants to refresh his understanding and
fill some gaps.

The author is at his weakest explaining Hilbert's metamathematics. I doubt, for example, if a
nonspecialist reader could grasp the difference between what Gödel proved about completeness in 1930 and
what Alonzo Church proved about decidability in 1936. By way of compensation, there is a good Hilbert anecdote,
though not my favorite one. (My favorite one: Hilbert's best student died suddenly and the family asked Hilbert
to give the graveside eulogy. At the proper time, Hilbert stepped up to the grave, weeping relatives all
around, and commenced: "So-and-so's death is a terrible loss. At the time he was taken from us, he was
developing some powerful new techniques for dealing with problems in function theory. Consider, for example, a
single-valued function *f*, meromorphic in some bounded open set *S*, … ")

A computer is, at its heart, simply an instantiation in electronic circuitry of the logical algebra worked out by Boole. Since logic is the systematization of deductive reasoning, which is an activity of the brain, the question irresistibly arises: How similar to what the human brain does is what the computer does? We all know that the brain can do many other things besides deductive reasoning. It can recognize a face, see the point of a joke, prefer one political party to another, form an opinion about a book on mathematical logic. Attempts to replicate these processes by means of algorithms have so far, after forty years of work, not been very satisfactory, to put it mildly. Is this because they are different in kind from the things computers do? Or are they just immensely more complicated?
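As a small illustration of that instantiation (my own sketch, not Davis's), here is a half-adder, the circuit that adds two one-bit numbers, written purely in terms of Boole's operations:

```python
# A half-adder: one-bit addition built from nothing but Boolean logic.
# XOR yields the sum bit, AND yields the carry bit -- the sense in which
# arithmetic reduces to Boole's algebra.

def half_adder(a, b):
    """Add the one-bit numbers a and b; return (sum_bit, carry_bit)."""
    total = a ^ b   # XOR gate
    carry = a & b   # AND gate
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
```

Chain enough of these gates together and you can add numbers of any width; a computer's arithmetic unit is just such a chain, realized in electronics.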

These questions were of great interest to John von Neumann, a genius of breathtaking scope, and the
person with the best claim to have invented the modern computer. *The Computer and the Brain* is the
text of some lectures von Neumann prepared in response to an invitation from Yale University. Tragically, the
lectures were never delivered. The invitation came in early 1955, the lectures to be given the following
spring. In August of 1955, however, von Neumann was diagnosed with bone cancer. By early 1956 he was thoroughly
disabled; he died in February of the following year at age 53.

On opening *The Computer and the Brain*, I expected to find it "of historical interest
only" (as one of my own professors used to say rather loftily of *Principia Mathematica*). To the
contrary, the book abounds with insights so deep they have not yet been internalized by any but a very small
number of specialists. For example:

It should also be noted that the message-system used in the nervous system … is of an essentially statistical character. In other words, what matters are not the precise positions of definite markers … but the statistical characteristics of their occurrence … [T]his leads to a lower level of arithmetic precision but to a higher level of logical reliability …

Which is why we are much better at recognizing faces than we are at multiplying ten-digit numbers, and computers contrariwise. One of the revelations of twentieth-century science, in fields from subatomic physics to genetics, is that the world is a very statistical place. Most of the great truths of our time are best expressed as probabilities. This has taken some getting used to — most of us, in fact, are still not used to it.

Everybody knows that mathematicians are burned out by age thirty. Like many things everybody knows, this
is not in fact true: de Branges was 52 when he proved the truth of the Bieberbach Conjecture, which had vexed
the best minds for 70 years. These last papers of von Neumann's show the power and fertility of his mind at the
time he died, and leave us wondering what else he might have given us if he had lived a normal lifespan. *Si
monumentum requiris, circumspice*.