Daniel Dennett published an article titled “‘A Perfect and Beautiful Machine’: What Darwin’s Theory of Evolution Reveals About Artificial Intelligence” in The Atlantic on 22 June 2012. What follows is the text of a post of mine on the Polanyi Discussion List, polanyi_list.
Daniel Dennett is performing conjuring tricks for his Atlantic audience.
To this day many people cannot get their heads around the unsettling idea that a purposeless, mindless process can crank away through the eons, generating ever more subtle, efficient, and complex organisms without having the slightest whiff of understanding of what it is doing.
How true. Drop the last couple of clauses, and he’s describing Polanyi.
In order to be a perfect and beautiful computing machine it is not requisite to know what arithmetic is.
This in bold, no less. I’ll come back to this.
Right there we see the reduction of all possible computation to a mindless process. We can start with the simple building blocks Turing had isolated, and construct layer upon layer of more sophisticated computation, restoring, gradually, the intelligence Turing had so deftly laundered out of the practices of human computers.
I didn’t juxtapose those two sentences; that’s how they appear, complete with italics.
No less a thinker than Roger Penrose has expressed skepticism about the possibility that artificial intelligence could be the fruit of nothing but mindless algorithmic processes.
Naughty Daniel. The passage linked is from The Emperor’s New Mind. Following the link, you will discover that the passage is not about artificial intelligence, but about human intelligence. Penrose is expressing a skepticism, made potent by his name, that strikes, not at the heart of Turing’s work, but at the heart of Dennett’s. Strangely, Dennett misrepresents this.
He introduces the sorta function. Early Arithmetic Processing Units sorta understand addition. Communication programs sorta understand that they are checking for communication errors. A chess program playing a Grand Master sorta understands that its queen is in jeopardy.
About such elements, what
it is can be described in terms of the structural organization of the parts from which it is made… What it does is some (cognitive) function that it (sorta) performs — well enough so that at the next level up, we can make the assumption that we have in our inventory a smarter building block that performs just that function — sorta, good enough to use.
Here’s where it helps to have a background in software or hardware engineering. There is no such difference between what a circuit or a programmed function is and what it does. It does what it is designed and specified to do: nothing more, nothing less, unless it has bugs. Dennett proceeds,
This is the key to breaking the back of the mind-bogglingly complex question of how a mind could ever be composed of material mechanisms. What we might call the sorta operator is, in cognitive science, the parallel of Darwin’s gradualism in evolutionary processes. Before there were bacteria there were sorta bacteria, and before there were mammals there were sorta mammals and before there were dogs there were sorta dogs, and so forth. We need Darwin’s gradualism to explain the huge difference between an ape and an apple, and we need Turing’s gradualism to explain the huge difference between a humanoid robot and hand calculator.
The topic is the materiality of mind. But sorta in cognitive science parallels Darwinian gradualism. So, in this instance, “cognitive science” must refer to artificial intelligence. Trouble is, sorta dogs had to be actual not-dogs-of-some-sort. Likewise for sorta mammals, and sorta bananas, and sorta everything. In their time, they were all actual somethings. By the same token, the world must be full of sorta somethings now. We just don’t know how to recognise them. However, as pointed out above, that does not apply to circuits and programs. They are not sorta anything; they are precisely what they are designed to do.
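To make that last point concrete, here is a small sketch of my own, in Python; it is not taken from Dennett’s article, and the function names and details are my invention. It writes out two of the “sorta understanding” components he gestures at, an adder and an error check. Everything each component “is” and everything it “does” is right there in its specification; there is no remainder for comprehension to occupy.

```python
# A minimal illustration (mine, not Dennett's): two "sorta understanding"
# components written out in full. Each one simply is its specification.

def add_8bit(a: int, b: int) -> int:
    """'Sorta understands' addition: ripple-carry addition of two 8-bit
    integers, built from nothing but bitwise operations (any carry out of
    bit 7 is discarded, as in real hardware)."""
    result, carry = 0, 0
    for i in range(8):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        total = bit_a ^ bit_b ^ carry          # sum bit of a full adder
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))
        result |= total << i
    return result

def parity_ok(frame: bytes, parity_bit: int) -> bool:
    """'Sorta understands' error checking: an even-parity test over a frame.
    It flags corrupted frames without any notion of 'error' or 'message'."""
    ones = sum(bin(byte).count("1") for byte in frame)
    return (ones + parity_bit) % 2 == 0

if __name__ == "__main__":
    print(add_8bit(57, 68))        # 125
    print(parity_ok(b"hello", 1))  # True: 21 data bits set, plus the parity bit, is even
```

Compose such components into a calculator or a communications protocol and you get more elaborate specified behaviour, not the first glimmer of understanding.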
…we eventually arrive at parts so (sorta) intelligent that they can be assembled into competences that deserve to be called comprehending.
Let’s go back for a moment to the bolded statement, which was also selected by the editors as the signature quote for the whole article. “In order to be a perfect and beautiful computing machine it is not requisite to know what arithmetic is.” So how did we arrive at the “deserving” machine? In order to be a perfect and beautiful computing machine, is it required to deserve to be called comprehending?
“There is no threshold above which true comprehension is to be found,” says Dennett. In fact, there is a threshold below which no comprehension is to be found. I can confidently assert that no machine created by human beings out of non-living materials has ever, or will ever, comprehend anything; not the telephone system, not Watson, not my old five-function calculator, not an abacus – nothing, not a skerrick of comprehension.
(My proviso is meant to exclude the truly horrifying prospect of the construction of artificial biological systems by molecular-level manufacturing processes. That, however, is not what Dennett is talking about.)
How can my assertion ever be proven wrong?
Here’s another: I assert that my species is composed of people who, like me, are self-conscious individuals who contemplate their own and others’ individuality with their minds.
How can my assertion ever be proven wrong? Or right?
It seems to me that, in addition to the various moral inversions that have been, and are, current, we labour under a logical inversion. Without ever having to prove their case, or even to demonstrate a plausible program for proving it, scientific materialists have managed to establish a virtually unchallenged base position that consciousness is a physical phenomenon, comprehensible with the currently available tools of science. Consequently, anyone who makes the irrefutable observation (as Michael Polanyi did) that there is a vast logical gulf separating the subjective experience of being – consciousness – from any observations that can be made of the conscious individual, is held not to be stating a simple truism, but to be making an unsupportable assertion.
Meanwhile, the assertion of the existence of other minds like my own remains the bedrock assumption of all human experience. One might almost say that it is a tacit assumption.
Comprehension is one of the jewels in the crown of consciousness. The desire and struggle to achieve it is the engine of intellectual innovation. It is at the tacit heart of human thought and action. Desire, struggle, the sense of an impending solution, the conviction that a solution, though remote, is feasible along this axis, the Eureka moment, the majestic triumph of Kepler, the dogged campaign to persuade: this is the stuff of human comprehension on those peaks of achievement that interested Polanyi.
Dennett tries to blur the distinction between the human faculty of comprehension and the deterministic workings of computing machines by trivialising the range of understanding and comprehension involved in any human knowledge. He applies sorta again. But just as his usage for evolution was sorta different from his usage for circuitry, so it is sorta different again in this context. Maybe no-one will notice, because it’s sorta similar.
We still haven’t arrived at “real” understanding in robots, but we are getting closer. That, at least, is the conviction of those of us inspired by Turing’s insight… If the history of resistance to Darwinian thinking is a good measure, we can expect that long into the future, long after every triumph of human thought has been matched or surpassed by “mere machines,” there will still be thinkers who insist that the human mind works in mysterious ways that no science can comprehend.
Dennett ends with a fine burst of question-begging; seemingly the common currency of “cognitive science.” “Getting closer… is the conviction of…us…long after every triumph of human thought has been matched or surpassed by ‘mere machines’…” And Dawkins is in the habit of saying, of Christians, “they don’t have a shred of evidence for any of this.” Both are expressing a philosophical commitment.
I’m out of my depth mathematically; however, the Wikipedia article on Turing machines (one thinks of the Mille Miglia) gives some interesting background to Turing’s work on computability, and its relationship to Gödel’s contemporaneous developments. This association is mentioned by Polanyi in the presentation notes to the Manchester seminar (with dissenting comments in the margin), and by Dennett here. If I understand correctly, Gödel was defining the limits of mathematical systems, and Turing likewise explored the limits of computability.
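For anyone coming to this as I did, a toy illustration may help. The following is my own simplified sketch in Python (a bounded tape and two states), not anything from the Wikipedia article or from Turing: a Turing machine is nothing but a finite table of rules driving a read/write head along a tape, and in principle every digital computation reduces to tables of this kind.

```python
# A toy Turing machine (my own illustrative sketch): a finite state table,
# a tape, and a head. This one adds 1 to a binary number written on the tape,
# working from the rightmost digit leftward.

def run_turing_machine(tape: str) -> str:
    cells = list(tape)
    head = len(cells) - 1          # start at the least significant bit
    state = "carry"                # we begin with a carry of 1 to add

    # transition table: (state, symbol) -> (symbol to write, head move, next state)
    table = {
        ("carry", "1"): ("0", -1, "carry"),   # 1 plus carry -> 0, keep carrying
        ("carry", "0"): ("1",  0, "halt"),    # 0 plus carry -> 1, done
    }

    while state != "halt":
        if head < 0:                          # ran off the left edge:
            cells.insert(0, "1")              # the carry becomes a new digit
            break
        write, move, state = table[(state, cells[head])]
        cells[head] = write
        head += move

    return "".join(cells)

if __name__ == "__main__":
    print(run_turing_machine("1011"))   # 1100  (11 + 1 = 12)
    print(run_turing_machine("111"))    # 1000  ( 7 + 1 =  8)
```

That is the whole of Turing’s simple building block: a lookup in a table. The question Gödel and Turing each posed, in their own idioms, was what such systems can, and cannot, ever reach.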
The “maybe”[1] article could explore the relationship between Turing and Polanyi, stressing that Polanyi insisted that physics, chemistry and biology, as they currently stand, cannot account for mind. At the same time, he believed that evolution was the process by which human beings, and their minds, came to be, but that it did so by triggering an ordering principle inherent in the universe in such a way as to initiate a self-sustaining, and increasingly self-driven, directed arc of evolution.
[1] On the list, it had been suggested that a response to Dennett’s article might be written.