Phil. 2220: The mind
Lecture notes, 18/11/02
ANNOUNCEMENTS
The Turing Test: What grounds are there for claiming that the guy, or the system of which he is a part, really understands the Chinese stories? The only plausible answer is that the guy, or the system, can pass the Turing Test—can fool native Chinese speakers. But the thought experiment is meant to challenge that very idea, the idea that passing the Turing Test suffices for understanding. Compare the guy/system’s understanding of English sentences with the guy/system’s “understanding” of Chinese ones. There is a radical difference. It’s no good to reply that the system understands both English and Chinese because it passes the Turing Test with respect to questions in both English and Chinese. The very adequacy of the Test is part of what is at issue.
The Other Minds Reply: There is an epistemological problem re: other minds—how do we know that there are thinkers other than ourselves? But forget about how we know this and just assume (as we all do) that there are other minds. The question is: Is the guy/system one of these others? Searle’s thought experiment appears to show that it is not.
The two views of MBP (“Minds, Brains, and Programs”):
The negative view: Manipulating formal symbols is not sufficient for understanding or intentionality. Why not? Because the guy in the Chinese room is a formal symbol manipulator but not an understander.
The positive view: An answer to the following question: What is it that the English system has that the Chinese system does not?
“What matters about brain operations is not the formal shadow cast by the sequence of synapses but rather the actual properties of the sequences. All the arguments for the strong version of artificial intelligence that I have seen insist on drawing an outline around the shadows cast by cognition and then claiming that the shadows are the real thing.”
“Stones, toilet paper, wind, and water pipes are the wrong kind of stuff to have intentionality in the first place—only something that has the same causal powers as brains can have intentionality—and though the English speaker has the right kind of stuff for intentionality you can easily see that he doesn't get any extra intentionality by memorizing the program, since memorizing it won't teach him Chinese.”
“No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything? It is sometimes said that it would be frightfully hard to get computers to feel pain or fall in love, but love and pain are neither harder nor easier than cognition or anything else. For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter. That is all the computer has for anything it does. To confuse simulation with duplication is the same mistake, whether it is pain, love, cognition, fires, or rainstorms.”
“The single most surprising discovery that I have made in discussing these issues is that many AI workers are quite shocked by my idea that actual human mental phenomena might be dependent on actual physical-chemical properties of actual human brains. But if you think about it a minute you can see that I should not have been surprised; for unless you accept some form of dualism, the strong AI project hasn't got a chance. The project is to reproduce and explain the mental by designing programs, but unless the mind is not only conceptually but empirically independent of the brain you couldn't carry out the project, for the program is completely independent of any realization. Unless you believe that the mind is separable from the brain both conceptually and empirically—dualism in a strong form—you cannot hope to reproduce the mental by writing and running programs since programs must be independent of brains or any other particular forms of instantiation. If mental operations consist in computational operations on formal symbols, then it follows that they have no interesting connection with the brain; the only connection would be that the brain just happens to be one of the indefinitely many types of machines capable of instantiating the program. This form of dualism is not the traditional Cartesian variety that claims there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain. This underlying dualism is masked from us by the fact that AI literature contains frequent fulminations against "dualism"; what the authors seem to be unaware of is that their position presupposes a strong version of dualism.”
“"Could a machine
think?" My own view is that only
a machine could think, and indeed only very special kinds of machines, namely
brains and machines that had the same causal powers as brains. And that is the
main reason strong AI has had little to tell us about thinking, since it has
nothing to tell us about machines. By its own definition, it is about programs,
and programs are not machines. Whatever else intentionality is, it is a
biological phenomenon, and it is as likely to be as causally dependent on the
specific biochemistry of its origins as lactation, photosynthesis, or any other
biological phenomena. No one would suppose that we could produce milk and sugar
by running a computer simulation of the formal sequences in lactation and
photosynthesis, but where the mind is concerned many people are willing to
believe in such a miracle because of a deep and abiding dualism: the mind they
suppose is a matter of formal processes and is independent of quite specific
material causes in the way that milk and sugar are not.”
Question: What are these “causal powers” possessed by the brain that Searle thinks must be replicated if understanding and intentionality are to be replicated? More importantly, is the idea that these causal powers must be replicated an advance? Does the idea help us with the question of what distinguishes the Chinese system from the English one?
IS THE BRAIN A DIGITAL COMPUTER? (IBDC):
The strengthening of the Chinese Room argument: The original argument shows that syntax is not sufficient for semantics. But the argument can be strengthened by noting that syntax is not intrinsic to physics (or, better, that syntax is not intrinsic to the physical nature of the world). Syntactical processes are not intrinsic to physical systems. A process is syntactical only if viewed as such by a being capable of so doing. Being syntactical is observer-relative.
The homunculus fallacy and “recursive decomposition”: “Many writers feel that the homunculus fallacy is not really a problem, because, with Dennett (1978), they feel that the homunculus can be "discharged". The idea is this: Since the computational operations of the computer can be analyzed into progressively simpler units, until eventually we reach simple flip-flop, "yes-no", "1-0" patterns, it seems that the higher-level homunculi can be discharged with progressively stupider homunculi, until finally we reach the bottom level of a simple flip-flop that involves no real homunculus at all. The idea, in short, is that recursive decomposition will eliminate the homunculi.
It took me a long time to figure out what these people were driving at, so in case someone else is similarly puzzled I will explain an example in detail: Suppose that we have a computer that multiplies six times eight to get forty-eight. Now we ask "How does it do it?" Well, the answer might be that it adds six to itself seven times. But if you ask "How does it add six to itself seven times?", the answer might be that, first, it converts all of the numerals into binary notation, and second, it applies a simple algorithm for operating on binary notation until finally we reach the bottom level at which the only instructions are of the form, "Print a zero, erase a one." So, for example, at the top level our intelligent homunculus says "I know how to multiply six times eight to get forty-eight". But at the next lower level he is replaced by a stupider homunculus who says "I do not actually know how to do multiplication, but I can do addition." Below him are some stupider ones who say "We do not actually know how to do addition or multiplication, but we know how to convert decimal to binary." Below these are stupider ones who say "We do not know anything about any of this stuff, but we know how to operate on binary symbols." At the bottom level are a whole bunch of homunculi who just say "Zero one, zero one". All of the higher levels reduce to this bottom level. Only the bottom level really exists; the top levels are all just as-if.”
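An illustrative sketch (mine, not Searle's or the lecture's): the same recursive decomposition written as a short Python program, one function per "homunculus". For convenience the repeated addition starts from zero, so six is added eight times rather than added to itself seven times; the result is the same. Each level only calls the stupider level below it, and the bottom level is nothing but bare operations on binary digits.

    def multiply(a, b):
        # Top-level homunculus: "I know how to multiply" -- but only by
        # handing repeated additions down to the level below.
        result = 0
        for _ in range(b):
            result = add_binary(result, a)
        return result

    def add_binary(x, y):
        # Middle homunculus: "I only know how to add" -- and only by
        # manipulating binary digits (carry-ripple addition on bits).
        while y != 0:
            carry = x & y      # bottom level: bare "zero one, zero one" moves
            x = x ^ y          # on the binary representations
            y = carry << 1
        return x

    print(multiply(6, 8))      # prints 48

Nothing in the program "knows" multiplication as such; the question Searle presses is whether the top-level description is therefore merely as-if.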