Readings

  • Searle, John R. (1980). Minds, brains, and programs. In Behavioral and Brain Sciences, 3 (3), 417-457. doi:10.1017/S0140525X00005756
  • Section 11.6, Chapter 11 of Osherson, Daniel N., and Gleitman, Lila R. (Eds.) (1995). An Invitation to Cognitive Science, Vol. 3: Thinking. Cambridge, Mass.: MIT Press. [accessible from netlibrary.com]
  • Searle, John R. (1990). Is the Brain's Mind a Computer Program? In Scientific American, 262, 26-31.
  • Churchland, Paul, and Patricia Smith Churchland (1990). Could a Machine Think? In Scientific American, 262, 32-39.
  • Cole, David. The Chinese Room Argument. In The Stanford Encyclopedia of Philosophy. stanford:chinese-room.


Video: http://www.youtube.com/v/TryOC83PH1g

Introduction

  • Weak AI - "the principal value of the computer in the study of the mind is that it gives us a very powerful tool."
  • Strong AI - "the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states"
  • AI examples
  • The claim to be discussed - whether there is a computer program that, when implemented, is sufficient for the existence of mental states.
  • Which mental states? The mental state of understanding the Chinese language - an intentional mental state.

The Chinese Room Argument

  1. Suppose there is a program P that is sufficient for producing understanding of Chinese.
  2. In principle a person in the Chinese room can carry out P.
  3. But such a person would not understand Chinese.
  4. So P is not sufficient for producing understanding of Chinese.
  5. So there is no program sufficient for producing understanding of Chinese.

Comments

  • This is a reductio argument.
  • The argument can also be understood as a criticism of the Turing test.

Response #1: Deny premise #2

Example: Churchland and Churchland (1990)

  • Serial computations are not sufficient for understanding. Parallel computations are needed.
  • Searle: it is not clear why this makes a difference; change the example to the Chinese gym.
  • Presumably what can be done in parallel can be done in serial.

Response #2: Deny premise #3 - Understanding can be unconscious

  • The person understands Chinese, but he does not know that he does. He does not know he understands Chinese because he lacks conscious awareness of his knowledge of Chinese.
  • But how can someone know a language without knowing that he knows it?
    • It is implausible that all components of linguistic knowledge are inaccessible to consciousness.
    • If the person understands Chinese, and he understands English, why can't he translate a Chinese passage into English, and vice versa?

Response #3: The robot reply

  • A robot that runs the program and can interact with people and the environment would have understanding.
  • Comment #1: This agrees with Searle, then - the program alone is not sufficient; embedding is required.
  • Comment #2: Why think that embedding helps?
  • Searle: Putting Searle inside the robot does not make Searle understand Chinese.

Here is what Searle says:

@I am receiving "information" from the robot's "perceptual" apparatus, and I am giving out "instructions" to its motor apparatus without knowing either of these facts. ... I don't know what's going on. I don't understand anything except the rules for symbol manipulation. Now in this case I want to say that the robot has no intentional states at all.@

Response #4: The System reply - The argument is not valid

  • Claim 4 does not follow from claim 3.
  • It is true that the person (as the processor of the whole system) does not understand Chinese.
  • But it is the system (person + books + symbols) as a whole that implements the program.
  • Just because the processor lacks understanding, it does not follow that the system also lacks understanding.
  • Searle's reply: But imagine a case where the person becomes the whole system by internalizing the rules and everything else into his memory.

Searle's reply to the system reply

@All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way that the system could understand because the system is just a part of him.@

Argument

  1. Searle does not understand Chinese.
  2. There isn't anything in the system that isn't in Searle. (The system is just a part of Searle.)
  3. So the system also does not understand Chinese.

Is this a good argument? Compare:

  1. Searle does not fit into a shoebox.
  2. There isn't anything in Searle's heart that isn't in Searle. (Searle's heart is just a part of Searle.)
  3. So Searle's heart does not fit into a shoebox.
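
Schematically, both arguments share the same form (this rendering is added for comparison; it is not Searle's own): a does not have property F; b is a part of a; therefore b does not have F. The shoebox comparison shows that this form is invalid - the premises can be true while the conclusion is false, because a part can have a property that the whole lacks.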

What Searle should say:

  1. Searle is identical to the system.
  2. Searle does not understand Chinese.
  3. So the system does not understand Chinese.

But is Searle identical to the system?

An explanation in terms of emulation

  • Some examples of emulator projects:
  • Emulation - a computer X emulates a computer Y when X simulates the processor of Y and other subsystems in software.
  • Emulation vs. virtualization - In virtualization (e.g. VMware), the hardware is partitioned in a way that allows more than one operating system to run simultaneously. Each OS and its applications run on the native hardware, with the instructions being executed natively by the processor.
  • It is easy to set up a system (see the toy sketch after this list) such that:
    • A computer X emulates a different computer Y.
    • Y can access the information in a Chinese document.
    • X cannot access the information in the same document.
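
To make the emulation analogy concrete, here is a minimal Python sketch. It is added purely for illustration - the toy stack machine, its opcodes, and the XOR obfuscation are invented here, not taken from the notes. The host X does nothing but dispatch on opcodes and shuffle bytes; only the guest-level program amounts to recovering the Chinese text.

# Host computer X emulates a tiny stack machine Y. The guest program,
# run on Y, decodes an obfuscated Chinese document; the host merely
# dispatches opcodes and never treats the bytes as text.

KEY = 0x5A  # arbitrary obfuscation key (illustrative only)

# The "Chinese document", stored in a form the host never interprets.
DOCUMENT = bytes(b ^ KEY for b in "你好，世界".encode("utf-8"))

# Guest program for Y: for each stored byte, push it, undo the
# obfuscation, and emit the result.
guest_program = []
for b in DOCUMENT:
    guest_program += [("PUSH", b), ("XOR", KEY), ("EMIT",)]
guest_program.append(("HALT",))

def host_emulator(program):
    """Host X: a bare opcode dispatcher - pure symbol manipulation."""
    stack, output, pc = [], bytearray(), 0
    while True:
        op = program[pc]
        pc += 1
        if op[0] == "PUSH":
            stack.append(op[1])
        elif op[0] == "XOR":
            stack.append(stack.pop() ^ op[1])
        elif op[0] == "EMIT":
            output.append(stack.pop())
        elif op[0] == "HALT":
            return bytes(output)

if __name__ == "__main__":
    # Only at the guest level do the emitted bytes get read as Chinese text.
    print(host_emulator(guest_program).decode("utf-8"))

The point of the toy example is only that one physical process can be described at two levels, and what counts as "accessing the information" can differ between them.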

A related argument from Searle

  1. Computer programs are formal (syntactic).
  2. Thoughts and understanding have content (semantics).
  3. Syntax by itself is neither constitutive of nor sufficient for semantics.
  4. So computer programs are neither constitutive of nor sufficient for thinking or understanding.

The Chinese room argument can be seen as an argument in support of premise 3.

Replies

  • Syntax is indeed sufficient for semantics. Some people have defended functional role semantics (or inferential role semantics): the meaning of a symbol depends on how it is related to other symbols in deduction.
    • Example: What does # mean, if the only rules governing it are P#Q -> P, P#Q -> Q, and P, Q -> P#Q? These rules make # behave like conjunction. (A toy check of this idea is sketched below, after these replies.)
  • Syntax is not sufficient for semantics. The symbols ultimately have to be causally connected to the world. See reply (f) on page 30. This is associated with externalism - the idea that mental content depends on our connection to the environment, not just on the properties inside our heads.

@Computers would have semantics and not just syntax if their inputs and outputs were put in appropriate causal relation to the rest of the world. Imagine that we put the computer into a robot, attached television cameras to the robot's head, installed transducers connecting the television messages to the computer and had the computer output operate the robot's arms and legs. Then the whole system would have a semantics.@

  • These two replies can help us respond to Searle's argument that programs are formal, have no intrinsic semantics, and can be interpreted any way we want.
  • The important issue here is about how to give a semantics for mental representations. How do symbols in the head acquire meaning? There are different theories.
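
To make the first reply (functional/inferential role semantics) more concrete, here is a toy Python check, added for illustration and not part of the original notes. It reads the three rules for # as truth-preservation constraints on a binary truth function and finds that only conjunction satisfies them - one way of seeing how a symbol's deductive role can fix its meaning.

from itertools import product

# Each candidate meaning for # is a binary truth function, represented
# as a dict from a pair of truth values (P, Q) to the value of P#Q.
candidates = []
for values in product([False, True], repeat=4):
    candidates.append(dict(zip(product([False, True], repeat=2), values)))

def respects_rules(table):
    """Check the rules P#Q -> P, P#Q -> Q and P, Q -> P#Q,
    read as truth-preservation constraints."""
    for p, q in product([False, True], repeat=2):
        r = table[(p, q)]
        if r and not (p and q):   # if P#Q is true, P and Q must both be true
            return False
        if p and q and not r:     # if P and Q are both true, P#Q must be true
            return False
    return True

surviving = [t for t in candidates if respects_rules(t)]
assert len(surviving) == 1        # exactly one candidate is left ...
print(surviving[0])               # ... and it is the truth table of 'and'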

Discussion questions

  1. "Searle is wrong because we know there are computations going on in the brain and we understand languages."
  2. "Since Searle's argument is mistaken, it follows that all programs are sufficient for producing understanding in Chinese."
  3. "A clever person in the Chinese Room might be able to work out the meanings of the Chinese symbols given enought time. So the program can be sufficient for producing understanding after all."

Category.Mind