Main.TheChineseRoomArgument History


October 19, 2015, at 05:51 PM by 61.238.62.121 -
Added line 18:
** Apple's Siri - "Siri not only understands what you say, it’s smart enough to know what you mean."
October 19, 2015, at 05:49 PM by 61.238.62.121 -
Changed lines 15-17 from:
* Joseph Weizenbaum's Eliza online - http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm
to:
* AI examples
** Joseph Weizenbaum's Eliza online - http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm
** IBM's Watson http://ibmresearchnews.blogspot.hk/2011/02/knowing-what-it-knows-selected-nuances.html
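A minimal sketch of the kind of processing Eliza does (the patterns and canned replies below are made up for illustration, not Weizenbaum's original script): the program rewrites the surface form of the input with regular expressions and has no representation of what the words mean.

[@
import re

# Toy Eliza-style rules: match a surface pattern in the input and echo part
# of it back inside a canned template.  Nothing here models meaning.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(sentence):
    """Return the first matching canned response, or a default prompt."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return template.format(fragment)
    return "Please tell me more."

print(respond("I am worried about the exam."))
# -> Why do you say you are worried about the exam?
@]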
October 19, 2015, at 05:41 PM by 61.238.62.121 -
Changed line 15 from:
* Eliza online - http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm
to:
* Joseph Weizenbaum's Eliza online - http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm
Changed line 119 from:
** Example: What does # mean? P#Q->P, Q#P->P, P#Q->Q, P#Q->P P->P#P
to:
** Example: What does # mean? P#Q->P, Q#P->P, P#Q->Q, P#Q->P P,Q->P#Q
Added line 125:
* The important issue here is about how to give a [[Main.SemanticsOfMentalRepresentations|semantics for mental representations]]. How do symbols in the head acquire meaning? There are different theories.
Changed line 66 from:
* But it is the systen (person + books + symbols) as a whole that implements the program.
to:
* But it is the system (person + books + symbols) as a whole that implements the program.
Changed lines 118-121 from:
# Syntax is indeed sufficient for semantics. Some people have defended functional role semantics (or inferential role semantics). The meaning of a symbol depends on how the symbols are related to each other when it comes to deduction.
## Example: What does # mean? P#Q->P, Q#P->P, P#Q->Q, P#Q->P P->P#P
* Syntax is not sufficient for semantics. The symbols have to be causally connected to the world ultimately. See reply (f) on page 30.
to:
* Syntax is indeed sufficient for semantics. Some people have defended functional role semantics (or inferential role semantics). The meaning of a symbol depends on how the symbols are related to each other when it comes to deduction.
** Example: What does # mean? P#Q->P, Q#P->P, P#Q->Q, P#Q->P P->P#P
* Syntax is not sufficient for semantics. The symbols have to be causally connected to the world ultimately. See reply (f) on page 30. This is associated with the idea of externalism, that mental content depends on our connection to the environment, not just the properties inside our heads.
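A small sketch of the functional-role idea in the example above, using the corrected introduction rule P,Q->P#Q rather than the typo P->P#P: enumerate all sixteen binary truth functions and keep those that make every #-rule truth-preserving. Only the truth table of conjunction survives, which is the sense in which the inference rules alone could be said to fix what # means.

[@
from itertools import product

ROWS = list(product([False, True], repeat=2))      # the four (P, Q) valuations

def validates_rules(table):
    """table maps (P, Q) to the truth value of P#Q.  Check that the rules
    P#Q -> P, Q#P -> P, P#Q -> Q and P, Q -> P#Q hold on every row."""
    for p, q in ROWS:
        if table[(p, q)] and not p:        # P#Q -> P
            return False
        if table[(q, p)] and not p:        # Q#P -> P
            return False
        if table[(p, q)] and not q:        # P#Q -> Q
            return False
        if p and q and not table[(p, q)]:  # P, Q -> P#Q
            return False
    return True

# Brute force over all 16 candidate truth tables for #.
survivors = []
for values in product([False, True], repeat=4):
    table = dict(zip(ROWS, values))
    if validates_rules(table):
        survivors.append(table)

print(len(survivors))   # 1 -- exactly one connective obeys the rules
print(survivors[0])     # ... and it is the truth table of conjunction
@]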
Added line 124:
* These two replies can help us respond to Searle's argument that programs are formal and have no intrinsic semantics, and can be interpreted any way we want.
Changed lines 116-117 from:
* Functional role semantics - Syntax is indeed sufficient for semantics. What matters is how the symbols are connected to each other. P#Q->P, Q#P->P, P#Q->Q, P#Q->P P->P#P
* Maybe the symbols have to be causally connected to the world ultimately?
to:

Replies
# Syntax is indeed sufficient for semantics. Some people have defended functional role semantics (or inferential role semantics). The meaning of a symbol depends on how the symbols are related to each other when it comes to deduction.
## Example: What does # mean? P#Q->P, Q#P->P, P#Q->Q, P#Q->P P->P#P
* Syntax is not sufficient for semantics. The symbols have to be causally connected to the world ultimately. See reply (f) on page 30.

@@@Computers would have semantics and not just syntax if their inputs and outputs were put in appropriate causal relation to the rest of the world. Imagine that we put the computer into a robot, attached television cameras to the robot's head, installed transducers connecting the television messages to the computer and had the computer output operate the robot's arms and legs. Then the whole system would have a semantics.@@@
Changed lines 29-31 from:
Comment: This is a ''reductio'' argument.
to:
Comments
* This is a ''reductio'' argument.
* The argument can also be understood as a criticism of the Turing test.
Added lines 8-9:

youtube:TryOC83PH1g
Added lines 112-113:
* Functional role semantics - Syntax is indeed sufficient for semantics. What matters is how the symbols are connected to each other. P#Q->P, Q#P->P, P#Q->Q, P#Q->P P->P#P
* Maybe the symbols have to be causally connected to the world ultimately?
Changed lines 93-105 from:
Some examples of emulator projects:

* MAME http://www.mame.net
* Bochs http://bochs.sourceforge.net/ Screenshot of [[http://bochs.sourceforge.net/screenshot/SiteInWinInBochs.png|MS Windows under linux]]

Emulation - a computer X emulates a computer Y when X simulates the processor of Y and other subsystems in software.

->[-Emulation vs. virtualization - In virtualization (e.g. Vmware), the hardware is partitioned in a way that allows more than one operating system to run simultaneously. Each OS and its applications run on the native hardware, with the instructions being executed natively by the processor. -]

It is easy to set up a system such that:
* A computer X emulates a different computer Y.
* Y can access the information in a Chinese document.
* X cannot access the information in the same document.
to:
* Some examples of emulator projects:
** https://www.scullinsteel.com/apple2
** MAME http://www.mame.net
** Bochs http://bochs.sourceforge.net/ Screenshot of [[http://bochs.sourceforge.net/screenshot/SiteInWinInBochs.png|MS Windows under linux]]
* Emulation - a computer X emulates a computer Y when X simulates the processor of Y and other subsystems in software.
* Emulation vs. virtualization - In virtualization (e.g. VMware), the hardware is partitioned in a way that allows more than one operating system to run simultaneously. Each OS and its applications run on the native hardware, with the instructions being executed natively by the processor.
* It is easy to set up a system such that:
** A computer X emulates a different computer Y.
** Y can access the information in a Chinese document.
** X cannot access the information in the same document.
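A toy sketch of the emulation point above, with a made-up three-opcode machine and glossary: the host loop plays the role of X and only pushes, pops, and copies opaque values, while it is the program it runs (the Y level) that looks things up in the Chinese glossary.

[@
def run_vm(program):
    """Machine X: execute (opcode, operand) pairs.  The dispatch loop only
    moves values between a stack and the output; it never decodes or
    inspects the Chinese text it happens to be shuffling around."""
    stack, output = [], []
    for op, arg in program:
        if op == "PUSH":                   # push a literal operand
            stack.append(arg)
        elif op == "GET":                  # pop a key and a table, push table[key]
            key, table = stack.pop(), stack.pop()
            stack.append(table[key])
        elif op == "EMIT":                 # pop the top of the stack to the output
            output.append(stack.pop())
    return output

GLOSSARY = {"心": "heart", "脑": "brain", "房间": "room"}   # Y-level data

PROGRAM = [                  # Y-level program: look up the entry for 脑
    ("PUSH", GLOSSARY),
    ("PUSH", "脑"),
    ("GET", None),
    ("EMIT", None),
]

print(run_vm(PROGRAM))       # ['brain'] -- only the Y level "used" the glossary
@]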
Changed line 3 from:
* Searle, John. R. (1980). Minds, brains, and programs. In ''Behavioral and Brain Sciences 3 (3)'', 417-457. 10.1017/S0140525X00005756
to:
* Searle, John. R. (1980). Minds, brains, and programs. In ''Behavioral and Brain Sciences 3 (3)'', 417-457. doi:10.1017/S0140525X00005756
Changed line 3 from:
* Searle, John. R. (1980). [[http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html|Minds, brains, and programs]]. In ''Behavioral and Brain Sciences 3 (3)'', 417-457.
to:
* Searle, John. R. (1980). Minds, brains, and programs. In ''Behavioral and Brain Sciences 3 (3)'', 417-457. 10.1017/S0140525X00005756
Changed line 13 from:
* Eliza online - http://www-ai.ijs.si/eliza/eliza.html
to:
* Eliza online - http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm
Changed lines 3-4 from:
* [Required] Lecture notes on the Chinese Room from James Pryor. URL: http://www.princeton.edu/~jimpryor/courses/mind/notes/searle.html
* [Required] Searle, John. R. (1980). [[http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html|Minds, brains, and programs]]. In ''Behavioral and Brain Sciences 3 (3)'', 417-457.
to:
* Searle, John. R. (1980). [[http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html|Minds, brains, and programs]]. In ''Behavioral and Brain Sciences 3 (3)'', 417-457.
August 14, 2015, at 09:58 PM by 202.189.100.152 -
Changed line 117 from:
!Questions for class discussion
to:
!Discussion questions
August 14, 2015, at 09:57 PM by 202.189.100.152 -
Changed lines 30-33 from:
!Responses

!!Churchland and Churchland (1990) - Deny premise 2
to:
!Response #1: Deny premise #2
Added line 34:
Example: Churchland and Churchland (1990)
Changed lines 39-40 from:
!!Understanding can be unconscious - Deny premise 3
to:
!Response #2: Deny premise 3 - Understanding can be unconscious
Changed lines 46-47 from:
!!The robot reply
to:
!Response #3: The robot reply
Changed line 59 from:
!!The System reply - The argument is not valid
to:
!Response #4: The System reply - The argument is not valid
August 14, 2015, at 09:55 PM by 202.189.100.152 -
Added lines 68-69:
!!!Searle's reply to the system reply
Changed lines 72-73 from:
!!Analysis of the reply to the system reply
to:
Argument
Changed line 74 from:
# There isn't anything in the system that isn't in Searle.
to:
# There isn't anything in the system that isn't in Searle. (The system is just a part of Searle.)
Changed lines 77-83 from:
But is this a good argument? Compare:

# Searle is not red in color.
# There isn't anything in Searle's heart that isn't in Searle.
# So Searle's heart is not red in color.

What Searle should say:
to:
Is this a good argument? Compare:

# Searle does not fit into a shoe box.
# There isn't anything in Searle's heart that isn't in Searle. (Searle's heart is just a part of Searle.)
# So Searle's heart does not fit into a shoe box.

What Searle ''should'' say:
Changed lines 43-44 from:
* But how can someone know a language without knowing that he knows it? It is implausible for all components of linguistic knowledge to be inaccessible to consciousness.
to:
* But how can someone know a language without knowing that he knows it?
** It is implausible for all components of linguistic knowledge to be inaccessible to consciousness.
** If the person understands Chinese, and he understands English, why can't he translate a Chinese passage into English, and vice versa?
Changed lines 8-9 from:
to:
* Cole, David. The Chinese Room Argument. In ''The Stanford Encyclopedia of Philosophy''. stanford:chinese-room.
Changed lines 4-8 from:
* [Required] Searle, John. R. (1980) [[http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html|"Minds, brains, and programs"]] ''Behavioral and Brain Sciences'' 3 (3): 417-457.
* Section 11.6, Chapter 11 of Osherson, Daniel N.; Gleitman, Lila R. (1995) ''An Invitation to Cognitive Science. Vol. 3, Thinking'' Cambridge, Mass. MIT Press. [accessible from netlibrary.com]
* Searle, John. R. (1990) "Is the Brain's Mind a Computer Program?" ''Scientific American'', vol. 262, pp. 26-31.
* Churchland, Paul, and Patricia Smith Churchland (1990) "Could a machine think?" ''Scientific American'' 262 (1, January): 32-39.
to:
* [Required] Searle, John. R. (1980). [[http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html|Minds, brains, and programs]]. In ''Behavioral and Brain Sciences 3 (3)'', 417-457.
* Section 11.6, Chapter 11 of Osherson, Daniel N., & Gleitman, Lila R. (Eds.) (1995). ''An Invitation to Cognitive Science. Vol. 3, Thinking''. Cambridge, Mass: MIT Press. [accessible from netlibrary.com]
* Searle, John. R. (1990). Is the Brain's Mind a Computer Program? In ''Scientific American, 262'', 26-31.
* Churchland, Paul, and Patricia Smith Churchland (1990). Could a machine think? In ''Scientific American 262'', 32-39.