Main.LOT History


October 04, 2015, at 09:04 PM by 61.238.62.121 -
Changed line 101 from:
* Another other alternative reply?
to:
* Any other possible reply?
Changed lines 3-4 from:
!Readings
to:
!!Readings
Changed lines 11-12 from:
!Introduction
to:
!!Introduction
Changed lines 19-20 from:
!What is the language of thought hypothesis?
to:
!!What is the language of thought hypothesis?
Changed lines 35-36 from:
!Argument for LOT
to:
!!Argument for LOT
Changed lines 54-55 from:
!An Alternative: The Map Theory
to:
!!An Alternative: The Map Theory
Changed lines 64-65 from:
!Objections to the map theory
to:
!!Objections to the map theory
Changed line 103 from:
!Further issues
to:
!!Further issues
Changed lines 78-81 from:
!Dennett's objection - beliefs without explicit representations
to:
!!Two objections from Daniel Dennett

!!!Objection #1:  We can have beliefs without explicit representations

Added lines 93-101:

!!!Objection #2:  We can have explicit representations without beliefs

The "sister in Cleveland" example:

@@@Suppose that a neurosurgeon operates on someone's Belief Box, inserting the sentence "I have a sister in Cleveland". When the patient wakes up, the doctor says "Do you have a sister?" "Yes", the patient says, "In Cleveland." Doctor: "What's her name?" Patient: "Gosh, I can't think of it." Doctor: "Older or younger?" Patient: "I don't know, and by golly I'm an only child. I don't know why I'm saying that I have a sister at all." Finally, the patient concludes that she never really believed she had a sister in Cleveland, but rather was a victim of some sort of compulsion to speak as if she did. The upshot is supposed to be that the language of thought theory is false because you can't produce a belief just by inserting a sentence in the Belief Box.@@@

* Ned Block: "Belief box" is somewhat misleading. Belief is not simply a matter of "storing" a mental sentence. The sentence needs to have the right computational role, e.g. sufficient coherence with other sentences.
* Another other alternative reply?
Changed lines 92-93 from:
* Implication with regard to language learning?
* Any alternatives to LOT?
to:

* Any other alternatives to LOT?
Added line 96:
Changed lines 98-99 from:
** How are the sentences structured?
to:
** What kind of language is LOT? Is it a natural language?
** Implication with regard to language learning?
Changed line 49 from:
* Opacity: we can believe that Lu Xun is a writer without believing that Zhou Shuren is a writer even though Lu Xun = Zhou Shuren. (魯迅=周樹人)
to:
* Opacity: we can believe that Lu Xun (魯迅) is a famous Chinese author without believing that Zhou Shuren (周樹人) is a famous Chinese author even though Lu Xun = Zhou Shuren.
Changed line 49 from:
* Opacity: we can believe that x is F without believing that y is F even though x = y.
to:
* Opacity: we can believe that Lu Xun is a writer without believing that Zhou Shuren is a writer even though Lu Xun = Zhou Shuren. (魯迅=周樹人)
Changed line 50 from:
* LOT provides a model of reasoning - reasoning might involve rule-based operations on representations according to their syntactic structure.
to:
* LOT provides a [[LOT reasoning|model of reasoning]] - reasoning might involve rule-based operations on representations according to their syntactic structure.
Changed lines 29-37 from:
* The LOT hypothesis does not imply:

** LOT is innate.
** Everyone has the same LOT.
** LOT is a natural language e.g. English.
** LOT requires interpretation by an agent.

!Arguments for LOT
to:
Note that the LOT hypothesis does not imply:
* LOT is innate (although Fodor thinks it is).
* Everyone has the same LOT.
* LOT is a natural language e.g. English.
* LOT requires interpretation by an agent.

!Argument for LOT

Inference to the best explanation arguments are very common.
# We observe that X is true. (e.g. the street and the cars are all wet)
# Theory T provides the best explanation of X. (it rained)
# So, it is most likely that theory T is true.

Changed lines 44-64 from:
* It explains how intentional states can causally interact with perception, behaviour and other mental states.
* It explains systematicity, that to have one belief you need to have other beliefs which are systematically related in content. If you can think a is F, b is G, then you must be able to think a is G, and b is F.
* It explains productivity, that there are indefinitely many beliefs we can have.
* It also explains opacity, that one can believe that x is F without believing that y is F even though x = y.
* It also provides a model of reasoning - reasoning might involve rule-based operations on representations according to their syntactic structure.

These phenomena support LOT only if we cannot find better explanations of these phenomena.

!Dennett's objection - beliefs without explicit representations
media:queen-chess.jpg

In Dennett, D.C. (1981). A Cure for the Common Code? In ''Brainstorms: Philosophical Essays on Mind and Psychology''. Cambridge, Massachusetts: MIT Press, 1981. (Originally appeared in ''Mind'', April 1977.)

@@@In a recent conversation with the designer of a chess-playing program I heard the following criticism of a rival program: "it thinks it should get its queen out early." This ascribes a propositional attitude to the program in a very useful and predictive way, for as the designer went on to say, one can usefully count on chasing that queen around the board. But for all the many levels of explicit representation to be found in that program, nowhere is anything roughly synonymous with "I should get my queen out early" explicitly tokened. The level of analysis to which the designer's remark belongs describes features of the program that are, in an entirely innocent way, emergent properties of the computational processes that have "engineering reality." I see no reason to believe that the relation between belief-talk and psychological talk will be any more direct. (Dennett 1981, p.107)@@@

* Claim: It is not true that whenever X is in the state of thinking that ''p'', there is a mental representation in X that has the content ''p''.
* Block's reply: (1) Distinguish between attributions of thoughts that are causally efficacious, and those that are not. (2) LOT applies only to the former.
* Followup issues
** Which attributions are causally efficacious? Are there any?
** How likely is it that these attributions correspond to explicit mental representations?

to:
* Intentional states can causally interact with perception, behaviour and other mental states.
* Systematicity: to have one belief you need to have other beliefs which are systematically related in content.
** Gareth Evans's generality constraint: If you can think a is F, b is G, then you must be able to think a is G, and b is F.
* Productivity: there are indefinitely many beliefs we can have.
** Think of all the thoughts we can have of the form: x likes y but not z; x > z; x went to y to buy z ...
* Opacity: we can believe that x is F without believing that y is F even though x = y.
* LOT provides a model of reasoning - reasoning might involve rule-based operations on representations according to their syntactic structure.

So opponents of LOT have to either deny the phenomena, or deny that LOT provides the best explanation (because there is a better alternative).
Added lines 77-89:

!Dennett's objection - beliefs without explicit representations
media:queen-chess.jpg

In Dennett, D.C. (1981). A Cure for the Common Code? In ''Brainstorms: Philosophical Essays on Mind and Psychology''. Cambridge, Massachusetts: MIT Press, 1981. (Originally appeared in ''Mind'', April 1977.)

@@@In a recent conversation with the designer of a chess-playing program I heard the following criticism of a rival program: "it thinks it should get its queen out early." This ascribes a propositional attitude to the program in a very useful and predictive way, for as the designer went on to say, one can usefully count on chasing that queen around the board. But for all the many levels of explicit representation to be found in that program, nowhere is anything roughly synonymous with "I should get my queen out early" explicitly tokened. The level of analysis to which the designer's remark belongs describes features of the program that are, in an entirely innocent way, emergent properties of the computational processes that have "engineering reality." I see no reason to believe that the relation between belief-talk and psychological talk will be any more direct. ^^^Dennett 1981, p.107@@@

* Claim: It is not true that whenever X is in the state of thinking that ''p'', there is a mental representation in X that has the content ''p''.
* Block's reply: (1) Distinguish between attributions of thoughts that are causally efficacious, and those that are not. (2) LOT applies only to the former.
* Followup issues
** Which attributions are causally efficacious? Are there any?
** How likely is it that these attributions correspond to explicit mental representations?
Changed lines 5-7 from:
* [Required] Section 4.1 "Intentional States and Intentional Content" in Tye, Michael (1995). ''Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind''. Representation and Mind series, MIT Press. [accessible from http://www.netlibrary.com]
* [Required] Aydede, Murat. [[stanford:language-thought|The Language of Thought Hypothesis]]. ''The Stanford Encyclopedia of Philosophy''.
* [Strongly recommended] Fodor, J. and Pylyshyn, Z. (1988). [[http://citeseer.ist.psu.edu/fodor88connectionism.html|Connectionism and Cognitive Architecture]]. In ''Cognition 28'', 3-71.
to:
* Section 4.1 "Intentional States and Intentional Content" in Tye, Michael (1995). ''Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind''. Representation and Mind series, MIT Press. [accessible from http://www.netlibrary.com]
* Aydede, Murat. [[stanford:language-thought|The Language of Thought Hypothesis]]. ''The Stanford Encyclopedia of Philosophy''.
* Fodor, J. and Pylyshyn, Z. (1988). [[http://citeseer.ist.psu.edu/fodor88connectionism.html|Connectionism and Cognitive Architecture]]. In ''Cognition 28'', 3-71.
Changed lines 57-60 from:
to:
* Followup issues
** Which attributions are causally efficacious? Are there any?
** How likely is it that these attributions correspond to explicit mental representations?

May 13, 2008, at 09:17 PM by 219.77.142.151 -
Added lines 1-91:
!The Language of Thought Hypothesis

!Readings

* [Required] Section 4.1 "Intentional States and Intentional Content" in Tye, Michael (1995). ''Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind''. Representation and Mind series, MIT Press. [accessible from http://www.netlibrary.com]
* [Required] Aydede, Murat. [[stanford:language-thought|The Language of Thought Hypothesis]]. ''The Stanford Encyclopedia of Philosophy''.
* [Strongly recommended] Fodor, J. and Pylyshyn, Z. (1988). [[http://citeseer.ist.psu.edu/fodor88connectionism.html|Connectionism and Cognitive Architecture]]. In ''Cognition 28'', 3-71.
* Section 3 of Block (1995). [[http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/msb.html|The Mind as the Software of the Brain]]. In D. Osherson, L. Gleitman, S. Kosslyn, E. Smith and S. Sternberg (Eds.), ''An Invitation to Cognitive Science''. MIT Press.
* Braddon-Mitchell & Jackson (1996). ''Philosophy of Mind and Cognition''. Blackwell.

!Introduction

William of Ockham (c. 1287-1347)

@@@Ockham was perhaps the first person to give not just lip service to the notion of “mental language” (because Aristotle and Boethius had mentioned it), but actually to develop the notion in some detail and to put it to work for him. Written language for Ockham is “subordinated” to spoken language, and spoken language is “subordinated” to mental language. For Ockham, the terms of mental language are concepts; its propositions are mental judgments. [-stanford:ockham/#3.3-]@@@

media:fodor.jpg [-Jerry Fodor isbn:0674510305-]

!What is the language of thought hypothesis?

* The LOT hypothesis says : [[Main/intentionality|intentional mental states]] are constituted by mental representations that are language-like.
* Language-like = the mental representations have a combinatorial syntax and semantics.
* Combinatorial syntax = the mental representations are either complex or atomic. The complex ones are composed of the atomic ones according to a set of syntactic rules.
* Combinatorial semantics = the content of a complex representation depends on its syntax and the content of the atomic representations.

media:treelet.jpg [- treelet from Marcus's ''The Algebraic Mind'' isbn:0262133792 -]
media:brain-lot.jpg
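A toy sketch of what "combinatorial syntax and semantics" amounts to (illustrative only; the symbols, predicates and rules here are invented for the example, not part of any LOT theory): complex representations are built from atomic ones by syntactic rules, and the content of a complex representation is computed from its structure plus the contents of its atoms.

```python
# Toy "language of thought": atomic symbols, syntactic rules for
# building complex representations, and a semantics computed from
# structure plus the contents of the atoms. (All names are invented.)

PREDICATES = {"tall": {"john"}, "happy": {"john", "mary"}}

def atom(name):
    # Atomic representation: just a symbol.
    return ("ATOM", name)

def pred(p, subject):
    # Syntactic rule: predicate symbol + name -> a sentence.
    return ("PRED", p, subject)

def conj(left, right):
    # Syntactic rule: sentence + sentence -> a conjunction.
    return ("AND", left, right)

def content(rep):
    # Combinatorial semantics: the content (here, a truth value) of a
    # complex representation depends only on its syntax and its atoms.
    if rep[0] == "PRED":
        _, p, subject = rep
        return subject[1] in PREDICATES[p]
    if rep[0] == "AND":
        return content(rep[1]) and content(rep[2])
    raise ValueError("unknown syntactic form")

# "John is tall and Mary is happy"
thought = conj(pred("tall", atom("john")), pred("happy", atom("mary")))
print(content(thought))  # True
```

Note how productivity falls out of the sketch: the two syntactic rules can be applied again and again to generate indefinitely many distinct thoughts from a handful of atoms.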

* The LOT hypothesis does not imply:

** LOT is innate.
** Everyone has the same LOT.
** LOT is a natural language e.g. English.
** LOT requires interpretation by an agent.

!Arguments for LOT

We should accept LOT because LOT provides the best explanation of the following phenomena:

* It explains how intentional states can causally interact with perception, behaviour and other mental states.
* It explains systematicity, that to have one belief you need to have other beliefs which are systematically related in content. If you can think a is F, b is G, then you must be able to think a is G, and b is F.
* It explains productivity, that there are indefinitely many beliefs we can have.
* It also explains opacity, that one can believe that x is F without believing that y is F even though x = y.
* It also provides a model of reasoning - reasoning might involve rule-based operations on representations according to their syntactic structure.
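The last point can be made concrete with a small sketch (illustrative only, not from the notes; the beliefs and rule are made up): an inference rule such as modus ponens can be applied purely by matching the syntactic shape of stored representations, without consulting what they mean.

```python
# Sketch of rule-based reasoning over syntactically structured
# representations: modus ponens fires on shape alone, never on meaning.

def close_under_modus_ponens(beliefs):
    # From P and ("IF", P, Q), derive Q; repeat until nothing new.
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(derived):
            if (isinstance(b, tuple) and b[0] == "IF"
                    and b[1] in derived and b[2] not in derived):
                derived.add(b[2])
                changed = True
    return derived

beliefs = {
    "it_rained",
    ("IF", "it_rained", "streets_wet"),
    ("IF", "streets_wet", "cars_wet"),
}
print(close_under_modus_ponens(beliefs) - beliefs)
# derives "streets_wet" and "cars_wet"
```

The point of the sketch: which new beliefs get derived is fixed entirely by the ("IF", P, Q) shape of the representations, which is the sense in which reasoning can be a rule-based operation on syntax.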

These phenomena support LOT only if we cannot find better explanations of these phenomena.

!Dennett's objection - beliefs without explicit representations
media:queen-chess.jpg

In Dennett, D.C. (1981). A Cure for the Common Code? In ''Brainstorms: Philosophical Essays on Mind and Psychology''. Cambridge, Massachusetts: MIT Press, 1981. (Originally appeared in ''Mind'', April 1977.)

@@@In a recent conversation with the designer of a chess-playing program I heard the following criticism of a rival program: "it thinks it should get its queen out early." This ascribes a propositional attitude to the program in a very useful and predictive way, for as the designer went on to say, one can usefully count on chasing that queen around the board. But for all the many levels of explicit representation to be found in that program, nowhere is anything roughly synonymous with "I should get my queen out early" explicitly tokened. The level of analysis to which the designer's remark belongs describes features of the program that are, in an entirely innocent way, emergent properties of the computational processes that have "engineering reality." I see no reason to believe that the relation between belief-talk and psychological talk will be any more direct. (Dennett 1981, p.107)@@@

* Claim: It is not true that whenever X is in the state of thinking that ''p'', there is a mental representation in X that has the content ''p''.
* Block's reply: (1) Distinguish between attributions of thoughts that are causally efficacious, and those that are not. (2) LOT applies only to the former.

!An Alternative: The Map Theory

media:map.jpg

* According to this theory, intentional mental states are map-like rather than language-like.
* See Braddon-Mitchell and Jackson for some differences between map-like and language-like representations.
** "Maps give some information by giving lots of information."
** Maps might not have basic representational units.
* The map theory can also explain causal interaction, systematicity, productivity, opacity.

!Objections to the map theory

Some of these objections might be more appropriate for an imagery theory than for a map theory.
* It does not seem to provide a good model of reasoning.
* Abstract beliefs - beliefs about logic or mathematics.
* Beliefs involving logical concepts - conditional beliefs (P->Q) or disjunctive beliefs (PvQ).
* Beliefs about unobservable objects.
* We can be conscious of just one particular thought or belief at a time. But this is often not possible with a map-like representation, which carries many pieces of information at once.
* Does it take longer to form a belief about objects with a complex appearance? Recall Descartes' distinction between imagination and conception.

@@@I remark in the first place the difference that exists between the imagination and pure intellection [or conception]. For example, when I imagine a triangle, I do not conceive it only as a figure comprehended by three lines, but I also apprehend these three lines as present by the power and inward vision of my mind, and this is what I call imagining. But if I desire to think of a chiliagon, I certainly conceive truly that it is a figure composed of a thousand sides, just as easily as I conceive of a triangle that it is a figure of three sides only; but I cannot in any way imagine the thousand sides of a chiliagon [as I do the three sides of a triangle], nor do I, so to speak, regard them as present [with the eyes of my mind]. - [-Section 2 of Descartes' Meditation VI-]@@@

* This does not mean that there are no map-like representations. We do experience having mental images and prima facie they are more like maps and pictures than sentences. (However, it has been argued that mental images are actually language-like mental representations.)

!Further issues
* Implication with regard to language learning?
* Any alternatives to LOT?
** Is the [[http://www.tcd.ie/Psychology/Ruth_Byrne/mental_models/|mental models]] theory an example of LOT, or is it an alternative?
** What about connectionism?
* Nature of LOT.
** How are the sentences structured?
** How do the representations of LOT get their content?

[[Category.Mind]]