The Language of Thought Hypothesis

Readings

  • Section 4.1 "Intentional States and Intentional Content" in Tye, Michael (1995). Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind. Representation and Mind series. MIT Press. [accessible from http://www.netlibrary.com]
  • Aydede, Murat. "The Language of Thought Hypothesis". The Stanford Encyclopedia of Philosophy.
  • Fodor, J. and Pylyshyn, Z. (1988). Connectionism and Cognitive Architecture. In Cognition 28, 3-71.
  • Section 3 of Block (1995). The Mind as the Software of the Brain. In D. Osherson, L. Gleitman, S. Kosslyn, E. Smith and S. Sternberg (Eds.), An Invitation to Cognitive Science. MIT Press.
  • Braddon-Mitchell & Jackson (1996). Philosophy of Mind and Cognition. Blackwell.

Introduction

William of Ockham (c. 1287-1347)

@Ockham was perhaps the first person to give not just lip service to the notion of “mental language” (because Aristotle and Boethius had mentioned it), but actually to develop the notion in some detail and to put it to work for him. Written language for Ockham is “subordinated” to spoken language, and spoken language is “subordinated” to mental language. For Ockham, the terms of mental language are concepts; its propositions are mental judgments. stanford:ockham/#3.3@

Jerry Fodor isbn:0674510305

What is the language of thought hypothesis?

  • The LOT hypothesis says: intentional mental states are constituted by mental representations that are language-like.
  • Language-like = the mental representations have a combinatorial syntax and semantics.
  • Combinatorial syntax = the mental representations are either complex or atomic. The complex ones are composed of the atomic ones according to a set of syntactic rules.
  • Combinatorial semantics = the content of a complex representation depends on its syntax and the content of the atomic representations.
treelet from Marcus's The Algebraic Mind isbn:0262133792
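The combinatorial picture above can be made concrete with a small sketch (purely illustrative; the atoms and rules below are hypothetical, not part of the hypothesis itself): atomic representations combine under a syntactic rule, and the content of a complex representation is computed from its structure plus the contents of its atoms.

```python
# A minimal, hypothetical sketch of a combinatorial syntax and semantics:
# complex representations are built from atomic ones by a syntactic rule,
# and the content of a complex representation depends only on its syntax
# and the contents of its atomic constituents.

# Atomic representations and their contents.
ATOMS = {"JOHN": "John", "MARY": "Mary", "LOVES": "loves"}

def make(pred, subj, obj):
    """Syntactic rule: combine a predicate atom with two argument atoms."""
    assert all(a in ATOMS for a in (pred, subj, obj))
    return (pred, subj, obj)                 # a complex representation

def content(rep):
    """Combinatorial semantics: content is a function of structure
    plus the contents of the atoms."""
    if isinstance(rep, str):                 # atomic case
        return ATOMS[rep]
    pred, subj, obj = rep                    # complex case
    return f"{content(subj)} {content(pred)} {content(obj)}"

# Systematicity falls out for free: the same atoms recombine into new thoughts.
print(content(make("LOVES", "JOHN", "MARY")))   # John loves Mary
print(content(make("LOVES", "MARY", "JOHN")))   # Mary loves John
```

Note how the ability to represent "John loves Mary" automatically brings with it the ability to represent "Mary loves John", which is just the systematicity (and generality-constraint) point discussed below.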

Note that the LOT hypothesis does not imply:

  • LOT is innate (although Fodor thinks it is).
  • Everyone has the same LOT.
  • LOT is a natural language e.g. English.
  • LOT requires interpretation by an agent.

Argument for LOT

Inference to the best explanation arguments are very common.

  1. We observe that X is true. (e.g. the street and the cars are all wet)
  2. Theory T provides the best explanation of X. (it rained)
  3. So, it is most likely that theory T is true.

We should accept LOT because LOT provides the best explanation of the following phenomena:

  • Intentional states can causally interact with perception, behaviour and other mental states.
  • Systematicity: to have one belief you need to have other beliefs which are systematically related in content.
    • Gareth Evans's generality constraint: if you can think that a is F and that b is G, then you must be able to think that a is G and that b is F.
  • Productivity: there are indefinitely many beliefs we can have.
    • Think of all the thoughts we can have of the form: x likes y but not z; x > z; x went to y to buy z ...
  • Opacity: we can believe that Lu Xun (魯迅) is a famous Chinese author without believing that Zhou Shuren (周樹人) is a famous Chinese author even though Lu Xun = Zhou Shuren.
  • LOT provides a model of reasoning - reasoning might involve rule-based operations on representations according to their syntactic structure.
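The last point can be illustrated with a toy sketch (hypothetical, for illustration only): an inference rule such as modus ponens can be implemented as an operation that inspects only the syntactic form of the representations, never their content.

```python
# A toy, hypothetical sketch of rule-based reasoning over syntactic structure:
# modus ponens operates on the *form* of representations ("IF", p, q)
# without ever consulting what p or q mean.

def modus_ponens(beliefs):
    """From ("IF", p, q) and p, derive q. Repeat until nothing new follows."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(derived):
            if isinstance(b, tuple) and b[0] == "IF" and b[1] in derived:
                if b[2] not in derived:
                    derived.add(b[2])
                    changed = True
    return derived

beliefs = {("IF", "rain", "wet-streets"),
           ("IF", "wet-streets", "slippery"),
           "rain"}
print(sorted(b for b in modus_ponens(beliefs) if isinstance(b, str)))
# ['rain', 'slippery', 'wet-streets']
```

The rule is sensitive only to structure, yet it is truth-preserving; this is the sense in which syntactic operations can model reasoning.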

So opponents of LOT must either deny the phenomena, or deny that LOT provides the best explanation (because there is a better alternative).

An Alternative: The Map Theory

  • According to the map theory, intentional mental states are constituted by mental representations that are map-like rather than language-like.
  • See Braddon-Mitchell and Jackson for some differences between map-like and language-like representations.
    • "Maps give some information by giving lots of information."
    • Maps might not have basic representational units.
  • The map theory can also explain causal interaction, systematicity, productivity, and opacity.

Objections to the map theory

Some of these objections might be more appropriate for an imagery theory rather than a map theory.

  • It does not seem to provide a good model of reasoning.
  • Abstract beliefs - beliefs about logic or mathematics.
  • Beliefs involving logical concepts - conditional beliefs (P->Q) or disjunctive beliefs (PvQ).
  • Beliefs about unobservable objects.
  • We can be conscious of just one particular thought or belief in isolation, but isolating a single belief in this way is often not possible with a map-like representation.
  • Does it take longer to form a belief about objects with a complex appearance? Recall Descartes' distinction between imagination and conception.

@I remark in the first place the difference that exists between the imagination and pure intellection [or conception]. For example, when I imagine a triangle, I do not conceive it only as a figure comprehended by three lines, but I also apprehend these three lines as present by the power and inward vision of my mind, and this is what I call imagining. But if I desire to think of a chiliagon, I certainly conceive truly that it is a figure composed of a thousand sides, just as easily as I conceive of a triangle that it is a figure of three sides only; but I cannot in any way imagine the thousand sides of a chiliagon [as I do the three sides of a triangle], nor do I, so to speak, regard them as present [with the eyes of my mind]. - Section 2 of Descartes' Meditation VI@

  • This does not mean that there are no map-like representations. We do experience having mental images and prima facie they are more like maps and pictures than sentences. (However, it has been argued that mental images are actually language-like mental representations.)

Two objections from Daniel Dennett

Objection #1: We can have beliefs without explicit representations

In Dennett, D.C. (1981). Cure for the Common Code. In Brainstorms: Philosophical Essays on Mind and Psychology. Cambridge, Massachusetts: MIT Press, 1981. (Originally appeared in Mind, April 1977.)

@In a recent conversation with the designer of a chess-playing program I heard the following criticism of a rival program: "it thinks it should get its queen out early." This ascribes a propositional attitude to the program in a very useful and predictive way, for as the designer went on to say, one can usefully count on chasing that queen around the board. But for all the many levels of explicit representation to be found in that program, nowhere is anything roughly synonymous with "I should get my queen out early" explicitly tokened. The level of analysis to which the designer's remark belongs describes features of the program that are, in an entirely innocent way, emergent properties of the computational processes that have "engineering reality." I see no reason to believe that the relation between belief-talk and psychological talk will be any more direct. - Dennett 1981, p.107@

  • Claim: It is not true that whenever X is in the state of thinking that p, there is a mental representation in X that has the content p.
  • Block's reply: (1) Distinguish between attributions of thoughts that are causally efficacious, and those that are not. (2) LOT applies only to the former.
  • Followup issues
    • Which attributions are causally efficacious? Are there any?
    • How likely is it that these attributions correspond to explicit mental representations?

Objection #2: We can have explicit representations without beliefs

The sister in Cleveland example

@Suppose that a neurosurgeon operates on someone's Belief Box, inserting the sentence "I have a sister in Cleveland". When the patient wakes up, the doctor says "Do you have a sister?" "Yes", the patient says, "In Cleveland." Doctor: "What's her name?" Patient: "Gosh, I can't think of it." Doctor: "Older or younger?" Patient: "I don't know, and by golly I'm an only child. I don't know why I'm saying that I have a sister at all." Finally, the patient concludes that she never really believed she had a sister in Cleveland, but rather was a victim of some sort of compulsion to speak as if she did. The upshot is supposed to be that the language of thought theory is false because you can't produce a belief just by inserting a sentence in the Belief Box.@

  • Ned Block: "Belief box" is somewhat misleading. Belief is not simply a matter of "storing" a mental sentence. The sentence needs to have the right computational role, e.g. sufficient coherence with other sentences.
  • Any other possible reply?

Further issues

  • Any other alternatives to LOT?
    • Is the mental models theory an example of LOT, or is it an alternative?
    • What about connectionism?
  • Nature of LOT.
    • What kind of language is LOT? Is it a natural language?
    • What are the implications for language learning?
    • How do the representations of LOT get their content?

Category.Mind