Readings

What is connectionism?

  • It is a theoretical framework in cognitive science.
  • Connectionists explain mental processes as computations carried out by neurally-implemented connectionist networks.
  • History: see ConnectionismHistory

http://www.alanturing.net/turing_archive/graphics/realneurons.gif

https://upload.wikimedia.org/wikipedia/commons/3/30/Chemical_synapse_schema_cropped.jpg


http://www.mind.ilstu.edu/curriculum/connectionism_intro

Example of a representation: a pattern of activation spread across a group of units, e.g. the activation vector (0.8, 0.1, 0.6, 0.0) standing for a particular letter or concept.

Example of a computational process: activation spreading from the input units through weighted connections to the output units, transforming one pattern of activation into another.

Details

  • Connectionist networks are networks of interconnected, neuron-like computing units.
  • Computing units are activated by inputs and produce outputs.
  • A typical network is an input/output system. The pattern of activation given to the input units constitutes the input representation. The pattern of activation of the output units constitutes the output representation.
  • Each connection carries activation which is modulated by the weight of the connection.

Total input: T = w1x1 + w2x2 + ... + wnxn, where xi is the activation arriving along connection i and wi is the weight of that connection.

Output = F(T), where F is an activation function that depends on the nature of the node.
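
To make the unit equations concrete, here is a minimal sketch of a single connectionist unit in Python. The particular inputs, weights, and the choice of a logistic (sigmoid) function for F are assumptions made for illustration, not part of any specific model.

    import math

    def logistic(t):
        """One common choice for the activation function F (assumed here)."""
        return 1.0 / (1.0 + math.exp(-t))

    def unit_output(inputs, weights, f=logistic):
        """Total input T = w1*x1 + ... + wn*xn, then output F(T)."""
        T = sum(w * x for w, x in zip(weights, inputs))
        return f(T)

    # Hypothetical input activations and connection weights.
    x = [0.8, 0.1, 0.6]
    w = [0.5, -1.2, 0.9]

    print(unit_output(x, w))  # ~0.69 for these made-up values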

Some attractive features of connectionism

  • Biological plausibility - Connectionist networks resemble networks of neurons, and so they are more biologically plausible.
  • Fast distributed processing - There is no single central processor; many simple units compute in parallel. The 100-step argument: the brain solves many cognitive tasks in about half a second, and a neuron needs roughly 5 ms per operation, so only about 100 sequential steps are available. (See Feldman, J. A., & Ballard, D. H. (1982). Connectionist models and their properties. Cognitive Science, 6, 205-254.)

"The brain can solve many difficult cognitive problems in half a second. If a neuron takes 5 milliseconds to perform one operation, then it should be possible to devise algorithms for solving the same problems that require only 100 steps. Although the SOAR program of the late Allen Newell respected this constraint, most AI programs take millions of steps; connectionist algorithms, by contrast, distribute the information over many simple processing units and work in parallel." (Terrence J. Sejnowski on Jerome Feldman's 100-step rule)

  • Graceful degradation - A connectionist network does not fail completely when given noisy or incomplete inputs, or when it is partially damaged; performance degrades gradually.
  • Learning from examples - Connectionist networks are very good at pattern recognition because they can learn from examples (see the sketch after this list).
    • Sejnowski, T. J., & Rosenberg, C. R. (1986). NETtalk: A parallel network that learns to read aloud. Johns Hopkins University Technical Report JHU/EECS-86/01. See www.youtube.com/watch?v=gakJlr3GecE
  • Content-addressable memory - Stored patterns can be retrieved from partial or degraded cues, rather than by looking up an explicit address.
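
To make the learning and graceful-degradation points concrete, here is a minimal sketch in Python of a single unit trained with a simple error-correction (delta-style) rule on a toy task. The task (the logical OR pattern), the learning rate, and the number of training passes are arbitrary choices for illustration, not taken from the readings.

    import math

    def logistic(t):
        return 1.0 / (1.0 + math.exp(-t))

    # Toy training examples: input activation patterns and target outputs (logical OR).
    examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias (threshold) weight
    rate = 0.5       # learning rate (arbitrary)

    # Error-correction learning: nudge each weight in proportion to the output error.
    for _ in range(2000):
        for x, target in examples:
            out = logistic(w[0] * x[0] + w[1] * x[1] + b)
            error = target - out
            w[0] += rate * error * x[0]
            w[1] += rate * error * x[1]
            b += rate * error

    # A noisy version of a training pattern still yields roughly the right answer,
    # illustrating graceful degradation.
    noisy = [0.9, 0.15]
    print(round(logistic(w[0] * noisy[0] + w[1] * noisy[1] + b), 2))

After training, the unit maps the degraded cue (0.9, 0.15) to an output near 1, the answer it learned for the clean pattern (1, 0).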

Applications

Issues and disputes

Category.Mind