Lecture 14                                                                  Philosophy of Cognitive Science

Introduction to Connectionism                                                         Dr. Ron Mallon     

 

 

1.  Three levels:  We’ve now discussed the prospect of looking at the mind as a computing device that can be described at (at least) three levels: ecological, computational, and physical.

     The computational level, as we have been conceiving of it, involves symbol manipulation according to rules that allow the symbols and the processes defined over them to represent objects and relations in a target domain.

 

2.  Connectionist architectures:  Today I want to take a step back and consider an alternative form of computation, one that forgoes reference to individual symbols.  These ‘connectionist networks’ employ a large number of ‘cells’ or ‘nodes’.  These nodes are connected to each other by one-way connections that are ‘weighted’ and along which a node may ‘fire’, sending a value to the next node.  The connections may also be of different types, excitatory or inhibitory, depending on whether the value sent to the next node is positive or negative.

 

Whether a node fires depends on the stimulation it receives from incoming nodes and on its own ‘threshold value’.   For example, if a node has a threshold value of 3 and an initial value of 0, it will fire if it receives values adding up to 3 from the nodes that are connected to it.
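
To make the firing rule concrete, here is a minimal sketch (in Python; not part of the lecture, and the function name and numbers are illustrative assumptions): a node fires when the weighted sum of its incoming values reaches its threshold, with positive weights playing the excitatory role and negative weights the inhibitory one.

    def node_fires(incoming_values, weights, threshold):
        """Fire if the weighted sum of incoming values reaches the threshold.
        Positive weights act as excitatory connections, negative weights as
        inhibitory ones."""
        total = sum(value * weight for value, weight in zip(incoming_values, weights))
        return total >= threshold

    # The lecture's example: a node with threshold 3 fires once the values it
    # receives add up to 3 (here, three incoming nodes each send 1 over
    # connections weighted 1).
    print(node_fires([1, 1, 1], [1, 1, 1], threshold=3))   # True
    print(node_fires([1, 1, 0], [1, 1, 1], threshold=3))   # False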

 

3.  Training a network:  Such networks exhibit interesting properties.  In particular, they can be trained to recognize patterns and perform other seemingly intelligent tasks.  The most common way of training a network is via a technique called ‘backpropagation’.  This technique works as follows: we begin with a network whose connections are randomly weighted.  We create some sort of input to some of the cells of the network (sometimes called the input layer).  Also, we assign some of the network cells the function of ‘output’ cells, and we assign an interpretation to those cells.  For example, suppose we are trying to train a network to distinguish between faces and other objects.  We attach an input device (e.g. a camera) to the network.  The camera gives input to the network by differentially firing various ‘input layer’ cells, depending upon the image in front of the camera.  Then, we assign to certain output layer cells the interpretation ‘face’ or ‘not a face’.   The cells in between the ‘input layer’ and the ‘output layer’ are called the ‘hidden layer’.  We then show either a face or something that is not a face to the network, and we see how it responds.  At first, it does horribly: it cannot distinguish faces from things that are not faces.  But every time the network gets a correct answer, we go through the network and strengthen all the connections that led to the correct answer.  Conversely, when the network makes a mistake, we weaken the connections leading to the incorrect answer.  (This is what is called ‘backpropagation’.)  Eventually, after many trials, the computer will ‘learn’ to make the distinction.

       Once the computer can make the distinction for the training set, it can apply the distinction to new instances.  It’s learned to make the distinction!
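
For concreteness, here is a toy sketch (in Python; the ‘training set’, names, and numbers are made-up assumptions, not anything from the lecture) of error-driven weight adjustment in the spirit just described.  To keep it short, it adjusts only a single layer of weights, in the style of the classic perceptron rule; genuine backpropagation also pushes the error signal back through the hidden layer.

    import random

    def predict(weights, inputs, threshold=0.5):
        # The output node 'fires' (answers 'face') if the weighted sum of the
        # input-layer activations reaches its threshold.
        return sum(w * x for w, x in zip(weights, inputs)) >= threshold

    def train(examples, n_inputs, epochs=50, rate=0.1):
        # Start with randomly weighted connections.
        weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        for _ in range(epochs):
            for inputs, is_face in examples:
                guess = predict(weights, inputs)
                error = (1 if is_face else 0) - (1 if guess else 0)
                # Strengthen connections after a miss, weaken them after a
                # false alarm, in proportion to each connection's input.
                for i, x in enumerate(inputs):
                    weights[i] += rate * error * x
        return weights

    # Made-up 'training set': each pattern is a list of input-layer activations.
    examples = [([1, 1, 0], True), ([1, 0, 1], True),
                ([0, 1, 1], False), ([0, 0, 1], False)]
    weights = train(examples, n_inputs=3)
    print([predict(weights, x) for x, _ in examples])   # settles on [True, True, False, False]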

 

4.  Connectionist networks as an alternative model of the mind: Connectionist networks, to a casual observer, seem to bear a nonaccidental resemblance to a schematic of brain neurons.  Such networks were designed to reveal the computational properties of individual interacting units.

    Notice that connectionist networks of this sort have no symbols or ‘local’ representations.  That is, one cannot really say that certain nodes represent certain features or facts.  Moreover, the computer proceeds not by inference-like manipulations of symbols according to rules, but by simple causal connections between nodes.  This has led many to think that connectionist networks eliminate the need for a ‘computational level’ in understanding cognition.

 

 

 
