Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex

– Paper by Jeff Hawkins and Subutai Ahmad

Neurons Reliably Recognize Multiple Sparse Patterns

Former View: A neuron computes a single weighted sum of all its synapses.

Active dendrites suggest a different view of the neuron, in which a neuron recognizes many independent unique patterns. Several papers show that a small set of neighboring synapses acts as a pattern detector, so the thousands of synapses on a cell’s dendrites act as a set of independent pattern detectors.

With sparse encodings and sub-sampling, 8-20 synapses are enough for robust pattern recognition.
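A minimal sketch of this sub-sampling idea: a dendritic segment stores synapses to only a small random subset of a sparse pattern's active cells, and detects the pattern whenever enough of those synapses are co-active. The specific numbers (2048 cells, 40-cell patterns, 15 synapses, threshold 10) are illustrative assumptions, not values from the paper.

```python
import random

random.seed(42)

N_CELLS = 2048          # total cells in the layer (illustrative)
PATTERN_SIZE = 40       # active cells in one sparse pattern (~2%)
SEGMENT_SYNAPSES = 15   # synapses a segment sub-samples (within the 8-20 range)
SEGMENT_THRESHOLD = 10  # coincident active synapses needed to detect the pattern

# A sparse pattern, and a segment that connects to a random subset of it.
pattern = set(random.sample(range(N_CELLS), PATTERN_SIZE))
segment = set(random.sample(sorted(pattern), SEGMENT_SYNAPSES))

def segment_active(active_cells, segment, threshold=SEGMENT_THRESHOLD):
    """A segment detects its pattern if enough of its synapses are active."""
    return len(segment & active_cells) >= threshold

# The stored pattern is recognized; an unrelated sparse pattern almost
# never overlaps a 15-synapse segment in 10+ cells, so false positives
# are rare despite the heavy sub-sampling.
other = set(random.sample(range(N_CELLS), PATTERN_SIZE))
```

Because the patterns are sparse, even this tiny sub-sample is both robust to noise and very unlikely to fire on an unrelated pattern.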

Sources of Synaptic Input to Cortical Neurons

  1. Proximal synapses define the classic receptive field of a cell.
  2. Basal synapses learn transitions in sequences. They recognize patterns of cell activity that precede the neuron firing. (Also local inhibition)
  3. Apical synapses invoke a top-down expectation. Depolarization caused by the apical dendrites is used to establish a top-down expectation (prediction).
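The three input zones above suggest a toy state model: only proximal (feedforward) input can make the cell fire, while a match on a basal or apical segment merely depolarizes it, putting it in a "predictive" state. This is a sketch of that idea under assumed names and thresholds, not the paper's full model.

```python
def cell_state(proximal_drive, basal_match, apical_match, seg_threshold=10):
    """Toy three-zone neuron model (illustrative):
    - proximal input drives actual firing (the classic receptive field),
    - basal/apical segment matches only depolarize the cell (a prediction).
    Returns 'active', 'predictive', or 'inactive'."""
    depolarized = (basal_match >= seg_threshold) or (apical_match >= seg_threshold)
    if proximal_drive:
        return "active"        # firing requires feedforward input
    if depolarized:
        return "predictive"    # subthreshold depolarization = expectation
    return "inactive"

# Basal context alone predicts; feedforward input alone (or with context) fires.
assert cell_state(proximal_drive=False, basal_match=12, apical_match=0) == "predictive"
assert cell_state(proximal_drive=True, basal_match=0, apical_match=0) == "active"
```

In the paper, this depolarized-but-not-firing state is what implements prediction: a predicted cell fires sooner and inhibits its neighbors when its input arrives.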

Learning Rule

Summary:

  1. Learning occurs by growing and removing synapses from a pool of “potential” synapses.
  2. Hebbian learning and synaptic changes occur at the level of the dendritic segment, not the entire neuron.

A threshold is used to represent the establishment of a synapse.

weight_of_synapse = 1 if permanence > threshold else 0

Using a scalar permanence value enables on-line learning in the presence of noise.
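A sketch of this permanence-based rule, assuming illustrative increment/decrement values: each potential synapse on a segment carries a scalar permanence; a Hebbian-style update nudges permanences up for presynaptic cells that were active and decays the rest, and a synapse only counts as connected once its permanence crosses the threshold.

```python
PERMANENCE_THRESHOLD = 0.5  # illustrative value

def synapse_weight(permanence):
    # Binary weight: a synapse is "established" once permanence crosses threshold.
    return 1 if permanence > PERMANENCE_THRESHOLD else 0

def update_segment(permanences, presynaptic_active, inc=0.1, dec=0.02):
    """Hebbian update on one dendritic segment (not the whole neuron):
    reinforce potential synapses whose presynaptic cell was active,
    slowly decay the rest. inc/dec are assumed values."""
    return {
        cell: min(1.0, p + inc) if cell in presynaptic_active
        else max(0.0, p - dec)
        for cell, p in permanences.items()
    }

# One update: an almost-connected synapse crosses threshold, an unused one decays.
seg = {"cellA": 0.45, "cellB": 0.55, "cellC": 0.30}
seg = update_segment(seg, presynaptic_active={"cellA"})
assert synapse_weight(seg["cellA"]) == 1   # 0.45 -> 0.55, now connected
assert synapse_weight(seg["cellC"]) == 0   # 0.30 -> 0.28, still potential
```

Because single noisy events only nudge the scalar permanence, a synapse is established or removed only after consistent evidence, which is what makes on-line learning noise-tolerant.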

The activation strength of any one neuron or synapse is not very important.

Network Capacity and Generalization

Though the model is called “sequence memory”, it is actually a memory of transitions. There is no representation or concept of the length of sequences or of the number of stored sequences. The network only learns transitions between inputs.
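The "memory of transitions" point can be made concrete with a deliberately simplified first-order sketch: the store records only which inputs follow which, with no notion of sequence length or count. (The paper's actual mechanism is high-order, disambiguating transitions by cellular context; this sketch drops that for brevity.)

```python
from collections import defaultdict

class TransitionMemory:
    """First-order sketch of 'sequence memory': only pairwise transitions
    are stored. There is no representation of whole sequences, their
    lengths, or how many have been learned."""

    def __init__(self):
        self.successors = defaultdict(set)

    def learn(self, sequence):
        # Record each adjacent transition; the sequence itself is not stored.
        for prev, nxt in zip(sequence, sequence[1:]):
            self.successors[prev].add(nxt)

    def predict(self, element):
        # Union of all inputs ever seen to follow this one.
        return set(self.successors[element])

tm = TransitionMemory()
tm.learn("ABCD")
tm.learn("XBCY")
```

After learning both strings, `tm.predict("C")` is `{"D", "Y"}`: a first-order store cannot tell which sequence it is in, which is exactly the ambiguity the paper's high-order, context-dependent cell activations resolve.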