  • Poster presentation
  • Open access

Forming cooperative representations via solipsistic synaptic plasticity rules

The canonical model of primary visual cortex (V1) is that it forms a linear generative model of the image stimulus presented to the eyes. For a given image with pixel values $X_j$, the representation $X_j^* = \sum_i b_i \psi_{ij}$ is formed by multiplying the activity of each neuron ($b_i$) by the feature that neuron encodes ($\psi_{ij}$), and summing over all neurons. We call this a cooperative representation, since all of the neurons collectively form a single representation. Over time, the network is thought to adapt so as to minimize, on average, the mean-squared error between the representation $X^*$ and the input $X$, $\|X - X^*\|^2$. Performing gradient descent on this error function yields the usual learning rule $\Delta\psi_{ij} = \alpha\, b_i \left(X_j - \sum_k b_k \psi_{kj}\right)$, where $\alpha$ is a small positive constant called the learning rate. Typically, the features $\psi_{ij}$ are interpreted as the receptive fields (RFs; the features to which a neuron responds) of the neurons; indeed, there is strong evidence [1] that the feature encoded by a neuron is very similar to its RF. Under that interpretation, the value $\psi_{ij}$ can be thought of as the strength of the synaptic connection between input pixel $X_j$ and neuron $i$. It is then clear that the canonical learning rule, used by most previous work in this field [1, 2], fails to be biologically realistic: updating one synaptic strength $\psi_{ij}$ requires knowledge of the strengths of many synaptic connections $\psi_{kj}$, all on different neurons $k$, and it is not clear that such information is available to each individual synapse in the brain.
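As a concrete illustration, here is a minimal NumPy sketch of one step of this non-local update; the patch size, network size, and the way the activities $b_i$ are obtained are our own placeholder assumptions, not details from the poster.

```python
# One step of the canonical NON-LOCAL rule (illustrative sketch, not the
# authors' code): Delta psi_ij = alpha * b_i * (X_j - sum_k b_k * psi_kj).
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_pixels = 64, 256     # sizes chosen arbitrarily for illustration
alpha = 0.01                      # learning rate

psi = rng.normal(scale=0.1, size=(n_neurons, n_pixels))  # features psi_ij
X = rng.normal(size=n_pixels)     # stand-in for one image's pixel values X_j
b = rng.normal(size=n_neurons)    # activities b_i (placeholders here)

X_star = b @ psi                       # cooperative reconstruction X*_j = sum_i b_i psi_ij
residual = X - X_star                  # shared error term, depends on ALL synapses psi_kj
psi += alpha * np.outer(b, residual)   # every synapse needs this global residual
```

Note that the shared residual couples each synapse to every other synapse feeding pixel $j$, which is exactly the non-locality at issue.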

We consider instead a Hebbian learning rule that respects synaptic locality, $\Delta\psi_{ij} = \alpha\, b_i (X_j - b_i \psi_{ij})$ [3]. In this case, the information required to change the strength of synapse $\psi_{ij}$ consists solely of the pre-synaptic activity $X_j$, the post-synaptic activity $b_i$, and the current strength of the synaptic connection $\psi_{ij}$. While this rule respects the locality of synaptic information, it does not perform gradient descent on the desired error function $\|X - X^*\|^2$. Instead, our local rule can be seen as gradient descent on the error function $\sum_i \sum_j (X_j - b_i \psi_{ij})^2$, the sum over all neurons of the error between each neuron's own internal representation of the input, $b_i \psi_{ij}$, and the input image. In other words, a network that follows Oja's [3] local learning rule is a solipsistic one: each neuron makes its own individual representation of the input, and learning optimizes each of those representations individually.
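Under the same toy setup as above, the local rule replaces the shared residual with a per-neuron one; again this is our own sketch, with placeholder activities.

```python
# One step of the LOCAL (Oja-style) rule (illustrative sketch):
# Delta psi_ij = alpha * b_i * (X_j - b_i * psi_ij).
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_pixels, alpha = 64, 256, 0.01
psi = rng.normal(scale=0.1, size=(n_neurons, n_pixels))  # synaptic strengths psi_ij
X = rng.normal(size=n_pixels)                            # pixel values X_j
b = rng.normal(size=n_neurons)                           # activities b_i (placeholders)

# Synapse psi_ij sees only its pre-synaptic input X_j, its post-synaptic
# activity b_i, and its own current strength -- nothing from other neurons.
psi += alpha * b[:, None] * (X[None, :] - b[:, None] * psi)
```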

We have proven that if the neuronal activities $\{b_i\}$ are uncorrelated and sufficiently sparse (the majority of the $b_i$ are zero for any given image), the local and non-local learning rules are approximately equal when averaged over many image presentations: $\langle \Delta\psi_{ij} \rangle = \alpha \langle b_i (X_j - b_i \psi_{ij}) \rangle \approx \alpha \langle b_i (X_j - \sum_k b_k \psi_{kj}) \rangle$. This suggests a previously undiscovered role for independence and sparseness in visual cortex: these properties allow the neuronal network to (approximately) form the optimal cooperative representation, despite the locality of its learning rules. The same proof applies to other neuronal networks that form linear generative models.
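The averaging argument is easy to check numerically. In the sketch below we draw sparse, independent activities (unrelated to the image, purely to satisfy the uncorrelatedness assumption) and compare the two updates accumulated over many presentations; the sparsity level and sample count are arbitrary choices of ours, not values from the poster.

```python
# Numerical check that sparse, uncorrelated activities make the averaged
# local and non-local updates approximately equal:
#   <b_i (X_j - b_i psi_ij)>  ~=  <b_i (X_j - sum_k b_k psi_kj)>.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_pixels, n_images = 64, 256, 20000
psi = rng.normal(scale=0.1, size=(n_neurons, n_pixels))

local_sum = np.zeros_like(psi)
nonlocal_sum = np.zeros_like(psi)
for _ in range(n_images):
    X = rng.normal(size=n_pixels)
    # Sparse, independent activities: most b_i are exactly zero.
    b = rng.normal(size=n_neurons) * (rng.random(n_neurons) < 0.05)
    local_sum += b[:, None] * (X[None, :] - b[:, None] * psi)
    nonlocal_sum += np.outer(b, X - b @ psi)

# The cross terms b_i * b_k * psi_kj (k != i) average toward zero, so the
# relative difference between the two accumulated updates is small.
print(np.linalg.norm(local_sum - nonlocal_sum) / np.linalg.norm(nonlocal_sum))
```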

We will present the details of our proof, and an example network (similar to that of [4]) of leaky integrate-and-fire neurons that learns a sparse image code using the local learning rule $\Delta\psi_{ij} = \alpha\, b_i (X_j - b_i \psi_{ij})$. In our network, inhibitory inter-neuronal connections and variable firing thresholds keep the neuronal activities uncorrelated and sparse throughout the learning process. When trained on natural scenes, this network learns the same diversity of receptive fields as do previous non-local algorithms [1, 2].
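For intuition only, here is a drastically simplified, rate-based caricature of such a training loop. It is not the leaky integrate-and-fire network described above (it omits the inhibitory inter-neuronal connections, among much else), and the threshold dynamics and input statistics are our own placeholders.

```python
# Toy training loop: local rule plus a homeostatic firing threshold that keeps
# activity sparse (rate-based caricature, NOT the authors' LIF network).
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_pixels, alpha = 64, 256, 0.005
psi = rng.normal(scale=0.1, size=(n_neurons, n_pixels))  # synaptic strengths
theta = np.ones(n_neurons)        # variable firing thresholds, one per neuron
target_rate = 0.05                # desired fraction of active neurons

for step in range(10000):
    X = rng.normal(size=n_pixels)             # stand-in for a whitened image patch
    drive = psi @ X                           # feedforward input to each neuron
    b = np.maximum(drive - theta, 0.0)        # thresholded, hence sparse, activity
    psi += alpha * b[:, None] * (X[None, :] - b[:, None] * psi)  # local rule
    theta += 0.01 * ((b > 0).astype(float) - target_rate)        # homeostasis
```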

References

  1. Olshausen BA, Field DJ: Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature. 1996, 381: 607-609. 10.1038/381607a0.


  2. Rehn M, Sommer FT: A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields. J Comput Neurosci. 2007, 22: 135-146. 10.1007/s10827-006-0003-9.


  3. Oja E: A simplified neuron model as a principal component analyzer. J Math Biol. 1982, 15: 267-273. 10.1007/BF00275687.


  4. Falconbridge MS, Stamps RL, Badcock DR: A simple Hebbian/anti-Hebbian network learns the sparse, independent components of natural images. Neural Comput. 2006, 18: 415-429. 10.1162/089976606775093891.



Acknowledgements

The authors are grateful to the William J. Fulbright foundation and the University of California for financial support.

Author information


Correspondence to Joel Zylberberg.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Zylberberg, J., DeWeese, M.R. Forming cooperative representations via solipsistic synaptic plasticity rules. BMC Neurosci 12 (Suppl 1), P69 (2011). https://0-doi-org.brum.beds.ac.uk/10.1186/1471-2202-12-S1-P69
