  • Poster presentation
  • Open access

Integration as sequence detection in a feedforward neural integrator

In neural integrator networks, transient inputs are accumulated into sustained output signals that reflect the mathematical integral, over time, of their inputs. This computation has been identified as an important component of a wide variety of brain functions ranging from accumulation of sensory evidence for decision making to the motor control of eye movements. All current network models of neural integration assume that the conversion of transient inputs to sustained responses is accomplished by feedback among recurrently connected neuronal elements. Here we show that neural integration can occur even in feedforward networks and describe the properties of this novel class of integrators.
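
To make the standard recurrent mechanism concrete, the following is a minimal sketch (an illustration of the conventional model class, not taken from this work) of a linear recurrent integrator: a single rate unit whose self-excitatory feedback exactly cancels its leak, so that a transient input pulse is accumulated and held. All parameter values are illustrative assumptions.

```python
import numpy as np

# Linear recurrent integrator: tau * dx/dt = -x + w*x + u.
# With self-excitation w = 1 the feedback exactly cancels the leak,
# leaving dx/dt = u/tau, so x reports the running integral of u.
tau, dt = 0.1, 0.001           # time constant (s), Euler step (s)
T = int(2.0 / dt)
u = np.zeros(T)
u[: int(0.05 / dt)] = 1.0      # brief transient input pulse

w = 1.0                        # tuned feedback weight
x = np.zeros(T)
for t in range(1, T):
    x[t] = x[t - 1] + dt / tau * (-x[t - 1] + w * x[t - 1] + u[t - 1])

# The pulse (area 0.05) is held persistently: x[-1] is ~0.5 = area / tau.
```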

We consider a feedforward network consisting of multiple stages, each of which linearly filters its inputs with a time constant τ. We show that the effective dynamics of this network can be reduced to those of a simple network consisting of a linear chain of neurons, with input entering one end and being filtered successively by each stage. As a result of this filtering, later stages of the network have prolonged responses that peak at successively later times. Thus, the network effectively forms a delay-line set of basis functions that are localized in time and that can be flexibly summed to generate a variety of temporal responses. We show analytically that, with appropriate choices of synaptic weights, the network can perform a nearly perfect integral of its inputs over a duration of time of order Nτ, where N is the number of stages in the network. We further show that although the performance of the network is best understood in terms of basis functions corresponding to a delay line, the responses of the actual neurons in the network will generally be linear combinations of these basis functions and may not be easily recognized as originating from delay-line dynamics.
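
The following is a minimal numerical sketch of the chain described above; the parameter values (N, τ, and the input pulse) are assumptions for illustration. Each stage low-pass filters its predecessor, producing successively delayed, broadened responses, and an equal-weight readout of all stages sums these delay-line basis functions into a sustained plateau approximating the integral of the input over a window of order Nτ.

```python
import numpy as np

# Feedforward chain of N identical first-order filters: stage n low-pass
# filters stage n-1 with time constant tau, so the impulse response of
# stage n is a gamma function peaking near n*tau -- a delay-line set of
# basis functions localized in time.
N, tau, dt = 50, 0.1, 0.001
T = int(8.0 / dt)
u = np.zeros(T)
u[: int(0.05 / dt)] = 1.0                 # brief transient input pulse

x = np.zeros((N, T))                      # activity of each stage
for t in range(1, T):
    x[0, t] = x[0, t - 1] + dt / tau * (-x[0, t - 1] + u[t - 1])
    x[1:, t] = x[1:, t - 1] + dt / tau * (-x[1:, t - 1] + x[:-1, t - 1])

# Equal readout weights sum the basis functions into a sustained plateau:
# y(t) is ~ (1/tau) * integral of u, holding for t up to about N*tau = 5 s.
y = np.ones(N) @ x
```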

The robustness of the network to uniform changes in all synaptic weights can be shown analytically to be similar to that of linear recurrent networks: the integrated activity decays exponentially if the weights are too small and grows exponentially, until signals begin to exit the network, if the weights are too large. We show that proper tuning of the weights can be accomplished by a homeostatic learning rule in which neurons scale their intrinsic gain and/or synaptic weights until their activity reaches an average target level over time.
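
The following sketch illustrates a homeostatic rule of the kind described above; the specific multiplicative update, the choice of target, and all constants are assumptions for illustration, not the abstract's own equations. Starting from weights tuned too low, each stage scales its incoming weight until its trial-averaged activity reaches the target, restoring near-perfect integration.

```python
import numpy as np

# Homeostatic tuning sketch (assumed rule and constants): each stage scales
# its incoming weight g[n] until its trial-averaged activity hits a target.
N, tau, dt = 20, 0.1, 0.001
T = int(6.0 / dt)
u = np.zeros(T)
u[: int(0.05 / dt)] = 1.0                 # brief transient input pulse

def run(g):
    """Simulate the feedforward chain with per-stage input gains g."""
    x = np.zeros((N, T))
    for t in range(1, T):
        x[0, t] = x[0, t - 1] + dt / tau * (-x[0, t - 1] + g[0] * u[t - 1])
        x[1:, t] = x[1:, t - 1] + dt / tau * (-x[1:, t - 1]
                                              + g[1:] * x[:-1, t - 1])
    return x

# With g = 1 every stage has unit DC gain, so its trial-averaged activity
# equals the trial-averaged input; use that as the homeostatic target.
target = u.mean()
g = np.full(N, 0.8)                       # start mistuned: activity decays
eta = 0.2                                  # learning rate (assumed)
for trial in range(300):
    mean_act = run(g).mean(axis=1)
    g *= 1.0 + eta * (target - mean_act) / target  # multiplicative scaling
# g relaxes toward 1 stage by stage, restoring near-perfect integration.
```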

In conclusion, this work suggests a novel mechanism for neural integration. Although we focus on its role as an integrator, the network bears strong similarities to previous networks proposed for temporal sequence recognition and production. This suggests that common underlying principles may be relevant to a host of temporal processing computations.

Author information


Corresponding author

Correspondence to Mark S Goldman.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution 2.0 License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Goldman, M.S. Integration as sequence detection in a feedforward neural integrator. BMC Neurosci 8 (Suppl 2), P170 (2007). https://0-doi-org.brum.beds.ac.uk/10.1186/1471-2202-8-S2-P170
