  • Poster presentation
  • Open access

Optimal sigmoidal tuning curves for intensity encoding sensory neurons with quasi-Poisson variability

Background

Rate-coding neurons are often characterized by their tuning curve, that is, the average firing rate, T(x), as a function of stimulus intensity, x. However, the substantial natural variability in firing rate that occurs for a fixed stimulus limits the fidelity with which firing rate can encode the stimulus. Consequently, the stimulus-dependent variance in firing rate, V(x), is crucial in studies of tuning curve optimality. Information theory can be used to quantify such limits and to address the question of finding the tuning curve that maximizes the information rate [1].
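
As a toy illustration of this point (our own sketch; the discrete stimulus levels, Gaussian count noise, and Fano-factor parameterization below are assumptions made purely for demonstration), one can estimate the mutual information between a stimulus and a noisy spike count and observe that larger rate variability reduces the information the rate carries:

```python
# Toy sketch (illustration only, not from the abstract): estimate the mutual
# information between a discrete stimulus level and a noisy spike count, and
# show that larger count variability lowers the information carried by the rate.
import numpy as np

rng = np.random.default_rng(1)
levels = np.arange(1, 9)                 # 8 equally likely stimulus intensities
tuning = 5.0 * levels                    # T(x): mean count grows with intensity

def mutual_information(fano):
    """Plug-in MI estimate (bits) for counts with variance = fano * mean."""
    n_trials = 50000
    x = rng.integers(0, len(levels), size=n_trials)
    mean = tuning[x]
    # Gaussian surrogate for the count noise, variance scaled by the Fano factor
    counts = np.round(rng.normal(mean, np.sqrt(fano * mean))).astype(int)
    counts = np.clip(counts, 0, None)
    joint = np.zeros((len(levels), counts.max() + 1))
    np.add.at(joint, (x, counts), 1.0)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

for fano in [0.2, 1.0, 5.0]:
    print(f"Fano factor {fano}: MI ~ {mutual_information(fano):.2f} bits")
```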

Firing activity is often modeled as a Poisson point process, such that V(x) = T(x). However, this assumption can break down for intensity-encoding neurons with monotonically non-decreasing (e.g. sigmoidal) tuning curves, such as primary afferent auditory nerve fibers, where refractoriness can cause the firing rate to saturate. As the rate nears saturation, variability decreases and, to a first approximation, becomes binomial rather than Poisson, so that V(x) varies quadratically with T(x). Such neurons are sometimes called quasi-Poisson.
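
To illustrate this scaling (a minimal sketch of our own, assuming a simple binned refractory model rather than the authors' derivation), splitting a counting window into n bins with at most one spike per bin gives a binomial count whose normalized variance follows T(1 - T), unlike a matched Poisson count whose variance equals its mean:

```python
# Minimal sketch (assumption: a binned refractory model, not the authors' model).
# Each counting window is split into n_bins sub-intervals; at most one spike per
# bin, fired with probability p. The normalized rate is T = p, the count is
# binomial, and the normalized count variance scales as T(1 - T), whereas a
# Poisson count with the same mean has variance equal to its mean.
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_trials = 50, 20000

for p in [0.1, 0.5, 0.9]:
    counts = rng.binomial(n_bins, p, size=n_trials)    # refractory-limited counts
    poisson = rng.poisson(n_bins * p, size=n_trials)   # Poisson counts, same mean
    print(f"T = {p:.1f}: binomial var/n = {counts.var() / n_bins:.3f} "
          f"(T(1-T) = {p * (1 - p):.3f}), Poisson var/n = {poisson.var() / n_bins:.3f} "
          f"(T = {p:.3f})")
```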

Results

We have derived a sufficient condition for achieving maximum Shannon mutual information between stimulus intensity and firing rate when the variability is quasi-Poisson, such that V(x) = s^2 T(x)(1 - T(x)) with s small [2]. The sufficient condition leads to analytical expressions for two ways to maximize mutual information: (i) an optimal monotonically non-decreasing tuning curve for any given stimulus distribution, and (ii) an optimal stimulus distribution for any given monotonically non-decreasing tuning curve [2].

The optimal tuning curve for a stimulus with cumulative distribution function F_X(x) is T_o(x) = 0.5 - 0.5 cos(π F_X(x)), while for a given tuning curve T(x), the optimal probability density function of the stimulus is f_X^o(x) = (dT(x)/dx) / (π (T(x)(1 - T(x)))^0.5). Our derivation also provides an expression for the reduction in mutual information when the tuning curve and stimulus distribution are not optimally matched [2]. This expression is a function of the relative entropy between the stimulus distribution and a distribution known as the Jeffreys prior. The derivation makes use of a relationship between Shannon mutual information and Fisher information discussed, for example, in [1].
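
As a numerical illustration of these two expressions (our own sketch; the choices of a standard Gaussian stimulus and a logistic tuning curve are assumptions, not taken from the abstract), the optimal tuning curve can be evaluated directly from the stimulus CDF, and the optimal stimulus density for a given sigmoid can be checked to integrate to one:

```python
# Sketch of the two closed-form results, with assumed example choices:
# a standard Gaussian stimulus and a logistic tuning curve.
import numpy as np
from scipy.stats import norm

x = np.linspace(-12.0, 12.0, 4001)
dx = x[1] - x[0]

# (i) Optimal tuning curve for a given stimulus distribution:
#     T_o(x) = 0.5 - 0.5*cos(pi * F_X(x)); here F_X is the standard normal CDF,
#     and T_o is a normalized firing rate between 0 and 1.
F = norm.cdf(x)
T_opt = 0.5 - 0.5 * np.cos(np.pi * F)

# (ii) Optimal stimulus density for a given tuning curve:
#     f_X^o(x) = T'(x) / (pi * sqrt(T(x)(1 - T(x)))); here T is a logistic sigmoid.
T = 1.0 / (1.0 + np.exp(-x))
dT = T * (1.0 - T)                        # derivative of the logistic function
f_opt = dT / (np.pi * np.sqrt(T * (1.0 - T)))

# Sanity checks: the optimal density integrates to ~1 (up to tail truncation),
# and the optimal tuning curve spans its full normalized range.
print("integral of f_X^o:", np.sum(f_opt) * dx)
print("T_o range        :", T_opt.min(), T_opt.max())
```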

Discussion

Unlike neurons with a 'preferred stimulus' (unimodal tuning curves), optimality conditions for neurons whose firing rates increase monotonically (e.g. sigmoidally) with stimulus intensity have received little attention. Notable exceptions are [3, 4], which maximize Fisher information and consider only Poisson variability. In contrast, we maximize mutual information and consider quasi-Poisson variability, which leads to a versatile analytical solution that allows for refractoriness. A limitation to be addressed in future work is how well the quadratic relationship V(x) = s^2 T(x)(1 - T(x)) describes measured variability. Finally, while we assume small s, our solution provides a lower bound on the achievable mutual information for larger s, and is hence a worst-case scenario.

References

  1. Brunel N, Nadal J: Mutual information, Fisher information and population coding. Neural Computation. 1998, 10: 1731-1757. 10.1162/089976698300017115.


  2. McDonnell MD, Stocks NG: Maximally informative stimuli and tuning curves for sigmoidal rate-coding neurons with quasi-Poisson variability. Submitted to Physical Review Letters. arXiv:0802.1570v1.

  3. Bethge M, Rotermund D, Pawelzik K: Optimal short-term population coding: When Fisher information fails. Neural Computation. 2002, 14: 2317-2351. 10.1162/08997660260293247.


  4. Bethge M, Rotermund D, Pawelzik K: Optimal neural rate coding leads to bimodal firing rate distributions. Network: Computation in Neural Systems. 2003, 14: 303-319. 10.1088/0954-898X/14/2/307.



Acknowledgements

This work was funded by Australian Research Council grant DP0770747 and EPSRC grant EP/C523334/1; we gratefully acknowledge this support. We also thank Emilio Salinas and Simon Durrant for valuable discussions.

Author information

Corresponding author

Correspondence to Mark D McDonnell.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution 2.0 License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

McDonnell, M.D., Stocks, N.G. Optimal sigmoidal tuning curves for intensity encoding sensory neurons with quasi-Poisson variability. BMC Neurosci 9 (Suppl 1), P117 (2008). https://0-doi-org.brum.beds.ac.uk/10.1186/1471-2202-9-S1-P117
