- Poster presentation
- Open Access
Stochastic gradient ascent learning with spike timing dependent plasticity
BMC Neuroscience volume 12, Article number: P250 (2011)
Stochastic gradient ascent learning exploits correlations between parameter variations and the overall success of a system. This algorithmic idea has been related to learning in neuronal networks by postulating eligibility traces at synapses, which make synapses selectable for changes depending on later reward signals [1, 2]. Formalizations of the synaptic and neuronal dynamics supporting gradient ascent learning in terms of differential equations exhibit strong similarities with a recent formulation of spike timing dependent plasticity (STDP) [3] when it is combined with a reward signal. Here we present conditions under which reward-modulated STDP is in fact guaranteed to maximize expected reward. We present numerical simulations underlining the relevance of realistic STDP models for reward-dependent learning. In particular, we find that the nonlinear adaptation of STDP to pre- and postsynaptic activities [3] contributes to stable learning.
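The mechanism described above can be illustrated with a minimal sketch: each synapse keeps a slowly decaying eligibility trace driven by an STDP-shaped term (potentiation for pre-before-post pairings, depression for the reverse), and the weight changes only when a later reward signal arrives. All parameters and the reward rule below are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed for this sketch, not from the abstract)
n_steps = 5000
dt = 1.0          # time step (ms)
tau_stdp = 20.0   # STDP window time constant (ms)
tau_elig = 500.0  # eligibility-trace time constant (ms)
a_plus, a_minus = 1.0, 1.0
eta = 0.01        # learning rate

w = 0.5           # synaptic weight
x_pre = 0.0       # low-pass filtered presynaptic spike train
x_post = 0.0      # low-pass filtered postsynaptic spike train
elig = 0.0        # synaptic eligibility trace

for t in range(n_steps):
    pre = rng.random() < 0.02    # ~20 Hz Poisson presynaptic spikes
    post = rng.random() < 0.02   # ~20 Hz Poisson postsynaptic spikes

    # Exponentially decaying spike traces defining the STDP window
    x_pre += -dt / tau_stdp * x_pre + pre
    x_post += -dt / tau_stdp * x_post + post

    # STDP-shaped term: potentiation for pre-before-post,
    # depression for post-before-pre
    stdp = a_plus * x_pre * post - a_minus * x_post * pre

    # Eligibility trace accumulates STDP events and decays slowly,
    # bridging the delay between plasticity events and reward
    elig += -dt / tau_elig * elig + stdp

    # Hypothetical reward rule: reward coincident pre/post firing
    r = 1.0 if (pre and post) else 0.0

    # Reward-modulated weight update, clipped to [0, 1]
    w += eta * r * elig
    w = min(max(w, 0.0), 1.0)

print(f"final weight: {w:.3f}")
```

Because the weight only moves when the reward coincides with a nonzero eligibility trace, the update correlates plasticity-inducing spike patterns with success, which is the stochastic-gradient-ascent reading of reward-modulated STDP.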
References
1. Seung HS: Learning in spiking neural networks by reinforcement of stochastic synaptic transmission. Neuron. 2003, 40 (6): 1063-1073. 10.1016/S0896-6273(03)00761-X.
2. Xie X, Seung HS: Learning in neural networks by reinforcement of irregular spiking. Phys Rev E. 2004, 69 (4): 041909-1-041909-10.
3. Schmiedt JT, Albers C, Pawelzik K: Spike timing-dependent plasticity as dynamic filter. Advances in Neural Information Processing Systems 23. Edited by: Lafferty J, Williams CKI, Shawe-Taylor J, Zemel RS, Culotta A. 2010, 2110-2118.
About this article
Cite this article
Vieira, J., Arévalo, O. & Pawelzik, K. Stochastic gradient ascent learning with spike timing dependent plasticity. BMC Neurosci 12, P250 (2011). https://0-doi-org.brum.beds.ac.uk/10.1186/1471-2202-12-S1-P250