Poster presentation
Simulating mirror-neuron responses using a neural model for visual action recognition
BMC Neuroscience, volume 9, Article number: P112 (2008)
The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for the understanding of the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research investigating possible links between action perception and action execution [1, 2]. However, it remains largely unknown to what extent this putative visuo-motor interaction contributes to the visual perception of actions, and which of the relevant computational functions are instead accomplished by purely visual processing.
We present a neurophysiologically inspired model for the visual recognition of hand movements. It demonstrates that several experimentally established properties of mirror neurons can be explained by the analysis of spatio-temporal visual features within a hierarchical neural system that reproduces fundamental properties of the visual pathway and premotor cortex. The model integrates several physiologically plausible computational mechanisms within a common architecture that is suitable for the recognition of grasping actions from real videos: (1) a hierarchical neural architecture that extracts 2D form features of increasing complexity and position invariance along the hierarchy [3–5]; (2) selection of optimal features on different hierarchy levels by eliminating features that do not contribute to correct classification; (3) simple recurrent neural circuits that realize temporal sequence selectivity [6–8]; (4) a simple neural mechanism that combines spatial information about the goal object and its affordance with information about the end effector and its movement. The model is validated on video sequences of both monkey and human grasping actions.
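Mechanism (1) can be illustrated with a minimal sketch in the spirit of HMAX-style hierarchies [3–5]: alternating layers of oriented filtering ("simple cells") and local max pooling ("complex cells"), where pooling over positions yields position invariance. Filter shapes, sizes, and pooling parameters below are illustrative assumptions, not the model's actual parameters.

```python
import numpy as np

def gabor_bank(size=7, n_orient=4):
    """Oriented edge detectors (simple-cell-like filters); a crude
    stand-in for the Gabor filters used in HMAX-style models."""
    ys, xs = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    filters = []
    for k in range(n_orient):
        th = np.pi * k / n_orient
        u = xs * np.cos(th) + ys * np.sin(th)           # coordinate along the grating
        g = np.exp(-(xs**2 + ys**2) / 8.0) * np.cos(2.0 * u)
        filters.append(g - g.mean())                     # zero-mean filter
    return filters

def simple_layer(img, filters):
    """'S' layer: rectified correlation of the image with each filter."""
    h, w = img.shape
    size = filters[0].shape[0]
    out = np.zeros((len(filters), h - size + 1, w - size + 1))
    for i, f in enumerate(filters):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[i, y, x] = np.abs((img[y:y + size, x:x + size] * f).sum())
    return out

def complex_layer(s, pool=4):
    """'C' layer: max-pool over local positions -> position invariance."""
    n, h, w = s.shape
    return np.array([[[s[i, y:y + pool, x:x + pool].max()
                       for x in range(0, w - pool + 1, pool)]
                      for y in range(0, h - pool + 1, pool)]
                     for i in range(n)])
```

On this sketch, a vertical bar drives the vertical-orientation channel much more strongly than the horizontal one, and shifting the bar by a pixel leaves the pooled response essentially unchanged, which is the kind of position invariance the hierarchy is meant to provide.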
We show that simple, well-established, physiologically plausible mechanisms can account for important aspects of visual action recognition and for experimental data on the mirror-neuron system. Notably, these results do not depend on explicit 3D representations of objects and actions; instead, the model realizes predictions over time based on learned sequences of 2D patterns arising in the visual input. Our results complement those of existing models [9] and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations to the visual recognition of imitable actions.
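The temporal sequence selectivity behind such predictions (mechanism (3), cf. [6–8]) can be sketched as a recurrent network of snapshot-tuned neurons with an asymmetric lateral kernel: each neuron excites the neuron coding the next snapshot and inhibits the one coding the previous snapshot, so activity summates only when snapshots arrive in the learned order. All parameters below (network size, kernel shape, time constants) are illustrative assumptions, not fitted model values.

```python
import numpy as np

def simulate(order, n=20, steps=200, dt=0.1, tau=1.0):
    """Drive a chain of 'snapshot' neurons in the given order and return
    the integrated network activity. The asymmetric lateral kernel excites
    successors and inhibits predecessors, so activity builds up only for
    the learned (forward) temporal order of the snapshots."""
    idx = np.arange(n)
    d = idx[:, None] - idx[None, :]                # postsynaptic minus presynaptic index
    w = 1.2 * np.exp(-0.5 * (d - 1.0) ** 2) - 0.5  # excitation shifted toward successors
    u = np.zeros(n)                                # membrane potentials
    total = 0.0
    for t in range(steps):
        k = order[(t * len(order)) // steps]       # snapshot currently shown
        inp = np.zeros(n)
        inp[k] = 1.0
        r = np.tanh(np.maximum(u, 0.0))            # saturating firing rates
        u += dt * (-u + w @ r + inp) / tau
        total += r.sum() * dt
    return total

forward = simulate(list(range(20)))                # snapshots in learned order
reverse = simulate(list(range(19, -1, -1)))        # same snapshots, reversed
```

Presenting the snapshots in the learned order yields a larger integrated response than the time-reversed sequence, mirroring the sequence selectivity that lets the model predict upcoming frames of a learned action.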
di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G: Understanding motor events: a neurophysiological study. Exp Brain Res. 1992, 91: 176-180. 10.1007/BF00230027.
Rizzolatti G, Craighero L: The mirror-neuron system. Annu Rev Neurosci. 2004, 27: 169-192. 10.1146/annurev.neuro.27.070203.144230.
Giese MA, Poggio T: Neural mechanisms for the recognition of biological movements. Nat Rev Neurosci. 2003, 4: 179-192. 10.1038/nrn1057.
Riesenhuber M, Poggio T: Hierarchical models of object recognition in cortex. Nat Neurosci. 1999, 2: 1019-1025. 10.1038/14819.
Serre T, Wolf L, Bileschi S, Riesenhuber M, Poggio T: Robust object recognition with cortex-like mechanisms. IEEE Trans Pattern Anal Mach Intell. 2007, 29: 411-426. 10.1109/TPAMI.2007.56.
Xie X, Giese MA: Nonlinear dynamics of direction-selective recurrent neural media. Phys Rev E Stat Nonlin Soft Matter Phys. 2002, 65: 051904.
Zhang K: Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. J Neurosci. 1996, 16: 2112-2126.
Hopfield JJ, Brody CD: What is a moment? "Cortical" sensory integration over a brief interval. Proc Natl Acad Sci USA. 2000, 97: 13919-13924. 10.1073/pnas.250483697.
Oztop E, Kawato M, Arbib M: Mirror neurons and imitation: a computationally guided review. Neural Netw. 2006, 19: 254-271. 10.1016/j.neunet.2006.02.002.
Supported by the DFG, the Volkswagenstiftung, and the Hermann und Lilly Schilling Foundation.