Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders



Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD.

We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI).


Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex–MTG–IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group.


These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD.


Background

Individuals with autism spectrum disorders (ASD) are characterized primarily by qualitative impairments in social interaction [1]. One of the most evident features of their social impairment involves deficient communication via emotional facial expressions [2]. For example, several previous behavioral studies reported that individuals with ASD exhibited less attention [3], attenuated emotional behaviors [4], and reduced and/or inappropriate facial reactions [5] in response to the facial expressions of other individuals compared with typically developing individuals.

Several neuroimaging studies using functional magnetic resonance imaging (fMRI) and positron emission tomography have tested the neural substrates of impaired facial-expression processing in ASD and reported inconsistent findings. Almost all of these studies used photos of emotional facial expressions as stimuli and found that individuals with ASD showed abnormal activity in several brain regions, including the posterior superior temporal sulcus (STS) or its adjacent regions such as the middle temporal gyrus (MTG) [6–8], the posterior fusiform gyrus (FG) [7, 9–13], amygdala (AMY) [6–8, 12], medial prefrontal cortex (MPFC) at around the medial superior frontal gyrus [8, 14], and the inferior frontal gyrus (IFG) [9, 15, 16]. Most of these studies reported hypoactivation of these regions [6–11, 13–16] (however, see [12]). Substantial neuroimaging and neuropsychological evidence in typically developing individuals has suggested that these brain regions are related to social activities, such as the visual analysis of dynamic aspects of faces involving the STS/MTG [17], the visual analysis of invariant aspects of faces and/or subjective perception of faces involving the FG [18], emotional processing involving the AMY [19], attribution of mental states involving the MPFC [20], and motor mimicry involving the IFG [21]. Based on these data, these regions have been called “social brain” regions [22–28]. Hence, the findings in individuals with ASD appear to account for their impaired processing of emotional facial expressions. However, it must be noted that different studies have reported abnormalities in different parts of the social brain, and thus the results appear to be far from consistent. Furthermore, whether the neural substrates of impaired expression processing in ASD can be traced to reduced activity in any specific brain region and/or to reduced connectivity among regions, as suggested in other lines of ASD research (cf. [29]), remains unknown.

Dynamic facial expressions are more natural and powerful cues in real-life social interaction than are static expressions. From an evolutionary perspective [30–32], human minds are programmed to efficiently process dynamic facial expressions of conspecifics compared with their static expressions, which are artificial signals or products of technology. The importance of the dynamic properties of facial expressions is illustrated by behavioral studies of typically developing individuals. Researchers who observed facial expressions in real situations described rich, dynamic information in emotional facial expressions [31, 33]. Several experimental studies have indicated that dynamic facial expressions, as compared with static expressions, induced more pronounced psychological responses, such as perception (e.g., [34]), emotional reactions (e.g., [35]), and facial mimicry (e.g., [36]). Advantages of dynamic over static facial expressions in inducing behavioral reactions have been shown even in newborn infants [37]. Consistent with these behavioral data, several neuroimaging studies with typically developing participants have shown that the social brain regions were more active when viewing dynamic as compared with static facial expressions [38–42]. These regions included the STS/MTG [38–42], FG [38–40], AMY [39, 40, 42], MPFC [38, 39], and IFG [39, 40, 42].

Nevertheless, few studies have investigated brain activity in response to dynamic facial expressions in individuals with ASD. Impaired social interaction via emotional expression has consistently been shown in individuals with ASD in real situations [3–5], and dynamic, rather than static, facial expressions are the plausible medium for such impairments. Consistent with this idea, several behavioral studies have demonstrated that impairments in the ability of individuals with ASD to process emotional expressions were more evident in response to dynamic than to static facial expressions (e.g., [43]). Therefore, it is reasonable to assume that neuroimaging studies using dynamic facial expressions would more clearly identify abnormal brain activity in these participants. Pelphrey et al. [44] tested this issue by presenting dynamic and static facial expressions depicting anger, fear, and neutral emotions to a group of individuals with ASD and to typically developing controls. The researchers found that the observation of dynamic facial expressions elicited less activation in the ASD group than in the control group in several social brain regions, including the STS/MTG, FG, AMY, and MPFC. These data suggest that this reduced brain activation in response to dynamic facial expressions reflects the neural basis of impaired facial-expression processing in individuals with ASD. However, this study did not reveal clear IFG activity in either the ASD or the control group. This issue could be critical because the IFG has recently received considerable interest in the neuroscientific literature on ASD. Indeed, it has been suggested that the IFG contains specific neuronal populations, known as “mirror neurons,” that discharge both when observing and when executing specific actions (for reviews, see [45, 46]). In the context of behavioral data indicating abnormal mimicking in ASD (e.g., [47]), some researchers have proposed that IFG dysfunction may constitute a fundamental deficit in ASD [48–50]. We reasoned that we could clarify this issue by using dynamic facial expression stimuli that were shown to effectively activate the IFG in typically developing individuals [40]. We hypothesized that the observation of dynamic, compared with static, facial expressions would clearly reveal hypoactivation of social brain regions (i.e., STS/MTG, FG, AMY, MPFC, and IFG) in individuals with ASD.

Furthermore, the functional network patterns of the social brain regions for processing dynamic facial expressions in both typically developing individuals and those with ASD remain unknown. A previous study tested effective connectivity in typically developing control and ASD groups using dynamic facial expressions as stimuli and found differential patterns of effective connectivity between groups [51]. However, because that study focused on the effects of tasks, the functional network underpinning the processing of dynamic facial expressions per se remains to be tested. Among the components of the social brain, converging data from anatomical and theoretical studies suggest that the STS/MTG and IFG constitute a core circuit. Several anatomical studies, including histological examinations in humans [52, 53] and non-human primates [54, 55], as well as diffusion tensor imaging in humans [56–58] and non-human primates [57], have indicated that the STS/MTG and IFG are directly connected. Some researchers have proposed that this circuit serves an important function in social interaction as the mirror neuron system (MNS) in typically developing individuals and is impaired in individuals with ASD [50, 59, 60]. However, this idea remains to be empirically tested. Based on these data, we hypothesized that observation of dynamic versus static facial expressions would enhance the functional couplings of the neural networks including the STS/MTG and IFG in typically developing individuals and that reductions would be found in the same functional neural networks in individuals with ASD.

In the present fMRI study, we examined the brain activities of a group of high-functioning individuals with ASD and age- and sex-matched typically developing controls while they viewed dynamic and static facial expressions. The stimuli used to depict dynamic facial expressions were shown to activate the social brain regions, including the IFG, in typically developing participants [40]. The stimuli were also found to sufficiently represent natural changes in facial expressions [61] and to effectively induce subjective emotion [35] and facial mimicry [36] in typically developing individuals. We prepared facial expressions with both negative (fearful) and positive (happy) emotional valences. The participants were asked to discriminate the sex of the presented faces to ensure that they were attending to the stimuli and to prevent their explicit processing of the emotional expressions. By comparing the brain activities under dynamic versus static facial expression conditions, we identified the regions involved in the processing of dynamic facial expressions. Furthermore, to investigate effective connectivity, we conducted dynamic causal modeling (DCM).


Results

Behavioral performance

The correct response percentage of the sex-discrimination task was comparable across groups: dynamic fear (control: M = 98.4, SD = 1.1; ASD: M = 92.4, SD = 5.3), dynamic happiness (control: M = 98.7, SD = 1.0; ASD: M = 93.4, SD = 5.3), static fear (control: M = 98.7, SD = 1.0; ASD: M = 93.9, SD = 4.3) and static happiness (control: M = 97.4, SD = 1.3; ASD: M = 93.8, SD = 3.8). A three-way repeated-measures analysis of variance (ANOVA) using group, presentation condition, and emotion as factors on the correct response percentage showed no significant main effects or interactions.

Correct response reaction times (RTs) were also comparable across groups: dynamic fear (control: M = 231.3, SD = 25.5; ASD: M = 242.0, SD = 52.7), dynamic happiness (control: M = 237.8, SD = 28.6; ASD: M = 205.7, SD = 53.0), static fear (control: M = 183.0, SD = 21.0; ASD: M = 186.4, SD = 45.5) and static happiness (control: M = 182.0, SD = 22.1; ASD: M = 201.3, SD = 41.2). An ANOVA with the same design as described above on the correct RTs showed only a significant main effect of presentation condition, indicating longer RTs in response to dynamic than to static presentations (F(1,23) = 13.96, p < .005).

In summary, behavioral performance data revealed no significant effects related to group.
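The significant main effect of presentation condition on RTs can be illustrated with a minimal sketch. With two-level factors, the repeated-measures ANOVA main effect of presentation condition is equivalent to a paired t-test on per-participant mean RTs collapsed over emotion; the values below are hypothetical, not the study's raw data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant mean correct RTs (ms), averaged over emotion
static_rt = np.array([180., 175., 190., 185., 178., 182., 188., 176.])
dynamic_rt = np.array([232., 228., 245., 238., 225., 240., 250., 230.])

# Paired t-test: equivalent to the RM-ANOVA main effect of presentation
# condition when the factor has only two levels
t_stat, p_value = stats.ttest_rel(dynamic_rt, static_rt)
print(f"dynamic vs. static RT: t = {t_stat:.2f}, p = {p_value:.4g}")
```

A positive t with a small p here corresponds to the reported pattern of longer RTs for dynamic than for static presentations.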

Regional brain activity

We tested regional brain activity using the three-way repeated-measures ANOVA model with group, presentation condition, and emotion as factors (Additional file 1: Figure S1). Initially, the simple main effect of presentation condition, contrasting dynamic and static presentations, was tested for each group (Table 1; Figure 1). For the control group, significant activation was detected across broad bilateral posterior regions, including the MTG and FG, as well as in the bilateral AMY, bilateral MPFC, and right IFG. For the ASD group, significant bilateral activation of the posterior regions was found, although its extent was smaller than in the control group; no other areas, including social brain regions such as the AMY, MPFC, and IFG, showed significant activation.

Table 1 Brain regions showing significant activation for dynamic versus static facial expressions
Figure 1

Statistical parametric maps showing significant brain activation for dynamic versus static facial expressions. The control (CON) and autism spectrum disorders (ASD) groups are shown in the left and right panels, respectively. The areas of activation are rendered on spatially normalized brains (upper) and overlaid on the normalized anatomical MRI of one of the participants at the coronal section showing amygdala activation (lower). The crosshairs in the lower panels are centered on the activation focus of the left amygdala in the control group (x -26, y -6, z -16; t = 4.65; cluster size = 5368 mm3). An extent threshold of p < .05, corrected for multiple comparisons, and a height threshold of p < .01 (uncorrected) were used. L = Left hemisphere; R = Right hemisphere.

Then, a planned contrast of the interaction between group and presentation condition was conducted, testing for reduced activation in the ASD as compared with the control group under dynamic versus static conditions (Table 2; Figure 2). The bilateral posterior regions, including the activation foci in the MTG in the right hemisphere and the FG in both hemispheres, were significantly activated. Significant activation was also found in the left AMY, bilateral MPFC, and right IFG. No significant activation was observed in any other region.

Table 2 Brain regions showing significant interactions between group and presentation condition
Figure 2

Brain activation for the significant interaction between group and presentation condition. Weaker activation was found in the autism spectrum disorders (ASD) group than in the control (CON) group for dynamic (DY) versus static (ST) expressions. A. Statistical parametric maps rendered on spatially normalized brains. A height threshold of p < .01 (uncorrected) was used without extent threshold restriction for display purposes. L = Left hemisphere; R = Right hemisphere. B. Statistical parametric maps of representative brain regions overlaid on the normalized anatomical MRI of one of the participants in this study. From left to right, the activation of the middle temporal gyrus (MTG; x 52, y -62, z 0; t = 5.08), fusiform gyrus (FG; x 40, y -58, z -14; t = 3.00), amygdala (AMY; x -28, y -4, z -18; t = 2.89), medial prefrontal cortex (MPFC; x 8, y 66, z 20; t = 3.87), and inferior frontal gyrus (IFG; x 48, y 26, z 8; t = 3.06) is shown. The statistical thresholds are the same as above. C. Mean parameter estimates (± SE) of brain regions corresponding to the above overlaid MRIs. The data were extracted at the sites of peaks. FE = Fear; HA = Happiness.

We also conducted exploratory analyses for other interactions related to the group factor in the whole brain, but found no significant results.


Effective connectivity

DCM analyses were conducted to test the MNS network for each group. Bi-directional (forward and backward) intrinsic connections were constructed between the primary visual cortex (V1) and MTG and between the MTG and IFG (Figure 3a). The modulatory effect of dynamic presentation was modeled to modulate each of these bi-directional connections. Based on the locations of the modulatory effects, we constructed the following four models (Figure 3b): (1) the null model, with no modulatory effect; (2) the MNS-entrance modulation model, with modulatory effects on the V1–MTG connections; (3) the MNS-core modulation model, with modulatory effects on the MTG–IFG connections; and (4) the full model, with modulatory effects on both the V1–MTG and MTG–IFG connections. The exceedance probability of the Bayesian model selection (BMS) indicated that the full model was the most likely for both groups (Table 3).
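The four-model space can be sketched schematically in the style of the DCM matrix convention used by SPM, in which an A matrix codes intrinsic connections and a B matrix codes the modulatory effect of an experimental input (here, dynamic presentation). The binary coding below is illustrative, not the authors' actual SPM specification.

```python
import numpy as np

regions = ["V1", "MTG", "IFG"]
idx = {r: i for i, r in enumerate(regions)}

def bidirectional(pairs, n=3):
    # 1 marks a directed connection (row <- column); both directions are set
    m = np.zeros((n, n), dtype=int)
    for a, b in pairs:
        m[idx[a], idx[b]] = 1
        m[idx[b], idx[a]] = 1
    return m

# Intrinsic connections (A matrix): V1-MTG and MTG-IFG, bidirectional,
# plus self-connections by convention
A = bidirectional([("V1", "MTG"), ("MTG", "IFG")]) | np.eye(3, dtype=int)

# Modulatory effects of dynamic presentation (B matrices), one per model
B = {
    "null": np.zeros((3, 3), dtype=int),
    "MNS-entrance": bidirectional([("V1", "MTG")]),
    "MNS-core": bidirectional([("MTG", "IFG")]),
    "full": bidirectional([("V1", "MTG"), ("MTG", "IFG")]),
}

# Modulation may only act on existing intrinsic connections
assert all((b <= A).all() for b in B.values())
```

The winning full model is simply the union of the entrance and core modulation patterns, which is why BMS comparing these four alternatives isolates where in the circuit the dynamic-presentation effect acts.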

Figure 3

Models and results of dynamic causal modeling (DCM) regarding the mirror neuron system (MNS). A. Analyzed brain regions rendered on the spatially normalized brain. V1 = Primary visual cortex; MTG = Middle temporal gyrus; IFG = Inferior frontal gyrus. B. Analyzed models. Thin arrows indicate intrinsic connections between brain regions. Bold arrows indicate the modulatory effects of dynamic presentation. C. Mean coupling parameters (± SE) for the control (CON) and autism spectrum disorders (ASD) groups. Statistical comparisons showed that all parameters were significantly weaker in the ASD than in the control group (t-test, p < .05).

Table 3 Summary of the results of Bayesian model selection (BMS) and Bayesian model averaging (BMA)

To test group differences in coupling parameters, Bayesian model averaging (BMA) analysis was conducted (Table 3), and the resultant posterior means of modulatory effect parameters (Figure 3c) were analyzed. First, to test for differences from zero, one-sample t-tests were conducted for each group. The results showed that the facilitative modulatory effects of dynamic presentation were significant among members of the control group for all bi-directional connections between the V1 and MTG and the MTG and IFG (t(12) > 3.76; p < .005). Significant facilitative modulatory effects of dynamic presentation were found for the connection from the V1 to the MTG (t(11) = 2.73; p < .05) but not for any other connections (t(11) < 1.20; p > .1) in the ASD group. To test for differences between groups, two-sample t-tests were conducted. The results showed reduced modulatory effects under the dynamic condition with respect to all connections in the ASD group as compared with the control group (t(23) > 1.91; p < .05).
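The statistical tests on the BMA coupling parameters can be sketched as follows, using hypothetical per-participant posterior means of the modulatory effect on a single connection (e.g., V1 to MTG); the numbers are illustrative only.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant BMA posterior means of the modulatory
# effect of dynamic presentation on one connection (illustrative values)
control = np.array([0.31, 0.25, 0.35, 0.28, 0.33, 0.27, 0.30, 0.29])
asd = np.array([0.05, 0.10, 0.02, 0.08, 0.06, 0.04, 0.09, 0.07])

# One-sample t-tests against zero within each group
t_con, p_con = stats.ttest_1samp(control, 0.0)
t_asd, p_asd = stats.ttest_1samp(asd, 0.0)

# One-tailed two-sample t-test for reduced modulation in the ASD group
t_grp, p_grp = stats.ttest_ind(control, asd, alternative="greater")
print(f"control vs. zero: t = {t_con:.2f}; group difference: t = {t_grp:.2f}")
```

With data of this shape, the within-group test confirms a facilitative (positive) modulatory effect, and the directional between-group test confirms its reduction in ASD, mirroring the analysis logic described above.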


Discussion

Regional brain activity

Our results regarding regional brain activity in the control group showed that observation of dynamic facial expressions was associated with greater activation than observation of static facial expressions in distributed brain regions including the MTG, FG, AMY, MPFC, and IFG. The activation of these regions is consistent with the findings of previous studies (e.g., [40]). All of these brain regions have been proposed to constitute the social brain network (e.g., [28]); our results confirm that the presentations of dynamic versus static facial expressions are appropriate for activating the social brain networks of typically developing individuals.

More importantly, the group comparison showed that these social brain regions were less activated in response to dynamic than to static facial expressions in the ASD group compared with the control group. Because the participants in the ASD group had no symptoms other than social impairment and repetitive traits, these results can be attributed to the core deficits of ASD. The reduced activation of the social brain regions in individuals with ASD in response to dynamic facial expressions is consistent with the findings of a previous study [44]. Because group differences in IFG activity were not reported in that study, the current study is the first to provide evidence that functional abnormality in this region is related to the impaired processing of dynamic facial expressions in ASD. Some methodological differences may account for the disparity in results. For example, the stimuli depicting dynamic facial expressions in the present study reflected more rapid changes than did those used by Pelphrey et al. [44]. A previous behavioral study reported that the speed at which dynamic facial expressions change influences the recognition of natural facial expressions, suggesting that the speed used in the present study was preferable for depicting natural dynamic facial expressions [61]. Because several anatomical studies have reported single-cell and/or population-level structural abnormalities in the social brain regions (i.e., STS/MTG [62–64], FG [65, 66], AMY [67, 68], MPFC [64, 69], and IFG [62, 64, 70]), it is plausible that these regions reflect characteristics of abnormal brain functioning in ASD. Because dynamic facial expressions are realistic media for social interaction, our results suggest that the weak activation in these social brain regions is related to the real-life impairments in communication via facial expressions experienced by individuals with ASD.

Previous neuroimaging studies of typically developing participants (e.g., [71–73]; for reviews, see [17, 18]) have shown that the STS/MTG is involved in visual analyses of the dynamic or changeable aspects of faces. Previous neuroimaging studies also showed that observation of dynamic point-light displays of human actions activated the STS/MTG in typically developing individuals but not in those with ASD [74, 75]. Consistent with these neuroscientific data, several behavioral studies have reported that individuals with ASD showed impaired perception of dynamic human actions [76–80]. In their review of behavioral and neuroscientific studies, Dakin and Frith [81] proposed that individuals with ASD experience impairment in the perception of human actions and that this impairment appears to be related to dysfunction in the STS/MTG. Together with these data, our results suggest that reduced STS/MTG activation is involved in impaired visual analyses of the dynamic aspects of emotional facial expressions experienced by those with ASD.

In contrast, the FG has been shown to relate to the visual analyses of invariant aspects of faces and/or the subjective perception of faces in typically developing participants (e.g., [72, 82]; for a review, see [18]). Several previous neuroimaging studies in individuals with ASD have also reported reduced FG activation in processes involved in basic visual discrimination of faces versus non-faces [83–85]. Together with these data, our results suggest that the dynamic presentations of facial expressions enhance the visual analyses or perception of faces in typically developing individuals but not in individuals with ASD.

The AMY has been shown to be involved in emotional processing of typically developing participants while they view dynamic facial expressions [86]. A previous neuroimaging study reported consistent changes in the AMY activities of typically developing controls but not of those with ASD as a function of the intensity of the emotional facial expressions depicted in photos, suggesting abnormal emotional processing in the AMY of individuals with ASD [8]. Several lesion studies in animals have also indicated that damage to the AMY induced abnormal emotional reactions to the emotional expressions of other individuals (e.g., [87]), which have been likened to the socioemotional impairments in ASD [88]. Consistent with these neuroscientific data, a previous behavioral study reported that individuals with ASD did not show higher autonomic and behavioral responses to distressed than to neutral dynamic expressions, although typically developing controls did show such responses [4]. Combined with these data, our results suggest that reduced AMY activation is involved in the impaired emotional reactions to dynamic facial expressions shown by individuals with ASD.

The MPFC has been shown to be activated when participants attributed mental states to others (i.e., mentalizing or theory of mind; e.g., [89]; for a review, see [20]). The ability to mentalize has been proposed as a specific characteristic that has emerged over the course of human evolution [90] and as constituting a crucial social deficit in ASD [91]. Reduced MPFC activation in mentalizing tasks among individuals with ASD compared with typically developing individuals has also been shown in previous neuroimaging studies [14, 92, 93]. Our results showing that this region was active in response to dynamic facial expressions in the control group suggest that typically developing individuals automatically try to read others’ mental states in real-life social interaction. Furthermore, our results showing group differences in the activity of this region suggest that such automatic mentalizing is relatively less pronounced in those with ASD.

Several previous neuroimaging studies involving typically developing participants have reported greater IFG activation not only when participants passively observed dynamic versus static facial actions [39, 40, 42, 94, 95], but also when participants imitated the dynamic facial expressions that they were viewing compared with when they passively viewed these stimuli [96, 97]. This finding is consistent with theories proposing that the IFG contains mirror neurons [45, 46], which are activated in response to both the observation and the execution of facial expressions. Previous neuroimaging [16] and magnetoencephalographic [98] studies have consistently indicated that the imitation of facial actions while viewing static facial stimuli induced less activation in the IFG in the ASD than in the control group. Together with these data, our results suggest that the reduced IFG activation in individuals with ASD in response to dynamic facial expressions is related to deficits in automatic facial mimicry in ASD.

Interestingly, visual inspection of IFG activity (Figure 2) indicates that the ASD group showed clear IFG activation relative to the resting condition, although the differences between dynamic and static conditions were smaller than those in the control group. Consistent with these data, previous behavioral studies reported that individuals with ASD did not lack facial reactions to the emotional facial expressions of other individuals but instead reacted to the facial expressions differently from the ways in which typically developing individuals reacted [5, 99–101]. Collectively, our results suggest that the activation patterns of the mirror neurons in the IFG in individuals with ASD may be altered, perhaps producing abnormal facial mimicry during social interaction involving facial expressions.

Effective connectivity

Our results regarding the DCM in the control group showed that observation of dynamic compared with static facial expressions enhanced effective connectivity of the MNS network connecting the V1, MTG, and IFG. These results provide a mechanistic account of the enhanced activity manifested by this set of brain regions in response to dynamic facial expressions by construing them as a positively connected circuit. For example, the STS/MTG is more active in response to dynamic than to static faces because the inputs from the V1 through the feedforward connection and the inputs from the IFG through the feedback connection are enhanced. The results also suggest the information flow in the neural processing of dynamic facial expressions: when we observe dynamic facial expressions, the visual information processed through the V1 and STS/MTG is transmitted to the motor processing area in the IFG; the motor representation in the IFG then modulates visual decoding in the STS/MTG, which in turn modulates basic visual processing in the V1. This systematic view is consistent with previous theoretical proposals that these brain regions constitute the functional network of the MNS and/or social brain network (e.g., [59]). To our knowledge, this is the first evidence that dynamic facial expressions enhance not only regional brain activity but also effective connectivity among these regions.

More interestingly, our results revealed weaker modulatory effects of dynamic facial expressions on the MNS connections in the ASD group than in the control group. As in the case of the control group, our results provide a mechanistic account of the relatively weak activity of the social brain regions for processing dynamic facial expressions in individuals with ASD: in these individuals, positive connectivity among the regions is weak. For example, STS/MTG activation induced by dynamic versus static facial expressions is reduced because feedforward inputs from the V1 and feedback inputs from the IFG are weaker than those in typically developing individuals. Weak neural connectivity in ASD has been theoretically proposed in several previous studies (e.g., [102]). Previous empirical studies have also reported that individuals with ASD showed reduced functional connectivity while engaging in social tasks, such as expression recognition [51, 103], face perception [104, 105], and mentalizing [92], as well as in non-social cognitive tasks [106–110]. Our results extend the literature by providing the first evidence that modulation of effective connectivity in the social brain network for processing dynamic facial expressions is reduced in ASD.

Our results showed reduced modulatory effects in both the core (MTG–IFG) and the entrance (V1–MTG) connections of the MNS in the ASD group. These results provide insights into the loci of abnormalities in the social brain networks of those with ASD. As mentioned above, several previous studies have found abnormal activities in the social brain regions of individuals with ASD (e.g., [16]). These data suggest the existence of problems in the core parts of the social brain network in ASD. However, some other studies have reported abnormal activities in the early visual cortices in individuals with ASD (e.g., [111]; for a review, see [112]), suggesting that problems begin before the social brain is involved. Our results allow reconciliation of these lines of research by indicating functional problems at both the entrance and the core of the social brain network among those with ASD.

Our results provide unique explanations and predictions of the behaviors of typically developing individuals and of those with ASD. For example, a previous behavioral study among typically developing individuals showed that intentional facial mimicking facilitated the recognition of dynamic facial expressions [113]. Our results explain this finding by indicating that one’s own facial motor commands related to IFG activation facilitate the visual analyses of others’ facial expressions that are related to MTG activation. Such an idea provides the basis for predicting that the facilitative effect of facial mimicry on expression recognition may be impaired in individuals with ASD.

Implications, limitations, and future directions

Our results showing the group differences in the functioning of the social brain network in response to dynamic versus static facial expressions have practical implications for experimental studies on ASD. Several behavioral and neuroscientific studies have previously used static emotional facial expressions as stimuli to investigate abnormalities in the processing of emotional expressions in individuals with ASD and have produced inconsistent findings. Based on our results, we propose that the presentations of dynamic facial expressions are more appropriate than the presentations of static expressions for revealing abnormalities in social interaction among those with ASD. Consistent with this idea, some pioneering behavioral studies have found that dynamic presentations of facial stimuli revealed abnormal behavioral patterns characterizing the social interaction of individuals with ASD; these results have not been observed in studies using static presentations. For example, Uono et al. [43] reported that experiments using dynamic facial expressions as stimuli revealed the facilitative effect of emotional expression on automatic gaze-triggered attentional shifts in typically developing individuals and the impairment in this regard among individuals with ASD, although such effects were not found in response to static presentations [114]. We expect that further studies using dynamic facial expressions as stimuli will provide clearer evidence of the cognitive mechanisms and neural substrates underlying the social impairments of ASD.

Some limitations of this study should be acknowledged. First, the contrast between dynamic emotional and dynamic neutral expressions remains untested. Such a contrast would allow us to discriminate the effects of facial motion per se from those of the emotional messages conveyed by dynamic facial expressions. This issue is intriguing because studies with typically developing individuals have reported inconsistent findings regarding social brain activation for dynamic emotional versus dynamic neutral faces (e.g., [39, 115, 116]). Regarding this issue, Pelphrey et al. [44] measured brain activation in response to dynamic neutral faces, which were derived from identity morphing, and static neutral faces in ASD and typically developing control groups. They found no significant interaction between group (ASD vs. control) and presentation condition (dynamic neutral vs. static neutral) in the activation of the AMY, FG, or STS/MTG. These results suggest that the weaker activation of these regions induced by dynamic facial expressions in ASD might not be accounted for by facial motion per se. However, this question remains unresolved for other social brain regions (e.g., the IFG), and further investigation of dynamic neutral faces is an important matter for future research.

Second, we tested only fearful and happy facial expressions; hence, the effects of dynamic presentations of other emotions on individuals with ASD remain to be examined. Observation of dynamic facial expressions depicting other emotions may reveal abnormal activity in other brain regions among those with ASD. For example, some previous neuroimaging studies of typically developing participants reported that the observation of dynamic and/or static disgusted facial expressions activated brain regions that were not activated in the present study, including the basal ganglia and insula (e.g., [117, 118]; for a review, see [19]). One neuroimaging study showed that the observation of photographs depicting disgusted facial expressions induced less activation in these regions in the ASD group than in the control group [15], although such a group difference was not evident in another study [13]. We speculate that the observation of dynamic versus static facial expressions of disgust may provide clear evidence of abnormal activity in these brain regions in individuals with ASD.

Third, we did not record eye movements during participants’ observation of the dynamic and static facial expressions, although a previous neuroimaging study suggested that an abnormal fixation pattern on faces reduces FG activation in individuals with ASD [12]. This issue may be relevant because we presented stimuli for 1500 ms to depict the dynamic aspects of facial expressions, which is long enough for participants to make eye movements. To reduce the effect of eye movements, we instructed participants to fixate on a point between the eyes (the center of the screen). Some previous studies [119, 120] using the same instruction reported normal FG activation in response to faces in individuals with ASD. Accordingly, our fMRI results (Figure 2) demonstrated that FG activity in response to static facial expressions was comparable across the ASD and control groups. These data may rule out the possibility that an abnormal fixation pattern on faces accounts for the lower levels of brain activation in individuals with ASD. However, this speculation should be verified in future studies recording eye movements during the processing of dynamic facial expressions.

Fourth, our functional coupling analyses were restricted to part of the social brain network because DCM is designed to test specific hypotheses rather than to act as an exploratory technique [121, 122]. Currently, knowledge about the anatomical and functional connections among all social brain regions remains lacking. It is plausible that the MNS is a sub-component of a more widespread network. For example, the AMY may directly modulate the activities of the MTG and IFG or may exert a bilinear modulatory effect on the connection between these regions. Further studies of anatomical and functional connectivity are necessary to elucidate the social brain network and related impairments in ASD.


Conclusions

In summary, our results showed that activation of several brain regions (i.e., the MTG, FG, AMY, MPFC, and IFG) in response to dynamic versus static expressions was weaker in the ASD group than in the control group. The results also revealed that the modulatory effects of dynamic facial expressions on bi-directional effective connectivity in the V1–MTG–IFG circuit were weaker in the ASD group than in the control group. These data suggest that weak activity and connectivity of the social brain network for processing dynamic facial expressions underlie the impairments demonstrated by individuals with ASD in real-life social interaction.



Participants

The ASD group comprised 12 adults (1 female, 11 males; age, M = 27.5, SD = 7.6). An additional male candidate participated, but his data were excluded from analysis due to large motion artifacts (>3 mm). The group consisted of eight males with Asperger’s disorder and four participants (1 female, 3 males) with pervasive developmental disorder not otherwise specified (PDD-NOS). As defined in the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) [1], PDD-NOS includes heterogeneous subtypes of ASD, ranging from so-called atypical autism to a subgroup with symptoms milder than those of Asperger’s disorder (i.e., satisfying fewer diagnostic criteria than required for a diagnosis of Asperger’s disorder). In this study, only high-functioning PDD-NOS participants with milder symptoms than those associated with Asperger’s disorder were included. Neurological and psychiatric problems other than those associated with ASD were ruled out, and no participant was taking medication. Thus, all participants in the ASD group had only the core deficits of ASD (i.e., social impairments and repetitive traits).

The diagnosis was made according to DSM-IV-TR criteria using a stringent procedure in which every item of the ASD diagnostic criteria was investigated by two psychiatrists with expertise in developmental disorders in interviews with the participants and their parents (and, where applicable, professionals who assisted them). Only participants who met at least one of the four social impairment items (i.e., impairment in nonverbal communication, including lack of joint attention, sharing of interest, relationships with peers, and emotional and interpersonal mutuality) without satisfying any criteria specific to autistic disorder, such as language delay, were included. Comprehensive interviews were administered to obtain information about the participants’ developmental histories for diagnostic purposes.

For 10 individuals in the ASD group, symptom severity was quantitatively assessed using the Japanese version of the Childhood Autism Rating Scale (CARS) [123], administered by a psychiatrist with expertise in developmental disorders. The CARS is one of the most widely used scales for evaluating the severity of ASD [124]. The CARS scores in the ASD group (M = 21.1, SD = 1.7) were comparable to those reported in previous studies of individuals with Asperger’s disorder [124] and of individuals with Asperger’s disorder and Asperger-type PDD-NOS [125] (t-test, p > .1). These data support the view that symptoms in the ASD group were sufficiently severe.

Full-scale intelligence quotients (IQs), measured by the Wechsler Adult Intelligence Scale-Revised (WAIS-R), of all participants in the ASD group fell within the normal range (full-scale IQ: M = 113.1, SD = 12.5; verbal IQ: M = 117.3, SD = 10.8; performance IQ: M = 106.3, SD = 14.9).

The control group comprised 13 adults (1 female, 12 males; age, M = 24.3, SD = 3.4). They had no neurological or psychiatric problems. They were recruited through advertisements and were matched with the ASD group for age and sex. The full-scale IQs, measured by the WAIS-R, of all control participants also fell within the normal range (full-scale IQ: M = 126.3, SD = 6.1; verbal IQ: M = 128.1, SD = 7.2; performance IQ: M = 118.8, SD = 11.2).

All participants had normal or corrected-to-normal visual acuity and were right-handed, as assessed by the Edinburgh Handedness Inventory [126]. Each participant provided informed consent, and the study was conducted in accordance with institutional ethical provisions and the Declaration of Helsinki.

Experimental design

The experiment involved a three-way repeated-measures factorial design, with group (ASD, control) as a between-participant factor and presentation condition (dynamic, static) and emotion (fear, happiness) as within-participant factors.


Stimuli

The stimuli were almost identical to those used in a previous fMRI study [40]. The raw materials were grayscale photographs of the faces of eight individuals (4 females, 4 males), chosen from a standard set [127], depicting fearful, happy, and neutral expressions. Neutral expressions were adopted as the starting point of the emotional expressions. None of these faces was familiar to any of the participants.

Dynamic expressions were created from the photographs via computer animation. First, 24 images in which the emotional expression increased in increments of 4% were created between the neutral (0%) and emotional (100%) expressions using computer-morphing software [128] running on Linux. This software has been used in several other studies (e.g., [35, 36]). Next, to create a moving video clip, a total of 26 images (i.e., one neutral image, 24 intermediate images, and the final emotional image) were presented in succession. Each image was presented for 40 ms, with the first and last images presented for an additional 230 ms each; thus, each clip lasted 1500 ms.
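The frame timing described above can be sketched as follows (an illustrative reconstruction of the schedule, not the authors’ stimulus code; function and variable names are our own):

```python
def frame_schedule(n_morph_steps=24, frame_ms=40, endpoint_extra_ms=230):
    """Return per-frame durations (ms) for one dynamic clip.

    26 frames total: neutral (0%), 24 intermediates in 4% steps,
    and the full emotional expression (100%). The first and last
    frames are shown for an additional 230 ms each.
    """
    n_frames = n_morph_steps + 2            # 26 images
    durations = [frame_ms] * n_frames
    durations[0] += endpoint_extra_ms       # neutral frame: 270 ms
    durations[-1] += endpoint_extra_ms      # final emotional frame: 270 ms
    return durations

durations = frame_schedule()
assert len(durations) == 26
assert sum(durations) == 1500               # total clip length in ms
```

Running this confirms the arithmetic: 26 × 40 ms plus 2 × 230 ms equals the stated 1500-ms clip duration.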

The final expressions under the dynamic expression condition were presented as static expressions for 1500 ms.

Presentation apparatus

Stimulus presentation was controlled by Presentation version 10.0 (Neurobehavioral Systems) implemented on a Windows computer. The stimuli were projected from a liquid crystal projector (DLA-G150CL, Victor) onto a mirror positioned in the scanner in front of the participants. Under these viewing conditions, the stimuli subtended a visual angle of about 15.0° vertical × 10.0° horizontal.


Procedure

The scan session consisted of twelve 20-s epochs interleaved with twelve 20-s rest periods, during which a blank screen was presented. Each epoch consisted of eight trials, for a total of 96 trials per scan. Each of the four stimulus conditions (dynamic fear, dynamic happiness, static fear, and static happiness) was presented in separate epochs. The order of the epochs was pseudorandomized, and the order of trials within each epoch was randomized.
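A minimal sketch of such a session structure, assuming eight face identities per epoch and using a plain shuffle as a stand-in for the pseudorandomized epoch order (all names are our own, not the authors’ code):

```python
import random

CONDITIONS = ["dynamic_fear", "dynamic_happy", "static_fear", "static_happy"]
FACES = [f"face_{i}" for i in range(8)]    # hypothetical identity labels

def build_session(seed=0):
    rng = random.Random(seed)
    # 12 task epochs: each of the 4 conditions appears in 3 epochs.
    # A shuffle stands in here for the pseudorandomized epoch order.
    epochs = CONDITIONS * 3
    rng.shuffle(epochs)
    session = []
    for cond in epochs:
        trials = FACES[:]                  # 8 trials per epoch
        rng.shuffle(trials)                # randomized within-epoch order
        session.append((cond, trials))
    return session

session = build_session()
assert len(session) == 12                              # 12 task epochs
assert sum(len(t) for _, t in session) == 96           # 96 trials in total
```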

In each trial, a single stimulus was presented for 1500 ms, followed by a 1000-ms interval before the next trial, during which a fixation point (a picture, of the same size as the stimulus, with a small gray “+” on a white background) was presented at the center of the screen. Participants were instructed to direct their attention to the center of the screen until the face disappeared and then to indicate the sex of the presented face by pressing one of two buttons with the forefinger. This task ensured participants’ attention to the stimuli and prevented idiosyncratic explicit processing of the emotional expressions. Post hoc debriefing confirmed that participants remained unaware that the true purpose of the experiment was unrelated to sex discrimination.

MRI acquisition

Image scanning was performed on a 3-T scanning system at the ATR Brain Activity Imaging Center (MAGNETOM Trio, A Tim System, Siemens) using a 12-channel array coil without acceleration mode. The functional images consisted of 40 consecutive slices parallel to the anterior–posterior commissure plane covering the whole brain. A T2*-weighted gradient-echo echo planar imaging sequence was used with the following parameters: repetition time (TR) = 2500 ms; echo time (TE) = 30 ms; flip angle (FA) = 90°; field of view (FOV) = 192 × 192 mm; matrix size = 64 × 64; voxel size = 3 × 3 × 4 mm. The order of slices was ascending. After the acquisition of functional images, a T1-weighted high-resolution anatomical image was also obtained using a magnetization-prepared rapid gradient-echo sequence (TR = 2250 ms; TE = 3.06 ms; FA = 9°; inversion time = 900 ms; FOV = 256 × 256 mm; matrix size = 256 × 256; voxel size = 1 × 1 × 1 mm). Elastic pads placed around each side of the participant’s head were used to stabilize head position during functional image acquisition.

Behavioral data analysis

The percentages of correct responses and their reaction times (RTs) were analyzed using three-way repeated-measures ANOVAs with group as a between-participant factor and presentation condition and emotion as within-participant factors. Because we had no specific predictions for the behavioral data, two-tailed tests were conducted. Results were considered statistically significant at p < .05.

Image analysis: Preprocessing

Image preprocessing and regional brain activity analyses were performed using SPM5 implemented in MATLAB version 7 (MathWorks). First, we performed slice-timing correction to adjust for the different acquisition times of the slices within each functional volume; this step is also important for the robustness of DCM. To correct for head movement, the functional images of each run were then realigned using the first scan as a reference. Data from all participants required only small motion corrections (<2 mm). Subsequently, the T1 anatomical image was coregistered to the first scan of the functional images. Next, the coregistered T1 anatomical image was normalized to a standard T1 template image defined by the Montreal Neurological Institute (MNI), which involved linear and non-linear three-dimensional transformations [129, 130]. The parameters from this normalization process were then applied to each of the functional images. Finally, the spatially normalized functional images were resampled to a voxel size of 2 × 2 × 2 mm and smoothed with an isotropic Gaussian kernel (8-mm full width at half maximum) to improve the signal-to-noise ratio and to compensate for anatomical variability among participants.
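Smoothing kernels in SPM are specified by their full width at half maximum (FWHM); the relation to the Gaussian standard deviation is a fixed formula, sketched here for the 8-mm kernel and 2-mm resampled voxels used above (function name is our own):

```python
import math

def fwhm_to_sigma(fwhm_mm):
    """Convert a Gaussian FWHM to its standard deviation:
    sigma = FWHM / (2 * sqrt(2 * ln 2)) ~= FWHM / 2.3548."""
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

sigma_mm = fwhm_to_sigma(8.0)      # 8-mm smoothing kernel -> ~3.40 mm
sigma_vox = sigma_mm / 2.0         # on the resampled 2-mm grid -> ~1.70 voxels
```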

Image analysis: Regional brain activity analysis

We used random-effects analyses to identify significantly activated voxels at the population level [131]. First, we performed a single-subject analysis [132, 133]. The task-related blood-oxygen-level-dependent (BOLD) responses under each condition were modeled with a boxcar function convolved with a canonical hemodynamic response function. We used a high-pass filter composed of a discrete cosine basis set with a cut-off period of 128 s to eliminate artifactual low-frequency trends. Serial autocorrelation, assumed to follow a first-order autoregressive model, was estimated from the pooled active voxels with a restricted maximum likelihood (ReML) procedure and was used to whiten the data and the design matrix [134]. To reduce motion-related artifacts, the six realignment parameters of the rigid-body transformation obtained in the realignment step of preprocessing were added to the model.
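The boxcar-convolved-with-HRF regressor can be sketched as follows, assuming an SPM-style double-gamma canonical HRF (standard parameters: response peak around 6 s, undershoot around 16 s, undershoot ratio 1/6) sampled at the study’s TR of 2.5 s; block counts follow the design above, and all names are our own:

```python
import numpy as np
from scipy.stats import gamma

TR = 2.5  # s

def canonical_hrf(tr=TR, duration=32.0):
    """Approximate double-gamma canonical HRF sampled at the TR."""
    t = np.arange(0, duration, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def boxcar(n_scans, epoch_scans=8, rest_scans=8):
    """Alternating rest/task boxcar (20-s blocks at TR = 2.5 s -> 8 scans)."""
    block = np.r_[np.zeros(rest_scans), np.ones(epoch_scans)]
    return np.tile(block, n_scans // len(block) + 1)[:n_scans]

n_scans = 192                      # 24 alternating 20-s blocks at TR = 2.5 s
x = boxcar(n_scans)
# Convolve the boxcar with the HRF and truncate to the scan length,
# giving the task-related BOLD regressor for the design matrix.
regressor = np.convolve(x, canonical_hrf())[:n_scans]
```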

Planned contrasts were then performed. The four contrast images of dynamic fear, dynamic happiness, static fear, and static happiness versus rest were entered into a flexible factorial model for each participant and each group, implementing a three-way repeated-measures ANOVA to create a random-effects SPM{T}. The model included group, presentation condition, and emotion as factors of interest and participant as a factor of no interest (Additional file 1: Figure S1). Based on preliminary analyses, the sex of participants, which showed no significant main effect or interaction, was disregarded in the reported analyses. The non-sphericity correction used in the flexible factorial model accounted for possible differences in variance between the groups due to the unequal sample sizes; the same settings corrected for uneven variance between the levels of the presentation and emotion factors and for the dependence of observations across these conditions. The ensuing covariance components were estimated using ReML and then used to adjust the statistics, exactly as is done for serial correlations in single-subject fMRI models. In preliminary analyses testing brain activation under each condition in each group against the resting condition, using the same threshold criterion as the reported results, none of the predicted social brain regions showed significant deactivation; hence, we did not use any masking procedures.

First, the simple main effect of dynamic versus static presentation was tested for each group. For these analyses, active regions were reported as statistically significant only if they survived correction for multiple comparisons across the entire brain. Next, our predicted interaction between group and presentation condition was tested. For this analysis, about which we had specific predictions, we selected regions of interest (ROIs): the MTG, FG, AMY, MPFC, and IFG. The ROIs were defined as 8-mm-radius spheres centered on the activation foci identified in the above simple main effect analysis for the control group (cf. [135]). Anatomical specification of the ROIs was conducted using the Talairach Daemon [136] after transformation of the coordinates from the MNI to the Talairach system. All ROIs were confirmed to overlap with the activation foci reported in previous studies (e.g., [44]). These ROIs were examined independently in an a priori manner (cf. [137, 138]) by applying small-volume correction [139]. Analyses of this interaction in other brain regions, and of other interactions involving the factor of group, were conducted in an a posteriori manner, correcting for the volume of the entire brain. Significantly activated voxels were identified if they reached the extent threshold of p < .05, corrected for multiple comparisons, with a height threshold of p < .01 (uncorrected). Under these settings, the minimum cluster size for the significant extent threshold with small-volume correction was 58 voxels.
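For reference, an 8-mm-radius spherical ROI on the resampled 2-mm grid can be enumerated as below (a sketch with our own names; it counts voxels whose centers fall within the sphere):

```python
import numpy as np

def sphere_voxel_count(radius_mm=8.0, voxel_mm=2.0):
    """Count voxels whose centres fall within a sphere of the given
    radius on an isotropic grid (the 8-mm ROI spheres on the 2-mm
    resampled grid used here)."""
    r = int(np.ceil(radius_mm / voxel_mm))
    g = np.arange(-r, r + 1) * voxel_mm          # centre offsets in mm
    x, y, z = np.meshgrid(g, g, g, indexing="ij")
    return int(np.count_nonzero(x**2 + y**2 + z**2 <= radius_mm**2))

n_voxels = sphere_voxel_count()    # 257 voxels on the 2-mm grid
```

This shows that the 58-voxel cluster-extent threshold fits comfortably within a single 257-voxel ROI sphere.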

To display the activation patterns across conditions, the parameter estimate under each experimental condition (the beta value in the SPM) at the peak voxel of the random-effect analysis was extracted and then averaged across participants.

Image analysis: DCM

We used DCM [121] to explore how the effective connectivity between brain regions was modulated by dynamic facial expressions. DCM enabled us to draw inferences about the influences that one neural system exerted over another and about how this was affected by the experimental context. Technically, DCM is described as an input–state–output model with multiple inputs and outputs, where inputs are represented by experimental factors determined by the experimental paradigm and outputs are the BOLD signals of all regions. The system dynamics of the interacting brain regions are described by changes in the neural state over time. The modeled neural dynamics are transformed into area-specific BOLD signals by a hemodynamic state model. DCM estimates neural and hemodynamic state parameters with a Bayesian inversion scheme [121]. DCM allowed us to estimate three different types of interactions: (1) intrinsic connections, which represent fixed or baseline connectivity among neural states; (2) modulations of these connections by experimental manipulations; and (3) driving input, which embodies the influences of exogenous input on neural states. In this study, we focused on the modulatory effect of dynamic presentation on the cortical network for facial expression processing.
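These three types of interaction correspond to the matrices of the bilinear neural state equation of DCM. As a point of reference (standard DCM notation, not reproduced from this article), the dynamics of the neural states $z$ under inputs $u$ can be written as:

```latex
\dot{z} = \Bigl( A + \sum_{j} u_j \, B^{(j)} \Bigr) z + C u
```

where $A$ encodes the intrinsic (baseline) connections, each $B^{(j)}$ encodes the modulation of those connections by the $j$-th experimental input (here, dynamic presentation), and $C$ encodes the driving inputs (here, visual stimulation entering V1).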

DCM was performed using SPM8 implemented in MATLAB version 7 (MathWorks). To construct the driving and modulatory inputs for our DCM analysis, we remodeled the single-subject analyses. The design matrix contained three experimental regressors: visual input (i.e., all experimental conditions) as the driving input; dynamic presentation as the modulatory input; and emotion (fear vs. happiness, coded as 1 and -1, respectively), included as an effect of no interest. The other nuisance regressors (realignment parameters and constant terms), high-pass filter, and serial autocorrelation model used the same settings as the regional brain activity analyses.

To define the cortico–cortical connectivity, we selected three brain regions in the right hemisphere: the V1 (x = 22, y = -84, z = -4), MTG (x = 52, y = -62, z = 0), and IFG (x = 56, y = 28, z = 10). These ROIs were selected based on our hypothesis described in the Background. The coordinates of the MTG and IFG were defined based on the simple main effect of presentation condition (dynamic vs. static) in the control group. The coordinates of the V1 were derived from the strongest activation focus within the V1 search region in response to all stimulus presentations in the control group; this search region was defined by the cytoarchitectonic map derived from human postmortem brains using the Anatomy Toolbox version 1.5 [140]. The identical activation focus was found in the ASD group using the same procedure. The ROIs were restricted to the right hemisphere because some ROIs showed significant activity only in the right hemisphere. ROI time series were extracted for each participant as the first eigenvariate of all voxels within a 3-mm radius of the selected coordinate. These time series were adjusted for the effects of interest and nuisance effects, high-pass filtered, and corrected for serial correlation.

Next, the hypothesized model was constructed for each participant. The visual input was modeled as the driving input into the V1. Bi-directional (forward and backward) intrinsic connections were constructed between the V1 and MTG and between the MTG and IFG. The modulatory effect of dynamic presentation was modeled as modulating each of these bi-directional connections. Based on the locations of the modulatory effects, we constructed the following four models (Figure 3b): the null model, the MNS-entrance modulation model, the MNS-core modulation model, and the full model.
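The model space can be sketched as connection matrices (a sketch with our own names; we assume, based on the description, that the "MNS-entrance" model modulates the V1–MTG connections and the "MNS-core" model modulates the MTG–IFG connections):

```python
import numpy as np

REGIONS = ["V1", "MTG", "IFG"]     # right-hemisphere ROIs

def adjacency(pairs):
    """Bi-directional connection matrix (rows = targets, cols = sources)."""
    m = np.zeros((3, 3), dtype=int)
    for a, b in pairs:
        i, j = REGIONS.index(a), REGIONS.index(b)
        m[i, j] = m[j, i] = 1      # forward and backward connections
    return m

A = adjacency([("V1", "MTG"), ("MTG", "IFG")])   # intrinsic connections
C = np.array([1, 0, 0])                          # driving input into V1

# B matrices: which bi-directional connections the dynamic-presentation
# input modulates in each of the four models.
MODELS = {
    "null":         adjacency([]),
    "MNS-entrance": adjacency([("V1", "MTG")]),
    "MNS-core":     adjacency([("MTG", "IFG")]),
    "full":         adjacency([("V1", "MTG"), ("MTG", "IFG")]),
}
```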

To examine group differences in effective connectivity, we first identified the most appropriate model for each group using random-effects BMS [141]. We used exceedance probabilities as the evaluation measure, representing the belief that a particular model is more likely than any other model given the group data (cf. [142, 143]). We next analyzed the parameter estimates of the averaged model resulting from BMA. We used the entire model space and computed weighted averages of each model parameter, with the weighting given by the posterior probability of each model [122, 144]. This approach is preferable in a group DCM study in which BMS may indicate a group difference in the model space. To expedite the BMA calculation, low-probability models were excluded from the summation using an Occam’s window approach; in this study, Occam’s window was defined by a minimal posterior odds ratio of 1/20 [144]. The modulatory effect parameters were tested, in line with our a priori interests (cf. [138]), for differences from zero and differences between groups using one-tailed t-tests. Results were deemed statistically significant at p < .05.
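The Occam’s-window weighting scheme can be illustrated for a single scalar parameter (a simplified sketch, not SPM’s BMA implementation; names and the toy numbers are our own):

```python
import numpy as np

def bma(posterior_probs, params, odds_ratio=1.0 / 20.0):
    """Bayesian model averaging of one parameter over an Occam's window.

    Models whose posterior probability falls below `odds_ratio` times
    that of the best model are excluded; the remaining probabilities
    are renormalised and used to weight each model's estimate.
    """
    p = np.asarray(posterior_probs, dtype=float)
    theta = np.asarray(params, dtype=float)
    keep = p >= odds_ratio * p.max()        # Occam's window (1/20 here)
    w = p[keep] / p[keep].sum()             # renormalised weights
    return float(w @ theta[keep])

# Toy example: the third model is excluded (0.01 < 0.6 / 20 = 0.03),
# so the average uses only the first two models.
est = bma([0.6, 0.39, 0.01], [0.8, 0.5, -2.0])
```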

Author contributions

WS, MT, SU and TK designed research; WS, MT, SU and TK obtained the data; WS and TK analyzed the data; and WS, MT, SU and TK wrote the manuscript. All authors read and approved the final manuscript.


References

1. American Psychiatric Association: Diagnostic and statistical manual of mental disorders. 4th edition, text revision. 2000, APA, Washington

2. Hobson RP: Autism and the development of mind. 1993, Lawrence Erlbaum Associates, Hove

3. Sigman MD, Kasari C, Kwon JH, Yirmiya N: Responses to the negative emotions of others by autistic, mentally retarded, and normal children. Child Dev. 1992, 63: 796-807. 10.2307/1131234.

4. Corona R, Dissanayake C, Arbelle S, Wellington P, Sigman M: Is affect aversive to young children with autism? Behavioral and cardiac responses to experimenter distress. Child Dev. 1998, 69: 1494-1502.

5. Yirmiya N, Kasari C, Sigman M, Mundy P: Facial expressions of affect in autistic, mentally retarded and normal children. J Child Psychol Psychiatry. 1989, 30: 725-735. 10.1111/j.1469-7610.1989.tb00785.x.

6. Baron-Cohen S, Ring HA, Wheelwright S, Bullmore ET, Brammer MJ, Simmons A, Williams SC: Social intelligence in the normal and autistic brain: An fMRI study. Eur J Neurosci. 1999, 11: 1891-1898. 10.1046/j.1460-9568.1999.00621.x.

7. Critchley HD, Daly EM, Bullmore ET, Williams SC, Van Amelsvoort T, Robertson DM, Rowe A, Phillips M, McAlonan G, Howlin P, Murphy DG: The functional neuroanatomy of social behaviour: Changes in cerebral blood flow when people with autistic disorder process facial expressions. Brain. 2000, 123: 2203-2212. 10.1093/brain/123.11.2203.

8. Ashwin C, Baron-Cohen S, Wheelwright S, O'Riordan M, Bullmore ET: Differential activation of the amygdala and the 'social brain' during fearful face-processing in Asperger syndrome. Neuropsychologia. 2007, 45: 2-14. 10.1016/j.neuropsychologia.2006.04.014.

9. Hall GB, Szechtman H, Nahmias C: Enhanced salience and emotion recognition in autism: A PET study. Am J Psychiatry. 2003, 160: 1439-1441. 10.1176/appi.ajp.160.8.1439.

10. Piggot J, Kwon H, Mobbs D, Blasey C, Lotspeich L, Menon V, Bookheimer S, Reiss AL: Emotional attribution in high-functioning individuals with autistic spectrum disorder: A functional imaging study. J Am Acad Child Adolesc Psychiatry. 2004, 43: 473-480. 10.1097/00004583-200404000-00014.

11. Wang AT, Dapretto M, Hariri AR, Sigman M, Bookheimer SY: Neural correlates of facial affect processing in children and adolescents with autism spectrum disorder. J Am Acad Child Adolesc Psychiatry. 2004, 43: 481-490. 10.1097/00004583-200404000-00015.

12. Dalton KM, Nacewicz BM, Johnstone T, Schaefer HS, Gernsbacher MA, Goldsmith HH, Alexander AL, Davidson RJ: Gaze fixation and the neural circuitry of face processing in autism. Nat Neurosci. 2005, 8: 519-526.

13. Deeley Q, Daly EM, Surguladze S, Page L, Toal F, Robertson D, Curran S, Giampietro V, Seal M, Brammer MJ, Andrew C, Murphy K, Phillips ML, Murphy DG: An event related functional magnetic resonance imaging study of facial emotion processing in Asperger syndrome. Biol Psychiatry. 2007, 62: 207-217. 10.1016/j.biopsych.2006.09.037.

14. Wang AT, Lee SS, Sigman M, Dapretto M: Reading affect in the face and voice: Neural correlates of interpreting communicative intent in children and adolescents with autism spectrum disorders. Arch Gen Psychiatry. 2007, 64: 698-708. 10.1001/archpsyc.64.6.698.

15. Ogai M, Matsumoto H, Suzuki K, Ozawa F, Fukuda R, Uchiyama I, Suckling J, Isoda H, Mori N, Takei N: fMRI study of recognition of facial expressions in high-functioning autistic patients. Neuroreport. 2003, 14: 559-563. 10.1097/00001756-200303240-00006.

16. Dapretto M, Davies MS, Pfeifer JH, Scott AA, Sigman M, Bookheimer SY, Iacoboni M: Understanding emotions in others: Mirror neuron dysfunction in children with autism spectrum disorders. Nat Neurosci. 2006, 9: 28-30. 10.1038/nn1611.

17. Allison T, Puce A, McCarthy G: Social perception from visual cues: Role of the STS region. Trends Cogn Sci. 2000, 4: 267-278. 10.1016/S1364-6613(00)01501-1.

18. Haxby JV, Hoffman EA, Gobbini MI: The distributed human neural system for face perception. Trends Cogn Sci. 2000, 4: 223-233. 10.1016/S1364-6613(00)01482-0.

19. Calder AJ, Lawrence AD, Young AW: Neuropsychology of fear and loathing. Nat Rev Neurosci. 2001, 2: 352-363. 10.1038/35072584.

20. Frith U, Frith CD: Development and neurophysiology of mentalizing. Proc R Soc Lond B Biol Sci. 2003, 358: 459-473. 10.1098/rstb.2002.1218.

21. Iacoboni M: Neural mechanisms of imitation. Curr Opin Neurobiol. 2005, 15: 632-637. 10.1016/j.conb.2005.10.010.

22. Brothers L: The social brain: A project for integrating primate behavior and neurophysiology in a new domain. Concepts Neurosci. 1990, 1: 27-51.

23. Emery NJ, Perrett DI: How can studies of the monkey brain help us understand "theory of mind" and autism in humans. Understanding other minds: Perspectives from developmental cognitive neuroscience. Edited by: Baron-Cohen S, Tager-Flusberg H, Cohen DJ. 2nd edition. 2000, Oxford University Press, Oxford, 274-305.

24. Adolphs R: Cognitive neuroscience of human social behaviour. Nat Rev Neurosci. 2003, 4: 165-178.

25. Johnson MH, Griffin R, Csibra G, Halit H, Farroni T, de Haan M, Tucker LA, Baron-Cohen S, Richards J: The emergence of the social brain network: Evidence from typical and atypical development. Dev Psychopathol. 2005, 17: 599-619.

26. Frith CD: The social brain?. Proc R Soc Lond B Biol Sci. 2007, 362: 671-678. 10.1098/rstb.2006.2003.

27. Blakemore SJ: The social brain in adolescence. Nat Rev Neurosci. 2008, 9: 267-277.

28. Pelphrey KA, Carter EJ: Charting the typical and atypical development of the social brain. Dev Psychopathol. 2008, 20: 1081-1102. 10.1017/S0954579408000515.

29. Belmonte MK, Allen G, Beckel-Mitchener A, Boulanger LM, Carper RA, Webb SJ: Autism and abnormal development of brain connectivity. J Neurosci. 2004, 24: 9228-9231. 10.1523/JNEUROSCI.3340-04.2004.

30. Barton RA, Aggleton JP: Primate evolution and the amygdala. The amygdala: A functional analysis. Edited by: Aggleton JP. 2000, Oxford University Press, New York, 479-508.

31. Darwin C: The expression of the emotions in man and animals. 1872, John Murray, London

32. Tooby J, Cosmides L: The past explains the present: Emotional adaptations and the structure of ancestral environments. Ethol Sociobiol. 1990, 11: 375-424. 10.1016/0162-3095(90)90017-Z.

33. Ekman P, Friesen WV: Unmasking the face: A guide to recognizing emotions from facial clues. 1975, Prentice-Hall, Englewood Cliffs

34. Yoshikawa S, Sato W: Dynamic facial expressions of emotion induce representational momentum. Cogn Affect Behav Neurosci. 2008, 8: 25-31. 10.3758/CABN.8.1.25.

35. Sato W, Yoshikawa S: Enhanced experience of emotional arousal in response to dynamic facial expressions. J Nonverbal Behav. 2007, 31: 119-135. 10.1007/s10919-007-0025-7.

36. Sato W, Yoshikawa S: Spontaneous facial mimicry in response to dynamic facial expressions. Cognition. 2007, 104: 1-18. 10.1016/j.cognition.2006.05.001.

37. Vinter A: The role of movement in eliciting early imitations. Child Dev. 1986, 13: 66-71.

38. Kilts CD, Egan G, Gideon DA, Ely TD, Hoffman JM: Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions. NeuroImage. 2003, 18: 156-168. 10.1006/nimg.2002.1323.

39. LaBar KS, Crupain MJ, Voyvodic JT, McCarthy G: Dynamic perception of facial affect and identity in the human brain. Cereb Cortex. 2003, 13: 1023-1033. 10.1093/cercor/13.10.1023.

40. Sato W, Kochiyama T, Yoshikawa S, Naito E, Matsumura M: Enhanced neural activity in response to dynamic facial expressions of emotion: An fMRI study. Brain Res Cogn Brain Res. 2004, 20: 81-91. 10.1016/j.cogbrainres.2004.01.008.

41. Schultz J, Pilz KS: Natural facial motion enhances cortical responses to faces. Exp Brain Res. 2009, 194: 465-475. 10.1007/s00221-009-1721-9.

42. Trautmann SA, Fehr T, Herrmann M: Emotions in motion: Dynamic compared to static facial expressions of disgust and happiness reveal more widespread emotion-specific activations. Brain Res. 2009, 1284: 100-115.

43. Uono S, Sato W, Toichi M: Dynamic fearful gaze does not enhance attention orienting in individuals with Asperger’s disorder. Brain Cogn. 2009, 71: 229-233. 10.1016/j.bandc.2009.08.015.

44. Pelphrey KA, Morris JP, McCarthy G, Labar KS: Perception of dynamic changes in facial affect and identity in autism. Soc Cogn Affect Neurosci. 2007, 2: 140-149. 10.1093/scan/nsm010.

  45. 45.

    Rizzolatti G, Fogassi L, Gallese V: Neurophysiological mechanisms underlying the understanding and imitation of action. Nat Rev Neurosci. 2001, 2: 661-670. 10.1038/35090060.

    CAS  PubMed  Google Scholar 

  46. 46.

    Gallese V, Keysers C, Rizzolatti G: A unifying view of the basis of social cognition. Trends Cogn Sci. 2004, 8: 396-403. 10.1016/j.tics.2004.07.002.

    PubMed  Google Scholar 

  47. 47.

    Hobson RP, Lee A: Imitation and identification in autism. J Child Psychol Psychiatry. 1999, 40: 649-659. 10.1111/1469-7610.00481.

    CAS  PubMed  Google Scholar 

  48. 48.

    Iacoboni M, Dapretto M: The mirror neuron system and the consequences of its dysfunction. Nat Rev Neurosci. 2006, 7: 942-951. 10.1038/nrn2024.

    CAS  PubMed  Google Scholar 

  49. 49.

    Ramachandran VS, Oberman LM: Broken mirrors: A theory of autism. Sci Am. 2006, 295: 62-69.

    PubMed  Google Scholar 

  50. 50.

    Williams JHG, Whiten A, Suddendorf T, Perrett DI: Imitation, mirror neurons and autism. Neurosci Biobehav Rev. 2001, 25: 287-295. 10.1016/S0149-7634(01)00014-8.

    CAS  PubMed  Google Scholar 

  51. 51.

    Wicker B, Fonlupt P, Hubert B, Tardif C, Gepner B, Deruelle C: Abnormal cerebral effective connectivity during explicit emotional processing in adults with autism spectrum disorder. Soc Cogn Affect Neurosci. 2008, 3: 135-143. 10.1093/scan/nsn007.

    PubMed Central  PubMed  Google Scholar 

  52. 52.

    Dejerine J: Anatomie des centres nerveux. 1895, Rueff et Cie, Paris

    Google Scholar 

  53. 53.

    Ludwig E, Klinger J: Atlas cerebri humani. 1956, Karger, Basel

    Google Scholar 

  54. 54.

    Deacon TW: Cortical connections of the inferior arcuate sulcus cortex in the macaque brain. Brain Res. 1992, 573: 8-26. 10.1016/0006-8993(92)90109-M.

    CAS  PubMed  Google Scholar 

  55. 55.

    Petrides M, Pandya DN: Comparative cytoarchitectonic analysis of the human and the macaque ventrolateral prefrontal cortex and corticocortical connection patterns in the monkey. Eur J Neurosci. 2002, 16: 291-310. 10.1046/j.1460-9568.2001.02090.x.

    CAS  PubMed  Google Scholar 

  56. 56.

    Catani M, Howard RJ, Pajevic S, Jones DK: Virtual in vivo interactive dissection of white matter fasciculi in the human brain. NeuroImage. 2002, 17: 77-94. 10.1006/nimg.2002.1136.

    PubMed  Google Scholar 

  57. 57.

    Rilling JK, Glasser MF, Preuss TM, Ma X, Zhao T, Hu X, Behrens TEJ: The evolution of the arcuate fasciculus revealed with comparative DTI. Nat Neurosci. 2008, 11: 426-428. 10.1038/nn2072.

    CAS  PubMed  Google Scholar 

  58. 58.

    Thiebaut de Schotten M, Ffytche DH, Bizzi A, Dell'Acqua F, Allin M, Walshe M, Murray R, Williams SC, Murphy DG, Catani M: Atlasing location, asymmetry and inter-subject variability of white matter tracts in the human brain with MR diffusion tractography. NeuroImage. 2011, 54: 49-59. 10.1016/j.neuroimage.2010.07.055.

    PubMed  Google Scholar 

  59. 59.

    Hamilton AF: Emulation and mimicry for social interaction: A theoretical approach to imitation in autism. Q J Exp Psychol. 2008, 61: 101-115. 10.1080/17470210701508798.

    Google Scholar 

  60. 60.

    Kana RK, Wadsworth HM, Travers BG: A systems level analysis of the mirror neuron hypothesis and imitation impairments in autism spectrum disorders. Neurosci Biobehav Rev. 2011, 35: 894-902. 10.1016/j.neubiorev.2010.10.007.

    PubMed  Google Scholar 

  61. 61.

    Sato W, Yoshikawa S: The dynamic aspects of emotional facial expressions. Cogn Emo. 2004, 18: 701-710. 10.1080/02699930341000176.

    Google Scholar 

  62. 62.

    Levitt JG, Blanton RE, Smalley S, Thompson PM, Guthrie D, McCracken JT, Sadoun T, Heinichen L, Toga AW: Cortical sulcal maps in autism. Cereb Cortex. 2003, 13: 728-735. 10.1093/cercor/13.7.728.

    PubMed  Google Scholar 

  63. 63.

    Boddaert N, Chabane N, Gervais H, Goodm CD, Bourgeois M, Plumet MH, Barthélémy C, Mouren MC, Artiges E, Samson Y, Brunelle F, Frackowiak RS, Zilbovicius M: Superior temporal sulcus anatomical abnormalities in childhood autism: A voxel-based morphometry MRI study. NeuroImage. 2004, 23: 364-369. 10.1016/j.neuroimage.2004.06.016.

    CAS  PubMed  Google Scholar 

  64. 64.

    Hadjikhani N, Joseph RM, Snyder J, Tager-Flusberg H: Anatomical differences in the mirror neuron system and social cognition network in autism. Cereb Cortex. 2006, 16: 1276-1282.

    PubMed  Google Scholar 

  65. 65.

    Kwon H, Ow AW, Pedatella KE, Lotspeich LJ, Reiss AL: Voxel-based morphometry elucidates structural neuroanatomy of high-functioning autism and Asperger syndrome. Dev Med Child Neurol. 2004, 46: 760-764.

    PubMed  Google Scholar 

  66. 66.

    Van Kooten IA, Palmen SJ, Von Cappeln P, Steinbusch HW, Korr H, Heinsen H, Hof PR, Van Engeland H, Schmitz C: Neurons in the fusiform gyrus are fewer and smaller in autism. Brain. 2008, 131: 987-999. 10.1093/brain/awn033.

    PubMed  Google Scholar 

  67. 67.

    Nacewicz BM, Dalton KM, Johnstone T, Long MT, McAuliff EM, Oakes TR, Alexander AL, Davidson RJ: Amygdala volume and nonverbal social impairment in adolescent and adult males with autism. Arch Gen Psychiatry. 2006, 63: 1417-1428. 10.1001/archpsyc.63.12.1417.

    PubMed  Google Scholar 

  68. 68.

    Schumann CM, Amaral DG: Stereological analysis of amygdala neuron number in autism. J Neurosci. 2006, 26: 7674-7679. 10.1523/JNEUROSCI.1285-06.2006.

    CAS  PubMed  Google Scholar 

  69. 69.

    Hyde KL, Samson F, Evans AC, Mottron L: Neuroanatomical differences in brain areas implicated in perceptual and other core features of autism revealed by cortical thickness analysis and voxel-based morphometry. Hum Brain Mapp. 2010, 31: 556-566.

    PubMed  Google Scholar 

  70. 70.

    Kosaka H, Omori M, Munesue T, Ishitobi M, Matsumura Y, Takahashi T, Narita K, Murata T, Saito DN, Uchiyama H, Morita T, Kikuchi M, Mizukami K, Okazawa H, Sadato N, Wada Y: Smaller insula and inferior frontal volumes in young adults with pervasive developmental disorders. NeuroImage. 2010, 50: 1357-1363. 10.1016/j.neuroimage.2010.01.085.

    PubMed  Google Scholar 

  71. 71.

    Puce A, Allison T, Bentin S, Gore JC, McCarthy G: Temporal cortex activation in humans viewing eye and mouth movements. J Neurosci. 1998, 18: 2188-2199.

    CAS  PubMed  Google Scholar 

  72. 72.

    Hoffman EA, Haxby JV: Distinct representations of eye gaze and identity in the distributed human neural system for face perception. Nat Neurosci. 2000, 3: 80-84. 10.1038/71152.

    CAS  PubMed  Google Scholar 

  73. 73.

    Wheaton KJ, Thompson JC, Syngeniotis A, Abbott DF, Puce A: Viewing the motion of human body parts activates different regions of premotor, temporal, and parietal cortex. NeuroImage. 2004, 22: 277-288. 10.1016/j.neuroimage.2003.12.043.

    PubMed  Google Scholar 

  74. 74.

    Freitag CM, Konrad C, Haberlen M, Kleser C, von Gontard A, Reith W, Troje NF, Krick C: Perception of biological motion in autism spectrum disorders. Neuropsychologia. 2008, 46: 1480-1494. 10.1016/j.neuropsychologia.2007.12.025.

    PubMed  Google Scholar 

  75. 75.

    Herrington JD, Baron-Cohen S, Wheelwright SJ, Singh KD, Bullmore ET, Brammer M, Williams SCR: The role of MT+/V5 during biological motion perception in Asperger syndrome: An fMRI study. Res Autism Spectr Disord. 2007, 1: 14-27. 10.1016/j.rasd.2006.07.002.

    Google Scholar 

  76. 76.

    Atkinson AP: Impaired recognition of emotions from body movements is associated with elevated motion coherence thresholds in autism spectrum disorders. Neuropsychologia. 2009, 47: 3023-3029. 10.1016/j.neuropsychologia.2009.05.019.

    PubMed  Google Scholar 

  77. 77.

    Blake R, Turner LM, Smoski MJ, Pozdol SL, Stone WL: Visual recognition of biological motion is impaired in children with autism. Psychol Sci. 2003, 14: 151-157. 10.1111/1467-9280.01434.

    PubMed  Google Scholar 

  78. 78.

    Hubert B, Wicker B, Moore DG, Monfardini E, Duverger H, Da Fonséca D, Deruelle C: Brief report: Recognition of emotional and non-emotional biological motion in individuals with autistic spectrum disorders. J Autism Dev Disord. 2007, 37: 1386-1392. 10.1007/s10803-006-0275-y.

    CAS  PubMed  Google Scholar 

  79. 79.

    Kaiser MD, Delmolino L, Tanaka JW, Shiffrar M: Comparison of visual sensitivity to human and object motion in autism spectrum disorder. Autism Res. 2010, 3: 191-195. 10.1002/aur.137.

    PubMed  Google Scholar 

  80. 80.

    Moore DG, Hobson RP, Lee A: Components of person perception: An investigation with autistic, nonautistic retarded and typically developing children and adolescents. Br J Dev Psychol. 1997, 15: 401-423. 10.1111/j.2044-835X.1997.tb00738.x.

    Google Scholar 

  81. 81.

    Dakin S, Frith U: Vagaries of visual perception in autism. Neuron. 2005, 48: 497-507. 10.1016/j.neuron.2005.10.018.

    CAS  PubMed  Google Scholar 

  82. 82.

    Tong F, Nakayama K, Vaughan JT, Kanwisher N: Binocular rivalry and visual awareness in human extrastriate cortex. Neuron. 1998, 21: 753-759. 10.1016/S0896-6273(00)80592-9.

    CAS  PubMed  Google Scholar 

  83. 83.

    Hubl D, Bölte S, Feineis-Matthews S, Lanfermann H, Federspiel A, Strik W, Poustka F, Dierks T: Functional imbalance of visual pathways indicates alternative face processing strategies in autism. Neurology. 2003, 61: 1232-1237. 10.1212/01.WNL.0000091862.22033.1A.

    CAS  PubMed  Google Scholar 

  84. 84.

    Pierce K, Müller RA, Ambrose J, Allen G, Courchesne E: Face processing occurs outside the fusiform 'face area' in autism: Evidence from functional MRI. Brain. 2001, 124: 2059-2073. 10.1093/brain/124.10.2059.

    CAS  PubMed  Google Scholar 

  85. 85.

    Schultz RT, Gauthier I, Klin A, Fulbright RK, Anderson AW, Volkmar F, Skudlarski P, Lacadie C, Cohen DJ, Gore JC: Abnormal ventral temporal cortical activity during face discrimination among individuals with autism and Asperger syndrome. Arch Gen Psychiatry. 2000, 57: 331-340. 10.1001/archpsyc.57.4.331.

    CAS  PubMed  Google Scholar 

  86. 86.

    Sato W, Kochiyama T, Yoshikawa S: Amygdala activity in response to forward versus backward dynamic facial expressions. Brain Res. 2010, 1315: 92-99.

    CAS  PubMed  Google Scholar 

  87. 87.

    Emery NJ, Capitanio JP, Mason WA, Machado CJ, Mendoza SP, Amaral DG: The effects of bilateral lesions of the amygdala on dyadic social interactions in rhesus monkeys (Macaca mulatta). Behav Neurosci. 2001, 115: 515-544.

    CAS  PubMed  Google Scholar 

  88. 88.

    Bachevalier J: Brief report: Medial temporal lobe and autism: A putative animal model in primates. J Autism Dev Disord. 1996, 26: 217-220. 10.1007/BF02172015.

    CAS  PubMed  Google Scholar 

  89. 89.

    Gallagher HL, Happé F, Brunswick N, Fletcher PC, Frith U, Frith CD: Reading the mind in cartoons and stories: An fMRI study of 'theory of mind' in verbal and nonverbal tasks. Neuropsychologia. 2000, 38: 11-21. 10.1016/S0028-3932(99)00053-6.

    CAS  PubMed  Google Scholar 

  90. 90.

    Tomasello M, Carpenter M, Call J, Behne T, Moll H: Understanding and sharing intentions: The origins of cultural cognition. Behav Brain Sci. 2005, 28: 675-691.

    PubMed  Google Scholar 

  91. 91.

    Baron-Cohen S, Leslie AM, Frith U: Does the autistic child have a “theory of mind”?. Cognition. 1985, 21: 37-46. 10.1016/0010-0277(85)90022-8.

    CAS  PubMed  Google Scholar 

  92. 92.

    Castelli F, Frith C, Happé F, Frith U: Autism, Asperger syndrome and brain mechanisms for the attribution of mental states to animated shapes. Brain. 2002, 125: 1839-1849. 10.1093/brain/awf189.

    PubMed  Google Scholar 

  93. 93.

    Happé F, Ehlers S, Fletcher P, Frith U, Johansson M, Gillberg C, Dolan R, Frackowiak R, Frith C: 'Theory of mind' in the brain. Evidence from a PET scan study of Asperger syndrome. Neuroreport. 1996, 8: 197-201. 10.1097/00001756-199612200-00040.

    PubMed  Google Scholar 

  94. 94.

    Buccino G, Binkofski F, Fink GR, Fadiga L, Fogassi L, Gallese V, Seitz RJ, Zilles K, Rizzolatti G, Freund HJ: Action observation activates premotor and parietal areas in a somatotopic manner: An fMRI study. Eur J Neurosci. 2001, 13: 400-404.

    CAS  PubMed  Google Scholar 

  95. 95.

    Buccino G, Lui F, Canessa N, Patteri I, Lagravinese G, Benuzzi F, Porro CA, Rizzolatti G: Neural circuits involved in the recognition of actions performed by nonconspecifics: An fMRI study. J Cogn Neurosci. 2004, 16: 114-126. 10.1162/089892904322755601.

    PubMed  Google Scholar 

  96. 96.

    Leslie KR, Johnson-Frey SH, Grafton ST: Functional imaging of face and hand imitation: Towards a motor theory of empathy. NeuroImage. 2004, 21: 601-607. 10.1016/j.neuroimage.2003.09.038.

    PubMed  Google Scholar 

  97. 97.

    Lee TW, Josephs O, Dolan RJ, Critchley HD: Imitating expressions: Emotion-specific neural substrates in facial mimicry. Soc Cogn Affect Neurosci. 2006, 1: 122-135. 10.1093/scan/nsl012.

    PubMed Central  PubMed  Google Scholar 

  98. 98.

    Nishitani N, Avikainen S, Hari R: Abnormal imitation-related cortical activation sequences in Asperger's syndrome. Ann Neurol. 2004, 55: 558-562. 10.1002/ana.20031.

    PubMed  Google Scholar 

  99. 99.

    McIntosh DN, Reichmann-Decker A, Winkielman P, Wilbarger JL: When the social mirror breaks: Deficits in automatic, but not voluntary, mimicry of emotional facial expressions in autism. Dev Sci. 2006, 9: 295-302. 10.1111/j.1467-7687.2006.00492.x.

    PubMed  Google Scholar 

  100. 100.

    Oberman LM, Winkielman P, Ramachandran VS: Slow echo: Facial EMG evidence for the delay of spontaneous, but not voluntary, emotional mimicry in children with autism spectrum disorders. Dev Sci. 2009, 12: 510-520. 10.1111/j.1467-7687.2008.00796.x.

    PubMed  Google Scholar 

  101. 101.

    Tardif C, Lainé F, Rodriguez M, Gepner B: Slowing down presentation of facial movements and vocal sounds enhances facial expression recognition and induces facial-vocal imitation in children with autism. J Autism Dev Disord. 2007, 37: 1469-1484. 10.1007/s10803-006-0223-x.

    PubMed  Google Scholar 

  102. 102.

    Brock J, Brown CC, Boucher J, Rippon G: The temporal binding deficit hypothesis of autism. Dev Psychopathol. 2002, 14: 209-224.

    PubMed  Google Scholar 

  103. 103.

    Welchew DE, Ashwin C, Berkouk K, Salvador R, Suckling J, Baron-Cohen S, Bullmore E: Functional disconnectivity of the medial temporal lobe in Asperger's syndrome. Biol Psychiatry. 2005, 57: 991-998. 10.1016/j.biopsych.2005.01.028.

    PubMed  Google Scholar 

  104. 104.

    Bird G, Catmur C, Silani G, Frith C, Frith U: Attention does not modulate neural responses to social stimuli in autism spectrum disorders. NeuroImage. 2006, 31: 1614-1624. 10.1016/j.neuroimage.2006.02.037.

    PubMed  Google Scholar 

  105. 105.

    Kleinhans NM, Richards T, Sterling L, Stegbauer KC, Mahurin R, Johnson LC, Greenson J, Dawson G, Aylward E: Abnormal functional connectivity in autism spectrum disorders during face processing. Brain. 2008, 131: 1000-1012. 10.1093/brain/awm334.

    PubMed  Google Scholar 

  106. 106.

    Just MA, Cherkassky VL, Keller TA, Minshew NJ: Cortical activation and synchronization during sentence comprehension in high-functioning autism: Evidence of underconnectivity. Brain. 2004, 127: 1811-1821. 10.1093/brain/awh199.

    PubMed  Google Scholar 

  107. 107.

    Just MA, Cherkassky VL, Keller TA, Kana RK, Minshew NJ: Functional and anatomical cortical underconnectivity in autism: Evidence from an fMRI study of an executive function task and corpus callosum morphometry. Cereb Cortex. 2007, 17: 951-961.

    PubMed Central  PubMed  Google Scholar 

  108. 108.

    Koshino H, Carpenter PA, Minshew NJ, Cherkassky VL, Keller TA, Just MA: Functional connectivity in an fMRI working memory task in high-functioning autism. NeuroImage. 2005, 24: 810-821. 10.1016/j.neuroimage.2004.09.028.

    PubMed  Google Scholar 

  109. 109.

    Koshino H, Kana RK, Keller TA, Cherkassky VL, Minshew NJ, Just MA: fMRI investigation of working memory for faces in autism: Visual coding and underconnectivity with frontal areas. Cereb Cortex. 2008, 18: 289-300.

    PubMed Central  PubMed  Google Scholar 

  110. 110.

    Villalobos ME, Mizuno A, Dahl BC, Kemmotsu N, Müller RA: Reduced functional connectivity between V1 and inferior frontal cortex associated with visuomotor performance in autism. NeuroImage. 2005, 25: 916-925. 10.1016/j.neuroimage.2004.12.022.

    PubMed Central  PubMed  Google Scholar 

  111. 111.

    Hazlett EA, Buchsbaum MS, Hsieh P, Haznedar MM, Platholi J, LiCalzi EM, Cartwright C, Hollander E: Regional glucose metabolism within cortical Brodmann areas in healthy individuals and autistic patients. Neuropsychobiology. 2004, 49: 115-125. 10.1159/000076719.

    CAS  PubMed  Google Scholar 

  112. 112.

    Behrmann M, Thomas C, Humphreys K: Seeing it differently: Visual processing in autism. Trends Cogn Sci. 2006, 10: 258-264. 10.1016/j.tics.2006.05.001.

    PubMed  Google Scholar 

  113. 113.

    Niedenthal PM, Brauer M, Halberstadt JB, Innes-Ker ÅH: When did her smile drop? Facial mimicry and the influences of emotional state on the detection of change in emotional expression. Cogn Emo. 2001, 15: 853-864. 10.1080/02699930143000194.

    Google Scholar 

  114. 114.

    Hietanen JK, Leppänen JM: Does facial expression affect attention orienting by gaze direction cues?. J Exp Psychol Hum Percept Perform. 2003, 29: 1228-1243.

    PubMed  Google Scholar 

  115. 115.

    van der Gaag C, Minderaa RB, Keysers C: The BOLD signal in the amygdala does not differentiate between dynamic facial expressions. Soc Cogn Affect Neurosci. 2007, 2: 93-103. 10.1093/scan/nsm002.

    PubMed Central  PubMed  Google Scholar 

  116. 116.

    van der Gaag C, Minderaa RB, Keysers C: Facial expressions: What the mirror neuron system can and cannot tell us. Soc Neurosci. 2007, 2: 179-222. 10.1080/17470910701376878.

    PubMed  Google Scholar 

  117. 117.

    Phillips ML, Young AW, Senior C, Brammer M, Andrew C, Calder AJ, Bullmore ET, Perrett DI, Rowland D, Williams SC, Gray JA, David AS: A specific neural substrate for perceiving facial expressions of disgust. Nature. 1997, 389: 495-498. 10.1038/39051.

    CAS  PubMed  Google Scholar 

  118. 118.

    Wicker B, Keysers C, Plailly J, Royet JP, Gallese V, Rizzolatti G: Both of us disgusted in my insula: The common neural basis of seeing and feeling disgust. Neuron. 2003, 40: 655-664. 10.1016/S0896-6273(03)00679-2.

    CAS  PubMed  Google Scholar 

  119. 119.

    Hadjikhani N, Joseph RM, Snyder J, Chabris CF, Clark J, Steele S, McGrath L, Vangel M, Aharon I, Feczko E, Harris GJ, Tager-Flusberg H: Activation of the fusiform gyrus when individuals with autism spectrum disorder view faces. NeuroImage. 2004, 22: 1141-1150. 10.1016/j.neuroimage.2004.03.025.

    PubMed  Google Scholar 

  120. 120.

    Pierce K, Haist F, Sedaghat F, Courchesne E: The brain response to personally familiar faces in autism: Findings of fusiform activity and beyond. Brain. 2004, 127: 2703-2716. 10.1093/brain/awh289.

    PubMed  Google Scholar 

  121. 121.

    Friston KJ, Harrison L, Penny W: Dynamic causal modeling. NeuroImage. 2003, 19: 1273-1302. 10.1016/S1053-8119(03)00202-7.

    CAS  PubMed  Google Scholar 

  122. 122.

    Stephan KE, Penny WD, Moran RJ, den Ouden HE, Daunizeau J, Friston KJ: Ten simple rules for dynamic causal modeling. NeuroImage. 2010, 49: 3099-3109. 10.1016/j.neuroimage.2009.11.015.

    CAS  PubMed Central  PubMed  Google Scholar 

  123. 123.

    Schopler E, Reichler RJ, Renner BR: The Childhood Autism Rating Scale (CARS): For diagnostic screening and classification of autism. 1986, Irvington, New York

    Google Scholar 

  124. 124.

    Koyama T, Tachimori H, Osada H, Takeda T, Kurita H: Cognitive and symptom profiles in Asperger's syndrome and high-functioning autism. Psychiatry Clin Neurosci. 2007, 61: 99-104. 10.1111/j.1440-1819.2007.01617.x.

    PubMed  Google Scholar 

  125. 125.

    Uono S, Sato W, Toichi M: The specific impairment of fearful expression recognition and its atypical development in pervasive developmental disorder. Soc Neurosci. 2011, 6: 452-463. 10.1080/17470919.2011.605593.

    PubMed  Google Scholar 

  126. 126.

    Oldfield RC: The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia. 1971, 9: 97-113. 10.1016/0028-3932(71)90067-4.

    CAS  PubMed  Google Scholar 

  127. 127.

    Ekman P, Friesen WV: Pictures of facial affect. 1976, Consulting Psychologist, Palo Alto

    Google Scholar 

  128. 128.

    Mukaida S, Kamachi M, Kato T, Oda M, Yoshikawa S, Akamatsu S: Foolproof utilities for facial image manipulation (unpublished computer software). 2000, ATR, Kyoto

    Google Scholar 

  129. 129.

    Friston KJ, Ashburner J, Frith CD, Poline JB, Heather JD, Frackowiak RSJ: Spatial registration and normalization of images. Hum Brain Mapp. 1995, 2: 165-189.

    Google Scholar 

  130. 130.

    Ashburner J, Friston KJ: Nonlinear spatial normalization using basis functions. Hum Brain Mapp. 1999, 7: 254-266. 10.1002/(SICI)1097-0193(1999)7:4<254::AID-HBM4>3.0.CO;2-G.

    CAS  PubMed  Google Scholar 

  131. 131.

    Holmes AP, Friston KJ: Generalizability, random effects and population inference. NeuroImage. 1998, 7: S754.

    Google Scholar 

  132. 132.

    Friston KJ, Holmes AP, Poline JB, Grasby PJ, Williams SC, Frackowiak RSJ, Turner R: Analysis of fMRI time-series revisited. NeuroImage. 1995, 2: 45-53. 10.1006/nimg.1995.1007.

    CAS  PubMed  Google Scholar 

  133. 133.

    Worsley KJ, Friston KJ: Analysis of fMRI time-series revisited–again. NeuroImage. 1995, 2: 173-181. 10.1006/nimg.1995.1023.

    CAS  PubMed  Google Scholar 

  134. 134.

    Friston KJ, Glaser DE, Henson RN, Kiebel S, Phillips C, Ashburner J: Classical and Bayesian inference in neuroimaging: Applications. NeuroImage. 2002, 16: 484-512. 10.1006/nimg.2002.1091.

    CAS  PubMed  Google Scholar 

  135. 135.

    Hadjikhani N, Joseph RM, Snyder J, Tager-Flusberg H: Abnormal activation of the social brain during face perception in autism. Hum Brain Mapp. 2007, 28: 441-449. 10.1002/hbm.20283.

    PubMed  Google Scholar 

  136. 136.

    Lancaster JL, Woldorff MG, Parsons LM, Liotti M, Freitas CS, Rainey L, Kochunov PV, Nickerson D, Mikiten SA, Fox PT: Automated Talairach atlas labels for functional brain mapping. Hum Brain Mapp. 2000, 10: 120-131. 10.1002/1097-0193(200007)10:3<120::AID-HBM30>3.0.CO;2-8.

    CAS  PubMed  Google Scholar 

  137. 137.

    Friston KJ, Frith CD, Liddle PF, Frackowiak RSJ: Comparing functional (PET) images: The assessment of significant change. J Cereb Blood Flow Metab. 1991, 11: 690-699. 10.1038/jcbfm.1991.122.

    CAS  PubMed  Google Scholar 

  138. 138.

    Sokal RR, Rohlf FJ: Biometry: The principles and practice of statistics in biological research. 1995, Freeman, New York, 3

    Google Scholar 

  139. 139.

    Worsley KJ, Marrett S, Neelin P, Vandal AC, Friston KJ, Evans AC: A unified statistical approach for determining significant signals in images of cerebral activation. Hum Brain Mapp. 1996, 4: 58-73. 10.1002/(SICI)1097-0193(1996)4:1<58::AID-HBM4>3.0.CO;2-O.

    CAS  PubMed  Google Scholar 

  140. 140.

    Eickhoff SB, Stephan KE, Mohlberg H, Grefkes C, Fink GR, Amunts K, Zilles K: A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. NeuroImage. 2005, 25: 1325-1335. 10.1016/j.neuroimage.2004.12.034.

    PubMed  Google Scholar 

  141. 141.

    Stephan KE, Penny WD, Daunizeau J, Moran RJ, Friston KJ: Bayesian model selection for group studies. NeuroImage. 2009, 46: 1004-1017. 10.1016/j.neuroimage.2009.03.025.

    PubMed Central  PubMed  Google Scholar 

  142. 142.

    Liu L, Vira A, Friedman E, Minas J, Bolger D, Bitan T, Booth J: Children with reading disability show brain differences in effective connectivity for visual, but not auditory word comprehension. PLoS One. 2010, 5: e13492-10.1371/journal.pone.0013492.

    PubMed Central  PubMed  Google Scholar 

  143. 143.

    Seghier ML, Josse G, Leff AP, Price CJ: Lateralization is predicted by reduced coupling from the left to right prefrontal cortex during semantic decisions on written words. Cereb Cortex. 2011, 21: 1519-1531. 10.1093/cercor/bhq203.

    PubMed Central  PubMed  Google Scholar 

  144. 144.

    Penny WD, Stephan KE, Daunizeau J, Rosa MJ, Friston KJ, Schofield TM, Leff AP: Comparing families of dynamic causal models. PLoS Comput Biol. 2010, 6: e1000709-10.1371/journal.pcbi.1000709.

    PubMed Central  PubMed  Google Scholar 



Acknowledgements

We thank Professor S. Yoshikawa for her helpful advice and the ATR Brain Activity Imaging Center for its support in acquiring the fMRI data. This study was supported by funds from the Benesse Corporation, JSPS Grants-in-Aid for Scientific Research, the JSPS Funding Program for Next Generation World-Leading Researchers, and the Organization for Promoting Developmental Disorder Research.

Author information



Corresponding author

Correspondence to Wataru Sato.

Additional information

Competing interests

The authors declare that they have no competing interests.

Wataru Sato and Motomi Toichi contributed equally to this work.

Electronic supplementary material


Additional file 1: Figure S1. The model for the analysis of regional brain activity. We constructed a three-way repeated-measures ANOVA design including participant as a factor of no interest and group, presentation condition, and emotion as factors of interest. CON = Control; ASD = Autism spectrum disorders; DY = Dynamic; ST = Static; FE = Fear; HA = Happiness. (TIFF 2 MB)


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Sato, W., Toichi, M., Uono, S. et al. Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders. BMC Neurosci 13, 99 (2012).



Keywords

  • Amygdala
  • Autism spectrum disorders (ASD)
  • Dynamic facial expression
  • Fusiform gyrus
  • Inferior frontal gyrus
  • Medial prefrontal cortex
  • Middle temporal gyrus/superior temporal sulcus
  • Mirror neuron system