<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Hum. Neurosci.</journal-id>
<journal-title>Frontiers in Human Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Hum. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5161</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnhum.2014.00714</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The amygdala&#x00027;s response to face and emotional information and potential category-specific modulation of temporal cortex as a function of emotion</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>White</surname> <given-names>Stuart F.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/110651"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Adalio</surname> <given-names>Christopher</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/147874"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Nolan</surname> <given-names>Zachary T.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Yang</surname> <given-names>Jiongjiong</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/70551"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Martin</surname> <given-names>Alex</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/8773"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Blair</surname> <given-names>James R.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/49960"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Section on Affective Cognitive Neuroscience, National Institute of Mental Health, National Institutes of Health</institution> <country>Bethesda, MD, USA</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Psychology, University of California, Berkeley</institution> <country>Berkeley, CA, USA</country></aff>
<aff id="aff3"><sup>3</sup><institution>Department of Psychology, Peking University</institution> <country>Beijing, China</country></aff>
<aff id="aff4"><sup>4</sup><institution>Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health</institution> <country>Bethesda, MD, USA</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Aron K. Barbey, University of Illinois at Urbana-Champaign, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Stephan Hamann, Emory University, USA; Carla Harenski, The MIND Research Network, USA</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Stuart F. White, National Institute of Mental Health, 9000 Rockville Pike, Bldg. 15k, room 205, MSC 2670, Bethesda, MD 20892, USA e-mail: <email>whitesf&#x00040;mail.nih.gov</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to the journal Frontiers in Human Neuroscience.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>11</day>
<month>09</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>8</volume>
<elocation-id>714</elocation-id>
<history>
<date date-type="received">
<day>27</day>
<month>03</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>08</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2014 White, Adalio, Nolan, Yang, Martin and Blair.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>The amygdala has been implicated in the processing of emotion and animacy information and has been shown to be responsive to novelty. However, the way in which these functions interact is poorly understood. Subjects (<italic>N</italic> &#x0003D; 30) viewed threatening or neutral images that could be either animate (facial expressions) or inanimate (objects) in the context of a dot probe task. The amygdala showed responses to both emotional and animacy information, but no emotion-by-stimulus-type interaction; i.e., emotional face and object stimuli, when matched for arousal and valence, generated comparable amygdala activity relative to neutral face and object stimuli. Additionally, a habituation effect was not seen in the amygdala; however, increased amygdala activity was observed for incongruent relative to congruent <italic>negative</italic> trials in second vs. first exposures. Furthermore, medial fusiform gyrus showed an increased response to inanimate stimuli, while superior temporal sulcus showed an increased response to animate stimuli. Greater functional connectivity between bilateral amygdala and medial fusiform gyrus was observed to negative vs. neutral objects, but not to fearful vs. neutral faces. The current data suggest that the amygdala is responsive to both animate and emotional stimuli. Additionally, these data suggest that the various functions of the amygdala may need to be considered simultaneously to be fully understood. Moreover, they suggest category-specific modulation of medial fusiform cortex as a function of emotion.</p></abstract>
<kwd-group>
<kwd>amygdala</kwd>
<kwd>animate</kwd>
<kwd>emotion</kwd>
<kwd>fusiform gyrus</kwd>
<kwd>temporal cortex</kwd>
</kwd-group>
<counts>
<fig-count count="4"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="46"/>
<page-count count="9"/>
<word-count count="7209"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>Considerable work implicates the amygdala in emotional processing (LeDoux, <xref ref-type="bibr" rid="B24">2012</xref>). There are data demonstrating greater amygdala responses to emotional (fearful) relative to neutral facial expressions (Murphy et al., <xref ref-type="bibr" rid="B30">2003</xref>). But animal and human data also indicate that the amygdala is simply responsive to face stimuli (see Pessoa and Adolphs, <xref ref-type="bibr" rid="B33">2010</xref>). The amygdala shows greater responses to animate stimuli, including faces (Gobbini et al., <xref ref-type="bibr" rid="B21">2011</xref>; Yang et al., <xref ref-type="bibr" rid="B46">2012</xref>), animals (Chao et al., <xref ref-type="bibr" rid="B13">2002</xref>; Yang et al., <xref ref-type="bibr" rid="B46">2012</xref>; Coker-Appiah et al., <xref ref-type="bibr" rid="B14">2013</xref>) and inanimate objects moving in animate ways (Martin and Weisberg, <xref ref-type="bibr" rid="B27">2003</xref>; Wheatley et al., <xref ref-type="bibr" rid="B43">2007</xref>; Santos et al., <xref ref-type="bibr" rid="B38">2010</xref>), relative to inanimate stimuli. There remains a question though regarding the extent to which the amygdala&#x00027;s response to emotional stimuli is limited to only <italic>animate</italic> emotional stimuli. One study examining BOLD responses to faces, animals and objects that were either emotional or neutral in the context of a repetition detection task reported an animacy-by-emotion interaction within the amygdala (Yang et al., <xref ref-type="bibr" rid="B46">2012</xref>). The differential response to threatening faces vs. neutral faces was significantly greater than the differential response to threatening objects vs. neutral objects. Indeed, in this study, the amygdala showed no significant response to threatening relative to neutral objects. 
In contrast, two additional studies, one using a very similar paradigm to Yang and colleagues (Cao et al., <xref ref-type="bibr" rid="B11">2014</xref>) and a second examining the differential response to approaching or receding animate or inanimate threats or neutral stimuli (Coker-Appiah et al., <xref ref-type="bibr" rid="B14">2013</xref>), both reported main effects for emotion within the amygdala. All three of these studies reported increased amygdala responses to faces and other animate stimuli relative to objects. However, while the data reported by Yang et al. (<xref ref-type="bibr" rid="B46">2012</xref>) suggested the amygdala&#x00027;s response to emotional stimuli was confined to animate stimuli, those of Coker-Appiah et al. (<xref ref-type="bibr" rid="B14">2013</xref>) and Cao et al. (<xref ref-type="bibr" rid="B11">2014</xref>) both indicated the amygdala responded to emotional stimuli whether they were animate or not. Furthermore, other work found greater amygdala response to sharp relative to curved contours in a series of neutral objects (e.g., sharp cornered vs. round baking pans); the authors argue that sharp contours are more threatening than rounded ones (Bar and Neta, <xref ref-type="bibr" rid="B4">2007</xref>). These data also suggest that amygdala responds to threat information independently of animacy information.</p>
<p>An important function of the amygdala concerns its role in emotional attention. The suggestion is that the amygdala primes representations of emotional stimuli in temporal cortex such that these are neurally represented more strongly than non-emotional stimuli. Thus, emotional stimuli are more likely to win the competition for representation and thereby become the focus of attention (Pessoa and Ungerleider, <xref ref-type="bibr" rid="B35">2004</xref>; Blair et al., <xref ref-type="bibr" rid="B8">2007</xref>). Emotional modulation of attention is thought to occur via direct feedback projections from the amygdala to visual processing areas, including temporal cortex (Pessoa et al., <xref ref-type="bibr" rid="B34">2002</xref>; Mitchell et al., <xref ref-type="bibr" rid="B29">2007</xref>). Interestingly, it has been argued that object concepts belonging to different categories are represented in partially distinct, sensory- and motor property&#x02013;based neural networks (Caramazza and Shelton, <xref ref-type="bibr" rid="B12">1998</xref>; Martin, <xref ref-type="bibr" rid="B26">2007</xref>). For common tools, the neural circuitry includes the medial portion of the fusiform gyrus and posterior medial temporal gyrus, assumed to represent their visual form and action properties (motion and manipulation; Martin, <xref ref-type="bibr" rid="B26">2007</xref>). For faces and animate objects, this circuitry includes two regions in posterior temporal cortex: the lateral portion of the fusiform gyrus, including the fusiform face area (FFA; Kanwisher and Yovel, <xref ref-type="bibr" rid="B22">2006</xref>; Nguyen et al., <xref ref-type="bibr" rid="B31">2013</xref>) and a region of posterior superior temporal sulcus (STS; Chao et al., <xref ref-type="bibr" rid="B13">2002</xref>; Martin, <xref ref-type="bibr" rid="B26">2007</xref>; Gobbini et al., <xref ref-type="bibr" rid="B21">2011</xref>). 
These regions have been implicated in representing visual form and motion, respectively (Beauchamp et al., <xref ref-type="bibr" rid="B6">2003</xref>; Pelphrey et al., <xref ref-type="bibr" rid="B32">2005</xref>; Beauchamp and Martin, <xref ref-type="bibr" rid="B7">2007</xref>). Given the differential representation of objects and faces within medial fusiform gyrus and lateral fusiform gyrus/STS, respectively, one can anticipate emotional priming to occur in a category-specific pattern within temporal cortex. However, this has not been formally tested.</p>
<p>Furthermore, the amygdala is sensitive to novel stimuli and shows rapid habituation to repeated presentation of the same stimulus (Breiter et al., <xref ref-type="bibr" rid="B10">1996</xref>; Fischer et al., <xref ref-type="bibr" rid="B19">2000</xref>, <xref ref-type="bibr" rid="B20">2003</xref>; Wright et al., <xref ref-type="bibr" rid="B44">2001</xref>). While there are indications that this habituation effect is comparable for happy, fearful and neutral faces (e.g., Fischer et al., <xref ref-type="bibr" rid="B20">2003</xref>) and for snakes and flowers (Balderston et al., <xref ref-type="bibr" rid="B3">2013</xref>), the interaction between repetition effects, emotion and animacy in the amygdala has not, to our knowledge, been previously examined.</p>
<p>The goal of the current study is to examine the functional roles of the amygdala. A dot probe paradigm, rather than the previously used repetition detection or stimulus detection paradigms, was chosen because it provides the possibility of generating behavioral data regarding the functional impact of emotion and animacy information. Behavioral data that are interpretable on a trial-by-trial basis provide a useful context in which neural data can be interpreted.</p>
<p>The current study tests six predictions. First, given previous findings (Pessoa and Adolphs, <xref ref-type="bibr" rid="B33">2010</xref>; Yang et al., <xref ref-type="bibr" rid="B46">2012</xref>), we predicted that the amygdala would show increased responses to faces relative to objects. Second, based on previous findings (Fischer et al., <xref ref-type="bibr" rid="B20">2003</xref>), we predicted that habituation effects would be seen in the amygdala regardless of emotional or animacy information. Third, we predicted that if the amygdala is responsive to emotional information irrespective of animacy (cf. Coker-Appiah et al., <xref ref-type="bibr" rid="B14">2013</xref>; Cao et al., <xref ref-type="bibr" rid="B11">2014</xref>), there would be comparably increased responses within the amygdala to emotional faces and objects relative to neutral faces and objects. Fourth, if the amygdala is only, or much more strongly, responsive to the emotional content of face stimuli (cf. Yang et al., <xref ref-type="bibr" rid="B46">2012</xref>), then there would be a significant animacy-by-emotion interaction within the amygdala such that responding is significantly greater to fearful vs. neutral faces relative to threatening vs. neutral objects. Fifth, following previous work implicating medial fusiform cortex in preferential responding to inanimate stimuli (Beauchamp et al., <xref ref-type="bibr" rid="B5">2002</xref>; Mahon et al., <xref ref-type="bibr" rid="B25">2007</xref>; Gobbini et al., <xref ref-type="bibr" rid="B21">2011</xref>), we predicted greater responses within medial fusiform cortex to objects relative to faces. 
Moreover, we predicted: (i) modulation by threat would only occur for inanimate object stimuli within this region; and (ii) this region would show differential connectivity such that there would be greater correlation in signaling between this region and the amygdala as a function of threatening relative to neutral objects than threatening relative to neutral faces. Sixth, following previous work implicating FFA and STS in preferentially responding to animate stimuli (Beauchamp et al., <xref ref-type="bibr" rid="B5">2002</xref>; Mahon et al., <xref ref-type="bibr" rid="B25">2007</xref>; Gobbini et al., <xref ref-type="bibr" rid="B21">2011</xref>), we predicted greater responses within FFA and STS to faces relative to objects. Moreover, we predicted: (i) modulation by threat would only occur for faces within this region; and (ii) this region would show differential connectivity such that there would be greater correlation in signaling between this region and the amygdala as a function of threat relative to neutral faces than threat relative to neutral objects.</p>
</sec>
<sec sec-type="methods" id="s2">
<title>Method</title>
<sec>
<title>Subjects</title>
<p>Thirty right-handed subjects (13 females; aged 21.1&#x02013;36.7 years, mean age &#x0003D; 26.0, <italic>SD</italic> &#x0003D; 4.20) volunteered for the study and were paid for their participation. Subjects were in good physical health as confirmed by a complete physical exam, with no history of any psychiatric illness as assessed by DSM-IV (1994) criteria based on the Structured Clinical Interview for DSM-IV Axis I disorders (SCID; First et al., <xref ref-type="bibr" rid="B18">1997</xref>). All subjects gave written informed assent/consent to participate in the study, which was approved by the National Institute of Mental Health Institutional Review Board.</p>
</sec>
<sec>
<title>Animacy attention task</title>
<p>The animacy attention task is a dot probe task (Figure <xref ref-type="fig" rid="F1">1</xref>). The stimuli consisted of images that were threatening and animate (e.g., fearful expression), threatening and inanimate (e.g., gun), neutral and animate (e.g., neutral expression), or neutral and inanimate (e.g., mug). There were 20 items per category (80 different images). Each image was presented twice in total and never more than once per run. The stimuli were taken from Yang et al. (<xref ref-type="bibr" rid="B46">2012</xref>). Based on the data from Yang and colleagues, stimuli were matched so that the facial expression stimuli did not differ from the object stimuli on valence [<italic>t</italic><sub>(78)</sub> &#x0003D; 0.938, <italic>p</italic> &#x0003D; 0.351], arousal [<italic>t</italic><sub>(78)</sub> &#x0003D; 1.632, <italic>p</italic> &#x0003D; 0.107] or luminance [<italic>t</italic><sub>(78)</sub> &#x0003D; 1.235, <italic>p</italic> &#x0003D; 0.220]. Additionally, the magnitude of these differences [(fearful faces&#x02014;neutral faces)&#x02014;(threatening objects&#x02014;neutral objects)] was directly compared for valence and arousal. The magnitude of these differences (Cohen&#x00027;s <italic>d</italic>) did not significantly differ for either valence (<italic>z</italic> &#x0003D; 0.97, <italic>p</italic> &#x0003D; 0.33) or arousal (<italic>z</italic> &#x0003D; 1.51, <italic>p</italic> &#x0003D; 0.13).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Animacy attention task</bold>. The task consisted of the presentation of an image, which was either neutral and animate, neutral and inanimate, threatening and animate, or threatening and inanimate, on either the left or right side of the display, followed by a &#x0201C;<sup>&#x0002A;</sup>&#x0201D; probe, also on either the left or right side of the display. Participants were required to indicate, via button press, whether the probe appeared on the left or right side of the display. <bold>(A)</bold> Neutral inanimate congruent trial. <bold>(B)</bold> Neutral inanimate incongruent trial. <bold>(C)</bold> Example stimuli.</p></caption>
<graphic xlink:href="fnhum-08-00714-g0001.tif"/>
</fig>
<p>Each trial began with a 30 ms fixation, followed by a 300 ms stimulus presentation on either the left or the right side of the screen, occupying 40% of the width and 45% of the height of the screen. The stimulus was immediately followed by the presentation of a probe (x) for 1000 ms. During congruent trials the probe appeared on the same side of the screen as the stimulus; during incongruent trials, the probe appeared on the opposite side of the screen to the stimulus. The probe was followed by a 970 ms fixation. Participants were instructed to press the button corresponding to the side of the screen on which the probe appeared, as quickly as possible. The task included 4 runs of 2 min and 10 s each, each consisting of 10 threatening face, 10 threatening object, 10 neutral face and 10 neutral object images as well as 10 fixation trials. Sixty percent of trials were congruent, and images were randomized across trials and participants. No image was presented as incongruent more than once.</p>
</sec>
<sec>
<title>Imaging methods</title>
<sec>
<title>fMRI data acquisition and preprocessing</title>
<p>Whole-brain blood oxygen level dependent (BOLD) fMRI data were acquired using a 3.0 Tesla GE MRI scanner. Following sagittal localization, functional T2<sup>&#x0002A;</sup>-weighted images were acquired using an echo-planar single-shot gradient echo pulse sequence [matrix &#x0003D; 64 &#x000D7; 64, repetition time (TR) &#x0003D; 2900 ms, echo time (TE) &#x0003D; 27 ms, field-of-view (FOV) &#x0003D; 240 mm (3.75 &#x000D7; 3.75 mm in-plane resolution)]. Images were acquired in 34 axial slices of 2.5 mm thickness with 0.5 mm spacing per brain volume. A high-resolution anatomical scan (3-dimensional spoiled gradient recalled acquisition in a steady state; <italic>TR</italic> &#x0003D; 7 ms; <italic>TE</italic> &#x0003D; 2.984 ms; 24 cm field of view; 12&#x000B0; flip angle; 128 axial slices; thickness, 1.2 mm; 256 &#x000D7; 256 matrix) in register with the EPI dataset was obtained covering the whole brain.</p>
</sec>
<sec>
<title>Imaging data preprocessing</title>
<p>Data were analyzed within the framework of the general linear model using Analysis of Functional Neuroimages (AFNI; Cox, <xref ref-type="bibr" rid="B16">1996</xref>). Both individual and group-level analyses were conducted. The first four volumes in each scan series, collected before equilibrium magnetization was reached, were discarded. Motion correction was performed by registering all volumes in the EPI dataset to a volume collected close to acquisition of the high-resolution anatomical dataset.</p>
<p>The EPI datasets for each subject were spatially smoothed (isotropic 6 mm kernel) to reduce variability among individuals and generate group maps. Next, the time series data were normalized by dividing the signal intensity of a voxel at each time point by the mean signal intensity of that voxel for each run and multiplying the result by 100, producing regression coefficients representing percent-signal change.</p>
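<p>The normalization described above amounts to dividing each voxel&#x00027;s time series by its run mean and multiplying by 100. A minimal sketch (illustrative function name, not part of the AFNI pipeline):</p>

```python
import numpy as np

def to_percent_signal_change(ts):
    """Scale one voxel's raw BOLD time series (one run) so that
    values fluctuate around 100, i.e., percent of the run mean."""
    return ts / ts.mean() * 100.0

# Toy series with a run mean of 200: scaled values sit around 100.
raw = np.array([190.0, 200.0, 210.0, 200.0])
scaled = to_percent_signal_change(raw)  # [95., 100., 105., 100.]
```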
<p>Next, 16 regressors were generated, modeling correct responses for the following trial types: (i) threatening faces, congruent, first exposure; (ii) threatening faces, congruent, second exposure; (iii) threatening objects, congruent, first exposure; (iv) threatening objects, congruent, second exposure; (v) neutral faces, congruent, first exposure; (vi) neutral faces, congruent, second exposure; (vii) neutral objects, congruent, first exposure; (viii) neutral objects, congruent, second exposure; (ix) threatening faces, incongruent, first exposure; (x) threatening faces, incongruent, second exposure; (xi) threatening objects, incongruent, first exposure; (xii) threatening objects, incongruent, second exposure; (xiii) neutral faces, incongruent, first exposure; (xiv) neutral faces, incongruent, second exposure; (xv) neutral objects, incongruent, first exposure; (xvi) neutral objects, incongruent, second exposure. A seventeenth regressor modeled incorrect responses. These 17 regressors were created by convolving the train of stimulus events with a gamma-variate hemodynamic response function to account for the slow hemodynamic response. The participants&#x00027; anatomical scans were individually registered to the Talairach and Tournoux atlas (Talairach and Tournoux, <xref ref-type="bibr" rid="B40">1988</xref>). The individuals&#x00027; functional EPI data were then registered to their Talairach anatomical scan within AFNI. Linear regression modeling was performed using the 17 regressors described above plus 6 head motion regressors. This produced a &#x003B2; coefficient and associated <italic>t</italic> statistic for each voxel and regressor.</p>
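<p>The regressor construction above, a stimulus event train convolved with a gamma-variate hemodynamic response, can be sketched as follows. The gamma parameters and function names here are illustrative assumptions, not the values used by AFNI:</p>

```python
import numpy as np

def gamma_hrf(t, p=8.6, q=0.547):
    # Illustrative gamma-variate response, peaking near t = p*q (~4.7 s).
    return (t / (p * q)) ** p * np.exp(p - t / q)

def build_regressor(onsets_s, n_vols, tr_s):
    # Stick function on the TR grid: 1 at volumes where an event starts.
    stick = np.zeros(n_vols)
    for onset in onsets_s:
        stick[int(round(onset / tr_s))] = 1.0
    # Convolve with ~30 s of HRF support and trim to the run length.
    hrf = gamma_hrf(np.arange(0.0, 30.0, tr_s))
    return np.convolve(stick, hrf)[:n_vols]

# One regressor for a condition with events at 0 s and 29 s (TR = 2.9 s).
reg = build_regressor([0.0, 29.0], n_vols=20, tr_s=2.9)
```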
</sec>
<sec>
<title>fMRI data analysis</title>
<p>A whole-brain analysis of the BOLD data was conducted using a 2 (emotion: threatening, neutral) &#x000D7; 2 (object type: faces, objects) &#x000D7; 2 (congruency: congruent, incongruent) &#x000D7; 2 (exposure: first, second) ANOVA. The ClustSim program in AFNI was utilized to determine that, at an initial threshold of <italic>p</italic> &#x0003D; 0.005, a whole-brain <italic>p</italic> &#x0003D; 0.05 correction required clusters of 39 voxels. Due to its small size and theoretical importance, a small volume correction was made for the amygdala (as defined by all voxels of the Eickhoff&#x02013;Zilles architectonic atlas with at least a 50% probability of being in the amygdala) at an initial threshold of <italic>p</italic> &#x0003D; 0.02, which yielded a minimum cluster size of 6 voxels. <italic>Post-hoc</italic> analyses were conducted on the average percent signal change taken from all voxels within each ROI, defined by functional masks generated in AFNI, with <italic>t</italic>-tests carried out in SPSS to examine interaction effects.</p>
<p>In addition, two generalized psychophysiological interaction (gPPI) connectivity analyses were conducted to examine task-dependent connectivity across task conditions (McLaren et al., <xref ref-type="bibr" rid="B28">2012</xref>). Seed regions were the left and right amygdala (as defined above). For each seed region, the average activation was extracted across the time series. Interaction regressors were created by multiplying the average time series with 16 task time-course vectors (one for each task condition), coded 1 &#x0003D; task condition present and 0 &#x0003D; task condition not present. The average activation for the seed region was entered into a linear regression model along with the 16 interaction regressors (one per task condition), the 16 original task regressors described above, the incorrect-response regressor and 6 motion regressors. A series of <italic>t</italic>-tests was conducted to test our hypotheses of greater connectivity between amygdala and STS/FFA for negative relative to neutral faces and greater connectivity between amygdala and medial fusiform gyrus for negative relative to neutral objects.</p>
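<p>The interaction-regressor step above reduces to an elementwise product of the seed time series with each 0/1 condition vector. A simplified sketch (condition names are hypothetical; full gPPI pipelines typically also deconvolve the seed signal to the neural level, omitted here):</p>

```python
import numpy as np

def gppi_interaction_regressors(seed_ts, condition_vectors):
    """seed_ts: average seed-region time series, shape (T,).
    condition_vectors: dict mapping condition name -> (T,) vector
    coded 1 when the condition is present and 0 otherwise.
    Returns one interaction regressor per condition."""
    return {name: seed_ts * vec for name, vec in condition_vectors.items()}

# Toy example: 6 time points, 2 of the 16 task conditions.
seed = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
conds = {
    "threat_face_congruent_1st": np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0]),
    "neutral_object_incongruent_2nd": np.array([0.0, 0.0, 0.0, 1.0, 0.0, 1.0]),
}
interactions = gppi_interaction_regressors(seed, conds)
# interactions["threat_face_congruent_1st"] -> [1., 0., 3., 0., 0., 0.]
```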
</sec>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Behavioral results</title>
<p>Two 2 (emotion: threatening, neutral) &#x000D7; 2 (object type: faces, objects) &#x000D7; 2 (congruence: congruent, incongruent) &#x000D7; 2 (exposure: first, second) ANOVAs were conducted on the subjects&#x00027; accuracy and RT data. A significant main effect of exposure was observed for accuracy [<italic>F</italic><sub>(1, 29)</sub> &#x0003D; 6.061, <italic>p</italic> &#x0003D; 0.02]. While accuracy was high throughout the task (97.4%), participants were marginally more accurate for first [<italic>M</italic><sub>(first)</sub> &#x0003D; 0.984, <italic>SE</italic> &#x0003D; 0.005] relative to second exposures [<italic>M</italic><sub>(second)</sub> &#x0003D; 0.965, <italic>SE</italic> &#x0003D; 0.011]. No other main effects or interactions were significant.</p>
<p>With respect to response latency, significant main effects were observed for object type [<italic>F</italic><sub>(1, 29)</sub> &#x0003D; 8.558, <italic>p</italic> &#x0003D; 0.007], congruency [<italic>F</italic><sub>(1, 29)</sub> &#x0003D; 5.184, <italic>p</italic> &#x0003D; 0.030] and exposure [<italic>F</italic><sub>(1, 29)</sub> &#x0003D; 15.717, <italic>p</italic> &#x0003C; 0.001]. Participants were quicker to respond to faces relative to objects [<italic>M</italic><sub>(faces)</sub> &#x0003D; 413.51 (<italic>SE</italic> &#x0003D; 12.00); <italic>M</italic><sub>(objects)</sub> &#x0003D; 420.55 (<italic>SE</italic> &#x0003D; 11.25)], to congruent relative to incongruent stimuli [<italic>M</italic><sub>(congruent)</sub> &#x0003D; 412.20 (<italic>SE</italic> &#x0003D; 11.24); <italic>M</italic><sub>(incongruent)</sub> &#x0003D; 421.87 (<italic>SE</italic> &#x0003D; 12.27)] and to second exposures relative to first exposures [<italic>M</italic><sub>(first)</sub> &#x0003D; 427.07 (<italic>SE</italic> &#x0003D; 12.77); <italic>M</italic><sub>(second)</sub> &#x0003D; 406.99 (<italic>SE</italic> &#x0003D; 10.84)]. There was also an emotion-by-congruence interaction [<italic>F</italic><sub>(1, 29)</sub> &#x0003D; 5.009, <italic>p</italic> &#x0003D; 0.033]. Participants were quicker to respond to neutral congruent stimuli relative to neutral incongruent stimuli (<italic>t</italic> &#x0003D; 2.961, <italic>p</italic> &#x0003D; 0.006), but response latencies did not differ between negative congruent and negative incongruent trials (<italic>t</italic> &#x0003D; 1.150, <italic>p</italic> &#x0003D; 0.260). No other main effects or interactions were significant.</p>
</sec>
<sec>
<title>fMRI results</title>
<p>A 2 (emotion: threatening, neutral) &#x000D7; 2 (stimulus type: faces, objects) &#x000D7; 2 (congruence: congruent, incongruent) &#x000D7; 2 (exposure: first exposure, second exposure) ANOVA was conducted on the subjects&#x00027; BOLD responses (Table <xref ref-type="table" rid="T1">1</xref>).</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>Brain regions demonstrating differential BOLD responses during task performance in 30 healthy participants</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left"><bold>Contrast</bold></th>
<th align="left"><bold>Left/right</bold></th>
<th align="center"><bold><italic>BA</italic></bold></th>
<th align="center" colspan="3"><bold>Coordinates of peak activation<xref ref-type="table-fn" rid="TN1"><sup>a</sup></xref></bold></th>
<th align="center"><bold><italic>F</italic><sub>(<italic>df</italic> &#x0003D; 1, 29)</sub></bold></th>
<th align="center"><bold><italic>p</italic></bold></th>
<th align="center"><bold>Voxels</bold></th>
</tr>
<tr>
<th/>
<th/>
<th/>
<th align="center"><bold><italic>x</italic></bold></th>
<th align="center"><bold><italic>y</italic></bold></th>
<th align="center"><bold><italic>z</italic></bold></th>
<th/>
<th/>
<th/>
</tr>
</thead>
<tbody>
<tr>
<td align="left" colspan="9"><bold>MAIN EFFECT OF EMOTION</bold></td>
</tr>
<tr>
<td align="left">Amygdala</td>
<td align="left">Left</td>
<td/>
<td align="center">&#x02212;11.5</td>
<td align="center">&#x02212;5.5</td>
<td align="center">&#x02212;13.5</td>
<td align="center">12.39</td>
<td align="center">0.0014</td>
<td align="center">8</td>
</tr>
<tr>
<td align="left">Anterior insula/inferior frontal cortex</td>
<td align="left">Right</td>
<td align="center">13</td>
<td align="center">40.5</td>
<td align="center">13.5</td>
<td align="center">17.5</td>
<td align="center">15.63</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">64</td>
</tr>
<tr>
<td align="left">Fusiform gyrus</td>
<td align="left">Left</td>
<td align="center">37</td>
<td align="center">&#x02212;37.5</td>
<td align="center">&#x02212;46.5</td>
<td align="center">&#x02212;15.5</td>
<td align="center">16.64</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">42</td>
</tr>
<tr>
<td align="left" colspan="9"><bold>MAIN EFFECT OF OBJECT TYPE</bold></td>
</tr>
<tr>
<td align="left">Amygdala</td>
<td align="left">Left</td>
<td/>
<td align="center">&#x02212;14.5</td>
<td align="center">&#x02212;5.5</td>
<td align="center">&#x02212;12.5</td>
<td align="center">11.58</td>
<td align="center">0.0029</td>
<td align="center">8</td>
</tr>
<tr>
<td align="left">Fusiform gyrus</td>
<td align="left">Left</td>
<td align="center">34</td>
<td align="center">&#x02212;25.5</td>
<td align="center">&#x02212;40.5</td>
<td align="center">&#x02212;12.5</td>
<td align="center">79.42</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">454</td>
</tr>
<tr>
<td align="left">Fusiform gyrus</td>
<td align="left">Right</td>
<td align="center">34</td>
<td align="center">25.5</td>
<td align="center">&#x02212;52.5</td>
<td align="center">&#x02212;12.5</td>
<td align="center">82.45</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">232</td>
</tr>
<tr>
<td align="left">Precuneus/middle occipital gyrus</td>
<td align="left">Left</td>
<td align="center">19</td>
<td align="center">&#x02212;31.5</td>
<td align="center">&#x02212;79.5</td>
<td align="center">11.5</td>
<td align="center">28.65</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">230</td>
</tr>
<tr>
<td align="left">Precuneus/middle occipital gyrus</td>
<td align="left">Right</td>
<td align="center">7</td>
<td align="center">28.5</td>
<td align="center">&#x02212;67.5</td>
<td align="center">29.5</td>
<td align="center">21.31</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">85</td>
</tr>
<tr>
<td align="left">Inferior parietal cortex</td>
<td align="left">Left</td>
<td align="center">40</td>
<td align="center">&#x02212;40.5</td>
<td align="center">&#x02212;34.5</td>
<td align="center">38.5</td>
<td align="center">14.51</td>
<td align="center">0.0007</td>
<td align="center">52</td>
</tr>
<tr>
<td align="left">Middle occipital/fusiform gyrus</td>
<td align="left">Right</td>
<td align="center">37</td>
<td align="center">43.5</td>
<td align="center">&#x02212;58.5</td>
<td align="center">&#x02212;6.5</td>
<td align="center">19.81</td>
<td align="center">0.0001</td>
<td align="center">58</td>
</tr>
<tr>
<td align="left">Middle occipital gyrus</td>
<td align="left">Right</td>
<td align="center">19</td>
<td align="center">37.5</td>
<td align="center">&#x02212;82.5</td>
<td align="center">8.5</td>
<td align="center">21.97</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">62</td>
</tr>
<tr>
<td align="left" colspan="9"><bold>MAIN EFFECT OF EXPOSURE</bold></td>
</tr>
<tr>
<td align="left">Postcentral gyrus/inferior parietal cortex</td>
<td align="left">Left</td>
<td align="center">40</td>
<td align="center">&#x02212;40.5</td>
<td align="center">&#x02212;31.5</td>
<td align="center">44.5</td>
<td align="center">28.11</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">522</td>
</tr>
<tr>
<td align="left">Precentral gyrus/inferior parietal cortex</td>
<td align="left">Right</td>
<td align="center">6</td>
<td align="center">28.5</td>
<td align="center">&#x02212;16.5</td>
<td align="center">53.5</td>
<td align="center">21.32</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">334</td>
</tr>
<tr>
<td align="left">Middle occipital gyrus/middle temporal cortex</td>
<td align="left">Right</td>
<td align="center">19</td>
<td align="center">46.5</td>
<td align="center">&#x02212;73.5</td>
<td align="center">20.5</td>
<td align="center">22.62</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">112</td>
</tr>
<tr>
<td align="left">Dorsomedial frontal/anterior cingulate cortex</td>
<td align="left">Left</td>
<td align="center">6</td>
<td align="center">&#x02212;4.5</td>
<td align="center">&#x02212;10.5</td>
<td align="center">53.5</td>
<td align="center">29.59</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">68</td>
</tr>
<tr>
<td align="left">Declive</td>
<td align="left">Right</td>
<td align="center">37</td>
<td align="center">40.5</td>
<td align="center">&#x02212;61.5</td>
<td align="center">&#x02212;15.5</td>
<td align="center">15.11</td>
<td align="center">0.0005</td>
<td align="center">45</td>
</tr>
<tr>
<td align="left">Middle occipital/inferior temporal cortex</td>
<td align="left">Left</td>
<td align="center">37</td>
<td align="center">&#x02212;43.5</td>
<td align="center">&#x02212;64.5</td>
<td align="center">&#x02212;0.5</td>
<td align="center">16.46</td>
<td align="center">0.0003</td>
<td align="center">45</td>
</tr>
<tr>
<td align="left" colspan="9"><bold>CONGRUENCE-BY-EXPOSURE INTERACTION</bold></td>
</tr>
<tr>
<td align="left">Precuneus</td>
<td align="left">Right</td>
<td align="center">7</td>
<td align="center">13.5</td>
<td align="center">&#x02212;55.5</td>
<td align="center">38.5</td>
<td align="center">18.43</td>
<td align="center">0.0002</td>
<td align="center">50</td>
</tr>
<tr>
<td align="left" colspan="9"><bold>EMOTION-BY-EXPOSURE INTERACTION</bold></td>
</tr>
<tr>
<td align="left">Lingual gyrus/occipital cortex/fusiform cortex</td>
<td align="left">Right</td>
<td align="center">18</td>
<td align="center">31.5</td>
<td align="center">&#x02212;70.5</td>
<td align="center">&#x02212;6.5</td>
<td align="center">34.18</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">535</td>
</tr>
<tr>
<td align="left">Lingual gyrus</td>
<td align="left">Left</td>
<td align="center">19</td>
<td align="center">&#x02212;31.5</td>
<td align="center">&#x02212;61.5</td>
<td align="center">&#x02212;3.5</td>
<td align="center">31.82</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">108</td>
</tr>
<tr>
<td align="left">Culmen</td>
<td align="left">Left</td>
<td align="center">19</td>
<td align="center">&#x02212;13.5</td>
<td align="center">&#x02212;52.5</td>
<td align="center">&#x02212;9.5</td>
<td align="center">21.12</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">55</td>
</tr>
<tr>
<td align="left" colspan="9"><bold>EMOTION-BY-CONGRUENCE-BY-EXPOSURE</bold></td>
</tr>
<tr>
<td align="left">Inferior frontal gyrus</td>
<td align="left">Left</td>
<td align="center">45</td>
<td align="center">&#x02212;46.5</td>
<td align="center">19.5</td>
<td align="center">2.5</td>
<td align="center">21.76</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">76</td>
</tr>
<tr>
<td align="left">Culmen</td>
<td align="left">Left</td>
<td align="center">19</td>
<td align="center">&#x02212;7.5</td>
<td align="center">&#x02212;55.5</td>
<td align="center">&#x02212;3.5</td>
<td align="center">19.49</td>
<td align="center">0.0001</td>
<td align="center">64</td>
</tr>
<tr>
<td align="left">Thalamus</td>
<td align="left">Left</td>
<td/>
<td align="center">&#x02212;7.5</td>
<td align="center">&#x02212;7.5</td>
<td align="center">8.5</td>
<td align="center">20.34</td>
<td align="center">&#x0003C;0.0001</td>
<td align="center">55</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="TN1"><label>a</label><p><italic>Coordinates are based on the standard Talairach and Tournoux brain template. BA, Brodmann&#x00027;s area; df, degrees of freedom.</italic></p></fn>
</table-wrap-foot>
</table-wrap>
<sec>
<title>Amygdala</title>
<p>With respect to our a priori predictions, the results of the ROI analyses examining the amygdala were mixed. In line with predictions, there was a significant main effect of object type (faces &#x0003E; objects: left amygdala: <italic>x</italic>, <italic>y</italic>, <italic>z</italic> &#x0003D; &#x02212;14.5, &#x02212;5.5, &#x02212;12.5, <italic>k</italic> &#x0003D; 8, Table <xref ref-type="table" rid="T1">1</xref>, Figure <xref ref-type="fig" rid="F2">2</xref>). There was also a significant main effect of emotion (threatening &#x0003E; neutral: left: <italic>x</italic>, <italic>y</italic>, <italic>z</italic> &#x0003D; &#x02212;11.5, &#x02212;5.5, &#x02212;13.5, <italic>k</italic> &#x0003D; 8). Against predictions, a main effect of exposure was not observed in the amygdala.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Regions showing significant whole-brain main effects of emotion and animacy. (A)</bold> Main effect of emotion in right anterior insula/inferior frontal cortex. <bold>(B)</bold> Main effect of emotion in left amygdala. <bold>(C)</bold> Main effect of animacy in bilateral fusiform gyrus. <bold>(D)</bold> Main effect of animacy in left amygdala.</p></caption>
<graphic xlink:href="fnhum-08-00714-g0002.tif"/>
</fig>
<p>However, several significant interactions involving exposure were observed. There was a significant emotion-by-congruence-by-exposure interaction within right amygdala. The amygdala differentiated between incongruent and congruent <italic>negative</italic> trials in exposure 2 (<italic>t</italic> &#x0003D; 2.448, <italic>p</italic> &#x0003D; 0.021) but not exposure 1 (<italic>t</italic> &#x0003D; 1.600, <italic>p</italic> &#x0003D; 0.120). No significant difference in activation to incongruent relative to congruent stimuli was observed for neutral stimuli (<italic>t</italic> &#x0003C; 1.072, <italic>p</italic> &#x0003E; 0.293). There was also a significant emotion-by-congruence-by-animacy-by-exposure interaction within left and right amygdala. Similar to the emotion-by-congruence-by-exposure interaction, the amygdala differentiated between incongruent and congruent <italic>negative inanimate</italic> trials in exposure 2 relative to exposure 1 (<italic>t</italic> &#x0003D; 3.119 and 3.536 respectively, <italic>p</italic> &#x0003C; 0.004). No other congruent vs. incongruent trial type differences for either exposure were significant (<italic>t</italic> &#x0003C; 1.432, <italic>p</italic> &#x0003E; 0.163).</p>
</sec>
<sec>
<title>Whole brain analysis</title>
<p><bold><italic>Main effect of emotion</italic></bold>. Regions showing a significant main effect of emotion included right anterior insula cortex/inferior frontal cortex and left fusiform gyrus. Significantly greater response in both regions was observed to threatening relative to neutral stimuli (Table <xref ref-type="table" rid="T1">1</xref>).</p>
<p><bold><italic>Main effect of stimulus type</italic></bold>. Regions showing a significant effect of stimulus type included bilateral medial fusiform cortex, left inferior parietal cortex, left middle occipital cortex, two regions of right middle occipital gyrus and right precuneus. In all regions, greater activation was observed to objects relative to faces (Table <xref ref-type="table" rid="T1">1</xref>, Figure <xref ref-type="fig" rid="F2">2</xref>). Given the consistent findings in previous work that FFA and STS show increased activation to faces relative to objects, and our failure to observe this at the whole-brain level, a <italic>post-hoc</italic> ROI analysis was conducted. Using coordinates of peak activation from previous work examining human stimuli relative to objects, 10 mm sphere ROIs were created for STS (&#x02212;47, &#x02212;56, 15) and for the FFA (44, &#x02212;42, &#x02212;15; Beauchamp et al., <xref ref-type="bibr" rid="B6">2003</xref>). At this less stringent (but still corrected for multiple comparisons; <italic>p</italic> &#x0003D; 0.005, <italic>k</italic> &#x0003E; 4 for both regions) threshold, a main effect of stimulus type was observed in STS (&#x02212;49.5, &#x02212;52.5, 11.5; <italic>k</italic> &#x0003D; 10; Figure <xref ref-type="fig" rid="F3">3</xref>), but not in FFA. Within STS, the BOLD response was greater for faces relative to objects.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>Regions showing a significant main effect of animacy in a Superior Temporal Sulcus Region of Interest and an emotion-by-exposure interaction effect in visual cortex. (A)</bold> Main effect of animacy in left superior temporal sulcus. <bold>(B)</bold> Emotion-by-exposure effect in bilateral visual cortex. The white circles specify the brain regions from which the activation in the graphs is drawn.</p></caption>
<graphic xlink:href="fnhum-08-00714-g0003.tif"/>
</fig>
<p><bold><italic>Main effect of exposure</italic></bold>. Regions showing a main effect of exposure included dorsomedial frontal cortex/anterior cingulate cortex (dmFC/ACC), bilateral regions encompassing motor and parietal cortex and bilateral regions of visual cortex. In all regions, greater activation was observed to the first exposure of a stimulus relative to the second exposure of the stimulus.</p>
<p><bold><italic>Main effect of congruence</italic></bold>. No regions survived correction for multiple comparisons for the main effect of congruence.</p>
<p><bold><italic>Emotion-by-exposure interaction</italic></bold>. Regions showing an emotion-by-exposure interaction included bilateral visual cortex and left culmen. In both regions, the reduction in activity from exposure 1 to exposure 2 was significantly greater for neutral trials than for negative trials (<italic>t</italic> &#x0003D; 4.878 and 5.529 respectively, <italic>p</italic> &#x0003C; 0.001). Indeed, there was no significant decrease in response to negative trials in exposure 2 relative to exposure 1 (<italic>t</italic> &#x0003D; 0.587 and 0.082 respectively, <italic>p</italic> &#x0003E; 0.582) (see Figure <xref ref-type="fig" rid="F3">3</xref>).</p>
<p><bold><italic>Congruence-by-exposure interaction</italic></bold>. A significant congruence-by-exposure interaction was observed in right precuneus. Within this region, there was a significantly greater increase in activity for incongruent relative to congruent trials in exposure 2 [incongruent trials were associated with greater activity than congruent trials during exposure 2 (<italic>t</italic> &#x0003D; 3.126, <italic>p</italic> &#x0003D; 0.004)] relative to exposure 1 [where there was no significant difference in responsiveness to incongruent relative to congruent trials (<italic>t</italic> &#x0003D; 1.776, <italic>p</italic> &#x0003D; 0.086)] (<italic>t</italic> &#x0003D; 4.376, <italic>p</italic> &#x0003C; 0.001).</p>
<p><bold><italic>Emotion-by-congruence-by-exposure interaction</italic></bold>. Regions showing an emotion-by-congruence-by-exposure interaction included left inferior frontal gyrus, left culmen and thalamus. In all regions, there was a significantly greater increase in activity for incongruent relative to congruent <italic>negative</italic> trials in exposure 2 [incongruent trials were indeed associated with greater activity than congruent <italic>negative</italic> trials during exposure 2 (<italic>t</italic> &#x0003D; 3.143&#x02013;4.019, <italic>p</italic> &#x0003D; 0.004&#x02013;&#x0003C;0.001)] relative to exposure 1 [where there was no significant difference in responsiveness to incongruent relative to congruent negative trials (<italic>t</italic> &#x0003D; 0.517&#x02013;1.846, <italic>p</italic> &#x0003D; 0.609&#x02013;0.075)] (<italic>t</italic> &#x0003D; 2.076&#x02013;2.915, <italic>p</italic> &#x0003D; 0.045&#x02013;0.007). There was typically no difference between congruent and incongruent neutral trials for either exposure (<italic>t</italic> &#x0003D; 0.637&#x02013;1.903, <italic>p</italic> &#x0003D; 0.526&#x02013;0.067) (though within left culmen incongruent neutral trials were associated with greater activity than congruent neutral trials during exposure 1 [<italic>t</italic> &#x0003D; 4.035, <italic>p</italic> &#x0003C; 0.001]).</p>
<p><bold><italic>Non-significant interactions</italic></bold>. No regions survived correction for multiple comparisons for the emotion-by-animacy, emotion-by-congruence, animacy-by-congruence, animacy-by-exposure, emotion-by-animacy-by-congruence, emotion-by-animacy-by-exposure, animacy-by-congruence-by-exposure and emotion-by-animacy-by-congruence-by-exposure interactions.</p>
</sec>
<sec>
<title>Generalized PPI analysis</title>
<p>Significantly greater connectivity between bilateral amygdala and left medial fusiform gyrus was observed to negative relative to neutral objects (left amygdala <italic>x</italic>, <italic>y</italic>, <italic>z</italic> &#x0003D; &#x02212;34.5, &#x02212;46.5, &#x02212;18.5, <italic>p</italic> &#x0003D; 0.005, <italic>k</italic> &#x0003D; 27; right amygdala <italic>x</italic>, <italic>y</italic>, <italic>z</italic> &#x0003D; &#x02212;34.5, &#x02212;46.5, &#x02212;18.5, <italic>p</italic> &#x0003D; 0.005, <italic>k</italic> &#x0003D; 30; Figure <xref ref-type="fig" rid="F4">4</xref>). While these clusters did not survive correction for multiple comparisons, they survived small volume correction using an anatomical mask for left fusiform cortex (<italic>p</italic> &#x0003D; 0.005, <italic>k</italic> &#x0003E; 5). This difference in connectivity with the amygdala was not observed for negative relative to neutral faces. A further <italic>t</italic>-test ([negative faces &#x02212; negative objects] &#x02212; [neutral faces &#x02212; neutral objects]) found that the difference between negative and neutral objects was significantly greater than the difference between negative and neutral faces.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Left medial fusiform gyrus shows a greater difference in connectivity between negative and neutral objects relative to the difference in connectivity between negative and neutral faces</bold>. A greater difference in connectivity between right amygdala and left medial fusiform gyrus for negative compared to neutral objects was observed relative to negative compared to neutral faces.</p></caption>
<graphic xlink:href="fnhum-08-00714-g0004.tif"/>
</fig>
</sec>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>The goal of the current study was to test contrasting assumptions regarding the responsiveness of the amygdala to emotional relative to neutral stimuli and faces relative to objects and to determine whether modulation of activity within temporal cortex was category specific. There were five main findings: First, the amygdala showed significant responses to both threatening relative to neutral stimuli (including a significant response to threatening objects relative to neutral objects) and to faces relative to objects. Second, there was no main effect of exposure within the amygdala; this absence primarily reflected <italic>increased</italic> amygdala responses to negative incongruent trials on the second exposure. Third, medial fusiform cortex showed significantly increased activity for objects relative to faces. Fourth, STS showed significantly increased activity for faces relative to objects, albeit at a less stringent threshold. Fifth, bilateral amygdala showed greater functional connectivity with medial fusiform cortex for threatening vs. neutral objects relative to fearful vs. neutral faces.</p>
<p>There have been claims that the amygdala is part of the domain-specific circuitry for responding to social and animate stimuli (Adolphs, <xref ref-type="bibr" rid="B2">2009</xref>; Yang et al., <xref ref-type="bibr" rid="B46">2012</xref>). In line with previous work (Gobbini et al., <xref ref-type="bibr" rid="B21">2011</xref>; Yang et al., <xref ref-type="bibr" rid="B46">2012</xref>), the current study showed significantly greater amygdala responses to faces relative to objects. However, in contrast to Yang et al. (<xref ref-type="bibr" rid="B46">2012</xref>), but in line with the findings of Coker-Appiah et al. (<xref ref-type="bibr" rid="B14">2013</xref>) and Cao et al. (<xref ref-type="bibr" rid="B11">2014</xref>), there was a main effect of emotion, but not a stimulus-type by emotion interaction. As such, the current data indicate that animate and inanimate threatening stimuli, when matched for arousal and valence, generate comparable amygdala activity.</p>
<p>The effects of exposure were more complicated than we had anticipated. They were seen within dmFC/ACC, motor and parietal cortex, and visual cortex, replicating previous work (Wright et al., <xref ref-type="bibr" rid="B44">2001</xref>; Phan et al., <xref ref-type="bibr" rid="B36">2003</xref>; Yamaguchi et al., <xref ref-type="bibr" rid="B45">2004</xref>; Summerfield et al., <xref ref-type="bibr" rid="B39">2008</xref>; Weigelt et al., <xref ref-type="bibr" rid="B42">2008</xref>). However, they were not seen within the amygdala. Moreover, several regions, such as visual cortex, precuneus, culmen, and thalamus, as well as the amygdala, showed interactions between emotion and exposure or emotion and congruence and exposure. With respect to the emotion-by-exposure interactions seen within bilateral visual cortex and left culmen, these primarily reflected greater habituation for neutral relative to negative stimuli; i.e., the reduction in activity on exposure 2 was particularly marked for neutral stimuli (Figure <xref ref-type="fig" rid="F3">3</xref>). However, for the regions showing interactions between congruence and exposure (and emotion), this reflected instead a greater differentiation between incongruent and congruent trials on exposure 2 relative to exposure 1, particularly if they involved negative stimuli. Habituation reflects a basic form of learning where there is a decrease in response to a stimulus following repeated exposure <italic>with no meaningful consequence</italic> (Rankin et al., <xref ref-type="bibr" rid="B37">2009</xref>). We suggest that the absence of habituation seen in several areas, particularly for negative incongruent trials, reflects the fact that, for these trials, the information had <italic>meaningful consequence</italic>. Future work will investigate this issue more deeply.</p>
<p>Previous work has reported that objects are associated with greater activity within the medial portion of the fusiform gyrus and middle temporal gyrus (Beauchamp et al., <xref ref-type="bibr" rid="B5">2002</xref>; Mahon et al., <xref ref-type="bibr" rid="B25">2007</xref>; Gobbini et al., <xref ref-type="bibr" rid="B21">2011</xref>), while faces and other animate stimuli are associated with greater activity within FFA (Kanwisher and Yovel, <xref ref-type="bibr" rid="B22">2006</xref>; Nguyen et al., <xref ref-type="bibr" rid="B31">2013</xref>) and posterior STS (Beauchamp et al., <xref ref-type="bibr" rid="B5">2002</xref>, <xref ref-type="bibr" rid="B6">2003</xref>; Chao et al., <xref ref-type="bibr" rid="B13">2002</xref>; Beauchamp and Martin, <xref ref-type="bibr" rid="B7">2007</xref>; Martin, <xref ref-type="bibr" rid="B26">2007</xref>; Gobbini et al., <xref ref-type="bibr" rid="B21">2011</xref>). It is argued that object concepts belonging to different categories are represented in partially distinct, sensory- and motor property&#x02013;based neural networks (Caramazza and Shelton, <xref ref-type="bibr" rid="B12">1998</xref>; Martin, <xref ref-type="bibr" rid="B26">2007</xref>). The current results were consistent with this previous research. Medial fusiform gyrus showed greater responses to objects relative to faces while STS showed greater responses to faces relative to objects. Interestingly, no effect was observed in the FFA. We suggest that the sub-threshold effect in STS and the lack of an effect in FFA may reflect parameters of our task, in which participants had to respond to cues devoid of animacy information.</p>
<p>Our predictions regarding category specific modulation by emotion were only partially supported. Given the direct feedback projections from the amygdala to visual processing areas, including temporal cortex (Pessoa et al., <xref ref-type="bibr" rid="B34">2002</xref>; Mitchell et al., <xref ref-type="bibr" rid="B29">2007</xref>), we had hypothesized that emotional modulation would only occur in medial fusiform cortex for inanimate objects and only in lateral fusiform cortex (including FFA and STS) for faces. However, no emotion-by-object type interactions within temporal cortex were observed. Even at a lenient threshold (<italic>p</italic> &#x0003D; 0.005, <italic>k</italic> &#x0003E; 10) no significant emotion-by-object type or emotion-by-object type-by-congruence interactions were observed. There was, though, differential connectivity by stimulus category as a function of emotion between bilateral amygdala and left medial fusiform gyrus. Significantly greater functional connectivity between left and right amygdala and left medial fusiform gyrus was observed for threatening objects relative to neutral objects, but not between fearful faces relative to neutral faces. This would suggest a degree of integrated functioning between the amygdala and the region of medial fusiform gyrus implicated in processing objects with respect to the emotional significance of objects. However, we found no evidence of a comparable process within lateral fusiform gyrus or STS for face stimuli (fearful relative to neutral). We again suggest that partial findings may reflect parameters of our task where participants had to respond to cues devoid of animacy information. This process is something we will investigate in future work.</p>
<p>Three caveats should be noted with respect to the current data. First, previous work with dot probe tasks has reported that participants respond more quickly to congruent relative to incongruent trials (Corbetta and Shulman, <xref ref-type="bibr" rid="B15">2002</xref>). Moreover, previous work has reported that inferior frontal gyrus (iFG), medial frontal gyrus (mFG), and temporal-parietal junction (TPJ) show greater activation in incongruent relative to congruent trials (Corbetta and Shulman, <xref ref-type="bibr" rid="B15">2002</xref>). In the current study, no regions survived correction for multiple comparisons for the main effect of congruence in the whole brain fMRI analysis. However, it should be noted that an emotion-by-congruence-by-exposure interaction was observed in iFG. The expected increase in activation to incongruent relative to congruent stimuli was observed here, albeit only for negative stimuli during second exposures. A congruence effect restricted to negative stimuli was also observed in the amygdala. It is interesting to note, though, that in the response latency data the congruence effect was present for neutral stimuli but was not significant for emotional stimuli. This is consistent with a body of studies where the congruence effect in dot probe tasks, at least in healthy participants, is abolished if the stimuli are threatening (e.g., Waters et al., <xref ref-type="bibr" rid="B41">2010</xref>). In short, we believe the weak congruence effects in response latency and BOLD response seen here may reflect the use of emotional primes. Second, the current study used only faces as animate stimuli. The current results therefore may not generalize to other animate stimuli, such as animals. 
Third, it is possible that by selecting faces and objects matched for arousal and/or valence, we may have artificially removed regions displaying an emotion-by-animacy interaction; i.e., there may be a greater differentiation in participant judgments and BOLD response between emotional and neutral faces relative to emotional and neutral objects and by matching for judgment (faces vs. objects), we effectively matched for BOLD response. We cannot discount this possibility. We can only be confident, on the basis of the current data, that there is no interaction for matched stimuli.</p>
<p>In summary, the current results support suggestions that the amygdala is responsive both to animate and to emotional stimuli. Additionally, these data suggest that the various functions of the amygdala may need to be considered simultaneously to fully understand how they interact. Moreover, they suggest category-specific modulation of medial fusiform cortex as a function of emotion. In our future work, we aim to determine whether psychiatric conditions associated with amygdala dysfunction, particularly Conduct Disorder (Blair, <xref ref-type="bibr" rid="B9">2013</xref>), PTSD (Admon et al., <xref ref-type="bibr" rid="B1">2013</xref>), and mood and anxiety disorders (Damsa et al., <xref ref-type="bibr" rid="B17">2009</xref>; Kerestes et al., <xref ref-type="bibr" rid="B23">2013</xref>), show impairment in the amygdala&#x00027;s responsiveness to both emotional and face/animacy information.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>This work was supported by the Intramural Research Program of the National Institute of Mental Health, National Institutes of Health under grant number 1-ZIA-MH002860.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Admon</surname> <given-names>R.</given-names></name> <name><surname>Milad</surname> <given-names>M. R.</given-names></name> <name><surname>Hendler</surname> <given-names>T.</given-names></name></person-group> (<year>2013</year>). <article-title>A causal model of post-traumatic stress disorder: disentangling predisposed from acquired neural abnormalities</article-title>. <source>Trends Cogn. Sci</source>. <volume>17</volume>, <fpage>337</fpage>&#x02013;<lpage>347</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2013.05.005</pub-id><pub-id pub-id-type="pmid">23768722</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Adolphs</surname> <given-names>R.</given-names></name></person-group> (<year>2009</year>). <article-title>The social brain: neural basis of social knowledge</article-title>. <source>Annu. Rev. Psychol</source>. <volume>60</volume>, <fpage>693</fpage>&#x02013;<lpage>716</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.psych.60.110707.163514</pub-id><pub-id pub-id-type="pmid">18771388</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Balderston</surname> <given-names>N. L.</given-names></name> <name><surname>Schultz</surname> <given-names>D. H.</given-names></name> <name><surname>Helmstetter</surname> <given-names>F. J.</given-names></name></person-group> (<year>2013</year>). <article-title>The effect of threat on novelty evoked amygdala responses</article-title>. <source>PLoS ONE</source> <volume>8</volume>:<fpage>e63220</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0063220</pub-id><pub-id pub-id-type="pmid">23658813</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bar</surname> <given-names>M.</given-names></name> <name><surname>Neta</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Visual elements of subjective preference modulate amygdala activation</article-title>. <source>Neuropsychologia</source> <volume>45</volume>, <fpage>2191</fpage>&#x02013;<lpage>2200</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2007.03.008</pub-id><pub-id pub-id-type="pmid">17462678</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Beauchamp</surname> <given-names>M. S.</given-names></name> <name><surname>Lee</surname> <given-names>K. E.</given-names></name> <name><surname>Haxby</surname> <given-names>J. V.</given-names></name> <name><surname>Martin</surname> <given-names>A.</given-names></name></person-group> (<year>2002</year>). <article-title>Parallel visual motion processing streams for manipulable objects and human movements</article-title>. <source>Neuron</source> <volume>34</volume>, <fpage>149</fpage>&#x02013;<lpage>159</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(02)00642-6</pub-id><pub-id pub-id-type="pmid">11931749</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Beauchamp</surname> <given-names>M. S.</given-names></name> <name><surname>Lee</surname> <given-names>K. E.</given-names></name> <name><surname>Haxby</surname> <given-names>J. V.</given-names></name> <name><surname>Martin</surname> <given-names>A.</given-names></name></person-group> (<year>2003</year>). <article-title>FMRI responses to video and point-light displays of moving humans and manipulable objects</article-title>. <source>J. Cogn. Neurosci</source>. <volume>15</volume>, <fpage>991</fpage>&#x02013;<lpage>1001</lpage>. <pub-id pub-id-type="doi">10.1162/089892903770007380</pub-id><pub-id pub-id-type="pmid">14614810</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Beauchamp</surname> <given-names>M. S.</given-names></name> <name><surname>Martin</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>Grounding object concepts in perception and action: evidence from fMRI studies of tools</article-title>. <source>Cortex</source> <volume>43</volume>, <fpage>461</fpage>&#x02013;<lpage>468</lpage>. <pub-id pub-id-type="doi">10.1016/S0010-9452(08)70470-2</pub-id><pub-id pub-id-type="pmid">17533768</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blair</surname> <given-names>K. S.</given-names></name> <name><surname>Smith</surname> <given-names>B. W.</given-names></name> <name><surname>Mitchell</surname> <given-names>D. G.</given-names></name> <name><surname>Morton</surname> <given-names>J.</given-names></name> <name><surname>Vythilingam</surname> <given-names>M.</given-names></name> <name><surname>Pessoa</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>2007</year>). <article-title>Modulation of emotion by cognition and cognition by emotion</article-title>. <source>Neuroimage</source> <volume>35</volume>, <fpage>430</fpage>&#x02013;<lpage>440</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2006.11.048</pub-id><pub-id pub-id-type="pmid">17239620</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blair</surname> <given-names>R. J.</given-names></name></person-group> (<year>2013</year>). <article-title>The neurobiology of psychopathic traits in youths</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>14</volume>, <fpage>786</fpage>&#x02013;<lpage>799</lpage>. <pub-id pub-id-type="doi">10.1038/nrn3577</pub-id><pub-id pub-id-type="pmid">24105343</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Breiter</surname> <given-names>H. C.</given-names></name> <name><surname>Etcoff</surname> <given-names>N. L.</given-names></name> <name><surname>Whalen</surname> <given-names>P. J.</given-names></name> <name><surname>Kennedy</surname> <given-names>W. A.</given-names></name> <name><surname>Rauch</surname> <given-names>S. L.</given-names></name> <name><surname>Buckner</surname> <given-names>R. L.</given-names></name> <etal/></person-group>. (<year>1996</year>). <article-title>Response and habituation of the human amygdala during visual processing of facial expression</article-title>. <source>Neuron</source> <volume>17</volume>, <fpage>875</fpage>&#x02013;<lpage>887</lpage>. <pub-id pub-id-type="pmid">8938120</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cao</surname> <given-names>Z.</given-names></name> <name><surname>Zhao</surname> <given-names>Y.</given-names></name> <name><surname>Tan</surname> <given-names>T.</given-names></name> <name><surname>Chen</surname> <given-names>G.</given-names></name> <name><surname>Ning</surname> <given-names>X.</given-names></name> <name><surname>Zhan</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Distinct brain activity in processing negative pictures of animals and objects&#x02013;the role of human contexts</article-title>. <source>Neuroimage</source> <volume>84</volume>, <fpage>901</fpage>&#x02013;<lpage>910</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2013.09.064</pub-id><pub-id pub-id-type="pmid">24099847</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Caramazza</surname> <given-names>A.</given-names></name> <name><surname>Shelton</surname> <given-names>J. R.</given-names></name></person-group> (<year>1998</year>). <article-title>Domain-specific knowledge systems in the brain: the animate-inanimate distinction</article-title>. <source>J. Cogn. Neurosci</source>. <volume>10</volume>, <fpage>1</fpage>&#x02013;<lpage>34</lpage>. <pub-id pub-id-type="pmid">9526080</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chao</surname> <given-names>L. L.</given-names></name> <name><surname>Weisberg</surname> <given-names>J.</given-names></name> <name><surname>Martin</surname> <given-names>A.</given-names></name></person-group> (<year>2002</year>). <article-title>Experience-dependent modulation of category-related cortical activity</article-title>. <source>Cereb. Cortex</source> <volume>12</volume>, <fpage>545</fpage>&#x02013;<lpage>551</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/12.5.545</pub-id><pub-id pub-id-type="pmid">11950772</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Coker-Appiah</surname> <given-names>D. S.</given-names></name> <name><surname>White</surname> <given-names>S. F.</given-names></name> <name><surname>Clanton</surname> <given-names>R.</given-names></name> <name><surname>Yang</surname> <given-names>J.</given-names></name> <name><surname>Martin</surname> <given-names>A.</given-names></name> <name><surname>Blair</surname> <given-names>R. J.</given-names></name></person-group> (<year>2013</year>). <article-title>Looming animate and inanimate threats: the response of the amygdala and periaqueductal gray</article-title>. <source>Soc. Neurosci</source>. <volume>8</volume>, <fpage>621</fpage>&#x02013;<lpage>630</lpage>. <pub-id pub-id-type="doi">10.1080/17470919.2013.839480</pub-id><pub-id pub-id-type="pmid">24066700</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Corbetta</surname> <given-names>M.</given-names></name> <name><surname>Shulman</surname> <given-names>G. L.</given-names></name></person-group> (<year>2002</year>). <article-title>Control of goal-directed and stimulus-driven attention in the brain</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>3</volume>, <fpage>201</fpage>&#x02013;<lpage>215</lpage>. <pub-id pub-id-type="doi">10.1038/nrn755</pub-id><pub-id pub-id-type="pmid">11994752</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cox</surname> <given-names>R. W.</given-names></name></person-group> (<year>1996</year>). <article-title>AFNI: software for analysis and visualization of functional magnetic resonance neuroimages</article-title>. <source>Comput. Biomed. Res</source>. <volume>29</volume>, <fpage>162</fpage>&#x02013;<lpage>173</lpage>. <pub-id pub-id-type="pmid">8812068</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Damsa</surname> <given-names>C.</given-names></name> <name><surname>Kosel</surname> <given-names>M.</given-names></name> <name><surname>Moussally</surname> <given-names>J.</given-names></name></person-group> (<year>2009</year>). <article-title>Current status of brain imaging in anxiety disorders</article-title>. <source>Curr. Opin. Psychiatry</source> <volume>22</volume>, <fpage>96</fpage>&#x02013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1097/YCO.0b013e328319bd10</pub-id><pub-id pub-id-type="pmid">19122541</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>First</surname> <given-names>M.</given-names></name> <name><surname>Spitzer</surname> <given-names>R.</given-names></name> <name><surname>Gibbon</surname> <given-names>M.</given-names></name> <name><surname>Williams</surname> <given-names>J.</given-names></name></person-group> (<year>1997</year>). <source>Structured Clinical Interview for DSM-IV</source>. <publisher-loc>Washington, DC</publisher-loc>: <publisher-name>American Psychiatric Press</publisher-name>.</citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fischer</surname> <given-names>H.</given-names></name> <name><surname>Furmark</surname> <given-names>T.</given-names></name> <name><surname>Wik</surname> <given-names>G.</given-names></name> <name><surname>Fredrikson</surname> <given-names>M.</given-names></name></person-group> (<year>2000</year>). <article-title>Brain representation of habituation to repeated complex visual stimulation studied with PET</article-title>. <source>Neuroreport</source> <volume>11</volume>, <fpage>123</fpage>&#x02013;<lpage>126</lpage>. <pub-id pub-id-type="doi">10.1097/00001756-200001170-00024</pub-id><pub-id pub-id-type="pmid">10683842</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fischer</surname> <given-names>H.</given-names></name> <name><surname>Wright</surname> <given-names>C. I.</given-names></name> <name><surname>Whalen</surname> <given-names>P. J.</given-names></name> <name><surname>McInerney</surname> <given-names>S. C.</given-names></name> <name><surname>Shin</surname> <given-names>L. M.</given-names></name> <name><surname>Rauch</surname> <given-names>S. L.</given-names></name></person-group> (<year>2003</year>). <article-title>Brain habituation during repeated exposure to fearful and neutral faces: a functional MRI study</article-title>. <source>Brain Res. Bull</source>. <volume>59</volume>, <fpage>387</fpage>&#x02013;<lpage>392</lpage>. <pub-id pub-id-type="doi">10.1016/S0361-9230(02)00940-1</pub-id><pub-id pub-id-type="pmid">12507690</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gobbini</surname> <given-names>M. I.</given-names></name> <name><surname>Gentili</surname> <given-names>C.</given-names></name> <name><surname>Ricciardi</surname> <given-names>E.</given-names></name> <name><surname>Bellucci</surname> <given-names>C.</given-names></name> <name><surname>Salvini</surname> <given-names>P.</given-names></name> <name><surname>Laschi</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>Distinct neural systems involved in agency and animacy detection</article-title>. <source>J. Cogn. Neurosci</source>. <volume>23</volume>, <fpage>1911</fpage>&#x02013;<lpage>1920</lpage>. <pub-id pub-id-type="doi">10.1162/jocn.2010.21574</pub-id><pub-id pub-id-type="pmid">20849234</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kanwisher</surname> <given-names>N.</given-names></name> <name><surname>Yovel</surname> <given-names>G.</given-names></name></person-group> (<year>2006</year>). <article-title>The fusiform face area: a cortical region specialized for the perception of faces</article-title>. <source>Philos. Trans. R. Soc. Lond. B Biol. Sci</source>. <volume>361</volume>, <fpage>2109</fpage>&#x02013;<lpage>2128</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.2006.1934</pub-id><pub-id pub-id-type="pmid">17118927</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kerestes</surname> <given-names>R.</given-names></name> <name><surname>Davey</surname> <given-names>C. G.</given-names></name> <name><surname>Stephanou</surname> <given-names>K.</given-names></name> <name><surname>Whittle</surname> <given-names>S.</given-names></name> <name><surname>Harrison</surname> <given-names>B. J.</given-names></name></person-group> (<year>2013</year>). <article-title>Functional brain imaging studies of youth depression: a systematic review</article-title>. <source>Neuroimage Clin</source>. <volume>4</volume>, <fpage>209</fpage>&#x02013;<lpage>231</lpage>. <pub-id pub-id-type="doi">10.1016/j.nicl.2013.11.009</pub-id><pub-id pub-id-type="pmid">24455472</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>LeDoux</surname> <given-names>J.</given-names></name></person-group> (<year>2012</year>). <article-title>Rethinking the emotional brain</article-title>. <source>Neuron</source> <volume>73</volume>, <fpage>653</fpage>&#x02013;<lpage>676</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2012.02.004</pub-id><pub-id pub-id-type="pmid">22365542</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mahon</surname> <given-names>B. Z.</given-names></name> <name><surname>Milleville</surname> <given-names>S. C.</given-names></name> <name><surname>Negri</surname> <given-names>G. A.</given-names></name> <name><surname>Rumiati</surname> <given-names>R. I.</given-names></name> <name><surname>Caramazza</surname> <given-names>A.</given-names></name> <name><surname>Martin</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>Action-related properties shape object representations in the ventral stream</article-title>. <source>Neuron</source> <volume>55</volume>, <fpage>507</fpage>&#x02013;<lpage>520</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2007.07.011</pub-id><pub-id pub-id-type="pmid">17678861</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Martin</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>The representation of object concepts in the brain</article-title>. <source>Annu. Rev. Psychol</source>. <volume>58</volume>, <fpage>25</fpage>&#x02013;<lpage>45</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.psych.57.102904.190143</pub-id><pub-id pub-id-type="pmid">16968210</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Martin</surname> <given-names>A.</given-names></name> <name><surname>Weisberg</surname> <given-names>J.</given-names></name></person-group> (<year>2003</year>). <article-title>Neural foundations for understanding social and mechanical concepts</article-title>. <source>Cogn. Neuropsychol</source>. <volume>20</volume>, <fpage>575</fpage>&#x02013;<lpage>587</lpage>. <pub-id pub-id-type="doi">10.1080/02643290342000005</pub-id><pub-id pub-id-type="pmid">16648880</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McLaren</surname> <given-names>D. G.</given-names></name> <name><surname>Ries</surname> <given-names>M. L.</given-names></name> <name><surname>Xu</surname> <given-names>G.</given-names></name> <name><surname>Johnson</surname> <given-names>S. C.</given-names></name></person-group> (<year>2012</year>). <article-title>A generalized form of context-dependent psychophysiological interactions (gPPI): a comparison to standard approaches</article-title>. <source>Neuroimage</source> <volume>61</volume>, <fpage>1277</fpage>&#x02013;<lpage>1286</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2012.03.068</pub-id><pub-id pub-id-type="pmid">22484411</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mitchell</surname> <given-names>D. G.</given-names></name> <name><surname>Nakic</surname> <given-names>M.</given-names></name> <name><surname>Fridberg</surname> <given-names>D.</given-names></name> <name><surname>Kamel</surname> <given-names>N.</given-names></name> <name><surname>Pine</surname> <given-names>D. S.</given-names></name> <name><surname>Blair</surname> <given-names>R. J.</given-names></name></person-group> (<year>2007</year>). <article-title>The impact of processing load on emotion</article-title>. <source>Neuroimage</source> <volume>34</volume>, <fpage>1299</fpage>&#x02013;<lpage>1309</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2006.10.012</pub-id><pub-id pub-id-type="pmid">17161627</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Murphy</surname> <given-names>F. C.</given-names></name> <name><surname>Nimmo-Smith</surname> <given-names>I.</given-names></name> <name><surname>Lawrence</surname> <given-names>A. D.</given-names></name></person-group> (<year>2003</year>). <article-title>Functional neuroanatomy of emotions: a meta-analysis</article-title>. <source>Cogn. Affect. Behav. Neurosci</source>. <volume>3</volume>, <fpage>207</fpage>&#x02013;<lpage>233</lpage>. <pub-id pub-id-type="doi">10.3758/CABN.3.3.207</pub-id><pub-id pub-id-type="pmid">14672157</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nguyen</surname> <given-names>V. T.</given-names></name> <name><surname>Breakspear</surname> <given-names>M.</given-names></name> <name><surname>Cunnington</surname> <given-names>R.</given-names></name></person-group> (<year>2013</year>). <article-title>Fusing concurrent EEG-fMRI with dynamic causal modeling: application to effective connectivity during face perception</article-title>. <source>Neuroimage</source>. [Epub ahead of print]. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2013.06.083</pub-id><pub-id pub-id-type="pmid">23850464</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pelphrey</surname> <given-names>K. A.</given-names></name> <name><surname>Morris</surname> <given-names>J. P.</given-names></name> <name><surname>Michelich</surname> <given-names>C. R.</given-names></name> <name><surname>Allison</surname> <given-names>T.</given-names></name> <name><surname>McCarthy</surname> <given-names>G.</given-names></name></person-group> (<year>2005</year>). <article-title>Functional anatomy of biological motion perception in posterior temporal cortex: an fMRI study of eye, mouth and hand movements</article-title>. <source>Cereb. Cortex</source> <volume>15</volume>, <fpage>1866</fpage>&#x02013;<lpage>1876</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhi064</pub-id><pub-id pub-id-type="pmid">15746001</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pessoa</surname> <given-names>L.</given-names></name> <name><surname>Adolphs</surname> <given-names>R.</given-names></name></person-group> (<year>2010</year>). <article-title>Emotion processing and the amygdala: from a &#x02018;low road&#x02019; to &#x02018;many roads&#x02019; of evaluating biological significance</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>11</volume>, <fpage>773</fpage>&#x02013;<lpage>783</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2920</pub-id><pub-id pub-id-type="pmid">20959860</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pessoa</surname> <given-names>L.</given-names></name> <name><surname>Kastner</surname> <given-names>S.</given-names></name> <name><surname>Ungerleider</surname> <given-names>L. G.</given-names></name></person-group> (<year>2002</year>). <article-title>Attentional control of the processing of neutral and emotional stimuli</article-title>. <source>Cogn. Brain Res</source>. <volume>15</volume>, <fpage>31</fpage>&#x02013;<lpage>45</lpage>. <pub-id pub-id-type="doi">10.1016/S0926-6410(02)00214-8</pub-id><pub-id pub-id-type="pmid">12433381</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pessoa</surname> <given-names>L.</given-names></name> <name><surname>Ungerleider</surname> <given-names>L. G.</given-names></name></person-group> (<year>2004</year>). <article-title>Neuroimaging studies of attention and the processing of emotion-laden stimuli</article-title>. <source>Prog. Brain Res</source>. <volume>144</volume>, <fpage>171</fpage>&#x02013;<lpage>182</lpage>. <pub-id pub-id-type="doi">10.1016/S0079-6123(03)14412-3</pub-id><pub-id pub-id-type="pmid">14650848</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Phan</surname> <given-names>K. L.</given-names></name> <name><surname>Liberzon</surname> <given-names>I.</given-names></name> <name><surname>Welsh</surname> <given-names>R. C.</given-names></name> <name><surname>Britton</surname> <given-names>J. C.</given-names></name> <name><surname>Taylor</surname> <given-names>S. F.</given-names></name></person-group> (<year>2003</year>). <article-title>Habituation of rostral anterior cingulate cortex to repeated emotionally salient pictures</article-title>. <source>Neuropsychopharmacology</source> <volume>28</volume>, <fpage>1344</fpage>&#x02013;<lpage>1350</lpage>. <pub-id pub-id-type="doi">10.1038/sj.npp.1300186</pub-id><pub-id pub-id-type="pmid">12784119</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rankin</surname> <given-names>C. H.</given-names></name> <name><surname>Abrams</surname> <given-names>T.</given-names></name> <name><surname>Barry</surname> <given-names>R. J.</given-names></name> <name><surname>Bhatnagar</surname> <given-names>S.</given-names></name> <name><surname>Clayton</surname> <given-names>D. F.</given-names></name> <name><surname>Colombo</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2009</year>). <article-title>Habituation revisited: an updated and revised description of the behavioral characteristics of habituation</article-title>. <source>Neurobiol. Learn. Mem</source>. <volume>92</volume>, <fpage>135</fpage>&#x02013;<lpage>138</lpage>. <pub-id pub-id-type="doi">10.1016/j.nlm.2008.09.012</pub-id><pub-id pub-id-type="pmid">18854219</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Santos</surname> <given-names>N. S.</given-names></name> <name><surname>Kuzmanovic</surname> <given-names>B.</given-names></name> <name><surname>David</surname> <given-names>N.</given-names></name> <name><surname>Rotarska-Jagiela</surname> <given-names>A.</given-names></name> <name><surname>Eickhoff</surname> <given-names>S. B.</given-names></name> <name><surname>Shah</surname> <given-names>J. N.</given-names></name> <etal/></person-group>. (<year>2010</year>). <article-title>Animated brain: a functional neuroimaging study on animacy experience</article-title>. <source>Neuroimage</source> <volume>53</volume>, <fpage>291</fpage>&#x02013;<lpage>302</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2010.05.080</pub-id><pub-id pub-id-type="pmid">20570742</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Summerfield</surname> <given-names>C.</given-names></name> <name><surname>Trittschuh</surname> <given-names>E. H.</given-names></name> <name><surname>Monti</surname> <given-names>J. M.</given-names></name> <name><surname>Mesulam</surname> <given-names>M. M.</given-names></name> <name><surname>Egner</surname> <given-names>T.</given-names></name></person-group> (<year>2008</year>). <article-title>Neural repetition suppression reflects fulfilled perceptual expectations</article-title>. <source>Nat. Neurosci</source>. <volume>11</volume>, <fpage>1004</fpage>&#x02013;<lpage>1006</lpage>. <pub-id pub-id-type="doi">10.1038/nn.2163</pub-id><pub-id pub-id-type="pmid">19160497</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Talairach</surname> <given-names>J.</given-names></name> <name><surname>Tournoux</surname> <given-names>P.</given-names></name></person-group> (<year>1988</year>). <source>Co-Planar Stereotaxic Atlas of the Human Brain</source>. <publisher-loc>Stuttgart</publisher-loc>: <publisher-name>Thieme</publisher-name>.</citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Waters</surname> <given-names>A. M.</given-names></name> <name><surname>Henry</surname> <given-names>J.</given-names></name> <name><surname>Mogg</surname> <given-names>K.</given-names></name> <name><surname>Bradley</surname> <given-names>B. P.</given-names></name> <name><surname>Pine</surname> <given-names>D. S.</given-names></name></person-group> (<year>2010</year>). <article-title>Attentional bias towards angry faces in childhood anxiety disorders</article-title>. <source>J. Behav. Ther. Exp. Psychiatry</source> <volume>41</volume>, <fpage>158</fpage>&#x02013;<lpage>164</lpage>. <pub-id pub-id-type="doi">10.1016/j.jbtep.2009.12.001</pub-id><pub-id pub-id-type="pmid">20060097</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Weigelt</surname> <given-names>S.</given-names></name> <name><surname>Muckli</surname> <given-names>L.</given-names></name> <name><surname>Kohler</surname> <given-names>A.</given-names></name></person-group> (<year>2008</year>). <article-title>Functional magnetic resonance adaptation in visual neuroscience</article-title>. <source>Rev. Neurosci</source>. <volume>19</volume>, <fpage>363</fpage>&#x02013;<lpage>380</lpage>. <pub-id pub-id-type="doi">10.1515/REVNEURO.2008.19.4-5.363</pub-id><pub-id pub-id-type="pmid">19145990</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wheatley</surname> <given-names>T.</given-names></name> <name><surname>Milleville</surname> <given-names>S. C.</given-names></name> <name><surname>Martin</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>Understanding animate agents: distinct roles for the social network and mirror system</article-title>. <source>Psychol. Sci</source>. <volume>18</volume>, <fpage>469</fpage>&#x02013;<lpage>474</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-9280.2007.01923.x</pub-id><pub-id pub-id-type="pmid">17576256</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wright</surname> <given-names>C. I.</given-names></name> <name><surname>Fischer</surname> <given-names>H.</given-names></name> <name><surname>Whalen</surname> <given-names>P. J.</given-names></name> <name><surname>McInerney</surname> <given-names>S. C.</given-names></name> <name><surname>Shin</surname> <given-names>L. M.</given-names></name> <name><surname>Rauch</surname> <given-names>S. L.</given-names></name></person-group> (<year>2001</year>). <article-title>Differential prefrontal cortex and amygdala habituation to repeatedly presented emotional stimuli</article-title>. <source>Neuroreport</source> <volume>12</volume>, <fpage>379</fpage>&#x02013;<lpage>383</lpage>. <pub-id pub-id-type="doi">10.1097/00001756-200102120-00039</pub-id><pub-id pub-id-type="pmid">11209954</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yamaguchi</surname> <given-names>S.</given-names></name> <name><surname>Hale</surname> <given-names>L. A.</given-names></name> <name><surname>D&#x00027;Esposito</surname> <given-names>M.</given-names></name> <name><surname>Knight</surname> <given-names>R. T.</given-names></name></person-group> (<year>2004</year>). <article-title>Rapid prefrontal-hippocampal habituation to novel events</article-title>. <source>J. Neurosci</source>. <volume>24</volume>, <fpage>5356</fpage>&#x02013;<lpage>5363</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.4587-03.2004</pub-id><pub-id pub-id-type="pmid">15190108</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>J.</given-names></name> <name><surname>Bellgowan</surname> <given-names>P. S.</given-names></name> <name><surname>Martin</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>Threat, domain-specificity and the human amygdala</article-title>. <source>Neuropsychologia</source> <volume>50</volume>, <fpage>2566</fpage>&#x02013;<lpage>2572</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2012.07.001</pub-id><pub-id pub-id-type="pmid">22820342</pub-id></citation>
</ref>
</ref-list>
</back>
</article>