<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Behav. Neurosci.</journal-id>
<journal-title>Frontiers in Behavioral Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Behav. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5153</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnbeh.2013.00120</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Supramodal neural processing of abstract information conveyed by speech and gesture</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Straube</surname> <given-names>Benjamin</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>He</surname> <given-names>Yifei</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Steines</surname> <given-names>Miriam</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Gebhardt</surname> <given-names>Helge</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Kircher</surname> <given-names>Tilo</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Sammer</surname> <given-names>Gebhard</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Nagels</surname> <given-names>Arne</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Psychiatry and Psychotherapy, Philipps-University Marburg</institution> <country>Marburg, Germany</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of General Linguistics, Johannes Gutenberg-University Mainz</institution> <country>Mainz, Germany</country></aff>
<aff id="aff3"><sup>3</sup><institution>Cognitive Neuroscience at Centre for Psychiatry, Justus Liebig University Giessen</institution> <country>Giessen, Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Leonid Perlovsky, Harvard University and Air Force Research Laboratory, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Nashaat Z. Gerges, Medical College of Wisconsin, USA; Yueqiang Xue, The University of Tennessee Health Science Center, USA</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Benjamin Straube, Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Rudolf-Bultmann-Str. 8, 35039 Marburg, Germany e-mail: <email>straubeb&#x00040;med.uni-marburg.de</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to the journal Frontiers in Behavioral Neuroscience.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>13</day>
<month>09</month>
<year>2013</year>
</pub-date>
<pub-date pub-type="collection">
<year>2013</year>
</pub-date>
<volume>7</volume>
<elocation-id>120</elocation-id>
<history>
<date date-type="received">
<day>09</day>
<month>07</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>24</day>
<month>08</month>
<year>2013</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2013 Straube, He, Steines, Gebhardt, Kircher, Sammer and Nagels.</copyright-statement>
<copyright-year>2013</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>Abstractness and modality of interpersonal communication have a considerable impact on comprehension. They are relevant for determining thoughts and constituting internal models of the environment. Whereas concrete object-related information can be represented in mind irrespective of language, abstract concepts require a representation in speech. Consequently, modality-independent processing of abstract information can be expected. Here we investigated the neural correlates of abstractness (abstract vs. concrete) and modality (speech vs. gestures) to identify an abstractness-specific supramodal neural network. During fMRI data acquisition 20 participants were presented with videos of an actor either speaking sentences with an abstract-social [AS] or concrete-object-related content [CS], or performing meaningful abstract-social emblematic [AG] or concrete-object-related tool-use gestures [CG]. Gestures were accompanied by a foreign language to increase the comparability between conditions and to frame the communication context of the gesture videos. Participants performed a content judgment task referring to the person vs. object-relatedness of the utterances. The behavioral data suggest a comparable comprehension of contents communicated by speech or gesture. Furthermore, we found common neural processing for abstract information independent of modality (AS &#x0003E; CS &#x02229; AG &#x0003E; CG) in a left-hemispheric network including the left inferior frontal gyrus (IFG), temporal pole, and medial frontal cortex. Modality-specific activations were found in bilateral occipital, parietal, and temporal as well as right inferior frontal brain regions for gesture (G &#x0003E; S) and in left anterior temporal regions and the left angular gyrus for the processing of speech semantics (S &#x0003E; G). These data support the idea that abstract concepts are represented in a supramodal manner. Consequently, gestures referring to abstract concepts are processed in a predominantly left-hemispheric, language-related neural network.</p></abstract>
<kwd-group>
<kwd>gesture</kwd>
<kwd>speech</kwd>
<kwd>fMRI</kwd>
<kwd>abstract semantics</kwd>
<kwd>emblematic gestures</kwd>
<kwd>tool-use gestures</kwd>
</kwd-group>
<counts>
<fig-count count="4"/>
<table-count count="3"/>
<equation-count count="0"/>
<ref-count count="93"/>
<page-count count="14"/>
<word-count count="10630"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>Human communication is distinctly characterized by the ability to convey abstract concepts such as feelings, evaluations, cultural symbols, or theoretical assumptions. This can be differentiated from references to our physical environment, which consists of concrete objects and their relationships to each other. In addition to our language capacity, humans also employ gestures as a flexible tool to communicate both concrete and abstract information (Kita et al., <xref ref-type="bibr" rid="B38">2007</xref>; Straube et al., <xref ref-type="bibr" rid="B71">2011a</xref>). The investigation of abstractness and modality of communicated information can deliver important insight into the neural representation of concrete and abstract meaning. However, up to now, evidence about commonalities or differences in the neural processing of abstract vs. concrete meaning communicated by speech vs. gesture has been lacking.</p>
<p>Recently, a hierarchical model of language and thought has been suggested (Perlovsky and Ilin, <xref ref-type="bibr" rid="B54">2010</xref>) which proposes that abstract thinking is impossible without speech (Perlovsky and Ilin, <xref ref-type="bibr" rid="B55">2013</xref>). According to this model, abstract information is processed by a neural language system, regardless of whether speech or gesture is chosen as a tool to convey this information. Following this assumption, concrete object-related information is represented in mind independently of speech and hence in a modality-dependent manner, in brain regions sensitive to, for example, visual or motor information. The latter assumption&#x02014;at least partly&#x02014;contradicts existing embodiment theories, which suggest a strong overlap of the sensory-motor and language systems, in particular with respect to the processing of concrete concepts (Gallese and Lakoff, <xref ref-type="bibr" rid="B24">2005</xref>; Arbib, <xref ref-type="bibr" rid="B3">2008</xref>; Fischer and Zwaan, <xref ref-type="bibr" rid="B22">2008</xref>; D&#x00027;Ausilio et al., <xref ref-type="bibr" rid="B11">2009</xref>; Pulverm&#x000FC;ller and Fadiga, <xref ref-type="bibr" rid="B58">2010</xref>). However, the particular role of the communication modality for the neural representation of abstract as opposed to concrete concepts has not been investigated so far.</p>
<p>The impact of abstractness on speech processing (e.g., Rapp et al., <xref ref-type="bibr" rid="B59">2004</xref>, <xref ref-type="bibr" rid="B60">2007</xref>; Eviatar and Just, <xref ref-type="bibr" rid="B19">2006</xref>; Lee and Dapretto, <xref ref-type="bibr" rid="B40">2006</xref>; Kircher et al., <xref ref-type="bibr" rid="B37">2007</xref>; Mashal et al., <xref ref-type="bibr" rid="B44">2007</xref>, <xref ref-type="bibr" rid="B45">2009</xref>; Shibata et al., <xref ref-type="bibr" rid="B69">2007</xref>; Schmidt and Seger, <xref ref-type="bibr" rid="B68">2009</xref>; Desai et al., <xref ref-type="bibr" rid="B15">2011</xref>) and on the neural integration of speech and gesture information has been demonstrated in several functional magnetic resonance imaging (fMRI) studies using different experimental approaches (Cornejo et al., <xref ref-type="bibr" rid="B10">2009</xref>; Kircher et al., <xref ref-type="bibr" rid="B36">2009b</xref>; Straube et al., <xref ref-type="bibr" rid="B77">2009</xref>, <xref ref-type="bibr" rid="B71">2011a</xref>, <xref ref-type="bibr" rid="B76">2013a</xref>; Ib&#x000E1;&#x000F1;ez et al., <xref ref-type="bibr" rid="B33">2011</xref>). There is converging evidence suggesting that especially the left inferior frontal gyrus (IFG) plays a decisive role in the processing of abstract semantic figurative meaning in speech (Rapp et al., <xref ref-type="bibr" rid="B59">2004</xref>, <xref ref-type="bibr" rid="B60">2007</xref>; Kircher et al., <xref ref-type="bibr" rid="B37">2007</xref>; Shibata et al., <xref ref-type="bibr" rid="B69">2007</xref>). 
However, results can further differ due to other factors, such as familiarity, imageability, figurativeness, or processing difficulty (Mashal et al., <xref ref-type="bibr" rid="B45">2009</xref>; Schmidt and Seger, <xref ref-type="bibr" rid="B68">2009</xref>; Cardillo et al., <xref ref-type="bibr" rid="B9">2010</xref>; Schmidt et al., <xref ref-type="bibr" rid="B67">2010</xref>; Diaz et al., <xref ref-type="bibr" rid="B16">2011</xref>).</p>
<p>In contrast to abstract information processing, it has been suggested that concrete information is processed in different brain regions sensitive to the specific information type: e.g., spatial information in the parietal lobe (Ungerleider and Haxby, <xref ref-type="bibr" rid="B85">1994</xref>; Straube et al., <xref ref-type="bibr" rid="B73">2011c</xref>), form or color information in the temporal lobe (Patterson et al., <xref ref-type="bibr" rid="B52">2007</xref>). A similar finding is illustrated by Binder and Desai (<xref ref-type="bibr" rid="B6">2011</xref>): by reviewing 38 imaging studies that examined concrete knowledge processing during language comprehension tasks, the authors found that the processing of action-related speech material activates brain regions that are also involved in action execution (see also Hauk et al., <xref ref-type="bibr" rid="B27">2004</xref>; Hauk and Pulverm&#x000FC;ller, <xref ref-type="bibr" rid="B28">2004</xref>); similarly, the processing of other concrete speech information such as sound and color all tend to show activations in areas that process these perceptual modalities (Binder and Desai, <xref ref-type="bibr" rid="B6">2011</xref>). In sum, abstract information processing has been shown to recruit a mainly left-lateralized fronto-temporal neural network whereas concrete information comprehension involves rather diverse activation foci, which are primarily related to the corresponding perceptual origin.</p>
<p>In addition to our speech capacity, gesturing is a flexible communicative tool which humans use to communicate both concrete and abstract information via the visual modality. Previous studies on object- or person-related gesture processing have either presented pantomimes of tool or object use, hands grasping for tools or objects (e.g., Decety et al., <xref ref-type="bibr" rid="B14">1997</xref>; Faillenot et al., <xref ref-type="bibr" rid="B20">1997</xref>; Decety and Gr&#x000E8;zes, <xref ref-type="bibr" rid="B13">1999</xref>; Gr&#x000E8;zes and Decety, <xref ref-type="bibr" rid="B26">2001</xref>; Buxbaum et al., <xref ref-type="bibr" rid="B8">2005</xref>; Filimon et al., <xref ref-type="bibr" rid="B21">2007</xref>; Pierno et al., <xref ref-type="bibr" rid="B56">2009</xref>; Biagi et al., <xref ref-type="bibr" rid="B5">2010</xref>; Davare et al., <xref ref-type="bibr" rid="B12">2010</xref>; Emmorey et al., <xref ref-type="bibr" rid="B18">2010</xref>; Jastorff et al., <xref ref-type="bibr" rid="B34">2010</xref>), or symbolic gestures like &#x0201C;thumbs up&#x0201D; (Nakamura et al., <xref ref-type="bibr" rid="B48">2004</xref>; Molnar-Szakacs et al., <xref ref-type="bibr" rid="B46">2007</xref>; Husain et al., <xref ref-type="bibr" rid="B32">2009</xref>; Xu et al., <xref ref-type="bibr" rid="B93">2009</xref>; Andric et al., <xref ref-type="bibr" rid="B2">2013</xref>). However, few studies have directly compared abstract-social (person-related) with concrete-object-related gestures. A previous study demonstrated that the left IFG is involved in the processing of expressive (emotional) in contrast to body-referred and isolated (object-related) hand gestures (Lotze et al., <xref ref-type="bibr" rid="B42">2006</xref>). This finding suggests that the left IFG is sensitive to the processing of abstract information irrespective of communication modality (speech or gestures).</p>
<p>In sum, the left IFG represents a sensitive region for abstract information processing in speech or gesture, whereas the brain areas activated by concrete information depend on communication modality and semantic content. However, whether the same neural structures are relevant for the processing of gestures and sentences with an abstract content or gestures and sentences with a concrete content remains unknown.</p>
<p>Common neural networks for the processing of speech and gesture information have been suggested (Willems and Hagoort, <xref ref-type="bibr" rid="B89">2007</xref>), and empirically tested in several recent studies (Xu et al., <xref ref-type="bibr" rid="B93">2009</xref>; Andric and Small, <xref ref-type="bibr" rid="B1">2012</xref>; Straube et al., <xref ref-type="bibr" rid="B78">2012</xref>; Andric et al., <xref ref-type="bibr" rid="B2">2013</xref>). Andric et al. (<xref ref-type="bibr" rid="B2">2013</xref>) performed an fMRI study on gesture processing, presenting two different kinds of hand actions (emblematic gestures and grasping movements) as well as speech to their participants. Thus, either emblematic gestures&#x02014;hand and arm movements conveying social or symbolic meaning (e.g., &#x0201C;thumbs up&#x0201D; for having done a good job)&#x02014;or grasping movements (e.g., grasping a stapler) not carrying any semantic meaning <italic>per se</italic> were presented. The authors identified two different types of brain responses for the processing of emblematic gestures: the first type was related to the processing of linguistic meaning, whereas the other type corresponded to the processing of hand actions or movements, regardless of the symbolic meaning conveyed. The latter type involved brain responses in parietal and premotor areas in connection with hand movements, whereas meaning-bearing information, i.e., emblems and speech, resulted in activations in left lateral temporal and inferior frontal areas. Altogether, distinct neural systems were involved, distinguishing mere perceptual recognition from the interpretation of socially and culturally relevant emblematic gestures. More importantly, although lacking baseline conditions containing more concrete semantics (either in gesture or speech), the results of this study tentatively imply a common neural network for processing abstract meaning, irrespective of its input modality.</p>
<p>In a similar vein, Xu et al. (<xref ref-type="bibr" rid="B93">2009</xref>) investigated the processing of emblems and pantomimes and their corresponding speech utterances via fMRI. Their finding converges with the imaging results of Andric and colleagues in the sense that both input modalities activated a common, left-lateralized network encompassing inferior frontal and posterior temporal regions. However, although utilizing emblems (abstract) and pantomimes (concrete) as stimuli, the authors did not elaborate on how different levels of semantics (abstract/concrete) are processed via gesture or speech. Moreover, in a recent study from our laboratory, Straube et al. (<xref ref-type="bibr" rid="B78">2012</xref>) examined a less conventionalized gesture type, iconic gestures, and still found a fronto-temporal network responsible for the processing of both gesture and speech semantics. Altogether, the three aforementioned studies unanimously suggest a common fronto-temporal neural network to be responsible for the processing of not only speech but also gesture semantics.</p>
<p>Although tentative proposals regarding a supramodal neural network for speech and gesture semantics have been made (Xu et al., <xref ref-type="bibr" rid="B93">2009</xref>; Straube et al., <xref ref-type="bibr" rid="B78">2012</xref>), it remains unclear how different levels of semantics&#x02014;either concrete or abstract&#x02014;are processed with respect to the input modalities. To date, no study has directly compared abstract and concrete semantic information processing with visual (gesture) vs. auditory (speech) input.</p>
<p>As hypothesized above, concrete object-related information might be represented in mind with and/or without speech, whereas abstract information could require a representation in speech. Consequently, common mechanisms for the processing of speech and gesture semantics can be expected specifically when abstract (in contrast to concrete) information is communicated. Therefore, the current study focused on the neural correlates of abstractness and modality in a communication context. With a factorial manipulation of content (abstract vs. concrete) and communication modality (speech vs. gestures) we aimed to shed light on supramodal neural network properties relevant for the processing of abstract in contrast to concrete information. We tested the following alternative hypotheses: first, if only abstract concepts&#x02014;activated through speech or gesture in natural communication situations&#x02014;are processed in a supramodal manner, then we predict consistent neural signatures only for abstract in contrast to concrete contents across communication modalities. However, if concrete concepts&#x02014;activated through speech or gestures&#x02014;are also represented in a supramodal network, we predict overlapping neural responses for concrete in contrast to abstract contents across modality.</p>
<p>To manipulate abstractness and communication modality we used video clips of an actor either speaking sentences with an abstract-social [AS] or concrete-object-related content [CS], or performing meaningful abstract-social (emblematic) [AG] or concrete-object-related (tool-use) gestures [CG]. Gestures were accompanied by a foreign language (Russian) to increase the comparability between conditions and the naturalness of the gesture videos, where spoken language frames the communication context. We used emblematic and tool-related gestures to guarantee high comprehensibility of the gestures. During the experiment participants performed a content judgment task referring to the person vs. object-relatedness of the speech and gesture communications, to ensure their attention to the semantic information and their adequate comprehension of the corresponding meaning. We hypothesized modality-independent activations exclusively for the processing of abstract information (AS &#x0003E; CS &#x02229; AG &#x0003E; CG) in language-related regions encompassing the left inferior frontal gyrus and the left middle and superior temporal gyrus (MTG/STG), as well as regions related to social/emotional processing such as the temporal pole, the medial frontal cortex, and the anterior cingulate cortex (ACC). In addition, modality-specific activations were expected in bilateral occipital, parietal, and temporal brain regions for gesture (G &#x0003E; S) and in left temporal, temporo-parietal, and inferior frontal regions for the processing of speech semantics (S &#x0003E; G).</p>
</sec>
<sec sec-type="methods" id="s2">
<title>Methods</title>
<sec>
<title>Participants</title>
<p>Twenty healthy subjects (7 females) participated in the study. The mean age of the subjects was 25.4 years (<italic>SD</italic>: 3.42, range: 22.0&#x02013;35.0). All participants were right-handed (Oldfield, <xref ref-type="bibr" rid="B50">1971</xref>), native German speakers, and had no knowledge of Russian. All subjects had normal or corrected-to-normal vision, and none reported hearing deficits. A history of relevant medical or psychiatric illness was an exclusion criterion. All subjects gave written informed consent prior to participation in the study. The study was approved by the local ethics committee.</p>
</sec>
<sec>
<title>Stimulus material</title>
<p>Video clips were selected from a large pool of different videos. Some of them have been used in previous fMRI studies focusing on different aspects of speech and gesture processing (Green et al., <xref ref-type="bibr" rid="B25">2009</xref>; Kircher et al., <xref ref-type="bibr" rid="B36">2009b</xref>; Straube et al., <xref ref-type="bibr" rid="B77">2009</xref>, <xref ref-type="bibr" rid="B74">2010</xref>, <xref ref-type="bibr" rid="B71">2011a</xref>,<xref ref-type="bibr" rid="B72">b</xref>, <xref ref-type="bibr" rid="B78">2012</xref>, <xref ref-type="bibr" rid="B76">2013a</xref>,<xref ref-type="bibr" rid="B75">b</xref>; Leube et al., <xref ref-type="bibr" rid="B41">2012</xref>; Mainieri et al., <xref ref-type="bibr" rid="B43">2013</xref>). Here, we used emblematic and tool-related gestures and corresponding sentences to guarantee high comprehensibility of the gestures and a strong difference in abstractness between conditions. For the current analysis, 208 short video clips (26 videos per condition &#x000D7; 4 conditions &#x000D7; 2 sets) depicting an actor were used. The clips covered the following conditions: (1) German sentences with an abstract-social content [AS], (2) Russian sentences with abstract-social (emblematic) gestures [AG], (3) German sentences with a concrete-object-related content [CS], and (4) Russian sentences with concrete-object-related (tool-use) gestures [CG] (Figure <xref ref-type="fig" rid="F1">1</xref>). Thus, we presented videos with semantic information only in speech or only in gesture, each in either a highly abstract-social or a concrete-object-related version. Additionally, two bimodal meaningful speech-gesture conditions and one meaningless speech-gesture condition were presented; these are not of interest for the current analysis.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>For each of the four conditions (AG, abstract-gesture; CG, concrete-gesture; AS, abstract-speech; CS, concrete-speech) an example of the stimulus material is depicted</bold>. Note: For illustrative purposes the spoken German sentences were translated into English and all spoken sentences were written into speech bubbles.</p></caption>
<graphic xlink:href="fnbeh-07-00120-g0001.tif"/>
</fig>
<p>We decided to present gestures accompanied by a foreign language to increase the comparability between conditions and the naturalness of the gesture videos where spoken language frames the communication context. All sentences had a similar grammatical structure (subject&#x02014;predicate&#x02014;object) and were translated into Russian for the gesture conditions. Words that sounded similar in each language were avoided. Examples for the German sentences are: &#x0201C;The blacksmith <bold>hammers</bold> on the metal plate&#x0201D; (&#x0201C;Der Schmied h&#x000E4;mmert auf die Metallplatte&#x0201D;; CS condition) or &#x0201C;The bishop <bold>exhorts</bold> the believers&#x0201D; (&#x0201C;Der Bischof ermahnt die Gl&#x000E4;ubigen&#x0201D;; AS condition; see Figure <xref ref-type="fig" rid="F1">1</xref>). Thus, the sentences had a similar length of five to eight words and a similar grammatical form, but differed considerably in content. The corresponding gestures (keyword indicated in bold) matched the corresponding speech content, but were presented here only in a foreign language context.</p>
<p>The same male bilingual actor (German and Russian) performed all the utterances and gestures in a natural spontaneous way. Intonation, prosody and movement characteristics in the corresponding variations of one item were closely matched. At the beginning and at the end of each clip the actor stood with arms hanging comfortably. Each clip had a duration of 5 s including 500 ms before and after the experimental manipulation, where the actor neither spoke nor moved. In the present study the semantic aspects of the stimulus material refer to differences in abstractness of the communicated information (abstract vs. concrete content).</p>
<p>For stimulus validation, 20 participants not taking part in the fMRI study rated each video on a scale from 1 to 7 concerning understandability, imageability and naturalness (1 &#x0003D; very low to 7 &#x0003D; very high). In order to assess <italic>understandability</italic> participants were asked: How understandable is the video clip? (original: &#x0201C;Wie VERST&#x000C4;NDLICH ist dieser Videoclip?&#x0201D;). The rating scale ranged from 1 &#x0003D; very difficult to understand (sehr schlecht verst&#x000E4;ndlich) to 7 &#x0003D; very easy/good to understand (sehr gut verst&#x000E4;ndlich). For <italic>naturalness</italic> ratings the participants were asked: How natural is the scene? (original: &#x0201C;Wie NAT&#x000DC;RLICH ist diese Szene?&#x0201D;). The rating scale ranged from 1 &#x0003D; very unnatural (sehr unnat&#x000FC;rlich) to 7 &#x0003D; very natural (sehr nat&#x000FC;rlich). Finally, for judgments of <italic>imageability</italic> the participants were asked: How pictorial/imageable is the scene? (original: &#x0201C;Wie BILDHAFT ist dieser Videoclip?&#x0201D;). The rating scale ranged from 1 &#x0003D; very abstract (sehr abstrakt) to 7 &#x0003D; very pictorial/imageable (sehr bildhaft). These scales have also been used in previous investigations (Green et al., <xref ref-type="bibr" rid="B25">2009</xref>; Kircher et al., <xref ref-type="bibr" rid="B36">2009b</xref>; Straube et al., <xref ref-type="bibr" rid="B77">2009</xref>, <xref ref-type="bibr" rid="B74">2010</xref>, <xref ref-type="bibr" rid="B71">2011a</xref>,<xref ref-type="bibr" rid="B72">b</xref>). A set of 338 video clips (52 German sentences with concrete-object-related content, 52 German sentences with abstract-social content, their counterparts in the Russian-gesture and German-gesture conditions, and 26 Russian control videos) was chosen as stimuli for the fMRI experiment on the basis of high naturalness and high understandability for the German and gesture conditions.
The stimuli were divided into two sets in order to present each participant with 182 clips during the scanning procedure (26 items per condition), counterbalanced across subjects. A single participant only saw complementary derivatives of one item, i.e., the same sentence or gesture information was only presented once per participant. This was done to avoid speech or gesture repetition or carryover effects. Again, all parameters listed above were used for an equal assignment of the video clips to the two experimental sets, to avoid set-related between-subject differences. As an overview, Table <xref ref-type="table" rid="T1">1</xref> lists the mean durations of speech and gestures as well as the mean ratings of comprehension, imageability, and naturalness of the items used for the current analyses.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>Number of videos, mean durations of the stimulus parameters speech and gesture, and mean stimulus ratings of understandability, imageability, and naturalness for the four conditions abstract-gesture (AG), concrete-gesture (CG), abstract-speech (AS), and concrete-speech (CS), for set 1, set 2, and in total</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left"><bold>Set</bold></th>
<th align="left"><bold>Condition</bold></th>
<th align="left"><bold><italic>N</italic></bold></th>
<th align="center" colspan="4"><bold>Stimulus parameter</bold></th>
<th align="center" colspan="6"><bold>Rating evaluations</bold></th>
</tr>
<tr>
<th/>
<th/>
<th/>
<th align="center" colspan="2"><bold>Speech duration</bold></th>
<th align="center" colspan="2"><bold>Gesture duration</bold></th>
<th align="center" colspan="2"><bold>Understandability</bold></th>
<th align="center" colspan="2"><bold>Imageability</bold></th>
<th align="center" colspan="2"><bold>Naturalness</bold></th>
</tr>
<tr>
<th/>
<th/>
<th/>
<th align="left"><bold>Mean</bold></th>
<th align="left"><bold><italic>SD</italic></bold></th>
<th align="left"><bold>Mean</bold></th>
<th align="left"><bold><italic>SD</italic></bold></th>
<th align="left"><bold>Mean</bold></th>
<th align="left"><bold><italic>SD</italic></bold></th>
<th align="left"><bold>Mean</bold></th>
<th align="left"><bold><italic>SD</italic></bold></th>
<th align="left"><bold>Mean</bold></th>
<th align="left"><bold><italic>SD</italic></bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">1</td>
<td align="left">AG</td>
<td align="left">26</td>
<td align="left">2.163</td>
<td align="left">0.391</td>
<td align="left">2.313</td>
<td align="left">0.440</td>
<td align="left">3.625</td>
<td align="left">0.578</td>
<td align="left">4.498</td>
<td align="left">0.587</td>
<td align="left">4.565</td>
<td align="left">0.379</td>
</tr>
<tr>
<td/>
<td align="left">CG</td>
<td align="left">26</td>
<td align="left">2.303</td>
<td align="left">0.434</td>
<td align="left">3.033</td>
<td align="left">0.364</td>
<td align="left">3.537</td>
<td align="left">0.808</td>
<td align="left">4.785</td>
<td align="left">0.695</td>
<td align="left">4.340</td>
<td align="left">0.540</td>
</tr>
<tr>
<td/>
<td align="left">AS</td>
<td align="left">26</td>
<td align="left">2.400</td>
<td align="left">0.308</td>
<td/>
<td/>
<td align="left">6.527</td>
<td align="left">0.179</td>
<td align="left">3.481</td>
<td align="left">0.321</td>
<td align="left">4.077</td>
<td align="left">0.258</td>
</tr>
<tr>
<td/>
<td align="left">CS</td>
<td align="left">26</td>
<td align="left">2.332</td>
<td align="left">0.290</td>
<td/>
<td/>
<td align="left">6.650</td>
<td align="left">0.209</td>
<td align="left">2.967</td>
<td align="left">0.308</td>
<td align="left">3.181</td>
<td align="left">0.293</td>
</tr>
<tr>
<td/>
<td align="left">Total</td>
<td align="left">104</td>
<td align="left">2.299</td>
<td align="left">0.366</td>
<td align="left">2.673</td>
<td align="left">0.540</td>
<td align="left">5.085</td>
<td align="left">1.595</td>
<td align="left">3.933</td>
<td align="left">0.894</td>
<td align="left">4.041</td>
<td align="left">0.649</td>
</tr>
<tr>
<td align="left">2</td>
<td align="left">AG</td>
<td align="left">26</td>
<td align="left">2.144</td>
<td align="left">0.296</td>
<td align="left">2.219</td>
<td align="left">0.336</td>
<td align="left">3.392</td>
<td align="left">0.766</td>
<td align="left">4.381</td>
<td align="left">0.698</td>
<td align="left">4.479</td>
<td align="left">0.501</td>
</tr>
<tr>
<td/>
<td align="left">CG</td>
<td align="left">26</td>
<td align="left">2.160</td>
<td align="left">0.391</td>
<td align="left">2.989</td>
<td align="left">0.415</td>
<td align="left">3.327</td>
<td align="left">0.660</td>
<td align="left">4.598</td>
<td align="left">0.621</td>
<td align="left">4.181</td>
<td align="left">0.444</td>
</tr>
<tr>
<td/>
<td align="left">AS</td>
<td align="left">26</td>
<td align="left">2.332</td>
<td align="left">0.281</td>
<td/>
<td/>
<td align="left">6.490</td>
<td align="left">0.154</td>
<td align="left">3.454</td>
<td align="left">0.372</td>
<td align="left">3.935</td>
<td align="left">0.237</td>
</tr>
<tr>
<td/>
<td align="left">CS</td>
<td align="left">26</td>
<td align="left">2.274</td>
<td align="left">0.229</td>
<td/>
<td/>
<td align="left">6.652</td>
<td align="left">0.155</td>
<td align="left">3.083</td>
<td align="left">0.207</td>
<td align="left">3.181</td>
<td align="left">0.279</td>
</tr>
<tr>
<td/>
<td align="left">Total</td>
<td align="left">104</td>
<td align="left">2.228</td>
<td align="left">0.311</td>
<td align="left">2.604</td>
<td align="left">0.539</td>
<td align="left">4.965</td>
<td align="left">1.693</td>
<td align="left">3.879</td>
<td align="left">0.810</td>
<td align="left">3.944</td>
<td align="left">0.612</td>
</tr>
<tr>
<td align="left">Total</td>
<td align="left">AG</td>
<td align="left">52</td>
<td align="left">2.153</td>
<td align="left">0.343</td>
<td align="left">2.266</td>
<td align="left">0.390</td>
<td align="left">3.509</td>
<td align="left">0.682</td>
<td align="left">4.439</td>
<td align="left">0.641</td>
<td align="left">4.522</td>
<td align="left">0.442</td>
</tr>
<tr>
<td/>
<td align="left">CG</td>
<td align="left">52</td>
<td align="left">2.231</td>
<td align="left">0.415</td>
<td align="left">3.011</td>
<td align="left">0.387</td>
<td align="left">3.432</td>
<td align="left">0.738</td>
<td align="left">4.691</td>
<td align="left">0.659</td>
<td align="left">4.261</td>
<td align="left">0.496</td>
</tr>
<tr>
<td/>
<td align="left">AS</td>
<td align="left">52</td>
<td align="left">2.366</td>
<td align="left">0.294</td>
<td/>
<td/>
<td align="left">6.509</td>
<td align="left">0.166</td>
<td align="left">3.467</td>
<td align="left">0.344</td>
<td align="left">4.006</td>
<td align="left">0.256</td>
</tr>
<tr>
<td/>
<td align="left">CS</td>
<td align="left">52</td>
<td align="left">2.303</td>
<td align="left">0.260</td>
<td/>
<td/>
<td align="left">6.651</td>
<td align="left">0.182</td>
<td align="left">3.025</td>
<td align="left">0.266</td>
<td align="left">3.181</td>
<td align="left">0.283</td>
</tr>
<tr>
<td/>
<td align="left">Total</td>
<td align="left">208</td>
<td align="left">2.263</td>
<td align="left">0.340</td>
<td align="left">2.639</td>
<td align="left">0.538</td>
<td align="left">5.025</td>
<td align="left">1.642</td>
<td align="left">3.906</td>
<td align="left">0.851</td>
<td align="left">3.992</td>
<td align="left">0.631</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>SD, standard deviation.</italic></p>
</table-wrap-foot>
</table-wrap>
<p>The understandability ratings for the videos of the four conditions clearly showed a main effect of modality, with the speech conditions scoring higher than the gesture conditions [<italic>F</italic><sub>(1, 113.51)</sub> &#x0003D; 1878.79, <italic>P</italic> &#x0003C; 0.001, two-factorial between-subjects ANOVA with degrees of freedom adjusted according to Brown&#x02013;Forsythe]. This effect reflects the fact that different languages were used for the speech-only and the gesture-with-speech conditions: video clips with German speech scored higher than 6, whereas videos with Russian speech and gestures scored between 3 and 4 (6.58 vs. 3.47, respectively). This difference is in line with the assumption that isolated gestures are less meaningful when presented without their sentence context; nevertheless, they remained reasonably understandable, which was important for the current study.</p>
<p>Imageability ratings indicated that the conditions also differed in their capacity to evoke mental images. A significant main effect of modality showed that videos consisting of Russian sentences with gestures were rated as more imageable than videos consisting only of German sentences [4.57 vs. 3.25, respectively; <italic>F</italic><sub>(1, 144.92)</sub> &#x0003D; 349.89, <italic>P</italic> &#x0003C; 0.001, two-factorial between-subjects ANOVA with degrees of freedom adjusted according to Brown&#x02013;Forsythe]. A significant interaction effect indicated that this difference was even more pronounced for the concrete conditions [<italic>F</italic><sub>(1, 144.92)</sub> &#x0003D; 24.22, <italic>P</italic> &#x0003C; 0.001, same ANOVA].</p>
<p>Naturalness ratings likewise showed a main effect of modality: videos including Russian sentences with gestures were rated as more natural than videos including German speech [4.39 vs. 3.59, respectively; <italic>F</italic><sub>(1, 160.63)</sub> &#x0003D; 225.65, <italic>P</italic> &#x0003C; 0.001, two-factorial between-subjects ANOVA with degrees of freedom adjusted according to Brown&#x02013;Forsythe]. Naturalness ratings also differed with the abstractness of the content: videos depicting abstract content were rated as more natural than videos depicting concrete content [4.26 vs. 3.72, respectively; <italic>F</italic><sub>(1, 160.63)</sub> &#x0003D; 104.48, <italic>P</italic> &#x0003C; 0.001, same ANOVA]. Additionally, an interaction effect indicated that videos consisting of German speech with concrete content were rated as least natural [<italic>F</italic><sub>(1, 160.63)</sub> &#x0003D; 28.18, <italic>P</italic> &#x0003C; 0.001, same ANOVA].</p>
<p>The sentences had an average speech duration of 2263 ms (<italic>SD</italic> &#x0003D; 340 ms), with German sentences being somewhat longer than Russian sentences [2335 vs. 2192 ms, respectively; <italic>F</italic><sub>(1, 180.94)</sub> &#x0003D; 9.51, <italic>P</italic> &#x0003C; 0.05, two-factorial between-subjects ANOVA with degrees of freedom adjusted according to Brown&#x02013;Forsythe]. The gestures analyzed here had an average duration of 2639 ms (<italic>SD</italic> &#x0003D; 538 ms), with gestures for concrete content being longer than gestures for abstract content [3011 vs. 2266 ms, respectively; <italic>t</italic><sub>(102)</sub> &#x0003D; 9.78, <italic>P</italic> &#x0003C; 0.001].</p>
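The adjusted, non-integer degrees of freedom reported above (e.g., 113.51 or 180.94) arise from the Brown&#x02013;Forsythe correction, which replaces the pooled error term with a variance-weighted denominator and a Satterthwaite-type df. As a hedged illustration of the principle (the study used a two-factorial between-subjects version; shown here is the simpler one-way form, with hypothetical data):

```python
import numpy as np

def brown_forsythe_anova(*groups):
    """One-way Brown-Forsythe F*: robust to unequal group variances.
    Returns the F* statistic and the Satterthwaite-adjusted
    denominator degrees of freedom (typically non-integer)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = np.array([len(g) for g in groups])
    N = n.sum()
    means = np.array([g.mean() for g in groups])
    variances = np.array([g.var(ddof=1) for g in groups])
    grand_mean = np.concatenate(groups).mean()
    # Between-group sum of squares over a variance-weighted error term
    numerator = np.sum(n * (means - grand_mean) ** 2)
    denominator = np.sum((1 - n / N) * variances)
    f_star = numerator / denominator
    # Satterthwaite approximation of the denominator df
    c = (1 - n / N) * variances / denominator
    df_denom = 1.0 / np.sum(c ** 2 / (n - 1))
    return f_star, df_denom

# With equal group sizes and equal variances, F* matches the classical F
f, df = brown_forsythe_anova([1, 2, 3], [4, 5, 6])
```

With heterogeneous group variances the denominator df shrinks below the classical N &#x02212; k, which is why fractional values such as 113.51 appear in the statistics reported above.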
<p>Events for the fMRI statistical analysis were defined, in accordance with the bimodal German conditions [see, for example, Green et al. (<xref ref-type="bibr" rid="B25">2009</xref>); Straube et al. (<xref ref-type="bibr" rid="B78">2012</xref>)], as the moment of highest semantic correspondence between speech and gesture stroke (peak movement); each sentence contained only one element that could be illustrated, which the actor did intuitively. The events occurred on average 2036 ms (<italic>SD</italic> &#x0003D; 478 ms) after video onset and served as event onsets in the event-related fMRI analysis. Using these predefined integration time points (see Green et al., <xref ref-type="bibr" rid="B25">2009</xref>) had the advantage that the timing was identical for all conditions of one stimulus, since conditions were counterbalanced across subjects. Additionally, speech and gesture duration were included as parameters of no interest at the single-trial level to control for condition-specific differences in these parameters.</p>
</sec>
<sec>
<title>Experimental procedure</title>
<p>During fMRI data acquisition participants were presented with videos of an actor either speaking sentences (S) or performing meaningful gestures (G) with an abstract-social (A) or concrete-object-related (C) content. Gestures were accompanied by an unknown foreign language (Russian). Participants performed a content judgment task referring to the person vs. object-relatedness of the utterances.</p>
</sec>
<sec>
<title>fMRI data acquisition</title>
<p>All MRI data were acquired on a 3T scanner (Siemens MRT Trio series). Functional images were acquired using a T2-weighted echo planar image sequence (<italic>TR</italic> &#x0003D; 2 s, <italic>TE</italic> &#x0003D; 30 ms, flip angle 90&#x000B0;, slice thickness 4 mm with a 0.36 mm interslice gap, 64 &#x000D7; 64 matrix, FoV 230 mm, in-plane resolution 3.59 &#x000D7; 3.59 mm, 30 axial slices orientated parallel to the AC-PC line covering the whole brain). Two runs of 425 volumes were acquired during the experiment. The onset of each trial was synchronized to a scanner pulse.</p>
</sec>
<sec>
<title>Experimental design and procedure</title>
<p>An experimental session comprised 182 trials (26 per condition) and consisted of two 14-min blocks. Each block contained 91 trials with a matched number of items from each condition (13). The stimuli were presented in an event-related design in pseudo-randomized order, counterbalanced across subjects. As described above (see Stimulus material), each item appeared in its corresponding conditions across subjects, but a single participant saw only complementary derivatives of one item; i.e., the same sentence or gesture information was seen only once per participant. Each clip was followed by a gray background of variable duration (2154&#x02013;5846 ms; average jitter: 4000 ms).</p>
<p>Before scanning, each participant completed at least six practice trials outside the scanner to ensure a comprehensive understanding of the experimental task. Prior to the start of the experiment, the volume of the videos was adjusted individually so that the clips were clearly audible. During scanning, participants were instructed to watch the videos and to indicate via left-hand key presses whether the content of the sentence or the gesture referred to objects (index finger) or to interpersonal social information such as feelings or requests (middle finger). This task focused participants&#x00027; attention on the semantic content of speech and gesture and allowed us to investigate comprehension in a rather implicit manner. Performance rates and reaction times were recorded.</p>
</sec>
<sec>
<title>MRI data analysis</title>
<p>MR images were analyzed using Statistical Parametric Mapping (SPM8) standard routines and templates (<ext-link ext-link-type="uri" xlink:href="http://www.fil.ion.ucl.ac.uk">www.fil.ion.ucl.ac.uk</ext-link>). After discarding the first five volumes to minimize T1-saturation effects, all images were spatially and temporally realigned, normalized (resulting voxel size 2 &#x000D7; 2 &#x000D7; 2 mm<sup>3</sup>), smoothed (8 mm isotropic Gaussian filter) and high-pass filtered (cut-off period 128 s).</p>
<p>Statistical whole-brain analysis was performed in a two-level, mixed-effects procedure. At the first level, single-subject BOLD responses were modeled by a design matrix comprising the onsets of each event within the videos (see Stimulus material) of all seven experimental conditions. As an additional regressor, each video phase was modeled as a mini-block of 5 s duration. To control for condition-specific differences in speech and gesture duration, these stimulus characteristics were entered as parameters of no interest at the single-trial level. The hemodynamic response was modeled by the canonical hemodynamic response function (HRF). Parameter estimate (&#x003B2;-) images for the HRF were calculated for each condition and each subject. Parameter estimates for the four relevant conditions were entered into a within-subject flexible factorial ANOVA.</p>
<p>A Monte Carlo simulation of the brain volume was employed to establish an appropriate voxel contiguity threshold (Slotnick and Schacter, <xref ref-type="bibr" rid="B70">2004</xref>). This correction has the advantage of higher sensitivity to smaller effect sizes while still correcting for multiple comparisons across the whole brain volume. Assuming an individual voxel type I error of <italic>P</italic> &#x0003C; 0.001, a cluster extent of 50 contiguous resampled voxels was found necessary to correct for multiple voxel comparisons at <italic>P</italic> &#x0003C; 0.05. This cluster threshold (based on the whole brain volume) was applied to all contrasts. The reported voxel coordinates of activation peaks are in MNI space. For anatomical localization, functional data were referenced to probabilistic cytoarchitectonic maps (Eickhoff et al., <xref ref-type="bibr" rid="B17">2005</xref>) and the AAL toolbox (Tzourio-Mazoyer et al., <xref ref-type="bibr" rid="B82">2002</xref>).</p>
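The logic of this cluster-extent correction can be sketched as follows: simulate smooth Gaussian noise volumes, threshold each at the voxel-wise <italic>P</italic> &#x0003C; 0.001, record the largest surviving cluster per simulation, and take the size exceeded in fewer than 5% of simulations. This is a minimal schematic, not the actual Slotnick and Schacter (2004) tool; the volume shape, smoothing width, and simulation count below are placeholder values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def cluster_extent_threshold(shape=(32, 32, 32), n_sim=500,
                             z_crit=3.09, alpha=0.05,
                             smooth_sigma=1.5, seed=0):
    """Monte Carlo estimate of the cluster size needed so that noise
    clusters this large arise in fewer than alpha of simulations."""
    rng = np.random.default_rng(seed)
    max_sizes = []
    for _ in range(n_sim):
        # Smooth Gaussian noise mimicking spatially correlated fMRI noise
        vol = gaussian_filter(rng.standard_normal(shape), smooth_sigma)
        vol /= vol.std()  # restore unit variance after smoothing
        labeled, n_clusters = label(vol > z_crit)  # z_crit ~ P < 0.001
        if n_clusters == 0:
            max_sizes.append(0)
        else:
            max_sizes.append(int(np.bincount(labeled.ravel())[1:].max()))
    # Smallest extent exceeded by noise in fewer than alpha of runs
    return int(np.percentile(max_sizes, 100 * (1 - alpha))) + 1
```

Clusters at or above the returned extent then survive at a corrected <italic>P</italic> &#x0003C; 0.05, analogous to the 50-voxel threshold used here.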
</sec>
<sec>
<title>Contrasts of interest</title>
<p>The neural processing of abstract information was isolated by computing the difference contrasts of abstract-social vs. concrete-object-related sentences [AS &#x0003E; CS] and gestures [AG &#x0003E; CG], whereas the opposite contrasts were applied to reveal brain regions sensitive to the processing of concrete information communicated by speech [CS &#x0003E; AS] and gesture [CG &#x0003E; AG].</p>
<p>To identify regions commonly activated by both processes, these contrasts were entered into a conjunction analysis (abstract: [AS &#x0003E; CS &#x02229; AG &#x0003E; CG]; concrete: [CS &#x0003E; AS &#x02229; CG &#x0003E; AG]), testing whether each effect was independently significant at the same threshold (conjunction null, see Nichols et al., <xref ref-type="bibr" rid="B49">2005</xref>).</p>
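Under the conjunction-null logic of Nichols et al. (2005), a voxel counts as commonly activated only if each contrast exceeds the significance threshold on its own, which is equivalent to thresholding the voxel-wise minimum of the two statistic maps. A minimal sketch with hypothetical t-maps:

```python
import numpy as np

def conjunction_null(tmap_a, tmap_b, t_crit):
    """Minimum-statistic conjunction: keep a voxel only if BOTH
    contrasts exceed t_crit; surviving voxels carry min(t_a, t_b)."""
    conj = np.minimum(np.asarray(tmap_a, float), np.asarray(tmap_b, float))
    return np.where(conj > t_crit, conj, 0.0)

# Voxel 0 is significant in both maps; voxels 1 and 2 in only one each
out = conjunction_null([4.0, 2.0, 5.0], [3.5, 4.0, 1.0], t_crit=3.0)
# out -> [3.5, 0.0, 0.0]
```

This minimum-statistic formulation is stricter than a global-null conjunction, which can flag voxels where only one effect is reliably present.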
<p>The identical approach was applied to demonstrate the effect of modality, using the following conjunction analyses: for gesture semantics [AG &#x0003E; AS &#x02229; CG &#x0003E; CS] and for speech semantics [AS &#x0003E; AG &#x02229; CS &#x0003E; CG].</p>
<p>Finally, interaction analyses ([AS vs. AG] vs. [CS vs. CG]) were performed to explore modality-specific effects in the processing of abstract vs. concrete information. An inclusive masking procedure was used to ensure that all interactions were based on significant differences in the first contrast (e.g., [CG &#x0003E; CS] &#x0003E; [AG &#x0003E; AS] inclusively masked by [CG &#x0003E; CS]).</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Behavioral results</title>
<p>Subjects were instructed to indicate via button press whether the actor in the video described a socially related or an object-related action. Correct responses and their reaction times were each analyzed with a two-way within-subjects ANOVA with the repeated-measures factors modality (gesture vs. speech) and abstractness (abstract vs. concrete).</p>
<p>Correct responses showed a significant main effect of modality, with videos depicting gesture with Russian speech receiving slightly lower scores than videos depicting German speech only [21.8 vs. 22.95 out of 26, respectively; <italic>F</italic><sub>(1, 19)</sub> &#x0003D; 8.369, <italic>P</italic> &#x0003C; 0.05, partial eta-squared &#x0003D; 0.31]. A significant main effect of abstractness clearly indicated that videos describing abstract social content were identified correctly less often than videos showing concrete object-related content [20.3 vs. 24.45 out of 26, respectively; <italic>F</italic><sub>(1, 19)</sub> &#x0003D; 15.361, <italic>P</italic> &#x0003C; 0.001, partial eta-squared &#x0003D; 0.45]. The factors modality and abstractness also showed a modest but significant interaction effect on correct responses [<italic>F</italic><sub>(1, 19)</sub> &#x0003D; 4.572, <italic>P</italic> &#x0003C; 0.05, partial eta-squared &#x0003D; 0.19]: for videos depicting abstract content, the difference between gesture with Russian speech and German speech only was more pronounced than for videos showing concrete object-related content (Figure <xref ref-type="fig" rid="F2">2A</xref>).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Graphical illustration of the interaction effects of the two factors modality (gesture vs. speech) and abstractness (abstract vs. concrete) on (A) the number of correct responses in percent and on (B) the corresponding reaction times in ms (vertical lines indicate standard errors of the mean)</bold>.</p></caption>
<graphic xlink:href="fnbeh-07-00120-g0002.tif"/>
</fig>
<p>For each participant, the median reaction time in each condition was computed from all correct responses of that condition. A significant interaction of modality and abstractness [<italic>F</italic><sub>(1, 19)</sub> &#x0003D; 5.227, <italic>P</italic> &#x0003C; 0.05, partial eta-squared &#x0003D; 0.22] indicated that, while reaction times did not differ for videos depicting concrete content, participants responded faster to abstract content conveyed by gesture with Russian speech than to abstract content conveyed by German speech (Figure <xref ref-type="fig" rid="F2">2B</xref>).</p>
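The per-participant summary described above can be computed as follows (a sketch; the variable names and trial values are illustrative, not taken from the original analysis scripts):

```python
import numpy as np

def median_rts(rts, conditions, correct):
    """Median reaction time per condition, over correct trials only,
    as computed separately for each participant."""
    rts = np.asarray(rts, dtype=float)
    conditions = np.asarray(conditions)
    correct = np.asarray(correct, dtype=bool)
    return {c: float(np.median(rts[(conditions == c) & correct]))
            for c in np.unique(conditions)}

# Hypothetical trials from one participant: the incorrect AS trial
# (900 ms) is excluded before the median is taken
rt = median_rts([500, 700, 600, 900],
                ['AG', 'AG', 'AS', 'AS'],
                [True, True, True, False])
# rt -> {'AG': 600.0, 'AS': 600.0}
```

The median is preferred over the mean here because single-trial reaction times are right-skewed, so a few slow trials would otherwise dominate the condition estimate.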
</sec>
<sec>
<title>fMRI results</title>
<sec>
<title>Effects of modality</title>
<p>For the effect of gesture in contrast to speech semantics independent of the abstractness [AG &#x0003E; AS &#x02229; CG &#x0003E; CS] we found activation in bilateral occipital, parietal, and right frontal brain regions (see Table <xref ref-type="table" rid="T2">2</xref>, and Figure <xref ref-type="fig" rid="F3">3C</xref>, yellow). By contrast, for the processing of speech semantics independent of abstractness [AS &#x0003E; AG &#x02229; CS &#x0003E; CG] we found activations in the left anterior temporal lobe and the supramarginal gyrus (see Table <xref ref-type="table" rid="T2">2</xref>, and Figure <xref ref-type="fig" rid="F3">3D</xref>, yellow).</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p><bold>Activation peaks and anatomical regions comprising activated clusters for the conjunction contrasts representing effects of modality (speech vs. gesture and vice versa)</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left"><bold>Contrast</bold></th>
<th align="left"><bold>Anatomical regions/hem</bold>.</th>
<th align="left"><bold>No. voxels</bold></th>
<th align="center" colspan="3"><bold>Peak MNI coordinates</bold></th>
<th align="left"><bold><italic>t</italic>-value</bold></th>
</tr>
<tr>
<th/>
<th/>
<th/>
<th align="left"><bold><italic>x</italic></bold></th>
<th align="left"><bold><italic>y</italic></bold></th>
<th align="left"><bold><italic>z</italic></bold></th>
<th/>
</tr>
</thead>
<tbody>
<tr>
<td align="left">AS &#x0003E; AG &#x02229; CS &#x0003E; CG</td>
<td align="left">Middle temporal gyrus L</td>
<td align="left">673</td>
<td align="left">&#x02212;52</td>
<td align="left">&#x02212;12</td>
<td align="left">&#x02212;20</td>
<td align="char" char=".">5.61</td>
</tr>
<tr>
<td/>
<td align="left">Middle temporal pole L</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">Angular gyrus L</td>
<td align="left">166</td>
<td align="left">&#x02212;54</td>
<td align="left">&#x02212;68</td>
<td align="left">34</td>
<td align="char" char=".">4.81</td>
</tr>
<tr>
<td/>
<td align="left">Precuneus L</td>
<td align="left">69</td>
<td align="left">&#x02212;4</td>
<td align="left">&#x02212;56</td>
<td align="left">34</td>
<td align="char" char=".">3.78</td>
</tr>
<tr>
<td align="left">AG &#x0003E; AS &#x02229; CG &#x0003E; CS</td>
<td align="left">Middle occipital gyrus L</td>
<td align="left">6691</td>
<td align="left">&#x02212;48</td>
<td align="left">&#x02212;74</td>
<td align="left">4</td>
<td align="char" char=".">19.91</td>
</tr>
<tr>
<td/>
<td align="left">Inferior temporal gyrus L</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">Middle temporal gyrus R</td>
<td align="left">9536</td>
<td align="left">50</td>
<td align="left">&#x02212;62</td>
<td align="left">0</td>
<td align="char" char=".">19.62</td>
</tr>
<tr>
<td/>
<td align="left">Fusiform gyrus R</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">Superior occipital gyrus R</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">IFG, pars opercularis R</td>
<td align="left">1313</td>
<td align="left">44</td>
<td align="left">10</td>
<td align="left">28</td>
<td align="char" char=".">6.72</td>
</tr>
<tr>
<td/>
<td align="left">Middle frontal gyrus R</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">Precentral gyrus R</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">Supramarginal gyrus L</td>
<td align="left">202</td>
<td align="left">&#x02212;62</td>
<td align="left">&#x02212;36</td>
<td align="left">32</td>
<td align="char" char=".">4.56</td>
</tr>
<tr>
<td/>
<td align="left">Superior parietal lobe L</td>
<td align="left">299</td>
<td align="left">&#x02212;38</td>
<td align="left">&#x02212;54</td>
<td align="left">60</td>
<td align="char" char=".">4.22</td>
</tr>
<tr>
<td/>
<td align="left">Inferior parietal lobe L</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Table lists the respective contrast, anatomical regions, cluster size, MNI coordinates, and t-values for each significant activation (p &#x0003C; 0.05 corrected for multiple comparisons). MNI, Montreal Neurological Institute; AS, abstract speech; AG, abstract gesture; CS, concrete speech; CG, concrete gesture; IFG, inferior frontal gyrus; L, left; R, right.</italic></p>
</table-wrap-foot>
</table-wrap>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>fMRI results for abstract semantics (red), concrete semantics (green), and common neural structures (yellow) for each condition in contrast to low-level baseline (gray background; A, gesture; B, German speech), for the gesture conditions in contrast to the German conditions (C), and for the German conditions in contrast to the gesture conditions (D)</bold>. Results were rendered on brain slices and surface using the MRIcron toolbox (<ext-link ext-link-type="uri" xlink:href="http://www.mccauslandcenter.sc.edu/mricro/mricron/install.html">http://www.mccauslandcenter.sc.edu/mricro/mricron/install.html</ext-link>).</p></caption>
<graphic xlink:href="fnbeh-07-00120-g0003.tif"/>
</fig>
<p>The exploration of general activation for each condition in contrast to low-level baseline (gray background) indicates that other regions are commonly activated in all conditions (Figures <xref ref-type="fig" rid="F3">3A,B</xref>). Most interestingly, the IFG seems to be activated bilaterally in the gesture conditions (Figure <xref ref-type="fig" rid="F3">3A</xref>) and left lateralized in the speech conditions (Figure <xref ref-type="fig" rid="F3">3B</xref>).</p>
</sec>
<sec>
<title>Within modality effects of abstractness</title>
<p>Analyses targeting the within-modality processing of abstractness in language semantics [AS &#x0003E; CS] showed activation in a mainly left-lateralized network encompassing an extended fronto-temporal cluster (IFG, precentral gyrus, middle, inferior, and superior temporal gyri) as well as medial frontal regions and the right anterior middle temporal gyrus (Table <xref ref-type="table" rid="T3">3</xref> and Figure <xref ref-type="fig" rid="F4">4</xref> top, blue). We obtained a comparable activation pattern for the within-modality processing of abstractness in gesture semantics ([AG &#x0003E; CG]; see Figure <xref ref-type="fig" rid="F4">4</xref> top, yellow). The opposite contrasts revealed activation in clusters encompassing the left cerebellum, fusiform gyrus, and inferior temporal gyrus for the language contrast (CS &#x0003E; AS; see Figure <xref ref-type="fig" rid="F4">4</xref> bottom, blue) and the bilateral occipital lobe for the gesture contrast (CG &#x0003E; AG; see Figure <xref ref-type="fig" rid="F4">4</xref> bottom, yellow).</p>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p><bold>Activation peaks and anatomical regions comprising activated clusters for the contrasts representing effects of abstractness (abstract vs. concrete and vice versa) within each modality (speech or gesture)</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left"><bold>Contrast</bold></th>
<th align="left"><bold>Anatomical regions/hem</bold>.</th>
<th align="left"><bold>No. Voxels</bold></th>
<th align="center" colspan="3"><bold>Peak MNI coordinates</bold></th>
<th align="left"><bold><italic>t</italic>-value</bold></th>
</tr>
<tr>
<th/>
<th/>
<th/>
<th align="left"><bold><italic>x</italic></bold></th>
<th align="left"><bold><italic>y</italic></bold></th>
<th align="left"><bold><italic>z</italic></bold></th>
<th/>
</tr>
</thead>
<tbody>
<tr>
<td align="left">AS &#x0003E; CS</td>
<td align="left">Middle temporal gyrus L</td>
<td align="left">3150</td>
<td align="left">&#x02212;52</td>
<td align="left">&#x02212;34</td>
<td align="left">&#x02212;6</td>
<td align="left">5.98</td>
</tr>
<tr>
<td/>
<td align="left">IFG, pars orbitalis L</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">Medial superior frontal gyrus L</td>
<td align="left">1441</td>
<td align="left">&#x02212;8</td>
<td align="left">56</td>
<td align="left">34</td>
<td align="left">5.72</td>
</tr>
<tr>
<td/>
<td align="left">Middle temporal pole R</td>
<td align="left">289</td>
<td align="left">48</td>
<td align="left">12</td>
<td align="left">&#x02212;34</td>
<td align="left">5.16</td>
</tr>
<tr>
<td/>
<td align="left">Middle temporal gyrus R</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">Angular gyrus L</td>
<td align="left">458</td>
<td align="left">&#x02212;42</td>
<td align="left">&#x02212;58</td>
<td align="left">24</td>
<td align="left">4.41</td>
</tr>
<tr>
<td/>
<td align="left">Precentral gyrus L</td>
<td align="left">248</td>
<td align="left">&#x02212;38</td>
<td align="left">0</td>
<td align="left">62</td>
<td align="left">4.36</td>
</tr>
<tr>
<td/>
<td align="left">Precuneus L</td>
<td align="left">195</td>
<td align="left">&#x02212;8</td>
<td align="left">&#x02212;50</td>
<td align="left">34</td>
<td align="left">4.22</td>
</tr>
<tr>
<td align="left">AG &#x0003E; CG</td>
<td align="left">Superior temporal pole L</td>
<td align="left">910</td>
<td align="left">&#x02212;36</td>
<td align="left">18</td>
<td align="left">&#x02212;24</td>
<td align="left">5.43</td>
</tr>
<tr>
<td/>
<td align="left">IFG, pars triangularis L</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">IFG, pars orbitalis L</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">Medial superior frontal gyrus L</td>
<td align="left">3215</td>
<td align="left">&#x02212;4</td>
<td align="left">30</td>
<td align="left">54</td>
<td align="left">5.10</td>
</tr>
<tr>
<td/>
<td align="left">Angular gyrus L</td>
<td align="left">682</td>
<td align="left">&#x02212;60</td>
<td align="left">&#x02212;60</td>
<td align="left">30</td>
<td align="left">4.64</td>
</tr>
<tr>
<td/>
<td align="left">Caudate nucleus R</td>
<td align="left">786</td>
<td align="left">12</td>
<td align="left">2</td>
<td align="left">8</td>
<td align="left">4.54</td>
</tr>
<tr>
<td/>
<td align="left">Thalamus L</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">Middle temporal gyrus L</td>
<td align="left">209</td>
<td align="left">&#x02212;48</td>
<td align="left">&#x02212;16</td>
<td align="left">&#x02212;18</td>
<td align="left">4.04</td>
</tr>
<tr>
<td align="left">AS &#x0003E; CS &#x02229; AG &#x0003E; CG</td>
<td align="left">Medial superior frontal gyrus L</td>
<td align="left">1015</td>
<td align="left">&#x02212;8</td>
<td align="left">56</td>
<td align="left">30</td>
<td align="left">4.97</td>
</tr>
<tr>
<td/>
<td align="left">Superior temporal pole L</td>
<td align="left">779</td>
<td align="left">&#x02212;36</td>
<td align="left">18</td>
<td align="left">&#x02212;22</td>
<td align="left">4.93</td>
</tr>
<tr>
<td/>
<td align="left">IFG, pars triangularis L</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">IFG, pars orbitalis L</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">Middle temporal gyrus L</td>
<td align="left">161</td>
<td align="left">&#x02212;48</td>
<td align="left">&#x02212;14</td>
<td align="left">&#x02212;20</td>
<td align="left">3.99</td>
</tr>
<tr>
<td/>
<td align="left">Angular gyrus L</td>
<td align="left">253</td>
<td align="left">&#x02212;54</td>
<td align="left">&#x02212;56</td>
<td align="left">26</td>
<td align="left">3.95</td>
</tr>
<tr>
<td align="left">CS &#x0003E; AS</td>
<td align="left">Cerebellum L</td>
<td align="left">580</td>
<td align="left">&#x02212;32</td>
<td align="left">&#x02212;36</td>
<td align="left">&#x02212;28</td>
<td align="left">5.95</td>
</tr>
<tr>
<td/>
<td align="left">Inferior temporal gyrus L</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="left">Fusiform gyrus L</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td align="left">CG &#x0003E; AG</td>
<td align="left">Middle occipital gyrus L</td>
<td align="left">1046</td>
<td align="left">&#x02212;44</td>
<td align="left">&#x02212;76</td>
<td align="left">8</td>
<td align="left">5.86</td>
</tr>
<tr>
<td/>
<td align="left">Middle temporal gyrus R</td>
<td align="left">285</td>
<td align="left">50</td>
<td align="left">&#x02212;62</td>
<td align="left">2</td>
<td align="left">4.73</td>
</tr>
<tr>
<td align="left">CS &#x0003E; AS &#x02229; CG &#x0003E; AG</td>
<td/>
<td/>
<td/>
<td/>
<td/>
<td align="left">n.s.</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Table lists the respective contrast, anatomical regions, cluster size, MNI coordinates, and t-values for each significant activation (p &#x0003C; 0.05 corrected for multiple comparisons). MNI, Montreal Neurological Institute; AS, abstract speech; AG, abstract gesture; CS, concrete speech; CG, concrete gesture; IFG, inferior frontal gyrus; L, left; R, right.</italic></p>
</table-wrap-foot>
</table-wrap>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Top illustrates the within-modality processing of abstractness in language semantics ([AS &#x0003E; CS], blue), gesture semantics ([AG &#x0003E; CG], yellow), and in common neural structures (green, overlapping regions)</bold>. Bar graphs in the <bold>middle</bold> of the figure illustrate the contrast estimates (extracted eigenvariates) for the commonly activated (green) medial superior frontal (left) and temporal pole/IFG clusters (right). These are representative of all overlapping activation clusters. The within-modality processing of concrete in contrast to abstract language semantics ([CS &#x0003E; AS], blue) and gesture semantics ([CG &#x0003E; AG], yellow) is illustrated at the <bold>bottom</bold> of the figure. Here we found no overlap between activation patterns.</p></caption>
<graphic xlink:href="fnbeh-07-00120-g0004.tif"/>
</fig>
</sec>
<sec>
<title>Common activations for abstractness contained in gestures and spoken language</title>
<p>Processing of abstract information independent of input modality, as disclosed by the conjunction [AS &#x0003E; CS &#x02229; AG &#x0003E; CG], was related to a left-sided fronto-temporal cluster including the temporal pole, the IFG (pars triangularis and orbitalis), and the middle temporal and angular gyri, as well as the medial superior frontal gyrus (Table <xref ref-type="table" rid="T3">3</xref> and Figure <xref ref-type="fig" rid="F4">4</xref> top middle/right, green). The opposite conjunction analysis [CS &#x0003E; AS &#x02229; CG &#x0003E; AG] revealed no significant common activation for the processing of concrete in contrast to abstract information.</p>
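<p>The conjunction above requires a voxel to be significant in both contrasts at once. A minimal sketch of this minimum-statistic logic follows; the t-values and threshold are illustrative inventions, not values from the study's actual SPM analysis.</p>

```python
import numpy as np

# Toy voxel-wise t-maps for the two contrasts; all numbers here are
# illustrative, not taken from the study's analysis pipeline.
t_abstract_speech = np.array([5.1, 2.9, 4.2, 0.8])   # abstract vs. concrete speech
t_abstract_gesture = np.array([4.4, 3.8, 1.1, 0.5])  # abstract vs. concrete gesture

t_crit = 3.17  # assumed voxel-level threshold

# Minimum-statistic conjunction: a voxel survives only if it exceeds
# the threshold in BOTH contrasts (a logical AND of the two maps).
conj_t = np.minimum(t_abstract_speech, t_abstract_gesture)
conj_mask = conj_t > t_crit

print(conj_mask.tolist())  # [True, False, False, False]
```

<p>The same minimum rule applies voxel-wise to whole 3D t-maps; the opposite conjunction for concrete information follows identically with the contrast directions reversed.</p>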
</sec>
<sec>
<title>Interaction</title>
<p>No significant activation was identified in the interaction analyses at the selected significance threshold. However, when correcting for multiple comparisons with a different combination of voxel-level threshold and cluster extent (<italic>p</italic> &#x0003C; 0.005 and 86 voxels), as indicated by an additional Monte Carlo simulation, we found an interaction in occipital (MNI <italic>x, y, z</italic>: &#x02212;20, &#x02212;90, &#x02212;8, <italic>t</italic> &#x0003D; 3.63, <italic>p</italic> &#x0003C; 0.001, 140 voxels), parietal (MNI <italic>x, y, z</italic>: &#x02212;34, &#x02212;48, 68, <italic>t</italic> &#x0003D; 3.80, <italic>p</italic> &#x0003C; 0.001, 143 voxels; MNI <italic>x, y, z</italic>: &#x02212;34, &#x02212;40, 48, <italic>t</italic> &#x0003D; 3.11, <italic>p</italic> &#x0003C; 0.001, 88 voxels) and premotor (MNI <italic>x, y, z</italic>: &#x02212;34, &#x02212;4, 62, <italic>t</italic> &#x0003D; 3.55, <italic>p</italic> &#x0003C; 0.001, 129 voxels) regions, reflecting a specific increase of activation in these regions for the processing of concrete-object-related gesture meaning ([CG &#x0003E; CS] &#x0003E; [AG &#x0003E; AS], inclusively masked by [CG &#x0003E; CS]).</p>
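<p>The Monte Carlo approach to cluster-extent correction mentioned above can be sketched as follows: threshold many pure-noise maps at the voxel level, record the largest cluster arising by chance in each, and take a cluster size exceeded in fewer than 5% of null maps as the corrected threshold. The function names, 2D grid, and iteration count below are illustrative choices of this sketch, not the study's implementation.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def max_cluster_size(mask):
    """Largest 4-connected cluster in a 2D boolean mask (iterative
    flood fill; real fMRI pipelines work in 3D with 18/26-connectivity)."""
    mask = mask.copy()
    rows, cols = mask.shape
    best = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i, j]:
                mask[i, j] = False
                stack, size = [(i, j)], 0
                while stack:
                    x, y = stack.pop()
                    size += 1
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if nx in range(rows) and ny in range(cols) and mask[nx, ny]:
                            mask[nx, ny] = False
                            stack.append((nx, ny))
                best = max(best, size)
    return best

z_crit = 2.576  # voxel-level threshold, roughly one-tailed p of 0.005

# Null simulations: threshold pure-noise maps and record the largest
# cluster that arises by chance alone.
null_max = [max_cluster_size(rng.standard_normal((40, 40)) > z_crit)
            for _ in range(200)]

# Cluster-extent threshold: a size exceeded in fewer than 5% of null
# maps. Unsmoothed toy noise yields only tiny clusters; simulating the
# data's actual spatial smoothness is what produces large empirical
# thresholds such as the 86 voxels reported in the text.
k_thresh = int(np.percentile(null_max, 95)) + 1
print(k_thresh)
```

<p>Tools such as AFNI's 3dClustSim implement this idea on the real image geometry and estimated smoothness.</p>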
</sec>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>We hypothesized that the processing of abstract semantic information from spoken language and symbolic emblematic gestures is based on a common neural network. Our study design tailored the comparison to the level of abstract semantics, controlling for the processing of general semantic meaning of speech and gesture by using highly meaningful concrete object-related information as the control condition. The results demonstrate that the pathways engaged in the processing of semantics contained in both abstract spoken language and abstract-social gestures comprise the temporal pole, the IFG (pars triangularis and orbitalis), and the middle temporal, angular, and superior frontal gyri. Thus, in line with our hypothesis, we found modality-independent activation in a left-hemispheric fronto-temporal network for the processing of abstract information. The strongly left-lateralized activation pattern supports the theory that abstract semantics is represented in language independent of communication modality (at least at the neural level, in language-related brain regions).</p>
<sec>
<title>Effects of modality</title>
<p>The results of the speech [CS &#x0003E; CG &#x02229; AS &#x0003E; AG] and gesture [CG &#x0003E; CS &#x02229; AG &#x0003E; AS] contrasts clearly demonstrate that communication modality affects neural processing in the brain independent of the communication content (abstract/concrete). In line with other studies that contrasted the processing of a native against an unknown foreign language (Perani et al., <xref ref-type="bibr" rid="B53">1996</xref>; Schlosser et al., <xref ref-type="bibr" rid="B66">1998</xref>; Pallier et al., <xref ref-type="bibr" rid="B51">2003</xref>; Straube et al., <xref ref-type="bibr" rid="B78">2012</xref>), we found activation along the left temporal lobe (including STG, MTG, and ITG) for German speech contrasted with Russian speech and gesture. This strongly left-lateralized pattern has been found in all of the above-mentioned studies. Apart from these studies with conditions very similar to ours, temporal as well as inferior frontal regions have been frequently implicated in various language tasks (for reviews see Bookheimer, <xref ref-type="bibr" rid="B7">2002</xref>; Vigneau et al., <xref ref-type="bibr" rid="B86">2006</xref>; Price, <xref ref-type="bibr" rid="B57">2010</xref>). The lack of IFG activation in our study probably reflects the fact that we compared a native language (CS, AS) with a foreign language that was accompanied by a meaningful gesture (CG, AG). Thus, motoric or semantic processes of the left IFG might be equally involved in the speech and gesture conditions, as indicated by baseline contrasts (see Figures <xref ref-type="fig" rid="F3">3A,B</xref>).</p>
<p>In line with studies on action observation (e.g., Decety et al., <xref ref-type="bibr" rid="B14">1997</xref>; Decety and Gr&#x000E8;zes, <xref ref-type="bibr" rid="B13">1999</xref>; Gr&#x000E8;zes and Decety, <xref ref-type="bibr" rid="B26">2001</xref>; Filimon et al., <xref ref-type="bibr" rid="B21">2007</xref>) and co-verbal gesture processing (e.g., Green et al., <xref ref-type="bibr" rid="B25">2009</xref>; Kircher et al., <xref ref-type="bibr" rid="B36">2009b</xref>; Straube et al., <xref ref-type="bibr" rid="B71">2011a</xref>), we found a bilaterally distributed network of activation, including occipital, parietal, posterior temporal, and right frontal brain regions, for the processing of gesture in contrast to speech information.</p>
</sec>
<sec>
<title>Supramodal processing of abstract semantics of speech and gesture</title>
<p>The processing of abstract spoken language semantics (AS &#x0003E; CS) and of abstract semantic information conveyed through abstract-social in contrast to concrete-object-related gestures (AG &#x0003E; CG) activated an overlapping network of brain regions. This network includes a cluster in the left inferior frontal cortex (BA 44, 45), which extended into the temporal pole and the left inferior and middle temporal gyri, as well as a cluster in the left medial superior frontal gyrus. These findings support the model of a supramodal semantic network for the processing of abstract information. By contrast, for concrete vs. abstract information we obtained no overlapping activation.</p>
<p>These results extend studies from both the gesture and the language domain (see above) in showing a common neural representation of specific speech and gesture semantics. Furthermore, the findings go beyond previous reports of common activation for symbolic gestures and speech semantics (Xu et al., <xref ref-type="bibr" rid="B93">2009</xref>), in showing specific effects for abstract but not concrete speech and gesture information. Interestingly, we previously found similar activation of the left IFG and temporal brain regions for the processing of concrete speech and gesture semantics of iconic gestures (Straube et al., <xref ref-type="bibr" rid="B78">2012</xref>). Although iconic gestures are not symbolic and usually occur in a concrete sentence context (e.g., &#x0201C;The ball is round,&#x0201D; using both hands to indicate a round shape), they might convey rather abstract information without speech, since hardly any concrete meaning can be derived from these iconic gestures outside this context. Thus, the left IFG activation in our previous study could also be explained by an abstract interpretation of isolated iconic gestures (Straube et al., <xref ref-type="bibr" rid="B78">2012</xref>).</p>
<p>The left-lateralization of our findings is congruent with the majority of fMRI studies on language (see Bookheimer, <xref ref-type="bibr" rid="B7">2002</xref>; Price, <xref ref-type="bibr" rid="B57">2010</xref>, for reviews). Left fronto-temporal activations have been frequently observed for semantic processing [e.g., Gaillard et al., <xref ref-type="bibr" rid="B23">2004</xref>; for a review see Vigneau et al. (<xref ref-type="bibr" rid="B86">2006</xref>)], the decoding of meaningful actions (e.g., Decety et al., <xref ref-type="bibr" rid="B14">1997</xref>; Gr&#x000E8;zes and Decety, <xref ref-type="bibr" rid="B26">2001</xref>) and also with regard to co-verbal gesture processing (Willems et al., <xref ref-type="bibr" rid="B91">2007</xref>, <xref ref-type="bibr" rid="B92">2009</xref>; Holle et al., <xref ref-type="bibr" rid="B29">2008</xref>, <xref ref-type="bibr" rid="B30">2010</xref>; Kircher et al., <xref ref-type="bibr" rid="B36">2009b</xref>; Straube et al., <xref ref-type="bibr" rid="B71">2011a</xref>).</p>
<p>With regard to the inferior frontal activations, functional imaging studies have underlined the importance of this region in the processing of language semantics. The junction of the precentral gyrus and the pars opercularis of the left IFG has been implicated in controlled semantic retrieval (Thompson-Schill et al., <xref ref-type="bibr" rid="B79">1997</xref>; Wiggs et al., <xref ref-type="bibr" rid="B88">1999</xref>; Wagner et al., <xref ref-type="bibr" rid="B87">2001</xref>), semantic priming (Sachs et al., <xref ref-type="bibr" rid="B61">2008a</xref>,<xref ref-type="bibr" rid="B62">b</xref>, <xref ref-type="bibr" rid="B63">2011</xref>; Kircher et al., <xref ref-type="bibr" rid="B35">2009a</xref>; Sass et al., <xref ref-type="bibr" rid="B64">2009a</xref>,<xref ref-type="bibr" rid="B65">b</xref>), and a supramodal network for the semantic processing of words and pictures (Kircher et al., <xref ref-type="bibr" rid="B35">2009a</xref>). The middle frontal gyrus (MFG) has been found to be activated by intramodal semantic priming (e.g., Tivarus et al., <xref ref-type="bibr" rid="B80">2006</xref>). However, the medial frontal activation in our study might be better explained by differences in social-emotional content between conditions, as this region has often been implicated in social functioning, social cognition, theory of mind, and mentalizing (e.g., Uchiyama et al., <xref ref-type="bibr" rid="B83">2006</xref>, <xref ref-type="bibr" rid="B84">2012</xref>; Krach et al., <xref ref-type="bibr" rid="B39">2009</xref>; Straube et al., <xref ref-type="bibr" rid="B74">2010</xref>).</p>
<p>Since semantic memory represents the basis of semantic processing, an amodal semantic memory (Patterson et al., <xref ref-type="bibr" rid="B52">2007</xref>) is a likely explanation for how speech and gesture semantics could activate a common neural network. Our findings suggest supramodal semantic processing in regions including the left temporal pole, which has been described as the best candidate for a supramodal semantic &#x0201C;hub&#x0201D; (Patterson et al., <xref ref-type="bibr" rid="B52">2007</xref>). Thus, abstract semantic information contained in speech and gestures might have activated supramodal semantic knowledge in our study more strongly than concrete information communicated by speech and gesture.</p>
<p>Our data also partially coincide with Binder and Desai&#x00027;s (<xref ref-type="bibr" rid="B6">2011</xref>) neuroanatomical model of semantic processing: in this model, low-level (concrete) sensory, action, and emotion semantics are processed in brain areas located near the corresponding perceptual networks, whereas higher-level (abstract) semantics converges in temporal and inferior parietal regions (Binder and Desai, <xref ref-type="bibr" rid="B6">2011</xref>). Additionally, as a next step, inferior prefrontal cortices are responsible for the selection of the information stored in temporo-parietal cortices. In the current experiment, abstract information activated both temporal and inferior frontal cortices, which could be considered evidence supporting the role of fronto-temporal pathways in the processing of higher-level semantics. More importantly, our results suggest that this processing of abstract information is independent of input modality.</p>
<p>As for the processing of concrete semantics, our results are somewhat surprising, because we did not find an overlap between gestural and verbal-auditory input. This result falls outside the predictions of both strict embodiment theories (Barsalou, <xref ref-type="bibr" rid="B4">1999</xref>; Gallese and Lakoff, <xref ref-type="bibr" rid="B24">2005</xref>; Pulverm&#x000FC;ller and Fadiga, <xref ref-type="bibr" rid="B58">2010</xref>) and theories proposing less strict embodiment: all of these theories would predict that the concrete semantics in our experiment, being predominantly action-driven, would activate motor-related brain regions such as (pre-)motor and parietal cortices, and that this activation pattern should be independent of the input modality. However, previous support for these theories is based on studies using single words (e.g., Willems et al., <xref ref-type="bibr" rid="B90">2010</xref>; Moseley et al., <xref ref-type="bibr" rid="B47">2012</xref>) instead of sentences, which might increase task effort and specifically trigger motor simulation. Thus, one explanation for the discrepancy between studies could be that we investigated the processing of tool-use information in a sentence context (see Tremblay and Small, <xref ref-type="bibr" rid="B81">2011</xref>). Here, motor simulation might not be necessary, since contextual information facilitates semantic access (e.g., the blacksmith primes the hammer).</p>
<p>Our results are also in line with a recent mathematically motivated language-cognition model proposed by Perlovsky and Ilin (<xref ref-type="bibr" rid="B55">2013</xref>). This model suggests that high-level abstract thinking relies on the language system, whereas low-level, concrete thinking does not necessarily have to. Transferred to a neural perspective, both abstract meaning (irrespective of input modality) and language processing would recruit similar neural networks. In our experiment, the left-lateralized network for abstract meaning comprehension fits this prediction well. Although it remains unclear how language and higher-level thinking are related at a functional level, our study provides initial neural evidence closely connecting the two domains.</p>
</sec>
</sec>
<sec sec-type="conclusion" id="s5">
<title>Conclusion</title>
<p>Language is not only a communication device, but also a fundamental part of cognition and concept learning, especially with respect to abstract concepts (Perlovsky and Ilin, <xref ref-type="bibr" rid="B55">2013</xref>). In recent years, the understanding of speech and gesture processing has increased; both communication channels have been disentangled and brought together again. Here we investigated the neural correlates of abstractness (abstract vs. concrete) and modality (speech vs. gesture) to demonstrate the existence of an abstractness-specific supramodal neural network.</p>
<p>In fact, we demonstrated the activation of a supramodal network for abstract speech and abstract gesture semantics. The identified left-lateralized fronto-temporal network not only maps sound patterns onto their corresponding abstract meanings in the auditory domain, but also combines gestures and their abstract meanings in the gestural-visual domain. This modality-independent network most likely receives input from modality-specific areas in the superior temporal (speech) and occipito-temporal (gesture) brain regions, where the main characteristics of the spoken and gestured signals are decoded. The inferior frontal regions are responsible for the processes of selection and integration, relying on more general world knowledge distributed throughout the brain (Xu et al., <xref ref-type="bibr" rid="B93">2009</xref>). The challenge for future studies will be to identify the specific aspects of speech and gesture semantics, or the respective formats, relevant for understanding natural receptive and productive communicative behavior and its dysfunctions in patients, for example with schizophrenia or autism (Hubbard et al., <xref ref-type="bibr" rid="B31">2012</xref>; Straube et al., <xref ref-type="bibr" rid="B76">2013a</xref>,<xref ref-type="bibr" rid="B75">b</xref>).</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
</sec>
</body>
<back>
<ack>
<p>This research project is supported by a grant from the &#x0201C;Von Behring-R&#x000F6;ntgen-Stiftung&#x0201D; (project no. 59-0002) and by the &#x0201C;Deutsche Forschungsgemeinschaft&#x0201D; (project no. DFG: Ki 588/6-1). Yifei He and Helge Gebhardt are supported by the &#x0201C;Von Behring-R&#x000F6;ntgen-Stiftung&#x0201D; (project no. 59-0002). Arne Nagels and Miriam Steines are supported by the DFG (project no. Ki 588/6-1). Benjamin Straube is supported by the BMBF (project no. 01GV0615).</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Andric</surname> <given-names>M.</given-names></name> <name><surname>Small</surname> <given-names>S. L.</given-names></name></person-group> (<year>2012</year>). <article-title>Gesture&#x00027;s neural language</article-title>. <source>Front. Psychol</source>. <volume>3</volume>:<issue>99</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2012.00099</pub-id><pub-id pub-id-type="pmid">22485103</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Andric</surname> <given-names>M.</given-names></name> <name><surname>Solodkin</surname> <given-names>A.</given-names></name> <name><surname>Buccino</surname> <given-names>G.</given-names></name> <name><surname>Goldin-Meadow</surname> <given-names>S.</given-names></name> <name><surname>Rizzolatti</surname> <given-names>G.</given-names></name> <name><surname>Small</surname> <given-names>S. L.</given-names></name></person-group> (<year>2013</year>). <article-title>Brain function overlaps when people observe emblems, speech, and grasping</article-title>. <source>Neuropsychologia</source> <volume>51</volume>, <fpage>1619</fpage>&#x02013;<lpage>1629</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2013.03.022</pub-id><pub-id pub-id-type="pmid">23583968</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arbib</surname> <given-names>M. A.</given-names></name></person-group> (<year>2008</year>). <article-title>From grasp to language: embodied concepts and the challenge of abstraction</article-title>. <source>J. Physiol. Paris</source> <volume>102</volume>, <fpage>4</fpage>&#x02013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1016/j.jphysparis.2008.03.001</pub-id><pub-id pub-id-type="pmid">18440207</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barsalou</surname> <given-names>L. W.</given-names></name></person-group> (<year>1999</year>). <article-title>Perceptual symbol systems</article-title>. <source>Behav. Brain Sci</source>. <volume>22</volume>, <fpage>577</fpage>&#x02013;<lpage>609</lpage>. discussion: 610&#x02013;660. <pub-id pub-id-type="doi">10.1017/S0140525X99002149</pub-id><pub-id pub-id-type="pmid">11301525</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Biagi</surname> <given-names>L.</given-names></name> <name><surname>Cioni</surname> <given-names>G.</given-names></name> <name><surname>Fogassi</surname> <given-names>L.</given-names></name> <name><surname>Guzzetta</surname> <given-names>A.</given-names></name> <name><surname>Tosetti</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Anterior intraparietal cortex codes complexity of observed hand movements</article-title>. <source>Brain Res. Bull</source>. <volume>81</volume>, <fpage>434</fpage>&#x02013;<lpage>440</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainresbull.2009.12.002</pub-id><pub-id pub-id-type="pmid">20006682</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Binder</surname> <given-names>J. R.</given-names></name> <name><surname>Desai</surname> <given-names>R. H.</given-names></name></person-group> (<year>2011</year>). <article-title>The neurobiology of semantic memory</article-title>. <source>Trends Cogn. Sci</source>. <volume>15</volume>, <fpage>527</fpage>&#x02013;<lpage>536</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2011.10.001</pub-id><pub-id pub-id-type="pmid">22001867</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bookheimer</surname> <given-names>S.</given-names></name></person-group> (<year>2002</year>). <article-title>Functional MRI of language: new approaches to understanding the cortical organization of semantic processing</article-title>. <source>Annu. Rev. Neurosci</source>. <volume>25</volume>, <fpage>151</fpage>&#x02013;<lpage>188</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.25.112701.142946</pub-id><pub-id pub-id-type="pmid">12052907</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Buxbaum</surname> <given-names>L. J.</given-names></name> <name><surname>Kyle</surname> <given-names>K. M.</given-names></name> <name><surname>Menon</surname> <given-names>R.</given-names></name></person-group> (<year>2005</year>). <article-title>On beyond mirror neurons: internal representations subserving imitation and recognition of skilled object-related actions in humans</article-title>. <source>Brain Res. Cogn. Brain Res</source>. <volume>25</volume>, <fpage>226</fpage>&#x02013;<lpage>239</lpage>. <pub-id pub-id-type="doi">10.1016/j.cogbrainres.2005.05.014</pub-id><pub-id pub-id-type="pmid">15996857</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cardillo</surname> <given-names>E. R.</given-names></name> <name><surname>Schmidt</surname> <given-names>G. L.</given-names></name> <name><surname>Kranjec</surname> <given-names>A.</given-names></name> <name><surname>Chatterjee</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>Stimulus design is an obstacle course: 560 matched literal and metaphorical sentences for testing neural hypotheses about metaphor</article-title>. <source>Behav. Res. Methods</source> <volume>42</volume>, <fpage>651</fpage>&#x02013;<lpage>664</lpage>. <pub-id pub-id-type="doi">10.3758/BRM.42.3.651</pub-id><pub-id pub-id-type="pmid">20805587</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cornejo</surname> <given-names>C.</given-names></name> <name><surname>Simonetti</surname> <given-names>F.</given-names></name> <name><surname>Ib&#x000E1;&#x000F1;ez</surname> <given-names>A.</given-names></name> <name><surname>Aldunate</surname> <given-names>N.</given-names></name> <name><surname>Ceric</surname> <given-names>F.</given-names></name> <name><surname>L&#x000F3;pez</surname> <given-names>V.</given-names></name> <etal/></person-group>. (<year>2009</year>). <article-title>Gesture and metaphor comprehension: electrophysiological evidence of cross-modal coordination by audiovisual stimulation</article-title>. <source>Brain Cogn</source>. <volume>70</volume>, <fpage>42</fpage>&#x02013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandc.2008.12.005</pub-id><pub-id pub-id-type="pmid">19200632</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>D&#x00027;Ausilio</surname> <given-names>A.</given-names></name> <name><surname>Pulverm&#x000FC;ller</surname> <given-names>F.</given-names></name> <name><surname>Salmas</surname> <given-names>P.</given-names></name> <name><surname>Bufalari</surname> <given-names>I.</given-names></name> <name><surname>Begliomini</surname> <given-names>C.</given-names></name> <name><surname>Fadiga</surname> <given-names>L.</given-names></name></person-group> (<year>2009</year>). <article-title>The motor somatotopy of speech perception</article-title>. <source>Curr. Biol</source>. <volume>19</volume>, <fpage>381</fpage>&#x02013;<lpage>385</lpage>. <pub-id pub-id-type="doi">10.1016/j.cub.2009.01.017</pub-id><pub-id pub-id-type="pmid">19217297</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davare</surname> <given-names>M.</given-names></name> <name><surname>Rothwell</surname> <given-names>J. C.</given-names></name> <name><surname>Lemon</surname> <given-names>R. N.</given-names></name></person-group> (<year>2010</year>). <article-title>Causal connectivity between the human anterior intraparietal area and premotor cortex during grasp</article-title>. <source>Curr. Biol</source>. <volume>20</volume>, <fpage>176</fpage>&#x02013;<lpage>181</lpage>. <pub-id pub-id-type="doi">10.1016/j.cub.2009.11.063</pub-id><pub-id pub-id-type="pmid">20096580</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Decety</surname> <given-names>J.</given-names></name> <name><surname>Gr&#x000E8;zes</surname> <given-names>J.</given-names></name></person-group> (<year>1999</year>). <article-title>Neural mechanisms subserving the perception of human actions</article-title>. <source>Trends Cogn. Sci</source>. <volume>3</volume>, <fpage>172</fpage>&#x02013;<lpage>178</lpage>. <pub-id pub-id-type="doi">10.1016/S1364-6613(99)01312-1</pub-id><pub-id pub-id-type="pmid">10322473</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Decety</surname> <given-names>J.</given-names></name> <name><surname>Gr&#x000E8;zes</surname> <given-names>J.</given-names></name> <name><surname>Costes</surname> <given-names>N.</given-names></name> <name><surname>Perani</surname> <given-names>D.</given-names></name> <name><surname>Jeannerod</surname> <given-names>M.</given-names></name> <name><surname>Procyk</surname> <given-names>E.</given-names></name> <etal/></person-group>. (<year>1997</year>). <article-title>Brain activity during observation of actions. Influence of action content and subject&#x00027;s strategy</article-title>. <source>Brain</source> <volume>120(Pt 10)</volume>, <fpage>1763</fpage>&#x02013;<lpage>1777</lpage>. <pub-id pub-id-type="doi">10.1093/brain/120.10.1763</pub-id><pub-id pub-id-type="pmid">9365369</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Desai</surname> <given-names>R. H.</given-names></name> <name><surname>Binder</surname> <given-names>J. R.</given-names></name> <name><surname>Conant</surname> <given-names>L. L.</given-names></name> <name><surname>Mano</surname> <given-names>Q. R.</given-names></name> <name><surname>Seidenberg</surname> <given-names>M. S.</given-names></name></person-group> (<year>2011</year>). <article-title>The neural career of sensory-motor metaphors</article-title>. <source>J. Cogn. Neurosci</source>. <volume>23</volume>, <fpage>2376</fpage>&#x02013;<lpage>2386</lpage>. <pub-id pub-id-type="doi">10.1162/jocn.2010.21596</pub-id><pub-id pub-id-type="pmid">21126156</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Diaz</surname> <given-names>M. T.</given-names></name> <name><surname>Barrett</surname> <given-names>K. T.</given-names></name> <name><surname>Hogstrom</surname> <given-names>L. J.</given-names></name></person-group> (<year>2011</year>). <article-title>The influence of sentence novelty and figurativeness on brain activity</article-title>. <source>Neuropsychologia</source> <volume>49</volume>, <fpage>320</fpage>&#x02013;<lpage>330</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2010.12.004</pub-id><pub-id pub-id-type="pmid">21146553</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eickhoff</surname> <given-names>S. B.</given-names></name> <name><surname>Stephan</surname> <given-names>K. E.</given-names></name> <name><surname>Mohlberg</surname> <given-names>H.</given-names></name> <name><surname>Grefkes</surname> <given-names>C.</given-names></name> <name><surname>Fink</surname> <given-names>G. R.</given-names></name> <name><surname>Amunts</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2005</year>). <article-title>A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data</article-title>. <source>Neuroimage</source> <volume>25</volume>, <fpage>1325</fpage>&#x02013;<lpage>1335</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.12.034</pub-id><pub-id pub-id-type="pmid">15850749</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Emmorey</surname> <given-names>K.</given-names></name> <name><surname>Xu</surname> <given-names>J.</given-names></name> <name><surname>Gannon</surname> <given-names>P.</given-names></name> <name><surname>Goldin-Meadow</surname> <given-names>S.</given-names></name> <name><surname>Braun</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>CNS activation and regional connectivity during pantomime observation: no engagement of the mirror neuron system for deaf signers</article-title>. <source>Neuroimage</source> <volume>49</volume>, <fpage>994</fpage>&#x02013;<lpage>1005</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2009.08.001</pub-id><pub-id pub-id-type="pmid">19679192</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eviatar</surname> <given-names>Z.</given-names></name> <name><surname>Just</surname> <given-names>M. A.</given-names></name></person-group> (<year>2006</year>). <article-title>Brain correlates of discourse processing: an fMRI investigation of irony and conventional metaphor comprehension</article-title>. <source>Neuropsychologia</source> <volume>44</volume>, <fpage>2348</fpage>&#x02013;<lpage>2359</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2006.05.007</pub-id><pub-id pub-id-type="pmid">16806316</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Faillenot</surname> <given-names>I.</given-names></name> <name><surname>Toni</surname> <given-names>I.</given-names></name> <name><surname>Decety</surname> <given-names>J.</given-names></name> <name><surname>Gr&#x000E9;goire</surname> <given-names>M. C.</given-names></name> <name><surname>Jeannerod</surname> <given-names>M.</given-names></name></person-group> (<year>1997</year>). <article-title>Visual pathways for object-oriented action and object recognition: functional anatomy with PET</article-title>. <source>Cereb. Cortex</source> <volume>7</volume>, <fpage>77</fpage>&#x02013;<lpage>85</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/7.1.77</pub-id><pub-id pub-id-type="pmid">9023435</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Filimon</surname> <given-names>F.</given-names></name> <name><surname>Nelson</surname> <given-names>J. D.</given-names></name> <name><surname>Hagler</surname> <given-names>D. J.</given-names></name> <name><surname>Sereno</surname> <given-names>M. I.</given-names></name></person-group> (<year>2007</year>). <article-title>Human cortical representations for reaching: mirror neurons for execution, observation, and imagery</article-title>. <source>Neuroimage</source> <volume>37</volume>, <fpage>1315</fpage>&#x02013;<lpage>1328</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2007.06.008</pub-id><pub-id pub-id-type="pmid">17689268</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fischer</surname> <given-names>M. H.</given-names></name> <name><surname>Zwaan</surname> <given-names>R. A.</given-names></name></person-group> (<year>2008</year>). <article-title>Embodied language: a review of the role of the motor system in language comprehension</article-title>. <source>Q. J. Exp. Psychol. (Hove)</source> <volume>61</volume>, <fpage>825</fpage>&#x02013;<lpage>850</lpage>. <pub-id pub-id-type="doi">10.1080/17470210701623605</pub-id><pub-id pub-id-type="pmid">18470815</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gaillard</surname> <given-names>W. D.</given-names></name> <name><surname>Balsamo</surname> <given-names>L.</given-names></name> <name><surname>Xu</surname> <given-names>B.</given-names></name> <name><surname>McKinney</surname> <given-names>C.</given-names></name> <name><surname>Papero</surname> <given-names>P. H.</given-names></name> <name><surname>Weinstein</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2004</year>). <article-title>fMRI language task panel improves determination of language dominance</article-title>. <source>Neurology</source> <volume>63</volume>, <fpage>1403</fpage>&#x02013;<lpage>1408</lpage>. <pub-id pub-id-type="doi">10.1212/01.WNL.0000141852.65175.A7</pub-id><pub-id pub-id-type="pmid">15505156</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gallese</surname> <given-names>V.</given-names></name> <name><surname>Lakoff</surname> <given-names>G.</given-names></name></person-group> (<year>2005</year>). <article-title>The brain&#x00027;s concepts: the role of the Sensory-motor system in conceptual knowledge</article-title>. <source>Cogn. Neuropsychol</source>. <volume>22</volume>, <fpage>455</fpage>&#x02013;<lpage>479</lpage>. <pub-id pub-id-type="doi">10.1080/02643290442000310</pub-id><pub-id pub-id-type="pmid">21038261</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Green</surname> <given-names>A.</given-names></name> <name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>Weis</surname> <given-names>S.</given-names></name> <name><surname>Jansen</surname> <given-names>A.</given-names></name> <name><surname>Willmes</surname> <given-names>K.</given-names></name> <name><surname>Konrad</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2009</year>). <article-title>Neural integration of iconic and unrelated coverbal gestures: a functional MRI study</article-title>. <source>Hum. Brain Mapp</source>. <volume>30</volume>, <fpage>3309</fpage>&#x02013;<lpage>3324</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.20753</pub-id><pub-id pub-id-type="pmid">19350562</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gr&#x000E8;zes</surname> <given-names>J.</given-names></name> <name><surname>Decety</surname> <given-names>J.</given-names></name></person-group> (<year>2001</year>). <article-title>Functional anatomy of execution, mental simulation, observation, and verb generation of actions: a meta-analysis</article-title>. <source>Hum. Brain Mapp</source>. <volume>12</volume>, <fpage>1</fpage>&#x02013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1002/1097-0193(200101)12:1&#x0003C;1::AID-HBM10&#x0003E;3.0.CO;2-V</pub-id><pub-id pub-id-type="pmid">11198101</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hauk</surname> <given-names>O.</given-names></name> <name><surname>Johnsrude</surname> <given-names>I.</given-names></name> <name><surname>Pulverm&#x000FC;ller</surname> <given-names>F.</given-names></name></person-group> (<year>2004</year>). <article-title>Somatotopic representation of action words in human motor and premotor cortex</article-title>. <source>Neuron</source> <volume>41</volume>, <fpage>301</fpage>&#x02013;<lpage>307</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(03)00838-9</pub-id><pub-id pub-id-type="pmid">14741110</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hauk</surname> <given-names>O.</given-names></name> <name><surname>Pulverm&#x000FC;ller</surname> <given-names>F.</given-names></name></person-group> (<year>2004</year>). <article-title>Neurophysiological distinction of action words in the fronto-central cortex</article-title>. <source>Hum. Brain Mapp</source>. <volume>21</volume>, <fpage>191</fpage>&#x02013;<lpage>201</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.10157</pub-id><pub-id pub-id-type="pmid">14755838</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Holle</surname> <given-names>H.</given-names></name> <name><surname>Gunter</surname> <given-names>T. C.</given-names></name> <name><surname>Ruschemeyer</surname> <given-names>S. A.</given-names></name> <name><surname>Hennenlotter</surname> <given-names>A.</given-names></name> <name><surname>Iacoboni</surname> <given-names>M.</given-names></name></person-group> (<year>2008</year>). <article-title>Neural correlates of the processing of co-speech gestures</article-title>. <source>Neuroimage</source> <volume>39</volume>, <fpage>2010</fpage>&#x02013;<lpage>2024</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2007.10.055</pub-id><pub-id pub-id-type="pmid">18093845</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Holle</surname> <given-names>H.</given-names></name> <name><surname>Obleser</surname> <given-names>J.</given-names></name> <name><surname>Rueschemeyer</surname> <given-names>S. A.</given-names></name> <name><surname>Gunter</surname> <given-names>T. C.</given-names></name></person-group> (<year>2010</year>). <article-title>Integration of iconic gestures and speech in left superior temporal areas boosts speech comprehension under adverse listening conditions</article-title>. <source>Neuroimage</source> <volume>49</volume>, <fpage>875</fpage>&#x02013;<lpage>884</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2009.08.058</pub-id><pub-id pub-id-type="pmid">19733670</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hubbard</surname> <given-names>A. L.</given-names></name> <name><surname>McNealy</surname> <given-names>K.</given-names></name> <name><surname>Scott-Van Zeeland</surname> <given-names>A. A.</given-names></name> <name><surname>Callan</surname> <given-names>D. E.</given-names></name> <name><surname>Bookheimer</surname> <given-names>S. Y.</given-names></name> <name><surname>Dapretto</surname> <given-names>M.</given-names></name></person-group> (<year>2012</year>). <article-title>Altered integration of speech and gesture in children with autism spectrum disorders</article-title>. <source>Brain Behav</source>. <volume>2</volume>, <fpage>606</fpage>&#x02013;<lpage>619</lpage>. <pub-id pub-id-type="doi">10.1002/brb3.81</pub-id><pub-id pub-id-type="pmid">23139906</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Husain</surname> <given-names>F. T.</given-names></name> <name><surname>Patkin</surname> <given-names>D. J.</given-names></name> <name><surname>Thai-Van</surname> <given-names>H.</given-names></name> <name><surname>Braun</surname> <given-names>A. R.</given-names></name> <name><surname>Horwitz</surname> <given-names>B.</given-names></name></person-group> (<year>2009</year>). <article-title>Distinguishing the processing of gestures from signs in deaf individuals: an fMRI study</article-title>. <source>Brain Res</source>. <volume>1276</volume>, <fpage>140</fpage>&#x02013;<lpage>150</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2009.04.034</pub-id><pub-id pub-id-type="pmid">19397900</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ib&#x000E1;&#x000F1;ez</surname> <given-names>A.</given-names></name> <name><surname>Toro</surname> <given-names>P.</given-names></name> <name><surname>Cornejo</surname> <given-names>C.</given-names></name> <name><surname>Urquina</surname> <given-names>H.</given-names></name> <name><surname>Hurquina</surname> <given-names>H.</given-names></name> <name><surname>Manes</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>High contextual sensitivity of metaphorical expressions and gesture blending: a video event-related potential design</article-title>. <source>Psychiatry Res</source>. <volume>191</volume>, <fpage>68</fpage>&#x02013;<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1016/j.pscychresns.2010.08.008</pub-id><pub-id pub-id-type="pmid">21129937</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jastorff</surname> <given-names>J.</given-names></name> <name><surname>Begliomini</surname> <given-names>C.</given-names></name> <name><surname>Fabbri-Destro</surname> <given-names>M.</given-names></name> <name><surname>Rizzolatti</surname> <given-names>G.</given-names></name> <name><surname>Orban</surname> <given-names>G. A.</given-names></name></person-group> (<year>2010</year>). <article-title>Coding observed motor acts: different organizational principles in the parietal and premotor cortex of humans</article-title>. <source>J. Neurophysiol</source>. <volume>104</volume>, <fpage>128</fpage>&#x02013;<lpage>140</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00254.2010</pub-id><pub-id pub-id-type="pmid">20445039</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kircher</surname> <given-names>T.</given-names></name> <name><surname>Sass</surname> <given-names>K.</given-names></name> <name><surname>Sachs</surname> <given-names>O.</given-names></name> <name><surname>Krach</surname> <given-names>S.</given-names></name></person-group> (<year>2009a</year>). <article-title>Priming words with pictures: neural correlates of semantic associations in a cross-modal priming task using fMRI</article-title>. <source>Hum. Brain Mapp</source>. <volume>30</volume>, <fpage>4116</fpage>&#x02013;<lpage>4128</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.20833</pub-id><pub-id pub-id-type="pmid">19530217</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kircher</surname> <given-names>T.</given-names></name> <name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>Leube</surname> <given-names>D.</given-names></name> <name><surname>Weis</surname> <given-names>S.</given-names></name> <name><surname>Sachs</surname> <given-names>O.</given-names></name> <name><surname>Willmes</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2009b</year>). <article-title>Neural interaction of speech and gesture: differential activations of metaphoric co-verbal gestures</article-title>. <source>Neuropsychologia</source> <volume>47</volume>, <fpage>169</fpage>&#x02013;<lpage>179</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2008.08.009</pub-id><pub-id pub-id-type="pmid">18771673</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kircher</surname> <given-names>T. T.</given-names></name> <name><surname>Leube</surname> <given-names>D. T.</given-names></name> <name><surname>Erb</surname> <given-names>M.</given-names></name> <name><surname>Grodd</surname> <given-names>W.</given-names></name> <name><surname>Rapp</surname> <given-names>A. M.</given-names></name></person-group> (<year>2007</year>). <article-title>Neural correlates of metaphor processing in schizophrenia</article-title>. <source>Neuroimage</source> <volume>34</volume>, <fpage>281</fpage>&#x02013;<lpage>289</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2006.08.044</pub-id><pub-id pub-id-type="pmid">17081771</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kita</surname> <given-names>S.</given-names></name> <name><surname>de Condappa</surname> <given-names>O.</given-names></name> <name><surname>Mohr</surname> <given-names>C.</given-names></name></person-group> (<year>2007</year>). <article-title>Metaphor explanation attenuates the right-hand preference for depictive co-speech gestures that imitate actions</article-title>. <source>Brain Lang</source>. <volume>101</volume>, <fpage>185</fpage>&#x02013;<lpage>197</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandl.2006.11.006</pub-id><pub-id pub-id-type="pmid">17166576</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krach</surname> <given-names>S.</given-names></name> <name><surname>Bl&#x000FC;mel</surname> <given-names>I.</given-names></name> <name><surname>Marjoram</surname> <given-names>D.</given-names></name> <name><surname>Lataster</surname> <given-names>T.</given-names></name> <name><surname>Krabbendam</surname> <given-names>L.</given-names></name> <name><surname>Weber</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2009</year>). <article-title>Are women better mindreaders? Sex differences in neural correlates of mentalizing detected with functional MRI</article-title>. <source>BMC Neurosci</source>. <volume>10</volume>:<fpage>9</fpage>. <pub-id pub-id-type="doi">10.1186/1471-2202-10-9</pub-id><pub-id pub-id-type="pmid">19193204</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>S. S.</given-names></name> <name><surname>Dapretto</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>Metaphorical vs. literal word meanings: fMRI evidence against a selective role of the right hemisphere</article-title>. <source>Neuroimage</source> <volume>29</volume>, <fpage>536</fpage>&#x02013;<lpage>544</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2005.08.003</pub-id><pub-id pub-id-type="pmid">16165371</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Leube</surname> <given-names>D.</given-names></name> <name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>Green</surname> <given-names>A.</given-names></name> <name><surname>Bl&#x000FC;mel</surname> <given-names>I.</given-names></name> <name><surname>Prinz</surname> <given-names>S.</given-names></name> <name><surname>Schlotterbeck</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>A possible brain network for representation of cooperative behavior and its implications for the psychopathology of schizophrenia</article-title>. <source>Neuropsychobiology</source> <volume>66</volume>, <fpage>24</fpage>&#x02013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1159/000337131</pub-id><pub-id pub-id-type="pmid">22797274</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lotze</surname> <given-names>M.</given-names></name> <name><surname>Heymans</surname> <given-names>U.</given-names></name> <name><surname>Birbaumer</surname> <given-names>N.</given-names></name> <name><surname>Veit</surname> <given-names>R.</given-names></name> <name><surname>Erb</surname> <given-names>M.</given-names></name> <name><surname>Flor</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2006</year>). <article-title>Differential cerebral activation during observation of expressive gestures and motor acts</article-title>. <source>Neuropsychologia</source> <volume>44</volume>, <fpage>1787</fpage>&#x02013;<lpage>1795</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2006.03.016</pub-id><pub-id pub-id-type="pmid">16730755</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mainieri</surname> <given-names>A. G.</given-names></name> <name><surname>Heim</surname> <given-names>S.</given-names></name> <name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>Binkofski</surname> <given-names>F.</given-names></name> <name><surname>Kircher</surname> <given-names>T.</given-names></name></person-group> (<year>2013</year>). <article-title>Differential role of mentalizing and the mirror neuron system in the imitation of communicative gestures</article-title>. <source>NeuroImage</source> <volume>81</volume>, <fpage>294</fpage>&#x02013;<lpage>305</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2013.05.021</pub-id><pub-id pub-id-type="pmid">23684882</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mashal</surname> <given-names>N.</given-names></name> <name><surname>Faust</surname> <given-names>M.</given-names></name> <name><surname>Hendler</surname> <given-names>T.</given-names></name> <name><surname>Jung-Beeman</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>An fMRI investigation of the neural correlates underlying the processing of novel metaphoric expressions</article-title>. <source>Brain Lang</source>. <volume>100</volume>, <fpage>115</fpage>&#x02013;<lpage>126</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandl.2005.10.005</pub-id><pub-id pub-id-type="pmid">16290261</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mashal</surname> <given-names>N.</given-names></name> <name><surname>Faust</surname> <given-names>M.</given-names></name> <name><surname>Hendler</surname> <given-names>T.</given-names></name> <name><surname>Jung-Beeman</surname> <given-names>M.</given-names></name></person-group> (<year>2009</year>). <article-title>An fMRI study of processing novel metaphoric sentences</article-title>. <source>Laterality</source> <volume>14</volume>, <fpage>30</fpage>&#x02013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1080/13576500802049433</pub-id><pub-id pub-id-type="pmid">18608849</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Molnar-Szakacs</surname> <given-names>I.</given-names></name> <name><surname>Wu</surname> <given-names>A. D.</given-names></name> <name><surname>Robles</surname> <given-names>F. J.</given-names></name> <name><surname>Iacoboni</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Do you see what I mean? Corticospinal excitability during observation of culture-specific gestures</article-title>. <source>PLoS ONE</source> <volume>2</volume>:<fpage>e626</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0000626</pub-id><pub-id pub-id-type="pmid">17637842</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Moseley</surname> <given-names>R.</given-names></name> <name><surname>Carota</surname> <given-names>F.</given-names></name> <name><surname>Hauk</surname> <given-names>O.</given-names></name> <name><surname>Mohr</surname> <given-names>B.</given-names></name> <name><surname>Pulverm&#x000FC;ller</surname> <given-names>F.</given-names></name></person-group> (<year>2012</year>). <article-title>A role for the motor system in binding abstract emotional meaning</article-title>. <source>Cereb. Cortex</source> <volume>22</volume>, <fpage>1634</fpage>&#x02013;<lpage>1647</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhr238</pub-id><pub-id pub-id-type="pmid">21914634</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nakamura</surname> <given-names>A.</given-names></name> <name><surname>Maess</surname> <given-names>B.</given-names></name> <name><surname>Kn&#x000F6;sche</surname> <given-names>T. R.</given-names></name> <name><surname>Gunter</surname> <given-names>T. C.</given-names></name> <name><surname>Bach</surname> <given-names>P.</given-names></name> <name><surname>Friederici</surname> <given-names>A. D.</given-names></name></person-group> (<year>2004</year>). <article-title>Cooperation of different neuronal systems during hand sign recognition</article-title>. <source>Neuroimage</source> <volume>23</volume>, <fpage>25</fpage>&#x02013;<lpage>34</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.04.034</pub-id><pub-id pub-id-type="pmid">15325349</pub-id></citation>
</ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nichols</surname> <given-names>T.</given-names></name> <name><surname>Brett</surname> <given-names>M.</given-names></name> <name><surname>Andersson</surname> <given-names>J.</given-names></name> <name><surname>Wager</surname> <given-names>T.</given-names></name> <name><surname>Poline</surname> <given-names>J. B.</given-names></name></person-group> (<year>2005</year>). <article-title>Valid conjunction inference with the minimum statistic</article-title>. <source>Neuroimage</source> <volume>25</volume>, <fpage>653</fpage>&#x02013;<lpage>660</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.12.005</pub-id><pub-id pub-id-type="pmid">15808966</pub-id></citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oldfield</surname> <given-names>R. C.</given-names></name></person-group> (<year>1971</year>). <article-title>The assessment and analysis of handedness: the Edinburgh inventory</article-title>. <source>Neuropsychologia</source> <volume>9</volume>, <fpage>97</fpage>&#x02013;<lpage>113</lpage>. <pub-id pub-id-type="doi">10.1016/0028-3932(71)90067-4</pub-id><pub-id pub-id-type="pmid">5146491</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pallier</surname> <given-names>C.</given-names></name> <name><surname>Dehaene</surname> <given-names>S.</given-names></name> <name><surname>Poline</surname> <given-names>J. B.</given-names></name> <name><surname>LeBihan</surname> <given-names>D.</given-names></name> <name><surname>Argenti</surname> <given-names>A. M.</given-names></name> <name><surname>Dupoux</surname> <given-names>E.</given-names></name> <etal/></person-group>. (<year>2003</year>). <article-title>Brain imaging of language plasticity in adopted adults: can a second language replace the first?</article-title> <source>Cereb. Cortex</source> <volume>13</volume>, <fpage>155</fpage>&#x02013;<lpage>161</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/13.2.155</pub-id><pub-id pub-id-type="pmid">12507946</pub-id></citation>
</ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Patterson</surname> <given-names>K.</given-names></name> <name><surname>Nestor</surname> <given-names>P. J.</given-names></name> <name><surname>Rogers</surname> <given-names>T. T.</given-names></name></person-group> (<year>2007</year>). <article-title>Where do you know what you know? The representation of semantic knowledge in the human brain</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>8</volume>, <fpage>976</fpage>&#x02013;<lpage>987</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2277</pub-id><pub-id pub-id-type="pmid">18026167</pub-id></citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perani</surname> <given-names>D.</given-names></name> <name><surname>Dehaene</surname> <given-names>S.</given-names></name> <name><surname>Grassi</surname> <given-names>F.</given-names></name> <name><surname>Cohen</surname> <given-names>L.</given-names></name> <name><surname>Cappa</surname> <given-names>S. F.</given-names></name> <name><surname>Dupoux</surname> <given-names>E.</given-names></name> <etal/></person-group>. (<year>1996</year>). <article-title>Brain processing of native and foreign languages</article-title>. <source>Neuroreport</source> <volume>7</volume>, <fpage>2439</fpage>&#x02013;<lpage>2444</lpage>. <pub-id pub-id-type="doi">10.1097/00001756-199611040-00007</pub-id><pub-id pub-id-type="pmid">8981399</pub-id></citation>
</ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perlovsky</surname> <given-names>L. I.</given-names></name> <name><surname>Ilin</surname> <given-names>R.</given-names></name></person-group> (<year>2010</year>). <article-title>Neurally and mathematically motivated architecture for language and thought</article-title>. <source>Open Neuroimag. J</source>. <volume>4</volume>, <fpage>70</fpage>&#x02013;<lpage>80</lpage>. <pub-id pub-id-type="doi">10.2174/1874440001004010070</pub-id><pub-id pub-id-type="pmid">21673788</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perlovsky</surname> <given-names>L. I.</given-names></name> <name><surname>Ilin</surname> <given-names>R.</given-names></name></person-group> (<year>2013</year>). <article-title>Mirror neurons, language, and embodied cognition</article-title>. <source>Neural Netw</source>. <volume>41</volume>, <fpage>15</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2013.01.003</pub-id><pub-id pub-id-type="pmid">23403367</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pierno</surname> <given-names>A. C.</given-names></name> <name><surname>Tubaldi</surname> <given-names>F.</given-names></name> <name><surname>Turella</surname> <given-names>L.</given-names></name> <name><surname>Grossi</surname> <given-names>P.</given-names></name> <name><surname>Barachino</surname> <given-names>L.</given-names></name> <name><surname>Gallo</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2009</year>). <article-title>Neurofunctional modulation of brain regions by the observation of pointing and grasping actions</article-title>. <source>Cereb. Cortex</source> <volume>19</volume>, <fpage>367</fpage>&#x02013;<lpage>374</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhn089</pub-id><pub-id pub-id-type="pmid">18534989</pub-id></citation>
</ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Price</surname> <given-names>C. J.</given-names></name></person-group> (<year>2010</year>). <article-title>The anatomy of language: a review of 100 fMRI studies published in 2009</article-title>. <source>Ann. N. Y. Acad. Sci</source>. <volume>1191</volume>, <fpage>62</fpage>&#x02013;<lpage>88</lpage>. <pub-id pub-id-type="doi">10.1111/j.1749-6632.2010.05444.x</pub-id><pub-id pub-id-type="pmid">20392276</pub-id></citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pulverm&#x000FC;ller</surname> <given-names>F.</given-names></name> <name><surname>Fadiga</surname> <given-names>L.</given-names></name></person-group> (<year>2010</year>). <article-title>Active perception: sensorimotor circuits as a cortical basis for language</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>11</volume>, <fpage>351</fpage>&#x02013;<lpage>360</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2811</pub-id><pub-id pub-id-type="pmid">20383203</pub-id></citation>
</ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rapp</surname> <given-names>A. M.</given-names></name> <name><surname>Leube</surname> <given-names>D. T.</given-names></name> <name><surname>Erb</surname> <given-names>M.</given-names></name> <name><surname>Grodd</surname> <given-names>W.</given-names></name> <name><surname>Kircher</surname> <given-names>T. T.</given-names></name></person-group> (<year>2004</year>). <article-title>Neural correlates of metaphor processing</article-title>. <source>Brain Res. Cogn. Brain Res</source>. <volume>20</volume>, <fpage>395</fpage>&#x02013;<lpage>402</lpage>. <pub-id pub-id-type="doi">10.1016/j.cogbrainres.2004.03.017</pub-id><pub-id pub-id-type="pmid">15268917</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rapp</surname> <given-names>A. M.</given-names></name> <name><surname>Leube</surname> <given-names>D. T.</given-names></name> <name><surname>Erb</surname> <given-names>M.</given-names></name> <name><surname>Grodd</surname> <given-names>W.</given-names></name> <name><surname>Kircher</surname> <given-names>T. T.</given-names></name></person-group> (<year>2007</year>). <article-title>Laterality in metaphor processing: lack of evidence from functional magnetic resonance imaging for the right hemisphere theory</article-title>. <source>Brain Lang</source>. <volume>100</volume>, <fpage>142</fpage>&#x02013;<lpage>149</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandl.2006.04.004</pub-id><pub-id pub-id-type="pmid">16677700</pub-id></citation>
</ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sachs</surname> <given-names>O.</given-names></name> <name><surname>Weis</surname> <given-names>S.</given-names></name> <name><surname>Krings</surname> <given-names>T.</given-names></name> <name><surname>Huber</surname> <given-names>W.</given-names></name> <name><surname>Kircher</surname> <given-names>T.</given-names></name></person-group> (<year>2008a</year>). <article-title>Categorical and thematic knowledge representation in the brain: neural correlates of taxonomic and thematic conceptual relations</article-title>. <source>Neuropsychologia</source> <volume>46</volume>, <fpage>409</fpage>&#x02013;<lpage>418</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2007.08.015</pub-id><pub-id pub-id-type="pmid">17920085</pub-id></citation>
</ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sachs</surname> <given-names>O.</given-names></name> <name><surname>Weis</surname> <given-names>S.</given-names></name> <name><surname>Zellagui</surname> <given-names>N.</given-names></name> <name><surname>Huber</surname> <given-names>W.</given-names></name> <name><surname>Zvyagintsev</surname> <given-names>M.</given-names></name> <name><surname>Mathiak</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2008b</year>). <article-title>Automatic processing of semantic relations in fMRI: neural activation during semantic priming of taxonomic and thematic categories</article-title>. <source>Brain Res</source>. <volume>1218</volume>, <fpage>194</fpage>&#x02013;<lpage>205</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2008.03.045</pub-id><pub-id pub-id-type="pmid">18514168</pub-id></citation>
</ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sachs</surname> <given-names>O.</given-names></name> <name><surname>Weis</surname> <given-names>S.</given-names></name> <name><surname>Zellagui</surname> <given-names>N.</given-names></name> <name><surname>Sass</surname> <given-names>K.</given-names></name> <name><surname>Huber</surname> <given-names>W.</given-names></name> <name><surname>Zvyagintsev</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>How different types of conceptual relations modulate brain activation during semantic priming</article-title>. <source>J. Cogn. Neurosci</source>. <volume>23</volume>, <fpage>1263</fpage>&#x02013;<lpage>1273</lpage>. <pub-id pub-id-type="doi">10.1162/jocn.2010.21483</pub-id><pub-id pub-id-type="pmid">20350178</pub-id></citation>
</ref>
<ref id="B64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sass</surname> <given-names>K.</given-names></name> <name><surname>Krach</surname> <given-names>S.</given-names></name> <name><surname>Sachs</surname> <given-names>O.</given-names></name> <name><surname>Kircher</surname> <given-names>T.</given-names></name></person-group> (<year>2009a</year>). <article-title>Lion - tiger - stripes: neural correlates of indirect semantic priming across processing modalities</article-title>. <source>Neuroimage</source> <volume>45</volume>, <fpage>224</fpage>&#x02013;<lpage>236</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2008.10.014</pub-id><pub-id pub-id-type="pmid">19026751</pub-id></citation>
</ref>
<ref id="B65">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sass</surname> <given-names>K.</given-names></name> <name><surname>Sachs</surname> <given-names>O.</given-names></name> <name><surname>Krach</surname> <given-names>S.</given-names></name> <name><surname>Kircher</surname> <given-names>T.</given-names></name></person-group> (<year>2009b</year>). <article-title>Taxonomic and thematic categories: neural correlates of categorization in an auditory-to-visual priming task using fMRI</article-title>. <source>Brain Res</source>. <volume>1270</volume>, <fpage>78</fpage>&#x02013;<lpage>87</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2009.03.013</pub-id><pub-id pub-id-type="pmid">19306848</pub-id></citation>
</ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schlosser</surname> <given-names>M. J.</given-names></name> <name><surname>Aoyagi</surname> <given-names>N.</given-names></name> <name><surname>Fulbright</surname> <given-names>R. K.</given-names></name> <name><surname>Gore</surname> <given-names>J. C.</given-names></name> <name><surname>McCarthy</surname> <given-names>G.</given-names></name></person-group> (<year>1998</year>). <article-title>Functional MRI studies of auditory comprehension</article-title>. <source>Hum. Brain Mapp</source>. <volume>6</volume>, <fpage>1</fpage>&#x02013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1002/(SICI)1097-0193(1998)6:1&#x0003C;1::AID-HBM1&#x0003E;3.0.CO;2-7</pub-id><pub-id pub-id-type="pmid">9673659</pub-id></citation>
</ref>
<ref id="B67">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schmidt</surname> <given-names>G. L.</given-names></name> <name><surname>Kranjec</surname> <given-names>A.</given-names></name> <name><surname>Cardillo</surname> <given-names>E. R.</given-names></name> <name><surname>Chatterjee</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>Beyond laterality: a critical assessment of research on the neural basis of metaphor</article-title>. <source>J. Int. Neuropsychol. Soc</source>. <volume>16</volume>, <fpage>1</fpage>&#x02013;<lpage>5</lpage>. <pub-id pub-id-type="doi">10.1017/S1355617709990543</pub-id><pub-id pub-id-type="pmid">19765354</pub-id></citation>
</ref>
<ref id="B68">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schmidt</surname> <given-names>G. L.</given-names></name> <name><surname>Seger</surname> <given-names>C. A.</given-names></name></person-group> (<year>2009</year>). <article-title>Neural correlates of metaphor processing: the roles of figurativeness, familiarity and difficulty</article-title>. <source>Brain Cogn</source>. <volume>71</volume>, <fpage>375</fpage>&#x02013;<lpage>386</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandc.2009.06.001</pub-id><pub-id pub-id-type="pmid">19586700</pub-id></citation>
</ref>
<ref id="B69">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shibata</surname> <given-names>M.</given-names></name> <name><surname>Abe</surname> <given-names>J.</given-names></name> <name><surname>Terao</surname> <given-names>A.</given-names></name> <name><surname>Miyamoto</surname> <given-names>T.</given-names></name></person-group> (<year>2007</year>). <article-title>Neural mechanisms involved in the comprehension of metaphoric and literal sentences: an fMRI study</article-title>. <source>Brain Res</source>. <volume>1166</volume>, <fpage>92</fpage>&#x02013;<lpage>102</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2007.06.040</pub-id><pub-id pub-id-type="pmid">17662699</pub-id></citation>
</ref>
<ref id="B70">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Slotnick</surname> <given-names>S. D.</given-names></name> <name><surname>Schacter</surname> <given-names>D. L.</given-names></name></person-group> (<year>2004</year>). <article-title>A sensory signature that distinguishes true from false memories</article-title>. <source>Nat. Neurosci</source>. <volume>7</volume>, <fpage>664</fpage>&#x02013;<lpage>672</lpage>. <pub-id pub-id-type="doi">10.1038/nn1252</pub-id><pub-id pub-id-type="pmid">15156146</pub-id></citation>
</ref>
<ref id="B71">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>Green</surname> <given-names>A.</given-names></name> <name><surname>Bromberger</surname> <given-names>B.</given-names></name> <name><surname>Kircher</surname> <given-names>T.</given-names></name></person-group> (<year>2011a</year>). <article-title>The differentiation of iconic and metaphoric gestures: common and unique integration processes</article-title>. <source>Hum. Brain Mapp</source>. <volume>32</volume>, <fpage>520</fpage>&#x02013;<lpage>533</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.21041</pub-id><pub-id pub-id-type="pmid">21391245</pub-id></citation>
</ref>
<ref id="B72">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>Green</surname> <given-names>A.</given-names></name> <name><surname>Chatterjee</surname> <given-names>A.</given-names></name> <name><surname>Kircher</surname> <given-names>T.</given-names></name></person-group> (<year>2011b</year>). <article-title>Encoding social interactions: the neural correlates of true and false memories</article-title>. <source>J. Cogn. Neurosci</source>. <volume>23</volume>, <fpage>306</fpage>&#x02013;<lpage>324</lpage>. <pub-id pub-id-type="doi">10.1162/jocn.2010.21505</pub-id><pub-id pub-id-type="pmid">20433241</pub-id></citation>
</ref>
<ref id="B73">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>Wolk</surname> <given-names>D.</given-names></name> <name><surname>Chatterjee</surname> <given-names>A.</given-names></name></person-group> (<year>2011c</year>). <article-title>The role of the right parietal lobe in the perception of causality: a tDCS study</article-title>. <source>Exp. Brain Res</source>. <volume>215</volume>, <fpage>315</fpage>&#x02013;<lpage>325</lpage>. <pub-id pub-id-type="doi">10.1007/s00221-011-2899-1</pub-id><pub-id pub-id-type="pmid">21997332</pub-id></citation>
</ref>
<ref id="B74">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>Green</surname> <given-names>A.</given-names></name> <name><surname>Jansen</surname> <given-names>A.</given-names></name> <name><surname>Chatterjee</surname> <given-names>A.</given-names></name> <name><surname>Kircher</surname> <given-names>T.</given-names></name></person-group> (<year>2010</year>). <article-title>Social cues, mentalizing and the neural processing of speech accompanied by gestures</article-title>. <source>Neuropsychologia</source> <volume>48</volume>, <fpage>382</fpage>&#x02013;<lpage>393</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2009.09.025</pub-id><pub-id pub-id-type="pmid">19782696</pub-id></citation>
</ref>
<ref id="B76">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>Green</surname> <given-names>A.</given-names></name> <name><surname>Sass</surname> <given-names>K.</given-names></name> <name><surname>Kirner-Veselinovic</surname> <given-names>A.</given-names></name> <name><surname>Kircher</surname> <given-names>T.</given-names></name></person-group> (<year>2013a</year>). <article-title>Neural integration of speech and gesture in schizophrenia: evidence for differential processing of metaphoric gestures</article-title>. <source>Hum. Brain Mapp</source>. <volume>34</volume>, <fpage>1696</fpage>&#x02013;<lpage>1712</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.22015</pub-id><pub-id pub-id-type="pmid">22378493</pub-id></citation>
</ref>
<ref id="B75">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>Green</surname> <given-names>A.</given-names></name> <name><surname>Sass</surname> <given-names>K.</given-names></name> <name><surname>Kircher</surname> <given-names>T.</given-names></name></person-group> (<year>2013b</year>). <article-title>Superior temporal sulcus disconnectivity during processing of metaphoric gestures in schizophrenia</article-title>. <source>Schizophr. Bull</source>. [Epub ahead of print]. Available online at: <ext-link ext-link-type="uri" xlink:href="http://schizophreniabulletin.oxfordjournals.org/content/early/2013/08/16/schbul.sbt110.full.pdf">http://schizophreniabulletin.oxfordjournals.org/content/early/2013/08/16/schbul.sbt110.full.pdf</ext-link> <pub-id pub-id-type="doi">10.1093/schbul/sbt110</pub-id><pub-id pub-id-type="pmid">23956120</pub-id></citation>
</ref>
<ref id="B77">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>Green</surname> <given-names>A.</given-names></name> <name><surname>Weis</surname> <given-names>S.</given-names></name> <name><surname>Chatterjee</surname> <given-names>A.</given-names></name> <name><surname>Kircher</surname> <given-names>T.</given-names></name></person-group> (<year>2009</year>). <article-title>Memory effects of speech and gesture binding: cortical and hippocampal activation in relation to subsequent memory performance</article-title>. <source>J. Cogn. Neurosci</source>. <volume>21</volume>, <fpage>821</fpage>&#x02013;<lpage>836</lpage>. <pub-id pub-id-type="doi">10.1162/jocn.2009.21053</pub-id><pub-id pub-id-type="pmid">18578601</pub-id></citation>
</ref>
<ref id="B78">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>Green</surname> <given-names>A.</given-names></name> <name><surname>Weis</surname> <given-names>S.</given-names></name> <name><surname>Kircher</surname> <given-names>T.</given-names></name></person-group> (<year>2012</year>). <article-title>A supramodal neural network for speech and gesture semantics: an fMRI study</article-title>. <source>PLoS ONE</source> <volume>7</volume>:<fpage>e51207</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0051207</pub-id><pub-id pub-id-type="pmid">23226488</pub-id></citation>
</ref>
<ref id="B79">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thompson-Schill</surname> <given-names>S. L.</given-names></name> <name><surname>D&#x00027;Esposito</surname> <given-names>M.</given-names></name> <name><surname>Aguirre</surname> <given-names>G. K.</given-names></name> <name><surname>Farah</surname> <given-names>M. J.</given-names></name></person-group> (<year>1997</year>). <article-title>Role of left inferior prefrontal cortex in retrieval of semantic knowledge: a reevaluation</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>94</volume>, <fpage>14792</fpage>&#x02013;<lpage>14797</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.94.26.14792</pub-id><pub-id pub-id-type="pmid">9405692</pub-id></citation>
</ref>
<ref id="B80">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tivarus</surname> <given-names>M. E.</given-names></name> <name><surname>Ibinson</surname> <given-names>J. W.</given-names></name> <name><surname>Hillier</surname> <given-names>A.</given-names></name> <name><surname>Schmalbrock</surname> <given-names>P.</given-names></name> <name><surname>Beversdorf</surname> <given-names>D. Q.</given-names></name></person-group> (<year>2006</year>). <article-title>An fMRI study of semantic priming: modulation of brain activity by varying semantic distances</article-title>. <source>Cogn. Behav. Neurol</source>. <volume>19</volume>, <fpage>194</fpage>&#x02013;<lpage>201</lpage>. <pub-id pub-id-type="pmid">17159615</pub-id></citation>
</ref>
<ref id="B81">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tremblay</surname> <given-names>P.</given-names></name> <name><surname>Small</surname> <given-names>S. L.</given-names></name></person-group> (<year>2011</year>). <article-title>From language comprehension to action understanding and back again</article-title>. <source>Cereb. Cortex</source> <volume>21</volume>, <fpage>1166</fpage>&#x02013;<lpage>1177</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhq189</pub-id><pub-id pub-id-type="pmid">20940222</pub-id></citation>
</ref>
<ref id="B82">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tzourio-Mazoyer</surname> <given-names>N.</given-names></name> <name><surname>Landeau</surname> <given-names>B.</given-names></name> <name><surname>Papathanassiou</surname> <given-names>D.</given-names></name> <name><surname>Crivello</surname> <given-names>F.</given-names></name> <name><surname>Etard</surname> <given-names>O.</given-names></name> <name><surname>Delcroix</surname> <given-names>N.</given-names></name> <etal/></person-group>. (<year>2002</year>). <article-title>Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain</article-title>. <source>Neuroimage</source> <volume>15</volume>, <fpage>273</fpage>&#x02013;<lpage>289</lpage>. <pub-id pub-id-type="doi">10.1006/nimg.2001.0978</pub-id><pub-id pub-id-type="pmid">11771995</pub-id></citation>
</ref>
<ref id="B83">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Uchiyama</surname> <given-names>H.</given-names></name> <name><surname>Seki</surname> <given-names>A.</given-names></name> <name><surname>Kageyama</surname> <given-names>H.</given-names></name> <name><surname>Saito</surname> <given-names>D. N.</given-names></name> <name><surname>Koeda</surname> <given-names>T.</given-names></name> <name><surname>Ohno</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2006</year>). <article-title>Neural substrates of sarcasm: a functional magnetic-resonance imaging study</article-title>. <source>Brain Res</source>. <volume>1124</volume>, <fpage>100</fpage>&#x02013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2006.09.088</pub-id><pub-id pub-id-type="pmid">17092490</pub-id></citation>
</ref>
<ref id="B84">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Uchiyama</surname> <given-names>H. T.</given-names></name> <name><surname>Saito</surname> <given-names>D. N.</given-names></name> <name><surname>Tanabe</surname> <given-names>H. C.</given-names></name> <name><surname>Harada</surname> <given-names>T.</given-names></name> <name><surname>Seki</surname> <given-names>A.</given-names></name> <name><surname>Ohno</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>Distinction between the literal and intended meanings of sentences: a functional magnetic resonance imaging study of metaphor and sarcasm</article-title>. <source>Cortex</source> <volume>48</volume>, <fpage>563</fpage>&#x02013;<lpage>583</lpage>. <pub-id pub-id-type="doi">10.1016/j.cortex.2011.01.004</pub-id><pub-id pub-id-type="pmid">21333979</pub-id></citation>
</ref>
<ref id="B85">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ungerleider</surname> <given-names>L. G.</given-names></name> <name><surname>Haxby</surname> <given-names>J. V.</given-names></name></person-group> (<year>1994</year>). <article-title>&#x02018;What&#x02019; and &#x02018;where&#x02019; in the human brain</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>4</volume>, <fpage>157</fpage>&#x02013;<lpage>165</lpage>. <pub-id pub-id-type="doi">10.1016/0959-4388(94)90066-3</pub-id><pub-id pub-id-type="pmid">8038571</pub-id></citation>
</ref>
<ref id="B86">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vigneau</surname> <given-names>M.</given-names></name> <name><surname>Beaucousin</surname> <given-names>V.</given-names></name> <name><surname>Herv&#x000E9;</surname> <given-names>P. Y.</given-names></name> <name><surname>Duffau</surname> <given-names>H.</given-names></name> <name><surname>Crivello</surname> <given-names>F.</given-names></name> <name><surname>Houd&#x000E9;</surname> <given-names>O.</given-names></name> <etal/></person-group>. (<year>2006</year>). <article-title>Meta-analyzing left hemisphere language areas: phonology, semantics, and sentence processing</article-title>. <source>Neuroimage</source> <volume>30</volume>, <fpage>1414</fpage>&#x02013;<lpage>1432</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2005.11.002</pub-id><pub-id pub-id-type="pmid">16413796</pub-id></citation>
</ref>
<ref id="B87">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wagner</surname> <given-names>A. D.</given-names></name> <name><surname>Par&#x000E9;-Blagoev</surname> <given-names>E. J.</given-names></name> <name><surname>Clark</surname> <given-names>J.</given-names></name> <name><surname>Poldrack</surname> <given-names>R. A.</given-names></name></person-group> (<year>2001</year>). <article-title>Recovering meaning: left prefrontal cortex guides controlled semantic retrieval</article-title>. <source>Neuron</source> <volume>31</volume>, <fpage>329</fpage>&#x02013;<lpage>338</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(01)00359-2</pub-id><pub-id pub-id-type="pmid">11502262</pub-id></citation>
</ref>
<ref id="B88">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wiggs</surname> <given-names>C. L.</given-names></name> <name><surname>Weisberg</surname> <given-names>J.</given-names></name> <name><surname>Martin</surname> <given-names>A.</given-names></name></person-group> (<year>1999</year>). <article-title>Neural correlates of semantic and episodic memory retrieval</article-title>. <source>Neuropsychologia</source> <volume>37</volume>, <fpage>103</fpage>&#x02013;<lpage>118</lpage>. <pub-id pub-id-type="doi">10.1016/S0028-3932(98)00044-X</pub-id><pub-id pub-id-type="pmid">9920476</pub-id></citation>
</ref>
<ref id="B89">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Willems</surname> <given-names>R. M.</given-names></name> <name><surname>Hagoort</surname> <given-names>P.</given-names></name></person-group> (<year>2007</year>). <article-title>Neural evidence for the interplay between language, gesture, and action: a review</article-title>. <source>Brain Lang</source>. <volume>101</volume>, <fpage>278</fpage>&#x02013;<lpage>289</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandl.2007.03.004</pub-id><pub-id pub-id-type="pmid">17416411</pub-id></citation>
</ref>
<ref id="B90">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Willems</surname> <given-names>R. M.</given-names></name> <name><surname>Hagoort</surname> <given-names>P.</given-names></name> <name><surname>Casasanto</surname> <given-names>D.</given-names></name></person-group> (<year>2010</year>). <article-title>Body-specific representations of action verbs: neural evidence from right- and left-handers</article-title>. <source>Psychol. Sci</source>. <volume>21</volume>, <fpage>67</fpage>&#x02013;<lpage>74</lpage>. <pub-id pub-id-type="doi">10.1177/0956797609354072</pub-id><pub-id pub-id-type="pmid">20424025</pub-id></citation>
</ref>
<ref id="B91">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Willems</surname> <given-names>R. M.</given-names></name> <name><surname>Ozyurek</surname> <given-names>A.</given-names></name> <name><surname>Hagoort</surname> <given-names>P.</given-names></name></person-group> (<year>2007</year>). <article-title>When language meets action: the neural integration of gesture and speech</article-title>. <source>Cereb. Cortex</source> <volume>17</volume>, <fpage>2322</fpage>&#x02013;<lpage>2333</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhl141</pub-id><pub-id pub-id-type="pmid">17159232</pub-id></citation>
</ref>
<ref id="B92">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Willems</surname> <given-names>R. M.</given-names></name> <name><surname>Ozyurek</surname> <given-names>A.</given-names></name> <name><surname>Hagoort</surname> <given-names>P.</given-names></name></person-group> (<year>2009</year>). <article-title>Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language</article-title>. <source>Neuroimage</source> <volume>47</volume>, <fpage>1992</fpage>&#x02013;<lpage>2004</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2009.05.066</pub-id><pub-id pub-id-type="pmid">19497376</pub-id></citation>
</ref>
<ref id="B93">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>J.</given-names></name> <name><surname>Gannon</surname> <given-names>P. J.</given-names></name> <name><surname>Emmorey</surname> <given-names>K.</given-names></name> <name><surname>Smith</surname> <given-names>J. F.</given-names></name> <name><surname>Braun</surname> <given-names>A. R.</given-names></name></person-group> (<year>2009</year>). <article-title>Symbolic gestures and spoken language are processed by a common neural system</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>106</volume>, <fpage>20664</fpage>&#x02013;<lpage>20669</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0909197106</pub-id><pub-id pub-id-type="pmid">19923436</pub-id></citation>
</ref>
</ref-list>
</back>
</article>