<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychology</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychology</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2012.00385</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Neurological Evidence Linguistic Processes Precede Perceptual Simulation in Conceptual Processing</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Louwerse</surname> <given-names>Max</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001">&#x0002A;</xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Hutchinson</surname> <given-names>Sterling</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Psychology, Institute for Intelligent Systems, University of Memphis</institution> <country>Memphis, TN, USA</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Louise Connell, University of Manchester, UK</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Claudio Gentili, University of Pisa, Italy; Christopher Kurby, Grand Valley State University, USA</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Max Louwerse, Department of Psychology, Institute for Intelligent Systems, University of Memphis, 202 Psychology Building, Memphis, TN 38152, USA. e-mail: <email>maxlouwerse&#x00040;gmail.com</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Frontiers in Cognitive Science, a specialty of Frontiers in Psychology.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>16</day>
<month>10</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<volume>3</volume>
<elocation-id>385</elocation-id>
<history>
<date date-type="received">
<day>03</day>
<month>05</month>
<year>2012</year>
</date>
<date date-type="accepted">
<day>14</day>
<month>09</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2012 Louwerse and Hutchinson.</copyright-statement>
<copyright-year>2012</copyright-year>
<license license-type="open-access" xlink:href="http://www.frontiersin.org/licenseagreement"><p>This is an open-access article distributed under the terms of the <uri xlink:href="http://creativecommons.org/licenses/by/3.0/">Creative Commons Attribution License</uri>, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.</p></license>
</permissions>
<abstract>
<p>There is increasing evidence from response time experiments that language statistics and perceptual simulations both play a role in conceptual processing. In an EEG experiment we compared neural activity in cortical regions commonly associated with linguistic processing and visual perceptual processing to determine to what extent symbolic and embodied accounts of cognition applied. Participants were asked to determine the semantic relationship of word pairs (e.g., <italic>sky &#x02013; ground</italic>) or to determine their iconic relationship (i.e., whether the presentation of the pair matched their expected physical relationship). A linguistic bias was found in the semantic judgment task and a perceptual bias in the iconicity judgment task. More importantly, conceptual processing involved activation in brain regions associated with both linguistic and perceptual processes. When comparing the relative activation of linguistic cortical regions with perceptual cortical regions, the effect sizes for linguistic cortical regions were larger than those for the perceptual cortical regions early in a trial, with the reverse being true later in a trial. These results map onto findings from other experimental literature and provide further evidence that processing of concept words relies both on language statistics and on perceptual simulations, whereby linguistic processes precede perceptual simulation processes.</p>
</abstract>
<kwd-group>
<kwd>embodied cognition</kwd>
<kwd>symbolic cognition</kwd>
<kwd>symbol interdependency</kwd>
<kwd>perceptual simulation</kwd>
<kwd>language processing</kwd>
<kwd>EEG</kwd>
</kwd-group>
<counts>
<fig-count count="3"/>
<table-count count="3"/>
<equation-count count="0"/>
<ref-count count="57"/>
<page-count count="11"/>
<word-count count="8493"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction">
<title>Introduction</title>
<p>Conceptual processing elicits perceptual simulations. For instance, when people read the word pair <italic>sky &#x02013; ground</italic>, one word presented above the other, processing is faster when <italic>sky</italic> appears above <italic>ground</italic> than when the words are presented in the reversed order (Zwaan and Yaxley, <xref ref-type="bibr" rid="B57">2003</xref>; Louwerse, <xref ref-type="bibr" rid="B32">2008</xref>; Louwerse and Jeuniaux, <xref ref-type="bibr" rid="B38">2010</xref>). Embodiment theorists have interpreted this finding as evidence that perceptual and biomechanical processes underlie cognition (Glenberg, <xref ref-type="bibr" rid="B20">1997</xref>; Barsalou, <xref ref-type="bibr" rid="B3">1999</xref>). Indeed, numerous studies show that processing is affected by tasks that invoke the consideration of perceptual features (see Pecher and Zwaan, <xref ref-type="bibr" rid="B43">2005</xref>; De Vega et al., <xref ref-type="bibr" rid="B14">2008</xref>; Semin and Smith, <xref ref-type="bibr" rid="B50">2008</xref>; for overviews). Much of this evidence comes from behavioral response time (RT) experiments, but there is also evidence stemming from neuropsychological studies (Buccino et al., <xref ref-type="bibr" rid="B9">2005</xref>; Kan et al., <xref ref-type="bibr" rid="B25">2003</xref>; Rueschemeyer et al., <xref ref-type="bibr" rid="B49">2010</xref>). This embodied cognition account is oftentimes presented in contrast to a symbolic cognition account that suggests conceptual representations are formed from statistical linguistic frequencies (Landauer and Dumais, <xref ref-type="bibr" rid="B28">1997</xref>). Such a symbolic cognition account that uses the mind-as-a-computer metaphor has occasionally been dismissed by embodiment theorists (Van Dantzig et al., <xref ref-type="bibr" rid="B56">2008</xref>).</p>
<p>Recently, researchers have cautioned against pitting one account against the other, demonstrating that symbolic and embodied cognition accounts can be integrated (Barsalou et al., <xref ref-type="bibr" rid="B4">2008</xref>; Louwerse, <xref ref-type="bibr" rid="B32">2008</xref>, <xref ref-type="bibr" rid="B33">2011</xref>; Simmons et al., <xref ref-type="bibr" rid="B51">2008</xref>). For instance, Louwerse (<xref ref-type="bibr" rid="B33">2011</xref>) proposed the Symbol Interdependency Hypothesis, arguing that language encodes embodied relations that language users can exploit as a shortcut during conceptual processing. The relative importance of language statistics and perceptual simulation in conceptual processing depends on several variables, including the type of stimulus presented to a participant, and the cognitive task the participant is asked to perform (Louwerse and Jeuniaux, <xref ref-type="bibr" rid="B38">2010</xref>). Louwerse and Connell (<xref ref-type="bibr" rid="B37">2011</xref>) further found that the effects for language statistics on processing times temporally preceded the effects of perceptual simulations on processing times, with fuzzy regularities in linguistic context being used for quick decisions and precise perceptual simulations being used for slower decisions. Importantly, these studies do not deny the importance of perceptual processes. In fact, individual effects for perceptual simulations were also seen early in a trial; however, when comparing the effect sizes of language statistics and perceptual simulations, Louwerse and Connell (<xref ref-type="bibr" rid="B37">2011</xref>) found evidence for early linguistic and late perceptual simulation processes.</p>
<p>The results from these RT studies, however, only indirectly demonstrate that language statistics and perceptual simulation are active during cognition, because the effects are modulated by hand movements and RTs. Although such methods are methodologically valid, we sought to establish whether such conclusions were also supported by neurological evidence.</p>
<p>In the current paper our objective was to determine when conceptual processing uses neurological processes best explained by language statistics relative to neurological processes best explained by perceptual simulations. Given the evidence that both statistical linguistic frequencies and perceptual simulation are involved in conceptual processing (Louwerse, <xref ref-type="bibr" rid="B32">2008</xref>; Simmons et al., <xref ref-type="bibr" rid="B51">2008</xref>; Louwerse and Jeuniaux, <xref ref-type="bibr" rid="B38">2010</xref>), and that the effect for language statistics outperforms the effect for perceptual simulations for fast RTs, with the opposite being true for slower RTs (Louwerse and Connell, <xref ref-type="bibr" rid="B37">2011</xref>), we predicted that cortical regions commonly associated with linguistic processing, when compared with activation in cortical regions commonly associated with perceptual simulation, would be activated relatively early in a RT trial. Conversely, when compared with activation in cortical regions commonly associated with linguistic processing, cortical regions associated with perceptual simulation were predicted to show greater activity relatively later in a RT trial. Further, we predicted activation would be modified by the cognitive task, such that perceptual cortical regions would be more active in a perceptual simulation task, whereas linguistic cortical regions would be more active in a semantic judgment task.</p>
<p>Traditional EEG methodologies are not quite sufficient to answer this research question. For instance, event-related potential (ERP) methods only allow for analyses of time-locked components that activate in response to specific events over numerous trials (Collins et al., <xref ref-type="bibr" rid="B12">2011</xref>; Hald et al., <xref ref-type="bibr" rid="B21">2011</xref>). EEG recordings combined with magnetoencephalography (MEG) recordings can provide high-resolution temporal information and spatial estimates of neural activity, provided that appropriate source reconstruction techniques are used (Hauk et al., <xref ref-type="bibr" rid="B22">2008</xref>). However, this technique establishes whether and when cortical regions are activated, but does not answer the question of what cortical regions are activated in relation to each other. Such a comparative analysis seems to call for a different and novel method.</p>
<p>We utilized source localization techniques in conjunction with statistical analyses to determine when and where relative effects of linguistic and perceptual processes occurred. We did this by investigating which regions of the cortex are responsible for activity throughout the time course of each trial. However, source localization determines only where differences emerge between conditions at specific points in time; our goal was to determine whether relatively stronger early effects of linguistic processes preceded a relatively stronger later simulation process. Consequently, we used established source localization techniques (Pascual-Marqui, <xref ref-type="bibr" rid="B42">2002</xref>) to determine where differences in activation were present during an early versus a late time period. With that information we then ran a mixed effects model on electrode activation throughout the duration of a trial to identify the effect size for activation of linguistic versus perceptual cortical regions over time. This type of analysis is progressive in that it allowed us not only to determine that activation differed between linguistic and perceptual cortical regions, but also to gain insight into the relative effect sizes of language statistics and perceptual simulation as they contribute to conceptual processing throughout the time course of a trial.</p>
</sec>
<sec sec-type="materials|methods">
<title>Materials and Methods</title>
<sec>
<title>Participants</title>
<p>Thirty-three University of Memphis undergraduate students participated for extra credit in a psychology course. All participants had normal or corrected-to-normal vision and were native English speakers. Fifteen participants were randomly assigned to the semantic judgment condition, and 18 participants were randomly assigned to the iconicity judgment condition.</p>
</sec>
<sec>
<title>Materials</title>
<p>Each condition consisted of 64 iconic/reverse-iconic word pairs extracted from previous research (Louwerse, <xref ref-type="bibr" rid="B32">2008</xref>; Louwerse and Jeuniaux, <xref ref-type="bibr" rid="B38">2010</xref>; see <xref ref-type="app" rid="A1">Appendix</xref>). Thirty-two pairs with an iconic relationship were presented vertically on the screen in the same order they would appear in the world (i.e., <italic>sky</italic> appears above <italic>ground</italic>). Likewise, 32 pairs with a reverse-iconic relationship appeared in an order opposite of that which would be expected in the world (i.e., <italic>ground</italic> appears above <italic>sky</italic>). The remaining 128 trials contained filler word pairs that had no iconic relationship. Half of the fillers had a high semantic relation (cos&#x02009;&#x0003D;&#x02009;0.55) and half had a low semantic relation (cos&#x02009;&#x0003D;&#x02009;0.21), as determined by latent semantic analysis (LSA), a statistical, corpus-based, technique for estimating semantic similarities on a scale of &#x02212;1 to 1 (Landauer et al., <xref ref-type="bibr" rid="B29">2007</xref>). All items were counterbalanced such that all participants saw all word pairs, but no participant saw the same word pair in both orders (i.e., both the iconic and the reverse-iconic order for the experimental items).</p>
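The LSA values above (cos&#x02009;&#x0003D;&#x02009;0.55 versus 0.21) are cosines between high-dimensional word vectors. A minimal sketch of the cosine computation, using made-up four-dimensional vectors in place of LSA's corpus-derived dimensions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two word vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors for illustration only; real LSA vectors come from an SVD
# of a word-by-document co-occurrence matrix (typically ~300 dimensions).
sky = [0.8, 0.1, 0.3, 0.0]
ground = [0.6, 0.2, 0.5, 0.1]
print(round(cosine(sky, ground), 2))
```

Identical vectors give cos&#x02009;&#x0003D;&#x02009;1 and orthogonal vectors give cos&#x02009;&#x0003D;&#x02009;0, matching LSA's &#x02212;1 to 1 scale.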
</sec>
<sec>
<title>Equipment</title>
<p>An Emotiv EPOC headset (Emotiv Systems Inc., San Francisco, CA, USA) was used to record electroencephalograph data. EEG data recorded from the Emotiv EPOC headset are comparable to data recorded by traditional EEG devices (Bobrov et al., <xref ref-type="bibr" rid="B6">2011</xref>; Stytsenko et al., <xref ref-type="bibr" rid="B53">2011</xref>). For instance, patterns of brain activity from a study in which participants imagined pictures were comparable between the 16-channel Emotiv EPOC system and the 32-channel ActiCap system (Brain Products, Munich, Germany; Bobrov et al., <xref ref-type="bibr" rid="B6">2011</xref>). The Emotiv EPOC is also able to reliably capture P300 signals (Ram&#x000ED;rez-Cortes et al., <xref ref-type="bibr" rid="B45">2010</xref>; Duvinage et al., <xref ref-type="bibr" rid="B16">2012</xref>), even though the accuracy of high-end systems is superior.</p>
<p>The headset was fitted with 14 Au-plated contact-grade hardened BeCu felt-tipped electrodes that were saturated in a saline solution. Although the headset does not use conventional gel-based wet electrodes, saline-based electrode systems have been shown to be comparable to traditional wet electrode systems (Estepp et al., <xref ref-type="bibr" rid="B17">2009</xref>). The headset sampled sequentially at 2048&#x02009;Hz, and the signal was down-sampled to 128&#x02009;Hz. The incoming signal was automatically notch filtered at 50 and 60&#x02009;Hz using a 5th-order sinc notch filter. The resolution was 1.95&#x02009;&#x003BC;V.</p>
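The headset performs the notch filtering and down-sampling in hardware; the scipy calls below are only an illustrative software stand-in, applied to one second of synthetic data with simulated 60&#x02009;Hz mains interference:

```python
import numpy as np
from scipy import signal

fs = 2048  # headset's internal sampling rate (Hz)

# One second of synthetic EEG-like data: a 10 Hz rhythm plus 60 Hz mains hum.
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

# Notch out 50 and 60 Hz mains interference (the headset uses a 5th-order
# sinc notch filter in hardware; an IIR notch is a software approximation).
for mains in (50, 60):
    b, a = signal.iirnotch(w0=mains, Q=30, fs=fs)
    eeg = signal.filtfilt(b, a, eeg)

# Down-sample 2048 Hz -> 128 Hz in two anti-aliased stages (16 = 4 * 4).
eeg_128 = signal.decimate(signal.decimate(eeg, 4), 4)
print(len(eeg_128))  # 128 samples per second of data
```

Decimating in two stages of 4 rather than one stage of 16 keeps the anti-aliasing filters well conditioned.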
</sec>
<sec>
<title>Procedure</title>
<p>In both the semantic judgment and iconicity judgment conditions, word pairs were presented vertically on an 800&#x02009;&#x000D7;&#x02009;600 computer screen. In the semantic judgment condition, participants were asked to determine whether a word pair was related in meaning. In the iconicity judgment condition, participants were asked whether a word pair appeared in an iconic relationship (i.e., if a word pair appeared in the same configuration as the pair would occur in the world). Participants responded to stimuli by pressing designated yes or no keys on a number pad. Participants were instructed to move and blink as little as possible. Word pairs were randomly presented for each participant in order to negate any order effects. To ensure participants understood the task, a session of five practice trials preceded the experimental session.</p>
</sec>
</sec>
<sec>
<title>Results</title>
<p>We followed prior research (Louwerse, <xref ref-type="bibr" rid="B32">2008</xref>; Louwerse and Jeuniaux, <xref ref-type="bibr" rid="B38">2010</xref>) in identifying errors and outliers. As in those studies, error rates were expected to be high in both the semantic judgment task and the iconicity task. Although some word pairs share a low semantic relation according to LSA, a higher semantic relationship might be warranted for at least one word meaning (see Louwerse et al., <xref ref-type="bibr" rid="B36">2006</xref>). For example, according to LSA, <italic>rib</italic> and <italic>spinach</italic> have a low semantic relation (cos&#x02009;&#x0003D;&#x02009;0.07), but in one meaning of <italic>rib</italic> (that of barbecue) such a low semantic relation is not justified (Louwerse and Jeuniaux, <xref ref-type="bibr" rid="B38">2010</xref>). For the semantic judgment task, error rates were, unsurprisingly, approximately 25% (<italic>M</italic>&#x02009;&#x0003D;&#x02009;26.07, <italic>SD</italic>&#x02009;&#x0003D;&#x02009;7.51). Similarly, for the iconicity judgment condition, error performance can also be explained by the task. <italic>Priest</italic> and <italic>flag</italic> are not assumed to have an iconic relation, even though such a relation could be imagined. Error rates were around 25&#x02013;30% (<italic>M</italic>&#x02009;&#x0003D;&#x02009;29, <italic>SD</italic>&#x02009;&#x0003D;&#x02009;8.53). For both the semantic judgment condition and the iconicity judgment condition, these error rates were comparable with those reported elsewhere (Louwerse and Jeuniaux, <xref ref-type="bibr" rid="B38">2010</xref>). Analyses of the errors revealed no evidence for a speed-accuracy trade-off. In the RT analysis, RTs falling more than 2.5 <italic>SD</italic> from the mean, computed per subject and per condition, were removed from the analysis, affecting less than 3% of the data in both conditions.</p>
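The 2.5 <italic>SD</italic> trimming rule can be sketched as follows; the RT values are hypothetical and the helper name is ours, not part of the reported pipeline:

```python
import numpy as np

def trim_outliers(rts, criterion=2.5):
    """Drop RTs more than `criterion` SDs from the mean.

    Applied separately per subject and per condition, as in the
    analysis described above.
    """
    rts = np.asarray(rts, dtype=float)
    mean, sd = rts.mean(), rts.std(ddof=1)
    keep = np.abs(rts - mean) <= criterion * sd
    return rts[keep]

# Hypothetical RTs (ms) for one subject in one condition;
# the 5200 ms trial is a gross outlier.
rts = [1452, 1510, 1603, 1575, 1488, 1531, 1467, 1598, 1440, 1522, 5200]
print(trim_outliers(rts))
```

Note that the mean and SD are computed with the outlier still included, which is why a generous criterion such as 2.5 SD is typical.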
<p>A mixed effects regression analysis was conducted on RTs with order (<italic>sky</italic> above <italic>ground</italic> or <italic>ground</italic> above <italic>sky</italic>) as a fixed factor and participants and items as random factors (Richter, <xref ref-type="bibr" rid="B48">2006</xref>; Baayen et al., <xref ref-type="bibr" rid="B2">2008</xref>). <italic>F</italic>-test denominator degrees of freedom for RTs were estimated using the Kenward&#x02013;Roger&#x02019;s degrees of freedom adjustment to reduce the chances of Type I error (Littell et al., <xref ref-type="bibr" rid="B30">2002</xref>). For the semantic judgment condition, differences were found between the iconic and the reverse-iconic word pairs <italic>F</italic>(1, 2683.75)&#x02009;&#x0003D;&#x02009;3.7, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.05, with iconic word pairs being responded to faster than reverse-iconic word pairs, <italic>M</italic>&#x02009;&#x0003D;&#x02009;1592.92, SE&#x02009;&#x0003D;&#x02009;160.46 versus <italic>M</italic>&#x02009;&#x0003D;&#x02009;1640.06, SE&#x02009;&#x0003D;&#x02009;159.8. A similar result was obtained for the iconicity judgment condition, <italic>F</italic>(1, 3332.39)&#x02009;&#x0003D;&#x02009;13.58, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.001, again with iconic word pairs being responded to faster than reverse-iconic word pairs, <italic>M</italic>&#x02009;&#x0003D;&#x02009;1882.87, SE&#x02009;&#x0003D;&#x02009;155.43 versus <italic>M</italic>&#x02009;&#x0003D;&#x02009;1980.80, SE&#x02009;&#x0003D;&#x02009;154.67. This RT advantage has been reported elsewhere (Zwaan and Yaxley, <xref ref-type="bibr" rid="B57">2003</xref>; Louwerse, <xref ref-type="bibr" rid="B32">2008</xref>; Louwerse and Jeuniaux, <xref ref-type="bibr" rid="B38">2010</xref>). 
What is not clear from these results is whether this effect can be explained by an embodied cognition account (iconicity through perceptual simulations), by a symbolic cognition account (word-order frequency), or by both. As in Louwerse and Jeuniaux (<xref ref-type="bibr" rid="B38">2010</xref>), language statistics and perceptual simulations were operationalized using word-order frequency and iconicity ratings.</p>
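A mixed effects regression of this kind can be sketched with statsmodels on synthetic data. Note that statsmodels fits one grouping factor directly, whereas the reported analyses include both participants and items as random factors (fully crossed random effects would need variance components or lme4 in R); all numbers below are simulated:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_item = 12, 16

# Synthetic trial-level data: iconic pairs answered ~50 ms faster.
rows = []
for s in range(n_subj):
    subj_shift = rng.normal(0, 120)  # random subject intercept
    for i in range(n_item):
        for order in ("iconic", "reverse"):
            rt = (1600 + subj_shift
                  + (0 if order == "iconic" else 50)
                  + rng.normal(0, 150))
            rows.append({"subject": s, "item": i, "order": order, "rt": rt})
df = pd.DataFrame(rows)

# Random intercepts for participants; `order` is the fixed factor.
model = smf.mixedlm("rt ~ order", df, groups=df["subject"]).fit()
print(model.params["order[T.reverse]"])  # estimated slowdown for reverse order (ms)
```

The fixed-effect coefficient for the reverse order plays the role of the iconic-versus-reverse-iconic contrast in the analyses above.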
<sec>
<title>Order frequency</title>
<p>Language statistics were operationalized as the log frequency of the <italic>a</italic>-<italic>b</italic> (e.g., <italic>sky &#x02013; ground</italic>) and <italic>b</italic>-<italic>a</italic> (e.g., <italic>ground &#x02013; sky</italic>) orders of word pairs (cf. Louwerse, <xref ref-type="bibr" rid="B32">2008</xref>; Louwerse and Jeuniaux, <xref ref-type="bibr" rid="B38">2010</xref>; Louwerse and Connell, <xref ref-type="bibr" rid="B37">2011</xref>). The order frequency of all 64 word pairs within 3- to 5-word <italic>n</italic>-grams was obtained using the large Web 1T 5-gram corpus (Brants and Franz, <xref ref-type="bibr" rid="B7">2006</xref>).</p>
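Given per-order n-gram counts, the log-frequency predictor is straightforward to compute; the counts below are made up for illustration and are not taken from the Web 1T corpus:

```python
import math

# Hypothetical counts for the two orders of a pair, summed over the
# 3- to 5-gram contexts (illustrative numbers only).
counts = {("sky", "ground"): 42_551, ("ground", "sky"): 8_103}

def log_order_frequency(a, b, counts, smoothing=1):
    """Log frequency of the a-b order; add-one smoothing avoids log(0)."""
    return math.log(counts.get((a, b), 0) + smoothing)

# The more frequent order yields the larger predictor value,
# and hence (per the regression results) the faster predicted RT.
print(log_order_frequency("sky", "ground", counts) >
      log_order_frequency("ground", "sky", counts))  # True
```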
</sec>
<sec>
<title>Iconicity ratings</title>
<p>Twenty-four participants at the University of Memphis estimated the likelihood that concepts appeared above one another in the real world. Ratings were made for 64 word pairs on a scale of 1&#x02013;6, with 1 being extremely unlikely and 6 being extremely likely. Each participant saw all word pairs; whether a given pair appeared in an iconic or a reverse-iconic order was counterbalanced across participants, such that each participant saw both iconic and reverse-iconic word pairs, but no participant saw the same pair in both orders. High interrater reliability was found in both groups (Group A: average <italic>r</italic>&#x02009;&#x0003D;&#x02009;0.76, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.001, <italic>n</italic>&#x02009;&#x0003D;&#x02009;64; Group B: average <italic>r</italic>&#x02009;&#x0003D;&#x02009;0.74, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.001, <italic>n</italic>&#x02009;&#x0003D;&#x02009;64), with a negative correlation between the two groups (average <italic>r</italic>&#x02009;&#x0003D;&#x02009;&#x02212;0.72, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.001, <italic>n</italic>&#x02009;&#x0003D;&#x02009;64), as expected given that the two groups rated opposite presentation orders of the same pairs.</p>
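Average pairwise interrater correlations of the kind reported above can be computed as follows; the ratings here are simulated, not the actual data:

```python
import numpy as np

def average_interrater_r(ratings):
    """Mean pairwise Pearson correlation across raters.

    `ratings` is a raters x items array.
    """
    r = np.corrcoef(ratings)  # raters x raters correlation matrix
    upper = r[np.triu_indices_from(r, k=1)]  # each rater pair once
    return upper.mean()

# Simulated data: 12 raters judging 64 pairs, each rater adding
# independent noise to a shared latent iconicity value per pair.
rng = np.random.default_rng(1)
true_iconicity = rng.uniform(1, 6, size=64)
raters = true_iconicity + rng.normal(0, 1, size=(12, 64))
print(round(average_interrater_r(raters), 2))
```

Averaging the upper triangle of the correlation matrix counts each rater pair exactly once.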
<p>A mixed effects regression was conducted on RTs with order frequencies and iconicity ratings as fixed factors and participants and items as random factors. For the semantic judgment condition, a mixed effects regression showed that statistical linguistic frequencies significantly predicted RTs, <italic>F</italic>(1, 760.86)&#x02009;&#x0003D;&#x02009;24.95, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.001, with higher frequencies yielding faster RTs. Iconicity ratings did not yield a significant relation with RT, <italic>F</italic>(1, 762.09)&#x02009;&#x0003D;&#x02009;0.46, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.5 (see the first two bars in Figure <xref ref-type="fig" rid="F1">1</xref>; Table <xref ref-type="table" rid="T1">1</xref>).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Strength of the mixed effects regressions on the RTs in absolute <italic>t</italic>-values for each of the two conditions for linguistic (order frequency) and perceptual (iconicity ratings) factors</bold>. Asterisks mark relationships with RTs that are significant (<italic>p</italic>&#x02009;&#x0003C;&#x02009;0.05).</p></caption>
<graphic xlink:href="fpsyg-03-00385-g001.tif"/>
</fig>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>Regression coefficients for the semantic judgment and iconicity judgment RT experiment</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left"/>
<th align="left">Variables</th>
<th align="left">Estimate (SE)</th>
<th align="left"><italic>t</italic> (<italic>df</italic>)</th>
<th align="left">CI lower</th>
<th align="left">CI upper</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Semantic judgment</td>
<td align="left">Intercept</td>
<td align="left">2020.25 (192.11)</td>
<td align="left">10.52 (37.85)&#x0002A;&#x0002A;</td>
<td align="left">1631.29</td>
<td align="left">2409.21</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Language statistics</td>
<td align="left">&#x02212;62.12 (12.44)</td>
<td align="left">&#x02212;4.99 (760.86)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;86.54</td>
<td align="left">&#x02212;37.71</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Iconicity ratings</td>
<td align="left">14.16 (20.97)</td>
<td align="left">0.68 (762.09)</td>
<td align="left">&#x02212;27.01</td>
<td align="left">55.34</td>
</tr>
<tr>
<td align="left">Iconicity judgment</td>
<td align="left">Intercept</td>
<td align="left">2242.95 (185.94)</td>
<td align="left">12.06 (46.41)&#x0002A;&#x0002A;</td>
<td align="left">1868.75</td>
<td align="left">2617.15</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Language statistics</td>
<td align="left">&#x02212;27.50 (12.26)</td>
<td align="left">&#x02212;2.24 (945.78)&#x0002A;</td>
<td align="left">&#x02212;51.55</td>
<td align="left">&#x02212;3.44</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Iconicity ratings</td>
<td align="left">&#x02212;48.79 (20.60)</td>
<td align="left">&#x02212;2.37 (947.65)&#x0002A;</td>
<td align="left">&#x02212;89.21</td>
<td align="left">&#x02212;8.36</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note</italic>. Dependent variable is response time; &#x0002A;&#x0002A;<italic>p</italic>&#x02009;&#x0003C;&#x02009;0.01, &#x0002A;<italic>p</italic>&#x02009;&#x0003C;&#x02009;0.05.</p>
</table-wrap-foot>
</table-wrap>
</sec>
<sec>
<title>Response times</title>
<p>For the iconicity judgment condition, a mixed effects regression showed statistical linguistic frequencies again significantly predicted RT, <italic>F</italic>(1, 945.78)&#x02009;&#x0003D;&#x02009;5.03, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.03, with higher frequencies yielding faster RTs. Iconicity ratings also yielded a significant relation with RT, <italic>F</italic>(1, 947.65)&#x02009;&#x0003D;&#x02009;5.61, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.02, with higher iconicity ratings yielding lower RTs (see the second two bars in Figure <xref ref-type="fig" rid="F1">1</xref>; Table <xref ref-type="table" rid="T1">1</xref>).</p>
<p>Figure <xref ref-type="fig" rid="F1">1</xref> shows that statistical linguistic frequencies explained RTs in both the semantic judgment and the iconicity judgment conditions, but the effect was stronger in the semantic judgment than in the iconicity judgment condition. Figure <xref ref-type="fig" rid="F1">1</xref> and Table <xref ref-type="table" rid="T1">1</xref> also show the opposite results for perceptual simulation in that during the semantic judgment condition, the effect of perceptual simulation on RT was limited (and not significant). However, in the iconicity judgment condition, perceptual simulation was significant. The interaction for linguistic frequencies and condition (semantic versus iconic) was significant, <italic>F</italic>(2, 1005.05)&#x02009;&#x0003D;&#x02009;15.88, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.001, as was the interaction for perceptual simulation and condition, <italic>F</italic>(2, 1634.20)&#x02009;&#x0003D;&#x02009;2.9, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.05. Indeed, the overall interaction between factors (linguistic and perceptual) and condition was significant, <italic>F</italic>(2, 1540.18)&#x02009;&#x0003D;&#x02009;8.10, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.001.</p>
<p>These findings replicate the RT data in Louwerse and Jeuniaux (<xref ref-type="bibr" rid="B38">2010</xref>). That is, order frequency better explained RTs than the iconicity ratings did in the semantic judgment task, but iconicity ratings better explained RTs than the order frequency did in the iconicity judgment task.</p>
</sec>
<sec>
<title>EEG activation</title>
<p>As discussed earlier, we utilized previously established EEG source localization techniques in conjunction with statistical analyses to determine when and where relative effects of linguistic and perceptual processes occurred. Continuous neural activity was recorded from 14 sites of the international 10&#x02013;20 system (AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4; Reilly, <xref ref-type="bibr" rid="B47">2005</xref>, p. 139). Scalp recordings were referenced to CMS/DRL (P3/P4) locations. All electrode impedances were kept below 10&#x02009;k&#x003A9;. Because the Emotiv EPOC headset is noisier than high-end systems, a high-pass hardware filter removed signals below 0.16&#x02009;Hz and a low-pass filter removed signals above 30&#x02009;Hz to minimize oculomotor, motor, and electrogalvanic artifacts (see Bobrov et al., <xref ref-type="bibr" rid="B6">2011</xref> and Duvinage et al., <xref ref-type="bibr" rid="B16">2012</xref> for similar filtering ranges with the Emotiv EPOC headset). The EEG was sampled at 2048&#x02009;Hz and was down-sampled to 128&#x02009;Hz. Gross eye blink and movement artifacts over 150&#x02009;&#x003BC;V were excluded from the analysis. All data were wirelessly collected via a proprietary Bluetooth USB chip operating in the same frequency range as the headset (2.4&#x02009;GHz). Data were recorded using Emotiv Testbench software (Emotiv Systems, Inc., San Francisco, CA, USA).</p>
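The 0.16&#x02013;30&#x02009;Hz pass band and the 150&#x02009;&#x003BC;V artifact criterion can be sketched together. The Butterworth filter below is a software stand-in for the hardware filters described above, and the helper name and synthetic epochs are ours:

```python
import numpy as np
from scipy import signal

fs = 128  # Hz, after down-sampling

# 4th-order Butterworth band-pass mirroring the 0.16-30 Hz range above.
sos = signal.butter(4, [0.16, 30], btype="bandpass", fs=fs, output="sos")

def clean_epochs(epochs, threshold_uv=150.0):
    """Filter each epoch; drop epochs whose residual amplitude exceeds 150 uV."""
    kept = []
    for epoch in epochs:
        filtered = signal.sosfiltfilt(sos, epoch)
        if np.max(np.abs(filtered)) <= threshold_uv:
            kept.append(filtered)
    return kept

# Two synthetic 1 s epochs: a clean 10 Hz rhythm (20 uV) and the same
# rhythm with a ~400 uV blink-like deflection added.
t = np.arange(fs) / fs
clean = 20 * np.sin(2 * np.pi * 10 * t)
blink = clean.copy()
blink[40:60] += 400  # gross eye-blink artifact
print(len(clean_epochs([clean, blink])))  # only the clean epoch survives
```

Because the blink's energy lies mostly below 30&#x02009;Hz, filtering alone cannot remove it, which is why the amplitude criterion is applied after filtering.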
<p>Data were filtered using EEGLAB (Delorme and Makeig, <xref ref-type="bibr" rid="B15">2004</xref>), an open-source toolbox for MATLAB (Mathworks, Inc., Natick, MA, USA). Independent component analyses were implemented using ADJUST, an algorithm that automatically identifies stereotyped temporal and spatial artifacts (Mognon et al., <xref ref-type="bibr" rid="B39">2010</xref>). Any remaining oculomotor or motor activity was visually identified and removed from the dataset.</p>
<p>On average, subjects took 1809&#x02009;ms to process and respond to the words presented on the screen. Therefore, the sLORETA package (Pascual-Marqui, <xref ref-type="bibr" rid="B42">2002</xref>) was used to localize general activity at an early (97&#x02013;291&#x02009;ms) and a late (1551&#x02013;1744&#x02009;ms) time interval (as we predicted linguistic processes would precede perceptual simulation) in both conditions. The early time period began shortly after presentation of the stimuli and the late time period began shortly before the subject response. LORETA used the MNI152 template (Fuchs et al., <xref ref-type="bibr" rid="B18">2002</xref>) to compute a non-parametric topographical analysis of variance comparing differences between two maps of averaged cortical activity over each time period (Strik et al., <xref ref-type="bibr" rid="B52">1998</xref>). The topographies significantly differed between conditions at early, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.01, and late, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.01, intervals, with the maximum source for the early time period being found around the left inferior frontal gyrus (iFG; near electrode sites FC5, F7, and T7) and the maximum source for the late time period being found near the lingual gyrus (near electrode sites O1, O2, P7, and P8). Because EEG source localization maps anatomy onto function only coarsely, these sites are necessarily approximations of the relevant underlying cortical regions (Nunez and Srinivasan, <xref ref-type="bibr" rid="B40">2005</xref>). Note that we are not attempting to pinpoint exact regions of neural activity at a given time; instead we are simply attempting to compare general estimates of neural activity in early versus late processing (i.e., we would like to determine when processing occurs in more linguistic versus in more perceptual regions over the duration of a trial).</p>
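At the 128&#x02009;Hz sampling rate, the early (97&#x02013;291&#x02009;ms) and late (1551&#x02013;1744&#x02009;ms) windows each span roughly 194&#x02009;ms, i.e., about 25 samples; a small helper (ours, for illustration) makes the mapping from milliseconds to sample indices explicit:

```python
fs = 128  # Hz, sampling rate after down-sampling

def window_samples(start_ms, end_ms, fs=fs):
    """Convert a millisecond analysis window to (start, end) sample indices."""
    return round(start_ms * fs / 1000), round(end_ms * fs / 1000)

early = window_samples(97, 291)    # shortly after stimulus onset
late = window_samples(1551, 1744)  # shortly before the mean response
print(early, late)  # (12, 37) (199, 223)
```

Both windows therefore cover comparable numbers of samples, which keeps the early-versus-late comparison balanced.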
<p>Although neural processes are quite distributed and bilaterally activate multiple cortical regions (Bullmore and Sporns, <xref ref-type="bibr" rid="B10">2009</xref>; Bressler and Menon, <xref ref-type="bibr" rid="B8">2010</xref>), there is considerable agreement that specific regions (such as the left iFG and the left superior temporal gyrus (STG)) consistently show increased activation during language processing (Cabeza and Nyberg, <xref ref-type="bibr" rid="B11">2000</xref>; Papathanassiou, <xref ref-type="bibr" rid="B41">2000</xref>; Blank et al., <xref ref-type="bibr" rid="B5">2002</xref>; De Carli et al., <xref ref-type="bibr" rid="B13">2007</xref>). The same applies to visual perception and visual imagery processes, which bilaterally activate multiple cortical regions, in particular the occipital and parietal lobes (Kosslyn et al., <xref ref-type="bibr" rid="B26">1993</xref>, <xref ref-type="bibr" rid="B27">1999</xref>; Alivisatos and Petrides, <xref ref-type="bibr" rid="B1">1996</xref>). Further, visual imagery of words activates the same regions that process incoming perceptual information (Ganis et al., <xref ref-type="bibr" rid="B19">2004</xref>). Reichle et al. (<xref ref-type="bibr" rid="B46">2000</xref>) used fMRI to demonstrate that, when told to rely on visual imagery while processing linguistic information, subjects were more likely to show increased activation in the parietal lobes. As expected, when asked to rely on verbal strategies, activation in traditional language processing regions dominated. Finally, in an fMRI study, Simmons et al. (<xref ref-type="bibr" rid="B51">2008</xref>) found that when asked to generate situations in which a word might occur, subjects showed increased activity in the cuneus, precuneus, posterior cingulate gyrus, retrosplenial cortex, and lateral parietal cortex. However, when asked to participate in a word association task, activation occurred in language processing regions of the brain, specifically the lateral left iFG and the medial inferior frontal gyrus. During early conceptual processing (the first 7.5&#x02009;s of a 15&#x02009;s trial), activation was similar to that of the word association task (i.e., these same language processing areas were active). This is consistent with our output from sLORETA in that during early processing, the maximum source was also the left iFG. Unlike early processing, Simmons et al. (<xref ref-type="bibr" rid="B51">2008</xref>) found that late conceptual processing (the last 7.5&#x02009;s of a 15&#x02009;s trial) resulted in activation of the precuneus, posterior cingulate gyrus, and the right lateral parietal cortex (the regions closest to electrodes P7, P8, O1, and O2), the same regions active during situation generation. Although our sLORETA source localization indicated that the maximum source for our late time period was near the lingual gyrus, this region is also in closest proximity to electrode sites P7, P8, O1, and O2.</p>
<p>Figure <xref ref-type="fig" rid="F2">2</xref> shows the activation for a participant averaged across all trials in 100&#x02009;ms increments. A relatively localized increase in activation in linguistic processing regions began almost immediately after a stimulus was presented. Around the middle of the trial, the activation dispersed from the linguistic processing regions toward perceptual processing regions. Late in the trial, localized activation was relatively greater in perceptual processing regions. This pattern matches the conclusions drawn by Louwerse and Connell (<xref ref-type="bibr" rid="B37">2011</xref>) on the basis of RT data and the results obtained through sLORETA, that linguistic processes precede perceptual processes.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Cortical activation throughout a trial</bold>. Presentation of the experimental stimulus (i.e., word pair) starts at &#x02212;2800&#x02009;ms.</p></caption>
<graphic xlink:href="fpsyg-03-00385-g002.tif"/>
</fig>
<p>To complement the pattern observed in Figure <xref ref-type="fig" rid="F2">2</xref> in both our RT data and in the sLORETA results, we performed a mixed effects regression on electrode activation. We assigned the linguistic cortical regions, as determined by sLORETA localization, a dummy value of 1, and the perceptual cortical regions, likewise determined by sLORETA localization, a dummy value of 2. We used electrode activation as our dependent variable, and participant, item, and electrode as random factors. We used individual electrodes as random factors to rule out strong effects that could be observed for one electrode but not for others within the regions commonly associated with linguistic or perceptual processing. With this analysis, our objective was to determine to what extent linguistic or perceptual cortical regions overall showed increased activation throughout the trial. As in the previous analyses, <italic>F</italic>-test denominator degrees of freedom for the dependent variable were estimated using the Kenward&#x02013;Roger degrees of freedom adjustment.</p>
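The dummy coding and sign convention described above can be sketched as follows. This is a minimal illustration on simulated data (all values hypothetical), using ordinary least squares for brevity; the full analysis is a mixed effects model that additionally includes random intercepts for participant, item, and electrode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated electrode activations (hypothetical data, for illustration only).
# Region dummy code: 1 = linguistic cluster (FC5, F7, T7),
#                    2 = perceptual cluster (O1, O2, P7, P8).
n = 2000
region = rng.integers(1, 3, size=n).astype(float)
# Make the linguistic cluster more active, as in the semantic judgment task.
activation = 3.0 - 1.5 * (region == 2) + rng.normal(0.0, 1.0, size=n)

# OLS on [intercept, region code]: estimate coefficients, then the t-value
# for the region predictor.
X = np.column_stack([np.ones(n), region])
beta, _, _, _ = np.linalg.lstsq(X, activation, rcond=None)
resid = activation - X @ beta
sigma2 = resid @ resid / (n - 2)
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
t_region = beta[1] / se[1]

# A negative t-value indicates relatively greater activation in the
# linguistic cluster, mirroring the sign convention used in the text.
print(round(t_region, 2))
```

Because the linguistic cluster (coded 1) was simulated as more active, the t-value for the region predictor comes out negative, which is exactly how the negative t-values in the semantic and iconicity analyses are read.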
<p>For the semantic judgment condition, a significant difference was observed between linguistic and perceptual cortical regions, <italic>F</italic>(1, 1153108.58)&#x02009;&#x0003D;&#x02009;46.70, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.001. A similar pattern was found for the iconicity judgment condition, <italic>F</italic>(1, 1464148.76)&#x02009;&#x0003D;&#x02009;24.07, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.001. The fact that a difference was observed is perhaps uninteresting; differences between linguistic and perceptual regions are expected. Instead, the direction of the effect is important here. Recall that linguistic regions were dummy coded as 1, and perceptual regions were dummy coded as 2. Positive <italic>t</italic>-values would indicate that perceptual regions dominate, and negative <italic>t</italic>-values would indicate that linguistic regions dominate. Based on the findings in the RT analysis reported above, we predicted that linguistic regions would dominate in both the semantic and iconicity tasks, and more so in the semantic judgment task than in the iconicity judgment task. The results support this prediction: <italic>t</italic>-values in both tasks were negative, with a larger absolute <italic>t</italic>-value in the semantic task, <italic>t</italic>(1153109)&#x02009;&#x0003D;&#x02009;&#x02212;6.83, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.001, than in the iconicity task, <italic>t</italic>(1464149)&#x02009;&#x0003D;&#x02009;&#x02212;4.91, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.001, replicating the RT findings.</p>
<p>To determine whether linguistic processes precede perceptual simulation processes, we created 20 time bins for each trial per participant, per condition (cf. Louwerse and Bangerter, <xref ref-type="bibr" rid="B34">2010</xref>). Each time bin was therefore approximately 80&#x02009;ms for the semantic judgment condition and 95&#x02009;ms for the iconicity judgment condition. Twenty time bins allowed for the largest number of groups for examining trends of each factor while retaining sufficient data points per participant to test the time course hypotheses. Mixed effects models were again run, now with time bin as an added predictor in the model. The <italic>t</italic>-values of the mixed effects models per time bin are shown in Figure <xref ref-type="fig" rid="F3">3</xref>A and in Tables <xref ref-type="table" rid="T2">2</xref> and <xref ref-type="table" rid="T3">3</xref>. The figure shows that <italic>t</italic>-values in both the semantic judgment and the iconicity judgment experiments are predominantly negative in the first half of the trial (suggesting a bias toward cortical regions associated with linguistic processing) and predominantly positive toward the end of the trial (suggesting a bias toward cortical regions associated with perceptual processing). Note that these are the relative effect sizes for the two clusters of cortical regions (FC5, F7, and T7 versus O1, O2, P7, and P8), with the effects for individual electrodes filtered out. The findings do not show low activation in the perceptual processing areas early in the trial (words must of course be recognized by the visual system during processing); they merely show that, relative to the brain regions associated with linguistic processing, the effect sizes of perceptual processing regions dominate later in the trial. 
Also note the relative effect for brain regions associated with perceptual processing very early in the trial (time bins 1&#x02013;4), perhaps in line with the early activation of perceptual simulations (Hauk et al., <xref ref-type="bibr" rid="B22">2008</xref>; Pulverm&#x000FC;ller et al., <xref ref-type="bibr" rid="B44">2009</xref>).</p>
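The binning scheme described above can be sketched as follows. The function name, sampling rate, and trial duration are illustrative assumptions, not values from the study; the sketch shows only how each EEG sample is assigned to one of 20 equal-width bins per trial.

```python
import numpy as np

def assign_time_bins(sample_times_ms, trial_duration_ms, n_bins=20):
    """Map each sample time (ms from trial onset) to one of n_bins bins.

    With 20 bins, the bin widths match the text: roughly 80 ms for a
    ~1600 ms semantic judgment trial and roughly 95 ms for a ~1900 ms
    iconicity judgment trial.
    """
    edges = np.linspace(0.0, trial_duration_ms, n_bins + 1)
    # searchsorted finds the bin to the left of each sample; clip keeps the
    # final sample of the trial inside the last bin.
    bins = np.searchsorted(edges, sample_times_ms, side="right") - 1
    return np.clip(bins, 0, n_bins - 1)

# Hypothetical 250 Hz recording (one sample every 4 ms) over a 1600 ms trial.
times = np.arange(0.0, 1600.0, 4.0)
bins = assign_time_bins(times, 1600.0)
print(bins.min(), bins.max())  # samples span bins 0 through 19
```

Each bin then receives its own mixed effects model with the linguistic/perceptual dummy code as predictor, yielding the per-bin t-values plotted in Figure 3A.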
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>(A)</bold> <italic>t</italic>-values for each of the 20 time bins for both the semantic judgment and iconicity judgment conditions. Negative <italic>t</italic>-values represent a relative bias toward linguistic cortical regions, positive <italic>t</italic>-values represent a relative bias toward perceptual cortical regions. <bold>(B)</bold> <italic>t</italic>-values for each of the 20 time bins for both the semantic judgment and iconicity judgment conditions fitted using a sinusoidal curve model and correlation coefficients, standard errors, and parameter coefficients for the sinusoidal model, y&#x02009;&#x0003D;&#x02009;a&#x02009;&#x0002B;&#x02009;b&#x02009;&#x000D7;&#x02009;cos (cx&#x02009;&#x0002B;&#x02009;d). Negative <italic>t</italic>-values represent a relative bias toward linguistic cortical regions, positive <italic>t</italic>-values represent a relative bias toward perceptual cortical regions.</p></caption>
<graphic xlink:href="fpsyg-03-00385-g003.tif"/>
</fig>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p><bold>Regression coefficients semantic judgment task EEG experiment</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left">Time bin</th>
<th align="left">Variables</th>
<th align="left">Estimate (SE)</th>
<th align="left"><italic>t</italic> (<italic>df</italic>)</th>
<th align="left">CI lower</th>
<th align="left">CI upper</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">1</td>
<td align="left">Intercept</td>
<td align="left">&#x02212;1.14 (2.45)</td>
<td align="left">&#x02212;0.47 (20.77)</td>
<td align="left">&#x02212;6.23</td>
<td align="left">3.95</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;4.71 (0.73)</td>
<td align="left">&#x02212;6.41 (66071.42)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;6.15</td>
<td align="left">&#x02212;3.27</td>
</tr>
<tr>
<td align="left">2</td>
<td align="left">Intercept</td>
<td align="left">&#x02212;1.30 (2.51)</td>
<td align="left">&#x02212;0.52 (21.64)</td>
<td align="left">&#x02212;6.51</td>
<td align="left">3.90</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;5.34 (0.76)</td>
<td align="left">&#x02212;7.04 (65935.04)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;6.83</td>
<td align="left">&#x02212;3.85</td>
</tr>
<tr>
<td align="left">3</td>
<td align="left">Intercept</td>
<td align="left">0.34 (2.85)</td>
<td align="left">0.12 (37.32)</td>
<td align="left">&#x02212;5.44</td>
<td align="left">6.11</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;5.52 (0.75)</td>
<td align="left">&#x02212;7.36 (66088.74)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;6.99</td>
<td align="left">&#x02212;4.05</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">Intercept</td>
<td align="left">0.19 (1.93)</td>
<td align="left">0.10 (25.32)</td>
<td align="left">&#x02212;3.79</td>
<td align="left">4.16</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;4.40 (0.71)</td>
<td align="left">&#x02212;6.21 (65996.83)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;5.79</td>
<td align="left">&#x02212;3.01</td>
</tr>
<tr>
<td align="left">5</td>
<td align="left">Intercept</td>
<td align="left">&#x02212;0.29 (1.44)</td>
<td align="left">&#x02212;0.20 (27.58)</td>
<td align="left">&#x02212;3.25</td>
<td align="left">2.67</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;3.07 (0.69)</td>
<td align="left">&#x02212;4.45 (65100.18)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;4.42</td>
<td align="left">&#x02212;1.72</td>
</tr>
<tr>
<td align="left">6</td>
<td align="left">Intercept</td>
<td align="left">&#x02212;0.99 (1.37)</td>
<td align="left">&#x02212;0.72 (32.15)</td>
<td align="left">&#x02212;3.78</td>
<td align="left">1.80</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;3.00 (0.69)</td>
<td align="left">&#x02212;4.35 (66873.74)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;4.35</td>
<td align="left">&#x02212;1.65</td>
</tr>
<tr>
<td align="left">7</td>
<td align="left">Intercept</td>
<td align="left">0.39 (1.33)</td>
<td align="left">0.29 (31.12)</td>
<td align="left">&#x02212;2.32</td>
<td align="left">3.11</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;3.53 (0.68)</td>
<td align="left">&#x02212;5.16 (66148.30)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;4.87</td>
<td align="left">&#x02212;2.19</td>
</tr>
<tr>
<td align="left">8</td>
<td align="left">Intercept</td>
<td align="left">2.89 (1.22)</td>
<td align="left">2.36 (32.04)&#x0002A;</td>
<td align="left">0.40</td>
<td align="left">5.38</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;5.69 (0.68)</td>
<td align="left">&#x02212;8.39 (66364.66)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;7.02</td>
<td align="left">&#x02212;4.36</td>
</tr>
<tr>
<td align="left">9</td>
<td align="left">Intercept</td>
<td align="left">2.96 (1.04)</td>
<td align="left">2.84 (43.87)&#x0002A;&#x0002A;</td>
<td align="left">0.86</td>
<td align="left">5.06</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;5.85 (0.67)</td>
<td align="left">&#x02212;8.78 (65944.93)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;7.16</td>
<td align="left">&#x02212;4.54</td>
</tr>
<tr>
<td align="left">10</td>
<td align="left">Intercept</td>
<td align="left">2.50 (1.21)</td>
<td align="left">2.07 (31.56)&#x0002A;</td>
<td align="left">0.04</td>
<td align="left">4.97</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;4.07 (0.67)</td>
<td align="left">&#x02212;6.03 (65120.78)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;5.39</td>
<td align="left">&#x02212;2.74</td>
</tr>
<tr>
<td align="left">11</td>
<td align="left">Intercept</td>
<td align="left">2.29 (1.26)</td>
<td align="left">1.82 (31.59)</td>
<td align="left">&#x02212;0.28</td>
<td align="left">4.86</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;3.42 (0.68)</td>
<td align="left">&#x02212;5.04 (67761.30)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;4.75</td>
<td align="left">&#x02212;2.09</td>
</tr>
<tr>
<td align="left">12</td>
<td align="left">Intercept</td>
<td align="left">0.10 (1.13)</td>
<td align="left">0.09 (32.78)</td>
<td align="left">&#x02212;2.20</td>
<td align="left">2.40</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;0.69 (0.69)</td>
<td align="left">&#x02212;1.00 (65801.13)</td>
<td align="left">&#x02212;2.05</td>
<td align="left">0.67</td>
</tr>
<tr>
<td align="left">13</td>
<td align="left">Intercept</td>
<td align="left">&#x02212;0.43 (1.14)</td>
<td align="left">&#x02212;0.38 (30.18)</td>
<td align="left">&#x02212;2.77</td>
<td align="left">1.90</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">0.52 (0.67)</td>
<td align="left">0.77 (66336.68)</td>
<td align="left">&#x02212;0.80</td>
<td align="left">1.84</td>
</tr>
<tr>
<td align="left">14</td>
<td align="left">Intercept</td>
<td align="left">0.32 (1.00)</td>
<td align="left">0.32 (48.81)</td>
<td align="left">&#x02212;1.69</td>
<td align="left">2.33</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">0.96 (0.68)</td>
<td align="left">1.43 (66148.68)</td>
<td align="left">&#x02212;0.36</td>
<td align="left">2.29</td>
</tr>
<tr>
<td align="left">15</td>
<td align="left">Intercept</td>
<td align="left">1.45 (1.22)</td>
<td align="left">1.19 (35.46)</td>
<td align="left">&#x02212;1.02</td>
<td align="left">3.92</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">0.98 (0.65)</td>
<td align="left">1.53 (66886.19)</td>
<td align="left">&#x02212;0.28</td>
<td align="left">2.25</td>
</tr>
<tr>
<td align="left">16</td>
<td align="left">Intercept</td>
<td align="left">2.21 (1.22)</td>
<td align="left">1.80 (33.54)</td>
<td align="left">&#x02212;0.28</td>
<td align="left">4.70</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">0.48 (0.66)</td>
<td align="left">0.72 (65129.27)</td>
<td align="left">&#x02212;0.82</td>
<td align="left">1.77</td>
</tr>
<tr>
<td align="left">17</td>
<td align="left">Intercept</td>
<td align="left">2.24 (1.40)</td>
<td align="left">1.60 (27.42)</td>
<td align="left">&#x02212;0.62</td>
<td align="left">5.10</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">0.15 (0.69)</td>
<td align="left">0.22 (66049.58)</td>
<td align="left">&#x02212;1.21</td>
<td align="left">1.51</td>
</tr>
<tr>
<td align="left">18</td>
<td align="left">Intercept</td>
<td align="left">2.20 (1.59)</td>
<td align="left">1.39 (25.91)</td>
<td align="left">&#x02212;1.06</td>
<td align="left">5.46</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">0.74 (0.72)</td>
<td align="left">1.03 (66212.19)</td>
<td align="left">&#x02212;0.66</td>
<td align="left">2.14</td>
</tr>
<tr>
<td align="left">19</td>
<td align="left">Intercept</td>
<td align="left">0.75 (1.84)</td>
<td align="left">0.41 (20.99)</td>
<td align="left">&#x02212;3.07</td>
<td align="left">4.57</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">2.74 (0.73)</td>
<td align="left">3.75 (65647.68)&#x0002A;&#x0002A;</td>
<td align="left">1.31</td>
<td align="left">4.17</td>
</tr>
<tr>
<td align="left">20</td>
<td align="left">Intercept</td>
<td align="left">0.38 (1.80)</td>
<td align="left">0.21 (20.48)</td>
<td align="left">&#x02212;3.37</td>
<td align="left">4.13</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">2.44 (0.70)</td>
<td align="left">3.49 (65735.49)&#x0002A;&#x0002A;</td>
<td align="left">1.07</td>
<td align="left">3.81</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note</italic>. Dependent variable is EEG activation: negative <italic>t</italic>-values indicate a bias toward linguistic cortical areas, positive <italic>t</italic>-values a bias toward perceptual cortical areas; &#x0002A;&#x0002A;<italic>p</italic>&#x02009;&#x0003C;&#x02009;0.01, &#x0002A;<italic>p</italic>&#x02009;&#x0003C;&#x02009;0.05.</p>
</table-wrap-foot>
</table-wrap>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p><bold>Regression coefficients iconicity judgment EEG experiment</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left">Time bin</th>
<th align="left">Variables</th>
<th align="left">Estimate (SE)</th>
<th align="left"><italic>t</italic> (<italic>df</italic>)</th>
<th align="left">CI lower</th>
<th align="left">CI upper</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">1</td>
<td align="left">Intercept</td>
<td align="left">0.58 (1.23)</td>
<td align="left">0.47 (27.25)</td>
<td align="left">&#x02212;1.94</td>
<td align="left">3.09</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;0.34 (0.54)</td>
<td align="left">&#x02212;0.63 (86498.15)</td>
<td align="left">&#x02212;1.41</td>
<td align="left">0.72</td>
</tr>
<tr>
<td align="left">2</td>
<td align="left">Intercept</td>
<td align="left">0.86 (1.36)</td>
<td align="left">0.63 (27.96)</td>
<td align="left">&#x02212;1.92</td>
<td align="left">3.64</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;0.01 (0.55)</td>
<td align="left">&#x02212;0.02 (86822.04)</td>
<td align="left">&#x02212;1.09</td>
<td align="left">1.07</td>
</tr>
<tr>
<td align="left">3</td>
<td align="left">Intercept</td>
<td align="left">&#x02212;0.07 (1.30)</td>
<td align="left">&#x02212;0.05 (29.12)</td>
<td align="left">&#x02212;2.73</td>
<td align="left">2.59</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">2.14 (0.62)</td>
<td align="left">3.47 (86759.75)&#x0002A;&#x0002A;</td>
<td align="left">0.93</td>
<td align="left">3.34</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">Intercept</td>
<td align="left">0.99 (1.41)</td>
<td align="left">0.70 (28.46)</td>
<td align="left">&#x02212;1.89</td>
<td align="left">3.87</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;2.48 (0.61)</td>
<td align="left">&#x02212;4.10 (87120.52)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;3.67</td>
<td align="left">&#x02212;1.29</td>
</tr>
<tr>
<td align="left">5</td>
<td align="left">Intercept</td>
<td align="left">0.85 (2.03)</td>
<td align="left">0.42 (21.39)</td>
<td align="left">&#x02212;3.37</td>
<td align="left">5.06</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;3.20 (0.60)</td>
<td align="left">&#x02212;5.37 (85757.91)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;4.37</td>
<td align="left">&#x02212;2.03</td>
</tr>
<tr>
<td align="left">6</td>
<td align="left">Intercept</td>
<td align="left">1.65 (1.89)</td>
<td align="left">0.87 (23.25)</td>
<td align="left">&#x02212;2.25</td>
<td align="left">5.56</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;4.25 (0.57)</td>
<td align="left">&#x02212;7.43 (87672.08)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;5.37</td>
<td align="left">&#x02212;3.13</td>
</tr>
<tr>
<td align="left">7</td>
<td align="left">Intercept</td>
<td align="left">0.75 (1.88)</td>
<td align="left">0.40 (22.18)</td>
<td align="left">&#x02212;3.14</td>
<td align="left">4.64</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;2.79 (0.55)</td>
<td align="left">&#x02212;5.03 (87008.84)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;3.87</td>
<td align="left">&#x02212;1.70</td>
</tr>
<tr>
<td align="left">8</td>
<td align="left">Intercept</td>
<td align="left">1.52 (1.11)</td>
<td align="left">1.38 (46.84)</td>
<td align="left">&#x02212;0.71</td>
<td align="left">3.76</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;0.74 (0.54)</td>
<td align="left">&#x02212;1.36 (86591.37)</td>
<td align="left">&#x02212;1.80</td>
<td align="left">0.33</td>
</tr>
<tr>
<td align="left">9</td>
<td align="left">Intercept</td>
<td align="left">1.54 (1.43)</td>
<td align="left">1.08 (25.22)</td>
<td align="left">&#x02212;1.40</td>
<td align="left">4.48</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;0.88 (0.49)</td>
<td align="left">&#x02212;1.79 (86759.15)</td>
<td align="left">&#x02212;1.84</td>
<td align="left">0.08</td>
</tr>
<tr>
<td align="left">10</td>
<td align="left">Intercept</td>
<td align="left">3.21 (1.20)</td>
<td align="left">2.66 (29.05)&#x0002A;</td>
<td align="left">0.75</td>
<td align="left">5.67</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;3.16 (0.52)</td>
<td align="left">&#x02212;6.11 (85320.16)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;4.18</td>
<td align="left">&#x02212;2.15</td>
</tr>
<tr>
<td align="left">11</td>
<td align="left">Intercept</td>
<td align="left">3.07 (0.94)</td>
<td align="left">3.28 (61.11)&#x0002A;&#x0002A;</td>
<td align="left">1.20</td>
<td align="left">4.94</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;0.51 (0.49)</td>
<td align="left">&#x02212;1.03 (87746.70)</td>
<td align="left">&#x02212;1.47</td>
<td align="left">0.46</td>
</tr>
<tr>
<td align="left">12</td>
<td align="left">Intercept</td>
<td align="left">2.91 (1.72)</td>
<td align="left">1.69 (22.63)</td>
<td align="left">&#x02212;0.65</td>
<td align="left">6.47</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">1.28 (0.53)</td>
<td align="left">2.43 (86582.36)&#x0002A;</td>
<td align="left">0.25</td>
<td align="left">2.31</td>
</tr>
<tr>
<td align="left">13</td>
<td align="left">Intercept</td>
<td align="left">3.99 (2.18)</td>
<td align="left">1.83 (20.53)</td>
<td align="left">&#x02212;0.55</td>
<td align="left">8.52</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">&#x02212;0.16 (0.53)</td>
<td align="left">&#x02212;0.30 (87051.02)</td>
<td align="left">&#x02212;1.21</td>
<td align="left">0.89</td>
</tr>
<tr>
<td align="left">14</td>
<td align="left">Intercept</td>
<td align="left">1.16 (1.07)</td>
<td align="left">1.09 (50.98)</td>
<td align="left">&#x02212;0.98</td>
<td align="left">3.31</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">0.49 (0.53)</td>
<td align="left">0.93 (86359.82)</td>
<td align="left">&#x02212;0.54</td>
<td align="left">1.52</td>
</tr>
<tr>
<td align="left">15</td>
<td align="left">Intercept</td>
<td align="left">&#x02212;0.36 (1.13)</td>
<td align="left">&#x02212;0.32 (54.91)</td>
<td align="left">&#x02212;2.63</td>
<td align="left">1.90</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">1.71 (0.49)</td>
<td align="left">3.48 (87276.97)&#x0002A;&#x0002A;</td>
<td align="left">0.75</td>
<td align="left">2.67</td>
</tr>
<tr>
<td align="left">16</td>
<td align="left">Intercept</td>
<td align="left">&#x02212;0.87 (1.34)</td>
<td align="left">&#x02212;0.65 (27.95)</td>
<td align="left">&#x02212;3.60</td>
<td align="left">1.87</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">3.60 (0.51)</td>
<td align="left">7.03 (85655.56)&#x0002A;&#x0002A;</td>
<td align="left">2.59</td>
<td align="left">4.60</td>
</tr>
<tr>
<td align="left">17</td>
<td align="left">Intercept</td>
<td align="left">&#x02212;3.17 (1.48)</td>
<td align="left">&#x02212;2.14 (25.29)&#x0002A;</td>
<td align="left">&#x02212;6.22</td>
<td align="left">&#x02212;0.13</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">4.89 (0.53)</td>
<td align="left">9.24 (87200.29)&#x0002A;&#x0002A;</td>
<td align="left">3.85</td>
<td align="left">5.93</td>
</tr>
<tr>
<td align="left">18</td>
<td align="left">Intercept</td>
<td align="left">&#x02212;4.84 (2.33)</td>
<td align="left">&#x02212;2.08 (19.25)</td>
<td align="left">&#x02212;9.71</td>
<td align="left">0.04</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">4.46 (0.50)</td>
<td align="left">8.90 (87181.78)&#x0002A;&#x0002A;</td>
<td align="left">3.48</td>
<td align="left">5.45</td>
</tr>
<tr>
<td align="left">19</td>
<td align="left">Intercept</td>
<td align="left">&#x02212;4.17 (2.52)</td>
<td align="left">&#x02212;1.66 (18.41)</td>
<td align="left">&#x02212;9.45</td>
<td align="left">1.11</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">3.64 (0.49)</td>
<td align="left">7.39 (87015.12)&#x0002A;&#x0002A;</td>
<td align="left">2.67</td>
<td align="left">4.61</td>
</tr>
<tr>
<td align="left">20</td>
<td align="left">Intercept</td>
<td align="left">&#x02212;2.80 (0.94)</td>
<td align="left">&#x02212;2.99 (41.90)&#x0002A;&#x0002A;</td>
<td align="left">&#x02212;4.68</td>
<td align="left">&#x02212;0.91</td>
</tr>
<tr>
<td align="left"/>
<td align="left">Ling.-perc. bias</td>
<td align="left">5.51 (0.52)</td>
<td align="left">10.62 (85437.82)&#x0002A;&#x0002A;</td>
<td align="left">4.49</td>
<td align="left">6.52</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note</italic>. Dependent variable is EEG activation: negative <italic>t</italic>-values indicate a bias toward linguistic cortical areas, positive <italic>t</italic>-values a bias toward perceptual cortical areas; &#x0002A;&#x0002A;<italic>p</italic>&#x02009;&#x0003C;&#x02009;0.01, &#x0002A;<italic>p</italic>&#x02009;&#x0003C;&#x02009;0.05.</p>
</table-wrap-foot>
</table-wrap>
<p>To further demonstrate the neurological evidence for relatively earlier linguistic processes and relatively later perceptual simulation, we fitted the <italic>t</italic>-values for the 20 time bins using sinusoidal, exponential, power law, and growth models. The fit of the sinusoidal curve was superior to that of the other models in both conditions. Figure <xref ref-type="fig" rid="F3">3</xref>B presents the fit, the standard errors, and the values for the four model parameters. The sinusoidal fit converged in four iterations (iconicity task) and five iterations (semantic task) to a tolerance of 0.00001.</p>
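A sinusoidal fit of the form y = a + b &#x000D7; cos(cx + d) can be sketched as follows. This is a minimal illustration using SciPy's generic nonlinear least squares fitter on synthetic data, not the software or the t-values used in the study; the parameter values and starting guesses are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def sinusoid(x, a, b, c, d):
    """Sinusoidal model from Figure 3B: y = a + b * cos(c*x + d)."""
    return a + b * np.cos(c * x + d)

# Synthesize t-values over 20 time bins from known parameters plus noise,
# so the fit can be checked against ground truth (the study's actual
# per-bin t-values are in Tables 2 and 3).
bins = np.arange(1, 21, dtype=float)
rng = np.random.default_rng(1)
y = sinusoid(bins, 0.5, -4.0, 0.3, 0.2) + rng.normal(0.0, 0.3, bins.size)

# Reasonable starting values help the iterative fit converge, much as the
# reported fits converged within four to five iterations.
p0 = [0.0, -3.0, 0.25, 0.0]
params, _ = curve_fit(sinusoid, bins, y, p0=p0)
a, b, c, d = params
```

With a negative amplitude b and a period spanning the trial, the fitted curve starts negative (linguistic bias) and ends positive (perceptual bias), the qualitative shape shown in Figure 3B.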
<p>Using the sinusoidal model and the parameters derived from the data, the pattern shown in Figure <xref ref-type="fig" rid="F3">3</xref>B emerged. For both the semantic judgment and the iconicity judgment conditions, linguistic cortical regions dominated initially, followed later by perceptual cortical regions. As Figure <xref ref-type="fig" rid="F3">3</xref>B clearly shows, activation in linguistic cortical regions dominated in the semantic judgment task, whereas activation in perceptual cortical regions was prominent in the iconicity judgment task. Moreover, linguistic cortical regions showed greater activation relatively early in the trial, whereas perceptual cortical regions showed greater activation relatively late in processing. The results of these analyses are in line with those we obtained through both more commonly used source localization techniques and RT analyses, but they give a more detailed view of relative cortical activation for linguistic and perceptual processes throughout each trial.</p>
</sec>
</sec>
<sec sec-type="discussion">
<title>Discussion</title>
<p>The purpose of this experiment was to determine, on the basis of neurological evidence, to what extent linguistic and embodied explanations each account for conceptual processing. The results of a semantic judgment and an iconicity judgment task demonstrated that both language statistics and perceptual simulation explain conceptual processing. Specifically, statistical linguistic frequencies best explain performance in semantic judgment tasks, whereas iconicity ratings better explain performance in iconicity judgment tasks. Our results also showed that linguistic cortical regions tended to be relatively more active overall during the semantic task, and perceptual cortical regions tended to be relatively more active during the iconicity task. Moreover, on any given trial, neural activation progressed from language processing cortical regions toward perceptual processing cortical regions. These findings support the conclusion that conceptual processing is both linguistic and embodied, in both early and late processing; when the relative effects of linguistic processes and perceptual simulation processes are compared, however, the former precede the latter (see also Louwerse and Connell, <xref ref-type="bibr" rid="B37">2011</xref>).</p>
<p>Standard EEG methods, such as ERP, are extremely valuable for identifying whether a difference in cortical activation can be obtained for different stimuli. The drawback of these traditional methods is that they require excessive stimulus repetition. Moreover, ERP is useful for identifying whether an anomaly is detected (Van Berkum et al., <xref ref-type="bibr" rid="B55">1999</xref>) or whether a shift in perceptual simulation has taken place (Collins et al., <xref ref-type="bibr" rid="B12">2011</xref>), but it does not sufficiently answer the question of to what extent different cortical regions are relatively more or less active than others. The approach presented here used source localization techniques to determine where differences in activation were present during early and late processing. We then used that information to compare the relative effect sizes of two clusters of cortical regions over the duration of the trial. This method is novel, yet its findings match those obtained with more traditional methods (Simmons et al., <xref ref-type="bibr" rid="B51">2008</xref>; Louwerse and Jeuniaux, <xref ref-type="bibr" rid="B38">2010</xref>; Louwerse and Connell, <xref ref-type="bibr" rid="B37">2011</xref>). This method obviously does not render fMRI unnecessary for localization: in our analyses we compared the relative dominance of different clusters of cortical regions (filtering out their individual effects). Such a comparative technique does not allow for localization of specific regions of the brain; it only allows for a comparison of (predetermined) regions.</p>
<p>How can the findings reported in this paper be explained in terms of the cognitive mechanisms involved in language processing? We have argued elsewhere that language encodes perceptual relations (Louwerse, <xref ref-type="bibr" rid="B33">2011</xref>). Speakers translate prelinguistic conceptual knowledge into linguistic conceptualizations, so that perceptual relations become encoded in language, with distributional language statistics building up as a function of language use (Louwerse, <xref ref-type="bibr" rid="B32">2008</xref>). Louwerse (<xref ref-type="bibr" rid="B31">2007</xref>, <xref ref-type="bibr" rid="B33">2011</xref>) proposed the Symbol Interdependency Hypothesis, which states that comprehension relies on both statistical linguistic processes and perceptual processes. Language users can ground linguistic units in perceptual experiences (embodied cognition), but through language statistics they can also bootstrap meaning from linguistic units (symbolic cognition). Iconicity relations between words (Louwerse, <xref ref-type="bibr" rid="B32">2008</xref>), the modality of a word (Louwerse and Connell, <xref ref-type="bibr" rid="B37">2011</xref>), the valence of a word (Hutchinson and Louwerse, <xref ref-type="bibr" rid="B23">2012</xref>), the social relations between individuals (Hutchinson et al., <xref ref-type="bibr" rid="B24">2012</xref>), the relative location of body parts (Tillman et al., <xref ref-type="bibr" rid="B54">2012</xref>), and even the relative geographical location of city words (Louwerse and Benesh, <xref ref-type="bibr" rid="B35">2012</xref>) can be determined using language statistics. The meaning extracted through language statistics is shallow, but it provides good-enough representations. For a more precise understanding of a linguistic unit, perceptual simulation is needed (Louwerse and Connell, <xref ref-type="bibr" rid="B37">2011</xref>). 
Depending on the stimulus (words or pictures; Louwerse and Jeuniaux, <xref ref-type="bibr" rid="B38">2010</xref>), the cognitive task (Louwerse and Jeuniaux, <xref ref-type="bibr" rid="B38">2010</xref>; current study), and the time of processing (Louwerse and Connell, <xref ref-type="bibr" rid="B37">2011</xref>; current study), the relative effect of language statistics or perceptual simulation dominates. The findings reported in this paper support the Symbol Interdependency Hypothesis, with the relative effect of the linguistic system being more dominant in the early part of the trial and the relative effect of the perceptual system dominating later in the trial.</p>
<p>The RT and EEG findings reported here are relevant for a better understanding of the mechanisms involved in conceptual processing. They are also relevant for the philosophy of science. Recently, many studies have demonstrated that cognition is embodied, shifting the symbolic versus embodied debate toward embodied cognition. The history of the debate (De Vega et al., <xref ref-type="bibr" rid="B14">2008</xref>) is, however, reminiscent of the parable of the blind men and the elephant. In this tale, a group of blind men each touch a different part of an elephant in order to identify the animal and, when comparing their findings, learn that they fundamentally disagree because each fails to see the whole picture. Evidence for embodied cognition is akin to identifying the tusk of the elephant, and evidence for symbolic cognition is similar to identifying its trunk. Dismissing or ignoring either explanation is reminiscent of the last lines of the parable: &#x0201C;For, quarreling, each to his view they cling. Such folk see only one side of a thing&#x0201D; (Udana, 6.4). Cognition is both symbolic and embodied; the important question now is under what conditions symbolic and embodied explanations best explain experimental data. The current study has provided RT and EEG evidence that both linguistic and perceptual simulation processes play a role in conceptual cognition, to different extents, depending on the cognitive task, with linguistic processes preceding perceptual simulation.</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<app-group>
<app id="A1">
<title>Appendix</title>
<sec>
<title>Experimental items used in Experiment 1 and 2</title>
<p>Airplane &#x02013; runway</p>
<p>Antenna &#x02013; radio</p>
<p>Antler &#x02013; deer</p>
<p>Attic &#x02013; basement</p>
<p>Belt &#x02013; shoe</p>
<p>Billboard &#x02013; highway</p>
<p>Boat &#x02013; lake</p>
<p>Boot &#x02013; heel</p>
<p>Bouquet &#x02013; vase</p>
<p>Branch &#x02013; root</p>
<p>Bridge &#x02013; river</p>
<p>Car &#x02013; road</p>
<p>Castle &#x02013; moat</p>
<p>Ceiling &#x02013; floor</p>
<p>Cork &#x02013; bottle</p>
<p>Curtain &#x02013; stage</p>
<p>Eyes &#x02013; whiskers</p>
<p>Faucet &#x02013; drain</p>
<p>Fender &#x02013; tire</p>
<p>Flame &#x02013; candle</p>
<p>Flower &#x02013; stem</p>
<p>Foam &#x02013; beer</p>
<p>Fountain &#x02013; pool</p>
<p>Froth &#x02013; coffee</p>
<p>Glass &#x02013; coaster</p>
<p>Grill &#x02013; charcoal</p>
<p>Handle &#x02013; bucket</p>
<p>Hat &#x02013; scarf</p>
<p>Head &#x02013; foot</p>
<p>Headlight &#x02013; bumper</p>
<p>Hiker &#x02013; trail</p>
<p>Hood &#x02013; engine</p>
<p>Icing &#x02013; donut</p>
<p>Jam &#x02013; toast</p>
<p>Jockey &#x02013; horse</p>
<p>Kite &#x02013; string</p>
<p>Knee &#x02013; ankle</p>
<p>Lamp &#x02013; table</p>
<p>Lid &#x02013; cup</p>
<p>Lighthouse &#x02013; beach</p>
<p>Mailbox &#x02013; post</p>
<p>Mane &#x02013; hoof</p>
<p>Mantle &#x02013; fireplace</p>
<p>Mast &#x02013; deck</p>
<p>Monitor &#x02013; keyboard</p>
<p>Mustache &#x02013; beard</p>
<p>Nose &#x02013; mouth</p>
<p>Pan &#x02013; stove</p>
<p>Pedestrian &#x02013; sidewalk</p>
<p>Penthouse &#x02013; lobby</p>
<p>Pitcher &#x02013; mound</p>
<p>Plant &#x02013; pot</p>
<p>Roof &#x02013; porch</p>
<p>Runner &#x02013; track</p>
<p>Saddle &#x02013; stirrup</p>
<p>Seat &#x02013; pedal</p>
<p>Sheet &#x02013; mattress</p>
<p>Sky &#x02013; ground</p>
<p>Smoke &#x02013; chimney</p>
<p>Sprinkler &#x02013; lawn</p>
<p>Steeple &#x02013; church</p>
<p>Sweater &#x02013; pants</p>
<p>Tractor &#x02013; field</p>
<p>Train &#x02013; railroad</p>
</sec>
</app>
</app-group>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alivisatos</surname> <given-names>B.</given-names></name> <name><surname>Petrides</surname> <given-names>M.</given-names></name></person-group> (<year>1996</year>). <article-title>Functional activation of the human brain during mental rotation.</article-title> <source>Neuropsychologia</source> <volume>35</volume>, <fpage>111</fpage>&#x02013;<lpage>118</lpage>.<pub-id pub-id-type="doi">10.1016/S0028-3932(96)00083-8</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baayen</surname> <given-names>R. H.</given-names></name> <name><surname>Davidson</surname> <given-names>D.</given-names></name> <name><surname>Bates</surname> <given-names>D.</given-names></name></person-group> (<year>2008</year>). <article-title>Mixed-effects modeling with crossed random effects for subjects and items.</article-title> <source>J. Mem. Lang.</source> <volume>59</volume>, <fpage>390</fpage>&#x02013;<lpage>412</lpage>.<pub-id pub-id-type="doi">10.1016/j.jml.2007.12.005</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barsalou</surname> <given-names>L. W.</given-names></name></person-group> (<year>1999</year>). <article-title>Perceptual symbol systems.</article-title> <source>Behav. Brain Sci.</source> <volume>22</volume>, <fpage>577</fpage>&#x02013;<lpage>660</lpage>.<pub-id pub-id-type="doi">10.1017/S0140525X99002149</pub-id><pub-id pub-id-type="pmid">11301525</pub-id></citation></ref>
<ref id="B4"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Barsalou</surname> <given-names>L. W.</given-names></name> <name><surname>Santos</surname> <given-names>A.</given-names></name> <name><surname>Simmons</surname> <given-names>W. K.</given-names></name> <name><surname>Wilson</surname> <given-names>C. D.</given-names></name></person-group> (<year>2008</year>). <article-title>&#x0201C;Language and simulation in conceptual processing,&#x0201D;</article-title> in <source>Symbols, Embodiment, and Meaning</source>, eds <person-group person-group-type="editor"><name><surname>De Vega</surname> <given-names>M.</given-names></name> <name><surname>Glenberg</surname> <given-names>A. M.</given-names></name> <name><surname>Graesser</surname> <given-names>A. C.</given-names></name></person-group> (<publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>), <fpage>245</fpage>&#x02013;<lpage>283</lpage>.</citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blank</surname> <given-names>C. S.</given-names></name> <name><surname>Scott</surname> <given-names>S. K.</given-names></name> <name><surname>Murphy</surname> <given-names>K.</given-names></name> <name><surname>Warburton</surname> <given-names>E.</given-names></name> <name><surname>Wise</surname> <given-names>R. J. S.</given-names></name></person-group> (<year>2002</year>). <article-title>Speech production: Wernicke, Broca and beyond.</article-title> <source>Brain</source> <volume>128</volume>, <fpage>1829</fpage>&#x02013;<lpage>1838</lpage>.<pub-id pub-id-type="doi">10.1093/brain/awf191</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bobrov</surname> <given-names>P.</given-names></name> <name><surname>Frolov</surname> <given-names>A.</given-names></name> <name><surname>Cantor</surname> <given-names>C.</given-names></name> <name><surname>Fedulova</surname> <given-names>I.</given-names></name> <name><surname>Bakhnyan</surname> <given-names>M.</given-names></name> <name><surname>Zhavoronkov</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Brain-computer interface based on generation of visual images.</article-title> <source>PLoS ONE</source> <volume>6</volume>, <fpage>e20674</fpage>.<pub-id pub-id-type="doi">10.1371/journal.pone.0020674</pub-id></citation></ref>
<ref id="B7"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Brants</surname> <given-names>T.</given-names></name> <name><surname>Franz</surname> <given-names>A.</given-names></name></person-group> (<year>2006</year>). <source>Web 1T 5-gram Version 1</source>. <publisher-loc>Philadelphia</publisher-loc>: <publisher-name>Linguistic Data Consortium</publisher-name>.</citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bressler</surname> <given-names>S. L.</given-names></name> <name><surname>Menon</surname> <given-names>V.</given-names></name></person-group> (<year>2010</year>). <article-title>Large-scale brain networks in cognition: emerging methods and principles.</article-title> <source>Trends Cogn. Sci. (Regul. Ed.)</source> <volume>14</volume>, <fpage>277</fpage>&#x02013;<lpage>290</lpage>.<pub-id pub-id-type="doi">10.1016/j.tics.2010.04.004</pub-id><pub-id pub-id-type="pmid">20493761</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Buccino</surname> <given-names>G.</given-names></name> <name><surname>Riggio</surname> <given-names>L.</given-names></name> <name><surname>Melli</surname> <given-names>G.</given-names></name> <name><surname>Binkofski</surname> <given-names>F.</given-names></name> <name><surname>Gallese</surname> <given-names>V.</given-names></name> <name><surname>Rizzolatti</surname> <given-names>G.</given-names></name></person-group> (<year>2005</year>). <article-title>Listening to action-related sentences modulates the activity of the motor system: a combined TMS, and behavioral study.</article-title> <source>Brain Res. Cogn. Brain Res.</source> <volume>24</volume>, <fpage>355</fpage>&#x02013;<lpage>363</lpage>.<pub-id pub-id-type="doi">10.1016/j.cogbrainres.2005.02.020</pub-id><pub-id pub-id-type="pmid">16099349</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bullmore</surname> <given-names>E. T.</given-names></name> <name><surname>Sporns</surname> <given-names>O.</given-names></name></person-group> (<year>2009</year>). <article-title>Complex brain networks: graph theoretical analysis of structural and functional systems.</article-title> <source>Nat. Rev. Neurosci.</source> <volume>10</volume>, <fpage>186</fpage>&#x02013;<lpage>198</lpage>.<pub-id pub-id-type="doi">10.1038/nrn2618</pub-id><pub-id pub-id-type="pmid">19190637</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cabeza</surname> <given-names>R.</given-names></name> <name><surname>Nyberg</surname> <given-names>L.</given-names></name></person-group> (<year>2000</year>). <article-title>Imaging cognition II: an empirical review of 275 PET and fMRI studies.</article-title> <source>J. Cogn. Neurosci.</source> <volume>12</volume>, <fpage>1</fpage>&#x02013;<lpage>47</lpage>.<pub-id pub-id-type="doi">10.1162/08989290051137585</pub-id><pub-id pub-id-type="pmid">10769304</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Collins</surname> <given-names>J.</given-names></name> <name><surname>Pecher</surname> <given-names>D.</given-names></name> <name><surname>Zeelenberg</surname> <given-names>R.</given-names></name> <name><surname>Coulson</surname> <given-names>S.</given-names></name></person-group> (<year>2011</year>). <article-title>Modality switching in a property verification task: an ERP study of what happens when candles flicker after high heels click.</article-title> <source>Front. Psychol.</source> <volume>2</volume>:<fpage>10</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2011.00010</pub-id><pub-id pub-id-type="pmid">21713128</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Carli</surname> <given-names>D.</given-names></name> <name><surname>Garreffa</surname> <given-names>G.</given-names></name> <name><surname>Colonnese</surname> <given-names>C.</given-names></name> <name><surname>Giulietti</surname> <given-names>G.</given-names></name> <name><surname>Labruna</surname> <given-names>L.</given-names></name> <name><surname>Briselli</surname> <given-names>E.</given-names></name> <etal/></person-group> (<year>2007</year>). <article-title>Identification of activated regions during a language task.</article-title> <source>Magn. Reson. Imaging</source> <volume>25</volume>, <fpage>933</fpage>&#x02013;<lpage>938</lpage>.<pub-id pub-id-type="doi">10.1016/j.mri.2007.03.031</pub-id><pub-id pub-id-type="pmid">17524589</pub-id></citation></ref>
<ref id="B14"><citation citation-type="book"><person-group person-group-type="author"><name><surname>De Vega</surname> <given-names>M.</given-names></name> <name><surname>Glenberg</surname> <given-names>A. M.</given-names></name> <name><surname>Graesser</surname> <given-names>A. C.</given-names></name></person-group> (eds.). (<year>2008</year>). <source>Symbols and Embodiment: Debates on Meaning and Cognition</source>. <publisher-loc>Oxford, England</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Delorme</surname> <given-names>A.</given-names></name> <name><surname>Makeig</surname> <given-names>S.</given-names></name></person-group> (<year>2004</year>). <article-title>EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis.</article-title> <source>J. Neurosci. Methods</source> <volume>134</volume>, <fpage>9</fpage>&#x02013;<lpage>21</lpage>.<pub-id pub-id-type="doi">10.1016/j.jneumeth.2003.10.009</pub-id><pub-id pub-id-type="pmid">15102499</pub-id></citation></ref>
<ref id="B16"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Duvinage</surname> <given-names>M.</given-names></name> <name><surname>Castermans</surname> <given-names>T.</given-names></name> <name><surname>Dutoit</surname> <given-names>T.</given-names></name> <name><surname>Petieau</surname> <given-names>M.</given-names></name> <name><surname>Hoellinger</surname> <given-names>T.</given-names></name> <name><surname>Saedeleer</surname> <given-names>C. D.</given-names></name> <etal/></person-group> (<year>2012</year>). <article-title>&#x0201C;A P300-based quantitative comparison between the Emotiv Epoc headset and a medical EEG device,&#x0201D;</article-title> <conf-name>Proceedings of the IASTED International Conference Biomedical Engineering</conf-name>, <publisher-loc>Innsbruck, Austria</publisher-loc>: <publisher-name>ACTA Press</publisher-name>.</citation></ref>
<ref id="B17"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Estepp</surname> <given-names>J. R.</given-names></name> <name><surname>Christensen</surname> <given-names>J. C.</given-names></name> <name><surname>Monnin</surname> <given-names>J. W.</given-names></name> <name><surname>Davis</surname> <given-names>I. M.</given-names></name> <name><surname>Wilson</surname> <given-names>G. F.</given-names></name></person-group> (<year>2009</year>). <article-title>&#x0201C;Validation of a dry electrode system for EEG,&#x0201D;</article-title> in <conf-name>Proceedings of the 53rd Annual Meeting of the Human Factors and Ergonomics Society</conf-name> (<conf-loc>San Antonio</conf-loc>: <conf-sponsor>Mira Digital Publishing</conf-sponsor>), <fpage>1171</fpage>&#x02013;<lpage>1175</lpage>.</citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fuchs</surname> <given-names>M.</given-names></name> <name><surname>Kastner</surname> <given-names>J.</given-names></name> <name><surname>Wagner</surname> <given-names>M.</given-names></name> <name><surname>Hawes</surname> <given-names>S.</given-names></name> <name><surname>Ebersole</surname> <given-names>J. S.</given-names></name></person-group> (<year>2002</year>). <article-title>A standardized boundary element method volume conductor model.</article-title> <source>Clin. Neurophysiol.</source> <volume>113</volume>, <fpage>702</fpage>&#x02013;<lpage>712</lpage>.<pub-id pub-id-type="doi">10.1016/S1388-2457(02)00030-5</pub-id><pub-id pub-id-type="pmid">11976050</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ganis</surname> <given-names>G.</given-names></name> <name><surname>Thompson</surname> <given-names>W. L.</given-names></name> <name><surname>Kosslyn</surname> <given-names>S. M.</given-names></name></person-group> (<year>2004</year>). <article-title>Brain areas underlying visual mental imagery and visual perception: an fMRI study.</article-title> <source>Brain Res. Cogn. Brain Res.</source> <volume>20</volume>, <fpage>226</fpage>&#x02013;<lpage>241</lpage>.<pub-id pub-id-type="doi">10.1016/j.cogbrainres.2004.02.012</pub-id><pub-id pub-id-type="pmid">15183394</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Glenberg</surname> <given-names>A. M.</given-names></name></person-group> (<year>1997</year>). <article-title>What memory is for: creating meaning in the service of action.</article-title> <source>Behav. Brain Sci.</source> <volume>20</volume>, <fpage>41</fpage>&#x02013;<lpage>50</lpage>.<pub-id pub-id-type="doi">10.1017/S0140525X97000010</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hald</surname> <given-names>L. A.</given-names></name> <name><surname>Marshall</surname> <given-names>J. A.</given-names></name> <name><surname>Janssen</surname> <given-names>D. P.</given-names></name> <name><surname>Garnham</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Switching modalities in a sentence verification task: ERP evidence for embodied language processing.</article-title> <source>Front. Psychol.</source> <volume>2</volume>:<fpage>45</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2011.00045</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hauk</surname> <given-names>O.</given-names></name> <name><surname>Shtyrov</surname> <given-names>Y.</given-names></name> <name><surname>Pulverm&#x000FC;ller</surname> <given-names>F.</given-names></name></person-group> (<year>2008</year>). <article-title>The time course of action and action-word comprehension in the human brain as revealed by neurophysiology.</article-title> <source>J. Physiol. Paris</source> <volume>102</volume>, <fpage>50</fpage>&#x02013;<lpage>58</lpage>.<pub-id pub-id-type="doi">10.1016/j.jphysparis.2008.03.013</pub-id><pub-id pub-id-type="pmid">18485679</pub-id></citation></ref>
<ref id="B23"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Hutchinson</surname> <given-names>S.</given-names></name> <name><surname>Louwerse</surname> <given-names>M. M.</given-names></name></person-group> (<year>2012</year>). <article-title>&#x0201C;The upbeat of language: linguistic context and perceptual simulation predict processing valence words,&#x0201D;</article-title> <source>Proceedings of the 34th Annual Conference of the Cognitive Science Society</source>, <publisher-loc>Austin, TX</publisher-loc>: <publisher-name>Cognitive Science Society</publisher-name>.</citation></ref>
<ref id="B24"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Hutchinson</surname> <given-names>S.</given-names></name> <name><surname>Datla</surname> <given-names>V.</given-names></name> <name><surname>Louwerse</surname> <given-names>M. M.</given-names></name></person-group> (<year>2012</year>). <article-title>&#x0201C;Social networks are encoded in language,&#x0201D;</article-title> in <source>Proceedings of the 34th Annual Conference of the Cognitive Science Society</source>, eds <person-group person-group-type="editor"><name><surname>Miyake</surname> <given-names>N.</given-names></name> <name><surname>Peebles</surname> <given-names>D.</given-names></name> <name><surname>Cooper</surname> <given-names>R. P.</given-names></name></person-group> (<publisher-loc>Austin</publisher-loc>: <publisher-name>Cognitive Science Society</publisher-name>), <fpage>491</fpage>&#x02013;<lpage>496</lpage>.</citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kan</surname> <given-names>I. P.</given-names></name> <name><surname>Barsalou</surname> <given-names>L. W.</given-names></name> <name><surname>Solomon</surname> <given-names>K. O.</given-names></name> <name><surname>Minor</surname> <given-names>J. K.</given-names></name> <name><surname>Thompson-Schill</surname> <given-names>S. L.</given-names></name></person-group> (<year>2003</year>). <article-title>Role of mental imagery in a property verification task: fMRI evidence for perceptual representations of conceptual knowledge.</article-title> <source>Cogn. Neuropsychol.</source> <volume>20</volume>, <fpage>525</fpage>&#x02013;<lpage>540</lpage>.<pub-id pub-id-type="doi">10.1080/02643290244000257</pub-id><pub-id pub-id-type="pmid">20957583</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kosslyn</surname> <given-names>S. M.</given-names></name> <name><surname>Alpert</surname> <given-names>N. M.</given-names></name> <name><surname>Thompson</surname> <given-names>W.</given-names></name> <name><surname>Maljkovic</surname> <given-names>V.</given-names></name> <name><surname>Weise</surname> <given-names>S. B.</given-names></name> <name><surname>Chabris</surname> <given-names>C. F.</given-names></name> <etal/></person-group> (<year>1993</year>). <article-title>Visual mental imagery activates topographically organized visual cortex: PET investigations.</article-title> <source>J. Cogn. Neurosci.</source> <volume>5</volume>, <fpage>263</fpage>&#x02013;<lpage>287</lpage>.<pub-id pub-id-type="doi">10.1162/jocn.1993.5.3.263</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kosslyn</surname> <given-names>S. M.</given-names></name> <name><surname>Pascual-Leone</surname> <given-names>A.</given-names></name> <name><surname>Felician</surname> <given-names>O.</given-names></name> <name><surname>Camposano</surname> <given-names>S.</given-names></name> <name><surname>Keenan</surname> <given-names>J.</given-names></name> <name><surname>Ganis</surname> <given-names>G.</given-names></name> <etal/></person-group> (<year>1999</year>). <article-title>The role of area 17 in visual imagery: convergent evidence from PET and rTMS.</article-title> <source>Science</source> <volume>284</volume>, <fpage>167</fpage>&#x02013;<lpage>170</lpage>.<pub-id pub-id-type="doi">10.1126/science.284.5411.167</pub-id><pub-id pub-id-type="pmid">10102821</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Landauer</surname> <given-names>T. K.</given-names></name> <name><surname>Dumais</surname> <given-names>S. T.</given-names></name></person-group> (<year>1997</year>). <article-title>A solution to Plato&#x02019;s problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge.</article-title> <source>Psychol. Rev.</source> <volume>104</volume>, <fpage>211</fpage>&#x02013;<lpage>240</lpage>.<pub-id pub-id-type="doi">10.1037/0033-295X.104.2.211</pub-id></citation></ref>
<ref id="B29"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Landauer</surname> <given-names>T. K.</given-names></name> <name><surname>McNamara</surname> <given-names>D. S.</given-names></name> <name><surname>Dennis</surname> <given-names>S.</given-names></name> <name><surname>Kintsch</surname> <given-names>W.</given-names></name></person-group> (eds.). (<year>2007</year>). <source>Handbook of Latent Semantic Analysis</source>. <publisher-loc>Mahwah, NJ</publisher-loc>: <publisher-name>Erlbaum</publisher-name>.</citation></ref>
<ref id="B30"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Littell</surname> <given-names>R. C.</given-names></name> <name><surname>Stroup</surname> <given-names>W. W.</given-names></name> <name><surname>Freund</surname> <given-names>R. J.</given-names></name></person-group> (<year>2002</year>). <source>SAS for Linear Models</source>, <edition>4th Edn</edition>. <publisher-loc>Cary, NC</publisher-loc>: <publisher-name>SAS Publishing</publisher-name>.</citation></ref>
<ref id="B31"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Louwerse</surname> <given-names>M. M.</given-names></name></person-group> (<year>2007</year>). <article-title>&#x0201C;Symbolic or embodied representations: a case for symbol interdependency,&#x0201D;</article-title> in <source>Handbook of Latent Semantic Analysis</source>, eds <person-group person-group-type="editor"><name><surname>Landauer</surname> <given-names>T.</given-names></name> <name><surname>McNamara</surname> <given-names>D.</given-names></name> <name><surname>Dennis</surname> <given-names>S.</given-names></name> <name><surname>Kintsch</surname> <given-names>W.</given-names></name></person-group> (<publisher-loc>Mahwah, NJ</publisher-loc>: <publisher-name>Erlbaum</publisher-name>), <fpage>107</fpage>&#x02013;<lpage>120</lpage>.</citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Louwerse</surname> <given-names>M. M.</given-names></name></person-group> (<year>2008</year>). <article-title>Embodied relations are encoded in language.</article-title> <source>Psychon. Bull. Rev.</source> <volume>15</volume>, <fpage>838</fpage>&#x02013;<lpage>844</lpage>.<pub-id pub-id-type="doi">10.3758/PBR.15.4.838</pub-id><pub-id pub-id-type="pmid">18792513</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Louwerse</surname> <given-names>M. M.</given-names></name></person-group> (<year>2011</year>). <article-title>Symbol interdependency in symbolic and embodied cognition.</article-title> <source>Top. Cogn. Sci.</source> <volume>3</volume>, <fpage>273</fpage>&#x02013;<lpage>302</lpage>.<pub-id pub-id-type="doi">10.1111/j.1756-8765.2010.01106.x</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Louwerse</surname> <given-names>M. M.</given-names></name> <name><surname>Bangerter</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>Effects of ambiguous gestures and language on the time course of reference resolution.</article-title> <source>Cogn. Sci.</source> <volume>34</volume>, <fpage>1517</fpage>&#x02013;<lpage>1529</lpage>.<pub-id pub-id-type="doi">10.1111/j.1551-6709.2010.01135.x</pub-id><pub-id pub-id-type="pmid">21564257</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Louwerse</surname> <given-names>M. M.</given-names></name> <name><surname>Benesh</surname> <given-names>N.</given-names></name></person-group> (<year>2012</year>). <article-title>Representing spatial structure through maps and language: Lord of the Rings encodes the spatial structure of middle earth.</article-title> <source>Cogn. Sci.</source><pub-id pub-id-type="doi">10.1111/cogs.12000</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Louwerse</surname> <given-names>M. M.</given-names></name> <name><surname>Cai</surname> <given-names>Z.</given-names></name> <name><surname>Hu</surname> <given-names>X.</given-names></name> <name><surname>Ventura</surname> <given-names>M.</given-names></name> <name><surname>Jeuniaux</surname> <given-names>P.</given-names></name></person-group> (<year>2006</year>). <article-title>Cognitively inspired natural-language based knowledge representations: further explorations of latent semantic analysis.</article-title> <source>Int. J. Artif. Intell. Tools</source> <volume>15</volume>, <fpage>1021</fpage>&#x02013;<lpage>1039</lpage>.<pub-id pub-id-type="doi">10.1142/S0218213006003090</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Louwerse</surname> <given-names>M. M.</given-names></name> <name><surname>Connell</surname> <given-names>L.</given-names></name></person-group> (<year>2011</year>). <article-title>A taste of words: linguistic context and perceptual simulation predict the modality of words.</article-title> <source>Cogn. Sci.</source> <volume>35</volume>, <fpage>381</fpage>&#x02013;<lpage>398</lpage>.<pub-id pub-id-type="doi">10.1111/j.1551-6709.2010.01157.x</pub-id><pub-id pub-id-type="pmid">21429005</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Louwerse</surname> <given-names>M. M.</given-names></name> <name><surname>Jeuniaux</surname> <given-names>P.</given-names></name></person-group> (<year>2010</year>). <article-title>The linguistic and embodied nature of conceptual processing.</article-title> <source>Cognition</source> <volume>114</volume>, <fpage>96</fpage>&#x02013;<lpage>104</lpage>.<pub-id pub-id-type="doi">10.1016/j.cognition.2009.09.002</pub-id><pub-id pub-id-type="pmid">19818435</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mognon</surname> <given-names>A.</given-names></name> <name><surname>Jovicich</surname> <given-names>J.</given-names></name> <name><surname>Bruzzone</surname> <given-names>L.</given-names></name> <name><surname>Buiatti</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>ADJUST: an automatic EEG artifact detector based on the joint use of spatial and temporal features.</article-title> <source>Psychophysiology</source> <volume>48</volume>, <fpage>229</fpage>&#x02013;<lpage>240</lpage>.<pub-id pub-id-type="doi">10.1111/j.1469-8986.2010.01061.x</pub-id></citation></ref>
<ref id="B40"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Nunez</surname> <given-names>P. L.</given-names></name> <name><surname>Srinivasan</surname> <given-names>R.</given-names></name></person-group> (<year>2005</year>). <source>Electric Fields of the Brain: The Neurophysics of EEG</source>, <edition>2nd Edn</edition>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Papathanassiou</surname> <given-names>D.</given-names></name></person-group> (<year>2000</year>). <article-title>A common language network for comprehension and production: a contribution to the definition of language epicenters with PET.</article-title> <source>Neuroimage</source> <volume>11</volume>, <fpage>347</fpage>&#x02013;<lpage>357</lpage>.<pub-id pub-id-type="doi">10.1006/nimg.2000.0546</pub-id><pub-id pub-id-type="pmid">10725191</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pascual-Marqui</surname> <given-names>R.</given-names></name></person-group> (<year>2002</year>). <article-title>Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details.</article-title> <source>Methods Find. Exp. Clin. Pharmacol.</source> <volume>24</volume>, <fpage>5</fpage>&#x02013;<lpage>12</lpage>.<pub-id pub-id-type="pmid">12575463</pub-id></citation></ref>
<ref id="B43"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Pecher</surname> <given-names>D.</given-names></name> <name><surname>Zwaan</surname> <given-names>R. A.</given-names></name></person-group> (eds.). (<year>2005</year>). <source>Grounding Cognition: The Role of Perception and Action in Memory, Language, and Thinking</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pulverm&#x000FC;ller</surname> <given-names>F.</given-names></name> <name><surname>Shtyrov</surname> <given-names>Y.</given-names></name> <name><surname>Hauk</surname> <given-names>O.</given-names></name></person-group> (<year>2009</year>). <article-title>Understanding in an instant: neuro-physiological evidence for mechanistic language circuits in the brain.</article-title> <source>Brain Lang.</source> <volume>110</volume>, <fpage>81</fpage>&#x02013;<lpage>94</lpage>.<pub-id pub-id-type="doi">10.1016/j.bandl.2008.12.001</pub-id><pub-id pub-id-type="pmid">19664815</pub-id></citation></ref>
<ref id="B45"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Ram&#x000ED;rez-Cortes</surname> <given-names>J. M.</given-names></name> <name><surname>Alarcon-Aquino</surname> <given-names>V.</given-names></name> <name><surname>Rosas-Cholula</surname> <given-names>G.</given-names></name> <name><surname>Gomez-Gil</surname> <given-names>P.</given-names></name> <name><surname>Escamilla-Ambrosio</surname> <given-names>J.</given-names></name></person-group> (<year>2010</year>). <article-title>&#x0201C;P-300 rhythm detection using ANFIS algorithm and wavelet feature extraction in EEG signals,&#x0201D;</article-title> in <source>Proceedings of the World Congress on Engineering and Computer Science</source>, <publisher-loc>San Francisco</publisher-loc>: <publisher-name>International Association of Engineers</publisher-name>.</citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reichle</surname> <given-names>E. D.</given-names></name> <name><surname>Carpenter</surname> <given-names>P. A.</given-names></name> <name><surname>Just</surname> <given-names>M. A.</given-names></name></person-group> (<year>2000</year>). <article-title>The neural bases of strategy and skill in sentence&#x02013;picture verification.</article-title> <source>Cogn. Psychol.</source> <volume>40</volume>, <fpage>261</fpage>&#x02013;<lpage>295</lpage>.<pub-id pub-id-type="doi">10.1006/cogp.2000.0733</pub-id><pub-id pub-id-type="pmid">10888341</pub-id></citation></ref>
<ref id="B47"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Reilly</surname> <given-names>E. L.</given-names></name></person-group> (<year>2005</year>). <article-title>&#x0201C;EEG recording and operation of the apparatus,&#x0201D;</article-title> in <source>Electroencephalography: Basic Principles, Clinical Applications, and Related Fields</source>, <edition>5th Edn</edition>, eds <person-group person-group-type="editor"><name><surname>Niedermeyer</surname> <given-names>E.</given-names></name> <name><surname>Lopes da Silva</surname> <given-names>F.</given-names></name></person-group> (<publisher-loc>Philadelphia, PA</publisher-loc>: <publisher-name>Lippincott Williams &#x00026; Wilkins</publisher-name>), <fpage>139</fpage>&#x02013;<lpage>160</lpage>.</citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Richter</surname> <given-names>T.</given-names></name></person-group> (<year>2006</year>). <article-title>What is wrong with ANOVA and multiple regression? Analyzing sentence reading times with hierarchical linear models.</article-title> <source>Discourse Process.</source> <volume>41</volume>, <fpage>221</fpage>&#x02013;<lpage>250</lpage>.<pub-id pub-id-type="doi">10.1207/s15326950dp4103_1</pub-id></citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rueschemeyer</surname> <given-names>S.</given-names></name> <name><surname>Glenberg</surname> <given-names>A. M.</given-names></name> <name><surname>Kaschak</surname> <given-names>M. P.</given-names></name> <name><surname>Mueller</surname> <given-names>K.</given-names></name> <name><surname>Friederici</surname> <given-names>A. D.</given-names></name></person-group> (<year>2010</year>). <article-title>Top-down and bottom-up contributions to understanding sentences describing objects in motion.</article-title> <source>Front. Psychol.</source> <volume>1</volume>:<fpage>183</fpage>.<pub-id pub-id-type="doi">10.3389/fpsyg.2010.00183</pub-id></citation></ref>
<ref id="B50"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Semin</surname> <given-names>G. R.</given-names></name> <name><surname>Smith</surname> <given-names>E. R.</given-names></name></person-group> (eds.). (<year>2008</year>). <source>Embodied Grounding: Social, Cognitive, Affective, and Neuroscientific Approaches</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation></ref>
<ref id="B51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Simmons</surname> <given-names>W. K.</given-names></name> <name><surname>Hamann</surname> <given-names>S. B.</given-names></name> <name><surname>Harenski</surname> <given-names>C. N.</given-names></name> <name><surname>Hu</surname> <given-names>X. P.</given-names></name> <name><surname>Barsalou</surname> <given-names>L. W.</given-names></name></person-group> (<year>2008</year>). <article-title>fMRI evidence for word association and situated simulation in conceptual processing.</article-title> <source>J. Physiol. Paris</source> <volume>102</volume>, <fpage>106</fpage>&#x02013;<lpage>119</lpage>.<pub-id pub-id-type="doi">10.1016/j.jphysparis.2008.03.014</pub-id><pub-id pub-id-type="pmid">18468869</pub-id></citation></ref>
<ref id="B52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Strik</surname> <given-names>W. K.</given-names></name> <name><surname>Fallgatter</surname> <given-names>A. J.</given-names></name> <name><surname>Brandeis</surname> <given-names>D.</given-names></name> <name><surname>Pascual-Marqui</surname> <given-names>R.</given-names></name></person-group> (<year>1998</year>). <article-title>Three-dimensional tomography of event-related potentials during response inhibition: evidence for phasic frontal lobe activation.</article-title> <source>Electroencephalogr. Clin. Neurophysiol.</source> <volume>108</volume>, <fpage>406</fpage>&#x02013;<lpage>413</lpage>.<pub-id pub-id-type="doi">10.1016/S0168-5597(98)00021-5</pub-id><pub-id pub-id-type="pmid">9714383</pub-id></citation></ref>
<ref id="B53"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Stytsenko</surname> <given-names>K.</given-names></name> <name><surname>Jablonskis</surname> <given-names>E.</given-names></name> <name><surname>Prahm</surname> <given-names>C.</given-names></name></person-group> (<year>2011</year>). <article-title>Evaluation of consumer EEG device Emotiv EPOC</article-title>. <source>Paper Presented at the MEi:CogSci Conference</source>, <publisher-loc>Ljubljana, Slovenia</publisher-loc>.</citation></ref>
<ref id="B54"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Tillman</surname> <given-names>R.</given-names></name> <name><surname>Datla</surname> <given-names>V.</given-names></name> <name><surname>Hutchinson</surname> <given-names>S.</given-names></name> <name><surname>Louwerse</surname> <given-names>M. M.</given-names></name></person-group> (<year>2012</year>). <article-title>&#x0201C;From head to toe: embodiment through statistical linguistic frequencies,&#x0201D;</article-title> <conf-name>Proceedings of the 34th Annual Conference of the Cognitive Science Society</conf-name>, <conf-loc>Austin, TX</conf-loc>: <conf-sponsor>Cognitive Science Society</conf-sponsor>.</citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Berkum</surname> <given-names>J. J. A.</given-names></name> <name><surname>Hagoort</surname> <given-names>P.</given-names></name> <name><surname>Brown</surname> <given-names>C. M.</given-names></name></person-group> (<year>1999</year>). <article-title>Semantic integration in sentences and discourse: evidence from the N400.</article-title> <source>J. Cogn. Neurosci.</source> <volume>11</volume>, <fpage>657</fpage>&#x02013;<lpage>671</lpage>.<pub-id pub-id-type="doi">10.1162/089892999563724</pub-id><pub-id pub-id-type="pmid">10601747</pub-id></citation></ref>
<ref id="B56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Dantzig</surname> <given-names>S.</given-names></name> <name><surname>Pecher</surname> <given-names>D.</given-names></name> <name><surname>Zeelenberg</surname> <given-names>R.</given-names></name> <name><surname>Barsalou</surname> <given-names>L. W.</given-names></name></person-group> (<year>2008</year>). <article-title>Perceptual processing affects conceptual processing.</article-title> <source>Cogn. Sci.</source> <volume>32</volume>, <fpage>579</fpage>&#x02013;<lpage>590</lpage>.<pub-id pub-id-type="doi">10.1080/03640210802035365</pub-id><pub-id pub-id-type="pmid">21635347</pub-id></citation></ref>
<ref id="B57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zwaan</surname> <given-names>R. A.</given-names></name> <name><surname>Yaxley</surname> <given-names>R.</given-names></name></person-group> (<year>2003</year>). <article-title>Spatial iconicity affects semantic relatedness judgments.</article-title> <source>Psychon. Bull. Rev.</source> <volume>10</volume>, <fpage>954</fpage>&#x02013;<lpage>958</lpage>.<pub-id pub-id-type="doi">10.3758/BF03196557</pub-id><pub-id pub-id-type="pmid">15000544</pub-id></citation></ref>
</ref-list>
</back>
</article>