<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2013.00530</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Hypothesis &#x00026; Theory Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects?</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Hertrich</surname> <given-names>Ingo</given-names></name>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Dietrich</surname> <given-names>Susanne</given-names></name>
</contrib>
<contrib contrib-type="author">
<name><surname>Ackermann</surname> <given-names>Hermann</given-names></name>
</contrib>
</contrib-group>
<aff><institution>Department of General Neurology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of T&#x000FC;bingen</institution> <country>T&#x000FC;bingen, Germany</country>
</aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: <italic>Nicholas Altieri, Idaho State University, USA</italic></p></fn>
<fn fn-type="edited-by"><p>Reviewed by: <italic>Emiliano Ricciardi, University of Pisa, Italy; Nicholas Altieri, Idaho State University, USA</italic></p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: <italic>Ingo Hertrich, Department of General Neurology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of T&#x000FC;bingen, Hoppe-Seyler-Strasse 3, D-72076 T&#x000FC;bingen, Germany. e-mail: <email>ingo.hertrich@uni-tuebingen.de</email></italic></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Language Sciences, a specialty of Frontiers in Psychology.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>16</day>
<month>08</month>
<year>2013</year>
</pub-date>
<pub-date pub-type="collection">
<year>2013</year>
</pub-date>
<volume>4</volume>
<elocation-id>530</elocation-id>
<history>
<date date-type="received">
<day>28</day>
<month>02</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>07</month>
<year>2013</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; Hertrich, Dietrich and Ackermann.</copyright-statement>
<copyright-year>2013</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/"><p> This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind subjects are more resistant to backward masking than sighted subjects, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for &#x0201C;reading&#x0201D; texts at ultra-fast speaking rates (>16 syllables/s), far exceeding the normal range of 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability significantly covaries with BOLD responses in, among other brain regions, bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the &#x0201C;bottleneck&#x0201D; for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims to bind these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition.</p>
</abstract>
<kwd-group>
<kwd>speech perception</kwd>
<kwd>blindness</kwd>
<kwd>time-compressed speech</kwd>
<kwd>audiovisual pathways</kwd>
<kwd>speech timing</kwd>
</kwd-group>
<counts>
<fig-count count="2"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="108"/>
<page-count count="12"/>
<word-count count="0"/>
</counts>
</article-meta>
</front>
<body>
<sec>
<title>INTRODUCTION</title>
<p>Speech perception must be considered a multimodal process, arising as an audio-vibrational sensation even prior to birth (<xref ref-type="bibr" rid="B96">Spence and DeCasper, 1987</xref>) and developing afterward into a primarily audiovisual event. Depending on environmental conditions, lip reading can significantly enhance speech perception (<xref ref-type="bibr" rid="B102">Sumby and Pollack, 1954</xref>; <xref ref-type="bibr" rid="B67">Ma et al., 2009</xref>). Within this context, the auditory and the visual data streams interact at different &#x02013; functionally partially independent &#x02013; computational levels, as indicated by various psychophysical effects such as the McGurk and the ventriloquist phenomena (<xref ref-type="bibr" rid="B11">Bishop and Miller, 2011</xref>). Furthermore, in combination with cross-modal &#x0201C;equivalence representations&#x0201D; (<xref ref-type="bibr" rid="B72">Meltzoff and Moore, 1997</xref>), the visual channel supports early language acquisition, allowing for a direct imitation of mouth movements &#x02013; based on an innate predisposition for the development of social communication (<xref ref-type="bibr" rid="B100">Streri et al., 2013</xref>). Presumably, the underlying mechanism relies on a general action recognition network known from primate studies (<xref ref-type="bibr" rid="B16">Buccino et al., 2004</xref>; <xref ref-type="bibr" rid="B55">Keysers and Fadiga, 2008</xref>), showing that action recognition is closely linked to the motor system and involves a variety of brain structures that have been summarized in a recent review (<xref ref-type="bibr" rid="B77">Molenberghs et al., 2012</xref>).
In everyday life, the visual channel can be used, first, for orienting attention toward the speaking sound source, second, for lipreading, particularly in difficult acoustic environments and, third, for visual prosody, which provides the recipient with additional information on several aspects of the communication process such as timing, emphasis, valence, or even the semantic/pragmatic meaning of spoken language.</p>
<p>Given that speech perception encompasses audiovisual interactions, we must expect significant handicaps at least in early blind subjects with respect to spoken language capabilities. In line with this assumption, delayed speech acquisition has been observed in early blind children (<xref ref-type="bibr" rid="B84">Perez-Pereira and Conti-Ramsden, 1999</xref>). By contrast, however, various studies have shown that blind as compared to sighted individuals have superior abilities with respect to auditory perception, compensating at least partially for their visual deficits. Apart from altered central-auditory processing due to intra-modal neural plasticity in both early and late blind subjects (<xref ref-type="bibr" rid="B29">Elbert et al., 2002</xref>; <xref ref-type="bibr" rid="B99">Stevens and Weaver, 2009</xref>), blind individuals seem, furthermore, to use at least some components of their central visual system to support language-related representations (<xref ref-type="bibr" rid="B92">R&#x000F6;der et al., 2002</xref>). In principle, various pathways are available for visual cortex recruitment as shown in <bold>Figure <xref ref-type="fig" rid="F1">1</xref></bold>. While particularly in early blind subjects backward projections from supramodal areas (red arrow &#x00023;1 in <bold>Figure <xref ref-type="fig" rid="F1">1</xref></bold>) seem to play a major role for visual cortex activation (<xref ref-type="bibr" rid="B17">B&#x000FC;chel, 2003</xref>), more direct pathways among secondary (&#x00023;2) or primary sensory systems (&#x00023;3) have also been postulated (<xref ref-type="bibr" rid="B31">Foxe and Schroeder, 2005</xref>). In the following, we will provide some evidence that even afferent auditory information (&#x00023;4a) can be utilized by the visual system in blind subjects. This information flow seems to serve the timing aspect of event recording rather than object recognition (&#x00023;4b).</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p><bold>Alternative pathways of visual cortex recruitment during auditory tasks in blind subjects.</bold> In the model proposed in the present paper (see <bold>Figure <xref ref-type="fig" rid="F2">2</xref></bold>), path 4a/4b plays a major role, enabling visual cortex to process event-timing based on afferent auditory information.</p></caption>
<graphic xlink:href="fpsyg-04-00530-g001.tif"/>
</fig>
<p>Enhanced auditory processing in blind subjects appears to be associated with improved encoding of timing aspects of the acoustic signals. For example, congenitally blind individuals seem to preferentially pay attention to temporal as compared to spatial cues (<xref ref-type="bibr" rid="B91">R&#x000F6;der et al., 2007</xref>), and they outperform sighted subjects with respect to temporal resolution capabilities in psychoacoustic backward masking experiments (<xref ref-type="bibr" rid="B98">Stevens and Weaver, 2005</xref>). Furthermore, early as well as late blind subjects can acquire the ability to comprehend time-compressed speech at syllable rates up to ca. 20 syllables/s (normal range: ca. 4&#x02013;8 syllables/s; <xref ref-type="bibr" rid="B79">Moos and Trouvain, 2007</xref>). During both backward masking experiments (<xref ref-type="bibr" rid="B97">Stevens et al., 2007</xref>) and ultra-fast speech perception (<xref ref-type="bibr" rid="B47">Hertrich et al., 2009</xref>, <xref ref-type="bibr" rid="B43">2013</xref>; <xref ref-type="bibr" rid="B28">Dietrich et al., 2013</xref>), task performance-related activation of visual cortex has been observed. The aim of this Hypothesis and Theory paper is to delineate potential functional-neuroanatomic mechanisms engaged in enhanced perceptual processing of time-compressed speech in blind subjects. Since this ability has been observed in early as well as late blind individuals (<xref ref-type="bibr" rid="B79">Moos and Trouvain, 2007</xref>), we assume that the blind subjects rely on pathways also present in sighted people. However, these connections might not be available for ultra-fast speech processing in the latter group because they are engaged in the processing of actual visual signals.</p>
<p>Against the background of, first, functional magnetic resonance imaging (fMRI) and magnetoencephalographic (MEG) data recorded during the perception of time-compressed speech, second, the literature on cross-modal neuronal pathways in various species and, third, experimental findings dealing with audiovisual illusion effects, a model of visual cortex involvement in ultra-fast speech perception can be inferred. The issue of ultra-fast speech comprehension necessarily touches the question of a more general theory of continuous speech perception in the brain, including all subcomponents such as phonological encoding, lexical access, working memory, and sensorimotor activations of the articulatory system.</p>
</sec>
<sec>
<title>NORMAL SPEECH PERCEPTION AND THE TEMPORAL BOTTLENECK</title>
<p>In principle, auditory cortex can follow the temporal envelope of verbal utterances across a wide range of speaking rates (<xref ref-type="bibr" rid="B81">Nourski et al., 2009</xref>), indicating that temporal resolution does not represent a limiting factor for the comprehension of time-compressed speech. Thus, we have to assume a &#x0201C;bottleneck&#x0201D; constraining the speed of spoken language encoding. Although the actual execution of motor programs is not required during speech perception, various studies have documented, under these conditions, the engagement of frontal areas associated with speech production (<xref ref-type="bibr" rid="B87">Pulverm&#x000FC;ller et al., 2006</xref>). Furthermore, transcranial magnetic stimulation (TMS) experiments revealed these frontal activations to be functionally relevant, e.g., with respect to lexical processing (<xref ref-type="bibr" rid="B56">Kotz et al., 2010</xref>; <xref ref-type="bibr" rid="B24">D&#x02019;Ausilio et al., 2012</xref>). Thus, any model of speech perception (e.g., <xref ref-type="bibr" rid="B37">Grimaldi, 2012</xref>) has to integrate the action-related processing stages bound to the frontal lobe into the cerebral network leading from the acoustic signal to spoken language representations. These cortical areas, subserving, among other things, supramodal operations and transient memory functions, seem to be organized in a more or less parallel manner during speech and music perception (<xref ref-type="bibr" rid="B83">Patel, 2003</xref>).</p>
<p>A recent fMRI study (<xref ref-type="bibr" rid="B104">Vagharchakian et al., 2012</xref>) suggests that the &#x0201C;bottleneck&#x0201D; in sighted subjects for the comprehension of time-compressed speech arises from limited temporary storage capacities for phonological material rather than speed constraints on the extraction of acoustic/phonetic features. As a consequence, phonological information might become &#x0201C;overwritten&#x0201D; before it can be fully encoded, a phenomenon that presumably contributes to backward masking effects. The buffer mechanism for the comprehension of continuous speech has been attributed to left inferior frontal gyrus (IFG), anterior insula, precentral cortex, and upper frontal cortex including the supplementary motor area (SMA and pre-SMA; <xref ref-type="bibr" rid="B104">Vagharchakian et al., 2012</xref>). While IFG, anterior insula, and precentral gyrus are supposed to be bound to mechanisms of speech generation, pre-SMA and SMA might represent an important timing interface between perception- and action-related mechanisms, subserving, among other things, articulatory programming, inner speech, and working memory. More specifically, SMA has been assumed to trigger the execution of motor programs during the control of any motor activity, including speech production. For example, SMA is involved in the temporal organization and sequential performance of complex movement patterns (<xref ref-type="bibr" rid="B103">Tanji, 1994</xref>). This mesiofrontal area is closely connected to cortical and subcortical structures that adjust the time of movement initiation to a variety of internal and external demands. In acoustically cued simple motor tasks, SMA receives input from auditory cortex, as suggested by a study using Granger causality as a measure of connectivity (<xref ref-type="bibr" rid="B1">Abler et al., 2006</xref>).
For more complex behavior requiring anticipatory synchronization of internal rhythms with external signals, such as paced syllable repetitions, SMA also seems to play a major role in both the initiation and the maintenance of motor activity. Furthermore, there seem to be complementary interactions between SMA and the (upper right) cerebellum, the latter being particularly involved under increased demands on automation and processing speed during speech production (<xref ref-type="bibr" rid="B90">Riecker et al., 2005</xref>; <xref ref-type="bibr" rid="B13">Brendel et al., 2010</xref>).</p>
<p>Assuming that visual cortex in blind individuals supports temporal signal resolution during speech perception, we have to, first, specify the trigger mechanisms of sighted subjects during perception of normal speech and, second, delineate how the visual system engages in the encoding of temporal information. Concerning the former issue, <xref ref-type="bibr" rid="B58">Kotz et al. (2009)</xref> and <xref ref-type="bibr" rid="B57">Kotz and Schwartze (2010)</xref> put forward a comprehensive model of speech perception including an information channel that conveys auditory-prosodic temporal cues via subcortical pathways to pre-SMA and SMA proper. These suggestions also encompass the Asymmetric Sampling in Time hypothesis (<xref ref-type="bibr" rid="B86">Poeppel, 2003</xref>; <xref ref-type="bibr" rid="B50">Hickok and Poeppel, 2007</xref>) accounting for cortical hemisphere differences that are linked via reciprocal pathways to the cerebellum. As a major focus of the model referred to, <xref ref-type="bibr" rid="B57">Kotz and Schwartze (2010)</xref> tried to elucidate the relation of prosodic and syntactic processing &#x02013; two functional subsystems that have to be coordinated. In analogy to prosody and syntax at the level of the sentence, the syllabic structure of speech, i.e., an aspect of prosody relevant to the timing and relative weighting of segmental phonetic information (<xref ref-type="bibr" rid="B36">Greenberg et al., 2003</xref>), provides a temporal grid for the generation of articulation-related speech representations in frontal cortex during perception. In line with the Asymmetric Sampling hypothesis, it has been shown that the syllabic amplitude modulation of the speech envelope is predominantly represented in the right hemisphere (<xref ref-type="bibr" rid="B65">Luo and Poeppel, 2007</xref>, <xref ref-type="bibr" rid="B66">2012</xref>; <xref ref-type="bibr" rid="B2">Abrams et al., 2008</xref>).
Against this background, we hypothesize that a right-hemisphere dominant syllabic timing mechanism is &#x02013; somehow &#x02013; linked via SMA to a left-dominant network of phonological processing during speech encoding.</p>
<p>The brain mechanisms combining low-frequency (theta band) syllabic and high-frequency (gamma band) segmental information have been outlined in a recent perspective paper (<xref ref-type="bibr" rid="B34">Giraud and Poeppel, 2012</xref>). This model must still be further specified with respect to, first, the pathways connecting right-hemisphere prosodic to left-hemisphere phonetic/phonological representations, second, the involved subcortical mechanisms and, third, the role of SMA for temporal coordination. Considering the salient functional role of syllabicity for speech comprehension (<xref ref-type="bibr" rid="B36">Greenberg et al., 2003</xref>), Giraud and Poeppel&#x02019;s model can now be combined with a &#x0201C;syllabic&#x0201D; expansion of the prosodic subcortical-frontal mechanisms including SMA as outlined by <xref ref-type="bibr" rid="B58">Kotz et al. (2009)</xref> and <xref ref-type="bibr" rid="B57">Kotz and Schwartze (2010)</xref>. In this expanded model, a syllable-based representation of speech within the frontal system of spoken language production is temporally coordinated with the incoming speech envelope.</p>
<p>Furthermore, close interactions between frontal speech generation mechanisms and permanent lexical representations have to be postulated since such interactions have also been shown to occur at the level of verbal working memory (<xref ref-type="bibr" rid="B49">Hickok and Poeppel, 2000</xref>; <xref ref-type="bibr" rid="B19">Buchsbaum and D&#x02019;Esposito, 2008</xref>; <xref ref-type="bibr" rid="B5">Acheson et al., 2010</xref>). Although it must be assumed that verbal working memory, including articulatory loop mechanisms, is based on phonological output structures rather than the respective underlying lexical representations, recent data point at a continuous interaction between articulation-related phonological information and permanent lexical &#x0201C;word node&#x0201D; patterns (<xref ref-type="bibr" rid="B93">Romani et al., 2011</xref>). Furthermore, the permanent mental lexicon itself seems to have a dual structure that is linked to the ventral object recognition &#x0201C;what-&#x0201D; pathway within the anterior temporal lobe (phonological features and feature-based word forms; see <xref ref-type="bibr" rid="B25">DeWitt and Rauschecker, 2012</xref>), on the one hand, and to the dorsal spatiotemporal and more action-related (&#x0201C;where-&#x0201D;) projections related to phonological gestures, on the other (<xref ref-type="bibr" rid="B35">Gow, 2012</xref>).</p>
<p>Concerning the comprehension of time-compressed speech, syllable rate appears to represent the critical limiting factor rather than missing phonetic information due to shortened segment durations, since the insertion of regular silent intervals can substantially improve intelligibility in normal subjects (<xref ref-type="bibr" rid="B33">Ghitza and Greenberg, 2009</xref>). Since, furthermore, the &#x0201C;bottleneck&#x0201D; seems to be associated with frontal cortex (<xref ref-type="bibr" rid="B104">Vagharchakian et al., 2012</xref>), it is tempting to assume that the lack of a syllable-prosodic representation at the level of the SMA limits the processing of time-compressed speech once the syllable rate exceeds a certain threshold. Auditory cortex can, in principle, track the envelope of ultra-fast speaking rates (<xref ref-type="bibr" rid="B81">Nourski et al., 2009</xref>) and even monitor considerably higher modulation frequencies, extending into the range of the fundamental frequency of a male speaking voice (<xref ref-type="bibr" rid="B15">Brugge et al., 2009</xref>; <xref ref-type="bibr" rid="B45">Hertrich et al., 2012</xref>). Furthermore, phase locking to amplitude modulations is consistently stronger within the right than the left hemisphere even at frequencies up to 110 Hz (<xref ref-type="bibr" rid="B46">Hertrich et al., 2004</xref>). However, the output from right auditory cortex might have a temporal limitation of syllabic/prosodic event recording: as soon as the modulation frequency approaches the audible range of pitch perception (ca. 16 Hz, roughly the pitch of the lowest note of an organ), prosodic event recording might compete with a representation of tonal structures. Furthermore, syllable duration at such high speaking rates (16 syllables/s, corresponding to a syllable duration of ca. 60 ms) may interfere with the temporal domain of phonetic features related to voice onset time or formant transitions (ca. 20&#x02013;70 ms).
Thus, the auditory system might not be able to track syllable onsets independently of the extraction of segmental phonological features. Although the segmental (left) and the prosodic (right) channels could be processed in different hemispheres, the timing of the two auditory cortices might be too tightly coupled to separate syllabic from segmental processing if the temporal domains overlap.</p>
</sec>
<sec>
<title>A MODEL OF HOW VISUAL CORTEX IN BLIND SUBJECTS CAN ENHANCE THE PERCEPTION OF TIME-COMPRESSED SPEECH</title>
<p>In this section, a model is presented suggesting that right-hemisphere visual cortex activity contributes to enhanced comprehension of ultra-fast speech in blind subjects. This model is supported, first, by the cortical activation patterns (fMRI, MEG) observed during spoken language understanding after vision loss (see Visual Cortex Involvement in Non-Visual Tasks) and, second, by studies dealing with early mechanisms of signal processing in the afferent audiovisual pathways (see Audiovisual Effects and Associated Pathways). Based, essentially, on the Asymmetric Sampling hypothesis (<xref ref-type="bibr" rid="B86">Poeppel, 2003</xref>; <xref ref-type="bibr" rid="B50">Hickok and Poeppel, 2007</xref>), the proposed model &#x02013; as outlined in <bold>Figure <xref ref-type="fig" rid="F2">2</xref></bold> &#x02013; comprises two largely independent data streams. One stream represents phonological processing, including auditory feature recognition in left superior temporal gyrus (STG), frontal speech generation mechanisms, and phonological working memory (green color). The other data stream provides a syllabic timing signal that, in sighted subjects, is predominantly represented at the level of the right-hemisphere auditory system (brown color). The SMA, presumably, synchronizes these two subsystems via subcortical structures (see <xref ref-type="bibr" rid="B57">Kotz and Schwartze, 2010</xref>). Blind subjects perceiving ultra-fast speech may use an alternative prosodic channel via an afferent audiovisual pathway including superior colliculus (SC), pulvinar (Pv), and right visual cortex (red arrows). In sighted subjects, these pathways contribute to auditory-driven gating and timing mechanisms for visual object recognition and/or are involved in visual mechanisms of spatial recalibration for auditory events. This afferent signal could provide the visual system with (meaningless) auditory temporal event markers.
As a second step, the temporally marked visual events (in sighted subjects) or &#x0201C;empty&#x0201D; visual events (in blind subjects) could be transferred to the frontal lobe for further processing, such as the timing of inner speech and its encoding into working memory. In sighted subjects, the occipital-frontal pathways contribute, among other things, to linking visually driven motor activity with the temporal structure of visual events.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p><bold>Hypothetical pathways of speech perception: the phonological network &#x02013; including secondary areas of left-hemisphere auditory cortex in superior temporal gyrus and sulcus (STG/STS) and frontal speech generation mechanisms &#x02013; is colored in green, including additionally left fusiform gyrus (FG) in blind subjects.</bold> This network seems to be linked to a right-dominant syllable-prosodic network via subcortical structures and the supplementary motor area (SMA). In normal subjects, this prosodic network is mainly localized in the right-hemisphere auditory system (brown arrows). In order to maintain this prosodic stream as an independent signal (independent from segmental processing and from pitch processing) despite its temporal constraints, blind subjects seem to be able to recruit part of their visual cortex &#x02013; presumably via subcortical afferent auditory information (red arrows) &#x02013; to represent this prosodic information and to transfer it as an event-trigger channel to the frontal part of the speech processing network. Arrows to and from left FG were omitted in order to avoid an overload of the model and since the major aspect addressed here is the interplay between the right-dominant prosodic and the left-dominant phonological network. Furthermore, direct pathways between visual and auditory cortex were also omitted since the &#x0201C;bottleneck&#x0201D; for understanding ultra-fast speech seems to be located in the interface between sensory processing and frontal speech generation mechanisms.</p></caption>
<graphic xlink:href="fpsyg-04-00530-g002.tif"/>
</fig>
<p>Synchronization of the left-hemisphere phonological system with the incoming acoustic signal via a prosodic trigger mechanism &#x02013; one that, at an early stage, has some independence from the left-dominant pathway of phonological object recognition &#x02013; appears to represent an important prerequisite for continuous speech perception under time-critical conditions. This prosodic timing channel, first, might trigger the extraction of phonological features by providing a syllabic grid, since the phonological relevance and informational weight of phonological features depend on their position within a syllable (<xref ref-type="bibr" rid="B36">Greenberg et al., 2003</xref>). Presumably, transcallosal connections between right and left auditory cortex subserve these functions in sighted people. Second, the syllabic-prosodic timing signal could coordinate frontal speech generation and working memory mechanisms with the auditory input signal since speech generation is organized in a syllabic output structure. In particular, these interactions are important for the exact timing of top-down driven forward predictions with regard to the expected acoustic speech signal. Thus, the presence of a syllabic timing signal can significantly enhance the utilization of informational redundancy (predictability) during continuous real-time speech perception. It should also be mentioned that, although we assume an early signal-driven mechanism, visual cortex activation was found to be considerably weaker for (unintelligible) backward as compared to forward speech (<xref ref-type="bibr" rid="B28">Dietrich et al., 2013</xref>; <xref ref-type="bibr" rid="B43">Hertrich et al., 2013</xref>). We have to assume, thus, that top-down mechanisms providing information on the meaningfulness of the sound signal &#x02013; arising, presumably, within frontal cortex &#x02013; have an impact on the recruitment of the visual cortex during ultra-fast speech comprehension.
In particular, such interactions might be relevant for functional neuroplasticity during the training phase when blind subjects learn to accelerate their speech perception system using visual resources.</p>
<p>Apart from right-hemisphere mechanisms of prosody encoding, blind subjects seem also to engage ventral aspects (fusiform gyrus, FG) of their left-hemisphere visual system during ultra-fast speech perception (<xref ref-type="bibr" rid="B47">Hertrich et al., 2009</xref>; <xref ref-type="bibr" rid="B28">Dietrich et al., 2013</xref>). Therefore, left FG was added to <bold>Figure <xref ref-type="fig" rid="F2">2</xref></bold> although the functional role of this occipito-temporal area remains to be further specified. At least parts of left FG appear to serve as a secondary phonological and/or visual word form area, linked to the left-hemisphere language processing network (<xref ref-type="bibr" rid="B71">McCandliss et al., 2003</xref>; <xref ref-type="bibr" rid="B22">Cao et al., 2008</xref>; <xref ref-type="bibr" rid="B23">Cone et al., 2008</xref>; <xref ref-type="bibr" rid="B28">Dietrich et al., 2013</xref>).</p>
</sec>
<sec>
<title>VISUAL CORTEX INVOLVEMENT IN NON-VISUAL TASKS</title>
<p>A large number of studies report visual cortex activity in blind subjects during non-visual tasks, but the functional relevance of these observations is still a matter of debate (<xref ref-type="bibr" rid="B92">R&#x000F6;der et al., 2002</xref>; <xref ref-type="bibr" rid="B20">Burton, 2003</xref>; <xref ref-type="bibr" rid="B21">Burton et al., 2010</xref>; <xref ref-type="bibr" rid="B59">Kupers et al., 2011</xref>). Most studies (see <xref ref-type="bibr" rid="B80">Noppeney, 2007</xref> for a comprehensive review) focus on early blind subjects, reporting visual cortex activity related to various tasks such as linguistic processing or braille reading. In some cases, a causal relationship has explicitly been demonstrated, e.g., by means of TMS showing that a transient &#x0201C;virtual lesion&#x0201D; in left occipital cortex interferes with semantic verbal processing (<xref ref-type="bibr" rid="B6">Amedi et al., 2004</xref>).</p>
<p>Regarding the neuronal mechanisms of functional cross-modal plasticity, cortico-cortical connections have been hypothesized on the basis of animal experiments, either direct cross-modal connections between, e.g., auditory and visual cortex, or backward projections from higher-order supramodal centers toward secondary and primary sensory areas (see e.g., <xref ref-type="bibr" rid="B31">Foxe and Schroeder, 2005</xref>; <xref ref-type="bibr" rid="B7">Bavelier and Hirshorn, 2010</xref>). Notably, even in congenitally blind subjects, supramodal representations seem to be organized quite similarly to those in sighted individuals, indicating that they form a stable pattern largely independent of input modality (<xref ref-type="bibr" rid="B89">Ricciardi and Pietrini, 2011</xref>). Thus, in most examples of the engagement of the central visual system in blind subjects during non-visual cognitive tasks such as linguistic processing, a top-down mode of stimulus processing from higher-order representations toward visual cortex has been assumed (<xref ref-type="bibr" rid="B18">B&#x000FC;chel et al., 1998</xref>; <xref ref-type="bibr" rid="B17">B&#x000FC;chel, 2003</xref>; <xref ref-type="bibr" rid="B68">Macaluso and Driver, 2005</xref>). By contrast, functional neuroplasticity via subcortical pathways has rarely been taken into account (<xref ref-type="bibr" rid="B8">Bavelier and Neville, 2002</xref>; <xref ref-type="bibr" rid="B80">Noppeney, 2007</xref>). As a phylogenetic example, blind mole rats, rodents with a largely inactive peripheral visual system, have developed an additional pathway conveying auditory input from the inferior colliculus via the dorsal lateral geniculate nucleus to the central visual system (<xref ref-type="bibr" rid="B14">Bronchti et al., 2002</xref>). In humans, however, this connection between the afferent auditory and the primary visual pathway does not seem to be implemented.</p>
<p>Our recent studies on blind subjects point to a further possibility of visual cortex involvement in an auditory task, i.e., listening to time-compressed speech. As a substitute for reading, blind individuals often use text-to-speech systems for the reception of texts. The speaking rate of these systems can be adjusted to quite high syllable rates, and blind users may learn to comprehend speech at rates of up to ca. 20 syllables/s (<xref ref-type="bibr" rid="B79">Moos and Trouvain, 2007</xref>), whereas the normal speaking rate amounts to only 4&#x02013;8 syllables/s. fMRI in blind subjects able to understand ultra-fast speech at 16 syllables/s has shown hemodynamic activation, first, in left FG, a region that might be related to phonological representations (<xref ref-type="bibr" rid="B23">Cone et al., 2008</xref>) and, second, in right primary and secondary visual cortex, including parts of Brodmann areas (BA) 17 and 18 (<xref ref-type="bibr" rid="B47">Hertrich et al., 2009</xref>; <xref ref-type="bibr" rid="B28">Dietrich et al., 2013</xref>). Furthermore, covariance analysis of the fMRI data showed that, in addition to these two visual cortex areas, the ability to comprehend ultra-fast speech was significantly associated with activation in bilateral Pv, left IFG, left premotor cortex, left SMA as well as left anterior (aSTS) and bilateral posterior superior temporal sulcus (pSTS). As indicated by preliminary dynamic causal modeling (DCM) analyses correlating functional connectivity with behavioral performance (<xref ref-type="bibr" rid="B26">Dietrich et al., 2010</xref>, <xref ref-type="bibr" rid="B27">2011</xref>), the two visual areas activated in blind subjects, i.e., left-hemisphere FG and right-hemisphere primary and secondary visual cortex, seem to belong to different networks since they did not show significant connectivity in this analysis. 
FG, as part of the object-related ventral visual pathway (<xref ref-type="bibr" rid="B38">Haxby et al., 1991</xref>, <xref ref-type="bibr" rid="B39">2000</xref>), might serve the representation of phonological &#x0201C;objects&#x0201D; linked to auditory and visual word form representations of the mental lexicon (<xref ref-type="bibr" rid="B71">McCandliss et al., 2003</xref>; <xref ref-type="bibr" rid="B107">Vigneau et al., 2006</xref>). Direct links between auditory and visual object representations have also been suggested to be activated by the use of sensory substitution devices &#x0201C;translating&#x0201D; optical signals into audible acoustic patterns (<xref ref-type="bibr" rid="B101">Striem-Amit et al., 2012</xref>). By contrast, right-dominant activation of early visual cortex as documented by <xref ref-type="bibr" rid="B28">Dietrich et al. (2013)</xref> seems to be associated with more elementary signal-related aspects as indicated by functional connectivity to pulvinar and auditory cortex. Furthermore, significant connectivity was observed between right visual cortex and left SMA, an area of temporal coordination in the frontal action network. Admittedly, considering the low temporal resolution of fMRI, this DCM analysis does not directly reflect the rapid information flow during speech perception. However, further evidence for an early signal-related rather than a higher-order linguistic aspect of speech processing being performed in right visual cortex has been provided by an MEG experiment (<xref ref-type="bibr" rid="B43">Hertrich et al., 2013</xref>). This study showed a particular signal component with a magnetic source in right occipital cortex that is phase-locked to a syllable onset signal derived from the speech envelope. 
The cross-correlation latency of this component was about 40&#x02013;80 ms (see Figure 3 in <xref ref-type="bibr" rid="B43">Hertrich et al., 2013</xref>), indicating that this phase-locked activity arises quite early and, thus, might be driven by subcortical afferent input rather than cortico-cortical pathways. This might also be taken as an indicator that visual cortex activity represents a timing pattern rather than linguistic content. Thus, we hypothesize that visual cortex transfers a pre-linguistic prosodic signal, supporting the frontal action part of the speech perception network with timing information when the syllable rate exceeds the temporal resolution of the normal auditory prosody module. Admittedly, this model is still highly speculative given the limited basis of experimental data available so far. Nevertheless, these suggestions shed some further light on exceptional abilities of blind subjects in the non-speech domain, such as their resistance to backward masking as indicated by psychoacoustic experiments, pointing to a general mechanism of visual cortex recruitment for the purpose of time-critical event recording in blind subjects.</p>
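The latency estimate discussed above rests on cross-correlating a cortical signal with a syllable-onset function derived from the speech envelope. As a purely illustrative sketch (a toy simulation of ours, not the analysis pipeline of Hertrich et al., 2013; all parameter values are assumptions), the following Python code generates an onset train at the 16 Hz ultra-fast syllable rate, a noisy response delayed by 60 ms, and recovers that delay from the cross-correlation peak:

```python
import numpy as np

FS = 1000                  # sampling rate (Hz); assumed for illustration
SYLLABLE_RATE = 16         # ultra-fast speech, syllables/s
TRUE_LAG_MS = 60           # simulated response latency, within the 40-80 ms range

t = np.arange(0, 2.0, 1.0 / FS)

# Syllable-onset "envelope": brief pulses recurring at the syllable rate
envelope = (np.sin(2 * np.pi * SYLLABLE_RATE * t) > 0.95).astype(float)

# Simulated phase-locked cortical response: delayed copy of the onset train plus noise
lag_samples = int(TRUE_LAG_MS * FS / 1000)
rng = np.random.default_rng(0)
response = np.roll(envelope, lag_samples) + 0.2 * rng.standard_normal(t.size)

# Cross-correlate over a 0-200 ms lag window and pick the lag of maximal correlation
lags = np.arange(0, 200)
corr = [np.dot(envelope[:-l or None], response[l:]) for l in lags]
best_lag_ms = lags[int(np.argmax(corr))] * 1000 / FS
print(f"estimated latency: {best_lag_ms} ms")
```

Because the onset train is periodic, the correlation function also shows secondary peaks one syllable period away from the true lag; in real MEG data this ambiguity is one reason why latencies are reported as a range rather than a single value.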
<p>Taken together, left- and right-hemisphere activities observed in visual cortex of blind subjects during ultra-fast speech perception seem to be bound to the segmental (left) and prosodic (right) aspects of speech processing, in analogy to the Asymmetric Sampling hypothesis of the auditory system (<xref ref-type="bibr" rid="B86">Poeppel, 2003</xref>; <xref ref-type="bibr" rid="B50">Hickok and Poeppel, 2007</xref>). Activations of left-hemisphere phonological areas in the ventral visual stream can largely be expected on the basis of our knowledge regarding phonological and visual word form representations. By contrast, right visual cortex in blind subjects seems to belong to a different subsystem, receiving an afferent auditory timing signal that is related to syllable onsets and serving a similar function as the right-dominant prosodic timing channel in the theta band postulated for the auditory system (<xref ref-type="bibr" rid="B2">Abrams et al., 2008</xref>; <xref ref-type="bibr" rid="B66">Luo and Poeppel, 2012</xref>). However, the &#x0201C;prosodic&#x0201D; interpretation of right-hemisphere visual activities may require further support, first, with respect to existing pathways that could build up such an extended prosodic network and, second, with respect to temporal resolution. Thus, in the following section various audiovisual experiments will be reviewed that can shed some light on the pathways contributing to visual system involvement in syllabic prosody representations.</p>
</sec>
<sec>
<title>AUDIOVISUAL EFFECTS AND ASSOCIATED PATHWAYS</title>
<p>Very robust perceptual audiovisual interactions have been documented, such as the sound-induced multiple-flash illusion: a (physically) single flash is perceived as a double flash if it is accompanied by a sequence of two short acoustic signals (<xref ref-type="bibr" rid="B94">Shams et al., 2000</xref>; <xref ref-type="bibr" rid="B95">Shams and Kim, 2010</xref>). Irrespective of spatial disparity, these experiments demonstrate that visual perception can be qualitatively altered by auditory input at an early level of processing. The perception of the illusory second flash has been found to depend upon an early electrophysiological response component in the central visual system following the second sound at a latency of only 30&#x02013;60 ms (<xref ref-type="bibr" rid="B75">Mishra et al., 2007</xref>). These experiments nicely show that visual cortex is well able to capture acoustic event information at a high temporal resolution and at an early stage of processing. Further electrophysiological evidence for very fast audiovisual interactions has been obtained during simple reaction time tasks (<xref ref-type="bibr" rid="B78">Molholm et al., 2002</xref>).</p>
<p>Under natural conditions, early auditory-to-visual information transfer may serve to improve the detection of visual events although it seems to work in a quite unspecific manner with respect to both the location of the visual event in the visual field and cross-modal spatial congruence or incongruence (<xref ref-type="bibr" rid="B30">Fiebelkorn et al., 2011</xref>). Furthermore, spatially irrelevant sounds presented shortly before visual targets may speed up reaction times, even in the absence of any specific predictive value (<xref ref-type="bibr" rid="B54">Keetels and Vroomen, 2011</xref>). Such early audio-to-visual interactions seem to work predominantly as timing cues rather than signaling specific event-related attributes although some auditory spatial information can, in addition, be derived, e.g., when two data streams have to be segregated (<xref ref-type="bibr" rid="B41">Heron et al., 2012</xref>). Interestingly, the enhancement of visual target detection by auditory-to-visual information flow is not restricted to the actual event. Even passive repetitive auditory stimulation up to 30 min prior to a visual detection task can improve flash detection in the impaired hemifield of hemianopic patients (<xref ref-type="bibr" rid="B63">Lewald et al., 2012</xref>), indicating that auditory stimuli activate audiovisual pathways.</p>
<p>From a more general functional point of view, early audiovisual interactions facilitate the detection of cross-modal (in-)coherence of signals extending across both modalities. In this respect, the two channels appear to be asymmetric regarding temporal and spatial processing. In the temporal domain, the visual system appears to be adapted or gated (<xref ref-type="bibr" rid="B88">Purushothaman et al., 2012</xref>) by auditory information related to the time of acoustic signal onset (auditory dominance for timing). As a second step, the spatial representation of events within the dorsal auditory pathway may become recalibrated by coincident visual information (<xref ref-type="bibr" rid="B108">Wozny and Shams, 2011</xref>; spatial dominance of the visual system). This asymmetry, attributing temporal and spatial recalibration to different processing stages, can elucidate, for example, the differential interactions of these signal dimensions during the McGurk phenomenon (visual influence on auditory phonetic perception) as compared to the ventriloquist effect (visually induced spatial assignment of a speech signal to a speaking puppet; <xref ref-type="bibr" rid="B11">Bishop and Miller, 2011</xref>). The McGurk effect is highly resistant against spatial incongruence, indicating an early binding mechanism (prior to the evaluation of spatial incongruence) on the basis of approximate temporal coincidence, followed by higher-order transfer of visual phonetic cues toward the auditory phonetic system. The temporal integration window of this effect has an asymmetrical structure and requires, as in natural stop consonant production, a temporal lag of the acoustic relative to the visual signal (<xref ref-type="bibr" rid="B106">Van Wassenhove et al., 2007</xref>). 
In this case, the visual component of the McGurk stimuli not only modifies, but also accelerates distinct electrophysiological responses such as the auditory-evoked N1 deflection (<xref ref-type="bibr" rid="B105">Van Wassenhove et al., 2005</xref>). However, an apparent motion design in which the shift between two pictures is exactly adjusted to the acoustic signal onset does not show such a visual effect on the auditory N1 response (<xref ref-type="bibr" rid="B73">Miki et al., 2004</xref>). In this latter case, presumably, early binding is not possible since the acoustic event trigger precedes the visual shift because of the delayed processing of actual visual signals. Thus, the McGurk effect seems to be based on a very early auditory-to-visual binding mechanism although its outcome might be the result of later higher-order phonological operations. By contrast, in case of the ventriloquist effect, the binding can be attributed to a later stage of spatial recalibration, top-down-driven by the perception of meaningful visual speech cues.</p>
<p>In contrast to syllabic event timing mechanisms assumed to engage visual cortex during ultra-fast speech perception, visuospatial cues are more or less irrelevant for blind subjects. The short latency (40&#x02013;80 ms) of the MEG signal component phase-locked to syllable onsets over the right visual cortex (<xref ref-type="bibr" rid="B43">Hertrich et al., 2013</xref>) is comparable to the latency of visual cortex activity in the case of the illusory double-flash perception, indicating a very early rather than late mechanism of visual cortex activation. As a consequence, we hypothesize that auditory timing information is derived from the acoustic signal at a pre-cortical stage, presumably at the level of the SC, and then transferred to visual cortex via pulvinar and the posterior part of the secondary visual pathway. Although this pathway has been reported to target higher rather than primary visual areas (<xref ref-type="bibr" rid="B70">Martin, 2002</xref>; <xref ref-type="bibr" rid="B9">Berman and Wurtz, 2008</xref>, <xref ref-type="bibr" rid="B10">2011</xref>), a diffusion tensor imaging tractography study also indicates the presence of connections from pulvinar to early cortical visual regions (<xref ref-type="bibr" rid="B61">Leh et al., 2008</xref>). As indicated by a monkey study, the pathway from pulvinar to V1 has a powerful gating function on visual cortex activity (<xref ref-type="bibr" rid="B88">Purushothaman et al., 2012</xref>). 
In sighted human subjects, the pulvinar-cortical visual pathway seems to play an important role with respect to Redundant Signal Effects (<xref ref-type="bibr" rid="B69">Maravita et al., 2008</xref>; see also <xref ref-type="bibr" rid="B74">Miller (1982)</xref> for behavioral effects of bimodal redundancy), multisensory spatial integration (<xref ref-type="bibr" rid="B62">Leo et al., 2008</xref>), audiovisual training of oculomotor functions during visual exploration (<xref ref-type="bibr" rid="B82">Passamonti et al., 2009</xref>), and suppression of visual motion effects during saccades (<xref ref-type="bibr" rid="B9">Berman and Wurtz, 2008</xref>, <xref ref-type="bibr" rid="B10">2011</xref>). Regarding audiovisual interactions in sighted subjects such as the auditory-induced double-flash illusion (<xref ref-type="bibr" rid="B94">Shams et al., 2000</xref>; <xref ref-type="bibr" rid="B75">Mishra et al., 2007</xref>), the short latencies of electrophysiological responses of only 30&#x02013;60 ms, by and large, rule out any significant impact of higher-order pathways from supramodal cortical regions to primary and secondary visual cortex as potential sources of this phenomenon, and even cross-modal cortico-cortical interactions between primary auditory and visual cortex might be too slow.</p>
<p>Cross-modal gating functions at the level of the auditory evoked P50 and N100/M100 potentials as well as mismatch responses have been demonstrated within the framework of visual-to-auditory processing (<xref ref-type="bibr" rid="B60">Lebib et al., 2003</xref>; <xref ref-type="bibr" rid="B105">Van Wassenhove et al., 2005</xref>; <xref ref-type="bibr" rid="B48">Hertrich et al., 2007</xref>, <xref ref-type="bibr" rid="B47">2009</xref>, <xref ref-type="bibr" rid="B42">2011</xref>). Given that auditory event detection triggers visual event perception, as in the case of the auditory-induced double-flash illusion, it also seems possible that subcortical auditory information can trigger &#x0201C;visual&#x0201D; dummy events in the visual cortex of blind subjects. Subsequently, these event markers may function as a secondary temporal gating signal for the purpose of phonological encoding.</p>
<p>Frontal cortex, particularly, SMA, seems to play an important role in the coordination of phonological encoding with prosodic timing (see above). In principle, visual and audiovisual information via SC and pulvinar might reach frontal cortex in the absence of any activation of the occipital lobe (<xref ref-type="bibr" rid="B64">Liddell et al., 2005</xref>). However, this pathway is unlikely to be involved in the perception of ultra-fast speech since, first, it does not particularly involve SMA and, second, it is linked to reflexive action rather than conscious perception. Thus, we assume that in order to signal an event-related trigger to the SMA, the data stream has to pass sensory cortical areas such as somatosensory, auditory, or visual cortex. But how can audiovisual events (in sighted subjects) or auditory-induced empty events represented in visual cortex (in blind people) feed timing information into SMA? A comprehensive study of the efferent and afferent connections of this mesiofrontal area in squirrel monkeys found multiple cortical and subcortical pathways, but no direct input from primary or secondary visual cortex. By contrast, proprioception, probably due to its close relationship to motor control, seems to have a more direct influence on SMA activity (<xref ref-type="bibr" rid="B52">J&#x000FC;rgens, 1984</xref>). Regarding the visual domain, SMA seems to be involved in visually cued motor tasks (<xref ref-type="bibr" rid="B76">Mohamed et al., 2003</xref>) and in visually guided tracking tasks (<xref ref-type="bibr" rid="B85">Picard and Strick, 2003</xref>) as well as in an interaction of visual event detection with oral conversation as shown by reaction time effects (<xref ref-type="bibr" rid="B12">Bowyer et al., 2009</xref>). 
Thus, in analogy to the auditory models of <xref ref-type="bibr" rid="B50">Hickok and Poeppel (2007)</xref> and <xref ref-type="bibr" rid="B57">Kotz and Schwartze (2010)</xref>, we may assume a pathway from the right-hemisphere dorsal visual stream, representing syllabic events, toward the SMA via subcortical structures including the thalamus and the (left) cerebellum.</p>
</sec>
<sec>
<title>DISCUSSION</title>
<p>In summary, the present model assumes a dual data stream to support the linguistic encoding of continuous speech: predominant left-hemisphere extraction of phonetic features and predominant right-hemisphere capture of the speech envelope. The coordination of these two functional subsystems seems to be bound to the frontal cortex. More specifically, SMA might critically contribute to the synchronization of the incoming signal with top-down driven, syllabically organized sequential pacing signals. In the case of ultra-fast speech, the auditory system &#x02013; although capable of processing signals within the 16 Hz domain &#x02013; may fail to separate syllable-prosodic and segmental information at such high rates. Therefore, the speech generation system, including the phonological working memory, cannot be triggered by a prosodic event channel. In order to overcome this bottleneck, we must either learn to encode speech signals in the absence of a syllabic channel &#x02013; presumably a quite difficult task &#x02013; or recruit a further neural pathway to provide the frontal cortex with syllabic information. The latter strategy seems to be available to blind subjects, who may use the audiovisual interface of the secondary visual pathway in order to transmit syllabic event triggers via pulvinar to right visual cortex. As a consequence, the tentative function of visual cortex might consist in the transformation of the received timing signal into a series of (syllabic) events that can subsequently be conveyed to the frontal lobe in order to trigger the phonological representations in the speech generation and working memory system. These &#x0201C;events&#x0201D; might be similar to the ones that, in sighted subjects, become spatially recalibrated by vision. Since vision loss precludes any spatial recalibration, the auditory events may target a region near the center of the retinotopic area in visual cortex. 
Considering, first, that this audiovisual pathway is linked to visuospatial processing in sighted subjects and, second, that the extracted auditory signal components are prosodic event-related rather than phonological data structures, it seems rather natural that they are preferably processed within the right hemisphere. Thus, by &#x0201C;outsourcing&#x0201D; the syllabic channel into the visual system, blind people may overcome the prosodic event timing limits of right-hemisphere auditory cortex.</p>
<p>Various aspects of the proposed model must now be tested explicitly, e.g., by means of TMS techniques and further connectivity analyses. Assuming, for example, that right visual cortex of blind subjects is involved in prosodic timing mechanisms, a virtual lesion of this area during ultra-fast speech perception would be expected to yield similar comprehension deficits as virtual damage to right auditory cortex in sighted subjects during perception of moderately fast speech. Furthermore, pre-activation of right visual cortex as well as co-activation of right visual cortex with SMA might have facilitating effects on speech processing. In sighted subjects, furthermore, it should be possible to simulate the early phase-locked activity in right visual cortex by presenting flashes that are synchronized with the syllable rate. If, indeed, visual cortex can forward prosodic event triggers, these flashes should enhance the comprehension of time-compressed speech.</p>
<p>So far, only a few studies provide clear-cut evidence for a subcortical audiovisual pathway targeting primary visual cortex. The present model postulates that a speech envelope signal is already represented at a pre-cortical level of the brain. As a consequence, the prosodic timing channel engaged in speech processing should be separated from the &#x0201C;segmental&#x0201D; auditory channel already at a subcortical stage. So far, recordings of brainstem potentials have not revealed any lateralization effects similar to the cortical distinction of short-term segmental (left-hemisphere) and low-frequency suprasegmental/prosodic (right-hemisphere) information (<xref ref-type="bibr" rid="B3">Abrams et al., 2010</xref>). At the level of the thalamus, however, low-frequency information is well represented, and it has been hypothesized that these signals &#x02013; bound predominantly to paralemniscal pathways &#x02013; have a gating function regarding the perceptual evaluation of auditory events (<xref ref-type="bibr" rid="B40">He, 2003</xref>; <xref ref-type="bibr" rid="B4">Abrams et al., 2011</xref>). Furthermore, the underlying temporal coding mechanism (spike timing) seems to be particularly involved in the processing of communication sounds via thalamus, primary and non-primary auditory cortex up to frontal areas (<xref ref-type="bibr" rid="B51">Huetz et al., 2011</xref>).</p>
<p>Alternatively, one might suggest that the visual cortex of blind individuals is activated by cross-modal cortico-cortical pathways. In sighted subjects, however, early audiovisual interactions allowing for the enhancement of auditory processing by visual cues require a time-lead of the visual channel extending from 20 to 80 ms (<xref ref-type="bibr" rid="B53">Kayser et al., 2008</xref>). Thus, it seems implausible that ultra-fast speech comprehension can be accelerated by visual cortex activation via cortico-cortical cross-modal pathways. If the visual channel is indeed capable of impacting auditory encoding of speech signals at an early phase-locked stage, then very early subcortical afferent input to the visual system must be postulated. These fast connections might trigger phonological encoding in a manner analogous to the prosodic timing mechanisms in right-hemisphere auditory cortex. The underlying mechanism of this process might consist in phase modulation of oscillatory activity within visual cortex based on subcortical representations of the speech envelope.</p>
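The phase-modulation mechanism suggested at the end of the preceding paragraph can be caricatured with a toy phase-reset model. The sketch below is purely illustrative (coupling strength, frequencies, and trial counts are arbitrary assumptions, not empirical estimates): an oscillator whose phase is nudged toward zero at each envelope-derived syllable onset becomes phase-locked to the onset train across trials, whereas an uncoupled oscillator does not.

```python
import numpy as np

FS = 1000            # sampling rate (Hz); assumed for illustration
ONSET_PERIOD = 62    # samples between syllable onsets (~16 Hz)

def onset_phases(coupling, seed, duration_s=2.0):
    """Phase of a simulated oscillator, sampled at each syllable onset.
    At every onset the phase is pulled toward zero with the given
    coupling strength (coupling = 0 means no afferent envelope input)."""
    rng = np.random.default_rng(seed)
    freq = 16.0 + rng.normal(0.0, 0.5)      # trial-specific natural frequency (Hz)
    phase = rng.uniform(0.0, 2.0 * np.pi)   # random starting phase per trial
    phases = []
    for n in range(int(duration_s * FS)):
        phase += 2.0 * np.pi * freq / FS    # free-running phase advance
        if n % ONSET_PERIOD == 0:           # envelope-derived onset trigger
            phase -= coupling * np.sin(phase)   # phase reset toward zero
            phases.append(phase % (2.0 * np.pi))
    return phases

def phase_locking_value(trials):
    """Mean resultant length across trials (1 = perfect phase locking)."""
    z = np.exp(1j * np.asarray(trials))
    return float(np.abs(z.mean(axis=0)).mean())

plv_uncoupled = phase_locking_value([onset_phases(0.0, s) for s in range(30)])
plv_coupled = phase_locking_value([onset_phases(0.8, s) for s in range(30)])
```

In this caricature, the coupled condition yields a high phase-locking value at syllable onsets despite trial-to-trial variation in starting phase and natural frequency, which is the qualitative signature that the phase-locked MEG component discussed above would be expected to show.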
<p>Since the &#x0201C;bottleneck&#x0201D; for understanding ultra-fast speech in sighted subjects has been assigned to frontal rather than temporal regions, pathways projecting from visual to frontal cortex, targeting, in particular, SMA, must be assumed in order to understand how blind people can overcome these constraints. The connections sighted subjects use to control the motor system during visual perception, both in association with ocular and visually guided upper limb movements, represent a plausible candidate structure. Considering SMA as a motor timing device with multiple input channels but no direct interconnections with primary visual cortex, the transfer of the prosodic signals toward SMA might be performed via subcortical mechanisms involving cerebellum, basal ganglia, and thalamus. However, this has to be demonstrated explicitly in upcoming studies.</p>
<p>The present model might also contribute to a better understanding of previous findings on enhanced auditory performance of blind individuals such as resistance to backward masking, as documented by <xref ref-type="bibr" rid="B98">Stevens and Weaver (2005)</xref>. Notably, this aspect of temporal processing seems to be related to perceptual consolidation rather than elementary auditory time resolution. Furthermore, resistance to backward masking in blind subjects was associated with activity, even preparatory activity, in visual cortex. In line with the present model, activation of visual cortex was found in the right rather than the left hemisphere. <xref ref-type="bibr" rid="B97">Stevens et al. (2007)</xref> interpreted the preparatory visual activation as a &#x0201C;baseline shift&#x0201D; related to attentional modulation. However, they did not provide an explicit hypothesis about the nature of the input signal toward visual cortex. Based on the present model, we might assume that the secondary visual pathway provides the visual system with afferent auditory information. Considering brain activations outside the visual system, <xref ref-type="bibr" rid="B97">Stevens et al. (2007)</xref> did not mention SMA, but other frontal regions such as the frontal eye field, known as a structure serving auditory attentional processing in blind subjects (<xref ref-type="bibr" rid="B32">Garg et al., 2007</xref>). Thus, at least some aspects of the present model might be expanded to the non-speech domain, referring to a general mechanism that enhances the temporal resolution of auditory event recording by using the afferent audiovisual interface toward the secondary visual pathway.</p>
<p>At least partially, the assumption of an early signal-related transfer mechanism via pulvinar, secondary visual pathway, and right visual cortex toward the frontal cortex was based on fMRI connectivity analyses, an approach of still limited temporal resolution. So far, it cannot be excluded that frontal cortex activation under these conditions might simply reflect higher-order linguistic processes that are secondary to, but not necessary for, comprehension. Nevertheless, functional imaging data revealed the time constraints of speech understanding to be associated with frontal structures (<xref ref-type="bibr" rid="B104">Vagharchakian et al., 2012</xref>). Thus, frontal lobe activity during spoken language comprehension seems to comprise both the generation of inner speech after lexical access and the generation of well-timed predictions regarding the syllabically organized structure of upcoming speech material. In other words, it is an interface between bottom-up and top-down mechanisms.</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>This work was supported by the German Research Foundation (DFG; AC 55 09/01) and the Hertie Institute for Clinical Brain Research, T&#x000FC;bingen. Furthermore, we acknowledge support by the Open Access Publishing Fund of the University of Tuebingen, sponsored by the DFG.</p>
</ack>
<ref-list>
<title>REFERENCES</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abler</surname> <given-names>B.</given-names></name> <name><surname>Roebroeck</surname> <given-names>A.</given-names></name> <name><surname>Goebel</surname> <given-names>R.</given-names></name> <name><surname>H&#x000F6;se</surname> <given-names>A.</given-names></name> <name><surname>Sch&#x000F6;nfeldt-Lecuona</surname> <given-names>C.</given-names></name> <name><surname>Hole</surname> <given-names>G.</given-names></name><etal/></person-group> (<year>2006</year>). <article-title>Investigating directed influences between activated brain areas in a motor-response task using fMRI.</article-title> <source><italic>Magn. Reson. Imaging</italic></source> <volume>24</volume> <fpage>181</fpage>&#x02013;<lpage>185</lpage>.<pub-id pub-id-type="doi">10.1016/j.mri.2005.10.022</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abrams</surname> <given-names>D. A.</given-names></name> <name><surname>Nicol</surname> <given-names>T.</given-names></name> <name><surname>Zecker</surname> <given-names>S.</given-names></name> <name><surname>Kraus</surname> <given-names>N.</given-names></name></person-group> (<year>2008</year>). <article-title>Right-hemisphere auditory cortex is dominant for coding syllable patterns in speech.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>28</volume> <fpage>3958</fpage>&#x02013;<lpage>3965</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.0187-08.2008</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abrams</surname> <given-names>D. A.</given-names></name> <name><surname>Nicol</surname> <given-names>T.</given-names></name> <name><surname>Zecker</surname> <given-names>S.</given-names></name> <name><surname>Kraus</surname> <given-names>N.</given-names></name></person-group> (<year>2010</year>). <article-title>Rapid acoustic processing in the auditory brainstem is not related to cortical asymmetry for the syllable rate of speech.</article-title> <source><italic>Clin. Neurophysiol.</italic></source> <volume>121</volume> <fpage>1343</fpage>&#x02013;<lpage>1350</lpage>.<pub-id pub-id-type="doi">10.1016/j.clinph.2010.02.158</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abrams</surname> <given-names>D. A.</given-names></name> <name><surname>Nicol</surname> <given-names>T.</given-names></name> <name><surname>Zecker</surname> <given-names>S.</given-names></name> <name><surname>Kraus</surname> <given-names>N.</given-names></name></person-group> (<year>2011</year>). <article-title>A possible role for a paralemniscal auditory pathway in the coding of slow temporal information.</article-title> <source><italic>Hear. Res.</italic></source> <volume>272</volume> <fpage>125</fpage>&#x02013;<lpage>134</lpage>.<pub-id pub-id-type="doi">10.1016/j.heares.2010.10.009</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Acheson</surname> <given-names>D. J.</given-names></name> <name><surname>Hamidi</surname> <given-names>M.</given-names></name> <name><surname>Binder</surname> <given-names>J. R.</given-names></name> <name><surname>Postle</surname> <given-names>B. R.</given-names></name></person-group> (<year>2010</year>). <article-title>A common neural substrate for language production and verbal working memory.</article-title> <source><italic>J. Cogn. Neurosci.</italic></source> <volume>23</volume> <fpage>1358</fpage>&#x02013;<lpage>1367</lpage>.<pub-id pub-id-type="doi">10.1162/jocn.2010.21519</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amedi</surname> <given-names>A.</given-names></name> <name><surname>Floel</surname> <given-names>A.</given-names></name> <name><surname>Knecht</surname> <given-names>S.</given-names></name> <name><surname>Zohary</surname> <given-names>E.</given-names></name> <name><surname>Cohen</surname> <given-names>L. G.</given-names></name></person-group> (<year>2004</year>). <article-title>Transcranial magnetic stimulation of the occipital pole interferes with verbal processing in blind subjects.</article-title> <source><italic>Nat. Neurosci.</italic></source> <volume>7</volume> <fpage>1266</fpage>&#x02013;<lpage>1270</lpage>.<pub-id pub-id-type="doi">10.1038/nn1328</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bavelier</surname> <given-names>D.</given-names></name> <name><surname>Hirshorn</surname> <given-names>E. A.</given-names></name></person-group> (<year>2010</year>). <article-title>I see where you&#x02019;re hearing: how cross-modal plasticity may exploit homologous brain structures.</article-title> <source><italic>Nat. Neurosci.</italic></source> <volume>13</volume> <fpage>1309</fpage>&#x02013;<lpage>1311</lpage>.<pub-id pub-id-type="doi">10.1038/nn1110-1309</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bavelier</surname> <given-names>D.</given-names></name> <name><surname>Neville</surname> <given-names>H. J.</given-names></name></person-group> (<year>2002</year>). <article-title>Cross-modal plasticity: where and how?</article-title> <source><italic>Nat. Rev. Neurosci.</italic></source> <volume>3</volume> <fpage>443</fpage>&#x02013;<lpage>452</lpage>.<pub-id pub-id-type="doi">10.1038/nrn848</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berman</surname> <given-names>R. A.</given-names></name> <name><surname>Wurtz</surname> <given-names>R. H.</given-names></name></person-group> (<year>2008</year>). <article-title>Exploring the pulvinar path to visual cortex.</article-title> <source><italic>Prog. Brain Res.</italic></source> <volume>171</volume> <fpage>467</fpage>&#x02013;<lpage>473</lpage>.<pub-id pub-id-type="doi">10.1016/S0079-6123(08)00668-7</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berman</surname> <given-names>R. A.</given-names></name> <name><surname>Wurtz</surname> <given-names>R. H.</given-names></name></person-group> (<year>2011</year>). <article-title>Signals conveyed in the pulvinar pathway from superior colliculus to cortical area MT.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>31</volume> <fpage>373</fpage>&#x02013;<lpage>384</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4738-10.2011</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bishop</surname> <given-names>C. W.</given-names></name> <name><surname>Miller</surname> <given-names>L. M.</given-names></name></person-group> (<year>2011</year>). <article-title>Speech cues contribute to audiovisual spatial integration.</article-title> <source><italic>PLoS ONE</italic></source> <volume>6</volume>:<issue>e24016</issue>. <pub-id pub-id-type="doi">10.1371/journal.pone.0024016</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bowyer</surname> <given-names>S. M.</given-names></name> <name><surname>Hsieh</surname> <given-names>L.</given-names></name> <name><surname>Moran</surname> <given-names>J. E.</given-names></name> <name><surname>Young</surname> <given-names>R. A.</given-names></name> <name><surname>Manoharan</surname> <given-names>A.</given-names></name> <name><surname>Liao</surname> <given-names>C.-C. J.</given-names></name><etal/></person-group> (<year>2009</year>). <article-title>Conversation effects on neural mechanisms underlying reaction time to visual events while viewing a driving scene using MEG.</article-title> <source><italic>Brain Res.</italic></source> <volume>1251</volume> <fpage>151</fpage>&#x02013;<lpage>161</lpage>.<pub-id pub-id-type="doi">10.1016/j.brainres.2008.10.001</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brendel</surname> <given-names>B.</given-names></name> <name><surname>Hertrich</surname> <given-names>I.</given-names></name> <name><surname>Erb</surname> <given-names>M.</given-names></name> <name><surname>Lindner</surname> <given-names>A.</given-names></name> <name><surname>Riecker</surname> <given-names>A.</given-names></name> <name><surname>Grodd</surname> <given-names>W.</given-names></name><etal/></person-group> (<year>2010</year>). <article-title>The contribution of mesiofrontal cortex to the preparation and execution of repetitive syllable productions: an fMRI study.</article-title> <source><italic>Neuroimage</italic></source> <volume>50</volume> <fpage>1219</fpage>&#x02013;<lpage>1230</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuroimage.2010.01.039</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bronchti</surname> <given-names>G.</given-names></name> <name><surname>Heil</surname> <given-names>P.</given-names></name> <name><surname>Sadka</surname> <given-names>R.</given-names></name> <name><surname>Hess</surname> <given-names>A.</given-names></name> <name><surname>Scheich</surname> <given-names>H.</given-names></name> <name><surname>Wollberg</surname> <given-names>Z.</given-names></name></person-group> (<year>2002</year>). <article-title>Auditory activation of &#x0201C;visual&#x0201D; cortical areas in the blind mole rat (<italic>Spalax ehrenbergi</italic>).</article-title> <source><italic>Eur. J. Neurosci.</italic></source> <volume>16</volume> <fpage>311</fpage>&#x02013;<lpage>329</lpage>.<pub-id pub-id-type="doi">10.1046/j.1460-9568.2002.02063.x</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brugge</surname> <given-names>J. F.</given-names></name> <name><surname>Nourski</surname> <given-names>K. V.</given-names></name> <name><surname>Oya</surname> <given-names>H.</given-names></name> <name><surname>Reale</surname> <given-names>R. A.</given-names></name> <name><surname>Kawasaki</surname> <given-names>H.</given-names></name> <name><surname>Steinschneider</surname> <given-names>M.</given-names></name><etal/></person-group> (<year>2009</year>). <article-title>Coding of repetitive transients by auditory cortex on Heschl&#x02019;s gyrus.</article-title> <source><italic>J. Neurophysiol.</italic></source> <volume>102</volume> <fpage>2358</fpage>&#x02013;<lpage>2374</lpage>.<pub-id pub-id-type="doi">10.1152/jn.91346.2008</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Buccino</surname> <given-names>G.</given-names></name> <name><surname>Binkofski</surname> <given-names>F.</given-names></name> <name><surname>Riggio</surname> <given-names>L.</given-names></name></person-group> (<year>2004</year>). <article-title>The mirror neuron system and action recognition.</article-title> <source><italic>Brain Lang.</italic></source> <volume>89</volume> <fpage>370</fpage>&#x02013;<lpage>376</lpage>.<pub-id pub-id-type="doi">10.1016/S0093-934X(03)00356-0</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>B&#x000FC;chel</surname> <given-names>C.</given-names></name></person-group> (<year>2003</year>). <article-title>Cortical hierarchy turned on its head.</article-title> <source><italic>Nat. Neurosci.</italic></source> <volume>6</volume> <fpage>657</fpage>&#x02013;<lpage>658</lpage>.<pub-id pub-id-type="doi">10.1038/nn0703-657</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>B&#x000FC;chel</surname> <given-names>C.</given-names></name> <name><surname>Price</surname> <given-names>C.</given-names></name> <name><surname>Frackowiak</surname> <given-names>R. S. J.</given-names></name> <name><surname>Friston</surname> <given-names>K.</given-names></name></person-group> (<year>1998</year>). <article-title>Different activation patterns in the visual cortex of late and congenitally blind subjects.</article-title> <source><italic>Brain</italic></source> <volume>121</volume> <fpage>409</fpage>&#x02013;<lpage>419</lpage>.<pub-id pub-id-type="doi">10.1093/brain/121.3.409</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Buchsbaum</surname> <given-names>B. R.</given-names></name> <name><surname>D&#x02019;Esposito</surname> <given-names>M.</given-names></name></person-group> (<year>2008</year>). <article-title>The search for the phonological store: from loop to convolution.</article-title> <source><italic>J. Cogn. Neurosci.</italic></source> <volume>20</volume> <fpage>762</fpage>&#x02013;<lpage>778</lpage>.<pub-id pub-id-type="doi">10.1162/jocn.2008.20501</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Burton</surname> <given-names>H.</given-names></name></person-group> (<year>2003</year>). <article-title>Visual cortex activity in early and late blind people.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>23</volume> <fpage>4005</fpage>&#x02013;<lpage>4011</lpage>.</citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Burton</surname> <given-names>H.</given-names></name> <name><surname>Sinclair</surname> <given-names>R. J.</given-names></name> <name><surname>Dixit</surname> <given-names>S.</given-names></name></person-group> (<year>2010</year>). <article-title>Working memory for vibrotactile frequencies: comparison of cortical activity in blind and sighted individuals.</article-title> <source><italic>Hum. Brain Mapp.</italic></source> <volume>31</volume> <fpage>1686</fpage>&#x02013;<lpage>1701</lpage>.<pub-id pub-id-type="doi">10.1002/hbm.20966</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cao</surname> <given-names>F.</given-names></name> <name><surname>Bitan</surname> <given-names>T.</given-names></name> <name><surname>Booth</surname> <given-names>J. R.</given-names></name></person-group> (<year>2008</year>). <article-title>Effective brain connectivity in children with reading difficulties during phonological processing.</article-title> <source><italic>Brain Lang.</italic></source> <volume>107</volume> <fpage>91</fpage>&#x02013;<lpage>101</lpage>.<pub-id pub-id-type="doi">10.1016/j.bandl.2007.12.009</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cone</surname> <given-names>N. E.</given-names></name> <name><surname>Burman</surname> <given-names>D. D.</given-names></name> <name><surname>Bitan</surname> <given-names>T.</given-names></name> <name><surname>Bolger</surname> <given-names>D. J.</given-names></name> <name><surname>Booth</surname> <given-names>J. R.</given-names></name></person-group> (<year>2008</year>). <article-title>Developmental changes in brain regions involved in phonological and orthographic processing during spoken language processing.</article-title> <source><italic>Neuroimage</italic></source> <volume>41</volume> <fpage>623</fpage>&#x02013;<lpage>635</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuroimage.2008.02.055</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>D&#x02019;Ausilio</surname> <given-names>A.</given-names></name> <name><surname>Craighero</surname> <given-names>L.</given-names></name> <name><surname>Fadiga</surname> <given-names>L.</given-names></name></person-group> (<year>2012</year>). <article-title>The contribution of the frontal lobe to the perception of speech.</article-title> <source><italic>J. Neurolinguistics</italic></source> <volume>25</volume> <fpage>328</fpage>&#x02013;<lpage>335</lpage>.<pub-id pub-id-type="doi">10.1016/j.jneuroling.2010.02.003</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>DeWitt</surname> <given-names>I.</given-names></name> <name><surname>Rauschecker</surname> <given-names>J. P.</given-names></name></person-group> (<year>2012</year>). <article-title>Phoneme and word recognition in the auditory ventral stream.</article-title> <source><italic>Proc. Natl. Acad. Sci. U.S.A.</italic></source> <volume>109</volume> <fpage>E505</fpage>&#x02013;<lpage>E514</lpage>.<pub-id pub-id-type="doi">10.1073/pnas.1113427109</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dietrich</surname> <given-names>S.</given-names></name> <name><surname>Hertrich</surname> <given-names>I.</given-names></name> <name><surname>Ackermann</surname> <given-names>H.</given-names></name></person-group> (<year>2010</year>). <article-title>Visual cortex doing an auditory job: enhanced spoken language comprehension in blind subjects.</article-title> <source><italic>Abstract Society for Neuroscience 2010.</italic></source> <comment>Available at: <ext-link ext-link-type="uri" xlink:href="http://www.abstractsonline.com/Plan/ViewAbstract.aspx?"></ext-link> (accessed February 1, 2013</comment>).</citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dietrich</surname> <given-names>S.</given-names></name> <name><surname>Hertrich</surname> <given-names>I.</given-names></name> <name><surname>Ackermann</surname> <given-names>H.</given-names></name></person-group> (<year>2011</year>). <article-title>Why do blind listeners use visual cortex for understanding ultra-fast speech?</article-title> <source><italic>ASA Lay Language Papers, 161st Acoustical Society of America Meeting 2011</italic></source>. <comment>Available at: <ext-link ext-link-type="uri" xlink:href="http://www.acoustics.org/press/161st/Dietrich.html"></ext-link> (accessed February 1, 2013</comment>).</citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dietrich</surname> <given-names>S.</given-names></name> <name><surname>Hertrich</surname> <given-names>I.</given-names></name> <name><surname>Ackermann</surname> <given-names>H.</given-names></name></person-group> (<year>2013</year>). <article-title>Ultra-fast speech comprehension in blind subjects engages primary visual cortex, fusiform gyrus, and pulvinar &#x02013; a functional magnetic resonance imaging (fMRI) study.</article-title> <source><italic>BMC Neurosci.</italic></source> <volume>14</volume>:<issue>74</issue>. <pub-id pub-id-type="doi">10.1186/1471-2202-14-74</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Elbert</surname> <given-names>T.</given-names></name> <name><surname>Sterr</surname> <given-names>A.</given-names></name> <name><surname>Rockstroh</surname> <given-names>B.</given-names></name> <name><surname>Pantev</surname> <given-names>C.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>M. M.</given-names></name> <name><surname>Taub</surname> <given-names>E.</given-names></name></person-group> (<year>2002</year>). <article-title>Expansion of the tonotopic area in the auditory cortex of the blind.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>22</volume> <fpage>9941</fpage>&#x02013;<lpage>9944</lpage>.</citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fiebelkorn</surname> <given-names>I. C.</given-names></name> <name><surname>Foxe</surname> <given-names>J. J.</given-names></name> <name><surname>Butler</surname> <given-names>J. S.</given-names></name> <name><surname>Molholm</surname> <given-names>S.</given-names></name></person-group> (<year>2011</year>). <article-title>Auditory facilitation of visual-target detection persists regardless of retinal eccentricity and despite wide audiovisual misalignments.</article-title> <source><italic>Exp. Brain Res.</italic></source> <volume>213</volume> <fpage>167</fpage>&#x02013;<lpage>174</lpage>.<pub-id pub-id-type="doi">10.1007/s00221-011-2670-7</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Foxe</surname> <given-names>J. J.</given-names></name> <name><surname>Schroeder</surname> <given-names>C. E.</given-names></name></person-group> (<year>2005</year>). <article-title>The case for feedforward multisensory convergence during early cortical processing.</article-title> <source><italic>Neuroreport</italic></source> <volume>16</volume> <fpage>419</fpage>&#x02013;<lpage>423</lpage>.<pub-id pub-id-type="doi">10.1097/00001756-200504040-00001</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Garg</surname> <given-names>A.</given-names></name> <name><surname>Schwartz</surname> <given-names>D.</given-names></name> <name><surname>Stevens</surname> <given-names>A. A.</given-names></name></person-group> (<year>2007</year>). <article-title>Orienting auditory spatial attention engages frontal eye fields and medial occipital cortex in congenitally blind humans.</article-title> <source><italic>Neuropsychologia</italic></source> <volume>45</volume> <fpage>2307</fpage>&#x02013;<lpage>2321</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2007.02.015</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ghitza</surname> <given-names>O.</given-names></name> <name><surname>Greenberg</surname> <given-names>S.</given-names></name></person-group> (<year>2009</year>). <article-title>On the possible role of brain rhythms in speech perception: intelligibility of time-compressed speech with periodic and aperiodic insertions of silence.</article-title> <source><italic>Phonetica</italic></source> <volume>66</volume> <fpage>113</fpage>&#x02013;<lpage>126</lpage>.<pub-id pub-id-type="doi">10.1159/000208934</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Giraud</surname> <given-names>A.-L.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2012</year>). <article-title>Cortical oscillations and speech processing: emerging computational principles and operations.</article-title> <source><italic>Nat. Neurosci.</italic></source> <volume>15</volume> <fpage>511</fpage>&#x02013;<lpage>517</lpage>.<pub-id pub-id-type="doi">10.1038/nn.3063</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gow</surname> <given-names>D. W.</given-names></name></person-group> (<year>2012</year>). <article-title>The cortical organization of lexical knowledge: a dual lexicon model of spoken language processing.</article-title> <source><italic>Brain Lang.</italic></source> <volume>121</volume> <fpage>273</fpage>&#x02013;<lpage>288</lpage>.<pub-id pub-id-type="doi">10.1016/j.bandl.2012.03.005</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Greenberg</surname> <given-names>S.</given-names></name> <name><surname>Carvey</surname> <given-names>H.</given-names></name> <name><surname>Hitchcock</surname> <given-names>L.</given-names></name> <name><surname>Chang</surname> <given-names>S.</given-names></name></person-group> (<year>2003</year>). <article-title>Temporal properties of spontaneous speech &#x02013; a syllable-centric perspective.</article-title> <source><italic>J. Phon.</italic></source> <volume>31</volume> <fpage>465</fpage>&#x02013;<lpage>485</lpage>.<pub-id pub-id-type="doi">10.1016/j.wocn.2003.09.005</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grimaldi</surname> <given-names>M.</given-names></name></person-group> (<year>2012</year>). <article-title>Toward a neural theory of language: old issues and new perspectives.</article-title> <source><italic>J. Neurolinguistics</italic></source> <volume>25</volume> <fpage>304</fpage>&#x02013;<lpage>327</lpage>.<pub-id pub-id-type="doi">10.1016/j.jneuroling.2011.12.002</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haxby</surname> <given-names>J. V.</given-names></name> <name><surname>Grady</surname> <given-names>C. L.</given-names></name> <name><surname>Horwitz</surname> <given-names>B.</given-names></name> <name><surname>Ungerleider</surname> <given-names>L. G.</given-names></name> <name><surname>Mishkin</surname> <given-names>M.</given-names></name> <name><surname>Carson</surname> <given-names>R. E.</given-names></name><etal/></person-group> (<year>1991</year>). <article-title>Dissociation of object and spatial visual processing pathways in human extrastriate cortex.</article-title> <source><italic>Proc. Natl. Acad. Sci. U.S.A.</italic></source> <volume>88</volume> <fpage>1621</fpage>&#x02013;<lpage>1625</lpage>.<pub-id pub-id-type="doi">10.1073/pnas.88.5.1621</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haxby</surname> <given-names>J. V.</given-names></name> <name><surname>Hoffman</surname> <given-names>E. A.</given-names></name> <name><surname>Gobbini</surname> <given-names>M. I.</given-names></name></person-group> (<year>2000</year>). <article-title>The distributed human neural system for face perception.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>4</volume> <fpage>223</fpage>&#x02013;<lpage>233</lpage>.<pub-id pub-id-type="doi">10.1016/S1364-6613(00)01482-0</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>He</surname> <given-names>J.</given-names></name></person-group> (<year>2003</year>). <article-title>Slow oscillation in non-lemniscal auditory thalamus.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>23</volume> <fpage>8281</fpage>&#x02013;<lpage>8290</lpage>.</citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Heron</surname> <given-names>J.</given-names></name> <name><surname>Roach</surname> <given-names>N. W.</given-names></name> <name><surname>Hanson</surname> <given-names>J. V. M.</given-names></name> <name><surname>McGraw</surname> <given-names>P. V.</given-names></name> <name><surname>Whitaker</surname> <given-names>D.</given-names></name></person-group> (<year>2012</year>). <article-title>Audiovisual time perception is spatially specific.</article-title> <source><italic>Exp. Brain Res.</italic></source> <volume>218</volume> <fpage>477</fpage>&#x02013;<lpage>485</lpage>.<pub-id pub-id-type="doi">10.1007/s00221-012-3038-3</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hertrich</surname> <given-names>I.</given-names></name> <name><surname>Dietrich</surname> <given-names>S.</given-names></name> <name><surname>Ackermann</surname> <given-names>H.</given-names></name></person-group> (<year>2011</year>). <article-title>Cross-modal interactions during perception of audiovisual speech and non-speech signals: an fMRI study.</article-title> <source><italic>J. Cogn. Neurosci.</italic></source> <volume>23</volume> <fpage>221</fpage>&#x02013;<lpage>237</lpage>.<pub-id pub-id-type="doi">10.1162/jocn.2010.21421</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hertrich</surname> <given-names>I.</given-names></name> <name><surname>Dietrich</surname> <given-names>S.</given-names></name> <name><surname>Ackermann</surname> <given-names>H.</given-names></name></person-group> (<year>2013</year>). <article-title>Tracking the speech signal &#x02013; time-locked MEG signals during perception of ultra-fast and moderately fast speech in blind and in sighted listeners.</article-title> <source><italic>Brain Lang.</italic></source> <volume>124</volume> <fpage>9</fpage>&#x02013;<lpage>21</lpage>.<pub-id pub-id-type="doi">10.1016/j.bandl.2012.10.006</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hertrich</surname> <given-names>I.</given-names></name> <name><surname>Dietrich</surname> <given-names>S.</given-names></name> <name><surname>Moos</surname> <given-names>A.</given-names></name> <name><surname>Trouvain</surname> <given-names>J.</given-names></name> <name><surname>Ackermann</surname> <given-names>H.</given-names></name></person-group> (<year>2009</year>). <article-title>Enhanced speech perception capabilities in a blind listener are associated with activation of fusiform gyrus and primary visual cortex.</article-title> <source><italic>Neurocase</italic></source> <volume>15</volume> <fpage>163</fpage>&#x02013;<lpage>170</lpage>.<pub-id pub-id-type="doi">10.1080/13554790802709054</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hertrich</surname> <given-names>I.</given-names></name> <name><surname>Dietrich</surname> <given-names>S.</given-names></name> <name><surname>Trouvain</surname> <given-names>J.</given-names></name> <name><surname>Moos</surname> <given-names>A.</given-names></name> <name><surname>Ackermann</surname> <given-names>H.</given-names></name></person-group> (<year>2012</year>). <article-title>Magnetic brain activity phase-locked to the envelope, the syllable onsets, and the fundamental frequency of a perceived speech signal.</article-title> <source><italic>Psychophysiology</italic></source> <volume>49</volume> <fpage>322</fpage>&#x02013;<lpage>334</lpage>.<pub-id pub-id-type="doi">10.1111/j.1469-8986.2011.01314.x</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hertrich</surname> <given-names>I.</given-names></name> <name><surname>Mathiak</surname> <given-names>K.</given-names></name> <name><surname>Lutzenberger</surname> <given-names>W.</given-names></name> <name><surname>Ackermann</surname> <given-names>H.</given-names></name></person-group> (<year>2004</year>). <article-title>Transient and phase-locked evoked magnetic fields in response to periodic acoustic signals.</article-title> <source><italic>Neuroreport</italic></source> <volume>15</volume> <fpage>1687</fpage>&#x02013;<lpage>1690</lpage>.<pub-id pub-id-type="doi">10.1097/01.wnr.0000134930.04561.b2</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hertrich</surname> <given-names>I.</given-names></name> <name><surname>Mathiak</surname> <given-names>K.</given-names></name> <name><surname>Lutzenberger</surname> <given-names>W.</given-names></name> <name><surname>Ackermann</surname> <given-names>H.</given-names></name></person-group> (<year>2009</year>). <article-title>Time course of early audiovisual interactions during speech and non-speech central auditory processing: a magnetoencephalography study.</article-title> <source><italic>J. Cogn. Neurosci.</italic></source> <volume>21</volume> <fpage>259</fpage>&#x02013;<lpage>274</lpage>.<pub-id pub-id-type="doi">10.1162/jocn.2008.21019</pub-id></citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hertrich</surname> <given-names>I.</given-names></name> <name><surname>Mathiak</surname> <given-names>K.</given-names></name> <name><surname>Lutzenberger</surname> <given-names>W.</given-names></name> <name><surname>Menning</surname> <given-names>H.</given-names></name> <name><surname>Ackermann</surname> <given-names>H.</given-names></name></person-group> (<year>2007</year>). <article-title>Sequential audiovisual interactions during speech perception: a whole-head MEG study.</article-title> <source><italic>Neuropsychologia</italic></source> <volume>45</volume> <fpage>1342</fpage>&#x02013;<lpage>1354</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2006.09.019</pub-id></citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hickok</surname> <given-names>G.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2000</year>). <article-title>Towards a functional neuroanatomy of speech perception.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>4</volume> <fpage>131</fpage>&#x02013;<lpage>138</lpage>.<pub-id pub-id-type="doi">10.1016/S1364-6613(00)01463-7</pub-id></citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hickok</surname> <given-names>G.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2007</year>). <article-title>The cortical organization of speech processing.</article-title> <source><italic>Nat. Rev. Neurosci.</italic></source> <volume>8</volume> <fpage>393</fpage>&#x02013;<lpage>402</lpage>.<pub-id pub-id-type="doi">10.1038/nrn2113</pub-id></citation></ref>
<ref id="B51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huetz</surname> <given-names>C.</given-names></name> <name><surname>Gour&#x000E9;vitch</surname> <given-names>B.</given-names></name> <name><surname>Edeline</surname> <given-names>J.-M.</given-names></name></person-group> (<year>2011</year>). <article-title>Neural codes in the thalamocortical auditory system: from artificial stimuli to communication sounds.</article-title> <source><italic>Hear. Res.</italic></source> <volume>271</volume> <fpage>147</fpage>&#x02013;<lpage>158</lpage>.<pub-id pub-id-type="doi">10.1016/j.heares.2010.01.010</pub-id></citation></ref>
<ref id="B52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>J&#x000FC;rgens</surname> <given-names>U.</given-names></name></person-group> (<year>1984</year>). <article-title>The efferent and afferent connections of the supplementary motor area.</article-title> <source><italic>Brain Res.</italic></source> <volume>300</volume> <fpage>63</fpage>&#x02013;<lpage>81</lpage>.<pub-id pub-id-type="doi">10.1016/0006-8993(84)91341-6</pub-id></citation></ref>
<ref id="B53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kayser</surname> <given-names>C.</given-names></name> <name><surname>Petkov</surname> <given-names>C. I.</given-names></name> <name><surname>Logothetis</surname> <given-names>N. K.</given-names></name></person-group> (<year>2008</year>). <article-title>Visual modulation of neurons in auditory cortex.</article-title> <source><italic>Cereb. Cortex</italic></source> <volume>18</volume> <fpage>1560</fpage>&#x02013;<lpage>1574</lpage>.<pub-id pub-id-type="doi">10.1093/cercor/bhm187</pub-id></citation></ref>
<ref id="B54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Keetels</surname> <given-names>M.</given-names></name> <name><surname>Vroomen</surname> <given-names>J.</given-names></name></person-group> (<year>2011</year>). <article-title>Sound affects the speed of visual processing.</article-title> <source><italic>J. Exp. Psychol. Hum. Percept. Perform.</italic></source> <volume>37</volume> <fpage>699</fpage>&#x02013;<lpage>708</lpage>.<pub-id pub-id-type="doi">10.1037/a0020564</pub-id></citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Keysers</surname> <given-names>C.</given-names></name> <name><surname>Fadiga</surname> <given-names>L.</given-names></name></person-group> (<year>2008</year>). <article-title>The mirror neuron system: new frontiers.</article-title> <source><italic>Soc. Neurosci.</italic></source> <volume>3</volume> <fpage>193</fpage>&#x02013;<lpage>198</lpage>.<pub-id pub-id-type="doi">10.1080/17470910802408513</pub-id></citation></ref>
<ref id="B56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kotz</surname> <given-names>S. A.</given-names></name> <name><surname>D&#x02019;Ausilio</surname> <given-names>A.</given-names></name> <name><surname>Raettig</surname> <given-names>T.</given-names></name> <name><surname>Begliomini</surname> <given-names>C.</given-names></name> <name><surname>Craighero</surname> <given-names>L.</given-names></name> <name><surname>Fabbri-Destro</surname> <given-names>M.</given-names></name><etal/></person-group> (<year>2010</year>). <article-title>Lexicality drives audio-motor transformations in Broca&#x02019;s area.</article-title> <source><italic>Brain Lang.</italic></source> <volume>112</volume> <fpage>3</fpage>&#x02013;<lpage>11</lpage>.<pub-id pub-id-type="doi">10.1016/j.bandl.2009.07.008</pub-id></citation></ref>
<ref id="B57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kotz</surname> <given-names>S. A.</given-names></name> <name><surname>Schwartze</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Cortical speech processing unplugged: a timely subcortico-cortical framework.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>14</volume> <fpage>392</fpage>&#x02013;<lpage>399</lpage>.<pub-id pub-id-type="doi">10.1016/j.tics.2010.06.005</pub-id></citation></ref>
<ref id="B58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kotz</surname> <given-names>S. A.</given-names></name> <name><surname>Schwartze</surname> <given-names>M.</given-names></name> <name><surname>Schmidt-Kassow</surname> <given-names>M.</given-names></name></person-group> (<year>2009</year>). <article-title>Non-motor basal ganglia functions: a review and proposal for a model of sensory predictability in auditory language perception.</article-title> <source><italic>Cortex</italic></source> <volume>45</volume> <fpage>982</fpage>&#x02013;<lpage>990</lpage>.<pub-id pub-id-type="doi">10.1016/j.cortex.2009.02.010</pub-id></citation></ref>
<ref id="B59"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kupers</surname> <given-names>R.</given-names></name> <name><surname>Beaulieu-Lefebvre</surname> <given-names>M.</given-names></name> <name><surname>Schneider</surname> <given-names>F. C.</given-names></name> <name><surname>Kassuba</surname> <given-names>T.</given-names></name> <name><surname>Paulson</surname> <given-names>O. B.</given-names></name> <name><surname>Siebner</surname> <given-names>H. R.</given-names></name><etal/></person-group> (<year>2011</year>). <article-title>Neural correlates of olfactory processing in congenital blindness.</article-title> <source><italic>Neuropsychologia</italic></source> <volume>49</volume> <fpage>2037</fpage>&#x02013;<lpage>2044</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2011.03.033</pub-id></citation></ref>
<ref id="B60"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lebib</surname> <given-names>R.</given-names></name> <name><surname>Papo</surname> <given-names>D.</given-names></name> <name><surname>De Bode</surname> <given-names>S.</given-names></name> <name><surname>Baudonniere</surname> <given-names>P. M.</given-names></name></person-group> (<year>2003</year>). <article-title>Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.</article-title> <source><italic>Neurosci. Lett.</italic></source> <volume>341</volume> <fpage>185</fpage>&#x02013;<lpage>188</lpage>.<pub-id pub-id-type="doi">10.1016/S0304-3940(03)00131-9</pub-id></citation></ref>
<ref id="B61"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Leh</surname> <given-names>S. E.</given-names></name> <name><surname>Chakravarty</surname> <given-names>M. M.</given-names></name> <name><surname>Ptito</surname> <given-names>A.</given-names></name></person-group> (<year>2008</year>). <article-title>The connectivity of the human pulvinar: a diffusion tensor imaging tractography study.</article-title> <source><italic>Int. J. Biomed. Imaging</italic></source> <volume>2008</volume>:<issue>789539</issue>. <pub-id pub-id-type="doi">10.1155/2008/789539</pub-id></citation></ref>
<ref id="B62"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Leo</surname> <given-names>F.</given-names></name> <name><surname>Bertini</surname> <given-names>C.</given-names></name> <name><surname>di Pellegrino</surname> <given-names>G.</given-names></name> <name><surname>L&#x000E0;davas</surname> <given-names>E.</given-names></name></person-group> (<year>2008</year>). <article-title>Multisensory integration for orienting responses in humans requires the activation of the superior colliculus.</article-title> <source><italic>Exp. Brain Res.</italic></source> <volume>186</volume> <fpage>67</fpage>&#x02013;<lpage>77</lpage>.<pub-id pub-id-type="doi">10.1007/s00221-007-1204-9</pub-id></citation></ref>
<ref id="B63"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lewald</surname> <given-names>J.</given-names></name> <name><surname>Tegenthoff</surname> <given-names>M.</given-names></name> <name><surname>Peters</surname> <given-names>S.</given-names></name> <name><surname>Hausmann</surname> <given-names>M.</given-names></name></person-group> (<year>2012</year>). <article-title>Passive auditory stimulation improves vision in hemianopia.</article-title> <source><italic>PLoS ONE</italic></source> <volume>7</volume>:<issue>e31603</issue>. <pub-id pub-id-type="doi">10.1371/journal.pone.0031603</pub-id></citation></ref>
<ref id="B64"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liddell</surname> <given-names>B. J.</given-names></name> <name><surname>Brown</surname> <given-names>K. J.</given-names></name> <name><surname>Kemp</surname> <given-names>A. H.</given-names></name> <name><surname>Barton</surname> <given-names>M. J.</given-names></name> <name><surname>Das</surname> <given-names>P.</given-names></name> <name><surname>Peduto</surname> <given-names>A.</given-names></name><etal/></person-group> (<year>2005</year>). <article-title>A direct brainstem&#x02013;amygdala&#x02013;cortical &#x0201C;alarm&#x0201D; system for subliminal signals of fear.</article-title> <source><italic>Neuroimage</italic></source> <volume>24</volume> <fpage>235</fpage>&#x02013;<lpage>243</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.08.016</pub-id></citation></ref>
<ref id="B65"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>H.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2007</year>). <article-title>Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex.</article-title> <source><italic>Neuron</italic></source> <volume>54</volume> <fpage>1001</fpage>&#x02013;<lpage>1010</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuron.2007.06.004</pub-id></citation></ref>
<ref id="B66"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>H.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2012</year>). <article-title>Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>3</volume>:<issue>170</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2012.00170</pub-id></citation></ref>
<ref id="B67"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ma</surname> <given-names>W. J.</given-names></name> <name><surname>Zhou</surname> <given-names>X.</given-names></name> <name><surname>Ross</surname> <given-names>L. A.</given-names></name> <name><surname>Foxe</surname> <given-names>J. J.</given-names></name> <name><surname>Parra</surname> <given-names>L. C.</given-names></name></person-group> (<year>2009</year>). <article-title>Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.</article-title> <source><italic>PLoS ONE</italic></source> <volume>4</volume>:<issue>e4638</issue>. <pub-id pub-id-type="doi">10.1371/journal.pone.0004638</pub-id></citation></ref>
<ref id="B68"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Macaluso</surname> <given-names>E.</given-names></name> <name><surname>Driver</surname> <given-names>J.</given-names></name></person-group> (<year>2005</year>). <article-title>Multisensory spatial interactions: a window onto functional integration in the human brain.</article-title> <source><italic>Trends Neurosci.</italic></source> <volume>28</volume> <fpage>264</fpage>&#x02013;<lpage>271</lpage>.<pub-id pub-id-type="doi">10.1016/j.tins.2005.03.008</pub-id></citation></ref>
<ref id="B69"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maravita</surname> <given-names>A.</given-names></name> <name><surname>Bolognini</surname> <given-names>N.</given-names></name> <name><surname>Bricolo</surname> <given-names>E.</given-names></name> <name><surname>Marzi</surname> <given-names>C. A.</given-names></name> <name><surname>Savazzi</surname> <given-names>S.</given-names></name></person-group> (<year>2008</year>). <article-title>Is audiovisual integration subserved by the superior colliculus in humans?</article-title> <source><italic>Neuroreport</italic></source> <volume>19</volume> <fpage>271</fpage>&#x02013;<lpage>275</lpage>.<pub-id pub-id-type="doi">10.1097/WNR.0b013e3282f4f04e</pub-id></citation></ref>
<ref id="B70"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Martin</surname> <given-names>E.</given-names></name></person-group> (<year>2002</year>). <article-title>&#x0201C;Imaging of brain function during early human development,&#x0201D; in</article-title> <source><italic>MRI of the Neonatal Brain</italic></source> <role>ed</role>. <person-group person-group-type="editor"><name><surname>Rutherford</surname> <given-names>M.</given-names></name></person-group> <volume>Chapter 18</volume> <comment>E-book. Available at: <ext-link ext-link-type="uri" xlink:href="http://www.mrineonatalbrain.com/ch04-18.php">http://www.mrineonatalbrain.com/ch04-18.php</ext-link></comment></citation></ref>
<ref id="B71"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McCandliss</surname> <given-names>B. D.</given-names></name> <name><surname>Cohen</surname> <given-names>L.</given-names></name> <name><surname>Dehaene</surname> <given-names>S.</given-names></name></person-group> (<year>2003</year>). <article-title>The visual word form area: expertise for reading in the fusiform gyrus.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>7</volume> <fpage>293</fpage>&#x02013;<lpage>299</lpage>.<pub-id pub-id-type="doi">10.1016/S1364-6613(03)00134-7</pub-id></citation></ref>
<ref id="B72"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meltzoff</surname> <given-names>A. N.</given-names></name> <name><surname>Moore</surname> <given-names>M. K.</given-names></name></person-group> (<year>1997</year>). <article-title>Explaining facial imitation: a theoretical model.</article-title> <source><italic>Early Dev. Parenting</italic></source> <volume>6</volume> <fpage>179</fpage>&#x02013;<lpage>192</lpage>.<pub-id pub-id-type="doi">10.1002/(SICI)1099-0917(199709/12)6:3/4&#x0003C;179::AID-EDP157&#x0003E;3.0.CO;2-R</pub-id></citation></ref>
<ref id="B73"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miki</surname> <given-names>K.</given-names></name> <name><surname>Watanabe</surname> <given-names>S.</given-names></name> <name><surname>Kakigi</surname> <given-names>R.</given-names></name></person-group> (<year>2004</year>). <article-title>Interaction between auditory and visual stimulus relating to the vowel sounds in the auditory cortex in humans: a magnetoencephalographic study.</article-title> <source><italic>Neurosci. Lett.</italic></source> <volume>357</volume> <fpage>199</fpage>&#x02013;<lpage>202</lpage>.<pub-id pub-id-type="doi">10.1016/j.neulet.2003.12.082</pub-id></citation></ref>
<ref id="B74"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miller</surname> <given-names>J.</given-names></name></person-group> (<year>1982</year>). <article-title>Divided attention: evidence for coactivation with redundant signals.</article-title> <source><italic>Cogn. Psychol.</italic></source> <volume>14</volume> <fpage>247</fpage>&#x02013;<lpage>279</lpage>.<pub-id pub-id-type="doi">10.1016/0010-0285(82)90010-X</pub-id></citation></ref>
<ref id="B75"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mishra</surname> <given-names>J.</given-names></name> <name><surname>Martinez</surname> <given-names>A.</given-names></name> <name><surname>Sejnowski</surname> <given-names>T. J.</given-names></name> <name><surname>Hillyard</surname> <given-names>S. A.</given-names></name></person-group> (<year>2007</year>). <article-title>Early cross-modal interactions in auditory and visual cortex underlie a sound-induced visual illusion.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>27</volume> <fpage>4120</fpage>&#x02013;<lpage>4131</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4912-06.2007</pub-id></citation></ref>
<ref id="B76"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mohamed</surname> <given-names>M. A.</given-names></name> <name><surname>Yousem</surname> <given-names>D. M.</given-names></name> <name><surname>Tekes</surname> <given-names>A.</given-names></name> <name><surname>Browner</surname> <given-names>N. M.</given-names></name> <name><surname>Calhoun</surname> <given-names>V. D.</given-names></name></person-group> (<year>2003</year>). <article-title>Timing of cortical activation: a latency-resolved event-related functional MR imaging study.</article-title> <source><italic>Am. J. Neuroradiol.</italic></source> <volume>24</volume> <fpage>1967</fpage>&#x02013;<lpage>1974</lpage>.</citation></ref>
<ref id="B77"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Molenberghs</surname> <given-names>P.</given-names></name> <name><surname>Cunnington</surname> <given-names>R.</given-names></name> <name><surname>Mattingley</surname> <given-names>J. B.</given-names></name></person-group> (<year>2012</year>). <article-title>Brain regions with mirror properties: a meta-analysis of 125 human fMRI studies.</article-title> <source><italic>Neurosci. Biobehav. Rev.</italic></source> <volume>36</volume> <fpage>341</fpage>&#x02013;<lpage>349</lpage>.<pub-id pub-id-type="doi">10.1016/j.neubiorev.2011.07.004</pub-id></citation></ref>
<ref id="B78"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Molholm</surname> <given-names>S.</given-names></name> <name><surname>Ritter</surname> <given-names>W.</given-names></name> <name><surname>Murray</surname> <given-names>M. M.</given-names></name> <name><surname>Javitt</surname> <given-names>D. C.</given-names></name> <name><surname>Schroeder</surname> <given-names>C. E.</given-names></name> <name><surname>Foxe</surname> <given-names>J. J.</given-names></name></person-group> (<year>2002</year>). <article-title>Multisensory auditory-visual interactions during early sensory processing in humans: a high-density electrical mapping study.</article-title> <source><italic>Cogn. Brain Res.</italic></source> <volume>14</volume> <fpage>115</fpage>&#x02013;<lpage>128</lpage>.<pub-id pub-id-type="doi">10.1016/S0926-6410(02)00066-6</pub-id></citation></ref>
<ref id="B79"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Moos</surname> <given-names>A.</given-names></name> <name><surname>Trouvain</surname> <given-names>J.</given-names></name></person-group> (<year>2007</year>). <article-title>&#x0201C;Comprehension of ultra-fast speech &#x02013; blind vs. &#x0201C;normally hearing&#x0201D; persons,&#x0201D; in</article-title> <source><italic>Proceedings of the Sixteenth International Congress of Phonetic Sciences</italic></source> <role>eds</role> <person-group person-group-type="editor"><name><surname>Trouvain</surname> <given-names>J.</given-names></name> <name><surname>Barry</surname> <given-names>W. J.</given-names></name></person-group> (<publisher-loc>Saarbr&#x000FC;cken</publisher-loc>: <publisher-name>University of Saarbr&#x000FC;cken</publisher-name>) <fpage>677</fpage>&#x02013;<lpage>680</lpage>. <comment>Available at: <ext-link ext-link-type="uri" xlink:href="http://www.icphs2007.de/conference/Papers/1186/1186.pdf">http://www.icphs2007.de/conference/Papers/1186/1186.pdf</ext-link></comment></citation></ref>
<ref id="B80"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Noppeney</surname> <given-names>U.</given-names></name></person-group> (<year>2007</year>). <article-title>The effects of visual deprivation on functional and structural organization of the human brain.</article-title> <source><italic>Neurosci. Biobehav. Rev.</italic></source> <volume>31</volume> <fpage>1169</fpage>&#x02013;<lpage>1180</lpage>.<pub-id pub-id-type="doi">10.1016/j.neubiorev.2007.04.012</pub-id></citation></ref>
<ref id="B81"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nourski</surname> <given-names>K. V.</given-names></name> <name><surname>Reale</surname> <given-names>R. A.</given-names></name> <name><surname>Oya</surname> <given-names>H.</given-names></name> <name><surname>Kawasaki</surname> <given-names>H.</given-names></name> <name><surname>Kovach</surname> <given-names>C. K.</given-names></name> <name><surname>Chen</surname> <given-names>H.</given-names></name><etal/></person-group> (<year>2009</year>). <article-title>Temporal envelope of time-compressed speech represented in the human auditory cortex.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>29</volume> <fpage>15564</fpage>&#x02013;<lpage>15574</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.3065-09.2009</pub-id></citation></ref>
<ref id="B82"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Passamonti</surname> <given-names>C.</given-names></name> <name><surname>Bertini</surname> <given-names>C.</given-names></name> <name><surname>L&#x000E0;davas</surname> <given-names>E.</given-names></name></person-group> (<year>2009</year>). <article-title>Audio-visual stimulation improves oculomotor patterns in patients with hemianopia.</article-title> <source><italic>Neuropsychologia</italic></source> <volume>47</volume> <fpage>546</fpage>&#x02013;<lpage>555</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2008.10.008</pub-id></citation></ref>
<ref id="B83"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Patel</surname> <given-names>A. D.</given-names></name></person-group> (<year>2003</year>). <article-title>Language, music, syntax, and the brain.</article-title> <source><italic>Nat. Neurosci.</italic></source> <volume>6</volume> <fpage>674</fpage>&#x02013;<lpage>681</lpage>.<pub-id pub-id-type="doi">10.1038/nn1082</pub-id></citation></ref>
<ref id="B84"><citation citation-type="book"><person-group person-group-type="author"><name><surname>P&#x000E9;rez-Pereira</surname> <given-names>M.</given-names></name> <name><surname>Conti-Ramsden</surname> <given-names>G.</given-names></name></person-group> (<year>1999</year>). <source><italic>Language Development and Social Interaction in Blind Children</italic></source>. <publisher-loc>Hove, UK</publisher-loc>: <publisher-name>Psychology Press Ltd</publisher-name>.</citation></ref>
<ref id="B85"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Picard</surname> <given-names>N.</given-names></name> <name><surname>Strick</surname> <given-names>P. L.</given-names></name></person-group> (<year>2003</year>). <article-title>Activation of the supplementary motor area (SMA) during performance of visually guided movements.</article-title> <source><italic>Cereb. Cortex</italic></source> <volume>13</volume> <fpage>977</fpage>&#x02013;<lpage>986</lpage>.<pub-id pub-id-type="doi">10.1093/cercor/13.9.977</pub-id></citation></ref>
<ref id="B86"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2003</year>). <article-title>The analysis of speech in different temporal integration windows: cerebral lateralization as &#x02018;asymmetric sampling in time&#x02019;.</article-title> <source><italic>Speech Commun.</italic></source> <volume>41</volume> <fpage>245</fpage>&#x02013;<lpage>255</lpage>.<pub-id pub-id-type="doi">10.1016/S0167-6393(02)00107-3</pub-id></citation></ref>
<ref id="B87"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pulverm&#x000FC;ller</surname> <given-names>F.</given-names></name> <name><surname>Huss</surname> <given-names>M.</given-names></name> <name><surname>Kherif</surname> <given-names>F.</given-names></name> <name><surname>Moscoso del Prado Martin</surname> <given-names>F.</given-names></name> <name><surname>Hauk</surname> <given-names>O.</given-names></name> <name><surname>Shtyrov</surname> <given-names>Y.</given-names></name></person-group> (<year>2006</year>). <article-title>Motor cortex maps articulatory features of speech sounds.</article-title> <source><italic>Proc. Natl. Acad. Sci. U.S.A.</italic></source> <volume>103</volume> <fpage>7865</fpage>&#x02013;<lpage>7870</lpage>.<pub-id pub-id-type="doi">10.1073/pnas.0509989103</pub-id></citation></ref>
<ref id="B88"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Purushothaman</surname> <given-names>G.</given-names></name> <name><surname>Marion</surname> <given-names>R.</given-names></name> <name><surname>Li</surname> <given-names>K.</given-names></name> <name><surname>Casagrande</surname> <given-names>V. A.</given-names></name></person-group> (<year>2012</year>). <article-title>Gating and control of primary visual cortex by pulvinar.</article-title> <source><italic>Nat. Neurosci.</italic></source> <volume>15</volume> <fpage>905</fpage>&#x02013;<lpage>912</lpage>.<pub-id pub-id-type="doi">10.1038/nn.3106</pub-id></citation></ref>
<ref id="B89"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ricciardi</surname> <given-names>E.</given-names></name> <name><surname>Pietrini</surname> <given-names>P.</given-names></name></person-group> (<year>2011</year>). <article-title>New light from the dark: what blindness can teach us about brain function.</article-title> <source><italic>Curr. Opin. Neurol.</italic></source> <volume>24</volume> <fpage>357</fpage>&#x02013;<lpage>363</lpage>.<pub-id pub-id-type="doi">10.1097/WCO.0b013e328348bdbf</pub-id></citation></ref>
<ref id="B90"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Riecker</surname> <given-names>A.</given-names></name> <name><surname>Mathiak</surname> <given-names>K.</given-names></name> <name><surname>Wildgruber</surname> <given-names>D.</given-names></name> <name><surname>Erb</surname> <given-names>M.</given-names></name> <name><surname>Hertrich</surname> <given-names>I.</given-names></name> <name><surname>Grodd</surname> <given-names>W.</given-names></name><etal/></person-group> (<year>2005</year>). <article-title>fMRI reveals two distinct cerebral networks subserving speech motor control.</article-title> <source><italic>Neurology</italic></source> <volume>64</volume> <fpage>700</fpage>&#x02013;<lpage>706</lpage>.<pub-id pub-id-type="doi">10.1212/01.WNL.0000152156.90779.89</pub-id></citation></ref>
<ref id="B91"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>R&#x000F6;der</surname> <given-names>B.</given-names></name> <name><surname>Kramer</surname> <given-names>U. M.</given-names></name> <name><surname>Lange</surname> <given-names>K.</given-names></name></person-group> (<year>2007</year>). <article-title>Congenitally blind humans use different stimulus selection strategies in hearing: an ERP study of spatial and temporal attention.</article-title> <source><italic>Restor. Neurol. Neurosci.</italic></source> <volume>25</volume> <fpage>311</fpage>&#x02013;<lpage>322</lpage>.</citation></ref>
<ref id="B92"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>R&#x000F6;der</surname> <given-names>B.</given-names></name> <name><surname>Stock</surname> <given-names>O.</given-names></name> <name><surname>Bien</surname> <given-names>S.</given-names></name> <name><surname>Neville</surname> <given-names>H.</given-names></name> <name><surname>R&#x000F6;sler</surname> <given-names>F.</given-names></name></person-group> (<year>2002</year>). <article-title>Speech processing activates visual cortex in congenitally blind humans.</article-title> <source><italic>Eur. J. Neurosci.</italic></source> <volume>16</volume> <fpage>930</fpage>&#x02013;<lpage>936</lpage>.<pub-id pub-id-type="doi">10.1046/j.1460-9568.2002.02147.x</pub-id></citation></ref>
<ref id="B93"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Romani</surname> <given-names>C.</given-names></name> <name><surname>Galluzzi</surname> <given-names>C.</given-names></name> <name><surname>Olson</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Phonological&#x02013;lexical activation: a lexical component or an output buffer? Evidence from aphasic errors.</article-title> <source><italic>Cortex</italic></source> <volume>47</volume> <fpage>217</fpage>&#x02013;<lpage>235</lpage>.<pub-id pub-id-type="doi">10.1016/j.cortex.2009.11.004</pub-id></citation></ref>
<ref id="B94"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shams</surname> <given-names>L.</given-names></name> <name><surname>Kamitani</surname> <given-names>Y.</given-names></name> <name><surname>Shimojo</surname> <given-names>S.</given-names></name></person-group> (<year>2000</year>). <article-title>Illusions: what you see is what you hear.</article-title> <source><italic>Nature</italic></source> <volume>408</volume> <fpage>788</fpage>&#x02013;<lpage>788</lpage>.<pub-id pub-id-type="doi">10.1038/35048669</pub-id></citation></ref>
<ref id="B95"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shams</surname> <given-names>L.</given-names></name> <name><surname>Kim</surname> <given-names>R.</given-names></name></person-group> (<year>2010</year>). <article-title>Crossmodal influences on visual perception.</article-title> <source><italic>Phys. Life Rev.</italic></source> <volume>7</volume> <fpage>269</fpage>&#x02013;<lpage>284</lpage>.<pub-id pub-id-type="doi">10.1016/j.plrev.2010.04.006</pub-id></citation></ref>
<ref id="B96"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Spence</surname> <given-names>M. J.</given-names></name> <name><surname>Decasper</surname> <given-names>A. J.</given-names></name></person-group> (<year>1987</year>). <article-title>Prenatal experience with low-frequency maternal-voice sounds influence neonatal perception of maternal voice samples.</article-title> <source><italic>Infant Behav. Dev.</italic></source> <volume>10</volume> <fpage>133</fpage>&#x02013;<lpage>142</lpage>.<pub-id pub-id-type="doi">10.1016/0163-6383(87)90028-2</pub-id></citation></ref>
<ref id="B97"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stevens</surname> <given-names>A. A.</given-names></name> <name><surname>Snodgrass</surname> <given-names>M.</given-names></name> <name><surname>Schwartz</surname> <given-names>D.</given-names></name> <name><surname>Weaver</surname> <given-names>K.</given-names></name></person-group> (<year>2007</year>). <article-title>Preparatory activity in occipital cortex in early blind humans predicts auditory perceptual performance.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>27</volume> <fpage>10734</fpage>&#x02013;<lpage>10741</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.1669-07.2007</pub-id></citation></ref>
<ref id="B98"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stevens</surname> <given-names>A. A.</given-names></name> <name><surname>Weaver</surname> <given-names>K.</given-names></name></person-group> (<year>2005</year>). <article-title>Auditory perceptual consolidation in early-onset blindness.</article-title> <source><italic>Neuropsychologia</italic></source> <volume>43</volume> <fpage>1901</fpage>&#x02013;<lpage>1910</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2005.03.007</pub-id></citation></ref>
<ref id="B99"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stevens</surname> <given-names>A. A.</given-names></name> <name><surname>Weaver</surname> <given-names>K. E.</given-names></name></person-group> (<year>2009</year>). <article-title>Functional characteristics of auditory cortex in the blind.</article-title> <source><italic>Behav. Brain Res.</italic></source> <volume>196</volume> <fpage>134</fpage>&#x02013;<lpage>138</lpage>.<pub-id pub-id-type="doi">10.1016/j.bbr.2008.07.041</pub-id></citation></ref>
<ref id="B100"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Streri</surname> <given-names>A.</given-names></name> <name><surname>Coulon</surname> <given-names>M.</given-names></name> <name><surname>Guella&#x000EF;</surname> <given-names>B.</given-names></name></person-group> (<year>2013</year>). <article-title>The foundations of social cognition: studies on face/voice integration in newborn infants.</article-title> <source><italic>Int. J. Behav. Dev.</italic></source> <volume>37</volume> <fpage>79</fpage>&#x02013;<lpage>83</lpage>.<pub-id pub-id-type="doi">10.1177/0165025412465361</pub-id></citation></ref>
<ref id="B101"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Striem-Amit</surname> <given-names>E.</given-names></name> <name><surname>Guendelman</surname> <given-names>M.</given-names></name> <name><surname>Amedi</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>&#x0201C;Visual&#x0201D; acuity of the congenitally blind using visual-to-auditory sensory substitution.</article-title> <source><italic>PLoS ONE</italic></source> <volume>7</volume>:<issue>e33136</issue>.<pub-id pub-id-type="doi"> 10.1371/journal.pone.0033136</pub-id></citation></ref>
<ref id="B102"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sumby</surname> <given-names>W. H.</given-names></name> <name><surname>Pollack</surname> <given-names>I.</given-names></name></person-group> (<year>1954</year>). <article-title>Visual contribution to speech intelligibility in noise.</article-title> <source><italic>J. Acoust. Soc. Am.</italic></source> <volume>26</volume> <fpage>212</fpage>&#x02013;<lpage>215</lpage>.<pub-id pub-id-type="doi">10.1121/1.1907309</pub-id></citation></ref>
<ref id="B103"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tanji</surname> <given-names>J.</given-names></name></person-group> (<year>1994</year>). <article-title>The supplementary motor area in the cerebral cortex.</article-title> <source><italic>Neurosci. Res.</italic></source> <volume>19</volume> <fpage>251</fpage>&#x02013;<lpage>268</lpage>.<pub-id pub-id-type="doi">10.1016/0168-0102(94)90038-8</pub-id></citation></ref>
<ref id="B104"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vagharchakian</surname> <given-names>L.</given-names></name> <name><surname>Dehaene-Lambertz</surname> <given-names>G.</given-names></name> <name><surname>Pallier</surname> <given-names>C.</given-names></name> <name><surname>Dehaene</surname> <given-names>S.</given-names></name></person-group> (<year>2012</year>). <article-title>A temporal bottleneck in the language comprehension network.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>32</volume> <fpage>9089</fpage>&#x02013;<lpage>9102</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.5685-11.2012</pub-id></citation></ref>
<ref id="B105"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Wassenhove</surname> <given-names>V.</given-names></name> <name><surname>Grant</surname> <given-names>K. W.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2005</year>). <article-title>Visual speech speeds up the neural processing of auditory speech</article-title>. <source><italic>Proc. Natl. Acad. Sci. U.S.A. </italic></source> <volume>102</volume> <fpage>1181</fpage>&#x02013;<lpage>1186</lpage>.<pub-id pub-id-type="doi">10.1073/pnas.0408949102</pub-id></citation></ref>
<ref id="B106"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Wassenhove</surname> <given-names>V.</given-names></name> <name><surname>Grant</surname> <given-names>K. W.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2007</year>). <article-title>Temporal window of integration in auditory-visual speech perception.</article-title> <source><italic>Neuropsychologia</italic></source> <volume>45</volume> <fpage>598</fpage>&#x02013;<lpage>607</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2006.01.001</pub-id></citation></ref>
<ref id="B107"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vigneau</surname> <given-names>M.</given-names></name> <name><surname>Beaucousin</surname> <given-names>V.</given-names></name> <name><surname>Herve</surname> <given-names>P. Y.</given-names></name> <name><surname>Duffau</surname> <given-names>H.</given-names></name> <name><surname>Crivello</surname> <given-names>F.</given-names></name> <name><surname>Houde</surname> <given-names>O.</given-names></name><etal/></person-group> (<year>2006</year>). <article-title>Meta-analyzing left hemisphere language areas: phonology, semantics, and sentence processing.</article-title> <source><italic>Neuroimage</italic></source> <volume>30</volume> <fpage>1414</fpage>&#x02013;<lpage>1432</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuroimage.2005.11.002</pub-id></citation></ref>
<ref id="B108"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wozny</surname> <given-names>D. R.</given-names></name> <name><surname>Shams</surname> <given-names>L.</given-names></name></person-group> (<year>2011</year>). <article-title>Recalibration of auditory space following milliseconds of cross-modal discrepancy.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>31</volume> <fpage>4607</fpage>&#x02013;<lpage>4612</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.6079-10.2011</pub-id></citation></ref>
</ref-list>
</back>
</article>