<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2014.00132</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Review Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>How learning to abstract shapes neural sound representations</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Ley</surname> <given-names>Anke</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/114163"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Vroomen</surname> <given-names>Jean</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/3361"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Formisano</surname> <given-names>Elia</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/8856"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Medical Psychology and Neuropsychology, Tilburg School of Social and Behavioral Sciences, Tilburg University</institution> <country>Tilburg, Netherlands</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University</institution> <country>Maastricht, Netherlands</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Einat Liebenthal, Medical College of Wisconsin, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Rajeev D. S. Raizada, Cornell University, USA; Andre Brechmann, Leibniz Institute for Neurobiologie, Germany</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Elia Formisano, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, PO Box 616, 6200 MD Maastricht, Netherlands e-mail: <email>e.formisano&#x00040;maastrichtuniversity.nl</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Neuroscience.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>03</day>
<month>06</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>8</volume>
<elocation-id>132</elocation-id>
<history>
<date date-type="received">
<day>02</day>
<month>03</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>14</day>
<month>05</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2014 Ley, Vroomen and Formisano.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>The transformation of acoustic signals into abstract perceptual representations is the essence of the efficient and goal-directed neural processing of sounds in complex natural environments. While the human and animal auditory system is perfectly equipped to process the spectrotemporal sound features, adequate sound identification and categorization require neural sound representations that are invariant to irrelevant stimulus parameters. Crucially, what is relevant and irrelevant is not necessarily intrinsic to the physical stimulus structure but needs to be learned over time, often through integration of information from other senses. This review discusses the main principles underlying categorical sound perception with a special focus on the role of learning and neural plasticity. We examine the role of different neural structures along the auditory processing pathway in the formation of abstract sound representations with respect to hierarchical as well as dynamic and distributed processing models. Whereas most fMRI studies on categorical sound processing employed speech sounds, the emphasis of the current review lies on the contribution of empirical studies using natural or artificial sounds that enable separating acoustic and perceptual processing levels and avoid interference with existing category representations. Finally, we discuss the opportunities of modern analysis techniques such as multivariate pattern analysis (MVPA) in studying categorical sound representations. With their increased sensitivity to distributed activation changes&#x02014;even in the absence of changes in overall signal level&#x02014;these analysis techniques provide a promising tool to reveal the neural underpinnings of perceptually invariant sound representations.</p></abstract>
<kwd-group>
<kwd>auditory perception</kwd>
<kwd>perceptual categorization</kwd>
<kwd>learning</kwd>
<kwd>plasticity</kwd>
<kwd>MVPA</kwd>
</kwd-group>
<counts>
<fig-count count="3"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="118"/>
<page-count count="11"/>
<word-count count="9124"/>
</counts>
</article-meta>
</front>
<body>
<sec>
<title>Sound perception&#x02014;more than time-frequency analysis</title>
<p>Despite major advances in recent years in unraveling the functional organization principles of the auditory system, the neural processes underlying sound perception are still far from being understood. Complementary research in animals and humans has revealed the response properties of neurons and neuronal populations along the auditory pathway from the cochlear nucleus to the cortex. Current knowledge of the neural representation of the spectrotemporal features of the incoming sound is such that the sound spectrogram can be accurately reconstructed from neuronal population responses (Pasley et al., <xref ref-type="bibr" rid="B91">2012</xref>). Yet, the precise neural representation of the acoustic sound features alone cannot explain sound perception fully. In fact, how a sound is perceived may be invariant to changes in its acoustic properties. Unless the context in which a sound is repeated is absolutely identical to the first encounter&#x02014;which is rather unlikely under natural circumstances&#x02014;recognizing a sound is not trivial, given that the acoustic properties of the two repetitions may not entirely match. Obviously, this poses an extreme challenge to the auditory system. To maintain processing efficiency, acoustically different sounds must be mapped onto the same perceptual representation. Thus, an essential part of sound processing is the reduction or perceptual categorization of the vast diversity of spectrotemporal events into meaningful (i.e., behaviorally relevant) units. However, despite the ease with which humans generally accomplish this task, the detection of relevant and invariant information in the complexity of the sensory input is not straightforward. 
This is also reflected in the performance of artificial voice and speech recognition systems for human-computer interaction, which falls far below that of humans, mainly owing to the difficulty of dealing with the naturally occurring variability in speech signals (Benzeguiba et al., <xref ref-type="bibr" rid="B7">2007</xref>). In humans, the need for perceptual abstraction in everyday functioning manifests itself in pathological conditions such as autism spectrum disorder (ASD). In addition to their susceptibility to more general cognitive deficits in abstract reasoning and concept formation (Minshew et al., <xref ref-type="bibr" rid="B78">2002</xref>), individuals with ASD tend to show enhanced processing of detailed acoustic information, while processing of more complex and socially relevant sounds such as speech may be diminished (reviewed in Ouimet et al., <xref ref-type="bibr" rid="B90">2012</xref>).</p>
<p>Speech sounds have been widely investigated in the context of sensory-perceptual transformation as they represent a prominent example of perceptual sound categories that comprise a large number of acoustically different sounds. Interestingly, there is no clear boundary between two phoneme categories such as /b/ and /d/: the underlying acoustic features vary smoothly from one category to the next (Figure <xref ref-type="fig" rid="F1">1A</xref>). Remarkably though, if people are asked to identify individual sounds randomly taken from this spectrotemporal continuum as either /b/ or /d/, their percept does not vary gradually as suggested by the sensory input. Instead, the sounds from the first portion of the continuum are robustly identified as /b/, while the sounds from the second part are perceived as /d/, with an abrupt perceptual switch in between (Figure <xref ref-type="fig" rid="F1">1B</xref>). Performance on discrimination tests further suggests that people are fairly insensitive to the underlying variation of the stimuli within one phoneme category, mapping various physically different stimuli onto the same perceptual object (Liberman et al., <xref ref-type="bibr" rid="B70">1957</xref>). At the category boundary, however, the same extent of physical difference is perceived as a change in stimulus identity. This difference in perceptual discrimination also affects speech production, which strongly relies on online monitoring of auditory feedback. Typically, a self-produced error in the articulation of a speech sound is instantaneously corrected for if, e.g., the output vowel differs from the intended vowel category. An acoustic deviation of the same magnitude and direction may, however, be tolerated if the produced sound and the intended sound fall within the same perceptual category (Niziolek and Guenther, <xref ref-type="bibr" rid="B84">2013</xref>). 
This suggests that the within-category differences in the physical domain are perceptually compressed to create a robust representation of the phoneme category while between-category differences are perceptually enhanced to rapidly detect the relevant change of phoneme identity. This phenomenon is termed &#x0201C;Categorical Perception&#x0201D; (CP, Harnad, <xref ref-type="bibr" rid="B48">1987</xref>) and has been demonstrated for stimuli from various natural domains apart from speech, such as music (Burns and Ward, <xref ref-type="bibr" rid="B16">1978</xref>), color (Bornstein et al., <xref ref-type="bibr" rid="B12">1976</xref>; Franklin and Davies, <xref ref-type="bibr" rid="B34">2004</xref>) and facial expressions of emotion (Etcoff and Magee, <xref ref-type="bibr" rid="B30">1992</xref>), not only for humans but also for monkeys (Freedman et al., <xref ref-type="bibr" rid="B35">2001</xref>, <xref ref-type="bibr" rid="B36">2003</xref>), chinchillas (Kuhl and Miller, <xref ref-type="bibr" rid="B63">1975</xref>), songbirds (Prather et al., <xref ref-type="bibr" rid="B95">2009</xref>), and even crickets (Wyttenbach et al., <xref ref-type="bibr" rid="B118">1996</xref>). Thus, the formation of discrete perceptual categories from a continuous physical signal seems to be a universal reduction mechanism to deal with the complexity of natural environments.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Illustration of the sensory-perceptual transformation of speech sounds</bold>. <bold>(A)</bold> Schematic representation of spectral patterns for the continuum between the phonemes /b/ and /d/. F1 and F2 reflect the first and second formant (i.e., amplitude peaks in the frequency spectrum). <bold>(B)</bold> Phoneme identification curves corresponding to the continuum in A. Curves are characterized by relatively stable percepts within a phoneme category and sharp transitions in between. Figure adapted from Liberman et al. (<xref ref-type="bibr" rid="B70">1957</xref>).</p></caption>
<graphic xlink:href="fnins-08-00132-g0001.tif"/>
</fig>
<p>Several recent reviews have discussed the neural representation of sound categories in auditory cortex (AC) and the role of learning-induced plasticity (e.g., Nourski and Brugge, <xref ref-type="bibr" rid="B87">2011</xref>; Spierer et al., <xref ref-type="bibr" rid="B109">2011</xref>). The emphasis of the current review lies on recent empirical studies using natural or artificial sounds and experimental paradigms that enable separating acoustic and perceptual processing levels and avoid interference with existing category representations (such as for speech). Additionally, we discuss the opportunities of modern analysis techniques such as multivariate pattern analysis (MVPA) in studying categorical sound representations.</p>
</sec>
<sec>
<title>The role of experience in the formation of perceptual categories</title>
<p>While CP has been demonstrated many times for a large variety of stimuli, the mechanisms underlying this phenomenon remain debated. Even for speech, which has most widely been investigated, the relative contribution of innate processes and learning in the formation of phoneme categories is not completely resolved. Despite the striking consistency of perceptual phoneme boundaries across different listeners, behavioral evidence suggests that those boundaries are malleable depending on the context in which the sounds are perceived (Benders et al., <xref ref-type="bibr" rid="B6">2010</xref>). Additionally, cross-cultural studies have shown that language learning influences the discriminability of speech sounds, such that phonemes in one particular language are only perceived categorically by speakers of that language and continuously otherwise (Kuhl et al., <xref ref-type="bibr" rid="B64">1992</xref>). Similarly, lifelong (e.g., musical training) as well as short-term experience both affect behavioral processing&#x02014;and neural encoding (see below)&#x02014;of relevant speech cues, such as pitch, timbre and timing (Kraus et al., <xref ref-type="bibr" rid="B61">2009</xref>). Support for the claim that speech CP can be acquired through training comes from experimental learning studies that successfully induced discontinuous perception of a non-native phoneme continuum through elaborate category training (Myers and Swan, <xref ref-type="bibr" rid="B82">2012</xref>). Nevertheless, even after extensive training, non-native phoneme contrasts tend to remain less robust than speech categories in the native language. Apart from the age of acquisition, the complexity of the learning environment and in particular the stimulus variability offered during category learning seem to affect the ability to discriminate novel phonetic contrasts (Logan et al., <xref ref-type="bibr" rid="B76">1991</xref>). 
A prevalent theory for the formation of speech categories in particular is the motor theory of speech perception (Liberman and Mattingly, <xref ref-type="bibr" rid="B71">1985</xref>). This theory claims that speech sounds are categorized based on the distinct motor commands for the vocal tract used for pronunciation. Further fueled by the discovery of mirror neurons, the theory still has its proponents (for review see Galantucci et al., <xref ref-type="bibr" rid="B40">2006</xref>); today, however, it is disputed in its strict form, in which speech processing is considered special, as the recruitment of the motor system for sound identification has been demonstrated for various forms of non-speech action-related sounds (Kohler et al., <xref ref-type="bibr" rid="B58">2002</xref>). Furthermore, accumulating evidence indicates that CP can be induced by learning for a variety of non-speech stimulus material (e.g., simple noise sounds, Guenther et al., <xref ref-type="bibr" rid="B45">1999</xref> and inharmonic tone complexes, Goudbeek et al., <xref ref-type="bibr" rid="B43">2009</xref>). The use of artificially constructed categories for studying CP has the advantage that the physical distance between neighboring stimuli can be controlled such that the similarity ratings of within- or between-category stimuli can be attributed to true perceptual effects, rather than the metrics of the stimulus dimensions. Nevertheless, one should bear in mind that the long-term exposure to statistical regularities of the acoustics of natural sounds might exert a lasting influence on the formation of new sound categories. In support of this claim, Scharinger et al. (<xref ref-type="bibr" rid="B99">2013b</xref>) revealed a strong preference for negatively correlated spectral dimensions typical of speech and other natural categories when participants learned to categorize novel auditory stimuli. 
In line with this behavioral documentation in humans, a recent study in rodent pups demonstrated that auditory receptive fields are prone to the statistical regularities of the acoustic environment, which shape the tuning curves of cortical neurons. Most importantly, these neuronal changes were shown to parallel an increase in perceptual discrimination of the employed sounds, which points to a link between (early) neuronal plasticity and perceptual discrimination ability (K&#x000F6;ver et al., <xref ref-type="bibr" rid="B59">2013</xref>). In sum, these experiments demonstrate that perceptual abilities can be modified by learning and experience, while pre-existing (i.e., innate) neural structures and their early adaptation during critical phases of maturation might play a vital role.</p>
</sec>
<sec>
<title>Neural representations of perceptual sound categories</title>
<p>Behavioral studies have been complemented with research on the neural implementation of perceptual sound categories. Forming new sound categories or assigning a new stimulus to an existing category requires the integration of bottom-up stimulus driven information with knowledge from prior experience and memory as well as linking this information to the appropriate response in case of an active categorization task. Different research lines have highlighted the contribution of neural structures along the auditory pathway and in the cortex to this complex and dynamic process.</p>
<p>Functional neuroimaging studies employing natural sound categories such as voices, speech, and music have located object-specific processing units in higher-level auditory areas in the superior temporal lobe (Belin et al., <xref ref-type="bibr" rid="B5">2000</xref>; Leaver and Rauschecker, <xref ref-type="bibr" rid="B66">2010</xref>). Particularly, native phoneme categories were shown to recruit the left superior temporal sulcus (STS) (Liebenthal et al., <xref ref-type="bibr" rid="B72">2005</xref>) and the activation level of this region seems to correlate with the degree of categorical processing (Desai et al., <xref ref-type="bibr" rid="B24">2008</xref>). While categorical processes in the STS were documented by further studies, the generalization to other sound categories beyond speech remains controversial, given that the employed stimuli were either speech sounds or artificial sounds with speech-like characteristics (Leech et al., <xref ref-type="bibr" rid="B67">2009</xref>; Liebenthal et al., <xref ref-type="bibr" rid="B73">2010</xref>). Even though speech sounds are natural examples of the discrepancy between sensory and perceptual space, the results derived from these studies may not generalize to other categories, as humans are processing experts for speech (similar to faces) even prior to linguistic experience (Eimas et al., <xref ref-type="bibr" rid="B26">1987</xref>). In addition, regions in the temporal lobe were shown to retain sensitivity to acoustic variability within sound categories, while highly abstract phoneme representations (i.e., invariant to changes within one phonetic category) appear to depend on decision-related processes in the frontal lobe (Myers et al., <xref ref-type="bibr" rid="B81">2009</xref>). These results are highly compatible with those from cell recordings in rhesus monkeys (Tsunada et al., <xref ref-type="bibr" rid="B112">2011</xref>). 
Based on the analysis of single-cell responses to human speech categories, the authors suggest that &#x0201C;a hierarchical relationship exists between the superior temporal gyrus (STG) and the ventral PFC whereby STG provides the &#x02018;sensory evidence&#x02019; to form the decision and ventral PFC activity encodes the output of the decision process.&#x0201D; Analogous to the two-stage hierarchical processing model in the visual domain (Freedman et al., <xref ref-type="bibr" rid="B36">2003</xref>; Jiang et al., <xref ref-type="bibr" rid="B53">2007</xref>; Li et al., <xref ref-type="bibr" rid="B69">2009</xref>), the set of findings reviewed above suggests that processing areas in the temporal lobe only constitute a preparatory stage for categorization. Specifically, the model proposes that the tuning of neuronal populations in lower-level sensory areas is sharpened according to the category-relevant stimulus features, forming a task-independent reduction of the sensory input (but see below for a different view on the role of early auditory areas). In the case of an active categorization task, this information is projected to higher-order cortical areas in the frontal lobe. The predominant recruitment of the prefrontal cortex (PFC) during early phases of category learning (Little and Thulborn, <xref ref-type="bibr" rid="B75">2005</xref>) and in the context of an active categorization task (Boettiger and D&#x00027;Esposito, <xref ref-type="bibr" rid="B9">2005</xref>; Husain et al., <xref ref-type="bibr" rid="B52">2006</xref>; Li et al., <xref ref-type="bibr" rid="B69">2009</xref>) supports the concept that it plays a major role in rule learning and attention-related processes modulating lower-level sound processing rather than being the site of categorical sound representations <italic>per se</italic>.</p>
<p>Categorical processing does not, however, proceed exclusively along the auditory &#x0201C;what&#x0201D; stream. To study the neural basis of CP, Raizada and Poldrack (<xref ref-type="bibr" rid="B96">2007</xref>) measured fMRI while subjects listened to pairs of stimuli taken from a phonetic /ba/-/da/ continuum. Responses in the supramarginal gyrus were significantly larger for pairs that included stimuli belonging to different phonetic categories (i.e., crossing the category boundary) than for pairs with stimuli from a single category. The authors interpreted these results as evidence for &#x0201C;neural amplification&#x0201D; of relevant stimulus difference and thus for categorical processing in the supramarginal gyrus. Similar analyses showed comparatively little amplification of changes that crossed category boundaries in low-level auditory cortical areas (Raizada and Poldrack, <xref ref-type="bibr" rid="B96">2007</xref>). More recent findings have revived the motor theory of categorical processing: Chevillet et al. (<xref ref-type="bibr" rid="B21">2013</xref>) provide evidence that the role of the premotor cortex (PMC) is not limited to motor-related processes during active categorization, but that the phoneme-category tuning of premotor regions may also facilitate more automatic speech processes via dorsal projections originating from pSTS. While this automatic motor route is probably limited to processing of speech and other action-related sound categories, the diversity of the categorical processing networks documented in the above-cited studies demonstrates that there is not a single answer to where and how sound categories are represented. The role that early auditory cortical fields play in the perceptual abstraction from the acoustic input remains a relevant topic of current research. 
A recent study from Nelken&#x00027;s group indicated that neurons in the cat primary auditory area convey more information about abstract auditory entities than about the spectro-temporal sound structure (Chechik and Nelken, <xref ref-type="bibr" rid="B20">2012</xref>). These results are in line with the proposal that neuronal populations in primary AC encode perceptual abstractions of sounds (or <italic>auditory objects</italic>, Griffiths and Warren, <xref ref-type="bibr" rid="B44">2004</xref>) rather than their physical makeup (Nelken, <xref ref-type="bibr" rid="B83">2004</xref>). Furthermore, research from Scheich&#x00027;s group has suggested that sound representations in primary AC are largely context- and task-dependent and reflect memory-related and semantic aspects of actively listening to sounds (Scheich et al., <xref ref-type="bibr" rid="B100">2007</xref>). This suggestion is also supported by the observation of semantic/categorical effects within early (&#x0007E;70 ms) post-stimulus time windows in human auditory evoked potentials (Murray et al., <xref ref-type="bibr" rid="B80">2006</xref>).</p>
<p>Finding empirical evidence for abstract categorical representations in low-level auditory cortex in humans, however, remains challenging as it requires experimental paradigms and analysis methods that allow disentangling the perceptual processes from the strong dependence of these auditory neurons on the physical sound attributes. Here, carefully controlled stimulation paradigms in combination with fMRI pattern decoding (see below) could shed light on the matter. For example, Staeren et al. (<xref ref-type="bibr" rid="B110">2009</xref>) were able to dissociate perceptual from stimulus-driven processes by controlling the physical overlap of stimuli within and between natural sound categories. They revealed categorical sound representations in spatially distributed and even overlapping activation patterns in early areas of human AC. Similarly, studies employing fMRI-decoding to investigate the auditory cortical processing of speech/voice categories have put forward a &#x0201C;constructive&#x0201D; role of early auditory cortical networks in the formation of perceptual sound representations (Formisano et al., <xref ref-type="bibr" rid="B32">2008</xref>; Kilian-H&#x000FC;tten et al., <xref ref-type="bibr" rid="B55">2011a</xref>; Bonte et al., <xref ref-type="bibr" rid="B10">2014</xref>).</p>
<p>Crucially, studying context-dependence and plasticity of sound representations in early auditory areas may help unravel their nature. For example, Dehaene-Lambertz et al. (<xref ref-type="bibr" rid="B23">2005</xref>) demonstrated that even early low-level sound processing is susceptible to top-down directed cognitive influences. In a combination of fMRI and electrophysiological measures, they showed that identical acoustic stimuli were processed in a different fashion, depending on the &#x0201C;perceptual mode&#x0201D; (i.e., whether participants perceived the sounds as speech or artificial whistles).</p>
<p>This literature review illustrates that in order to understand the neural mechanisms underlying the formation of perceptual categories, it is necessary to (1) carefully separate perceptual from acoustical sound representations, (2) distinguish between lower-level perceptual representations and higher-order or feedback-guided decision- and task-related processes and also (3) avoid interference with existing processing networks for familiar and overlearned sound categories.</p>
</sec>
<sec>
<title>Learning and plasticity</title>
<p>Most knowledge about categorical processing in the brain is derived from experiments employing speech or other natural (e.g., music) sound categories. While providing important insights into the neural representations of familiar sound categories, these studies lack the potential to investigate the mechanisms underlying the transformation from acoustic to more abstract perceptual representations. Sound processing must, however, remain highly plastic beyond sensitive periods early in ontogenesis to allow efficient processing adapted to the changing requirements of the acoustic environment.</p>
<p>Studying these rapid experience-related neural reorganizations requires controlled learning paradigms of new sound categories. With novel, artificial sounds, the acoustic properties can be controlled, such that physical and perceptual representations can be decoupled and interference with existing representations of familiar sound categories can be avoided (but see Scharinger et al., <xref ref-type="bibr" rid="B99">2013b</xref>). A comparison of pre- and post-learning neural responses provides information about the amenability of sound representations along different levels of the auditory processing hierarchy to learning-induced plasticity. Extensive research by Fritz and colleagues has provided convincing evidence for learning-induced plasticity of cortical receptive fields. In ferrets that were trained on a target (tone) detection task, a large proportion of cells in primary AC showed significant changes in spectro-temporal receptive field (STRF) shape during the detection task, as compared with the passive pre-behavioral STRF. Relevant to the focus of this review, in two-thirds of these cells the changes persisted in the post-behavior passive state (Fritz et al., <xref ref-type="bibr" rid="B38">2003</xref>, see also Shamma and Fritz, <xref ref-type="bibr" rid="B105">2014</xref>). Additionally, recent results from animal models and human studies have revealed evidence for similar cellular and behavioral mechanisms for learning and memory in the auditory brainstem (e.g., Tzounopoulos and Kraus, <xref ref-type="bibr" rid="B114">2009</xref>).</p>
<p>Learning studies further provide the opportunity to look into the interaction of lower-level sensory and higher-level association cortex during task- and decision-related processes (De Souza et al., <xref ref-type="bibr" rid="B25">2013</xref>). In contrast to juvenile plasticity, which is mainly driven by bottom-up input, adult learning is supposedly largely dependent on top-down control (Kral, <xref ref-type="bibr" rid="B60">2013</xref>). Thus, categorical processing after short-term plasticity induced by temporary changes of environmental demands might differ from the processes formed by early-onset and long-term adaptation to speech stimuli. Even though there is evidence that with increasing proficiency in category discrimination, neural processing of newly learned speech sounds starts to parallel that of native speech (Golestani and Zatorre, <xref ref-type="bibr" rid="B42">2004</xref>), recent work has suggested a discrepancy between ventral processing networks for highly familiar native sound categories and dorsal processing networks for non-native or artificial sound categories (Callan et al., <xref ref-type="bibr" rid="B17">2004</xref>; Liebenthal et al., <xref ref-type="bibr" rid="B73">2010</xref>, <xref ref-type="bibr" rid="B74">2013</xref>). This difference potentially limits the generalization to native speech of findings derived from studies employing artificial sound categories.</p>
<p>Several studies have examined the changes in the neural sound representations underlying the perceptual transformations induced by category learning. A seminal study with gerbils demonstrated that learning to categorize artificial sounds in the form of frequency sweeps resulted in a transition from a physical (i.e., onset frequency) to a categorical (i.e., up vs. down) sound representation already in the primary AC (Ohl et al., <xref ref-type="bibr" rid="B88">2001</xref>). In contrast to the traditional understanding of primary AC as a feature detector, this finding implies that sound representations at the first cortical analysis stage are more abstract and prone to plastic reorganization imposed by changes in environmental demands. In fact, sound stimuli have passed through several levels of basic feature analyses before they ascend to the superior temporal cortex (Nelken, <xref ref-type="bibr" rid="B83">2004</xref>). Thus, as discussed above, sound representations in primary AC are unlikely to be faithful copies of the physical characteristics. Even though the involvement of AC in categorization of artificial sounds has also been demonstrated in humans (Guenther et al., <xref ref-type="bibr" rid="B46">2004</xref>), conventional subtraction paradigms typically employed in fMRI studies lack sufficient sensitivity to demarcate distinct categorical representations. Due to the large physical variability within categories and the similarity of sounds straddling the category boundary, between-category contrasts often do not reveal significant results (Klein and Zatorre, <xref ref-type="bibr" rid="B57">2011</xref>). Furthermore, the effects of category learning on sound processing as demonstrated in animals were based on changes in the spatiotemporal activation pattern without apparent changes in response strength (Ohl et al., <xref ref-type="bibr" rid="B88">2001</xref>; Engineer et al., <xref ref-type="bibr" rid="B27">2014</xref>). 
Using <italic>in vivo</italic> two-photon calcium imaging in mice, Bathellier et al. (<xref ref-type="bibr" rid="B4">2012</xref>) have convincingly shown that categorical sound representations&#x02014;which can be selected for behavioral or perceptual decisions&#x02014;may emerge as a consequence of non-linear dynamics in local networks in the auditory cortex (see also Tsunada et al., <xref ref-type="bibr" rid="B113">2012</xref>, and a recent review by Mizrahi et al., <xref ref-type="bibr" rid="B79">2014</xref>).</p>
<p>In human neuroimaging, these neuronal effects that do not manifest as changes in overall response levels may remain inscrutable to univariate contrast analyses. Likewise, fMRI designs based on adaptation or, more generally, on measuring responses to stimulus pairs/sequences (e.g., as in Raizada and Poldrack, <xref ref-type="bibr" rid="B96">2007</xref>) do not allow one to exclude generic effects related to the processing of sound sequences or potential hemodynamic confounds, because the reflection of neuronal adaptation/suppression effects in the fMRI signals is complex (Boynton and Finney, <xref ref-type="bibr" rid="B13">2003</xref>; Verhoef et al., <xref ref-type="bibr" rid="B115">2008</xref>).</p>
<p>Modern analysis techniques with increased sensitivity to spatially distributed activation changes in the absence of changes in overall signal level provide a promising tool to decode perceptually invariant sound representations in humans (Formisano et al., <xref ref-type="bibr" rid="B32">2008</xref>; Kilian-H&#x000FC;tten et al., <xref ref-type="bibr" rid="B55">2011a</xref>) and to detect the neural effects of learning (Figure <xref ref-type="fig" rid="F2">2</xref>). Multivariate pattern analysis (MVPA) employs established classification techniques from machine learning to discriminate between different cognitive states that are represented in the combined activity of multiple locally distributed voxels, even when their average activity does not differ between conditions (see Haynes and Rees, <xref ref-type="bibr" rid="B50">2006</xref>; Norman et al., <xref ref-type="bibr" rid="B86">2006</xref>; Haxby, <xref ref-type="bibr" rid="B49">2012</xref> for tutorial reviews). Recently, Ley et al. (<xref ref-type="bibr" rid="B68">2012</xref>) demonstrated the potential of this method to trace rapid transformations of neural sound representations, transformations based entirely on changes in how the sounds are perceived, induced by a few days of category learning (Figure <xref ref-type="fig" rid="F3">3</xref>). In their study, participants were trained to categorize complex artificial ripple sounds, which differed along several acoustic dimensions, into two distinct groups. BOLD activity was measured before and after training during passive exposure to an acoustic continuum spanning the space between the trained categories. This design ensured that the acoustic stimulus dimensions were uninformative about the trained sound categorization, such that any change in the activation pattern could be attributed to a warping of the perceptual space rather than to physical stimulus differences. 
After successful learning, locally distributed response patterns in Heschl&#x00027;s gyrus (HG) and adjacent regions became selective for the trained category discrimination (pitch), whereas the same sounds had elicited indistinguishable responses before. In line with recent findings in rat primary AC (Engineer et al., <xref ref-type="bibr" rid="B28">2013</xref>), the similarity of the cortical activation patterns reflected the sigmoid categorical structure and correlated with perceptual rather than physical sound similarity. Thus, complementary research in animals and humans indicates that perceptual sound categories are represented in the activation patterns of distributed neuronal populations in early auditory regions, further supporting a role of early AC in abstract and experience-driven sound processing rather than mere acoustic feature mapping (Nelken, <xref ref-type="bibr" rid="B83">2004</xref>). It is noteworthy that these abstract categorical representations were detectable despite passive listening conditions. This is an important detail, as it demonstrates that categorical representations are (at least partially) independent of higher-order decision or motor-related processes. Furthermore, it suggests that some preparatory (i.e., multipurpose) abstraction of the physical input happens at the level of the early auditory cortex.</p>
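<p>The decoding logic described above can be sketched in a toy simulation (illustrative only; not the authors' actual pipeline, and all sizes and noise levels are arbitrary assumptions). Two stimulus categories evoke multivoxel patterns with identical mean amplitude, so a univariate contrast finds nothing, yet a cross-validated prototype classifier, similar in spirit to the correlation-based pattern analysis in Ley et al. (2012), discriminates them well above chance:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40        # hypothetical region size / trials per class

# Two class-specific spatial patterns with identical (zero) mean amplitude:
# the category information lives in the *pattern*, not the overall level.
pattern_a = rng.standard_normal(n_voxels)
pattern_b = rng.standard_normal(n_voxels)
pattern_a -= pattern_a.mean()
pattern_b -= pattern_b.mean()

def simulate(pattern, n, noise=1.0):
    """Noisy single-trial responses around a fixed spatial pattern."""
    return pattern + noise * rng.standard_normal((n, n_voxels))

X_a, X_b = simulate(pattern_a, n_trials), simulate(pattern_b, n_trials)

# Split trials into a training set and a testing set (cf. Figure 2A).
train_a, test_a = X_a[:20], X_a[20:]
train_b, test_b = X_b[:20], X_b[20:]

# "Training": estimate a prototype response pattern per category.
proto_a, proto_b = train_a.mean(0), train_b.mean(0)

def corr(u, v):
    u, v = u - u.mean(), v - v.mean()
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def classify(trial):
    """'Testing': label a new, unlabeled trial by the more similar prototype."""
    return 'A' if corr(trial, proto_a) > corr(trial, proto_b) else 'B'

hits = sum(classify(t) == 'A' for t in test_a) + \
       sum(classify(t) == 'B' for t in test_b)
accuracy = hits / (len(test_a) + len(test_b))

print(f"decoding accuracy: {accuracy:.2f} (chance = 0.50)")
print(f"mean activity A vs. B: {X_a.mean():.3f} vs. {X_b.mean():.3f}")
```

<p>In a real analysis, the same split-train-test cycle would be repeated over many partitions of the trials, and accuracy would be assessed statistically against chance level.</p>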
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Functional MRI pattern decoding and rationale for its application in the neuroimaging of learning</bold>. <bold>(A)</bold> General logic of fMRI pattern decoding (Figure adapted from Formisano et al., <xref ref-type="bibr" rid="B32">2008</xref>). Trials (and corresponding multivariate responses) are split into a training set and a testing set. On the training set of data, response patterns that maximally discriminate the stimulus categories are estimated; the testing set of data is then used to measure the correctness of discrimination of new, unlabeled trials. For statistical assessment, the same analysis is repeated for different splits of learning and test sets. <bold>(B)</bold> Schematic representation of the perceptual (and possibly neural) transformation from a continuum to a discrete categorical representation. The first plot depicts an artificial two-dimensional stimulus space without physical indications of a category boundary (exemplars are equally spaced along both dimensions). During learning, stimuli are separated according to the relevant dimension, irrespective of the variability in the second dimension. Lasting differential responses for the left and right half of the continuum eventually lead to a warping of the perceptual space in which within-category differences are reduced and between-category differences enlarged. Graphics inspired by Kuhl (<xref ref-type="bibr" rid="B62">2000</xref>). Thus, in cortical regions where (sound) categories are represented, higher fMRI-based decoding accuracy of responses to stimuli from the two categories is expected <italic>after learning</italic>.</p></caption>
<graphic xlink:href="fnins-08-00132-g0002.tif"/>
</fig>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>Representation of the study by Ley et al. (<xref ref-type="bibr" rid="B68">2012</xref>)</bold>. <bold>(A)</bold> Multidimensional stimulus space spanning the two categories A and B. <bold>(B)</bold> Group discrimination maps based on the post-learning fMRI data for the trained stimulus division (i.e., &#x0201C;low pitch&#x0201D; vs. &#x0201C;high pitch&#x0201D;), displayed on an average reconstructed cortical surface after cortex-based realignment. <bold>(C)</bold> Average classification accuracies based on fMRI data prior to category training and after successful category learning for the two types of stimulus space divisions (trained vs. untrained) and the respective trial labeling. <bold>(D)</bold> Changes in pattern similarity and behavioral identification curves. After category learning, neural response patterns for sounds with higher pitch (pitch levels 4, 5, 6) correlated with the prototypical response pattern for class B more strongly than class A, independent of other acoustic features. The profile of these correlations on the pitch continuum closely reflected the sigmoid shape of the behavioral category identification function.</p></caption>
<graphic xlink:href="fnins-08-00132-g0003.tif"/>
</fig>
<p>The mechanisms of neuroplasticity underlying category learning and the origin of the categorical organization of sound representations in the auditory cortex are still quite poorly understood and deserve further investigation. Hypotheses are primarily derived from perceptual learning studies in animals. These studies show that extensive discrimination training may elicit reorganization of the auditory cortical maps, selectively increasing the representation of the behaviorally relevant sound features (Recanzone et al., <xref ref-type="bibr" rid="B97">1993</xref>; Polley et al., <xref ref-type="bibr" rid="B94">2006</xref>). This suggests that environmental and behavioral demands lead to changes of the auditory tuning properties of neurons such that more neurons are tuned to the relevant features to achieve higher sensitivity in the relevant dimension. This reorganization is mediated by synaptic plasticity, i.e., the strengthening of neuronal connections following rules of Hebbian learning (Hebb, <xref ref-type="bibr" rid="B51">1949</xref>; for recent review, see Caporale and Dan, <xref ref-type="bibr" rid="B19">2008</xref>). Passive learning studies suggest that attention is not necessary for sensory plasticity to occur (Watanabe et al., <xref ref-type="bibr" rid="B117">2001</xref>; Seitz and Watanabe, <xref ref-type="bibr" rid="B104">2003</xref>). However, in contrast to the mostly unequivocal sound structure used for perceptual learning experiments, learning to categorize a large number of sounds differing along multiple dimensions requires either sound distributions indicative of the category structure (Goudbeek et al., <xref ref-type="bibr" rid="B43">2009</xref>) or a task including response feedback in order to extract the relevant and category discriminative sound feature. This selective enhancement of features requires some top-down gating mechanism. 
Attention can act as such a filter, increasing feature saliency (Lakatos et al., <xref ref-type="bibr" rid="B65">2013</xref>) by selectively modulating the tuning properties of neurons in the auditory cortex, eventually leading to a competitive advantage of behaviorally relevant information (Bonte et al., <xref ref-type="bibr" rid="B11">2009</xref>, <xref ref-type="bibr" rid="B10">2014</xref>; Ahveninen et al., <xref ref-type="bibr" rid="B2">2011</xref>). As a consequence, more neural resources would be allocated to the behaviorally relevant information at the expense of information that is irrelevant for the decision. The adaptive allocation of neural resources to diagnostic information after category learning is supported by evidence from monkey electrophysiology (Sigala and Logothetis, <xref ref-type="bibr" rid="B108">2002</xref>; De Baene et al., <xref ref-type="bibr" rid="B22">2008</xref>) and human imaging, showing decreased activation for prototypical exemplars of a category relative to exemplars near the category boundary (Guenther et al., <xref ref-type="bibr" rid="B46">2004</xref>). This idea of categorical sound representations being sparse or parsimonious is also compatible with fMRI observations by Brechmann and Scheich (<xref ref-type="bibr" rid="B14">2005</xref>), showing an inverse correlation of auditory cortex activation and performance in an auditory categorization task. The recent discovery of a positive correlation between gray matter probability in parietal cortex and the optimal utilization of acoustic features in a categorization task (Scharinger et al., <xref ref-type="bibr" rid="B98">2013a</xref>) provides further evidence for the crucial role of attentional processes in feature selection necessary for category learning. Reducing the representation of a large number of sounds to a few relevant features presents an enormous processing advantage. 
It facilitates the read-out of the categorical pattern owing to the pruned data structure and economizes on neural resources by avoiding redundancies in the representation, in line with the concept of sparse coding (Olshausen and Field, <xref ref-type="bibr" rid="B89">2004</xref>).</p>
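<p>The core of the Hebbian account sketched above, that repeated pairing of a behaviorally relevant input with postsynaptic activity strengthens the synapses driven by that input, can be illustrated with a deliberately minimal toy model (all parameter values are arbitrary assumptions; real cortical plasticity additionally involves timing-dependent and neuromodulatory rules, see Caporale and Dan, 2008):</p>

```python
import numpy as np

rng = np.random.default_rng(1)

n_inputs = 20                       # hypothetical frequency channels
w = np.full(n_inputs, 0.5)          # initially uniform synaptic weights
eta = 0.05                          # learning rate
trained_channel = 7                 # the behaviorally relevant feature
w0_norm = np.linalg.norm(w)         # for normalization (prevents runaway growth)

for _ in range(200):
    x = 0.1 * rng.random(n_inputs)  # weak background activity in all channels
    x[trained_channel] += 1.0       # relevant feature repeatedly present
    y = w @ x                       # postsynaptic response
    w += eta * y * x                # Hebbian rule: dw = eta * post * pre
    w *= w0_norm / np.linalg.norm(w)

# The weight profile now peaks at the trained channel: the relevant
# feature has captured a larger share of the fixed synaptic resources.
print(int(np.argmax(w)))            # -> 7
```

<p>Because total synaptic weight is held fixed, the gain for the trained channel comes at the expense of the untrained ones, a simple analog of the enlarged representation of behaviorally relevant features reported by Recanzone et al. (1993) and Polley et al. (2006).</p>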
<p>To date, there are several models for describing the neural circuitry between sensory and higher-order attentional processes mediating learning-induced plasticity. Predictive coding models propose that the dynamic interaction between bottom-up sensory information and top-down modulation by prior experience shapes the perceptual sound representation (Friston, <xref ref-type="bibr" rid="B37">2005</xref>). This implies that categorical perception would arise from the continuous updating of the internal representation during learning to incorporate all variability present within a category, with the objective of reducing the prediction error (i.e., the difference between sensory input and internal representation). Consequently, lasting interaction between forward-driven processing and backward modulation could induce synaptic plasticity and result in an internal representation that correctly matches the categorical structure and therefore optimally guides correct behavior beyond the scope of the training period. The implementation of these Bayesian processing models rests on hierarchical structures consisting of forward, backward and lateral connections entering different cortical layers (Felleman and Van Essen, <xref ref-type="bibr" rid="B31">1991</xref>; Hackett, <xref ref-type="bibr" rid="B47">2011</xref>). According to the Reverse Hierarchy Theory (Ahissar and Hochstein, <xref ref-type="bibr" rid="B1">2004</xref>), category learning would be initiated by high-level processes involved in rule learning, which control, via top-down modulation, selective plasticity in lower-level sensory areas, sharpening responses according to the learning rule (Sussman et al., <xref ref-type="bibr" rid="B111">2002</xref>; Myers and Swan, <xref ref-type="bibr" rid="B82">2012</xref>). 
In accordance with this view, attentional modulation involving a fronto-parietal network of brain areas appears most prominent during early phases of learning, progressively decreasing with expertise (Little and Thulborn, <xref ref-type="bibr" rid="B75">2005</xref>; De Souza et al., <xref ref-type="bibr" rid="B25">2013</xref>). Despite recent evidence for early sensory-perceptual abstraction mechanisms in human auditory cortex (Murray et al., <xref ref-type="bibr" rid="B80">2006</xref>; Bidelman et al., <xref ref-type="bibr" rid="B8">2013</xref>), it is crucial to note that the reciprocal information exchange between higher-level and lower-level cortical fields happens very fast (Kral, <xref ref-type="bibr" rid="B60">2013</xref>) and even within the auditory cortex, processing is characterized by complex forward, lateral and backward microcircuits (Atencio and Schreiner, <xref ref-type="bibr" rid="B3">2010</xref>; Schreiner and Polley, <xref ref-type="bibr" rid="B101">2014</xref>). Therefore, the origin of the categorical responses in AC is difficult to determine unless the response latencies and laminar structure are carefully investigated.</p>
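<p>The error-reduction idea at the heart of the predictive coding account can be reduced to a toy update loop (a sketch of the principle only, not Friston's full hierarchical model; the feature space, category center and update rate are invented for illustration). An internal representation is nudged toward each incoming exemplar in proportion to the prediction error, so it converges on the category prototype and the residual error shrinks to the within-category spread:</p>

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-D acoustic feature space; exemplars scatter around a
# category center that the learner does not know in advance.
category_center = np.array([1.0, -0.5])
exemplars = category_center + 0.3 * rng.standard_normal((500, 2))

mu = np.zeros(2)          # internal representation (the top-down prediction)
alpha = 0.05              # update rate of the internal model

errors = []
for x in exemplars:
    eps = x - mu          # bottom-up prediction error (input minus prediction)
    mu += alpha * eps     # top-down model update reduces future error
    errors.append(np.linalg.norm(eps))

# After learning, mu approximates the category prototype and the average
# prediction error has dropped from its initial level to the noise floor.
print(mu)
print(np.mean(errors[:50]), "->", np.mean(errors[-50:]))
```

<p>In the full predictive coding picture this update would be carried by reciprocal forward (error) and backward (prediction) connections between cortical levels, rather than by a single explicit variable.</p>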
</sec>
<sec>
<title>Crossmodal plasticity&#x02014;considerations for future studies</title>
<p>Considering that sound perception strongly relies on the integration of information represented across multiple cortical areas, simultaneous input from the other sensory modalities presents itself as a major source of influence on learning-induced plasticity of sound representations. In fact, there is compelling behavioral evidence that the human perceptual system integrates specific, event-relevant information across auditory and visual (McGurk and MacDonald, <xref ref-type="bibr" rid="B77">1976</xref>) or auditory and tactile (Gick and Derrick, <xref ref-type="bibr" rid="B41">2009</xref>) modalities and that mechanisms of multisensory integration can be shaped through experience (Wallace and Stein, <xref ref-type="bibr" rid="B116">2007</xref>). Together, these two facts predict that visual or tactile contexts during learning have a major impact on perceptual reorganization of sound representations.</p>
<p>Promising insights are provided by behavioral studies showing that multimodal training designs are generally superior to unimodal training designs (Shams and Seitz, <xref ref-type="bibr" rid="B106">2008</xref>). The beneficial effect of multisensory exposure during training may last beyond the training period itself, as reflected in increased performance after removal of the stimulus from one modality (for review, see Shams et al., <xref ref-type="bibr" rid="B107">2011</xref>). This effect has been demonstrated even for brief training periods and arbitrary stimulus pairs (Ernst, <xref ref-type="bibr" rid="B29">2007</xref>), promoting the view that short-term multisensory learning can lead to lasting reorganization of the processing networks (Kilian-H&#x000FC;tten et al., <xref ref-type="bibr" rid="B55">2011a</xref>,<xref ref-type="bibr" rid="B56">b</xref>). Given the considerable evidence for response modulation of auditory neurons by simultaneous non-acoustic events and even crossmodal activation of the auditory cortex in the absence of sound stimuli (Calvert et al., <xref ref-type="bibr" rid="B18">1997</xref>; Foxe et al., <xref ref-type="bibr" rid="B33">2002</xref>; Fu et al., <xref ref-type="bibr" rid="B39">2003</xref>; Brosch et al., <xref ref-type="bibr" rid="B15">2005</xref>; Kayser et al., <xref ref-type="bibr" rid="B54">2005</xref>; Pekkola et al., <xref ref-type="bibr" rid="B92">2005</xref>; Sch&#x000FC;rmann et al., <xref ref-type="bibr" rid="B103">2006</xref>; Nordmark et al., <xref ref-type="bibr" rid="B85">2012</xref>), it is likely that sound representations at the level of AC are also prone to influences from the visual or tactile modality. Animal electrophysiology has suggested different laminar profiles for tactile and visual pathways in the auditory cortex, indicative of forward- and backward-directed input, respectively (Schroeder and Foxe, <xref ref-type="bibr" rid="B102">2002</xref>). 
Crucially, the quasi-laminar resolution achievable with state-of-the-art ultra-high field fMRI (Polimeni et al., <xref ref-type="bibr" rid="B93">2010</xref>) provides new possibilities to systematically investigate&#x02014;in humans&#x02014;the detailed neurophysiological basis underlying the influence of non-auditory input on sound perception and on learning-induced plasticity of sound representations in the auditory cortex.</p>
</sec>
<sec sec-type="conclusion" id="s1">
<title>Conclusion</title>
<p>In recent years, the phenomenon of perceptual categorization has stimulated a tremendous amount of research on the neural representation of perceptual sound categories in animals and humans. Despite this large data pool, no clear answer has yet emerged as to where abstract sound categories are represented in the brain. Whereas animal research provides increasing evidence for complex processing abilities of early auditory areas, results from human studies tend to promote more hierarchical processing models in which categorical perception relies on higher-order temporal and frontal regions. In this review, we discussed this apparent discrepancy and illustrated the potential pitfalls attached to research on categorical sound processing. Separating perceptual and acoustical processes possibly represents the biggest challenge. In this respect, it is crucial to note that many &#x0201C;perceptual&#x0201D; effects demonstrated in animal studies did not manifest as changes in overall signal level. Recent research has shown that while these effects may remain inscrutable to the univariate contrast analyses typically employed in human neuroimaging, modern analysis techniques&#x02014;such as fMRI decoding&#x02014;are capable of unraveling perceptual processes in locally distributed activation patterns. It is also becoming increasingly evident that grasping the full capacity of auditory processing in low-level auditory areas requires considering their susceptibility to context and task, as these areas flexibly adapt their processing resources to environmental demands. In order to bring the advances from animal and human research closer together, future approaches to categorical sound representations in humans are likely to require an integrative combination of controlled stimulation designs, sensitive measurement techniques (e.g., high-field fMRI) and advanced analysis techniques.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>This work was supported by Maastricht University, Tilburg University and the Netherlands Organization for Scientific Research (NWO; VICI grant 453-12-002 to Elia Formisano).</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahissar</surname> <given-names>M.</given-names></name> <name><surname>Hochstein</surname> <given-names>S.</given-names></name></person-group> (<year>2004</year>). <article-title>The reverse hierarchy theory of visual perceptual learning</article-title>. <source>Trends Cogn. Sci</source>. <volume>8</volume>, <fpage>457</fpage>&#x02013;<lpage>464</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2004.08.011</pub-id><pub-id pub-id-type="pmid">15450510</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahveninen</surname> <given-names>J.</given-names></name> <name><surname>H&#x000E4;m&#x000E4;l&#x000E4;inen</surname> <given-names>M.</given-names></name> <name><surname>J&#x000E4;&#x000E4;skel&#x000E4;inen</surname> <given-names>I. P.</given-names></name> <name><surname>Ahlfors</surname> <given-names>S. P.</given-names></name> <name><surname>Huang</surname> <given-names>S.</given-names></name> <name><surname>Lin</surname> <given-names>F.-H.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>108</volume>, <fpage>4182</fpage>&#x02013;<lpage>4187</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1016134108</pub-id><pub-id pub-id-type="pmid">21368107</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Atencio</surname> <given-names>C. A.</given-names></name> <name><surname>Schreiner</surname> <given-names>C. E.</given-names></name></person-group> (<year>2010</year>). <article-title>Laminar diversity of dynamic sound processing in cat primary auditory cortex</article-title>. <source>J. Neurophysiol</source>. <fpage>192</fpage>&#x02013;<lpage>205</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00624.2009</pub-id><pub-id pub-id-type="pmid">19864440</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bathellier</surname> <given-names>B.</given-names></name> <name><surname>Ushakova</surname> <given-names>L.</given-names></name> <name><surname>Rumpel</surname> <given-names>S.</given-names></name></person-group> (<year>2012</year>). <article-title>Discrete neocortical dynamics predict behavioral categorization of sounds</article-title>. <source>Neuron</source> <volume>76</volume>, <fpage>435</fpage>&#x02013;<lpage>449</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2012.07.008</pub-id><pub-id pub-id-type="pmid">23083744</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Belin</surname> <given-names>P.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name> <name><surname>Lafaille</surname> <given-names>P.</given-names></name> <name><surname>Ahad</surname> <given-names>P.</given-names></name> <name><surname>Pike</surname> <given-names>B.</given-names></name></person-group> (<year>2000</year>). <article-title>Voice-selective areas in human auditory cortex</article-title>. <source>Nature</source> <volume>403</volume>, <fpage>309</fpage>&#x02013;<lpage>312</lpage>. <pub-id pub-id-type="doi">10.1038/35002078</pub-id><pub-id pub-id-type="pmid">10659849</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Benders</surname> <given-names>T.</given-names></name> <name><surname>Escudero</surname> <given-names>P.</given-names></name> <name><surname>Sjerps</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>The interrelation between acoustic context effects and available response categories in speech sound categorization</article-title>. <source>J. Acoust. Soc. Am</source>. <volume>131</volume>, <fpage>3079</fpage>&#x02013;<lpage>3087</lpage>. <pub-id pub-id-type="doi">10.1121/1.3688512</pub-id><pub-id pub-id-type="pmid">22501081</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Benzeguiba</surname> <given-names>M.</given-names></name> <name><surname>De Mori</surname> <given-names>R.</given-names></name> <name><surname>Deroo</surname> <given-names>O.</given-names></name> <name><surname>Dupont</surname> <given-names>S.</given-names></name> <name><surname>Erbes</surname> <given-names>T.</given-names></name> <name><surname>Jouvet</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2007</year>). <article-title>Automatic speech recognition and speech variability: a review</article-title>. <source>Speech Commun</source>. <volume>49</volume>, <fpage>10</fpage>&#x02013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1016/j.specom.2007.02.006</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bidelman</surname> <given-names>G. M.</given-names></name> <name><surname>Moreno</surname> <given-names>S.</given-names></name> <name><surname>Alain</surname> <given-names>C.</given-names></name></person-group> (<year>2013</year>). <article-title>Tracing the emergence of categorical speech perception in the human auditory system</article-title>. <source>Neuroimage</source> <volume>79</volume>, <fpage>201</fpage>&#x02013;<lpage>212</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2013.04.093</pub-id><pub-id pub-id-type="pmid">23648960</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boettiger</surname> <given-names>C. A.</given-names></name> <name><surname>D&#x00027;Esposito</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <article-title>Frontal networks for learning and executing arbitrary stimulus-response associations</article-title>. <source>J. Neurosci</source>. <volume>25</volume>, <fpage>2723</fpage>&#x02013;<lpage>2732</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3697-04.2005</pub-id><pub-id pub-id-type="pmid">15758182</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bonte</surname> <given-names>M.</given-names></name> <name><surname>Hausfeld</surname> <given-names>L.</given-names></name> <name><surname>Scharke</surname> <given-names>W.</given-names></name> <name><surname>Valente</surname> <given-names>G.</given-names></name> <name><surname>Formisano</surname> <given-names>E.</given-names></name></person-group> (<year>2014</year>). <article-title>Task-dependent decoding of speaker and vowel identity from auditory cortical response patterns</article-title>. <source>J. Neurosci</source>. <volume>34</volume>, <fpage>4548</fpage>&#x02013;<lpage>4557</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.4339-13.2014</pub-id><pub-id pub-id-type="pmid">24672000</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bonte</surname> <given-names>M.</given-names></name> <name><surname>Valente</surname> <given-names>G.</given-names></name> <name><surname>Formisano</surname> <given-names>E.</given-names></name></person-group> (<year>2009</year>). <article-title>Dynamic and task-dependent encoding of speech and voice by phase reorganization of cortical oscillations</article-title>. <source>J. Neurosci</source>. <volume>29</volume>, <fpage>1699</fpage>&#x02013;<lpage>1706</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3694-08.2009</pub-id><pub-id pub-id-type="pmid">19211877</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bornstein</surname> <given-names>M. H.</given-names></name> <name><surname>Kessen</surname> <given-names>W.</given-names></name> <name><surname>Weiskopf</surname> <given-names>S.</given-names></name></person-group> (<year>1976</year>). <article-title>Color vision and hue categorization in young human infants</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform</source>. <volume>2</volume>, <fpage>115</fpage>&#x02013;<lpage>129</lpage>. <pub-id pub-id-type="doi">10.1037/0096-1523.2.1.115</pub-id><pub-id pub-id-type="pmid">1262792</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boynton</surname> <given-names>G. M.</given-names></name> <name><surname>Finney</surname> <given-names>E. M.</given-names></name></person-group> (<year>2003</year>). <article-title>Orientation-specific adaptation in human visual cortex</article-title>. <source>J. Neurosci</source>. <volume>23</volume>, <fpage>8781</fpage>&#x02013;<lpage>8787</lpage>. <pub-id pub-id-type="pmid">14507978</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brechmann</surname> <given-names>A.</given-names></name> <name><surname>Scheich</surname> <given-names>H.</given-names></name></person-group> (<year>2005</year>). <article-title>Hemispheric shifts of sound representation in auditory cortex with conceptual listening</article-title>. <source>Cereb. Cortex</source> <volume>15</volume>, <fpage>578</fpage>&#x02013;<lpage>587</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhh159</pub-id><pub-id pub-id-type="pmid">15319313</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brosch</surname> <given-names>M.</given-names></name> <name><surname>Selezneva</surname> <given-names>E.</given-names></name> <name><surname>Scheich</surname> <given-names>H.</given-names></name></person-group> (<year>2005</year>). <article-title>Nonauditory events of a behavioral procedure activate auditory cortex of highly trained monkeys</article-title>. <source>J. Neurosci</source>. <volume>25</volume>, <fpage>6797</fpage>&#x02013;<lpage>6806</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1571-05.2005</pub-id><pub-id pub-id-type="pmid">16033889</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Burns</surname> <given-names>E. M.</given-names></name> <name><surname>Ward</surname> <given-names>W. D.</given-names></name></person-group> (<year>1978</year>). <article-title>Categorical perception-phenomenon or epiphenomenon: evidence from experiments in the perception of melodic musical intervals</article-title>. <source>J. Acoust. Soc. Am</source>. <volume>63</volume>, <fpage>456</fpage>&#x02013;<lpage>468</lpage>. <pub-id pub-id-type="doi">10.1121/1.381737</pub-id><pub-id pub-id-type="pmid">670543</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Callan</surname> <given-names>D. E.</given-names></name> <name><surname>Jones</surname> <given-names>J. A.</given-names></name> <name><surname>Callan</surname> <given-names>A. M.</given-names></name> <name><surname>Akahane-Yamada</surname> <given-names>R.</given-names></name></person-group> (<year>2004</year>). <article-title>Phonetic perceptual identification by native- and second-language speakers differentially activates brain regions involved with acoustic phonetic processing and those involved with articulatory-auditory/orosensory internal models</article-title>. <source>Neuroimage</source> <volume>22</volume>, <fpage>1182</fpage>&#x02013;<lpage>1194</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.03.006</pub-id><pub-id pub-id-type="pmid">15219590</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Calvert</surname> <given-names>G. A.</given-names></name> <name><surname>Bullmore</surname> <given-names>E. T.</given-names></name> <name><surname>Brammer</surname> <given-names>M. J.</given-names></name> <name><surname>Campbell</surname> <given-names>R.</given-names></name> <name><surname>Williams</surname> <given-names>S. C. R.</given-names></name> <name><surname>McGuire</surname> <given-names>P. K.</given-names></name> <etal/></person-group>. (<year>1997</year>). <article-title>Activation of auditory cortex during silent lipreading</article-title>. <source>Science</source> <volume>276</volume>, <fpage>593</fpage>&#x02013;<lpage>596</lpage>. <pub-id pub-id-type="doi">10.1126/science.276.5312.593</pub-id><pub-id pub-id-type="pmid">9110978</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Caporale</surname> <given-names>N.</given-names></name> <name><surname>Dan</surname> <given-names>Y.</given-names></name></person-group> (<year>2008</year>). <article-title>Spike timing-dependent plasticity: a Hebbian learning rule</article-title>. <source>Annu. Rev. Neurosci</source>. <volume>31</volume>, <fpage>25</fpage>&#x02013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.31.060407.125639</pub-id><pub-id pub-id-type="pmid">18275283</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chechik</surname> <given-names>G.</given-names></name> <name><surname>Nelken</surname> <given-names>I.</given-names></name></person-group> (<year>2012</year>). <article-title>Auditory abstraction from spectro-temporal features to coding auditory entities</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>109</volume>, <fpage>18968</fpage>&#x02013;<lpage>18973</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1111242109</pub-id><pub-id pub-id-type="pmid">23112145</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chevillet</surname> <given-names>M. A.</given-names></name> <name><surname>Jiang</surname> <given-names>X.</given-names></name> <name><surname>Rauschecker</surname> <given-names>J. P.</given-names></name> <name><surname>Riesenhuber</surname> <given-names>M.</given-names></name></person-group> (<year>2013</year>). <article-title>Automatic phoneme category selectivity in the dorsal auditory stream</article-title>. <source>J. Neurosci</source>. <volume>33</volume>, <fpage>5208</fpage>&#x02013;<lpage>5215</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1870-12.2013</pub-id><pub-id pub-id-type="pmid">23516286</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Baene</surname> <given-names>W.</given-names></name> <name><surname>Ons</surname> <given-names>B.</given-names></name> <name><surname>Wagemans</surname> <given-names>J.</given-names></name> <name><surname>Vogels</surname> <given-names>R.</given-names></name></person-group> (<year>2008</year>). <article-title>Effects of category learning on the stimulus selectivity of macaque inferior temporal neurons</article-title>. <source>Learn. Mem</source>. <volume>15</volume>, <fpage>717</fpage>&#x02013;<lpage>727</lpage>. <pub-id pub-id-type="doi">10.1101/lm.1040508</pub-id><pub-id pub-id-type="pmid">18772261</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dehaene-Lambertz</surname> <given-names>G.</given-names></name> <name><surname>Pallier</surname> <given-names>C.</given-names></name> <name><surname>Serniclaes</surname> <given-names>W.</given-names></name> <name><surname>Sprenger-Charolles</surname> <given-names>L.</given-names></name> <name><surname>Jobert</surname> <given-names>A.</given-names></name> <name><surname>Dehaene</surname> <given-names>S.</given-names></name></person-group> (<year>2005</year>). <article-title>Neural correlates of switching from auditory to speech perception</article-title>. <source>Neuroimage</source> <volume>24</volume>, <fpage>21</fpage>&#x02013;<lpage>33</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.09.039</pub-id><pub-id pub-id-type="pmid">15588593</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Desai</surname> <given-names>R.</given-names></name> <name><surname>Liebenthal</surname> <given-names>E.</given-names></name> <name><surname>Waldron</surname> <given-names>E.</given-names></name> <name><surname>Binder</surname> <given-names>J. R.</given-names></name></person-group> (<year>2008</year>). <article-title>Left posterior temporal regions are sensitive to auditory categorization</article-title>. <source>J. Cogn. Neurosci</source>. <volume>20</volume>, <fpage>1174</fpage>&#x02013;<lpage>1188</lpage>. <pub-id pub-id-type="doi">10.1162/jocn.2008.20081</pub-id><pub-id pub-id-type="pmid">18284339</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Souza</surname> <given-names>A. C. S.</given-names></name> <name><surname>Yehia</surname> <given-names>H. C.</given-names></name> <name><surname>Sato</surname> <given-names>M.</given-names></name> <name><surname>Callan</surname> <given-names>D.</given-names></name></person-group> (<year>2013</year>). <article-title>Brain activity underlying auditory perceptual learning during short period training: simultaneous fMRI and EEG recording</article-title>. <source>BMC Neurosci</source>. <volume>14</volume>:<fpage>8</fpage>. <pub-id pub-id-type="doi">10.1186/1471-2202-14-8</pub-id><pub-id pub-id-type="pmid">23316957</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Eimas</surname> <given-names>P. D.</given-names></name> <name><surname>Miller</surname> <given-names>J. L.</given-names></name> <name><surname>Jusczyk</surname> <given-names>P. W.</given-names></name></person-group> (<year>1987</year>). <article-title>On infant speech perception and the acquisition of language</article-title>, in <source>Categorical Perception: The Groundwork of Cognition</source>, ed <person-group person-group-type="editor"><name><surname>Harnad</surname> <given-names>S.</given-names></name></person-group> (<publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>), <fpage>161</fpage>&#x02013;<lpage>195</lpage>.</citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Engineer</surname> <given-names>C. T.</given-names></name> <name><surname>Perez</surname> <given-names>C. A.</given-names></name> <name><surname>Carraway</surname> <given-names>R. S.</given-names></name> <name><surname>Chang</surname> <given-names>K. Q.</given-names></name> <name><surname>Roland</surname> <given-names>J. L.</given-names></name> <name><surname>Kilgard</surname> <given-names>M. P.</given-names></name></person-group> (<year>2014</year>). <article-title>Speech training alters tone frequency tuning in rat primary auditory cortex</article-title>. <source>Behav. Brain Res</source>. <volume>258</volume>, <fpage>166</fpage>&#x02013;<lpage>178</lpage>. <pub-id pub-id-type="doi">10.1016/j.bbr.2013.10.021</pub-id><pub-id pub-id-type="pmid">24344364</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Engineer</surname> <given-names>C. T.</given-names></name> <name><surname>Perez</surname> <given-names>C. A.</given-names></name> <name><surname>Carraway</surname> <given-names>R. S.</given-names></name> <name><surname>Chang</surname> <given-names>K. Q.</given-names></name> <name><surname>Roland</surname> <given-names>J. L.</given-names></name> <name><surname>Sloan</surname> <given-names>A. M.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>Similarity of cortical activity patterns predicts generalization behavior</article-title>. <source>PLoS ONE</source> <volume>8</volume>:<fpage>e78607</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0078607</pub-id><pub-id pub-id-type="pmid">24147140</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ernst</surname> <given-names>M. O.</given-names></name></person-group> (<year>2007</year>). <article-title>Learning to integrate arbitrary signals from vision and touch</article-title>. <source>J. Vis</source>. <volume>7</volume>, <fpage>1</fpage>&#x02013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1167/7.5.7</pub-id><pub-id pub-id-type="pmid">18217847</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Etcoff</surname> <given-names>N. L.</given-names></name> <name><surname>Magee</surname> <given-names>J. J.</given-names></name></person-group> (<year>1992</year>). <article-title>Categorical perception of facial expressions</article-title>. <source>Cognition</source> <volume>44</volume>, <fpage>227</fpage>&#x02013;<lpage>240</lpage>. <pub-id pub-id-type="doi">10.1016/0010-0277(92)90002-Y</pub-id><pub-id pub-id-type="pmid">1424493</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Felleman</surname> <given-names>D. J.</given-names></name> <name><surname>Van Essen</surname> <given-names>D. C.</given-names></name></person-group> (<year>1991</year>). <article-title>Distributed hierarchical processing in the primate cerebral cortex</article-title>. <source>Cereb. Cortex</source> <volume>1</volume>, <fpage>1</fpage>&#x02013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/1.1.1</pub-id><pub-id pub-id-type="pmid">1822724</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Formisano</surname> <given-names>E.</given-names></name> <name><surname>De Martino</surname> <given-names>F.</given-names></name> <name><surname>Bonte</surname> <given-names>M.</given-names></name> <name><surname>Goebel</surname> <given-names>R.</given-names></name></person-group> (<year>2008</year>). <article-title>&#x0201C;Who&#x0201D; is saying &#x0201C;what&#x0201D;? Brain-based decoding of human voice and speech</article-title>. <source>Science</source> <volume>322</volume>, <fpage>970</fpage>&#x02013;<lpage>973</lpage>. <pub-id pub-id-type="doi">10.1126/science.1164318</pub-id><pub-id pub-id-type="pmid">18988858</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Foxe</surname> <given-names>J. J.</given-names></name> <name><surname>Wylie</surname> <given-names>G. R.</given-names></name> <name><surname>Martinez</surname> <given-names>A.</given-names></name> <name><surname>Schroeder</surname> <given-names>C. E.</given-names></name> <name><surname>Javitt</surname> <given-names>D. C.</given-names></name> <name><surname>Guilfoyle</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2002</year>). <article-title>Auditory-somatosensory multisensory processing in auditory association cortex: an fMRI study</article-title>. <source>J. Neurophysiol</source>. <volume>88</volume>, <fpage>540</fpage>&#x02013;<lpage>543</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00694.2001</pub-id><pub-id pub-id-type="pmid">12091578</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Franklin</surname> <given-names>A.</given-names></name> <name><surname>Davies</surname> <given-names>I. R. L.</given-names></name></person-group> (<year>2004</year>). <article-title>New evidence for infant colour categories</article-title>. <source>Br. J. Dev. Psychol</source>. <volume>22</volume>, <fpage>349</fpage>&#x02013;<lpage>377</lpage>. <pub-id pub-id-type="doi">10.1348/0261510041552738</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Freedman</surname> <given-names>D. J.</given-names></name> <name><surname>Riesenhuber</surname> <given-names>M.</given-names></name> <name><surname>Poggio</surname> <given-names>T.</given-names></name> <name><surname>Miller</surname> <given-names>E. K.</given-names></name></person-group> (<year>2001</year>). <article-title>Categorical representation of visual stimuli in the primate prefrontal cortex</article-title>. <source>Science</source> <volume>291</volume>, <fpage>312</fpage>&#x02013;<lpage>316</lpage>. <pub-id pub-id-type="doi">10.1126/science.291.5502.312</pub-id><pub-id pub-id-type="pmid">11209083</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Freedman</surname> <given-names>D. J.</given-names></name> <name><surname>Riesenhuber</surname> <given-names>M.</given-names></name> <name><surname>Poggio</surname> <given-names>T.</given-names></name> <name><surname>Miller</surname> <given-names>E. K.</given-names></name></person-group> (<year>2003</year>). <article-title>A comparison of primate prefrontal and inferior temporal cortices during visual categorization</article-title>. <source>J. Neurosci</source>. <volume>23</volume>, <fpage>5235</fpage>&#x02013;<lpage>5246</lpage>. <pub-id pub-id-type="pmid">12832548</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname> <given-names>K.</given-names></name></person-group> (<year>2005</year>). <article-title>A theory of cortical responses</article-title>. <source>Philos. Trans. R. Soc. Lond. B. Biol. Sci</source>. <volume>360</volume>, <fpage>815</fpage>&#x02013;<lpage>836</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.2005.1622</pub-id><pub-id pub-id-type="pmid">15937014</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fritz</surname> <given-names>J.</given-names></name> <name><surname>Shamma</surname> <given-names>S.</given-names></name> <name><surname>Elhilali</surname> <given-names>M.</given-names></name> <name><surname>Klein</surname> <given-names>D.</given-names></name></person-group> (<year>2003</year>). <article-title>Rapid task-related plasticity of spectrotemporal receptive fields in primary auditory cortex</article-title>. <source>Nat. Neurosci</source>. <volume>6</volume>, <fpage>1216</fpage>&#x02013;<lpage>1223</lpage>. <pub-id pub-id-type="doi">10.1038/nn1141</pub-id><pub-id pub-id-type="pmid">14583754</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fu</surname> <given-names>K.-M. G.</given-names></name> <name><surname>Johnston</surname> <given-names>T. A.</given-names></name> <name><surname>Shah</surname> <given-names>A. S.</given-names></name> <name><surname>Arnold</surname> <given-names>L.</given-names></name> <name><surname>Smiley</surname> <given-names>J.</given-names></name> <name><surname>Hackett</surname> <given-names>T. A.</given-names></name> <etal/></person-group>. (<year>2003</year>). <article-title>Auditory cortical neurons respond to somatosensory stimulation</article-title>. <source>J. Neurosci</source>. <volume>23</volume>, <fpage>7510</fpage>&#x02013;<lpage>7515</lpage>. <pub-id pub-id-type="pmid">12930789</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Galantucci</surname> <given-names>B.</given-names></name> <name><surname>Fowler</surname> <given-names>C. A.</given-names></name> <name><surname>Turvey</surname> <given-names>M. T.</given-names></name></person-group> (<year>2006</year>). <article-title>The motor theory of speech perception reviewed</article-title>. <source>Psychon. Bull. Rev</source>. <volume>13</volume>, <fpage>361</fpage>&#x02013;<lpage>377</lpage>. <pub-id pub-id-type="doi">10.3758/BF03193857</pub-id><pub-id pub-id-type="pmid">17048719</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gick</surname> <given-names>B.</given-names></name> <name><surname>Derrick</surname> <given-names>D.</given-names></name></person-group> (<year>2009</year>). <article-title>Aero-tactile integration in speech perception</article-title>. <source>Nature</source> <volume>462</volume>, <fpage>502</fpage>&#x02013;<lpage>504</lpage>. <pub-id pub-id-type="doi">10.1038/nature08572</pub-id><pub-id pub-id-type="pmid">19940925</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Golestani</surname> <given-names>N.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name></person-group> (<year>2004</year>). <article-title>Learning new sounds of speech: reallocation of neural substrates</article-title>. <source>Neuroimage</source> <volume>21</volume>, <fpage>494</fpage>&#x02013;<lpage>506</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2003.09.071</pub-id><pub-id pub-id-type="pmid">14980552</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goudbeek</surname> <given-names>M.</given-names></name> <name><surname>Swingley</surname> <given-names>D.</given-names></name> <name><surname>Smits</surname> <given-names>R.</given-names></name></person-group> (<year>2009</year>). <article-title>Supervised and unsupervised learning of multidimensional acoustic categories</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform</source>. <volume>35</volume>, <fpage>1913</fpage>&#x02013;<lpage>1933</lpage>. <pub-id pub-id-type="doi">10.1037/a0015781</pub-id><pub-id pub-id-type="pmid">19968443</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Griffiths</surname> <given-names>T. D.</given-names></name> <name><surname>Warren</surname> <given-names>J. D.</given-names></name></person-group> (<year>2004</year>). <article-title>What is an auditory object?</article-title> <source>Nat. Rev. Neurosci</source>. <volume>5</volume>, <fpage>887</fpage>&#x02013;<lpage>892</lpage>. <pub-id pub-id-type="doi">10.1038/nrn1538</pub-id><pub-id pub-id-type="pmid">15496866</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guenther</surname> <given-names>F. H.</given-names></name> <name><surname>Husain</surname> <given-names>F. T.</given-names></name> <name><surname>Cohen</surname> <given-names>M. A.</given-names></name> <name><surname>Shinn-Cunningham</surname> <given-names>B. G.</given-names></name></person-group> (<year>1999</year>). <article-title>Effects of categorization and discrimination training on auditory perceptual space</article-title>. <source>J. Acoust. Soc. Am</source>. <volume>106</volume>, <fpage>2900</fpage>&#x02013;<lpage>2912</lpage>. <pub-id pub-id-type="doi">10.1121/1.428112</pub-id><pub-id pub-id-type="pmid">10573904</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guenther</surname> <given-names>F. H.</given-names></name> <name><surname>Nieto-Castanon</surname> <given-names>A.</given-names></name> <name><surname>Ghosh</surname> <given-names>S. S.</given-names></name> <name><surname>Tourville</surname> <given-names>J. A.</given-names></name></person-group> (<year>2004</year>). <article-title>Representation of sound categories in auditory cortical maps</article-title>. <source>J. Speech Lang. Hear. Res</source>. <volume>47</volume>, <fpage>46</fpage>&#x02013;<lpage>57</lpage>. <pub-id pub-id-type="doi">10.1044/1092-4388(2004/005)</pub-id><pub-id pub-id-type="pmid">15072527</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hackett</surname> <given-names>T. A.</given-names></name></person-group> (<year>2011</year>). <article-title>Information flow in the auditory cortical network</article-title>. <source>Hear. Res</source>. <volume>271</volume>, <fpage>133</fpage>&#x02013;<lpage>146</lpage>. <pub-id pub-id-type="doi">10.1016/j.heares.2010.01.011</pub-id><pub-id pub-id-type="pmid">20116421</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="book"><person-group person-group-type="editor"><name><surname>Harnad</surname> <given-names>S.</given-names></name></person-group> (ed.). (<year>1987</year>). <source>Categorical Perception: The Groundwork of Cognition</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation>
</ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haxby</surname> <given-names>J. V.</given-names></name></person-group> (<year>2012</year>). <article-title>Multivariate pattern analysis of fMRI: the early beginnings</article-title>. <source>Neuroimage</source> <volume>62</volume>, <fpage>852</fpage>&#x02013;<lpage>855</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2012.03.016</pub-id><pub-id pub-id-type="pmid">22425670</pub-id></citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haynes</surname> <given-names>J.-D.</given-names></name> <name><surname>Rees</surname> <given-names>G.</given-names></name></person-group> (<year>2006</year>). <article-title>Decoding mental states from brain activity in humans</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>7</volume>, <fpage>523</fpage>&#x02013;<lpage>534</lpage>. <pub-id pub-id-type="doi">10.1038/nrn1931</pub-id><pub-id pub-id-type="pmid">16791142</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hebb</surname> <given-names>D. O.</given-names></name></person-group> (<year>1949</year>). <source>The Organization of Behavior: A Neuropsychological Theory</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Wiley</publisher-name>.</citation>
</ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Husain</surname> <given-names>F. T.</given-names></name> <name><surname>Fromm</surname> <given-names>S. J.</given-names></name> <name><surname>Pursley</surname> <given-names>R. H.</given-names></name> <name><surname>Hosey</surname> <given-names>L.</given-names></name> <name><surname>Braun</surname> <given-names>A.</given-names></name> <name><surname>Horwitz</surname> <given-names>B.</given-names></name></person-group> (<year>2006</year>). <article-title>Neural bases of categorization of simple speech and nonspeech sounds</article-title>. <source>Hum. Brain Mapp</source>. <volume>27</volume>, <fpage>636</fpage>&#x02013;<lpage>651</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.20207</pub-id><pub-id pub-id-type="pmid">16281285</pub-id></citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jiang</surname> <given-names>X.</given-names></name> <name><surname>Bradley</surname> <given-names>E.</given-names></name> <name><surname>Rini</surname> <given-names>R. A.</given-names></name> <name><surname>Zeffiro</surname> <given-names>T.</given-names></name> <name><surname>Vanmeter</surname> <given-names>J.</given-names></name> <name><surname>Riesenhuber</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Categorization training results in shape- and category-selective human neural plasticity</article-title>. <source>Neuron</source> <volume>53</volume>, <fpage>891</fpage>&#x02013;<lpage>903</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2007.02.015</pub-id><pub-id pub-id-type="pmid">17359923</pub-id></citation>
</ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kayser</surname> <given-names>C.</given-names></name> <name><surname>Petkov</surname> <given-names>C. I.</given-names></name> <name><surname>Augath</surname> <given-names>M.</given-names></name> <name><surname>Logothetis</surname> <given-names>N. K.</given-names></name></person-group> (<year>2005</year>). <article-title>Integration of touch and sound in auditory cortex</article-title>. <source>Neuron</source> <volume>48</volume>, <fpage>373</fpage>&#x02013;<lpage>384</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2005.09.018</pub-id><pub-id pub-id-type="pmid">16242415</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kilian-H&#x000FC;tten</surname> <given-names>N.</given-names></name> <name><surname>Valente</surname> <given-names>G.</given-names></name> <name><surname>Vroomen</surname> <given-names>J.</given-names></name> <name><surname>Formisano</surname> <given-names>E.</given-names></name></person-group> (<year>2011a</year>). <article-title>Auditory cortex encodes the perceptual interpretation of ambiguous sound</article-title>. <source>J. Neurosci</source>. <volume>31</volume>, <fpage>1715</fpage>&#x02013;<lpage>1720</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.4572-10.2011</pub-id><pub-id pub-id-type="pmid">21289180</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kilian-H&#x000FC;tten</surname> <given-names>N.</given-names></name> <name><surname>Vroomen</surname> <given-names>J.</given-names></name> <name><surname>Formisano</surname> <given-names>E.</given-names></name></person-group> (<year>2011b</year>). <article-title>Brain activation during audiovisual exposure anticipates future perception of ambiguous speech</article-title>. <source>Neuroimage</source> <volume>57</volume>, <fpage>1601</fpage>&#x02013;<lpage>1607</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2011.05.043</pub-id><pub-id pub-id-type="pmid">21664279</pub-id></citation>
</ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klein</surname> <given-names>M. E.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name></person-group> (<year>2011</year>). <article-title>A role for the right superior temporal sulcus in categorical perception of musical chords</article-title>. <source>Neuropsychologia</source> <volume>49</volume>, <fpage>878</fpage>&#x02013;<lpage>887</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2011.01.008</pub-id><pub-id pub-id-type="pmid">21236276</pub-id></citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kohler</surname> <given-names>E.</given-names></name> <name><surname>Keysers</surname> <given-names>C.</given-names></name> <name><surname>Umilt&#x000E0;</surname> <given-names>M. A.</given-names></name> <name><surname>Fogassi</surname> <given-names>L.</given-names></name> <name><surname>Gallese</surname> <given-names>V.</given-names></name> <name><surname>Rizzolatti</surname> <given-names>G.</given-names></name></person-group> (<year>2002</year>). <article-title>Hearing sounds, understanding actions: action representation in mirror neurons</article-title>. <source>Science</source> <volume>297</volume>, <fpage>846</fpage>&#x02013;<lpage>848</lpage>. <pub-id pub-id-type="doi">10.1126/science.1070311</pub-id><pub-id pub-id-type="pmid">12161656</pub-id></citation>
</ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>K&#x000F6;ver</surname> <given-names>H.</given-names></name> <name><surname>Gill</surname> <given-names>K.</given-names></name> <name><surname>Tseng</surname> <given-names>Y.-T. L.</given-names></name> <name><surname>Bao</surname> <given-names>S.</given-names></name></person-group> (<year>2013</year>). <article-title>Perceptual and neuronal boundary learned from higher-order stimulus probabilities</article-title>. <source>J. Neurosci</source>. <volume>33</volume>, <fpage>3699</fpage>&#x02013;<lpage>3705</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3166-12.2013</pub-id><pub-id pub-id-type="pmid">23426696</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kral</surname> <given-names>A.</given-names></name></person-group> (<year>2013</year>). <article-title>Auditory critical periods: a review from system&#x00027;s perspective</article-title>. <source>Neuroscience</source> <volume>247</volume>, <fpage>117</fpage>&#x02013;<lpage>133</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroscience.2013.05.021</pub-id><pub-id pub-id-type="pmid">23707979</pub-id></citation>
</ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kraus</surname> <given-names>N.</given-names></name> <name><surname>Skoe</surname> <given-names>E.</given-names></name> <name><surname>Parbery-Clark</surname> <given-names>A.</given-names></name> <name><surname>Ashley</surname> <given-names>R.</given-names></name></person-group> (<year>2009</year>). <article-title>Experience-induced malleability in neural encoding of pitch, timbre, and timing</article-title>. <source>Ann. N.Y. Acad. Sci</source>. <volume>1169</volume>, <fpage>543</fpage>&#x02013;<lpage>557</lpage>. <pub-id pub-id-type="doi">10.1111/j.1749-6632.2009.04549.x</pub-id><pub-id pub-id-type="pmid">19673837</pub-id></citation>
</ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuhl</surname> <given-names>P. K.</given-names></name></person-group> (<year>2000</year>). <article-title>A new view of language acquisition</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>97</volume>, <fpage>11850</fpage>&#x02013;<lpage>11857</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.97.22.11850</pub-id><pub-id pub-id-type="pmid">11050219</pub-id></citation>
</ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuhl</surname> <given-names>P. K.</given-names></name> <name><surname>Miller</surname> <given-names>J. D.</given-names></name></person-group> (<year>1975</year>). <article-title>Speech perception by the chinchilla: voiced-voiceless distinction in alveolar plosive consonants</article-title>. <source>Science</source> <volume>190</volume>, <fpage>69</fpage>&#x02013;<lpage>72</lpage>. <pub-id pub-id-type="doi">10.1126/science.1166301</pub-id><pub-id pub-id-type="pmid">1166301</pub-id></citation>
</ref>
<ref id="B64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuhl</surname> <given-names>P. K.</given-names></name> <name><surname>Williams</surname> <given-names>K. A.</given-names></name> <name><surname>Lacerda</surname> <given-names>F.</given-names></name> <name><surname>Stevens</surname> <given-names>K. N.</given-names></name> <name><surname>Lindblom</surname> <given-names>B.</given-names></name></person-group> (<year>1992</year>). <article-title>Linguistic experience alters phonetic perception in infants by 6 months of age</article-title>. <source>Science</source> <volume>255</volume>, <fpage>606</fpage>&#x02013;<lpage>608</lpage>. <pub-id pub-id-type="doi">10.1126/science.1736364</pub-id><pub-id pub-id-type="pmid">1736364</pub-id></citation>
</ref>
<ref id="B65">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lakatos</surname> <given-names>P.</given-names></name> <name><surname>Musacchia</surname> <given-names>G.</given-names></name> <name><surname>O&#x00027;Connel</surname> <given-names>M. N.</given-names></name> <name><surname>Falchier</surname> <given-names>A. Y.</given-names></name> <name><surname>Javitt</surname> <given-names>D. C.</given-names></name> <name><surname>Schroeder</surname> <given-names>C. E.</given-names></name></person-group> (<year>2013</year>). <article-title>The spectrotemporal filter mechanism of auditory selective attention</article-title>. <source>Neuron</source> <volume>77</volume>, <fpage>750</fpage>&#x02013;<lpage>761</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2012.11.034</pub-id><pub-id pub-id-type="pmid">23439126</pub-id></citation>
</ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Leaver</surname> <given-names>A. M.</given-names></name> <name><surname>Rauschecker</surname> <given-names>J. P.</given-names></name></person-group> (<year>2010</year>). <article-title>Cortical representation of natural complex sounds: effects of acoustic features and auditory object category</article-title>. <source>J. Neurosci</source>. <volume>30</volume>, <fpage>7604</fpage>&#x02013;<lpage>7612</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.0296-10.2010</pub-id><pub-id pub-id-type="pmid">20519535</pub-id></citation>
</ref>
<ref id="B67">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Leech</surname> <given-names>R.</given-names></name> <name><surname>Holt</surname> <given-names>L. L.</given-names></name> <name><surname>Devlin</surname> <given-names>J. T.</given-names></name> <name><surname>Dick</surname> <given-names>F.</given-names></name></person-group> (<year>2009</year>). <article-title>Expertise with artificial nonspeech sounds recruits speech-sensitive cortical regions</article-title>. <source>J. Neurosci</source>. <volume>29</volume>, <fpage>5234</fpage>&#x02013;<lpage>5239</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.5758-08.2009</pub-id><pub-id pub-id-type="pmid">19386919</pub-id></citation>
</ref>
<ref id="B68">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ley</surname> <given-names>A.</given-names></name> <name><surname>Vroomen</surname> <given-names>J.</given-names></name> <name><surname>Hausfeld</surname> <given-names>L.</given-names></name> <name><surname>Valente</surname> <given-names>G.</given-names></name> <name><surname>De Weerd</surname> <given-names>P.</given-names></name> <name><surname>Formisano</surname> <given-names>E.</given-names></name></person-group> (<year>2012</year>). <article-title>Learning of new sound categories shapes neural response patterns in human auditory cortex</article-title>. <source>J. Neurosci</source>. <volume>32</volume>, <fpage>13273</fpage>&#x02013;<lpage>13280</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.0584-12.2012</pub-id><pub-id pub-id-type="pmid">22993443</pub-id></citation>
</ref>
<ref id="B69">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>S.</given-names></name> <name><surname>Mayhew</surname> <given-names>S. D.</given-names></name> <name><surname>Kourtzi</surname> <given-names>Z.</given-names></name></person-group> (<year>2009</year>). <article-title>Learning shapes the representation of behavioral choice in the human brain</article-title>. <source>Neuron</source> <volume>62</volume>, <fpage>441</fpage>&#x02013;<lpage>452</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2009.03.016</pub-id><pub-id pub-id-type="pmid">19447098</pub-id></citation>
</ref>
<ref id="B70">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liberman</surname> <given-names>A. M.</given-names></name> <name><surname>Harris</surname> <given-names>K. S.</given-names></name> <name><surname>Hoffman</surname> <given-names>H. S.</given-names></name> <name><surname>Griffith</surname> <given-names>B. C.</given-names></name></person-group> (<year>1957</year>). <article-title>The discrimination of speech sounds within and across phoneme boundaries</article-title>. <source>J. Exp. Psychol</source>. <volume>54</volume>, <fpage>358</fpage>&#x02013;<lpage>368</lpage>. <pub-id pub-id-type="doi">10.1037/h0044417</pub-id><pub-id pub-id-type="pmid">13481283</pub-id></citation>
</ref>
<ref id="B71">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liberman</surname> <given-names>A. M.</given-names></name> <name><surname>Mattingly</surname> <given-names>I. G.</given-names></name></person-group> (<year>1985</year>). <article-title>The motor theory of speech perception revised</article-title>. <source>Cognition</source> <volume>21</volume>, <fpage>1</fpage>&#x02013;<lpage>36</lpage>. <pub-id pub-id-type="doi">10.1016/0010-0277(85)90021-6</pub-id><pub-id pub-id-type="pmid">4075760</pub-id></citation>
</ref>
<ref id="B72">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liebenthal</surname> <given-names>E.</given-names></name> <name><surname>Binder</surname> <given-names>J. R.</given-names></name> <name><surname>Spitzer</surname> <given-names>S. M.</given-names></name> <name><surname>Possing</surname> <given-names>E. T.</given-names></name> <name><surname>Medler</surname> <given-names>D. A.</given-names></name></person-group> (<year>2005</year>). <article-title>Neural substrates of phonemic perception</article-title>. <source>Cereb. Cortex</source> <volume>15</volume>, <fpage>1621</fpage>&#x02013;<lpage>1631</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhi040</pub-id><pub-id pub-id-type="pmid">15703256</pub-id></citation>
</ref>
<ref id="B73">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liebenthal</surname> <given-names>E.</given-names></name> <name><surname>Desai</surname> <given-names>R.</given-names></name> <name><surname>Ellingson</surname> <given-names>M. M.</given-names></name> <name><surname>Ramachandran</surname> <given-names>B.</given-names></name> <name><surname>Desai</surname> <given-names>A.</given-names></name> <name><surname>Binder</surname> <given-names>J. R.</given-names></name></person-group> (<year>2010</year>). <article-title>Specialization along the left superior temporal sulcus for auditory categorization</article-title>. <source>Cereb. Cortex</source> <volume>20</volume>, <fpage>2958</fpage>&#x02013;<lpage>2970</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhq045</pub-id><pub-id pub-id-type="pmid">20382643</pub-id></citation>
</ref>
<ref id="B74">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liebenthal</surname> <given-names>E.</given-names></name> <name><surname>Sabri</surname> <given-names>M.</given-names></name> <name><surname>Beardsley</surname> <given-names>S. A.</given-names></name> <name><surname>Mangalathu-Arumana</surname> <given-names>J.</given-names></name> <name><surname>Desai</surname> <given-names>A.</given-names></name></person-group> (<year>2013</year>). <article-title>Neural dynamics of phonological processing in the dorsal auditory stream</article-title>. <source>J. Neurosci</source>. <volume>33</volume>, <fpage>15414</fpage>&#x02013;<lpage>15424</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1511-13.2013</pub-id><pub-id pub-id-type="pmid">24068810</pub-id></citation>
</ref>
<ref id="B75">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Little</surname> <given-names>D. M.</given-names></name> <name><surname>Thulborn</surname> <given-names>K. R.</given-names></name></person-group> (<year>2005</year>). <article-title>Correlations of cortical activation and behavior during the application of newly learned categories</article-title>. <source>Brain Res. Cogn. Brain Res</source>. <volume>25</volume>, <fpage>33</fpage>&#x02013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1016/j.cogbrainres.2005.04.015</pub-id><pub-id pub-id-type="pmid">15936179</pub-id></citation>
</ref>
<ref id="B76">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Logan</surname> <given-names>J. S.</given-names></name> <name><surname>Lively</surname> <given-names>S. E.</given-names></name> <name><surname>Pisoni</surname> <given-names>D. B.</given-names></name></person-group> (<year>1991</year>). <article-title>Training Japanese listeners to identify English /r/ and /l/: a first report</article-title>. <source>J. Acoust. Soc. Am</source>. <volume>89</volume>, <fpage>874</fpage>&#x02013;<lpage>886</lpage>. <pub-id pub-id-type="doi">10.1121/1.1894649</pub-id><pub-id pub-id-type="pmid">2016438</pub-id></citation>
</ref>
<ref id="B77">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McGurk</surname> <given-names>H.</given-names></name> <name><surname>MacDonald</surname> <given-names>J.</given-names></name></person-group> (<year>1976</year>). <article-title>Hearing lips and seeing voices</article-title>. <source>Nature</source> <volume>264</volume>, <fpage>746</fpage>&#x02013;<lpage>748</lpage>. <pub-id pub-id-type="doi">10.1038/264746a0</pub-id><pub-id pub-id-type="pmid">1012311</pub-id></citation>
</ref>
<ref id="B78">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Minshew</surname> <given-names>N. J.</given-names></name> <name><surname>Meyer</surname> <given-names>J.</given-names></name> <name><surname>Goldstein</surname> <given-names>G.</given-names></name></person-group> (<year>2002</year>). <article-title>Abstract reasoning in autism: a dissociation between concept formation and concept identification</article-title>. <source>Neuropsychology</source> <volume>16</volume>, <fpage>327</fpage>&#x02013;<lpage>334</lpage>. <pub-id pub-id-type="doi">10.1037/0894-4105.16.3.327</pub-id><pub-id pub-id-type="pmid">12146680</pub-id></citation>
</ref>
<ref id="B79">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mizrahi</surname> <given-names>A.</given-names></name> <name><surname>Shalev</surname> <given-names>A.</given-names></name> <name><surname>Nelken</surname> <given-names>I.</given-names></name></person-group> (<year>2014</year>). <article-title>Single neuron and population coding of natural sounds in auditory cortex</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>24</volume>, <fpage>103</fpage>&#x02013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2013.09.007</pub-id><pub-id pub-id-type="pmid">24492086</pub-id></citation>
</ref>
<ref id="B80">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Murray</surname> <given-names>M. M.</given-names></name> <name><surname>Camen</surname> <given-names>C.</given-names></name> <name><surname>Gonzalez Andino</surname> <given-names>S. L.</given-names></name> <name><surname>Bovet</surname> <given-names>P.</given-names></name> <name><surname>Clarke</surname> <given-names>S.</given-names></name></person-group> (<year>2006</year>). <article-title>Rapid brain discrimination of sounds of objects</article-title>. <source>J. Neurosci</source>. <volume>26</volume>, <fpage>1293</fpage>&#x02013;<lpage>1302</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.4511-05.2006</pub-id><pub-id pub-id-type="pmid">16436617</pub-id></citation>
</ref>
<ref id="B81">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Myers</surname> <given-names>E. B.</given-names></name> <name><surname>Blumstein</surname> <given-names>S. E.</given-names></name> <name><surname>Walsh</surname> <given-names>E.</given-names></name> <name><surname>Eliassen</surname> <given-names>J.</given-names></name></person-group> (<year>2009</year>). <article-title>Inferior frontal regions underlie the perception of phonetic category invariance</article-title>. <source>Psychol. Sci</source>. <volume>20</volume>, <fpage>895</fpage>&#x02013;<lpage>903</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-9280.2009.02380.x</pub-id><pub-id pub-id-type="pmid">19515116</pub-id></citation>
</ref>
<ref id="B82">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Myers</surname> <given-names>E. B.</given-names></name> <name><surname>Swan</surname> <given-names>K.</given-names></name></person-group> (<year>2012</year>). <article-title>Effects of category learning on neural sensitivity to non-native phonetic categories</article-title>. <source>J. Cogn. Neurosci</source>. <volume>24</volume>, <fpage>1695</fpage>&#x02013;<lpage>1708</lpage>. <pub-id pub-id-type="doi">10.1162/jocn_a_00243</pub-id><pub-id pub-id-type="pmid">22621261</pub-id></citation>
</ref>
<ref id="B83">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nelken</surname> <given-names>I.</given-names></name></person-group> (<year>2004</year>). <article-title>Processing of complex stimuli and natural scenes in the auditory cortex</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>14</volume>, <fpage>474</fpage>&#x02013;<lpage>480</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2004.06.005</pub-id><pub-id pub-id-type="pmid">15321068</pub-id></citation>
</ref>
<ref id="B84">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Niziolek</surname> <given-names>C. A.</given-names></name> <name><surname>Guenther</surname> <given-names>F. H.</given-names></name></person-group> (<year>2013</year>). <article-title>Vowel category boundaries enhance cortical and behavioral responses to speech feedback alterations</article-title>. <source>J. Neurosci</source>. <volume>33</volume>, <fpage>12090</fpage>&#x02013;<lpage>12098</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1008-13.2013</pub-id><pub-id pub-id-type="pmid">23864694</pub-id></citation>
</ref>
<ref id="B85">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nordmark</surname> <given-names>P. F.</given-names></name> <name><surname>Pruszynski</surname> <given-names>J. A.</given-names></name> <name><surname>Johansson</surname> <given-names>R. S.</given-names></name></person-group> (<year>2012</year>). <article-title>BOLD responses to tactile stimuli in visual and auditory cortex depend on the frequency content of stimulation</article-title>. <source>J. Cogn. Neurosci</source>. <volume>24</volume>, <fpage>2120</fpage>&#x02013;<lpage>2134</lpage>. <pub-id pub-id-type="doi">10.1162/jocn_a_00261</pub-id><pub-id pub-id-type="pmid">22721377</pub-id></citation>
</ref>
<ref id="B86">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Norman</surname> <given-names>K. A.</given-names></name> <name><surname>Polyn</surname> <given-names>S. M.</given-names></name> <name><surname>Detre</surname> <given-names>G. J.</given-names></name> <name><surname>Haxby</surname> <given-names>J. V.</given-names></name></person-group> (<year>2006</year>). <article-title>Beyond mind-reading: multi-voxel pattern analysis of fMRI data</article-title>. <source>Trends Cogn. Sci</source>. <volume>10</volume>, <fpage>424</fpage>&#x02013;<lpage>430</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2006.07.005</pub-id><pub-id pub-id-type="pmid">16899397</pub-id></citation>
</ref>
<ref id="B87">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nourski</surname> <given-names>K. V.</given-names></name> <name><surname>Brugge</surname> <given-names>J. F.</given-names></name></person-group> (<year>2011</year>). <article-title>Representation of temporal sound features in the human auditory cortex</article-title>. <source>Rev. Neurosci</source>. <volume>22</volume>, <fpage>187</fpage>&#x02013;<lpage>203</lpage>. <pub-id pub-id-type="doi">10.1515/rns.2011.016</pub-id><pub-id pub-id-type="pmid">21476940</pub-id></citation>
</ref>
<ref id="B88">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ohl</surname> <given-names>F. W.</given-names></name> <name><surname>Scheich</surname> <given-names>H.</given-names></name> <name><surname>Freeman</surname> <given-names>W. J.</given-names></name></person-group> (<year>2001</year>). <article-title>Change in pattern of ongoing cortical activity with auditory category learning</article-title>. <source>Nature</source> <volume>412</volume>, <fpage>733</fpage>&#x02013;<lpage>736</lpage>. <pub-id pub-id-type="doi">10.1038/35089076</pub-id><pub-id pub-id-type="pmid">11507640</pub-id></citation>
</ref>
<ref id="B89">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Olshausen</surname> <given-names>B. A.</given-names></name> <name><surname>Field</surname> <given-names>D. J.</given-names></name></person-group> (<year>2004</year>). <article-title>Sparse coding of sensory inputs</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>14</volume>, <fpage>481</fpage>&#x02013;<lpage>487</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2004.07.007</pub-id><pub-id pub-id-type="pmid">15321069</pub-id></citation>
</ref>
<ref id="B90">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ouimet</surname> <given-names>T.</given-names></name> <name><surname>Foster</surname> <given-names>N. E. V.</given-names></name> <name><surname>Tryfon</surname> <given-names>A.</given-names></name> <name><surname>Hyde</surname> <given-names>K. L.</given-names></name></person-group> (<year>2012</year>). <article-title>Auditory-musical processing in autism spectrum disorders: a review of behavioral and brain imaging studies</article-title>. <source>Ann. N.Y. Acad. Sci</source>. <volume>1252</volume>, <fpage>325</fpage>&#x02013;<lpage>331</lpage>. <pub-id pub-id-type="doi">10.1111/j.1749-6632.2012.06453.x</pub-id><pub-id pub-id-type="pmid">22524375</pub-id></citation>
</ref>
<ref id="B91">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pasley</surname> <given-names>B. N.</given-names></name> <name><surname>David</surname> <given-names>S. V.</given-names></name> <name><surname>Mesgarani</surname> <given-names>N.</given-names></name> <name><surname>Flinker</surname> <given-names>A.</given-names></name> <name><surname>Shamma</surname> <given-names>S. A.</given-names></name> <name><surname>Crone</surname> <given-names>N. E.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>Reconstructing speech from human auditory cortex</article-title>. <source>PLoS Biol</source>. <volume>10</volume>:<fpage>e1001251</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pbio.1001251</pub-id><pub-id pub-id-type="pmid">22303281</pub-id></citation>
</ref>
<ref id="B92">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pekkola</surname> <given-names>J.</given-names></name> <name><surname>Ojanen</surname> <given-names>V.</given-names></name> <name><surname>Autti</surname> <given-names>T.</given-names></name> <name><surname>J&#x000E4;&#x000E4;skel&#x000E4;inen</surname> <given-names>I. P.</given-names></name> <name><surname>M&#x000F6;tt&#x000F6;nen</surname> <given-names>R.</given-names></name> <name><surname>Tarkiainen</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2005</year>). <article-title>Primary auditory cortex activation by visual speech: an fMRI study at 3T</article-title>. <source>Neuroreport</source> <volume>16</volume>, <fpage>125</fpage>&#x02013;<lpage>128</lpage>. <pub-id pub-id-type="doi">10.1097/00001756-200502080-00010</pub-id><pub-id pub-id-type="pmid">15671860</pub-id></citation>
</ref>
<ref id="B93">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Polimeni</surname> <given-names>J. R.</given-names></name> <name><surname>Fischl</surname> <given-names>B.</given-names></name> <name><surname>Greve</surname> <given-names>D. N.</given-names></name> <name><surname>Wald</surname> <given-names>L. L.</given-names></name></person-group> (<year>2010</year>). <article-title>Laminar analysis of 7T BOLD using an imposed spatial activation pattern in human V1</article-title>. <source>Neuroimage</source> <volume>52</volume>, <fpage>1334</fpage>&#x02013;<lpage>1346</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2010.05.005</pub-id><pub-id pub-id-type="pmid">20460157</pub-id></citation>
</ref>
<ref id="B94">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Polley</surname> <given-names>D. B.</given-names></name> <name><surname>Steinberg</surname> <given-names>E. E.</given-names></name> <name><surname>Merzenich</surname> <given-names>M. M.</given-names></name></person-group> (<year>2006</year>). <article-title>Perceptual learning directs auditory cortical map reorganization through top-down influences</article-title>. <source>J. Neurosci</source>. <volume>26</volume>, <fpage>4970</fpage>&#x02013;<lpage>4982</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3771-05.2006</pub-id><pub-id pub-id-type="pmid">16672673</pub-id></citation>
</ref>
<ref id="B95">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Prather</surname> <given-names>J. F.</given-names></name> <name><surname>Nowicki</surname> <given-names>S.</given-names></name> <name><surname>Anderson</surname> <given-names>R. C.</given-names></name> <name><surname>Peters</surname> <given-names>S.</given-names></name> <name><surname>Mooney</surname> <given-names>R.</given-names></name></person-group> (<year>2009</year>). <article-title>Neural correlates of categorical perception in learned vocal communication</article-title>. <source>Nat. Neurosci</source>. <volume>12</volume>, <fpage>221</fpage>&#x02013;<lpage>228</lpage>. <pub-id pub-id-type="doi">10.1038/nn.2246</pub-id><pub-id pub-id-type="pmid">19136972</pub-id></citation>
</ref>
<ref id="B96">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Raizada</surname> <given-names>R. D.</given-names></name> <name><surname>Poldrack</surname> <given-names>R. A.</given-names></name></person-group> (<year>2007</year>). <article-title>Selective amplification of stimulus differences during categorical processing of speech</article-title>. <source>Neuron</source> <volume>56</volume>, <fpage>726</fpage>&#x02013;<lpage>740</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2007.11.001</pub-id><pub-id pub-id-type="pmid">18031688</pub-id></citation>
</ref>
<ref id="B97">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Recanzone</surname> <given-names>G. H.</given-names></name> <name><surname>Schreiner</surname> <given-names>C. E.</given-names></name> <name><surname>Merzenich</surname> <given-names>M. M.</given-names></name></person-group> (<year>1993</year>). <article-title>Plasticity in the frequency representation of primary auditory cortex following discrimination training in adult owl monkeys</article-title>. <source>J. Neurosci</source>. <volume>13</volume>, <fpage>87</fpage>&#x02013;<lpage>103</lpage>. <pub-id pub-id-type="pmid">8423485</pub-id></citation>
</ref>
<ref id="B98">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Scharinger</surname> <given-names>M.</given-names></name> <name><surname>Henry</surname> <given-names>M. J.</given-names></name> <name><surname>Erb</surname> <given-names>J.</given-names></name> <name><surname>Meyer</surname> <given-names>L.</given-names></name> <name><surname>Obleser</surname> <given-names>J.</given-names></name></person-group> (<year>2013a</year>). <article-title>Thalamic and parietal brain morphology predicts auditory category learning</article-title>. <source>Neuropsychologia</source> <volume>53C</volume>, <fpage>75</fpage>&#x02013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2013.09.012</pub-id><pub-id pub-id-type="pmid">24035788</pub-id></citation>
</ref>
<ref id="B99">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Scharinger</surname> <given-names>M.</given-names></name> <name><surname>Henry</surname> <given-names>M. J.</given-names></name> <name><surname>Obleser</surname> <given-names>J.</given-names></name></person-group> (<year>2013b</year>). <article-title>Prior experience with negative spectral correlations promotes information integration during auditory category learning</article-title>. <source>Mem. Cogn</source>. <volume>41</volume>, <fpage>752</fpage>&#x02013;<lpage>768</lpage>. <pub-id pub-id-type="doi">10.3758/s13421-013-0294-9</pub-id><pub-id pub-id-type="pmid">23354998</pub-id></citation>
</ref>
<ref id="B100">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Scheich</surname> <given-names>H.</given-names></name> <name><surname>Brechmann</surname> <given-names>A.</given-names></name> <name><surname>Brosch</surname> <given-names>M.</given-names></name> <name><surname>Budinger</surname> <given-names>E.</given-names></name> <name><surname>Ohl</surname> <given-names>F. W.</given-names></name></person-group> (<year>2007</year>). <article-title>The cognitive auditory cortex: task-specificity of stimulus representations</article-title>. <source>Hear. Res</source>. <volume>229</volume>, <fpage>213</fpage>&#x02013;<lpage>224</lpage>. <pub-id pub-id-type="doi">10.1016/j.heares.2007.01.025</pub-id><pub-id pub-id-type="pmid">17368987</pub-id></citation>
</ref>
<ref id="B101">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schreiner</surname> <given-names>C. E.</given-names></name> <name><surname>Polley</surname> <given-names>D. B.</given-names></name></person-group> (<year>2014</year>). <article-title>Auditory map plasticity: diversity in causes and consequences</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>24</volume>, <fpage>143</fpage>&#x02013;<lpage>156</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2013.11.009</pub-id><pub-id pub-id-type="pmid">24492090</pub-id></citation>
</ref>
<ref id="B102">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schroeder</surname> <given-names>C. E.</given-names></name> <name><surname>Foxe</surname> <given-names>J. J.</given-names></name></person-group> (<year>2002</year>). <article-title>The timing and laminar profile of converging inputs to multisensory areas of the macaque neocortex</article-title>. <source>Brain Res. Cogn. Brain Res</source>. <volume>14</volume>, <fpage>187</fpage>&#x02013;<lpage>198</lpage>. <pub-id pub-id-type="doi">10.1016/S0926-6410(02)00073-3</pub-id><pub-id pub-id-type="pmid">12063142</pub-id></citation>
</ref>
<ref id="B103">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sch&#x000FC;rmann</surname> <given-names>M.</given-names></name> <name><surname>Caetano</surname> <given-names>G.</given-names></name> <name><surname>Hlushchuk</surname> <given-names>Y.</given-names></name> <name><surname>Jousm&#x000E4;ki</surname> <given-names>V.</given-names></name> <name><surname>Hari</surname> <given-names>R.</given-names></name></person-group> (<year>2006</year>). <article-title>Touch activates human auditory cortex</article-title>. <source>Neuroimage</source> <volume>30</volume>, <fpage>1325</fpage>&#x02013;<lpage>1331</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2005.11.020</pub-id><pub-id pub-id-type="pmid">16488157</pub-id></citation>
</ref>
<ref id="B104">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Seitz</surname> <given-names>A. R.</given-names></name> <name><surname>Watanabe</surname> <given-names>T.</given-names></name></person-group> (<year>2003</year>). <article-title>Is subliminal learning really passive?</article-title> <source>Nature</source> <volume>422</volume>, <fpage>36</fpage>. <pub-id pub-id-type="doi">10.1038/422036a</pub-id><pub-id pub-id-type="pmid">12621425</pub-id></citation>
</ref>
<ref id="B105">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shamma</surname> <given-names>S.</given-names></name> <name><surname>Fritz</surname> <given-names>J.</given-names></name></person-group> (<year>2014</year>). <article-title>Adaptive auditory computations</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>25C</volume>, <fpage>164</fpage>&#x02013;<lpage>168</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2014.01.011</pub-id><pub-id pub-id-type="pmid">24525107</pub-id></citation>
</ref>
<ref id="B106">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shams</surname> <given-names>L.</given-names></name> <name><surname>Seitz</surname> <given-names>A. R.</given-names></name></person-group> (<year>2008</year>). <article-title>Benefits of multisensory learning</article-title>. <source>Trends Cogn. Sci</source>. <volume>12</volume>, <fpage>411</fpage>&#x02013;<lpage>417</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2008.07.006</pub-id><pub-id pub-id-type="pmid">18805039</pub-id></citation>
</ref>
<ref id="B107">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shams</surname> <given-names>L.</given-names></name> <name><surname>Wozny</surname> <given-names>D. R.</given-names></name> <name><surname>Kim</surname> <given-names>R.</given-names></name> <name><surname>Seitz</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Influences of multisensory experience on subsequent unisensory processing</article-title>. <source>Front. Psychol</source>. <volume>2</volume>:<fpage>264</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2011.00264</pub-id><pub-id pub-id-type="pmid">22028697</pub-id></citation>
</ref>
<ref id="B108">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sigala</surname> <given-names>N.</given-names></name> <name><surname>Logothetis</surname> <given-names>N. K.</given-names></name></person-group> (<year>2002</year>). <article-title>Visual categorization shapes feature selectivity in the primate temporal cortex</article-title>. <source>Nature</source> <volume>415</volume>, <fpage>318</fpage>&#x02013;<lpage>320</lpage>. <pub-id pub-id-type="doi">10.1038/415318a</pub-id><pub-id pub-id-type="pmid">11797008</pub-id></citation>
</ref>
<ref id="B109">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Spierer</surname> <given-names>L.</given-names></name> <name><surname>De Lucia</surname> <given-names>M.</given-names></name> <name><surname>Bernasconi</surname> <given-names>F.</given-names></name> <name><surname>Grivel</surname> <given-names>J.</given-names></name> <name><surname>Bourquin</surname> <given-names>N. M.</given-names></name> <name><surname>Clarke</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>Learning-induced plasticity in human audition: objects, time, and space</article-title>. <source>Hear. Res</source>. <volume>271</volume>, <fpage>88</fpage>&#x02013;<lpage>102</lpage>. <pub-id pub-id-type="doi">10.1016/j.heares.2010.03.086</pub-id><pub-id pub-id-type="pmid">20430070</pub-id></citation>
</ref>
<ref id="B110">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Staeren</surname> <given-names>N.</given-names></name> <name><surname>Renvall</surname> <given-names>H.</given-names></name> <name><surname>De Martino</surname> <given-names>F.</given-names></name> <name><surname>Goebel</surname> <given-names>R.</given-names></name> <name><surname>Formisano</surname> <given-names>E.</given-names></name></person-group> (<year>2009</year>). <article-title>Sound categories are represented as distributed patterns in the human auditory cortex</article-title>. <source>Curr. Biol</source>. <volume>19</volume>, <fpage>498</fpage>&#x02013;<lpage>502</lpage>. <pub-id pub-id-type="doi">10.1016/j.cub.2009.01.066</pub-id><pub-id pub-id-type="pmid">19268594</pub-id></citation>
</ref>
<ref id="B111">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sussman</surname> <given-names>E.</given-names></name> <name><surname>Winkler</surname> <given-names>I.</given-names></name> <name><surname>Huotilainen</surname> <given-names>M.</given-names></name> <name><surname>Ritter</surname> <given-names>W.</given-names></name> <name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name></person-group> (<year>2002</year>). <article-title>Top-down effects can modify the initially stimulus-driven auditory organization</article-title>. <source>Brain Res. Cogn. Brain Res</source>. <volume>13</volume>, <fpage>393</fpage>&#x02013;<lpage>405</lpage>. <pub-id pub-id-type="doi">10.1016/S0926-6410(01)00131-8</pub-id><pub-id pub-id-type="pmid">11919003</pub-id></citation>
</ref>
<ref id="B112">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsunada</surname> <given-names>J.</given-names></name> <name><surname>Lee</surname> <given-names>J. H.</given-names></name> <name><surname>Cohen</surname> <given-names>Y. E.</given-names></name></person-group> (<year>2011</year>). <article-title>Representation of speech categories in the primate auditory cortex</article-title>. <source>J. Neurophysiol</source>. <volume>105</volume>, <fpage>2634</fpage>&#x02013;<lpage>2646</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00037.2011</pub-id><pub-id pub-id-type="pmid">21346209</pub-id></citation>
</ref>
<ref id="B113">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsunada</surname> <given-names>J.</given-names></name> <name><surname>Lee</surname> <given-names>J. H.</given-names></name> <name><surname>Cohen</surname> <given-names>Y. E.</given-names></name></person-group> (<year>2012</year>). <article-title>Differential representation of auditory categories between cell classes in primate auditory cortex</article-title>. <source>J. Physiol</source>. <volume>590</volume>, <fpage>3129</fpage>&#x02013;<lpage>3139</lpage>. <pub-id pub-id-type="doi">10.1113/jphysiol.2012.232892</pub-id><pub-id pub-id-type="pmid">22570374</pub-id></citation>
</ref>
<ref id="B114">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tzounopoulos</surname> <given-names>T.</given-names></name> <name><surname>Kraus</surname> <given-names>N.</given-names></name></person-group> (<year>2009</year>). <article-title>Learning to encode timing: mechanisms of plasticity in the auditory brainstem</article-title>. <source>Neuron</source> <volume>62</volume>, <fpage>463</fpage>&#x02013;<lpage>469</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2009.05.002</pub-id><pub-id pub-id-type="pmid">19477149</pub-id></citation>
</ref>
<ref id="B115">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Verhoef</surname> <given-names>B. E.</given-names></name> <name><surname>Kayaert</surname> <given-names>G.</given-names></name> <name><surname>Franko</surname> <given-names>E.</given-names></name> <name><surname>Vangeneugden</surname> <given-names>J.</given-names></name> <name><surname>Vogels</surname> <given-names>R.</given-names></name></person-group> (<year>2008</year>). <article-title>Stimulus similarity-contingent neural adaptation can be time and cortical area dependent</article-title>. <source>J. Neurosci</source>. <volume>28</volume>, <fpage>10631</fpage>&#x02013;<lpage>10640</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3333-08.2008</pub-id><pub-id pub-id-type="pmid">18923039</pub-id></citation>
</ref>
<ref id="B116">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wallace</surname> <given-names>M. T.</given-names></name> <name><surname>Stein</surname> <given-names>B. E.</given-names></name></person-group> (<year>2007</year>). <article-title>Early experience determines how the senses will interact</article-title>. <source>J. Neurophysiol</source>. <volume>97</volume>, <fpage>921</fpage>&#x02013;<lpage>926</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00497.2006</pub-id><pub-id pub-id-type="pmid">16914616</pub-id></citation>
</ref>
<ref id="B117">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Watanabe</surname> <given-names>T.</given-names></name> <name><surname>N&#x000E1;&#x000F1;ez</surname> <given-names>J. E.</given-names></name> <name><surname>Sasaki</surname> <given-names>Y.</given-names></name></person-group> (<year>2001</year>). <article-title>Perceptual learning without perception</article-title>. <source>Nature</source> <volume>413</volume>, <fpage>844</fpage>&#x02013;<lpage>848</lpage>. <pub-id pub-id-type="doi">10.1038/35101601</pub-id><pub-id pub-id-type="pmid">11677607</pub-id></citation>
</ref>
<ref id="B118">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wyttenbach</surname> <given-names>R. A.</given-names></name> <name><surname>May</surname> <given-names>M. L.</given-names></name> <name><surname>Hoy</surname> <given-names>R. R.</given-names></name></person-group> (<year>1996</year>). <article-title>Categorical perception of sound frequency by crickets</article-title>. <source>Science</source> <volume>273</volume>, <fpage>1542</fpage>&#x02013;<lpage>1544</lpage>. <pub-id pub-id-type="doi">10.1126/science.273.5281.1542</pub-id><pub-id pub-id-type="pmid">8703214</pub-id></citation>
</ref>
</ref-list>
</back>
</article>
