<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Syst. Neurosci.</journal-id>
<journal-title>Frontiers in Systems Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Syst. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5137</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnsys.2014.00010</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The effect of mild-to-moderate hearing loss on auditory and emotion processing networks</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Husain</surname> <given-names>Fatima T.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Carpenter-Thompson</surname> <given-names>Jake R.</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Schmidt</surname> <given-names>Sara A.</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign</institution> <country>Champaign, IL, USA</country></aff>
<aff id="aff2"><sup>2</sup><institution>The Neuroscience Program, University of Illinois at Urbana-Champaign</institution> <country>Champaign, IL, USA</country></aff>
<aff id="aff3"><sup>3</sup><institution>Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign</institution> <country>Champaign, IL, USA</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Jonathan E. Peelle, Washington University in St. Louis, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Carolyn McGettigan, Royal Holloway University of London, UK; Conor J. Wild, Western University, Canada</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Fatima T. Husain, Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, 901 S. Sixth Street, Champaign, IL 61820, USA e-mail: <email>husainf&#x00040;illinois.edu</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to the journal Frontiers in Systems Neuroscience.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>04</day>
<month>02</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>8</volume>
<elocation-id>10</elocation-id>
<history>
<date date-type="received">
<day>15</day>
<month>08</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>15</day>
<month>01</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2014 Husain, Carpenter-Thompson and Schmidt.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>We investigated the impact of hearing loss (HL) on emotional processing using task- and rest-based functional magnetic resonance imaging. Two age-matched groups of middle-aged participants were recruited: one with bilateral high-frequency HL and a control group with normal hearing (NH). During the task-based portion of the experiment, participants were instructed to rate affective stimuli from the International Affective Digital Sounds (IADS) database as pleasant, unpleasant, or neutral. In the resting state experiment, participants were told to fixate on a &#x0201C;&#x0002B;&#x0201D; sign on a screen for 5 min. The results of both the task-based and resting state studies suggest that the NH and HL groups differ in their emotional response. Specifically, in the task-based study, we found slower responses to affective, but not neutral, sounds in the HL group compared to the NH group. This was reflected in the brain activation patterns, with the NH group employing the expected limbic and auditory regions, including the left amygdala, left parahippocampus, right middle temporal gyrus, and left superior temporal gyrus, to a greater extent than the HL group when processing affective stimuli. In the resting state study, we observed no significant differences in connectivity of the auditory network between the groups. In the dorsal attention network (DAN), the HL group exhibited decreased connectivity between seed regions and the left insula and left postcentral gyrus compared to controls. The default mode network (DMN) was also altered, showing increased connectivity between seeds and the left middle frontal gyrus in the HL group. Further targeted analysis revealed increased intrinsic connectivity between the right middle temporal gyrus and the right precentral gyrus. The results from both studies suggest neuronal reorganization as a consequence of HL, most notably in networks responding to emotional sounds.</p></abstract>
<kwd-group>
<kwd>fMRI</kwd>
<kwd>hearing loss</kwd>
<kwd>resting-state fMRI</kwd>
<kwd>functional connectivity</kwd>
<kwd>emotion</kwd>
<kwd>IADS</kwd>
</kwd-group>
<counts>
<fig-count count="4"/>
<table-count count="5"/>
<equation-count count="0"/>
<ref-count count="61"/>
<page-count count="13"/>
<word-count count="10854"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>Hearing loss (HL) remains one of the most common chronic conditions affecting older adults (Cruickshanks et al., <xref ref-type="bibr" rid="B10">1998</xref>), with the prevalence rate increasing from 25&#x02013;40% in adults above 65 years of age to greater than 80% in people older than 85 years (Yueh et al., <xref ref-type="bibr" rid="B60">2003</xref>). In general, mild-to-moderately severe sensorineural HL, which often goes untreated, affects about 23&#x02013;33% of the adult population worldwide (Stevens et al., <xref ref-type="bibr" rid="B50">2013</xref>). HL has a significant impact on the quality of life and general well-being of older adults (Mulrow et al., <xref ref-type="bibr" rid="B38">1990</xref>; Carabellese et al., <xref ref-type="bibr" rid="B9">1993</xref>) and may be associated with depression and isolation, especially in those younger than 70 years of age (Gopinath et al., <xref ref-type="bibr" rid="B20">2009</xref>). However, little is known about the consequences of mild-to-moderately severe HL on the neural architecture and functionality of the brain.</p>
<p>The majority of brain imaging studies that have explored HL have done so when HL has occurred in conjunction with tinnitus (e.g., Weisz et al., <xref ref-type="bibr" rid="B56">2004</xref>; Husain et al., <xref ref-type="bibr" rid="B26">2011b</xref>), other disorders (e.g., Yoneda et al., <xref ref-type="bibr" rid="B59">2012</xref>) or in the context of sign language studies when the impairment has been profound (e.g., Petitto et al., <xref ref-type="bibr" rid="B41">2000</xref>; Husain et al., <xref ref-type="bibr" rid="B27">2009</xref>). A few brain imaging studies have investigated the impact of HL on aging (Wong et al., <xref ref-type="bibr" rid="B58">2010</xref>; Peelle et al., <xref ref-type="bibr" rid="B40">2011</xref>); these remain the best sources for understanding the neural correlates of HL. These correlates are manifested as decreased gray matter in the frontal cortex (Wong et al., <xref ref-type="bibr" rid="B58">2010</xref>; Peelle et al., <xref ref-type="bibr" rid="B40">2011</xref>) and a decreased response in the superior temporal cortex, thalamus, and brainstem during a speech comprehension task (Peelle et al., <xref ref-type="bibr" rid="B40">2011</xref>). Our previous structural MRI study of HL in older adults (conducted as part of a larger study investigating the neural bases of tinnitus and HL) observed gray matter loss in the frontal cortices and disordered white matter tracts leading into and out of the auditory cortex (Husain et al., <xref ref-type="bibr" rid="B25">2011a</xref>). A companion functional study noted an increased response of regions in the frontal and parietal cortices, possibly related to increased attention processing (Husain et al., <xref ref-type="bibr" rid="B26">2011b</xref>). In the latter fMRI study, participants had mild-to-moderately severe HL with an average age in the mid-fifties and were asked to perform a discrimination task with simple tones and tonal sweeps. 
When compared to rest trials, task trials resulted in greater response in the temporal, frontal and parietal cortices in the HL group relative to the normal hearing (NH) controls.</p>
<p>In the present study, we investigated the neural correlates of mild-to-moderate sensorineural HL in middle-aged adults without tinnitus or any other chronic physical or mental condition and compared them to age-matched NH controls. We concentrated on extra-auditory networks, specifically the network concerned with emotional processing. The limbic system is typically considered the main network for emotion processing; it consists primarily of the amygdala, parahippocampus, ventral medial prefrontal cortex, nucleus accumbens, and insula. Recently, studies have begun to map out the regions and connectivity of the auditory emotional network in adults with NH (Blood and Zatorre, <xref ref-type="bibr" rid="B5">2001</xref>; Koelsch et al., <xref ref-type="bibr" rid="B30">2006</xref>; Kumar et al., <xref ref-type="bibr" rid="B31">2012</xref>) and in those with tinnitus (Giraud et al., <xref ref-type="bibr" rid="B18">1999</xref>; Mirz et al., <xref ref-type="bibr" rid="B37">2000</xref>; Seydell-Greenwald et al., <xref ref-type="bibr" rid="B47">2012</xref>; Golm et al., <xref ref-type="bibr" rid="B19">2013</xref>). Using dynamic causal modeling, Kumar et al. (<xref ref-type="bibr" rid="B31">2012</xref>) found that the negative valence of a sound modulates the backward connections from the amygdala to the auditory cortex, and the acoustic features of a sound modulate the forward connections from the auditory cortex to the amygdala. Such acoustic features, processing nodes, and their connectivity are likely susceptible to change following sustained loss of hearing acuity when listening to affective sounds, which may result in delayed processing of the affective sounds, their misclassification, or both. One of the goals of the present study was to investigate whether loss of hearing acuity affects the acoustic processing of affective sounds and whether this impacts their perception.</p>
<p>HL may also affect the perception of the valence of affective sounds, regardless of the processing of acoustic features. Tinnitus, in particular, has been shown to involve an altered auditory-limbic link (Jastreboff, <xref ref-type="bibr" rid="B29">1990</xref>; Rauschecker et al., <xref ref-type="bibr" rid="B44">2010</xref>); behaviorally, there is a greater prevalence of depression and anxiety in tinnitus patients compared to the general population (Bartels et al., <xref ref-type="bibr" rid="B3">2008</xref>). Not surprisingly, then, this disordered link is beginning to be studied in tinnitus. However, HL occurs in about 90% of individuals with tinnitus (Davis and Rafaie, <xref ref-type="bibr" rid="B11">2000</xref>), and the unique contribution of tinnitus to any changes in emotional processing is unknown. There are other reasons to study emotional processing in HL. As previously noted, the prevalence of HL increases with age (Yueh et al., <xref ref-type="bibr" rid="B60">2003</xref>) and may contribute to social isolation (Gopinath et al., <xref ref-type="bibr" rid="B20">2009</xref>). This in turn may impact the emotion processing limbic network, as has been shown to occur with aging and with tinnitus (Mather and Knight, <xref ref-type="bibr" rid="B36">2005</xref>; Rauschecker et al., <xref ref-type="bibr" rid="B44">2010</xref>; St Jacques et al., <xref ref-type="bibr" rid="B51">2010</xref>; Anticevic et al., <xref ref-type="bibr" rid="B2">2012</xref>). Nevertheless, no brain imaging study has explicitly investigated the emotion processing network in older adults with HL.</p>
<p>We conducted both a task-based and a resting state functional connectivity study of the emotion processing network in middle-aged adults with bilateral sensorineural HL. The task consisted of classifying sounds as &#x0201C;pleasant,&#x0201D; &#x0201C;unpleasant,&#x0201D; or &#x0201C;neutral.&#x0201D; Our working hypothesis was that a loss of hearing acuity affects behavior: sounds may appear more unpleasant (Franks, <xref ref-type="bibr" rid="B15">1982</xref>; Feldmann and Kumpf, <xref ref-type="bibr" rid="B13">1988</xref>; Leek et al., <xref ref-type="bibr" rid="B34">2008</xref>; Rutledge, <xref ref-type="bibr" rid="B45">2009</xref>; Uys et al., <xref ref-type="bibr" rid="B54">2012</xref>), and reaction times may be longer due to effortful listening (Hicks and Tharpe, <xref ref-type="bibr" rid="B23">2002</xref>; Tun et al., <xref ref-type="bibr" rid="B53">2009</xref>). Likewise, the neural network subserving emotion processing may be affected, specifically in the response patterns of its nodes. To assess the impact of HL on a baseline, resting state, without the distraction of any task, we measured the functional connectivity of a number of networks, including the one connected to the amygdala (a primary node of the limbic system).</p>
<p>Our main focus was on the auditory and limbic systems, but these systems interact with intrinsic networks devoted to attention processing and possibly the default mode network (DMN). Intrinsic networks, or resting state networks, are defined by spontaneous, low-frequency oscillations in brain activity that can be organized into coherent networks. In many cases, intrinsic networks mirror task-related networks. For example, the auditory resting state network closely resembles an auditory task network; however, the intrinsic network reflects correlations between brain regions at rest rather than between task-activated regions. The DMN is the quintessential resting state network and is unique in that it is deactivated in a task-based state and active during rest (Raichle et al., <xref ref-type="bibr" rid="B43">2001</xref>). The DMN exhibits a push-pull relationship with other networks in the brain (Fox et al., <xref ref-type="bibr" rid="B14">2005</xref>): the dorsal attention network (DAN), for instance, shows activation while the DMN is deactivated (in a task-based state). This relationship with the DAN and other intrinsic networks makes the DMN worth studying. Many disorders, including Alzheimer&#x00027;s disease, schizophrenia, and tinnitus, have been shown to affect the connectivity of the DMN (for reviews see Greicius, <xref ref-type="bibr" rid="B21">2008</xref>; Husain and Schmidt, <xref ref-type="bibr" rid="B28">2013</xref>). It is also possible that intrinsic connectivity differs in patients with HL, perhaps relating in particular to limbic areas. Alterations to resting state functional connectivity may result in decreased preparedness to perform a task. In particular, if limbic areas display irregular connectivity to other brain regions at rest, this may change how individuals process emotional stimuli.</p>
<p>Although we had provisional hypotheses, our study was exploratory because of a lack of brain imaging studies investigating the impact of HL on emotion processing and related extra-auditory networks. In the resting state portion of our study, general hypotheses were made regarding which networks may show altered connectivity, but we did not specify the nodes and the nature of these alterations. We had more constrained expectations about the role of amygdala and auditory processing areas in the emotion-task study, in that we expected reduced engagement of such regions in listeners with HL when processing affective stimuli. In sum, we conducted a comprehensive study, combining both task- and rest-based fMRI using multiple seed regions in order to establish a baseline for future studies.</p>
</sec>
<sec sec-type="methods" id="s2">
<title>Methods</title>
<sec>
<title>Subjects</title>
<p>Participants were recruited from the Urbana-Champaign area, were scanned under the UIUC IRB 10144 protocol, gave written informed consent, and were monetarily compensated. Subjects belonged to one of two groups: middle-aged adults with bilateral high-frequency sensorineural HL (<italic>n</italic> &#x0003D; 12), or their age- and gender-matched controls with NH (<italic>n</italic> &#x0003D; 15). Three NH subjects were excluded due to excessive motion artifact, so data from only 12 NH participants were included in the final analysis. During recruitment, we excluded subjects who presented with tinnitus or with asymmetric HL at the time of their audiological examination. We defined asymmetric HL as a difference between the right and left ears of more than 15 dB HL at one or more frequencies, or of 10 dB HL at two consecutive frequencies. The Beck Depression Inventory (BDI-II) and the Beck Anxiety Inventory (BAI) were used to assess depression and anxiety levels (Beck and Steer, <xref ref-type="bibr" rid="B4">1984</xref>; Steer et al., <xref ref-type="bibr" rid="B49">1993</xref>, <xref ref-type="bibr" rid="B48">1999</xref>). All subjects scored in the minimal depression and minimal anxiety ranges on the BDI-II and BAI, respectively. See Table <xref ref-type="table" rid="T1">1</xref> for information about subject demographics.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>Subject demographics and clinical characteristics for the subject groups</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><bold>Group</bold></th>
<th valign="top" align="center" colspan="2"><bold>NH (Normal hearing)</bold></th>
<th valign="top" align="center" colspan="2"><bold>HL (Hearing loss)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Group size</td>
<td valign="top" align="center" colspan="2">12</td>
<td valign="top" align="center" colspan="2">12</td>
</tr>
<tr>
<td valign="top" align="left">Age (M &#x000B1; <italic>SD</italic>)</td>
<td valign="top" align="center" colspan="2">51.4 &#x000B1; 9.9</td>
<td valign="top" align="center" colspan="2">58.2 &#x000B1; 9.5</td>
</tr>
<tr>
<td valign="top" align="left">Gender</td>
<td valign="top" align="center" colspan="2">6 males, 6 females</td>
<td valign="top" align="center" colspan="2">5 males, 7 females</td>
</tr>
<tr>
<td valign="top" align="left">BAI (M &#x000B1; <italic>SD</italic>)</td>
<td valign="top" align="center" colspan="2">1.25 &#x000B1; 1.3</td>
<td valign="top" align="center" colspan="2">2.3 &#x000B1; 1.7</td>
</tr>
<tr>
<td valign="top" align="left">BDI-II (M &#x000B1; <italic>SD</italic>)</td>
<td valign="top" align="center" colspan="2">1.7 &#x000B1; 2.3</td>
<td valign="top" align="center" colspan="2">4.3 &#x000B1; 4.1</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="6">Average hearing threshold (dB HL, right column) at different testing frequencies (left column) (M &#x000B1; <italic>SD</italic>)</td>
<td valign="top" align="left">0.5 kHz</td>
<td valign="top" align="left">13.5 &#x000B1; 8.8</td>
<td valign="top" align="left">0.5 kHz</td>
<td valign="top" align="left">15.0 &#x000B1; 8.7</td>
</tr>
<tr>
<td valign="top" align="left">1 kHz</td>
<td valign="top" align="left">11.6 &#x000B1; 7.7</td>
<td valign="top" align="left">1 kHz</td>
<td valign="top" align="left">16.7 &#x000B1; 10.2</td>
</tr>
<tr>
<td valign="top" align="left">2 kHz</td>
<td valign="top" align="left">11.0 &#x000B1; 8.0</td>
<td valign="top" align="left">2 kHz</td>
<td valign="top" align="left">24.0 &#x000B1; 18.7</td>
</tr>
<tr>
<td valign="top" align="left">4 kHz</td>
<td valign="top" align="left">14.0 &#x000B1; 8.1</td>
<td valign="top" align="left">4 kHz</td>
<td valign="top" align="left">36.2 &#x000B1; 19.6</td>
</tr>
<tr>
<td valign="top" align="left">6 kHz</td>
<td valign="top" align="left">8.4 &#x000B1; 9.6</td>
<td valign="top" align="left">6 kHz</td>
<td valign="top" align="left">41.1 &#x000B1; 17.8</td>
</tr>
<tr>
<td valign="top" align="left">8 kHz</td>
<td valign="top" align="left">12.1 &#x000B1; 8.5</td>
<td valign="top" align="left">8 kHz</td>
<td valign="top" align="left">47.1 &#x000B1; 18.7</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Pure tone audiogram information at each frequency is averaged across both ears and all subjects. BDI, Beck Depression Inventory; BAI, Beck Anxiety Inventory.</italic></p>
</table-wrap-foot>
</table-wrap>
</sec>
<sec>
<title>Audiometric evaluation</title>
<p>A comprehensive audiometric evaluation was performed for each subject. Audiological testing took place inside a sound-attenuating booth and included pure tone, word recognition, and bone conduction testing. Additional tests included distortion product otoacoustic emissions and tympanometry measurements to rule out confounding peripheral hearing pathologies. For pure tone testing, the test frequencies were 0.25, 0.5, 1, 2, 4, 6, and 8 kHz. At all test frequencies, each subject in the NH group had pure-tone thresholds of 25 dB HL or lower. Participants in the HL group had pure-tone thresholds of 30 dB HL or lower for the test frequencies 0.25&#x02013;2 kHz, with the exception of two HL subjects who had a slightly elevated threshold of 35 dB HL at 1 kHz. For the test frequencies 4&#x02013;8 kHz, the HL subjects had pure-tone thresholds between 30 and 70 dB HL, corresponding to mild to moderately-severe HL. Table <xref ref-type="table" rid="T1">1</xref> contains the average hearing thresholds at each test frequency for each subject group. None of the HL participants used hearing aids.</p>
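As a minimal illustration of the audiometric screening rules described above and in the Subjects section, the following Python sketch classifies an audiogram against the NH and HL criteria and applies the asymmetry exclusion. The helper names are hypothetical and this is not the study's actual screening software; it simply encodes the stated thresholds.

```python
# Illustrative screen for the audiometric criteria described above.
# Thresholds are dB HL values per ear at the tested frequencies, in order:
# 0.25, 0.5, 1, 2, 4, 6, 8 kHz. Function names are hypothetical.

FREQS_KHZ = [0.25, 0.5, 1, 2, 4, 6, 8]

def is_asymmetric(right, left):
    """True if the ears differ by more than 15 dB HL at any frequency,
    or by 10 dB HL or more at two consecutive frequencies."""
    diffs = [abs(r - l) for r, l in zip(right, left)]
    return (any(d > 15 for d in diffs) or
            any(a >= 10 and b >= 10 for a, b in zip(diffs, diffs[1:])))

def meets_nh_criteria(thresholds):
    """NH group: pure-tone thresholds of 25 dB HL or lower everywhere."""
    return all(t <= 25 for t in thresholds)

def meets_hl_criteria(thresholds):
    """HL group: <= 30 dB HL at 0.25-2 kHz (the study permitted 35 dB HL
    at 1 kHz for two subjects, so 35 is used as the upper bound here)
    and 30-70 dB HL at 4-8 kHz."""
    low = [t for f, t in zip(FREQS_KHZ, thresholds) if f <= 2]
    high = [t for f, t in zip(FREQS_KHZ, thresholds) if f >= 4]
    return all(t <= 35 for t in low) and all(30 <= t <= 70 for t in high)
```

For example, an audiogram of 15, 20, 25, 30 dB HL at 0.25–2 kHz with 40–50 dB HL at 4–8 kHz satisfies the HL criteria, while a single 20 dB HL interaural gap at 4 kHz triggers the asymmetry exclusion.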
</sec>
<sec>
<title>Data acquisition</title>
<p>A 3 Tesla Siemens Magnetom Allegra head-only scanner was used to acquire all MRI images. A series of two anatomical and two functional images was acquired: the first functional scan covered the emotion task and the second acquired resting state data; the order of acquisition varied across subjects. Thirty-two low-resolution T2-weighted structural transversal slices (<italic>TR</italic> &#x0003D; 7260 ms, <italic>TE</italic> &#x0003D; 98 ms) were collected for each volume with a 4.0 mm slice thickness and a 0.9 &#x000D7; 0.9 &#x000D7; 4.0 mm<sup>3</sup> voxel size [matrix size (per slice), 256 &#x000D7; 256; flip angle, 150&#x000B0;]. We obtained 160 high resolution magnetization-prepared rapid-acquisition with gradient echo (MPRAGE) sagittal slices for each volume that were 1.2 mm in thickness with a 1.0 &#x000D7; 1.0 &#x000D7; 1.2 mm<sup>3</sup> voxel size [<italic>TR</italic> &#x0003D; 2300 ms; <italic>TE</italic> &#x0003D; 2.83 ms; matrix size (per slice), 256 &#x000D7; 256; flip angle, 9&#x000B0;]. The functional images were acquired using the following parameters: slice thickness, 4 mm; inter-slice gap, 0.4 mm; 32 axial or transverse slices; distance factor, 10%; voxel size, 3.4 &#x000D7; 3.4 &#x000D7; 4.0 mm<sup>3</sup>; field of view (FoV) read, 220 mm; TR, 9000 ms with 2000 ms acquisition time; TE, 30 ms; matrix size (per slice), 64 &#x000D7; 64; flip angle, 90&#x000B0;. Functional images were acquired separately for (a) the emotion task and (b) the resting state study.</p>
<sec>
<title>(a) Emotion task</title>
<p>To prevent the loud noise of the radio-frequency gradients generated during image acquisition from interfering with the perception of the stimuli, we used clustered echo-planar imaging (EPI) (Hall et al., <xref ref-type="bibr" rid="B22">1999</xref>; Gaab et al., <xref ref-type="bibr" rid="B17">2003</xref>; Zaehle et al., <xref ref-type="bibr" rid="B61">2004</xref>). Clustered EPI, or sparse sampling, was chosen particularly to improve the listening environment for the subjects with HL. To reduce scanner noise interference with sound perception, rather than using continuous image acquisition, we collected one image volume (2 s acquisition time) after each stimulus, which was presented during a period of &#x0201C;relative quiet&#x0201D; when the radio-frequency gradients were switched off and the only source of ambient noise was the scanner pump. The repetition time was 9 s, and within each trial a 6 s stimulus was presented during a 7 s interval of reduced scanner noise. To optimize the scanning procedure for detecting neural responses from regions within the limbic system, a custom MATLAB (<ext-link ext-link-type="uri" xlink:href="http://www.mathworks.com/products/matlab/">http://www.mathworks.com/products/matlab/</ext-link>) toolbox was used prior to data acquisition to fine-tune the timing of stimulus presentation relative to image acquisition (Dolcos and McCarthy, <xref ref-type="bibr" rid="B12">2006</xref>). Stimuli were selected from the International Affective Digital Sounds (IADS) database and had normative scores for valence and arousal; sounds were rated on a scale of 1&#x02013;9 (for valence, 9 very pleasant and 1 very unpleasant; for arousal, 9 very arousing and 1 not at all arousing) (Bradley and Lang, <xref ref-type="bibr" rid="B6">2007</xref>). 
Normative scores were as follows: pleasant (P) (valence: 6.83 &#x000B1; 0.54, arousal: 6.46 &#x000B1; 0.56), unpleasant (U) (valence: 2.78 &#x000B1; 0.58, arousal: 6.9 &#x000B1; 0.31) and neutral (N) (valence: 4.81 &#x000B1; 0.43, arousal: 4.85 &#x000B1; 0.57). The normative valence ratings for P and U sounds differed significantly (<italic>p</italic> &#x0003C; 0.00001), but their arousal scores did not. Supplementary Table <xref ref-type="supplementary-material" rid="SM1">1</xref> contains a complete list of the affective sounds used in the present study. We presented the sounds in the scanner at a maximum comfort level, as indicated by each participant, during the relatively quiet intervals between image acquisitions. Prior to data collection, subjects were given both written and verbal instructions. Additionally, subjects were trained using sounds from the IADS database, different from the stimuli chosen for the experiment, to familiarize them with the task. During the final experiment, Presentation 14.7 software (<ext-link ext-link-type="uri" xlink:href="http://www.neurobs.com">www.neurobs.com</ext-link>) on a Windows XP computer in the fMRI control room was used to deliver sounds and instructions via pneumatic headphones (Resonance Technology, Inc., Northridge, CA). To complete the task, subjects listened to 90 affective sounds (30 P, 30 N, 30 U), each 6 s in duration, and were instructed to rate each sound as P, N, or U as soon as they felt confident in their rating.</p>
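The sparse-sampling trial structure described above (9 s TR, 2 s acquisition, 6 s stimulus inside a 7 s quiet interval) can be sketched as a simple schedule. The exact onset placement is an assumption for illustration (the study fine-tuned stimulus timing with a custom MATLAB toolbox); this sketch just shows the arithmetic of the trial layout.

```python
# Illustrative timing for the clustered (sparse) acquisition described above.
# One trial per TR: a quiet interval first, then the 2 s volume acquisition.
# Placing the stimulus at the start of the quiet interval is an assumption.

TR = 9.0          # s, one trial per repetition
ACQ = 2.0         # s, volume acquisition at the end of each trial
QUIET = TR - ACQ  # 7 s of reduced scanner noise per trial
STIM = 6.0        # s, IADS sound duration (fits within the quiet interval)

def trial_schedule(n_trials):
    """Yield (stimulus_onset, acquisition_onset) pairs in seconds."""
    for i in range(n_trials):
        t0 = i * TR
        yield (t0, t0 + QUIET)

for stim_on, acq_on in trial_schedule(3):
    print(f"stimulus at {stim_on:.1f} s, volume acquired at {acq_on:.1f} s")
```

With this layout each volume is collected just after stimulus offset, so the BOLD response to the sound is sampled while the gradients were silent during presentation.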
</sec>
<sec>
<title>(b) Resting state</title>
<p>To acquire resting state data, continuous scanning was employed instead of clustered image acquisition. During the resting scan, which lasted approximately 5 min, subjects were instructed to lie still and look at a fixation cross for the scan duration. One hundred and fifty volumes were collected for each subject. The first four images were discarded prior to preprocessing, leaving 146 volumes for analysis.</p>
</sec>
</sec>
<sec>
<title>Preprocessing</title>
<p>Preprocessing was similar for both types of functional scans. Statistical Parametric Mapping 8 (SPM8, Wellcome Trust Centre for Neuroimaging, <ext-link ext-link-type="uri" xlink:href="http://www.fil.ion.ucl.ac.uk/spm/software/spm8/">http://www.fil.ion.ucl.ac.uk/spm/software/spm8/</ext-link>) software was used to analyze the functional imaging data. The images were first realigned using a rigid body transformation to correct for head motion. Next, the low resolution Axial T2 (AxT2) image was registered to the mean fMRI image generated during the first step. The high resolution MPRAGE image was then registered to the AxT2 image. To normalize the functional images to MNI space, the MPRAGE was normalized to match a standard T1 MNI template. The normalized images were then smoothed using a Gaussian kernel of 8 &#x000D7; 8 &#x000D7; 8 mm<sup>3</sup> full width at half-maximum. To account for artifacts created by head motion, data from three NH subjects who displayed excessive motion (defined as at or above &#x000B1;1.5 mm translation and &#x000B1;1.5&#x000B0; rotation in any direction) were excluded from further analysis. We also included the motion regressors as covariates of no-interest in the general linear models created in the different statistical analyses, in order to (partially) remove motion-related artifacts. Further, <italic>t</italic>-tests of root-mean-square estimates of both rotational and translational movement showed no statistically significant difference between the two groups (mean &#x000B1; standard deviation; translational motion: NH, 0.63 &#x000B1; 0.28; HL, 0.67 &#x000B1; 0.23; rotational motion: NH, 0.01 &#x000B1; 0.005; HL, 0.01 &#x000B1; 0.004).</p>
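The motion screen and root-mean-square summaries described above amount to simple operations on the realignment parameters. The sketch below assumes SPM-style parameters, one row per volume with three translations (mm) and three rotations already converted to degrees (SPM itself stores rotations in radians); it is an illustration of the stated rules, not the study's analysis code.

```python
import numpy as np

# Assumed layout: params[i] = [x, y, z (mm), pitch, roll, yaw (degrees)].
# SPM stores rotations in radians, so convert before applying these limits.

TRANS_LIMIT = 1.5  # mm
ROT_LIMIT = 1.5    # degrees

def exceeds_motion(params):
    """True if any volume reaches the exclusion thresholds in any direction."""
    params = np.asarray(params, dtype=float)
    return bool(np.abs(params[:, :3]).max() >= TRANS_LIMIT or
                np.abs(params[:, 3:6]).max() >= ROT_LIMIT)

def rms_motion(params):
    """Root-mean-square of translation and rotation across the run,
    as summarized for the between-group t-tests reported above."""
    params = np.asarray(params, dtype=float)
    return (float(np.sqrt((params[:, :3] ** 2).mean())),
            float(np.sqrt((params[:, 3:6] ** 2).mean())))
```

A subject whose parameters stay well under the limits passes the screen, while a single 2 mm translation in any volume would flag the run for exclusion.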
</sec>
<sec>
<title>Data analysis</title>
<sec>
<title>(a) Emotion task</title>
<p>Behavioral data were obtained in the scanner during fMRI data acquisition. We collected each subject&#x00027;s ratings of each sound as P, N, or U, along with reaction times. Subject ratings and reaction times were analyzed using separate two-way ANOVAs in SPSS ver. 20 (Statistical Package for the Social Sciences, IBM, <ext-link ext-link-type="uri" xlink:href="http://www-01.ibm.com/software/analytics/spss/">http://www-01.ibm.com/software/analytics/spss/</ext-link>). Group (NH, HL) and condition (P, N, U) were set as independent fixed factors in a general linear model within SPSS, and significance was set at <italic>p</italic> &#x0003C; 0.05.</p>
<p>For data analysis, trials were coded based upon each individual&#x00027;s subjective rating of the affective sounds. We chose to employ the individual ratings to classify trials as &#x0201C;P,&#x0201D; &#x0201C;N,&#x0201D; or &#x0201C;U,&#x0201D; rather than the norms reported in the IADS. Both the IADS normative classification and individual ratings are valid methods of classifying individual trials for further analysis. However, the trend in the affective neuroscience literature is to move away from normative classification toward individual classification, particularly when examining special populations in which emotional responses are expected to be altered, such as older adults or patient populations (St Jacques et al., <xref ref-type="bibr" rid="B51">2010</xref>). It should be noted that the ratings reported with the IADS were obtained from younger adults with NH (Bradley and Lang, <xref ref-type="bibr" rid="B6">2007</xref>). In the present study, the subject population was older, with a mean age of 51.4 &#x000B1; 9.9 years for the NH adults and 58.2 &#x000B1; 9.5 years for those with HL. Therefore, individual ratings, rather than the norms, were used to classify the trials obtained during fMRI scanning. First level fixed effects analysis was performed on each subject&#x00027;s smoothed images to generate P &#x0003E; N and U &#x0003E; N contrast images. Motion parameters were included in the first level model as covariates of no-interest. The contrast images were then included in the flexible factorial analysis and <italic>post-hoc</italic> two-sample <italic>t</italic>-tests at the second level. The P &#x0002B; U &#x0003E; N contrast was computed by performing a <italic>t</italic>-test on the condition vectors for each group separately in the flexible factorial model. The three-factor design included group, subject, and condition. 
Group was assumed to be independent with unequal variance, subject was assumed to be independent with equal variance, and condition was assumed to be a dependent factor with equal variance. To directly compare the NH and HL groups, we conducted <italic>post-hoc</italic> two-sample <italic>t</italic>-tests within the flexible factorial model. Additionally, we performed a region-of-interest (ROI) analysis using the Wake Forest University (WFU) PickAtlas toolbox (<ext-link ext-link-type="uri" xlink:href="http://www.fmri.wfubmc.edu">http://www.fmri.wfubmc.edu</ext-link>) within SPM8, with regions defined anatomically based on the human MNI atlas within the toolbox. Based on our <italic>a priori</italic> hypothesis about the involvement of auditory and limbic regions in affective sound processing, we created a single anatomically defined mask by selecting the amygdala, insula, parahippocampus, nucleus accumbens, ventral medial prefrontal cortex, inferior colliculus, medial geniculate body, and primary auditory cortex (Brodmann areas 42, 41, 22). ROI analysis was performed on the NH (P &#x0002B; U &#x0003E; N), HL (P &#x0002B; U &#x0003E; N), and between-group contrasts, and small volume correction (SVC) was employed. All clusters identified in the results were reported at a significance level of <italic>p</italic> &#x0003C; 0.025 FWE at either the voxel or cluster level (the threshold was halved from the standard <italic>p</italic> &#x0003C; 0.05 to account for a two-tailed <italic>t</italic>-test).</p>
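The rationale for halving the threshold to <italic>p</italic> &#x0003C; 0.025 can be illustrated numerically: running one-tailed contrasts in both directions at &#x003B1;/2 is equivalent to a single two-tailed test at &#x003B1;. A small sketch of this equivalence, using SciPy rather than SPM (the function name and data are illustrative):

```python
import numpy as np
from scipy import stats

def directional_vs_two_tailed(x, y, alpha=0.05):
    """Show why alpha is halved: testing both directions with
    one-tailed two-sample t contrasts at alpha/2 matches a single
    two-tailed test at alpha."""
    p_two = stats.ttest_ind(x, y).pvalue
    p_greater = stats.ttest_ind(x, y, alternative="greater").pvalue
    p_less = stats.ttest_ind(x, y, alternative="less").pvalue
    # the two-tailed p is exactly twice the smaller one-tailed p
    assert np.isclose(p_two, 2 * min(p_greater, p_less))
    sig_two_tailed = p_two < alpha
    sig_directional = (p_greater < alpha / 2) or (p_less < alpha / 2)
    return sig_two_tailed == sig_directional   # always True
```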
</sec>
<sec>
<title>(b) Resting state</title>
<p>Preprocessing of the resting state data began with slice time correction for the interleaved, ascending data collection. The same preprocessing steps used for the emotion task were then applied. The Functional Connectivity Toolbox (Conn) (Whitfield-Gabrieli and Nieto-Castanon, <xref ref-type="bibr" rid="B57">2012</xref>) for MATLAB was used for data analysis. The smoothed fMRI data were band-pass filtered from 0.008 to 0.08 Hz. The average BOLD timeseries of the segmented white matter and cerebrospinal fluid, as well as the realignment/motion parameters generated during preprocessing, were regressed out of the data. Seed-to-voxel analysis was then performed to analyze the auditory resting state network, the DMN, and the DAN. Connectivity was assessed using pairs of seed regions for both the DMN and auditory networks; correlations between each seed and the whole brain were measured and averaged across seed pairs. Seed locations are listed in Table <xref ref-type="table" rid="T2">2</xref>. For the auditory network, seeds were located in the bilateral primary auditory cortices. For the DMN, they were located in the medial prefrontal cortex and the posterior cingulate cortex. The DAN was examined using four seeds in the left and right posterior intraparietal sulci and the left and right frontal eye fields (Burton et al., <xref ref-type="bibr" rid="B8">2012</xref>). All seeds were created using Marsbar (Brett et al., <xref ref-type="bibr" rid="B7">2002</xref>) and were 5 mm in radius. Coordinates of seed regions were the same as those used in Schmidt et al. (<xref ref-type="bibr" rid="B46">2013</xref>). The resting state data used in the present study were partially described in the Schmidt et al. (<xref ref-type="bibr" rid="B46">2013</xref>) study, but were re-analyzed for the purpose of this study. Correlation maps of the whole brain were created for each seed and then averaged over all seeds of a given network for each subject. 
These correlations were then z-transformed, group averages were computed, and across-group comparisons were made via two-sample <italic>t</italic>-tests in the Conn toolbox (Whitfield-Gabrieli and Nieto-Castanon, <xref ref-type="bibr" rid="B57">2012</xref>). Results were then exported to SPM8 for display purposes. After whole brain analysis at <italic>p</italic> &#x0003C; 0.001 uncorrected threshold, clusters that were significant at <italic>p</italic> &#x0003C; 0.025 FWE either at voxel or cluster level were selected to account for both tails of the <italic>t</italic>-test, with cluster extent set at 25 voxels.</p>
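The seed-based pipeline described above (band-pass filtering to 0.008&#x02013;0.08 Hz, nuisance regression, seed-to-voxel correlation, Fisher z-transform) can be sketched as follows. This is a simplified stand-in for the Conn toolbox procedure, not its actual implementation; all array names and the filter order are illustrative assumptions:

```python
import numpy as np
from scipy import signal

def seed_connectivity(voxel_ts, seed_ts, confounds, tr=2.0,
                      band=(0.008, 0.08)):
    """Seed-to-voxel connectivity sketch.

    voxel_ts: (T, V) BOLD timeseries; seed_ts: (T,) mean seed
    timeseries; confounds: (T, C) nuisance regressors (white matter,
    CSF, motion parameters); tr: repetition time in seconds.
    Returns Fisher z-transformed seed-voxel correlations, shape (V,).
    """
    nyq = 0.5 / tr
    b, a = signal.butter(2, [band[0] / nyq, band[1] / nyq], btype="band")
    vox = signal.filtfilt(b, a, voxel_ts, axis=0)   # band-pass 0.008-0.08 Hz
    seed = signal.filtfilt(b, a, seed_ts)
    # regress nuisance signals (plus intercept) out of every timeseries
    X = np.column_stack([np.ones(len(confounds)), confounds])
    vox = vox - X @ np.linalg.lstsq(X, vox, rcond=None)[0]
    seed = seed - X @ np.linalg.lstsq(X, seed, rcond=None)[0]
    # Pearson correlation of each voxel with the seed
    r = (vox * seed[:, None]).sum(axis=0) / (
        np.linalg.norm(vox, axis=0) * np.linalg.norm(seed))
    return np.arctanh(np.clip(r, -0.999999, 0.999999))  # Fisher z
```

The group comparison reported above then amounts to a two-sample <italic>t</italic>-test on these z maps across subjects.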
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p><bold>Seed regions for the resting state functional connectivity analysis, consisting of seeds for canonical resting state networks and for networks based on local maxima from the results of the emotion task study</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left"><bold>Network</bold></th>
<th align="left"><bold>Seed region</bold></th>
<th align="left"><bold>MNI coordinates <italic>X, Y, Z</italic></bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Auditory</td>
<td align="left">Left primary auditory cortex</td>
<td align="left">&#x02212;41, &#x02212;27, 6</td>
</tr>
<tr>
<td align="left">Auditory</td>
<td align="left">Right primary auditory cortex</td>
<td align="left">55, &#x02212;22, 9</td>
</tr>
<tr>
<td align="left">DMN</td>
<td align="left">Medial prefrontal cortex</td>
<td align="left">8, 59, 19</td>
</tr>
<tr>
<td align="left">DMN</td>
<td align="left">Posterior cingulate cortex</td>
<td align="left">&#x02212;2, &#x02212;50, 25</td>
</tr>
<tr>
<td align="left">DAN</td>
<td align="left">Left posterior intraparietal sulcus</td>
<td align="left">&#x02212;23, &#x02212;70, 46</td>
</tr>
<tr>
<td align="left">DAN</td>
<td align="left">Right posterior intraparietal sulcus</td>
<td align="left">26, &#x02212;62, 53</td>
</tr>
<tr>
<td align="left">DAN</td>
<td align="left">Left frontal eye field</td>
<td align="left">&#x02212;25, &#x02212;11, 54</td>
</tr>
<tr>
<td align="left">DAN</td>
<td align="left">Right frontal eye field</td>
<td align="left">27, &#x02212;11, 54</td>
</tr>
<tr>
<td/>
<td align="left">Left amygdala</td>
<td align="left">&#x02212;30, &#x02212;2, &#x02212;18</td>
</tr>
<tr>
<td/>
<td align="left">Left inferior parietal lobule</td>
<td align="left">&#x02212;44, &#x02212;36, 26</td>
</tr>
<tr>
<td/>
<td align="left">Left superior frontal gyrus</td>
<td align="left">&#x02212;24, 42, 30</td>
</tr>
<tr>
<td/>
<td align="left">Right middle temporal gyrus</td>
<td align="left">44, &#x02212;62, 22</td>
</tr>
<tr>
<td/>
<td align="left">Right superior parietal lobule</td>
<td align="left">30, &#x02212;62, 44</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Coordinates are listed in Montreal Neurological Institute (MNI) space.</italic></p>
</table-wrap-foot>
</table-wrap>
<p>A seeding analysis designed to examine resting state network connectivity was performed using ROIs determined from published studies as stated earlier, as well as using ROIs identified from the task-based study. The latter ROIs included the left amygdala, left inferior parietal lobule, left superior frontal gyrus, right middle temporal gyrus, and the right superior parietal lobule (Table <xref ref-type="table" rid="T2">2</xref>). The left amygdala seed was created based on the NH &#x0003E; HL (P &#x0002B; U &#x0003E; N) contrast from the task ROI analysis. The right superior parietal lobule and left inferior parietal lobule were both based on the HL &#x0003E; NH (P &#x0002B; U &#x0003E; N) contrast, and the right middle temporal gyrus and left superior frontal gyrus seeds were based on the NH &#x0003E; HL (P &#x0002B; U &#x0003E; N) contrast from the emotion task results. All of these ROIs were determined from group-level contrasts. Peak maxima were used as the centers for the spherical ROIs, and the mean BOLD timeseries of the voxels in the ROI was generated. Seed specification, data generation and statistical analyses were performed in the manner described earlier for the standard seeds.</p>
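Extracting the mean BOLD timeseries from a sphere centered on a peak maximum can be sketched as below. This is an illustrative stand-in for the Marsbar/Conn procedure, and for simplicity the sphere radius is given in voxels rather than millimeters (a 5 mm sphere at 2.5 mm isotropic resolution is roughly 2 voxels):

```python
import numpy as np

def sphere_roi_timeseries(data, center_vox, radius_vox):
    """Mean BOLD timeseries over a spherical ROI.

    data: 4-D array (x, y, z, t); center_vox: voxel indices of the
    peak maximum; radius_vox: sphere radius in voxels.
    """
    nx, ny, nz, _ = data.shape
    ii, jj, kk = np.ogrid[:nx, :ny, :nz]
    dist2 = ((ii - center_vox[0]) ** 2 + (jj - center_vox[1]) ** 2
             + (kk - center_vox[2]) ** 2)
    mask = dist2 <= radius_vox ** 2          # boolean sphere mask
    return data[mask].mean(axis=0)           # shape (t,)
```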
</sec>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Behavioral data</title>
<sec>
<title>(a) Emotion task</title>
<p>Ratings were obtained in the scanner simultaneously with fMRI data acquisition. For the reaction time data, there was a statistically significant main effect of group [<italic>F</italic><sub>(1, 23)</sub> &#x0003D; 69.53, <italic>p</italic> &#x0003C; 0.000001], a main effect of condition [<italic>F</italic><sub>(1, 23)</sub> &#x0003D; 17.59, <italic>p</italic> &#x0003C; 0.000001], and an interaction between group and condition [<italic>F</italic><sub>(1, 23)</sub> &#x0003D; 7.79, <italic>p</italic> &#x0003C; 0.000423]. The NH group responded significantly more slowly to the neutral sounds than to the pleasant and unpleasant sounds (Figure <xref ref-type="fig" rid="F1">1A</xref>). The HL group&#x00027;s reaction times for the three types of sounds did not differ significantly (Figure <xref ref-type="fig" rid="F1">1A</xref>). In between-group comparisons, the HL group was significantly slower for both pleasant and unpleasant sounds compared to the NH group (Figure <xref ref-type="fig" rid="F1">1A</xref>). Concerning the type of responses, there was a significant main effect of condition [<italic>F</italic><sub>(1, 23)</sub> &#x0003D; 7.162, <italic>p</italic> &#x0003C; 0.01]; however, the main effect of group [<italic>F</italic><sub>(1, 23)</sub> &#x0003D; 0.118, <italic>p</italic> &#x0003D; 0.733] and the interaction [<italic>F</italic><sub>(1, 23)</sub> &#x0003D; 0.109, <italic>p</italic> &#x0003D; 0.897] did not reach significance. Both groups rated significantly more stimuli as unpleasant than as pleasant or neutral (determined using <italic>post-hoc</italic> within-group <italic>t</italic>-tests) (Figure <xref ref-type="fig" rid="F1">1B</xref>). Note that the experimental design used an equal number of sounds classified as P, N, and U according to the normative IADS scores (Bradley and Lang, <xref ref-type="bibr" rid="B6">2007</xref>). 
Because of this observed deviation from the normative ratings, we chose to code the trials during analysis according to each individual&#x00027;s rating.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Affective sound categorization task behavioral results. (A)</bold> Mean reaction time data. Within groups, the HL group&#x00027;s reaction times did not differ significantly between the P, N, and U sounds, whereas the NH group responded significantly more slowly to the N sounds than to the P and U stimuli. Compared to the NH group, the HL group&#x00027;s reaction times were significantly slower for P and U sounds. <bold>(B)</bold> Mean number of responses. Both the NH and HL groups gave U responses significantly more often than N and P responses. Statistical significance at <italic>p</italic> &#x0003C; 0.05 is indicated by <sup>&#x0002A;</sup>.</p></caption>
<graphic xlink:href="fnsys-08-00010-g0001.tif"/>
</fig>
</sec>
<sec>
<title>(b) Resting state</title>
<p>No behavioral data were obtained for the resting state study.</p>
</sec>
</sec>
<sec>
<title>fMRI data</title>
<sec>
<title>(a) Emotion task</title>
<p>Within-group comparisons: In the NH group, areas of increased activation for the contrast P &#x0002B; U &#x0003E; N were observed in the bilateral middle temporal gyri, right transverse temporal gyrus, left superior temporal gyrus, left postcentral gyrus, right medial frontal gyrus, left superior frontal gyrus, left middle frontal gyrus, left anterior cingulate, and the left insula (Figure <xref ref-type="fig" rid="F2">2</xref>, Table <xref ref-type="table" rid="T3">3</xref>). In the HL group, increased response to affective sounds compared to neutral sounds was obtained in the bilateral superior temporal gyri, bilateral transverse temporal gyri, right middle temporal gyrus, right superior frontal gyrus, left middle frontal gyrus, right medial frontal gyrus, right precuneus, left inferior parietal lobule, left precentral gyrus, left lentiform nucleus, and corpus callosum (Figure <xref ref-type="fig" rid="F2">2</xref>, Table <xref ref-type="table" rid="T3">3</xref>). ROI analysis revealed increased response in the bilateral transverse temporal gyri, bilateral superior temporal gyri, bilateral medial frontal gyri, left insula, right middle temporal gyrus, and right parahippocampus for the NH (P &#x0002B; U &#x0003E; N) contrast (Table <xref ref-type="table" rid="T3">3</xref>). For the HL (P &#x0002B; U &#x0003E; N) comparison, increased response was observed in the bilateral transverse temporal gyri, bilateral superior temporal gyri, bilateral superior frontal gyri, right medial frontal gyrus, and left insula (Table <xref ref-type="table" rid="T3">3</xref>).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Statistical parametric maps for the effect of affective stimuli (P &#x0002B; U &#x0003E; N) for each group separately (A,B). (A)</bold> HL (P &#x0002B; U &#x0003E; N) and <bold>(B)</bold> NH (P &#x0002B; U &#x0003E; N) images illustrate the similar whole brain response patterns from both groups (MNI coordinate <italic>z</italic> &#x0003D; &#x0002B;14). The maps are displayed at <italic>p</italic> &#x0003C; 0.001 uncorrected level for better visualization, but the clusters in the circles are corrected for multiple comparisons (<italic>p</italic> &#x0003C; 0.05 FWE). (1) bilateral middle temporal gyrus, (2) medial frontal gyrus, (3) bilateral middle temporal gyrus, (4) medial frontal gyrus.</p></caption>
<graphic xlink:href="fnsys-08-00010-g0002.tif"/>
</fig>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p><bold>Local maxima for the P &#x0002B; U &#x0003E; N contrasts from the emotion task</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><bold>Contrast</bold></th>
<th valign="top" align="center"><bold>MNI coordinates <italic>X, Y, Z</italic></bold></th>
<th valign="top" align="center"><bold><italic>Z</italic>-score</bold></th>
<th valign="top" align="center"><bold>Cluster (voxels)</bold></th>
<th valign="top" align="left"><bold>Gyrus (brodmann area)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">NH group (P &#x0002B; U &#x0003E; N)</td>
<td valign="top" align="center">62, &#x02212;10, 12</td>
<td valign="top" align="center">6.45</td>
<td valign="top" align="center">3293</td>
<td valign="top" align="left">R. transverse temporal gyrus (BA 42)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">58, 2, &#x02212;8</td>
<td valign="top" align="center">6.42</td>
<td/>
<td valign="top" align="left">R. middle temporal gyrus (BA 21)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">56, 6, &#x02212;18</td>
<td valign="top" align="center">5.98</td>
<td/>
<td valign="top" align="left">R. middle temporal gyrus (BA 21)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;58, &#x02212;16, 15</td>
<td valign="top" align="center">6.06</td>
<td valign="top" align="center">3974</td>
<td valign="top" align="left">L. postcentral gyrus (BA 40)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;54, &#x02212;6, 0</td>
<td valign="top" align="center">5.95</td>
<td/>
<td valign="top" align="left">L. superior temporal gyrus (BA 22)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;66, &#x02212;28, 4</td>
<td valign="top" align="center">5.73</td>
<td/>
<td valign="top" align="left">L. superior temporal gyrus (BA 22)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;24, 66, 14</td>
<td valign="top" align="center">5.59</td>
<td valign="top" align="center">218</td>
<td valign="top" align="left">L. superior frontal gyrus (BA 10)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;32, 62, 16</td>
<td valign="top" align="center">5.48</td>
<td/>
<td valign="top" align="left">L. middle frontal gyrus</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;38, &#x02212;58, 2</td>
<td valign="top" align="center">5.57</td>
<td valign="top" align="center">233</td>
<td valign="top" align="left">L. middle temporal gyrus (BA 37)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;40, &#x02212;34, 20</td>
<td valign="top" align="center">5.39</td>
<td valign="top" align="center">10</td>
<td valign="top" align="left">L. insula (BA 13)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">4, 56, 20</td>
<td valign="top" align="center">5.38</td>
<td valign="top" align="center">1765</td>
<td valign="top" align="left">R. medial frontal gyrus (BA 9)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">10, 60, 16</td>
<td valign="top" align="center">5.32</td>
<td/>
<td valign="top" align="left">R. medial frontal gyrus (BA 10)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;8, 34, 6</td>
<td valign="top" align="center">5.12</td>
<td/>
<td valign="top" align="left">L. anterior cingulate (BA 32)</td>
</tr>
<tr>
<td valign="top" align="left">HL group (P &#x0002B; U &#x0003E; N)</td>
<td valign="top" align="center">58, &#x02212;14, 12</td>
<td valign="top" align="center">7.21</td>
<td valign="top" align="center">2289</td>
<td valign="top" align="left">R. transverse temporal gyrus (BA 42)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">60, &#x02212;22, 14</td>
<td valign="top" align="center">6.46</td>
<td/>
<td valign="top" align="left">R. superior temporal gyrus (BA 42)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">62, &#x02212;28, 4</td>
<td valign="top" align="center">5.30</td>
<td/>
<td valign="top" align="left">R. superior temporal gyrus (BA 22)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;42, &#x02212;36, 22</td>
<td valign="top" align="center">6.56</td>
<td valign="top" align="center">2471</td>
<td valign="top" align="left">L. inferior parietal lobule (BA 40)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;50, &#x02212;6, 6</td>
<td valign="top" align="center">5.72</td>
<td/>
<td valign="top" align="left">L. precentral gyrus (BA 6)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;56, &#x02212;16, 6</td>
<td valign="top" align="center">5.29</td>
<td/>
<td valign="top" align="left">L. superior temporal gyrus (BA 22)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;62, &#x02212;10, 12</td>
<td valign="top" align="center">5.13</td>
<td/>
<td valign="top" align="left">L. transverse temporal gyrus (BA 42)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;46, 36, 28</td>
<td valign="top" align="center">5.64</td>
<td valign="top" align="center">760</td>
<td valign="top" align="left">L. middle frontal gyrus (BA 46)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;24, &#x02212;16, &#x02212;4</td>
<td valign="top" align="center">5.64</td>
<td valign="top" align="center">195</td>
<td valign="top" align="left">L. lentiform nucleus</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">44, &#x02212;62, 24</td>
<td valign="top" align="center">5.52</td>
<td valign="top" align="center">405</td>
<td valign="top" align="left">R. middle temporal gyrus (BA 39)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">36, &#x02212;74, 18</td>
<td valign="top" align="center">5.14</td>
<td/>
<td valign="top" align="left">R. middle temporal gyrus (BA 39)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;8, 32, 6</td>
<td valign="top" align="center">5.48</td>
<td valign="top" align="center">1644</td>
<td valign="top" align="left">Corpus Callosum</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">8, 42, &#x02212;10</td>
<td/>
<td/>
<td valign="top" align="left">R. medial frontal gyrus (BA 10)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">24, &#x02212;50, 52</td>
<td valign="top" align="center">5.25</td>
<td valign="top" align="center">63</td>
<td valign="top" align="left">R. precuneus (BA 7)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">12, 62, 28</td>
<td valign="top" align="center">5.10</td>
<td valign="top" align="center">392</td>
<td valign="top" align="left">R. superior frontal gyrus (BA 10)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;30, 46, 40</td>
<td valign="top" align="center">5.07</td>
<td valign="top" align="center">43</td>
<td valign="top" align="left">L. middle frontal gyrus (BA 9)</td>
</tr>
<tr>
<td valign="top" align="left">[ROI] NH group (P &#x0002B; U &#x0003E; N)</td>
<td valign="top" align="center">62, &#x02212;10, 12</td>
<td valign="top" align="center">6.45</td>
<td valign="top" align="center">534</td>
<td valign="top" align="left">R. transverse temporal gyrus (BA 42)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">58, 0, &#x02212;6</td>
<td valign="top" align="center">6.16</td>
<td/>
<td valign="top" align="left">R. superior temporal gyrus (BA 22)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">54, &#x02212;24, 6</td>
<td valign="top" align="center">4.90</td>
<td/>
<td valign="top" align="left">R. superior temporal gyrus (BA 41)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">52, &#x02212;34, 2</td>
<td valign="top" align="center">4.45</td>
<td/>
<td valign="top" align="left">R. middle temporal gyrus (BA 22)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;58, &#x02212;16, 12</td>
<td valign="top" align="center">5.94</td>
<td valign="top" align="center">938</td>
<td valign="top" align="left">L. transverse temporal gyrus (BA 42)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;66, &#x02212;28, 4</td>
<td valign="top" align="center">5.73</td>
<td/>
<td valign="top" align="left">L. superior temporal gyrus (BA 22)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;50, &#x02212;22, 16</td>
<td valign="top" align="center">5.68</td>
<td/>
<td valign="top" align="left">L. insula</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;40, &#x02212;34, 20</td>
<td valign="top" align="center">5.39</td>
<td/>
<td valign="top" align="left">L. insula (BA 13)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;48, &#x02212;26, 18</td>
<td valign="top" align="center">5.38</td>
<td/>
<td valign="top" align="left">L. insula (BA 13)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;54, &#x02212;20, 6</td>
<td valign="top" align="center">5.25</td>
<td/>
<td valign="top" align="left">L. superior temporal gyrus (BA 41)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;52, &#x02212;26, 12</td>
<td valign="top" align="center">4.69</td>
<td/>
<td valign="top" align="left">L. transverse temporal gyrus (BA 41)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">6, 58, 20</td>
<td valign="top" align="center">5.30</td>
<td valign="top" align="center">54</td>
<td valign="top" align="left">R. medial frontal gyrus (BA 10)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;6, 58, 14</td>
<td valign="top" align="center">4.52</td>
<td valign="top" align="center">41</td>
<td valign="top" align="left">L. medial frontal gyrus (BA 10)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">20, &#x02212;20, &#x02212;22</td>
<td valign="top" align="center">4.31</td>
<td valign="top" align="center">42</td>
<td valign="top" align="left">R. parahippocampus (BA 28)</td>
</tr>
<tr>
<td valign="top" align="left">[ROI] HL group (P &#x0002B; U &#x0003E; N)</td>
<td valign="top" align="center">60, &#x02212;12, 12</td>
<td valign="top" align="center">6.72</td>
<td valign="top" align="center">510</td>
<td valign="top" align="left">R. transverse temporal gyrus (BA 42)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">60, &#x02212;22, 12</td>
<td valign="top" align="center">6.18</td>
<td/>
<td valign="top" align="left">R. superior temporal gyrus (BA 42)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">56, &#x02212;10, 8</td>
<td valign="top" align="center">5.96</td>
<td/>
<td valign="top" align="left">R. superior temporal gyrus (BA 22)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">52, &#x02212;22, 14</td>
<td valign="top" align="center">5.11</td>
<td/>
<td valign="top" align="left">R. insula</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;52, &#x02212;6, 4</td>
<td valign="top" align="center">5.60</td>
<td valign="top" align="center">486</td>
<td valign="top" align="left">L. superior temporal gyrus (BA 22)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;42, &#x02212;34, 20</td>
<td valign="top" align="center">5.60</td>
<td/>
<td valign="top" align="left">L. insula (BA 13)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;62, &#x02212;10, 12</td>
<td valign="top" align="center">5.08</td>
<td/>
<td valign="top" align="left">L. transverse temporal gyrus (BA 42)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;54, &#x02212;28, 10</td>
<td valign="top" align="center">4.71</td>
<td/>
<td valign="top" align="left">L. superior temporal gyrus (BA 41)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">8, 42, &#x02212;10</td>
<td valign="top" align="center">5.31</td>
<td valign="top" align="center">201</td>
<td valign="top" align="left">R. medial frontal gyrus (BA 10)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">12, 62, 28</td>
<td valign="top" align="center">5.10</td>
<td valign="top" align="center">13</td>
<td valign="top" align="left">R. superior frontal gyrus (BA 10)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">&#x02212;14, 62, 26</td>
<td valign="top" align="center">4.76</td>
<td valign="top" align="center">15</td>
<td valign="top" align="left">L. superior frontal gyrus (BA 10)</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The loci are listed for each group separately for the whole-brain analysis and the region-of-interest (ROI) analysis. Regions are reported in Montreal Neurological Institute (MNI) coordinates and in terms of Brodmann areas (the MNI coordinates were converted to Talairach coordinates before determining the Brodmann areas). The ROI mask comprised anatomically defined bilateral regions in the primary auditory cortex, medial geniculate body, inferior colliculus, amygdala, insula, parahippocampus, nucleus accumbens, and ventral medial prefrontal cortex. The statistical threshold was set at p &#x0003C; 0.05 FWE corrected for multiple comparisons; all clusters noted here were significant at both the voxel and cluster level. L, left; R, right.</italic></p>
</table-wrap-foot>
</table-wrap>
<p>Between-group comparisons: For the NH &#x0003E; HL (P &#x0002B; U &#x0003E; N) comparison, we observed heightened response in the left superior frontal gyrus, right middle temporal gyrus, left superior temporal gyrus, and left superior occipital gyrus (Figure <xref ref-type="fig" rid="F3">3</xref>, Table <xref ref-type="table" rid="T4">4</xref>). Concerning the HL &#x0003E; NH (P &#x0002B; U &#x0003E; N) comparison, elevated response was observed in the right superior parietal lobule, right precuneus, and left inferior parietal lobule (Figure <xref ref-type="fig" rid="F3">3</xref>, Table <xref ref-type="table" rid="T4">4</xref>). For the ROI analysis, no suprathreshold voxels were obtained for the HL &#x0003E; NH (P &#x0002B; U &#x0003E; N) contrast. However, for the NH &#x0003E; HL (P &#x0002B; U &#x0003E; N) comparison, increased response was observed in the left amygdala and left parahippocampus (Figure <xref ref-type="fig" rid="F3">3</xref>, Table <xref ref-type="table" rid="T4">4</xref>).</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>Statistical parametric maps for <italic>post-hoc</italic> two-tailed two sample <italic>t</italic>-tests and region-of-interest (ROI) analysis</bold>. <bold>(A)</bold> HL &#x0003E; NH (P &#x0002B; U &#x0003E; N) and <bold>(B)</bold> NH &#x0003E; HL (P &#x0002B; U &#x0003E; N) illustrate the brain regions chosen from the <italic>post-hoc</italic> two sample <italic>t</italic>-tests for the seed analysis (MNI coordinates <italic>z</italic> &#x0003D; &#x0002B;26 and <italic>z</italic> &#x0003D; &#x0002B;21, respectively). <bold>(C)</bold> Increased amygdala activation observed for the NH &#x0003E; HL (P &#x0002B; U &#x0003E; N) comparison (MNI coordinate <italic>y</italic> &#x0003D; &#x02212;4). The maps are displayed at the <italic>p</italic> &#x0003C; 0.001 uncorrected level for better visualization, but the clusters in the circles are corrected for multiple comparisons (<italic>p</italic> &#x0003C; 0.025 FWE). (1) left inferior parietal lobule, (2) left superior frontal gyrus, (3) right middle temporal gyrus, (4) left amygdala.</p></caption>
<graphic xlink:href="fnsys-08-00010-g0003.tif"/>
</fig>
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p><bold>Local maxima for the P &#x0002B; U &#x0003E; N contrasts from the emotion task for the group comparisons</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left"><bold>Contrast</bold></th>
<th align="center"><bold>MNI coordinates <italic>X, Y, Z</italic></bold></th>
<th align="center"><bold><italic>Z</italic>-score</bold></th>
<th align="center"><bold>Cluster (voxels)</bold></th>
<th align="left"><bold>Gyrus (brodmann area)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">NH &#x0003E; HL (P &#x0002B; U &#x0003E; N)</td>
<td align="center">44, &#x02212;62, 22</td>
<td align="center">5.77</td>
<td align="center">261</td>
<td align="left">R. middle temporal gyrus (BA 39)</td>
</tr>
<tr>
<td/>
<td align="center">&#x02212;34, 2, &#x02212;18</td>
<td align="center">5.72</td>
<td align="center">133</td>
<td align="left">L. superior temporal gyrus (BA 38)</td>
</tr>
<tr>
<td/>
<td align="center">&#x02212;24, 42, 20</td>
<td align="center">5.04</td>
<td align="center">319</td>
<td align="left">L. superior frontal gyrus (BA 10)</td>
</tr>
<tr>
<td/>
<td align="center">&#x02212;30, &#x02212;78, 24</td>
<td align="center">4.45<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td align="center">272</td>
<td align="left">L. superior occipital gyrus (BA 19)</td>
</tr>
<tr>
<td align="left">HL &#x0003E; NH (P &#x0002B; U &#x0003E; N)</td>
<td align="center">18, &#x02212;64, 44</td>
<td align="center">5.62</td>
<td align="center">402</td>
<td align="left">R. precuneus (BA 7)</td>
</tr>
<tr>
<td/>
<td align="center">30, &#x02212;62, 44</td>
<td align="center">4.86</td>
<td/>
<td align="left">R. superior parietal lobule (BA 7)</td>
</tr>
<tr>
<td/>
<td align="center">&#x02212;44, &#x02212;36, 26</td>
<td align="center">4.68<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td align="center">288</td>
<td align="left">L. inferior parietal lobule (BA 40)</td>
</tr>
<tr>
<td align="left">[ROI] NH &#x0003E; HL (P &#x0002B; U &#x0003E; N)</td>
<td align="center">&#x02212;30, &#x02212;2, &#x02212;18</td>
<td align="center">4.80</td>
<td align="center">57</td>
<td align="left">L. amygdala/parahippocampus</td>
</tr>
<tr>
<td/>
<td align="center">&#x02212;34, 2, &#x02212;22</td>
<td align="center">4.62</td>
<td/>
<td align="left">L. parahippocampus</td>
</tr>
<tr>
<td align="left">[ROI] HL &#x0003E; NH (P &#x0002B; U &#x0003E; N)</td>
<td/>
<td/>
<td/>
<td align="left">No significant regions</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The loci are listed for each group separately for the whole-brain analysis and the region-of-interest (ROI) analysis. Regions are reported in Montreal Neurological Institute (MNI) coordinates and in terms of Brodmann areas (the MNI coordinates were converted to Talairach coordinates before determining the Brodmann areas). The ROI mask comprised anatomically defined bilateral regions in the primary auditory cortex, medial geniculate body, inferior colliculus, amygdala, insula, parahippocampus, nucleus accumbens, and ventral medial prefrontal cortex. The statistical threshold was set at p &#x0003C; 0.025 FWE corrected for multiple comparisons (to account for the two-tailed t-test); all clusters noted here were significant at both the voxel and cluster level, unless noted with an</italic></p>
<fn id="TN1">
<label>&#x0002A;</label>
<p><italic>which indicates significance only at cluster level. L, left; R, right.</italic></p></fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec>
<title>(b) Resting state</title>
<p>No significant differences were found between the groups in the auditory resting state network. With respect to the DMN, a significant difference in the left middle frontal/precentral gyrus was observed in the HL &#x0003E; NH comparison. Analysis of the DAN revealed significant differences in the left postcentral/precentral gyrus and the left insula. The results for these typical intrinsic networks are listed in Table <xref ref-type="table" rid="T5">5</xref> and displayed in Figure <xref ref-type="fig" rid="F4">4</xref>. For the connectivity analysis with task-based ROIs, no significant differences in connectivity with the left amygdala were seen. Placing a seed in the left inferior parietal lobule also did not reveal significant differences. With the seed in the left superior frontal gyrus, the NH &#x0003E; HL contrast showed significant differences in connectivity with the left middle occipital lobe. The right middle temporal seed showed a stronger correlation with the right precentral gyrus in the HL group than in the NH group. Finally, no connectivity differences were seen with the seed in the right superior parietal lobule. The results of this analysis are also shown in Table <xref ref-type="table" rid="T5">5</xref> and Figure <xref ref-type="fig" rid="F4">4</xref>.</p>
<table-wrap position="float" id="T5">
<label>Table 5</label>
<caption><p><bold>Local maxima for results of the resting state functional connectivity analysis</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><bold>Network/seed</bold></th>
<th valign="top" align="left"><bold>Contrast</bold></th>
<th valign="top" align="center"><bold>MNI coordinates <italic>X, Y, Z</italic></bold></th>
<th valign="top" align="center"><bold><italic>Z</italic>-score</bold></th>
<th valign="top" align="center"><bold>Cluster (voxels)</bold></th>
<th valign="top" align="left"><bold>Gyrus (Brodmann area)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">AUD</td>
<td valign="top" align="left">NH &#x0003E; HL</td>
<td/>
<td/>
<td/>
<td valign="top" align="left">No significant regions</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">HL &#x0003E; NH</td>
<td/>
<td/>
<td/>
<td valign="top" align="left">No significant regions</td>
</tr>
<tr>
<td valign="top" align="left">DMN</td>
<td valign="top" align="left">NH &#x0003E; HL</td>
<td/>
<td/>
<td/>
<td valign="top" align="left">No significant regions</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">HL &#x0003E; NH</td>
<td valign="top" align="center">&#x02212;44, 6, 58</td>
<td valign="top" align="center">4.38</td>
<td valign="top" align="center">107</td>
<td valign="top" align="left">L. middle frontal/precentral gyrus (BA 6)</td>
</tr>
<tr>
<td valign="top" align="left">DAN</td>
<td valign="top" align="left">NH &#x0003E; HL</td>
<td valign="top" align="center">&#x02212;56, &#x02212;12, 26</td>
<td valign="top" align="center">5.00</td>
<td valign="top" align="center">171</td>
<td valign="top" align="left">L. postcentral gyrus</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="center">&#x02212;56, &#x02212;2, 26</td>
<td valign="top" align="center">4.09</td>
<td/>
<td valign="top" align="left">L. precentral gyrus</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="center">&#x02212;52, &#x02212;4, 38</td>
<td valign="top" align="center">3.36</td>
<td/>
<td valign="top" align="left">L. postcentral gyrus</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="center">&#x02212;38, &#x02212;6, 6</td>
<td valign="top" align="center">4.36</td>
<td valign="top" align="center">118</td>
<td valign="top" align="left">L. insula</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="center">&#x02212;40, &#x02212;2, 16</td>
<td valign="top" align="center">4.00</td>
<td/>
<td valign="top" align="left">L. precentral gyrus (BA 6)</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="center">&#x02212;46, &#x02212;10, 14</td>
<td valign="top" align="center">3.43</td>
<td/>
<td valign="top" align="left">L. precentral gyrus (BA 6)</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">HL &#x0003E; NH</td>
<td/>
<td/>
<td/>
<td valign="top" align="left">No significant regions</td>
</tr>
<tr>
<td valign="top" align="left">l amyg</td>
<td valign="top" align="left">NH &#x0003E; HL</td>
<td/>
<td/>
<td/>
<td valign="top" align="left">No significant regions</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">HL &#x0003E; NH</td>
<td/>
<td/>
<td/>
<td valign="top" align="left">No significant regions</td>
</tr>
<tr>
<td valign="top" align="left">l inf pariet</td>
<td valign="top" align="left">NH &#x0003E; HL</td>
<td/>
<td/>
<td/>
<td valign="top" align="left">No significant regions</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">HL &#x0003E; NH</td>
<td/>
<td/>
<td/>
<td valign="top" align="left">No significant regions</td>
</tr>
<tr>
<td valign="top" align="left">l sup front</td>
<td valign="top" align="left">NH &#x0003E; HL</td>
<td valign="top" align="center">&#x02212;18, &#x02212;96, 8</td>
<td valign="top" align="center">4.70</td>
<td valign="top" align="center">167</td>
<td valign="top" align="left">L. middle occipital gyrus (BA 18)</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="center">&#x02212;4, &#x02212;94, 20</td>
<td valign="top" align="center">4.38</td>
<td/>
<td valign="top" align="left">L. cuneus (BA 19)</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">HL &#x0003E; NH</td>
<td/>
<td/>
<td/>
<td valign="top" align="left">No significant regions</td>
</tr>
<tr>
<td valign="top" align="left">r mid temp</td>
<td valign="top" align="left">NH &#x0003E; HL</td>
<td/>
<td/>
<td/>
<td valign="top" align="left">No significant regions</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">HL &#x0003E; NH</td>
<td valign="top" align="center">66, &#x02212;6, 12</td>
<td valign="top" align="center">4.39</td>
<td valign="top" align="center">114</td>
<td valign="top" align="left">R. precentral gyrus (BA 22)</td>
</tr>
<tr>
<td valign="top" align="left">r sup pariet</td>
<td valign="top" align="left">NH &#x0003E; HL</td>
<td/>
<td/>
<td/>
<td valign="top" align="left">No significant regions</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">HL &#x0003E; NH</td>
<td/>
<td/>
<td/>
<td valign="top" align="left">No significant regions</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The seeds are listed for the canonical resting state networks first, followed by seeds from the task-based study. Reported regions are listed in Montreal Neurological Institute (MNI) coordinates and in terms of Brodmann areas (the MNI coordinates were converted to Talairach coordinates before the Brodmann areas were determined). The statistical threshold was set at p &#x0003C; 0.025 FWE corrected for multiple comparisons (to account for the two-tailed t-test). All regions were significant at the cluster level. Abbreviations: L, left; R, right; amyg, amygdala; inf pariet, inferior parietal; sup front, superior frontal; mid temp, middle temporal; sup pariet, superior parietal.</italic></p>
</table-wrap-foot>
</table-wrap>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Statistical parametric maps of the two-tailed two-sample <italic>t</italic>-tests of the resting state functional connectivity results. (A,B)</bold> show the results of the seeding analyses designed to examine the connectivity of the resting state networks, while <bold>(C,D)</bold> show correlations to the task-based seeds. The maps are displayed at the <italic>p</italic> &#x0003C; 0.001 uncorrected level for better visualization, but the clusters in the circles are corrected for multiple comparisons (<italic>p</italic> &#x0003C; 0.025 FWE). In <bold>(A)</bold>, the seed regions were located in the bilateral posterior intraparietal sulci and the bilateral frontal eye fields to examine connectivity in the dorsal attention network (DAN). For the default mode network (DMN) in <bold>(B)</bold>, seeds were located in the posterior cingulate and medial prefrontal cortices. <bold>(C,D)</bold> are labeled with the seed regions in the figure. (1) left postcentral gyrus, (2) left insula, (3) left middle frontal/precentral gyrus, (4) left middle occipital gyrus, (5) right precentral gyrus. MNI coordinates for the different subfigures are: <bold>(A)</bold> <italic>x</italic> &#x0003D; &#x02212;56, <italic>y</italic> &#x0003D; 6, <bold>(B)</bold> <italic>y</italic> &#x0003D; 6, <bold>(C)</bold> <italic>z</italic> &#x0003D; &#x02212;18 left, <bold>(D)</bold> <italic>y</italic> &#x0003D; &#x02212;6, <italic>z</italic> &#x0003D; 12. Abbreviations: DAN, dorsal attention network; DMN, default mode network; L SFG, left superior frontal gyrus; R MTG, right middle temporal gyrus.</p></caption>
<graphic xlink:href="fnsys-08-00010-g0004.tif"/>
</fig>
</sec>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>We used a combined task- and rest-based fMRI study to identify the influence of HL on auditory and emotion processing. Behavioral responses collected in the scanner revealed similar ratings for the unpleasant, pleasant, and neutral sounds in the two groups; both groups tended to rate more sounds as unpleasant relative to the other types of sounds. However, the HL group differed from the NH group in their response times, which were significantly slower for the affective sounds. Overlapping patterns of fMRI activation were observed in both groups when processing affective sounds compared to neutral sounds. The main finding for the emotion task was increased activation in the left amygdala/parahippocampal gyrus complex for the NH &#x0003E; HL (P &#x0002B; U &#x0003E; N) comparison, identified via the targeted ROI analysis. The reverse contrast, HL &#x0003E; NH (P &#x0002B; U &#x0003E; N), did not show increased activation within the limbic system, but rather revealed heightened responses in the right superior parietal lobule, right precuneus, and left inferior parietal lobule. The resting-state functional connectivity analysis in the same group of participants focused on the canonical intrinsic networks and on seed regions obtained from the task-based activation patterns. Among the typical intrinsic networks, the DMN and the DAN, but not the auditory network, showed differences between the groups. Seeds placed at the local maxima noted in the task-based analysis revealed decreased connectivity between the frontal cortex and other brain regions, with the exception of stronger connectivity between the right middle temporal gyrus and the right precentral gyrus in the HL group compared to the NH group. Our results suggest that HL may alter the emotional processing networks and lead to slower reaction times to affective stimuli.</p>
<sec>
<title>Emotion task</title>
<p>HL may reduce the engagement of the emotional processing system, either because of disordered processing of acoustic features or of valence features. Complex anatomical and functional connections exist between the auditory cortex and the limbic system, primarily with the amygdala (Amaral and Price, <xref ref-type="bibr" rid="B1">1984</xref>; Blood and Zatorre, <xref ref-type="bibr" rid="B5">2001</xref>; Koelsch et al., <xref ref-type="bibr" rid="B30">2006</xref>; Tschacher et al., <xref ref-type="bibr" rid="B52">2010</xref>; Kumar et al., <xref ref-type="bibr" rid="B31">2012</xref>). Forward projections from the auditory cortex to the amygdala have been shown to be modulated by acoustic features, whereas backward projections appear to be modulated by the valence of a sound (Kumar et al., <xref ref-type="bibr" rid="B31">2012</xref>). The forward and backward projections work in concert to interpret incoming affective stimuli (Kumar et al., <xref ref-type="bibr" rid="B31">2012</xref>). Sound deprivation may reduce the amount of acoustic or valence information available to this network, which may in turn dampen the emotional response, because individuals with HL may not receive all of the information necessary to elicit a robust emotional response. We investigated this hypothesis via a whole-brain analysis and a targeted ROI analysis of the auditory and limbic areas.</p>
<p>In our study, both positively and negatively valent sounds elicited greater engagement of the limbic system and faster response times, compared to neutral sounds, in NH individuals. This pattern differed in the HL group, which showed a decreased response in the temporal and frontal cortices (whole-brain analysis) and in the amygdala and parahippocampus (ROI analysis), and an increased response in the parietal cortices and precuneus, compared to the NH group when processing affective sounds (Table <xref ref-type="table" rid="T4">4</xref>). Similarly, in the behavioral responses, the reaction times for the P and U sounds were significantly slower in the HL group (Figure <xref ref-type="fig" rid="F1">1A</xref>). The behavioral data suggest that the advantageous faster processing of affective sounds found in the NH group does not occur in the HL group. The slower responses to pleasant and unpleasant sounds in the HL group may be due to a lack of fast, bottom-up engagement of the amygdala and other limbic structures during auditory processing. However, HL does not appear to affect the response to all sounds: the reaction times for neutral sounds were nearly identical in the two groups. Instead, the highly valent sounds were most affected, suggesting that it is the identification of valence information, rather than acoustic information, that is impacted by HL. Another possible explanation for the differential processing of affective sounds by the HL group is that the affective sounds may contain more energy or information in the high-frequency regions than the neutral sounds; difficulty processing this high-frequency information may have led to the longer reaction times in the HL group. To maintain ecological validity, we chose not to low-pass filter the sounds to compensate for the hearing loss of the HL group. In a previous study of mild-to-moderate HL (Husain et al., <xref ref-type="bibr" rid="B26">2011b</xref>), we filtered the sounds such that there was no energy at frequencies greater than 2 kHz. We found no difference in accuracy or reaction times between the HL and NH groups for a discrimination task; nevertheless, the fMRI activation patterns were different (Husain et al., <xref ref-type="bibr" rid="B26">2011b</xref>). In sum, regardless of the actual mechanism, our results suggest that HL may reduce engagement of the amygdala and result in slower reactions to positively and negatively valent sounds.</p>
<p>Another interesting aspect of the behavioral results was that an increased number of sounds were classified as unpleasant, which differed from the published normative data (Bradley and Lang, <xref ref-type="bibr" rid="B6">2007</xref>). There are at least two possible explanations for this finding. First, the normative scores were obtained in a young, NH population; it is therefore not surprising that the ratings of both groups of older, middle-aged participants in our study differed from these norms, pointing to an effect of age. Second, discomfort in the scanner may have influenced our participants to be more negative in their ratings. To tease apart these explanations and better understand the effect of aging on emotional processing, we intend to conduct a follow-up study with both young and older participants using stimuli from the IADS database.</p>
</sec>
<sec>
<title>Resting state data</title>
<p>Resting state functional connectivity demonstrated alterations in the frontal cortex in HL patients. Increased connectivity between frontal regions and the seed regions for the DAN and DMN was seen in HL patients compared to NH controls. A decreased correlation was seen between the left superior frontal cortex and the left middle occipital gyrus in HL subjects compared to controls. The relationship between HL and alterations in frontal connectivity may not be limited to baseline, as engagement of frontal regions was also apparent during the emotion task. The cause of these network alterations is not clear. The alterations could be purely attentional in nature, but they may also reflect interactions between the emotional and attentional systems. A study including an attentional task without an emotional component may help to clarify this.</p>
<p>Except for activation of the left temporal pole, we did not find evidence to support our expectation that the response of the auditory cortex would differ between the two groups when processing affective sounds. A separate resting-state functional connectivity analysis with seeds in the primary auditory cortices also failed to find significant connectivity differences at rest between the two groups. The lack of significant findings may relate to the mild-to-moderate severity of the HL in our study. To date, there have been no studies of resting state functional connectivity in patients with this level of HL. Deafness, however, has been investigated in this context: intrinsic connectivity has been shown to be impacted by deafness, both within and outside of the temporal cortex (Li et al., <xref ref-type="bibr" rid="B35">2013</xref>). The findings of that study are similar to those of our own. Deaf patients showed increased negative correlations between the middle superior temporal sulcus and the left middle occipital and right precentral gyri when compared to NH controls. In our study, the left middle occipital gyrus was less correlated with the left superior frontal gyrus in the HL group, whereas the right precentral gyrus was more correlated with the right middle temporal gyrus. The presence of altered connectivity in similar regions in both deaf patients and our mild-to-moderate HL patients warrants further resting state connectivity studies examining varying degrees of HL.</p>
<p>It is important to note that inferences about the directionality of the connections cannot be made from the present functional connectivity analysis. An effective connectivity analysis, perhaps implemented as a structural equation modeling or dynamic causal modeling, would be needed to examine directionality (Horwitz, <xref ref-type="bibr" rid="B24">2003</xref>). Our results suggest only a general alteration in connectivity between two related regions; differences in correlation may not be specifically due to coupling between the seed and a particular region, but may also arise due to the influence of a third region, or changes in noise, etc. (Friston, <xref ref-type="bibr" rid="B16">1994</xref>).</p>
<p>HL is positively correlated with age (Yueh et al., <xref ref-type="bibr" rid="B60">2003</xref>); age is therefore a potential confound in our research. A study examining connectivity in resting state networks (Onoda et al., <xref ref-type="bibr" rid="B39">2012</xref>) noted a significantly decreased correlation between the auditory resting state network and the salience network (which includes the insula, ventrolateral prefrontal cortex (VLPFC), thalamus, and cerebellum) with age. In addition, connections between regions of the salience network also weakened with age. The regions of the salience network relate to the processing of emotional stimuli, and it is possible that HL within the older population of that study is partially responsible for the observed results. The decreased correlation between regions of the DAN and the insula seen in our study fits well with the results of the aging study. In our own study, however, all participants were matched for age; we are therefore unable to parse out the effects of age from those of HL. More fMRI studies specifically addressing the effects of HL of varying severity, in different age groups, should be performed to clarify its impact on intrinsic and task-based functional networks. Subject motion in the scanner is another notable confound, with resting state data being particularly sensitive to its influence. Although we excluded data from participants who exhibited excessive head motion, included motion parameters as covariates of no interest in our statistical models, and found no significant differences in motion between the groups, it is possible that motion-related artifacts still affect our results, as shown by recent publications (Kundu et al., <xref ref-type="bibr" rid="B33">2012</xref>; Van Dijk et al., <xref ref-type="bibr" rid="B55">2012</xref>; Kundu et al., <xref ref-type="bibr" rid="B32">2013</xref>; Power et al., <xref ref-type="bibr" rid="B42">2014</xref>).
Future studies with more stringent data acquisition considerations and more advanced analytical methods will need to be conducted to fully account for the possibility of motion artifacts.</p>
</sec>
</sec>
<sec sec-type="conclusion" id="s5">
<title>Conclusion</title>
<p>Our results suggest that HL may affect emotional processing by decreasing amygdalar recruitment, resulting in slower reaction times to highly valent sounds. Although the HL group demonstrated slower response times to affective sounds, there was no difference in sound ratings between the HL and NH groups. Altered engagement of the frontal regions was also demonstrated in both the emotion task-based subtraction analysis and the resting state functional connectivity analysis. HL is the third most common chronic condition in older adults and is highly comorbid with another hearing disorder, tinnitus. Our results in those with unaided, mild-to-moderate HL have implications for auditory rehabilitation for hearing impairment, reducing social isolation in the elderly, and management strategies for tinnitus.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
</sec>
</body>
<back>
<ack>
<p>We wish to acknowledge the support of Tinnitus Research Consortium to Fatima T. Husain and of the NeuroEngineering NSF IGERT (Integrative Graduate Education and Research Traineeship) to Jake R. Carpenter-Thompson and Sara A. Schmidt. We are grateful to Kwaku Akrofi and Jaclyn Utz for their assistance in data acquisition.</p>
</ack>
<sec sec-type="supplementary material" id="s6">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="http://www.frontiersin.org/journal/10.3389/fnsys.2014.00010/abstract">http://www.frontiersin.org/journal/10.3389/fnsys.2014.00010/abstract</ext-link></p>
<supplementary-material xlink:href="DataSheet1.PDF" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Supplementary Table 1</label>
<caption><p><bold>Sounds included in the study</bold>. Separated by column are 30 pleasant, 30 unpleasant, and 30 neutral sounds chosen from the IADS database to be included in the study.</p></caption>
</supplementary-material>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amaral</surname> <given-names>D. G.</given-names></name> <name><surname>Price</surname> <given-names>J. L.</given-names></name></person-group> (<year>1984</year>). <article-title>Amygdalo-cortical projections in the monkey (<italic>Macaca fascicularis</italic>)</article-title>. <source>J. Comp. Neurol</source>. <volume>230</volume>, <fpage>465</fpage>&#x02013;<lpage>496</lpage>. <pub-id pub-id-type="doi">10.1002/cne.902300402</pub-id><pub-id pub-id-type="pmid">6520247</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Anticevic</surname> <given-names>A.</given-names></name> <name><surname>Repovs</surname> <given-names>G.</given-names></name> <name><surname>Barch</surname> <given-names>D. M.</given-names></name></person-group> (<year>2012</year>). <article-title>Emotion effects on attention, amygdala activation, and functional connectivity in schizophrenia</article-title>. <source>Schizophr. Bull</source>. <volume>38</volume>, <fpage>967</fpage>&#x02013;<lpage>980</lpage>. <pub-id pub-id-type="doi">10.1093/schbul/sbq168</pub-id><pub-id pub-id-type="pmid">21415225</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bartels</surname> <given-names>H.</given-names></name> <name><surname>Middel</surname> <given-names>B. L.</given-names></name> <name><surname>Van Der Laan</surname> <given-names>B. F.</given-names></name> <name><surname>Staal</surname> <given-names>M. J.</given-names></name> <name><surname>Albers</surname> <given-names>F. W.</given-names></name></person-group> (<year>2008</year>). <article-title>The additive effect of co-occurring anxiety and depression on health status, quality of life and coping strategies in help-seeking tinnitus sufferers</article-title>. <source>Ear Hear</source>. <volume>29</volume>, <fpage>947</fpage>&#x02013;<lpage>956</lpage>. <pub-id pub-id-type="doi">10.1097/AUD.0b013e3181888f83</pub-id><pub-id pub-id-type="pmid">18941410</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Beck</surname> <given-names>A. T.</given-names></name> <name><surname>Steer</surname> <given-names>R. A.</given-names></name></person-group> (<year>1984</year>). <article-title>Internal consistencies of the original and revised Beck Depression Inventory</article-title>. <source>J. Clin. Psychol</source>. <volume>40</volume>, <fpage>1365</fpage>&#x02013;<lpage>1367</lpage>. <pub-id pub-id-type="doi">10.1002/1097-4679(198411)40:6&#x0003C;1365::AID-JCLP2270400615&#x0003E;3.0.CO;2-D</pub-id><pub-id pub-id-type="pmid">6511949</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blood</surname> <given-names>A. J.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name></person-group> (<year>2001</year>). <article-title>Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>98</volume>, <fpage>11818</fpage>&#x02013;<lpage>11823</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.191355898</pub-id><pub-id pub-id-type="pmid">11573015</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bradley</surname> <given-names>M. M.</given-names></name> <name><surname>Lang</surname> <given-names>P. J.</given-names></name></person-group> (<year>2007</year>). <source>The International Affective Digitized Sounds (IADS-2): Affective Ratings of Sounds and Instruction Manual</source>. <publisher-loc>Gainesville, FL</publisher-loc>: <publisher-name>University of Florida</publisher-name>, Technical Report: B-3.</citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brett</surname> <given-names>M.</given-names></name> <name><surname>Anton</surname> <given-names>J. L.</given-names></name> <name><surname>Valabregue</surname> <given-names>R.</given-names></name> <name><surname>Poline</surname> <given-names>J. B.</given-names></name></person-group> (<year>2002</year>). <article-title>Region of interest analysis using an SPM toolbox</article-title>. <source>NeuroImage</source> <volume>16</volume>, <fpage>1140</fpage>&#x02013;<lpage>1141</lpage>.</citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Burton</surname> <given-names>H.</given-names></name> <name><surname>Wineland</surname> <given-names>A.</given-names></name> <name><surname>Bhattacharya</surname> <given-names>M.</given-names></name> <name><surname>Nicklaus</surname> <given-names>J.</given-names></name> <name><surname>Garcia</surname> <given-names>K. S.</given-names></name> <name><surname>Piccirillo</surname> <given-names>J. F.</given-names></name></person-group> (<year>2012</year>). <article-title>Altered networks in bothersome tinnitus: a functional connectivity study</article-title>. <source>BMC Neurosci</source>. <volume>13</volume>:<fpage>3</fpage>. <pub-id pub-id-type="doi">10.1186/1471-2202-13-3</pub-id><pub-id pub-id-type="pmid">22217183</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Carabellese</surname> <given-names>C.</given-names></name> <name><surname>Appollonio</surname> <given-names>I.</given-names></name> <name><surname>Rozzini</surname> <given-names>R.</given-names></name> <name><surname>Bianchetti</surname> <given-names>A.</given-names></name> <name><surname>Frisoni</surname> <given-names>G. B.</given-names></name> <name><surname>Frattola</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>1993</year>). <article-title>Sensory impairment and quality of life in a community elderly population</article-title>. <source>J. Am. Geriatr. Soc</source>. <volume>41</volume>, <fpage>401</fpage>&#x02013;<lpage>407</lpage>. <pub-id pub-id-type="pmid">8463527</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cruickshanks</surname> <given-names>K. J.</given-names></name> <name><surname>Wiley</surname> <given-names>T. L.</given-names></name> <name><surname>Tweed</surname> <given-names>T. S.</given-names></name> <name><surname>Klein</surname> <given-names>B. E.</given-names></name> <name><surname>Klein</surname> <given-names>R.</given-names></name> <name><surname>Mares-Perlman</surname> <given-names>J. A.</given-names></name> <etal/></person-group>. (<year>1998</year>). <article-title>Prevalence of hearing loss in older adults in Beaver Dam, Wisconsin. The Epidemiology of Hearing Loss Study</article-title>. <source>Am. J. Epidemiol</source>. <volume>148</volume>, <fpage>879</fpage>&#x02013;<lpage>886</lpage>. <pub-id pub-id-type="doi">10.1093/oxfordjournals.aje.a009713</pub-id><pub-id pub-id-type="pmid">9801018</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Davis</surname> <given-names>A.</given-names></name> <name><surname>Rafaie</surname> <given-names>E. A.</given-names></name></person-group> (<year>2000</year>). <article-title>Epidemiology of tinnitus</article-title>, in <source>Tinnitus Handbook</source>, ed <person-group person-group-type="editor"><name><surname>Tyler</surname> <given-names>R. S.</given-names></name></person-group> (<publisher-loc>San Diego, CA</publisher-loc>: <publisher-name>Singular</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>24</lpage>.</citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dolcos</surname> <given-names>F.</given-names></name> <name><surname>McCarthy</surname> <given-names>G.</given-names></name></person-group> (<year>2006</year>). <article-title>Brain systems mediating cognitive interference by emotional distraction</article-title>. <source>J. Neurosci</source>. <volume>26</volume>, <fpage>2072</fpage>&#x02013;<lpage>2079</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.5042-05.2006</pub-id><pub-id pub-id-type="pmid">16481440</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feldmann</surname> <given-names>H.</given-names></name> <name><surname>Kumpf</surname> <given-names>W.</given-names></name></person-group> (<year>1988</year>). <article-title>[Listening to music in hearing loss with and without a hearing aid]</article-title>. <source>Laryngol. Rhinol. Otol. (Stuttg)</source>. <volume>67</volume>, <fpage>489</fpage>&#x02013;<lpage>497</lpage>. <pub-id pub-id-type="doi">10.1055/s-2007-998547</pub-id><pub-id pub-id-type="pmid">3236982</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fox</surname> <given-names>M. D.</given-names></name> <name><surname>Snyder</surname> <given-names>A. Z.</given-names></name> <name><surname>Vincent</surname> <given-names>J. L.</given-names></name> <name><surname>Corbetta</surname> <given-names>M.</given-names></name> <name><surname>Van Essen</surname> <given-names>D. C.</given-names></name> <name><surname>Raichle</surname> <given-names>M. E.</given-names></name></person-group> (<year>2005</year>). <article-title>The human brain is intrinsically organized into dynamic, anticorrelated functional networks</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>102</volume>, <fpage>9673</fpage>&#x02013;<lpage>9678</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0504136102</pub-id><pub-id pub-id-type="pmid">15976020</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Franks</surname> <given-names>J. R.</given-names></name></person-group> (<year>1982</year>). <article-title>Judgments of hearing aid processed music</article-title>. <source>Ear Hear</source>. <volume>3</volume>, <fpage>18</fpage>&#x02013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1097/00003446-198201000-00004</pub-id><pub-id pub-id-type="pmid">7060840</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname> <given-names>K. J.</given-names></name></person-group> (<year>1994</year>). <article-title>Functional and effective connectivity in neuroimaging: a synthesis</article-title>. <source>Hum. Brain Mapp</source>. <volume>2</volume>, <fpage>56</fpage>&#x02013;<lpage>78</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.460020107</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gaab</surname> <given-names>N.</given-names></name> <name><surname>Gaser</surname> <given-names>C.</given-names></name> <name><surname>Zaehle</surname> <given-names>T.</given-names></name> <name><surname>Jancke</surname> <given-names>L.</given-names></name> <name><surname>Schlaug</surname> <given-names>G.</given-names></name></person-group> (<year>2003</year>). <article-title>Functional anatomy of pitch memory&#x02013;an fMRI study with sparse temporal sampling</article-title>. <source>Neuroimage</source> <volume>19</volume>, <fpage>1417</fpage>&#x02013;<lpage>1426</lpage>. <pub-id pub-id-type="doi">10.1016/S1053-8119(03)00224-6</pub-id><pub-id pub-id-type="pmid">12948699</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Giraud</surname> <given-names>A. L.</given-names></name> <name><surname>Chery-Croze</surname> <given-names>S.</given-names></name> <name><surname>Fischer</surname> <given-names>G.</given-names></name> <name><surname>Fischer</surname> <given-names>C.</given-names></name> <name><surname>Vighetto</surname> <given-names>A.</given-names></name> <name><surname>Gregoire</surname> <given-names>M. C.</given-names></name> <etal/></person-group>. (<year>1999</year>). <article-title>A selective imaging of tinnitus</article-title>. <source>Neuroreport</source> <volume>10</volume>, <fpage>1</fpage>&#x02013;<lpage>5</lpage>. <pub-id pub-id-type="doi">10.1097/00001756-199901180-00001</pub-id><pub-id pub-id-type="pmid">10094123</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Golm</surname> <given-names>D.</given-names></name> <name><surname>Schmidt-Samoa</surname> <given-names>C.</given-names></name> <name><surname>Dechent</surname> <given-names>P.</given-names></name> <name><surname>Kroner-Herwig</surname> <given-names>B.</given-names></name></person-group> (<year>2013</year>). <article-title>Neural correlates of tinnitus related distress: an fMRI-study</article-title>. <source>Hear. Res</source>. <volume>295</volume>, <fpage>87</fpage>&#x02013;<lpage>99</lpage>. <pub-id pub-id-type="doi">10.1016/j.heares.2012.03.003</pub-id><pub-id pub-id-type="pmid">22445697</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gopinath</surname> <given-names>B.</given-names></name> <name><surname>Wang</surname> <given-names>J. J.</given-names></name> <name><surname>Schneider</surname> <given-names>J.</given-names></name> <name><surname>Burlutsky</surname> <given-names>G.</given-names></name> <name><surname>Snowdon</surname> <given-names>J.</given-names></name> <name><surname>McMahon</surname> <given-names>C. M.</given-names></name> <etal/></person-group>. (<year>2009</year>). <article-title>Depressive symptoms in older adults with hearing impairments: the Blue Mountains Study</article-title>. <source>J. Am. Geriatr. Soc</source>. <volume>57</volume>, <fpage>1306</fpage>&#x02013;<lpage>1308</lpage>. <pub-id pub-id-type="doi">10.1111/j.1532-5415.2009.02317.x</pub-id><pub-id pub-id-type="pmid">19570163</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Greicius</surname> <given-names>M.</given-names></name></person-group> (<year>2008</year>). <article-title>Resting-state functional connectivity in neuropsychiatric disorders</article-title>. <source>Curr. Opin. Neurol</source>. <volume>21</volume>, <fpage>424</fpage>&#x02013;<lpage>430</lpage>. <pub-id pub-id-type="doi">10.1097/WCO.0b013e328306f2c5</pub-id><pub-id pub-id-type="pmid">18607202</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hall</surname> <given-names>D. A.</given-names></name> <name><surname>Haggard</surname> <given-names>M. P.</given-names></name> <name><surname>Akeroyd</surname> <given-names>M. A.</given-names></name> <name><surname>Palmer</surname> <given-names>A. R.</given-names></name> <name><surname>Summerfield</surname> <given-names>A. Q.</given-names></name> <name><surname>Elliott</surname> <given-names>M. R.</given-names></name> <etal/></person-group>. (<year>1999</year>). <article-title>&#x0201C;Sparse&#x0201D; temporal sampling in auditory fMRI</article-title>. <source>Hum. Brain Mapp</source>. <volume>7</volume>, <fpage>213</fpage>&#x02013;<lpage>223</lpage>. <pub-id pub-id-type="doi">10.1002/(SICI)1097-0193(1999)7:3&#x0003C;213::AID-HBM5&#x0003E;3.0.CO;2-N</pub-id><pub-id pub-id-type="pmid">10194620</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hicks</surname> <given-names>C. B.</given-names></name> <name><surname>Tharpe</surname> <given-names>A. M.</given-names></name></person-group> (<year>2002</year>). <article-title>Listening effort and fatigue in school-age children with and without hearing loss</article-title>. <source>J. Speech Lang. Hear. Res</source>. <volume>45</volume>, <fpage>573</fpage>. <pub-id pub-id-type="doi">10.1044/1092-4388(2002/046)</pub-id><pub-id pub-id-type="pmid">12069009</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Horwitz</surname> <given-names>B.</given-names></name></person-group> (<year>2003</year>). <article-title>The elusive concept of brain connectivity</article-title>. <source>Neuroimage</source> <volume>19</volume>, <fpage>466</fpage>&#x02013;<lpage>470</lpage>. <pub-id pub-id-type="doi">10.1016/S1053-8119(03)00112-5</pub-id><pub-id pub-id-type="pmid">12814595</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Husain</surname> <given-names>F. T.</given-names></name> <name><surname>Medina</surname> <given-names>R. E.</given-names></name> <name><surname>Davis</surname> <given-names>C. W.</given-names></name> <name><surname>Szymko-Bennett</surname> <given-names>Y.</given-names></name> <name><surname>Simonyan</surname> <given-names>K.</given-names></name> <name><surname>Pajor</surname> <given-names>N. M.</given-names></name> <etal/></person-group>. (<year>2011a</year>). <article-title>Neuroanatomical changes due to hearing loss and chronic tinnitus: a combined VBM and DTI study</article-title>. <source>Brain Res</source>. <volume>1369</volume>, <fpage>74</fpage>&#x02013;<lpage>88</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2010.10.095</pub-id><pub-id pub-id-type="pmid">21047501</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Husain</surname> <given-names>F. T.</given-names></name> <name><surname>Pajor</surname> <given-names>N. M.</given-names></name> <name><surname>Smith</surname> <given-names>J. F.</given-names></name> <name><surname>Kim</surname> <given-names>H. J.</given-names></name> <name><surname>Rudy</surname> <given-names>S.</given-names></name> <name><surname>Zalewski</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2011b</year>). <article-title>Discrimination task reveals differences in neural bases of tinnitus and hearing impairment</article-title>. <source>PLoS ONE</source> <volume>6</volume>:<fpage>e26639</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0026639</pub-id><pub-id pub-id-type="pmid">22066003</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Husain</surname> <given-names>F. T.</given-names></name> <name><surname>Patkin</surname> <given-names>D. J.</given-names></name> <name><surname>Thai-Van</surname> <given-names>H.</given-names></name> <name><surname>Braun</surname> <given-names>A. R.</given-names></name> <name><surname>Horwitz</surname> <given-names>B.</given-names></name></person-group> (<year>2009</year>). <article-title>Distinguishing the processing of gestures from signs in deaf individuals: an fMRI study</article-title>. <source>Brain Res</source>. <volume>1276</volume>, <fpage>140</fpage>&#x02013;<lpage>150</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2009.04.034</pub-id><pub-id pub-id-type="pmid">19397900</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Husain</surname> <given-names>F. T.</given-names></name> <name><surname>Schmidt</surname> <given-names>S. A.</given-names></name></person-group> (<year>2013</year>). <article-title>Using resting state functional connectivity to unravel networks of tinnitus</article-title>. <source>Hear. Res</source>. <volume>307</volume>, <fpage>154</fpage>&#x02013;<lpage>162</lpage>. <pub-id pub-id-type="doi">10.1016/j.heares.2013.07.010</pub-id><pub-id pub-id-type="pmid">23895873</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jastreboff</surname> <given-names>P. J.</given-names></name></person-group> (<year>1990</year>). <article-title>Phantom auditory perception (tinnitus): mechanisms of generation and perception</article-title>. <source>Neurosci. Res</source>. <volume>8</volume>, <fpage>221</fpage>&#x02013;<lpage>254</lpage>. <pub-id pub-id-type="doi">10.1016/0168-0102(90)90031-9</pub-id><pub-id pub-id-type="pmid">2175858</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koelsch</surname> <given-names>S.</given-names></name> <name><surname>Fritz</surname> <given-names>T.</given-names></name> <name><surname>Dy</surname> <given-names>V. C.</given-names></name> <name><surname>Muller</surname> <given-names>K.</given-names></name> <name><surname>Friederici</surname> <given-names>A. D.</given-names></name></person-group> (<year>2006</year>). <article-title>Investigating emotion with music: an fMRI study</article-title>. <source>Hum. Brain Mapp</source>. <volume>27</volume>, <fpage>239</fpage>&#x02013;<lpage>250</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.20180</pub-id><pub-id pub-id-type="pmid">16078183</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kumar</surname> <given-names>S.</given-names></name> <name><surname>Von Kriegstein</surname> <given-names>K.</given-names></name> <name><surname>Friston</surname> <given-names>K.</given-names></name> <name><surname>Griffiths</surname> <given-names>T. D.</given-names></name></person-group> (<year>2012</year>). <article-title>Features versus feelings: dissociable representations of the acoustic features and valence of aversive sounds</article-title>. <source>J. Neurosci</source>. <volume>32</volume>, <fpage>14184</fpage>&#x02013;<lpage>14192</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1759-12.2012</pub-id><pub-id pub-id-type="pmid">23055488</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kundu</surname> <given-names>P.</given-names></name> <name><surname>Brenowitz</surname> <given-names>N. D.</given-names></name> <name><surname>Voon</surname> <given-names>V.</given-names></name> <name><surname>Worbe</surname> <given-names>Y.</given-names></name> <name><surname>Vertes</surname> <given-names>P. E.</given-names></name> <name><surname>Inati</surname> <given-names>S. J.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>Integrated strategy for improving functional connectivity mapping using multiecho fMRI</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>110</volume>, <fpage>16187</fpage>&#x02013;<lpage>16192</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1301725110</pub-id><pub-id pub-id-type="pmid">24038744</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kundu</surname> <given-names>P.</given-names></name> <name><surname>Inati</surname> <given-names>S. J.</given-names></name> <name><surname>Evans</surname> <given-names>J. W.</given-names></name> <name><surname>Luh</surname> <given-names>W. M.</given-names></name> <name><surname>Bandettini</surname> <given-names>P. A.</given-names></name></person-group> (<year>2012</year>). <article-title>Differentiating BOLD and non-BOLD signals in fMRI time series using multi-echo EPI</article-title>. <source>Neuroimage</source> <volume>60</volume>, <fpage>1759</fpage>&#x02013;<lpage>1770</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2011.12.028</pub-id><pub-id pub-id-type="pmid">22209809</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Leek</surname> <given-names>M. R.</given-names></name> <name><surname>Molis</surname> <given-names>M. R.</given-names></name> <name><surname>Kubli</surname> <given-names>L. R.</given-names></name> <name><surname>Tufts</surname> <given-names>J. B.</given-names></name></person-group> (<year>2008</year>). <article-title>Enjoyment of music by elderly hearing-impaired listeners</article-title>. <source>J. Am. Acad. Audiol</source>. <volume>19</volume>, <fpage>519</fpage>&#x02013;<lpage>526</lpage>. <pub-id pub-id-type="doi">10.3766/jaaa.19.6.7</pub-id><pub-id pub-id-type="pmid">19253784</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Booth</surname> <given-names>J. R.</given-names></name> <name><surname>Peng</surname> <given-names>D.</given-names></name> <name><surname>Zang</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>J.</given-names></name> <name><surname>Yan</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>Altered intra- and inter-regional synchronization of superior temporal cortex in deaf people</article-title>. <source>Cereb. Cortex</source> <volume>23</volume>, <fpage>1988</fpage>&#x02013;<lpage>1996</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhs185</pub-id><pub-id pub-id-type="pmid">22767633</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mather</surname> <given-names>M.</given-names></name> <name><surname>Knight</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <article-title>Goal-directed memory: the role of cognitive control in older adults&#x00027; emotional memory</article-title>. <source>Psychol. Aging</source> <volume>20</volume>, <fpage>554</fpage>&#x02013;<lpage>570</lpage>. <pub-id pub-id-type="doi">10.1037/0882-7974.20.4.554</pub-id><pub-id pub-id-type="pmid">16420131</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mirz</surname> <given-names>F.</given-names></name> <name><surname>Gjedde</surname> <given-names>A.</given-names></name> <name><surname>Ishizu</surname> <given-names>K.</given-names></name> <name><surname>Pedersen</surname> <given-names>C. B.</given-names></name></person-group> (<year>2000</year>). <article-title>Cortical networks subserving the perception of tinnitus&#x02013;a PET study</article-title>. <source>Acta Otolaryngol. Suppl</source>. <volume>543</volume>, <fpage>241</fpage>&#x02013;<lpage>243</lpage>. <pub-id pub-id-type="doi">10.1080/000164800454503</pub-id><pub-id pub-id-type="pmid">10909031</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mulrow</surname> <given-names>C. D.</given-names></name> <name><surname>Aguilar</surname> <given-names>C.</given-names></name> <name><surname>Endicott</surname> <given-names>J. E.</given-names></name> <name><surname>Velez</surname> <given-names>R.</given-names></name> <name><surname>Tuley</surname> <given-names>M. R.</given-names></name> <name><surname>Charlip</surname> <given-names>W. S.</given-names></name> <etal/></person-group>. (<year>1990</year>). <article-title>Association between hearing impairment and the quality of life of elderly individuals</article-title>. <source>J. Am. Geriatr. Soc</source>. <volume>38</volume>, <fpage>45</fpage>&#x02013;<lpage>50</lpage>. <pub-id pub-id-type="pmid">2295767</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Onoda</surname> <given-names>K.</given-names></name> <name><surname>Ishihara</surname> <given-names>M.</given-names></name> <name><surname>Yamaguchi</surname> <given-names>S.</given-names></name></person-group> (<year>2012</year>). <article-title>Decreased functional connectivity by aging is associated with cognitive decline</article-title>. <source>J. Cogn. Neurosci</source>. <volume>24</volume>, <fpage>2186</fpage>&#x02013;<lpage>2198</lpage>. <pub-id pub-id-type="doi">10.1162/jocn_a_00269</pub-id><pub-id pub-id-type="pmid">22784277</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peelle</surname> <given-names>J. E.</given-names></name> <name><surname>Troiani</surname> <given-names>V.</given-names></name> <name><surname>Grossman</surname> <given-names>M.</given-names></name> <name><surname>Wingfield</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Hearing loss in older adults affects neural systems supporting speech comprehension</article-title>. <source>J. Neurosci</source>. <volume>31</volume>, <fpage>12638</fpage>&#x02013;<lpage>12643</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.2559-11.2011</pub-id><pub-id pub-id-type="pmid">21880924</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Petitto</surname> <given-names>L. A.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name> <name><surname>Gauna</surname> <given-names>K.</given-names></name> <name><surname>Nikelski</surname> <given-names>E. J.</given-names></name> <name><surname>Dostie</surname> <given-names>D.</given-names></name> <name><surname>Evans</surname> <given-names>A. C.</given-names></name></person-group> (<year>2000</year>). <article-title>Speech-like cerebral activity in profoundly deaf people processing signed languages: implications for the neural basis of human language</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>97</volume>, <fpage>13961</fpage>&#x02013;<lpage>13966</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.97.25.13961</pub-id><pub-id pub-id-type="pmid">11106400</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Power</surname> <given-names>J. D.</given-names></name> <name><surname>Mitra</surname> <given-names>A.</given-names></name> <name><surname>Laumann</surname> <given-names>T. O.</given-names></name> <name><surname>Snyder</surname> <given-names>A. Z.</given-names></name> <name><surname>Schlaggar</surname> <given-names>B. L.</given-names></name> <name><surname>Petersen</surname> <given-names>S. E.</given-names></name></person-group> (<year>2014</year>). <article-title>Methods to detect, characterize, and remove motion artifact in resting state fMRI</article-title>. <source>Neuroimage</source> <volume>84</volume>, <fpage>320</fpage>&#x02013;<lpage>341</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2013.08.048</pub-id><pub-id pub-id-type="pmid">23994314</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Raichle</surname> <given-names>M. E.</given-names></name> <name><surname>Macleod</surname> <given-names>A. M.</given-names></name> <name><surname>Snyder</surname> <given-names>A. Z.</given-names></name> <name><surname>Powers</surname> <given-names>W. J.</given-names></name> <name><surname>Gusnard</surname> <given-names>D. A.</given-names></name> <name><surname>Shulman</surname> <given-names>G. L.</given-names></name></person-group> (<year>2001</year>). <article-title>A default mode of brain function</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>98</volume>, <fpage>676</fpage>&#x02013;<lpage>682</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.98.2.676</pub-id><pub-id pub-id-type="pmid">11209064</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rauschecker</surname> <given-names>J. P.</given-names></name> <name><surname>Leaver</surname> <given-names>A. M.</given-names></name> <name><surname>Muhlau</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Tuning out the noise: limbic-auditory interactions in tinnitus</article-title>. <source>Neuron</source> <volume>66</volume>, <fpage>819</fpage>&#x02013;<lpage>826</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2010.04.032</pub-id><pub-id pub-id-type="pmid">20620868</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Rutledge</surname> <given-names>K. L.</given-names></name></person-group> (<year>2009</year>). <source>A Music Listening Questionnaire for Hearing Aid Users</source>. Unpublished Master&#x00027;s Thesis. University of Canterbury, Christchurch. Available online at: <ext-link ext-link-type="uri" xlink:href="http://hdl.handle.net/10092/3194">http://hdl.handle.net/10092/3194</ext-link></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schmidt</surname> <given-names>S. A.</given-names></name> <name><surname>Akrofi</surname> <given-names>K.</given-names></name> <name><surname>Carpenter-Thompson</surname> <given-names>J. R.</given-names></name> <name><surname>Husain</surname> <given-names>F. T.</given-names></name></person-group> (<year>2013</year>). <article-title>Default mode, dorsal attention and auditory resting state networks exhibit differential functional connectivity in tinnitus and hearing loss</article-title>. <source>PLoS ONE</source> <volume>8</volume>:<fpage>e76488</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0076488</pub-id><pub-id pub-id-type="pmid">24098513</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Seydell-Greenwald</surname> <given-names>A.</given-names></name> <name><surname>Leaver</surname> <given-names>A. M.</given-names></name> <name><surname>Turesky</surname> <given-names>T. K.</given-names></name> <name><surname>Morgan</surname> <given-names>S.</given-names></name> <name><surname>Kim</surname> <given-names>H. J.</given-names></name> <name><surname>Rauschecker</surname> <given-names>J. P.</given-names></name></person-group> (<year>2012</year>). <article-title>Functional MRI evidence for a role of ventral prefrontal cortex in tinnitus</article-title>. <source>Brain Res</source>. <volume>1485</volume>, <fpage>22</fpage>&#x02013;<lpage>39</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2012.08.052</pub-id><pub-id pub-id-type="pmid">22982009</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Steer</surname> <given-names>R. A.</given-names></name> <name><surname>Clark</surname> <given-names>D. A.</given-names></name> <name><surname>Beck</surname> <given-names>A. T.</given-names></name> <name><surname>Ranieri</surname> <given-names>W. F.</given-names></name></person-group> (<year>1999</year>). <article-title>Common and specific dimensions of self-reported anxiety and depression: the BDI-II versus the BDI-IA</article-title>. <source>Behav. Res. Ther</source>. <volume>37</volume>, <fpage>183</fpage>&#x02013;<lpage>190</lpage>. <pub-id pub-id-type="doi">10.1016/S0005-7967(98)00087-4</pub-id><pub-id pub-id-type="pmid">9990749</pub-id></citation>
</ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Steer</surname> <given-names>R. A.</given-names></name> <name><surname>Ranieri</surname> <given-names>W. F.</given-names></name> <name><surname>Beck</surname> <given-names>A. T.</given-names></name> <name><surname>Clark</surname> <given-names>D. A.</given-names></name></person-group> (<year>1993</year>). <article-title>Further evidence for the validity of the Beck Anxiety Inventory with psychiatric outpatients</article-title>. <source>J. Anxiety Disord</source>. <volume>7</volume>, <fpage>195</fpage>&#x02013;<lpage>205</lpage>. <pub-id pub-id-type="doi">10.1016/0887-6185(93)90002-3</pub-id></citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stevens</surname> <given-names>G.</given-names></name> <name><surname>Flaxman</surname> <given-names>S.</given-names></name> <name><surname>Brunskill</surname> <given-names>E.</given-names></name> <name><surname>Mascarenhas</surname> <given-names>M.</given-names></name> <name><surname>Mathers</surname> <given-names>C. D.</given-names></name> <name><surname>Finucane</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>Global and regional hearing impairment prevalence: an analysis of 42 studies in 29 countries</article-title>. <source>Eur. J. Public Health</source> <volume>23</volume>, <fpage>146</fpage>&#x02013;<lpage>152</lpage>. <pub-id pub-id-type="doi">10.1093/eurpub/ckr176</pub-id><pub-id pub-id-type="pmid">22197756</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>St Jacques</surname> <given-names>P.</given-names></name> <name><surname>Dolcos</surname> <given-names>F.</given-names></name> <name><surname>Cabeza</surname> <given-names>R.</given-names></name></person-group> (<year>2010</year>). <article-title>Effects of aging on functional connectivity of the amygdala during negative evaluation: a network analysis of fMRI data</article-title>. <source>Neurobiol. Aging</source> <volume>31</volume>, <fpage>315</fpage>&#x02013;<lpage>327</lpage>. <pub-id pub-id-type="doi">10.1016/j.neurobiolaging.2008.03.012</pub-id><pub-id pub-id-type="pmid">18455837</pub-id></citation>
</ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tschacher</surname> <given-names>W.</given-names></name> <name><surname>Schildt</surname> <given-names>M.</given-names></name> <name><surname>Sander</surname> <given-names>K.</given-names></name></person-group> (<year>2010</year>). <article-title>Brain connectivity in listening to affective stimuli: a functional magnetic resonance imaging (fMRI) study and implications for psychotherapy</article-title>. <source>Psychother. Res</source>. <volume>20</volume>, <fpage>576</fpage>&#x02013;<lpage>588</lpage>. <pub-id pub-id-type="doi">10.1080/10503307.2010.493538</pub-id><pub-id pub-id-type="pmid">20845228</pub-id></citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tun</surname> <given-names>P. A.</given-names></name> <name><surname>McCoy</surname> <given-names>S.</given-names></name> <name><surname>Wingfield</surname> <given-names>A.</given-names></name></person-group> (<year>2009</year>). <article-title>Aging, hearing acuity, and the attentional costs of effortful listening</article-title>. <source>Psychol. Aging</source> <volume>24</volume>, <fpage>761</fpage>&#x02013;<lpage>766</lpage>. <pub-id pub-id-type="doi">10.1037/a0014802</pub-id><pub-id pub-id-type="pmid">19739934</pub-id></citation>
</ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Uys</surname> <given-names>M.</given-names></name> <name><surname>Pottas</surname> <given-names>L.</given-names></name> <name><surname>Vinck</surname> <given-names>B.</given-names></name> <name><surname>Van Dijk</surname> <given-names>C.</given-names></name></person-group> (<year>2012</year>). <article-title>The influence of non-linear frequency compression on the perception of music by adults with a moderate to severe hearing loss: subjective impressions</article-title>. <source>S. Afr. J. Commun. Disord</source>. <volume>59</volume>, <fpage>53</fpage>&#x02013;<lpage>67</lpage>. <pub-id pub-id-type="doi">10.7196/sajcd.119</pub-id><pub-id pub-id-type="pmid">23409619</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Dijk</surname> <given-names>K. R.</given-names></name> <name><surname>Sabuncu</surname> <given-names>M. R.</given-names></name> <name><surname>Buckner</surname> <given-names>R. L.</given-names></name></person-group> (<year>2012</year>). <article-title>The influence of head motion on intrinsic functional connectivity MRI</article-title>. <source>Neuroimage</source> <volume>59</volume>, <fpage>431</fpage>&#x02013;<lpage>438</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2011.07.044</pub-id><pub-id pub-id-type="pmid">21810475</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Weisz</surname> <given-names>N.</given-names></name> <name><surname>Voss</surname> <given-names>S.</given-names></name> <name><surname>Berg</surname> <given-names>P.</given-names></name> <name><surname>Elbert</surname> <given-names>T.</given-names></name></person-group> (<year>2004</year>). <article-title>Abnormal auditory mismatch response in tinnitus sufferers with high-frequency hearing loss is associated with subjective distress level</article-title>. <source>BMC Neurosci</source>. <volume>5</volume>:<fpage>8</fpage>. <pub-id pub-id-type="doi">10.1186/1471-2202-5-8</pub-id><pub-id pub-id-type="pmid">15113455</pub-id></citation>
</ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Whitfield-Gabrieli</surname> <given-names>S.</given-names></name> <name><surname>Nieto-Castanon</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>Conn: a functional connectivity toolbox for correlated and anticorrelated brain networks</article-title>. <source>Brain Connect</source>. <volume>2</volume>, <fpage>125</fpage>&#x02013;<lpage>141</lpage>. <pub-id pub-id-type="doi">10.1089/brain.2012.0073</pub-id><pub-id pub-id-type="pmid">22642651</pub-id></citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wong</surname> <given-names>P. C.</given-names></name> <name><surname>Ettlinger</surname> <given-names>M.</given-names></name> <name><surname>Sheppard</surname> <given-names>J. P.</given-names></name> <name><surname>Gunasekera</surname> <given-names>G. M.</given-names></name> <name><surname>Dhar</surname> <given-names>S.</given-names></name></person-group> (<year>2010</year>). <article-title>Neuroanatomical characteristics and speech perception in noise in older adults</article-title>. <source>Ear Hear</source>. <volume>31</volume>, <fpage>471</fpage>&#x02013;<lpage>479</lpage>. <pub-id pub-id-type="doi">10.1097/AUD.0b013e3181d709c2</pub-id><pub-id pub-id-type="pmid">20588117</pub-id></citation>
</ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yoneda</surname> <given-names>M.</given-names></name> <name><surname>Ikawa</surname> <given-names>M.</given-names></name> <name><surname>Arakawa</surname> <given-names>K.</given-names></name> <name><surname>Kudo</surname> <given-names>T.</given-names></name> <name><surname>Kimura</surname> <given-names>H.</given-names></name> <name><surname>Fujibayashi</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title><italic>In vivo</italic> functional brain imaging and a therapeutic trial of L-arginine in MELAS patients</article-title>. <source>Biochim. Biophys. Acta</source> <volume>1820</volume>, <fpage>615</fpage>&#x02013;<lpage>618</lpage>. <pub-id pub-id-type="doi">10.1016/j.bbagen.2011.04.018</pub-id><pub-id pub-id-type="pmid">21600268</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yueh</surname> <given-names>B.</given-names></name> <name><surname>Shapiro</surname> <given-names>N.</given-names></name> <name><surname>Maclean</surname> <given-names>C. H.</given-names></name> <name><surname>Shekelle</surname> <given-names>P. G.</given-names></name></person-group> (<year>2003</year>). <article-title>Screening and management of adult hearing loss in primary care: scientific review</article-title>. <source>JAMA</source> <volume>289</volume>, <fpage>1976</fpage>&#x02013;<lpage>1985</lpage>. <pub-id pub-id-type="doi">10.1001/jama.289.15.1976</pub-id><pub-id pub-id-type="pmid">12697801</pub-id></citation>
</ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zaehle</surname> <given-names>T.</given-names></name> <name><surname>Wustenberg</surname> <given-names>T.</given-names></name> <name><surname>Meyer</surname> <given-names>M.</given-names></name> <name><surname>Jancke</surname> <given-names>L.</given-names></name></person-group> (<year>2004</year>). <article-title>Evidence for rapid auditory perception as the foundation of speech processing: a sparse temporal sampling fMRI study</article-title>. <source>Eur. J. Neurosci</source>. <volume>20</volume>, <fpage>2447</fpage>&#x02013;<lpage>2456</lpage>. <pub-id pub-id-type="doi">10.1111/j.1460-9568.2004.03687.x</pub-id><pub-id pub-id-type="pmid">15525285</pub-id></citation>
</ref>
</ref-list>
</back>
</article>
