<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2017.00153</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Listening to Rhythmic Music Reduces Connectivity within the Basal Ganglia and the Reward System</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Brodal</surname> <given-names>Hans P.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/424641/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Osnes</surname> <given-names>Berge</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/392875/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Specht</surname> <given-names>Karsten</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1421/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Biological and Medical Psychology, University of Bergen</institution> <country>Bergen, Norway</country></aff>
<aff id="aff2"><sup>2</sup><institution>Bj&#x000F8;rgvin District Psychiatric Centre, Haukeland University Hospital</institution> <country>Bergen, Norway</country></aff>
<aff id="aff3"><sup>3</sup><institution>Department of Clinical Engineering, Haukeland University Hospital</institution> <country>Bergen, Norway</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Simone Dalla Bella, University of Montpellier 1, France</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Reyna L. Gordon, Vanderbilt University, USA; Boris Kleber, Aarhus University, Denmark</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Karsten Specht <email>karsten.specht&#x00040;psybp.uib.no</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Neuroscience</p></fn></author-notes>
<pub-date pub-type="epub">
<day>28</day>
<month>03</month>
<year>2017</year>
</pub-date>
<pub-date pub-type="collection">
<year>2017</year>
</pub-date>
<volume>11</volume>
<elocation-id>153</elocation-id>
<history>
<date date-type="received">
<day>14</day>
<month>08</month>
<year>2016</year>
</date>
<date date-type="accepted">
<day>09</day>
<month>03</month>
<year>2017</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2017 Brodal, Osnes and Specht.</copyright-statement>
<copyright-year>2017</copyright-year>
<copyright-holder>Brodal, Osnes and Specht</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>Music can trigger emotional responses in a more direct way than any other stimulus. In particular, music-evoked pleasure involves brain networks that are part of the reward system. Furthermore, rhythmic music stimulates the basal ganglia and may trigger involuntary movements to the beat. In the present study, we created a continuously playing rhythmic, dance floor-like composition in which the ambient noise from the MR scanner was incorporated as an additional instrument of rhythm. By treating this continuous stimulation paradigm as a variant of resting-state, the data were analyzed with stochastic dynamic causal modeling (sDCM), which was used to explore functional dependencies and interactions between core areas of auditory perception, rhythm processing, and reward processing. The sDCM model was a fully connected model comprising the following areas of both hemispheres: auditory cortex, putamen/pallidum, and ventral striatum/nucleus accumbens. The resulting parameter estimates were compared to those from ordinary resting-state data acquired without additional continuous stimulation. Besides reduced connectivity within the basal ganglia, the results indicated reduced functional connectivity of the reward system, namely of the right ventral striatum/nucleus accumbens, from and to the basal ganglia and the auditory network, while listening to rhythmic music. In addition, the right ventral striatum/nucleus accumbens also demonstrated a change in its hemodynamic parameters, reflecting an increased level of activation. These converging results may indicate that the dopaminergic reward system reduces its functional connectivity and relinquishes its constraints on other areas when we listen to rhythmic music.</p></abstract>
<kwd-group>
<kwd>music</kwd>
<kwd>rhythm</kwd>
<kwd>basal ganglia</kwd>
<kwd>reward system</kwd>
<kwd>ventral striatum</kwd>
<kwd>nucleus accumbens</kwd>
<kwd>fMRI</kwd>
<kwd>dynamic causal modeling</kwd>
</kwd-group>
<contract-num rid="cn001">217932</contract-num>
<contract-sponsor id="cn001">Norges Forskningsr&#x000E5;d<named-content content-type="fundref-id">10.13039/501100005416</named-content></contract-sponsor>
<contract-sponsor id="cn002">Bergens Forskningsstiftelse<named-content content-type="fundref-id">10.13039/501100006475</named-content></contract-sponsor>
<counts>
<fig-count count="1"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="44"/>
<page-count count="7"/>
<word-count count="5831"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>Playing and enjoying music is a universal phenomenon that can be found in all known present and past cultures (Gray et al., <xref ref-type="bibr" rid="B16">2001</xref>; Zatorre and Krumhansl, <xref ref-type="bibr" rid="B42">2002</xref>; Fritz et al., <xref ref-type="bibr" rid="B13">2009</xref>), and emotional responses to music are an experience that almost everyone has felt. Further, in all cultures, across all ages, and without any requirement of musical expertise, people easily dance and synchronize their body movements to rhythmic music (Phillips-Silver and Trainor, <xref ref-type="bibr" rid="B31">2005</xref>). Consequently, the past two decades have seen an increasing interest in mapping the neuroanatomical correlates of music processing, with the aim of understanding the underlying mechanisms that make music such a unique stimulus in human culture and&#x02014;although controversially discussed&#x02014;human evolution (Levitin and Tirovolas, <xref ref-type="bibr" rid="B24">2009</xref>). It turns out that the processing of music activates not only the auditory system but also a complex and distributed network covering cortical and sub-cortical areas (Peretz and Zatorre, <xref ref-type="bibr" rid="B30">2005</xref>; Zatorre and McGill, <xref ref-type="bibr" rid="B44">2005</xref>; Koelsch, <xref ref-type="bibr" rid="B20">2011</xref>, <xref ref-type="bibr" rid="B21">2014</xref>; Zatorre, <xref ref-type="bibr" rid="B40">2015</xref>). Two aspects have received particular attention in recent years: the processing of rhythm and the emotions evoked by music.</p>
<p>It is a daily experience that listening to predominantly rhythmic music often spontaneously triggers toe tapping, foot tapping, or head nodding in synchrony with the beat, and is perceived as pleasurable. Performing rhythmic movements in synchronization with a perceived beat is an almost automatic and often involuntary process that does not require cognitive awareness (Phillips-Silver and Trainor, <xref ref-type="bibr" rid="B31">2005</xref>; Trost et al., <xref ref-type="bibr" rid="B37">2014</xref>). It requires no musical skill or formal musical training, and is most easily triggered by auditory stimulation. It has been shown that rhythm processing, i.e., both the production of rhythmic movements and the perception of rhythmic sounds, activates the basal ganglia, in particular the putamen, as well as the supplementary motor area (SMA), pre-SMA, and cerebellum (Grahn and Brett, <xref ref-type="bibr" rid="B14">2007</xref>; Zatorre et al., <xref ref-type="bibr" rid="B41">2007</xref>; Chen et al., <xref ref-type="bibr" rid="B6">2008a</xref>,<xref ref-type="bibr" rid="B7">b</xref>; Grahn and Rowe, <xref ref-type="bibr" rid="B15">2009</xref>); other substructures, such as the caudate nucleus, have also been identified as encoding musical meter (Trost et al., <xref ref-type="bibr" rid="B37">2014</xref>). Moreover, studies have shown that the perception of rhythms at a preferred tempo increases activity in the premotor cortex (Kornysheva et al., <xref ref-type="bibr" rid="B23">2010</xref>).</p>
<p>Besides rhythm, there is mounting evidence that listening to music induces emotional experiences and can interact with affective systems. Neuroanatomically, these processes are tightly linked not only to the amygdala, but to a network that is involved in expecting and experiencing reward (Levitin and Tirovolas, <xref ref-type="bibr" rid="B24">2009</xref>; Chanda and Levitin, <xref ref-type="bibr" rid="B5">2013</xref>; Koelsch, <xref ref-type="bibr" rid="B21">2014</xref>; Salimpoor et al., <xref ref-type="bibr" rid="B34">2015</xref>). In particular, the ventral striatum, comprising the caudate nucleus and the nearby nucleus accumbens, together with the ventral tegmental area, the ventral pallidum, and other, mainly frontal, areas has been repeatedly related to music listening (Blood and Zatorre, <xref ref-type="bibr" rid="B1">2001</xref>; Levitin and Tirovolas, <xref ref-type="bibr" rid="B24">2009</xref>; Chanda and Levitin, <xref ref-type="bibr" rid="B5">2013</xref>; Salimpoor et al., <xref ref-type="bibr" rid="B33">2013</xref>, <xref ref-type="bibr" rid="B34">2015</xref>; Zatorre and Salimpoor, <xref ref-type="bibr" rid="B43">2013</xref>; Koelsch, <xref ref-type="bibr" rid="B21">2014</xref>; Zatorre, <xref ref-type="bibr" rid="B40">2015</xref>). Interestingly, Salimpoor et al. (<xref ref-type="bibr" rid="B32">2011</xref>) demonstrated an anatomically distinct increase in dopamine release, depending on whether an emotional response to music was anticipated or experienced. While the dorsal striatum demonstrated an elevated release of dopamine only during the anticipation of an emotional response to the music, the nucleus accumbens demonstrated this during the experience of an emotional response (Salimpoor et al., <xref ref-type="bibr" rid="B32">2011</xref>). Further, the activity of the nucleus accumbens also served as a predictor of the amount one would spend on the music (Salimpoor et al., <xref ref-type="bibr" rid="B33">2013</xref>).</p>
<p>The above-described aspects of emotional responses to music and synchronized movement to rhythmic auditory stimulation are interwoven, particularly in musical groove. Groove is often defined as the musical quality that induces movement and pleasure (Madison, <xref ref-type="bibr" rid="B25">2006</xref>; Madison et al., <xref ref-type="bibr" rid="B26">2011</xref>; Janata et al., <xref ref-type="bibr" rid="B19">2012</xref>; Witek et al., <xref ref-type="bibr" rid="B38">2014</xref>). Further, a recent transcranial-magnetic stimulation study demonstrated that musical groove can modulate excitability within the motor system, with stronger motor-evoked potentials to high-groove music (Stupacher et al., <xref ref-type="bibr" rid="B35">2013</xref>). From a musicology perspective, there is strong evidence that the experience of groove is triggered mainly by beat density and syncopation, as well as by moderate rhythmic complexity, i.e., neither too simple nor too complex (Stupacher et al., <xref ref-type="bibr" rid="B35">2013</xref>; Witek et al., <xref ref-type="bibr" rid="B38">2014</xref>, <xref ref-type="bibr" rid="B39">2015</xref>). An example of this type of music is the genre of electronic dance music (Panteli et al., <xref ref-type="bibr" rid="B29">2016</xref>), to which the stimulus of the present study also belongs.</p>
<p>Based on the above-described studies, it is evident that rhythmic music, like electronic dance music, triggers responses in different parts of the basal ganglia loop. In particular, the ventral striatum and the putamen appear as key areas that respond to the rhythm of the music and may generate an emotional response. Therefore, it was hypothesized that rhythmic music would affect the connectivity within a network comprising the auditory cortex, putamen/pallidum, and ventral striatum/nucleus accumbens.</p>
<p>This hypothesis was tested using a continuous stimulation paradigm and a resting-state-like analysis approach based on stochastic dynamic causal modeling (sDCM). This procedure does not test activation strength <italic>per se</italic>, since there is no reference condition for a contrast, but examines fluctuations of the BOLD signal and functional dependencies between regions. DCM, in general, allows the specification of models with up to eight anatomically distinct regions that are either directly or indirectly connected to each other. Here, the model space was restricted to a plausible model of six areas, representing the sensory input, rhythm processing in the basal ganglia, and the reward system.</p>
</sec>
<sec sec-type="methods" id="s2">
<title>Methods</title>
<sec>
<title>Participants</title>
<p>The participants were recruited from the student population at the University of Bergen. In total, 26 right-handed, healthy adults (music group: eight female, five male, mean age 30.8 &#x000B1; 8.4; control group: six female, seven male, mean age 22.8 &#x000B1; 3.7) participated in this fMRI study. All participants gave written informed consent in accordance with the Declaration of Helsinki and institutional guidelines, and approval for fMRI studies in healthy subjects was obtained from the regional ethics committee for western Norway (REK-Vest). Half of the subjects listened to the music during scanning, while the other 13 subjects served as control subjects with an ordinary resting-state fMRI condition without any additional auditory stimulation. Both groups were instructed to lie still with eyes open. Participants were compensated for their effort with 200 NOK. Exclusion criteria were neurological or psychiatric disorders, claustrophobia, any surgery of the brain, eyes, or head, pregnancy, implants, braces, large tattoos, and non-removable piercings. All participants were given an emergency button and were informed that they could withdraw from the study at any point. Participants were recruited mainly through announcements at the University of Bergen and the Haukeland University Hospital.</p>
</sec>
<sec>
<title>Stimuli</title>
<p>To overcome limitations of earlier studies, in which often short or chunked pieces of music have been used, the present study used a new continuous-stimulation design with an experimentally controlled composition that was synchronized with the sounds generated by the MR scanner. The music was created as electronic dance music from 12 samples of different instruments, such as dance drums, bass, kick snare, and guitar, mixed together with Adobe Audition 2.0 (<ext-link ext-link-type="uri" xlink:href="http://www.adobe.com">www.adobe.com</ext-link>) into a 10.16-min-long sequence at 120 beats per minute (see <xref ref-type="supplementary-material" rid="SM1">Supplementary Material</xref>).</p>
<p>This piece of music was composed of repeated periods of 20 s duration in which all instruments were playing, alternating with periods in which only a few instruments were playing. However, the overall rhythm was present throughout the entire 10.16-min sequence. During image acquisition, the TR was set to 2 s, which included a 0.5 s silent gap. In this way, the MR scanner was synchronized with the rhythm of the music and thereby acted as an &#x0201C;additional&#x0201D; instrument rather than being perceived as unrelated, unsynchronized auditory stimulation. During scanning, the subjects of the music group were asked to relax, to listen to the music, and not to move. Participants were interviewed after the examination, and all except one reported experiencing the music as pleasant and relaxing. The control subjects were simply asked to relax. Although the control subjects were examined with the same sequence, i.e., with brief silent gaps of 500 ms, they did not report perceiving the scanner noise as any kind of music; they were debriefed only afterwards that they had served as control participants in a study on rhythm perception.</p>
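<p>As a side note, the timing described above can be checked with simple arithmetic. The following sketch (not part of the original analysis, just an illustration of the numbers reported here) verifies that at 120 beats per minute each 2-s TR spans exactly four beats, so the scanner's 0.5-s silent gap recurs at a fixed phase of the musical beat:</p>

```python
# Sanity check of the scanner/music synchronization described above.
# All values are taken from the text: 120 bpm, TR = 2 s (TA 1.5 s + 0.5 s gap).
BPM = 120
TR = 2.0                           # repetition time in seconds
TA = 1.5                           # acquisition time in seconds
gap = TR - TA                      # silent gap -> 0.5 s

beat_period = 60.0 / BPM           # seconds per beat -> 0.5 s
beats_per_tr = TR / beat_period    # -> 4.0: an integer number of beats per TR,
                                   # so scanner noise and music stay in phase

assert beats_per_tr == 4.0
assert gap == beat_period          # the silent gap lasts exactly one beat
```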
<p>The fMRI study was performed on a 3T GE Signa Excite scanner, and the axial slices for functional imaging were positioned parallel to the AC&#x02013;PC line with reference to a high-resolution anatomical image of the entire brain volume, obtained using a T<sub>1</sub>-weighted gradient echo pulse sequence. The functional images were acquired using an echo-planar imaging (EPI) sequence with 311 volumes, each containing 24 axial slices (64 &#x000D7; 64 matrix, 3.4 &#x000D7; 3.4 &#x000D7; 5.5 mm<sup>3</sup> voxel size, TE 30 ms) that covered the cerebrum and most of the cerebellum. This low spatial resolution was chosen to achieve a short acquisition time (TA) of 1.5 s; together with the 0.5 s silent gap, this resulted in an effective TR of 2 s. The stimuli were presented through MR-compatible headphones with insulating materials that attenuated the ambient scanner noise by 24 dB.</p>
</sec>
<sec>
<title>Data analysis</title>
<p>The DICOM data were converted into 4D NIfTI data files using dcm2nii (<ext-link ext-link-type="uri" xlink:href="http://www.mccauslandcenter.sc.edu/mricro/">http://www.mccauslandcenter.sc.edu/mricro/</ext-link>). The BOLD-fMRI data were pre-processed using SPM12 (<ext-link ext-link-type="uri" xlink:href="http://www.fil.ion.ucl.ac.uk/spm">http://www.fil.ion.ucl.ac.uk/spm</ext-link>). The EPI images were first realigned to adjust for head movements during image acquisition and corrected for movement-induced distortions (&#x0201C;unwarping&#x0201D;). The data were subsequently inspected for residual movement artifacts; all translations were &#x0003C;2 mm and all rotations were &#x0003C;2&#x000B0;. Afterwards, the realigned image series were normalized to the stereotaxic space of the Montreal Neurological Institute (MNI), defined by an EPI template provided by the SPM12 software package (&#x0201C;Old normalize&#x0201D;). Normalization parameters were estimated from the mean images generated by the realign-and-unwarp procedure and subsequently applied to the realigned time series. The normalized images were resampled with a voxel size of 2 &#x000D7; 2 &#x000D7; 2 mm and finally smoothed with an 8 mm Gaussian kernel.</p>
</sec>
<sec>
<title>General linear model</title>
<p>Following the concept proposed by Di and Biswal (<xref ref-type="bibr" rid="B9">2015</xref>), the data were modeled within SPM using a design matrix that consisted only of sinusoidal functions, covering a frequency range of 0.005&#x02013;0.1 Hz. Accordingly, the high-pass filter was set to a liberal cut-off of 360 s. This GLM was needed only for extracting the time courses that feed into the sDCM analysis. Hence, only an F-contrast was defined, spanning all sinusoidal functions.</p>
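<p>To make the design concrete, the following sketch (not the authors' code; a minimal NumPy illustration under the acquisition parameters stated in this paper: 311 volumes, TR = 2 s) builds such a design matrix of sine/cosine regressors restricted to 0.005&#x02013;0.1 Hz:</p>

```python
import numpy as np

# Build a GLM design matrix consisting only of sinusoidal regressors, in the
# spirit of Di and Biswal (2015). Frequencies are multiples of the fundamental
# 1/duration, kept within the 0.005-0.1 Hz band mentioned in the text.
n_vols, TR = 311, 2.0
t = np.arange(n_vols) * TR                  # scan onset times in seconds
duration = n_vols * TR                      # 622 s of data

freqs = np.arange(1, int(0.1 * duration) + 1) / duration   # up to 0.1 Hz
freqs = freqs[freqs >= 0.005]                              # drop slow drifts

# One sine and one cosine column per frequency, plus a constant term.
X = np.column_stack(
    [np.sin(2 * np.pi * f * t) for f in freqs]
    + [np.cos(2 * np.pi * f * t) for f in freqs]
    + [np.ones(n_vols)]
)
print(X.shape)   # rows = volumes, columns = 2 * n_frequencies + 1
```

In SPM itself these columns would enter as user-specified regressors; the sketch only shows the shape of the basis set.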
<p>The dynamic causal modeling approach allows the specification of models with up to eight regions. Following the a priori hypothesis, six anatomically distinct areas were selected for the present analysis, covering the core areas of auditory perception, rhythm processing, and reward processing. The coordinates of four of the areas of interest were based on published coordinates for representative areas of the auditory cortex and the putamen/pallidum, while the coordinates for the ventral striatum/nucleus accumbens were identified directly on the MNI template. Since there was no prediction regarding hemispheric lateralization, areas were defined for both the left and right hemispheres. In total, six anatomical areas of interest were defined: the auditory cortex (AC), with reference to the subarea Te1 (Morosan et al., <xref ref-type="bibr" rid="B27">2001</xref>), the putamen/pallidum (PA; Grahn and Rowe, <xref ref-type="bibr" rid="B15">2009</xref>), and the ventral striatum/nucleus accumbens (VS), of the left and right hemispheres, respectively (see Table <xref ref-type="table" rid="T1">1</xref> for the exact coordinates). For each region, the time series of the most significant voxel was extracted using a liberal threshold of <italic>p</italic> &#x0003C; 0.05, uncorrected, since the underlying significance of the GLM model fit is of minor relevance for this approach and served only as a basis for extracting time courses. If no local maximum was found within 8 mm of the target coordinate, the nearest sub-threshold voxel within the 8 mm range was used. This procedure ensured that only voxels that showed fluctuations over time were selected for this analysis.</p>
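<p>The voxel-selection rule just described can be expressed compactly. The sketch below is a hypothetical re-implementation for illustration (function name and data layout are our own, not from the study): prefer the most significant suprathreshold voxel within 8 mm of the target coordinate, and fall back to the nearest in-range voxel if none survives <italic>p</italic> &#x0003C; 0.05:</p>

```python
import numpy as np

def select_voxel(target_mm, coords_mm, f_pvals, radius=8.0, alpha=0.05):
    """Pick a voxel near a target coordinate, as described in the text.

    coords_mm : (N, 3) array of candidate voxel coordinates in mm (MNI space)
    f_pvals   : (N,) array of uncorrected p-values from the GLM F-contrast
    Returns the index of the chosen voxel, or None if nothing is in range.
    """
    d = np.linalg.norm(coords_mm - target_mm, axis=1)
    in_range = d <= radius
    sig = in_range & (f_pvals < alpha)
    if sig.any():
        # most significant suprathreshold voxel within the search radius
        return np.flatnonzero(sig)[np.argmin(f_pvals[sig])]
    if in_range.any():
        # fallback: nearest voxel within the radius, regardless of threshold
        return np.flatnonzero(in_range)[np.argmin(d[in_range])]
    return None
```

For example, with three candidate voxels at 0, 2, and 20 mm from the target and p-values 0.2, 0.01, and 0.001, the rule picks the second voxel: the third is more significant but lies outside the 8 mm radius.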
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>The table reports the mean coordinates (&#x000B1;<italic>SD</italic>) for the six areas used in the stochastic DCM analysis</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Area</bold></th>
<th valign="top" align="center"><bold>X (mm)</bold></th>
<th valign="top" align="center"><bold>Y (mm)</bold></th>
<th valign="top" align="center"><bold>Z (mm)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">AC-L</td>
<td valign="top" align="center">&#x02212;50.6 &#x000B1; 4.7</td>
<td valign="top" align="center">&#x02212;26.7 &#x000B1; 4.5</td>
<td valign="top" align="center">0.4 &#x000B1; 5.2</td>
</tr>
<tr>
<td valign="top" align="left">AC-R</td>
<td valign="top" align="center">61.2 &#x000B1; 4.6</td>
<td valign="top" align="center">&#x02212;25.5 &#x000B1; 4.7</td>
<td valign="top" align="center">5.8 &#x000B1; 5.1</td>
</tr>
<tr>
<td valign="top" align="left">Pa-L</td>
<td valign="top" align="center">&#x02212;29.1 &#x000B1; 5.5</td>
<td valign="top" align="center">4.5 &#x000B1; 4.6</td>
<td valign="top" align="center">4.2 &#x000B1; 5.4</td>
</tr>
<tr>
<td valign="top" align="left">Pa-R</td>
<td valign="top" align="center">25.5 &#x000B1; 5.7</td>
<td valign="top" align="center">4.3 &#x000B1; 6.8</td>
<td valign="top" align="center">4.2 &#x000B1; 5.5</td>
</tr>
<tr>
<td valign="top" align="left">VS-L</td>
<td valign="top" align="center">&#x02212;11.5 &#x000B1; 8.6</td>
<td valign="top" align="center">20.0 &#x000B1; 4.0</td>
<td valign="top" align="center">2.3 &#x000B1; 4.1</td>
</tr>
<tr>
<td valign="top" align="left">VS-R</td>
<td valign="top" align="center">11.5 &#x000B1; 4.9</td>
<td valign="top" align="center">20.2 &#x000B1; 5.7</td>
<td valign="top" align="center">0.6 &#x000B1; 4.5</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>AC, Auditory Cortex; Pa, Putamen/Pallidum; VS, Ventral Striatum/Nucleus Accumbens; L, Left; R, Right. The coordinates refer to the MNI space</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>The terms <italic>ventral striatum</italic> and <italic>nucleus accumbens</italic>, as well as <italic>putamen</italic> and <italic>pallidum</italic>, are often used interchangeably in the neuroimaging literature. Although these are anatomically and functionally distinct areas, they are often difficult to separate in functional neuroimaging data of low spatial resolution (in the present case 3.4 &#x000D7; 3.4 &#x000D7; 5.5 mm<sup>3</sup>) due to their proximity. Therefore, in the following, the terms <italic>ventral striatum</italic>/<italic>nucleus accumbens</italic> and <italic>putamen</italic>/<italic>pallidum</italic> will be used, to reflect that precise spatial localization of these areas is limited, given the intrinsic resolution of the raw fMRI data.</p>
</sec>
<sec>
<title>Stochastic DCM</title>
<p>Dynamic causal modeling (DCM) is a method that rests on generative models describing the brain as a non-linear but deterministic system. These generative models of the network of neuronal populations are sets of differential equations that describe how the hidden neuronal states could have generated the observed data. Since the system is described through these equations, the model can be inverted, and parameters for the hidden states, which are not directly observable, can be estimated (Friston et al., <xref ref-type="bibr" rid="B10">2011</xref>). Originating from task-related fMRI, DCM has been extended to resting-state fMRI in a variant called stochastic DCM (Friston et al., <xref ref-type="bibr" rid="B10">2011</xref>). The stochastic DCM (sDCM) model was defined as a fully connected model, in which each node was connected with every other node. The DCM model (DCM12, revision 5729) was defined within SPM12 (rev. 6225). After model estimation, post-hoc Bayesian model optimization was applied (Friston and Penny, <xref ref-type="bibr" rid="B12">2011</xref>). The resulting connectivity estimates were subjected to group-specific one-sample <italic>t</italic>-tests, as well as between-group two-sample <italic>t</italic>-tests, using SPSS (ver. 22). A Bonferroni correction was applied, taking into account that 36 <italic>t</italic>-tests were performed. In addition, the hemodynamic response was examined in more detail by analysing the estimated parameters <italic>transit</italic> (transit time through the &#x0201C;balloon&#x0201D;), <italic>decay</italic> (signal decay), and <italic>epsilon</italic> (neural efficiency). These parameters refer to the revised balloon model (Friston et al., <xref ref-type="bibr" rid="B11">2000</xref>; Buxton et al., <xref ref-type="bibr" rid="B3">2004</xref>; Buxton, <xref ref-type="bibr" rid="B2">2012</xref>), which expresses the metabolic response in terms of cerebral blood volume, cerebral blood flow, and oxygen extraction rate. A DCM analysis estimates the hemodynamic parameters <italic>transit</italic> and <italic>decay</italic> for each region separately, while &#x003B5; is a single global parameter per subject.</p>
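<p>For illustration, the group-comparison step described above can be sketched as follows. This is synthetic, hypothetical data, not the study's estimates (the study used SPSS): a two-sample <italic>t</italic>-test on one connectivity parameter across two groups of 13 subjects each, with a Bonferroni correction for the 36 connections of a fully connected six-region model:</p>

```python
import numpy as np
from scipy import stats

# Synthetic example of one between-group comparison of a DCM connectivity
# estimate. Group sizes (13 per group, df = 24) match the study; the
# parameter values themselves are made up for illustration.
rng = np.random.default_rng(0)
music = rng.normal(0.10, 0.05, size=13)    # hypothetical connection strengths
control = rng.normal(0.25, 0.05, size=13)

n_tests = 36                               # 6 x 6 fully connected model
t, p = stats.ttest_ind(music, control)     # two-sample t-test, df = 24
p_bonf = min(p * n_tests, 1.0)             # Bonferroni-corrected p-value
print(t, p, p_bonf)
```

In the study, each of the 36 connection parameters would be tested this way, and only results with a corrected <italic>p</italic> below the significance level are marked as surviving Bonferroni correction.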
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<p>The sDCM results revealed significant (Bonferroni-corrected) connections between the auditory cortex, putamen, and ventral striatum for both hemispheres and for both groups. More importantly, several significant group differences were detected. In the following, uncorrected and Bonferroni-corrected differences are reported, with the latter marked by &#x0201C;<sup>&#x0002A;</sup>.&#x0201D; All differences are described relative to the control group. Reduced connectivity was detected from the right ventral striatum/nucleus accumbens to the left homolog [<italic>t</italic><sub>(24)</sub> &#x0003D; &#x02212;2.295, <italic>p</italic> &#x0003C; 0.031], to the right putamen/pallidum [<italic>t</italic><sub>(24)</sub> &#x0003D; &#x02212;3.196, <italic>p</italic> &#x0003C; 0.004], and to the left auditory cortex [<italic>t</italic><sub>(24)</sub> &#x0003D; &#x02212;2.140, <italic>p</italic> &#x0003C; 0.043]. In addition, reduced connectivity was detected for the connection from the left putamen/pallidum to the right ventral striatum/nucleus accumbens [<italic>t</italic><sub>(24)</sub> &#x0003D; &#x02212;3.901, <italic>p</italic> &#x0003C; 0.001<sup>&#x0002A;</sup>] and from the right to the left putamen/pallidum [<italic>t</italic><sub>(24)</sub> &#x0003D; &#x02212;4.562, <italic>p</italic> &#x0003C; 0.001<sup>&#x0002A;</sup>]. These latter two results were also significant after Bonferroni correction (see Figure <xref ref-type="fig" rid="F1">1</xref>). Finally, the intrinsic, self-inhibitory connection of the right ventral striatum/nucleus accumbens was reduced in the music group [i.e., less negative, <italic>t</italic><sub>(24)</sub> &#x0003D; 2.138, <italic>p</italic> &#x0003C; 0.043]. By contrast, increased connectivity was detected only for the connection from the left to the right auditory cortex [<italic>t</italic><sub>(24)</sub> &#x0003D; 2.354, <italic>p</italic> &#x0003C; 0.027].</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Results from the stochastic dynamic causal modeling (sDCM) analysis, using the auditory cortex (AC), the putamen/pallidum (PA), and the ventral striatum/nucleus accumbens (VS) of the left and right hemisphere as core areas</bold>. Dark arrows indicate connections that did not differ significantly between the music and control groups; line thickness indicates connectivity strength. Yellow/red arrows indicate decreased connectivity, with the two red arrows marking decreases that remained significant after Bonferroni correction. In addition, the right ventral striatum/nucleus accumbens showed reduced self-inhibition, indicated by the green arrow, as well as a change in its hemodynamic response parameters, indicated by the green sphere. Finally, the green arrow from the left to the right auditory cortex indicates a significant increase in connectivity. The figure to the left indicates the localization of the areas (see Table <xref ref-type="table" rid="T1">1</xref> for the exact coordinates).</p></caption>
<graphic xlink:href="fnins-11-00153-g0001.tif"/>
</fig>
<p>For the hemodynamic parameters <italic>transit, decay</italic>, and <italic>epsilon</italic>, the group comparison demonstrated that <italic>transit</italic> and <italic>decay</italic> differed significantly only for the right ventral striatum/nucleus accumbens. Although the corresponding <italic>p</italic>-values do not survive a Bonferroni correction, they are worth mentioning, since differences were, again, detected only for this area. <italic>Transit</italic> was significantly reduced [<italic>t</italic><sub>(24)</sub> &#x0003D; &#x02212;2.158, <italic>p</italic> &#x0003C; 0.041], while <italic>decay</italic> was significantly increased [<italic>t</italic><sub>(24)</sub> &#x0003D; 2.202, <italic>p</italic> &#x0003C; 0.038].</p>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>With this study, the aim was to investigate the brain networks involved in processing a continuously playing piece of rhythmic music. Generally, the analysis of continuous stimulation paradigms is challenging. Therefore, we present for the first time the application of stochastic dynamic causal modeling as a tool for analysing connectivity in a study with continuous stimulation. The sDCM analysis provided evidence that, compared to the control group without music stimulation, dance floor-like rhythmic music decreased connectivity from the right putamen/pallidum, via the left putamen/pallidum, to the right ventral striatum/nucleus accumbens. Further, the right ventral striatum/nucleus accumbens changed its level of activity, as reflected by a reduction in self-inhibition and an altered hemodynamic response. In addition, functional connectivity was reduced from the right ventral striatum/nucleus accumbens to the left ventral striatum/nucleus accumbens, the left auditory cortex, and the right putamen/pallidum (see Figure <xref ref-type="fig" rid="F1">1</xref>).</p>
<p>Taken together, the results indicate that it was mainly the right ventral striatum/nucleus accumbens that changed its activity, as well as its network coupling, when listening to this piece of rhythmic, dance-floor-like music. The analysis also shows a significantly reduced connectivity from the right to the left putamen/pallidum, which may reflect a reduced coupling between motor-related brain areas. Future studies may examine whether this is related to the experience of groove (Stupacher et al., <xref ref-type="bibr" rid="B35">2013</xref>).</p>
<p>The results show that listening to rhythmic music not only changes the level of activation of the reward system, measured in terms of BOLD signal fluctuations, but also causes changes in connectivity within a network that is related to reward processing. The results further support the view that the ventral striatum, i.e., presumably the nucleus accumbens, is deeply involved in processing music-evoked emotions and reduces its connectivity to other areas. This does not imply a reduced activation, but rather activity on a different time scale (Mueller et al., <xref ref-type="bibr" rid="B28">2015</xref>). In fact, the significantly reduced self-inhibition as well as the changed hemodynamic parameters of the right ventral striatum/nucleus accumbens indicate an increased level of activation. The results are also in line with a recent meta-analysis by Koelsch (<xref ref-type="bibr" rid="B21">2014</xref>), who identified the right ventral striatum, and in particular the right nucleus accumbens, as an important structure for processing music-induced emotions. In this respect, the nucleus accumbens, which is part of the dopaminergic system, is seen as part of the reward processing system that is mainly active during the experience of reward, and its activity may correlate with the reward value (Salimpoor et al., <xref ref-type="bibr" rid="B33">2013</xref>, <xref ref-type="bibr" rid="B34">2015</xref>; Ikemoto et al., <xref ref-type="bibr" rid="B18">2015</xref>; Zatorre, <xref ref-type="bibr" rid="B40">2015</xref>). This is true for any type of reward, in particular biologically relevant rewards such as food or sex, but also for money (Daniel and Pollmann, <xref ref-type="bibr" rid="B8">2014</xref>) and even fairness (Cappelen et al., <xref ref-type="bibr" rid="B4">2014</xref>). By contrast, the dorsal striatum/caudate is generally related more to the anticipation of reward, but also to music-induced frisson (Koelsch et al., <xref ref-type="bibr" rid="B22">2015</xref>; Zatorre, <xref ref-type="bibr" rid="B40">2015</xref>). This notion is further supported by a PET study that indicated an increase in dopamine release that differed between the ventral and dorsal striatum: the dorsal striatum/caudate appeared to be more involved in the anticipation and prediction of reward, while the ventral striatum/nucleus accumbens demonstrated the strongest dopamine release during the emotional response to music (Salimpoor et al., <xref ref-type="bibr" rid="B32">2011</xref>).</p>
<p>The results presented here may appear to contradict an fMRI study by Salimpoor and colleagues, who detected an increased connectivity between the auditory cortex and the ventral striatum (Salimpoor et al., <xref ref-type="bibr" rid="B32">2011</xref>). However, one has to bear in mind that the present study used a continuous stimulation paradigm with an unfamiliar piece of music that lasted 10.16 min, much longer than the stimuli of most other studies, which typically last less than a minute. Further, all measures were derived from BOLD signal fluctuations and how they propagate through the specified network. In this respect, the results may not be directly comparable to studies where connectivity measures were based on task contrasts.</p>
<p>Finally, it should be mentioned that the sDCM analysis also indicated an increased connectivity from the left to the right auditory cortex when compared to ordinary resting-state fMRI. This supports the validity of our approach of analysing data from a continuous stimulation paradigm with sDCM. Moreover, an increased connectivity from left to right was to be expected, since the right auditory cortex is assumed to be more dominant during the processing of tonal information, such as music (Tramo, <xref ref-type="bibr" rid="B36">2001</xref>; Zatorre et al., <xref ref-type="bibr" rid="B41">2007</xref>).</p>
<p>The results presented here are also significant from a methodological perspective. The unconstrained DCM model not only confirmed the notion about the right ventral striatum but also the validity of the presented sDCM approach. Note that this is the first time a continuous stimulation protocol has been combined with a stochastic DCM approach. A stochastic DCM model is set up as a fully connected model without any constraints; it is therefore a notable result that the contribution of the right ventral striatum emerged naturally from the model estimation. Further, there were no significant group differences in the hemodynamic parameters or the self-inhibition of the auditory system. This may indicate that continuous auditory stimulation does not change hemodynamic fluctuations within the auditory system, when compared to resting-state fMRI, although the level of activation might differ.</p>
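<p>For readers unfamiliar with the stochastic variant of DCM, the fully connected model described above can be sketched as a linear stochastic differential equation, dx/dt = Ax + v, in which the off-diagonal entries of the coupling matrix A encode between-region connectivity, the (negative) diagonal encodes self-inhibition, and v represents random neuronal fluctuations. The following Python sketch is purely illustrative and does not reproduce the SPM implementation; the region labels, coupling values, and noise level are assumptions chosen for demonstration.</p>

```python
# Illustrative sketch of the fully connected linear state equation
# underlying stochastic DCM: dx/dt = A @ x + v (not the SPM implementation).
import numpy as np

# The six core areas of the model (assumed labels)
regions = ["AC_L", "AC_R", "PA_L", "PA_R", "VS_L", "VS_R"]
n = len(regions)
rng = np.random.default_rng(1)

# Unconstrained model: every between-region connection is a free parameter
A = rng.normal(scale=0.05, size=(n, n))
np.fill_diagonal(A, -0.5)  # negative diagonal = self-inhibition

# Euler integration of the stochastic neuronal dynamics
dt, steps = 0.1, 1000
x = np.zeros(n)
trace = []
for _ in range(steps):
    v = rng.normal(scale=0.01, size=n)  # stochastic innovations
    x = x + dt * (A @ x + v)
    trace.append(x.copy())
trace = np.stack(trace)
print(trace.shape)  # (1000, 6)
```

Reducing the magnitude of a diagonal entry (less self-inhibition) lets the corresponding region's fluctuations grow, which is the sense in which the reduced self-inhibition of the right ventral striatum/nucleus accumbens indicates an increased level of activity.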
<p>However, there are important limitations to the selected approach that one has to bear in mind and that restrict the general conclusions that can be drawn from the presented results. Firstly, due to methodological constraints, the present study only compared music perception to resting state and used only one piece of music. Although there is strong evidence that the detected effects are mostly related to the presentation of rhythmic music, and although the majority of the participants experienced this music as pleasant, one cannot entirely rule out that comparable effects might have emerged when listening to other types of (pleasant) stimuli in a similarly passive and &#x0201C;resting&#x0201D; situation. In addition, the experience of groove was not formally assessed in this study, although several participants of the music group reported it; one might therefore speculate whether the observed decoupling could be related to suppressing the urge to move along. Secondly, the DCM approach generally limits the number of areas that can be included in a model, and these areas should be a certain distance apart to avoid overlapping effects caused by smoothing. Accordingly, the selection of the areas of interest was based on the a priori hypothesis described in the introduction, comprising the auditory cortex, the basal ganglia, and the reward system. Therefore, only these six anatomically distinct areas were included in the model; future studies may include additional areas such as the ventromedial prefrontal cortex or the amygdala. Finally, fMRI studies with 13 participants per group are still within the standard range, but larger sample sizes are recommended for future studies in order to draw more general conclusions.</p>
</sec>
<sec sec-type="conclusions" id="s5">
<title>Conclusion</title>
<p>This study demonstrated a reduced functional connectivity, and thus a less constrained activation, of the reward processing system, namely the right ventral striatum and nucleus accumbens, while listening to a several-minutes-long pleasurable and rhythmic piece of music, as compared to silence. The results give deeper insight into the power of music and show how easily and how strongly rhythmic music interacts with the affective and basal ganglia systems, and that emotional experiences during music listening may be generated through a stronger activation and, simultaneously, a reduced functional connectivity of the reward system. Thus, &#x0201C;<italic>The powers of cognition that are set into play by this representation [of a beautiful object] are hereby in a free play, since no determinate concept restricts them to a particular cognition</italic>&#x0201D; <italic>(Kant, 1790)</italic> (Guyer, <xref ref-type="bibr" rid="B17">2002</xref>).</p>
</sec>
<sec id="s6">
<title>Author contributions</title>
<p>HB contributed to the analysis of the data and writing of the article. BO contributed to the planning, stimulus preparation, and writing of the article. KS contributed to the planning, stimulus preparation, performance, analysis, and writing of the article.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack><p>We thank all participants for their participation, and the staff of the radiological department of the Haukeland University Hospital for their help during data acquisition. We would like to thank particularly Kjetil Vikene for inspiring discussions throughout the writing process. The study was supported by a grant to KS from the Bergen Research Foundation (<italic>When a sound becomes speech</italic>) and the Research Council of Norway (217932: <italic>It&#x00027;s time for some music</italic>).</p>
</ack>
<sec sec-type="supplementary-material" id="s7">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="http://journal.frontiersin.org/article/10.3389/fnins.2017.00153/full#supplementary-material">http://journal.frontiersin.org/article/10.3389/fnins.2017.00153/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Audio1.MP3" id="SM1" mimetype="audio/x-mp3" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blood</surname> <given-names>A. J.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name></person-group> (<year>2001</year>). <article-title>Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A.</source> <volume>98</volume>, <fpage>11818</fpage>&#x02013;<lpage>11823</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.191355898</pub-id><pub-id pub-id-type="pmid">11573015</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Buxton</surname> <given-names>R. B.</given-names></name></person-group> (<year>2012</year>). <article-title>Dynamic models of BOLD contrast</article-title>. <source>Neuroimage</source> <volume>62</volume>, <fpage>953</fpage>&#x02013;<lpage>961</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2012.01.012</pub-id><pub-id pub-id-type="pmid">22245339</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Buxton</surname> <given-names>R. B.</given-names></name> <name><surname>Uluda&#x0011F;</surname> <given-names>K.</given-names></name> <name><surname>Dubowitz</surname> <given-names>D. J.</given-names></name> <name><surname>Liu</surname> <given-names>T. T.</given-names></name></person-group> (<year>2004</year>). <article-title>Modeling the hemodynamic response to brain activation</article-title>. <source>Neuroimage</source> <volume>23</volume>(<supplement>Suppl. 1</supplement>), <fpage>S220</fpage>&#x02013;<lpage>S233</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.07.013</pub-id><pub-id pub-id-type="pmid">15501093</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cappelen</surname> <given-names>A. W.</given-names></name> <name><surname>Eichele</surname> <given-names>T.</given-names></name> <name><surname>Hugdahl</surname> <given-names>K.</given-names></name> <name><surname>Specht</surname> <given-names>K.</given-names></name> <name><surname>S&#x000F8;rensen</surname> <given-names>E. &#x000D8;.</given-names></name> <name><surname>Tungodden</surname> <given-names>B.</given-names></name></person-group> (<year>2014</year>). <article-title>Equity theory and fair inequality: a neuroeconomic study</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A.</source> <volume>111</volume>, <fpage>15368</fpage>&#x02013;<lpage>15372</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1414602111</pub-id><pub-id pub-id-type="pmid">25313056</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chanda</surname> <given-names>M. L.</given-names></name> <name><surname>Levitin</surname> <given-names>D. J.</given-names></name></person-group> (<year>2013</year>). <article-title>The neurochemistry of music</article-title>. <source>Trends Cogn. Sci.</source> <volume>17</volume>, <fpage>179</fpage>&#x02013;<lpage>193</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2013.02.007</pub-id><pub-id pub-id-type="pmid">23541122</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>J. L.</given-names></name> <name><surname>Penhune</surname> <given-names>V. B.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name></person-group> (<year>2008a</year>). <article-title>Listening to musical rhythms recruits motor regions of the brain</article-title>. <source>Cereb. Cortex</source> <volume>18</volume>, <fpage>2844</fpage>&#x02013;<lpage>2854</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhn042</pub-id><pub-id pub-id-type="pmid">18388350</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>J. L.</given-names></name> <name><surname>Penhune</surname> <given-names>V. B.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name></person-group> (<year>2008b</year>). <article-title>Moving on time: brain network for auditory-motor synchronization is modulated by rhythm complexity and musical training</article-title>. <source>J. Cogn. Neurosci.</source> <volume>20</volume>, <fpage>226</fpage>&#x02013;<lpage>239</lpage>. <pub-id pub-id-type="doi">10.1162/jocn.2008.20018</pub-id><pub-id pub-id-type="pmid">18275331</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Daniel</surname> <given-names>R.</given-names></name> <name><surname>Pollmann</surname> <given-names>S.</given-names></name></person-group> (<year>2014</year>). <article-title>A universal role of the ventral striatum in reward-based learning: evidence from human studies</article-title>. <source>Neurobiol. Learn. Mem.</source> <volume>114C</volume>, <fpage>90</fpage>&#x02013;<lpage>100</lpage>. <pub-id pub-id-type="doi">10.1016/j.nlm.2014.05.002</pub-id><pub-id pub-id-type="pmid">24825620</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Di</surname> <given-names>X.</given-names></name> <name><surname>Biswal</surname> <given-names>B. B.</given-names></name></person-group> (<year>2015</year>). <article-title>Dynamic brain functional connectivity modulated by resting-state networks</article-title>. <source>Brain Struct. Funct</source>. <volume>220</volume>, <fpage>37</fpage>&#x02013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1007/s00429-013-0634-3</pub-id><pub-id pub-id-type="pmid">25713839</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname> <given-names>K. J.</given-names></name> <name><surname>Li</surname> <given-names>B.</given-names></name> <name><surname>Daunizeau</surname> <given-names>J.</given-names></name> <name><surname>Stephan</surname> <given-names>K. E.</given-names></name></person-group> (<year>2011</year>). <article-title>Network discovery with DCM</article-title>. <source>Neuroimage</source> <volume>56</volume>, <fpage>1202</fpage>&#x02013;<lpage>1221</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2010.12.039</pub-id><pub-id pub-id-type="pmid">21182971</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname> <given-names>K. J.</given-names></name> <name><surname>Mechelli</surname> <given-names>A.</given-names></name> <name><surname>Turner</surname> <given-names>R.</given-names></name> <name><surname>Price</surname> <given-names>C. J.</given-names></name></person-group> (<year>2000</year>). <article-title>Nonlinear responses in fMRI: the Balloon model, Volterra kernels, and other hemodynamics</article-title>. <source>Neuroimage</source> <volume>12</volume>, <fpage>466</fpage>&#x02013;<lpage>477</lpage>. <pub-id pub-id-type="doi">10.1006/nimg.2000.0630</pub-id><pub-id pub-id-type="pmid">10988040</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname> <given-names>K. J.</given-names></name> <name><surname>Penny</surname> <given-names>W.</given-names></name></person-group> (<year>2011</year>). <article-title>Post hoc Bayesian model selection</article-title>. <source>Neuroimage</source> <volume>56</volume>, <fpage>2089</fpage>&#x02013;<lpage>2099</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2011.03.062</pub-id><pub-id pub-id-type="pmid">21459150</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fritz</surname> <given-names>T.</given-names></name> <name><surname>Jentschke</surname> <given-names>S.</given-names></name> <name><surname>Gosselin</surname> <given-names>N.</given-names></name> <name><surname>Sammler</surname> <given-names>D.</given-names></name> <name><surname>Peretz</surname> <given-names>I.</given-names></name> <name><surname>Turner</surname> <given-names>R.</given-names></name> <etal/></person-group>. (<year>2009</year>). <article-title>Universal recognition of three basic emotions in music</article-title>. <source>Curr. Biol.</source> <volume>19</volume>, <fpage>573</fpage>&#x02013;<lpage>576</lpage>. <pub-id pub-id-type="doi">10.1016/j.cub.2009.02.058</pub-id><pub-id pub-id-type="pmid">19303300</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grahn</surname> <given-names>J. A.</given-names></name> <name><surname>Brett</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Rhythm and beat perception in motor areas of the brain</article-title>. <source>J. Cogn. Neurosci.</source> <volume>19</volume>, <fpage>893</fpage>&#x02013;<lpage>906</lpage>. <pub-id pub-id-type="doi">10.1162/jocn.2007.19.5.893</pub-id><pub-id pub-id-type="pmid">17488212</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grahn</surname> <given-names>J. A.</given-names></name> <name><surname>Rowe</surname> <given-names>J. B.</given-names></name></person-group> (<year>2009</year>). <article-title>Feeling the beat: premotor and striatal interactions in musicians and nonmusicians during beat perception</article-title>. <source>J. Neurosci.</source> <volume>29</volume>, <fpage>7540</fpage>&#x02013;<lpage>7548</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.2018-08.2009</pub-id><pub-id pub-id-type="pmid">19515922</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gray</surname> <given-names>P. M.</given-names></name> <name><surname>Krause</surname> <given-names>B.</given-names></name> <name><surname>Atema</surname> <given-names>J.</given-names></name> <name><surname>Payne</surname> <given-names>R.</given-names></name> <name><surname>Krumhansl</surname> <given-names>C. L.</given-names></name> <name><surname>Baptista</surname> <given-names>L.</given-names></name></person-group> (<year>2001</year>). <article-title>The music of nature and the nature of music</article-title>. <source>Science</source> <volume>291</volume>, <fpage>52</fpage>&#x02013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1126/science.1056960</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Guyer</surname> <given-names>P.</given-names></name></person-group> (<year>2002</year>). <article-title>Immanuel Kant: Critique of the Power of Judgment</article-title>, in <source>The Cambridge Edition of the Works of Immanuel Kant in Translation</source>, ed <person-group person-group-type="editor"><name><surname>Guyer</surname> <given-names>P.</given-names></name></person-group> (<publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>).</citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ikemoto</surname> <given-names>S.</given-names></name> <name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Tan</surname> <given-names>A.</given-names></name></person-group> (<year>2015</year>). <article-title>Basal ganglia circuit loops, dopamine and motivation: a review and enquiry</article-title>. <source>Behav. Brain Res.</source> <volume>290</volume>, <fpage>17</fpage>&#x02013;<lpage>31</lpage>. <pub-id pub-id-type="doi">10.1016/j.bbr.2015.04.018</pub-id><pub-id pub-id-type="pmid">25907747</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Janata</surname> <given-names>P.</given-names></name> <name><surname>Tomic</surname> <given-names>S. T.</given-names></name> <name><surname>Haberman</surname> <given-names>J. M.</given-names></name></person-group> (<year>2012</year>). <article-title>Sensorimotor coupling in music and the psychology of the groove</article-title>. <source>J. Exp. Psychol. Gen.</source> <volume>141</volume>, <fpage>54</fpage>&#x02013;<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1037/a0024208</pub-id><pub-id pub-id-type="pmid">21767048</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koelsch</surname> <given-names>S.</given-names></name></person-group> (<year>2011</year>). <article-title>Toward a neural basis of music perception - a review and updated model</article-title>. <source>Front. Psychol.</source> <volume>2</volume>:<fpage>110</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2011.00110</pub-id><pub-id pub-id-type="pmid">21713060</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koelsch</surname> <given-names>S.</given-names></name></person-group> (<year>2014</year>). <article-title>Brain correlates of music-evoked emotions</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>15</volume>, <fpage>170</fpage>&#x02013;<lpage>180</lpage>. <pub-id pub-id-type="doi">10.1038/nrn3666</pub-id><pub-id pub-id-type="pmid">24552785</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koelsch</surname> <given-names>S.</given-names></name> <name><surname>Jacobs</surname> <given-names>A. M.</given-names></name> <name><surname>Menninghaus</surname> <given-names>W.</given-names></name> <name><surname>Liebal</surname> <given-names>K.</given-names></name> <name><surname>Klann-Delius</surname> <given-names>G.</given-names></name> <name><surname>von Scheve</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>The quartet theory of human emotions: an integrative and neurofunctional model</article-title>. <source>Phys. Life Rev.</source> <volume>13</volume>, <fpage>1</fpage>&#x02013;<lpage>27</lpage>. <pub-id pub-id-type="doi">10.1016/j.plrev.2015.03.001</pub-id><pub-id pub-id-type="pmid">25891321</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kornysheva</surname> <given-names>K.</given-names></name> <name><surname>von Cramon</surname> <given-names>D. Y.</given-names></name> <name><surname>Jacobsen</surname> <given-names>T.</given-names></name> <name><surname>Schubotz</surname> <given-names>R. I.</given-names></name></person-group> (<year>2010</year>). <article-title>Tuning-in to the beat: aesthetic appreciation of musical rhythms correlates with a premotor activity boost</article-title>. <source>Hum. Brain Mapp.</source> <volume>31</volume>, <fpage>48</fpage>&#x02013;<lpage>64</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.20844</pub-id><pub-id pub-id-type="pmid">19585590</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Levitin</surname> <given-names>D. J.</given-names></name> <name><surname>Tirovolas</surname> <given-names>A. K.</given-names></name></person-group> (<year>2009</year>). <article-title>Current advances in the cognitive neuroscience of music</article-title>. <source>Ann. N.Y. Acad. Sci.</source> <volume>1156</volume>, <fpage>211</fpage>&#x02013;<lpage>231</lpage>. <pub-id pub-id-type="doi">10.1111/j.1749-6632.2009.04417.x</pub-id><pub-id pub-id-type="pmid">19338510</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Madison</surname> <given-names>G.</given-names></name></person-group> (<year>2006</year>). <article-title>Experiencing groove induced by music: consistency and phenomenology</article-title>. <source>Music Percept.</source> <volume>24</volume>, <fpage>201</fpage>&#x02013;<lpage>208</lpage>. <pub-id pub-id-type="doi">10.1525/mp.2006.24.2.201</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Madison</surname> <given-names>G.</given-names></name> <name><surname>Gouyon</surname> <given-names>F.</given-names></name> <name><surname>Ull&#x000E9;n</surname> <given-names>F.</given-names></name> <name><surname>H&#x000F6;rnstr&#x000F6;m</surname> <given-names>K.</given-names></name></person-group> (<year>2011</year>). <article-title>Modeling the tendency for music to induce movement in humans: first correlations with low-level audio descriptors across music genres</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>37</volume>, <fpage>1578</fpage>&#x02013;<lpage>1594</lpage>. <pub-id pub-id-type="doi">10.1037/a0024323</pub-id><pub-id pub-id-type="pmid">21728462</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morosan</surname> <given-names>P.</given-names></name> <name><surname>Rademacher</surname> <given-names>J.</given-names></name> <name><surname>Schleicher</surname> <given-names>A.</given-names></name> <name><surname>Amunts</surname> <given-names>K.</given-names></name> <name><surname>Schormann</surname> <given-names>T.</given-names></name> <name><surname>Zilles</surname> <given-names>K.</given-names></name></person-group> (<year>2001</year>). <article-title>Human primary auditory cortex: cytoarchitectonic subdivisions and mapping into a spatial reference system</article-title>. <source>Neuroimage</source> <volume>13</volume>, <fpage>684</fpage>&#x02013;<lpage>701</lpage>. <pub-id pub-id-type="doi">10.1006/nimg.2000.0715</pub-id><pub-id pub-id-type="pmid">11305897</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mueller</surname> <given-names>K.</given-names></name> <name><surname>Fritz</surname> <given-names>T.</given-names></name> <name><surname>Mildner</surname> <given-names>T.</given-names></name> <name><surname>Richter</surname> <given-names>M.</given-names></name> <name><surname>Schulze</surname> <given-names>K.</given-names></name> <name><surname>Lepsien</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Investigating the dynamics of the brain response to music: a central role of the ventral striatum/nucleus accumbens</article-title>. <source>Neuroimage</source> <volume>116</volume>, <fpage>68</fpage>&#x02013;<lpage>79</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2015.05.006</pub-id><pub-id pub-id-type="pmid">25976924</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Panteli</surname> <given-names>M.</given-names></name> <name><surname>Rocha</surname> <given-names>B.</given-names></name> <name><surname>Bogaards</surname> <given-names>N.</given-names></name> <name><surname>Honingh</surname> <given-names>A.</given-names></name></person-group> (<year>2016</year>). <article-title>A model for rhythm and timbre similarity in electronic dance music</article-title>. <source>Musicae Sci.</source> [Epub ahead of print]. <pub-id pub-id-type="doi">10.1177/1029864916655596</pub-id>.</citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peretz</surname> <given-names>I.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name></person-group> (<year>2005</year>). <article-title>Brain organization for music processing</article-title>. <source>Annu. Rev. Psychol.</source> <volume>56</volume>, <fpage>89</fpage>&#x02013;<lpage>114</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.psych.56.091103.070225</pub-id><pub-id pub-id-type="pmid">15709930</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Phillips-Silver</surname> <given-names>J.</given-names></name> <name><surname>Trainor</surname> <given-names>L. J.</given-names></name></person-group> (<year>2005</year>). <article-title>Feeling the beat: movement influences infant rhythm perception</article-title>. <source>Science</source> <volume>308</volume>, <fpage>1430</fpage>&#x02013;<lpage>1430</lpage>. <pub-id pub-id-type="doi">10.1126/science.1110922</pub-id><pub-id pub-id-type="pmid">15933193</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Salimpoor</surname> <given-names>V. N.</given-names></name> <name><surname>Benovoy</surname> <given-names>M.</given-names></name> <name><surname>Larcher</surname> <given-names>K.</given-names></name> <name><surname>Dagher</surname> <given-names>A.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name></person-group> (<year>2011</year>). <article-title>Anatomically distinct dopamine release during anticipation and experience of peak emotion to music</article-title>. <source>Nat. Neurosci.</source> <volume>14</volume>, <fpage>257</fpage>&#x02013;<lpage>262</lpage>. <pub-id pub-id-type="doi">10.1038/nn.2726</pub-id><pub-id pub-id-type="pmid">21217764</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Salimpoor</surname> <given-names>V. N.</given-names></name> <name><surname>van den Bosch</surname> <given-names>I.</given-names></name> <name><surname>Kovacevic</surname> <given-names>N.</given-names></name> <name><surname>McIntosh</surname> <given-names>A. R.</given-names></name> <name><surname>Dagher</surname> <given-names>A.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name></person-group> (<year>2013</year>). <article-title>Interactions between the nucleus accumbens and auditory cortices predict music reward value</article-title>. <source>Science</source> <volume>340</volume>, <fpage>216</fpage>&#x02013;<lpage>219</lpage>. <pub-id pub-id-type="doi">10.1126/science.1231059</pub-id><pub-id pub-id-type="pmid">23580531</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Salimpoor</surname> <given-names>V. N.</given-names></name> <name><surname>Zald</surname> <given-names>D. H.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name> <name><surname>Dagher</surname> <given-names>A.</given-names></name> <name><surname>McIntosh</surname> <given-names>A. R.</given-names></name></person-group> (<year>2015</year>). <article-title>Predictions and the brain: how musical sounds become rewarding</article-title>. <source>Trends Cogn. Sci.</source> <volume>19</volume>, <fpage>86</fpage>&#x02013;<lpage>91</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2014.12.001</pub-id><pub-id pub-id-type="pmid">25534332</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stupacher</surname> <given-names>J.</given-names></name> <name><surname>Hove</surname> <given-names>M. J.</given-names></name> <name><surname>Novembre</surname> <given-names>G.</given-names></name> <name><surname>Sch&#x000FC;tz-Bosbach</surname> <given-names>S.</given-names></name> <name><surname>Keller</surname> <given-names>P. E.</given-names></name></person-group> (<year>2013</year>). <article-title>Musical groove modulates motor cortex excitability: a TMS investigation</article-title>. <source>Brain Cogn.</source> <volume>82</volume>, <fpage>127</fpage>&#x02013;<lpage>136</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandc.2013.03.003</pub-id><pub-id pub-id-type="pmid">23660433</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tramo</surname> <given-names>M. J.</given-names></name></person-group> (<year>2001</year>). <article-title>Music of the hemispheres</article-title>. <source>Science</source> <volume>291</volume>, <fpage>54</fpage>&#x02013;<lpage>56</lpage>. <pub-id pub-id-type="doi">10.1126/science.1056899</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Trost</surname> <given-names>W.</given-names></name> <name><surname>Fr&#x000FC;hholz</surname> <given-names>S.</given-names></name> <name><surname>Sch&#x000F6;n</surname> <given-names>D.</given-names></name> <name><surname>Labb&#x000E9;</surname> <given-names>C.</given-names></name> <name><surname>Pichon</surname> <given-names>S.</given-names></name> <name><surname>Grandjean</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Getting the beat: entrainment of brain activity by musical rhythm and pleasantness</article-title>. <source>Neuroimage</source> <volume>103C</volume>, <fpage>55</fpage>&#x02013;<lpage>64</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2014.09.009</pub-id><pub-id pub-id-type="pmid">25224999</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Witek</surname> <given-names>M. A. G.</given-names></name> <name><surname>Clarke</surname> <given-names>E. F.</given-names></name> <name><surname>Wallentin</surname> <given-names>M.</given-names></name> <name><surname>Kringelbach</surname> <given-names>M. L.</given-names></name> <name><surname>Vuust</surname> <given-names>P.</given-names></name></person-group> (<year>2014</year>). <article-title>Syncopation, body-movement and pleasure in groove music</article-title>. <source>PLoS ONE</source> <volume>9</volume>:<fpage>e94446</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0094446</pub-id><pub-id pub-id-type="pmid">24740381</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Witek</surname> <given-names>M. A. G.</given-names></name> <name><surname>Kringelbach</surname> <given-names>M. L.</given-names></name> <name><surname>Vuust</surname> <given-names>P.</given-names></name></person-group> (<year>2015</year>). <article-title>Musical rhythm and affect: comment on &#x0201C;The quartet theory of human emotions: an integrative and neurofunctional model&#x0201D; by S. Koelsch et al.</article-title> <source>Phys. Life Rev.</source> <volume>13</volume>, <fpage>92</fpage>&#x02013;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1016/j.plrev.2015.04.029</pub-id><pub-id pub-id-type="pmid">25936618</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zatorre</surname> <given-names>R. J.</given-names></name></person-group> (<year>2015</year>). <article-title>Musical pleasure and reward: mechanisms and dysfunction</article-title>. <source>Ann. N.Y. Acad. Sci.</source> <volume>1337</volume>, <fpage>202</fpage>&#x02013;<lpage>211</lpage>. <pub-id pub-id-type="doi">10.1111/nyas.12677</pub-id><pub-id pub-id-type="pmid">25773636</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zatorre</surname> <given-names>R. J.</given-names></name> <name><surname>Chen</surname> <given-names>J. L.</given-names></name> <name><surname>Penhune</surname> <given-names>V. B.</given-names></name></person-group> (<year>2007</year>). <article-title>When the brain plays music: auditory-motor interactions in music perception and production</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>8</volume>, <fpage>547</fpage>&#x02013;<lpage>558</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2152</pub-id><pub-id pub-id-type="pmid">17585307</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zatorre</surname> <given-names>R. J.</given-names></name> <name><surname>Krumhansl</surname> <given-names>C. L.</given-names></name></person-group> (<year>2002</year>). <article-title>Mental models and musical minds</article-title>. <source>Science</source> <volume>298</volume>, <fpage>2138</fpage>&#x02013;<lpage>2139</lpage>. <pub-id pub-id-type="doi">10.1126/science.1080006</pub-id><pub-id pub-id-type="pmid">12481121</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zatorre</surname> <given-names>R. J.</given-names></name> <name><surname>Salimpoor</surname> <given-names>V. N.</given-names></name></person-group> (<year>2013</year>). <article-title>From perception to pleasure: music and its neural substrates</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A.</source> <volume>110</volume>(<supplement>Suppl. 2</supplement>), <fpage>10430</fpage>&#x02013;<lpage>10437</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1301228110</pub-id><pub-id pub-id-type="pmid">23754373</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zatorre</surname> <given-names>R.</given-names></name></person-group> (<year>2005</year>). <article-title>Music, the food of neuroscience?</article-title> <source>Nature</source> <volume>434</volume>, <fpage>312</fpage>&#x02013;<lpage>315</lpage>. <pub-id pub-id-type="doi">10.1038/434312a</pub-id><pub-id pub-id-type="pmid">15772648</pub-id></citation>
</ref>
</ref-list>
</back>
</article>