<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3-mathml3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2026.1759699</article-id>
<article-version article-version-type="Version of Record" vocab="NISO-RP-8-2008"/>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Hypothesis and Theory</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Music-induced emotion as controlled hallucination: an active interoceptive inference account</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Tsai</surname>
<given-names>Chen-Gia</given-names>
</name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/238356"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; original draft" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing &#x2013; original draft</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Formal analysis" vocab-term-identifier="https://credit.niso.org/contributor-roles/formal-analysis/">Formal analysis</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="methodology" vocab-term-identifier="https://credit.niso.org/contributor-roles/methodology/">Methodology</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="visualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/visualization/">Visualization</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="validation" vocab-term-identifier="https://credit.niso.org/contributor-roles/validation/">Validation</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="investigation" vocab-term-identifier="https://credit.niso.org/contributor-roles/investigation/">Investigation</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &#x0026; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &#x0026; editing</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="conceptualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
</contrib>
</contrib-group>
<aff id="aff1"><label>1</label><institution>Graduate Institute of Musicology, National Taiwan University</institution>, <city>Taipei</city>, <country country="tw">Taiwan</country></aff>
<aff id="aff2"><label>2</label><institution>Graduate Institute of Brain and Mind Sciences, National Taiwan University</institution>, <city>Taipei</city>, <country country="tw">Taiwan</country></aff>
<author-notes>
<corresp id="c001"><label>&#x002A;</label>Correspondence: Chen-Gia Tsai, <email xlink:href="mailto:tsaichengia@ntu.edu.tw">tsaichengia@ntu.edu.tw</email></corresp>
</author-notes>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2026-02-16">
<day>16</day>
<month>02</month>
<year>2026</year>
</pub-date>
<pub-date publication-format="electronic" date-type="collection">
<year>2026</year>
</pub-date>
<volume>17</volume>
<elocation-id>1759699</elocation-id>
<history>
<date date-type="received">
<day>03</day>
<month>12</month>
<year>2025</year>
</date>
<date date-type="rev-recd">
<day>09</day>
<month>01</month>
<year>2026</year>
</date>
<date date-type="accepted">
<day>15</day>
<month>01</month>
<year>2026</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2026 Tsai.</copyright-statement>
<copyright-year>2026</copyright-year>
<copyright-holder>Tsai</copyright-holder>
<license>
<ali:license_ref start_date="2026-02-16">https://creativecommons.org/licenses/by/4.0/</ali:license_ref>
<license-p>This is an open-access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License (CC BY)</ext-link>. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Music is widely recognized as a powerful elicitor of embodied emotion, yet the precise mechanisms by which auditory patterns are translated into specific bodily feelings remain underspecified. Existing models of contagion and entrainment often lack granular mappings between musical features and distinct interoceptive states. This article proposes a novel theoretical framework viewing musical emotion as an instance of active interoceptive inference. I argue that musical structures (e.g., rhythm, dynamics, timbre) function as &#x201C;pseudo-interoceptive&#x201D; evidence. Within a hierarchical generative model, the brain integrates these cues with actual physiological signals and extramusical context to infer the somatic state of a &#x201C;virtual body&#x201D; implied by the music. Conceptually, this approach extends bottom-up theories by emphasizing top-down predictions. It is posited that the resulting conscious experience is a composite: it blends the listener&#x2019;s genuine physiological arousal&#x2014;serving as an energetic substrate&#x2014;with the simulated affective qualia of the virtual persona. To illustrate this, principled mappings are proposed between musical parameters and internal states, specifically focusing on cardiac and pain-like sensations. Analyses of works by Mozart, Schubert, Berlioz, Beethoven, and Verdi demonstrate how composers manipulate these cues to drive a relatively high level of precision-weighted prediction error, thereby sustaining attention and fostering immersion as the music unfolds. Ultimately, this framework redefines music-induced emotion as a &#x201C;controlled hallucination&#x201D; of bodily change, offering new insights into aesthetic empathy and the therapeutic potential of music.</p>
</abstract>
<kwd-group>
<kwd>active inference</kwd>
<kwd>crossmodal correspondence</kwd>
<kwd>interoception</kwd>
<kwd>music theory</kwd>
<kwd>musical emotion</kwd>
<kwd>predictive coding</kwd>
</kwd-group>
<funding-group>
<funding-statement>The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.</funding-statement>
</funding-group>
<counts>
<fig-count count="2"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="75"/>
<page-count count="13"/>
<word-count count="11262"/>
</counts>
<custom-meta-group>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Emotion Science</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>Despite decades of research on music and emotion, a fundamental question persists: how do abstract auditory patterns give rise to specific bodily feelings? Listeners do not simply recognize that music is sad or joyful; they often report felt changes in heartbeat, breathing, and muscular tension. This gap between acoustic structure and visceral experience has motivated a range of embodied and simulation-based theories, yet the mechanisms underlying such mappings remain incompletely specified. A prominent class of theories frames music-evoked emotion in terms of embodied simulation. The Mimetic Hypothesis, for example, proposes that listeners understand music via covert motor and subvocal imitation, subtly engaging the musculature of the limbs and vocal apparatus in response to musical structure (<xref ref-type="bibr" rid="ref11">Cox, 2016</xref>). Similarly, <xref ref-type="bibr" rid="ref29">Juslin&#x2019;s (2013)</xref> model of music and emotion highlights emotional contagion and rhythmic entrainment as key pathways: listeners mirror the emotional valence of voice-like acoustic cues and allow internal physiological rhythms&#x2014;heart rate, breathing, bodily sway&#x2014;to align with musical patterns. On this view, music does not simply &#x201C;represent&#x201D; emotion in an abstract way; rather, it recruits the same systems that underlie vocal expression, movement, and autonomic adjustment.</p>
<p>Neuroscientific work lends support to this embodied perspective. <xref ref-type="bibr" rid="ref44">Molnar-Szakacs and Overy (2006)</xref> argued that a frontoparietal mirror network encodes musical structure as intentional motor sequences, while the anterior insula links these simulations to the regulation of the body&#x2019;s internal milieu, thereby contributing to feeling states. Subsequent neuroimaging studies have implicated both mirror-related regions and the insula in decoding musical emotion and in tracking individual differences in empathy and aesthetic sensitivity (<xref ref-type="bibr" rid="ref52">Sachs et al., 2018</xref>; <xref ref-type="bibr" rid="ref36">Koelsch et al., 2021</xref>). Yet, despite this growing body of work, existing models rarely specify how particular musical features map onto specific interoceptive states (e.g., cardiac, respiratory, or pain-like sensations), leaving a gap between fine-grained embodied descriptions of musical emotion and mechanistic accounts of bodily feeling.</p>
<p>Interoception is commonly defined as the sensing of the body&#x2019;s internal condition (<xref ref-type="bibr" rid="ref12">Craig, 2002</xref>). Contemporary accounts emphasize that interoceptive feelings are not a simple readout of visceral afferents, but inferred states that integrate cardiovascular, respiratory, nociceptive, somatic, and exteroceptive signals with cognitive and affective context (<xref ref-type="bibr" rid="ref10">Ceunen et al., 2016</xref>; <xref ref-type="bibr" rid="ref9">Carvalho and Damasio, 2021</xref>). Within this view, interoception is best understood as an integrated, cross-modal percept of bodily state. At the neural level, interoceptive processing is tightly coupled to homeostasis and supported by a hierarchically organized insula&#x2013;cingulate&#x2013;prefrontal network. Posterior insula receives homeostatic and visceral inputs, whereas more anterior insula, together with cingulate and prefrontal regions, generates more abstract representations of bodily state and integrates them with exteroceptive information and higher-order appraisal (<xref ref-type="bibr" rid="ref9">Carvalho and Damasio, 2021</xref>; <xref ref-type="bibr" rid="ref14">Damasio and Damasio, 2024</xref>). This abstraction and integration imply that interoceptive experience can be shaped by patterns in the external environment. In particular, auditory events that mimic the temporal and qualitative profile of bodily sensations may be incorporated into the interoceptive construct. 
Consistent with this idea, exogenous simulations of heartbeats&#x2014;particularly accelerated rhythms&#x2014;have been shown to modulate interoceptive processing and increase subjective and autonomic arousal (<xref ref-type="bibr" rid="ref35">Kleint et al., 2015</xref>; <xref ref-type="bibr" rid="ref63">Tanaka et al., 2021</xref>; <xref ref-type="bibr" rid="ref69">Vicentin et al., 2024</xref>), indicating that stylized cardiac signals are sufficient to influence emotional and bodily states. From this standpoint, musical sound becomes a candidate source of &#x201C;pseudo-interoceptive&#x201D; evidence&#x2014;evidence that does not originate in visceral afferents but is nevertheless treated as informative about bodily state. In other words, certain acoustic patterns can function as a proxy for interoceptive signals, biasing interoceptive inference and shaping subjective feeling.</p>
<p>Interestingly, historical music aesthetics already anticipated this kind of link between auditory structure and bodily feeling. In the Baroque Doctrine of Affections, musical figures were explicitly theorized as tools for arousing and sustaining specific passions (<xref ref-type="bibr" rid="ref6">Buelow, 1973</xref>). These embodied associations between musical parameters and bodily states persisted into the pre-Classical and Classical eras and well into the nineteenth century. C. P. E. Bach&#x2019;s <italic>Fantasia in A major</italic> (H. 278), written during a gout attack, was characterized by Cramer as materializing &#x201C;flying pain&#x201D; through rapid figurations (<xref ref-type="bibr" rid="ref26">Head, 2016</xref>), while Mozart referred to the aria &#x201C;O wie &#x00E4;ngstlich&#x201D; from <italic>Die Entf&#x00FC;hrung aus dem Serail</italic> as expressing a &#x201C;throbbing heart,&#x201D; explicitly linking its musical figuration to cardiac sensation (<xref ref-type="bibr" rid="ref4">Bauman, 1991</xref>). Such descriptions suggest that European art music did not only aim to convey generic affect (e.g., joy or sadness), but sometimes sought to represent specific interoceptive qualia. From a psychological perspective, these historical accounts can be read as early, informal hypotheses about pseudo-interoceptive mappings&#x2014;claims that specific musical figures can simulate particular patterns of bodily sensation.</p>
<p>Building on this convergence between contemporary interoception research and historical descriptions of bodily feeling in music, the present article adopts the framework of <italic>active interoceptive inference</italic> as its core theoretical lens (<xref ref-type="bibr" rid="ref59">Seth and Friston, 2016</xref>). Within this framework, the brain continuously predicts its internal milieu and minimizes interoceptive prediction error through belief updating and allostatic regulation; subjective feelings arise as the conscious expression of these homeostasis-oriented inferences about bodily state. Against this background, when listeners encounter music containing pain-like or cardiac-like cues, interoceptive generative models are recruited to infer the physiological&#x2013;affective states of a musical persona&#x2014;a &#x201C;virtual body&#x201D; implied by sound. On this view, emotion contagion during music listening is understood as a consequence of interoceptive generative models minimizing prediction error.</p>
<p>The aims of this article are threefold. First, it formalizes musical emotion as an instance of active interoceptive inference, integrating embodied approaches to music with predictive-coding accounts of perception and affect. Second, it proposes principled mappings between specific musical parameters (e.g., rhythm, dynamics, timbre, harmony, mode) and virtual bodily states, drawing on work on crossmodal correspondences and homeostatic regulation. Third, it illustrates these mappings through analyses of Western classical repertoire, focusing on cardiac- and pain-like qualia and exploring implications for emotion contagion, aesthetic experience, and music-based therapeutic mechanisms. These aims are pursued within a unified theoretical framework that links musical structure to interoceptive prediction and control.</p>
<p>Extending and refining previous theories, the present framework introduces several specific advances. Relative to appraisal-based and categorical models of musical emotion, and to mimetic accounts that emphasize covert motor or vocal imitation, it foregrounds how music can mimic and organize specific bodily sensations (such as cardiac- and pain-like feelings), rather than merely signaling broad emotion categories. It also reformulates the &#x201C;action program&#x201D; account of musical feeling proposed by <xref ref-type="bibr" rid="ref23">Habibi and Damasio (2014)</xref>. In their model, music engages evolutionarily conserved action programs for homeostasis and survival: voice-like and rhythmic cues trigger stereotyped autonomic and motor responses (e.g., changes in heart rate, respiration, and skin conductance), and these bodily changes are then mapped by somatosensory and interoceptive cortices into subjective feeling states. The present framework recasts this predominantly bottom-up view within a hierarchical generative model, assigning a portion of the explanatory work to higher levels of the interoceptive system. Finally, the framework exploits the fact that musical works do not present a fixed emotional state but instead unfold over time, proposing that composers can shape the timing and magnitude of musical changes&#x2014;and the associated prediction errors&#x2014;to sustain listeners&#x2019; attention and deepen their immersion in the evolving musical narrative. From this perspective, rhythmic entrainment as described by <xref ref-type="bibr" rid="ref29">Juslin (2013)</xref> can be understood not as literal synchronization between the musical beat and the listener&#x2019;s heart rate, but as the continuous updating of beliefs about the virtual body in response to musical changes that are treated as pseudo-interoceptive evidence.</p>
</sec>
<sec id="sec2">
<label>2</label>
<title>Inferring musical interoception</title>
<sec id="sec3">
<label>2.1</label>
<title>Interoceptive generative models and musical cues</title>
<p>To make the proposed account of musical interoception more precise, this section adopts the predictive coding framework (<xref ref-type="bibr" rid="ref18">Friston, 2010</xref>; <xref ref-type="bibr" rid="ref19">Friston et al., 2017a</xref>), in which the brain is cast as a hierarchical inference system: higher levels generate predictions about sensory input based on internal models (prior beliefs), and mismatches between prediction and input (prediction errors) drive either belief updating or <italic>active inference</italic>&#x2014;implementing actions to minimize prediction error. In this view, covert motor and interoceptive simulations during music listening are not imitation for its own sake, but a means of generating predictions that help minimize prediction error. In this way, predictive coding provides a computational principle for embodied and mimetic accounts of music perception.</p>
<p>Building on predictive coding accounts, <xref ref-type="bibr" rid="ref58">Seth (2013)</xref> characterized perception as a form of &#x201C;controlled hallucination.&#x201D; Here, &#x201C;hallucination&#x201D; is used in a technical, non-clinical sense: it refers to the constructive, model-based nature of perceptual experience, not to pathological percepts that arise without constraint. Fundamentally, the &#x201C;control&#x201D; lies in the continuous calibration of these generative predictions by sensory evidence and error-correction mechanisms, which differentiates this notion from mental imagery or free imagination (which can be voluntarily generated and need not be anchored to ongoing sensory input). On this view, the brain uses hierarchical generative models to predict the causes of both external and bodily signals, updating those predictions in light of incoming data. Interoception is a special case of this process in which the predicted causes concern the internal milieu, so that subjective feeling reflects precision-weighted inference about bodily state under sensory constraint.</p>
<p>Two influential extensions of this predictive-processing perspective move beyond perception per se, arguing that the same hierarchical generative modeling can be applied to both interoceptive regulation and social understanding. In the active interoceptive inference account (<xref ref-type="bibr" rid="ref59">Seth and Friston, 2016</xref>), the brain predicts its internal milieu and reduces interoceptive prediction error via belief updating and allostatic regulation, with feelings construed as the conscious expression of these homeostasis-oriented inferences about bodily state. Extending the same logic to theory of mind and social cognition, <xref ref-type="bibr" rid="ref45">Ondobaka et al. (2017)</xref> proposed that inferences about others&#x2019; intentions and emotions are likewise underpinned by hierarchical interoceptive inference: the generative model used to explain one&#x2019;s own actions and feelings is redeployed to interpret another agent, such that the observer infers which internal states and action policies would best explain the other&#x2019;s movements and expressions and attributes those inferred states to that agent.</p>
<p>These considerations have direct implications for music perception. Certain musical cues may be assimilated as pseudo-interoceptive evidence, informing the listener&#x2019;s inferences about the emotional condition of the music-implied virtual body&#x2014;a simulated persona comparable to a fictional character, rather than a simple set of physical sound properties. For example, rhythmic pulsations resembling the temporal profile of cardiac acceleration may be interpreted as signals of interoceptive fluctuation, prompting the generative model to infer heightened arousal. Subjectively, this can manifest as changes in felt anxiety or urgency that the listener attributes to the music. By analogy with shared-representation accounts of action and emotion (<xref ref-type="bibr" rid="ref44">Molnar-Szakacs and Overy, 2006</xref>), the present article proposes that insular circuitry&#x2014;ordinarily used to monitor one&#x2019;s own bodily state&#x2014;can be recruited to infer the states of the musical persona that best explain the pseudo-interoceptive musical cues. The conscious expression of such inferences is the music-induced emotion, which seems to possess a hallucination-like quality and is considered to belong to the domain of vicarious emotions (<xref ref-type="bibr" rid="ref31">Kawakami et al., 2014</xref>) or aesthetic emotions, rather than utilitarian emotions (<xref ref-type="bibr" rid="ref75">Zentner et al., 2008</xref>). Yet these affective experiences are not disembodied. They are likely grounded in neural mappings of bodily states (<xref ref-type="bibr" rid="ref23">Habibi and Damasio, 2014</xref>).</p>
<p>Notably, this &#x201C;hallucination&#x201D; of music-evoked emotion is controlled: its simulated states are continuously constrained and shaped by musical structure, contextual information, and the listener&#x2019;s ongoing interoceptive and homeostatic demands, rather than drifting freely as unconstrained fantasy. Consequently, this framework focuses on interoceptive <italic>awareness</italic>, broadly understood as the subjective capacity to notice and interpret somatic states. Extending this construct to the processing of pseudo-interoceptive cues from a virtual body, the article proposes a form of musical interoceptive awareness in which listeners can experience specific somatic qualities as structuring features of the music.</p>
<p>In this light, when exteroceptive sounds drive interoceptive inference, music provides a far richer aesthetic and epistemic medium than isolated heartbeat-like stimuli. In experimental studies, such stimuli are often designed to manipulate affect and typically function as decontextualized, quasi-medical inputs (<xref ref-type="bibr" rid="ref35">Kleint et al., 2015</xref>; <xref ref-type="bibr" rid="ref63">Tanaka et al., 2021</xref>; <xref ref-type="bibr" rid="ref69">Vicentin et al., 2024</xref>). By contrast, cardiac-like cues in Western classical repertoire are embedded within harmonic, melodic, and narrative contexts: they are compelling musical events in their own right and simultaneously provide rich contextual information about who is feeling what, and why. This embeddedness not only ensures that the hallucination remains controlled, but more crucially, it renders the musical persona as a living subject inhabiting a virtual world&#x2014;an entity whose physiological states undergo meaningful and dynamic fluctuations. This depth of engagement helps to explain why listeners are willing to invest sustained attention and emotional involvement in such pieces.</p>
<p><xref ref-type="fig" rid="fig1">Figure 1</xref> illustrates the proposed architecture. By integrating pseudo-interoceptive musical cues with afferent bodily signals and extramusical priors, the generative model minimizes prediction error across two parallel tracks: the listener&#x2019;s real body and the musically implied virtual body. Consequently, the ensuing conscious emotion is a composite experience&#x2014;a &#x201C;controlled hallucination&#x201D; of the virtual persona&#x2019;s affect that remains grounded in the listener&#x2019;s actual physiology.</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>Schematic model of musical emotion as active interoceptive inference. The framework proposes that music-induced emotion arises from a hierarchical generative model (central cream-colored box) that integrates three key sources of information: (1) Musical features (top-left), which function as pseudo-interoceptive cues (e.g., rhythmic pulsations processed as virtual cardiac signals); (2) Real bodily signals (bottom-left), which provide the physiological energetic substrate; and (3) Extramusical context (top-center), which sets high-level priors and constraints. Blue arrows indicate the bottom-up flow of sensory evidence (driving prediction errors), while brown arrows denote the resulting conscious percepts. The generative model minimizes prediction error through belief updating and allostatic regulation (curved black arrow). Crucially, the system infers two parallel states: the state of the virtual body implied by the music and the state of the listener&#x2019;s real body. The resulting conscious experience (right) is a composite: a &#x201C;controlled hallucination&#x201D; of the virtual body&#x2019;s affective state (interoceptive qualia and emotion labels) anchored by the listener&#x2019;s genuine physiological feelings.</p>
</caption>
<graphic xlink:href="fpsyg-17-1759699-g001.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Diagram illustrating the process of music-induced emotion via interoceptive mechanisms. Musical cues, such as rhythm and melody, and extramusical context, like lyrics and narratives, function as external stimuli. Real bodily signals contribute to an interoceptive generative model, which infers bodily states and updates beliefs to minimize prediction error. This model leads to emotions experienced as controlled hallucinations, incorporating interoceptive qualia, emotion labels, and real bodily feelings. The diagram features boxes representing each concept, with arrows indicating the flow between them.</alt-text>
</graphic>
</fig>
</sec>
<sec id="sec4">
<label>2.2</label>
<title>Extramusical information and attribution of emotional meaning</title>
<p>In active interoceptive inference, raw interoceptive signals do not in themselves determine a specific emotion. Instead, they are affectively ambiguous and become meaningful only via context-sensitive inferences about their causes (<xref ref-type="bibr" rid="ref59">Seth and Friston, 2016</xref>). In line with the Two-Factor Theory of Emotion (<xref ref-type="bibr" rid="ref54">Schachter and Singer, 1962</xref>), emotion can thus be seen as arising from the interaction between physiological arousal and its cognitive interpretation. This view is compatible with constructionist accounts of emotion, in which core affect is shaped into discrete emotion categories through conceptual and contextual constraints (<xref ref-type="bibr" rid="ref3">Barrett, 2017</xref>).</p>
<p>In music perception, semantic and extramusical information&#x2014;titles, program notes, socio-historical framing, and biographical details about the composer&#x2014;provides a higher-order frame for emotional labeling (<xref ref-type="bibr" rid="ref70">Vuoskoski and Eerola, 2013</xref>; <xref ref-type="bibr" rid="ref32">Kiernan et al., 2021</xref>). Such cues constrain how arousal is construed by anchoring ambiguous physiological simulations to specific, consciously experienced emotions (e.g., construing a rapid heartbeat as &#x201C;romantic longing&#x201D; rather than &#x201C;cardiac distress&#x201D;). In opera and other narrative genres, listeners combine interoceptive cues&#x2014;including simulated laryngeal sensations associated with the singing voice&#x2014;with verbal and dramaturgical context to support theory-of-mind inferences about the characters portrayed. While the pseudo-interoceptive signals in the music drive emotional Theory of Mind (feeling the persona&#x2019;s somatic state), extramusical cues support cognitive Theory of Mind (understanding the persona&#x2019;s situation). Underpinning this entire process, the listener&#x2019;s genuine physiological arousal serves as the necessary energetic substrate, lending visceral reality to the inferred virtual states.</p>
<p>The present account is developed primarily with Western art music and its often narrative, persona-rich repertoire in mind. The discussion section will briefly consider how far the same principles might generalize to other genres and listening contexts.</p>
</sec>
<sec id="sec5">
<label>2.3</label>
<title>Precision weighting, attention, and internal model</title>
<p>A key concept for linking musical structure to attention within predictive coding is precision weighting. Predictive coding models posit that the brain optimizes not only the content of its predictions but also the precision assigned to prediction errors. Precision, typically formalized as the expected inverse variance of a given error signal, determines its impact on belief updating (<xref ref-type="bibr" rid="ref17">Feldman and Friston, 2010</xref>). High-precision errors exert greater influence, whereas low-precision errors are down-weighted. Functionally, precision modulation can be regarded as a form of gain control on prediction errors and is closely linked to attention. In predictive coding, attention is the process by which the system selectively enhances the precision of certain sensory channels or model components.</p>
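<p>The arithmetic of precision weighting can be made concrete with a minimal numerical sketch. The following toy example (an illustrative assumption, not a model from the cited literature; all function names and values are hypothetical) shows how the same prediction error shifts a Gaussian belief strongly when the sensory channel is assigned high precision, and only weakly when its precision is low.</p>

```python
# Toy sketch of precision-weighted belief updating for one Gaussian
# belief, in the spirit of predictive coding. Precision = inverse
# variance. All names and numbers here are illustrative assumptions.

def update_belief(prior_mean, prior_precision, observation, sensory_precision):
    """Combine a prior belief with one observation.

    The prediction error is weighted by the relative precision of the
    sensory channel: high-precision errors dominate belief updating,
    low-precision errors are down-weighted (gain control).
    """
    prediction_error = observation - prior_mean
    gain = sensory_precision / (prior_precision + sensory_precision)
    posterior_mean = prior_mean + gain * prediction_error
    posterior_precision = prior_precision + sensory_precision
    return posterior_mean, posterior_precision

# The same 30-unit error under high vs. low sensory precision:
m_hi, _ = update_belief(prior_mean=60.0, prior_precision=1.0,
                        observation=90.0, sensory_precision=4.0)
m_lo, _ = update_belief(prior_mean=60.0, prior_precision=1.0,
                        observation=90.0, sensory_precision=0.25)
print(m_hi)  # 84.0 -- belief pulled most of the way toward the evidence
print(m_lo)  # 66.0 -- belief barely moves
```

<p>The gain term is exactly the &#x201C;attention as precision&#x201D; mechanism described above: boosting the precision of a sensory channel increases the leverage of its prediction errors on the internal model.</p>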
<p>Musical organization&#x2014;particularly the use of variation and contrast&#x2014;provides a powerful means of shaping precision weighting. In passages dominated by repetition and low informational novelty, the brain learns that its predictions are reliable and that ensuing prediction errors tend to be small. Precision is therefore preferentially assigned to the internal model, while the precision accorded to incoming sensory signals is relatively reduced. Conversely, when a harmonic modulation, textural rupture, or rhythmic change produces a salient prediction error, the prevailing model is momentarily inadequate. To revise its beliefs, the system must transiently increase the precision of sensory prediction errors, thereby amplifying the influence of bottom-up input and reallocating computational resources from entrenched priors to new evidence (<xref ref-type="bibr" rid="ref17">Feldman and Friston, 2010</xref>). Subjectively, such moments are often experienced as abrupt captures of attention and can heighten perceived aesthetic pleasure. Consistent with this view, <xref ref-type="bibr" rid="ref55">Schellenberg et al. (2012)</xref> found that listeners generally prefer excerpts exhibiting emotional contrasts over those expressing a single sustained emotion.</p>
<p>The notion of precision-weighted prediction error (pwPE) helps clarify how musical works attract and sustain attention by hitting a &#x201C;sweet spot&#x201D; between predictability and surprise. <xref ref-type="bibr" rid="ref71">Vuust et al. (2018)</xref> argued that rhythmic patterns of intermediate complexity maximize pwPE and are therefore experienced as especially engaging and pleasurable, whereas patterns that are too simple or too complex reduce engagement. Extending this logic, if musical twists occur too frequently, prediction rapidly loses precision, the music becomes effectively unpredictable, and engagement drops.</p>
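The "sweet spot" argument can be caricatured as the product of two opposing quantities: surprise, which grows with complexity, and the precision the listener can still assign to predictions, which decays with it. The functional forms below are illustrative assumptions of mine, not those of Vuust et al. (2018); the point is only that the product peaks at intermediate complexity.

```python
import math

def pwpe(complexity):
    """Toy precision-weighted prediction error (pwPE) as a function of
    rhythmic complexity in [0, 1] (0 = fully predictable, 1 = random).
    Surprise grows with complexity; the precision of predictions decays
    with it; pwPE is their product (illustrative forms only)."""
    surprise = complexity                  # more twists, larger errors
    precision = math.exp(-4 * complexity)  # predictions lose reliability
    return surprise * precision

values = [pwpe(c / 10) for c in range(11)]
peak = max(range(11), key=lambda c: values[c])
# The maximum falls at intermediate complexity, not at either extreme.
```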
<p>However, composers do not merely deploy musical twists with caution; rather, they exploit the multidimensionality of music to develop rich techniques for affective contrast. In Western classical music after the Baroque era, composers adeptly redistributed precision across musical dimensions. When precision is lowered in one dimension&#x2014;for instance, through dissonant chromatic harmony&#x2014;other dimensions such as motivic repetition typically retain high precision (<xref ref-type="bibr" rid="ref65">Tsai, 2024</xref>). This strategic counterbalancing ensures that the system maintains a high level of pwPE. By keeping musical surprises salient and the overall structure comprehensible&#x2014;at times anchored by extramusical context&#x2014;this mechanism successfully sustains listener engagement.</p>
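The counterbalancing strategy can be sketched, under the same toy assumptions, as distributing precision across parallel musical dimensions: a large error in one channel (e.g., chromatic harmony) stays salient and interpretable because another channel (e.g., motivic repetition) remains precise, whereas if every channel becomes unpredictable, precision collapses globally and the aggregate signal shrinks. The numbers are arbitrary and purely illustrative.

```python
def total_pwpe(errors, precisions):
    """Toy aggregate pwPE: sum of precision-weighted prediction errors
    across musical dimensions (e.g., harmony, motive)."""
    return sum(p * e for e, p in zip(errors, precisions))

# Harmonic surprise (large error, modest precision) anchored by
# motivic repetition (small error, high precision):
balanced = total_pwpe([0.9, 0.1], [0.3, 2.0])
# Every dimension unpredictable: errors are large but so strongly
# down-weighted that the aggregate pwPE falls:
unanchored = total_pwpe([0.9, 0.9], [0.1, 0.1])
```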
</sec>
</sec>
<sec id="sec6">
<label>3</label>
<title>Musical features and virtual bodily states</title>
<p>Having outlined the active inference framework, I now turn to the question of why, and in what sense, specific musical features can be mapped onto interoceptive predictions. In perceptual psychology, one relevant line of work concerns <italic>crossmodal correspondences</italic>. <xref ref-type="bibr" rid="ref60">Spence (2011)</xref> reviewed converging evidence that the human perceptual system exhibits systematic tendencies to associate features across sensory modalities. One class of explanations appeals to <italic>structural correspondences</italic>, which are thought to arise from shared or isomorphic coding principles in the nervous system. Another invokes <italic>statistical correspondences</italic>, which emerge from learned associations based on the frequent co-occurrence of physical properties in the environment. Together, these findings suggest a principled basis for stable mappings between auditory patterns and bodily sensations.</p>
<p>The following sections outline how specific musical features may be mapped onto interoceptive predictions, with a focus on sensations related to cardiac and pain-like states. The same logic may also extend to other kinds of bodily feeling. For clarity, a distinction is drawn between (1) features that primarily specify the physiological state of a virtual body and (2) features that help label and interpret that state as a particular kind of emotion.</p>
<sec id="sec7">
<label>3.1</label>
<title>Virtual physiological states</title>
<p><italic>Rhythm/tempo</italic>. Temporal organization maps readily onto somatic rhythms, including cardiac and respiratory rhythms. Fast tempi and dense rhythmic patterns are associated with heightened arousal (<xref ref-type="bibr" rid="ref68">van Dyck et al., 2017</xref>). Within the present framework, such features may be experienced as analogous to tachycardia or hyperventilation&#x2014;a descriptive correspondence that invites, but does not entail, a specific mechanistic account. One candidate explanation is that the nervous system encodes temporal information isomorphically, utilizing shared neural codes for external auditory tempo and internal physiological rhythms. Statistical correspondences may also play a role: the simultaneous experience of the heart&#x2019;s tactile beat and its internal sound can create a learned association between these sensory channels.</p>
<p><italic>Dynamics</italic>. Changes in loudness are typically associated with perceived intensity and suddenness of bodily change. A gradual crescendo tends to evoke sensations akin to mounting palpitations or rising tension, whereas a decrescendo is often experienced as the dissipation of such energy. At the extremes, sforzando attacks may be felt as acute, shock-like intrusions, while an abrupt grand pause can evoke the somatic freezing response&#x2014;such as the involuntary holding of breath. These descriptive correspondences suggest a systematic relationship between dynamic contour and interoceptive quality, though the underlying mechanism remains to be established. One candidate explanation is that the nervous system encodes changes in intensity in a graded fashion across both auditory and interoceptive domains.</p>
<p><italic>Articulation</italic>. The manner of sound onset and connection can itself be experienced as a kind of tactile or visceral texture. Behavioral work has shown that staccato melodies are associated with higher perceived tension, energy, happiness, and surprise, whereas legato counterparts are judged as more cohesive and as conveying greater calmness and sadness (<xref ref-type="bibr" rid="ref8">Carr et al., 2023</xref>). More broadly, research on crossmodal correspondences reviews converging evidence that listeners systematically map auditory features onto tactile- and visual-like dimensions; for instance, <xref ref-type="bibr" rid="ref60">Spence (2011)</xref> highlighted that high-pitched or abrupt sounds are consistently associated with &#x201C;sharp&#x201D; or angular qualities, whereas lower or continuous sounds are linked to &#x201C;rounded&#x201D; or smooth forms. Extending this line of work to the interoceptive domain, I propose that staccato and marcato articulations tend to evoke punctate, jump- or stab-like bodily sensations, whereas legato lines are more readily experienced as smooth, fluid, or diffuse. One possible mechanistic explanation is that the nervous system applies similar coding principles to abrupt versus continuous events across auditory and somatosensory domains.</p>
<p><italic>Timbre</italic>. Spectral shape further refines these qualia. Bright timbres with pronounced high-frequency content are often described as sharp or piercing, whereas darker, low-frequency-rich timbres tend to be heard as dull, heavy, or diffuse. For instance, a piccolo melody, with its high-frequency-rich spectrum, is typically perceived as bright and piercing, while a timpani roll has a darker, booming quality. Extending these observations to the interoceptive domain, I suggest that bright timbres may evoke sharp, localized bodily sensations, whereas darker timbres are more readily experienced as diffuse or heavy&#x2014;as in the visceral thud of a timpani roll. One candidate explanation for these correspondences draws on statistical regularities in the physical environment: objects with sharp or piercing attributes tend to be composed of hard materials, which emit sounds rich in high frequencies when set into vibration.</p>
</sec>
<sec id="sec8">
<label>3.2</label>
<title>Emotional labeling</title>
<p><italic>Melodic contour</italic>. Pitch motion maps onto implied movement and posture, which in turn invite more active versus passive affective interpretations. Ascending contours tend to suggest effortful reaching, striving, or expansion, and are therefore more often associated with active, approach-like affective interpretations. Descending contours tend to suggest sinking or yielding and are more readily associated with passive or release-like affects. This asymmetry can be understood within the framework of <italic>musical gravity</italic> (<xref ref-type="bibr" rid="ref37">Larson and Vanhandel, 2005</xref>). I further speculate that it may be partially grounded in laryngeal proprioception: higher pitches typically require greater vocal-fold tension, analogous to a higher level of gravitational potential energy, whereas lower pitches involve reduced tension, analogous to settling into a lower-energy state. This mapping can be partially explained in terms of structural correspondences: the nervous system may use similar coding principles for physiological effort and subjective effort.</p>
<p><italic>Harmony</italic>. Harmonic configuration can be understood as mapping onto evaluative judgments of the internal environment. Consonance and harmonic resolution signal safety, certainty, and homeostatic recovery&#x2014;a return toward preferred set points. By contrast, dissonance and harmonic instability signal threat, conflict, or crisis, corresponding to deviations from homeostasis that demand explanation and corrective action. Such mappings can be partially explained in terms of structural correspondences: the nervous system likely encodes the acoustic roughness of dissonance (arising from spectral interference) and the spectral smoothness of consonance isomorphically with somatic states of irritation versus equilibrium. The tension-release pattern in music resonates with <italic>drive reduction</italic> accounts in which deviations from preferred internal states (homeostasis) generate tension and motivate a return toward equilibrium (<xref ref-type="bibr" rid="ref27">Hull, 1943</xref>).</p>
<p><italic>Mode</italic>. Within the 18th&#x2013;19th-century Western classical idiom, mode serves as a fundamental cue for emotional valence, with the major mode signaling positive affect and the minor mode conveying negative affect&#x2014;a dichotomy supported by robust empirical evidence. This association likely stems from converging factors: the minor mode&#x2019;s harmonic instability and higher dissonance may evoke uncertainty and tension (<xref ref-type="bibr" rid="ref46">Parncutt, 2014</xref>), while its lowered scale degrees mirror the prosodic features of sad speech (<xref ref-type="bibr" rid="ref13">Curtis and Bharucha, 2010</xref>). Crucially, these emotional connotations are not entirely universal but are significantly reinforced by cultural learning and exposure to Western tonal conventions.</p>
</sec>
</sec>
<sec id="sec9">
<label>4</label>
<title>Illustrative analyses of musical works</title>
<p>Building on the proposed mappings, this section applies the framework to five works from the Western art-music repertoire, spanning the Classical to late Romantic periods. By integrating score analysis with extramusical context, I explore how pseudo-interoceptive cues might interact with textual or programmatic constraints to suggest specific somatic meanings. Adopting <xref ref-type="bibr" rid="ref61">Spitzer&#x2019;s (2010)</xref> holistic approach, these analyses treat emotions not as static &#x201C;semantic labels&#x201D; but as varieties of &#x201C;emotional behavior&#x201D; enshrined within musical structure and dynamic trajectories unfolding over time. The examples serve to illustrate the framework&#x2019;s applicability across two primary interoceptive domains: cardiac sensations and pain-like experiences.</p>
<sec id="sec10">
<label>4.1</label>
<title>Mozart, <italic>Die Zauberfl&#x00F6;te</italic> (&#x201C;Dies Bildnis ist bezaubernd sch&#x00F6;n&#x201D;): cardiac and respiratory cues in sudden love</title>
<p>In Tamino&#x2019;s aria &#x201C;Dies Bildnis ist bezaubernd sch&#x00F6;n&#x201D; from Mozart&#x2019;s <italic>Die Zauberfl&#x00F6;te</italic>, the awakening of love is staged as a bodily event, saturated with cardiovascular imagery and layered physiological excitement. Tamino first reports that this portrait of a young woman fills his heart with new agitation, and that this nameless &#x201C;something&#x201D; burns in his chest like fire. At precisely the moment he sings the word &#x201C;Herz&#x201D; (heart), the accompaniment settles into a clearly pulsating pattern at a moderate tempo, which can be heard as a stylized heartbeat underpinning this newly discovered inner stirring. Once he finally names the feeling as love, the musical pulse grows more insistent, aligning with a subjective sense of accelerated heart rate and mounting arousal.</p>
<p>Immediately before he resolves to seek out the woman in the portrait and press her to his &#x201C;hot bosom,&#x201D; Mozart inserts a brief rest that may depict a held breath. In predictive coding terms, this silence functions as a prediction error, momentarily interrupting the established heartbeat-like pattern and sharpening the listener&#x2019;s expectations. Following this, over the rapid, pulsating rhythm in the lower strings, the agitated viola figurations add a layer of textural turbulence that can be likened to blood rushing through the body&#x2019;s vessels (<xref ref-type="fig" rid="fig2">Figure 2A</xref>).</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Simplified score excerpts for the selected works. Colored boxes or bars indicate pseudo-interoceptive cues; blue arrows indicate omitted measures. <bold>(A)</bold> Mozart, <italic>Die Zauberfl&#x00F6;te</italic>, &#x201C;Dies Bildnis ist bezaubernd sch&#x00F6;n.&#x201D; <bold>(B)</bold> Schubert, &#x201C;Gretchen am Spinnrade.&#x201D; <bold>(C)</bold> Berlioz, <italic>Symphonie fantastique</italic>, I. <bold>(D)</bold> Beethoven, <italic>Missa solemnis</italic> (strings only). <bold>(E)</bold> Verdi, <italic>Otello</italic>, &#x201C;Dio! Mi potevi scagliar&#x201D; (strings and woodwinds only). Fl., flute; Ob., oboe; Cl., clarinet; Bsn., bassoon; Hr., horn; Vn., violin; Va., viola; Vc., violoncello; Cb., contrabass (sounding an octave below written pitch).</p>
</caption>
<graphic xlink:href="fpsyg-17-1759699-g002.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Sheet music excerpts from five classical pieces illustrate musical expressions linked to physiological responses. (A) Mozart&#x2019;s &#x201C;Die Zauberfl&#x00F6;te&#x201D; shows heartbeat and breath represented in voices and strings. (B) Schubert&#x2019;s &#x201C;Gretchen am Spinnrade&#x201D; demonstrates heartbeat in voice, winds, and piano. (C) Berlioz&#x2019;s &#x201C;Symphonie fantastique&#x201D; depicts heartbeat in winds and strings. (D) Beethoven&#x2019;s &#x201C;Missa solemnis&#x201D; illustrates spasmodic pain and muscle twitches in strings. (E) Verdi&#x2019;s &#x201C;Otello&#x201D; shows first and second pain in winds and strings. Arrows indicate omitted measures in the score excerpts.</alt-text>
</graphic>
</fig>
</sec>
<sec id="sec11">
<label>4.2</label>
<title>Schubert, &#x201C;Gretchen am Spinnrade&#x201D;: anxious heartbeat and fantasized relief</title>
<p>Schubert&#x2019;s &#x201C;Gretchen am Spinnrade&#x201D; (D. 118) is often cited as a paradigmatic example of musical onomatopoeia: the right-hand perpetual-motion figure represents the spinning wheel, while the left-hand bass suggests the treadle. From a psychosomatic perspective, however, this spinning-wheel figuration can also be heard as externalizing Gretchen&#x2019;s ruminative thought patterns. In particular, the inner pulsation in the middle register seems to function as a &#x201C;heartbeat layer&#x201D; that contributes to the overall sense of anxious unease. This interpretation aligns with <xref ref-type="bibr" rid="ref59">Seth and Friston&#x2019;s (2016)</xref> active inference account, in which precision is implemented physiologically via neuronal gain or neuromodulation. They propose that anxiety and psychosomatic conditions involve aberrant precision weighting assigned to threat-related interoceptive signals. In this light, the relentless musical pulsation simulates Gretchen&#x2019;s state of interoceptive hypersensitivity, where the subject becomes unable to attenuate the precision of cardiac signals due to the profound uncertainty and turmoil of her romantic obsession.</p>
<p>A crucial turning point occurs in the central major-mode episode, where Gretchen rapturously describes Faust and ultimately fantasizes about his kiss. At this point, the left-hand accompaniment thins out: the steady treadle-like motion and the heartbeat layer merge into sustained chords. For an empathic listener adopting Gretchen&#x2019;s perspective, this may feel as if attention is drawn away from the chest into an episodic simulation of Faust&#x2019;s presence and kiss, with the intrusive heartbeat entirely disappearing from awareness. When the music returns to the minor mode, however, the anxious heartbeat re-emerges. It seems that attention is pulled back from sweet fantasy to stagnant reality (<xref ref-type="fig" rid="fig2">Figure 2B</xref>).</p>
</sec>
<sec id="sec12">
<label>4.3</label>
<title>Berlioz, <italic>Symphonie fantastique</italic>, I: pathological cardiac pulsation</title>
<p>Berlioz&#x2019;s unrequited love for a young actress produced severe nervous overstimulation, trembling, and a painful hypersensitivity of all his faculties; in his own account, he described listening to his heartbeat, with its pulsations shaking him &#x201C;like the pistons of a steam engine&#x201D; (<xref ref-type="bibr" rid="ref5">Brittan, 2006</xref>). In the first movement of his <italic>Symphonie fantastique</italic>, the <italic>id&#x00E9;e fixe</italic> is interleaved with low-string pulsations that gradually intensify over the course of the exposition.</p>
<p>These pulsations, which initially intrude between thematic phrases, can be interpreted as a musical analogue of palpitations breaking into conscious thought. The lyrical, upward-striving contour of the id&#x00E9;e fixe conveys longing and obsession. However, the increasingly agitated pulsations&#x2014;evolving from intermittent intrusions into a pervasive undercurrent&#x2014;can be metaphorically understood as a loss of autonomic control (<xref ref-type="fig" rid="fig2">Figure 2C</xref>). Remarkably, even as the music wavers between major and minor modes, mirroring the lover&#x2019;s anxious vacillation between hope and despair and rendering harmonic predictions uncertain, the relentless pulsations retain high precision and continue to capture the listener&#x2019;s attention.</p>
</sec>
<sec id="sec13">
<label>4.4</label>
<title>Beethoven, <italic>Missa solemnis</italic> (credo, &#x201C;Crucifixus&#x201D;): paroxysmal, spasm-like pain</title>
<p>Beethoven&#x2019;s lifelong struggles with illness and pain are well documented, and his works sometimes translate physical suffering into musical terms. In the &#x201C;Crucifixus&#x201D; section of the Credo in the <italic>Missa solemnis</italic>, he deploys harmony, dynamics, and recurring rhythmic patterns in a way that vividly evokes bodily torment (<xref ref-type="bibr" rid="ref16">Drabkin, 1991</xref>). In the strings, the texture is punctuated by three dissonant chordal attacks. While their harmonic content grows increasingly tense and tonally ambiguous (low precision), their temporal organization remains strictly regular (high precision) (<xref ref-type="fig" rid="fig2">Figure 2D</xref>). This juxtaposition generates a high level of pwPE, thereby commanding the listener&#x2019;s attention.</p>
<p>Importantly, this musical texture mirrors the complex temporal profile of somatic distress. Physiologically, the sonorities function as a musical analogue for paroxysmal, spasm-like pain. The onset of a spasmodic episode is heralded by intermittent sharp accents&#x2014;akin to premonitory pangs (<xref ref-type="bibr" rid="ref64">Thiarawat et al., 2016</xref>)&#x2014;which are likely realized here through syncopated sforzando attacks in the strings. Subsequently, rapid thirty-second-note repetitions emulate the main spasmodic attack, capturing the tremulous, cramping quality characteristic of such pain (<xref ref-type="bibr" rid="ref53">Satoyoshi and Yamada, 1967</xref>; <xref ref-type="bibr" rid="ref72">Wallace et al., 2014</xref>). Finally, softer, fragmented figures may simulate the sensation of muscle fasciculations (involuntary twitches) or micro-convulsions that often follow spasmodic pain attacks (<xref ref-type="bibr" rid="ref15">Dewarrat et al., 1994</xref>). This string accompaniment thus maps the theology of the Passion&#x2014;articulated by the vocal lines&#x2014;onto an interoceptive landscape of escalating bodily distress.</p>
</sec>
<sec id="sec14">
<label>4.5</label>
<title>Verdi, <italic>Otello</italic> (&#x201C;Dio! mi potevi scagliar&#x201D;): first and second pain</title>
<p>In <italic>Otello</italic>&#x2019;s Act III monologue &#x201C;Dio! mi potevi scagliar,&#x201D; Verdi uses visceral orchestral writing to depict the tragic hero&#x2019;s torment (<xref ref-type="bibr" rid="ref7">Busch and Verdi, 1988</xref>). From a physiological perspective, the passage offers a striking musical analogue to the biphasic phenomenology of pain. The orchestral prelude can be heard as an artistic dramatization of two experiential phases often distinguished in pain theory: an initial, sharp &#x201C;first pain&#x201D; mediated by A-<italic>&#x03B4;</italic> fibers, followed by a slower, more diffuse &#x201C;second pain&#x201D; mediated by C fibers (<xref ref-type="bibr" rid="ref62">Strigo and Craig, 2016</xref>). Rather than literally mirroring the rapid time course of A-<italic>&#x03B4;</italic> and C-fiber signaling, the music dilates this two-stage profile onto a perceptually accessible timescale, inviting the listener to inhabit a prolonged, dramatized transition from acute, piercing pain to lingering torment.</p>
<p>The passage opens with high-register string attacks: <italic>staccato fortissimo</italic> gestures that strike with sudden force. These piercing blows can be likened to the rapid, well-localized aspect of first pain&#x2014;an abrupt, shock-like intrusion that commands immediate attention. As a tonally ambiguous motif repeats with grim regularity and descends in register, these high-register jolts give way to syncopated figures in the middle and lower registers, followed by a chromatic, slowly descending line and suspended harmonies. This final phase evokes a slower, pervasive suffering reminiscent of second pain: a duller and poorly localized ache that is entwined with psychological despair (<xref ref-type="fig" rid="fig2">Figure 2E</xref>).</p>
</sec>
<sec id="sec22">
<label>4.6</label>
<title>Summary of the illustrative analyses</title>
<p>These analyses illustrate how listeners can hear dynamic musical shifts as state transitions within a virtual body, thereby fostering deep immersion in the work&#x2019;s emotional landscape. This immersion is likely supported by the hierarchical nature of the internal generative model, where distinct neural layers process predictions over different time scales. Converging theoretical and empirical work suggests that higher levels of the neural hierarchy integrate information and generate predictions over progressively longer time scales (<xref ref-type="bibr" rid="ref24">Hasson et al., 2015</xref>; <xref ref-type="bibr" rid="ref20">Friston et al., 2017b</xref>; <xref ref-type="bibr" rid="ref56">Schmitt et al., 2021</xref>; <xref ref-type="bibr" rid="ref65">Tsai, 2024</xref>). While the musical-affective changes discussed above elicit prediction errors that drive the updating of lower-level beliefs regarding the virtual body&#x2019;s momentary somatic state, the implications at higher levels are distinct. Higher-level layers, which encode predictions over longer time spans, treat these persistent streams of lower-level prediction errors not as failures of the model, but as evidence substantiating the volatile nature of the musical persona. Thus, rather than disrupting the system, these local surprises consolidate the reality of the persona inhabiting the virtual world, deepening the listener&#x2019;s engagement with the unfolding narrative.</p>
<p>Fundamentally, the validity of this framework does not hinge on the listener&#x2019;s actual physiological state closely tracking the interoceptive drama depicted in the music. As the examples from Berlioz, Beethoven, and Verdi suggest, the virtual body sometimes undergoes pathological arousal or physical torment&#x2014;states that the listener is unlikely, and arguably unwilling, to embody in full. Instead, the proposal is that listeners engage in a form of affective and motor simulation, integrating sensory evidence from the music, extramusical context, and their own schematic knowledge of emotion. In light of the Theory of Constructed Emotion (<xref ref-type="bibr" rid="ref3">Barrett, 2017</xref>), such listening episodes can be seen as opportunities to refine and enrich the brain&#x2019;s emotion concepts. By simulating extreme negative states at an abstract level, music deepens the listener&#x2019;s epistemic grasp of life&#x2019;s hardships and suffering, allowing profound meanings to be constructed (<xref ref-type="bibr" rid="ref42">Menninghaus et al., 2017</xref>; <xref ref-type="bibr" rid="ref41">Li and Tsai, 2025</xref>).</p>
</sec>
</sec>
<sec sec-type="discussion" id="sec15">
<label>5</label>
<title>Discussion</title>
<p>Recent research on musical emotion has produced a substantial body of experimental data; however, musical stimuli are still frequently described in broad categorical terms (e.g., happy, sad, fast, or slow), leaving underspecified how particular musical patterns map onto specific bodily sensations. In this article, musical emotion is treated as a special case of active interoceptive inference, a perspective that affords granular mappings between discrete musical features and virtual bodily states. Musical cues act as pseudo-interoceptive evidence, and felt emotion arises as listeners&#x2019; generative models minimize prediction error with respect to a musically implied virtual body. At this stage, the account is offered as a conceptual framework: the correspondences proposed are probabilistic tendencies, intended to guide future empirical tests.</p>
<p>Conceptually, this framework distinguishes itself by foregrounding top-down active inference, thereby extending accounts that emphasize bottom-up embodied mechanisms&#x2014;specifically, the action-program account of <xref ref-type="bibr" rid="ref23">Habibi and Damasio (2014)</xref> and the rhythmic entrainment and emotional contagion components of <xref ref-type="bibr" rid="ref29">Juslin&#x2019;s (2013)</xref> unified theory. In these specific mechanisms, musical structure is typically taken to drive bodily changes and affect via stimulus-driven synchronization and peripheral feedback. Here, by contrast, bodily arousal is treated as a necessary but nonspecific energetic substrate: the listener&#x2019;s real body constrains what states are plausible, yet the experienced quality of the emotion depends on an inferential interpretation. Specifically, the musically implied virtual body supplies a qualitative frame through which arousal is parsed into particular affective&#x2013;somatic states, shifting the explanatory emphasis from &#x201C;what the music makes the body do&#x201D; to &#x201C;what bodily state the brain infers, given musical evidence and prior expectations.&#x201D; For instance, in <italic>Symphonie fantastique</italic>, insistent pulsations may increase real physiological arousal, yet the listener need not literally entrain to the beat. The resulting conscious experience can be understood as a composite, blending proprioceptive and motoric components (e.g., subvocal rehearsal) with ongoing visceral background, while additionally incorporating simulated cardiac rhythms attributed to the musically implied virtual body.</p>
<p>This difference can be situated within the hierarchical organization of interoception. Lower-level brainstem pathways continue to regulate homeostasis, while insular subregions form a graded interface between bodily physiology and higher-level affective integration. In particular, the posterior insula can be construed as a relay where sensory evidence&#x2014;including acoustically structured, pseudo-interoceptive cues&#x2014;can be integrated with ongoing bodily signals and made available to higher-order regions such as the anterior insula. The anterior insula, in turn, supports integrative representations that yield coherent affective&#x2013;interoceptive inferences about the state of the virtual body. In this way, the framework offers a distinctive mechanism-level claim that complements prior theories: music can shape emotion by selectively weighting and reinterpreting bodily evidence within an interoceptive generative model, thereby generating a controlled, perceptual-affective &#x201C;hallucination&#x201D; of bodily change without necessarily perturbing the physiological integrity of the real body.</p>
<p>A longstanding issue in music&#x2013;emotion research concerns how to characterize the affective states that music evokes, and whether they are &#x201C;genuine&#x201D; emotions continuous with everyday life or distinctively &#x201C;aesthetic&#x201D; forms of feeling. <xref ref-type="bibr" rid="ref30">Juslin and V&#x00E4;stfj&#x00E4;ll (2008)</xref> argued for continuity, whereas <xref ref-type="bibr" rid="ref34">Kivy (1990)</xref> argued that our response to purely musical structure constitutes a distinctive form of &#x201C;being moved&#x201D; that should not be conflated with garden-variety emotions. The present account offers a mechanistic middle ground: physiological arousal in the listener&#x2019;s real body can be genuine and consequential, while its qualitative interpretation is shaped by inference, because musical structure supplies pseudo-interoceptive evidence that supports a musically implied virtual body&#x2014;a bodily hypothesis that frames how arousal is experienced. Music-induced affect can therefore be both real (in energetic and autonomic terms) and virtual (in the inferred bodily hypothesis that organizes subjective feeling), clarifying how <italic>as-if</italic> experience can arise from ordinary interoceptive mechanisms under sensory constraint. This perspective also reframes how listeners can enjoy negative emotions in music (<xref ref-type="bibr" rid="ref39">Levinson, 1982</xref>) through aesthetic distancing (<xref ref-type="bibr" rid="ref42">Menninghaus et al., 2017</xref>). The virtual body provides a mechanistic instantiation of &#x201C;distance,&#x201D; enabling salient affective&#x2013;interoceptive qualia without obligatorily recruiting the full set of real-world action policies or corresponding physiological perturbations. In this way, long-standing aesthetic questions could be recast in active-inference terms of policy selection, precision control, and bodily hypotheses.</p>
<p>Beyond explaining how music evokes affect-laden bodily simulations, this framework also extends previous predictive-coding accounts of groove. <xref ref-type="bibr" rid="ref71">Vuust et al. (2018)</xref> argued that, in the context of syncopation, bodily movement serves to reinforce the internal pulse and resolve rhythmic uncertainty. By contrast, this article foregrounds affective mimicry as a mechanism for minimizing interoceptive prediction error, shifting the analytic focus to the semantic significance of specific pseudo-interoceptive cues. This principle of simulating specific bodily states aligns closely with the predictive coding model of groove proposed by <xref ref-type="bibr" rid="ref66">Tsai (in press)</xref>. Focusing on low-frequency, low-complexity rhythmic patterns, this model argues that the kick drum and bass approximate the multisensory consequences of locomotion&#x2014;providing footfall-like impact vibrations, vestibular fluctuations, and weight-shifting patterns. When the listener remains physically still, a mismatch arises between this auditory simulation of locomotion and sensory evidence that the body is at rest. The system reduces this prediction error by engaging in covert motor simulation&#x2014;<italic>as-if</italic> actions experienced subjectively as the sensation of groove. Crucially, both the present framework and my groove model (<xref ref-type="bibr" rid="ref66">Tsai, in press</xref>) posit that active inference is anchored in specific internal bodily sensations. Furthermore, both highlight the critical role of timbre&#x2014;particularly low-frequency energy&#x2014;in conveying somatic meaning. When a rhythmic pattern delivered via such timbre appears in a musical work, the simulation of locomotion and affective mimicry likely occur simultaneously, fusing into a composite experience of an animated, feeling virtual body.</p>
<p>Building on &#x201C;mimetic&#x201D; (<xref ref-type="bibr" rid="ref44">Molnar-Szakacs and Overy, 2006</xref>; <xref ref-type="bibr" rid="ref11">Cox, 2016</xref>) and &#x201C;emotion contagion&#x201D; (<xref ref-type="bibr" rid="ref29">Juslin, 2013</xref>) accounts of music, this article advances the field in two key respects. First, it broadens the scope of internal simulation beyond the commonly discussed domain of laryngeal proprioception&#x2014;specifically, the imagined vocal-fold tension required to produce pitch&#x2014;to include cardiovascular and pain-like sensations. This perspective underscores the critical role of the accompaniment. While high-register principal melodies typically command attention and invite subvocal mimicry, the underlying, relatively unobtrusive accompaniment covertly shapes the emotional landscape, particularly by simulating the cardiac and motor rhythms of the musical persona.</p>
<p>Second, this article offers a computational rationale for embodied simulation accounts of music: we do not mimic simply for the sake of imitation, but because covert simulation enables the brain to interpret sensory signals more efficiently. From an evolutionary perspective, the neural mechanisms underlying such internal mimicry can be situated within the framework of <italic>exaptation</italic>&#x2014;the reuse or &#x201C;repurposing&#x201D; of traits that originally evolved for one function but were later co-opted for another (<xref ref-type="bibr" rid="ref21">Gould and Vrba, 1982</xref>). The sensorimotor system serves as a canonical example: in primates, premotor and parietal circuits likely evolved to support fine-grained sensorimotor control and were later &#x201C;reused&#x201D; for action understanding and imitation (<xref ref-type="bibr" rid="ref33">Kilner et al., 2007</xref>; <xref ref-type="bibr" rid="ref28">Hurley, 2008</xref>). A similar exaptive logic can be applied to the interoceptive network, which operates as a control system for the internal milieu and has arguably been repurposed from basic homeostatic regulation to support empathy and theory of mind (<xref ref-type="bibr" rid="ref45">Ondobaka et al., 2017</xref>). I propose that musical experience constitutes a further reuse of this architecture: music recruits this same circuitry anew to track the state of a virtual body implied by sound.</p>
<p>Although the case studies in this article focus on works with text or programmatic descriptions, the proposed mechanism of active interoceptive inference is likely not limited to such contexts. Even in the absence of explicit extramusical information, listeners may still recruit generative models to interpret &#x201C;absolute&#x201D; music. For instance, listeners familiar with the classical style can readily associate eighteenth-century &#x201C;sigh&#x201D; figures with specific respiratory patterns, using this schematic knowledge to infer specific affective states. Similarly, the <italic>Sturm und Drang</italic> style&#x2014;characterized by rapid, low-frequency pulsations in the minor mode, as seen in the opening themes of Mozart&#x2019;s Symphony No. 25 and Haydn&#x2019;s Symphony No. 45&#x2014;can directly simulate the cardio-respiratory signatures of anxiety or fear. Indeed, <xref ref-type="bibr" rid="ref61">Spitzer&#x2019;s (2010)</xref> analysis of fearful feelings in Schubert&#x2019;s <italic>Unfinished Symphony</italic> demonstrates how pseudo-interoceptive cues can drive affective interpretation purely through musical structure, without reliance on extramusical framing. Furthermore, research indicates that instrumental music frequently evokes visual imagery (<xref ref-type="bibr" rid="ref29">Juslin, 2013</xref>; <xref ref-type="bibr" rid="ref49">Presicce and Bailes, 2019</xref>), which likely entails an embodied, interoceptive dimension. Thus, it is plausible to posit that active interoceptive inference remains a primary engine of emotion even in purely instrumental contexts.</p>
<p>A key boundary condition concerns the cultural specificity of the proposed music&#x2013;interoceptive mappings. Some dimensions of musical structure are likely to be shaped to a greater extent by learned conventions and stylistic enculturation. Harmonic syntax and major&#x2013;minor tonality are a paradigmatic case: their affective connotations plausibly depend on historically contingent compositional norms and culturally transmitted listening schemata. By contrast, other dimensions&#x2014;most notably rhythm, tempo, periodicity, and intensity dynamics&#x2014;may be comparatively more constrained by general perceptual and sensorimotor priors, and thus more likely to support cross-cultural mappings to arousal and action-readiness. A useful illustration comes from Chinese <italic>xiqu</italic>, where a practice known as <italic>jinla manchang</italic> (also <italic>jinda manchang</italic>; &#x201C;tight accompaniment, slow/free singing&#x201D;) is widely used&#x2014;particularly in Yue opera&#x2014;to depict heightened agitation or emotional escalation while preserving a stretched, quasi-recitative vocal delivery. In this texture, a fast, regular instrumental/percussive pulse (often organized at one or two beats per bar) coexists with a freer, elongated vocal line, yielding two concurrent temporal streams. Within the present framework, the &#x201C;tight&#x201D; accompaniment can be construed as pseudo-interoceptive evidence with cardiac-like signatures, whereas the freer vocal layer sustains higher-level narrative and evaluative structure that constrains how arousal is interpreted. This contrast underscores an empirical agenda: mappings grounded in tonal-harmonic conventions should show stronger dependence on cultural familiarity, whereas mappings grounded in rhythmic/tempo cues should generalize more broadly, primarily modulated by context and attentional set.</p>
<p>For musical cues to function as pseudo-interoceptive evidence, they must engage neural circuitry that links auditory representations to interoceptive and affective processing&#x2014;most notably, pathways between the auditory cortex and the insula. This structural and functional coupling is well documented in neuroimaging research. Diffusion-weighted imaging shows that the structural integrity of white-matter tracts connecting auditory cortex and insula predicts individual differences in musical reward and aesthetic sensitivity (<xref ref-type="bibr" rid="ref51">Sachs et al., 2016</xref>). Converging fMRI findings likewise suggest that these pathways are crucial for turning acoustic structure into embodied, affect-laden experience: during listening to joyful versus fearful music, emotion-specific functional connectivity emerges between primary and secondary auditory cortices and granular regions of the insula (<xref ref-type="bibr" rid="ref36">Koelsch et al., 2021</xref>). The insula&#x2019;s role as a hub of the salience network (<xref ref-type="bibr" rid="ref43">Menon and Uddin, 2010</xref>) provides an additional layer of explanation for why certain musical events feel particularly gripping. During music listening, functional coupling between auditory cortex and the insula is thought to support the selection of acoustically and emotionally salient events (<xref ref-type="bibr" rid="ref50">Putkinen et al., 2021</xref>).</p>
<p>Long-term musical experience appears to enhance the efficiency of &#x201C;sound&#x2013;body&#x2013;emotion&#x201D; integration. Professional musicians show increased anterior insula connectivity with networks supporting empathy, attention, and sensorimotor integration (<xref ref-type="bibr" rid="ref73">Zamorano et al., 2017</xref>), and this connectivity correlates with higher empathic ability and affective sensitivity (<xref ref-type="bibr" rid="ref22">Gujing et al., 2019</xref>). Such plasticity is not limited to professional musicians. Older adults with musical experience show stronger dorsal anterior insula&#x2013;sensorimotor coupling, consistent with reinforced somatic awareness (<xref ref-type="bibr" rid="ref1">Ai et al., 2022</xref>), and singing training selectively enhances bilateral anterior insula connectivity with speech&#x2013;sensorimotor networks (<xref ref-type="bibr" rid="ref74">Zamorano et al., 2023</xref>). Moreover, <xref ref-type="bibr" rid="ref25">He et al. (2017)</xref> found that a longitudinal music intervention in patients with schizophrenia increased functional connectivity between anterior insula and anterior cingulate cortex, as well as between posterior insula and sensorimotor cortices. Together, these findings suggest that music can shape, and in some cases partially repair, the neural circuitry that integrates internal bodily states with external sensory cues.</p>
<p>Music therapy provides both a testing ground for, and an application of, the proposed mappings between specific musical parameters and interoceptive sensations. For instance, the Therapeutic Function of Music framework systematically links acoustic features such as tempo, contour, and dynamics to targeted levels of physiological arousal and emotion regulation, demonstrating that carefully structured musical elements can modulate bodily state via bottom-up mechanisms (<xref ref-type="bibr" rid="ref57">Sena Moore and Hanson-Abromeit, 2015</xref>). Extending this logic, adult music listening can be understood as a form of self-administered interoceptive training, in which listeners repeatedly experience musical trajectories from tension or crisis toward resolution and recovery. Specifically, this engagement may enhance interoceptive sensibility&#x2014;the subjective tendency to focus on and appraise somatic states&#x2014;and sharpen interoceptive awareness by continuously recalibrating the generative models that underpin our conscious feeling of the body. Significantly, in contemporary media environments, problematic smartphone use and addiction-like social media engagement have been associated with reduced insula gray-matter volume and altered salience-network connectivity (<xref ref-type="bibr" rid="ref67">Turel et al., 2018</xref>; <xref ref-type="bibr" rid="ref38">Lee et al., 2024</xref>). In this context, sustained engagement with musical narratives&#x2014;where interoceptive states unfold over extended timeframes&#x2014;offers a vital alternative mode of experience. By repeatedly exercising and recalibrating interoceptive predictive models to resolve high pwPEs, such listening habits may counteract these deficits and strengthen capacities for interoceptive awareness and regulation.</p>
<p>The proposed mappings between specific musical parameters and interoceptive sensations also invite a rethinking of musical listening, performance, and interpretation. If musical emotion is partly grounded in listeners&#x2019; schematic knowledge of interoceptive states, affective responses to music need not be regarded as purely intuitive or fixed. Listeners and performers can, in principle, learn to associate particular musical gestures with particular interoceptive qualities. Such learning is likely facilitated by basic music-analytic and physiological knowledge, together with enriched extramusical information. On this view, sensitivity to the interoceptive dimensions of music is a trainable skill. Future studies could test this claim by comparing behavioral reports, peripheral physiological responses, and neural markers of interoceptive&#x2013;auditory processing before and after targeted training in recognizing specific musical cues.</p>
<p>Complementary experiments could systematically manipulate cardiac-like versus pain-like cues within tightly controlled musical stimuli to determine whether&#x2014;and through which pathways&#x2014;these features modulate canonical markers of interoceptive inference. A recent intracranial electrophysiology study (<xref ref-type="bibr" rid="ref2">Banks et al., 2023</xref>) derived a low-dimensional &#x201C;functional geometry&#x201D; of auditory cortical resting-state networks and showed that the posterior insula occupies an intermediate position between auditory cortex and limbic structures in the resulting embedding. <xref ref-type="bibr" rid="ref2">Banks et al. (2023)</xref> further proposed that, given its robust auditory responsiveness, the posterior insula may help transform auditory cortical information into affective representations in the anterior insula, consistent with a linking role between auditory and limbic systems. While resting-state geometry cannot establish directionality, it helps sharpen a testable hypothesis: pseudo-interoceptive musical cues should reconfigure effective connectivity. Specifically, they should enhance coupling between auditory cortex and posterior insula, and strengthen posterior-insula interactions with the anterior insula and anterior cingulate cortex. This pattern would be consistent with acoustic evidence being converted into affectively salient interoceptive representations and subjective qualia. These predictions can be tested by combining connectivity models with concurrent autonomic measures. Cardiac-like cues should preferentially bias cardiovascular-related bodily hypotheses and autonomic readiness, whereas pain-like cues should more strongly engage insula&#x2013;cingulate pathways linked to salience, aversive qualia, and protective bodily predictions.</p>
<p>The network-level organization reported in <xref ref-type="bibr" rid="ref2">Banks et al. (2023)</xref> also offers a principled way to distinguish what is driven by musical structure from what is driven by contextual priors. Their findings place auditory cortex in close functional proximity to a limbic&#x2013;semantic constellation that includes the temporal pole and medial temporal lobe structures, providing an anatomically plausible route through which semantic and mnemonic information can shape auditory inference. This aligns with task-based evidence linking the temporal pole to context-sensitive socioemotional integration (<xref ref-type="bibr" rid="ref47">Pehrs et al., 2014</xref>; <xref ref-type="bibr" rid="ref48">Pehrs et al., 2018</xref>; <xref ref-type="bibr" rid="ref40">Li et al., 2019</xref>). Accordingly, an empirically tractable approach is to hold the acoustic stimulus constant while manipulating biographical knowledge, textual meaning, or programmatic narratives as contextual primes that vary the precision of higher-level priors. Mechanistically, precision weighting can be framed more simply as context-dependent gain modulation of prediction-error signaling and belief updating, which should be observable as systematic changes in directed coupling between auditory cortex, posterior insula, temporal pole/medial temporal circuitry, and anterior insula/anterior cingulate cortex. On this account, contextual primes should produce shifts in (i) the relative coupling between auditory cortex and posterior insula versus coupling between temporal pole/medial temporal circuitry and anterior insula/anterior cingulate cortex, (ii) the relative weighting of cardiac-like versus pain-like bodily hypotheses inferred from the same musical input, and (iii) the correspondence between subjective interoceptive qualia and peripheral physiology.</p>
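<p>As a schematic illustration rather than a formal model developed in this article, the precision-weighted belief updating assumed throughout can be written in the standard form of the free-energy framework (<xref ref-type="bibr" rid="ref18">Friston, 2010</xref>; <xref ref-type="bibr" rid="ref17">Feldman and Friston, 2010</xref>):</p>
<disp-formula><tex-math><![CDATA[
\dot{\mu} \propto \frac{\partial g}{\partial \mu}\,\Pi\,\varepsilon,
\qquad \varepsilon = o - g(\mu),
\qquad \Pi = \Pi_{0}\exp(\gamma c)
]]></tex-math></disp-formula>
<p>where <italic>o</italic> is the (pseudo-)interoceptive observation, <italic>g</italic>(&#x03BC;) the sensory prediction generated by the current bodily hypothesis &#x03BC;, &#x03B5; the prediction error, and &#x03A0; its precision (gain); the particular parameterization of &#x03A0; by a contextual variable <italic>c</italic> with sensitivity &#x03B3; is illustrative. Contextual primes that raise &#x03B3;<italic>c</italic> up-weight the corresponding error stream, which is the formal sense in which context can bias inference toward, say, a cardiac-like rather than a pain-like bodily hypothesis given identical acoustic input.</p>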
</sec>
<sec sec-type="conclusions" id="sec16">
<label>6</label>
<title>Conclusion</title>
<p>This article proposes an active interoceptive inference framework wherein musical cues function as pseudo-interoceptive evidence, prompting a &#x201C;controlled hallucination&#x201D; of bodily change. On this view, listeners recruit generative models to infer the state of a virtual body, with felt emotion arising as these models minimize prediction error. Admittedly, this framework captures only part of the multifaceted ways in which music moves us; mechanisms such as memory, reward, and social meaning undoubtedly interact with interoceptive inference in ways that warrant further investigation. Moreover, while the musical analyses presented here focus on Western art music and on cardiac- and pain-like qualia, future work should explore whether similar principles extend to other genres, cultures, and somatic domains&#x2014;including respiratory rhythms, thermal sensations, muscular tension, vibrotactile roughness (e.g., the tingling, buzzing sensation often reported with rock music), and even experiences of weightlessness.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="sec17">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author/s.</p>
</sec>
<sec sec-type="author-contributions" id="sec18">
<title>Author contributions</title>
<p>C-GT: Writing &#x2013; original draft, Formal analysis, Methodology, Visualization, Validation, Investigation, Writing &#x2013; review &#x0026; editing, Conceptualization.</p>
</sec>
<ack>
<title>Acknowledgments</title>
<p>I gratefully acknowledge Yu-Han Tsai for preparing the musical scores.</p>
</ack>
<sec sec-type="COI-statement" id="sec19">
<title>Conflict of interest</title>
<p>The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="ai-statement" id="sec20">
<title>Generative AI statement</title>
<p>The author(s) declared that Generative AI was used in the creation of this manuscript. Generative AI was used to improve the clarity and quality of English writing, and to assist in searching for relevant literature. The author(s) have reviewed and verified all content and take full responsibility for the final manuscript.</p>
<p>Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.</p>
</sec>
<sec sec-type="disclaimer" id="sec21">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ai</surname><given-names>M.</given-names></name> <name><surname>Loui</surname><given-names>P.</given-names></name> <name><surname>Morris</surname><given-names>T. P.</given-names></name> <name><surname>Chaddock-Heyman</surname><given-names>L.</given-names></name> <name><surname>Hillman</surname><given-names>C. H.</given-names></name> <name><surname>McAuley</surname><given-names>E.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Musical experience relates to insula-based functional connectivity in older adults</article-title>. <source>Brain Sci.</source> <volume>12</volume>:<fpage>1577</fpage>. doi: <pub-id pub-id-type="doi">10.3390/brainsci12111577</pub-id>, <pub-id pub-id-type="pmid">36421901</pub-id></mixed-citation></ref>
<ref id="ref2"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Banks</surname><given-names>M. I.</given-names></name> <name><surname>Krause</surname><given-names>B. M.</given-names></name> <name><surname>Berger</surname><given-names>D. G.</given-names></name> <name><surname>Campbell</surname><given-names>D. I.</given-names></name> <name><surname>Boes</surname><given-names>A. D.</given-names></name> <name><surname>Bruss</surname><given-names>J. E.</given-names></name> <etal/></person-group>. (<year>2023</year>). <article-title>Functional geometry of auditory cortical resting state networks derived from intracranial electrophysiology</article-title>. <source>PLoS Biol.</source> <volume>21</volume>:<fpage>e3002239</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pbio.3002239</pub-id>, <pub-id pub-id-type="pmid">37651504</pub-id></mixed-citation></ref>
<ref id="ref3"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Barrett</surname><given-names>L. F.</given-names></name></person-group> (<year>2017</year>). <article-title>The theory of constructed emotion: an active inference account of interoception and categorization</article-title>. <source>Soc. Cogn. Affect. Neurosci.</source> <volume>12</volume>:<fpage>1833</fpage>. doi: <pub-id pub-id-type="doi">10.1093/scan/nsx060</pub-id>, <pub-id pub-id-type="pmid">28472391</pub-id></mixed-citation></ref>
<ref id="ref4"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bauman</surname><given-names>T.</given-names></name></person-group> (<year>1991</year>). <article-title>Mozart and his singers: Mozart's Belmonte</article-title>. <source>Early Music</source> <volume>19</volume>, <fpage>557</fpage>&#x2013;<lpage>564</lpage>. doi: <pub-id pub-id-type="doi">10.1093/earlyj/XIX.4.557</pub-id></mixed-citation></ref>
<ref id="ref5"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Brittan</surname><given-names>F.</given-names></name></person-group> (<year>2006</year>). <article-title>Berlioz and the pathological fantastic: melancholy, monomania, and romantic autobiography</article-title>. <source>19th-Century Music</source> <volume>29</volume>, <fpage>211</fpage>&#x2013;<lpage>239</lpage>. doi: <pub-id pub-id-type="doi">10.1525/ncm.2006.29.3.211</pub-id></mixed-citation></ref>
<ref id="ref6"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Buelow</surname><given-names>G. J.</given-names></name></person-group> (<year>1973</year>). <article-title>Music, rhetoric, and the concept of the affections: a selective bibliography</article-title>. <source>Notes</source> <volume>30</volume>:<fpage>250</fpage>.</mixed-citation></ref>
<ref id="ref7"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Busch</surname><given-names>H.</given-names></name> <name><surname>Verdi</surname><given-names>G.</given-names></name></person-group> (<year>1988</year>). <source>Verdi's Otello and Simon Boccanegra (revised version) in letters and documents</source>. <publisher-loc>Oxford [Oxfordshire]; Toronto</publisher-loc>: <publisher-name>Clarendon Press; Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref8"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Carr</surname><given-names>N. R.</given-names></name> <name><surname>Olsen</surname><given-names>K. N.</given-names></name> <name><surname>Thompson</surname><given-names>W. F.</given-names></name></person-group> (<year>2023</year>). <article-title>The perceptual and emotional consequences of articulation in music</article-title>. <source>Music. Percept.</source> <volume>40</volume>, <fpage>202</fpage>&#x2013;<lpage>219</lpage>. doi: <pub-id pub-id-type="doi">10.1525/mp.2023.40.3.202</pub-id></mixed-citation></ref>
<ref id="ref9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Carvalho</surname><given-names>G. B.</given-names></name> <name><surname>Damasio</surname><given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Interoception and the origin of feelings: a new synthesis</article-title>. <source>BioEssays</source> <volume>43</volume>:<fpage>e2000261</fpage>. doi: <pub-id pub-id-type="doi">10.1002/bies.202000261</pub-id>, <pub-id pub-id-type="pmid">33763881</pub-id></mixed-citation></ref>
<ref id="ref10"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ceunen</surname><given-names>E.</given-names></name> <name><surname>Vlaeyen</surname><given-names>J. W.</given-names></name> <name><surname>Van Diest</surname><given-names>I.</given-names></name></person-group> (<year>2016</year>). <article-title>On the origin of interoception</article-title>. <source>Front. Psychol.</source> <volume>7</volume>:<fpage>743</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2016.00743</pub-id>, <pub-id pub-id-type="pmid">27242642</pub-id></mixed-citation></ref>
<ref id="ref11"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Cox</surname><given-names>A.</given-names></name></person-group> (<year>2016</year>). <source>Music and embodied cognition: listening, moving, feeling, and thinking</source>. <publisher-loc>Bloomington; Indianapolis</publisher-loc>: <publisher-name>Indiana University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref12"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Craig</surname><given-names>A. D.</given-names></name></person-group> (<year>2002</year>). <article-title>How do you feel? Interoception: the sense of the physiological condition of the body</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>3</volume>, <fpage>655</fpage>&#x2013;<lpage>666</lpage>. doi: <pub-id pub-id-type="doi">10.1038/nrn894</pub-id>, <pub-id pub-id-type="pmid">12154366</pub-id></mixed-citation></ref>
<ref id="ref13"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Curtis</surname><given-names>M. E.</given-names></name> <name><surname>Bharucha</surname><given-names>J. J.</given-names></name></person-group> (<year>2010</year>). <article-title>The minor third communicates sadness in speech, mirroring its use in music</article-title>. <source>Emotion</source> <volume>10</volume>, <fpage>335</fpage>&#x2013;<lpage>348</lpage>.</mixed-citation></ref>
<ref id="ref14"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Damasio</surname><given-names>A.</given-names></name> <name><surname>Damasio</surname><given-names>H.</given-names></name></person-group> (<year>2024</year>). <article-title>Homeostatic feelings and the emergence of consciousness</article-title>. <source>J. Cogn. Neurosci.</source> <volume>36</volume>, <fpage>1653</fpage>&#x2013;<lpage>1659</lpage>. doi: <pub-id pub-id-type="doi">10.1162/jocn_a_02119</pub-id>, <pub-id pub-id-type="pmid">38319678</pub-id></mixed-citation></ref>
<ref id="ref15"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Dewarrat</surname><given-names>A.</given-names></name> <name><surname>Kuntzer</surname><given-names>T.</given-names></name> <name><surname>Regli</surname><given-names>F.</given-names></name></person-group> (<year>1994</year>). <article-title>Muscle cramps: mechanism, etiology and current treatment</article-title>. <source>Schweiz. Rundsch. Med. Prax.</source> <volume>83</volume>, <fpage>444</fpage>&#x2013;<lpage>448</lpage>, <pub-id pub-id-type="pmid">8184238</pub-id></mixed-citation></ref>
<ref id="ref16"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Drabkin</surname><given-names>W.</given-names></name></person-group> (<year>1991</year>). <source>Beethoven, Missa Solemnis</source>. <publisher-loc>Cambridge, UK</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref17"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Feldman</surname><given-names>H.</given-names></name> <name><surname>Friston</surname><given-names>K. J.</given-names></name></person-group> (<year>2010</year>). <article-title>Attention, uncertainty, and free-energy</article-title>. <source>Front. Hum. Neurosci.</source> <volume>4</volume>:<fpage>215</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnhum.2010.00215</pub-id>, <pub-id pub-id-type="pmid">21160551</pub-id></mixed-citation></ref>
<ref id="ref18"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname><given-names>K.</given-names></name></person-group> (<year>2010</year>). <article-title>The free-energy principle: a unified brain theory?</article-title> <source>Nat. Rev. Neurosci.</source> <volume>11</volume>, <fpage>127</fpage>&#x2013;<lpage>138</lpage>. doi: <pub-id pub-id-type="doi">10.1038/nrn2787</pub-id>, <pub-id pub-id-type="pmid">20068583</pub-id></mixed-citation></ref>
<ref id="ref19"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname><given-names>K. J.</given-names></name> <name><surname>Parr</surname><given-names>T.</given-names></name> <name><surname>de Vries</surname><given-names>B.</given-names></name></person-group> (<year>2017a</year>). <article-title>The graphical brain: belief propagation and active inference</article-title>. <source>Netw. Neurosci.</source> <volume>1</volume>, <fpage>381</fpage>&#x2013;<lpage>414</lpage>. doi: <pub-id pub-id-type="doi">10.1162/NETN_a_00018</pub-id>, <pub-id pub-id-type="pmid">29417960</pub-id></mixed-citation></ref>
<ref id="ref20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname><given-names>K. J.</given-names></name> <name><surname>Rosch</surname><given-names>R.</given-names></name> <name><surname>Parr</surname><given-names>T.</given-names></name> <name><surname>Price</surname><given-names>C.</given-names></name> <name><surname>Bowman</surname><given-names>H.</given-names></name></person-group> (<year>2017b</year>). <article-title>Deep temporal models and active inference</article-title>. <source>Neurosci. Biobehav. Rev.</source> <volume>77</volume>, <fpage>388</fpage>&#x2013;<lpage>402</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neubiorev.2017.04.009</pub-id>, <pub-id pub-id-type="pmid">28416414</pub-id></mixed-citation></ref>
<ref id="ref21"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gould</surname><given-names>S. J.</given-names></name> <name><surname>Vrba</surname><given-names>E. S.</given-names></name></person-group> (<year>1982</year>). <article-title>Exaptation-a missing term in the science of form</article-title>. <source>Paleobiology</source> <volume>8</volume>, <fpage>4</fpage>&#x2013;<lpage>15</lpage>. doi: <pub-id pub-id-type="doi">10.1017/S0094837300004310</pub-id></mixed-citation></ref>
<ref id="ref22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gujing</surname><given-names>L.</given-names></name> <name><surname>Hui</surname><given-names>H.</given-names></name> <name><surname>Xin</surname><given-names>L.</given-names></name> <name><surname>Lirong</surname><given-names>Z.</given-names></name> <name><surname>Yutong</surname><given-names>Y.</given-names></name> <name><surname>Guofeng</surname><given-names>Y.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Increased insular connectivity and enhanced empathic ability associated with dance/music training</article-title>. <source>Neural Plast.</source> <volume>6</volume>:<fpage>9693109</fpage>. doi: <pub-id pub-id-type="doi">10.1155/2019/9693109</pub-id>, <pub-id pub-id-type="pmid">31198419</pub-id></mixed-citation></ref>
<ref id="ref23"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Habibi</surname><given-names>A.</given-names></name> <name><surname>Damasio</surname><given-names>A.</given-names></name></person-group> (<year>2014</year>). <article-title>Music, feelings, and the human brain</article-title>. <source>Psychomusicol. Music Mind Brain</source> <volume>24</volume>:<fpage>92</fpage>. doi: <pub-id pub-id-type="doi">10.1037/pmu0000033</pub-id></mixed-citation></ref>
<ref id="ref24"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hasson</surname><given-names>U.</given-names></name> <name><surname>Chen</surname><given-names>J.</given-names></name> <name><surname>Honey</surname><given-names>C. J.</given-names></name></person-group> (<year>2015</year>). <article-title>Hierarchical process memory: memory as an integral component of information processing</article-title>. <source>Trends Cogn. Sci.</source> <volume>19</volume>, <fpage>304</fpage>&#x2013;<lpage>313</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.tics.2015.04.006</pub-id>, <pub-id pub-id-type="pmid">25980649</pub-id></mixed-citation></ref>
<ref id="ref25"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>He</surname><given-names>H.</given-names></name> <name><surname>Yang</surname><given-names>M.</given-names></name> <name><surname>Duan</surname><given-names>M.</given-names></name> <name><surname>Chen</surname><given-names>X.</given-names></name> <name><surname>Lai</surname><given-names>Y.</given-names></name> <name><surname>Xia</surname><given-names>Y.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Music intervention leads to increased insular connectivity and improved clinical symptoms in schizophrenia</article-title>. <source>Front. Neurosci.</source> <volume>11</volume>:<fpage>744</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnins.2017.00744</pub-id>, <pub-id pub-id-type="pmid">29410607</pub-id></mixed-citation></ref>
<ref id="ref26"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Head</surname><given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>C. P. E. Bach &#x2018;In Tormentis&#x2019;: gout pain and body language in the Fantasia in A major, H278 (1782)</article-title>. <source>Eighteenth Century Music</source> <volume>13</volume>, <fpage>211</fpage>&#x2013;<lpage>234</lpage>. doi: <pub-id pub-id-type="doi">10.1017/S1478570616000051</pub-id></mixed-citation></ref>
<ref id="ref27"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Hull</surname><given-names>C. L.</given-names></name></person-group> (<year>1943</year>). <source>Principles of behavior, an introduction to behavior theory</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Appleton-Century Company</publisher-name>.</mixed-citation></ref>
<ref id="ref28"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hurley</surname><given-names>S.</given-names></name></person-group> (<year>2008</year>). <article-title>The shared circuits model (SCM): how control, mirroring, and simulation can enable imitation, deliberation, and mindreading</article-title>. <source>Behav. Brain Sci.</source> <volume>31</volume>, <fpage>1</fpage>&#x2013;<lpage>22</lpage>. doi: <pub-id pub-id-type="doi">10.1017/S0140525X07003123</pub-id>, <pub-id pub-id-type="pmid">18394222</pub-id></mixed-citation></ref>
<ref id="ref29"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Juslin</surname><given-names>P. N.</given-names></name></person-group> (<year>2013</year>). <article-title>From everyday emotions to aesthetic emotions: towards a unified theory of musical emotions</article-title>. <source>Phys. Life Rev.</source> <volume>10</volume>, <fpage>235</fpage>&#x2013;<lpage>266</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.plrev.2013.05.008</pub-id>, <pub-id pub-id-type="pmid">23769678</pub-id></mixed-citation></ref>
<ref id="ref30"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Juslin</surname><given-names>P. N.</given-names></name> <name><surname>V&#x00E4;stfj&#x00E4;ll</surname><given-names>D.</given-names></name></person-group> (<year>2008</year>). <article-title>Emotional responses to music: the need to consider underlying mechanisms</article-title>. <source>Behav. Brain Sci.</source> <volume>31</volume>, <fpage>559</fpage>&#x2013;<lpage>575</lpage>. doi: <pub-id pub-id-type="doi">10.1017/S0140525X08005293</pub-id></mixed-citation></ref>
<ref id="ref31"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kawakami</surname><given-names>A.</given-names></name> <name><surname>Furukawa</surname><given-names>K.</given-names></name> <name><surname>Okanoya</surname><given-names>K.</given-names></name></person-group> (<year>2014</year>). <article-title>Music evokes vicarious emotions in listeners</article-title>. <source>Front. Psychol.</source> <volume>5</volume>:<fpage>431</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2014.00431</pub-id>, <pub-id pub-id-type="pmid">24910621</pub-id></mixed-citation></ref>
<ref id="ref32"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kiernan</surname><given-names>F.</given-names></name> <name><surname>Krause</surname><given-names>A. E.</given-names></name> <name><surname>Davidson</surname><given-names>J. W.</given-names></name></person-group> (<year>2021</year>). <article-title>The impact of biographical information about a composer on emotional responses to their music</article-title>. <source>Musicae Sci.</source> <volume>26</volume>, <fpage>558</fpage>&#x2013;<lpage>584</lpage>. doi: <pub-id pub-id-type="doi">10.1177/1029864920988883</pub-id></mixed-citation></ref>
<ref id="ref33"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kilner</surname><given-names>J. M.</given-names></name> <name><surname>Friston</surname><given-names>K. J.</given-names></name> <name><surname>Frith</surname><given-names>C. D.</given-names></name></person-group> (<year>2007</year>). <article-title>Predictive coding: an account of the mirror neuron system</article-title>. <source>Cogn. Process.</source> <volume>8</volume>, <fpage>159</fpage>&#x2013;<lpage>166</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10339-007-0170-2</pub-id>, <pub-id pub-id-type="pmid">17429704</pub-id></mixed-citation></ref>
<ref id="ref34"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Kivy</surname><given-names>P.</given-names></name></person-group> (<year>1990</year>). <source>Music alone: Philosophical reflections on the purely musical experience</source>. <publisher-loc>Ithaca</publisher-loc>: <publisher-name>Cornell University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref35"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kleint</surname><given-names>N. I.</given-names></name> <name><surname>Wittchen</surname><given-names>H.-U.</given-names></name> <name><surname>Lueken</surname><given-names>U.</given-names></name></person-group> (<year>2015</year>). <article-title>Probing the interoceptive network by listening to heartbeats: an fMRI study</article-title>. <source>PLoS One</source> <volume>10</volume>:<fpage>e0133164</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pone.0133164</pub-id>, <pub-id pub-id-type="pmid">26204524</pub-id></mixed-citation></ref>
<ref id="ref36"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Koelsch</surname><given-names>S.</given-names></name> <name><surname>Cheung</surname><given-names>V. K. M.</given-names></name> <name><surname>Jentschke</surname><given-names>S.</given-names></name> <name><surname>Haynes</surname><given-names>J. D.</given-names></name></person-group> (<year>2021</year>). <article-title>Neocortical substrates of feelings evoked with music in the ACC, insula, and somatosensory cortex</article-title>. <source>Sci. Rep.</source> <volume>11</volume>:<fpage>10119</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s41598-021-89405-y</pub-id>, <pub-id pub-id-type="pmid">33980876</pub-id></mixed-citation></ref>
<ref id="ref37"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Larson</surname><given-names>S.</given-names></name> <name><surname>VanHandel</surname><given-names>L.</given-names></name></person-group> (<year>2005</year>). <article-title>Measuring musical forces</article-title>. <source>Music. Percept.</source> <volume>23</volume>, <fpage>119</fpage>&#x2013;<lpage>136</lpage>. doi: <pub-id pub-id-type="doi">10.1525/mp.2005.23.2.119</pub-id></mixed-citation></ref>
<ref id="ref38"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname><given-names>S.</given-names></name> <name><surname>Cheong</surname><given-names>Y.</given-names></name> <name><surname>Ro</surname><given-names>J.</given-names></name> <name><surname>Bae</surname><given-names>J.</given-names></name> <name><surname>Jung</surname><given-names>M.</given-names></name></person-group> (<year>2024</year>). <article-title>Alterations in functional connectivity in the salience network shared by depressive symptoms and smartphone overuse</article-title>. <source>Sci. Rep.</source> <volume>14</volume>:<fpage>28679</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s41598-024-79951-6</pub-id>, <pub-id pub-id-type="pmid">39562640</pub-id></mixed-citation></ref>
<ref id="ref39"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Levinson</surname><given-names>J.</given-names></name></person-group> (<year>1982</year>). <article-title>Music and negative emotion</article-title>. <source>Pac. Philos. Q.</source> <volume>63</volume>, <fpage>327</fpage>&#x2013;<lpage>346</lpage>. doi: <pub-id pub-id-type="doi">10.1111/j.1468-0114.1982.tb00110.x</pub-id></mixed-citation></ref>
<ref id="ref40"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Li</surname><given-names>C. W.</given-names></name> <name><surname>Cheng</surname><given-names>T. H.</given-names></name> <name><surname>Tsai</surname><given-names>C. G.</given-names></name></person-group> (<year>2019</year>). <article-title>Music enhances activity in the hypothalamus, brainstem, and anterior cerebellum during script-driven imagery of affective scenes</article-title>. <source>Neuropsychologia</source> <volume>133</volume>:<fpage>107073</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2019.04.014</pub-id>, <pub-id pub-id-type="pmid">31026474</pub-id></mixed-citation></ref>
<ref id="ref41"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Li</surname><given-names>C. W.</given-names></name> <name><surname>Tsai</surname><given-names>C. G.</given-names></name></person-group> (<year>2025</year>). <article-title>Neural correlates of aesthetic tragedy: evidence for enhanced semantic processing and cognitive control in response to tragic versus joyful music</article-title>. <source>Front. Psychol.</source> <volume>16</volume>:<fpage>1689581</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2025.1689581</pub-id>, <pub-id pub-id-type="pmid">41280175</pub-id></mixed-citation></ref>
<ref id="ref42"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Menninghaus</surname><given-names>W.</given-names></name> <name><surname>Wagner</surname><given-names>V.</given-names></name> <name><surname>Hanich</surname><given-names>J.</given-names></name> <name><surname>Wassiliwizky</surname><given-names>E.</given-names></name> <name><surname>Jacobsen</surname><given-names>T.</given-names></name> <name><surname>Koelsch</surname><given-names>S.</given-names></name></person-group> (<year>2017</year>). <article-title>The distancing-embracing model of the enjoyment of negative emotions in art reception</article-title>. <source>Behav. Brain Sci.</source> <volume>40</volume>:<fpage>e347</fpage>. doi: <pub-id pub-id-type="doi">10.1017/S0140525X17000309</pub-id>, <pub-id pub-id-type="pmid">28215214</pub-id></mixed-citation></ref>
<ref id="ref43"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Menon</surname><given-names>V.</given-names></name> <name><surname>Uddin</surname><given-names>L. Q.</given-names></name></person-group> (<year>2010</year>). <article-title>Saliency, switching, attention and control: a network model of insula function</article-title>. <source>Brain Struct. Funct.</source> <volume>214</volume>, <fpage>655</fpage>&#x2013;<lpage>667</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s00429-010-0262-0</pub-id>, <pub-id pub-id-type="pmid">20512370</pub-id></mixed-citation></ref>
<ref id="ref44"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Molnar-Szakacs</surname><given-names>I.</given-names></name> <name><surname>Overy</surname><given-names>K.</given-names></name></person-group> (<year>2006</year>). <article-title>Music and mirror neurons: from motion to &#x2018;e&#x2019;motion</article-title>. <source>Soc. Cogn. Affect. Neurosci.</source> <volume>1</volume>, <fpage>235</fpage>&#x2013;<lpage>241</lpage>. doi: <pub-id pub-id-type="doi">10.1093/scan/nsl029</pub-id>, <pub-id pub-id-type="pmid">18985111</pub-id></mixed-citation></ref>
<ref id="ref45"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ondobaka</surname><given-names>S.</given-names></name> <name><surname>Kilner</surname><given-names>J.</given-names></name> <name><surname>Friston</surname><given-names>K.</given-names></name></person-group> (<year>2017</year>). <article-title>The role of interoceptive inference in theory of mind</article-title>. <source>Brain Cogn.</source> <volume>112</volume>, <fpage>64</fpage>&#x2013;<lpage>68</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.bandc.2015.08.002</pub-id>, <pub-id pub-id-type="pmid">26275633</pub-id></mixed-citation></ref>
<ref id="ref46"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Parncutt</surname><given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>The emotional connotations of major versus minor tonality: one or more origins?</article-title> <source>Musicae Sci.</source> <volume>18</volume>, <fpage>324</fpage>&#x2013;<lpage>353</lpage>. doi: <pub-id pub-id-type="doi">10.1177/1029864914542842</pub-id></mixed-citation></ref>
<ref id="ref47"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pehrs</surname><given-names>C.</given-names></name> <name><surname>Deserno</surname><given-names>L.</given-names></name> <name><surname>Bakels</surname><given-names>J.-H.</given-names></name> <name><surname>Schlochtermeier</surname><given-names>L. H.</given-names></name> <name><surname>Kappelhoff</surname><given-names>H.</given-names></name> <name><surname>Jacobs</surname><given-names>A. M.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>How music alters a kiss: superior temporal gyrus controls fusiform&#x2013;amygdalar effective connectivity</article-title>. <source>Soc. Cogn. Affect. Neurosci.</source> <volume>9</volume>, <fpage>1770</fpage>&#x2013;<lpage>1778</lpage>. doi: <pub-id pub-id-type="doi">10.1093/scan/nst169</pub-id>, <pub-id pub-id-type="pmid">24298171</pub-id></mixed-citation></ref>
<ref id="ref48"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pehrs</surname><given-names>C.</given-names></name> <name><surname>Zaki</surname><given-names>J.</given-names></name> <name><surname>Taruffi</surname><given-names>L.</given-names></name> <name><surname>Kuchinke</surname><given-names>L.</given-names></name> <name><surname>Koelsch</surname><given-names>S.</given-names></name></person-group> (<year>2018</year>). <article-title>Hippocampal-temporopolar connectivity contributes to episodic simulation during social cognition</article-title>. <source>Sci. Rep.</source> <volume>8</volume>:<fpage>9409</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s41598-018-24557-y</pub-id>, <pub-id pub-id-type="pmid">29925874</pub-id></mixed-citation></ref>
<ref id="ref49"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Presicce</surname><given-names>G.</given-names></name> <name><surname>Bailes</surname><given-names>F.</given-names></name></person-group> (<year>2019</year>). <article-title>Engagement and visual imagery in music listening: an exploratory study</article-title>. <source>Psychomusicol. Music Mind Brain</source> <volume>29</volume>, <fpage>80</fpage>&#x2013;<lpage>89</lpage>. doi: <pub-id pub-id-type="doi">10.1037/pmu0000243</pub-id></mixed-citation></ref>
<ref id="ref50"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Putkinen</surname><given-names>V.</given-names></name> <name><surname>Nazari-Farsani</surname><given-names>S.</given-names></name> <name><surname>Seppala</surname><given-names>K.</given-names></name> <name><surname>Karjalainen</surname><given-names>T.</given-names></name> <name><surname>Sun</surname><given-names>L.</given-names></name> <name><surname>Karlsson</surname><given-names>H. K.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Decoding music-evoked emotions in the auditory and motor cortex</article-title>. <source>Cereb. Cortex</source> <volume>31</volume>, <fpage>2549</fpage>&#x2013;<lpage>2560</lpage>. doi: <pub-id pub-id-type="doi">10.1093/cercor/bhaa373</pub-id>, <pub-id pub-id-type="pmid">33367590</pub-id></mixed-citation></ref>
<ref id="ref51"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sachs</surname><given-names>M. E.</given-names></name> <name><surname>Ellis</surname><given-names>R. J.</given-names></name> <name><surname>Schlaug</surname><given-names>G.</given-names></name> <name><surname>Loui</surname><given-names>P.</given-names></name></person-group> (<year>2016</year>). <article-title>Brain connectivity reflects human aesthetic responses to music</article-title>. <source>Soc. Cogn. Affect. Neurosci.</source> <volume>11</volume>, <fpage>884</fpage>&#x2013;<lpage>891</lpage>. doi: <pub-id pub-id-type="doi">10.1093/scan/nsw009</pub-id>, <pub-id pub-id-type="pmid">26966157</pub-id></mixed-citation></ref>
<ref id="ref52"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sachs</surname><given-names>M. E.</given-names></name> <name><surname>Habibi</surname><given-names>A.</given-names></name> <name><surname>Damasio</surname><given-names>A.</given-names></name> <name><surname>Kaplan</surname><given-names>J. T.</given-names></name></person-group> (<year>2018</year>). <article-title>Decoding the neural signatures of emotions expressed through sound</article-title>. <source>NeuroImage</source> <volume>174</volume>, <fpage>1</fpage>&#x2013;<lpage>10</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neuroimage.2018.02.058</pub-id>, <pub-id pub-id-type="pmid">29501874</pub-id></mixed-citation></ref>
<ref id="ref53"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Satoyoshi</surname><given-names>E.</given-names></name> <name><surname>Yamada</surname><given-names>K.</given-names></name></person-group> (<year>1967</year>). <article-title>Recurrent muscle spasms of central origin. A report of two cases</article-title>. <source>Arch. Neurol.</source> <volume>16</volume>, <fpage>254</fpage>&#x2013;<lpage>264</lpage>. doi: <pub-id pub-id-type="doi">10.1001/archneur.1967.00470210030004</pub-id>, <pub-id pub-id-type="pmid">6018875</pub-id></mixed-citation></ref>
<ref id="ref54"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Schachter</surname><given-names>S.</given-names></name> <name><surname>Singer</surname><given-names>J.</given-names></name></person-group> (<year>1962</year>). <article-title>Cognitive, social, and physiological determinants of emotional state</article-title>. <source>Psychol. Rev.</source> <volume>69</volume>, <fpage>379</fpage>&#x2013;<lpage>399</lpage>. doi: <pub-id pub-id-type="doi">10.1037/h0046234</pub-id>, <pub-id pub-id-type="pmid">14497895</pub-id></mixed-citation></ref>
<ref id="ref55"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Schellenberg</surname><given-names>E. G.</given-names></name> <name><surname>Corrigall</surname><given-names>K. A.</given-names></name> <name><surname>Ladinig</surname><given-names>O.</given-names></name> <name><surname>Huron</surname><given-names>D.</given-names></name></person-group> (<year>2012</year>). <article-title>Changing the tune: listeners like music that expresses a contrasting emotion</article-title>. <source>Front. Psychol.</source> <volume>3</volume>:<fpage>574</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2012.00574</pub-id>, <pub-id pub-id-type="pmid">23269918</pub-id></mixed-citation></ref>
<ref id="ref56"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Schmitt</surname><given-names>L. M.</given-names></name> <name><surname>Erb</surname><given-names>J.</given-names></name> <name><surname>Tune</surname><given-names>S.</given-names></name> <name><surname>Rysop</surname><given-names>A. U.</given-names></name> <name><surname>Hartwigsen</surname><given-names>G.</given-names></name> <name><surname>Obleser</surname><given-names>J.</given-names></name></person-group> (<year>2021</year>). <article-title>Predicting speech from a cortical hierarchy of event-based time scales</article-title>. <source>Sci. Adv.</source> <volume>7</volume>:<fpage>eabi6070</fpage>. doi: <pub-id pub-id-type="doi">10.1126/sciadv.abi6070</pub-id>, <pub-id pub-id-type="pmid">34860554</pub-id></mixed-citation></ref>
<ref id="ref57"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sena Moore</surname><given-names>K.</given-names></name> <name><surname>Hanson-Abromeit</surname><given-names>D.</given-names></name></person-group> (<year>2015</year>). <article-title>Theory-guided therapeutic function of music to facilitate emotion regulation development in preschool-aged children</article-title>. <source>Front. Hum. Neurosci.</source> <volume>9</volume>:<fpage>572</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnhum.2015.00572</pub-id>, <pub-id pub-id-type="pmid">26528171</pub-id></mixed-citation></ref>
<ref id="ref58"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Seth</surname><given-names>A. K.</given-names></name></person-group> (<year>2013</year>). <article-title>Interoceptive inference, emotion, and the embodied self</article-title>. <source>Trends Cogn. Sci.</source> <volume>17</volume>, <fpage>565</fpage>&#x2013;<lpage>573</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.tics.2013.09.007</pub-id>, <pub-id pub-id-type="pmid">24126130</pub-id></mixed-citation></ref>
<ref id="ref59"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Seth</surname><given-names>A. K.</given-names></name> <name><surname>Friston</surname><given-names>K. J.</given-names></name></person-group> (<year>2016</year>). <article-title>Active interoceptive inference and the emotional brain</article-title>. <source>Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci.</source> <volume>371</volume>:<fpage>20160007</fpage>. doi: <pub-id pub-id-type="doi">10.1098/rstb.2016.0007</pub-id>, <pub-id pub-id-type="pmid">28080966</pub-id></mixed-citation></ref>
<ref id="ref60"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Spence</surname><given-names>C.</given-names></name></person-group> (<year>2011</year>). <article-title>Crossmodal correspondences: a tutorial review</article-title>. <source>Atten. Percept. Psychophys.</source> <volume>73</volume>, <fpage>971</fpage>&#x2013;<lpage>995</lpage>. doi: <pub-id pub-id-type="doi">10.3758/s13414-010-0073-7</pub-id>, <pub-id pub-id-type="pmid">21264748</pub-id></mixed-citation></ref>
<ref id="ref61"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Spitzer</surname><given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Mapping the human heart: a holistic analysis of fear in Schubert</article-title>. <source>Music. Anal.</source> <volume>29</volume>, <fpage>149</fpage>&#x2013;<lpage>213</lpage>. doi: <pub-id pub-id-type="doi">10.1111/j.1468-2249.2011.00329.x</pub-id></mixed-citation></ref>
<ref id="ref62"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Strigo</surname><given-names>I. A.</given-names></name> <name><surname>Craig</surname><given-names>A. D.</given-names></name></person-group> (<year>2016</year>). <article-title>Interoception, homeostatic emotions and sympathovagal balance</article-title>. <source>Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci.</source> <volume>371</volume>:<fpage>20160010</fpage>. doi: <pub-id pub-id-type="doi">10.1098/rstb.2016.0010</pub-id>, <pub-id pub-id-type="pmid">28080968</pub-id></mixed-citation></ref>
<ref id="ref63"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tanaka</surname><given-names>Y.</given-names></name> <name><surname>Terasawa</surname><given-names>Y.</given-names></name> <name><surname>Umeda</surname><given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>Effects of interoceptive accuracy in autonomic responses to external stimuli based on cardiac rhythm</article-title>. <source>PLoS One</source> <volume>16</volume>:<fpage>e0256914</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pone.0256914</pub-id>, <pub-id pub-id-type="pmid">34464424</pub-id></mixed-citation></ref>
<ref id="ref64"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Thiarawat</surname><given-names>P.</given-names></name> <name><surname>Wangtheraprasert</surname><given-names>A.</given-names></name> <name><surname>Jitprapaikulsan</surname><given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>Vagoglossopharyngeal neuralgia occurred concomitantly with ipsilateral hemifacial spasm and versive seizure-like movement: a first case report</article-title>. <source>J. Med. Assoc. Thail.</source> <volume>99</volume>, <fpage>106</fpage>&#x2013;<lpage>110</lpage>, <pub-id pub-id-type="pmid">27455832</pub-id></mixed-citation></ref>
<ref id="ref65"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tsai</surname><given-names>C.-G.</given-names></name></person-group> (<year>2024</year>). <article-title>Predictive processing within music form: analysis of uncertainty and surprise in different sections of sonata form</article-title>. <source>Music. Sci.</source> <volume>7</volume>:<fpage>20592043241267076</fpage>. doi: <pub-id pub-id-type="doi">10.1177/20592043241267076</pub-id></mixed-citation></ref>
<ref id="ref66"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Tsai</surname><given-names>C.-G.</given-names></name></person-group> (<year>in press</year>). <source>The musical-poetic mind: a journey through the cognitive science of popular songs</source>. <publisher-loc>Singapore</publisher-loc>: <publisher-name>Jenny Stanford Publishing</publisher-name>.</mixed-citation></ref>
<ref id="ref67"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Turel</surname><given-names>O.</given-names></name> <name><surname>He</surname><given-names>Q.</given-names></name> <name><surname>Brevers</surname><given-names>D.</given-names></name> <name><surname>Bechara</surname><given-names>A.</given-names></name></person-group> (<year>2018</year>). <article-title>Delay discounting mediates the association between posterior insular cortex volume and social media addiction symptoms</article-title>. <source>Cogn. Affect. Behav. Neurosci.</source> <volume>18</volume>, <fpage>694</fpage>&#x2013;<lpage>704</lpage>. doi: <pub-id pub-id-type="doi">10.3758/s13415-018-0597-1</pub-id>, <pub-id pub-id-type="pmid">29696595</pub-id></mixed-citation></ref>
<ref id="ref68"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>van Dyck</surname><given-names>E.</given-names></name> <name><surname>Six</surname><given-names>J.</given-names></name> <name><surname>Soyer</surname><given-names>E.</given-names></name> <name><surname>Denys</surname><given-names>M.</given-names></name> <name><surname>Bardijn</surname><given-names>I.</given-names></name> <name><surname>Leman</surname><given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Adopting a music-to-heart rate alignment strategy to measure the impact of music and its tempo on human heart rate</article-title>. <source>Musicae Sci.</source> <volume>21</volume>, <fpage>390</fpage>&#x2013;<lpage>404</lpage>. doi: <pub-id pub-id-type="doi">10.1177/1029864917700706</pub-id></mixed-citation></ref>
<ref id="ref69"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Vicentin</surname><given-names>S.</given-names></name> <name><surname>Guglielmi</surname><given-names>S.</given-names></name> <name><surname>Stramucci</surname><given-names>G.</given-names></name> <name><surname>Bisiacchi</surname><given-names>P.</given-names></name> <name><surname>Cainelli</surname><given-names>E.</given-names></name></person-group> (<year>2024</year>). <article-title>Listen to the beat: behavioral and neurophysiological correlates of slow and fast heartbeat sounds</article-title>. <source>Int. J. Psychophysiol.</source> <volume>206</volume>:<fpage>112447</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2024.112447</pub-id>, <pub-id pub-id-type="pmid">39395546</pub-id></mixed-citation></ref>
<ref id="ref70"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Vuoskoski</surname><given-names>J. K.</given-names></name> <name><surname>Eerola</surname><given-names>T.</given-names></name></person-group> (<year>2013</year>). <article-title>Extramusical information contributes to emotions induced by music</article-title>. <source>Psychol. Music</source> <volume>43</volume>, <fpage>262</fpage>&#x2013;<lpage>274</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0305735613502373</pub-id></mixed-citation></ref>
<ref id="ref71"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Vuust</surname><given-names>P.</given-names></name> <name><surname>Dietz</surname><given-names>M. J.</given-names></name> <name><surname>Witek</surname><given-names>M.</given-names></name> <name><surname>Kringelbach</surname><given-names>M. L.</given-names></name></person-group> (<year>2018</year>). <article-title>Now you hear it: a predictive coding model for understanding rhythmic incongruity</article-title>. <source>Ann. N. Y. Acad. Sci.</source> <volume>1423</volume>, <fpage>19</fpage>&#x2013;<lpage>29</lpage>. doi: <pub-id pub-id-type="doi">10.1111/nyas.13622</pub-id>, <pub-id pub-id-type="pmid">29683495</pub-id></mixed-citation></ref>
<ref id="ref72"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wallace</surname><given-names>V. C.</given-names></name> <name><surname>Ellis</surname><given-names>C. M.</given-names></name> <name><surname>Burman</surname><given-names>R.</given-names></name> <name><surname>Knights</surname><given-names>C.</given-names></name> <name><surname>Shaw</surname><given-names>C. E.</given-names></name> <name><surname>Al-Chalabi</surname><given-names>A.</given-names></name></person-group> (<year>2014</year>). <article-title>The evaluation of pain in amyotrophic lateral sclerosis: a case controlled observational study</article-title>. <source>Amyotroph. Lateral Scler. Frontotemporal Degener.</source> <volume>15</volume>, <fpage>520</fpage>&#x2013;<lpage>527</lpage>. doi: <pub-id pub-id-type="doi">10.3109/21678421.2014.951944</pub-id>, <pub-id pub-id-type="pmid">25204842</pub-id></mixed-citation></ref>
<ref id="ref73"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zamorano</surname><given-names>A. M.</given-names></name> <name><surname>Cifre</surname><given-names>I.</given-names></name> <name><surname>Montoya</surname><given-names>P.</given-names></name> <name><surname>Riquelme</surname><given-names>I.</given-names></name> <name><surname>Kleber</surname><given-names>B.</given-names></name></person-group> (<year>2017</year>). <article-title>Insula-based networks in professional musicians: evidence for increased functional connectivity during resting state fMRI</article-title>. <source>Hum. Brain Mapp.</source> <volume>38</volume>, <fpage>4834</fpage>&#x2013;<lpage>4849</lpage>. doi: <pub-id pub-id-type="doi">10.1002/hbm.23682</pub-id>, <pub-id pub-id-type="pmid">28737256</pub-id></mixed-citation></ref>
<ref id="ref74"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zamorano</surname><given-names>A. M.</given-names></name> <name><surname>Zatorre</surname><given-names>R. J.</given-names></name> <name><surname>Vuust</surname><given-names>P.</given-names></name> <name><surname>Friberg</surname><given-names>A.</given-names></name> <name><surname>Birbaumer</surname><given-names>N.</given-names></name> <name><surname>Kleber</surname><given-names>B.</given-names></name></person-group> (<year>2023</year>). <article-title>Singing training predicts increased insula connectivity with speech and respiratory sensorimotor areas at rest</article-title>. <source>Brain Res.</source> <volume>1813</volume>:<fpage>148418</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.brainres.2023.148418</pub-id>, <pub-id pub-id-type="pmid">37217111</pub-id></mixed-citation></ref>
<ref id="ref75"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zentner</surname><given-names>M.</given-names></name> <name><surname>Grandjean</surname><given-names>D.</given-names></name> <name><surname>Scherer</surname><given-names>K. R.</given-names></name></person-group> (<year>2008</year>). <article-title>Emotions evoked by the sound of music: characterization, classification, and measurement</article-title>. <source>Emotion</source> <volume>8</volume>, <fpage>494</fpage>&#x2013;<lpage>521</lpage>. doi: <pub-id pub-id-type="doi">10.1037/1528-3542.8.4.494</pub-id></mixed-citation></ref>
</ref-list>
<fn-group>
<fn fn-type="custom" custom-type="edited-by" id="fn0001"><p>Edited by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/3098379/overview">Regina Gregori Grgic</ext-link>, Sigmund Freud Privat Universitat Wien GmbH, Italy</p></fn>
<fn fn-type="custom" custom-type="reviewed-by" id="fn0002"><p>Reviewed by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/506405/overview">Dezhao Li</ext-link>, Hong Kong University of Science and Technology, Hong Kong SAR, China</p>
<p><ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/3308153/overview">Margarita Lorenzo De Reizabal</ext-link>, Musikene - Centro Superior de M&#x00FA;sica del Pa&#x00ED;s Vasco, Spain</p></fn>
</fn-group>
</back>
</article>