<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2023.1222472</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The effect of Surround sound on embodiment and sense of presence in cinematic experience: a behavioral and HD-EEG study</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Langiulli</surname> <given-names>Nunzio</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/830649/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Calbi</surname> <given-names>Marta</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/400868/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Sbravatti</surname> <given-names>Valerio</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2346074/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Umilt&#x000E0;</surname> <given-names>Maria Alessandra</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<xref ref-type="aff" rid="aff5"><sup>5</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/61814/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Gallese</surname> <given-names>Vittorio</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff5"><sup>5</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/49811/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Unit of Neuroscience, Department of Medicine and Surgery, University of Parma</institution>, <addr-line>Parma</addr-line>, <country>Italy</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Philosophy &#x0201C;Piero Martinetti&#x0201D;, State University of Milan</institution>, <addr-line>Milan</addr-line>, <country>Italy</country></aff>
<aff id="aff3"><sup>3</sup><institution>Department of History, Anthropology, Religions, Arts and Performing Arts, Sapienza University of Rome</institution>, <addr-line>Rome</addr-line>, <country>Italy</country></aff>
<aff id="aff4"><sup>4</sup><institution>Department of Food and Drug, University of Parma</institution>, <addr-line>Parma</addr-line>, <country>Italy</country></aff>
<aff id="aff5"><sup>5</sup><institution>Italian Academy for Advanced Studies in America at Columbia University</institution>, <addr-line>New York, NY</addr-line>, <country>United States</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Celia Andreu-S&#x000E1;nchez, Autonomous University of Barcelona, Spain</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Luz Maria Alonso-Valerdi, Monterrey Institute of Technology and Higher Education (ITESM), Mexico; Danna Pinto, Bar-Ilan University, Israel</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Nunzio Langiulli <email>nunzio.langiulli&#x00040;unipr.it</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>07</day>
<month>09</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>17</volume>
<elocation-id>1222472</elocation-id>
<history>
<date date-type="received">
<day>14</day>
<month>05</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>14</day>
<month>08</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2023 Langiulli, Calbi, Sbravatti, Umilt&#x000E0; and Gallese.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Langiulli, Calbi, Sbravatti, Umilt&#x000E0; and Gallese</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license> </permissions>
<abstract>
<p>Although many studies have investigated spectators&#x00027; cinematic experience, only a few have explored the neurophysiological correlates of the sense of presence evoked by the spatial characteristics of audio delivery devices. Nevertheless, both the professional and the consumer markets are nowadays saturated with spatial audio formats that enrich the audio-visual cinematic experience, reducing the gap between the real and the digitally mediated world. Greater immersive capability fosters both the sense of presence, i.e., the psychological sense of being in the virtual environment, and embodied simulation mechanisms. While it is well-known that these mechanisms can be activated in the real world, it is hypothesized that they may be elicited even in a virtual acoustic spatial environment and could be modulated by the acoustic spatialization cues reproduced by sound systems. Hence, the present study aims to investigate the neural basis of the sense of presence evoked by different forms of mediation by testing different acoustic space sound delivery methods (Presentation modes: Monophonic, Stereo, and Surround). To this aim, a behavioral investigation and a high-density electroencephalographic (HD-EEG) study were conducted. A large set of ecological and heterogeneous stimuli extracted from feature films was used. Furthermore, participants were selected following the generalized listener selection procedure. We found a significantly higher event-related desynchronization (ERD) in the Surround Presentation mode when compared to the Monophonic Presentation mode in both Alpha and Low-Beta centro-parietal clusters. We discuss this result as an index of embodied simulation mechanisms that could be considered a possible neurophysiological correlate of the emergence of the sense of presence.</p></abstract>
<kwd-group>
<kwd>HD-EEG</kwd>
<kwd>sense of presence</kwd>
<kwd>Surround sound</kwd>
<kwd>spatial audio</kwd>
<kwd>Alpha</kwd>
<kwd>Beta</kwd>
<kwd>ERD</kwd>
</kwd-group>
<counts>
<fig-count count="3"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="70"/>
<page-count count="12"/>
<word-count count="9643"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Perception Science</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>Cinema is a highly complex art form that combines visual and aural elements to create a cohesive and immersive experience. While the visual component of cinema has traditionally been the focus of both popular understanding and neuroscientific research (Heimann et al., <xref ref-type="bibr" rid="B27">2014</xref>, <xref ref-type="bibr" rid="B26">2019</xref>; Calbi et al., <xref ref-type="bibr" rid="B7">2019</xref>), the role of sound in the cinematic experience has been largely overlooked. This bias toward the visual aspect can be attributed to a cultural tendency (Sterne, <xref ref-type="bibr" rid="B61">2003</xref>) to prioritize sight over hearing as well as the fact that the human brain is wired to process visual information more efficiently than auditory information (Kitagawa and Ichihara, <xref ref-type="bibr" rid="B32">2002</xref>; Sbravatti, <xref ref-type="bibr" rid="B57">2019</xref>). Previous research has demonstrated that when participants are simultaneously presented with movies depicting facial emotions and emotional sounds (such as crying and laughing) that are incongruent with each other, the electromyography (EMG) signals recorded from their facial muscles are activated in accordance with the visual stimuli and not with the auditory stimuli (Sestito et al., <xref ref-type="bibr" rid="B58">2013</xref>).</p>
<p>On the other hand, even if empirical research on the relationship between moving images and sounds in cinema is limited, some authors have suggested that sound could enhance the immersive qualities of the two-dimensional cinematic experience (moving images) by creating a sense of three-dimensional reality (Elsaesser and Hagener, <xref ref-type="bibr" rid="B12">2015</xref>). This concept is also supported by the idea that Surround sound formats, such as 5.1-channel configurations, have the capability to envelop the viewer in a 360-degree auditory space as opposed to the traditional 180-degree visual space (DiDonato, <xref ref-type="bibr" rid="B10">2010</xref>). Some studies have investigated the relationship between Surround sound and the sense of presence (see below for a definition) in the cinematic experience. For example, V&#x000E4;stfj&#x000E4;ll found that 6-channel audio reproductions received significantly higher presence and emotional realism scores than stereo (2-channel) and mono (1-channel) reproductions (V&#x000E4;stfj&#x000E4;ll, <xref ref-type="bibr" rid="B64">2003</xref>). Kobayashi et al. (<xref ref-type="bibr" rid="B33">2015</xref>) examined the influence of spatialized sounds (reproduced by a 96-channel system) on the sense of presence in virtual environments by using both physiological and psychological measures. Results showed that presence ratings were higher in the spatialized sounds condition. Furthermore, physiological measures such as heart rate and skin conductance level indicated that the sympathetic nervous system was activated to a greater extent in the spatialized sounds condition, similar to the responses elicited by intrusions into peri-personal space in real-world scenarios (such as clapping near the participant) (Kobayashi et al., <xref ref-type="bibr" rid="B33">2015</xref>).</p>
<p>In a 1997 study, Slater and Wilbur critically examined for the first time the often confused concepts of immersion and presence, suggesting a way to disambiguate their meanings. The two authors defined immersion as an objective property of the technological playback system and presence as the subjective psychological experience of feeling situated in a mediated environment (Slater and Wilbur, <xref ref-type="bibr" rid="B59">1997</xref>). The spatial situational model framework suggests that the experience of presence in a mediated environment is achieved through a two-step process (Wirth et al., <xref ref-type="bibr" rid="B69">2007</xref>). The first step is the construction of a spatialized mental model of the mediated environment, in which participants can perceive the environment as a space and locate themselves within it. Certain features of the mediated environment are particularly important for the formation of a spatialized mental model, among them Surround sound, stereoscopy, and field of view (Wirth et al., <xref ref-type="bibr" rid="B68">2003</xref>). The second step is the embodiment of the mediated environment. Gallese proposes that &#x0201C;film experience and film immersion do not depend just on concepts and propositions, but rely on sensory-motor schemas, which get the viewer literally in touch with the screen, shaping a multimodal form of simulation, which exploits all the potentialities of our brain&#x02013;body system&#x0201D; (Gallese, <xref ref-type="bibr" rid="B19">2019</xref>), referring to embodied simulation, a cognitive process described as the ability to simulate the actions, emotions, and sensations of others by activating the same neural circuits that are used to perceive one&#x00027;s own experiences. 
This mechanism allows individuals to recognize the meaning of others&#x00027; behaviors and experiences by directly relating to them through the activation of sensory-motor representations in the bodily format (Gallese, <xref ref-type="bibr" rid="B18">2009</xref>). The neural substrate of the embodied simulation mechanism for actions corresponds to a particular functional class of neurons called &#x0201C;mirror neurons,&#x0201D; first discovered through intracortical recordings in area F5 of the macaque premotor cortex; these neurons respond both during action execution and action observation (DiPellegrino et al., <xref ref-type="bibr" rid="B11">1992</xref>). Mirror neurons allow for the internal representation of observed actions, which in turn facilitates understanding and imitation. According to Keysers et al., mirror neurons encode actions in an abstract manner, independent of the source of information (auditory or visual). This abstraction allows for multisensory integration, which is essential for generating meaningful representations and recognizing relevant actions within the environment (Keysers et al., <xref ref-type="bibr" rid="B31">2003</xref>). In human beings, the mirror neuron mechanism is commonly associated with the mu rhythm, typically recorded over sensorimotor centro-parietal cortical areas (Muthukumaraswamy and Johnson, <xref ref-type="bibr" rid="B41">2004a</xref>,<xref ref-type="bibr" rid="B42">b</xref>; Muthukumaraswamy et al., <xref ref-type="bibr" rid="B43">2004</xref>). The mu rhythm is an EEG marker of sensorimotor cortical activity spanning the alpha band, generally ranging from 8 to 13 Hz, and the beta band, typically ranging between 14 and 32 Hz (Hari, <xref ref-type="bibr" rid="B24">2006</xref>). 
When Gastaut and Bert (<xref ref-type="bibr" rid="B23">1954</xref>) initially observed the mu rhythm using EEG, they detected that this rhythm became less active, showing an event-related desynchronization (ERD), when participants watched video clips of movements without exhibiting any visible motor movements themselves (Gastaut and Bert, <xref ref-type="bibr" rid="B23">1954</xref>). Many subsequent studies observed a mu rhythm ERD during voluntary movements, motor imagery, and action observation (Pfurtscheller et al., <xref ref-type="bibr" rid="B52">1994</xref>; Toro et al., <xref ref-type="bibr" rid="B62">1994</xref>; Pfurtscheller and Neuper, <xref ref-type="bibr" rid="B51">1997</xref>; Neuper et al., <xref ref-type="bibr" rid="B44">2006</xref>; Perry et al., <xref ref-type="bibr" rid="B49">2010</xref>), and it has been proposed that this mu rhythm desynchronization reflects activity in the mirror neuron system (e.g., Caetano et al., <xref ref-type="bibr" rid="B6">2007</xref>; Perry and Bentin, <xref ref-type="bibr" rid="B48">2009</xref>; Press et al., <xref ref-type="bibr" rid="B54">2011</xref>).</p>
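Operationally, ERD is usually quantified as the percentage change in spectral band power during an event window relative to a pre-stimulus baseline, with negative values indicating desynchronization. A minimal illustrative sketch in Python, not the authors' analysis pipeline: the 8&#x02013;13 Hz band edges and the simple periodogram estimator are assumptions made only for illustration.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean periodogram power of `signal` within the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

def erd_percent(baseline, event, fs, band=(8.0, 13.0)):
    """Classical ERD%: band-power change of the event window relative to the
    pre-stimulus baseline. Negative values indicate desynchronization."""
    r = band_power(baseline, fs, *band)  # reference (baseline) power
    a = band_power(event, fs, *band)     # activity (event) power
    return (a - r) / r * 100.0
```

For instance, an event window whose alpha-band amplitude drops to half of the baseline amplitude yields an ERD of about -75%, since power scales with the square of amplitude.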
<p>The only study that investigated the effect of acoustic spatialization on the sense of presence using electroencephalography (EEG) was by Tsuchida et al. (<xref ref-type="bibr" rid="B63">2015</xref>). They used a surround sound reproduction system called BoSC (62 speakers), designed to simulate the presence of other individuals or objects by providing a highly realistic sound field, to deliver an acoustic stimulus under two experimental conditions: a spatialized condition and a monophonic condition (1-channel). EEG results showed that mu rhythm suppression occurred for action-related sounds but not for non-action-related sounds. Furthermore, this suppression was significantly greater in the Surround (62-channel) condition, which generates a more realistic sound field, than in the one-channel speaker condition. Additionally, the motor cortical activation for action-related sounds was influenced by the sense of presence perceived by the study participants, as they perceived significantly higher sound realism in the Surround condition (Tsuchida et al., <xref ref-type="bibr" rid="B63">2015</xref>). It should be noted that this study had small participant and stimulus sample sizes, as only six action-related and non-action-related sounds were recorded and reproduced by an unconventional custom spatialized sound field system; hence, its results should be considered in light of the limitations of the study design. Further research with larger sample sizes and more varied stimuli is needed to fully understand the effect of acoustic spatialization on the sense of presence. Furthermore, the use of a standard surround sound reproduction system setup (such as a 5.1-channel configuration) would ensure greater consistency and replicability compared to an unconventional setup.</p>
<p>Hence, this study aimed to investigate the time course and neural correlates of the sense of presence as evoked by different audio Presentation modes during cinematic immersion. We selected a diverse set of naturalistic stimuli, consisting of validated cinematic excerpts, which were presented to participants in different audio Presentation modes (Monophonic, Stereo, and Surround), while their neural and behavioral responses were measured. We first designed a behavioral experiment (Experiment 1) and subsequently a high-density electroencephalographic (HD-EEG) experiment (Experiment 2). Initially, in the context of the behavioral experiment, the sense of presence was rated by participants through explicit questions formulated to reflect its different aspects. The behavioral experiment was specifically designed to offer an initial investigation of the sense of presence, with the aim of using its results to guide the design of the subsequent EEG experiment. We hypothesized that participants exposed to the Surround Presentation mode would report significantly higher subjective ratings compared to the Monophonic and Stereophonic Presentation modes. Afterward, in the EEG experiment, we investigated the neural correlates of the sense of presence elicited by the different acoustic Presentation modes. We hypothesized that the greater spatialization of sound in the Surround Presentation mode, which more closely resembles a real-life hearing environment, would lead to a greater sense of embodiment, as reflected by a higher ERD in the mu rhythm frequency band, compared to the Monophonic and Stereophonic Presentation modes. This embodied simulation mechanism would be interpreted as a potential neurophysiological correlate of the emergence of the sense of presence.</p>
</sec>
<sec id="s2">
<title>Experiment 1</title>
<sec>
<title>Materials and methods</title>
<sec>
<title>Participants</title>
<p>Thirty-two participants (<italic>N</italic> = 32; 14 men and 18 women; mean age <italic>M</italic> = 28.7 years, SD = 6.3, range 22&#x02013;42 years) were selected using an adaptation of the generalized listener selection (GLS) procedure (Mattila and Zacharov, <xref ref-type="bibr" rid="B38">2001</xref>; Bech and Zacharov, <xref ref-type="bibr" rid="B3">2006</xref>). The GLS procedure included six questionnaires, an audiometric test, and two screening tasks on loudness discrimination and localization of the sound source. For more information about the GLS procedure, questionnaires, and descriptive statistics, see <xref ref-type="supplementary-material" rid="SM1">Supplementary material</xref>. Participants had a high education level (<italic>M</italic> = 15.5 years, SD = 2.3 years), had no prior history of neurological or psychiatric disorders, were right-handed as determined by the Italian version of the Edinburgh Handedness Inventory (Oldfield, <xref ref-type="bibr" rid="B45">1971</xref>), had discriminative abilities for both loudness and sound source localization, had normal hearing acuity, and were &#x0201C;un-trained/naive subjects&#x0201D; as described in ITU-T Recommendation P.800 (ITU-R, <xref ref-type="bibr" rid="B29">1996</xref>). All participants provided written informed consent to participate in the studies, which were approved by the local ethical committee &#x0201C;Comitato Etico Area Vasta Emilia Nord&#x0201D; and were conducted in accordance with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards (World Medical Association, <xref ref-type="bibr" rid="B70">2013</xref>).</p>
</sec>
<sec>
<title>Acoustic apparatus</title>
<p>A silent audiometric cabin (IAC-Acoustics) 2 m high, 2.5 m wide, and 2.1 m deep was set up with a 5.1-channel surround sound reproduction system consisting of five APART MASK4C speakers (impedance 8 Ohms) and one APART SUBA165 sub-woofer (impedance 4 Ohms), all driven by a DENON AVR-X1600H amplifier. The participant was positioned at the center of the silent audiometric cabin, while the six speaker channels (&#x0201C;L&#x0201D; = left, &#x0201C;R&#x0201D; = right, &#x0201C;C&#x0201D; = center, &#x0201C;Ls&#x0201D; = left Surround, &#x0201C;Rs&#x0201D; = right Surround, &#x0201C;LFE&#x0201D; = low-frequency effects or sub-woofer) were positioned and oriented following the ITU-R BS.1116-1 recommendation so as to direct the sound to a central point that identified the reference listening position (ITU-R, <xref ref-type="bibr" rid="B30">1997</xref>). Audio reproduction was room-corrected using the Audyssey software (Paul, <xref ref-type="bibr" rid="B47">2009</xref>). Sound pressure levels (SPL) were recorded with a sound level meter (Gain Express, applied standard IEC651 type 2, type ANSI 2 SI 0.4) placed at the listening position, and the reproduction level was set below the hazardous hearing threshold (85 dB, A-weighted, for eight consecutive hours) defined and standardized by the National Institute for Occupational Safety and Health (NIOSH) in the Occupational Noise Exposure recommendation (Murphy and Franks, <xref ref-type="bibr" rid="B40">2002</xref>).</p>
</sec>
<sec>
<title>Stimuli</title>
<p>Twenty-seven cinematic excerpts (10 s long) without music and dialogues were chosen through an online validation experiment (see <xref ref-type="supplementary-material" rid="SM1">Supplementary material</xref>). We selected stimuli with high dynamism, high emotional intensity, and negative emotional valence because these characteristics can elicit stronger arousing responses in participants. Previous studies have demonstrated that negative audio-visual stimuli from feature films can increase arousal levels (Fern&#x000E1;ndez-Aguilar et al., <xref ref-type="bibr" rid="B13">2019</xref>). The 27 stimuli were used in three Presentation modes (Surround, Stereo, and Monophonic), for a total of 81 experimental stimuli, each repeated twice. Stimuli were reproduced in the silent audiometric cabin by all six channels in the Surround Presentation mode, by the &#x0201C;L&#x0201D; and &#x0201C;R&#x0201D; channels in the Stereo Presentation mode, and only by the &#x0201C;C&#x0201D; channel in the Monophonic Presentation mode. For more information about the stimuli selection procedure, see <xref ref-type="supplementary-material" rid="SM1">Supplementary material</xref>.</p>
</sec>
<sec>
<title>Procedure</title>
<p>Participants listened to 27 cinematic excerpts (10 s long) reproduced in three Presentation modes, each played twice in randomized order, for a total of 162 trials. The experiment was divided into three blocks of 54 trials each, with a break between blocks, for a total experiment duration of &#x0007E;45 min. Each trial consisted of a black fixation cross on a gray background (1.5 s), followed by the auditory stimulus presented for 10 s on a black screen. After each stimulus, participants had 5 s to respond to two questions, randomly selected from a pool of four, on a Visual Analog Scale (VAS) from 0 to 100. The questions were formulated by the authors to measure four potential aspects of the cinematic immersion and sense of presence induced by the sound excerpt: Enjoyment (EN)&#x02014;&#x0201C;How much did you like the scene?&#x0201D;; Emotional Involvement (EI)&#x02014;&#x0201C;How much did you feel emotionally involved?&#x0201D;; Physical Immersion (PI)&#x02014;&#x0201C;How much did you feel physically immersed?&#x0201D;; and Realism (RE)&#x02014;&#x0201C;How realistic did you judge the scene?&#x0201D; (for more information, see <xref ref-type="supplementary-material" rid="SM1">Supplementary material</xref>). Before the experiment, we trained participants with six trials, two per Presentation mode, using stimuli previously excluded through the validation process. A gray background was used as an inter-trial interval (ITI) with a duration of 3.5 s. At the end of the experimental session, participants were asked to fill out the Film Immersive Experience Questionnaire (F-IEQ) (Rigby et al., <xref ref-type="bibr" rid="B56">2019</xref>). For descriptive statistics, see <xref ref-type="supplementary-material" rid="SM1">Supplementary material</xref>. Stimuli were presented with the MATLAB extension Psychtoolbox-3 (Brainard, <xref ref-type="bibr" rid="B5">1997</xref>).</p>
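The counterbalancing described above (27 excerpts &#x000D7; 3 Presentation modes &#x000D7; 2 repetitions = 162 randomized trials in three blocks of 54, with two of four VAS questions per trial) can be sketched as a plain trial-list builder. This is a hypothetical Python reconstruction for illustration only, not the authors' Psychtoolbox-3 code; all function and variable names are invented.

```python
import random

MODES = ["Monophonic", "Stereo", "Surround"]
QUESTIONS = ["EN", "EI", "PI", "RE"]  # Enjoyment, Emotional Involvement,
                                      # Physical Immersion, Realism

def build_trials(n_excerpts=27, repetitions=2, block_size=54, seed=0):
    """162 trials: every excerpt x mode pair repeated twice, shuffled,
    split into three blocks; each trial draws 2 of the 4 VAS questions."""
    rng = random.Random(seed)
    trials = [(e, m) for e in range(n_excerpts) for m in MODES] * repetitions
    rng.shuffle(trials)
    blocks = [trials[i:i + block_size]
              for i in range(0, len(trials), block_size)]
    return [[{"excerpt": e, "mode": m,
              "questions": rng.sample(QUESTIONS, 2)}
             for e, m in block] for block in blocks]
```

By construction, each Presentation mode appears exactly 54 times across the session, so the modes are perfectly balanced even though trial order is random.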
</sec>
<sec>
<title>Analysis</title>
<p>To investigate whether Enjoyment (EN), Emotional Involvement (EI), Physical Immersion (PI), and Realism (RE) were modulated by the experimental conditions, a linear mixed-effects analysis was performed. Following a hierarchical approach, we initially created a simple model with one parameter and progressively added others to evaluate whether their inclusion improved model fit. Likelihood ratio tests, the Akaike Information Criterion (AIC), and the Bayesian Information Criterion (BIC) were used to rigorously choose which parameters improved model fit. We entered participants&#x00027; scores as the dependent variable, and Question (EN, EI, PI, and RE) and Presentation mode (three levels: Surround, Stereo, and Monophonic) as fixed independent variables. Participant was included as a random intercept and Presentation mode as a random slope. This approach accounted for the within-subject and between-subject variability in the data. Outliers were identified and excluded from the analysis based on the standardized model residuals and a threshold value of Cook&#x00027;s distance (threshold = 1). <italic>Post-hoc</italic> tests were conducted using Tukey&#x00027;s correction for multiple comparisons and the Kenward&#x02013;Roger degrees-of-freedom approximation method. Statistical analyses were performed using R software (R Core Team, <xref ref-type="bibr" rid="B55">2022</xref>) with the lme4 (Bates et al., <xref ref-type="bibr" rid="B2">2015</xref>), effects (Fox and Weisberg, <xref ref-type="bibr" rid="B15">2019</xref>), and emmeans (Lenth, <xref ref-type="bibr" rid="B34">2022</xref>) packages. For data plotting, we used the ggplot2 (Wickham, <xref ref-type="bibr" rid="B65">2016</xref>) package.</p>
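The analysis itself was run in R with lme4; purely as an illustration of the hierarchical model-comparison logic (fit nested models by maximum likelihood, then compare them with a likelihood-ratio statistic), a simplified Python analogue using statsmodels could look like the sketch below. The data-frame column names are assumptions, and the sketch restricts the random-effects structure to a random intercept, whereas the full model also included a random slope for Presentation mode.

```python
import statsmodels.formula.api as smf

def compare_models(df):
    """Hierarchical comparison analogous to the lme4 workflow: fit a simpler
    and a richer nested model by ML (reml=False, required for LR comparison)
    and return the likelihood-ratio statistic.
    `df` needs columns: score, question, mode, participant (assumed names)."""
    m0 = smf.mixedlm("score ~ mode", df,
                     groups=df["participant"]).fit(reml=False)
    m1 = smf.mixedlm("score ~ mode + question", df,
                     groups=df["participant"]).fit(reml=False)
    lr = 2 * (m1.llf - m0.llf)  # likelihood-ratio statistic, df = 3 here
    return m0, m1, lr
```

A positive likelihood-ratio statistic large relative to the added degrees of freedom indicates that including Question improves fit, which is the criterion (together with AIC/BIC) used above.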
</sec>
</sec>
<sec>
<title>Results</title>
<p>The model explained 85% of the variance in the dependent variable taking into account the random effects (<inline-formula><mml:math id="M1"><mml:msubsup><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mtext>m</mml:mtext></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> = 0.22; <inline-formula><mml:math id="M2"><mml:msubsup><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> = 0.85). The model revealed a significant main effect of Presentation modes [<inline-formula><mml:math id="M3"><mml:msubsup><mml:mrow><mml:mi>&#x003C7;</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> = 65.16, <italic>p</italic> &#x0003C; 0.001], showing that participants attributed significantly higher absolute scores when stimuli were presented in the Surround Presentation mode than when they were presented in the Stereo Presentation mode [<italic>t</italic><sub>(31)</sub> = 7.76, <italic>p</italic> &#x0003C; 0.001] or in the Monophonic Presentation mode [<italic>t</italic><sub>(30.9)</sub> = 7.76, <italic>p</italic> &#x0003C; 0.001; Surround: <italic>M</italic> = 59.44, CIs = 54.93, 63.95; Stereo: <italic>M</italic> = 51.4, CIs = 47.35, 55.44; Monophonic: <italic>M</italic> = 41.15, CIs = 36.01, 46.29]. At the same time, participants attributed significantly higher scores when stimuli were presented in the Stereo Presentation mode than when they were presented in the Monophonic Presentation mode [<italic>t</italic><sub>(31)</sub> = 6.26, <italic>p</italic> &#x0003C; 0.001].</p>
<p>The model also revealed a significant main effect of Question [<inline-formula><mml:math id="M4"><mml:msubsup><mml:mrow><mml:mi>&#x003C7;</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> = 71.57, <italic>p</italic> &#x0003C; 0.001], showing that participants attributed higher scores on Realism than on Enjoyment [<italic>t</italic><sub>(31)</sub> = 3.63, <italic>p</italic> &#x0003C; 0.01], Emotional Involvement [<italic>t</italic><sub>(31)</sub> = 5.7, <italic>p</italic> &#x0003C; 0.001], and Physical Immersion [<italic>t</italic><sub>(31)</sub> = 3.23, <italic>p</italic> &#x0003C; 0.01; EI: <italic>M</italic> = 46.33, CIs = 41.56, 51.11; EN: <italic>M</italic> = 47.53, CIs = 41.34, 53.71; PI: <italic>M</italic> = 52.21, CIs = 48.29, 56.14; RE: <italic>M</italic> = 56.57, CIs = 53.31, 59.84]. In addition, participants attributed higher scores to Physical Immersion than to Emotional Involvement [<italic>t</italic><sub>(30.9)</sub> = 5.55, <italic>p</italic> &#x0003C; 0.001].</p>
<p>Additionally, the model revealed significant Presentation modes<sup>&#x0002A;</sup>Question interaction [<inline-formula><mml:math id="M5"><mml:msubsup><mml:mrow><mml:mi>&#x003C7;</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>6</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> = 269.36, <italic>p</italic> &#x0003C; 0.001]. Interaction <italic>post-hoc</italic> comparisons showed that in Monophonic Presentation mode (<xref ref-type="fig" rid="F1">Figure 1A</xref>), participants attributed significantly higher scores on Realism than on Emotional Involvement [<italic>t</italic><sub>(34.6)</sub> = 4.69, <italic>p</italic> &#x0003C; 0.001] and Physical Immersion [<italic>t</italic><sub>(38.1)</sub> = 5.59, <italic>p</italic> &#x0003C; 0.001; Monophonic EI: <italic>M</italic> = 38.25, CIs = 32.57, 43.94; Monophonic EN: <italic>M</italic> = 40.47, CIs = 33.58, 47.36; Monophonic PI: <italic>M</italic> = 38.97, CIs = 33.94, 44.00; Monophonic RE: <italic>M</italic> = 46.92, CIs = 42.36, 51.48]. In the Stereo Presentation mode (<xref ref-type="fig" rid="F1">Figure 1A</xref>), participants attributed significantly higher scores on Realism than on Emotional Involvement [<italic>t</italic><sub>(33.8)</sub> = 6.13, <italic>p</italic> &#x0003C; 0.001] and Enjoyment [<italic>t</italic><sub>(32.6)</sub> = 4.28, <italic>p</italic> &#x0003C; 0.001; Stereo EI: <italic>M</italic> = 46.66, CIs = 41.93, 51.39; Stereo EN: <italic>M</italic> = 47.12, CIs = 40.9, 53.28; Stereo PI: <italic>M</italic> = 53.89, CIs = 50.02, 57.76; Stereo RE: <italic>M</italic> = 57.91, CIs = 54.71, 61.11]. In addition, participants attributed significantly higher scores to Physical Immersion than to Emotional Involvement [<italic>t</italic><sub>(39.4)</sub> = 6.43, <italic>p</italic> &#x0003C; 0.001]. 
In the Surround Presentation mode (<xref ref-type="fig" rid="F1">Figure 1A</xref>), participants attributed significantly higher scores on Realism than on Emotional Involvement [<italic>t</italic><sub>(34.4)</sub> = 5.86, <italic>p</italic> &#x0003C; 0.001] and Enjoyment [<italic>t</italic><sub>(32.7)</sub> = 3.93, <italic>p</italic> &#x0003C; 0.001; Surround EI: <italic>M</italic> = 54.09, CIs = 48.96, 59.22; Surround EN: <italic>M</italic> = 54.99, CIs = 48.53, 61.44; Surround PI: <italic>M</italic> = 63.78, CIs = 59.41, 68.15; Surround RE: <italic>M</italic> = 64.90, CIs = 61.10, 68.70]. Furthermore, participants attributed significantly higher scores to Physical Immersion than to Emotional Involvement [<italic>t</italic><sub>(41.4)</sub> = 8.5, <italic>p</italic> &#x0003C; 0.001] and Enjoyment [<italic>t</italic><sub>(33.3)</sub> = 4.02, <italic>p</italic> &#x0003C; 0.001]. Moreover, in all questions (<xref ref-type="fig" rid="F1">Figure 1B</xref>) participants always attributed significantly higher absolute scores when stimuli were presented in the Surround Presentation mode than when they were presented in the Stereo Presentation mode [EI: <italic>t</italic><sub>(42.1)</sub> = 6.64, <italic>p</italic> &#x0003C; 0.001; EN: <italic>t</italic><sub>(41.9)</sub> = 7.03, <italic>p</italic> &#x0003C; 0.001; PI: <italic>t</italic><sub>(42.7)</sub> = 8.81, <italic>p</italic> &#x0003C; 0.001; RE: <italic>t</italic><sub>(42.5)</sub> = 6.23, <italic>p</italic> &#x0003C; 0.001] or in the Monophonic Presentation mode [EI: <italic>t</italic><sub>(33.18)</sub> = 6.61, <italic>p</italic> &#x0003C; 0.001; EN: <italic>t</italic><sub>(32.9)</sub> = 6.07, <italic>p</italic> &#x0003C; 0.001; PI: <italic>t</italic><sub>(33.5)</sub> = 10.33, <italic>p</italic> &#x0003C; 0.001; RE: <italic>t</italic><sub>(33.3)</sub> = 7.5, <italic>p</italic> &#x0003C; 0.001].
Finally, independently of the question, participants attributed significantly higher scores when stimuli were presented in the Stereo Presentation mode than when they were presented in the Monophonic Presentation mode [EI: <italic>t</italic><sub>(35.2)</sub> = 4.97, <italic>p</italic> &#x0003C; 0.001; EN: <italic>t</italic><sub>(35)</sub> = 3.94, <italic>p</italic> &#x0003C; 0.001; PI: <italic>t</italic><sub>(35.8)</sub> = 8.8, <italic>p</italic> &#x0003C; 0.001; RE: <italic>t</italic><sub>(35.6)</sub> = 6.49, <italic>p</italic> &#x0003C; 0.001].</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>(A)</bold> Interaction effect <italic>post-hoc</italic> pairwise comparisons by Presentation modes. <bold>(B)</bold> Interaction effect <italic>post-hoc</italic> pairwise comparisons by Question. Error bars represent 95% confidence intervals of the mean (CI); asterisks (<sup>&#x0002A;</sup>) represent <italic>p</italic> &#x0003C; 0.05.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-17-1222472-g0001.tif"/>
</fig>
</sec>
<sec>
<title>Discussion</title>
<p>In this first behavioral experiment, we used a diverse set of naturalistic stimuli consisting of validated cinematic audio excerpts. This approach allowed for a wider range of stimuli and more generalizable results (Sonkusare et al., <xref ref-type="bibr" rid="B60">2019</xref>) compared to previous studies (Lipscomb and Kerins, <xref ref-type="bibr" rid="B36">2004</xref>). We investigated how different audio Presentation modes affect participants&#x00027; emotional and bodily involvement and audio perception. Results showed that participants consistently gave higher ratings when stimuli were presented in the Surround Presentation mode compared to the Monophonic or Stereo Presentation modes. Specifically, we found that the Surround Presentation mode was particularly effective in eliciting a sense of Realism, Emotional Involvement, and Physical Immersion among participants. These data are in line with the meta-analysis by Cummings and Bailenson, who reported that the spatial presence experience, evoked by the Surround Presentation mode, correlates positively with the level of immersion of the system (Cummings and Bailenson, <xref ref-type="bibr" rid="B8">2015</xref>). We also corroborate, with more robust results and more heterogeneous, ecological stimuli, previous findings that the sense of presence can be heightened by spatialized sound Presentation modes (Lessiter and Freeman, <xref ref-type="bibr" rid="B35">2001</xref>; V&#x000E4;stfj&#x000E4;ll, <xref ref-type="bibr" rid="B64">2003</xref>; Kobayashi et al., <xref ref-type="bibr" rid="B33">2015</xref>).</p>
</sec>
</sec>
<sec id="s3">
<title>Experiment 2</title>
<sec>
<title>Materials and methods</title>
<sec>
<title>Participants</title>
<p>Twenty-four participants (<italic>N</italic> = 24, 11 men and 13 women, with a mean age <italic>M</italic> of 24.3 years and standard deviation SD of &#x000B1;2.4, within a range of 21 to 30 years) were selected using an adaptation of the generalized listener selection (GLS) procedure (Mattila and Zacharov, <xref ref-type="bibr" rid="B38">2001</xref>; Bech and Zacharov, <xref ref-type="bibr" rid="B3">2006</xref>). For questionnaire descriptive statistics, see <xref ref-type="supplementary-material" rid="SM1">Supplementary material</xref>. Participants had a high education level (<italic>M</italic> = 15.2 years, SD = &#x000B1; 1.5 years), had no prior history of neurological or psychiatric disorders, were right-handed as determined by the Italian version of the Edinburgh Handedness Inventory (Oldfield, <xref ref-type="bibr" rid="B45">1971</xref>), had discriminative abilities of both loudness and sound source localization, had normal hearing acuity, and were &#x0201C;un-trained/naive subjects&#x0201D; as described in ITU-T Recommendation P.800 (ITU-R, <xref ref-type="bibr" rid="B29">1996</xref>). All participants provided written informed consent to participate in the studies, which were approved by the local ethical committee &#x0201C;Comitato Etico Area Vasta Emilia Nord&#x0201D; and were conducted in accordance with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards (World Medical Association, <xref ref-type="bibr" rid="B70">2013</xref>).</p>
</sec>
<sec>
<title>Stimuli</title>
<p>We used the same set of 27 cinematic excerpts without music and dialogues used in Experiment 1. A set of 27 control stimuli was also generated by phase-scrambling the original audio excerpts, making them unintelligible to the participant while retaining all the acoustic characteristics at the frequency level and the same duration (10 s). Based on the results of the first experiment, and to simplify the experimental paradigm, we chose the most and least effective of the three Presentation modes, Surround and Monophonic, for a total of 108 experimental stimuli.</p>
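The phase-scrambling construction can be sketched as follows: randomize the Fourier phases of an excerpt while keeping its amplitude spectrum, so the control shares the original's frequency content and 10 s duration but carries no intelligible content. This is an illustrative Python/numpy sketch, not the authors' actual code (function and variable names are ours):

```python
import numpy as np

def phase_scramble(signal, rng=None):
    """Return a control signal with the same amplitude spectrum as
    `signal` but randomized Fourier phases (unintelligible content)."""
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.rfft(signal)
    # Random phases for every bin; DC (and Nyquist, for even length)
    # must stay real for the inverse transform of a real signal.
    phases = rng.uniform(0, 2 * np.pi, spectrum.shape)
    phases[0] = 0.0
    if signal.size % 2 == 0:
        phases[-1] = 0.0
    scrambled = np.abs(spectrum) * np.exp(1j * phases)
    return np.fft.irfft(scrambled, n=signal.size)

# 10 s excerpt at 44.1 kHz (white-noise stand-in for a cinematic excerpt)
fs = 44100
x = np.random.default_rng(0).standard_normal(10 * fs)
y = phase_scramble(x, rng=np.random.default_rng(1))
assert y.size == x.size                                    # same duration
assert np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(y)))
```

The waveform itself changes completely, but the amplitude spectrum (and therefore the long-term frequency characteristics) is preserved exactly.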
</sec>
<sec>
<title>Procedure</title>
<p>Participants listened to 27 cinematic excerpts and 27 control excerpts reproduced in two Presentation modes, each played randomly twice for a total of 216 trials. The experiment was divided into four blocks of 54 trials each, with a break between blocks, for a total experiment duration of &#x0007E;60 min. Each trial consisted of a black fixation cross on a gray background (1.5 s), followed by the auditory stimulus presented for 10 s on a black screen. After each stimulus, participants had 5 s to respond to a question on a Visual Analog Scale (VAS) from 0 to 100: &#x0201C;How much did you feel physically immersed?&#x0201D; (Physical Immersion, PI). Unlike Experiment 1, we used only one question in order to reduce task complexity, choosing the question that best characterizes the spatialized sound experience, as shown in Experiment 1. Before the experiment, we trained participants with six trials, three for each Presentation mode, using stimuli previously excluded through the validation process. A gray background was used as an inter-trial interval (ITI) with a duration of 3.5 s. After the experimental task, participants were asked to indicate whether they recognized any action-related sound. If they stated that an action-related sound was present, they were asked to write down which action sound they recognized. This information was used to verify that participants were able, on average, to recognize one action-related sound in each stimulus. At the end of the experimental session, participants were asked to fill out the Film Immersive Experience Questionnaire (F-IEQ) (Rigby et al., <xref ref-type="bibr" rid="B56">2019</xref>). For questionnaire descriptives, see <xref ref-type="supplementary-material" rid="SM1">Supplementary material</xref>. Stimuli were presented with the MATLAB extension Psychtoolbox-3 (Brainard, <xref ref-type="bibr" rid="B5">1997</xref>).</p>
</sec>
<sec>
<title>EEG and EMG recording and pre-processing</title>
<p>The electromyography (EMG) signal was acquired by an AD Instruments PowerLab 35 (ADInstruments, U.K.), and LabChart 8 Pro software was used for recording. EMG activity was bipolarly recorded on the left Extensor Digitorum Communis and left Tibialis Anterior with 4 mm standard Ag/AgCl electrodes. Before the electrodes were attached to the muscle regions, the participants&#x00027; skin was cleaned with an alcohol solution and the electrodes were filled with gel electrode paste (Fridlund and Cacioppo, <xref ref-type="bibr" rid="B17">1986</xref>). EMG was sampled at 2 kHz and recorded with an online Mains Filter (adaptive 50 Hz filter). A band-pass filter (20&#x02013;500 Hz) was applied, and data were arithmetically rectified (Abs). We calculated the EMG root-mean-square (RMS) response in microvolts (&#x003BC;V) by subtracting the baseline activity (average activity during the fixation cross) from the activity during each stimulus, divided into 20 segments of 500 ms each. EMG was recorded to rule out that the desynchronization recorded during stimulus presentation was influenced by participants&#x00027; movements. Hence, outliers (segments with EMG activity &#x000B1;3 SDs from baseline RMS) were considered movement artifacts, leading to trial exclusion during EEG pre-processing.</p>
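The segment-wise RMS computation and exclusion criterion above can be sketched in Python/numpy. This is an illustrative reconstruction, not the authors' LabChart pipeline; in particular, the &#x000B1;3 SD rule is implemented under one plausible reading (a segment's RMS deviating from the baseline RMS by more than three baseline SDs):

```python
import numpy as np

FS = 2000                      # EMG sampling rate (Hz)
SEG = FS // 2                  # 500 ms segments -> 1,000 samples

def emg_rms_response(trial, baseline):
    """Baseline-corrected RMS per 500 ms segment, plus a movement flag.

    `trial`: rectified EMG (microvolts) during the 10 s stimulus,
    split into 20 segments; `baseline`: rectified EMG during the
    fixation cross. A segment whose RMS deviates from the baseline
    RMS by more than 3 baseline SDs is treated as a movement artifact
    (our reading of the criterion), leading to trial exclusion.
    """
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    base_rms, base_sd = rms(baseline), np.std(baseline)
    seg_rms = np.array([rms(s) for s in trial.reshape(-1, SEG)])
    is_artifact = np.abs(seg_rms - base_rms) > 3 * base_sd
    return seg_rms - base_rms, bool(is_artifact.any())

rng = np.random.default_rng(0)
baseline = np.abs(rng.standard_normal(int(1.5 * FS)))  # 1.5 s fixation
quiet = np.abs(rng.standard_normal(10 * FS))           # still trial
moved = quiet.copy()
moved[5 * FS:5 * FS + SEG] += 50.0                     # injected movement
assert emg_rms_response(quiet, baseline)[1] is False   # trial kept
assert emg_rms_response(moved, baseline)[1] is True    # trial excluded
```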
<p>EEG data were acquired by a Geodesic Sensor System which includes the Net Amps 300 amplifier and a 128-channel HydroCel Geodesic Sensor Net (HCGSN-128) and recorded at a sampling rate of 500 Hz with the vertex (Cz) as an online reference while sensor-skin impedances were maintained below 50 k&#x003A9; for each sensor using Net Station 5.4 EGI software (Electrical Geodesic Inc., Eugene, OR). We applied a high-pass filter (0.5 Hz, transition window of 0.25 Hz) and ZapLine line noise removal (50 Hz Notch) using MATLAB (MathWorks, <xref ref-type="bibr" rid="B37">2022</xref>) toolbox EEGLAB v2022.1 (Delorme and Makeig, <xref ref-type="bibr" rid="B9">2004</xref>). Bad channels were identified with <italic>Clean Rawdata</italic> EEGLAB plug-in (v2.0) using flatline criterion (max 5s), line noise criterion (cutoff SD = 4), and minimum channel correlation (cutoff <italic>r</italic> = 0.7) and were interpolated using the spherical interpolation method. We removed 24 channels that were located at the periphery or the frontal region of the sensor net as they were likely to show residual muscle (13 peripheral channels: Ch48, Ch49, Ch56, Ch63, Ch68, Ch73, Ch81, Ch88, Ch94, Ch99, Ch107, Ch113, and Ch119) and eye artifacts (11 frontal channels: Ch1, Ch8, Ch14, Ch17, Ch21, Ch25, Ch32, Ch125, Ch126, Ch127, and Ch128), reducing the number of channels from 128 to 104. Continuous EEG data were divided into 12 s epochs, which included 2 s of baseline and 10 s of activity during the presentation of the stimulus. Epochs with muscle activity, identified using EMG (see above), were removed. Independent component analysis (ICA) was applied, and an automated recognition algorithm (MARA) was used to identify ocular, cardiac, and muscular artifacts (Winkler et al., <xref ref-type="bibr" rid="B67">2011</xref>, <xref ref-type="bibr" rid="B66">2014</xref>). A mean number of 16.7 (SD = 1.8) independent components (ICs) were removed. 
Finally, EEG data were re-referenced to the common average (Bertrand et al., <xref ref-type="bibr" rid="B4">1985</xref>).</p>
</sec>
<sec>
<title>Analysis</title>
<p>In order to investigate the dynamic changes in spectral power over time and the temporal patterns of neural activity related to the perception of audio Presentation modes, we performed a time-frequency analysis using the Hanning taper method. The window length was fixed at 0.5 s, with a frequency resolution of 1 Hz, spanning from 3 to 32 Hz. This allowed for the examination of event-related spectral perturbation (ERSP) in the Alpha (8&#x02013;13 Hz) and Beta (14&#x02013;32 Hz) frequency bands. Data were baseline corrected by division, considering as baseline the 500 ms window before the stimulus onset. We averaged the Monophonic Control Presentation mode and the Surround Control Presentation mode to obtain a control condition independent of the perceptual differences between Presentation modes, hereinafter referred to as the Control Presentation mode.</p>
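The original time-frequency analysis was run in FieldTrip (MATLAB); the logic of a fixed 0.5 s Hanning-taper power estimate with division baseline correction can nevertheless be illustrated with a minimal Python/numpy sketch. Zero-padding the 0.5 s window to 1 s to obtain the stated 1 Hz resolution is our assumption, as are the window step and the toy signal:

```python
import numpy as np

FS = 500                  # EEG sampling rate (Hz)
WIN = FS // 2             # 0.5 s Hanning window
NFFT = FS                 # zero-pad to 1 s -> 1 Hz bins (our assumption)

def hanning_tf_power(x, step=WIN // 2):
    """Sliding Hanning-taper power: (n_windows, n_freqs), window centers (s)."""
    taper = np.hanning(WIN)
    starts = np.arange(0, x.size - WIN + 1, step)
    segs = np.array([x[s:s + WIN] * taper for s in starts])
    power = np.abs(np.fft.rfft(segs, n=NFFT, axis=1)) ** 2
    return power, (starts + WIN / 2) / FS

# 2 s baseline + 10 s stimulus with a 10 Hz (Alpha) component that
# attenuates at onset, mimicking event-related desynchronization (ERD).
t = np.arange(12 * FS) / FS
amp = np.where(t < 2.0, 1.0, 0.4)
x = amp * np.sin(2 * np.pi * 10 * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)
power, times = hanning_tf_power(x)
freqs = np.fft.rfftfreq(NFFT, 1 / FS)
alpha = power[:, (freqs >= 8) & (freqs <= 13)].mean(axis=1)
# Division baseline correction: 500 ms window before stimulus onset (2 s)
baseline = alpha[(times >= 1.5) & (times < 2.0)].mean()
relative = alpha / baseline
assert relative[times > 3].mean() < 0.5   # post-onset Alpha power drop (ERD)
```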
<p>In order to address the multiple comparisons problem (MCP) that arises from the multidimensional EEG data structure and to control the family-wise error rate (FWER), we used a cluster-based non-parametric test for within-subjects experiments implemented in FieldTrip (Oostenveld et al., <xref ref-type="bibr" rid="B46">2011</xref>). The cluster-based test statistic is calculated by comparing experimental conditions at the sample level, selecting samples with <italic>t</italic>-values above a certain threshold, clustering them based on temporal, spatial, and spectral adjacency, and taking the sum of <italic>t</italic>-values within each cluster. The significance probability is then calculated using the Monte Carlo permutation method with 500 random draws. A <italic>p</italic>-value is calculated by comparing the observed test statistic to the distribution of test statistics obtained through random partitions of the data. A cluster is considered significant if its <italic>p</italic>-value is less than the critical Alpha level of 0.05. This data-driven approach allows one to identify specific time windows and electrode clusters where there is a significant difference in neural activity between experimental conditions, without any spatial cluster or frequency band assumption, and to highlight regions of interest for further analysis. From electrodes in the identified clusters, we then extracted the log-ratio frequency power within the significant time window/frequency range, and a linear mixed effect analysis was performed. Following a hierarchical approach, we initially created a simple model using one parameter and progressively added others to evaluate whether their inclusion improved model fit. Likelihood ratio tests, the Akaike Information Criterion (AIC), and the Bayesian Information Criterion (BIC) were used to rigorously choose which parameters improved model fit.
We entered log-ratio frequency power as the dependent variable and Presentation modes (three levels: Surround, Monophonic, and Control) as independent fixed variables. The participants were included as a random intercept. This approach accounted for the within-subject and between-subject variability in the data. Outliers were identified and excluded from the analysis based on the standardized model residuals and a threshold value of Cook&#x00027;s distance (threshold = 1). <italic>Post-hoc</italic> tests were conducted using Tukey&#x00027;s correction for multiple comparisons and Kenward&#x02013;Roger degrees-of-freedom approximation method.</p>
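The cluster-mass permutation logic can be sketched in a reduced form, using one dimension (time) and a paired design; the actual analysis used FieldTrip's full spatio-temporo-spectral clustering. This Python/numpy sketch is illustrative (threshold, data shapes, and names are ours):

```python
import numpy as np

def paired_t(diff):
    """Per-sample paired t-values; diff has shape (n_subjects, n_times)."""
    n = diff.shape[0]
    return diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n))

def cluster_masses(tvals, thresh):
    """Cluster mass = sum of t-values over each contiguous run of
    same-sign samples whose |t| exceeds the threshold."""
    masses, cur, sign = [], 0.0, 0
    for t in tvals:
        s = 1 if t > thresh else (-1 if t < -thresh else 0)
        if s and s == sign:
            cur += t
        else:
            if sign:
                masses.append(cur)
            cur, sign = (t, s) if s else (0.0, 0)
    if sign:
        masses.append(cur)
    return masses

def cluster_perm_test(cond_a, cond_b, thresh=2.0, n_perm=500, seed=0):
    """Monte Carlo p-value for the largest cluster in a paired design:
    each permutation randomly flips every subject's condition labels."""
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b
    observed = max(np.abs(cluster_masses(paired_t(diff), thresh)),
                   default=0.0)
    exceed = 0
    for _ in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(diff.shape[0], 1))
        largest = max(np.abs(cluster_masses(paired_t(diff * flips), thresh)),
                      default=0.0)
        exceed += largest >= observed
    return (exceed + 1) / (n_perm + 1)

# 24 subjects x 100 time points; a real difference in samples 40-60
rng = np.random.default_rng(1)
a = rng.standard_normal((24, 100))
b = rng.standard_normal((24, 100))
a[:, 40:60] += 1.0
assert cluster_perm_test(a, b) < 0.05
```

Because the single largest cluster mass per permutation forms the null distribution, the resulting p-value is corrected for all time points at once, which is what controls the family-wise error rate.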
<p>For behavioral analysis and its results, see <xref ref-type="supplementary-material" rid="SM1">Supplementary material</xref>. Statistical analyses were performed using R software (R Core Team, <xref ref-type="bibr" rid="B55">2022</xref>), lme4 (Bates et al., <xref ref-type="bibr" rid="B2">2015</xref>), effects (Fox and Weisberg, <xref ref-type="bibr" rid="B15">2019</xref>), and emmeans (Lenth, <xref ref-type="bibr" rid="B34">2022</xref>) packages. For data plotting, we used the ggplot2 (Wickham, <xref ref-type="bibr" rid="B65">2016</xref>) package.</p>
</sec>
</sec>
<sec>
<title>Results</title>
<sec>
<title>Alpha band range</title>
<p>A significant cluster in central and parietal areas (see <xref ref-type="supplementary-material" rid="SM1">Supplementary Figure S3</xref> for cluster channels) was identified in the time window from 3 to 7 s in the Alpha frequency band (8&#x02013;10 Hz). This reflects a difference in neural activity in this frequency band and time window between the experimental conditions. Specifically, this cluster is characterized by a significantly higher event-related desynchronization (ERD) during the Surround Presentation mode compared to both the Monophonic (<xref ref-type="fig" rid="F2">Figure 2A</xref>) and Control Presentation modes (<xref ref-type="fig" rid="F2">Figure 2B</xref>), with a peak difference of &#x0007E;5 s from the stimulus onset.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>(A)</bold> Surround&#x02014;Monophonic Alpha band (8&#x02013;10 Hz) cluster (peak time 5 s from stimulus onset). <bold>(B)</bold> Surround&#x02014;Control Alpha band (8&#x02013;10 Hz) cluster (peak time 5 s from stimulus onset). <bold>(C)</bold> Surround&#x02014;Monophonic Low-Beta band (16&#x02013;18 Hz) cluster (peak time 4.5 s from the stimulus onset). <bold>(D)</bold> Surround&#x02014;Control Low-Beta band (16&#x02013;18 Hz) cluster (peak time 4.5 s from the stimulus onset). Crosses (&#x0002B;) indicate channels with <italic>p</italic> &#x0003C; 0.01 and asterisks (<sup>&#x0002A;</sup>) indicate channels with <italic>p</italic> &#x0003C; 0.001.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-17-1222472-g0002.tif"/>
</fig>
<p>The linear mixed model on log-ratio frequency power explained 57% of the variance in the dependent variable, taking into account the random effects (<inline-formula><mml:math id="M6"><mml:msubsup><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mtext>m</mml:mtext></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> = 0.11; <inline-formula><mml:math id="M7"><mml:msubsup><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> = 0.57). The model revealed a significant main effect of Presentation modes [<inline-formula><mml:math id="M8"><mml:msubsup><mml:mrow><mml:mi>&#x003C7;</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> = 43.87, <italic>p</italic> &#x0003C; 0.001], showing that there was a significantly greater ERD in the Surround Presentation mode when compared to both the Monophonic [<italic>t</italic><sub>(16, 777)</sub> = 3.79, <italic>p</italic> &#x0003C; 0.001] and Control Presentation modes [<italic>t</italic><sub>(16, 777)</sub> = 6.62, <italic>p</italic> &#x0003C; 0.001; Surround: <italic>M</italic> = 1.5, CIs = 0.83, 2.17; Monophonic: <italic>M</italic> = 1.6, CIs = 0.93, 2.28; Control: <italic>M</italic> = 1.66, CIs = 0.98, 2.33]. This means that there was an increase in neural activity in the centro-parietal areas during the Surround Presentation mode when compared to the Monophonic and Control Presentation modes, with a peak difference of &#x0007E;5 s after the stimulus onset.
Furthermore, the <italic>post-hoc</italic> comparisons showed that there was a significantly greater ERD in the Monophonic Presentation mode when compared to the Control Presentation mode [<italic>t</italic><sub>(16, 777)</sub> = 2.25, <italic>p</italic> &#x0003C; 0.001]. In order to better visualize the time course and patterns of ERD, we normalized the Surround and Monophonic Presentation mode power by subtracting the Control Presentation mode power. We detected a different ERD pattern between the Surround and Monophonic Presentation modes (<xref ref-type="fig" rid="F3">Figure 3A</xref>). Although the significant difference is detected in the time window between 3 and 7 s after the stimulus onset, the ERD in Surround starts right after the stimulus onset, followed by a power rebound 6 s after the stimulus onset, while in Monophonic we distinguish an ERS in the first 2 s after the stimulus onset, followed by an ERD from 2 s onward and a power rebound 6 s after the stimulus onset (<xref ref-type="fig" rid="F3">Figure 3A</xref>). These findings demonstrate that the temporal progression of cortical activation differed between the two Presentation modes.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>(A)</bold> Log-ratio frequency power extracted from the Surround Presentation mode and the Monophonic Presentation mode relative to the Control in the Alpha cluster over time. <bold>(B)</bold> Log-ratio frequency power extracted from the Surround Presentation mode and the Monophonic Presentation mode relative to the Control in the Low-Beta cluster over time. Gray area highlights indicate significant differences between Presentation modes, asterisk (<sup>&#x0002A;</sup>) = <italic>p</italic> &#x0003C; 0.05.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-17-1222472-g0003.tif"/>
</fig>
</sec>
<sec>
<title>Low Beta band range</title>
<p>The second significant cluster in the central area (see <xref ref-type="supplementary-material" rid="SM1">Supplementary Figure S4</xref> for cluster channels) was identified in the time window from 2 to 7 s in the Low-Beta frequency band (16&#x02013;18 Hz). This reflects a difference in neural activity in this frequency band and time window between the experimental conditions. Similar to the first Alpha cluster, this cluster is also characterized by an ERD during the Surround Presentation mode compared to both the Monophonic (<xref ref-type="fig" rid="F2">Figure 2C</xref>) and Control Presentation modes (<xref ref-type="fig" rid="F2">Figure 2D</xref>), with a peak difference of &#x0007E;4.5 s from the stimulus onset.</p>
<p>The linear mixed model on log-ratio frequency power explained 89% of the variance in the dependent variable, taking into account the random effects (<inline-formula><mml:math id="M9"><mml:msubsup><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mtext>m</mml:mtext></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> = 0.2; <inline-formula><mml:math id="M10"><mml:msubsup><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> = 0.89). The model revealed a significant main effect of Presentation modes [<inline-formula><mml:math id="M11"><mml:msubsup><mml:mrow><mml:mi>&#x003C7;</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> = 9.79, <italic>p</italic> &#x0003C; 0.001], showing that there was a significant ERD in the Surround Presentation mode when compared to both the Monophonic [<italic>t</italic><sub>(5017)</sub> = 2.58, <italic>p</italic> &#x0003C; 0.05] and Control Presentation modes [<italic>t</italic><sub>(5017)</sub> = 2.94, <italic>p</italic> &#x0003C; 0.001; Surround: <italic>M</italic> = 0.29, CIs = 0.17, 0.40; Monophonic: <italic>M</italic> = 0.31, CIs = 0.18, 0.41; Control: <italic>M</italic> = 0.30, CIs = 0.18, 0.41]. This means that there was an increase in neural activity in the centro-parietal areas during the Surround Presentation mode when compared to the Monophonic and Control Presentation modes, with a peak difference of &#x0007E;4.5 s after the stimulus onset. In order to better visualize the time course and patterns of ERD, we normalized the Surround and Monophonic Presentation mode power by subtracting the Control Presentation mode power.
Moreover, in this cluster, we detected a partially different ERD pattern between the Surround and Monophonic Presentation modes (<xref ref-type="fig" rid="F3">Figure 3B</xref>). Although the significant difference emerges 2 s after the stimulus onset, the ERD in Surround starts right after the stimulus onset, followed by a power rebound 6 s after the stimulus onset, while in Monophonic we distinguish an ERS in the first 2 s after the stimulus onset, followed by an ERD from 2 s onward and a power rebound 5 s after the stimulus onset.</p>
</sec>
</sec>
<sec>
<title>Discussion</title>
<p>The objective of this HD-EEG experiment was to explore the neural cortical mechanisms and the temporal specificity of sound perception when presented in two distinct acoustic Presentation modes, namely, Monophonic and Surround. The main focus was to compare the neural activity in the Surround Presentation mode to that in the Monophonic and Control Presentation modes, with the hypothesis that the enhanced spatialization of sound in the Surround Presentation mode would lead to greater activation of embodied simulation mechanisms viewed as the physiological index of the sense of presence. Using a data-driven approach that allowed us to identify specific time windows and electrode clusters where there is a significant difference in neural activity between experimental conditions without any spatial cluster and frequency band assumption, we identified two significant centro-parietal clusters: the first in the Alpha frequency band (8 to 10 Hz) and in the time window from 3 to 7 s and the second in the Low-Beta frequency band (16 to 18 Hz) and in the time window from 2 to 7 s. Since the Rolandic Alpha frequency band of interest (8&#x02013;13 Hz) overlaps with the occipital Alpha band, recordings in central areas might be affected by this posterior activity. However, given that significant clusters were detected only in central and parietal areas, we can exclude that our results were related to attentional/vigilance factors originating from the parieto-occipital cortex. Further analysis revealed a significant ERD in the Surround Presentation mode when compared to both Monophonic and Control Presentation modes both in Alpha and Low-Beta centro-parietal clusters, confirming previous results (Tsuchida et al., <xref ref-type="bibr" rid="B63">2015</xref>) using a more robust analysis approach. 
We observed a late significant ERD peak (&#x0007E;4.5/5 s) compared with the typical time course of mu rhythm desynchronization (Avanzini et al., <xref ref-type="bibr" rid="B1">2012</xref>).</p>
</sec>
</sec>
<sec id="s4">
<title>General discussion</title>
<p>The results of the present study provide novel insights into the relationship between virtual acoustic spatial environments and the sense of presence, providing evidence that the Surround Presentation mode enhances the sense of presence by activating embodied simulation mechanisms. In Experiment 1, consistent with previous research on the relationship between Surround sound and the cinematic experience (Lessiter and Freeman, <xref ref-type="bibr" rid="B35">2001</xref>; V&#x000E4;stfj&#x000E4;ll, <xref ref-type="bibr" rid="B64">2003</xref>; Pettey et al., <xref ref-type="bibr" rid="B50">2010</xref>; Kobayashi et al., <xref ref-type="bibr" rid="B33">2015</xref>), participants explicitly reported that the Surround Presentation mode significantly enhances the sense of presence, Emotional Involvement, and Physical Immersion, thus showing that it enhances immersion by more closely approximating real-world auditory experience. These findings are consistent with previous research showing that Surround sound formats can envelop the viewer in a 360-degree auditory space, unlike the 180-degree space of stereo or mono sound (DiDonato, <xref ref-type="bibr" rid="B10">2010</xref>). Cummings and Bailenson, using a meta-analytic approach, investigated the relationship between the immersive quality of a mediated environment and the level of presence experienced by the participant. Several immersive features that offer high-fidelity simulations of reality, such as Surround sound, had a significant effect on presence (Cummings and Bailenson, <xref ref-type="bibr" rid="B8">2015</xref>). Additionally, these results offer some interesting theoretical implications, supporting the formation of presence as outlined by the spatial situational model framework proposed by Wirth et al. (<xref ref-type="bibr" rid="B69">2007</xref>).</p>
<p>The level of similarity between the perceptual experience elicited by video clips and the visual experience during real-life movements is believed to depend on the filming technique. The results of Heimann et al. indeed suggest that there may be a relationship between the perception of approaching stimuli and the feeling of involvement in the scene (Heimann et al., <xref ref-type="bibr" rid="B27">2014</xref>, <xref ref-type="bibr" rid="B26">2019</xref>). This may be due to the presence of more depth cues, which more closely resemble real-life vision. A similar mechanism can be hypothesized for the audio component in cinematic immersion, where the Surround Presentation mode can more closely resemble real-life hearing and activate embodied simulation processes. The EEG results of Experiment 2 further support this hypothesis. The Surround Presentation mode elicited a higher peak of Alpha rhythm desynchronization, reflecting greater activation of the mirror mechanism, which represents the neural basis of embodied simulation. The desynchronization peak was delayed from the onset of stimulus presentation, likely because of the naturalistic and, to some degree, heterogeneous stimuli used, which lacked a clearly time-locked goal-oriented action onset. This may have influenced the timing of the neural response observed in our study, as the action sounds present in the stimuli had highly variable temporal dynamics. Overall, these findings highlight the importance of considering the nature and characteristics of stimuli used in experiments, particularly when investigating the temporal specificity of neural responses. Furthermore, it is also possible that the use of dynamic and naturalistic stimuli, as opposed to experimental stimuli created <italic>ad hoc</italic>, may have led to a more complex and nuanced neural response. 
Our findings are consistent with previous studies that have reported different EEG topographies for the Alpha and Beta components of the mu rhythm (Pfurtscheller et al., <xref ref-type="bibr" rid="B52">1994</xref>; McFarland et al., <xref ref-type="bibr" rid="B39">2000</xref>). Previous research revealed different source locations and reactivity for the Alpha and Beta subcomponents of the mu rhythm desynchronization active during action execution and action observation, supporting the idea that they serve distinct functions (Hari and Salmelin, <xref ref-type="bibr" rid="B25">1997</xref>; Pfurtscheller et al., <xref ref-type="bibr" rid="B53">1997</xref>; Hari, <xref ref-type="bibr" rid="B24">2006</xref>; Press et al., <xref ref-type="bibr" rid="B54">2011</xref>; Hobson and Bishop, <xref ref-type="bibr" rid="B28">2016</xref>). The Alpha subcomponent is thought to reflect a sensorimotor function, while the Beta component is more closely linked to motor cortical control. Further research is needed to fully understand the underlying mechanisms and factors that contribute to this neural response, the different functions of the Alpha and Beta subcomponents and how they arise from different neural networks, and the functional significance of the activation of embodied simulation mechanisms in acoustic cinematic immersion. Regardless of these future developments, we can state that immersion, as an objective property of the playback system and a defining characteristic of our stimulus delivery setup, was reflected in the establishment of a sense of presence, revealed by the spectators&#x00027; stronger embodiment. 
These findings support the idea that the cinematic experience is unique and directly grounded in sensorimotor patterns that link the viewer to the screen, allowing for a form of immersive simulation that exploits the full potential of our brain-body system (Freedberg and Gallese, <xref ref-type="bibr" rid="B16">2007</xref>; Gallese and Guerra, <xref ref-type="bibr" rid="B20">2012</xref>, <xref ref-type="bibr" rid="B21">2019</xref>, <xref ref-type="bibr" rid="B22">2022</xref>; Fingerhut and Heimann, <xref ref-type="bibr" rid="B14">2022</xref>). The result is an intersubjective relationship between the viewer and the film that blurs the boundary between the real and imaginary worlds (Gallese and Guerra, <xref ref-type="bibr" rid="B21">2019</xref>).</p>
</sec>
<sec sec-type="conclusions" id="s5">
<title>Conclusion</title>
<p>This study provides new data on how increasing the spatial complexity of virtual environments mediated by cinematic sequences can heighten participants&#x00027; sense of presence. The greater neural activity recorded in centro-parietal areas contributes to the understanding of the neural mechanisms of embodied spatialized auditory perception. In the future, by further understanding how sound and visual information are integrated in the cinematic experience, we can gain insight into how the brain processes this information and how it can be used to enhance viewers&#x00027; immersive experience. This deeper comprehension can also be applied to other areas, such as virtual and augmented reality, which likewise rely on the integration of sound and visual information to create immersive experiences. Filmmakers and sound designers may also leverage this knowledge to precisely manipulate audiovisual elements, resulting in a heightened emotional impact and greater engagement with film scenes. The knowledge gained from this exploration should also have broader implications in fields beyond entertainment: education and therapy, for instance, can benefit from a deeper insight into how the brain processes and integrates sound and visual information. These applications range from designing effective educational multimedia content to developing immersive training for therapy and rehabilitation purposes.</p>
</sec>
<sec sec-type="data-availability" id="s6">
<title>Data availability statement</title>
<p>The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: <ext-link ext-link-type="uri" xlink:href="https://osf.io/ntemf/?view_only=a6f1d81e84d940f9a31513b9ed3f8c67">https://osf.io/ntemf/?view_only=a6f1d81e84d940f9a31513b9ed3f8c67</ext-link>.</p>
</sec>
<sec sec-type="ethics-statement" id="s7">
<title>Ethics statement</title>
<p>The studies involving human participants were reviewed and approved by &#x0201C;Comitato Etico Area Vasta Emilia Nord&#x0201D; and were conducted in accordance with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards (World Medical Association, 2013). Participants provided their written informed consent to participate in these studies.</p>
</sec>
<sec sec-type="author-contributions" id="s8">
<title>Author contributions</title>
<p>NL and VS conceptualized the idea and discussed it with all the authors. NL edited the stimuli, performed data acquisition and analyses, and wrote the manuscript. All authors designed the experiment, interpreted the results, and contributed to and approved the manuscript.</p>
</sec>
</body>
<back>
<sec sec-type="funding-information" id="s9">
<title>Funding</title>
<p>The study was supported by &#x00023;NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006)&#x02014;A multiscale integrated approach to the study of the nervous system in health and disease (DN. 1553 11.10.2022).</p>
</sec>
<ack><p>The authors wish to thank Davide Bonini (AudioB S.a.s.) for his help with acoustic system arrangement and calibration, and Leonardo Fazio and Rosalia Burrafato for their help in data recording.</p>
</ack>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec sec-type="supplementary-material" id="s11">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fnins.2023.1222472/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fnins.2023.1222472/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.docx" id="SM1" mimetype="application/vnd.openxmlformats-officedocument.wordprocessingml.document" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Avanzini</surname> <given-names>P.</given-names></name> <name><surname>Fabbri-Destro</surname> <given-names>M.</given-names></name> <name><surname>Volta</surname> <given-names>R. D.</given-names></name> <name><surname>Daprati</surname> <given-names>E.</given-names></name> <name><surname>Rizzolatti</surname> <given-names>G.</given-names></name> <name><surname>Cantalupo</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>The dynamics of sensorimotor cortical oscillations during the observation of hand movements: an EEG study</article-title>. <source>PLoS ONE</source> <volume>7</volume>, <fpage>e37534</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0037534</pub-id><pub-id pub-id-type="pmid">22624046</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bates</surname> <given-names>D.</given-names></name> <name><surname>M&#x000E4;chler</surname> <given-names>M.</given-names></name> <name><surname>Bolker</surname> <given-names>B. M.</given-names></name> <name><surname>Walker</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>Fitting linear mixed-effects models using lme4</article-title>. <source>J. Stat. Softw.</source> <volume>67</volume>, <fpage>1</fpage>&#x02013;<lpage>48</lpage>. <pub-id pub-id-type="doi">10.18637/jss.v067.i01</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bech</surname> <given-names>S.</given-names></name> <name><surname>Zacharov</surname> <given-names>N.</given-names></name></person-group> (<year>2006</year>). <source>Perceptual Audio Evaluation&#x02013;Theory, Method and Application</source>. <publisher-loc>Hoboken, NJ</publisher-loc>: <publisher-name>John Wiley &#x00026; Sons, Ltd</publisher-name>. <pub-id pub-id-type="doi">10.1002/9780470869253</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bertrand</surname> <given-names>O.</given-names></name> <name><surname>Perrin</surname> <given-names>F.</given-names></name> <name><surname>Pernier</surname> <given-names>J.</given-names></name></person-group> (<year>1985</year>). <article-title>A theoretical justification of the average reference in topographic evoked potential studies</article-title>. <source>Electroencephalogr. Clin. Neurophysiol.</source> <volume>62</volume>, <fpage>462</fpage>&#x02013;<lpage>464</lpage>. <pub-id pub-id-type="doi">10.1016/0168-5597(85)90058-9</pub-id><pub-id pub-id-type="pmid">2415344</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brainard</surname> <given-names>D. H.</given-names></name></person-group> (<year>1997</year>). <article-title>The psychophysics toolbox</article-title>. <source>Spat. Vis.</source> <volume>10</volume>, <fpage>433</fpage>&#x02013;<lpage>436</lpage>. <pub-id pub-id-type="doi">10.1163/156856897X00357</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Caetano</surname> <given-names>G.</given-names></name> <name><surname>Jousm&#x000E4;ki</surname> <given-names>V.</given-names></name> <name><surname>Hari</surname> <given-names>R.</given-names></name></person-group> (<year>2007</year>). <article-title>Actor&#x00027;s and observer&#x00027;s primary motor cortices stabilize similarly after seen or heard motor actions</article-title>. <source>Proc. Nat. Acad. Sci.</source> <volume>104</volume>, <fpage>9058</fpage>&#x02013;<lpage>9062</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0702453104</pub-id><pub-id pub-id-type="pmid">17470782</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Calbi</surname> <given-names>M.</given-names></name> <name><surname>Siri</surname> <given-names>F.</given-names></name> <name><surname>Heimann</surname> <given-names>K.</given-names></name> <name><surname>Barratt</surname> <given-names>D.</given-names></name> <name><surname>Gallese</surname> <given-names>V.</given-names></name> <name><surname>Kolesnikov</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>How context influences the interpretation of facial expressions: a source localization high-density EEG study on the &#x0201C;Kuleshov effect&#x0201D;</article-title>. <source>Sci. Rep.</source> <volume>9</volume>, <fpage>2107</fpage>. <pub-id pub-id-type="doi">10.1038/s41598-018-37786-y</pub-id><pub-id pub-id-type="pmid">30765713</pub-id></citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cummings</surname> <given-names>J. J.</given-names></name> <name><surname>Bailenson</surname> <given-names>J. N.</given-names></name></person-group> (<year>2015</year>). <article-title>How immersive is enough? A meta-analysis of the effect of immersive technology on user presence</article-title>. <source>Media Psychol.</source> <volume>19</volume>, <fpage>272</fpage>&#x02013;<lpage>309</lpage>. <pub-id pub-id-type="doi">10.1080/15213269.2015.1015740</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Delorme</surname> <given-names>A.</given-names></name> <name><surname>Makeig</surname> <given-names>S.</given-names></name></person-group> (<year>2004</year>). <article-title>EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis</article-title>. <source>J. Neurosci. Methods</source> <volume>134</volume>, <fpage>9</fpage>&#x02013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1016/j.jneumeth.2003.10.009</pub-id><pub-id pub-id-type="pmid">15102499</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>DiDonato</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <source>La Spazializzazione Acustica Nel Cinema Contemporaneo. Tecnica, Linguaggio, Modelli di Analisi</source>. <publisher-loc>Leesburg, VA</publisher-loc>: <publisher-name>Onyx, Ed</publisher-name>.</citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>DiPellegrino</surname> <given-names>G.</given-names></name> <name><surname>Fadiga</surname> <given-names>L.</given-names></name> <name><surname>Fogassi</surname> <given-names>L.</given-names></name> <name><surname>Gallese</surname> <given-names>V.</given-names></name> <name><surname>Rizzolatti</surname> <given-names>G.</given-names></name></person-group> (<year>1992</year>). <article-title>Understanding motor events: a neurophysiological study</article-title>. <source>Exp. Brain Res.</source> <volume>91</volume>, <fpage>176</fpage>&#x02013;<lpage>180</lpage>. <pub-id pub-id-type="doi">10.1007/BF00230027</pub-id><pub-id pub-id-type="pmid">1301372</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Elsaesser</surname> <given-names>T.</given-names></name> <name><surname>Hagener</surname> <given-names>M.</given-names></name></person-group> (<year>2015</year>). <source>Film Theory: An Introduction through the Senses</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Routledge</publisher-name>. <pub-id pub-id-type="doi">10.4324/9781315740768</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fern&#x000E1;ndez-Aguilar</surname> <given-names>L.</given-names></name> <name><surname>Navarro-Bravo</surname> <given-names>B.</given-names></name> <name><surname>Ricarte</surname> <given-names>J.</given-names></name> <name><surname>Ros</surname> <given-names>L.</given-names></name> <name><surname>Latorre</surname> <given-names>J. M.</given-names></name></person-group> (<year>2019</year>). <article-title>How effective are films in inducing positive and negative emotional states? A meta-analysis</article-title>. <source>PLoS ONE</source> <volume>14</volume>, <fpage>e0225040</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0225040</pub-id><pub-id pub-id-type="pmid">31751361</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fingerhut</surname> <given-names>J.</given-names></name> <name><surname>Heimann</surname> <given-names>K.</given-names></name></person-group> (<year>2022</year>). <article-title>Enacting moving images: film theory and experimental science within a new cognitive media theory</article-title>. <source>Projections</source> <volume>16</volume>, <fpage>105</fpage>&#x02013;<lpage>123</lpage>. <pub-id pub-id-type="doi">10.3167/proj.2022.160107</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Fox</surname> <given-names>J.</given-names></name> <name><surname>Weisberg</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <source>An R Companion to Applied Regression, 3rd ed. London: Sage Publications</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://socialsciences.mcmaster.ca/jfox/Books/Companion/index.html">https://socialsciences.mcmaster.ca/jfox/Books/Companion/index.html</ext-link> (accessed May 13, 2023).</citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Freedberg</surname> <given-names>D.</given-names></name> <name><surname>Gallese</surname> <given-names>V.</given-names></name></person-group> (<year>2007</year>). <article-title>Motion, emotion and empathy in esthetic experience</article-title>. <source>Trends Cogn. Sci.</source> <volume>11</volume>, <fpage>197</fpage>&#x02013;<lpage>203</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2007.02.003</pub-id><pub-id pub-id-type="pmid">17347026</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fridlund</surname> <given-names>A. J.</given-names></name> <name><surname>Cacioppo</surname> <given-names>J. T.</given-names></name></person-group> (<year>1986</year>). <article-title>Guidelines for human electromyographic research</article-title>. <source>Psychophysiology</source> <volume>23</volume>, <fpage>567</fpage>&#x02013;<lpage>589</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.1986.tb00676.x</pub-id><pub-id pub-id-type="pmid">3809364</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gallese</surname> <given-names>V.</given-names></name></person-group> (<year>2009</year>). <article-title>Mirror neurons, embodied simulation, and the neural basis of social identification</article-title>. <source>Psychoanal. Dialogues</source> <volume>19</volume>, <fpage>519</fpage>&#x02013;<lpage>536</lpage>. <pub-id pub-id-type="doi">10.1080/10481880903231910</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gallese</surname> <given-names>V.</given-names></name></person-group> (<year>2019</year>). <article-title>Embodied simulation. its bearing on aesthetic experience and the dialogue between neuroscience and the humanities</article-title>. <source>Gestalt Theory</source> <volume>41</volume>, <fpage>113</fpage>&#x02013;<lpage>127</lpage>. <pub-id pub-id-type="doi">10.2478/gth-2019-0013</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gallese</surname> <given-names>V.</given-names></name> <name><surname>Guerra</surname> <given-names>M.</given-names></name></person-group> (<year>2012</year>). <article-title>Embodying movies: embodied simulation and film studies</article-title>. <source>Cinema J. Philos. Moving Image</source> <volume>3</volume>, <fpage>183</fpage>&#x02013;<lpage>210</lpage>.</citation>
</ref>
<ref id="B21">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gallese</surname> <given-names>V.</given-names></name> <name><surname>Guerra</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <source>The Empathic Screen: Cinema and Neuroscience.</source> <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>. <pub-id pub-id-type="doi">10.1093/oso/9780198793533.001.0001</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gallese</surname> <given-names>V.</given-names></name> <name><surname>Guerra</surname> <given-names>M.</given-names></name></person-group> (<year>2022</year>). <article-title>The neuroscience of film</article-title>. <source>Projections</source> <volume>16</volume>, <fpage>1</fpage>&#x02013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.3167/proj.2022.160101</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gastaut</surname> <given-names>H. J.</given-names></name> <name><surname>Bert</surname> <given-names>J.</given-names></name></person-group> (<year>1954</year>). <article-title>EEG changes during cinematographic presentation (moving picture activation of the EEG)</article-title>. <source>Electroencephalogr. Clin. Neurophysiol.</source> <volume>6</volume>, <fpage>433</fpage>&#x02013;<lpage>444</lpage>. <pub-id pub-id-type="doi">10.1016/0013-4694(54)90058-9</pub-id><pub-id pub-id-type="pmid">13200415</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hari</surname> <given-names>R.</given-names></name></person-group> (<year>2006</year>). <article-title>Action&#x02013;perception connection and the cortical mu rhythm</article-title>. <source>Prog. Brain Res.</source> <volume>159</volume>, <fpage>253</fpage>&#x02013;<lpage>260</lpage>. <pub-id pub-id-type="doi">10.1016/S0079-6123(06)59017-X</pub-id><pub-id pub-id-type="pmid">17071236</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hari</surname> <given-names>R.</given-names></name> <name><surname>Salmelin</surname> <given-names>R.</given-names></name></person-group> (<year>1997</year>). <article-title>Human cortical oscillations: a neuromagnetic view through the skull</article-title>. <source>Trends Neurosci.</source> <volume>20</volume>, <fpage>44</fpage>&#x02013;<lpage>49</lpage>. <pub-id pub-id-type="doi">10.1016/S0166-2236(96)10065-5</pub-id><pub-id pub-id-type="pmid">9004419</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Heimann</surname> <given-names>K.</given-names></name> <name><surname>Uithol</surname> <given-names>S.</given-names></name> <name><surname>Calbi</surname> <given-names>M.</given-names></name> <name><surname>Umilt&#x000E0;</surname> <given-names>M. A.</given-names></name> <name><surname>Guerra</surname> <given-names>M.</given-names></name> <name><surname>Fingerhut</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Embodying the camera: an EEG study on the effect of camera movements on film spectators&#x00027; sensorimotor cortex activation</article-title>. <source>PLoS ONE</source> <volume>14</volume>, <fpage>e0211026</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0211026</pub-id><pub-id pub-id-type="pmid">30865624</pub-id></citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Heimann</surname> <given-names>K.</given-names></name> <name><surname>Umilt&#x000E0;</surname> <given-names>M. A.</given-names></name> <name><surname>Guerra</surname> <given-names>M.</given-names></name> <name><surname>Gallese</surname> <given-names>V.</given-names></name></person-group> (<year>2014</year>). <article-title>Moving mirrors: a high-density EEG study investigating the effect of camera movements on motor cortex activation during action observation</article-title>. <source>J. Cogn. Neurosci.</source> <volume>26</volume>, <fpage>2087</fpage>&#x02013;<lpage>2101</lpage>. <pub-id pub-id-type="doi">10.1162/jocn_a_00602</pub-id><pub-id pub-id-type="pmid">24666130</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hobson</surname> <given-names>H. M.</given-names></name> <name><surname>Bishop</surname> <given-names>D. V. M.</given-names></name></person-group> (<year>2016</year>). <article-title>Mu suppression &#x02013; a good measure of the human mirror neuron system?</article-title> <source>Cortex</source> <volume>82</volume>, <fpage>290</fpage>&#x02013;<lpage>310</lpage>. <pub-id pub-id-type="doi">10.1016/j.cortex.2016.03.019</pub-id><pub-id pub-id-type="pmid">27180217</pub-id></citation></ref>
<ref id="B29">
<citation citation-type="book"><person-group person-group-type="author"><collab>ITU-R</collab></person-group> (<year>1996</year>). <source>Recommendation ITU-RP. 800, Methods for Subjective Determination of Transmission Quality</source>. <publisher-loc>Geneva</publisher-loc>: <publisher-name>International Telecommunication Union</publisher-name>.</citation>
</ref>
<ref id="B30">
<citation citation-type="book"><person-group person-group-type="author"><collab>ITU-R</collab></person-group> (<year>1997</year>). <source>Recommendation BS.1116-1, Methods for the Subjective Assessment of Small Impairments in Audio Systems Including Multichannel Sound Systems</source>. <publisher-loc>Geneva</publisher-loc>: <publisher-name>International Telecommunication Union</publisher-name>.</citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Keysers</surname> <given-names>C.</given-names></name> <name><surname>Kohler</surname> <given-names>E.</given-names></name> <name><surname>Umilt&#x000E0;</surname> <given-names>M. A.</given-names></name> <name><surname>Nanetti</surname> <given-names>L.</given-names></name> <name><surname>Fogassi</surname> <given-names>L.</given-names></name> <name><surname>Gallese</surname> <given-names>V.</given-names></name> <etal/></person-group>. (<year>2003</year>). <article-title>Audiovisual mirror neurons and action recognition</article-title>. <source>Exp. Brain Res.</source> <volume>153</volume>, <fpage>628</fpage>&#x02013;<lpage>636</lpage>. <pub-id pub-id-type="doi">10.1007/s00221-003-1603-5</pub-id><pub-id pub-id-type="pmid">12937876</pub-id></citation></ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kitagawa</surname> <given-names>N.</given-names></name> <name><surname>Ichihara</surname> <given-names>S.</given-names></name></person-group> (<year>2002</year>). <article-title>Hearing visual motion in depth</article-title>. <source>Nature</source> <volume>416</volume>, <fpage>172</fpage>&#x02013;<lpage>174</lpage>. <pub-id pub-id-type="doi">10.1038/416172a</pub-id><pub-id pub-id-type="pmid">11894093</pub-id></citation></ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kobayashi</surname> <given-names>M.</given-names></name> <name><surname>Ueno</surname> <given-names>K.</given-names></name> <name><surname>Ise</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>The effects of spatialized sounds on the sense of presence in auditory virtual environments: a psychological and physiological study</article-title>. <source>Presence</source> <volume>24</volume>, <fpage>163</fpage>&#x02013;<lpage>174</lpage>. <pub-id pub-id-type="doi">10.1162/PRES_a_00226</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Lenth</surname> <given-names>R. V.</given-names></name></person-group> (<year>2022</year>). <source>emmeans: Estimated Marginal Means, aka Least-Squares Means. R package version 1.8.3</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://CRAN.R-project.org/package=emmeans">https://CRAN.R-project.org/package=emmeans</ext-link> (accessed May 13, 2023).</citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lessiter</surname> <given-names>J.</given-names></name> <name><surname>Freeman</surname> <given-names>J.</given-names></name></person-group> (<year>2001</year>). <article-title>&#x0201C;Really hear? The effects of audio quality on presence,&#x0201D;</article-title> in <source>4th International Workshop on Presence</source>, <fpage>288</fpage>&#x02013;<lpage>324</lpage>.</citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lipscomb</surname> <given-names>S. D.</given-names></name> <name><surname>Kerins</surname> <given-names>M.</given-names></name></person-group> (<year>2004</year>). <article-title>&#x0201C;An empirical investigation into the effect of presentation mode in the cinematic and music listening experience,&#x0201D;</article-title> in <source>8th International Conference on Music Perception and Cognition</source>.</citation>
</ref>
<ref id="B37">
<citation citation-type="web"><person-group person-group-type="author"><collab>MathWorks</collab></person-group> (<year>2022</year>). <source>Statistics and Machine Learning Toolbox Documentation, Natick, Massachusetts: The MathWorks Inc</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.mathworks.com/help/stats/index.html">https://www.mathworks.com/help/stats/index.html</ext-link> (accessed May 13, 2023).</citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mattila</surname> <given-names>V.-V.</given-names></name> <name><surname>Zacharov</surname> <given-names>N.</given-names></name></person-group> (<year>2001</year>). <article-title>&#x0201C;Generalized listener selection (GLS) procedure,&#x0201D;</article-title> in <source>AES 110th Convention</source>.</citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McFarland</surname> <given-names>D. J.</given-names></name> <name><surname>Miner</surname> <given-names>L. A.</given-names></name> <name><surname>Vaughan</surname> <given-names>T. M.</given-names></name> <name><surname>Wolpaw</surname> <given-names>J. R.</given-names></name></person-group> (<year>2000</year>). <article-title>Mu and beta rhythm topographies during motor imagery and actual movements</article-title>. <source>Brain Topogr.</source> <volume>12</volume>, <fpage>177</fpage>&#x02013;<lpage>186</lpage>. <pub-id pub-id-type="doi">10.1023/A:1023437823106</pub-id><pub-id pub-id-type="pmid">10791681</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Murphy</surname> <given-names>W. J.</given-names></name> <name><surname>Franks</surname> <given-names>J. R.</given-names></name></person-group> (<year>2002</year>). <article-title>Revisiting the NIOSH criteria for a recommended standard: Occupational noise exposure</article-title>. <source>Int. Congress Exposit. Noise Cont. Engg</source>. <volume>111</volume>, <fpage>19</fpage>&#x02013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1121/1.4778162</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Muthukumaraswamy</surname> <given-names>S. D.</given-names></name> <name><surname>Johnson</surname> <given-names>B. W.</given-names></name></person-group> (<year>2004a</year>). <article-title>Changes in rolandic mu rhythm during observation of a precision grip</article-title>. <source>Psychophysiology</source> <volume>41</volume>, <fpage>152</fpage>&#x02013;<lpage>156</lpage>. <pub-id pub-id-type="doi">10.1046/j.1469-8986.2003.00129.x</pub-id><pub-id pub-id-type="pmid">14693010</pub-id></citation></ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Muthukumaraswamy</surname> <given-names>S. D.</given-names></name> <name><surname>Johnson</surname> <given-names>B. W.</given-names></name></person-group> (<year>2004b</year>). <article-title>Primary motor cortex activation during action observation revealed by wavelet analysis of the <italic>EEG</italic></article-title>. <source>Clin. Neurophysiol.</source> <volume>115</volume>, <fpage>1760</fpage>&#x02013;<lpage>1766</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2004.03.004</pub-id><pub-id pub-id-type="pmid">15261854</pub-id></citation></ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Muthukumaraswamy</surname> <given-names>S. D.</given-names></name> <name><surname>Johnson</surname> <given-names>B. W.</given-names></name> <name><surname>McNair</surname> <given-names>N. A.</given-names></name></person-group> (<year>2004</year>). <article-title>Mu rhythm modulation during observation of an object-directed grasp</article-title>. <source>Brain Res. Cogn. Brain Res</source>. <volume>19</volume>, <fpage>195</fpage>&#x02013;<lpage>201</lpage>. <pub-id pub-id-type="doi">10.1016/j.cogbrainres.2003.12.001</pub-id><pub-id pub-id-type="pmid">15019715</pub-id></citation></ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Neuper</surname> <given-names>C.</given-names></name> <name><surname>W&#x000F6;rtz</surname> <given-names>M.</given-names></name> <name><surname>Pfurtscheller</surname> <given-names>G.</given-names></name></person-group> (<year>2006</year>). <article-title>ERD/ERS patterns reflecting sensorimotor activation and deactivation</article-title>. <source>Prog. Brain Res.</source> <volume>159</volume>, <fpage>211</fpage>&#x02013;<lpage>222</lpage>. <pub-id pub-id-type="doi">10.1016/S0079-6123(06)59014-4</pub-id><pub-id pub-id-type="pmid">17071233</pub-id></citation></ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oldfield</surname> <given-names>R. C.</given-names></name></person-group> (<year>1971</year>). <article-title>The assessment and analysis of handedness: the Edinburgh inventory</article-title>. <source>Neuropsychologia</source> <volume>9</volume>, <fpage>97</fpage>&#x02013;<lpage>113</lpage>. <pub-id pub-id-type="doi">10.1016/0028-3932(71)90067-4</pub-id><pub-id pub-id-type="pmid">5146491</pub-id></citation></ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oostenveld</surname> <given-names>R.</given-names></name> <name><surname>Fries</surname> <given-names>P.</given-names></name> <name><surname>Maris</surname> <given-names>E.</given-names></name> <name><surname>Schoffelen</surname> <given-names>J.-M.</given-names></name></person-group> (<year>2011</year>). <article-title>FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data</article-title>. <source>Comput. Intell. Neurosci.</source> <volume>2011</volume>, <fpage>156869</fpage>. <pub-id pub-id-type="doi">10.1155/2011/156869</pub-id><pub-id pub-id-type="pmid">21253357</pub-id></citation></ref>
<ref id="B47">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Paul</surname> <given-names>A.</given-names></name></person-group> (<year>2009</year>). <source>Audyssey DSX 10.2. Surround Sound Overview</source>.</citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perry</surname> <given-names>A.</given-names></name> <name><surname>Bentin</surname> <given-names>S.</given-names></name></person-group> (<year>2009</year>). <article-title>Mirror activity in the human brain while observing hand movements: a comparison between EEG desynchronization in the &#x003BC;-range and previous fMRI results</article-title>. <source>Brain Res.</source> <volume>1282</volume>, <fpage>126</fpage>&#x02013;<lpage>132</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2009.05.059</pub-id><pub-id pub-id-type="pmid">19500557</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perry</surname> <given-names>A.</given-names></name> <name><surname>Troje</surname> <given-names>N. F.</given-names></name> <name><surname>Bentin</surname> <given-names>S.</given-names></name></person-group> (<year>2010</year>). <article-title>Exploring motor system contributions to the perception of social information: evidence from EEG activity in the mu/alpha frequency range</article-title>. <source>Soc. Neurosci.</source> <volume>5</volume>, <fpage>272</fpage>&#x02013;<lpage>284</lpage>. <pub-id pub-id-type="doi">10.1080/17470910903395767</pub-id><pub-id pub-id-type="pmid">20169504</pub-id></citation></ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pettey</surname> <given-names>G.</given-names></name> <name><surname>Bracken</surname> <given-names>C. C.</given-names></name> <name><surname>Rubenking</surname> <given-names>B.</given-names></name> <name><surname>Buncher</surname> <given-names>M.</given-names></name> <name><surname>Gress</surname> <given-names>E.</given-names></name></person-group> (<year>2010</year>). <article-title>Telepresence, soundscapes and technological expectation: putting the observer into the equation</article-title>. <source>Virtual Real.</source> <volume>14</volume>, <fpage>15</fpage>&#x02013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1007/s10055-009-0148-8</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfurtscheller</surname> <given-names>G.</given-names></name> <name><surname>Neuper</surname> <given-names>C.</given-names></name></person-group> (<year>1997</year>). <article-title>Motor imagery activates primary sensorimotor area in humans</article-title>. <source>Neurosci. Lett.</source> <volume>239</volume>, <fpage>65</fpage>&#x02013;<lpage>68</lpage>. <pub-id pub-id-type="doi">10.1016/S0304-3940(97)00889-6</pub-id><pub-id pub-id-type="pmid">9469657</pub-id></citation></ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfurtscheller</surname> <given-names>G.</given-names></name> <name><surname>Pregenzer</surname> <given-names>M.</given-names></name> <name><surname>Neuper</surname> <given-names>C.</given-names></name></person-group> (<year>1994</year>). <article-title>Visualization of sensorimotor areas involved in preparation for hand movement based on classification of &#x003BC; and central &#x003B2; rhythms in single EEG trials in man</article-title>. <source>Neurosci. Lett.</source> <volume>181</volume>, <fpage>43</fpage>&#x02013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1016/0304-3940(94)90556-8</pub-id><pub-id pub-id-type="pmid">7898767</pub-id></citation></ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfurtscheller</surname> <given-names>G.</given-names></name> <name><surname>Stanc&#x000E1;k</surname> <given-names>A.</given-names></name> <name><surname>Edlinger</surname> <given-names>G.</given-names></name></person-group> (<year>1997</year>). <article-title>On the existence of different types of central beta rhythms below 30 Hz</article-title>. <source>Electroencephalogr. Clin. Neurophysiol.</source> <volume>102</volume>, <fpage>316</fpage>&#x02013;<lpage>325</lpage>. <pub-id pub-id-type="doi">10.1016/S0013-4694(96)96612-2</pub-id><pub-id pub-id-type="pmid">9146493</pub-id></citation></ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Press</surname> <given-names>C.</given-names></name> <name><surname>Cook</surname> <given-names>J.</given-names></name> <name><surname>Blakemore</surname> <given-names>S.-J.</given-names></name> <name><surname>Kilner</surname> <given-names>J.</given-names></name></person-group> (<year>2011</year>). <article-title>Dynamic modulation of human motor activity when observing actions</article-title>. <source>J. Neurosci.</source> <volume>31</volume>, <fpage>2792</fpage>&#x02013;<lpage>2800</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1595-10.2011</pub-id><pub-id pub-id-type="pmid">21414901</pub-id></citation></ref>
<ref id="B55">
<citation citation-type="web"><person-group person-group-type="author"><collab>R Core Team</collab></person-group> (<year>2022</year>). <source>R: A Language and Environment for Statistical Computing</source>. <publisher-loc>Vienna</publisher-loc>: <publisher-name>R Foundation for Statistical Computing</publisher-name>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.R-project.org/">https://www.R-project.org/</ext-link> (accessed May 13, 2023).</citation>
</ref>
<ref id="B56">
<citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Rigby</surname> <given-names>J. M.</given-names></name> <name><surname>Brumby</surname> <given-names>D. P.</given-names></name> <name><surname>Gould</surname> <given-names>S. J. J.</given-names></name> <name><surname>Cox</surname> <given-names>A. L.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Development of a questionnaire to measure immersion in video media: the film IEQ,&#x0201D;</article-title> in <source>Proceedings of the 2019 ACM International Conference on Interactive Experiences for TV and Online Video</source>, <fpage>35</fpage>&#x02013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1145/3317697.3323361</pub-id></citation>
</ref>
<ref id="B57">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sbravatti</surname> <given-names>V.</given-names></name></person-group> (<year>2019</year>). <source>La cognizione dello Spazio Sonoro Filmico: Un Approccio Neurofilmologico</source>. <publisher-loc>Rome</publisher-loc>: <publisher-name>Bulzoni</publisher-name>.</citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sestito</surname> <given-names>M.</given-names></name> <name><surname>Umilt&#x000E0;</surname> <given-names>M. A.</given-names></name> <name><surname>Paola</surname> <given-names>G. D.</given-names></name> <name><surname>Fortunati</surname> <given-names>R.</given-names></name> <name><surname>Raballo</surname> <given-names>A.</given-names></name> <name><surname>Leuci</surname> <given-names>E.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>Facial reactions in response to dynamic emotional stimuli in different modalities in patients suffering from schizophrenia: a behavioral and EMG study</article-title>. <source>Front. Hum. Neurosci.</source> <volume>7</volume>, <fpage>368</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2013.00368</pub-id><pub-id pub-id-type="pmid">23888132</pub-id></citation></ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Slater</surname> <given-names>M.</given-names></name> <name><surname>Wilbur</surname> <given-names>S.</given-names></name></person-group> (<year>1997</year>). <article-title>A framework for immersive virtual environments (FIVE): speculations on the role of presence in virtual environments</article-title>. <source>Presence</source> <volume>6</volume>, <fpage>603</fpage>&#x02013;<lpage>616</lpage>. <pub-id pub-id-type="doi">10.1162/pres.1997.6.6.603</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sonkusare</surname> <given-names>S.</given-names></name> <name><surname>Breakspear</surname> <given-names>M.</given-names></name> <name><surname>Guo</surname> <given-names>C.</given-names></name></person-group> (<year>2019</year>). <article-title>Naturalistic stimuli in neuroscience: critically acclaimed</article-title>. <source>Trends Cogn. Sci.</source> <volume>23</volume>, <fpage>699</fpage>&#x02013;<lpage>714</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2019.05.004</pub-id><pub-id pub-id-type="pmid">31257145</pub-id></citation></ref>
<ref id="B61">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sterne</surname> <given-names>J.</given-names></name></person-group> (<year>2003</year>). <source>The Audible Past: Cultural Origins of Sound Reproduction</source>. <publisher-loc>Durham, NC</publisher-loc>: <publisher-name>Duke University Press</publisher-name>. <pub-id pub-id-type="doi">10.1515/9780822384250</pub-id></citation>
</ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Toro</surname> <given-names>C.</given-names></name> <name><surname>Deuschl</surname> <given-names>G.</given-names></name> <name><surname>Thatcher</surname> <given-names>R.</given-names></name> <name><surname>Sato</surname> <given-names>S.</given-names></name> <name><surname>Kufta</surname> <given-names>C.</given-names></name> <name><surname>Hallett</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>1994</year>). <article-title>Event-related desynchronization and movement-related cortical potentials on the ECoG and EEG</article-title>. <source>Electroencephalogr. Clin. Neurophysiol.</source> <volume>93</volume>, <fpage>380</fpage>&#x02013;<lpage>389</lpage>. <pub-id pub-id-type="doi">10.1016/0168-5597(94)90126-0</pub-id><pub-id pub-id-type="pmid">7525246</pub-id></citation></ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsuchida</surname> <given-names>K.</given-names></name> <name><surname>Ueno</surname> <given-names>K.</given-names></name> <name><surname>Shimada</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>Motor area activity for action-related and nonaction-related sounds in a three-dimensional sound field reproduction system</article-title>. <source>NeuroReport</source> <volume>26</volume>, <fpage>291</fpage>&#x02013;<lpage>295</lpage>. <pub-id pub-id-type="doi">10.1097/WNR.0000000000000347</pub-id><pub-id pub-id-type="pmid">25714418</pub-id></citation></ref>
<ref id="B64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>V&#x000E4;stfj&#x000E4;ll</surname> <given-names>D.</given-names></name></person-group> (<year>2003</year>). <article-title>The subjective sense of presence, emotion recognition, and experienced emotions in auditory virtual environments</article-title>. <source>CyberPsychol. Behav.</source> <volume>6</volume>, <fpage>181</fpage>&#x02013;<lpage>188</lpage>. <pub-id pub-id-type="doi">10.1089/109493103321640374</pub-id><pub-id pub-id-type="pmid">12804030</pub-id></citation></ref>
<ref id="B65">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Wickham</surname> <given-names>H.</given-names></name></person-group> (<year>2016</year>). <source>ggplot2: Elegant Graphics for Data Analysis</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer-Verlag</publisher-name>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://ggplot2.tidyverse.org">https://ggplot2.tidyverse.org</ext-link> (accessed May 13, 2023).</citation>
</ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Winkler</surname> <given-names>I.</given-names></name> <name><surname>Brandl</surname> <given-names>S.</given-names></name> <name><surname>Horn</surname> <given-names>F.</given-names></name> <name><surname>Waldburger</surname> <given-names>E.</given-names></name> <name><surname>Allefeld</surname> <given-names>C.</given-names></name> <name><surname>Tangermann</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Robust artifactual independent component classification for BCI practitioners</article-title>. <source>J. Neural Eng.</source> <volume>11</volume>, <fpage>035013</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/11/3/035013</pub-id><pub-id pub-id-type="pmid">24836294</pub-id></citation></ref>
<ref id="B67">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Winkler</surname> <given-names>I.</given-names></name> <name><surname>Haufe</surname> <given-names>S.</given-names></name> <name><surname>Tangermann</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>Automatic classification of artifactual ICA-components for artifact removal in EEG signals</article-title>. <source>Behav. Brain Funct.</source> <volume>7</volume>, <fpage>30</fpage>. <pub-id pub-id-type="doi">10.1186/1744-9081-7-30</pub-id><pub-id pub-id-type="pmid">21810266</pub-id></citation></ref>
<ref id="B68">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Wirth</surname> <given-names>W.</given-names></name> <name><surname>Hartmann</surname> <given-names>T.</given-names></name> <name><surname>B&#x000F6;cking</surname> <given-names>S.</given-names></name> <name><surname>Vorderer</surname> <given-names>P.</given-names></name> <name><surname>Klimmt</surname> <given-names>C.</given-names></name> <name><surname>Schramm</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2003</year>). <source>Constructing Presence: A Two-Level Model of the Formation of Spatial Presence Experiences</source>.</citation>
</ref>
<ref id="B69">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wirth</surname> <given-names>W.</given-names></name> <name><surname>Hartmann</surname> <given-names>T.</given-names></name> <name><surname>B&#x000F6;cking</surname> <given-names>S.</given-names></name> <name><surname>Vorderer</surname> <given-names>P.</given-names></name> <name><surname>Klimmt</surname> <given-names>C.</given-names></name> <name><surname>Schramm</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2007</year>). <article-title>A process model of the formation of spatial presence experiences</article-title>. <source>Media Psychol.</source> <volume>9</volume>, <fpage>493</fpage>&#x02013;<lpage>525</lpage>. <pub-id pub-id-type="doi">10.1080/15213260701283079</pub-id></citation>
</ref>
<ref id="B70">
<citation citation-type="journal"><person-group person-group-type="author"><collab>World Medical Association</collab></person-group> (<year>2013</year>). <article-title>World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects</article-title>. <source>JAMA</source> <volume>310</volume>, <fpage>2191</fpage>&#x02013;<lpage>2194</lpage>. <pub-id pub-id-type="doi">10.1001/jama.2013.281053</pub-id><pub-id pub-id-type="pmid">24141714</pub-id></citation></ref>
</ref-list> 
</back>
</article> 