<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="brief-report">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Hum. Neurosci.</journal-id>
<journal-title>Frontiers in Human Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Hum. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5161</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnhum.2023.1124784</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Brief Research Report</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Actions do not clearly impact auditory memory</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Font-Alaminos</surname> <given-names>Marta</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2199495/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Paraskevoudi</surname> <given-names>Nadia</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/589017/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>SanMiguel</surname> <given-names>Iria</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/43185/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Institut de Neuroci&#x00E8;ncies, Universitat de Barcelona</institution>, <addr-line>Barcelona</addr-line>, <country>Spain</country></aff>
<aff id="aff2"><sup>2</sup><institution>Brainlab-Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona</institution>, <addr-line>Barcelona</addr-line>, <country>Spain</country></aff>
<aff id="aff3"><sup>3</sup><institution>Institut de Recerca Sant Joan de D&#x00E9;u</institution>, <addr-line>Esplugues de Llobregat</addr-line>, <country>Spain</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Erich Schr&#x00F6;ger, Leipzig University, Germany</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Tjerk Dercksen, Leibniz Institute for Neurobiology (LG), Germany; Maria Pyasik, University of Verona, Italy</p></fn>
<corresp id="c001">&#x002A;Correspondence: Iria SanMiguel, <email>isanmiguel@ub.edu</email></corresp>
<fn fn-type="other" id="fn004"><p>This article was submitted to Cognitive Neuroscience, a section of the journal Frontiers in Human Neuroscience</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>27</day>
<month>02</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>17</volume>
<elocation-id>1124784</elocation-id>
<history>
<date date-type="received">
<day>15</day>
<month>12</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>31</day>
<month>01</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2023 Font-Alaminos, Paraskevoudi and SanMiguel.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Font-Alaminos, Paraskevoudi and SanMiguel</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>When memorizing a list of words, those that are read aloud are remembered better than those read silently, a phenomenon known as the production effect. There have been several attempts to explain the production effect; however, actions alone have not been examined as possible contributors. Stimuli that coincide with our own actions are processed differently from stimuli presented passively to us. These sensory response modulations may have an impact on how action-related inputs are stored in memory. In this study, we investigated whether actions could impact auditory memory. Participants listened to sounds presented either during or in between their actions. We measured electrophysiological responses to the sounds and tested participants&#x2019; memory of them. Results showed attenuation of sensory responses for action-coinciding sounds. However, we did not find a significant effect on memory performance. The absence of significant behavioral findings suggests that the production effect may not depend on the effects of actions <italic>per se</italic>. We conclude that action alone is not sufficient to improve memory performance and thus to elicit a production effect.</p>
</abstract>
<kwd-group>
<kwd>action</kwd>
<kwd>production effect</kwd>
<kwd>self-generation effects</kwd>
<kwd>auditory memory</kwd>
<kwd>active learning</kwd>
</kwd-group>
<contract-num rid="cn001">PSI2017-85600-P</contract-num>
<contract-num rid="cn001">MDM-2017-0729-18-2M</contract-num>
<contract-num rid="cn001">RYC-2013-12577</contract-num>
<contract-num rid="cn002">2017SGR-974</contract-num>
<contract-sponsor id="cn001">Agencia Estatal de Investigaci&#x00F3;n<named-content content-type="fundref-id">10.13039/501100011033</named-content></contract-sponsor>
<contract-sponsor id="cn002">Ag&#x00E8;ncia de Gesti&#x00F3; d'Ajuts Universitaris i de Recerca<named-content content-type="fundref-id">10.13039/501100003030</named-content></contract-sponsor><contract-sponsor id="cn003">Universitat de Barcelona<named-content content-type="fundref-id">10.13039/501100005774</named-content></contract-sponsor>
<counts>
<fig-count count="3"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="50"/>
<page-count count="9"/>
<word-count count="6337"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>Introduction</title>
<p>You have probably been told at least once to study aloud, or while chewing gum, to best prepare for an upcoming test. Countless examples from daily life suggest that actions could have an impact on memory performance. A related finding in the scientific literature is the production effect: several studies have found that self-generated sounds (e.g., rehearsed piano melodies and spoken words) are recalled better than their passively processed counterparts (<xref ref-type="bibr" rid="B10">Ekstrand et al., 1966</xref>; <xref ref-type="bibr" rid="B14">Hopkins and Edwards, 1972</xref>; <xref ref-type="bibr" rid="B5">Conway and Gathercole, 1987</xref>; <xref ref-type="bibr" rid="B11">Gathercole and Conway, 1988</xref>; <xref ref-type="bibr" rid="B27">MacDonald and MacLeod, 1998</xref>; <xref ref-type="bibr" rid="B29">MacLeod et al., 2010</xref>; <xref ref-type="bibr" rid="B3">Brown and Palmer, 2012</xref>; <xref ref-type="bibr" rid="B31">Mathias et al., 2015</xref>). The memory mechanism(s) behind the production effect have been the subject of numerous theories, but one possibility that has not been considered is that movement in and of itself may contribute to this memory enhancement.</p>
<p>Stimuli generated by our own actions are processed differently from inputs coming from external sources. The most frequently reported finding is sensory attenuation for self-generated compared to externally generated stimuli [see <xref ref-type="bibr" rid="B17">Horv&#x00E1;th (2015)</xref>, and <xref ref-type="bibr" rid="B48">Schr&#x00F6;ger et al. (2015)</xref>, for reviews of findings in the auditory modality]. In auditory research, most studies find attenuation of the N1 and P2 components of the event-related potential (ERP). Typically, this sensory attenuation has been found for self-generated sounds (i.e., when the actions cause the sounds); however, several studies show attenuation even with mere action-sound coincidence (e.g., <xref ref-type="bibr" rid="B12">Hazemann et al., 1975</xref>; <xref ref-type="bibr" rid="B30">Makeig et al., 1996</xref>; <xref ref-type="bibr" rid="B18">Horv&#x00E1;th et al., 2012</xref>; <xref ref-type="bibr" rid="B15">Horv&#x00E1;th, 2013a</xref>,<xref ref-type="bibr" rid="B16">b</xref>). Indeed, movement has been shown to modulate sensory processing (<xref ref-type="bibr" rid="B45">Schafer and Marcus, 1973</xref>; <xref ref-type="bibr" rid="B40">Roy and Cullen, 2001</xref>; Hesse et al., 2010; <xref ref-type="bibr" rid="B22">Kelley and Bass, 2010</xref>; <xref ref-type="bibr" rid="B37">Requarth and Sawtell, 2011</xref>; <xref ref-type="bibr" rid="B47">Schneider et al., 2014</xref>; <xref ref-type="bibr" rid="B4">Chagnaud et al., 2015</xref>; <xref ref-type="bibr" rid="B23">Kim et al., 2015</xref>; <xref ref-type="bibr" rid="B36">Pyasik et al., 2018</xref>). One intriguing possibility is that movement may drive the activity of diffuse neuromodulatory systems such as the locus coeruleus-norepinephrine (LC-NE) system and thereby modulate responses in sensory cortices (<xref ref-type="bibr" rid="B35">Paraskevoudi and SanMiguel, 2023</xref>). 
Here, we ask whether movement, beyond sensory processing, may also modulate memory for concurrent sounds.</p>
<p>We hypothesize that the modulation of sensory responses during movement may have an impact on the memory encoding of concurrent stimuli, leading to an altered memory representation. Behaviorally, we expect that this can manifest as either an increased or a decreased ability to remember the sounds, depending on whether or not they coincided with an action during the encoding phase of a memory task. At the neural level, we expect to find indices of an altered memory representation. This may manifest as a modulation of sensory responses (i.e., N1 and P2 attenuation) to the stimuli that coincided with movement during encoding when they are encountered again at retrieval. Alternatively, the modulation of sensory responses at encoding may in turn result in a modulation of the old/new effect, which consists of a more positive-going potential for correctly recognized old compared to new items and indexes the quality of conscious recollection (<xref ref-type="bibr" rid="B43">Sanquist et al., 1980</xref>; <xref ref-type="bibr" rid="B49">Warren, 1980</xref>; <xref ref-type="bibr" rid="B50">Wilding, 2000</xref>; <xref ref-type="bibr" rid="B21">Kayser et al., 2007</xref>; <xref ref-type="bibr" rid="B41">Rugg and Curran, 2007</xref>; <xref ref-type="bibr" rid="B32">Mecklinger et al., 2016</xref>; <xref ref-type="bibr" rid="B28">MacLeod and Donaldson, 2017</xref>).</p>
</sec>
<sec id="S2" sec-type="materials|methods">
<title>Materials and methods</title>
<sec id="S2.SS1">
<title>Participants</title>
<p>Twenty-two healthy subjects provided written consent and participated in the present study. The sample size was selected based on previous studies reporting robust self-generation effects (e.g., <xref ref-type="bibr" rid="B18">Horv&#x00E1;th et al., 2012</xref>). Three participants were excluded from the analysis due to a low signal-to-noise ratio in the electrophysiological data. Thus, the final sample consisted of 19 participants (6 males, mean age 22.74 years, range 18&#x2013;29) who had normal hearing, reported no history of psychiatric or neurological disease, and did not consume psychoactive drugs regularly or in the 48 h before the experimental session. The study was approved by the Bioethics Committee of the University of Barcelona. Participants were monetarily compensated (10 euros per hour).</p>
</sec>
<sec id="S2.SS2">
<title>Stimuli</title>
<p>We generated a total of 100 different environmental, natural, complex, and non-identifiable sounds. Samples were selected from the McDermott<sup><xref ref-type="fn" rid="footnote1">1</xref></sup> and the Adobe<sup><xref ref-type="fn" rid="footnote2">2</xref></sup> sound libraries. Non-identifiable sounds were selected to avoid, or at least minimize, semantic activation and instead focus identification on the physical properties of the sounds. Sounds were sliced to a duration of 250 ms, ramped (0.01 s, exponential), and presented at 44.1 kHz, 16 bit, mono. Sound intensity was normalized across samples and adjusted to a comfortable hearing level. The 50 least identifiable sounds, according to independent ratings by 3 subjects, were used in the main experiment and the remaining 50 in the training.</p>
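<p>For illustration, the slicing and ramping described above can be sketched in a few lines of NumPy. This is our own sketch, not the authors' stimulus-preparation code, and the exact curvature of the exponential ramp is an assumption:</p>

```python
import numpy as np

FS = 44100        # sampling rate (Hz), as stated above
DUR = 0.25        # sound duration (s)
RAMP = 0.01       # onset/offset ramp duration (s)

def apply_ramps(sound, fs=FS, ramp=RAMP):
    """Apply exponential onset and offset ramps to a mono sound array.
    The time constant (5 ramp lengths) is an assumption for illustration."""
    n = int(ramp * fs)
    rise = 1.0 - np.exp(-5.0 * np.linspace(0.0, 1.0, n))
    rise /= rise[-1]               # normalize so the ramp ends at 1
    env = np.ones(len(sound))
    env[:n] = rise                 # fade in
    env[-n:] = rise[::-1]          # fade out
    return sound * env

t = np.arange(int(DUR * FS)) / FS
tone = np.sin(2 * np.pi * 440.0 * t)   # stand-in for one sound sample
ramped = apply_ramps(tone)
```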
</sec>
<sec id="S2.SS3">
<title>Experimental design</title>
<p>The general design of the experiment was a Delayed-Match-to-Sample Task (DMTS), consisting of 3 phases: encoding, retention, and retrieval. During the encoding phase, we exposed subjects to auditory stimuli that they had to memorize. Half of the sounds were presented coinciding with a button press of the participant and constituted the Motor-auditory (MA) condition; the other half were not related to any action of the participant and constituted the Auditory (A) condition. After a short retention period, we presented a test sound at retrieval. Participants reported whether the test sound was one of the sounds presented during encoding, and thus an old sound (Old condition), or a new sound (New condition, <xref ref-type="fig" rid="F1">Figure 1A</xref>).</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p><bold>(A)</bold> Schematic description of a trial depicting the visual (Vis), auditory (Aud), and motor (Mot) occurrences taking place, and highlighting an example event for each condition (Con): motor-auditory (MA), auditory (A), and motor (M). Time in seconds, t(s): the timepoints mark the beginning of each phase of the trial. ITI, inter-trial interval. The finger used to generate sounds was the thumb. <bold>(B)</bold> Behavioral results. Bar plots with individual data points comparing memory performance for sounds encoded as motor-auditory vs. encoded as auditory (left) and for Old vs. New sounds (right) at retrieval. Individual data points are connected by a discontinuous line in each comparison. Error bars display the standard error of the mean (SEM). The asterisk denotes statistical significance.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnhum-17-1124784-g001.tif"/>
</fig>
<sec id="S2.SS3.SSS1">
<title>Encoding</title>
<p>At the beginning of each trial, the screen displayed 6 horizontally aligned and randomly spaced gray rectangles, and a vertical line that moved across them from left to right. Subjects pressed a button with their right thumb every time the line intersected a rectangle. Meanwhile, 6 sounds were presented, which subjects had to memorize. On 50% of the presses, a sound was presented immediately after the press; the remaining sounds were presented between presses. Subjects were not told that some of the sounds would be generated by their actions. This resulted in 3 different event types: 3 &#x00D7; Motor condition (M), in which the subject pressed the button but no sound was presented; 3 &#x00D7; A condition, in which a sound was presented without any action of the subject; and 3 &#x00D7; MA condition, in which a sound was presented the moment the subject pressed the button. If subjects failed to press the button when indicated, an error message was presented and the trial was aborted.</p>
<p>The total duration of the encoding phase was 12.8 s. The 9 encoding events occurred pseudorandomly within this time, with the following limitations: the event-to-event onset asynchrony varied randomly between 0.8 and 2.4 s; the minimum sound-to-sound onset asynchrony was 1.6 s; the last event occurred no later than 12 s and was always a sound event (MA or A); and M events were always separated by at least one sound event.</p>
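<p>These constraints amount to a small rejection-sampling problem; the following Python sketch is our own illustration of one way to generate a compliant sequence (not the authors' actual trial generator; the first event is assumed to start at t = 0):</p>

```python
import random

def encoding_sequence(rng):
    """Draw 9 encoding events (3 MA, 3 A, 3 M) with onsets that satisfy
    the timing constraints described above, by rejection sampling."""
    while True:
        order = ["MA", "MA", "MA", "A", "A", "A", "M", "M", "M"]
        rng.shuffle(order)
        if order[-1] == "M":                        # last event must be a sound
            continue
        if any(a == "M" and b == "M" for a, b in zip(order, order[1:])):
            continue                                # M events separated by sounds
        onsets = [0.0]                              # first event at t = 0 (assumed)
        for _ in range(8):
            onsets.append(onsets[-1] + rng.uniform(0.8, 2.4))
        if onsets[-1] > 12.0:                       # last event no later than 12 s
            continue
        sound_onsets = [t for t, e in zip(onsets, order) if e != "M"]
        if any(b - a < 1.6 for a, b in zip(sound_onsets, sound_onsets[1:])):
            continue                                # min sound-to-sound SOA 1.6 s
        return list(zip(order, onsets))

events = encoding_sequence(random.Random(1))
```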
</sec>
<sec id="S2.SS3.SSS2">
<title>Retention</title>
<p>After the encoding phase, a fixation cross was presented for 1.2 s. This was estimated as the minimum duration that would engage short-term memory while minimizing echoic memory contributions (<xref ref-type="bibr" rid="B7">Crowder, 1976</xref>; <xref ref-type="bibr" rid="B26">Lu et al., 1992</xref>).</p>
</sec>
<sec id="S2.SS3.SSS3">
<title>Retrieval</title>
<p>The test sound was presented 14 s after trial onset. A &#x201C;Yes/No?&#x201D; prompt replaced the fixation cross on the screen 0.8 s after test sound onset, prompting participants to answer whether the test sound was old or new. The response window was 1 s. Once the participant responded, or after the response window ended, the question on the screen was replaced with a fixation cross until the onset of the next trial. The intertrial interval was 2 s.</p>
<p>Each of the 50 unique sounds used in the experiment served as the test sound in 4 trials. In these 4 trials, the sound sequences were composed of the same 6 encoding sounds and one test sound. Two of these trials belonged to the Old condition, where the test sound was part of the encoding sequence: once presented coinciding with a button press (MA condition) and once presented without any action (A condition). The other two trials represented the New condition; these were identical to the Old condition trials, except that the test sound was replaced by another sound both at encoding and retrieval. The rest of the events of the trial (i.e., the other encoding sounds and the participant&#x2019;s actions) were identical across the 4 trial versions generated for each unique sound.</p>
<p>The position of the test sound within the encoding sequence was chosen randomly for each unique sound. Positions could range from the second to the fifth, avoiding the first and last encoding positions to prevent primacy and recency effects (<xref ref-type="bibr" rid="B33">Mondor and Morin, 2004</xref>). However, to ensure that subjects did not learn to ignore those positions, 20 catch trials were added to the experiment in which the test sound occupied encoding position 1 or 6. The catch trials were not part of the analysis.</p>
</sec>
</sec>
<sec id="S2.SS4">
<title>Procedure</title>
<p>The experiment started with progressive training in which the participants learned how to perform the task in several short blocks of 5 trials each. First, they learned to press the button on time whenever the line hit one of the rectangles, without auditory input. The word &#x201C;error&#x201D; appeared instantly on the screen every time they did not press the button on time. At the end of each block, feedback was presented on how many presses were missed and how many were not on time. Subsequently, auditory input was added and subjects were instructed to perform the memory task. Here, the feedback screen at the end of each block also showed &#x201C;Misses,&#x201D; i.e., questions left unanswered or answered outside the required time window. Each part of the training was repeated until the subject could perform with minimal errors and misses.</p>
<p>After successful completion of the training, the main experiment began, consisting of 22 blocks of 10 trials each, presented in randomized order. Total experimental time without pauses was 65 min. Subjects took short breaks between blocks to avoid fatigue.</p>
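<p>As a consistency check on the numbers above: 50 unique sounds &#x00D7; 4 trial versions, plus the 20 catch trials, account exactly for the 220 trials presented (22 blocks &#x00D7; 10 trials):</p>

```python
unique_sounds = 50
versions_per_sound = 4          # Old/New crossed with encoded-as-MA/A
catch_trials = 20
blocks, trials_per_block = 22, 10

total_trials = unique_sounds * versions_per_sound + catch_trials
assert total_trials == blocks * trials_per_block == 220
```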
</sec>
<sec id="S2.SS5">
<title>Apparatus</title>
<p>The experiment was performed in an electrically shielded chamber. The center of the screen was positioned at eye height, at 1.2 m from the participant. The EEG was recorded at a sampling rate of 500 Hz using Neuroscan 4.4 software <italic>via</italic> a SynAmps RT amplifier (NeuroScan, Compumedics). We used 64 Ag/AgCl electrodes inserted in a nylon cap (Quick-Cap; Compumedics) following the 10% extension of the International 10&#x2013;20 system (<xref ref-type="bibr" rid="B34">Oostenveld and Praamstra, 2001</xref>). The EOG was recorded with an electrode at the nasion (NAS) and one electrode under each eye (<xref ref-type="bibr" rid="B46">Schl&#x00F6;gl et al., 2007</xref>). The reference was set at the tip of the nose and the AFz electrode served as the ground. Impedances were kept below 10 k&#x03A9;. Auditory stimuli were delivered binaurally <italic>via</italic> over-ear headphones (Sennheiser, HD 558). Participants&#x2019; button presses and responses were recorded with a silent response pad (Korg nanoPAD2). The setup was controlled <italic>via</italic> MATLAB (The MathWorks)<sup><xref ref-type="fn" rid="footnote3">3</xref></sup> with the Psychophysics Toolbox (<xref ref-type="bibr" rid="B2">Brainard, 1997</xref>; <xref ref-type="bibr" rid="B24">Kleiner et al., 2007</xref>).</p>
</sec>
<sec id="S2.SS6">
<title>Behavioral analysis</title>
<p>We calculated the percentage of correct responses for sounds encoded as A and MA, as well as for Old (both A and MA) and New sounds, and performed a two-tailed paired-samples <italic>t</italic>-test for each of the two comparisons (A-MA, Old-New). To complement our frequentist analysis, we conducted <italic>post hoc</italic> Bayesian <italic>t</italic>-tests to assess the evidence supporting a difference. We calculated the Bayes factor (<italic>BF10</italic>) for the alternative hypothesis (i.e., that the difference of the means is not equal to zero), which was specified as a Cauchy prior distribution centered around 0 with a scaling factor of <italic>r</italic> = 0.707. The null hypothesis was specified as a point null with standardized effect size &#x03B4; = 0 (<xref ref-type="bibr" rid="B38">Rouder et al., 2009</xref>). Data were viewed as moderate support for the alternative hypothesis if the <italic>BF10</italic> was larger than 3, whereas values close to 1 were considered only weak evidence and values below 0.3 were viewed as supporting the null hypothesis (<xref ref-type="bibr" rid="B25">Lee and Wagenmakers, 2013</xref>). Finally, to assess sensitivity and response bias, we calculated d&#x2019; [d&#x2019; = z(Hit) &#x2212; z(False Alarm)] and the criterion [c = &#x2212;0.5 &#x00D7; (z(Hit) + z(False Alarm))] (<xref ref-type="bibr" rid="B39">Roussel et al., 2013</xref>).</p>
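<p>The signal-detection measures above are straightforward to compute; a minimal Python sketch with SciPy follows (assuming hit and false-alarm rates strictly between 0 and 1; the example rates are illustrative, not the study's data):</p>

```python
from scipy.stats import norm

def dprime(hit_rate, fa_rate):
    """Sensitivity: d' = z(Hit) - z(False Alarm)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response bias: c = -0.5 * (z(Hit) + z(False Alarm))."""
    return -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))

# e.g., a 76% hit rate with a 35% false-alarm rate (illustrative numbers)
d, c = dprime(0.76, 0.35), criterion(0.76, 0.35)
```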
</sec>
<sec id="S2.SS7">
<title>EEG preprocessing and analysis</title>
<p>EEG analysis was performed with EEGLAB (<xref ref-type="bibr" rid="B9">Delorme and Makeig, 2004</xref>), and Eeprobe (ANT Neuro) was used for visualization. Data were high-pass filtered at 0.5 Hz and non-stereotypical artifacts were manually rejected. We then applied Independent Component Analysis (ICA) decomposition using the binary version of the Infomax algorithm. After manual identification of eye-movement artifactual components (<xref ref-type="bibr" rid="B20">Jung et al., 2000</xref>), those components (mean number of removed components: 2.8) were subtracted from the high-pass-filtered raw data. Subsequently, data were low-pass filtered at 25 Hz and channels marked as broken during recording were interpolated.</p>
<p>Epochs were extracted from &#x2212;0.1 to 0.5 s around the onset of each event of interest, using the prestimulus period for baseline correction. At encoding, epochs were defined for Auditory (eA) and Motor-auditory (eMA) sounds and for Motor (eM) events; at retrieval, for sounds encoded as Auditory (rA) and encoded as Motor-auditory (rMA). At retrieval, we also extracted epochs for correctly rejected New sounds and for correctly recognized Old sounds, both as a whole and separately for those encoded as Auditory (rAcorrect) and Motor-auditory (rMAcorrect). Epochs with a voltage range exceeding 75 &#x03BC;V were rejected.</p>
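<p>A minimal NumPy sketch of this epoching step, using the stated parameters (our own illustration; the actual analysis used EEGLAB):</p>

```python
import numpy as np

SFREQ = 500  # Hz, as recorded

def extract_epochs(data, event_samples, tmin=-0.1, tmax=0.5, reject=75.0):
    """Cut epochs around event onsets from a channels-x-samples array
    (in microvolts), subtract the prestimulus baseline, and drop epochs
    whose peak-to-peak range on any channel exceeds `reject` microvolts."""
    n0, n1 = round(tmin * SFREQ), round(tmax * SFREQ)    # -50 and +250 samples
    kept = []
    for ev in event_samples:
        seg = data[:, ev + n0:ev + n1].astype(float)
        seg -= seg[:, :-n0].mean(axis=1, keepdims=True)  # baseline: tmin..0 s
        if (seg.max(axis=1) - seg.min(axis=1)).max() <= reject:
            kept.append(seg)
    return np.stack(kept) if kept else np.empty((0, data.shape[0], n1 - n0))

demo = np.zeros((2, 2000))
demo[0, 700] = 200.0                        # a large artifact on channel 0
epochs = extract_epochs(demo, [400, 650])   # second epoch contains the artifact
```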
<p>To test for the effects of actions on neural responses to sounds, we compared the auditory ERPs between MA and A events at encoding (eA vs. eMA) and between encoded as MA and encoded as A at retrieval (rA vs. rMA). At encoding, MA responses were corrected by subtracting the ERP elicited by Motor events (eMA &#x2212; eM) prior to this comparison. Specifically, at both encoding and retrieval we tested for differences in the amplitude of the auditory N1 and P2 components at electrodes Cz and the mastoids, and of the N1 subcomponents Na and Tb at the collapsed electrodes T7 and T8, all identified and measured following <xref ref-type="bibr" rid="B42">SanMiguel et al. (2013)</xref>. Given that P3 modulations have been reported (but not discussed) in previous work (e.g., <xref ref-type="bibr" rid="B18">Horv&#x00E1;th et al., 2012</xref>), we also analyzed the P3 at encoding, identified as the peak of the difference wave (A &#x2212; [MA &#x2212; M]) in the P3 window range based on previous work (e.g., <xref ref-type="bibr" rid="B1">Baess et al., 2008</xref>). At retrieval, the P3 component window served to test the old/new effect, comparing responses between correct New and correct Old (as a whole and separately for rAcorrect and rMAcorrect). We compared the mean amplitude of the components of interest in the identified time windows at each electrode site (Cz, Pz, collapsed mastoids, and collapsed temporal electrodes) with two-tailed paired-samples <italic>t</italic>-tests, and computed the <italic>BF10</italic> for consistency with the behavioral analysis.</p>
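<p>The motor correction and the mean-amplitude measurement can be sketched in the same fashion; the window below is a placeholder, since the actual component windows were identified following SanMiguel et al. (2013), and the arrays are synthetic stand-ins:</p>

```python
import numpy as np

SFREQ, T0 = 500, -0.1   # sampling rate (Hz) and epoch start (s)

def mean_amplitude(erp, window):
    """Mean amplitude of an ERP (channels x samples) within a time window (s)."""
    a = round((window[0] - T0) * SFREQ)
    b = round((window[1] - T0) * SFREQ)
    return erp[:, a:b].mean(axis=1)

# Motor correction at encoding: subtract the motor-only ERP before comparing.
erp_eMA = np.full((1, 300), 3.0)    # synthetic stand-in for the eMA average
erp_eM = np.full((1, 300), 1.0)     # synthetic stand-in for the eM average
corrected = erp_eMA - erp_eM        # eMA - eM
n1_amp = mean_amplitude(corrected, (0.08, 0.12))  # placeholder N1 window
```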
</sec>
</sec>
<sec id="S3" sec-type="results">
<title>Results</title>
<sec id="S3.SS1">
<title>Behavioral</title>
<p>The overall memory performance was 70.57% (SD: 7.23). Accuracy for Old sounds did not differ based on how they were encoded [<italic>t</italic>(18) = &#x2212;0.578, <italic>p</italic> = 0.571, <italic>d</italic> = &#x2212;0.129, <italic>BF</italic><sub>10</sub> = 0.276; <xref ref-type="fig" rid="F1">Figure 1B</xref>, left; see <xref ref-type="table" rid="T1">Table 1</xref>]. However, participants were better at recognizing old sounds than correctly rejecting new sounds [<italic>t</italic>(18) = 2.716, <italic>p</italic> = 0.014, <italic>d</italic> = 0.963, <italic>BF</italic><sub>10</sub> = 3.901; <xref ref-type="fig" rid="F1">Figure 1B</xref>, right].</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Behavioral performance measures and ERP mean amplitudes (&#x03BC;V); standard deviations in parentheses.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<tbody>
<tr>
<td valign="top" align="left" colspan="6" style="background-color: #dcdcdc;"><bold>Behavioral</bold></td>
</tr>
<tr>
<td valign="top" align="left" colspan="2" style="color:#ffffff;background-color: #7f8080;"></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"><bold>Proportion correct</bold></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"><bold>D-prime</bold></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"><bold>Criterion</bold></td>
</tr>
<tr>
<td valign="top" align="left" colspan="2" style="color:#ffffff;background-color: #7f8080;"></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"><bold>Condition</bold></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"><bold>Mean (SD)</bold></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"></td>
</tr>
<tr>
<td valign="top" align="center" colspan="2" rowspan="4"></td>
<td valign="top" align="center">A</td>
<td valign="top" align="center">0.75 (0.10)</td>
<td valign="top" align="center">1.26 (0.37)</td>
<td valign="top" align="center">&#x2212;0.09 (0.27)</td>
</tr>
<tr>
<td valign="top" align="center">MA</td>
<td valign="top" align="center">0.77 (0.10)</td>
<td valign="top" align="center">1.30 (0.44)</td>
<td valign="top" align="center">&#x2212;0.11 (0.25)</td>
</tr>
<tr>
<td valign="top" align="center">Old</td>
<td valign="top" align="center">0.76 (0.09)</td>
<td valign="top" align="center">1.27 (0.37)</td>
<td valign="top" align="center">&#x2212;0.10 (0.25)</td>
</tr>
<tr>
<td valign="top" align="center">New</td>
<td valign="top" align="center">0.65 (0.14)</td>
<td valign="top" align="center">1.27 (0.39)</td>
<td valign="top" align="center">0.22 (0.28)</td>
</tr>
<tr>
<td valign="top" align="left" colspan="6" style="background-color: #dcdcdc;"><bold>Electrophysiological</bold></td>
</tr>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;"></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"></td>
<td valign="top" align="center" colspan="2" style="color:#ffffff;background-color: #7f8080;"><bold>Encoding</bold></td>
<td valign="top" align="center" colspan="2" style="color:#ffffff;background-color: #7f8080;"><bold>Retrieval</bold></td>
</tr>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;"><bold>ERPs</bold></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"><bold>Electrodes</bold></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"><bold>Condition</bold></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"><bold>Mean (SD)</bold></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"><bold>Condition</bold></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"><bold>Mean (SD)</bold></td>
</tr>
<tr>
<td valign="top" align="left" rowspan="4">N1</td>
<td valign="top" align="center" rowspan="2">Cz</td>
<td valign="top" align="center">A</td>
<td valign="top" align="center">&#x2212;4.03 (1.68)</td>
<td valign="top" align="center">rA</td>
<td valign="top" align="center">&#x2212;4.85 (2.61)</td>
</tr>
<tr>
<td valign="top" align="center">MA</td>
<td valign="top" align="center">&#x2212;3.46 (1.45)</td>
<td valign="top" align="center">rMA</td>
<td valign="top" align="center">&#x2212;4.44 (2.06)</td>
</tr>
<tr>
<td valign="top" align="center" rowspan="2">Mastoids</td>
<td valign="top" align="center">A</td>
<td valign="top" align="center">0.39 (0.89)</td>
<td valign="top" align="center">rA</td>
<td valign="top" align="center">0.52 (1.07)</td>
</tr>
<tr>
<td valign="top" align="center">MA</td>
<td valign="top" align="center">0.40 (0.94)</td>
<td valign="top" align="center">rMA</td>
<td valign="top" align="center">&#x2212;0.07 (1.29)</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="2">Na</td>
<td valign="top" align="center" rowspan="2">Temporal</td>
<td valign="top" align="center">A</td>
<td valign="top" align="center">&#x2212;0.71 (1.07)</td>
<td valign="top" align="center">rA</td>
<td valign="top" align="center">&#x2212;0.74 (1.08)</td>
</tr>
<tr>
<td valign="top" align="center">MA</td>
<td valign="top" align="center">&#x2212;0.87 (0.86)</td>
<td valign="top" align="center">rMA</td>
<td valign="top" align="center">&#x2212;0.58 (1.31)</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="2">Tb</td>
<td valign="top" align="center" rowspan="2">Temporal</td>
<td valign="top" align="center">A</td>
<td valign="top" align="center">&#x2212;1.69 (1.02)</td>
<td valign="top" align="center">rA</td>
<td valign="top" align="center">&#x2212;2.12 (1.65)</td>
</tr>
<tr>
<td valign="top" align="center">MA</td>
<td valign="top" align="center">&#x2212;1.09 (0.91)</td>
<td valign="top" align="center">rMA</td>
<td valign="top" align="center">&#x2212;1.93 (1.25)</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="4">P2</td>
<td valign="top" align="center" rowspan="2">Cz</td>
<td valign="top" align="center">A</td>
<td valign="top" align="center">3.04 (1.75)</td>
<td valign="top" align="center">rA</td>
<td valign="top" align="center">1.73 (2.54)</td>
</tr>
<tr>
<td valign="top" align="center">MA</td>
<td valign="top" align="center">1.43 (1.18)</td>
<td valign="top" align="center">rMA</td>
<td valign="top" align="center">1.94 (2.18)</td>
</tr>
<tr>
<td valign="top" align="center" rowspan="2">Mastoids</td>
<td valign="top" align="center">A</td>
<td valign="top" align="center">&#x2212;0.63 (0.89)</td>
<td valign="top" align="center">rA</td>
<td valign="top" align="center">&#x2212;0.91 (1.18)</td>
</tr>
<tr>
<td valign="top" align="center">MA</td>
<td valign="top" align="center">&#x2212;0.44 (0.71)</td>
<td valign="top" align="center">rMA</td>
<td valign="top" align="center">&#x2212;1.17 (1.22)</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="2">P3</td>
<td valign="top" align="center" rowspan="2">Pz</td>
<td valign="top" align="center">A</td>
<td valign="top" align="center">0.02 (0.88)</td>
<td valign="top" align="center">rAcorrect</td>
<td valign="top" align="center">2.27 (2.97)</td>
</tr>
<tr>
<td valign="top" align="center">MA</td>
<td valign="top" align="center">0.64 (0.91)</td>
<td valign="top" align="center">rMAcorrect</td>
<td valign="top" align="center">2.48 (2.35)</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" colspan="2"/><td valign="top" align="center">Old</td>
<td valign="top" align="center">2.40 (2.45)</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" colspan="2"/><td valign="top" align="center">New</td>
<td valign="top" align="center">0.88 (2.23)</td>
</tr>
</tbody>
</table></table-wrap>
<p>D-prime did not differ between Old and New [<italic>t</italic>(18) = 0.164, <italic>p</italic> = 0.872, <italic>d</italic> = 0.008, <italic>BF</italic><sub>10</sub> = 0.240] nor between the A and MA conditions [<italic>t</italic>(18) = 0.621, <italic>p</italic> = 0.543, <italic>d</italic> = 0.112, <italic>BF</italic><sub>10</sub> = 0.282]. The Criterion measure differed between Old and New [<italic>t</italic>(18) = &#x2212;2.645, <italic>p</italic> = 0.016, <italic>d</italic> = &#x2212;1.191, <italic>BF</italic><sub>10</sub> = 3.450], but was similar for the A and MA conditions [<italic>t</italic>(18) = &#x2212;0.621, <italic>p</italic> = 0.543, <italic>d</italic> = 0.086, <italic>BF</italic><sub>10</sub> = 0.282]. This reflects a more conservative strategy when judging new stimuli; the presence of an action, however, did not affect the judgment strategy for old stimuli.</p>
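Sensitivity (d-prime) and bias (criterion) measures of this kind are standardly derived from hit and false-alarm rates in a yes/no recognition test. The following is a minimal illustrative sketch, not the authors' analysis code; the function name and the use of a log-linear correction are our assumptions:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Equal-variance signal-detection indices for a yes/no test.

    A log-linear correction (add 0.5 to each cell) keeps the z-scores
    finite when a hit or false-alarm rate would otherwise be 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # positive = conservative
    return d_prime, criterion
```

Under this convention, a more positive criterion corresponds to the more conservative response strategy described above for new stimuli.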
</sec>
<sec id="S3.SS2">
<title>Electrophysiological</title>
<sec id="S3.SS2.SSS1">
<title>Encoding</title>
<p>To assess the effect of action on sensory responses, we contrasted the ERPs for the A and the motor-corrected MA conditions (eA vs. eMA-eM; <xref ref-type="fig" rid="F2">Figure 2</xref>). First, we identified the time-windows for the components N1 (80&#x2013;110 ms) and P2 (140&#x2013;200 ms) at the Cz electrode and at the mastoids, the N1 subcomponents Na (74&#x2013;94 ms) and Tb (102&#x2013;132 ms) at T7 and T8, and the P3 at Pz (276&#x2013;306 ms). The analysis of the mean amplitudes (see <xref ref-type="table" rid="T1">Table 1</xref>) in the selected time-windows revealed a significant attenuation at Cz of N1 [<italic>t</italic>(18) = &#x2212;2.452, <italic>p</italic> = 0.025, <italic>d</italic> = &#x2212;0.56, <italic>BF</italic><sub>10</sub> = 2.487] and P2 [<italic>t</italic>(18) = 5.993, <italic>p</italic> &#x003C; 0.001, <italic>d</italic> = 1.37, <italic>BF</italic><sub>10</sub> = 1957.803] for the MA condition. At the mastoids, there were no differences between conditions in N1 [<italic>t</italic>(18) = &#x2212;0.126, <italic>p</italic> = 0.901, <italic>d</italic> = &#x2212;0.012, <italic>BF</italic><sub>10</sub> = 0.239] or P2 [<italic>t</italic>(18) = &#x2212;1.625, <italic>p</italic> = 0.122, <italic>d</italic> = &#x2212;0.235, <italic>BF</italic><sub>10</sub> = 0.723]. Examining the temporal electrodes, we found a significant attenuation of Tb for the MA condition [<italic>t</italic>(18) = &#x2212;3.313, <italic>p</italic> = 0.004, <italic>d</italic> = &#x2212;0.617, <italic>BF</italic><sub>10</sub> = 11.50], and no significant effect for Na [<italic>t</italic>(18) = 1.090, <italic>p</italic> = 0.290, <italic>d</italic> = 0.165, <italic>BF</italic><sub>10</sub> = 0.399]. At Pz, the P3 component revealed larger amplitudes for the MA condition [<italic>t</italic>(18) = &#x2212;3.934, <italic>p</italic> = 0.001, <italic>d</italic> = &#x2212;0.690, <italic>BF</italic><sub>10</sub> = 37.888].</p>
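The mean-amplitude analysis described above averages the voltage samples of each ERP falling inside a component's latency window. A minimal sketch of that step, assuming a hypothetical single-channel epoch represented as a list of samples (the paper's actual pipeline, sampling rate, and baseline duration are not specified here):

```python
def mean_amplitude(epoch, srate_hz, tmin_ms, tmax_ms, baseline_ms=100.0):
    """Mean ERP amplitude in a latency window.

    `epoch` holds voltage samples covering [-baseline_ms, ...] relative
    to sound onset; latencies (ms) are converted to sample indices via
    the sampling rate, and the window mean is returned.
    """
    start = int(round((baseline_ms + tmin_ms) * srate_hz / 1000.0))
    stop = int(round((baseline_ms + tmax_ms) * srate_hz / 1000.0))
    window = epoch[start:stop + 1]
    return sum(window) / len(window)
```

Per-participant window means computed this way (e.g., for N1 at 80&#x2013;110 ms) are what enter the paired t-tests reported above.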
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>Electrophysiological results comparing the auditory and motor-auditory (motor corrected) stimuli at encoding. <bold>(A)</bold> Event-related-potentials (ERPs) on the analyzed electrodes. At Cz, M1 and M2 the analyzed components are N1 and P2, at T7 and T8 the N1 subcomponents Na and Tb, and at Pz the P3 component. The gray shading marks the time windows of the amplitude analysis. Asterisks mark significance. <bold>(B)</bold> Topographical plots of each component of interest.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnhum-17-1124784-g002.tif"/>
</fig>
</sec>
<sec id="S3.SS2.SSS2">
<title>Retrieval</title>
<p>First, we assessed whether the source of the stimuli at encoding had an effect when the same stimuli were presented passively at retrieval, by comparing the Old sounds of the A and MA conditions (rA vs. rMA; <xref ref-type="fig" rid="F3">Figure 3A</xref>). Then, we analyzed whether the old/new effect was modulated by the action effect, comparing the correct Old responses for both A and MA with the correct New responses. To this end, we identified the time-windows for the components N1 (90&#x2013;120 ms) and P2 (170&#x2013;210 ms) at Cz and at the mastoids, and the N1 subcomponents Na (60&#x2013;90 ms) and Tb (120&#x2013;150 ms) at T7 and T8. Additionally, to assess the memory old/new effect, we identified the time-window for the P3 component at Pz (300&#x2013;350 ms) for the correct Old and New responses at retrieval. The analysis of the mean amplitudes (see <xref ref-type="table" rid="T1">Table 1</xref>) in the selected time-windows for the contrast rA vs. rMA was not significant for N1 [<italic>t</italic>(18) = &#x2212;0.939, <italic>p</italic> = 0.360, <italic>d</italic> = &#x2212;0.175, <italic>BF</italic><sub>10</sub> = 0.350] or P2 [<italic>t</italic>(18) = &#x2212;0.433, <italic>p</italic> = 0.670, <italic>d</italic> = &#x2212;0.088, <italic>BF</italic><sub>10</sub> = 0.258] at Cz. The P2 at the mastoids was in concordance with the findings at Cz [<italic>t</italic>(18) = 0.799, <italic>p</italic> = 0.435, <italic>d</italic> = 0.211, <italic>BF</italic><sub>10</sub> = 0.315]; however, the N1 [<italic>t</italic>(18) = 2.671, <italic>p</italic> = 0.016, <italic>d</italic> = 0.500, <italic>BF</italic><sub>10</sub> = 3.604] revealed a significant enhancement for the sounds encoded as MA. Given that we did not obtain a significant N1 effect for the active condition at the Cz electrode, this mastoid effect should be treated with caution. As for the N1 subcomponents, we found no significant effects on Na [<italic>t</italic>(18) = &#x2212;0.674, <italic>p</italic> = 0.509, <italic>d</italic> = &#x2212;0.135, <italic>BF</italic><sub>10</sub> = 0.291] or Tb [<italic>t</italic>(18) = &#x2212;0.589, <italic>p</italic> = 0.563, <italic>d</italic> = &#x2212;0.126, <italic>BF</italic><sub>10</sub> = 0.277]. Finally, the P3 old/new effect was significant at Pz between the Old and New sounds [<italic>t</italic>(18) = 3.764, <italic>p</italic> = 0.001, <italic>d</italic> = 0.650, <italic>BF</italic><sub>10</sub> = 27.289]; however, it did not differ between the rAcorrect and rMAcorrect conditions [<italic>t</italic>(18) = &#x2212;0.437, <italic>p</italic> = 0.667, <italic>d</italic> = &#x2212;0.079, <italic>BF</italic><sub>10</sub> = 0.259; <xref ref-type="fig" rid="F3">Figure 3B</xref>].</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p>Electrophysiological results at retrieval. <bold>(A)</bold> Event-related-potentials (ERPs) comparing the encoded as auditory and motor-auditory sounds, passively presented at retrieval on the analyzed electrodes. At Cz, M1 and M2 the analyzed components are N1 and P2, at T7 and T8 the N1 subcomponents Na and Tb. The gray shading marks the time windows of the amplitude analysis. Asterisks mark significance. <bold>(B)</bold> Top figure: ERPs at Pz comparing the old and the new conditions. Auditory and motor-auditory conditions are displayed here for visualization purposes. Bottom figure: topographical plots in the P3 time-window showing the distribution of the old/new effect.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnhum-17-1124784-g003.tif"/>
</fig>
</sec>
</sec>
</sec>
<sec id="S4" sec-type="discussion">
<title>Discussion</title>
<p>The goal of this study was to investigate whether actions alone could account for the production effect. Behavioral research has provided abundant evidence that sounds produced by oneself are better remembered than those merely processed passively (<xref ref-type="bibr" rid="B10">Ekstrand et al., 1966</xref>; <xref ref-type="bibr" rid="B14">Hopkins and Edwards, 1972</xref>; <xref ref-type="bibr" rid="B5">Conway and Gathercole, 1987</xref>; <xref ref-type="bibr" rid="B11">Gathercole and Conway, 1988</xref>; <xref ref-type="bibr" rid="B27">MacDonald and MacLeod, 1998</xref>; <xref ref-type="bibr" rid="B29">MacLeod et al., 2010</xref>; <xref ref-type="bibr" rid="B3">Brown and Palmer, 2012</xref>; <xref ref-type="bibr" rid="B31">Mathias et al., 2015</xref>). However, since memory is a higher-order process, it can be challenging to disentangle which lower-level processes contribute to this complex effect. Typically, several co-occurring processes determine an outcome; thus, modulations of sensory responses could affect how action-related inputs are encoded into memory.</p>
<p>In the auditory domain, self-generation effects refer to the attenuation of the sensory responses to a stimulus produced by the same individual who hears the sound (<xref ref-type="bibr" rid="B42">SanMiguel et al., 2013</xref>; <xref ref-type="bibr" rid="B44">Saupe et al., 2013</xref>). Surprisingly, this effect persists even in the absence of contingency, that is, when the action performed does not actually generate the stimulus but occurs in the same time window (<xref ref-type="bibr" rid="B18">Horv&#x00E1;th et al., 2012</xref>; <xref ref-type="bibr" rid="B15">Horv&#x00E1;th, 2013a</xref>,<xref ref-type="bibr" rid="B16">b</xref>). Looking at the electrophysiological responses during the encoding phase of our study, we replicated this result. The attenuation we measured for N1, Tb, and P2 during encoding for sounds coinciding with actions is in line with well-established literature (<xref ref-type="bibr" rid="B17">Horv&#x00E1;th, 2015</xref>; <xref ref-type="bibr" rid="B48">Schr&#x00F6;ger et al., 2015</xref>) and attests to the quality of our measurements. At encoding, we also observed an increased P3 amplitude at Pz, which may reflect the surprise elicited by a sound coinciding with an action (<xref ref-type="bibr" rid="B8">Darriba et al., 2021</xref>), as in our experiment only half of the actions were accompanied by a sound (cf. <xref ref-type="bibr" rid="B18">Horv&#x00E1;th et al., 2012</xref>; <xref ref-type="bibr" rid="B35">Paraskevoudi and SanMiguel, 2023</xref>). The surprising nature of the motor-auditory event could be obscuring the hypothetical memory-encoding enhancement and thus explain the absence of memory improvement for the motor-auditory sounds.</p>
<p>Could the action effects described at encoding contribute to the memory advantage observed in the production effect? We examined whether a non-contingent action-sound relationship affected memory performance in a task where old items could be encoded either coinciding with an action or not (i.e., motor-auditory and auditory sounds here). Our measurements showed evidence against an effect on auditory memory for action-coinciding stimuli, indicating that actions alone do not drive the production effect. In line with our behavioral results, and as the test sounds were always externally generated, we did not find the typical self-generation effects at retrieval. However, our aim was to detect whether there was any modulation of sensory processing at retrieval depending on the condition of the test sound at encoding.</p>
<p>Previous ERP research has reported the old/new effect, that is, correctly recognizing a previously heard sound elicits a more positive potential (with an onset at 300 ms) than hearing a new sound (<xref ref-type="bibr" rid="B43">Sanquist et al., 1980</xref>; <xref ref-type="bibr" rid="B49">Warren, 1980</xref>; <xref ref-type="bibr" rid="B50">Wilding, 2000</xref>; <xref ref-type="bibr" rid="B21">Kayser et al., 2007</xref>; <xref ref-type="bibr" rid="B41">Rugg and Curran, 2007</xref>; <xref ref-type="bibr" rid="B32">Mecklinger et al., 2016</xref>; <xref ref-type="bibr" rid="B28">MacLeod and Donaldson, 2017</xref>). In our study, this enhancement for the &#x201C;Old&#x201D; sounds at retrieval did not differ between sounds previously encoded as motor-auditory and those encoded as auditory, indicating that the quality of recollection was also not affected by the presence of an action during encoding.</p>
<p>All in all, while we found a robust modulation of sound processing by actions during encoding, this did not seem to affect memory retrieval of these sounds, as we could not find any effects on the responses to the test sounds at retrieval. Hence, our data do not support a relationship between the unspecific action effects arising from the coincidence of a sound with an action and memory accuracy. The null effect at retrieval could be related to the specific conditions of our experiment. We did not have sufficient trials to perform a remembered vs. forgotten analysis that could reveal the slight differences in performance that a coincidental action could be mediating. Interestingly, the sole study to date that tried to relate the memory advantage seen in the production effect to the modulatory effects of motor activity surrounding auditory stimuli revealed worse memory performance for sounds coinciding with actions (<xref ref-type="bibr" rid="B35">Paraskevoudi and SanMiguel, 2023</xref>). One apparently minor difference between that study and the present one is the type of question at retrieval. Both the yes/no and the two-alternative forced-choice (2AFC) formats are often used in the recognition memory literature. In the yes/no format, used in the present study, the target stimulus is presented for a decision in isolation, which is known to require higher memory strength than deciding between two stimuli (<xref ref-type="bibr" rid="B19">Jang et al., 2009</xref>). It is possible that in <xref ref-type="bibr" rid="B35">Paraskevoudi and SanMiguel (2023)</xref> the inherently higher performance of the 2AFC format made it easier to uncover the subtler differences between the two research conditions.</p>
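The claim that yes/no decisions require higher memory strength than 2AFC can be illustrated with the standard equal-variance signal-detection prediction (cf. Jang et al., 2009): for the same underlying d&#x2032;, an unbiased observer is expected to be more accurate in 2AFC. A sketch of that textbook prediction (illustrative only; these formulas and values are not taken from the present study):

```python
from statistics import NormalDist

phi = NormalDist().cdf  # standard normal CDF

def predicted_accuracy(d_prime):
    """Predicted proportion correct for an unbiased observer under
    equal-variance SDT: Phi(d'/2) for yes/no, Phi(d'/sqrt(2)) for 2AFC."""
    yes_no = phi(d_prime / 2.0)
    afc2 = phi(d_prime / 2.0 ** 0.5)
    return yes_no, afc2
```

For example, at d&#x2032; = 1 the predicted accuracy is roughly 69% in yes/no vs. 76% in 2AFC, which is one way the 2AFC format can make subtle condition differences easier to detect.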
<p>The absence of significant behavioral findings suggests that the production effect does not depend on the presence of an action <italic>per se</italic>. We considered examining coincidental actions a logical first step toward elucidating the role of action in the production effect. However, as discussed above, the surprise surrounding a coincidental action could be masking a co-occurring memory enhancement. Future research with fully contingent paradigms will help clarify whether there could be a memory advantage. We conclude that the presence of an action alone is not sufficient to enhance auditory memory at the behavioral level and elicit a production effect.</p>
</sec>
<sec id="S5" sec-type="data-availability">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="S6" sec-type="ethics-statement">
<title>Ethics statement</title>
<p>The studies involving human participants were reviewed and approved by Bioethics Committee, University of Barcelona. The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="S7" sec-type="author-contributions">
<title>Author contributions</title>
<p>MF-A collected and analyzed the data and wrote the first draft of the manuscript. NP and IS wrote sections of the manuscript. All authors contributed to the conception and design of the study and to manuscript revision, and read and approved the submitted version.</p>
</sec>
</body>
<back>
<sec id="S8" sec-type="funding-information">
<title>Funding</title>
<p>This work was part of the projects PSI2017-85600-P and PID2021-128790NB-I00 funded by MCIN/AEI/10.13039/501100011033 and by &#x201C;ERDF A way of making Europe.&#x201D; It was additionally supported by the MDM-2017-0729-18-2M Maria de Maeztu Center of Excellence UBNeuro, funded by MCIN/AEI/10.13039/501100011033, by the Excellence Research Group 2017SGR-974 funded by the Secretaria d&#x2019;Universitats i Recerca del Departament d&#x2019;Empresa i Coneixement de la Generalitat de Catalunya, and by the University of Barcelona funding for open access publishing. IS was supported by grant RYC-2013-12577, funded by MCIN/AEI/10.13039/501100011033 and by &#x201C;ESF Investing in your future.&#x201D; MF-A was supported by predoctoral fellowship PRE2018-085099 funded by MCIN/AEI/10.13039/501100011033. NP was supported by predoctoral fellowship FI-DGR 2019 funded by the Secretaria d&#x2019;Universitats i Recerca de la Generalitat de Catalunya and the European Social Fund.</p>
</sec>
<ack>
<p>We would especially like to thank Peter Gericke for helping with the data collection for this study.</p>
</ack>
<sec id="S9" sec-type="COI-statement">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="S10" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<fn-group>
<fn id="footnote1">
<label>1</label>
<p><ext-link ext-link-type="uri" xlink:href="http://mcdermottlab.mit.edu/svnh/Natural-Sound/Stimuli.html">http://mcdermottlab.mit.edu/svnh/Natural-Sound/Stimuli.html</ext-link></p></fn>
<fn id="footnote2">
<label>2</label>
<p><ext-link ext-link-type="uri" xlink:href="https://www.adobe.com/products/audition/offers/AdobeAuditionDLCSFX.html">https://www.adobe.com/products/audition/offers/AdobeAuditionDLCSFX.html</ext-link></p></fn>
<fn id="footnote3">
<label>3</label>
<p><ext-link ext-link-type="uri" xlink:href="http://www.mathworks.com">www.mathworks.com</ext-link></p></fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baess</surname> <given-names>P.</given-names></name> <name><surname>Jacobsen</surname> <given-names>T.</given-names></name> <name><surname>Schr&#x00F6;ger</surname> <given-names>E.</given-names></name></person-group> (<year>2008</year>). <article-title>Suppression of the auditory N1 event-related potential component with unpredictable self-initiated tones: Evidence for internal forward models with dynamic stimulation.</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>70</volume> <fpage>137</fpage>&#x2013;<lpage>143</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2008.06.005</pub-id> <pub-id pub-id-type="pmid">18627782</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brainard</surname> <given-names>D. H.</given-names></name></person-group> (<year>1997</year>). <article-title>The psychophysics toolbox.</article-title> <source><italic>Spat. Vis.</italic></source> <volume>10</volume> <fpage>433</fpage>&#x2013;<lpage>436</lpage>. <pub-id pub-id-type="doi">10.1163/156856897X00357</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brown</surname> <given-names>R. M.</given-names></name> <name><surname>Palmer</surname> <given-names>C.</given-names></name></person-group> (<year>2012</year>). <article-title>Auditory&#x2013;motor learning influences auditory memory for music.</article-title> <source><italic>Mem. Cognit.</italic></source> <volume>40</volume> <fpage>567</fpage>&#x2013;<lpage>578</lpage>. <pub-id pub-id-type="doi">10.3758/s13421-011-0177-x</pub-id> <pub-id pub-id-type="pmid">22271265</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chagnaud</surname> <given-names>B. P.</given-names></name> <name><surname>Banchi</surname> <given-names>R.</given-names></name> <name><surname>Simmers</surname> <given-names>J.</given-names></name> <name><surname>Straka</surname> <given-names>H.</given-names></name></person-group> (<year>2015</year>). <article-title>Spinal corollary discharge modulates motion sensing during vertebrate locomotion.</article-title> <source><italic>Nat. Commun.</italic></source> <volume>6</volume>:<issue>7982</issue>. <pub-id pub-id-type="doi">10.1038/ncomms8982</pub-id> <pub-id pub-id-type="pmid">26337184</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Conway</surname> <given-names>M. A.</given-names></name> <name><surname>Gathercole</surname> <given-names>S. E.</given-names></name></person-group> (<year>1987</year>). <article-title>Modality and long-term memory.</article-title> <source><italic>J. Mem. Lang.</italic></source> <volume>26</volume> <fpage>341</fpage>&#x2013;<lpage>361</lpage>. <pub-id pub-id-type="doi">10.1016/0749-596X(87)90118-5</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Crowder</surname> <given-names>R. G.</given-names></name></person-group> (<year>1976</year>). <source><italic>Principles of learning and memory</italic></source>. <publisher-loc>Hillsdale, NJ</publisher-loc>: <publisher-name>Lawrence Erlbaum</publisher-name>.</citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Crowder</surname> <given-names>R. G.</given-names></name></person-group> (<year>2014</year>). <source><italic>Principles of learning and memory: Classic edition</italic></source>, <edition>1st Edn</edition>. <publisher-loc>Hove</publisher-loc>: <publisher-name>Psychology Press</publisher-name>. <pub-id pub-id-type="doi">10.4324/9781315746944</pub-id> <pub-id pub-id-type="pmid">36153787</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Darriba</surname> <given-names>&#x00C1;.</given-names></name> <name><surname>Hsu</surname> <given-names>Y.-F.</given-names></name> <name><surname>Van Ommen</surname> <given-names>S.</given-names></name> <name><surname>Waszak</surname> <given-names>F.</given-names></name></person-group> (<year>2021</year>). <article-title>Intention-based and sensory-based predictions.</article-title> <source><italic>Sci. Rep.</italic></source> <volume>11</volume>:<issue>19899</issue>. <pub-id pub-id-type="doi">10.1038/s41598-021-99445-z</pub-id> <pub-id pub-id-type="pmid">34615990</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Delorme</surname> <given-names>A.</given-names></name> <name><surname>Makeig</surname> <given-names>S.</given-names></name></person-group> (<year>2004</year>). <article-title>EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis.</article-title> <source><italic>J. Neurosci. Methods</italic></source> <volume>134</volume> <fpage>9</fpage>&#x2013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1016/j.jneumeth.2003.10.009</pub-id> <pub-id pub-id-type="pmid">15102499</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ekstrand</surname> <given-names>B. R.</given-names></name> <name><surname>Wallace</surname> <given-names>W. P.</given-names></name> <name><surname>Underwood</surname> <given-names>B. J.</given-names></name></person-group> (<year>1966</year>). <article-title>A frequency theory of verbal-discrimination learning.</article-title> <source><italic>Psychol. Rev.</italic></source> <volume>73</volume> <fpage>566</fpage>&#x2013;<lpage>578</lpage>. <pub-id pub-id-type="doi">10.1037/h0023876</pub-id> <pub-id pub-id-type="pmid">5978999</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gathercole</surname> <given-names>S. E.</given-names></name> <name><surname>Conway</surname> <given-names>M. A.</given-names></name></person-group> (<year>1988</year>). <article-title>Exploring long-term modality effects: Vocalization leads to best retention.</article-title> <source><italic>Mem. Cognit.</italic></source> <volume>16</volume> <fpage>110</fpage>&#x2013;<lpage>119</lpage>. <pub-id pub-id-type="doi">10.3758/BF03213478</pub-id> <pub-id pub-id-type="pmid">3352516</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hazemann</surname> <given-names>P.</given-names></name> <name><surname>Audin</surname> <given-names>G.</given-names></name> <name><surname>Lille</surname> <given-names>F.</given-names></name></person-group> (<year>1975</year>). <article-title>Effect of voluntary self-paced movements upon auditory and somatosensory evoked potentials in man.</article-title> <source><italic>Electroencephalogr. Clin. Neurophysiol.</italic></source> <volume>39</volume> <fpage>247</fpage>&#x2013;<lpage>254</lpage>. <pub-id pub-id-type="doi">10.1016/0013-4694(75)90146-7</pub-id> <pub-id pub-id-type="pmid">50222</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hesse</surname> <given-names>M. D.</given-names></name> <name><surname>Nishitani</surname> <given-names>N.</given-names></name> <name><surname>Fink</surname> <given-names>G. R.</given-names></name> <name><surname>Jousm&#x00E4;ki</surname> <given-names>V.</given-names></name> <name><surname>Hari</surname> <given-names>R.</given-names></name></person-group> (<year>2010</year>). <article-title>Attenuation of somatosensory responses to self-produced tactile stimulation.</article-title> <source><italic>Cereb. Cortex</italic></source> <volume>20</volume> <fpage>425</fpage>&#x2013;<lpage>432</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhp110</pub-id> <pub-id pub-id-type="pmid">19505992</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hopkins</surname> <given-names>R. H.</given-names></name> <name><surname>Edwards</surname> <given-names>R. E.</given-names></name></person-group> (<year>1972</year>). <article-title>Pronunciation effects in recognition memory.</article-title> <source><italic>J. Verbal Learn. Verbal Behav.</italic></source> <volume>11</volume> <fpage>534</fpage>&#x2013;<lpage>537</lpage>. <pub-id pub-id-type="doi">10.1016/S0022-5371(72)80036-7</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Horv&#x00E1;th</surname> <given-names>J.</given-names></name></person-group> (<year>2013a</year>). <article-title>Action-sound coincidence-related attenuation of auditory ERPs is not modulated by affordance compatibility.</article-title> <source><italic>Biol. Psychol.</italic></source> <volume>93</volume> <fpage>81</fpage>&#x2013;<lpage>87</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2012.12.008</pub-id> <pub-id pub-id-type="pmid">23298717</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Horv&#x00E1;th</surname> <given-names>J.</given-names></name></person-group> (<year>2013b</year>). <article-title>Attenuation of auditory ERPs to action-sound coincidences is not explained by voluntary allocation of attention: Action-sound coincidence effect is not attentional.</article-title> <source><italic>Psychophysiology</italic></source> <volume>50</volume> <fpage>266</fpage>&#x2013;<lpage>273</lpage>. <pub-id pub-id-type="doi">10.1111/psyp.12009</pub-id> <pub-id pub-id-type="pmid">23316925</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Horv&#x00E1;th</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <article-title>Action-related auditory ERP attenuation: Paradigms and hypotheses.</article-title> <source><italic>Brain Res.</italic></source> <volume>1626</volume> <fpage>54</fpage>&#x2013;<lpage>65</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2015.03.038</pub-id> <pub-id pub-id-type="pmid">25843932</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Horv&#x00E1;th</surname> <given-names>J.</given-names></name> <name><surname>Maess</surname> <given-names>B.</given-names></name> <name><surname>Baess</surname> <given-names>P.</given-names></name> <name><surname>T&#x00F3;th</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>Action&#x2013;sound coincidences suppress evoked responses of the human auditory cortex in EEG and MEG.</article-title> <source><italic>J. Cogn. Neurosci.</italic></source> <volume>24</volume> <fpage>1919</fpage>&#x2013;<lpage>1931</lpage>. <pub-id pub-id-type="doi">10.1162/jocn_a_00215</pub-id> <pub-id pub-id-type="pmid">22360594</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jang</surname> <given-names>Y.</given-names></name> <name><surname>Wixted</surname> <given-names>J. T.</given-names></name> <name><surname>Huber</surname> <given-names>D. E.</given-names></name></person-group> (<year>2009</year>). <article-title>Testing signal-detection models of yes/no and two-alternative forced-choice recognition memory.</article-title> <source><italic>J. Exp. Psychol. Gen.</italic></source> <volume>138</volume> <fpage>291</fpage>&#x2013;<lpage>306</lpage>. <pub-id pub-id-type="doi">10.1037/a0015525</pub-id> <pub-id pub-id-type="pmid">19397385</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jung</surname> <given-names>T. P.</given-names></name> <name><surname>Makeig</surname> <given-names>S.</given-names></name> <name><surname>Humphries</surname> <given-names>C.</given-names></name> <name><surname>Lee</surname> <given-names>T. W.</given-names></name> <name><surname>Mckeown</surname> <given-names>M. J.</given-names></name> <name><surname>Iragui</surname> <given-names>V.</given-names></name><etal/></person-group> (<year>2000</year>). <article-title>Removing electroencephalographic artifacts by blind source separation.</article-title> <source><italic>Psychophysiology</italic></source> <volume>37</volume> <fpage>163</fpage>&#x2013;<lpage>178</lpage>. <pub-id pub-id-type="doi">10.1111/1469-8986.3720163</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kayser</surname> <given-names>J.</given-names></name> <name><surname>Tenke</surname> <given-names>C. E.</given-names></name> <name><surname>Gates</surname> <given-names>N. A.</given-names></name> <name><surname>Bruder</surname> <given-names>G. E.</given-names></name></person-group> (<year>2007</year>). <article-title>Reference-independent ERP old/new effects of auditory and visual word recognition memory: Joint extraction of stimulus- and response-locked neuronal generator patterns.</article-title> <source><italic>Psychophysiology</italic></source> <volume>44</volume> <fpage>949</fpage>&#x2013;<lpage>967</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2007.00562.x</pub-id> <pub-id pub-id-type="pmid">17640266</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kelley</surname> <given-names>D. B.</given-names></name> <name><surname>Bass</surname> <given-names>A. H.</given-names></name></person-group> (<year>2010</year>). <article-title>Neurobiology of vocal communication: Mechanisms for sensorimotor integration and vocal patterning.</article-title> <source><italic>Curr. Opin. Neurobiol.</italic></source> <volume>20</volume> <fpage>748</fpage>&#x2013;<lpage>753</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2010.08.007</pub-id> <pub-id pub-id-type="pmid">20829032</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>A. J.</given-names></name> <name><surname>Fitzgerald</surname> <given-names>J. K.</given-names></name> <name><surname>Maimon</surname> <given-names>G.</given-names></name></person-group> (<year>2015</year>). <article-title>Cellular evidence for efference copy in <italic>Drosophila</italic> visuomotor processing.</article-title> <source><italic>Nat. Neurosci.</italic></source> <volume>18</volume> <fpage>1247</fpage>&#x2013;<lpage>1255</lpage>. <pub-id pub-id-type="doi">10.1038/nn.4083</pub-id> <pub-id pub-id-type="pmid">26237362</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kleiner</surname> <given-names>M.</given-names></name> <name><surname>Brainard</surname> <given-names>D.</given-names></name> <name><surname>Pelli</surname> <given-names>D.</given-names></name> <name><surname>Ingling</surname> <given-names>A.</given-names></name> <name><surname>Murray</surname> <given-names>R.</given-names></name> <name><surname>Broussard</surname> <given-names>C.</given-names></name></person-group> (<year>2007</year>). <article-title>What&#x2019;s new in Psychtoolbox-3?</article-title> <source><italic>Perception</italic></source> <volume>36</volume> <fpage>1</fpage>&#x2013;<lpage>16</lpage>.</citation></ref>
<ref id="B25"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>M. D.</given-names></name> <name><surname>Wagenmakers</surname> <given-names>E. J.</given-names></name></person-group> (<year>2013</year>). <source><italic>Bayesian cognitive modeling: A practical course.</italic></source> <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. <pub-id pub-id-type="doi">10.1017/CBO9781139087759</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lu</surname> <given-names>Z.-L.</given-names></name> <name><surname>Williamson</surname> <given-names>S. J.</given-names></name> <name><surname>Kaufman</surname> <given-names>L.</given-names></name></person-group> (<year>1992</year>). <article-title>Behavioral lifetime of human auditory sensory memory predicted by physiological measures.</article-title> <source><italic>Science</italic></source> <volume>258</volume> <fpage>1668</fpage>&#x2013;<lpage>1670</lpage>. <pub-id pub-id-type="doi">10.1126/science.1455246</pub-id> <pub-id pub-id-type="pmid">1455246</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>MacDonald</surname> <given-names>P. A.</given-names></name> <name><surname>MacLeod</surname> <given-names>C. M.</given-names></name></person-group> (<year>1998</year>). <article-title>The influence of attention at encoding on direct and indirect remembering.</article-title> <source><italic>Acta Psychol.</italic></source> <volume>98</volume> <fpage>291</fpage>&#x2013;<lpage>310</lpage>. <pub-id pub-id-type="doi">10.1016/S0001-6918(97)00047-4</pub-id> <pub-id pub-id-type="pmid">9621835</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>MacLeod</surname> <given-names>C. A.</given-names></name> <name><surname>Donaldson</surname> <given-names>D. I.</given-names></name></person-group> (<year>2017</year>). <article-title>Investigating the functional utility of the left parietal ERP old/new effect: Brain activity predicts within but not between participant variance in episodic recollection.</article-title> <source><italic>Front. Hum. Neurosci.</italic></source> <volume>11</volume>:<issue>580</issue>. <pub-id pub-id-type="doi">10.3389/fnhum.2017.00580</pub-id> <pub-id pub-id-type="pmid">29259551</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>MacLeod</surname> <given-names>C. M.</given-names></name> <name><surname>Gopie</surname> <given-names>N.</given-names></name> <name><surname>Hourihan</surname> <given-names>K. L.</given-names></name> <name><surname>Neary</surname> <given-names>K. R.</given-names></name> <name><surname>Ozubko</surname> <given-names>J. D.</given-names></name></person-group> (<year>2010</year>). <article-title>The production effect: Delineation of a phenomenon.</article-title> <source><italic>J. Exp. Psychol. Learn. Mem. Cogn.</italic></source> <volume>36</volume> <fpage>671</fpage>&#x2013;<lpage>685</lpage>. <pub-id pub-id-type="doi">10.1037/a0018785</pub-id> <pub-id pub-id-type="pmid">20438265</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Makeig</surname> <given-names>S.</given-names></name> <name><surname>M&#x00FC;ller</surname> <given-names>M. M.</given-names></name> <name><surname>Rockstroh</surname> <given-names>B.</given-names></name></person-group> (<year>1996</year>). <article-title>Effects of voluntary movements on early auditory brain responses.</article-title> <source><italic>Exp. Brain Res.</italic></source> <volume>110</volume> <fpage>487</fpage>&#x2013;<lpage>492</lpage>. <pub-id pub-id-type="doi">10.1007/BF00229149</pub-id> <pub-id pub-id-type="pmid">8871108</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mathias</surname> <given-names>B.</given-names></name> <name><surname>Palmer</surname> <given-names>C.</given-names></name> <name><surname>Perrin</surname> <given-names>F.</given-names></name> <name><surname>Tillmann</surname> <given-names>B.</given-names></name></person-group> (<year>2015</year>). <article-title>Sensorimotor learning enhances expectations during auditory perception.</article-title> <source><italic>Cereb. Cortex</italic></source> <volume>25</volume> <fpage>2238</fpage>&#x2013;<lpage>2254</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhu030</pub-id> <pub-id pub-id-type="pmid">24621528</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mecklinger</surname> <given-names>A.</given-names></name> <name><surname>Rosburg</surname> <given-names>T.</given-names></name> <name><surname>Johansson</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Reconstructing the past: The late posterior negativity (LPN) in episodic memory studies.</article-title> <source><italic>Neurosci. Biobehav. Rev.</italic></source> <volume>68</volume> <fpage>621</fpage>&#x2013;<lpage>638</lpage>. <pub-id pub-id-type="doi">10.1016/j.neubiorev.2016.06.024</pub-id> <pub-id pub-id-type="pmid">27365154</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mondor</surname> <given-names>T. A.</given-names></name> <name><surname>Morin</surname> <given-names>S. R.</given-names></name></person-group> (<year>2004</year>). <article-title>Primacy, recency, and suffix effects in auditory short-term memory for pure tones: Evidence from a probe recognition paradigm.</article-title> <source><italic>Can. J. Exp. Psychol.</italic></source> <volume>58</volume> <fpage>206</fpage>&#x2013;<lpage>219</lpage>. <pub-id pub-id-type="doi">10.1037/h0087445</pub-id> <pub-id pub-id-type="pmid">15487440</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oostenveld</surname> <given-names>R.</given-names></name> <name><surname>Praamstra</surname> <given-names>P.</given-names></name></person-group> (<year>2001</year>). <article-title>The five percent electrode system for high-resolution EEG and ERP measurements.</article-title> <source><italic>Clin. Neurophysiol.</italic></source> <volume>112</volume> <fpage>713</fpage>&#x2013;<lpage>719</lpage>. <pub-id pub-id-type="doi">10.1016/S1388-2457(00)00527-7</pub-id> <pub-id pub-id-type="pmid">11275545</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Paraskevoudi</surname> <given-names>N.</given-names></name> <name><surname>SanMiguel</surname> <given-names>I.</given-names></name></person-group> (<year>2023</year>). <article-title>Sensory suppression and increased neuromodulation during actions disrupt memory encoding of unpredictable self-initiated stimuli.</article-title> <source><italic>Psychophysiology</italic></source> <volume>60</volume>:<issue>e14156</issue>. <pub-id pub-id-type="doi">10.1111/psyp.14156</pub-id> <pub-id pub-id-type="pmid">35918912</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pyasik</surname> <given-names>M.</given-names></name> <name><surname>Burin</surname> <given-names>D.</given-names></name> <name><surname>Pia</surname> <given-names>L.</given-names></name></person-group> (<year>2018</year>). <article-title>On the relation between body ownership and sense of agency: A link at the level of sensory-related signals.</article-title> <source><italic>Acta Psychol.</italic></source> <volume>185</volume> <fpage>219</fpage>&#x2013;<lpage>228</lpage>. <pub-id pub-id-type="doi">10.1016/j.actpsy.2018.03.001</pub-id> <pub-id pub-id-type="pmid">29533775</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Requarth</surname> <given-names>T.</given-names></name> <name><surname>Sawtell</surname> <given-names>N. B.</given-names></name></person-group> (<year>2011</year>). <article-title>Neural mechanisms for filtering self-generated sensory signals in cerebellum-like circuits.</article-title> <source><italic>Curr. Opin. Neurobiol.</italic></source> <volume>21</volume> <fpage>602</fpage>&#x2013;<lpage>608</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2011.05.031</pub-id> <pub-id pub-id-type="pmid">21704507</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rouder</surname> <given-names>J. N.</given-names></name> <name><surname>Speckman</surname> <given-names>P. L.</given-names></name> <name><surname>Sun</surname> <given-names>D.</given-names></name> <name><surname>Morey</surname> <given-names>R. D.</given-names></name> <name><surname>Iverson</surname> <given-names>G.</given-names></name></person-group> (<year>2009</year>). <article-title>Bayesian t tests for accepting and rejecting the null hypothesis.</article-title> <source><italic>Psychon. Bull. Rev.</italic></source> <volume>16</volume> <fpage>225</fpage>&#x2013;<lpage>237</lpage>. <pub-id pub-id-type="doi">10.3758/PBR.16.2.225</pub-id> <pub-id pub-id-type="pmid">19293088</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Roussel</surname> <given-names>C.</given-names></name> <name><surname>Hughes</surname> <given-names>G.</given-names></name> <name><surname>Waszak</surname> <given-names>F.</given-names></name></person-group> (<year>2013</year>). <article-title>A preactivation account of sensory attenuation.</article-title> <source><italic>Neuropsychologia</italic></source> <volume>51</volume> <fpage>922</fpage>&#x2013;<lpage>929</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2013.02.005</pub-id> <pub-id pub-id-type="pmid">23428377</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Roy</surname> <given-names>J. E.</given-names></name> <name><surname>Cullen</surname> <given-names>K. E.</given-names></name></person-group> (<year>2001</year>). <article-title>Selective processing of vestibular reafference during self-generated head motion.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>21</volume> <fpage>2131</fpage>&#x2013;<lpage>2142</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.21-06-02131.2001</pub-id> <pub-id pub-id-type="pmid">11245697</pub-id></citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rugg</surname> <given-names>M. D.</given-names></name> <name><surname>Curran</surname> <given-names>T.</given-names></name></person-group> (<year>2007</year>). <article-title>Event-related potentials and recognition memory.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>11</volume> <fpage>251</fpage>&#x2013;<lpage>257</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2007.04.004</pub-id> <pub-id pub-id-type="pmid">17481940</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>SanMiguel</surname> <given-names>I.</given-names></name> <name><surname>Todd</surname> <given-names>J.</given-names></name> <name><surname>Schr&#x00F6;ger</surname> <given-names>E.</given-names></name></person-group> (<year>2013</year>). <article-title>Sensory suppression effects to self-initiated sounds reflect the attenuation of the unspecific N1 component of the auditory ERP.</article-title> <source><italic>Psychophysiology</italic></source> <volume>50</volume> <fpage>334</fpage>&#x2013;<lpage>343</lpage>. <pub-id pub-id-type="doi">10.1111/psyp.12024</pub-id> <pub-id pub-id-type="pmid">23351131</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sanquist</surname> <given-names>T. F.</given-names></name> <name><surname>Rohrbaugh</surname> <given-names>J. W.</given-names></name> <name><surname>Syndulko</surname> <given-names>K.</given-names></name> <name><surname>Lindsley</surname> <given-names>D. B.</given-names></name></person-group> (<year>1980</year>). <article-title>Electrocortical signs of levels of processing: Perceptual analysis and recognition memory.</article-title> <source><italic>Psychophysiology</italic></source> <volume>17</volume> <fpage>568</fpage>&#x2013;<lpage>576</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.1980.tb02299.x</pub-id> <pub-id pub-id-type="pmid">7443924</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Saupe</surname> <given-names>K.</given-names></name> <name><surname>Widmann</surname> <given-names>A.</given-names></name> <name><surname>Trujillo-Barreto</surname> <given-names>N. J.</given-names></name> <name><surname>Schr&#x00F6;ger</surname> <given-names>E.</given-names></name></person-group> (<year>2013</year>). <article-title>Sensorial suppression of self-generated sounds and its dependence on attention.</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>90</volume> <fpage>300</fpage>&#x2013;<lpage>310</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2013.09.006</pub-id> <pub-id pub-id-type="pmid">24095710</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schafer</surname> <given-names>E. W. P.</given-names></name> <name><surname>Marcus</surname> <given-names>M. M.</given-names></name></person-group> (<year>1973</year>). <article-title>Self-stimulation alters human sensory brain responses.</article-title> <source><italic>Science</italic></source> <volume>181</volume> <fpage>175</fpage>&#x2013;<lpage>177</lpage>. <pub-id pub-id-type="doi">10.1126/science.181.4095.175</pub-id> <pub-id pub-id-type="pmid">4711735</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schl&#x00F6;gl</surname> <given-names>A.</given-names></name> <name><surname>Keinrath</surname> <given-names>C.</given-names></name> <name><surname>Zimmermann</surname> <given-names>D.</given-names></name> <name><surname>Scherer</surname> <given-names>R.</given-names></name> <name><surname>Leeb</surname> <given-names>R.</given-names></name> <name><surname>Pfurtscheller</surname> <given-names>G.</given-names></name></person-group> (<year>2007</year>). <article-title>A fully automated correction method of EOG artifacts in EEG recordings.</article-title> <source><italic>Clin. Neurophysiol.</italic></source> <volume>118</volume> <fpage>98</fpage>&#x2013;<lpage>104</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2006.09.003</pub-id> <pub-id pub-id-type="pmid">17088100</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schneider</surname> <given-names>D. M.</given-names></name> <name><surname>Nelson</surname> <given-names>A.</given-names></name> <name><surname>Mooney</surname> <given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>A synaptic and circuit basis for corollary discharge in the auditory cortex.</article-title> <source><italic>Nature</italic></source> <volume>513</volume> <fpage>189</fpage>&#x2013;<lpage>194</lpage>. <pub-id pub-id-type="doi">10.1038/nature13724</pub-id> <pub-id pub-id-type="pmid">25162524</pub-id></citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schr&#x00F6;ger</surname> <given-names>E.</given-names></name> <name><surname>Marzecov&#x00E1;</surname> <given-names>A.</given-names></name> <name><surname>SanMiguel</surname> <given-names>I.</given-names></name></person-group> (<year>2015</year>). <article-title>Attention and prediction in human audition: A lesson from cognitive psychophysiology.</article-title> <source><italic>Eur. J. Neurosci.</italic></source> <volume>41</volume> <fpage>641</fpage>&#x2013;<lpage>664</lpage>. <pub-id pub-id-type="doi">10.1111/ejn.12816</pub-id> <pub-id pub-id-type="pmid">25728182</pub-id></citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Warren</surname> <given-names>L. R.</given-names></name></person-group> (<year>1980</year>). <article-title>Evoked potential correlates of recognition memory.</article-title> <source><italic>Biol. Psychol.</italic></source> <volume>11</volume> <fpage>21</fpage>&#x2013;<lpage>35</lpage>. <pub-id pub-id-type="doi">10.1016/0301-0511(80)90023-X</pub-id> <pub-id pub-id-type="pmid">7248401</pub-id></citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wilding</surname> <given-names>E. L.</given-names></name></person-group> (<year>2000</year>). <article-title>In what way does the parietal ERP old/new effect index recollection?</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>35</volume> <fpage>81</fpage>&#x2013;<lpage>87</lpage>. <pub-id pub-id-type="doi">10.1016/S0167-8760(99)00095-1</pub-id> <pub-id pub-id-type="pmid">10683669</pub-id></citation></ref>
</ref-list>
</back>
</article>