<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3-mathml3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.3" xml:lang="en">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2026.1736951</article-id>
<article-version article-version-type="Version of Record" vocab="NISO-RP-8-2008"/>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Original Research</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Tonal modulation influences on musical sight-reading</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Zhang</surname>
<given-names>Yumo</given-names>
</name>
<xref ref-type="aff" rid="aff1"/>
<uri xlink:href="https://loop.frontiersin.org/people/3263755"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Project administration" vocab-term-identifier="https://credit.niso.org/contributor-roles/project-administration/">Project administration</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Methodology" vocab-term-identifier="https://credit.niso.org/contributor-roles/methodology/">Methodology</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Data curation" vocab-term-identifier="https://credit.niso.org/contributor-roles/data-curation/">Data curation</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Conceptualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; original draft" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing &#x2013; original draft</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &#x0026; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &#x0026; editing</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Investigation" vocab-term-identifier="https://credit.niso.org/contributor-roles/investigation/">Investigation</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Formal analysis" vocab-term-identifier="https://credit.niso.org/contributor-roles/formal-analysis/">Formal analysis</role>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Lewandowska</surname>
<given-names>Olivia Podolak</given-names>
</name>
<xref ref-type="aff" rid="aff1"/>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<xref ref-type="author-notes" rid="fn0004"><sup>&#x2020;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/132817"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Formal analysis" vocab-term-identifier="https://credit.niso.org/contributor-roles/formal-analysis/">Formal analysis</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Visualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/visualization/">Visualization</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Resources" vocab-term-identifier="https://credit.niso.org/contributor-roles/resources/">Resources</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Project administration" vocab-term-identifier="https://credit.niso.org/contributor-roles/project-administration/">Project administration</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Conceptualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Supervision" vocab-term-identifier="https://credit.niso.org/contributor-roles/supervision/">Supervision</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Methodology" vocab-term-identifier="https://credit.niso.org/contributor-roles/methodology/">Methodology</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &#x0026; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &#x0026; editing</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Data curation" vocab-term-identifier="https://credit.niso.org/contributor-roles/data-curation/">Data curation</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Validation" vocab-term-identifier="https://credit.niso.org/contributor-roles/validation/">Validation</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Investigation" vocab-term-identifier="https://credit.niso.org/contributor-roles/investigation/">Investigation</role>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Jones</surname>
<given-names>Spencer</given-names>
</name>
<xref ref-type="aff" rid="aff1"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Methodology" vocab-term-identifier="https://credit.niso.org/contributor-roles/methodology/">Methodology</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &#x0026; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &#x0026; editing</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Conceptualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Schmuckler</surname>
<given-names>Mark A.</given-names>
</name>
<xref ref-type="aff" rid="aff1"/>
<xref ref-type="author-notes" rid="fn0004"><sup>&#x2020;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/18833"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Visualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/visualization/">Visualization</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; original draft" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing &#x2013; original draft</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Software" vocab-term-identifier="https://credit.niso.org/contributor-roles/software/">Software</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Funding acquisition" vocab-term-identifier="https://credit.niso.org/contributor-roles/funding-acquisition/">Funding acquisition</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Resources" vocab-term-identifier="https://credit.niso.org/contributor-roles/resources/">Resources</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Formal analysis" vocab-term-identifier="https://credit.niso.org/contributor-roles/formal-analysis/">Formal analysis</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Conceptualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Project administration" vocab-term-identifier="https://credit.niso.org/contributor-roles/project-administration/">Project administration</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Validation" vocab-term-identifier="https://credit.niso.org/contributor-roles/validation/">Validation</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &#x0026; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &#x0026; editing</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Supervision" vocab-term-identifier="https://credit.niso.org/contributor-roles/supervision/">Supervision</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Methodology" vocab-term-identifier="https://credit.niso.org/contributor-roles/methodology/">Methodology</role>
</contrib>
</contrib-group>
<aff id="aff1"><institution>Department of Psychology, University of Toronto Scarborough</institution>, <city>Toronto</city>, <state>ON</state>, <country country="CA">Canada</country></aff>
<author-notes>
<corresp id="c001"><label>&#x002A;</label>Correspondence: Olivia Podolak Lewandowska, <email xlink:href="mailto:olivia.podolak@mail.utoronto.ca">olivia.podolak@mail.utoronto.ca</email></corresp>
<fn fn-type="other" id="fn0004">
<label>&#x2020;</label>
<p>ORCID: Olivia Podolak Lewandowska, <uri xlink:href="https://orcid.org/0000-0002-2914-8944">orcid.org/0000-0002-2914-8944</uri>; Mark A. Schmuckler, <uri xlink:href="https://orcid.org/0000-0002-9945-6141">orcid.org/0000-0002-9945-6141</uri></p>
</fn>
</author-notes>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2026-03-04">
<day>04</day>
<month>03</month>
<year>2026</year>
</pub-date>
<pub-date publication-format="electronic" date-type="collection">
<year>2026</year>
</pub-date>
<volume>17</volume>
<elocation-id>1736951</elocation-id>
<history>
<date date-type="received">
<day>31</day>
<month>10</month>
<year>2025</year>
</date>
<date date-type="rev-recd">
<day>26</day>
<month>01</month>
<year>2026</year>
</date>
<date date-type="accepted">
<day>09</day>
<month>02</month>
<year>2026</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2026 Zhang, Lewandowska, Jones and Schmuckler.</copyright-statement>
<copyright-year>2026</copyright-year>
<copyright-holder>Zhang, Lewandowska, Jones and Schmuckler</copyright-holder>
<license>
<ali:license_ref start_date="2026-03-04">https://creativecommons.org/licenses/by/4.0/</ali:license_ref>
<license-p>This is an open-access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License (CC BY)</ext-link>. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Musical sight-reading requires decoding visual notation into coordinated motor actions, making it an invaluable paradigm for studying the perceptual and motor representations underlying perception&#x2013;action coupling. Two experiments examined the impact of tonal modulations on sight-reading, having pianists perform melodies varying in the tonal distance of their modulation (no modulation, close modulation, mid modulation, and far modulation). Experiment 1 presented melodies in a random order, whereas Experiment 2 presented melodies blocked by condition (no modulation melodies first versus modulating melodies first). In both studies, analyses of performance errors revealed increased errors from the initial key to the subsequent key. Additionally, both experiments found gradated tonal distance effects, with far modulations producing the largest difference in error rate between initial and subsequent keys, no modulations producing the smallest difference, and close and mid modulations falling between the two. Finally, both experiments observed a spike in error rates, with errors peaking at the transition point from the initial to the subsequent key. Of note is that Experiment 1 showed this (albeit non-significant) pattern for the no modulation melodies, suggesting that pianists developed expectations for key modulation irrespective of its occurrence. Experiment 2 confirmed this hypothesis, demonstrating no change in error rates for no modulation melodies when pianists performed these melodies prior to experiencing the modulating melodies. Together, these results support a perception-action account of piano performance, and suggest intriguing new directions for research on real-time music performance.</p>
</abstract>
<kwd-group>
<kwd>motor control</kwd>
<kwd>music cognition and perception</kwd>
<kwd>music performance</kwd>
<kwd>perception-action approach</kwd>
<kwd>sight-reading</kwd>
</kwd-group>
<funding-group>
<funding-statement>The author(s) declared that financial support was received for this work and/or its publication. This research was supported by a Natural Sciences and Engineering Research Council of Canada grant to MAS.</funding-statement>
</funding-group>
<counts>
<fig-count count="6"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="113"/>
<page-count count="16"/>
<word-count count="14358"/>
</counts>
<custom-meta-group>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Performance Science</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>Music performance is a complex behavior that engages numerous psychological processes in its practitioners (<xref ref-type="bibr" rid="ref70">Palmer, 1997</xref>, <xref ref-type="bibr" rid="ref71">2005</xref>, <xref ref-type="bibr" rid="ref72">2013</xref>). These processes cut across a wide array of abilities, including multisensory integration involved in visual, auditory, and haptic perception, motor behavior and control, interpersonal interaction and synchrony, emotional production and perception, and even anxiety regulation. As evidence of the plethora of psychological components underlying music performance, <xref ref-type="bibr" rid="ref75">Parncutt and McPherson&#x2019;s (2002)</xref> edited volume on this topic contains no fewer than nine chapters on the &#x201C;subskills of music performance,&#x201D; with these components ranging from perceptual processing of visual and auditory input (<xref ref-type="bibr" rid="ref66">McPherson and Gabrielsson, 2002</xref>) to motor learning and practice (<xref ref-type="bibr" rid="ref7">Barry and Hallam, 2002</xref>) to memory processes (<xref ref-type="bibr" rid="ref2">Aiello and Williamon, 2002</xref>) to structural (<xref ref-type="bibr" rid="ref32">Friberg and Battel, 2002</xref>) and emotional communication (<xref ref-type="bibr" rid="ref42">Juslin and Persson, 2002</xref>) to body movement (<xref ref-type="bibr" rid="ref23">Davidson and Correia, 2002</xref>). Accordingly, musical performance represents a microcosm of general psychological processing. It is unsurprising, therefore, that research on musical performance has been one of the fastest growing (<xref ref-type="bibr" rid="ref33">Gabrielsson, 2003</xref>; <xref ref-type="bibr" rid="ref109">Tirovolas and Levitin, 2011</xref>) and most highly cited (<xref ref-type="bibr" rid="ref19">Cohen and Schmuckler, 2023</xref>) areas in music psychology.</p>
<p>One framework adopted for understanding music performance involves a focus on the explicit perception-action relations required for this behavior (<xref ref-type="bibr" rid="ref63">Maes et al., 2013</xref>; <xref ref-type="bibr" rid="ref69">Novembre and Keller, 2014</xref>; <xref ref-type="bibr" rid="ref79">Pfordresher, 2006</xref>, <xref ref-type="bibr" rid="ref81">2019</xref>; <xref ref-type="bibr" rid="ref86">Schaefer, 2014</xref>). According to this approach, performance of notated music is foundationally structured by the recoding of visual symbols necessary for the structure and planning of physical movement, as well as the auditory perception of sounds produced by these actions. These perceptual and motor components recursively interact, forming a continuous perception-action loop in which perceptual input influences motor behavior, with this motor behavior then influencing the subsequent planning and execution of this movement, which then creates new perceptual input, and so on.</p>
<p>Within the realm of auditory information, probably the best-known and most thoroughly studied example of this approach involves work on delayed auditory feedback, investigated by <xref ref-type="bibr" rid="ref52">Kulpa and Pfordresher (2013)</xref>, <xref ref-type="bibr" rid="ref77">Pfordresher (2003</xref>, <xref ref-type="bibr" rid="ref78">2005</xref>, <xref ref-type="bibr" rid="ref80">2008</xref>, <xref ref-type="bibr" rid="ref81">2019)</xref>, <xref ref-type="bibr" rid="ref82">Pfordresher and Dalla Bella (2011)</xref>, and <xref ref-type="bibr" rid="ref83">Pfordresher and Palmer (2002</xref>, <xref ref-type="bibr" rid="ref84">2006)</xref> as well as others (<xref ref-type="bibr" rid="ref12">Bradshaw et al., 1971</xref>; <xref ref-type="bibr" rid="ref27">Finney, 1997</xref>; <xref ref-type="bibr" rid="ref28">Finney and Palmer, 2003</xref>; <xref ref-type="bibr" rid="ref29">Finney and Warren, 2002</xref>; <xref ref-type="bibr" rid="ref34">Gates and Bradshaw, 1974</xref>; <xref ref-type="bibr" rid="ref35">Gates et al., 1974</xref>; <xref ref-type="bibr" rid="ref38">Havlicek, 1968</xref>). Such research has demonstrated that the motor control involved in producing musical passages is affected by the availability and timing of auditory information arising from such performances. As some examples of this influence, <xref ref-type="bibr" rid="ref83">Pfordresher and Palmer (2002)</xref> found that delayed auditory feedback affected the timing variability of performances of isochronous melodies, with the extent of timing variations influenced by whether the delay produced binary or non-binary subdivisions of the underlying beat structure. Similarly, <xref ref-type="bibr" rid="ref82">Pfordresher and Dalla Bella (2011)</xref> found that even simple tapping of an isochronous rhythm was affected by delayed auditory feedback, with these effects evident in both timing variability and the velocity of finger movements. Overall, this research represents a paradigmatic example of a perception-action approach in understanding the complexities inherent in music performance.</p>
<p>Although not as thoroughly studied, within the visual domain one arena for investigating perception-action relations involves sight-reading (<xref ref-type="bibr" rid="ref6">Banton, 1995</xref>; <xref ref-type="bibr" rid="ref45">Kopiez and Lee, 2008</xref>; <xref ref-type="bibr" rid="ref54">Lehmann and Ericsson, 1993</xref>, <xref ref-type="bibr" rid="ref55">1996</xref>; <xref ref-type="bibr" rid="ref56">Lehmann and Kopiez, 2016</xref>; <xref ref-type="bibr" rid="ref57">Lehmann and McArthur, 2002</xref>; <xref ref-type="bibr" rid="ref59">Lewandowska and Schmuckler, 2019</xref>; <xref ref-type="bibr" rid="ref67">Mishra, 2014a</xref>, <xref ref-type="bibr" rid="ref68">2014b</xref>; <xref ref-type="bibr" rid="ref76">Perra et al., 2021</xref>; <xref ref-type="bibr" rid="ref97">Sch&#x00F6;n and Besson, 2002</xref>; <xref ref-type="bibr" rid="ref100">Sloboda, 1974</xref>; <xref ref-type="bibr" rid="ref112">Wolf, 1976</xref>). Because musical sight-reading involves performing passages from a score with little or no prior practice or experience with the notation, this behavior provides an ideal context for exploring the use of visual information in perception-action loops, as sight-reading requires performers to immediately decode visual input, and integrate both current and upcoming information with corresponding motor behavior. Research on sight-reading has focused on three different issues: eye movements during sight-reading, structural factors influencing sight-reading, and factors related to sight-reading skill, including how to teach and improve sight-reading (see <xref ref-type="bibr" rid="ref59">Lewandowska and Schmuckler, 2019</xref>, for a discussion). Of these areas, explorations of the structural factors influencing the sight-reading of music (see reviews by <xref ref-type="bibr" rid="ref45">Kopiez and Lee, 2008</xref>; <xref ref-type="bibr" rid="ref67">Mishra, 2014a</xref>) are especially illuminating for understanding perception-action relations, given that such factors, by definition, involve critical visual and auditory input to be translated, in some form, into motor activity.</p>
<p>Research on the impact of structural components on musical sight-reading has itself explored a wide range of factors, including visual and auditory feedback (<xref ref-type="bibr" rid="ref24">Delogu et al., 2019</xref>; <xref ref-type="bibr" rid="ref29">Finney and Warren, 2002</xref>; <xref ref-type="bibr" rid="ref78">Pfordresher, 2005</xref>, <xref ref-type="bibr" rid="ref79">2006</xref>), pitch, rhythmic, and handedness complexity (<xref ref-type="bibr" rid="ref26">Fine et al., 2006</xref>; <xref ref-type="bibr" rid="ref36">Gregory, 1972</xref>; <xref ref-type="bibr" rid="ref37">Gudmundsdottir, 2010</xref>; <xref ref-type="bibr" rid="ref59">Lewandowska and Schmuckler, 2019</xref>), and phrase structure (<xref ref-type="bibr" rid="ref1">Ahken et al., 2012</xref>; <xref ref-type="bibr" rid="ref5">Arthur et al., 2016</xref>; <xref ref-type="bibr" rid="ref62">Madell and H&#x00E9;bert, 2008</xref>; <xref ref-type="bibr" rid="ref64">Mainenti et al., 2011</xref>; <xref ref-type="bibr" rid="ref101">Sloboda, 1977</xref>). One particularly important structural component that has been explored involves the tonal structure of the to-be-performed passage. Briefly described, tonality refers to the organization of the 12 chromatic tones around a central reference pitch (<xref ref-type="bibr" rid="ref47">Krumhansl, 1990</xref>, <xref ref-type="bibr" rid="ref48">2000</xref>; <xref ref-type="bibr" rid="ref49">Krumhansl and Cuddy, 2010</xref>; <xref ref-type="bibr" rid="ref90">Schmuckler, 2004</xref>, <xref ref-type="bibr" rid="ref91">2009</xref>, <xref ref-type="bibr" rid="ref92">2016</xref>, <xref ref-type="bibr" rid="ref93">2023</xref>). In Western music, these chromatic tones form a hierarchy, with the central reference pitch, called the tonic, positioned at the top of the hierarchy, and the remaining tones occupying differing positions in this hierarchy depending upon their theoretic relatedness to the tonic. 
Tonalities can be built on any of the 12 chromatic notes, and are generally one of two types &#x2013; major or minor &#x2013; that vary in the specifics of their internal hierarchical relations. A wealth of research in the field attests not only to the psychological reality of these &#x201C;tonal hierarchies&#x201D; (<xref ref-type="bibr" rid="ref47">Krumhansl, 1990</xref>, <xref ref-type="bibr" rid="ref48">2000</xref>; <xref ref-type="bibr" rid="ref50">Krumhansl and Kessler, 1982</xref>; <xref ref-type="bibr" rid="ref51">Krumhansl and Shepard, 1979</xref>), but also to the impact of tonality on a wide array of psychological behaviors, including online processing of musical information, memory for musical materials, and the performance of musical passages (see <xref ref-type="bibr" rid="ref59">Lewandowska and Schmuckler, 2019</xref>; <xref ref-type="bibr" rid="ref92">Schmuckler, 2016</xref>, <xref ref-type="bibr" rid="ref93">2023</xref>, for reviews).</p>
<p>Tonality has similarly featured prominently in sight-reading research (<xref ref-type="bibr" rid="ref4">Alexander and Henry, 2012</xref>; <xref ref-type="bibr" rid="ref26">Fine et al., 2006</xref>; <xref ref-type="bibr" rid="ref59">Lewandowska and Schmuckler, 2019</xref>; <xref ref-type="bibr" rid="ref61">MacKenzie et al., 1986</xref>). For instance, <xref ref-type="bibr" rid="ref61">MacKenzie et al. (1986)</xref> had performers sight-read tonal and atonal (music that is not tonally structured) passages, and found increased rhythmic variability in the performance of a target passage in atonal, relative to tonal passages. Similarly, <xref ref-type="bibr" rid="ref26">Fine et al. (2006)</xref> had performers sight-read Bach chorales manipulated such that the melodies of the passages were either tonal or atonal, with either a tonal or atonal harmonic accompaniment, and found that introducing atonality into the passages increased errors in performance. Most recently, <xref ref-type="bibr" rid="ref59">Lewandowska and Schmuckler (2019)</xref> had pianists sight-read passages varying in their tonality (major tonality, minor tonality, atonality) and texture (simultaneous versus successive tone onsets, unimanual versus bimanual performance), and found that both factors significantly influenced sight-reading accuracy. Together, these findings highlight that tonal structure plays an important role in the perception-action organization of music performance.</p>
<p>Despite demonstrating the importance of tonality as a structural principle in piano performance and sight-reading, this work is limited in certain respects. For instance, these findings do not necessarily demonstrate the continuous interactive nature of perceptual and action systems during this behavior. Specifically, although tonal information can be abstracted from the to-be-performed musical scores, because this tonal structure stays constant throughout the passage, motor performance might only be constrained by information gathered at the start of the performance, and not as a result of continuous perceptual monitoring and updating. Put more simply, a static tonal structure across the length of a passage does not actually require continuous perception-action feedback throughout performance to demonstrate influences of this structural factor, although clearly continuous monitoring is required for the actual production of the to-be-performed notes.</p>
<p>One way to address this limitation would be to explore sight-reading within contexts that require explicit continuous monitoring of the tonal structure through the entire passage. Such a situation could be achieved by examining sight-reading in passages that change in their tonal organization. In fact, shifting tonal organizations, called &#x201C;tonal modulations&#x201D; or &#x201C;key changes,&#x201D; are extremely common in Western music, with there being no limit to the number of modulations that can occur in a piece, nor any constraint with respect to the distance, in tonal space, over which such movements can occur (<xref ref-type="bibr" rid="ref46">Korsakova-Kreyn and Dowling, 2014</xref>; <xref ref-type="bibr" rid="ref50">Krumhansl and Kessler, 1982</xref>). According to some authors, tonal modulations are essential to the aesthetic properties of a piece of music (<xref ref-type="bibr" rid="ref13">Brattico et al., 2013</xref>; <xref ref-type="bibr" rid="ref15">Brattico and Pearce, 2013</xref>; <xref ref-type="bibr" rid="ref14">Brattico et al., 2017</xref>; <xref ref-type="bibr" rid="ref46">Korsakova-Kreyn and Dowling, 2014</xref>).</p>
<p>Over the years, researchers have examined the psychological impact of tonal modulations, primarily focusing on listeners&#x2019; percepts of key movement (<xref ref-type="bibr" rid="ref21">Cook, 1987</xref>; <xref ref-type="bibr" rid="ref22">Cuddy and Thompson, 1992</xref>; <xref ref-type="bibr" rid="ref25">Farbood, 2016</xref>; <xref ref-type="bibr" rid="ref65">Marvin and Brinkman, 1999</xref>; <xref ref-type="bibr" rid="ref102">Spyra et al., 2021</xref>; <xref ref-type="bibr" rid="ref103">Spyra and Woolhouse, 2023</xref>; <xref ref-type="bibr" rid="ref105">Thompson and Cuddy, 1989</xref>, <xref ref-type="bibr" rid="ref106">1992</xref>, <xref ref-type="bibr" rid="ref107">1997</xref>; <xref ref-type="bibr" rid="ref113">Woolhouse et al., 2016</xref>). This work has explored multiple issues, including the ability to track tonal modulations throughout a piece of music (<xref ref-type="bibr" rid="ref16">Chew, 2002</xref>, <xref ref-type="bibr" rid="ref17">2007</xref>, <xref ref-type="bibr" rid="ref18">2014</xref>; <xref ref-type="bibr" rid="ref47">Krumhansl, 1990</xref>; <xref ref-type="bibr" rid="ref93">Schmuckler, 2023</xref>; <xref ref-type="bibr" rid="ref95">Schmuckler and Tomovski, 2005</xref>), factors affecting the perception of tonal closure (<xref ref-type="bibr" rid="ref65">Marvin and Brinkman, 1999</xref>; <xref ref-type="bibr" rid="ref102">Spyra et al., 2021</xref>; <xref ref-type="bibr" rid="ref103">Spyra and Woolhouse, 2023</xref>), the timescale on which previously heard tonal information fades from memory (<xref ref-type="bibr" rid="ref21">Cook, 1987</xref>; <xref ref-type="bibr" rid="ref25">Farbood, 2016</xref>; <xref ref-type="bibr" rid="ref40">Janata et al., 2003</xref>; <xref ref-type="bibr" rid="ref41">Janata et al., 2002</xref>; <xref ref-type="bibr" rid="ref113">Woolhouse et al., 2016</xref>), the effect of tonal modulation on subjective time (<xref ref-type="bibr" rid="ref30">Firmino and Bueno, 2008</xref>; <xref ref-type="bibr" rid="ref31">Firmino et al., 2009</xref>), the role of tonal distance and the direction of tonal movement on tonal percepts (<xref ref-type="bibr" rid="ref22">Cuddy and Thompson, 1992</xref>; <xref ref-type="bibr" rid="ref58">Lewandowska, 2019</xref>; <xref ref-type="bibr" rid="ref60">Lewandowska and Schmuckler, in preparation</xref>; <xref ref-type="bibr" rid="ref105">Thompson and Cuddy, 1989</xref>, <xref ref-type="bibr" rid="ref106">1992</xref>, <xref ref-type="bibr" rid="ref107">1997</xref>), and neural correlates of tonal modulations (<xref ref-type="bibr" rid="ref39">Janata, 2007</xref>; <xref ref-type="bibr" rid="ref40">Janata et al., 2003</xref>; <xref ref-type="bibr" rid="ref41">Janata et al., 2002</xref>; <xref ref-type="bibr" rid="ref43">Koelsch and Friederici, 2003</xref>; <xref ref-type="bibr" rid="ref44">Koelsch et al., 2003</xref>). As an oversimplified summary, this work suggests that listeners can accurately track modulations throughout a piece of music, that the tonal distance of modulations can influence percepts of modulations in some circumstances, that tonal modulations are encoded in systematic patterns of cortical activation, and that there seems to be little expectation for, or even recognition of, whether a piece returns to the initial key in which it started.</p>
<p>Notably, virtually no work has examined the impact of tonal modulation on performances of musical pieces. Although <xref ref-type="bibr" rid="ref107">Thompson and Cuddy (1997)</xref> did employ a performance manipulation in their research, this study focused on whether performance expression enhanced listeners&#x2019; perceptions of tonal movement. As such, the impact of key movement on musical performance itself remains an open question.</p>
<p>Taken together, this review highlights an obvious avenue for exploring the flexibility of perception-action feedback in music performance. Specifically, examining the impact of key movement during a musical sight-reading task provides a window into the operation of perception-action integration presumably underlying music performance. Towards this end, a pair of experiments investigated the impact of modulation on music performance, specifically exploring the effect on pianists&#x2019; sight-reading of a modulation from an initial tonality at the beginning of a passage to a subsequent tonality at the end of a passage. Assuming that the sight-reading of a musical excerpt requires some form of initial perceptual and motor representations, the occurrence of a modulation would similarly necessitate the modification of both of these representations. The necessity of modifying both perceptual and motor schemas could have multiple consequences for performance, yielding a set of testable predictions. For instance, one outcome of changing representations might be that, at the transition point between the two keys, because performers must adapt to the changing perceptual input by altering their motor performance, there could be an observable disruption in their performance. Put more simply, performance should noticeably deteriorate during the key transitions.</p>
<p>Second, and related to this first prediction, because performers must adjust their underlying representations, there could be an observable impact on performance that continues for a period of time into the new key section. Although one would anticipate that this impact would eventually dissipate over time, such an effect could well produce a disruption of performance in the subsequent key section, relative to the initial key section.</p>
<p>Finally, a third consequence of the requirement of shifting perceptual and motor representations due to tonal modulation can be seen in considering the impact on performance of modulations varying in their distance in tonal space between initial and subsequent keys. Building on work highlighting the importance of tonal distance in perceptions of modulations (<xref ref-type="bibr" rid="ref22">Cuddy and Thompson, 1992</xref>; <xref ref-type="bibr" rid="ref58">Lewandowska, 2019</xref>; <xref ref-type="bibr" rid="ref60">Lewandowska and Schmuckler, in preparation</xref>; <xref ref-type="bibr" rid="ref105">Thompson and Cuddy, 1989</xref>, <xref ref-type="bibr" rid="ref106">1992</xref>, <xref ref-type="bibr" rid="ref107">1997</xref>), if sight-reading of modulating passages does require modifying perceptual and motor representations, then modulations varying in their tonal distance should similarly, and differentially, affect performance. Specifically, more distant modulations, which require more complex realignment of perceptual and motor schemas, should lead to more errors in sight-reading, relative to less distant modulations.</p>
</sec>
<sec id="sec2">
<label>2</label>
<title>Experiment 1: sight-reading of modulating melodies: tonal distance effects</title>
<p>Experiment 1 provided an initial test of the impact of tonal modulation on performers&#x2019; sight-reading of short musical passages. To examine this question, this study draws from research examining listeners&#x2019; percepts of tonal modulations (<xref ref-type="bibr" rid="ref22">Cuddy and Thompson, 1992</xref>; <xref ref-type="bibr" rid="ref25">Farbood, 2016</xref>; <xref ref-type="bibr" rid="ref58">Lewandowska, 2019</xref>; <xref ref-type="bibr" rid="ref60">Lewandowska and Schmuckler, in preparation</xref>; <xref ref-type="bibr" rid="ref105">Thompson and Cuddy, 1989</xref>, <xref ref-type="bibr" rid="ref106">1992</xref>, <xref ref-type="bibr" rid="ref107">1997</xref>; <xref ref-type="bibr" rid="ref113">Woolhouse et al., 2016</xref>), within the methodological context of recent work on tonality and sight-reading (<xref ref-type="bibr" rid="ref59">Lewandowska and Schmuckler, 2019</xref>).</p>
<p>One critical issue underlying exploration of this question involves delineating how best to characterize the degree of tonal distance present in modulating stimuli. Within a Western music-theoretic framework, one means of characterizing the degree of modulation involves mapping the distance between the initial and subsequent tonalities on the &#x201C;circle of fifths.&#x201D; This arrangement, shown in <xref ref-type="fig" rid="fig1">Figure 1</xref>, portrays the relations between the 12 chromatic pitches and the major and minor keys built on them, positioned in a circular sequence in which neighboring tonalities are separated by the musical interval of a perfect fifth (i.e., seven semitones). One advantage of this arrangement is that it spatially portrays the musical relation between the tonalities, with the number of steps between different tonalities indicative of their similarity or relatedness. Thus, adjacent tonalities along this circle (tonalities one step apart, such as C and G, Db and Ab, and so on) are highly related, whereas maximally distant tonalities along this circle (tonalities six steps apart, such as C and F#/Gb or A and Eb) are highly unrelated. Indeed, a wealth of research (see <xref ref-type="bibr" rid="ref8">Bartlett and Dowling, 1980</xref>; <xref ref-type="bibr" rid="ref50">Krumhansl and Kessler, 1982</xref>; <xref ref-type="bibr" rid="ref98">Shepard, 1982a</xref>, <xref ref-type="bibr" rid="ref99">1982b</xref>, for classic references) attests to the perceived reality of these key relations. Accordingly, this arrangement provides a convenient quantification of the degree of tonal distance for modulating passages.</p>
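This circle-of-fifths distance can be computed directly. The following is an illustrative sketch (ours, not part of the study's materials): mapping each tonic's pitch class onto the circle of fifths and taking the minimum number of steps in either direction. The key names and function name are our own.

```python
# Illustrative sketch (not from the study): distance between two major keys
# on the circle of fifths.
PITCH_CLASS = {"C": 0, "C#": 1, "Db": 1, "D": 2, "Eb": 3, "E": 4, "F": 5,
               "F#": 6, "Gb": 6, "G": 7, "Ab": 8, "A": 9, "Bb": 10, "B": 11}

def fifths_distance(key1, key2):
    """Minimum number of steps between two major keys on the circle of fifths."""
    # Multiplying a pitch class by 7 (a perfect fifth in semitones), mod 12,
    # maps the chromatic circle onto the circle of fifths.
    pos1 = (PITCH_CLASS[key1] * 7) % 12
    pos2 = (PITCH_CLASS[key2] * 7) % 12
    diff = abs(pos1 - pos2)
    return min(diff, 12 - diff)

print(fifths_distance("C", "G"))   # adjacent keys: 1 step
print(fifths_distance("C", "A"))   # 3 steps
print(fifths_distance("C", "F#"))  # maximally distant: 6 steps
```

This reproduces the key relations shown in Figure 1: one step for highly related keys such as C and G or Db and Ab, and six steps for maximally unrelated keys such as C and F#/Gb.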
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>The circle of fifths, showing distance in tonal space for major keys. Examples of enharmonic spellings for two keys are provided. The different tonal modulation conditions (close, mid, far) are shown as arrows around this circle.</p>
</caption>
<graphic xlink:href="fpsyg-17-1736951-g001.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Circle of fifths diagram illustrating types of key modulations from C major: no modulation to C (zero steps, blue), close modulation to G (one step, purple), mid modulation to A (three steps, red), and far modulation to G flat or F sharp (six steps, green), with corresponding key signatures shown for each key.</alt-text>
</graphic>
</fig>
<sec id="sec3">
<label>2.1</label>
<title>Method</title>
<sec id="sec4">
<label>2.1.1</label>
<title>Participants</title>
<p>Twenty-five trained pianists (19 females; mean age&#x202F;=&#x202F;20.64&#x202F;years, <italic>SD</italic>&#x202F;=&#x202F;2.50&#x202F;years) were recruited from the University of Toronto Scarborough student population. Participants received either credit in a psychology course or a $30 Amazon gift card for participating. Eligibility for this study required Grade 7 sight-reading ability based on guidelines established by the Royal Conservatory of Music (RCM) or an equivalent musical institution. At this grade level, RCM sight-reading assessments require individuals to sight-read unfamiliar musical materials either as lead sheets (with chord symbols) or as simple two-handed passages of 8&#x2013;12 measures in length, with these passages varying in their tonality (major or minor keys of up to three flats or sharps), time signatures (simple and compound meters), and rhythmic structure (<xref ref-type="bibr" rid="ref104">The Royal Conservatory, 2022</xref>). Grade 7 was selected as a conservative inclusion threshold to ensure fluent performance of the experimental stimuli (whose tonal, metric, and rhythmic complexity fell below the upper limits of the Grade 7 sight-reading demands) while avoiding unnecessarily restrictive eligibility criteria that would limit recruitment. Additionally, by Grade 7, pianists would have completed technical requirements (e.g., scales, arpeggios) encompassing all major and minor scales, ensuring familiarity with the full range of tonal materials used in this study. Participants reported an average of 14.64 (<italic>SD</italic>&#x202F;=&#x202F;3.03) years of playing, 10.68 (<italic>SD</italic>&#x202F;=&#x202F;3.08) years of formal instruction, and 5.52 (<italic>SD</italic>&#x202F;=&#x202F;6.06) hours of weekly practice, with a modal level of 10 for their highest certificate in piano performance.</p>
</sec>
<sec id="sec5">
<label>2.1.2</label>
<title>Materials</title>
<p>The stimuli for this study consisted of 10-measure melodies in 4/4 time, written for the right hand, composed entirely of quarter notes, with the final measure containing only a single note. All melodies were structured such that the first four measures (measures 1&#x2013;4) instantiated an initial tonality (called &#x201C;Key 1&#x201D;), followed by a transition measure (measure 5) in which the melody modulated to a new key (called &#x201C;Transition&#x201D;), and ended with five measures (measures 6&#x2013;10) that instantiated this new tonality (called &#x201C;Key 2&#x201D;). Sample melodies for this experiment are shown in <xref ref-type="fig" rid="fig2">Figure 2</xref>, denoting the Key 1, Transition, and Key 2 sections, as a function of the principal experimental manipulation (see below). All melodies were presented devoid of key signatures, with all accidentals (sharps and flats) written directly into the score.</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Sample melodies in the no modulation, close modulation, mid modulation, and far modulation conditions in Experiment 1.</p>
</caption>
<graphic xlink:href="fpsyg-17-1736951-g002.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Four horizontal musical notation examples demonstrate modulation types: no modulation (B-flat major to B-flat major), close modulation (B-flat major to F major), mid modulation (B-flat major to G major), and far modulation (B-flat major to E major). Each staff is labeled with key signature sections and color-coded measure bars for key one, transition, and key two, spanning ten measures.</alt-text>
</graphic>
</fig>
<p>The principal manipulation of this study involved the distance, in tonal space, of the modulation in the melody. This manipulation consisted of four conditions: &#x201C;no modulation,&#x201D; &#x201C;close modulation,&#x201D; &#x201C;mid modulation,&#x201D; and &#x201C;far modulation.&#x201D; <xref ref-type="fig" rid="fig1">Figure 1</xref> displays these four conditions schematically, with respect to the circle of fifths. No modulation melodies did not contain any movement in tonal space; Key 1 was the same as Key 2. Close modulation conditions involved a movement of one clockwise step along the circle of fifths. As already discussed, tonalities differing by a single step are as closely related as possible in Western music (<xref ref-type="bibr" rid="ref3">Aldwell and Schachter, 2002</xref>; <xref ref-type="bibr" rid="ref85">Rosen, 1972</xref>; <xref ref-type="bibr" rid="ref111">Walton, 2010</xref>). Mid modulation conditions involved a movement of three clockwise steps along the circle of fifths. Tonalities separated by this distance represent less common modulations in Western music (<xref ref-type="bibr" rid="ref3">Aldwell and Schachter, 2002</xref>; <xref ref-type="bibr" rid="ref53">Laitz, 1996</xref>; <xref ref-type="bibr" rid="ref111">Walton, 2010</xref>). Finally, far modulation conditions involved a movement of six steps along the circle of fifths. In Western music, such modulations are very uncommon (<xref ref-type="bibr" rid="ref3">Aldwell and Schachter, 2002</xref>; <xref ref-type="bibr" rid="ref111">Walton, 2010</xref>).<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref></p>
<p>Each of the four conditions comprised 12 melodies to be sight-read by participants. Each of these melodies started in one of the 12 unique major tonalities based on the chromatic set, and ended in a unique major tonality. For example, in the no modulation condition, one stimulus began in C major and ended in C major (see <xref ref-type="fig" rid="fig1">Figure 1</xref>), another began and ended in C#/Db major, another began and ended in D major, and so on. Similarly, in the close modulation condition, one stimulus began in C major and ended in G major (see <xref ref-type="fig" rid="fig1">Figure 1</xref>), one began in C#/Db major and ended in G#/Ab major, and so on. As a final example, in the far modulation condition, one stimulus began in C major and ended in F#/Gb major (see <xref ref-type="fig" rid="fig1">Figure 1</xref>), one began in C#/Db major and ended in G major, and so on. The 12 exemplars in each of the 4 modulation conditions thus produced 48 stimulus melodies in total.</p>
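The 48 start-end key pairs follow mechanically from the four step sizes, since each clockwise step on the circle of fifths raises the tonic by a perfect fifth. The following is our own illustrative reconstruction of the design, not the authors' stimulus-generation code; the condition labels and key spellings are for exposition only.

```python
# Illustrative reconstruction of the stimulus key pairs (our sketch, not the
# authors' materials): each clockwise step on the circle of fifths raises the
# tonic by a perfect fifth (7 semitones, mod 12).
STEPS = {"no": 0, "close": 1, "mid": 3, "far": 6}
NAMES = ["C", "C#/Db", "D", "D#/Eb", "E", "F",
         "F#/Gb", "G", "G#/Ab", "A", "A#/Bb", "B"]

def key_pairs(condition):
    """Return the 12 (Key 1, Key 2) pairs for one modulation condition."""
    n = STEPS[condition]
    return [(NAMES[start], NAMES[(start + 7 * n) % 12]) for start in range(12)]

for cond in STEPS:
    print(cond, key_pairs(cond)[0])  # the exemplar starting in C major
# 4 conditions x 12 exemplars = 48 stimulus melodies in total
```

For instance, this yields C to G for the close modulation exemplar and C to F#/Gb for the far modulation exemplar, matching the examples in the text.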
<p>The decision to employ one-handed melodic lines, devoid of key signatures, was driven by a few factors. First, had we included key signatures, this information could have provided inadvertent <italic>a priori</italic> cueing to performers not only of the tonal modulation condition of a given to-be-performed stimulus, but also of the location of the transition between keys within the stimulus. Along these lines, the tonal modulation would have been discernible from the nature of the key signatures, with the no modulation melodies containing only a single key signature, far modulations containing two divergent key signatures, likely employing differing accidental types (e.g., sharps in one key signature and flats in the other), and the close and mid modulation conditions falling between these two extremes. Moreover, the placement of the second key signature in the score (at the beginning of the transition measure, within the transition measure itself, or at the beginning of the second key section) would have indicated where the key transition occurred. Although performers would likely deduce these aspects through their performances over the course of the experiment, it was considered important not to provide such information <italic>a priori</italic>, available through a cursory visual inspection of the scores.</p>
<p>As for employing one-handed melodic lines, this choice was driven by multiple factors. First, and pragmatically, based on work by <xref ref-type="bibr" rid="ref59">Lewandowska and Schmuckler (2019)</xref>, one-handed melodic lines seemed to offer an optimal level of performance challenge. As described earlier, in Lewandowska and Schmuckler, performers sight-read both one-handed (single melodic lines) and two-handed (homophonic and polyphonic) four-measure musical passages, producing error rates between 6% (one-handed) and 19% (two-handed). Given that the current study intended to use passages over twice as long as those in this previous work, there was a concern that requiring two-handed performances would pose overwhelming sight-reading challenges for performers.</p>
<p>Second, and converging with the concern just expressed regarding key signatures, the exact content of two-handed passages was also felt to be potentially problematic. Given the utility of using melodies as stimuli, in terms of being able to create a key transition section, as opposed to a single transition (e.g., a pivot) event, two-handed performance would have required either two independent right- and left-hand melodic lines, or a right-hand melodic line with a left-hand harmonic accompaniment. The former is not ideal in that it would have required performers to sight-read the equivalent of (simplified) two-part inventions, whereas the latter would have re-introduced idealized harmonic information that would be the virtual equivalent of including key signatures. For all of these reasons, it was determined that one-handed melodies were the optimal choice for stimuli.</p>
<p>All stimuli were rendered as musical scores, written in treble clef. Different types of accidentals (i.e., sharps versus flats) were employed both across and importantly, within stimuli, depending upon the specific enharmonic spelling of the two keys sounded. As an example, for the far modulation stimulus Db &#x2013; G, Key 1 (Db) employed flats, whereas Key 2 (G) employed sharps. The mixed use of both sharps and flats within a score is common in Western tonal music, and would not be seen as unusual by participants. Finally, all melodies were written within a two-octave span ranging from C<sub>4</sub> to C<sub>6</sub>.</p>
</sec>
<sec id="sec6">
<label>2.1.3</label>
<title>Apparatus</title>
<p>Participants performed these melodies on a Yamaha S-80, a full-sized, weighted digital keyboard, connected to a Windows-based PC via a Digidesign Mbox 2 Pro interface. Both MIDI and audio data were recorded using Cakewalk SONAR Studio (v8.0). An electronic metronome set to 140 beats per minute (BPM) provided a tempo cue throughout all trials, encouraging participants to maintain a constant tempo, to play continuously through each trial without stopping to correct errors, and to avoid playing so slowly as to make no errors. Auditory feedback from the keyboard was routed through a Mackie 1604 mixing console and presented to performers through a pair of Project Studio 5 speakers placed to their left and right.</p>
</sec>
<sec id="sec7">
<label>2.1.4</label>
<title>Procedure</title>
<p>Prior to beginning the experiment, participants were simply told that this study was exploring musical sight-reading, provided written informed consent, and filled out an online questionnaire collecting relevant demographic and musical background information. Specifically, participants were told that they would be sight-reading a series of melodies with their right hand only, and that all of these melodies would be 10 measures long in 4/4 time, consisting only of quarter notes. Participants were also instructed that there would be no key signatures for these melodies, and that any accidentals that occurred should be treated as typical for musical scores (i.e., the occurrence of an accidental remained in effect for all subsequent occurrences of that specific note in the measure, but not beyond). Participants were told they would hear a metronome during the trial, and were asked to perform the passage with an emphasis on pitch accuracy (play the indicated note), while maintaining the tempo of the metronome. Thus, performers were asked to not &#x201C;correct&#x201D; errors in sight-reading, but to simply continue their performance regardless of the occurrence of errors; such a constraint is common in sight-reading contexts.</p>
<p>At the start of each trial, participants received 10&#x202F;s to scan the to-be-performed melody. Once this time elapsed, performers heard two measures (i.e., eight beats) of a metronome count-in, after which they began their performance. After performing a given melody, performers were given a few seconds&#x2019; break, and then began the next trial. The 48 trials (4 modulation conditions&#x202F;&#x00D7;&#x202F;12 exemplars) were presented in random order for all participants. Halfway through the experiment (i.e., after completing the 24th trial), participants were offered a break if they desired. After finishing the 48 trials, participants were debriefed as to the nature of the stimuli and the purpose of the study. The entire experimental session lasted 60&#x2013;90&#x202F;min in total.</p>
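For concreteness, the timing implied by these parameters can be worked out from the 140 BPM metronome. This is our own back-of-envelope calculation, not figures reported by the authors:

```python
# Back-of-envelope trial timing (our own calculation from the stated parameters).
BPM = 140
beat_s = 60 / BPM            # duration of one quarter note, ~0.429 s
count_in_s = 8 * beat_s      # two measures of 4/4 metronome count-in
performance_s = 40 * beat_s  # 10 measures x 4 beats per measure
scan_s = 10                  # preview time before the count-in
print(round(count_in_s, 2), round(performance_s, 2))  # 3.43 17.14
```

Each trial thus occupied roughly 30 s of scanning, count-in, and performance, consistent with a 48-trial session lasting 60&#x2013;90 min once breaks and inter-trial pauses are included.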
</sec>
</sec>
<sec id="sec8">
<label>2.2</label>
<title>Results</title>
<sec id="sec9">
<label>2.2.1</label>
<title>Data preprocessing</title>
<p>All performances were output as musical scores (using the aforementioned SONAR software) in common Western music notation. Pitch errors for all trials were manually coded by comparing the pitch events of the performed melody with the original musical score. Pitch errors were based on performance error coding schemes adapted from <xref ref-type="bibr" rid="ref73">Palmer and van de Sande (1993</xref>, <xref ref-type="bibr" rid="ref74">1995)</xref>, and were defined as any played note that was different from what was notated in the score (a substitution or added note) or a note omitted altogether during sight-reading. <xref ref-type="fig" rid="fig3">Figure 3</xref> shows examples of the performances for the sample melodies presented previously (<xref ref-type="fig" rid="fig2">Figure 2</xref>), with errors in performances indicated.</p>
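The comparison between performed notes and the notated score can be viewed as a sequence-alignment problem. The following is a conceptual stand-in for the manual coding scheme adapted from Palmer and van de Sande, using a generic alignment from Python's standard library; it is our illustrative sketch, not the authors' coding procedure.

```python
from difflib import SequenceMatcher

# Conceptual stand-in for the manual error coding (our sketch): align performed
# MIDI pitches with the notated score and classify mismatches.
def code_pitch_errors(score_pitches, performed_pitches):
    errors = {"substitution": 0, "addition": 0, "omission": 0}
    sm = SequenceMatcher(a=score_pitches, b=performed_pitches, autojunk=False)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "replace":   # played a different note than notated
            errors["substitution"] += max(i2 - i1, j2 - j1)
        elif op == "insert":  # played a note not in the score
            errors["addition"] += j2 - j1
        elif op == "delete":  # omitted a notated note
            errors["omission"] += i2 - i1
    return errors

# Score C4 D4 E4 F4; performance substitutes Eb4 for E4 and adds a G4.
print(code_pitch_errors([60, 62, 64, 65], [60, 62, 63, 65, 67]))
```

This yields one substitution and one addition for the example performance, mirroring the substitution/addition/omission categories defined in the text.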
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>Sample performed melodies in the no modulation, close modulation, mid modulation, and far modulation conditions in Experiment 1. Performance errors are indicated with arrows (&#x2191;). Note that added notes have been written as eighth notes, rather than quarter notes; this rhythmic deviation was employed to facilitate comparison with <xref ref-type="fig" rid="fig2">Figure 2</xref>.</p>
</caption>
<graphic xlink:href="fpsyg-17-1736951-g003.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Four horizontal sheet music excerpts labeled No Modulation, Close Modulation, Mid Modulation, and Far Modulation feature melodies in blue, purple, red, and green respectively, each with several vertical arrows below certain notes indicating errors in performance.</alt-text>
</graphic>
</fig>
<p>Pitch errors were aggregated for each performance for each participant for subsequent analysis. The initial step in this process involved summing the total number of errors occurring on a measure-by-measure basis; for this analysis, errors for measure 10 were included with those for measure 9. The summed errors were converted to percent errors by dividing by the number of to-be-performed notes in each measure and multiplying by 100. The error rates for each measure were then averaged across the 12 exemplars of each of the no modulation, close modulation, mid modulation, and far modulation conditions for each participant. Finally, aggregate error rates representing the different sections based on their sounded tonalities were calculated. For this measure, average error rates were created for measures 1&#x2013;4 (Key 1), corresponding to the first tonality, and for measures 6&#x2013;9 (Key 2), corresponding to the second tonality; error rates for measure 5 (Transition), the transition measure between the tonalities, were kept separate. This process created three sections for each modulation condition: Key 1, Transition, and Key 2. Note that for the no modulation condition, these sections all represent the same tonality. Thus, this division was based on the temporal parallels between these melodies and the remaining three conditions, not on the inherent tonality of these sections <italic>per se</italic>.<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref></p>
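This aggregation pipeline can be sketched compactly. The following is our reconstruction for illustration only; the per-measure note counts assume four quarter notes per measure, with the final measure's single note folded into measure 9, as described in the text.

```python
import numpy as np

# Our illustrative reconstruction of the error-aggregation pipeline.
# Measures 1-9 each contain 4 quarter notes; measure 10's single note is
# folded into measure 9, giving that bin 5 to-be-performed notes.
NOTES_PER_MEASURE = np.array([4, 4, 4, 4, 4, 4, 4, 4, 5])

def section_error_rates(errors_by_measure):
    """errors_by_measure: (12 exemplars x 9 measures) array of raw error counts."""
    percent = 100.0 * errors_by_measure / NOTES_PER_MEASURE  # percent error per measure
    per_measure = percent.mean(axis=0)  # average across the 12 exemplars
    return {
        "Key 1": float(per_measure[0:4].mean()),   # measures 1-4
        "Transition": float(per_measure[4]),       # measure 5
        "Key 2": float(per_measure[5:9].mean()),   # measures 6-9 (incl. measure 10)
    }

counts = np.zeros((12, 9))
counts[:, 4] = 1  # one error in the transition measure of every exemplar
print(section_error_rates(counts))
```

For the toy input above, a single error in the four-note transition measure of every exemplar yields a 25% Transition error rate and 0% in both key sections.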
</sec>
<sec id="sec10">
<label>2.2.2</label>
<title>Data analysis</title>
<p>Error rates were analyzed using a two-way Analysis of Variance (ANOVA), with the within-subjects factors of <italic>Tonal Section</italic> (Key 1, Transition, Key 2) and <italic>Modulation Condition</italic> (no modulation, close modulation, mid modulation, far modulation). The Shapiro&#x2013;Wilk test indicated that normality was violated for all combinations of these two variables. However, because ANOVAs are robust to such violations (<xref ref-type="bibr" rid="ref87">Schmider et al., 2010</xref>) without inflating Type I error, we decided not to transform the data. Additionally, Mauchly&#x2019;s test indicated that sphericity was violated for the two-way interaction between <italic>Tonal Section</italic> and <italic>Modulation Condition</italic> (<italic>p</italic>&#x202F;=&#x202F;0.01). Accordingly, for this effect, the Greenhouse&#x2013;Geisser correction was applied to the relevant degrees of freedom and is reported here.</p>
<p><xref ref-type="fig" rid="fig4">Figure 4</xref> presents the error rates (and SEs) as a function of <italic>Tonal Section</italic> and <italic>Modulation Condition</italic>. This ANOVA revealed a main effect of <italic>Tonal Section</italic>, <italic>F</italic>(2,48)&#x202F;=&#x202F;10.53, <italic>MSE</italic>&#x202F;=&#x202F;31.50, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, <italic>partial &#x03B7;<sup>2</sup></italic>&#x202F;=&#x202F;0.30, but no main effect of <italic>Modulation Condition</italic>, <italic>F</italic>(3,72)&#x202F;=&#x202F;1.56, <italic>MSE</italic>&#x202F;=&#x202F;16.32, <italic>n.s.</italic>, nor any interaction between the two factors, <italic>F</italic>(4.29,102.94)&#x202F;=&#x202F;1.32, <italic>MSE</italic>&#x202F;=&#x202F;16.65, <italic>n.s</italic>. To examine differences in error rates as a function of the main effect of <italic>Tonal Section</italic> (i.e., collapsed across <italic>Modulation Condition</italic>), <italic>post hoc</italic> tests using Holm-Bonferroni corrections were conducted for all pairwise comparisons. The Holm&#x2013;Bonferroni procedure is a multiple comparison correction that adjusts significance thresholds in a stepwise manner, controlling the family-wise error rate while providing greater statistical power than the standard Bonferroni correction. 
These comparisons revealed significant differences between error rates for all pairs of means: Key 1 (<italic>M</italic>&#x202F;=&#x202F;9.00, <italic>SE</italic>&#x202F;=&#x202F;1.54) versus Transition (<italic>M</italic>&#x202F;=&#x202F;12.65, <italic>SE</italic>&#x202F;=&#x202F;2.31), <italic>t</italic>(24)&#x202F;=&#x202F;&#x2212;3.91, <italic>p</italic>&#x202F;=&#x202F;0.002, Key 1 versus Key 2 (<italic>M</italic>&#x202F;=&#x202F;10.34, <italic>SE</italic>&#x202F;=&#x202F;1.94), <italic>t</italic>(24)&#x202F;=&#x202F;&#x2212;2.74, <italic>p</italic>&#x202F;=&#x202F;0.023, and Transition versus Key 2, <italic>t</italic>(24)&#x202F;=&#x202F;2.42, <italic>p</italic>&#x202F;=&#x202F;0.023.</p>
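The Holm&#x2013;Bonferroni step-down procedure described above can be stated compactly: sort the m p-values in ascending order and compare the i-th smallest against alpha divided by the number of hypotheses not yet tested, stopping at the first failure. A generic sketch of the standard algorithm (not the authors' analysis code) follows; the example p-values are illustrative.

```python
# Generic sketch of the Holm-Bonferroni step-down procedure (standard
# algorithm, not the authors' analysis code).
def holm_bonferroni(p_values, alpha=0.05):
    """Return a reject (True) / retain (False) flag for each p-value."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # indices, ascending p
    rejected = [False] * m
    for rank, i in enumerate(order):
        # Step-down threshold: alpha/m for the smallest p, alpha/(m-1) next, ...
        if alpha / (m - rank) >= p_values[i]:
            rejected[i] = True
        else:
            break  # first failure: retain this and all larger p-values
    return rejected

# Three pairwise Tonal Section comparisons (illustrative p-values):
print(holm_bonferroni([0.002, 0.023, 0.023]))  # [True, True, True]
```

Because testing stops at the first non-significant comparison, the procedure controls the family-wise error rate while remaining uniformly more powerful than applying alpha/m to every test.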
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption>
<p>Percent error (and SEs) for the <italic>Tonal modulation</italic> conditions, as a function of <italic>Tonal Section</italic>, for Experiment 1. This figure is presented for information only; the two-way interaction was not significant.</p>
</caption>
<graphic xlink:href="fpsyg-17-1736951-g004.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Line graph showing percent error rate on the y-axis versus tonal section on the x-axis for four tonal modulation conditions&#x2014;no modulation, close, mid, and far modulation&#x2014;with data points for Key 1, Transition, and Key 2, and error bars included.</alt-text>
</graphic>
</fig>
<p>Despite the non-significant interaction, a series of <italic>post hoc</italic> analyses with Holm-Bonferroni corrections examined the differences between these tonal sections for each modulation condition individually. The results of these comparisons appear in <xref ref-type="table" rid="tab1">Table 1</xref>. Generally, error rates for Key 1 were less than for the Transition section for the no modulation (marginally significant), and the mid and far modulation conditions (significant). When comparing Key 1 and Key 2, error rates for Key 2 exceeded those for Key 1 for both mid and far modulations. Finally, comparisons of error rates between the Transition section and Key 2 were less systematic, with differences for the no modulation condition (again marginally significant) and the far modulation condition, but not for the close or mid modulation conditions.</p>
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>Results of <italic>post hoc</italic> analyses with Holm-Bonferroni corrections for <italic>Tonal Section</italic> in Experiments 1 and 2.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top" colspan="4">Experiment 1</th>
</tr>
<tr>
<th align="left" valign="top">Modulation condition</th>
<th align="center" valign="top">Key 1 &#x2013; Transition</th>
<th align="center" valign="top">Key 1 &#x2013; Key 2</th>
<th align="center" valign="top">Transition &#x2013; Key 2</th>
</tr>
<tr>
<th/>
<th align="center" valign="top"><italic>t</italic>(24)</th>
<th align="center" valign="top"><italic>t</italic>(24)</th>
<th align="center" valign="top"><italic>t</italic>(24)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">No modulation</td>
<td align="center" valign="top">&#x2013;2.37<sup>A</sup></td>
<td align="center" valign="top">&#x2212;1.08</td>
<td align="center" valign="top">2.51<sup>A</sup></td>
</tr>
<tr>
<td align="left" valign="top">Close modulation</td>
<td align="center" valign="top">&#x2212;2.08</td>
<td align="center" valign="top">&#x2212;2.31</td>
<td align="center" valign="top">0.88</td>
</tr>
<tr>
<td align="left" valign="top">Mid modulation</td>
<td align="center" valign="top">&#x2212;2.69&#x002A;</td>
<td align="center" valign="top">&#x2212;2.78&#x002A;</td>
<td align="center" valign="top">0.72</td>
</tr>
<tr>
<td align="left" valign="top">Far modulation</td>
<td align="center" valign="top">&#x2212;3.52&#x002A;&#x002A;&#x002A;</td>
<td align="center" valign="top">&#x2212;2.09&#x002A;</td>
<td align="center" valign="top">2.43&#x002A;</td>
</tr>
</tbody>
</table>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top" colspan="4">Experiment 2 (no modulation condition first)</th>
</tr>
<tr>
<th align="left" valign="top">Modulation condition</th>
<th align="center" valign="top">Key 1 &#x2013; Transition</th>
<th align="center" valign="top">Key 1 &#x2013; Key 2</th>
<th align="center" valign="top">Transition &#x2013; Key 2</th>
</tr>
<tr>
<th/>
<th align="center" valign="top"><italic>t</italic>(14)</th>
<th align="center" valign="top"><italic>t</italic>(14)</th>
<th align="center" valign="top"><italic>t</italic>(14)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">No modulation</td>
<td align="center" valign="top">&#x2212;1.34</td>
<td align="center" valign="top">&#x2212;1.12</td>
<td align="center" valign="top">0.81</td>
</tr>
<tr>
<td align="left" valign="top">Close modulation</td>
<td align="center" valign="top">&#x2212;1.00</td>
<td align="center" valign="top">&#x2212;1.49</td>
<td align="center" valign="top">&#x2212;0.02</td>
</tr>
<tr>
<td align="left" valign="top">Mid modulation</td>
<td align="center" valign="top">&#x2212;1.50</td>
<td align="center" valign="top">&#x2212;1.61</td>
<td align="center" valign="top">0.36</td>
</tr>
<tr>
<td align="left" valign="top">Far modulation</td>
<td align="center" valign="top">&#x2212;3.04&#x002A;</td>
<td align="center" valign="top">&#x2212;3.76&#x002A;&#x002A;</td>
<td align="center" valign="top">&#x2212;0.74</td>
</tr>
</tbody>
</table>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top" colspan="4">Experiment 2 (modulation conditions first)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">No modulation</td>
<td align="center" valign="top">&#x2212;0.46</td>
<td align="center" valign="top">&#x2212;1.76</td>
<td align="center" valign="top">&#x2212;1.29</td>
</tr>
<tr>
<td align="left" valign="top">Close modulation</td>
<td align="center" valign="top">&#x2212;2.11</td>
<td align="center" valign="top">&#x2212;2.90&#x002A;</td>
<td align="center" valign="top">0.12</td>
</tr>
<tr>
<td align="left" valign="top">Mid modulation</td>
<td align="center" valign="top">&#x2212;0.29</td>
<td align="center" valign="top">&#x2212;2.19</td>
<td align="center" valign="top">&#x2212;2.68<sup>A</sup></td>
</tr>
<tr>
<td align="left" valign="top">Far modulation</td>
<td align="center" valign="top">&#x2212;4.76&#x002A;&#x002A;&#x002A;&#x002A;</td>
<td align="center" valign="top">&#x2212;6.25&#x002A;&#x002A;&#x002A;&#x002A;</td>
<td align="center" valign="top">&#x2212;2.72&#x002A;</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><sup>A</sup><italic>p</italic>&#x202F;&#x003C;&#x202F;0.06; &#x002A;<italic>p</italic>&#x202F;&#x003C;&#x202F;0.05; &#x002A;&#x002A;<italic>p</italic>&#x202F;&#x003C;&#x202F;0.01; &#x002A;&#x002A;&#x002A;<italic>p</italic>&#x202F;&#x003C;&#x202F;0.005; &#x002A;&#x002A;&#x002A;&#x002A;<italic>p</italic>&#x202F;&#x003C;&#x202F;0.001.</p>
</table-wrap-foot>
</table-wrap>
</sec>
</sec>
<sec id="sec11">
<label>2.3</label>
<title>Discussion</title>
<p>With respect to the principal goals of this experiment, the current results present a mix of anticipated and unexpected findings. In terms of the expected results, there is an effect of modulation on pianists&#x2019; abilities to accurately sight-read a passage of music. This impact was evident in the (averaged across modulation condition) pairwise comparisons of all three sections: Key 1 versus Transition, Key 1 versus Key 2, and Transition versus Key 2. Considered globally, these differences align with the first and second predictions highlighted earlier, and are what might be expected from a perception-action framework for sight-reading. The necessity of shifting perceptual and motor representations produced by modulation disrupted performance, producing increased error rates at the transition point between the keys, as well as increased errors in the subsequent key section relative to the initial key section.</p>
<p>In line with our third prediction, there also appears to be an effect of the degree of movement in tonal space on performance. Specifically, when looking at the individual modulation conditions, the far modulation condition consistently produced significant differences across the various sections. In contrast, in the no modulation condition, although there appears to be an increase (albeit non-significant) in error rates at the transition section, there was no difference between the two key areas of these passages. The close and mid modulations fall between these two sets of patterns, with the close modulations producing no differences between the sections, and the mid modulations producing differences between two of the three sections. These results largely align with predictions regarding the effect of distance of tonal modulation on error rates in performance, with the most disruption in performance observed for the farthest distance, and decreasing disruption with decreasing tonal distance.</p>
<p>One unexpected result of this study was the increase in error rates at the transition point in the no modulation condition. Although this increase was not significant, the pattern is nevertheless striking given that, because there was no modulation in these stimuli, there is no logical reason for performance accuracy to have varied at all across these sections; all sections (Key 1, Transition, Key 2) were literally in the same key. As such, there simply should not have been any change in error rates observed across these melodies. One of the principal goals of Experiment 2 was thus to investigate this perplexing result.</p>
</sec>
</sec>
<sec id="sec12">
<label>3</label>
<title>Experiment 2: sight-reading of modulating melodies: abstraction of anticipated tonal movement</title>
<p>One reason that the no modulation melodies of the previous experiment might have demonstrated unexpected increased error rates lies in the decision to randomly order the stimulus melodies in this study. Although random presentation of stimuli is generally considered a <italic>sine qua non</italic> of good experimental design, in the current case this procedure led to participants experiencing experimental sessions in which three-quarters of the trials had modulating melodies, and one-quarter had non-modulating melodies. One possibility is that this high prevalence of modulating passages may have biased performers to generally anticipate key movement in these stimuli. Although this expectation would be unfulfilled for the no-modulation melodies, it could have nevertheless led to increased errors at the transition position through a variety of mechanisms, including performers actively inhibiting the previous key section, misreading of the notes in scores, and so on.</p>
<p>Of course, this explanation assumes that participants in an experiment actively abstract structural regularities from a set of presumably independent individual trials in an experiment, with these regularities then guiding responses in the experimental context. In fact, recent work by <xref ref-type="bibr" rid="ref96">Schmuckler et al. (2020)</xref> uncovered evidence for exactly such a process. Presented as a set of case studies, this paper highlighted how the unintended tonal structure of a set of stimuli, discernible only as an abstract property across a series of independent trials, each of which was individually devoid of this overall structure, nevertheless influenced participants&#x2019; responses in these contexts. The two experimental contexts producing these effects differed dramatically, with the first involving pianists&#x2019; performances of two-note dyads (based on work by <xref ref-type="bibr" rid="ref94">Schmuckler and Bosman, 1997</xref>), and the second involving listeners&#x2019; expectancy generation and memory for tones in short atonal melodies (based on work by <xref ref-type="bibr" rid="ref110">Vuvan et al., 2014</xref>). Without going into the details of these analyses (see <xref ref-type="bibr" rid="ref96">Schmuckler et al., 2020</xref>, for a detailed review and discussion), this work demonstrated the feasibility of abstraction of overall structural properties, and specifically tonal structure, across a series of trials in both performance and perception contexts. Accordingly, the idea that performers in Experiment 1 anticipated key modulations in these stimuli seems quite possible. One of the principal goals of this study was to examine this hypothesis.</p>
<p>This study also provided an opportunity to test the impact of a (belatedly recognized) potential confound in the stimuli of the previous experiment. Specifically, it was realized that the stimuli employed in Experiment 1 contained a relation between the tonal distance of the modulating melodies and the tendency for the written notation of the melodies to contain a mixed set of accidentals (i.e., the use of both flats and sharps in the melodies). For the no modulation and close modulation conditions there were no occurrences of mixed accidentals in the stimulus scores. However, the 12 mid-modulation melodies contained four melodies with mixed accidental scores, and the far modulation stimuli contained 10 melodies with mixed accidental scores. Although the occurrence of both flats and sharps in a musical score is naturalistically common, their presence could have nevertheless created additional attentional demands for performers, above and beyond those needed for the perceptual-motor requirements of sight-reading. These additional demands may have thus contributed to, or even driven, the observed increase in errors across these conditions. Accordingly, it is of interest to examine the impact of this factor on the tonal distance effects just observed.</p>
<p>Experiment 2 addressed both of these aspects. To evaluate whether exposure to modulating sequences generates modulatory expectations in the no modulation sequences, we simply blocked the presentation order of the different conditions. Specifically, one group of participants completed all the no-modulation condition trials prior to encountering any of the other modulation conditions, whereas a second group completed the no modulation trials after experiencing the modulation conditions. Comparing performance between these two groups allows for a direct assessment of whether prior experience with modulating sequences drives expectations in non-modulating contexts. If such exposure induces an expectation of modulation, then error rates in the Transition section, and potentially the Key 2 section, should be reduced for the first group relative to the second group of performers. To address concerns regarding mixed accidentals, this issue can be eliminated by employing enharmonic note spellings (e.g., replace Bb with A#) whenever necessary to equate the types of accidentals across the two tonalities. Although this approach introduces some unconventional instances of musical notation (as discussed in the upcoming methods section), it offers a straightforward solution to this issue.</p>
<sec id="sec13">
<label>3.1</label>
<title>Methods</title>
<sec id="sec14">
<label>3.1.1</label>
<title>Participants</title>
<p>Thirty trained pianists (22 females; mean age&#x202F;=&#x202F;20.56&#x202F;years, <italic>SD</italic>&#x202F;=&#x202F;1.63&#x202F;years) were recruited from the University of Toronto Scarborough population. Participants received either extra credit in a psychology course or a $30 Amazon Gift card for participating. As with Experiment 1, eligibility requirements included having a minimum of Grade 7 sight-reading ability using guidelines established by the Royal Conservatory of Music or an equivalent musical institution. Participants reported an average of 15.33 (<italic>SD</italic>&#x202F;=&#x202F;3.32) years of playing the piano, 10.93 (<italic>SD</italic>&#x202F;=&#x202F;3.82) years of formal instruction, 2.78 (<italic>SD</italic>&#x202F;=&#x202F;3.02) hours of weekly practice, with a modal level of 10 for their highest certificate in piano performance.</p>
</sec>
<sec id="sec15">
<label>3.1.2</label>
<title>Materials, apparatus, experimental design, and procedure</title>
<p>With respect to the stimulus materials, the principal variation from the previous experiment involved modifying the to-be-performed musical scores to remove all instances of mixed accidentals (i.e., both flats and sharps) in a given musical score, when necessary. These changes were accomplished via the use of enharmonic note spellings for one of the two component tonalities in the passage. <xref ref-type="fig" rid="fig5">Figure 5</xref> shows the implementation of this process of harmonizing accidentals for the previously viewed sample melodies, explicitly indicating the occurrence of enharmonic note usage.</p>
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption>
<p>Sample melodies in the no modulation, close modulation, mid modulation, and far modulation conditions in Experiment 2. Harmonized accidentals are indicated with arrows (&#x2191;).</p>
</caption>
<graphic xlink:href="fpsyg-17-1736951-g005.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Sheet music graphic with four panels labeled No Modulation, Close Modulation, Mid Modulation, and Far Modulation, each displaying musical notes in different colors. Arrows below Mid and Far Modulation indicate notes that employed enharmonic note spelling.</alt-text>
</graphic>
</fig>
<p>It should be noted that employing enharmonic notations does produce unusual note collections within a music-theoretic framework. For instance, the far modulation stimulus shown in <xref ref-type="fig" rid="fig2">Figures 2</xref>, <xref ref-type="fig" rid="fig3">3</xref>, and <xref ref-type="fig" rid="fig5">5</xref> comprises movement from Bb major to E major. Bb major employs two flats (Bb and Eb), whereas E major contains four sharps (F#, C#, G#, D#). One way of equating the accidentals across these two sections would be to use the enharmonic key for one of the two &#x2013; either A# major for Bb major, which would change the flats to sharps, or Fb major for E major, which would change the sharps to flats. Although both of these enharmonic tonalities are technically possible, neither is especially practicable, given that both are unusual and somewhat novel keys for performers. As an alternative, rather than employing the entire enharmonic tonality, it is also possible to modify only the notes containing the mixed accidentals for one of the two tonalities. In the previous example, this would mean changing either the Bb and Eb to A# and D# in the Bb major tonality, or the F#, C#, G#, and D# to Gb, Db, Ab, and Eb, in the E major tonality. In this case, although modification of Bb major requires fewer changes, because we attempted to roughly balance the proportion of melodies written with flats and sharps, we determined it was better to employ the enharmonic spellings for the E major notes. As an aside, it is recognized that this procedure does create an unusual spelling of the diatonic set (E, Gb, Ab, A, B, Db, Eb). However, because none of the melodies contained key signatures, or spelled out the diatonic set directly, the impact of this oddity was judged to be minor.</p>
<p>The critical design modification of this study involved using a blocked design for stimulus presentation, with two groups of participants. The first group (<italic>N</italic>&#x202F;=&#x202F;15), called the &#x201C;no modulation condition first&#x201D; group, received a random ordering of the no modulation melodies in a single block of 12 trials, presented at the beginning of the experiment. After completing these trials, the remaining 36 trials for the close, mid, and far modulation conditions were then randomly presented. The second group (<italic>N</italic>&#x202F;=&#x202F;15), called the &#x201C;modulation conditions first&#x201D; group, received a block of the randomly ordered 36 trials for the close, mid, and far modulation conditions initially, followed by a block of the randomly ordered 12 no modulation melodies. Note that this design fails to examine the impact of the degree of exposure to modulating melodies on performance (testing this question would require a Latin square design consisting of all counterbalanced orderings of conditions); however, because our question was focused only on whether experience with modulating melodies produced an expectation for such modulation, such a design, although more nuanced, is significantly less efficient. Accordingly, we restricted our focus to the basic issue of having or not having experienced modulation melodies. All remaining aspects of the experimental apparatus and procedure were identical to Experiment 1.</p>
</sec>
</sec>
<sec id="sec16">
<label>3.2</label>
<title>Results</title>
<p>Performances were preprocessed equivalently to Experiment 1, resulting in average error rates for all performers as a function of individual measures, ultimately aggregated into the three sections of Key 1, Transition, and Key 2, for all four modulation types. Error rates were analyzed in a 3-way ANOVA, with the within-subjects factors of <italic>Tonal Section</italic> (Key 1, Transition, Key 2) and <italic>Modulation Condition</italic> (no modulation, close modulation, mid modulation, far modulation), as well as the between-subjects factor of <italic>Condition Order</italic> (no modulation condition first, modulation conditions first). Initial analyses found that, based on the Shapiro&#x2013;Wilk test, normality was violated for 6 of the possible 24 combinations of variables in this study. Accordingly, given the robustness of ANOVAs with respect to such violations (<xref ref-type="bibr" rid="ref87">Schmider et al., 2010</xref>), and the fact that these normality violations occurred for only a handful of our means, we elected not to transform the data. Additionally, Mauchly&#x2019;s test indicated that sphericity was violated for the main effects of <italic>Tonal Section</italic> (<italic>p</italic>&#x202F;=&#x202F;0.003) and <italic>Modulation Condition</italic> (<italic>p</italic>&#x202F;=&#x202F;0.004), as well as the <italic>Tonal Section</italic> x <italic>Modulation Condition</italic> interaction (<italic>p</italic>&#x202F;&#x003C;&#x202F;0.001). Accordingly, Greenhouse&#x2013;Geisser corrections were applied to the degrees of freedom for these factors. Finally, this study observed a violation of the assumption of homogeneity of variance, based on Levene&#x2019;s test for equality of variance. Fortunately, one consequence of the application of the Greenhouse&#x2013;Geisser correction is that it also addresses violations of homogeneity of variance.</p>
<p><xref ref-type="fig" rid="fig6">Figure 6</xref> shows errors rates (and SEs) as a function of the three factors noted above. The ANOVA revealed main effects for <italic>Tonal Section</italic>, <italic>F</italic>(1.48,41.38)&#x202F;=&#x202F;19.18, <italic>MSE</italic>&#x202F;=&#x202F;69.50, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, <italic>partial &#x03B7;<sup>2</sup></italic>&#x202F;=&#x202F;0.41, and <italic>Modulation Condition</italic>, <italic>F</italic>(2.14,51.80)&#x202F;=&#x202F;41.65, <italic>MSE</italic>&#x202F;=&#x202F;43.18, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, <italic>partial &#x03B7;<sup>2</sup></italic>&#x202F;=&#x202F;0.60, but no main effect for <italic>Condition Order</italic>, <italic>F</italic>(1,28)&#x202F;=&#x202F;0.86, <italic>MSE</italic>&#x202F;=&#x202F;675. For the main effect of <italic>Tonal Section</italic>, <italic>post hoc</italic> tests with Holm-Bonferroni corrections revealed significant differences between the Key 1 (<italic>M</italic>&#x202F;=&#x202F;10.34, <italic>SE</italic>&#x202F;=&#x202F;1.08) and Transition (<italic>M</italic>&#x202F;=&#x202F;14.45, <italic>SE</italic>&#x202F;=&#x202F;1.50) sections, <italic>t</italic>(28)&#x202F;=&#x202F;&#x2212;4.80, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, as well as Key 1 and Key 2 (<italic>M</italic>&#x202F;=&#x202F;15.84, <italic>SE</italic>&#x202F;=&#x202F;1.76) sections, <italic>t</italic>(28)&#x202F;=&#x202F;&#x2212;4.79, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001. The difference between the Transition and Key 2 sections was marginally significant, <italic>t</italic>(28)&#x202F;=&#x202F;&#x2212;2.05, <italic>p</italic>&#x202F;=&#x202F;0.05. 
For the main effect of <italic>Modulation Condition</italic>, errors increased systematically across the no modulation (<italic>M</italic>&#x202F;=&#x202F;10.17, <italic>SE</italic>&#x202F;=&#x202F;1.19), close modulation (<italic>M</italic>&#x202F;=&#x202F;12.18, <italic>SE</italic>&#x202F;=&#x202F;1.46), mid modulation (<italic>M</italic>&#x202F;=&#x202F;12.92, <italic>SE</italic>&#x202F;=&#x202F;1.43), and far modulation (<italic>M</italic>&#x202F;=&#x202F;18.93, <italic>SE</italic>&#x202F;=&#x202F;1.71) conditions. <italic>Post hoc</italic> tests with Holm-Bonferroni corrections revealed significant differences between all pairwise comparisons, with the exception of the close and mid modulation conditions. Of the remaining effects, there was a significant <italic>Tonal Section</italic> x <italic>Modulation Condition</italic> interaction, <italic>F</italic>(3.60,100.92)&#x202F;=&#x202F;9.96, <italic>MSE</italic>&#x202F;=&#x202F;34.52, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, <italic>partial &#x03B7;<sup>2</sup></italic>&#x202F;=&#x202F;0.26.</p>
<fig position="float" id="fig6">
<label>Figure 6</label>
<caption>
<p>Percent error (and SEs) for the <italic>Tonal Modulation</italic> conditions, as a function of <italic>Tonal Section</italic>, in the no modulation condition first (top) and modulation conditions first (bottom) for Experiment 2. This figure is presented for information only; the three-way interaction was not significant.</p>
</caption>
<graphic xlink:href="fpsyg-17-1736951-g006.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Two line graphs compare percent error rate across tonal sections&#x2014;Key 1, Transition, and Key 2&#x2014;for four tonal modulation conditions: no modulation, close, mid, and far modulation, each with error bars. In both graphs, percent error rates increase most sharply for far modulation, especially at Key 2, while no modulation remains lowest. All other conditions show moderate error rates that rise slightly across sections.</alt-text>
</graphic>
</fig>
<p>Again, although the three-way interaction was non-significant, <italic>F</italic>(3.60,100.92)&#x202F;=&#x202F;1.75, <italic>MSE</italic>&#x202F;=&#x202F;34.52, <italic>n.s.</italic>, as with the previous study, a series of follow-up analyses using Holm-Bonferroni corrections examined the pattern of error rates as a function of <italic>Tonal Section</italic>, for the no modulation condition first and modulation conditions first groups separately, with respect to the individual modulation conditions. The results of these analyses also appear in <xref ref-type="table" rid="tab1">Table 1</xref>, and demonstrate a pattern comparable to what has already been described. For the participants in the no modulation first condition, there were no significant differences between any of the sections in the no modulation condition, nor were there any differences between the sections for the close and mid modulation conditions. For the far modulation condition, however, errors in the Key 1 section were significantly lower than error rates in the Transition and Key 2 sections, with no difference in error rates between the Transition and Key 2 sections.</p>
<p>For modulation conditions first participants, in the no modulation condition there was no difference in error rates between the Key 1, Transition, and Key 2 sections of these melodies. For the close and mid modulation conditions, differences in error rates between the sections began to appear, with the Key 1 section producing lower error rates than the Key 2 section for the close modulation condition, and a non-significant trend for the mid modulation condition. Finally, in the far modulation condition, the differences between the sections became consistent, with the Key 1 section significantly lower than both the Transition and Key 2 sections, and a significant increase in errors between the Transition and Key 2 sections as well.</p>
<sec id="sec17">
<label>3.2.1</label>
<title>Cross-experiment analyses</title>
<p>A final analysis examined error rates during sight-reading across the two experiments. This analysis is of interest in that it provides a direct assessment of the impact of the two principal modifications employed in Experiment 2 &#x2013; the blocking of no modulation and modulation trials, and the harmonizing of accidentals across the passages &#x2013; on sight-reading performance. In this analysis, percent error rates were analyzed in a three-way ANOVA, with the within-subjects factors of <italic>Tonal Section</italic> (Key 1, Transition, Key 2), <italic>Modulation Condition</italic> (no modulation, close modulation, mid modulation, far modulation), and the between-subjects factor of <italic>Stimulus Presentation</italic> (intermixed [Experiment 1], no modulation condition first [Experiment 2], modulation conditions first [Experiment 2]). Mauchly&#x2019;s test indicated that sphericity was violated for the main effects of <italic>Tonal Section</italic> (<italic>p</italic>&#x202F;=&#x202F;0.051) and <italic>Modulation Condition</italic> (<italic>p</italic>&#x202F;=&#x202F;0.01), as well as for the <italic>Tonal Section</italic> x <italic>Modulation Condition</italic> interaction (<italic>p</italic>&#x202F;&#x003C;&#x202F;0.001). Accordingly, Greenhouse&#x2013;Geisser corrections were applied to the degrees of freedom for these factors.</p>
<p>The above ANOVA revealed main effects for <italic>Tonal Section</italic>, <italic>F</italic>(1.80, 93.67)&#x202F;=&#x202F;27.80, <italic>MSE</italic>&#x202F;=&#x202F;46.84, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, <italic>partial &#x03B7;<sup>2</sup></italic>&#x202F;=&#x202F;0.35, and <italic>Modulation Condition</italic>, <italic>F</italic>(2.53,131.58)&#x202F;=&#x202F;46.99, <italic>MSE</italic>&#x202F;=&#x202F;28.55, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, <italic>partial &#x03B7;<sup>2</sup></italic>&#x202F;=&#x202F;0.48, but no main effect for <italic>Stimulus Presentation</italic>, <italic>F</italic>(2,52)&#x202F;=&#x202F;1.06, <italic>MSE</italic>&#x202F;=&#x202F;859.30, <italic>n.s</italic>. For the main effect of <italic>Tonal Section</italic>, <italic>post hoc</italic> tests with Holm-Bonferroni corrections revealed significant differences between the Key 1 (<italic>M</italic>&#x202F;=&#x202F;9.89, <italic>SE</italic>&#x202F;=&#x202F;0.94) and Transition (<italic>M</italic>&#x202F;=&#x202F;13.87, <italic>SE</italic>&#x202F;=&#x202F;1.37) sections, <italic>t</italic>(52)&#x202F;=&#x202F;&#x2212;5.96, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, as well as the Key 1 and Key 2 sections (<italic>M</italic>&#x202F;=&#x202F;14.14, <italic>SE</italic>&#x202F;=&#x202F;1.33), <italic>t</italic>(52)&#x202F;=&#x202F;&#x2212;6.01, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, but not between the Transition and Key 2 sections, <italic>t</italic>(52)&#x202F;=&#x202F;&#x2212;0.52, <italic>n.s</italic>. 
For the main effect of <italic>Modulation Condition</italic>, <italic>post hoc</italic> tests revealed a systematic increase in errors across the no modulation (<italic>M</italic>&#x202F;=&#x202F;10.23, <italic>SE</italic>&#x202F;=&#x202F;1.13), close modulation (<italic>M</italic>&#x202F;=&#x202F;11.56, <italic>SE</italic>&#x202F;=&#x202F;1.20), mid modulation (<italic>M</italic>&#x202F;=&#x202F;12.27, <italic>SE</italic>&#x202F;=&#x202F;1.19), and far modulation (<italic>M</italic>&#x202F;=&#x202F;16.47, <italic>SE</italic>&#x202F;=&#x202F;1.36) conditions, with significant differences between almost all modulation conditions (<italic>p</italic>&#x202F;&#x003C;&#x202F;0.001 for all comparisons), except between the no modulation and close modulation conditions, <italic>t</italic>(52)&#x202F;=&#x202F;&#x2212;2.55, <italic>n.s.</italic>, and between the close and mid modulation conditions, <italic>t</italic>(52)&#x202F;=&#x202F;&#x2212;1.46, <italic>n.s</italic>.</p>
<p>This analysis also revealed significant two-way interactions between <italic>Tonal Section</italic> and <italic>Modulation Condition</italic>, <italic>F</italic>(4.23,219.79)&#x202F;=&#x202F;11.18, <italic>MSE</italic>&#x202F;=&#x202F;23.65, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, <italic>partial &#x03B7;<sup>2</sup></italic>&#x202F;=&#x202F;0.18, <italic>Modulation Condition</italic> and <italic>Stimulus Presentation</italic>, <italic>F</italic>(5.06,131.58)&#x202F;=&#x202F;9.17, <italic>MSE</italic>&#x202F;=&#x202F;28.55, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, <italic>partial &#x03B7;<sup>2</sup></italic>&#x202F;=&#x202F;0.26, and <italic>Tonal Section</italic> and <italic>Stimulus Presentation</italic>, <italic>F</italic>(3.60,93.67)&#x202F;=&#x202F;3.79, <italic>MSE</italic>&#x202F;=&#x202F;46.85, <italic>p</italic>&#x202F;=&#x202F;0.006, <italic>partial &#x03B7;<sup>2</sup></italic>&#x202F;=&#x202F;0.13. These two-way interactions were qualified by a significant three-way interaction between all factors, <italic>F</italic>(8.45,219.79)&#x202F;=&#x202F;3.23, <italic>MSE</italic>&#x202F;=&#x202F;23.65, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, <italic>partial &#x03B7;<sup>2</sup></italic>&#x202F;=&#x202F;0.11. This interaction can be seen by comparing the pattern of findings across <xref ref-type="fig" rid="fig4">Figures 4</xref> and <xref ref-type="fig" rid="fig6">6</xref>. To determine the locus of this interaction, a series of <italic>post hoc</italic>, one-way univariate analyses compared percent error rates as a function of the three stimulus presentation modes (intermixed, no modulation condition first, modulation conditions first) for each individual combination of <italic>Tonal Section</italic> and <italic>Modulation Condition</italic> (nine analyses in total: Key 1, no modulation; Transition, no modulation &#x2026; Key 2, far modulation). 
Of these comparisons, the only significant difference across the three stimulus presentations was for the Key 2, far modulation combination, <italic>F</italic>(2,52)&#x202F;=&#x202F;9.68, <italic>MSE</italic>&#x202F;=&#x202F;134.20, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, <italic>partial &#x03B7;<sup>2</sup></italic>&#x202F;=&#x202F;0.27. <italic>Post hoc</italic> pairwise comparisons using Holm-Bonferroni corrections revealed significant differences between the error rates with the intermixed (<italic>M</italic>&#x202F;=&#x202F;11.50, <italic>SE</italic>&#x202F;=&#x202F;2.32) and the no modulation condition first stimulus presentations (<italic>M</italic>&#x202F;=&#x202F;20.85, <italic>SE</italic>&#x202F;=&#x202F;2.99), <italic>t</italic>(52)&#x202F;=&#x202F;&#x2212;2.50, <italic>p</italic>&#x202F;=&#x202F;0.032, as well as the intermixed and modulation conditions first stimulus presentations (<italic>M</italic>&#x202F;=&#x202F;27.65, <italic>SE</italic>&#x202F;=&#x202F;2.99), <italic>t</italic>(52)&#x202F;=&#x202F;&#x2212;4.29, <italic>p</italic>&#x202F;&#x003C;&#x202F;0.001, but not between the latter two conditions, <italic>t</italic>(52)&#x202F;=&#x202F;&#x2212;1.61, <italic>n.s</italic>. Accordingly, the locus of this interaction arises for the second tonality in the far modulation condition, with the intermixed stimuli of Experiment 1 leading to significantly fewer errors than the different forms of blocked presentations employed in Experiment 2.</p>
</sec>
</sec>
<sec id="sec18">
<label>3.3</label>
<title>Discussion</title>
<p>The current findings provide insight into the initial questions driving this study. Regarding the impact of prior exposure to modulation on performers&#x2019; expectations, this study provides evidence that such experience shapes processing during the sight-reading of modulating melodies. Specifically, for the no modulation condition there was no evidence of any change in error rates across the different sections of these melodies. This result stands in contrast to Experiment 1, in which the no modulation melodies produced a pattern of (non-significant) increases in errors across the sections &#x2013; a perplexing result given that these melodies did not change in their tonal information across their extents. As previously discussed, the most viable hypothesis for such a result is that, regardless of the actual content of the passages, performers developed expectations that modulations would occur, with these expectations then influencing performance. This result fits with previous work demonstrating that tonal information plays a significant role in performers&#x2019; expectations for, and productions of, musical passages quite generally (<xref ref-type="bibr" rid="ref88">Schmuckler, 1989</xref>, <xref ref-type="bibr" rid="ref89">1990</xref>). Even more interestingly, and as a cautionary note, these results provide yet another example of how stimulus information aggregated across multiple independent trials of an experiment can significantly affect, sometimes deleteriously, the results of that experiment; this finding has been discussed by <xref ref-type="bibr" rid="ref96">Schmuckler et al. (2020)</xref>.</p>
<p>Interestingly, although presenting the no-modulation melodies in a concentrated block eliminated performers&#x2019; expectations for modulation in these passages, this effect occurred whether the no-modulation block was presented prior or subsequent to the modulation melodies. This result adds a nuance to the operation of expectation generation within the current context, showing that performers&#x2019; expectations can also change adaptively over the course of an experimental session. Thus, although performers in the modulation conditions first group likely began the final set of no modulation trials expecting to encounter modulating melodies, they likely modified this expectation after some exposure to these trials, now anticipating that the stimuli would remain in the same key across their length.<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref> This finding also converges with the analyses of <xref ref-type="bibr" rid="ref96">Schmuckler et al. (2020)</xref> in showing that the abstraction of aggregate structure can occur powerfully over short periods of exposure to stimuli. In their original paper, Schmuckler et al. simulated the number of trials needed to abstract the presumed tonal structure in both of their case studies, with these simulations suggesting that such structure was available within relatively few trials (7 and 16 trials, respectively, in the two case studies). The current studies support these simulations, demonstrating structural abstraction, whether initially or adaptively, within 12 trials of exposure.</p>
<p>As for the second goal of this study &#x2013; examining the impact of harmonizing accidentals across changing tonal sections &#x2013; this study found that error rates for the modulating melodies continued to be influenced in a fashion generally comparable to Experiment 1. Thus, the impact of tonal distance observed in the previous study was not simply an artifact of visual confusion arising from the use of mixed accidentals.</p>
<p>This is not to say that modifying the accidentals had no effect on performance, however. In fact, this study found an even stronger influence of modulation on sight-reading. Specifically, error rates increased in the Transition section and remained elevated throughout, with the Key 2 section of the far modulation melodies producing elevated error rates relative to both the initial key and the Transition section. One possible explanation for this increased effect is that, rather than confusing performers, the mixed accidentals in Experiment 1 actually provided a clearer indication that the passage had modulated to a new key. Accordingly, performers more successfully adapted their expectations for which notes should occur in this new tonality, thereby reducing error rates. In contrast, in Experiment 2, without an obvious indication of key change arising from changed accidentals, performers were slower to adapt to the new key, elevating errors in the subsequent tonal section. Although this finding was unexpected, in retrospect its occurrence is consistent with the perception-action processes presumably underlying performance in this context.</p>
</sec>
</sec>
<sec id="sec19">
<label>4</label>
<title>General discussion</title>
<p>With respect to the principal motivations for this project, these results provide compelling evidence that the sight-reading of melodies incorporating a change of key leads to increased performance errors subsequent to the key change. Specifically, this study observed an increase in error rates at the transition section between the two keys, a finding in keeping with the first prediction articulated for these results. Additionally, this study found that error rates continued to be elevated in the Key 2 section, relative to the first key, a finding in keeping with the second prediction previously discussed. Intriguingly, one as yet unaddressed aspect of this second result involves the time scale of this continued disruption. Clearly, one would not expect that modulation would have an unending impact on performance; ultimately, performers will adjust to the new key, and sight-reading errors will drop to previous baseline levels. However, the time frame for such adaptation remains unknown. This issue could be easily tested by providing differing lengths of new key information, and tracking accuracy as a function of passage length.</p>
<p>Finally, this study also observed that the strength of the disruptions induced by tonal modulations was itself moderated systematically by the distance, in tonal space, of these modulations. In keeping with the third prediction delineated, this study found that the greater the tonal distance between the initial and subsequent key, the more significant were the performance disruptions. This demonstration of tonal distance effects in sight-reading complements research examining tonal distance effects in listeners&#x2019; percepts of modulating passages (<xref ref-type="bibr" rid="ref22">Cuddy and Thompson, 1992</xref>; <xref ref-type="bibr" rid="ref58">Lewandowska, 2019</xref>; <xref ref-type="bibr" rid="ref60">Lewandowska and Schmuckler, in preparation</xref>; <xref ref-type="bibr" rid="ref105">Thompson and Cuddy, 1989</xref>, <xref ref-type="bibr" rid="ref106">1992</xref>, <xref ref-type="bibr" rid="ref107">1997</xref>). Generally, this perceptual work demonstrates that listeners are sensitive to the tonal distance of modulating passages, as assessed by direct ratings of perceived distance (<xref ref-type="bibr" rid="ref105">Thompson and Cuddy, 1989</xref>, <xref ref-type="bibr" rid="ref106">1992</xref>, <xref ref-type="bibr" rid="ref107">1997</xref>) and ratings of the musical fit of, and processing times for, target chords related to or unrelated to the initial versus subsequent key sections of modulating passages (<xref ref-type="bibr" rid="ref58">Lewandowska, 2019</xref>; <xref ref-type="bibr" rid="ref60">Lewandowska and Schmuckler, in preparation</xref>).</p>
<p>On a fundamental level, all of these findings are understandable with respect to a perception-action framework of motor control in piano performance, discussed earlier (<xref ref-type="bibr" rid="ref63">Maes et al., 2013</xref>; <xref ref-type="bibr" rid="ref69">Novembre and Keller, 2014</xref>; <xref ref-type="bibr" rid="ref79">Pfordresher, 2006</xref>, <xref ref-type="bibr" rid="ref81">2019</xref>; <xref ref-type="bibr" rid="ref86">Schaefer, 2014</xref>). Along these lines, the visual and auditory (i.e., perceptual) information from the beginnings of these melodies generates schematic representations of the tonal structure of these passages. Given the explicit sight-reading context, these perceptual schemas produce complementary motor schemas involving specific scalar patterns to be performed, consistent with the implied tonal structure of these passages. Although speculative, given the highly overlearned and well-practiced status of tonal scales for musicians (and particularly pianists), it seems reasonable that identification of a specific tonal context would give rise to motor expectations for producing certain specific notes (e.g., diatonic scale tones). In other words, recognition of a given tonal context leads to the operation of perceptual and motor schemas consistent with this tonal context.</p>
<p>Modulation of the melodies to a new key, therefore, requires modifying both perception and action schemas, leading to sharp increases in performance errors at the transition point, and relatively elevated errors for the new key. The tonal distance effect arises because more distant modulating keys literally have less related perceptual, and (presumably) motor schematic representations. Although the second part of this idea (the relation between motor schemas) is a supposition, current psychological models of key relations and tonal processing (<xref ref-type="bibr" rid="ref9">Bharucha, 1987</xref>, <xref ref-type="bibr" rid="ref10">1989</xref>; <xref ref-type="bibr" rid="ref11">Bharucha and Todd, 1989</xref>; <xref ref-type="bibr" rid="ref20">Collins et al., 2014</xref>; <xref ref-type="bibr" rid="ref108">Tillman et al., 2000</xref>) provide ample support for the first part of this idea (less related perceptual/cognitive representations).</p>
<p>This framework also accounts for an unanticipated result of this study: the greater and more prolonged disruption of performance for the Key 2 section in Experiment 2, relative to Experiment 1, as indicated in the cross-experiment analyses. In this case, and as discussed earlier, the difference between the studies likely arises due to the lack of change in accidentals in Experiment 2. This failure to provide an overt indicator of a key change provides less compelling information indicating the necessity of shifting representations. It is worth noting that this idea also relates to the just-discussed issue of influences on the performers&#x2019; adaptation to the new tonal framework. Beyond the simple time frame for such adaptation, specific components of the musical score, such as the nature of accidentals, could also influence performers&#x2019; adoption of these new key schemas. Although hypothetical, this idea could be tested by having performers sight-read the same modulating melodies notated with changing accidentals between the two key sections, and with harmonized accidentals. If true, this idea leads to the (non-intuitive) prediction that the former notation should result in fewer errors than the latter notation.</p>
<p>Another assumption inherent in the perception-action framework employed in this work is that performers were activating motor schemas based on scalar representations of the perceived tonalities. Because the stimuli employed exclusively diatonic scale members in the tonal sections, this assumption seems warranted, particularly given that (as already mentioned) diatonic scales are highly overlearned patterns for pianists. This assumption leads to the prediction that employing modulating melodies in which the tonal sections contain both diatonic and chromatic tones might reduce the impact of modulation on performance, given that the presumed scale-based motor schemas would be less strongly engaged.</p>
<p>The current findings also raise interesting questions with respect to the understanding of psychological processes involved in perceiving modulation. One such question involves whether or not the tonal movement asymmetries observed in perceptual studies (<xref ref-type="bibr" rid="ref22">Cuddy and Thompson, 1992</xref>; <xref ref-type="bibr" rid="ref58">Lewandowska, 2019</xref>; <xref ref-type="bibr" rid="ref60">Lewandowska and Schmuckler, in preparation</xref>; <xref ref-type="bibr" rid="ref105">Thompson and Cuddy, 1989</xref>, <xref ref-type="bibr" rid="ref106">1992</xref>, <xref ref-type="bibr" rid="ref107">1997</xref>) would be similarly present in a performance context. As discussed earlier, Thompson and Cuddy (<xref ref-type="bibr" rid="ref22">Cuddy and Thompson, 1992</xref>; <xref ref-type="bibr" rid="ref105">Thompson and Cuddy, 1989</xref>, <xref ref-type="bibr" rid="ref106">1992</xref>, <xref ref-type="bibr" rid="ref107">1997</xref>) consistently observed an asymmetry in perceived tonal distances as a function of the direction of movement around the circle of fifths, with counterclockwise movement (e.g., G major to C major) perceived as more tonally distant than clockwise movement (e.g., C major to G major). Similarly, <xref ref-type="bibr" rid="ref58">Lewandowska (2019)</xref> and <xref ref-type="bibr" rid="ref60">Lewandowska and Schmuckler (in preparation)</xref>, in a harmonic priming context, found increased response times to counterclockwise modulations, relative to clockwise modulations. Given these results, it would be interesting to explore whether these differences in perceived tonal distance also lead to increased performance errors when sight-reading counterclockwise versus clockwise modulating melodies.</p>
<p>A final issue raised in this work involves its implications for what have been considered two of the most fundamental issues with respect to processing tonal modulations: (1) whether or not there is an enduring impact of the initial (home) key on listeners&#x2019; percepts of tonal closure, and (2) the length of time an initial key retains an impact on listeners&#x2019; percepts of a subsequent key. Currently, the extant literature suggests that there is little impact of the initial key on percepts of tonal closure in a large-scale musical context (e.g., <xref ref-type="bibr" rid="ref21">Cook, 1987</xref>), with an initial key retaining an impact anywhere from 11&#x202F;s (<xref ref-type="bibr" rid="ref113">Woolhouse et al., 2016</xref>) to 15&#x2013;20&#x202F;s (<xref ref-type="bibr" rid="ref25">Farbood, 2016</xref>). In the current study, given a metronome beat of 140 BPM, the sight-read melodies lasted just under 16&#x202F;s (37 notes&#x202F;&#x00D7;&#x202F;428.6&#x202F;msec/note&#x202F;=&#x202F;15.86&#x202F;s). Given the consistently observed tonal distance effects, these results imply a continued impact of the initial key on performance of the subsequent key, falling within the estimates of tonal influence proposed by <xref ref-type="bibr" rid="ref25">Farbood (2016)</xref>. In this regard, it would be illuminating to more systematically vary the length of these melodies, including both initial/subsequent key sections, as well as the transition section, to see if the strength of the tonal distance effect is comparably modified by such variation.</p>
<p>In conclusion, this work has not only provided insight into the psychological processes underlying this somewhat bespoke (and admittedly challenging) behavior, but has also helped elucidate the general perception-action mechanisms operative during skilled performance. As always with such work, the current findings represent but a step forward in our understanding of the complexities underlying such sophisticated behavior; further insights into these processes await future research.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="sec20">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec sec-type="ethics-statement" id="sec21">
<title>Ethics statement</title>
<p>The studies involving humans were approved by University of Toronto Ethics Review Board Protocol # 22701. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.</p>
</sec>
<sec sec-type="author-contributions" id="sec22">
<title>Author contributions</title>
<p>YZ: Project administration, Methodology, Data curation, Conceptualization, Writing &#x2013; original draft, Writing &#x2013; review &#x0026; editing, Investigation, Formal analysis. OL: Formal analysis, Visualization, Resources, Project administration, Conceptualization, Supervision, Methodology, Writing &#x2013; review &#x0026; editing, Data curation, Validation, Investigation. SJ: Methodology, Writing &#x2013; review &#x0026; editing, Conceptualization. MS: Visualization, Writing &#x2013; original draft, Software, Funding acquisition, Resources, Formal analysis, Conceptualization, Project administration, Validation, Writing &#x2013; review &#x0026; editing, Supervision, Methodology.</p>
</sec>
<sec sec-type="COI-statement" id="sec23">
<title>Conflict of interest</title>
<p>The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
<p>The author MS declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process and the final decision.</p>
</sec>
<sec sec-type="ai-statement" id="sec24">
<title>Generative AI statement</title>
<p>The author(s) declared that Generative AI was not used in the creation of this manuscript.</p>
<p>Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.</p>
</sec>
<sec sec-type="disclaimer" id="sec25">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ahken</surname><given-names>S.</given-names></name> <name><surname>Comeau</surname><given-names>G.</given-names></name> <name><surname>H&#x00E9;bert</surname><given-names>S.</given-names></name> <name><surname>Balasubramaniam</surname><given-names>R.</given-names></name></person-group> (<year>2012</year>). <article-title>Eye movement patterns during the processing of musical and linguistic syntactic incongruities</article-title>. <source>Psychomusicol. Music Mind Brain</source> <volume>22</volume>, <fpage>18</fpage>&#x2013;<lpage>25</lpage>. doi: <pub-id pub-id-type="doi">10.1037/a0026751</pub-id></mixed-citation></ref>
<ref id="ref2"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Aiello</surname><given-names>R.</given-names></name> <name><surname>Williamon</surname><given-names>A.</given-names></name></person-group> (<year>2002</year>). &#x201C;<chapter-title>Memory</chapter-title>&#x201D; in <source>The science and psychology of music performance: Creative strategies for teaching and learning</source>. eds. <person-group person-group-type="editor"><name><surname>Parncutt</surname><given-names>R.</given-names></name> <name><surname>McPherson</surname><given-names>G.</given-names></name></person-group> (New York: <publisher-name>Oxford Academic</publisher-name>), <fpage>166</fpage>&#x2013;<lpage>181</lpage>.</mixed-citation></ref>
<ref id="ref3"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Aldwell</surname><given-names>E.</given-names></name> <name><surname>Schachter</surname><given-names>C.</given-names></name></person-group> (<year>2002</year>). <source>Harmony and voice leading</source>. <edition>3rd</edition> Edn. New York: <publisher-name>Schirmer</publisher-name>.</mixed-citation></ref>
<ref id="ref4"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Alexander</surname><given-names>M. L.</given-names></name> <name><surname>Henry</surname><given-names>M. L.</given-names></name></person-group> (<year>2012</year>). <article-title>The development of a string sight-reading pitch skill hierarchy</article-title>. <source>J. Res. Music. Educ.</source> <volume>60</volume>, <fpage>201</fpage>&#x2013;<lpage>216</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0022429412446375</pub-id></mixed-citation></ref>
<ref id="ref5"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Arthur</surname><given-names>P.</given-names></name> <name><surname>Khuu</surname><given-names>S.</given-names></name> <name><surname>Blom</surname><given-names>D.</given-names></name></person-group> (<year>2016</year>). <article-title>Music sight-reading expertise, visually disrupted score and eye movements</article-title>. <source>J. Eye Mov. Res.</source> <volume>9</volume>, <fpage>1</fpage>&#x2013;<lpage>12</lpage>. doi: <pub-id pub-id-type="doi">10.16910/jemr.9.7.1</pub-id></mixed-citation></ref>
<ref id="ref6"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Banton</surname><given-names>L. J.</given-names></name></person-group> (<year>1995</year>). <article-title>The role of visual and auditory feedback during the sight-reading of music</article-title>. <source>Psychol. Music</source> <volume>23</volume>, <fpage>3</fpage>&#x2013;<lpage>16</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0305735695231001</pub-id></mixed-citation></ref>
<ref id="ref7"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Barry</surname><given-names>N. H.</given-names></name> <name><surname>Hallam</surname><given-names>S.</given-names></name></person-group> (<year>2002</year>). &#x201C;<chapter-title>Practice</chapter-title>&#x201D; in <source>The science and psychology of music performance: Creative strategies for teaching and learning</source>. eds. <person-group person-group-type="editor"><name><surname>Parncutt</surname><given-names>R.</given-names></name> <name><surname>McPherson</surname><given-names>G.</given-names></name></person-group> (New York: <publisher-name>Oxford Academic</publisher-name>), <fpage>151</fpage>&#x2013;<lpage>165</lpage>.</mixed-citation></ref>
<ref id="ref8"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bartlett</surname><given-names>J. C.</given-names></name> <name><surname>Dowling</surname><given-names>W. J.</given-names></name></person-group> (<year>1980</year>). <article-title>Recognition of transposed melodies: a key-distance effect in developmental perspective</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>6</volume>, <fpage>501</fpage>&#x2013;<lpage>515</lpage>. doi: <pub-id pub-id-type="doi">10.1037//0096-1523.6.3.501</pub-id>, <pub-id pub-id-type="pmid">6447764</pub-id></mixed-citation></ref>
<ref id="ref9"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Bharucha</surname><given-names>J. J.</given-names></name></person-group> (<year>1987</year>). &#x201C;<chapter-title>MUSACT: a connectionist model of musical harmony</chapter-title>&#x201D; in <source>Proceedings of the ninth annual conference of the cognitive science society</source> (Mahwah, NJ: <publisher-name>Lawrence Erlbaum Associates</publisher-name>), <fpage>508</fpage>&#x2013;<lpage>517</lpage>.</mixed-citation></ref>
<ref id="ref10"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bharucha</surname><given-names>J. J.</given-names></name></person-group> (<year>1989</year>). <article-title>Pitch, harmony, and neural nets: a psychological perspective</article-title>. <source>Comput. Music. J.</source> <volume>13</volume>, <fpage>84</fpage>&#x2013;<lpage>95</lpage>. doi: <pub-id pub-id-type="doi">10.7551/mitpress/4804.001.0001</pub-id></mixed-citation></ref>
<ref id="ref11"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bharucha</surname><given-names>J. J.</given-names></name> <name><surname>Todd</surname><given-names>P. M.</given-names></name></person-group> (<year>1989</year>). <article-title>Modeling the perception of tonal structure with neural nets</article-title>. <source>Comput. Music. J.</source> <volume>13</volume>, <fpage>128</fpage>&#x2013;<lpage>137</lpage>. Available online at: <ext-link xlink:href="http://www.jstor.org/stable/3679552?origin=JSTOR-pdf" ext-link-type="uri">http://www.jstor.org/stable/3679552?origin=JSTOR-pdf</ext-link></mixed-citation></ref>
<ref id="ref12"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bradshaw</surname><given-names>J. L.</given-names></name> <name><surname>Nettleton</surname><given-names>N.</given-names></name> <name><surname>Geffen</surname><given-names>G.</given-names></name></person-group> (<year>1971</year>). <article-title>Ear differences and delayed auditory feedback: effects on a speech and a music task</article-title>. <source>J. Exp. Psychol.</source> <volume>91</volume>, <fpage>85</fpage>&#x2013;<lpage>92</lpage>. doi: <pub-id pub-id-type="doi">10.1037/h0031797</pub-id>, <pub-id pub-id-type="pmid">5126655</pub-id></mixed-citation></ref>
<ref id="ref13"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Brattico</surname><given-names>E.</given-names></name> <name><surname>Bogert</surname><given-names>B.</given-names></name> <name><surname>Jacobsen</surname><given-names>T.</given-names></name></person-group> (<year>2013</year>). <article-title>Toward a neural chronometry for the aesthetic experience of music</article-title>. <source>Front. Psychol.</source> <volume>4</volume>:<fpage>206</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2013.00206</pub-id>, <pub-id pub-id-type="pmid">23641223</pub-id></mixed-citation></ref>
<ref id="ref14"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Brattico</surname><given-names>P.</given-names></name> <name><surname>Brattico</surname><given-names>E.</given-names></name> <name><surname>Vuust</surname><given-names>P.</given-names></name></person-group> (<year>2017</year>). <article-title>Global sensory qualities and aesthetic experience in music</article-title>. <source>Front. Neurosci.</source> <volume>11</volume>:<fpage>159</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnins.2017.00159</pub-id>, <pub-id pub-id-type="pmid">28424573</pub-id></mixed-citation></ref>
<ref id="ref15"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Brattico</surname><given-names>E.</given-names></name> <name><surname>Pearce</surname><given-names>M. T.</given-names></name></person-group> (<year>2013</year>). <article-title>The neuroaesthetics of music</article-title>. <source>Psychol. Aesthet. Creat. Arts</source> <volume>7</volume>, <fpage>48</fpage>&#x2013;<lpage>61</lpage>. doi: <pub-id pub-id-type="doi">10.1037/a0031624</pub-id></mixed-citation></ref>
<ref id="ref16"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Chew</surname><given-names>E.</given-names></name></person-group> (<year>2002</year>). &#x201C;<chapter-title>The spiral array: an algorithm for determining key boundaries</chapter-title>&#x201D; in <source>Music and artificial intelligence. LNCS/LNAI 2445</source>. eds. <person-group person-group-type="editor"><name><surname>Anagnostopoulou</surname><given-names>C.</given-names></name> <name><surname>Ferrand</surname><given-names>M.</given-names></name> <name><surname>Smaill</surname><given-names>A.</given-names></name></person-group> (Heidelberg: <publisher-name>Springer-Verlag</publisher-name>), <fpage>18</fpage>&#x2013;<lpage>31</lpage>.</mixed-citation></ref>
<ref id="ref17"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Chew</surname><given-names>E.</given-names></name></person-group> (<year>2007</year>). &#x201C;<chapter-title>Out of the grid and into the spiral: geometric interpretations of and comparisons with the spiral-array model</chapter-title>&#x201D; in <source>Tonal theory for the digital age, computing in musicology</source>. eds. <person-group person-group-type="editor"><name><surname>Hewlett</surname><given-names>W. B.</given-names></name> <name><surname>Selfridge-Field</surname><given-names>E.</given-names></name> <name><surname>Correia</surname><given-names>J.</given-names></name></person-group>, vol. <volume>15</volume> (Redwood, CA: <publisher-name>CCARH</publisher-name>), <fpage>51</fpage>&#x2013;<lpage>72</lpage>.</mixed-citation></ref>
<ref id="ref18"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Chew</surname><given-names>E.</given-names></name></person-group> (<year>2014</year>). <source>Mathematical and computational modeling of tonality</source>. New York: <publisher-name>Springer</publisher-name>.</mixed-citation></ref>
<ref id="ref19"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Cohen</surname><given-names>A. J.</given-names></name> <name><surname>Schmuckler</surname><given-names>M. A.</given-names></name></person-group> (<year>2023</year>). <article-title>Psychomusicology: a resounding closing cadence</article-title>. <source>Psychomusicology</source> <volume>33</volume>, <fpage>1</fpage>&#x2013;<lpage>6</lpage>. doi: <pub-id pub-id-type="doi">10.1037/pmu0000305</pub-id></mixed-citation></ref>
<ref id="ref20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Collins</surname><given-names>T.</given-names></name> <name><surname>Tillman</surname><given-names>B.</given-names></name> <name><surname>Barrett</surname><given-names>F. S.</given-names></name> <name><surname>Delb&#x00E9;</surname><given-names>C.</given-names></name> <name><surname>Janata</surname><given-names>P.</given-names></name></person-group> (<year>2014</year>). <article-title>A combined model of sensory and cognitive representations underlying tonal expectations in music: from audio signals to behavior</article-title>. <source>Psychol. Rev.</source> <volume>121</volume>, <fpage>33</fpage>&#x2013;<lpage>65</lpage>. doi: <pub-id pub-id-type="doi">10.1037/a0034695</pub-id>, <pub-id pub-id-type="pmid">24490788</pub-id></mixed-citation></ref>
<ref id="ref21"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Cook</surname><given-names>N.</given-names></name></person-group> (<year>1987</year>). <article-title>The perception of large-scale tonal closure</article-title>. <source>Music. Percept.</source> <volume>5</volume>, <fpage>197</fpage>&#x2013;<lpage>205</lpage>. doi: <pub-id pub-id-type="doi">10.2307/40285392</pub-id></mixed-citation></ref>
<ref id="ref22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Cuddy</surname><given-names>L. L.</given-names></name> <name><surname>Thompson</surname><given-names>W. F.</given-names></name></person-group> (<year>1992</year>). <article-title>Asymmetry of perceived key movement in chorale sequences: converging evidence from a probe-tone analysis</article-title>. <source>Psychol. Res.</source> <volume>54</volume>, <fpage>51</fpage>&#x2013;<lpage>59</lpage>. doi: <pub-id pub-id-type="doi">10.1007/BF00937133</pub-id>, <pub-id pub-id-type="pmid">1620798</pub-id></mixed-citation></ref>
<ref id="ref23"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Davidson</surname><given-names>J. W.</given-names></name> <name><surname>Correia</surname><given-names>J. S.</given-names></name></person-group> (<year>2002</year>). &#x201C;<chapter-title>Body movement</chapter-title>&#x201D; in <source>The science and psychology of music performance: Creative strategies for teaching and learning</source>. eds. <person-group person-group-type="editor"><name><surname>Parncutt</surname><given-names>R.</given-names></name> <name><surname>McPherson</surname><given-names>G.</given-names></name></person-group> (New York: <publisher-name>Oxford Academic</publisher-name>), <fpage>237</fpage>&#x2013;<lpage>250</lpage>.</mixed-citation></ref>
<ref id="ref24"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Delogu</surname><given-names>F.</given-names></name> <name><surname>Brunetti</surname><given-names>R.</given-names></name> <name><surname>Inuggi</surname><given-names>A.</given-names></name> <name><surname>Campus</surname><given-names>C.</given-names></name> <name><surname>Del Gatto</surname><given-names>C.</given-names></name> <name><surname>D'Ausilio</surname><given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>That does not sound right: sounds affect visual ERPs during a piano sight-reading task</article-title>. <source>Behav. Brain Res.</source> <volume>367</volume>, <fpage>1</fpage>&#x2013;<lpage>9</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.bbr.2019.03.037</pub-id>, <pub-id pub-id-type="pmid">30922941</pub-id></mixed-citation></ref>
<ref id="ref25"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Farbood</surname><given-names>M. M.</given-names></name></person-group> (<year>2016</year>). <article-title>Memory of a tonal center after a modulation</article-title>. <source>Music. Percept.</source> <volume>34</volume>, <fpage>71</fpage>&#x2013;<lpage>93</lpage>. doi: <pub-id pub-id-type="doi">10.1525/mp.2016.34.1.71</pub-id></mixed-citation></ref>
<ref id="ref26"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Fine</surname><given-names>P.</given-names></name> <name><surname>Berry</surname><given-names>A.</given-names></name> <name><surname>Rosner</surname><given-names>B.</given-names></name></person-group> (<year>2006</year>). <article-title>The effect of pattern recognition and tonal predictability on sight-singing ability</article-title>. <source>Psychol. Music</source> <volume>34</volume>, <fpage>431</fpage>&#x2013;<lpage>447</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0305735606067152</pub-id></mixed-citation></ref>
<ref id="ref27"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Finney</surname><given-names>S. A.</given-names></name></person-group> (<year>1997</year>). <article-title>Auditory feedback and musical keyboard performance</article-title>. <source>Music. Percept.</source> <volume>15</volume>, <fpage>153</fpage>&#x2013;<lpage>174</lpage>. doi: <pub-id pub-id-type="doi">10.2307/40285747</pub-id></mixed-citation></ref>
<ref id="ref28"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Finney</surname><given-names>S. A.</given-names></name> <name><surname>Palmer</surname><given-names>C.</given-names></name></person-group> (<year>2003</year>). <article-title>Auditory feedback and memory for music performance: sound evidence for an encoding effect</article-title>. <source>Mem. Cogn.</source> <volume>31</volume>, <fpage>51</fpage>&#x2013;<lpage>64</lpage>. doi: <pub-id pub-id-type="doi">10.3758/bf03196082</pub-id>, <pub-id pub-id-type="pmid">12699143</pub-id></mixed-citation></ref>
<ref id="ref29"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Finney</surname><given-names>S. A.</given-names></name> <name><surname>Warren</surname><given-names>W. H.</given-names></name></person-group> (<year>2002</year>). <article-title>Delayed auditory feedback and rhythmic tapping: evidence for a critical interval shift</article-title>. <source>Percept. Psychophys.</source> <volume>64</volume>, <fpage>896</fpage>&#x2013;<lpage>908</lpage>. doi: <pub-id pub-id-type="doi">10.3758/BF03196794</pub-id>, <pub-id pub-id-type="pmid">12269297</pub-id></mixed-citation></ref>
<ref id="ref30"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Firmino</surname><given-names>&#x00C9;. A.</given-names></name> <name><surname>Bueno</surname><given-names>J. L. O.</given-names></name></person-group> (<year>2008</year>). <article-title>Tonal modulation and subjective time</article-title>. <source>J. New Music Res.</source> <volume>37</volume>, <fpage>275</fpage>&#x2013;<lpage>297</lpage>. doi: <pub-id pub-id-type="doi">10.1080/09298210802711652</pub-id></mixed-citation></ref>
<ref id="ref31"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Firmino</surname><given-names>&#x00C9;. A.</given-names></name> <name><surname>Bueno</surname><given-names>J. L. O.</given-names></name> <name><surname>Bigand</surname><given-names>E.</given-names></name></person-group> (<year>2009</year>). <article-title>Traveling through pitch space speeds up musical time</article-title>. <source>Music. Percept.</source> <volume>26</volume>, <fpage>205</fpage>&#x2013;<lpage>209</lpage>. doi: <pub-id pub-id-type="doi">10.1525/mp.2009.26.3.205</pub-id></mixed-citation></ref>
<ref id="ref32"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Friberg</surname><given-names>A.</given-names></name> <name><surname>Battel</surname><given-names>G. V.</given-names></name></person-group> (<year>2002</year>). &#x201C;<chapter-title>Structural communication</chapter-title>&#x201D; in <source>The science and psychology of music performance: Creative strategies for teaching and learning</source>. eds. <person-group person-group-type="editor"><name><surname>Parncutt</surname><given-names>R.</given-names></name> <name><surname>McPherson</surname><given-names>G.</given-names></name></person-group> (New York: <publisher-name>Oxford Academic</publisher-name>), <fpage>198</fpage>&#x2013;<lpage>218</lpage>.</mixed-citation></ref>
<ref id="ref33"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gabrielsson</surname><given-names>A.</given-names></name></person-group> (<year>2003</year>). <article-title>Music performance research at the millennium</article-title>. <source>Psychol. Music</source> <volume>31</volume>, <fpage>221</fpage>&#x2013;<lpage>272</lpage>. doi: <pub-id pub-id-type="doi">10.1177/03057356030313002</pub-id></mixed-citation></ref>
<ref id="ref34"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gates</surname><given-names>A.</given-names></name> <name><surname>Bradshaw</surname><given-names>J. L.</given-names></name></person-group> (<year>1974</year>). <article-title>Effects of auditory feedback on a musical performance task</article-title>. <source>Percept. Psychophys.</source> <volume>16</volume>, <fpage>105</fpage>&#x2013;<lpage>109</lpage>. doi: <pub-id pub-id-type="doi">10.3758/BF03203260</pub-id></mixed-citation></ref>
<ref id="ref35"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gates</surname><given-names>A.</given-names></name> <name><surname>Bradshaw</surname><given-names>J. L.</given-names></name> <name><surname>Nettleton</surname><given-names>N.</given-names></name></person-group> (<year>1974</year>). <article-title>Effect of different delayed auditory feedback intervals on a music performance task</article-title>. <source>Percept. Psychophys.</source> <volume>15</volume>, <fpage>21</fpage>&#x2013;<lpage>25</lpage>. doi: <pub-id pub-id-type="doi">10.3758/BF03205822</pub-id></mixed-citation></ref>
<ref id="ref36"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gregory</surname><given-names>T. B.</given-names></name></person-group> (<year>1972</year>). <article-title>The effect of rhythmic notation variables on sight-reading errors</article-title>. <source>J. Res. Music. Educ.</source> <volume>20</volume>, <fpage>462</fpage>&#x2013;<lpage>468</lpage>. doi: <pub-id pub-id-type="doi">10.2307/3343804</pub-id></mixed-citation></ref>
<ref id="ref37"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gudmundsdottir</surname><given-names>H. R.</given-names></name></person-group> (<year>2010</year>). <article-title>Pitch error analysis of young piano students' music reading performances</article-title>. <source>Int. J. Music. Educ.</source> <volume>28</volume>, <fpage>61</fpage>&#x2013;<lpage>70</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0255761409351342</pub-id></mixed-citation></ref>
<ref id="ref38"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Havlicek</surname><given-names>L.</given-names></name></person-group> (<year>1968</year>). <article-title>Effects of delayed auditory feedback on musical performance</article-title>. <source>J. Res. Music. Educ.</source> <volume>16</volume>, <fpage>308</fpage>&#x2013;<lpage>318</lpage>. doi: <pub-id pub-id-type="doi">10.2307/3344070</pub-id></mixed-citation></ref>
<ref id="ref39"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Janata</surname><given-names>P.</given-names></name></person-group> (<year>2007</year>). &#x201C;<chapter-title>Navigating tonal space</chapter-title>&#x201D; in <source>Tonal theory for the digital age</source> (Vol. 15). eds. <person-group person-group-type="editor"><name><surname>Hewlett</surname><given-names>W. B.</given-names></name> <name><surname>Selfridge-Field</surname><given-names>E.</given-names></name> <name><surname>Correia</surname><given-names>E.</given-names><suffix>Jr.</suffix></name></person-group> (Redwood City, CA: <publisher-name>Stanford University Press</publisher-name>), <fpage>39</fpage>&#x2013;<lpage>50</lpage>. Available online at: <ext-link xlink:href="https://www.ccarh.org/publications/cm/vol/15/" ext-link-type="uri">https://www.ccarh.org/publications/cm/vol/15/</ext-link></mixed-citation></ref>
<ref id="ref40"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Janata</surname><given-names>P.</given-names></name> <name><surname>Birk</surname><given-names>J. L.</given-names></name> <name><surname>Tillmann</surname><given-names>B.</given-names></name> <name><surname>Bharucha</surname><given-names>J. J.</given-names></name></person-group> (<year>2003</year>). <article-title>On-line detection of tonal pop-out in modulating contexts</article-title>. <source>Music. Percept.</source> <volume>20</volume>, <fpage>283</fpage>&#x2013;<lpage>305</lpage>. doi: <pub-id pub-id-type="doi">10.1525/mp.2003.20.3.283</pub-id></mixed-citation></ref>
<ref id="ref41"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Janata</surname><given-names>P.</given-names></name> <name><surname>Birk</surname><given-names>J. L.</given-names></name> <name><surname>Van Horn</surname><given-names>J. D.</given-names></name> <name><surname>Leman</surname><given-names>M.</given-names></name> <name><surname>Tillmann</surname><given-names>B.</given-names></name> <name><surname>Bharucha</surname><given-names>J. J.</given-names></name></person-group> (<year>2002</year>). <article-title>The cortical topography of tonal structures underlying Western music</article-title>. <source>Science</source> <volume>298</volume>, <fpage>2167</fpage>&#x2013;<lpage>2170</lpage>. doi: <pub-id pub-id-type="doi">10.1126/science.1076262</pub-id>, <pub-id pub-id-type="pmid">12481131</pub-id></mixed-citation></ref>
<ref id="ref42"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Juslin</surname><given-names>P. N.</given-names></name> <name><surname>Persson</surname><given-names>R. S.</given-names></name></person-group> (<year>2002</year>). &#x201C;<chapter-title>Emotional communication</chapter-title>&#x201D; in <source>The science and psychology of music performance: Creative strategies for teaching and learning</source>. eds. <person-group person-group-type="editor"><name><surname>Parncutt</surname><given-names>R.</given-names></name> <name><surname>McPherson</surname><given-names>G.</given-names></name></person-group> (New York: <publisher-name>Oxford Academic</publisher-name>), <fpage>219</fpage>&#x2013;<lpage>236</lpage>.</mixed-citation></ref>
<ref id="ref43"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Koelsch</surname><given-names>S.</given-names></name> <name><surname>Friederici</surname><given-names>A. D.</given-names></name></person-group> (<year>2003</year>). <article-title>Toward the neural basis of processing structure in music: comparative results of different neurophysiological investigation methods</article-title>. <source>Ann. N. Y. Acad. Sci.</source> <volume>999</volume>, <fpage>15</fpage>&#x2013;<lpage>28</lpage>. doi: <pub-id pub-id-type="doi">10.1196/annals.1284.002</pub-id>, <pub-id pub-id-type="pmid">14681114</pub-id></mixed-citation></ref>
<ref id="ref44"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Koelsch</surname><given-names>S.</given-names></name> <name><surname>Gunter</surname><given-names>T.</given-names></name> <name><surname>Schr&#x00F6;ger</surname><given-names>E.</given-names></name> <name><surname>Friederici</surname><given-names>A. D.</given-names></name></person-group> (<year>2003</year>). <article-title>Processing tonal modulations: an ERP study</article-title>. <source>J. Cogn. Neurosci.</source> <volume>15</volume>, <fpage>1149</fpage>&#x2013;<lpage>1159</lpage>. doi: <pub-id pub-id-type="doi">10.1162/089892903322598111</pub-id>, <pub-id pub-id-type="pmid">14709233</pub-id></mixed-citation></ref>
<ref id="ref45"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kopiez</surname><given-names>R.</given-names></name> <name><surname>Lee</surname><given-names>J. I.</given-names></name></person-group> (<year>2008</year>). <article-title>Towards a general model of skills involved in sight reading music</article-title>. <source>Music. Educ. Res.</source> <volume>10</volume>, <fpage>41</fpage>&#x2013;<lpage>62</lpage>. doi: <pub-id pub-id-type="doi">10.1080/14613800701871363</pub-id></mixed-citation></ref>
<ref id="ref46"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Korsakova-Kreyn</surname><given-names>M.</given-names></name> <name><surname>Dowling</surname><given-names>W. J.</given-names></name></person-group> (<year>2014</year>). <article-title>Emotional processing in music: study in affective responses to tonal modulation in controlled harmonic progressions and real music</article-title>. <source>Psychomusicol. Music Mind Brain</source> <volume>24</volume>:<fpage>4</fpage>. doi: <pub-id pub-id-type="doi">10.1037/pmu0000029</pub-id></mixed-citation></ref>
<ref id="ref47"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Krumhansl</surname><given-names>C. L.</given-names></name></person-group> (<year>1990</year>). <source>Cognitive foundations of musical pitch</source>. London: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref48"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Krumhansl</surname><given-names>C. L.</given-names></name></person-group> (<year>2000</year>). <article-title>Rhythm and pitch in music cognition</article-title>. <source>Psychol. Bull.</source> <volume>126</volume>, <fpage>159</fpage>&#x2013;<lpage>179</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0033-2909.126.1.159</pub-id>, <pub-id pub-id-type="pmid">10668354</pub-id></mixed-citation></ref>
<ref id="ref49"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Krumhansl</surname><given-names>C. L.</given-names></name> <name><surname>Cuddy</surname><given-names>L. L.</given-names></name></person-group> (<year>2010</year>). &#x201C;<chapter-title>A theory of tonal hierarchies in music</chapter-title>&#x201D; in <source>Music perception</source>. eds. <person-group person-group-type="editor"><name><surname>Jones</surname><given-names>M. R.</given-names></name> <name><surname>Fay</surname><given-names>R. R.</given-names></name> <name><surname>Popper</surname><given-names>A. N.</given-names></name></person-group> (New York: <publisher-name>Springer</publisher-name>), <fpage>51</fpage>&#x2013;<lpage>87</lpage>.</mixed-citation></ref>
<ref id="ref50"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Krumhansl</surname><given-names>C. L.</given-names></name> <name><surname>Kessler</surname><given-names>E. J.</given-names></name></person-group> (<year>1982</year>). <article-title>Tracing the dynamic changes in perceived tonal organization in a spatial representation of musical keys</article-title>. <source>Psychol. Rev.</source> <volume>89</volume>, <fpage>334</fpage>&#x2013;<lpage>368</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0033-295X.89.4.334</pub-id>, <pub-id pub-id-type="pmid">7134332</pub-id></mixed-citation></ref>
<ref id="ref51"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Krumhansl</surname><given-names>C. L.</given-names></name> <name><surname>Shepard</surname><given-names>R. N.</given-names></name></person-group> (<year>1979</year>). <article-title>Quantification of the hierarchy of tonal functions within a diatonic context</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>5</volume>, <fpage>579</fpage>&#x2013;<lpage>594</lpage>. doi: <pub-id pub-id-type="doi">10.1037//0096-1523.5.4.579</pub-id>, <pub-id pub-id-type="pmid">528960</pub-id></mixed-citation></ref>
<ref id="ref52"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kulpa</surname><given-names>J. D.</given-names></name> <name><surname>Pfordresher</surname><given-names>P. Q.</given-names></name></person-group> (<year>2013</year>). <article-title>Effects of delayed auditory and visual feedback on sequence production</article-title>. <source>Exp. Brain Res.</source> <volume>224</volume>, <fpage>69</fpage>&#x2013;<lpage>77</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s00221-012-3289-z</pub-id>, <pub-id pub-id-type="pmid">23283420</pub-id></mixed-citation></ref>
<ref id="ref53"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Laitz</surname><given-names>S.</given-names></name></person-group> (<year>1996</year>). <article-title>The submediant complex: its musical and poetic roles in Schubert's songs</article-title>. <source>Theory Pract.</source> <volume>21</volume>, <fpage>123</fpage>&#x2013;<lpage>165</lpage>. Available online at: <ext-link xlink:href="https://www.jstor.org/stable/41054293" ext-link-type="uri">https://www.jstor.org/stable/41054293</ext-link></mixed-citation></ref>
<ref id="ref54"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Lehmann</surname><given-names>A. C.</given-names></name> <name><surname>Ericsson</surname><given-names>K. A.</given-names></name></person-group> (<year>1993</year>). <article-title>Sight-reading ability of expert pianists in the context of piano accompanying</article-title>. <source>Psychomusicol. A J. Res. Music Cogn.</source> <volume>12</volume>, <fpage>182</fpage>&#x2013;<lpage>195</lpage>. doi: <pub-id pub-id-type="doi">10.1037/h0094108</pub-id></mixed-citation></ref>
<ref id="ref55"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Lehmann</surname><given-names>A. C.</given-names></name> <name><surname>Ericsson</surname><given-names>K. A.</given-names></name></person-group> (<year>1996</year>). <article-title>Performance without preparation: structure and acquisition of expert sight-reading and accompanying performance</article-title>. <source>Psychomusicol. A J. Res. Music Cogn.</source> <volume>15</volume>, <fpage>1</fpage>&#x2013;<lpage>29</lpage>. doi: <pub-id pub-id-type="doi">10.1037/h0094082</pub-id></mixed-citation></ref>
<ref id="ref56"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Lehmann</surname><given-names>A. C.</given-names></name> <name><surname>Kopiez</surname><given-names>R.</given-names></name></person-group> (<year>2016</year>). &#x201C;<chapter-title>Sight-reading</chapter-title>&#x201D; in <source>The Oxford handbook of music psychology</source>. eds. <person-group person-group-type="editor"><name><surname>Hallam</surname><given-names>S.</given-names></name> <name><surname>Cross</surname><given-names>I.</given-names></name> <name><surname>Thaut</surname><given-names>M.</given-names></name></person-group>. <edition>2nd</edition> ed (Oxford: <publisher-name>Oxford University Press</publisher-name>), <fpage>547</fpage>&#x2013;<lpage>557</lpage>.</mixed-citation></ref>
<ref id="ref57"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Lehmann</surname><given-names>A. C.</given-names></name> <name><surname>McArthur</surname><given-names>V.</given-names></name></person-group> (<year>2002</year>). &#x201C;<chapter-title>Sight-reading</chapter-title>&#x201D; in <source>The science and psychology of music performance: Creative strategies for teaching and learning</source>. eds. <person-group person-group-type="editor"><name><surname>Parncutt</surname><given-names>R.</given-names></name> <name><surname>McPherson</surname><given-names>G.</given-names></name></person-group> (New York: <publisher-name>Oxford University Press</publisher-name>), <fpage>143</fpage>&#x2013;<lpage>163</lpage>.</mixed-citation></ref>
<ref id="ref58"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Lewandowska</surname><given-names>O. P.</given-names></name></person-group> (<year>2019</year>). <source>Understanding the processing of tonal modulations during musical listening using behavioural and neural techniques</source>. Doctoral dissertation, University of Toronto, Toronto.</mixed-citation></ref>
<ref id="ref59"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Lewandowska</surname><given-names>O. P.</given-names></name> <name><surname>Schmuckler</surname><given-names>M. A.</given-names></name></person-group> (<year>2019</year>). <article-title>Tonal and textural influences on musical sight-reading</article-title>. <source>Psychol. Res.</source> <volume>84</volume>, <fpage>1920</fpage>&#x2013;<lpage>1945</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s00426-019-01187-1</pub-id>, <pub-id pub-id-type="pmid">31073771</pub-id></mixed-citation></ref>
<ref id="ref60"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Lewandowska</surname><given-names>O. P.</given-names></name> <name><surname>Schmuckler</surname><given-names>M. A.</given-names></name></person-group> (<year>in preparation</year>). <source>Tonal distance and the perception of key modulation</source>.</mixed-citation></ref>
<ref id="ref61"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>MacKenzie</surname><given-names>C. L.</given-names></name> <name><surname>Vaneerd</surname><given-names>D. L.</given-names></name> <name><surname>Graham</surname><given-names>E. D.</given-names></name> <name><surname>Huron</surname><given-names>D. B.</given-names></name> <name><surname>Wills</surname><given-names>B. L.</given-names></name></person-group> (<year>1986</year>). <article-title>The effect of tonal structure on rhythm in piano performance</article-title>. <source>Music. Percept.</source> <volume>4</volume>, <fpage>215</fpage>&#x2013;<lpage>222</lpage>. doi: <pub-id pub-id-type="doi">10.2307/40285361</pub-id></mixed-citation></ref>
<ref id="ref62"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Madell</surname><given-names>J.</given-names></name> <name><surname>H&#x00E9;bert</surname><given-names>S.</given-names></name></person-group> (<year>2008</year>). <article-title>Eye movements and music reading: where do we look next?</article-title> <source>Music. Percept.</source> <volume>26</volume>, <fpage>157</fpage>&#x2013;<lpage>170</lpage>. doi: <pub-id pub-id-type="doi">10.1525/MP.2008.26.2.157</pub-id></mixed-citation></ref>
<ref id="ref63"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Maes</surname><given-names>P. J.</given-names></name> <name><surname>Leman</surname><given-names>M.</given-names></name> <name><surname>Palmer</surname><given-names>C.</given-names></name> <name><surname>Wanderley</surname><given-names>M. M.</given-names></name></person-group> (<year>2013</year>). <article-title>Action-based effects on music perception</article-title>. <source>Front. Psychol.</source> <volume>4</volume>:<fpage>1008</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2013.01008</pub-id></mixed-citation></ref>
<ref id="ref64"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mainenti</surname><given-names>M. R. M.</given-names></name> <name><surname>Rodrigues</surname><given-names>E. C.</given-names></name> <name><surname>Oliveira</surname><given-names>J. F.</given-names></name> <name><surname>Ferreira</surname><given-names>A. S.</given-names></name> <name><surname>Dias</surname><given-names>C. M.</given-names></name> <name><surname>Silva</surname><given-names>A. L. S.</given-names></name></person-group> (<year>2011</year>). <article-title>Adiposity and postural balance control: correlations between bioelectrical impedance and stabilometric signals in elderly Brazilian women</article-title>. <source>Clinics</source> <volume>66</volume>, <fpage>1513</fpage>&#x2013;<lpage>1518</lpage>. doi: <pub-id pub-id-type="doi">10.1590/S1807-59322011000900001</pub-id>, <pub-id pub-id-type="pmid">22179151</pub-id></mixed-citation></ref>
<ref id="ref65"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Marvin</surname><given-names>E. W.</given-names></name> <name><surname>Brinkman</surname><given-names>A.</given-names></name></person-group> (<year>1999</year>). <article-title>The effect of modulation and formal manipulation on perception of tonic closure by expert listeners</article-title>. <source>Music. Percept.</source> <volume>16</volume>, <fpage>389</fpage>&#x2013;<lpage>408</lpage>. doi: <pub-id pub-id-type="doi">10.2307/40285801</pub-id></mixed-citation></ref>
<ref id="ref66"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>McPherson</surname><given-names>G.</given-names></name> <name><surname>Gabrielsson</surname><given-names>A.</given-names></name></person-group> (<year>2002</year>). &#x201C;<chapter-title>From sound to sign</chapter-title>&#x201D; in <source>The science and psychology of music performance: Creative strategies for teaching and learning</source>. eds. <person-group person-group-type="editor"><name><surname>Parncutt</surname><given-names>R.</given-names></name> <name><surname>McPherson</surname><given-names>G.</given-names></name></person-group> (New York: <publisher-name>Oxford Academic</publisher-name>), <fpage>98</fpage>&#x2013;<lpage>115</lpage>.</mixed-citation></ref>
<ref id="ref67"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mishra</surname><given-names>J.</given-names></name></person-group> (<year>2014a</year>). <article-title>Factors related to sight-reading accuracy: a meta-analysis</article-title>. <source>J. Res. Music. Educ.</source> <volume>61</volume>, <fpage>452</fpage>&#x2013;<lpage>465</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0022429413508585</pub-id></mixed-citation></ref>
<ref id="ref68"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mishra</surname><given-names>J.</given-names></name></person-group> (<year>2014b</year>). <article-title>Improving sightreading accuracy: a meta-analysis</article-title>. <source>Psychol. Music</source> <volume>42</volume>, <fpage>131</fpage>&#x2013;<lpage>156</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0305735612463770</pub-id></mixed-citation></ref>
<ref id="ref69"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Novembre</surname><given-names>G.</given-names></name> <name><surname>Keller</surname><given-names>P. E.</given-names></name></person-group> (<year>2014</year>). <article-title>A conceptual review on action-perception coupling in the musicians&#x2019; brain: what is it good for?</article-title> <source>Front. Hum. Neurosci.</source> <volume>8</volume>, <fpage>1</fpage>&#x2013;<lpage>11</lpage>. doi: <pub-id pub-id-type="doi">10.3389/fnhum.2014.00603</pub-id>, <pub-id pub-id-type="pmid">25191246</pub-id></mixed-citation></ref>
<ref id="ref70"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Palmer</surname><given-names>C.</given-names></name></person-group> (<year>1997</year>). <article-title>Music performance</article-title>. <source>Annu. Rev. Psychol.</source> <volume>48</volume>, <fpage>115</fpage>&#x2013;<lpage>138</lpage>. doi: <pub-id pub-id-type="doi">10.1146/annurev.psych.48.1.115</pub-id>, <pub-id pub-id-type="pmid">9046557</pub-id></mixed-citation></ref>
<ref id="ref71"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Palmer</surname><given-names>C.</given-names></name></person-group> (<year>2005</year>). <article-title>Sequence memory in music performance</article-title>. <source>Curr. Dir. Psychol. Sci.</source> <volume>14</volume>, <fpage>247</fpage>&#x2013;<lpage>250</lpage>. doi: <pub-id pub-id-type="doi">10.1111/j.0963-7214.2005.00374.x</pub-id></mixed-citation></ref>
<ref id="ref72"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Palmer</surname><given-names>C.</given-names></name></person-group> (<year>2013</year>). &#x201C;<chapter-title>Music performance: movement and coordination</chapter-title>&#x201D; in <source>The psychology of music</source>. ed. <person-group person-group-type="editor"><name><surname>Deutsch</surname><given-names>D.</given-names></name></person-group>. <edition>3rd</edition> ed (Amsterdam: <publisher-name>Elsevier Press</publisher-name>), <fpage>405</fpage>&#x2013;<lpage>422</lpage>.</mixed-citation></ref>
<ref id="ref73"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Palmer</surname><given-names>C.</given-names></name> <name><surname>van de Sande</surname><given-names>C.</given-names></name></person-group> (<year>1993</year>). <article-title>Units of knowledge in musical performance</article-title>. <source>J. Exp. Psychol. Learn. Mem. Cogn.</source> <volume>19</volume>, <fpage>457</fpage>&#x2013;<lpage>470</lpage>. doi: <pub-id pub-id-type="doi">10.1037//0278-7393.19.2.457</pub-id>, <pub-id pub-id-type="pmid">8454966</pub-id></mixed-citation></ref>
<ref id="ref74"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Palmer</surname><given-names>C.</given-names></name> <name><surname>van de Sande</surname><given-names>C.</given-names></name></person-group> (<year>1995</year>). <article-title>Range of planning in music performance</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>21</volume>, <fpage>947</fpage>&#x2013;<lpage>962</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0096-1523.21.5.947</pub-id>, <pub-id pub-id-type="pmid">7595248</pub-id></mixed-citation></ref>
<ref id="ref75"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Parncutt</surname><given-names>R.</given-names></name> <name><surname>McPherson</surname><given-names>G.</given-names></name></person-group> (<year>2002</year>). <source>The science and psychology of music performance: Creative strategies for teaching and learning</source>. New York: <publisher-name>Oxford Academic</publisher-name>.</mixed-citation></ref>
<ref id="ref76"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Perra</surname><given-names>J.</given-names></name> <name><surname>Poulin-Charronnat</surname><given-names>B.</given-names></name> <name><surname>Baccino</surname><given-names>T.</given-names></name> <name><surname>Drai-Zerbib</surname><given-names>V.</given-names></name></person-group> (<year>2021</year>). <article-title>Review on eye-hand span in sight-reading of music</article-title>. <source>J. Eye Mov. Res.</source> <volume>14</volume>, <fpage>1</fpage>&#x2013;<lpage>25</lpage>. doi: <pub-id pub-id-type="doi">10.16910/jemr.14.4.4</pub-id>, <pub-id pub-id-type="pmid">34840670</pub-id></mixed-citation></ref>
<ref id="ref77"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pfordresher</surname><given-names>P. Q.</given-names></name></person-group> (<year>2003</year>). <article-title>Auditory feedback in music performance: evidence for a dissociation of sequencing and timing</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>29</volume>, <fpage>949</fpage>&#x2013;<lpage>964</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0096-1523.29.5.949</pub-id>, <pub-id pub-id-type="pmid">14585016</pub-id></mixed-citation></ref>
<ref id="ref78"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pfordresher</surname><given-names>P. Q.</given-names></name></person-group> (<year>2005</year>). <article-title>Auditory feedback in music performance: the role of melodic structure and musical skill</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>31</volume>, <fpage>1331</fpage>&#x2013;<lpage>1345</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0096-1523.31.6.1331</pub-id>, <pub-id pub-id-type="pmid">16366793</pub-id></mixed-citation></ref>
<ref id="ref79"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pfordresher</surname><given-names>P. Q.</given-names></name></person-group> (<year>2006</year>). <article-title>Coordination of perception and action in music performance</article-title>. <source>Adv. Cogn. Psychol.</source> <volume>2</volume>, <fpage>183</fpage>&#x2013;<lpage>198</lpage>. doi: <pub-id pub-id-type="doi">10.2478/v10053-008-0054-8</pub-id></mixed-citation></ref>
<ref id="ref80"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pfordresher</surname><given-names>P. Q.</given-names></name></person-group> (<year>2008</year>). <article-title>Auditory feedback in music performance: the role of transition-based similarity</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>34</volume>, <fpage>708</fpage>&#x2013;<lpage>725</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0096-1523.34.3.708</pub-id>, <pub-id pub-id-type="pmid">18505333</pub-id></mixed-citation></ref>
<ref id="ref81"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Pfordresher</surname><given-names>P. Q.</given-names></name></person-group> (<year>2019</year>). <source>Sound and action in music performance</source>. San Diego: <publisher-name>Academic Press</publisher-name>.</mixed-citation></ref>
<ref id="ref82"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pfordresher</surname><given-names>P. Q.</given-names></name> <name><surname>Dalla Bella</surname><given-names>S.</given-names></name></person-group> (<year>2011</year>). <article-title>Delayed auditory feedback and movement</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>37</volume>, <fpage>566</fpage>&#x2013;<lpage>579</lpage>. doi: <pub-id pub-id-type="doi">10.1037/a0021487</pub-id>, <pub-id pub-id-type="pmid">21463087</pub-id></mixed-citation></ref>
<ref id="ref83"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pfordresher</surname><given-names>P. Q.</given-names></name> <name><surname>Palmer</surname><given-names>C.</given-names></name></person-group> (<year>2002</year>). <article-title>Effects of delayed auditory feedback in music performance</article-title>. <source>Psychol. Res.</source> <volume>66</volume>, <fpage>71</fpage>&#x2013;<lpage>79</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s004260100075</pub-id>, <pub-id pub-id-type="pmid">11963280</pub-id></mixed-citation></ref>
<ref id="ref84"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pfordresher</surname><given-names>P. Q.</given-names></name> <name><surname>Palmer</surname><given-names>C.</given-names></name></person-group> (<year>2006</year>). <article-title>Effects of hearing the past, present, or future during music performance</article-title>. <source>Percept. Psychophys.</source> <volume>68</volume>, <fpage>362</fpage>&#x2013;<lpage>376</lpage>. doi: <pub-id pub-id-type="doi">10.3758/bf03193683</pub-id>, <pub-id pub-id-type="pmid">16900830</pub-id></mixed-citation></ref>
<ref id="ref85"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Rosen</surname><given-names>C.</given-names></name></person-group> (<year>1972</year>). <source>The classical style: Haydn, Mozart, Beethoven</source>. New York: <publisher-name>W. W. Norton</publisher-name>.</mixed-citation></ref>
<ref id="ref86"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Schaefer</surname><given-names>R. S.</given-names></name></person-group> (<year>2014</year>). <article-title>Mental representations in musical processing and their role in action-perception loops</article-title>. <source>Empir. Musicol. Rev.</source> <volume>9</volume>, <fpage>161</fpage>&#x2013;<lpage>176</lpage>. doi: <pub-id pub-id-type="doi">10.18061/emr.v9i3-4.4291</pub-id></mixed-citation></ref>
<ref id="ref87"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Schmider</surname><given-names>E.</given-names></name> <name><surname>Ziegler</surname><given-names>E.</given-names></name> <name><surname>Danay</surname><given-names>E.</given-names></name> <name><surname>Beyer</surname><given-names>L.</given-names></name> <name><surname>B&#x00FC;hner</surname><given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Is it really robust? Reinvestigating the robustness of ANOVA against violations of the normal distribution assumption</article-title>. <source>Methodology</source> <volume>6</volume>, <fpage>147</fpage>&#x2013;<lpage>151</lpage>. doi: <pub-id pub-id-type="doi">10.1027/1614-2241/a000016</pub-id></mixed-citation></ref>
<ref id="ref88"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Schmuckler</surname><given-names>M. A.</given-names></name></person-group> (<year>1989</year>). <article-title>Expectation in music: investigation of melodic and harmonic processes</article-title>. <source>Music. Percept.</source> <volume>7</volume>, <fpage>109</fpage>&#x2013;<lpage>150</lpage>. doi: <pub-id pub-id-type="doi">10.2307/40285454</pub-id></mixed-citation></ref>
<ref id="ref89"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Schmuckler</surname><given-names>M. A.</given-names></name></person-group> (<year>1990</year>). <article-title>The performance of global expectations</article-title>. <source>Psychomusicol.</source> <volume>9</volume>, <fpage>122</fpage>&#x2013;<lpage>147</lpage>. doi: <pub-id pub-id-type="doi">10.1037/h0094151</pub-id></mixed-citation></ref>
<ref id="ref90"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Schmuckler</surname><given-names>M. A.</given-names></name></person-group> (<year>2004</year>). &#x201C;<chapter-title>Pitch and pitch structures</chapter-title>&#x201D; in <source>Ecological psychoacoustics</source>. ed. <person-group person-group-type="editor"><name><surname>Neuhoff</surname><given-names>J.</given-names></name></person-group> (San Diego: <publisher-name>Academic Press</publisher-name>), <fpage>271</fpage>&#x2013;<lpage>315</lpage>.</mixed-citation></ref>
<ref id="ref91"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Schmuckler</surname><given-names>M. A.</given-names></name></person-group> (<year>2009</year>). &#x201C;<chapter-title>Components of melodic processing</chapter-title>&#x201D; in <source>Oxford handbook of music psychology</source>. eds. <person-group person-group-type="editor"><name><surname>Hallam</surname><given-names>S.</given-names></name> <name><surname>Cross</surname><given-names>I.</given-names></name> <name><surname>Thaut</surname><given-names>M.</given-names></name></person-group>. <edition>1st</edition> ed (Oxford: <publisher-name>Oxford University Press</publisher-name>), <fpage>93</fpage>&#x2013;<lpage>106</lpage>.</mixed-citation></ref>
<ref id="ref92"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Schmuckler</surname><given-names>M. A.</given-names></name></person-group> (<year>2016</year>). &#x201C;<chapter-title>Tonality and contour in melodic processing</chapter-title>&#x201D; in <source>The Oxford handbook of music psychology</source>. eds. <person-group person-group-type="editor"><name><surname>Hallam</surname><given-names>S.</given-names></name> <name><surname>Cross</surname><given-names>I.</given-names></name> <name><surname>Thaut</surname><given-names>M.</given-names></name></person-group>. <edition>2nd</edition> ed (Oxford: <publisher-name>Oxford University Press</publisher-name>), <fpage>143</fpage>&#x2013;<lpage>165</lpage>.</mixed-citation></ref>
<ref id="ref93"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Schmuckler</surname><given-names>M. A.</given-names></name></person-group> (<year>2023</year>). &#x201C;<chapter-title>Tonality and key-finding in music: answers and questions</chapter-title>&#x201D; in <source>The Oxford handbook of music and corpus studies</source>. eds. <person-group person-group-type="editor"><name><surname>Shanahan</surname><given-names>D.</given-names></name> <name><surname>Burgoyne</surname><given-names>J. A.</given-names></name> <name><surname>Quinn</surname><given-names>I.</given-names></name></person-group> (Oxford: <publisher-name>Oxford University Press</publisher-name>).</mixed-citation></ref>
<ref id="ref94"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Schmuckler</surname><given-names>M. A.</given-names></name> <name><surname>Bosman</surname><given-names>E. L.</given-names></name></person-group> (<year>1997</year>). <article-title>Interkey timing in piano performance and typing</article-title>. <source>Can. J. Exp. Psychol.</source> <volume>51</volume>, <fpage>99</fpage>&#x2013;<lpage>111</lpage>. doi: <pub-id pub-id-type="doi">10.1037/1196-1961.51.2.99</pub-id>, <pub-id pub-id-type="pmid">9340078</pub-id></mixed-citation></ref>
<ref id="ref95"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Schmuckler</surname><given-names>M. A.</given-names></name> <name><surname>Tomovski</surname><given-names>R.</given-names></name></person-group> (<year>2005</year>). <article-title>Perceptual tests of an algorithm for musical key-finding</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>31</volume>, <fpage>1124</fpage>&#x2013;<lpage>1149</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0096-1523.31.5.1124</pub-id>, <pub-id pub-id-type="pmid">16262503</pub-id></mixed-citation></ref>
<ref id="ref96"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Schmuckler</surname><given-names>M. A.</given-names></name> <name><surname>Vuvan</surname><given-names>D. T.</given-names></name> <name><surname>Lewandowski</surname><given-names>O. P.</given-names></name></person-group> (<year>2020</year>). <article-title>Aggregate context effects in music processing</article-title>. <source>Atten. Percept. Psychophys.</source> <volume>82</volume>, <fpage>2215</fpage>&#x2013;<lpage>2229</lpage>. doi: <pub-id pub-id-type="doi">10.3758/s13414-020-02003-4</pub-id>, <pub-id pub-id-type="pmid">32166641</pub-id></mixed-citation></ref>
<ref id="ref97"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sch&#x00F6;n</surname><given-names>D.</given-names></name> <name><surname>Besson</surname><given-names>M.</given-names></name></person-group> (<year>2002</year>). <article-title>Processing pitch and duration in music reading: a RT-ERP study</article-title>. <source>Neuropsychologia</source> <volume>40</volume>, <fpage>868</fpage>&#x2013;<lpage>878</lpage>. doi: <pub-id pub-id-type="doi">10.1016/s0028-3932(01)00170-1</pub-id>, <pub-id pub-id-type="pmid">11900738</pub-id></mixed-citation></ref>
<ref id="ref98"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Shepard</surname><given-names>R. N.</given-names></name></person-group> (<year>1982a</year>). <article-title>Geometrical approximations to the structure of musical pitch</article-title>. <source>Psychol. Rev.</source> <volume>89</volume>, <fpage>305</fpage>&#x2013;<lpage>333</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0033-295X.89.4.305</pub-id>, <pub-id pub-id-type="pmid">7134331</pub-id></mixed-citation></ref>
<ref id="ref99"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Shepard</surname><given-names>R. N.</given-names></name></person-group> (<year>1982b</year>). &#x201C;<chapter-title>Structural representations of musical pitch</chapter-title>&#x201D; in <source>The psychology of music</source>. ed. <person-group person-group-type="editor"><name><surname>Deutsch</surname><given-names>D.</given-names></name></person-group> (San Diego: <publisher-name>Academic Press</publisher-name>), <fpage>343</fpage>&#x2013;<lpage>390</lpage>.</mixed-citation></ref>
<ref id="ref100"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sloboda</surname><given-names>J. A.</given-names></name></person-group> (<year>1974</year>). <article-title>The eye-hand span &#x2013; an approach to the study of sight reading</article-title>. <source>Psychol. Music</source> <volume>2</volume>, <fpage>4</fpage>&#x2013;<lpage>10</lpage>. doi: <pub-id pub-id-type="doi">10.1177/030573567422001</pub-id></mixed-citation></ref>
<ref id="ref101"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sloboda</surname><given-names>J. A.</given-names></name></person-group> (<year>1977</year>). <article-title>Phrase units as determinants of visual processing in music reading</article-title>. <source>Br. J. Psychol.</source> <volume>68</volume>, <fpage>117</fpage>&#x2013;<lpage>124</lpage>. doi: <pub-id pub-id-type="doi">10.1111/j.2044-8295.1977.tb01566.x</pub-id></mixed-citation></ref>
<ref id="ref102"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Spyra</surname><given-names>J.</given-names></name> <name><surname>Stodolak</surname><given-names>M.</given-names></name> <name><surname>Woolhouse</surname><given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>Events versus time in the perception of nonadjacent key relationships</article-title>. <source>Musicae Sci.</source> <volume>25</volume>, <fpage>212</fpage>&#x2013;<lpage>225</lpage>. doi: <pub-id pub-id-type="doi">10.1177/1029864919867463</pub-id></mixed-citation></ref>
<ref id="ref103"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Spyra</surname><given-names>J.</given-names></name> <name><surname>Woolhouse</surname><given-names>M.</given-names></name></person-group> (<year>2023</year>). <article-title>Influence of surface features on the perception of nonadjacent musical phrases</article-title>. <source>Musicae Sci.</source> <volume>27</volume>, <fpage>780</fpage>&#x2013;<lpage>797</lpage>. doi: <pub-id pub-id-type="doi">10.1177/10298649221148681</pub-id>, <pub-id pub-id-type="pmid">37711548</pub-id></mixed-citation></ref>
<ref id="ref104"><mixed-citation publication-type="book"><person-group person-group-type="author"><collab id="coll1">The Royal Conservatory</collab></person-group> (<year>2022</year>). <source>Piano syllabus, 2022 edition</source>. Toronto: <publisher-name>The Royal Conservatory</publisher-name>.</mixed-citation></ref>
<ref id="ref105"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Thompson</surname><given-names>W. F.</given-names></name> <name><surname>Cuddy</surname><given-names>L. L.</given-names></name></person-group> (<year>1989</year>). <article-title>Sensitivity to key change in chorale sequences: a comparison of single voices and four-voice harmony</article-title>. <source>Music. Percept.</source> <volume>7</volume>, <fpage>151</fpage>&#x2013;<lpage>168</lpage>. doi: <pub-id pub-id-type="doi">10.2307/40285455</pub-id></mixed-citation></ref>
<ref id="ref106"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Thompson</surname><given-names>W. F.</given-names></name> <name><surname>Cuddy</surname><given-names>L. L.</given-names></name></person-group> (<year>1992</year>). <article-title>Perceived key movement in four-voice harmony and single voices</article-title>. <source>Music. Percept.</source> <volume>9</volume>, <fpage>427</fpage>&#x2013;<lpage>438</lpage>. doi: <pub-id pub-id-type="doi">10.2307/40285563</pub-id></mixed-citation></ref>
<ref id="ref107"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Thompson</surname><given-names>W. F.</given-names></name> <name><surname>Cuddy</surname><given-names>L. L.</given-names></name></person-group> (<year>1997</year>). <article-title>Music performance and the perception of key</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>23</volume>, <fpage>116</fpage>&#x2013;<lpage>135</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0096-1523.23.1.116</pub-id>, <pub-id pub-id-type="pmid">9090149</pub-id></mixed-citation></ref>
<ref id="ref108"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tillmann</surname><given-names>B.</given-names></name> <name><surname>Bharucha</surname><given-names>J. J.</given-names></name> <name><surname>Bigand</surname><given-names>E.</given-names></name></person-group> (<year>2000</year>). <article-title>Implicit learning of tonality: a self-organizing approach</article-title>. <source>Psychol. Rev.</source> <volume>107</volume>, <fpage>885</fpage>&#x2013;<lpage>913</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0033-295X.107.4.885</pub-id>, <pub-id pub-id-type="pmid">11089410</pub-id></mixed-citation></ref>
<ref id="ref109"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tirovolas</surname><given-names>A. K.</given-names></name> <name><surname>Levitin</surname><given-names>D. J.</given-names></name></person-group> (<year>2011</year>). <article-title>Music perception and cognition research from 1983 to 2010: a categorical and bibliometric analysis of empirical articles in music perception</article-title>. <source>Music. Percept.</source> <volume>29</volume>, <fpage>23</fpage>&#x2013;<lpage>36</lpage>. doi: <pub-id pub-id-type="doi">10.1525/mp.2011.29.1.23</pub-id></mixed-citation></ref>
<ref id="ref110"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Vuvan</surname><given-names>D. T.</given-names></name> <name><surname>Podolak</surname><given-names>O. M.</given-names></name> <name><surname>Schmuckler</surname><given-names>M. A.</given-names></name></person-group> (<year>2014</year>). <article-title>Memory for musical tones: the impact of tonality and the creation of false memories</article-title>. <source>Front. Psychol.</source> <volume>5</volume>, <fpage>1</fpage>&#x2013;<lpage>18</lpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2014.00582</pub-id>, <pub-id pub-id-type="pmid">24971071</pub-id></mixed-citation></ref>
<ref id="ref111"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Walton</surname><given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>A graph theoretic approach to tonal modulation</article-title>. <source>J. Math. Music</source> <volume>4</volume>, <fpage>45</fpage>&#x2013;<lpage>56</lpage>. doi: <pub-id pub-id-type="doi">10.1080/17459730903370940</pub-id></mixed-citation></ref>
<ref id="ref112"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wolf</surname><given-names>T.</given-names></name></person-group> (<year>1976</year>). <article-title>A cognitive model of musical sight-reading</article-title>. <source>J. Psycholinguist. Res.</source> <volume>5</volume>, <fpage>143</fpage>&#x2013;<lpage>171</lpage>. doi: <pub-id pub-id-type="doi">10.1007/bf01067255</pub-id>, <pub-id pub-id-type="pmid">957269</pub-id></mixed-citation></ref>
<ref id="ref113"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Woolhouse</surname><given-names>M.</given-names></name> <name><surname>Cross</surname><given-names>I.</given-names></name> <name><surname>Horton</surname><given-names>T.</given-names></name></person-group> (<year>2016</year>). <article-title>Perception of nonadjacent tonic-key relationships</article-title>. <source>Psychol. Music</source> <volume>44</volume>, <fpage>802</fpage>&#x2013;<lpage>816</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0305735615593409</pub-id></mixed-citation></ref>
</ref-list>
<fn-group>
<fn fn-type="custom" custom-type="edited-by" id="fn0005">
<p>Edited by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/857713/overview">Anja-Xiaoxing Cui</ext-link>, University of Vienna, Austria</p>
</fn>
<fn fn-type="custom" custom-type="reviewed-by" id="fn0006">
<p>Reviewed by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1124404/overview">Laura Bishop</ext-link>, University of Oslo, Norway</p>
<p><ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/3263890/overview">You Jin Kim</ext-link>, Kyung Hee University, Republic of Korea</p>
</fn>
</fn-group>
<fn-group>
<fn id="fn0001">
<label>1</label>
<p>Readers with a musical background will note that our choice of tonalities does not fall evenly around the circle of fifths; even spacing would have required employing tonalities of two steps, four steps, and six steps around this circle. Instead, our choice reflects a desire to employ both the most and least related tonalities for the close and far modulation conditions. Because these tonalities lie one and six steps, respectively, around this circle, the decision then became whether to employ the tonality three or four steps away as the mid modulation. Ultimately, we (admittedly arbitrarily) chose the tonality three steps removed, which we would argue represents a perfectly reasonable moderately related tonality.</p>
</fn>
<fn id="fn0002">
<label>2</label>
<p>One might ask why we chose the intermediate step of calculating percent errors on a measure-by-measure basis and then aggregating these errors into sections, as opposed to simply calculating percent errors on the aggregated sections directly. Our initial intent was to analyze the data on a measure-by-measure basis; unfortunately, this analysis did not provide any additional insight into response patterns beyond what was discernible from the sectional analyses. Accordingly, in the interests of efficiency, clarity, and brevity, we present only the sectional analyses.</p>
</fn>
<fn id="fn0003">
<label>3</label>
<p>In fact, we attempted to quantify this adaptation process by examining percent error rates in this block on a trial-by-trial basis. Unfortunately, this microanalysis yielded inconclusive findings.</p>
</fn>
</fn-group>
</back>
</article>