<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2018.00105</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Musical Scales in Tone Sequences Improve Temporal Accuracy</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Li</surname> <given-names>Min S.</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/480041/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Di Luca</surname> <given-names>Massimiliano</given-names></name>
<xref ref-type="author-notes" rid="fn001"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/47732/overview"/>
</contrib>
</contrib-group>
<aff><institution>Centre for Computational Neuroscience and Cognitive Robotics, School of Psychology, University of Birmingham</institution>, <addr-line>Birmingham</addr-line>, <country>United Kingdom</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: <italic>Ikuya Murakami, The University of Tokyo, Japan</italic></p></fn>
<fn fn-type="edited-by"><p>Reviewed by: <italic>Guido Marco Cicchini, Consiglio Nazionale delle Ricerche (CNR), Italy; Julian Keil, Christian-Albrechts-Universit&#x00E4;t zu Kiel, Germany</italic></p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x002A;Correspondence: <italic>Massimiliano Di Luca, <email>m.diluca@bham.ac.uk</email></italic></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Perception Science, a section of the journal Frontiers in Psychology</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>06</day>
<month>02</month>
<year>2018</year>
</pub-date>
<pub-date pub-type="collection">
<year>2018</year>
</pub-date>
<volume>09</volume>
<elocation-id>105</elocation-id>
<history>
<date date-type="received">
<day>03</day>
<month>10</month>
<year>2017</year>
</date>
<date date-type="accepted">
<day>22</day>
<month>01</month>
<year>2018</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2018 Li and Di Luca.</copyright-statement>
<copyright-year>2018</copyright-year>
<copyright-holder>Li and Di Luca</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Predicting the time of stimulus onset is a key component in perception. Previous investigations of perceived timing have focused on the effect of stimulus properties such as rhythm and temporal irregularity, but the influence of non-temporal properties and their role in predicting stimulus timing has not been exhaustively considered. The present study aims to understand how a non-temporal pattern in a sequence of regularly timed stimuli could improve or bias the detection of temporal deviations. We presented interspersed sequences of 3, 4, 5, and 6 auditory tones where only the timing of the last stimulus could slightly deviate from isochrony. Participants reported whether the last tone was &#x2018;earlier&#x2019; or &#x2018;later&#x2019; relative to the expected regular timing. In two conditions, the tones composing the sequence were either organized into musical scales or they were random tones. In one experiment, all sequences ended with the same tone; in the other experiment, each sequence ended with a different tone. Results indicate higher discriminability of anisochrony with musical scales and with longer sequences, irrespective of the knowledge of the final tone. Such an outcome suggests that the predictability of non-temporal properties, as enabled by the musical scale pattern, can be a factor in determining the sensitivity of time judgments.</p>
</abstract>
<kwd-group>
<kwd>tone frequency</kwd>
<kwd>expectation</kwd>
<kwd>perceived timing</kwd>
<kwd>temporal sensitivity</kwd>
<kwd>musical scale</kwd>
<kwd>isochrony</kwd>
</kwd-group>
<counts>
<fig-count count="5"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="62"/>
<page-count count="9"/>
<word-count count="0"/>
</counts>
</article-meta>
</front>
<body>
<sec><title>Introduction</title>
<p>Perceived timing does not necessarily represent the objective time (e.g., <xref ref-type="bibr" rid="B62">Woodrow, 1935</xref>; <xref ref-type="bibr" rid="B1">Allan, 1979</xref>), as perception can be influenced by stimulus repetitions, sequence patterns, and expectations (e.g., <xref ref-type="bibr" rid="B25">Jones, 1976</xref>; <xref ref-type="bibr" rid="B18">Hirsh et al., 1990</xref>; <xref ref-type="bibr" rid="B51">Rose and Summers, 1995</xref>). The simplest form of temporal pattern, the repeated presentation of identical stimuli separated by identical intervals, has been shown to lead to increased temporal sensitivity in detecting anisochrony, and such an increase has been quantitatively captured by several models (<xref ref-type="bibr" rid="B52">Schulze, 1978</xref>; <xref ref-type="bibr" rid="B13">Drake and Botte, 1993</xref>; <xref ref-type="bibr" rid="B23">Ivry and Hazeltine, 1995</xref>; <xref ref-type="bibr" rid="B43">Miller and McAuley, 2005</xref>; <xref ref-type="bibr" rid="B58">ten Hoopen et al., 2011</xref>; <xref ref-type="bibr" rid="B36">Li et al., 2016</xref>). Similar perceptual influences of patterns on the precision of discrimination performance occur also when the pattern is more complex (<xref ref-type="bibr" rid="B3">Barnes and Jones, 2000</xref>; <xref ref-type="bibr" rid="B41">McAuley and Jones, 2003</xref>). 
Such improvements in temporal sensitivity have been interpreted in several ways, including an averaging process for the perceptual representation of interval durations (<xref ref-type="bibr" rid="B52">Schulze, 1978</xref>), the effect of sensory predictions generated by expectations and the conditional probability of future events (<xref ref-type="bibr" rid="B45">Nobre et al., 2007</xref>), or the influence of neuronal oscillation-based predictive timing (<xref ref-type="bibr" rid="B2">Arnal and Giraud, 2012</xref>), which relates to the idea that rhythmic sequences entrain low-frequency neural oscillations, enhancing sensory processing of in-phase stimuli (e.g., <xref ref-type="bibr" rid="B25">Jones, 1976</xref>; <xref ref-type="bibr" rid="B33">Lakatos et al., 2008</xref>; <xref ref-type="bibr" rid="B44">Ng et al., 2012</xref>; <xref ref-type="bibr" rid="B10">Cravo et al., 2013</xref>; <xref ref-type="bibr" rid="B22">Horr et al., 2016</xref>). So far, a large portion of the literature has highlighted the perceptual benefits of temporal patterns in temporal judgments, but there has also been interest in investigating how non-temporal patterns affect the discrimination of time (e.g., <xref ref-type="bibr" rid="B42">Micheyl et al., 1998</xref>; <xref ref-type="bibr" rid="B27">Jones, 2009</xref>; <xref ref-type="bibr" rid="B46">Okazaki and Ichikawa, 2016</xref>). In this work, we will analyze whether some of these accounts, based on the presence of temporal patterns, hold for sequences with non-temporal patterns.</p>
<p>From as early as the 19th century, numerous studies have proposed a close relationship between pitch, melody, and time (<xref ref-type="bibr" rid="B57">Stumpf, 1890</xref>; <xref ref-type="bibr" rid="B12">Divenyi and Danner, 1977</xref>; <xref ref-type="bibr" rid="B38">Long, 1977</xref>; <xref ref-type="bibr" rid="B17">H&#x00E9;bert and Peretz, 1997</xref>), a relationship that stems from two observations suggesting that temporal and tonal properties find a connection point in music. Firstly, the auditory modality has traditionally been recognized to have the highest temporal resolution of all the senses (<xref ref-type="bibr" rid="B19">Hirsh and Sherrick, 1961</xref>), i.e., temporal attributes are perceived more precisely in human hearing than in any other sense. Thus, audition seemed to be the modality best tailored to process temporal properties, whether they are the frequency of a tone or its timing in a melody. Secondly, rhythm in music provides temporal cues which not only lead to temporal expectancies in complex melodic phrases, but also become a fundamental element in the perception of the musical piece (<xref ref-type="bibr" rid="B7">Cariani, 1999</xref>; <xref ref-type="bibr" rid="B34">Large and Palmer, 2002</xref>). There are several examples of scientific investigations supporting the connection between temporal and tonal perception. For instance, <xref ref-type="bibr" rid="B32">K&#x00F6;nig (1957)</xref> demonstrated worse pitch discrimination with longer intervals (up to 5 s). Temporal regularity has been shown to help with implicit pitch structure learning (<xref ref-type="bibr" rid="B29">Jones et al., 2002</xref>; <xref ref-type="bibr" rid="B53">Selchenkova et al., 2014</xref>). Recent research (for example, <xref ref-type="bibr" rid="B30">Kinney and Forsythe, 2012</xref>) showed positive influences of melody on various timing tasks, such as interval reproduction. 
In summary, the literature hints at an association between time and music where tone discrimination is facilitated or hindered by temporal properties of the stimulus. But because the results have been obtained with a range of different methods, it is difficult to infer whether the opposite is true, i.e., to what degree the structure of the sequence in the tonal domain can influence sensitivity to temporal properties, like the detection of deviations from a rhythm.</p>
<p>Other than the precision with which temporal judgments can be performed, recent studies have also been concerned with the presence of biases in the discrimination of temporal properties in sequences of stimuli. The perception of intervals in an isochronous sequence of identical stimuli is affected by a bias, as sequences need to be accelerated to be perceived as isochronous (<xref ref-type="bibr" rid="B11">Di Luca and Rhodes, 2016</xref>). In accordance with this tendency to perceive accelerating sequences as isochronous and consequently to produce sequences that naturally speed up (<xref ref-type="bibr" rid="B56">Spence and Parise, 2010</xref>; <xref ref-type="bibr" rid="B61">Wackermann et al., 2014</xref>), it has been found that the last interval in a sequence appears shorter than it should, an effect consistent with a perceptual acceleration of the last stimulus (<xref ref-type="bibr" rid="B11">Di Luca and Rhodes, 2016</xref>; <xref ref-type="bibr" rid="B36">Li et al., 2016</xref>). The presence of such biases in the perception of temporal properties is, as in the case of precision just discussed, affected by whether sequences have an organization, like the one music can provide. It has been shown, for example, that perceived duration largely depends on the stimulus context and on the events that occur during that particular duration of time (<xref ref-type="bibr" rid="B47">Ornstein, 1975</xref>; <xref ref-type="bibr" rid="B5">Block, 1978</xref>; <xref ref-type="bibr" rid="B21">Horr and Di Luca, 2015b</xref>). Similarly, <xref ref-type="bibr" rid="B9">Clynes and Walker (1986)</xref> found that musical concepts influenced the stability and accuracy of timing in musical performances. 
Despite the suggestion that temporal judgment and motor behavior in time are two distinct sensory attributes (<xref ref-type="bibr" rid="B37">London, 2011</xref>), the perception of musical time has also been shown to be biased by musical characteristics (<xref ref-type="bibr" rid="B39">Longuet-Higgins and Lee, 1982</xref>; <xref ref-type="bibr" rid="B14">Grisey, 1987</xref>). For instance, <xref ref-type="bibr" rid="B6">Boltz (1989)</xref> found that musical endings affected duration judgments, so that an unexpected tonic ending made the last interval appear shorter compared to cases where the music finished with an expected tone. Such a tendency is similar to what was found by <xref ref-type="bibr" rid="B20">Horr and Di Luca (2015a)</xref>, who showed that temporal regularity significantly increased the perceived duration of intervals compared to irregular ones. Interestingly for this paper, changing the regularity in tone frequency did not bias perceived duration. <xref ref-type="bibr" rid="B8">Clarke and Krumhansl (1990)</xref>, instead, failed to identify a relationship between the perceived duration of a brief passage of music and the variety of the musical structure. Such a lack of influence may be attributable to the recruitment of a group of trained musical experts to take part in the experiments. Due to the inconsistent use of experimental tasks such as perceptual judgments, sensorimotor synchronization, and musical performances, as well as the difference in the populations tested in previous literature (i.e., musicians and non-musicians), it is difficult to draw a clear picture of the relation between sequence structure and bias in subjective timing. In particular, it is not clear how to interpret the difference in perceived timing between regular and irregular sequences: it could be due to structure reducing an otherwise present bias or, on the contrary, to an unexpected stimulus property creating temporal biases. 
Here we attempt to answer this question by studying how a non-temporal structure affects the accuracy in perceived timing.</p>
<p>To look into the effect of tone patterns on the precision of temporal discrimination and on perceived timing, we cannot rely on duration reproduction tasks (<xref ref-type="bibr" rid="B30">Kinney and Forsythe, 2012</xref>), because motor variability has the effect of decreasing measurement precision. Instead, we will employ a temporal discrimination task that uses a two-alternative forced choice (2AFC) &#x2018;early or late&#x2019; judgment (see <xref ref-type="bibr" rid="B36">Li et al., 2016</xref>). To estimate changes in temporal sensitivity (<xref ref-type="bibr" rid="B16">Hansen and Pearce, 2014</xref>) and biases in perceived timing, we will employ, respectively, the Just Noticeable Difference (JND) and the Point of Subjective Equality (PSE) calculated from each participant&#x2019;s distribution of &#x2018;late&#x2019; responses. Our experiment intentionally avoided recruiting musicians, as it has been shown that they obtain higher levels of performance in behavioral tasks (<xref ref-type="bibr" rid="B49">Repp and Doggett, 2007</xref>; <xref ref-type="bibr" rid="B48">Petrini et al., 2009</xref>; <xref ref-type="bibr" rid="B40">Matthews et al., 2016</xref>) and do not exhibit perceptual biases (<xref ref-type="bibr" rid="B8">Clarke and Krumhansl, 1990</xref>), differences that can be explained by a different cortical connectivity compared to the general population (<xref ref-type="bibr" rid="B35">Lee and Noppeney, 2011</xref>).</p>
<p>In addition, to specifically avoid the confounding factors deriving from rhythm and melody, we analyze the interaction between sequence type and temporal structure using one of the simplest forms of tonal structure. That is, we test whether the arrangement of tones in a musical scale, as opposed to a sequence of random tones, influences the detection of deviations from isochrony. Knowing whether there is an influence will contribute to the understanding of how predictions and expectations within a sequence of stimuli affect perception, as suggested by recent computational accounts (i.e., <xref ref-type="bibr" rid="B24">Jazayeri and Shadlen, 2010</xref>; <xref ref-type="bibr" rid="B11">Di Luca and Rhodes, 2016</xref>; <xref ref-type="bibr" rid="B54">Shi and Burr, 2016</xref>). We will also study whether knowing which tone is to be judged is sufficient to increase precision, i.e., by allowing participants to expect the tone and allocate the appropriate attentional resources. To do this, in Experiment 1, the final tone will vary across trials, whereas in Experiment 2, the final tone will always be presented with the same pitch (note A; 440 Hz).</p>
</sec>
<sec id="s1" sec-type="materials|methods">
<title>Materials and Methods</title>
<sec><title>Participants</title>
<p>A total of 42 non-musician undergraduate students (35 females, 19.6 &#x00B1; 2.4 years) with self-reported normal hearing were recruited through the Research Participation Scheme of the University of Birmingham. Participants were divided into two groups that took part in either Experiment 1 or Experiment 2. They gave informed consent before taking part and were rewarded with either course credits or a payment of 6 GBP/h. The study followed the ethical guidelines of the Declaration of Helsinki and was approved by the Science, Technology, Engineering and Mathematics (STEM) Ethics Committee of the University of Birmingham.</p>
</sec>
<sec><title>Experimental Design</title>
<p>Participants were presented, via Soundlab/Electrovision A069 mono earpiece headphones (with cup clip), with sequences of 3, 4, 5, or 6 tones of 60 ms duration, spaced 700 ms apart, except for the final tone, whose timing could deviate from isochrony by 0, &#x00B1;20, &#x00B1;40, &#x00B1;60, &#x00B1;80, &#x00B1;100, &#x00B1;150, or &#x00B1;200 ms. The length of a trial ranged from 1380 to 4060 ms, depending on the length of the sequence and the anisochrony of the final tone. The four sequence lengths were randomly intermixed within a block. At the end of each trial, participants pressed one of two keys to indicate whether the final tone in the sequence was &#x2018;early&#x2019; or &#x2018;late&#x2019; compared to the expected regular timing. Participants were offered the possibility to take a break at three points during the experiment.</p>
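<p>As a sanity check on the reported 1380&#x2013;4060 ms range, the trial length can be reconstructed under the assumption that the 700 ms spacing denotes the silent gap between one tone's offset and the next tone's onset (an assumption consistent with the reported extremes). The helper name below is ours, not from the original study.</p>

```python
def trial_duration_ms(n_tones, final_deviation_ms, tone_ms=60, gap_ms=700):
    """Total trial duration: n tones of tone_ms each, separated by gap_ms
    of silence, with the final tone shifted by the given deviation."""
    return n_tones * tone_ms + (n_tones - 1) * gap_ms + final_deviation_ms

# Shortest trial: 3 tones with the final tone 200 ms early
print(trial_duration_ms(3, -200))  # 1380
# Longest trial: 6 tones with the final tone 200 ms late
print(trial_duration_ms(6, 200))   # 4060
```

Under this reading, the reported extremes follow directly from the sequence length and the largest anisochronies.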
<p>All trial types resulting from the combination of sequence type (2 values: random and scale), sequence length (4 values), and anisochrony of the final tone (15 values) were repeated 8 times in random order, resulting in 960 trials per participant. We analyzed the proportion of &#x2018;late&#x2019; responses at each anisochrony of the final tone to obtain a distribution for each sequence length and sequence type. The Spearman-K&#x00E4;rber method (<xref ref-type="bibr" rid="B60">Ulrich and Miller, 2004</xref>) was employed to analyze the data: the PSE was obtained by calculating the first-order moment of the monotonized differences (<xref ref-type="bibr" rid="B31">Klein, 2001</xref>) between successive proportions of responses, while the JND was obtained by calculating the second-order moment. <italic>Post-hoc</italic> tests were conducted on the obtained JND and PSE values to confirm the differences between the conditions tested.</p>
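<p>A minimal sketch of the Spearman-K&#x00E4;rber computation described above, assuming a cumulative-maximum monotonization of the response proportions and taking the JND as the square root of the second central moment (i.e., the standard deviation) of the difference distribution; the function name and padding scheme are illustrative, not the study's code.</p>

```python
import numpy as np

def spearman_karber(anisochrony_ms, p_late):
    """Spearman-Karber estimates of PSE and JND from the proportion of
    'late' responses at each anisochrony level (levels in ascending order)."""
    x = np.asarray(anisochrony_ms, dtype=float)
    p = np.maximum.accumulate(np.asarray(p_late, dtype=float))  # monotonize
    # Pad with 0 and 1 so the difference distribution sums to one,
    # extending the level axis by one step on each side.
    x = np.concatenate(([2 * x[0] - x[1]], x, [2 * x[-1] - x[-2]]))
    p = np.concatenate(([0.0], p, [1.0]))
    mids = 0.5 * (x[1:] + x[:-1])   # interval midpoints
    w = np.diff(p)                  # probability mass per interval
    pse = np.sum(w * mids)                        # first moment
    jnd = np.sqrt(np.sum(w * (mids - pse) ** 2))  # sqrt of second central moment
    return pse, jnd
```

For a symmetric step-like response distribution centered on zero, the estimate recovers a PSE of 0 with the spread determined by the transition region.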
<sec><title>Experiment 1</title>
<p>For the scale condition, the sequence of tones was one of four ascending diatonic scales: F major, C major, D major, and E major, with tone frequencies ranging from 261.63 to 587.3 Hz. For the random condition, to avoid any sort of tonality, the frequency of each tone was randomly selected from the range of those employed in the scale condition. The final tone of the sequence varied across trials (non-fixed final tone condition). See <bold>Figure <xref ref-type="fig" rid="F1">1A</xref></bold>.</p>
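<p>For illustration, the scale-condition frequencies can be derived in twelve-tone equal temperament; the tuning system and helper names are our assumptions, but they are consistent with the reported 261.63&#x2013;587.3 Hz range (C4 up to D5, the top of the D major scale).</p>

```python
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11, 12]  # semitones above the tonic

def major_scale(tonic_hz):
    """Frequencies of an ascending major scale in 12-tone equal temperament."""
    return [round(tonic_hz * 2 ** (s / 12), 2) for s in MAJOR_STEPS]

C4 = 261.63
print(major_scale(C4))       # C major: starts at 261.63, ends an octave up at 523.26
D4 = C4 * 2 ** (2 / 12)      # two semitones above C4
print(major_scale(D4)[-1])   # top of the D major scale, close to 587.3 Hz
```

A random-condition tone can then be drawn uniformly from this frequency range rather than from the scale steps.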
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p><bold>(A)</bold> Sequence type conditions in Experiment 1. The top row in gray depicts two examples of random sequences, while the bottom row in black depicts two of the four scale sequences. <bold>(B)</bold> Sequence type conditions in Experiment 2 where sequences always ended with the same tone. The top row depicts two examples of random sequences, while the bottom row in black depicts two of the four scale sequences.</p></caption>
<graphic xlink:href="fpsyg-09-00105-g001.tif"/>
</fig>
</sec>
<sec><title>Experiment 2</title>
<p>For the scale condition, the sequence of tones was one of four ascending and descending diatonic scales: E major, C major, E minor, and C minor, with tone frequencies ranging from 261.63 to 740 Hz. For the random condition, the frequency of each tone was randomly selected. To control predictability in this particular experiment, the final tone of every sequence was always 440 Hz (fixed final tone condition). See <bold>Figure <xref ref-type="fig" rid="F1">1B</xref></bold>.</p>
</sec>
</sec></sec>
<sec><title>General Results</title>
<p>From the proportion of response data (<bold>Figure <xref ref-type="fig" rid="F2">2</xref></bold>), we calculated PSE and JND values. JND values are shown in <bold>Figure <xref ref-type="fig" rid="F3">3A</xref></bold> for each of the tested conditions. We conducted a three-way (one between and two within) ANOVA (and in parallel, a Bayesian mixed ANOVA) with the JND values, where the two final tone conditions (non-fixed, Experiment 1, and fixed at 440 Hz, Experiment 2) served as the between factor, two sequence types (random and scale) as the first within factor, and four sequence lengths (3, 4, 5, and 6 tones) as the second within factor. We did not find a significant difference in JND due to the final tone conditions [Experiment 1 vs. Experiment 2, <italic>F</italic>(1,40) = 1.4, <italic>p</italic> = 0.236, <inline-formula><mml:math id="M1"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.04, <italic>BF<sub>10</sub></italic> = 0.62], as shown in <bold>Figure <xref ref-type="fig" rid="F3">3A</xref></bold>. <bold>Figure <xref ref-type="fig" rid="F4">4A</xref></bold> shows that sequence type changed JND by roughly 9% when comparing random to scale sequences [<italic>F</italic>(1,40) = 10.5, <italic>p</italic> = 0.002, <inline-formula><mml:math id="M2"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.21, <italic>BF<sub>10</sub></italic> = 27.8]. 
In addition, results demonstrated that the detectability of anisochrony changed with sequence length [<italic>F</italic>(3,120) = 4.3, <italic>p</italic> = 0.007, <italic>BF<sub>10</sub></italic> = 1.4], as shown in <bold>Figure <xref ref-type="fig" rid="F4">4A</xref></bold>. No interaction was significant (all <italic>p</italic> > 0.2).</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>The proportion of &#x201C;late&#x201D; responses as a function of anisochrony of the final tone in the two experiments. Such data were analyzed using the Spearman&#x2013;K&#x00E4;rber method to derive the PSE and JND values shown in <bold>Figures <xref ref-type="fig" rid="F3">3</xref></bold>, <bold><xref ref-type="fig" rid="F4">4</xref></bold>. <bold>(A)</bold> Shows the scale condition in Experiment 1, which had non-fixed final tone. <bold>(B)</bold> Shows the random condition in Experiment 1, which had non-fixed final tone. <bold>(C)</bold> Shows the scale condition in Experiment 2, which had the final tones always fixed at 440 Hz. <bold>(D)</bold> Shows the random condition in Experiment 2, which had the final tones always fixed at 440 Hz. All error bars represent the standard error of the mean.</p></caption>
<graphic xlink:href="fpsyg-09-00105-g002.tif"/>
</fig>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p>JND and PSE values obtained in the four conditions (two sequence types and two final tone fixing conditions) as a function of the four sequence lengths. <bold>(A)</bold> Shows the JND values of scale and random sequences with fixed and non-fixed final tone. <bold>(B)</bold> Shows the PSE values. Positive PSE values represent an equal proportion of &#x201C;early&#x201D; and &#x201C;late&#x201D; responses obtained when the final tone is presented later than expected, which is consistent with an acceleration in the perceived timing of the final tone. All error bars represent the standard error of the mean.</p></caption>
<graphic xlink:href="fpsyg-09-00105-g003.tif"/>
</fig>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p>JND and PSE values as a function of sequence type and the four sequence lengths, obtained by collating the values from Experiments 1 and 2 (as we found no significant difference between them). <bold>(A)</bold> JND values evincing an improvement in performance due to sequence length and sequence type. <bold>(B)</bold> PSE values indicating that the final tone needs to be presented later than expected to be perceived as isochronous, with the required delay increasing for longer sequences. All error bars represent the standard error of the mean.</p></caption>
<graphic xlink:href="fpsyg-09-00105-g004.tif"/>
</fig>
<p>To gain further insight into the influence of sequence length on JNDs, we fitted a two-parameter regression line to the JND values of each participant, which showed a decrease of 3.2 &#x00B1; 0.9 ms for each additional tone [single sample <italic>t</italic>-test of the slopes of the regression against 0, <italic>t</italic>(41) = -3.4, <italic>p</italic> = 0.001] and an intercept of 100.5 &#x00B1; 5.3 ms.</p>
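<p>The per-participant regression followed by a one-sample <italic>t</italic>-test on the slopes can be sketched as follows. The data below are synthetic and the helper name is ours; only the analysis structure (a two-parameter line per participant, slopes tested against zero) mirrors the text.</p>

```python
import numpy as np
from scipy import stats

lengths = np.array([3, 4, 5, 6])  # sequence lengths (number of tones)

def fit_slopes(jnd_by_participant):
    """Slope and intercept of a two-parameter line fitted to each
    participant's JNDs across the four sequence lengths."""
    return np.array([np.polyfit(lengths, jnds, 1) for jnds in jnd_by_participant])

# Synthetic data: 42 participants whose JNDs shrink by ~3.2 ms per added tone
rng = np.random.default_rng(0)
jnds = 115 - 3.2 * lengths + rng.normal(0, 5, size=(42, 4))

coeffs = fit_slopes(jnds)                    # column 0: slopes, column 1: intercepts
t, p = stats.ttest_1samp(coeffs[:, 0], 0.0)  # test slopes against zero
print(f"mean slope = {coeffs[:, 0].mean():.1f} ms/tone, t = {t:.1f}, p = {p:.3g}")
```

With a negative true slope and many participants, the test rejects a zero slope, paralleling the significant decrease reported above.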
<p>We now turn our attention to the PSE values (<bold>Figure <xref ref-type="fig" rid="F3">3B</xref></bold>). First, we conducted single sample <italic>t</italic>-tests on the PSE values against zero (presented in <bold>Table <xref ref-type="table" rid="T1">1</xref></bold>). The <italic>t</italic>-test results revealed a significant acceleration in perceived timing with the longest sequences (6 tones) in all four conditions. In addition, the PSE deviated from 0 for sequences composed of 3 and 5 tones in the non-fixed final tone condition only. Moreover, a regression line fitted to the PSE values and passing through the origin showed that the final tone needs to be delayed by a further 3.2 &#x00B1; 0.4 ms for each additional tone composing the sequence in order to be perceived as isochronous [<italic>t</italic>(41) = 5.4, <italic>p</italic> &#x003C; 0.001].</p>
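<p>A least-squares line constrained to pass through the origin, as used for the PSE fit above, has the closed-form slope b = &#x03A3;xy / &#x03A3;x&#x00B2;. A small illustrative sketch (the values and helper name are hypothetical, not the study's data):</p>

```python
import numpy as np

def slope_through_origin(x, y):
    """Least-squares slope of a regression line constrained through the
    origin: b = sum(x * y) / sum(x ** 2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sum(x * y) / np.sum(x * x))

# Example: PSEs that grow by exactly 3.2 ms per tone recover that slope
lengths = [3, 4, 5, 6]
pses = [3.2 * n for n in lengths]
print(slope_through_origin(lengths, pses))  # ~3.2 ms per additional tone
```

Constraining the intercept to zero encodes the assumption that a hypothetical zero-length sequence carries no bias.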
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Single-sample Bonferroni corrected <italic>t</italic>-test on PSE values against zero.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left"></td>
<th valign="top" align="center" colspan="2">Sequence lengths</th>
<th valign="top" align="center">3</th>
<th valign="top" align="center">4</th>
<th valign="top" align="center">5</th>
<th valign="top" align="center">6</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Experiment 1 (non-fixed final tone)</td>
<td valign="top" align="left">Scale</td>
<td valign="top" align="center"><italic>t</italic>(20) =</td>
<td valign="top" align="center">1.29</td>
<td valign="top" align="center">1.35</td>
<td valign="top" align="center">1.70</td>
<td valign="top" align="center">2.57</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"></td>
<td valign="top" align="center"><italic>p</italic> =</td>
<td valign="top" align="center">0.212</td>
<td valign="top" align="center">0.193</td>
<td valign="top" align="center">0.105</td>
<td valign="top" align="center"><sup>&#x2217;</sup>0.018</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left">Random</td>
<td valign="top" align="center"><italic>t</italic>(20) =</td>
<td valign="top" align="center">3.64</td>
<td valign="top" align="center">1.44</td>
<td valign="top" align="center">2.22</td>
<td valign="top" align="center">4.64</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"></td>
<td valign="top" align="center"><italic>p</italic> =</td>
<td valign="top" align="center"><sup>&#x2217;</sup>0.002</td>
<td valign="top" align="center">0.164</td>
<td valign="top" align="center"><sup>&#x2217;</sup>0.038</td>
<td valign="top" align="center"><sup>&#x2217;</sup>&#x003C;0.001</td>
</tr>
<tr>
<td valign="top" align="left">Experiment 2 (fixed final tone)</td>
<td valign="top" align="left">Scale</td>
<td valign="top" align="center"><italic>t</italic>(20) =</td>
<td valign="top" align="center">0.85</td>
<td valign="top" align="center">-0.15</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">3.63</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"></td>
<td valign="top" align="center"><italic>p</italic> =</td>
<td valign="top" align="center">0.405</td>
<td valign="top" align="center">0.884</td>
<td valign="top" align="center">0.447</td>
<td valign="top" align="center"><sup>&#x2217;</sup>0.002</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left">Random</td>
<td valign="top" align="center"><italic>t</italic>(20) =</td>
<td valign="top" align="center">2.07</td>
<td valign="top" align="center">1.0</td>
<td valign="top" align="center">1.25</td>
<td valign="top" align="center">3.58</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"></td>
<td valign="top" align="center"><italic>p</italic> =</td>
<td valign="top" align="center">0.052</td>
<td valign="top" align="center">0.150</td>
<td valign="top" align="center">0.225</td>
<td valign="top" align="center"><sup>&#x2217;</sup>0.002</td></tr>
</tbody></table>
<table-wrap-foot>
<attrib><italic>Asterisks highlight p-values lower than 0.05</italic>.</attrib>
</table-wrap-foot>
</table-wrap>
<p>We assessed the influence of the experimental condition on PSE values with a three-way mixed ANOVA with the same factors used for the JND analysis. We did not observe a significant influence of the between factor final tone (fixed or non-fixed) on the PSE values [<italic>F</italic>(1,40) = 1.0, <italic>p</italic> = 0.334, <inline-formula><mml:math id="M3"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.02, <italic>BF<sub>10</sub></italic> = 0.316]. The within factor sequence type (random or scale) also did not have an influence on PSEs [<italic>F</italic>(1,40) = 2.0, <italic>p</italic> = 0.169, <inline-formula><mml:math id="M4"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.05, <italic>BF<sub>10</sub></italic> = 0.330]. In accordance with the results of the regression, we found that the timing at which the final tone was perceived to be isochronous changed across the different sequence lengths [<italic>F</italic>(3,120) = 4.4, <italic>p</italic> = 0.006, <italic>BF<sub>10</sub></italic> = 5.67]. The effect of sequence length is best evidenced in <bold>Figure <xref ref-type="fig" rid="F4">4B</xref></bold>, where we collapsed across the non-influential final tone factor. No interaction between factors was present (all <italic>p</italic> > 0.5).</p>
</sec>
<sec><title>Discussion</title>
<p>Our data replicated the improvement in sensitivity with longer sequences that has been reported previously (e.g., <xref ref-type="bibr" rid="B52">Schulze, 1978</xref>; <xref ref-type="bibr" rid="B58">ten Hoopen et al., 2011</xref>; <xref ref-type="bibr" rid="B36">Li et al., 2016</xref>). <xref ref-type="bibr" rid="B52">Schulze (1978)</xref> linked this performance change to the integration of multiple sensory estimates by a running average, which leads to a more precise and optimal representation. Recent accounts have instead framed the sensitivity improvement in terms of an iterative interaction between sensory signals and temporal expectation, according to the rules of Bayesian inference (<xref ref-type="bibr" rid="B11">Di Luca and Rhodes, 2016</xref>). Such a framework has previously been applied to explain several related phenomena in duration judgments. One of these is the regression toward the mean interval when participants are presented with a range of durations during a block of trials (<xref ref-type="bibr" rid="B24">Jazayeri and Shadlen, 2010</xref>). Those results were interpreted by hypothesizing that a temporal prior and a cost function can account for the bias of responses toward the mean, and by showing that such a bias depended on the average duration of the range of intervals. <xref ref-type="bibr" rid="B55">Shi et al. (2013)</xref> proposed that an explanation based on the Bayesian observer model parallels information-processing accounts, but is normatively based on the reduction of temporal uncertainty. Our results are qualitatively consistent with an explanation based on such an uncertainty-reduction principle and suggest that the effect is modulated by the pattern of non-temporal properties.</p>
<p>Our results indicate that temporal discrimination is more precise when the tones of a sequence are arranged in a scale rather than in random order, and that this pattern is present irrespective of knowledge of the final tone. We speculate that equal temperament (scale-step) in the scale condition functioned as a &#x2018;standard,&#x2019; which effectively generated musical expectation in the auditory sequence. Such a scaled pattern with simple tonality acted as a &#x2018;physical attribute&#x2019; (<xref ref-type="bibr" rid="B15">Haluska, 2003</xref>) shaping the subjective perceptual experience. The influence of the regular scale-steps, as compared to atonality, led to the prediction of the frequency of the next tone and, in turn, to better coding of sensory information. Our data suggest that this sensory improvement is present not only in the pitch domain, as demonstrated in previous findings (e.g., <xref ref-type="bibr" rid="B28">Jones and Boltz, 1989</xref>; <xref ref-type="bibr" rid="B53">Selchenkova et al., 2014</xref>), but also in the time domain, as shown by increased temporal sensitivity. In addition to isochrony, as shown in previous studies (e.g., <xref ref-type="bibr" rid="B52">Schulze, 1978</xref>), equal scale-steps in the tonal structure also contribute to creating expectations, which allow more sensitive predictions of upcoming sensory events. We hypothesize that the process underlying such an improvement is akin to the formation of temporal priors that has been postulated for isochronous and equal-pitch sequences (e.g., <xref ref-type="bibr" rid="B11">Di Luca and Rhodes, 2016</xref>). In the time domain, it has been proposed that this process is based on the projection of expectations for future stimuli according to the rules of Bayesian inference, which integrates <italic>a priori</italic> knowledge with sensory evidence (<xref ref-type="bibr" rid="B24">Jazayeri and Shadlen, 2010</xref>). 
If we extend this process to the tone domain, the presence of a scale allows the prediction of the frequency of the next tone which, combined with the prediction of its timing, should iteratively improve the allocation of resources and the precision of sensory judgments (<bold>Figure <xref ref-type="fig" rid="F5">5</xref></bold>). The current outcomes suggest a broader definition of &#x2018;patterning&#x2019; in auditory perception, one no longer limited to tone structure and temporal rhythm taken alone, but recognizing that non-temporal characteristics can affect temporal judgments (and vice versa). The findings denote a direct influence of patterns in tonal structure on time perception.</p>
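The iterative prior&#x2013;likelihood integration sketched above can be illustrated numerically. The following is a minimal sketch of our own (not the authors&#x2019; fitted model): each observed tone onset is fused with the current expectation by precision weighting, so predictive uncertainty shrinks with sequence length, and an extra noise term stands in for the poorer sensory coding assumed for unpredictable (random) tone patterns. All numerical values are illustrative assumptions.

```python
def fuse(prior_mean, prior_var, obs, obs_var):
    """Precision-weighted fusion of a Gaussian prior with a Gaussian observation."""
    k = prior_var / (prior_var + obs_var)   # Kalman-style gain
    mean = prior_mean + k * (obs - prior_mean)
    var = (1.0 - k) * prior_var             # posterior variance always shrinks
    return mean, var

def sequence_uncertainty(n_tones, obs_var, extra_var=0.0):
    """Posterior variance of the predicted inter-onset interval after n_tones.

    extra_var models the additional coding noise hypothesized for sequences
    whose tone frequencies cannot be predicted (random condition).
    """
    mean, var = 0.0, 1e6                    # near-flat initial prior
    for _ in range(n_tones):
        # hypothetical isochronous 0.5-s intervals observed with noise
        mean, var = fuse(mean, var, 0.5, obs_var + extra_var)
    return var

short = sequence_uncertainty(3, 0.01)
long = sequence_uncertainty(6, 0.01)
random_seq = sequence_uncertainty(6, 0.01, extra_var=0.01)
assert long < short        # longer sequences -> tighter temporal expectation
assert long < random_seq   # predictable (scale) tones -> lower uncertainty
```

Under these assumptions the posterior variance falls roughly as 1/n with sequence length, qualitatively matching the JND improvement for longer and scale-patterned sequences reported here.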
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption><p>Graphic illustrating how predictability could influence temporal sensitivity. The distributions represent the predictability of the time of upcoming tones at each position in an isochronous sequence: a flat dashed line means no prediction, whereas a narrow peak means high predictability. <bold>(A)</bold> Shows a progressive increase in predictability in both the tone dimension and the temporal dimension with an isochronous scale-toned sequence. <bold>(B)</bold> Shows that predictability in the time domain also increases with sequence length; however, a random sequence does not lead to successive expectations of tone frequency, yielding a lower overall predictability of stimulus properties and hence the lower precision in the discrimination of isochrony that we find here.</p></caption>
<graphic xlink:href="fpsyg-09-00105-g005.tif"/>
</fig>
<p>In our experiment, in addition to precision, we investigated temporal accuracy by looking at the timing at which participants reported the final tone to be isochronous. We found a general bias to report tones presented later than regular as isochronous, replicating previous findings (<xref ref-type="bibr" rid="B36">Li et al., 2016</xref>). This result is consistent with a perceptual acceleration of stimuli presented in a sequence (<xref ref-type="bibr" rid="B11">Di Luca and Rhodes, 2016</xref>) and can be accounted for by an asymmetric representation of stimuli in the time domain (for details, see <xref ref-type="bibr" rid="B11">Di Luca and Rhodes, 2016</xref>). The manipulation of sequence length increased this anticipatory effect, whereas the tonal patterns had no influence. The anticipatory behavior evident in the PSE outcomes suggests the influence of an attentional phenomenon consistent with <italic>prior entry</italic> (<xref ref-type="bibr" rid="B59">Titchener, 1908</xref>). Analogous to the effects of sensory entrainment (<xref ref-type="bibr" rid="B50">Rohenkohl and Nobre, 2011</xref>), when a stimulus is attended to, its sensory processing is prioritized and facilitated, so that it appears accelerated. The phenomenon has been discussed and captured under various conditions, including an increased number of intervals in a sequence (for a review, see <xref ref-type="bibr" rid="B56">Spence and Parise, 2010</xref>; <xref ref-type="bibr" rid="B36">Li et al., 2016</xref>). In terms of the manipulation of sequence length, longer sequences with more isochronous intervals usually generate more precise temporal expectations. As the number of tones in a sequence increases, the chance of a stimulus being temporally deviant also increases, enhancing the attention deployed to the possible timing of upcoming sensory events and resulting in anticipatory perception.</p>
<p>However, the fact that our PSE results did not highlight a different bias due to the tone structure does not appear to be in line with the Dynamic Attending Theory proposed by <xref ref-type="bibr" rid="B26">Jones (1987)</xref> and <xref ref-type="bibr" rid="B29">Jones et al. (2002)</xref>. This proposal predicts that non-temporal patterns, such as music and tone scales, should lead to a modulation of attention and thus to a change in facilitation with a consequent perceptual bias, which we did not find. A similar attempt to show sensory facilitation has been published recently, using the original pitch comparison task. In line with our results, <xref ref-type="bibr" rid="B4">Bauer et al. (2015)</xref> showed that anticipatory behavior was also not present, possibly because in their stimuli the melody was ignored and the temporal information alone was used for sensory expectation. In addition, they argued that a pitch comparison task may not be the most replicable and suitable judgment for investigating dynamic attending in audition. Here we showed that temporal judgments likewise provide no evidence of anticipatory attending. The failure of our and <xref ref-type="bibr" rid="B4">Bauer et al.&#x2019;s (2015)</xref> attempts to find a bias in perceived timing could also be due to the simplicity of the music in the sequences. In support of this possibility, we note that <xref ref-type="bibr" rid="B26">Jones (1987)</xref> introduced several musical properties, classified as dynamic elements, that could facilitate the perceived onset of an auditory tone. These included melodic accents, harmony, and beat variations. The configuration of these dynamic characteristics in music went beyond simple patterns and followed much more complex rules of musical composition.</p>
<p>The current study exploited simple types of patterns in tone structure and timing to measure whether time perception is affected. We succeeded in showing an influence of a simple tonal pattern on temporal sensitivity, but this difference was not associated with a change in bias. Moreover, we found a change in both precision and accuracy depending on sequence length. We explain this pattern of results by suggesting a predictive mechanism similar to the one hypothesized to regulate the perception of regular temporal intervals. Here, in addition to temporal expectancies, the tonal expectancies generated by a repetitive tone gap improved participants&#x2019; ability to predict future sensory events. In sum, our results demonstrate the effectiveness of non-temporal patterns in the time dimension. Future research could manipulate harmony, chords, different auditory sources (e.g., vocal or instrumental stimuli), or signal reliability to further investigate how patterns influence the precision of temporal judgments. Such research could also be combined with visual and tactile stimuli to provide a comprehensive understanding of musical and temporal perception.</p>
</sec>
<sec><title>Author Contributions</title>
<p>ML and MDL contributed to the design and implementation of the research. ML carried out the experiment and wrote the manuscript with support from MDL who supervised the project.</p>
</sec>
<sec><title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<fn-group>
<fn fn-type="financial-disclosure">
<p><bold>Funding.</bold> This work was supported by the Marie Curie Career Integration Grant CIG304235.</p>
</fn>
</fn-group>
<ack>
<p>We are sincerely grateful to the reviewers for their constructive comments and suggestions. We would also like to thank Prof. Alan Wing, in particular, for providing us with valuable comments in carrying out the research.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Allan</surname> <given-names>L. G.</given-names></name></person-group> (<year>1979</year>). <article-title>The perception of time.</article-title> <source><italic>Atten. Percept. Psychophys.</italic></source> <volume>26</volume> <fpage>340</fpage>&#x2013;<lpage>354</lpage>. <pub-id pub-id-type="doi">10.3758/BF03204158</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arnal</surname> <given-names>L. H.</given-names></name> <name><surname>Giraud</surname> <given-names>A. L.</given-names></name></person-group> (<year>2012</year>). <article-title>Cortical oscillations and sensory predictions.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>16</volume> <fpage>390</fpage>&#x2013;<lpage>398</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2012.05.003</pub-id> <pub-id pub-id-type="pmid">22682813</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barnes</surname> <given-names>R.</given-names></name> <name><surname>Jones</surname> <given-names>M. R.</given-names></name></person-group> (<year>2000</year>). <article-title>Expectancy, attention, and time.</article-title> <source><italic>Cogn. Psychol.</italic></source> <volume>41</volume> <fpage>254</fpage>&#x2013;<lpage>311</lpage>. <pub-id pub-id-type="doi">10.1006/cogp.2000.0738</pub-id> <pub-id pub-id-type="pmid">11032658</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bauer</surname> <given-names>A. K. R.</given-names></name> <name><surname>Jaeger</surname> <given-names>M.</given-names></name> <name><surname>Thorne</surname> <given-names>J. D.</given-names></name> <name><surname>Bendixen</surname> <given-names>A.</given-names></name> <name><surname>Debener</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>The auditory dynamic attending theory revisited: a closer look at the pitch comparison task.</article-title> <source><italic>Brain Res.</italic></source> <volume>1626</volume> <fpage>198</fpage>&#x2013;<lpage>210</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2015.04.032</pub-id> <pub-id pub-id-type="pmid">25934332</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Block</surname> <given-names>R. A.</given-names></name></person-group> (<year>1978</year>). <article-title>Remembered duration: effects of event and sequence complexity.</article-title> <source><italic>Mem. Cogn.</italic></source> <volume>6</volume> <fpage>320</fpage>&#x2013;<lpage>326</lpage>. <pub-id pub-id-type="doi">10.3758/BF03197462</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boltz</surname> <given-names>M.</given-names></name></person-group> (<year>1989</year>). <article-title>Rhythm and &#x201C;good endings&#x201D;: effects of temporal structure on tonality judgments.</article-title> <source><italic>Percept. Psychophys.</italic></source> <volume>46</volume> <fpage>9</fpage>&#x2013;<lpage>17</lpage>. <pub-id pub-id-type="doi">10.3758/BF03208069</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cariani</surname> <given-names>P.</given-names></name></person-group> (<year>1999</year>). <article-title>Temporal coding of periodicity pitch in the auditory system: an overview.</article-title> <source><italic>Neural Plast.</italic></source> <volume>6</volume> <fpage>147</fpage>&#x2013;<lpage>172</lpage>. <pub-id pub-id-type="doi">10.1155/NP.1999.147</pub-id> <pub-id pub-id-type="pmid">10714267</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clarke</surname> <given-names>E. F.</given-names></name> <name><surname>Krumhansl</surname> <given-names>C. L.</given-names></name></person-group> (<year>1990</year>). <article-title>Perceiving musical time.</article-title> <source><italic>Music Percept.</italic></source> <volume>7</volume> <fpage>213</fpage>&#x2013;<lpage>251</lpage>. <pub-id pub-id-type="doi">10.2307/40285462</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clynes</surname> <given-names>M.</given-names></name> <name><surname>Walker</surname> <given-names>J.</given-names></name></person-group> (<year>1986</year>). <article-title>Music as Time&#x2019;s Measure.</article-title> <source><italic>Music Percept.</italic></source> <volume>4</volume> <fpage>85</fpage>&#x2013;<lpage>119</lpage>. <pub-id pub-id-type="doi">10.2307/40285353</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cravo</surname> <given-names>A. M.</given-names></name> <name><surname>Rohenkohl</surname> <given-names>G.</given-names></name> <name><surname>Wyart</surname> <given-names>V.</given-names></name> <name><surname>Nobre</surname> <given-names>A. C.</given-names></name></person-group> (<year>2013</year>). <article-title>Temporal expectation enhances contrast sensitivity by phase entrainment of low- frequency oscillations in visual cortex.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>33</volume> <fpage>4002</fpage>&#x2013;<lpage>4010</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.4675-12.2013</pub-id> <pub-id pub-id-type="pmid">23447609</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Di Luca</surname> <given-names>M.</given-names></name> <name><surname>Rhodes</surname> <given-names>D.</given-names></name></person-group> (<year>2016</year>). <article-title>Optimal perceived timing: integrating sensory information with dynamically updated expectations.</article-title> <source><italic>Sci. Rep.</italic></source> <volume>6</volume>:<issue>28563</issue>. <pub-id pub-id-type="doi">10.1038/srep28563</pub-id> <pub-id pub-id-type="pmid">27385184</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Divenyi</surname> <given-names>P. L.</given-names></name> <name><surname>Danner</surname> <given-names>W. F.</given-names></name></person-group> (<year>1977</year>). <article-title>Discrimination of time intervals marked by brief acoustic pulses of various intensities and spectra.</article-title> <source><italic>Atten. Percept. Psychophy.</italic></source> <volume>21</volume> <fpage>125</fpage>&#x2013;<lpage>142</lpage>. <pub-id pub-id-type="doi">10.3758/BF03198716</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Drake</surname> <given-names>C.</given-names></name> <name><surname>Botte</surname> <given-names>M. C.</given-names></name></person-group> (<year>1993</year>). <article-title>Tempo sensitivity in auditory sequences: evidence for a multiple-look model.</article-title> <source><italic>Percept. Psychophys.</italic></source> <volume>54</volume> <fpage>277</fpage>&#x2013;<lpage>286</lpage>. <pub-id pub-id-type="doi">10.3758/BF03205262</pub-id> <pub-id pub-id-type="pmid">8414886</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grisey</surname> <given-names>G.</given-names></name></person-group> (<year>1987</year>). <article-title>Tempus ex Machina: a composer&#x2019;s reflections on musical time.</article-title> <source><italic>Contemp. Music Rev.</italic></source> <volume>2</volume> <fpage>239</fpage>&#x2013;<lpage>275</lpage>. <pub-id pub-id-type="doi">10.1080/07494468708567060</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haluska</surname> <given-names>J.</given-names></name></person-group> (<year>2003</year>). <source><italic>The Mathematical Theory of Tone Systems.</italic></source> <publisher-loc>Boca Raton, FL</publisher-loc>: <publisher-name>CRC Press</publisher-name>.</citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hansen</surname> <given-names>N. C.</given-names></name> <name><surname>Pearce</surname> <given-names>M. T.</given-names></name></person-group> (<year>2014</year>). <article-title>Predictive uncertainty in auditory sequence processing.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>5</volume>:<issue>1052</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2014.01052</pub-id> <pub-id pub-id-type="pmid">25295018</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>H&#x00E9;bert</surname> <given-names>S.</given-names></name> <name><surname>Peretz</surname> <given-names>I.</given-names></name></person-group> (<year>1997</year>). <article-title>Recognition of music in long-term memory: are melodic and temporal patterns equal partners?</article-title> <source><italic>Mem. Cogn.</italic></source> <volume>25</volume> <fpage>518</fpage>&#x2013;<lpage>533</lpage>. <pub-id pub-id-type="doi">10.3758/BF03201127</pub-id> <pub-id pub-id-type="pmid">9259629</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hirsh</surname> <given-names>I. J.</given-names></name> <name><surname>Monahan</surname> <given-names>C. B.</given-names></name> <name><surname>Grant</surname> <given-names>K. W.</given-names></name> <name><surname>Singh</surname> <given-names>P. G.</given-names></name></person-group> (<year>1990</year>). <article-title>Studies in auditory timing: 1. Simple patterns.</article-title> <source><italic>Atten. Percept. Psychophys.</italic></source> <volume>47</volume> <fpage>215</fpage>&#x2013;<lpage>226</lpage>. <pub-id pub-id-type="doi">10.3758/BF03204997</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hirsh</surname> <given-names>I. J.</given-names></name> <name><surname>Sherrick</surname> <given-names>C. E.</given-names> <suffix>Jr.</suffix></name></person-group> (<year>1961</year>). <article-title>Perceived order in different sense modalities.</article-title> <source><italic>J. Exp. Psychol.</italic></source> <volume>62</volume> <fpage>423</fpage>&#x2013;<lpage>432</lpage>. <pub-id pub-id-type="doi">10.1037/h0045283</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Horr</surname> <given-names>N. K.</given-names></name> <name><surname>Di Luca</surname> <given-names>M.</given-names></name></person-group> (<year>2015a</year>). <article-title>Filling the blanks in temporal intervals: the type of filling influences perceived duration and discrimination performance.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>6</volume>:<issue>114</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2015.00114</pub-id> <pub-id pub-id-type="pmid">25717310</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Horr</surname> <given-names>N. K.</given-names></name> <name><surname>Di Luca</surname> <given-names>M.</given-names></name></person-group> (<year>2015b</year>). <article-title>Taking a long look at isochrony: perceived duration increases with temporal, but not stimulus regularity.</article-title> <source><italic>Atten. Percept. Psychophys.</italic></source> <volume>77</volume> <fpage>592</fpage>&#x2013;<lpage>602</lpage>. <pub-id pub-id-type="doi">10.3758/s13414-014-0787-z</pub-id> <pub-id pub-id-type="pmid">25341650</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Horr</surname> <given-names>N. K.</given-names></name> <name><surname>Wimber</surname> <given-names>M.</given-names></name> <name><surname>Di Luca</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Perceived time and temporal structure: neural entrainment to isochronous stimulation increases duration estimates.</article-title> <source><italic>Neuroimage</italic></source> <volume>132</volume> <fpage>148</fpage>&#x2013;<lpage>156</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2016.02.011</pub-id> <pub-id pub-id-type="pmid">26883062</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ivry</surname> <given-names>R. B.</given-names></name> <name><surname>Hazeltine</surname> <given-names>R. E.</given-names></name></person-group> (<year>1995</year>). <article-title>Perception and production of temporal intervals across a range of durations: evidence for a common timing mechanism.</article-title> <source><italic>J. Exp. Psychol. Hum. Percept. Perform.</italic></source> <volume>21</volume> <fpage>3</fpage>&#x2013;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1037/0096-1523.21.1.3</pub-id> <pub-id pub-id-type="pmid">7707031</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jazayeri</surname> <given-names>M.</given-names></name> <name><surname>Shadlen</surname> <given-names>M. N.</given-names></name></person-group> (<year>2010</year>). <article-title>Temporal context calibrates interval timing.</article-title> <source><italic>Nat. Neurosci.</italic></source> <volume>13</volume> <fpage>1020</fpage>&#x2013;<lpage>1026</lpage>. <pub-id pub-id-type="doi">10.1038/nn.2590</pub-id> <pub-id pub-id-type="pmid">20581842</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jones</surname> <given-names>M. R.</given-names></name></person-group> (<year>1976</year>). <article-title>Time, our lost dimension: toward a new theory of perception, attention, and memory.</article-title> <source><italic>Psychol. Rev.</italic></source> <volume>83</volume> <fpage>323</fpage>&#x2013;<lpage>355</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.83.5.323</pub-id> <pub-id pub-id-type="pmid">794904</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jones</surname> <given-names>M. R.</given-names></name></person-group> (<year>1987</year>). <article-title>Dynamic pattern structure in music: recent theory and research.</article-title> <source><italic>Percept. Psychophys.</italic></source> <volume>41</volume> <fpage>621</fpage>&#x2013;<lpage>634</lpage>. <pub-id pub-id-type="doi">10.3758/BF03210494</pub-id> <pub-id pub-id-type="pmid">3615156</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jones</surname> <given-names>M. R.</given-names></name></person-group> (<year>2009</year>). <article-title>&#x201C;Musical time,&#x201D; in</article-title> <source><italic>The Handbook of Music Psychology</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Hallam</surname> <given-names>S.</given-names></name> <name><surname>Cross</surname> <given-names>I.</given-names></name> <name><surname>Thaut</surname> <given-names>M.</given-names></name></person-group> (<publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>), <fpage>81</fpage>&#x2013;<lpage>92</lpage>.</citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jones</surname> <given-names>M. R.</given-names></name> <name><surname>Boltz</surname> <given-names>M.</given-names></name></person-group> (<year>1989</year>). <article-title>Dynamic attending and responses to time.</article-title> <source><italic>Psychol. Rev.</italic></source> <volume>96</volume> <fpage>459</fpage>&#x2013;<lpage>491</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.96.3.459</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jones</surname> <given-names>M. R.</given-names></name> <name><surname>Moynihan</surname> <given-names>H.</given-names></name> <name><surname>MacKenzie</surname> <given-names>N.</given-names></name> <name><surname>Puente</surname> <given-names>J.</given-names></name></person-group> (<year>2002</year>). <article-title>Temporal aspects of stimulus-driven attending in dynamic arrays.</article-title> <source><italic>Psychol. Sci.</italic></source> <volume>13</volume> <fpage>313</fpage>&#x2013;<lpage>319</lpage>. <pub-id pub-id-type="doi">10.1111/1467-9280.00458</pub-id> <pub-id pub-id-type="pmid">12137133</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kinney</surname> <given-names>D. W.</given-names></name> <name><surname>Forsythe</surname> <given-names>J. L.</given-names></name></person-group> (<year>2012</year>). <article-title>Does melody assist in the reproduction of novel rhythm patterns?</article-title> <source><italic>Contrib. Music Educ.</italic></source> <volume>39</volume> <fpage>69</fpage>&#x2013;<lpage>85</lpage>.</citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klein</surname> <given-names>S. A.</given-names></name></person-group> (<year>2001</year>). <article-title>Measuring, estimating, and understanding the psychometric function: a commentary.</article-title> <source><italic>Atten. Percept. Psychophys.</italic></source> <volume>63</volume> <fpage>1421</fpage>&#x2013;<lpage>1455</lpage>. <pub-id pub-id-type="doi">10.3758/BF03194552</pub-id> <pub-id pub-id-type="pmid">11800466</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>K&#x00F6;nig</surname> <given-names>E.</given-names></name></person-group> (<year>1957</year>). <article-title>Effect of time on pitch discrimination thresholds under several psychophysical procedures; comparison with intensity discrimination thresholds.</article-title> <source><italic>J. Acoust. Soc. Am.</italic></source> <volume>29</volume> <fpage>606</fpage>&#x2013;<lpage>612</lpage>. <pub-id pub-id-type="doi">10.1121/1.1908981</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lakatos</surname> <given-names>P.</given-names></name> <name><surname>Karmos</surname> <given-names>G.</given-names></name> <name><surname>Mehta</surname> <given-names>A. D.</given-names></name> <name><surname>Ulbert</surname> <given-names>I.</given-names></name> <name><surname>Schroeder</surname> <given-names>C. E.</given-names></name></person-group> (<year>2008</year>). <article-title>Entrainment of neuronal oscillations as a mechanism of attentional selection.</article-title> <source><italic>Science</italic></source> <volume>320</volume> <fpage>110</fpage>&#x2013;<lpage>113</lpage>. <pub-id pub-id-type="doi">10.1126/science.1154735</pub-id> <pub-id pub-id-type="pmid">18388295</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Large</surname> <given-names>E. W.</given-names></name> <name><surname>Palmer</surname> <given-names>C.</given-names></name></person-group> (<year>2002</year>). <article-title>Perceiving temporal regularity in music.</article-title> <source><italic>Cogn. Sci.</italic></source> <volume>26</volume> <fpage>1</fpage>&#x2013;<lpage>37</lpage>. <pub-id pub-id-type="doi">10.1207/s15516709cog26011</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>H.</given-names></name> <name><surname>Noppeney</surname> <given-names>U.</given-names></name></person-group> (<year>2011</year>). <article-title>Long-term music training tunes how the brain temporally binds signals from multiple senses.</article-title> <source><italic>Proc. Natl. Acad. Sci. U.S.A.</italic></source> <volume>108</volume> <fpage>E1441</fpage>&#x2013;<lpage>E1450</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1115267108</pub-id> <pub-id pub-id-type="pmid">22114191</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>M. S.</given-names></name> <name><surname>Rhodes</surname> <given-names>D.</given-names></name> <name><surname>Di Luca</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>For the last time: temporal sensitivity and perceived timing of the final stimulus in an isochronous sequence.</article-title> <source><italic>Timing Time Percept.</italic></source> <volume>4</volume> <fpage>123</fpage>&#x2013;<lpage>146</lpage>. <pub-id pub-id-type="doi">10.1163/22134468-00002057</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>London</surname> <given-names>J.</given-names></name></person-group> (<year>2011</year>). <article-title>Tactus &#x2260; tempo: some dissociations between attentional focus, motor behavior, and tempo judgment.</article-title> <source><italic>Empir. Musicol. Rev.</italic></source> <volume>6</volume> <fpage>43</fpage>&#x2013;<lpage>55</lpage>. <pub-id pub-id-type="doi">10.18061/1811/49761</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Long</surname> <given-names>P. A.</given-names></name></person-group> (<year>1977</year>). <article-title>Relationships between pitch memory in short melodies and selected factors.</article-title> <source><italic>J. Res. Music Educ.</italic></source> <volume>25</volume> <fpage>272</fpage>&#x2013;<lpage>282</lpage>. <pub-id pub-id-type="doi">10.2307/3345268</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Longuet-Higgins</surname> <given-names>H. C.</given-names></name> <name><surname>Lee</surname> <given-names>C. S.</given-names></name></person-group> (<year>1982</year>). <article-title>The perception of musical rhythms.</article-title> <source><italic>Perception</italic></source> <volume>11</volume> <fpage>115</fpage>&#x2013;<lpage>128</lpage>. <pub-id pub-id-type="doi">10.1068/p110115</pub-id> <pub-id pub-id-type="pmid">7155765</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Matthews</surname> <given-names>T. E.</given-names></name> <name><surname>Thibodeau</surname> <given-names>J. N.</given-names></name> <name><surname>Gunther</surname> <given-names>B. P.</given-names></name> <name><surname>Penhune</surname> <given-names>V. B.</given-names></name></person-group> (<year>2016</year>). <article-title>The impact of instrument-specific musical training on rhythm perception and production.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>7</volume>:<issue>69</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2016.00069</pub-id> <pub-id pub-id-type="pmid">26869969</pub-id></citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McAuley</surname> <given-names>J. D.</given-names></name> <name><surname>Jones</surname> <given-names>M. R.</given-names></name></person-group> (<year>2003</year>). <article-title>Modeling effects of rhythmic context on perceived duration: a comparison of interval and entrainment approaches to short-interval timing.</article-title> <source><italic>J. Exp. Psychol. Hum. Percept. Perform.</italic></source> <volume>29</volume> <fpage>1102</fpage>&#x2013;<lpage>1125</lpage>. <pub-id pub-id-type="doi">10.1037/0096-1523.29.6.1102</pub-id> <pub-id pub-id-type="pmid">14640833</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Micheyl</surname> <given-names>C.</given-names></name> <name><surname>Moore</surname> <given-names>B. C.</given-names></name> <name><surname>Carlyon</surname> <given-names>R. P.</given-names></name></person-group> (<year>1998</year>). <article-title>The role of excitation-pattern cues and temporal cues in the frequency and modulation-rate discrimination of amplitude-modulated tones.</article-title> <source><italic>J. Acoust. Soc. Am.</italic></source> <volume>104</volume> <fpage>1039</fpage>&#x2013;<lpage>1050</lpage>. <pub-id pub-id-type="doi">10.1121/1.423322</pub-id> <pub-id pub-id-type="pmid">9714923</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miller</surname> <given-names>N. S.</given-names></name> <name><surname>McAuley</surname> <given-names>J. D.</given-names></name></person-group> (<year>2005</year>). <article-title>Tempo sensitivity in isochronous tone sequences: the multiple-look model revisited.</article-title> <source><italic>Percept. Psychophys.</italic></source> <volume>67</volume> <fpage>1150</fpage>&#x2013;<lpage>1160</lpage>. <pub-id pub-id-type="doi">10.3758/BF03193548</pub-id> <pub-id pub-id-type="pmid">16502837</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ng</surname> <given-names>B. S. W.</given-names></name> <name><surname>Schroeder</surname> <given-names>T.</given-names></name> <name><surname>Kayser</surname> <given-names>C.</given-names></name></person-group> (<year>2012</year>). <article-title>A precluding but not ensuring role of entrained low-frequency oscillations for auditory perception.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>32</volume> <fpage>12268</fpage>&#x2013;<lpage>12276</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1877-12.2012</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nobre</surname> <given-names>A. C.</given-names></name> <name><surname>Correa</surname> <given-names>A.</given-names></name> <name><surname>Coull</surname> <given-names>J. T.</given-names></name></person-group> (<year>2007</year>). <article-title>The hazards of time.</article-title> <source><italic>Curr. Opin. Neurobiol.</italic></source> <volume>17</volume> <fpage>465</fpage>&#x2013;<lpage>470</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2007.07.006</pub-id> <pub-id pub-id-type="pmid">17709239</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Okazaki</surname> <given-names>S.</given-names></name> <name><surname>Ichikawa</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Effects of frequency separation and fundamental frequency on perception of simultaneity of the tones.</article-title> <source><italic>J. Acoust. Soc. Am.</italic></source> <volume>140</volume> <fpage>3263</fpage>. <pub-id pub-id-type="doi">10.1121/1.4970339</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ornstein</surname> <given-names>R. E.</given-names></name></person-group> (<year>1975</year>). <source><italic>On the Experience of Time.</italic></source> <publisher-loc>London</publisher-loc>: <publisher-name>Penguin Books</publisher-name>.</citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Petrini</surname> <given-names>K.</given-names></name> <name><surname>Dahl</surname> <given-names>S.</given-names></name> <name><surname>Rocchesso</surname> <given-names>D.</given-names></name> <name><surname>Waadeland</surname> <given-names>C. H.</given-names></name> <name><surname>Avanzini</surname> <given-names>F.</given-names></name> <name><surname>Puce</surname> <given-names>A.</given-names></name><etal/></person-group> (<year>2009</year>). <article-title>Multisensory integration of drumming actions: musical expertise affects perceived audiovisual asynchrony.</article-title> <source><italic>Exp. Brain Res.</italic></source> <volume>198</volume> <fpage>339</fpage>&#x2013;<lpage>352</lpage>. <pub-id pub-id-type="doi">10.1007/s00221-009-1817-2</pub-id> <pub-id pub-id-type="pmid">19404620</pub-id></citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Repp</surname> <given-names>B. H.</given-names></name> <name><surname>Doggett</surname> <given-names>R.</given-names></name></person-group> (<year>2007</year>). <article-title>Tapping to a very slow beat: a comparison of musicians and nonmusicians.</article-title> <source><italic>Music Percept.</italic></source> <volume>24</volume> <fpage>367</fpage>&#x2013;<lpage>376</lpage>. <pub-id pub-id-type="doi">10.1525/mp.2007.24.4.367</pub-id></citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rohenkohl</surname> <given-names>G.</given-names></name> <name><surname>Nobre</surname> <given-names>A. C.</given-names></name></person-group> (<year>2011</year>). <article-title>Alpha oscillations related to anticipatory attention follow temporal expectations.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>31</volume> <fpage>14076</fpage>&#x2013;<lpage>14084</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3387-11.2011</pub-id> <pub-id pub-id-type="pmid">21976492</pub-id></citation></ref>
<ref id="B51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rose</surname> <given-names>D.</given-names></name> <name><surname>Summers</surname> <given-names>J.</given-names></name></person-group> (<year>1995</year>). <article-title>Duration illusions in a train of visual stimuli.</article-title> <source><italic>Perception</italic></source> <volume>24</volume> <fpage>1177</fpage>&#x2013;<lpage>1187</lpage>. <pub-id pub-id-type="doi">10.1068/p241177</pub-id> <pub-id pub-id-type="pmid">8577576</pub-id></citation></ref>
<ref id="B52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schulze</surname> <given-names>H. H.</given-names></name></person-group> (<year>1978</year>). <article-title>The detectability of local and global displacements in regular rhythmic patterns.</article-title> <source><italic>Psychol. Res.</italic></source> <volume>40</volume> <fpage>173</fpage>&#x2013;<lpage>181</lpage>. <pub-id pub-id-type="doi">10.1007/BF00308412</pub-id> <pub-id pub-id-type="pmid">693733</pub-id></citation></ref>
<ref id="B53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Selchenkova</surname> <given-names>T.</given-names></name> <name><surname>Jones</surname> <given-names>M. R.</given-names></name> <name><surname>Tillmann</surname> <given-names>B.</given-names></name></person-group> (<year>2014</year>). <article-title>The influence of temporal regularities on the implicit learning of pitch structures.</article-title> <source><italic>Q. J. Exp. Psychol.</italic></source> <volume>67</volume> <fpage>2360</fpage>&#x2013;<lpage>2380</lpage>. <pub-id pub-id-type="doi">10.1080/17470218.2014.929155</pub-id> <pub-id pub-id-type="pmid">25318962</pub-id></citation></ref>
<ref id="B54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shi</surname> <given-names>Z.</given-names></name> <name><surname>Burr</surname> <given-names>D.</given-names></name></person-group> (<year>2016</year>). <article-title>Predictive coding of multisensory timing.</article-title> <source><italic>Curr. Opin. Behav. Sci.</italic></source> <volume>8</volume> <fpage>200</fpage>&#x2013;<lpage>206</lpage>. <pub-id pub-id-type="doi">10.1016/j.cobeha.2016.02.014</pub-id> <pub-id pub-id-type="pmid">27695705</pub-id></citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shi</surname> <given-names>Z.</given-names></name> <name><surname>Church</surname> <given-names>R. M.</given-names></name> <name><surname>Meck</surname> <given-names>W. H.</given-names></name></person-group> (<year>2013</year>). <article-title>Bayesian optimization of time perception.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>17</volume> <fpage>556</fpage>&#x2013;<lpage>564</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2013.09.009</pub-id> <pub-id pub-id-type="pmid">24139486</pub-id></citation></ref>
<ref id="B56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Spence</surname> <given-names>C.</given-names></name> <name><surname>Parise</surname> <given-names>C.</given-names></name></person-group> (<year>2010</year>). <article-title>Prior-entry: a review.</article-title> <source><italic>Conscious. Cogn.</italic></source> <volume>19</volume> <fpage>364</fpage>&#x2013;<lpage>379</lpage>. <pub-id pub-id-type="doi">10.1016/j.concog.2009.12.001</pub-id> <pub-id pub-id-type="pmid">20056554</pub-id></citation></ref>
<ref id="B57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stumpf</surname> <given-names>C.</given-names></name></person-group> (<year>1890</year>). <source><italic>Tonpsychologie</italic></source>, <volume>Vol. 2.</volume> <publisher-loc>Leipzig</publisher-loc>: <publisher-name>S. Hirzel</publisher-name>.</citation></ref>
<ref id="B58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>ten Hoopen</surname> <given-names>G.</given-names></name> <name><surname>Van Den Berg</surname> <given-names>S.</given-names></name> <name><surname>Memelink</surname> <given-names>J.</given-names></name> <name><surname>Bocanegra</surname> <given-names>B.</given-names></name> <name><surname>Boon</surname> <given-names>R.</given-names></name></person-group> (<year>2011</year>). <article-title>Multiple-look effects on temporal discrimination within sound sequences.</article-title> <source><italic>Atten. Percept. Psychophys.</italic></source> <volume>73</volume> <fpage>2249</fpage>&#x2013;<lpage>2269</lpage>. <pub-id pub-id-type="doi">10.3758/s13414-011-0171-1</pub-id> <pub-id pub-id-type="pmid">21735312</pub-id></citation></ref>
<ref id="B59"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Titchener</surname> <given-names>E. B.</given-names></name></person-group> (<year>1908</year>). <source><italic>Lectures on the Elementary Psychology of Feeling and Attention.</italic></source> <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Macmillan</publisher-name>. <pub-id pub-id-type="doi">10.1037/10867-000</pub-id></citation></ref>
<ref id="B60"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ulrich</surname> <given-names>R.</given-names></name> <name><surname>Miller</surname> <given-names>J.</given-names></name></person-group> (<year>2004</year>). <article-title>Threshold estimation in two-alternative forced-choice (2AFC) tasks: the Spearman-K&#x00E4;rber method.</article-title> <source><italic>Percept. Psychophys.</italic></source> <volume>66</volume> <fpage>517</fpage>&#x2013;<lpage>533</lpage>. <pub-id pub-id-type="doi">10.3758/BF03194898</pub-id></citation></ref>
<ref id="B61"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wackermann</surname> <given-names>J.</given-names></name> <name><surname>Pacer</surname> <given-names>J.</given-names></name> <name><surname>Wittmann</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <article-title>Perception of acoustically presented time series with varied intervals.</article-title> <source><italic>Acta Psychol.</italic></source> <volume>147</volume> <fpage>105</fpage>&#x2013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1016/j.actpsy.2013.09.015</pub-id> <pub-id pub-id-type="pmid">24210180</pub-id></citation></ref>
<ref id="B62"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Woodrow</surname> <given-names>H.</given-names></name></person-group> (<year>1935</year>). <article-title>The effect of practice upon time-order errors in the comparison of temporal intervals.</article-title> <source><italic>Psychol. Rev.</italic></source> <volume>42</volume> <fpage>127</fpage>&#x2013;<lpage>152</lpage>. <pub-id pub-id-type="doi">10.1037/h0063696</pub-id></citation></ref>
</ref-list>
</back>
</article>