<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2024.1387641</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Same principle, but different computations in representing time and space</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Sima</surname> <given-names>Sepehr</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/2714783/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/data-curation/"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/software/"/>
<role content-type="https://credit.niso.org/contributor-roles/supervision/"/>
<role content-type="https://credit.niso.org/contributor-roles/validation/"/>
<role content-type="https://credit.niso.org/contributor-roles/visualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Sanayei</surname> <given-names>Mehdi</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2359926/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/project-administration/"/>
<role content-type="https://credit.niso.org/contributor-roles/supervision/"/>
<role content-type="https://credit.niso.org/contributor-roles/validation/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff><institution>School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM)</institution>, <addr-line>Tehran</addr-line>, <country>Iran</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Ernest Greene, University of Southern California, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Pierfrancesco Ambrosi, Stella Maris Foundation (IRCCS), Italy</p><p>Encarni Marcos, Instituto de Neurociencias Alicante, Spain</p></fn>
<corresp id="c001">&#x002A;Correspondence: Mehdi Sanayei, <email>mehdi.sanayei@gmail.com</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>07</day>
<month>05</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>18</volume>
<elocation-id>1387641</elocation-id>
<history>
<date date-type="received">
<day>18</day>
<month>02</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>11</day>
<month>04</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2024 Sima and Sanayei.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Sima and Sanayei</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Time and space are two intertwined contexts that frame our cognition of the world and have shared mechanisms. A well-known theory in this regard is &#x201C;A Theory of Magnitude&#x201D; (ATOM), which states that the perception of these two domains shares common mechanisms. However, the evidence regarding shared computations of time and space is mixed. To investigate this issue, we asked human subjects to reproduce time and distance intervals with saccadic eye movements in similarly designed tasks. We applied an observer model to both modalities and found underlying differences in the processing of time and space. While time and space computations are both probabilistic, adding priors to space perception minimally improved model performance, whereas time perception was consistently better explained by Bayesian computations. We also showed that while both measurement and motor variability were smaller in distance than in time reproduction, only the motor variability was correlated between them, as both tasks used saccadic eye movements for the response. Our results suggest that time and space perception abide by the same algorithm but have different computational properties.</p>
</abstract>
<kwd-group>
<kwd>time</kwd>
<kwd>space</kwd>
<kwd>perception</kwd>
<kwd>Bayesian</kwd>
<kwd>spatiotemporal</kwd>
</kwd-group>
<counts>
<fig-count count="6"/>
<table-count count="2"/>
<equation-count count="8"/>
<ref-count count="58"/>
<page-count count="12"/>
<word-count count="8812"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Perception Science</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>1 Introduction</title>
<p>The study of time perception has demonstrated the complex interplay of spatiotemporal information in the brain. As temporal processes are linked to the embodied experience of the environment (<xref ref-type="bibr" rid="B45">Safaie et al., 2020</xref>), analyzing the spatial dimension in the study of time perception can provide a powerful tool for understanding the underlying mechanisms that allow us to comprehend the external world. Time and space are two aspects of the physical world that frame our experience of it. The perception of time and space occurs in an interrelated fashion in human cognition, and recent research shows a growing interest in understanding the underlying mechanisms of their relationship (<xref ref-type="bibr" rid="B19">Goldreich, 2007</xref>; <xref ref-type="bibr" rid="B40">Robinson and Wiener, 2021</xref>; <xref ref-type="bibr" rid="B47">Schroeger et al., 2022</xref>; <xref ref-type="bibr" rid="B57">Whitaker et al., 2022</xref>). It has been suggested that time perception occurs through the spatialization of time intervals in the face of movement-based events in the world (<xref ref-type="bibr" rid="B39">Robbe, 2023</xref>). Such spatialization can be seen in the way we have tied the perception of time to space by devising various types of clocks to keep track of the passage of time.</p>
<p>In modern science, time and space have been represented with measurable proxies imbued with operational definitions, i.e., time and distance intervals, respectively (<xref ref-type="bibr" rid="B6">Buzs&#x00E1;ki and Llin&#x00E1;s, 2017</xref>), which provide a framework for studying and quantifying these abstract concepts. This has made possible the study of the relationship between time and space perception. A variety of time-space interactions in the human perceptual system has been observed. For example, spatiotemporal interference is one such interaction where spatial information can distort the perception of temporal information and vice versa (<xref ref-type="bibr" rid="B52">Vidaud-Laperri&#x00E8;re et al., 2022</xref>). Peri-saccadic spatiotemporal compression is another phenomenon that serves as evidence of common mechanisms in the perception of time and space (<xref ref-type="bibr" rid="B34">Morrone et al., 2005</xref>).</p>
<p>A Theory of Magnitude (ATOM) proposed by <xref ref-type="bibr" rid="B54">Walsh (2003)</xref> suggests that the brain has a core common magnitude system for time, space, and quantity. According to this theory, the neural mechanisms underlying the perception of time, space, and quantity are intertwined, and the brain processes these dimensions in a unified manner. This theory has been supported by empirical evidence from studies that have shown that the perception of time and space share common neural substrates (<xref ref-type="bibr" rid="B9">Casasanto and Boroditsky, 2008</xref>; <xref ref-type="bibr" rid="B20">Howard et al., 2014</xref>; <xref ref-type="bibr" rid="B7">Cai and Connell, 2016</xref>; <xref ref-type="bibr" rid="B10">Chen et al., 2021</xref>; <xref ref-type="bibr" rid="B14">Cui et al., 2022</xref>; <xref ref-type="bibr" rid="B22">Jie and Youguo, 2023</xref>). A recent meta-analysis of neuroimaging studies (<xref ref-type="bibr" rid="B12">Cona et al., 2021</xref>) has suggested that there is a common system of brain regions that are activated during both time and space processing, including bilateral insula, the pre-supplementary motor area (pre-SMA), the right frontal operculum, and intraparietal sulci. At the neuronal level, it has been observed that spatial information could at least be partially derived from temporal information (<xref ref-type="bibr" rid="B5">Burgess et al., 2011</xref>).</p>
<p>Despite these findings, the precise nature and the extent to which these perceptual domains share common mechanisms remain unknown. This has spurred several investigations into better understanding the relationship between time and space (<xref ref-type="bibr" rid="B1">Abramson et al., 2023</xref>; <xref ref-type="bibr" rid="B46">Schonhaut et al., 2023</xref>). Most studies to date have approached the question in terms of the interferences that occur between perceptual domains (<xref ref-type="bibr" rid="B27">Marcos and Genovesio, 2017</xref>; <xref ref-type="bibr" rid="B29">Martin et al., 2017</xref>; <xref ref-type="bibr" rid="B51">&#x00DC;st&#x00FC;n et al., 2022</xref>). We attempted to approach this question by utilizing behavioral modeling and model comparison.</p>
<p>A Bayesian understanding of timing has revealed that the interaction of the temporal context and the internal ongoing processes culminates in the calibration of estimated intervals (<xref ref-type="bibr" rid="B44">Sadibolova and Terhune, 2022</xref>) in the form of perceptual biases. Such a formulation of interval timing presents us with two stages in the process of timing, i.e., the measurement (perception) and the reproduction (action) phases of interval timing (<xref ref-type="bibr" rid="B21">Jazayeri and Shadlen, 2010</xref>). The link between the measurement and the reproduction is actualized by an estimation function in the observer model. Based on the nature of the observer model (ideal vs. non-ideal), the estimation functions differ. Bayesian least squares (BLS) and maximum likelihood estimation (MLE) estimators have been used in the literature as prior-dependent and prior-independent functions, respectively (<xref ref-type="bibr" rid="B21">Jazayeri and Shadlen, 2010</xref>). The Bayesian perspective has also been explored in human spatial navigation (<xref ref-type="bibr" rid="B37">Petzschner and Glasauer, 2011</xref>; <xref ref-type="bibr" rid="B50">Thurley and Schild, 2018</xref>). Thus, the probabilistic nature of spatiotemporal information could be well captured by an optimum-seeking system which combines experience-dependent information with contextual noisy measurements to generate an estimate of various facets of time and space. It remains unclear whether the perceptual biases in time and space are both attributable to the prior information.</p>
<p>In this study, we compared how spatial and temporal measurements are implemented by probing sources of variability in the process of time and distance measurement and reproduction, within a probabilistic framework. We used saccadic eye movement as the effector to reproduce presented time/distance intervals. In each block, the subjects had to reproduce the presented time/distance interval by making a saccade to a predefined target in case of time reproduction or to a point on a predefined line to reproduce the presented distance. We showed that the perceptual biases in time perception are explained by prior-dependent computations, as previously shown in the literature. On the other hand, we cast doubt on the contribution of prior information to the observed perceptual biases in space perception.</p>
</sec>
<sec id="S2" sec-type="materials|methods">
<title>2 Materials and methods</title>
<sec id="S2.SS1">
<title>2.1 Apparatus</title>
<p>The experiments were carried out on a computer running a Linux operating system, using MATLAB (R2016b) with the Psychtoolbox 3 extension (<xref ref-type="bibr" rid="B4">Brainard, 1997</xref>). Stimuli were presented on a 17&#x2033; monitor with a 60 Hz refresh rate, placed &#x223C;60 cm from the subject. The subject sat comfortably on a chair in a dimly lit room, with the head stabilized by a head and chin rest. An EyeLink 1000 infrared eye-tracking system (SR Research, Mississauga, Ontario) was used to record eye movements at 1 kHz.</p>
</sec>
<sec id="S2.SS2">
<title>2.2 Subjects</title>
<p>We enrolled 22 volunteers (12 female; age range: 20&#x2013;43 years, mean &#x00B1; SD: 26.5 &#x00B1; 5.5). All were na&#x00EF;ve to the purpose of the study except two (subjects 1 and 2), who were authors of this study. We excluded one subject because of troubled eye-tracker calibration caused by her contact lenses and one subject because of excessively large eye-calibration errors. All subjects had normal or corrected-to-normal vision, and all signed the consent form prior to the experiment. The experiment was approved by the ethics committee of the School of Cognitive Sciences (IPM). We counterbalanced all variables and blocks between participants; half of the participants completed the time reproduction task first. Before each experiment, each participant completed a full block of training to familiarize themselves with the task.</p>
</sec>
<sec id="S2.SS3">
<title>2.3 Experiment 1</title>
<p>We designed a time reproduction task in which subjects had to reproduce perceived time intervals. At the beginning of each block, the name of the condition (time) was displayed at the center of the screen. The participant then pressed the space bar to start the block. A white fixation cross with a length of 0.5&#x00B0; was presented at the center of the screen for 1 s. After participants acquired fixation, a black line was presented for a variable duration of 500&#x2013;1,000 ms (uniform distribution). The participants were instructed to keep their gaze on the fixation point (within a 4&#x00B0; &#x00D7; 4&#x00B0; window). The line extended from the fixation cross to one of the four corners of the screen. The location of the line was fixed within each block but changed between blocks. A white circle (&#x201C;set,&#x201D; diameter of 1.5&#x00B0;) was then flashed on the horizontal meridian, contralateral to the black line. The eccentricity of the circle was 6, 8, or 12&#x00B0;, randomly chosen on each trial. After a variable sample interval of 0.4, 0.8, or 1.6 s, a white circle (&#x201C;go,&#x201D; diameter of 1.5&#x00B0;) was presented on the black line at the same eccentricity as the flashed circle. Participants were then required to reproduce the interval between the onset of the &#x201C;set&#x201D; stimulus and the onset of the &#x201C;go&#x201D; stimulus by making a saccade to the &#x201C;go&#x201D; target. The initiation of the saccade was defined as the time the eye exited the fixation window (4&#x00B0; &#x00D7; 4&#x00B0;). If the saccade landed within a 4&#x00B0; &#x00D7; 4&#x00B0; window around the &#x201C;go&#x201D; stimulus within 100 ms of saccade initiation and stayed in the window for 100 ms, the &#x201C;go&#x201D; stimulus turned green (<xref ref-type="fig" rid="F1">Figure 1A</xref>). The reproduced time was calculated as the interval between the &#x201C;go&#x201D; presentation and the initiation of the saccade. We did not provide any feedback regarding the accuracy of the timing. Each block consisted of 54 trials (3 eccentricities, 3 sample intervals, and 6 repetitions per condition), and subjects performed 8 blocks.</p>
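The saccade-initiation criterion described above (the moment the eye first exits the 4&#x00B0; &#x00D7; 4&#x00B0; fixation window) can be sketched as follows. This is a minimal illustration in Python/NumPy, not the study's MATLAB/Psychtoolbox code, and the gaze trace used here is hypothetical:

```python
import numpy as np

def saccade_initiation(t, x, y, half_win=2.0):
    """First timestamp at which gaze leaves a square fixation window
    of half-width `half_win` degrees centered on the fixation cross.
    t, x, y: arrays of timestamps (ms) and gaze positions (degrees),
    e.g., sampled at 1 kHz. Returns None if gaze never leaves."""
    outside = (np.abs(x) > half_win) | (np.abs(y) > half_win)
    idx = int(np.argmax(outside))      # index of first True (0 if none)
    return None if not outside[idx] else t[idx]

# Hypothetical trace: stable fixation for 500 ms, then a saccade.
t = np.arange(700)                     # 1 ms steps (1 kHz sampling)
x = np.where(t < 500, 0.1, 6.0)        # gaze jumps to 6 deg at 500 ms
y = np.zeros_like(x)
onset = saccade_initiation(t, x, y)    # reproduced time = onset - "go" onset
```

With the reproduced interval defined as `onset` minus the "go" presentation time, timestamps aligned to "go" onset make the two quantities identical.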
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p>The sequence of trial events in the time reproduction task <bold>(A)</bold> and the distance reproduction task <bold>(B)</bold>. <bold>(A)</bold> Participants performed a time reproduction task in which they had to reproduce sample time intervals. After fixation was acquired and a black line was presented on the screen, a white circle (&#x201C;set&#x201D;) appeared on the horizontal meridian of the screen, and after a sample interval, another white circle (&#x201C;go&#x201D;) appeared on the black line. Participants had to reproduce the sample interval by making a saccade to the &#x201C;go&#x201D; stimulus. Successful saccades turned the &#x201C;go&#x201D; stimulus green, but no timing feedback was provided. <bold>(B)</bold> The distance reproduction task was designed similarly to the time reproduction task and followed the same sequence up to the appearance of the &#x201C;set&#x201D; stimulus. Here, participants had to reproduce the distance between the &#x201C;set&#x201D; stimulus and the fixation cross by making a saccade to a point on the black line at the same eccentricity as the &#x201C;set&#x201D; stimulus. Valid saccades were marked with a green circle. Each block consisted of 54 trials with varying eccentricities and sample intervals.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-18-1387641-g001.tif"/>
</fig>
</sec>
<sec id="S2.SS4">
<title>2.4 Experiment 2</title>
<p>We designed the distance reproduction task to be as similar to the time reproduction task as possible (<xref ref-type="fig" rid="F1">Figure 1B</xref>). Each trial was identical to Experiment 1 up to the presentation of the &#x201C;set&#x201D; stimulus (eccentricity of 6, 8, or 12&#x00B0;). Here, after a variable duration (0.4, 0.8, or 1.6 s) from &#x201C;set&#x201D; stimulus onset, the fixation cross turned green (the go signal). This indicated to the participants that they should reproduce the distance between the &#x201C;set&#x201D; stimulus and the fixation cross by making a saccade to a point on the black line at the same eccentricity as the &#x201C;set&#x201D; stimulus. Gaze locations that landed within 2&#x00B0; of the black line were considered valid. A green circle (diameter of 1.5&#x00B0;) was presented at the location of the saccade. Each block consisted of 54 trials (3 eccentricities, 3 sample intervals, and 6 repetitions per condition), and subjects performed 8 blocks.</p>
</sec>
<sec id="S2.SS5">
<title>2.5 Data analysis</title>
<p>We employed the interquartile range (IQR) method as a robust statistical technique to identify and eliminate outlier data points (<xref ref-type="bibr" rid="B48">Schwertman et al., 2004</xref>). This method excludes data points that lie more than 1.5 IQR below the first quartile or more than 1.5 IQR above the third quartile. We applied the IQR method separately for each subject and each experiment. We also excluded trials with a reaction time of less than 200 ms. The number of excluded trials per subject per experiment was below 1%.</p>
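As a rough sketch of this exclusion rule (in Python/NumPy rather than the MATLAB used in the study; the data values below are made up for illustration):

```python
import numpy as np

def keep_mask(values, rt=None, min_rt=0.2):
    """Boolean mask of trials to keep: values inside
    [Q1 - 1.5*IQR, Q3 + 1.5*IQR], and (optionally) with
    reaction time >= min_rt seconds."""
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    keep = (v >= q1 - 1.5 * iqr) & (v <= q3 + 1.5 * iqr)
    if rt is not None:
        keep &= np.asarray(rt, dtype=float) >= min_rt
    return keep

# Made-up reproduced intervals (s) with one obvious outlier:
vals = np.array([0.78, 0.81, 0.80, 0.79, 0.82, 0.80, 0.77, 0.83, 3.50])
mask = keep_mask(vals)   # the 3.50 s trial is flagged for exclusion
```

The mask would then be applied per subject and per experiment before model fitting.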
</sec>
<sec id="S2.SS6">
<title>2.6 Ideal observer model</title>
<p>In our data, we had pairs of sample time/distance intervals (<italic>t<sub>s</sub></italic>, <italic>d<sub>s</sub></italic>) and corresponding reproduced times/distances (<italic>t<sub>r</sub></italic>, <italic>d<sub>r</sub></italic>) for each trial. We used an ideal observer model to relate sample times/distances to the reproduced ones. To model these relationships, we used two hidden variables, each of which refers to one of the noisy stages of the process of reproducing time/distance intervals.</p>
<p>In these observer models, <italic>p</italic>(<italic>t</italic><sub><italic>m</italic></sub>|<italic>t</italic><sub><italic>s</italic></sub>) and <italic>p</italic>(<italic>d</italic><sub><italic>m</italic></sub>|<italic>d</italic><sub><italic>s</italic></sub>) are modeled as Gaussian distributions centered at <italic>t<sub>s</sub></italic> and <italic>d<sub>s</sub></italic>, and we assume that their standard deviations (SD) grow linearly with their means. This assumption is motivated by the scalar variability of timing and distance (<xref ref-type="bibr" rid="B21">Jazayeri and Shadlen, 2010</xref>; <xref ref-type="bibr" rid="B50">Thurley and Schild, 2018</xref>). The distribution of measurement noise is thus fully characterized by the ratio of the SD to the mean of <italic>p</italic>(<italic>t</italic><sub><italic>m</italic></sub>|<italic>t</italic><sub><italic>s</italic></sub>) and <italic>p</italic>(<italic>d</italic><sub><italic>m</italic></sub>|<italic>d</italic><sub><italic>s</italic></sub>), which we will refer to as the Weber fraction associated with the measurement, <italic>w<sub>m</sub></italic>. With the same arguments in mind, we assume that the distributions of <italic>t<sub>r</sub></italic> and <italic>d<sub>r</sub></italic> conditioned on <italic>t<sub>e</sub></italic> and <italic>d<sub>e</sub></italic>, <italic>p</italic>(<italic>t</italic><sub><italic>r</italic></sub>|<italic>t</italic><sub><italic>e</italic></sub>) and <italic>p</italic>(<italic>d</italic><sub><italic>r</italic></sub>|<italic>d</italic><sub><italic>e</italic></sub>), are also Gaussian, centered at <italic>t<sub>e</sub></italic> and <italic>d<sub>e</sub></italic>, and associated with a constant Weber fraction, <italic>w<sub>r</sub></italic>.</p>
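The two noisy stages with scalar variability can be sketched as a generative simulation. This is an illustrative Python/NumPy sketch rather than the authors' model code, and the Weber-fraction values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x_s, w_m, w_r, estimator, n=10_000):
    """Simulate n trials of the two noisy stages:
    x_m ~ N(x_s, (w_m * x_s)^2)   measurement (scalar variability)
    x_e = estimator(x_m)          estimation
    x_r ~ N(x_e, (w_r * x_e)^2)   reproduction"""
    x_m = rng.normal(x_s, w_m * x_s, size=n)
    x_e = estimator(x_m)
    return rng.normal(x_e, w_r * np.abs(x_e))

# With the identity (prior-free, MLE-like) estimator, reproductions
# are unbiased on average; w_m and w_r here are illustrative values.
x_r = simulate(0.8, w_m=0.10, w_r=0.05, estimator=lambda m: m)
```

A prior-dependent estimator plugged into `estimator` would instead bias reproductions toward the middle of the sample range.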
<p>The model has three stages as:</p>
<p>Measurement stage</p>
<disp-formula id="S2.Ex1">
<mml:math id="M1">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">&#x03BB;</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo rspace="5.8pt" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo lspace="2.5pt" rspace="2.5pt">|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo rspace="5.8pt">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>&#x03C0;</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mfrac>
<mml:mo>&#x2062;</mml:mo>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:msup>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>Estimation stage</p>
<disp-formula id="S2.Ex2">
<mml:math id="M2">
<mml:mrow>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo rspace="5.8pt">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
<p>Reproduction stage</p>
<disp-formula id="S2.Ex3">
<mml:math id="M3">
<mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>r</mml:mi>
</mml:msub>
<mml:mo lspace="2.5pt" rspace="2.5pt">|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo rspace="5.8pt">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>&#x03C0;</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>r</mml:mi>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mfrac>
<mml:mo>&#x2062;</mml:mo>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>r</mml:mi>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>r</mml:mi>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:msup>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>and then</p>
<disp-formula id="S2.Ex4">
<mml:math id="M4">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>r</mml:mi>
</mml:msub>
<mml:mo stretchy="false">|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo rspace="7.5pt">,</mml:mo>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo rspace="7.5pt">,</mml:mo>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>r</mml:mi>
</mml:msub>
<mml:mo rspace="5.8pt">)</mml:mo>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mo largeop="true" symmetric="true">&#x222B;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>r</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo rspace="7.5pt">,</mml:mo>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>r</mml:mi>
</mml:msub>
<mml:mo rspace="7.5pt">)</mml:mo>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo rspace="7.5pt">,</mml:mo>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
<p><italic>x</italic> stands for <italic>t</italic> (parameters from the time reproduction task) or <italic>d</italic> (parameters from the distance reproduction task).</p>
<p>For the time reproduction task, we have <italic>t<sub>s</sub></italic>, sample time interval; <italic>t<sub>m</sub></italic>, measured time interval; <italic>t<sub>r</sub></italic>, reproduced time interval; <italic>t<sub>e</sub></italic>, estimated time interval; <italic>w<sub>m</sub></italic>, measurement Weber fraction; and <italic>w<sub>r</sub></italic>, reproduction Weber fraction. For the distance reproduction task, we have <italic>d<sub>s</sub></italic>, sample distance interval; <italic>d<sub>m</sub></italic>, measured distance interval; <italic>d<sub>r</sub></italic>, reproduced distance interval; and <italic>d<sub>e</sub></italic>, estimated distance interval.</p>
<p>We used a maximum likelihood estimation (MLE) function, which does not fuse prior information with the likelihood function, and a Bayesian least squares (BLS) function in the estimation stage of both tasks. For the Bayesian models, we used a uniform prior distribution over the experimental range between <inline-formula>
<mml:math id="INEQ22"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>s</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>i</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> and <inline-formula>
<mml:math id="INEQ23"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>s</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>a</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>x</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>. The BLS and MLE functions were defined as:</p>
<disp-formula id="S2.Ex5">
<mml:math id="M5">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mi>B</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>L</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo rspace="5.8pt">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x222B;</mml:mo>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:msubsup>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo lspace="2.5pt" rspace="2.5pt">|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo mathvariant="italic" rspace="0pt">d</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x222B;</mml:mo>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:msubsup>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo lspace="2.5pt" rspace="2.5pt">|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo mathvariant="italic" rspace="0pt">d</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="S2.Ex6">
<mml:math id="M6">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>L</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo rspace="5.8pt">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:mpadded width="+5pt">
<mml:munder>
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mo movablelimits="false">&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo movablelimits="false">&#x2062;</mml:mo>
<mml:mi>g</mml:mi>
<mml:mo movablelimits="false">&#x2062;</mml:mo>
<mml:mi>m</mml:mi>
<mml:mo movablelimits="false">&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo movablelimits="false">&#x2062;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:munder>
</mml:mpadded>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">&#x03BB;</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo rspace="5.8pt">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mn>4</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:mmultiscripts>
<mml:mi>w</mml:mi>
<mml:mi>m</mml:mi>
<mml:none/>
<mml:none/>
<mml:mn>2</mml:mn>
</mml:mmultiscripts>
</mml:mrow>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:mmultiscripts>
<mml:mi>w</mml:mi>
<mml:mi>m</mml:mi>
<mml:none/>
<mml:none/>
<mml:mn>2</mml:mn>
</mml:mmultiscripts>
</mml:mrow>
</mml:mfrac>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>In the pilot data, we observed individual-specific shifts in the range effect, reflected in an overall tendency to over- or under-estimate across the whole range of intervals. These patterns were not explained by the common observer model, so we used modified versions of these models: we introduced an additional free parameter (&#x03B1;) into the estimation stage of the model as a multiplicative factor. The estimation stage for these models thus becomes:</p>
<disp-formula id="S2.Ex7">
<mml:math id="M7">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mi>B</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>L</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo rspace="5.8pt">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x222B;</mml:mo>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:msubsup>
<mml:mi mathvariant="normal">&#x03B1;</mml:mi>
</mml:mrow>
<mml:mo>.</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo lspace="2.5pt" rspace="2.5pt">|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>d</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x222B;</mml:mo>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:msubsup>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo lspace="2.5pt" rspace="2.5pt">|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo mathvariant="italic" rspace="0pt">d</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="S2.Ex8">
<mml:math id="M8">
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>L</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo rspace="10.8pt">)</mml:mo>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:mpadded width="+5pt">
<mml:munder>
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mo movablelimits="false">&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo movablelimits="false">&#x2062;</mml:mo>
<mml:mi>g</mml:mi>
<mml:mo movablelimits="false">&#x2062;</mml:mo>
<mml:mi>m</mml:mi>
<mml:mo movablelimits="false">&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo movablelimits="false">&#x2062;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:munder>
</mml:mpadded>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">&#x03BB;</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo rspace="5.8pt">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mi mathvariant="normal">&#x03B1;</mml:mi>
</mml:mrow>
<mml:mo>.</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mn>4</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:mmultiscripts>
<mml:mi>w</mml:mi>
<mml:mi>m</mml:mi>
<mml:none/>
<mml:none/>
<mml:mn>2</mml:mn>
</mml:mmultiscripts>
</mml:mrow>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:mmultiscripts>
<mml:mi>w</mml:mi>
<mml:mi>m</mml:mi>
<mml:none/>
<mml:none/>
<mml:mn>2</mml:mn>
</mml:mmultiscripts>
</mml:mrow>
</mml:mfrac>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>In a separate analysis, we added &#x03B1; to the estimation stage as an additive parameter. The results of the multiplicative and additive modulations did not differ qualitatively, so we report only the multiplicative modulation. We preferred the multiplicative version because it yields a unitless &#x03B1; that is comparable between our tasks.</p>
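<p>As a concrete illustration, both estimators can be evaluated numerically. The following Python sketch computes the multiplicative-&#x03B1; BLS estimate on a discrete grid and the closed-form MLE estimate under scalar measurement noise; the prior range, grid size, and parameter values are illustrative, not the values used in the study.</p>

```python
import numpy as np

def bls_estimate(x_m, w_m, alpha=1.0, xs_min=0.6, xs_max=1.0, n_grid=2000):
    """Posterior-mean (BLS) estimate under a uniform prior on [xs_min, xs_max],
    scaled by the multiplicative factor alpha (alpha = 1 gives the classic model).
    Measurement noise is scalar: p(x_m | x_s) = N(x_s, (w_m * x_s)^2)."""
    x_s = np.linspace(xs_min, xs_max, n_grid)
    sigma = w_m * x_s
    lik = np.exp(-0.5 * ((x_m - x_s) / sigma) ** 2) / sigma
    # On a uniform grid the dx factors cancel in the ratio of the two integrals.
    return alpha * np.sum(x_s * lik) / np.sum(lik)

def mle_estimate(x_m, w_m, alpha=1.0):
    """Closed-form ML estimate under scalar noise, scaled by alpha."""
    return alpha * x_m * (-1.0 + np.sqrt(1.0 + 4.0 * w_m ** 2)) / (2.0 * w_m ** 2)
```

Note the characteristic BLS behavior: estimates regress toward the middle of the prior range, so measurements near the range boundaries are pulled inward.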
</sec>
<sec id="S2.SS7">
<title>2.7 Model fitting</title>
<p>We maximized the likelihood of the model parameters <italic>w<sub>m</sub></italic>, <italic>w<sub>r</sub></italic>, and &#x03B1; (when applicable) across all <italic>x<sub>s</sub></italic> and <italic>x<sub>r</sub></italic> values. Maximum likelihood estimation was performed with the minimize function of the SciPy library, using the Nelder&#x2013;Mead downhill simplex optimization method. We evaluated the robustness of the fitting procedure by repeating the search with several different initial values.</p>
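<p>A minimal sketch of this fitting procedure, using simulated reproduction data with scalar noise (a one-stage simplification of the full observer model) and SciPy's Nelder&#x2013;Mead optimizer restarted from several initial values:</p>

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: reproductions with scalar noise around the presented values.
x_s = np.repeat([0.6, 0.8, 1.0], 50)
x_r = rng.normal(loc=x_s, scale=0.2 * x_s)

def neg_log_likelihood(params, x_s, x_r):
    (w_r,) = params
    if w_r <= 0:
        return np.inf  # keep the simplex inside the valid parameter region
    sigma = w_r * x_s
    # Gaussian NLL up to an additive constant.
    return np.sum(0.5 * ((x_r - x_s) / sigma) ** 2 + np.log(sigma))

# Restart the search from several initial values and keep the best fit.
fits = [minimize(neg_log_likelihood, [w0], args=(x_s, x_r), method="Nelder-Mead")
        for w0 in (0.05, 0.2, 0.5)]
best = min(fits, key=lambda fit: fit.fun)
```

With enough trials, all restarts should converge near the generating Weber fraction (0.2 here), which is the check the repeated-initialization procedure provides.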
</sec>
<sec id="S2.SS8">
<title>2.8 Model comparison</title>
<p>In order to compare the modified models (with &#x03B1;) with the previous models (without &#x03B1;), we used the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and cross-validated log-likelihoods (CLL) as quantitative criteria. We also plotted the averaged result of 50 simulations of the tasks with the best-fitted models over the data, and compared these plots to verify that there were visible differences between the models and that the best model captured the pattern of the data well. We considered differences larger than 5 in each criterion (i.e., AIC, BIC, and CLL) between models as an indicator of better model performance (<xref ref-type="bibr" rid="B26">Ma et al., 2023</xref>). We also used the highest CLL value among the models as an absolute measure of goodness-of-fit. We performed the comparisons between the modified and the classic models separately for the f<sub><italic>BLS</italic></sub> and f<sub><italic>MLE</italic></sub> estimators, to choose the best model for time and for space reproduction separately.</p>
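<p>For reference, AIC and BIC follow directly from a model's maximized log-likelihood; the sketch below uses the standard definitions (lower values indicate a better model) together with the difference-of-5 cutoff described above.</p>

```python
import numpy as np

def aic(log_lik, n_params):
    """Akaike information criterion (lower is better)."""
    return 2.0 * n_params - 2.0 * log_lik

def bic(log_lik, n_params, n_obs):
    """Bayesian information criterion (lower is better)."""
    return n_params * np.log(n_obs) - 2.0 * log_lik

def clearly_better(crit_a, crit_b, threshold=5.0):
    """Treat model A as clearly better than model B only when its
    criterion value is lower by more than the threshold."""
    return (crit_b - crit_a) > threshold
```

BIC penalizes extra parameters more heavily than AIC for n_obs &gt; 7, which is why the two criteria can occasionally disagree on models that differ by one parameter.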
</sec>
<sec id="S2.SS9">
<title>2.9 Difference between model parameters of time and space</title>
<p>We employed the Wilcoxon signed-rank test to detect possible differences between the model parameters (&#x03B1;, <italic>w<sub>m</sub></italic>, and <italic>w<sub>r</sub></italic>) obtained from the best-fitted BLS<sub>3p</sub> model in time and space. We considered <italic>p</italic>-values of less than 0.05 to indicate statistical significance.</p>
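<p>A minimal example of this paired test with SciPy, using illustrative parameter values rather than the study's data:</p>

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# Illustrative per-subject parameters (NOT the study's data).
w_m_time = rng.normal(0.30, 0.07, size=20)
w_m_space = rng.normal(0.05, 0.03, size=20)

# Paired, two-sided signed-rank test across the 20 subjects.
stat, p = wilcoxon(w_m_time, w_m_space)
significant = p < 0.05
```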
</sec>
<sec id="S2.SS10">
<title>2.10 Correlation between time and space</title>
<p>We calculated the Pearson correlation between the best-fitted model parameters for the time and distance reproduction tasks to measure the degree of potential overlap between time and distance perception.</p>
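<p>This correlation analysis can be sketched in one SciPy call; the per-subject values below are illustrative, not the study's data:</p>

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
# Illustrative best-fitted parameters per subject (NOT the study's data),
# with a weak built-in coupling between the two domains.
w_r_time = rng.normal(0.29, 0.07, size=20)
w_r_space = 0.1 * w_r_time + rng.normal(0.20, 0.02, size=20)

r, p = pearsonr(w_r_time, w_r_space)
```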
</sec>
<sec id="S2.SS11">
<title>2.11 Effects of a perceptual domain on the other</title>
<p>First, we performed a two-way ANOVA on reproduced time (dependent variable) with presented time interval (factor 1) and eccentricity (factor 2) as factors. We performed the same analysis on reproduced distance (dependent variable) with presented distance interval (factor 1) and GO delay (factor 2) as factors. For the ANOVAs, we pooled data from all subjects.</p>
<p>To further investigate time-space interference, we fitted BLS<sub>3p</sub> to the time and distance data across stimulus eccentricities and fixation-to-GO delays, respectively. We computed the <italic>R</italic><sup>2</sup> score to assess how well a model fitted to the data at one eccentricity/delay predicted the data at the other eccentricities/delays. Since we wanted to assess how well our model captures the mean and SD of the data across different eccentricities/delays, we used the mean predicted values and the true values across time and distance intervals to calculate <italic>R</italic><sup>2</sup> for each subject. A drawback of this approach is the small number of sample points (3 in each domain), which resulted in negative <italic>R</italic><sup>2</sup> values for some of the fitted-predicted combinations in the time domain for 5 subjects; we excluded these 5 subjects from this analysis in the time domain.</p>
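<p>The following sketch shows why <italic>R</italic><sup>2</sup> can turn negative with so few points: whenever the cross-predicted values are farther from the data than the data's own mean, the score drops below zero. The sample values here are illustrative.</p>

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot. The score is
    negative whenever the predictions are worse than predicting the mean."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# With only 3 points, a uniform offset in the predictions is enough:
r2_good = r2_score([0.6, 0.8, 1.0], [0.62, 0.79, 0.98])  # close predictions
r2_bad = r2_score([0.6, 0.8, 1.0], [0.9, 0.9, 0.9])      # worse than the mean
```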
</sec>
</sec>
<sec id="S3" sec-type="results">
<title>3 Results</title>
<sec id="S3.SS1">
<title>3.1 Bayesian observer modeling of time and distance reproduction tasks</title>
<p>We calculated the Akaike information criterion (AIC), Bayesian information criterion (BIC), and cross-validated log-likelihood (CLL) across subjects for each model. As previously suggested, we considered a value of 5 in &#x201C;2 &#x00D7; difference in CLL&#x201D; as the cutoff point for each pairwise model comparison. In the time domain, we found that the BLS<sub>3p</sub> model was a better fit for 11/20 subjects compared with the other three models (for CLL values and a summary of results, see <xref ref-type="table" rid="T1">Table 1</xref>). To further investigate the validity of these results, we ran pairwise comparisons of the models&#x2019; CLL values for each subject. These analyses revealed that in the time perception domain, BLS<sub>2p</sub> was a better fit than MLE<sub>2p</sub> in 17/20 subjects (<xref ref-type="fig" rid="F2">Figure 2A</xref>), BLS<sub>3p</sub> was a better fit than MLE<sub>3p</sub> in 19/20 subjects (<xref ref-type="fig" rid="F2">Figure 2B</xref>), and BLS<sub>3p</sub> was a better fit than BLS<sub>2p</sub> in 10/20 subjects (<xref ref-type="fig" rid="F2">Figure 2C</xref>). These results are also plotted as histograms in <xref ref-type="supplementary-material" rid="FS1">Supplementary Figures 1A&#x2013;C</xref>.</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Cross-validated log-likelihoods (CLL) computed for models with BLS<sub>3p</sub>, MLE<sub>3p</sub>, BLS<sub>2p</sub>, and MLE<sub>2p</sub> estimators in time perception domain for all subjects.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;">Subject ID</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">BLS<sub>3p</sub> CLL</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">MLE<sub>3p</sub> CLL</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">BLS<sub>2p</sub> CLL</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">MLE<sub>2p</sub> CLL</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">1</td>
<td valign="top" align="center"><bold>&#x2212;3.15</bold></td>
<td valign="top" align="center">&#x2212;18.11</td>
<td valign="top" align="center">&#x2212;3.18</td>
<td valign="top" align="center">&#x2212;13.04</td>
</tr>
<tr>
<td valign="top" align="left">2</td>
<td valign="top" align="center"><bold>&#x2212;2.25</bold></td>
<td valign="top" align="center">&#x2212;12.43</td>
<td valign="top" align="center">&#x2212;3.29</td>
<td valign="top" align="center">&#x2212;13.98</td>
</tr>
<tr>
<td valign="top" align="left">3</td>
<td valign="top" align="center"><bold>&#x2212;10.25</bold></td>
<td valign="top" align="center">&#x2212;22.42</td>
<td valign="top" align="center">&#x2212;27.13</td>
<td valign="top" align="center">&#x2212;29.24</td>
</tr>
<tr>
<td valign="top" align="left">4</td>
<td valign="top" align="center"><bold>&#x2212;12.78</bold></td>
<td valign="top" align="center">&#x2212;24.46</td>
<td valign="top" align="center">&#x2212;12.80</td>
<td valign="top" align="center">&#x2212;28.80</td>
</tr>
<tr>
<td valign="top" align="left">5</td>
<td valign="top" align="center"><bold>&#x2212;9.62</bold></td>
<td valign="top" align="center">&#x2212;11.66</td>
<td valign="top" align="center">&#x2212;10.92</td>
<td valign="top" align="center">&#x2212;15.25</td>
</tr>
<tr>
<td valign="top" align="left">6</td>
<td valign="top" align="center"><bold>&#x2212;3.98</bold></td>
<td valign="top" align="center">&#x2212;16.92</td>
<td valign="top" align="center">&#x2212;4.65</td>
<td valign="top" align="center">&#x2212;20.13</td>
</tr>
<tr>
<td valign="top" align="left">7</td>
<td valign="top" align="center"><bold>&#x2212;9.22</bold></td>
<td valign="top" align="center">&#x2212;14.75</td>
<td valign="top" align="center">&#x2212;10.41</td>
<td valign="top" align="center">&#x2212;20.36</td>
</tr>
<tr>
<td valign="top" align="left">8</td>
<td valign="top" align="center"><bold>&#x2212;6.99</bold></td>
<td valign="top" align="center">&#x2212;16.43</td>
<td valign="top" align="center">&#x2212;12.85</td>
<td valign="top" align="center">&#x2212;20.84</td>
</tr>
<tr>
<td valign="top" align="left">9</td>
<td valign="top" align="center"><bold>&#x2212;23.69</bold></td>
<td valign="top" align="center">&#x2212;30.02</td>
<td valign="top" align="center">&#x2212;28.00</td>
<td valign="top" align="center">&#x2212;35.54</td>
</tr>
<tr>
<td valign="top" align="left">11</td>
<td valign="top" align="center"><bold>13.18</bold></td>
<td valign="top" align="center">&#x2212;0.13</td>
<td valign="top" align="center">3.67</td>
<td valign="top" align="center">&#x2212;0.12</td>
</tr>
<tr>
<td valign="top" align="left">12</td>
<td valign="top" align="center"><bold>&#x2212;21.10</bold></td>
<td valign="top" align="center">&#x2212;30.93</td>
<td valign="top" align="center">&#x2212;21.68</td>
<td valign="top" align="center">&#x2212;38.66</td>
</tr>
<tr>
<td valign="top" align="left">13</td>
<td valign="top" align="center"><bold>&#x2212;0.74</bold></td>
<td valign="top" align="center">&#x2212;15.14</td>
<td valign="top" align="center">&#x2212;0.99</td>
<td valign="top" align="center">&#x2212;18.87</td>
</tr>
<tr>
<td valign="top" align="left">14</td>
<td valign="top" align="center"><bold>&#x2212;22.69</bold></td>
<td valign="top" align="center">&#x2212;37.52</td>
<td valign="top" align="center">&#x2212;25.97</td>
<td valign="top" align="center">&#x2212;46.03</td>
</tr>
<tr>
<td valign="top" align="left">15</td>
<td valign="top" align="center"><bold>14.81</bold></td>
<td valign="top" align="center">3.78</td>
<td valign="top" align="center">&#x2212;4.38</td>
<td valign="top" align="center">0.70</td>
</tr>
<tr>
<td valign="top" align="left">16</td>
<td valign="top" align="center"><bold>&#x2212;11.70</bold></td>
<td valign="top" align="center">&#x2212;23.43</td>
<td valign="top" align="center">&#x2212;15.21</td>
<td valign="top" align="center">&#x2212;24.95</td>
</tr>
<tr>
<td valign="top" align="left">17</td>
<td valign="top" align="center"><bold>&#x2212;3.50</bold></td>
<td valign="top" align="center">&#x2212;18.55</td>
<td valign="top" align="center">&#x2212;6.61</td>
<td valign="top" align="center">&#x2212;20.16</td>
</tr>
<tr>
<td valign="top" align="left">18</td>
<td valign="top" align="center"><bold>&#x2212;0.01</bold></td>
<td valign="top" align="center">&#x2212;4.06</td>
<td valign="top" align="center">&#x2212;12.73</td>
<td valign="top" align="center">&#x2212;5.64</td>
</tr>
<tr>
<td valign="top" align="left">19</td>
<td valign="top" align="center"><bold>8.84</bold></td>
<td valign="top" align="center">&#x2212;5.20</td>
<td valign="top" align="center">8.54</td>
<td valign="top" align="center">&#x2212;11.45</td>
</tr>
<tr>
<td valign="top" align="left">21</td>
<td valign="top" align="center"><bold>&#x2212;16.23</bold></td>
<td valign="top" align="center">&#x2212;24.39</td>
<td valign="top" align="center">&#x2212;16.18</td>
<td valign="top" align="center">&#x2212;28.24</td>
</tr>
<tr>
<td valign="top" align="left">22</td>
<td valign="top" align="center"><bold>&#x2212;15.24</bold></td>
<td valign="top" align="center">&#x2212;23.37</td>
<td valign="top" align="center">&#x2212;26.14</td>
<td valign="top" align="center">&#x2212;36.31</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p>Bold values represent the highest value among the 4 models.</p></fn>
</table-wrap-foot>
</table-wrap>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>Pair-wise comparisons of observer models with different estimators in time and space. <bold>(A,B)</bold> Cross-validated log-likelihoods (CLL) were multiplied by 2 and the relative differences of BLS<sub>2p</sub> (light green) and BLS<sub>3p</sub> (dark green) models to MLE<sub>2p</sub> (light pink) and MLE<sub>3p</sub> (dark pink) models in time domain are plotted, respectively, in <bold>(A,B)</bold>. <bold>(C)</bold> CLLs were multiplied by 2 and the relative differences of BLS<sub>3p</sub> (dark green) model to BLS<sub>2p</sub> (light green) model in time domain is plotted. <bold>(D,E)</bold> CLLs were multiplied by 2 and the relative differences of BLS<sub>2p</sub> (light green) and BLS<sub>3p</sub> (dark green) models to MLE<sub>2p</sub> (light pink) and MLE<sub>3p</sub> (dark pink) models in space domain are plotted, respectively, in <bold>(D,E)</bold>. <bold>(F)</bold> CLLs were multiplied by 2 and the relative differences of BLS<sub>3p</sub> (dark green) model to BLS<sub>2p</sub> (light green) model in space domain is plotted. Ordinate represents subject&#x2019;s ID in each subplot.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-18-1387641-g002.tif"/>
</fig>
<p>In the distance domain as well, BLS<sub>3p</sub> was the best fit among our four models in 55% of subjects (11/20, <xref ref-type="table" rid="T2">Table 2</xref>). In the pairwise comparisons across models, we observed that BLS<sub>2p</sub> was a better fit than MLE<sub>2p</sub> in only 2/20 subjects, while MLE<sub>2p</sub> was a better fit than BLS<sub>2p</sub> in 8/20 subjects (<xref ref-type="fig" rid="F2">Figure 2D</xref>). Comparing BLS<sub>3p</sub> and MLE<sub>3p</sub> did not reveal a conclusive picture (<xref ref-type="fig" rid="F2">Figure 2E</xref>), while BLS<sub>3p</sub> was a better fit than BLS<sub>2p</sub> in 11/20 subjects (<xref ref-type="fig" rid="F2">Figure 2F</xref>). These results are also plotted as histograms in <xref ref-type="supplementary-material" rid="FS1">Supplementary Figures 1D&#x2013;F</xref>. We replicated all of these analyses with AIC and BIC, and the results were qualitatively similar to the CLL results presented here. It appears, therefore, that both prior-dependent and prior-independent models capture the data pattern in space reproduction.</p>
<table-wrap position="float" id="T2">
<label>TABLE 2</label>
<caption><p>Cross-validated log-likelihoods (CLL) computed for models with BLS<sub>3p</sub>, MLE<sub>3p</sub>, BLS<sub>2p</sub>, and MLE<sub>2p</sub> estimators in space perception domain for all subjects.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;">Subject ID</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">BLS<sub>3p</sub> CLL</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">MLE<sub>3p</sub> CLL</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">BLS<sub>2p</sub> CLL</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">MLE<sub>2p</sub> CLL</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">1</td>
<td valign="top" align="center"><bold>&#x2212;78.00</bold></td>
<td valign="top" align="center">&#x2212;78.68</td>
<td valign="top" align="center">&#x2212;86.05</td>
<td valign="top" align="center">&#x2212;81.82</td>
</tr>
<tr>
<td valign="top" align="left">2</td>
<td valign="top" align="center">&#x2212;88.40</td>
<td valign="top" align="center"><bold>&#x2212;87.93</bold></td>
<td valign="top" align="center">&#x2212;89.34</td>
<td valign="top" align="center">&#x2212;88.96</td>
</tr>
<tr>
<td valign="top" align="left">3</td>
<td valign="top" align="center">&#x2212;83.91</td>
<td valign="top" align="center">&#x2212;83.95</td>
<td valign="top" align="center">&#x2212;89.30</td>
<td valign="top" align="center">&#x2212;89.21</td>
</tr>
<tr>
<td valign="top" align="left">4</td>
<td valign="top" align="center"><bold>&#x2212;65.21</bold></td>
<td valign="top" align="center">&#x2212;66.82</td>
<td valign="top" align="center">&#x2212;68.74</td>
<td valign="top" align="center">&#x2212;67.30</td>
</tr>
<tr>
<td valign="top" align="left">5</td>
<td valign="top" align="center"><bold>&#x2212;71.45</bold></td>
<td valign="top" align="center">&#x2212;71.98</td>
<td valign="top" align="center">&#x2212;79.79</td>
<td valign="top" align="center">&#x2212;75.68</td>
</tr>
<tr>
<td valign="top" align="left">6</td>
<td valign="top" align="center"><bold>&#x2212;87.20</bold></td>
<td valign="top" align="center">&#x2212;87.36</td>
<td valign="top" align="center">&#x2212;89.35</td>
<td valign="top" align="center">&#x2212;89.77</td>
</tr>
<tr>
<td valign="top" align="left">7</td>
<td valign="top" align="center"><bold>&#x2212;74.52</bold></td>
<td valign="top" align="center">&#x2212;74.70</td>
<td valign="top" align="center">&#x2212;86.35</td>
<td valign="top" align="center">&#x2212;80.89</td>
</tr>
<tr>
<td valign="top" align="left">8</td>
<td valign="top" align="center">&#x2212;73.72</td>
<td valign="top" align="center"><bold>&#x2212;73.43</bold></td>
<td valign="top" align="center">&#x2212;84.51</td>
<td valign="top" align="center">&#x2212;79.44</td>
</tr>
<tr>
<td valign="top" align="left">9</td>
<td valign="top" align="center"><bold>&#x2212;84.95</bold></td>
<td valign="top" align="center">&#x2212;85.72</td>
<td valign="top" align="center">&#x2212;86.78</td>
<td valign="top" align="center">&#x2212;85.73</td>
</tr>
<tr>
<td valign="top" align="left">11</td>
<td valign="top" align="center"><bold>&#x2212;76.33</bold></td>
<td valign="top" align="center">&#x2212;77.00</td>
<td valign="top" align="center">&#x2212;89.62</td>
<td valign="top" align="center">&#x2212;83.65</td>
</tr>
<tr>
<td valign="top" align="left">12</td>
<td valign="top" align="center">&#x2212;78.45</td>
<td valign="top" align="center"><bold>&#x2212;77.45</bold></td>
<td valign="top" align="center">&#x2212;82.12</td>
<td valign="top" align="center">&#x2212;78.78</td>
</tr>
<tr>
<td valign="top" align="left">13</td>
<td valign="top" align="center"><bold>&#x2212;68.91</bold></td>
<td valign="top" align="center">&#x2212;69.53</td>
<td valign="top" align="center">&#x2212;70.23</td>
<td valign="top" align="center">&#x2212;69.40</td>
</tr>
<tr>
<td valign="top" align="left">14</td>
<td valign="top" align="center"><bold>&#x2212;82.18</bold></td>
<td valign="top" align="center">&#x2212;85.35</td>
<td valign="top" align="center">&#x2212;82.21</td>
<td valign="top" align="center">&#x2212;85.96</td>
</tr>
<tr>
<td valign="top" align="left">15</td>
<td valign="top" align="center">&#x2212;66.76</td>
<td valign="top" align="center"><bold>&#x2212;66.51</bold></td>
<td valign="top" align="center">&#x2212;67.17</td>
<td valign="top" align="center">&#x2212;67.13</td>
</tr>
<tr>
<td valign="top" align="left">16</td>
<td valign="top" align="center">&#x2212;73.24</td>
<td valign="top" align="center"><bold>&#x2212;73.11</bold></td>
<td valign="top" align="center">&#x2212;82.57</td>
<td valign="top" align="center">&#x2212;77.87</td>
</tr>
<tr>
<td valign="top" align="left">17</td>
<td valign="top" align="center"><bold>&#x2212;80.69</bold></td>
<td valign="top" align="center">&#x2212;81.06</td>
<td valign="top" align="center">&#x2212;81.30</td>
<td valign="top" align="center">&#x2212;81.90</td>
</tr>
<tr>
<td valign="top" align="left">18</td>
<td valign="top" align="center">&#x2212;90.48</td>
<td valign="top" align="center">&#x2212;89.11</td>
<td valign="top" align="center">&#x2212;90.48</td>
<td valign="top" align="center"><bold>&#x2212;88.98</bold></td>
</tr>
<tr>
<td valign="top" align="left">19</td>
<td valign="top" align="center"><bold>&#x2212;81.45</bold></td>
<td valign="top" align="center">&#x2212;83.86</td>
<td valign="top" align="center">&#x2212;83.51</td>
<td valign="top" align="center">&#x2212;87.70</td>
</tr>
<tr>
<td valign="top" align="left">21</td>
<td valign="top" align="center">&#x2212;78.63</td>
<td valign="top" align="center"><bold>&#x2212;77.78</bold></td>
<td valign="top" align="center">&#x2212;81.33</td>
<td valign="top" align="center">&#x2212;78.79</td>
</tr>
<tr>
<td valign="top" align="left">22</td>
<td valign="top" align="center">&#x2212;74.77</td>
<td valign="top" align="center"><bold>&#x2212;74.36</bold></td>
<td valign="top" align="center">&#x2212;93.85</td>
<td valign="top" align="center">&#x2212;84.76</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p>Bold values represent the highest value among the 4 models.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>We plotted subjects&#x2019; reproduced time and distance as a function of the presented duration and distance (<xref ref-type="fig" rid="F3">Figure 3</xref>). In both domains, the reproduced values roughly followed the presented values. To assess the goodness-of-fit across our four models (BLS<sub>2p</sub>, BLS<sub>3p</sub>, MLE<sub>2p</sub>, MLE<sub>3p</sub>), we fitted each model to our data separately and plotted simulations from the best-fitted models over the data. As is visually evident, the BLS models outperformed the MLE models in the time domain (<xref ref-type="fig" rid="F3">Figure 3A</xref>), whereas in the distance domain (<xref ref-type="fig" rid="F3">Figure 3B</xref>), the MLE and BLS models were very similar. Thus, the results indicate that both prior-dependent models outperformed the prior-independent models in the time perception domain.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p>Subjects and observer model behavior in time reproduction <bold>(A)</bold> and distance reproduction tasks <bold>(B)</bold>. <bold>(a1&#x2013;a4)</bold> Black circles and the error bar show the mean &#x00B1; SD of subjects&#x2019; reproduced times across three sample intervals. The colored circles and the dotted error bar indicate the mean &#x00B1; SD of the Bayesian observer model reproduced times computed from simulations of the best-fitted models with 4 estimators (BLS<sub>3p</sub>, BLS<sub>2p</sub>, MLE<sub>3p</sub>, MLE<sub>2p</sub>, respectively). <bold>(b1&#x2013;b4)</bold> Black circles and the error bar show the mean &#x00B1; SD of subjects&#x2019; reproduced distances across three sample intervals. The colored circles and the dotted error bar indicate the mean &#x00B1; SD of the Bayesian observer model reproduced distances computed from simulations of the best-fitted models with 4 estimators, BLS<sub>2p</sub>, BLS<sub>3p</sub>, MLE<sub>2p</sub>, and MLE<sub>3p</sub>, respectively. The inset in <bold>(a2, a4, b2, b4)</bold> shows the distribution of alpha values for all subjects. The dotted line represents the median of alpha distribution.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-18-1387641-g003.tif"/>
</fig>
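The BLS-versus-MLE contrast above can be sketched in a few lines of Python: with scalar measurement noise, the MLE estimate is the noisy measurement itself (prior-independent), whereas the BLS estimate is the posterior mean over the prior range and is therefore pulled toward the middle of that range. This is an illustrative sketch, not the fitting code used in the study; the prior range, noise level, and function names are our assumptions.

```python
import numpy as np

def simulate_estimates(t_s, w_m, prior=(0.6, 1.0), n=10_000, seed=0):
    """Mean MLE and BLS estimates for one sample magnitude t_s.

    Measurement noise is scalar (SD = w_m * t_s); the prior is
    uniform on `prior`. Parameter values are illustrative only.
    """
    rng = np.random.default_rng(seed)
    t_m = t_s + rng.normal(0.0, w_m * t_s, n)      # noisy measurements
    mle = t_m                                      # MLE: the measurement itself
    a, b = prior
    grid = np.linspace(a, b, 200)                  # support of the uniform prior
    # Likelihood of each candidate magnitude given each measurement
    lik = np.exp(-0.5 * ((t_m[:, None] - grid) / (w_m * grid)) ** 2) / grid
    bls = (lik * grid).sum(axis=1) / lik.sum(axis=1)   # posterior mean (BLS)
    return mle.mean(), bls.mean()
```

For a sample near the short end of the prior, the BLS mean sits above the MLE mean (regression toward the prior mean); near the long end, it sits below it.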
</sec>
<sec id="S3.SS2">
<title>3.2 Comparison of model parameters between time and space</title>
<p>Given our results so far, we compared the best-fitted free parameters of the BLS<sub>3p</sub> model between time and space. For &#x03B1;, we found neither a correlation (<italic>r</italic> = &#x2212;0.11, <italic>p</italic> = 0.64, <xref ref-type="fig" rid="F4">Figure 4A</xref>) nor a difference between time (mean &#x00B1; SD: 1.01 &#x00B1; 0.21) and space (0.93 &#x00B1; 0.11; <italic>W</italic> = 70, <italic>p</italic> = 0.2, Wilcoxon signed-rank test). We calculated the measurement (<italic>w<sub>m</sub></italic>) and reproduction (<italic>w<sub>r</sub></italic>) noise parameters of the space and time models (<xref ref-type="fig" rid="F4">Figure 4B</xref>). <italic>w<sub>m</sub></italic> in the space domain (0.05 &#x00B1; 0.03) was lower than in the time domain (0.30 &#x00B1; 0.07; <italic>W</italic> = 210, <italic>p</italic> &#x003C; 0.0001). Likewise, <italic>w<sub>r</sub></italic> in the space domain (0.23 &#x00B1; 0.02) was smaller than in the time domain (0.29 &#x00B1; 0.07; <italic>W</italic> = 185, <italic>p</italic> &#x003C; 0.001). Although <italic>w<sub>m</sub></italic> was not correlated between the space and time domains (<italic>r</italic> = 0.4, <italic>p</italic> = 0.08, <xref ref-type="fig" rid="F4">Figure 4C</xref>), <italic>w<sub>r</sub></italic> was positively and significantly correlated across the two domains (<italic>r</italic> = 0.45, <italic>p</italic> &#x003C; 0.05, <xref ref-type="fig" rid="F4">Figure 4D</xref>). These results show that time and space differ in measurement noise while sharing a similar noise profile in reproduction.</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p>Comparison of and correlation between best fitted model parameters computed in time and space. <bold>(A)</bold> The correlation (<italic>r</italic> = &#x2013;0.11, <italic>p</italic> = 0.64) between alpha computed from the BLS<sub>3p</sub> model in time and space. <bold>(B)</bold> Box plot comparing the distribution of best-fitted model noise parameters between time and space. The line inside the box represents the median, and the whiskers extend to the most extreme data points within 1.5 times the inter-quartile range (IQR). The Wilcoxon signed rank test was conducted to assess the statistical significance of the differences between best-fitted model noise parameters in time and space. <italic>p</italic> &#x003C; 0.001&#x002A;&#x002A;&#x002A;, <italic>p</italic> &#x003C; 0.0001&#x002A;&#x002A;&#x002A;&#x002A;. <bold>(C)</bold> The correlation (<italic>r</italic> = 0.40, <italic>p</italic> = 0.08) between <italic>w<sub>m</sub></italic> computed from the BLS<sub>3p</sub> model in time and space. <bold>(D)</bold> The correlation (<italic>r</italic> = 0.45, <italic>p</italic> &#x003C; 0.05) between <italic>w<sub>r</sub></italic> computed from the BLS<sub>3p</sub> model in time and space.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-18-1387641-g004.tif"/>
</fig>
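The parameter comparisons above (paired Wilcoxon tests across domains, Pearson correlations between domains) can be sketched on synthetic data. The sample size and parameter distributions below are invented for illustration and do not reproduce the study&#x2019;s numbers.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 19  # hypothetical number of subjects

# Hypothetical per-subject noise parameters: w_m differs strongly
# between domains, while w_r shares a common component across them.
w_m_time = rng.normal(0.30, 0.07, n)
w_m_space = rng.normal(0.05, 0.02, n)
shared = rng.normal(0.0, 0.05, n)
w_r_time = 0.29 + shared + rng.normal(0.0, 0.03, n)
w_r_space = 0.23 + shared + rng.normal(0.0, 0.03, n)

# Paired (within-subject) comparison of w_m between domains
W, p_w = stats.wilcoxon(w_m_time, w_m_space)

# Correlation of w_r across domains
r, p_r = stats.pearsonr(w_r_time, w_r_space)
```

Because each subject contributes a parameter in both domains, the paired signed-rank test is the appropriate non-parametric comparison, as in Figure 4B.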
</sec>
<sec id="S3.SS3">
<title>3.3 Effect of eccentricities/delays on the perception of time/distance intervals</title>
<p>We plotted the reproduced time intervals across different eccentricities (<xref ref-type="fig" rid="F5">Figure 5A</xref>). To test whether time perception depended on distance, we performed a two-way ANOVA on the reproduced time intervals as the dependent variable (factor 1: eccentricity; factor 2: time interval). Both the time interval (as expected, <italic>F</italic> = 1,961.0, df = 2, <italic>p</italic> &#x003C; 0.001) and the eccentricity (<italic>F</italic> = 26.1, df = 2, <italic>p</italic> &#x003C; 0.001) had a significant effect on the reproduced time interval. The interaction between eccentricity and time interval did not reach significance (<italic>F</italic> = 0.5, df = 4, <italic>p</italic> = 0.7). This means that as the eccentricity of the stimulus increases, subjects&#x2019; perceived time increases as well.</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption><p>Time-space interference. <bold>(A)</bold> Circles and the error bar show the mean &#x00B1; SD of subjects&#x2019; reproduced times across three sample intervals for eccentricities of 6&#x00B0; (gray), 8&#x00B0; (green), and 12&#x00B0; (cyan). <bold>(B)</bold> Circles and the error bar show the mean &#x00B1; SD of subjects&#x2019; reproduced distances across three delays of 400 (gray), 800 (green), and 1,600 ms (cyan).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-18-1387641-g005.tif"/>
</fig>
<p>For the distance reproduction task, we plotted the reproduced distance intervals across different delays (<xref ref-type="fig" rid="F5">Figure 5B</xref>). To test whether space perception depended on time, we performed a two-way ANOVA on the reproduced distance intervals as the dependent variable (factor 1: delay; factor 2: distance interval). While distance had a significant effect (as expected, <italic>F</italic> = 4,073.9, df = 2, <italic>p</italic> &#x003C; 0.001), delay did not (<italic>F</italic> = 2.2, df = 2, <italic>p</italic> = 0.1), and the interaction between delay and distance interval did not reach significance either (<italic>F</italic> = 0.3, df = 4, <italic>p</italic> = 0.8). This means that increasing the delay before the response did not affect subjects&#x2019; perceived distance.</p>
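The two-way ANOVAs above (e.g., factors eccentricity and time interval on reproduced times) can be sketched for a balanced design. This is a minimal textbook implementation for illustration, not the analysis code used in the study.

```python
import numpy as np

def two_way_anova(y, a, b):
    """Balanced two-way ANOVA with interaction.

    y: responses; a, b: factor labels (same length as y).
    Returns {"A": (F, df), "B": (F, df), "AxB": (F, df)}.
    Minimal sketch; assumes a balanced design.
    """
    y, a, b = np.asarray(y), np.asarray(a), np.asarray(b)
    A, B = np.unique(a), np.unique(b)
    grand = y.mean()
    # Main-effect sums of squares from marginal means
    ss_a = sum((a == i).sum() * (y[a == i].mean() - grand) ** 2 for i in A)
    ss_b = sum((b == j).sum() * (y[b == j].mean() - grand) ** 2 for j in B)
    # Between-cell sum of squares from cell means
    ss_cells = sum(
        ((a == i) & (b == j)).sum() * (y[(a == i) & (b == j)].mean() - grand) ** 2
        for i in A for j in B
    )
    ss_ab = ss_cells - ss_a - ss_b                 # interaction sum of squares
    ss_err = ((y - grand) ** 2).sum() - ss_cells   # residual sum of squares
    df_a, df_b = len(A) - 1, len(B) - 1
    df_ab, df_err = df_a * df_b, len(y) - len(A) * len(B)
    ms_err = ss_err / df_err
    return {"A": (ss_a / df_a / ms_err, df_a),
            "B": (ss_b / df_b / ms_err, df_b),
            "AxB": (ss_ab / df_ab / ms_err, df_ab)}
```

With three levels per factor, this yields df = 2 for each main effect and df = 4 for the interaction, matching the degrees of freedom reported above.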
<p>To further investigate these observations, we took a modeling approach. We fitted BLS<sub>3p</sub> to the timing data as a function of stimulus eccentricity and used the fitted parameters for simulation across the three eccentricities (6, 8, and 12&#x00B0;). We computed the <italic>R</italic><sup>2</sup> score to assess the goodness-of-fit of a model trained on one eccentricity and tested on the others. The mean <italic>R</italic><sup>2</sup> values are shown in <xref ref-type="fig" rid="F6">Figure 6A</xref>. Although a model fitted on any eccentricity predicted data from the other eccentricities well (<italic>R</italic><sup>2</sup>s &#x003E; 0.83), each eccentricity was predicted best by the model fitted on that same eccentricity (the main diagonal in <xref ref-type="fig" rid="F6">Figure 6A</xref>). Similarly, in the space domain, we fitted BLS<sub>3p</sub> to the space data as a function of delay and used the fitted parameters for simulation across the three delays. We computed the <italic>R</italic><sup>2</sup> score to assess the goodness-of-fit of a model trained on one delay and tested on the other delays (<italic>R</italic><sup>2</sup>s &#x003E; 0.93). The mean <italic>R</italic><sup>2</sup> values are shown in <xref ref-type="fig" rid="F6">Figure 6B</xref>. Here we did not find any systematic difference between models and data as a function of delay. The modeling approach thus confirmed the ANOVA results.</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption><p>Prediction of a perceptual domain as a function of another one. <bold>(A)</bold> Heatmap shows <italic>R</italic><sup>2</sup> values for the simulated Bayesian observer model (BLS<sub>3p</sub>) for the time domain fitted on trials with short (6&#x00B0;), intermediate (8&#x00B0;), and long (12&#x00B0;) stimulus eccentricity and tested on trials with these eccentricities. <bold>(B)</bold> Heatmap shows <italic>R</italic><sup>2</sup> values for the simulated Bayesian observer model (BLS<sub>3p</sub>) for distance fitted on trials with short (0.4 s), intermediate (0.8 s), and long (1.6 s) fixation-to-GO delays and tested on trials with these delays.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-18-1387641-g006.tif"/>
</fig>
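The cross-condition generalization analysis above (fit on one eccentricity or delay, test on the others, collect the <italic>R</italic><sup>2</sup> values in a matrix) can be sketched as follows. Here `fit` and `predict` are placeholders for the BLS<sub>3p</sub> fitting and simulation routines, which are not part of this sketch.

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

def cross_prediction_matrix(data, fit, predict):
    """R^2 of a model fitted on one condition, tested on every condition.

    data: {condition: (stimuli, responses)}; fit/predict stand in
    for the model's fitting and simulation routines.
    Row i, column j holds R^2 of the condition-i model on condition j.
    """
    conds = list(data)
    M = np.zeros((len(conds), len(conds)))
    for i, ci in enumerate(conds):
        params = fit(*data[ci])           # train on condition ci
        for j, cj in enumerate(conds):
            x, y = data[cj]
            M[i, j] = r2(y, predict(params, x))   # test on condition cj
    return M
```

With a simple linear model standing in for BLS<sub>3p</sub>, each condition is predicted best by its own fit (the main diagonal), mirroring the pattern in Figure 6A.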
</sec>
</sec>
<sec id="S4" sec-type="discussion">
<title>4 Discussion</title>
<p>We investigated the similarities and differences in the perception of time and space, two components of the &#x201C;A Theory of Magnitude&#x201D; (ATOM) proposal (<xref ref-type="bibr" rid="B54">Walsh, 2003</xref>). Our investigation was motivated by evidence for the various encoding schemes recruited for time and space across different parts of the brain (<xref ref-type="bibr" rid="B25">Kraus et al., 2013</xref>; <xref ref-type="bibr" rid="B28">Marcos et al., 2017</xref>; <xref ref-type="bibr" rid="B1">Abramson et al., 2023</xref>). To this end, we designed two magnitude reproduction tasks and made them as similar as possible in terms of stimuli and procedure. We fitted the observer model used by <xref ref-type="bibr" rid="B21">Jazayeri and Shadlen (2010)</xref>, with two different estimators [Bayesian least square (BLS) vs. maximum likelihood estimator (MLE)], to our behavioral data. We showed that adding another parameter (&#x03B1;) to the classic Bayesian observer model (<xref ref-type="bibr" rid="B21">Jazayeri and Shadlen, 2010</xref>) resulted in better fits regardless of the estimator used. We added &#x03B1; to the ideal observer model to capture the heterogeneity of time and space perception in the population (<xref ref-type="bibr" rid="B31">Matthews and Meck, 2014</xref>).</p>
<p>We showed that in the time domain, models that take the prior into account (i.e., BLS<sub>2p</sub> and BLS<sub>3p</sub>) outperformed those that do not (i.e., MLE<sub>2p</sub> and MLE<sub>3p</sub>). In the space domain, it remains inconclusive whether MLE<sub>3p</sub> or BLS<sub>3p</sub> better captures our data. Since both models fitted the space data well, we chose BLS<sub>3p</sub> so that time and space could be compared with the same model. Given that a probabilistic framework explained both time and space, we believe there is a general computational principle in both domains. Although time and space perception abide by shared principles, our observations suggest that this probabilistic computation is applied differently in the two domains: time perception is prior-dependent, while we cannot say the same for space perception.</p>
<p>Our results are corroborated by studies that have shown an effect of global context in time. In line with previous work, we also provide evidence of an effect of global context on space perception (<xref ref-type="bibr" rid="B29">Martin et al., 2017</xref>; <xref ref-type="bibr" rid="B44">Sadibolova and Terhune, 2022</xref>; <xref ref-type="bibr" rid="B55">Wang et al., 2023</xref>). However, we also show that a model that ignores the prior describes the space data equally well. This lack of difference between prior-dependent and prior-independent models in explaining space perception points to a new challenge in its study. Space takes different meanings in cognitive experiments (eccentricity in the retinotopic map, surface, distance in navigation, etc.). It may be that priors affect some measures of space and not others, or that the degree of the effect varies across spatial contexts.</p>
<p>Our results are supported by biological evidence as well. A recent meta-analysis has shown that although time and space perception share common regions in the brain, their processing might be separated along an anatomical anterior-posterior gradient (<xref ref-type="bibr" rid="B12">Cona et al., 2021</xref>). Dissociable neural indices for time and space have also been found in human electroencephalography (EEG) data (<xref ref-type="bibr" rid="B40">Robinson and Wiener, 2021</xref>). Neural recordings from the prefrontal cortex (PFC) and temporal lobe of epileptic patients have revealed neurons that encode only time, only space, or both. In non-human primates, neurons in the PFC that encode time and space have been found to overlap only slightly (<xref ref-type="bibr" rid="B28">Marcos et al., 2017</xref>), and the commonality may lie at the level of goal coding. These results can be explained by the recent proposal that the brain uses distinct mechanisms to measure temporal and spatial magnitudes and combines them into a unimodal estimate through another mechanism (<xref ref-type="bibr" rid="B18">Gladhill et al., 2022</xref>).</p>
<p>In the time domain, we observed that in the subjects with the lowest and the highest &#x03B1; values, &#x03B1; captured the overall overestimation (Subject No. 3, <xref ref-type="supplementary-material" rid="FS2">Supplementary Figure 2</xref>) and underestimation (Subject No. 17, <xref ref-type="supplementary-material" rid="FS3">Supplementary Figure 3</xref>) in time reproduction behavior, respectively. We suggest that &#x03B1; represents the speed of an internal clock (<xref ref-type="bibr" rid="B32">Meck, 1983</xref>). Research has shown that people experience the passage of time in different ways and that the speed of the internal clock varies in the population (<xref ref-type="bibr" rid="B56">Wearden et al., 1999</xref>; <xref ref-type="bibr" rid="B2">Allman et al., 2014</xref>). Some attempts have been made to use drift diffusion models (DDM) to describe data from timing tasks (<xref ref-type="bibr" rid="B49">Simen et al., 2011</xref>). In line with this view, we can reformulate time reproduction as a decision of when to act. The timing of a decision has been studied extensively in the decision-making literature by incorporating an evidence-independent urgency signal into the accumulation of evidence (<xref ref-type="bibr" rid="B11">Cisek et al., 2009</xref>; <xref ref-type="bibr" rid="B8">Carland et al., 2016</xref>; <xref ref-type="bibr" rid="B17">Ferrucci et al., 2021</xref>). An alternative interpretation is that &#x03B1; in time perception represents such an urgency signal. The heterogeneity we found in our subjects&#x2019; time reproduction performance might thus be linked to their different levels of urgency to act.</p>
<p>In the space domain, we made the same observation regarding &#x03B1;. However, the majority of subjects had &#x03B1; values below 1 (median of 0.9 for the &#x03B1; distribution), which translates into an overall underestimation tendency in distance reproduction. This observation is in line with a previous report of an average undershoot of about 10% of target eccentricity in peripheral saccades (<xref ref-type="bibr" rid="B53">Vitu et al., 2017</xref>).</p>
<p>We observed that <italic>w<sub>m</sub></italic> and <italic>w<sub>r</sub></italic>, the model parameters representing the level of variability, were lower in the space than in the time models. We also found that <italic>w<sub>m</sub></italic> was not correlated between time and space, which hints at the possibility of different measurement mechanisms in the two domains. One explanation for the observed difference in <italic>w<sub>m</sub></italic> could be that time and space have different sources of variability, and that the measurement and reproduction of space pass through less noisy stages (or are processed further). This difference may stem from how the two dimensions are represented in the brain. Given that the brain has many retinotopic maps in which spatial relations such as eccentricity and distance are coded (<xref ref-type="bibr" rid="B16">Felleman and Van Essen, 1991</xref>), it has the capacity to reduce noise at the level of measurement. Time, on the other hand, has very few chronotopic maps (<xref ref-type="bibr" rid="B38">Protopapa et al., 2019</xref>), so its perception would be subject to more noise than that of space. At the motor level, it has long been known that the primate brain has a dedicated system for saccadic eye movements that direct the eyes to different locations (<xref ref-type="bibr" rid="B41">Rucci et al., 2007</xref>). This system is very precise in transforming static visual scenes into spatiotemporal signals from which the brain structures spatial maps of the environment (<xref ref-type="bibr" rid="B42">Rucci and Poletti, 2015</xref>). We believe this is why deciding where to look is less noisy than deciding when to look.</p>
<p>Distance can be computed from both egocentric cues (e.g., in navigation) and distance from the fovea (allocentric, e.g., when making a saccade) (<xref ref-type="bibr" rid="B15">Feigenbaum and Rolls, 1991</xref>; <xref ref-type="bibr" rid="B43">Sabes et al., 2002</xref>; <xref ref-type="bibr" rid="B36">Olson, 2003</xref>). Distance reproduction tasks have so far mostly focused on egocentric representations, as studies have been conducted in virtual reality settings (<xref ref-type="bibr" rid="B30">Maruhn et al., 2019</xref>; <xref ref-type="bibr" rid="B58">Zhang et al., 2021</xref>). We used a task design in which the perceived and reproduced distances represent the allocentric mapping of spatial information. In recent years, research has found neurons in the primate hippocampal formation that encode egocentric spatial representations and also represent allocentric spatial relations (<xref ref-type="bibr" rid="B3">Baraduc et al., 2019</xref>; <xref ref-type="bibr" rid="B13">Courellis et al., 2019</xref>), driven by saccadic eye movements during visual exploration of the environment (<xref ref-type="bibr" rid="B24">Killian et al., 2012</xref>). Because of this similarity, we believe our results could generalize to tasks that measure the egocentric encoding of distance.</p>
<p>We observed that as the eccentricity of the stimulus increased, subjects overestimated time intervals, as reflected in the different (though not statistically significant) &#x03B1; values obtained from models fitted separately to the eccentricities in our task. We also showed that parameters fitted to timing data at one eccentricity do not fully generalize to other eccentricities, as shown by the deterioration of the goodness-of-fit metric. This finding is in line with previous observations that, with a constant stimulus onset asynchrony, the perceived duration between two visual stimuli increases as the distance between them increases, an effect known as the kappa effect (<xref ref-type="bibr" rid="B23">Jones and Huang, 1982</xref>; <xref ref-type="bibr" rid="B52">Vidaud-Laperri&#x00E8;re et al., 2022</xref>). On the other hand, the reproduced distances did not differ as a function of delay. This observation is unexpected given the literature on the relation between working memory and the anti-saccade task, which shows a deterioration of task performance as a function of the delay interval (<xref ref-type="bibr" rid="B35">Munoz and Everling, 2004</xref>; <xref ref-type="bibr" rid="B33">Meier et al., 2018</xref>). We think the maximum delay duration we used (1.6 s) is not long enough for the interference effect of time on space to manifest.</p>
<p>We do not know whether BLS<sub>3p</sub> would fit previously reported timing data, such as that of <xref ref-type="bibr" rid="B21">Jazayeri and Shadlen (2010)</xref>, better than BLS<sub>2p</sub>. However, whereas they trained subjects to a stable performance before the main task and provided trial-by-trial feedback on reproduction precision, we did neither. So, although we cannot extend our modeling work to theirs, our model is better suited to data collected without feedback and without heavily trained subjects. Given that our model can accommodate data collection with minimal training, we believe it can be applied more easily in populations for which data collection is an obstacle, such as children or people with neurological or psychiatric disorders. Note, however, that we used only three intervals for both time and space (nine conditions) to keep the duration of completing both tasks manageable. We are aware that three intervals might not be enough to generalize our results to a larger range of time or space, and that other models we did not test here might explain these data more comprehensively. Given the literature on time and space perception, we think this is unlikely, but it needs further investigation.</p>
</sec>
<sec id="S5" sec-type="data-availability">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="S6" sec-type="ethics-statement">
<title>Ethics statement</title>
<p>The studies involving humans were approved by the School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM). The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="S7" sec-type="author-contributions">
<title>Author contributions</title>
<p>SS: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Supervision, Validation, Visualization, Writing &#x2013; original draft, Writing &#x2013; review &#x0026; editing. MS: Conceptualization, Methodology, Project administration, Supervision, Validation, Writing &#x2013; review &#x0026; editing.</p>
</sec>
</body>
<back>
<sec id="S8" sec-type="funding-information">
<title>Funding</title>
<p>The authors declare that no financial support was received for the research, authorship, and/or publication of this article.</p>
</sec>
<ack><p>We are grateful to Abdol-hossein Vahabie for giving feedback on our data analysis approach, and Mohammad Rabie for comments on a previous version of our manuscript.</p>
</ack>
<sec id="S9" sec-type="COI-statement">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="S10" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec id="S11" sec-type="supplementary-material">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fnins.2024.1387641/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fnins.2024.1387641/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.docx" id="DS1" mimetype="application/vnd.openxmlformats-officedocument.wordprocessingml.document" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Image_1.pdf" id="FS1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Image_2.pdf" id="FS2" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Image_3.pdf" id="FS3" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abramson</surname> <given-names>S.</given-names></name> <name><surname>Kraus</surname> <given-names>B. J.</given-names></name> <name><surname>White</surname> <given-names>J. A.</given-names></name> <name><surname>Hasselmo</surname> <given-names>M. E.</given-names></name> <name><surname>Derdikman</surname> <given-names>D.</given-names></name> <name><surname>Morris</surname> <given-names>G.</given-names></name></person-group> (<year>2023</year>). <article-title>Flexible coding of time or distance in hippocampal cells.</article-title> <source><italic>eLife</italic></source> <volume>12</volume>:<issue>e83930</issue>. <pub-id pub-id-type="doi">10.7554/eLife.83930</pub-id> <pub-id pub-id-type="pmid">37842914</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Allman</surname> <given-names>M. J.</given-names></name> <name><surname>Teki</surname> <given-names>S.</given-names></name> <name><surname>Griffiths</surname> <given-names>T. D.</given-names></name> <name><surname>Meck</surname> <given-names>W. H.</given-names></name></person-group> (<year>2014</year>). <article-title>Properties of the internal clock: First- and second-order principles of subjective time.</article-title> <source><italic>Annu. Rev. Psychol.</italic></source> <volume>65</volume> <fpage>743</fpage>&#x2013;<lpage>771</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-psych-010213-115117</pub-id> <pub-id pub-id-type="pmid">24050187</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baraduc</surname> <given-names>P.</given-names></name> <name><surname>Duhamel</surname> <given-names>J.-R.</given-names></name> <name><surname>Wirth</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>Schema cells in the macaque hippocampus.</article-title> <source><italic>Science</italic></source> <volume>363</volume> <fpage>635</fpage>&#x2013;<lpage>639</lpage>. <pub-id pub-id-type="doi">10.1126/science.aav5404</pub-id> <pub-id pub-id-type="pmid">30733419</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brainard</surname> <given-names>D. H.</given-names></name></person-group> (<year>1997</year>). <article-title>The psychophysics toolbox.</article-title> <source><italic>Spat. Vis.</italic></source> <volume>10</volume> <fpage>433</fpage>&#x2013;<lpage>436</lpage>.</citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Burgess</surname> <given-names>C.</given-names></name> <name><surname>Schuck</surname> <given-names>N. W.</given-names></name> <name><surname>Burgess</surname> <given-names>N.</given-names></name></person-group> (<year>2011</year>). &#x201C;<article-title>Chapter 5&#x2014;temporal neuronal oscillations can produce spatial phase codes</article-title>,&#x201D; in <source><italic>Space, Time and Number in the Brain</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Dehaene</surname> <given-names>S.</given-names></name> <name><surname>Brannon</surname> <given-names>E. M.</given-names></name></person-group> (<publisher-loc>Amsterdam</publisher-loc>: <publisher-name>Academic Press</publisher-name>), <fpage>59</fpage>&#x2013;<lpage>69</lpage>. <pub-id pub-id-type="doi">10.1016/B978-0-12-385948-8.00005-0</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Buzs&#x00E1;ki</surname> <given-names>G.</given-names></name> <name><surname>Llin&#x00E1;s</surname> <given-names>R.</given-names></name></person-group> (<year>2017</year>). <article-title>Space and time in the brain.</article-title> <source><italic>Science</italic></source> <volume>358</volume> <fpage>482</fpage>&#x2013;<lpage>485</lpage>. <pub-id pub-id-type="doi">10.1126/science.aan8869</pub-id> <pub-id pub-id-type="pmid">29074768</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cai</surname> <given-names>Z. G.</given-names></name> <name><surname>Connell</surname> <given-names>L.</given-names></name></person-group> (<year>2016</year>). <article-title>On magnitudes in memory: An internal clock account of space&#x2013;time interaction.</article-title> <source><italic>Acta Psychol.</italic></source> <volume>168</volume> <fpage>1</fpage>&#x2013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1016/j.actpsy.2016.04.003</pub-id> <pub-id pub-id-type="pmid">27116395</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Carland</surname> <given-names>M.</given-names></name> <name><surname>Marcos</surname> <given-names>E.</given-names></name> <name><surname>Thura</surname> <given-names>D.</given-names></name> <name><surname>Cisek</surname> <given-names>P.</given-names></name></person-group> (<year>2016</year>). <article-title>Evidence against perfect integration of sensory information during perceptual decision making.</article-title> <source><italic>J. Neurophysiol.</italic></source> <volume>115</volume> <fpage>915</fpage>&#x2013;<lpage>930</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00264.2015</pub-id> <pub-id pub-id-type="pmid">26609110</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Casasanto</surname> <given-names>D.</given-names></name> <name><surname>Boroditsky</surname> <given-names>L.</given-names></name></person-group> (<year>2008</year>). <article-title>Time in the mind: Using space to think about time.</article-title> <source><italic>Cognition</italic></source> <volume>106</volume> <fpage>579</fpage>&#x2013;<lpage>593</lpage>. <pub-id pub-id-type="doi">10.1016/j.cognition.2007.03.004</pub-id> <pub-id pub-id-type="pmid">17509553</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Peng</surname> <given-names>C.</given-names></name> <name><surname>Avitt</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>A unifying Bayesian framework accounting for spatiotemporal interferences with a deceleration tendency.</article-title> <source><italic>Vis. Res.</italic></source> <volume>187</volume> <fpage>66</fpage>&#x2013;<lpage>74</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2021.06.005</pub-id> <pub-id pub-id-type="pmid">34217984</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cisek</surname> <given-names>P.</given-names></name> <name><surname>Puskas</surname> <given-names>G. A.</given-names></name> <name><surname>El-Murr</surname> <given-names>S.</given-names></name></person-group> (<year>2009</year>). <article-title>Decisions in changing conditions: The urgency-gating model.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>29</volume> <fpage>11560</fpage>&#x2013;<lpage>11571</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1844-09.2009</pub-id> <pub-id pub-id-type="pmid">19759303</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cona</surname> <given-names>G.</given-names></name> <name><surname>Wiener</surname> <given-names>M.</given-names></name> <name><surname>Scarpazza</surname> <given-names>C.</given-names></name></person-group> (<year>2021</year>). <article-title>From ATOM to GradiATOM: Cortical gradients support time and space processing as revealed by a meta-analysis of neuroimaging studies.</article-title> <source><italic>NeuroImage</italic></source> <volume>224</volume>:<issue>117407</issue>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2020.117407</pub-id> <pub-id pub-id-type="pmid">32992001</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Courellis</surname> <given-names>H. S.</given-names></name> <name><surname>Nummela</surname> <given-names>S. U.</given-names></name> <name><surname>Metke</surname> <given-names>M.</given-names></name> <name><surname>Diehl</surname> <given-names>G. W.</given-names></name> <name><surname>Bussell</surname> <given-names>R.</given-names></name> <name><surname>Cauwenberghs</surname> <given-names>G.</given-names></name><etal/></person-group> (<year>2019</year>). <article-title>Spatial encoding in primate hippocampus during free navigation.</article-title> <source><italic>PLoS Biol.</italic></source> <volume>17</volume>:<issue>e3000546</issue>. <pub-id pub-id-type="doi">10.1371/journal.pbio.3000546</pub-id> <pub-id pub-id-type="pmid">31815940</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cui</surname> <given-names>M.</given-names></name> <name><surname>Peng</surname> <given-names>C.</given-names></name> <name><surname>Huang</surname> <given-names>M.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name></person-group> (<year>2022</year>). <article-title>Electrophysiological evidence for a common magnitude representation of spatiotemporal information in working memory.</article-title> <source><italic>Cereb. Cortex</italic></source> <volume>32</volume> <fpage>4068</fpage>&#x2013;<lpage>4079</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhab466</pub-id> <pub-id pub-id-type="pmid">35024791</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feigenbaum</surname> <given-names>J. D.</given-names></name> <name><surname>Rolls</surname> <given-names>E. T.</given-names></name></person-group> (<year>1991</year>). <article-title>Allocentric and egocentric spatial information processing in the hippocampal formation of the behaving primate.</article-title> <source><italic>Psychobiology</italic></source> <volume>19</volume> <fpage>21</fpage>&#x2013;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1007/BF03337953</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Felleman</surname> <given-names>D. J.</given-names></name> <name><surname>Van Essen</surname> <given-names>D. C.</given-names></name></person-group> (<year>1991</year>). <article-title>Distributed hierarchical processing in the primate cerebral cortex.</article-title> <source><italic>Cereb. Cortex</italic></source> <volume>1</volume> <fpage>1</fpage>&#x2013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/1.1.1-a</pub-id> <pub-id pub-id-type="pmid">1822724</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ferrucci</surname> <given-names>L.</given-names></name> <name><surname>Genovesio</surname> <given-names>A.</given-names></name> <name><surname>Marcos</surname> <given-names>E.</given-names></name></person-group> (<year>2021</year>). <article-title>The importance of urgency in decision making based on dynamic information.</article-title> <source><italic>PLoS Comput. Biol.</italic></source> <volume>17</volume>:<issue>e1009455</issue>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1009455</pub-id> <pub-id pub-id-type="pmid">34606494</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gladhill</surname> <given-names>K. A.</given-names></name> <name><surname>Robinson</surname> <given-names>E. M.</given-names></name> <name><surname>Stanfield-Wiswall</surname> <given-names>C.</given-names></name> <name><surname>Bader</surname> <given-names>F.</given-names></name> <name><surname>Wiener</surname> <given-names>M.</given-names></name></person-group> (<year>2022</year>). <article-title>Anchors for time, distance, and magnitude in virtual movements.</article-title> <source><italic>bioRxiv [Preprint]</italic></source> <pub-id pub-id-type="doi">10.1101/2022.09.12.507649</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goldreich</surname> <given-names>D.</given-names></name></person-group> (<year>2007</year>). <article-title>A Bayesian perceptual model replicates the cutaneous rabbit and other tactile spatiotemporal illusions.</article-title> <source><italic>PLoS One</italic></source> <volume>2</volume>:<issue>e333</issue>. <pub-id pub-id-type="doi">10.1371/journal.pone.0000333</pub-id> <pub-id pub-id-type="pmid">17389923</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Howard</surname> <given-names>M. W.</given-names></name> <name><surname>MacDonald</surname> <given-names>C. J.</given-names></name> <name><surname>Tiganj</surname> <given-names>Z.</given-names></name> <name><surname>Shankar</surname> <given-names>K. H.</given-names></name> <name><surname>Du</surname> <given-names>Q.</given-names></name> <name><surname>Hasselmo</surname> <given-names>M. E.</given-names></name><etal/></person-group> (<year>2014</year>). <article-title>A unified mathematical framework for coding time, space, and sequences in the hippocampal region.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>34</volume> <fpage>4692</fpage>&#x2013;<lpage>4707</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.5808-12.2014</pub-id> <pub-id pub-id-type="pmid">24672015</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jazayeri</surname> <given-names>M.</given-names></name> <name><surname>Shadlen</surname> <given-names>M. N.</given-names></name></person-group> (<year>2010</year>). <article-title>Temporal context calibrates interval timing.</article-title> <source><italic>Nat. Neurosci.</italic></source> <volume>13</volume>:<issue>8</issue>. <pub-id pub-id-type="doi">10.1038/nn.2590</pub-id> <pub-id pub-id-type="pmid">20581842</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yu</surname> <given-names>J.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name></person-group> (<year>2023</year>). <article-title>Spatiotemporal interference effect: An explanation based on Bayesian models.</article-title> <source><italic>Adv. Psychol. Sci.</italic></source> <volume>31</volume>:<issue>597</issue>. <pub-id pub-id-type="doi">10.3724/SP.J.1042.2023.00597</pub-id> <pub-id pub-id-type="pmid">37113526</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jones</surname> <given-names>B.</given-names></name> <name><surname>Huang</surname> <given-names>Y. L.</given-names></name></person-group> (<year>1982</year>). <article-title>Space-time dependencies in psychophysical judgment of extent and duration: Algebraic models of the tau and kappa effects.</article-title> <source><italic>Psychol. Bull.</italic></source> <volume>91</volume> <fpage>128</fpage>&#x2013;<lpage>142</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.91.1.128</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Killian</surname> <given-names>N. J.</given-names></name> <name><surname>Jutras</surname> <given-names>M. J.</given-names></name> <name><surname>Buffalo</surname> <given-names>E. A.</given-names></name></person-group> (<year>2012</year>). <article-title>A map of visual space in the primate entorhinal cortex.</article-title> <source><italic>Nature</italic></source> <volume>491</volume>:<issue>7426</issue>. <pub-id pub-id-type="doi">10.1038/nature11587</pub-id> <pub-id pub-id-type="pmid">23103863</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kraus</surname> <given-names>B. J.</given-names></name> <name><surname>Robinson</surname> <given-names>R. J.</given-names></name> <name><surname>White</surname> <given-names>J. A.</given-names></name> <name><surname>Eichenbaum</surname> <given-names>H.</given-names></name> <name><surname>Hasselmo</surname> <given-names>M. E.</given-names></name></person-group> (<year>2013</year>). <article-title>Hippocampal &#x201C;time cells&#x201D;: Time versus path integration.</article-title> <source><italic>Neuron</italic></source> <volume>78</volume> <fpage>1090</fpage>&#x2013;<lpage>1101</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2013.04.015</pub-id> <pub-id pub-id-type="pmid">23707613</pub-id></citation></ref>
<ref id="B26"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Ma</surname> <given-names>W. J.</given-names></name> <name><surname>Kording</surname> <given-names>K. P.</given-names></name> <name><surname>Goldreich</surname> <given-names>D.</given-names></name></person-group> (<year>2023</year>). <source><italic>Bayesian Models of Perception and Action: An Introduction.</italic></source> <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marcos</surname> <given-names>E.</given-names></name> <name><surname>Genovesio</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>Interference between space and time estimations: From behavior to neurons.</article-title> <source><italic>Front. Neurosci.</italic></source> <volume>11</volume>:<issue>631</issue>. <pub-id pub-id-type="doi">10.3389/fnins.2017.00631</pub-id> <pub-id pub-id-type="pmid">29209159</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marcos</surname> <given-names>E.</given-names></name> <name><surname>Tsujimoto</surname> <given-names>S.</given-names></name> <name><surname>Genovesio</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>Independent coding of absolute duration and distance magnitudes in the prefrontal cortex.</article-title> <source><italic>J. Neurophysiol.</italic></source> <volume>117</volume> <fpage>195</fpage>&#x2013;<lpage>203</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00245.2016</pub-id> <pub-id pub-id-type="pmid">27760814</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Martin</surname> <given-names>B.</given-names></name> <name><surname>Wiener</surname> <given-names>M.</given-names></name> <name><surname>van Wassenhove</surname> <given-names>V.</given-names></name></person-group> (<year>2017</year>). <article-title>A Bayesian perspective on accumulation in the magnitude system.</article-title> <source><italic>Sci. Rep.</italic></source> <volume>7</volume>:<issue>1</issue>. <pub-id pub-id-type="doi">10.1038/s41598-017-00680-0</pub-id> <pub-id pub-id-type="pmid">28377631</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maruhn</surname> <given-names>P.</given-names></name> <name><surname>Schneider</surname> <given-names>S.</given-names></name> <name><surname>Bengler</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>Measuring egocentric distance perception in virtual reality: Influence of methodologies, locomotion and translation gains.</article-title> <source><italic>PLoS One</italic></source> <volume>14</volume>:<issue>e0224651</issue>. <pub-id pub-id-type="doi">10.1371/journal.pone.0224651</pub-id> <pub-id pub-id-type="pmid">31671138</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Matthews</surname> <given-names>W. J.</given-names></name> <name><surname>Meck</surname> <given-names>W. H.</given-names></name></person-group> (<year>2014</year>). <article-title>Time perception: The bad news and the good.</article-title> <source><italic>Wiley Interdiscip. Rev. Cogn. Sci.</italic></source> <volume>5</volume> <fpage>429</fpage>&#x2013;<lpage>446</lpage>. <pub-id pub-id-type="doi">10.1002/wcs.1298</pub-id> <pub-id pub-id-type="pmid">25210578</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meck</surname> <given-names>W. H.</given-names></name></person-group> (<year>1983</year>). <article-title>Selective adjustment of the speed of internal clock and memory processes.</article-title> <source><italic>J. Exp. Psychol. Anim. Behav. Process.</italic></source> <volume>9</volume> <fpage>171</fpage>&#x2013;<lpage>201</lpage>.</citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meier</surname> <given-names>M. E.</given-names></name> <name><surname>Smeekens</surname> <given-names>B. A.</given-names></name> <name><surname>Silvia</surname> <given-names>P. J.</given-names></name> <name><surname>Kwapil</surname> <given-names>T. R.</given-names></name> <name><surname>Kane</surname> <given-names>M. J.</given-names></name></person-group> (<year>2018</year>). <article-title>Working memory capacity and the antisaccade task: A microanalytic-macroanalytic investigation of individual differences in goal activation and maintenance.</article-title> <source><italic>J. Exp. Psychol. Learn. Mem. Cogn.</italic></source> <volume>44</volume> <fpage>68</fpage>&#x2013;<lpage>84</lpage>. <pub-id pub-id-type="doi">10.1037/xlm0000431</pub-id> <pub-id pub-id-type="pmid">28639800</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morrone</surname> <given-names>M. C.</given-names></name> <name><surname>Ross</surname> <given-names>J.</given-names></name> <name><surname>Burr</surname> <given-names>D.</given-names></name></person-group> (<year>2005</year>). <article-title>Saccadic eye movements cause compression of time as well as space.</article-title> <source><italic>Nat. Neurosci.</italic></source> <volume>8</volume> <fpage>950</fpage>&#x2013;<lpage>954</lpage>. <pub-id pub-id-type="doi">10.1038/nn1488</pub-id> <pub-id pub-id-type="pmid">15965472</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Munoz</surname> <given-names>D. P.</given-names></name> <name><surname>Everling</surname> <given-names>S.</given-names></name></person-group> (<year>2004</year>). <article-title>Look away: The anti-saccade task and the voluntary control of eye movement.</article-title> <source><italic>Nat. Rev. Neurosci.</italic></source> <volume>5</volume>:<issue>3</issue>. <pub-id pub-id-type="doi">10.1038/nrn1345</pub-id> <pub-id pub-id-type="pmid">14976521</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Olson</surname> <given-names>C. R.</given-names></name></person-group> (<year>2003</year>). <article-title>Brain representation of object-centered space in monkeys and humans.</article-title> <source><italic>Annu. Rev. Neurosci.</italic></source> <volume>26</volume> <fpage>331</fpage>&#x2013;<lpage>354</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.26.041002.131405</pub-id> <pub-id pub-id-type="pmid">12626696</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Petzschner</surname> <given-names>F. H.</given-names></name> <name><surname>Glasauer</surname> <given-names>S.</given-names></name></person-group> (<year>2011</year>). <article-title>Iterative Bayesian estimation as an explanation for range and regression effects: A study on human path integration.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>31</volume> <fpage>17220</fpage>&#x2013;<lpage>17229</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.2028-11.2011</pub-id> <pub-id pub-id-type="pmid">22114288</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Protopapa</surname> <given-names>F.</given-names></name> <name><surname>Hayashi</surname> <given-names>M. J.</given-names></name> <name><surname>Kulashekhar</surname> <given-names>S.</given-names></name> <name><surname>van der Zwaag</surname> <given-names>W.</given-names></name> <name><surname>Battistella</surname> <given-names>G.</given-names></name> <name><surname>Murray</surname> <given-names>M. M.</given-names></name><etal/></person-group> (<year>2019</year>). <article-title>Chronotopic maps in human supplementary motor area.</article-title> <source><italic>PLoS Biol.</italic></source> <volume>17</volume>:<issue>e3000026</issue>. <pub-id pub-id-type="doi">10.1371/journal.pbio.3000026</pub-id> <pub-id pub-id-type="pmid">30897088</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Robbe</surname> <given-names>D.</given-names></name></person-group> (<year>2023</year>). <article-title>Lost in time: Relocating the perception of duration outside the brain.</article-title> <source><italic>Neurosci. Biobehav. Rev.</italic></source> <volume>153</volume>:<issue>105312</issue>. <pub-id pub-id-type="doi">10.1016/j.neubiorev.2023.105312</pub-id> <pub-id pub-id-type="pmid">37467906</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Robinson</surname> <given-names>E. M.</given-names></name> <name><surname>Wiener</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>Dissociable neural indices for time and space estimates during virtual distance reproduction.</article-title> <source><italic>NeuroImage</italic></source> <volume>226</volume>:<issue>117607</issue>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2020.117607</pub-id> <pub-id pub-id-type="pmid">33290808</pub-id></citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rucci</surname> <given-names>M.</given-names></name> <name><surname>Iovin</surname> <given-names>R.</given-names></name> <name><surname>Poletti</surname> <given-names>M.</given-names></name> <name><surname>Santini</surname> <given-names>F.</given-names></name></person-group> (<year>2007</year>). <article-title>Miniature eye movements enhance fine spatial detail.</article-title> <source><italic>Nature</italic></source> <volume>447</volume> <fpage>851</fpage>&#x2013;<lpage>854</lpage>. <pub-id pub-id-type="doi">10.1038/nature05866</pub-id> <pub-id pub-id-type="pmid">17568745</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rucci</surname> <given-names>M.</given-names></name> <name><surname>Poletti</surname> <given-names>M.</given-names></name></person-group> (<year>2015</year>). <article-title>Control and functions of fixational eye movements.</article-title> <source><italic>Annu. Rev. Vis. Sci.</italic></source> <volume>1</volume> <fpage>499</fpage>&#x2013;<lpage>518</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-vision-082114-035742</pub-id> <pub-id pub-id-type="pmid">27795997</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sabes</surname> <given-names>P. N.</given-names></name> <name><surname>Breznen</surname> <given-names>B.</given-names></name> <name><surname>Andersen</surname> <given-names>R. A.</given-names></name></person-group> (<year>2002</year>). <article-title>Parietal representation of object-based saccades.</article-title> <source><italic>J. Neurophysiol.</italic></source> <volume>88</volume> <fpage>1815</fpage>&#x2013;<lpage>1829</lpage>. <pub-id pub-id-type="doi">10.1152/jn.2002.88.4.1815</pub-id> <pub-id pub-id-type="pmid">12364508</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sadibolova</surname> <given-names>R.</given-names></name> <name><surname>Terhune</surname> <given-names>D. B.</given-names></name></person-group> (<year>2022</year>). <article-title>The temporal context in Bayesian models of interval timing: Recent advances and future directions.</article-title> <source><italic>Behav. Neurosci.</italic></source> <volume>136</volume> <fpage>364</fpage>&#x2013;<lpage>373</lpage>. <pub-id pub-id-type="doi">10.1037/bne0000513</pub-id> <pub-id pub-id-type="pmid">35737557</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Safaie</surname> <given-names>M.</given-names></name> <name><surname>Jurado-Parras</surname> <given-names>M.-T.</given-names></name> <name><surname>Sarno</surname> <given-names>S.</given-names></name> <name><surname>Louis</surname> <given-names>J.</given-names></name> <name><surname>Karoutchi</surname> <given-names>C.</given-names></name> <name><surname>Petit</surname> <given-names>L. F.</given-names></name><etal/></person-group> (<year>2020</year>). <article-title>Turning the body into a clock: Accurate timing is facilitated by simple stereotyped interactions with the environment.</article-title> <source><italic>Proc. Natl. Acad. Sci. U.S.A.</italic></source> <volume>117</volume> <fpage>13084</fpage>&#x2013;<lpage>13093</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1921226117</pub-id> <pub-id pub-id-type="pmid">32434909</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schonhaut</surname> <given-names>D. R.</given-names></name> <name><surname>Aghajan</surname> <given-names>Z. M.</given-names></name> <name><surname>Kahana</surname> <given-names>M. J.</given-names></name> <name><surname>Fried</surname> <given-names>I.</given-names></name></person-group> (<year>2023</year>). <article-title>A neural code for time and space in the human brain.</article-title> <source><italic>Cell Rep.</italic></source> <volume>42</volume>:<issue>113238</issue>. <pub-id pub-id-type="doi">10.1016/j.celrep.2023.113238</pub-id> <pub-id pub-id-type="pmid">37906595</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schroeger</surname> <given-names>A.</given-names></name> <name><surname>Raab</surname> <given-names>M.</given-names></name> <name><surname>Ca&#x00F1;al-Bruland</surname> <given-names>R.</given-names></name></person-group> (<year>2022</year>). <article-title>Tau and kappa in interception &#x2013; how perceptual spatiotemporal interrelations affect movements.</article-title> <source><italic>Attent. Percept. Psychophys.</italic></source> <volume>84</volume> <fpage>1925</fpage>&#x2013;<lpage>1943</lpage>. <pub-id pub-id-type="doi">10.3758/s13414-022-02516-0</pub-id> <pub-id pub-id-type="pmid">35705842</pub-id></citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schwertman</surname> <given-names>N. C.</given-names></name> <name><surname>Owens</surname> <given-names>M. A.</given-names></name> <name><surname>Adnan</surname> <given-names>R.</given-names></name></person-group> (<year>2004</year>). <article-title>A simple more general boxplot method for identifying outliers.</article-title> <source><italic>Comput. Stat. Data Anal.</italic></source> <volume>47</volume> <fpage>165</fpage>&#x2013;<lpage>174</lpage>. <pub-id pub-id-type="doi">10.1016/j.csda.2003.10.012</pub-id></citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Simen</surname> <given-names>P.</given-names></name> <name><surname>Balci</surname> <given-names>F.</given-names></name> <name><surname>deSouza</surname> <given-names>L.</given-names></name> <name><surname>Cohen</surname> <given-names>J. D.</given-names></name> <name><surname>Holmes</surname> <given-names>P.</given-names></name></person-group> (<year>2011</year>). <article-title>A model of interval timing by neural integration.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>31</volume> <fpage>9238</fpage>&#x2013;<lpage>9253</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3121-10.2011</pub-id> <pub-id pub-id-type="pmid">21697374</pub-id></citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thurley</surname> <given-names>K.</given-names></name> <name><surname>Schild</surname> <given-names>U.</given-names></name></person-group> (<year>2018</year>). <article-title>Time and distance estimation in children using an egocentric navigation task.</article-title> <source><italic>Sci. Rep.</italic></source> <volume>8</volume>:<issue>18001</issue>. <pub-id pub-id-type="doi">10.1038/s41598-018-36234-1</pub-id> <pub-id pub-id-type="pmid">30573744</pub-id></citation></ref>
<ref id="B51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>&#x00DC;st&#x00FC;n</surname> <given-names>S.</given-names></name> <name><surname>S&#x0131;rmatel</surname> <given-names>B.</given-names></name> <name><surname>&#x00C7;i&#x00E7;ek</surname> <given-names>M.</given-names></name></person-group> (<year>2022</year>). <article-title>Can a common magnitude system theory explain the brain representation of space, time, and number?</article-title> <source><italic>Noro Psikiyatri Arsivi</italic></source> <volume>59</volume> (<issue>Suppl. 1</issue>), <fpage>S24</fpage>&#x2013;<lpage>S28</lpage>. <pub-id pub-id-type="doi">10.29399/npa.28159</pub-id> <pub-id pub-id-type="pmid">36578990</pub-id></citation></ref>
<ref id="B52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vidaud-Laperri&#x00E8;re</surname> <given-names>K.</given-names></name> <name><surname>Brunel</surname> <given-names>L.</given-names></name> <name><surname>Syssau-Vaccarella</surname> <given-names>A.</given-names></name> <name><surname>Charras</surname> <given-names>P.</given-names></name></person-group> (<year>2022</year>). <article-title>Exploring spatiotemporal interactions: On the superiority of time over space.</article-title> <source><italic>Attent. Percept. Psychophys.</italic></source> <volume>84</volume> <fpage>2582</fpage>&#x2013;<lpage>2595</lpage>. <pub-id pub-id-type="doi">10.3758/s13414-022-02546-8</pub-id> <pub-id pub-id-type="pmid">36229633</pub-id></citation></ref>
<ref id="B53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vitu</surname> <given-names>F.</given-names></name> <name><surname>Casteau</surname> <given-names>S.</given-names></name> <name><surname>Adeli</surname> <given-names>H.</given-names></name> <name><surname>Zelinsky</surname> <given-names>G. J.</given-names></name> <name><surname>Castet</surname> <given-names>E.</given-names></name></person-group> (<year>2017</year>). <article-title>The magnification factor accounts for the greater hypometria and imprecision of larger saccades: Evidence from a parametric human-behavioral study.</article-title> <source><italic>J. Vis.</italic></source> <volume>17</volume>:<issue>2</issue>. <pub-id pub-id-type="doi">10.1167/17.4.2</pub-id> <pub-id pub-id-type="pmid">28388698</pub-id></citation></ref>
<ref id="B54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Walsh</surname> <given-names>V.</given-names></name></person-group> (<year>2003</year>). <article-title>A theory of magnitude: Common cortical metrics of time, space and quantity.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>7</volume> <fpage>483</fpage>&#x2013;<lpage>488</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2003.09.002</pub-id> <pub-id pub-id-type="pmid">14585444</pub-id></citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>T.</given-names></name> <name><surname>Luo</surname> <given-names>Y.</given-names></name> <name><surname>Ivry</surname> <given-names>R. B.</given-names></name> <name><surname>Tsay</surname> <given-names>J. S.</given-names></name> <name><surname>P&#x00F6;ppel</surname> <given-names>E.</given-names></name> <name><surname>Bao</surname> <given-names>Y.</given-names></name></person-group> (<year>2023</year>). <article-title>A unitary mechanism underlies adaptation to both local and global environmental statistics in time perception.</article-title> <source><italic>PLoS Comput. Biol.</italic></source> <volume>19</volume>:<issue>e1011116</issue>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1011116</pub-id> <pub-id pub-id-type="pmid">37146089</pub-id></citation></ref>
<ref id="B56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wearden</surname> <given-names>J. H.</given-names></name> <name><surname>Philpott</surname> <given-names>K.</given-names></name> <name><surname>Win</surname> <given-names>T.</given-names></name></person-group> (<year>1999</year>). <article-title>Speeding up and (&#x2026;relatively&#x2026;) slowing down an internal clock in humans.</article-title> <source><italic>Behav. Process.</italic></source> <volume>46</volume> <fpage>63</fpage>&#x2013;<lpage>73</lpage>. <pub-id pub-id-type="doi">10.1016/S0376-6357(99)00004-2</pub-id> <pub-id pub-id-type="pmid">24925499</pub-id></citation></ref>
<ref id="B57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Whitaker</surname> <given-names>M. M.</given-names></name> <name><surname>Hansen</surname> <given-names>R. C.</given-names></name> <name><surname>Creem-Regehr</surname> <given-names>S. H.</given-names></name> <name><surname>Stefanucci</surname> <given-names>J. K.</given-names></name></person-group> (<year>2022</year>). <article-title>The relationship between space and time perception: A registered replication of Casasanto and Boroditsky (2008).</article-title> <source><italic>Attent. Percept. Psychophys.</italic></source> <volume>84</volume> <fpage>347</fpage>&#x2013;<lpage>351</lpage>. <pub-id pub-id-type="doi">10.3758/s13414-021-02420-z</pub-id> <pub-id pub-id-type="pmid">35174467</pub-id></citation></ref>
<ref id="B58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Yang</surname> <given-names>X.</given-names></name> <name><surname>Jin</surname> <given-names>Z.</given-names></name> <name><surname>Li</surname> <given-names>L.</given-names></name></person-group> (<year>2021</year>). <article-title>Distance estimation in virtual reality is affected by both the virtual and the real-world environments.</article-title> <source><italic>I-Perception</italic></source> <volume>12</volume>:<issue>20416695211023956</issue>. <pub-id pub-id-type="doi">10.1177/20416695211023956</pub-id> <pub-id pub-id-type="pmid">34211686</pub-id></citation></ref>
</ref-list>
</back>
</article>