<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2022.888871</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Modality Switching in Landmark-Based Wayfinding</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Schwarz</surname>
<given-names>Mira</given-names>
</name>
<xref rid="c001" ref-type="corresp"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1686909/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Hamburger</surname>
<given-names>Kai</given-names>
</name>
<xref rid="c002" ref-type="corresp"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/119807/overview"/>
</contrib>
</contrib-group>
<aff><institution>Department of Experimental Psychology and Cognitive Science, Faculty of Psychology and Sport Science, Justus Liebig University</institution>, <addr-line>Gie&#x00DF;en</addr-line>, <country>Germany</country>
</aff>
<author-notes>
<fn id="fn0001" fn-type="edited-by">
<p>Edited by: Christina J. Howard, Nottingham Trent University, United Kingdom</p>
</fn>
<fn id="fn0002" fn-type="edited-by">
<p>Reviewed by: Carlo De Lillo, University of Leicester, United Kingdom; Eric Legge, MacEwan University, Canada; Dennis Edler, Ruhr University Bochum, Germany</p>
</fn>
<corresp id="c001">&#x002A;Correspondence: Mira Schwarz, <email>schwarzmira@aol.com</email></corresp>
<corresp id="c002">Kai Hamburger, <email>kai.hamburger@psychol.uni-giessen.de</email></corresp>
<fn id="fn0003" fn-type="other">
<p>This article was submitted to Cognitive Science, a section of the journal Frontiers in Psychology</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>10</day>
<month>06</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>13</volume>
<elocation-id>888871</elocation-id>
<history>
<date date-type="received">
<day>03</day>
<month>03</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>23</day>
<month>05</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2022 Schwarz and Hamburger.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Schwarz and Hamburger</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>This study investigates switching costs in landmark-based wayfinding using olfactory and visual landmark information. It has already been demonstrated that there seem to be no switching costs, in terms of correct route decisions, when switching between acoustically and visually presented landmarks. Olfaction, on the other hand, has received little attention in landmark-based wayfinding so far, especially with respect to modality switching. The goal of this work is to empirically test and compare visual and olfactory landmark information with regard to their suitability for wayfinding involving a modality switch. To investigate this, an experiment was conducted within a virtual environment in which participants were walked along a virtual route of 12 intersections. At each intersection, landmark information was presented together with directional information, which had to be memorized and recalled in the following phase, either in the same or in the other modality (i.e., visual or olfactory). The results of the study show that, in contrast to the absence of switching costs between auditory and visual landmarks in previous studies, switching costs occur when switching from visual to olfactory landmarks and vice versa. This is indicated by both longer decision times and fewer correct decisions. A modality switch involving olfactory landmark information is thus possible but may lead to poorer performance; olfaction may therefore still be valuable for landmark-based wayfinding. We argue that the poorer performance in the switching condition is possibly due to higher cognitive load and the separate initial processing of odors and images in different cognitive systems.</p>
</abstract>
<kwd-group>
<kwd>modality switch</kwd>
<kwd>switching costs</kwd>
<kwd>landmarks</kwd>
<kwd>olfactory</kwd>
<kwd>visual</kwd>
<kwd>wayfinding</kwd>
</kwd-group>
<counts>
<fig-count count="8"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="49"/>
<page-count count="12"/>
<word-count count="7946"/>
</counts>
</article-meta>
</front>
<body>
<sec id="sec1" sec-type="intro">
<title>Introduction</title>
<p>Every day, people are challenged to get from their current location to a destination, whether it is finding their way home from a train station or just locating the nearest supermarket. Navigating through our environment thus represents an everyday task in human as well as animal life. Here, <xref ref-type="bibr" rid="ref28">Montello (2005)</xref> makes a distinction between two components of navigation, which were also taken up by <xref ref-type="bibr" rid="ref29">Montello and Sas (2006)</xref>: Wayfinding and locomotion. Wayfinding is described as &#x201C;the efficient goal-directed and planning part of navigation&#x201D; (<xref ref-type="bibr" rid="ref29">Montello and Sas, 2006</xref>, p. 2) and is therefore directly associated with problem solving. In addition, locomotion is the &#x201C;real-time part of navigation&#x201D; (<xref ref-type="bibr" rid="ref29">Montello and Sas, 2006</xref>, p. 2), in which we try to avoid obstacles and arrive at our destination without further complications. In conclusion, navigation is a combination of wayfinding, i.e., route planning, which is the cognitive component, and locomotion, i.e., the process of moving along the route.</p>
<p>As soon as we are planning a route, we orientate ourselves on the basis of streets, buildings, or other objects (e.g., street signs, statues, and the like). However, visual landmarks are not the only ones that play an important role, even though research on landmark-based wayfinding has mainly focused on the visual aspects of human navigation (e.g., <xref ref-type="bibr" rid="ref26">Lynch, 1960</xref>; <xref ref-type="bibr" rid="ref33">Presson and Montello, 1988</xref>; <xref ref-type="bibr" rid="ref41">Sorrows and Hirtle, 1999</xref>). <xref ref-type="bibr" rid="ref20">Holden and Newcombe (2013)</xref> introduce a model of combining a variety of sources based on evidence concerning their validity. In this model, the reliability of spatial estimation accuracy increases when different modalities (i.e., auditory and visual cues) are combined in a Bayesian framework (<xref ref-type="bibr" rid="ref20">Holden and Newcombe, 2013</xref>). The impact of non-visual elements coupled with visual elements on human spatial cognition has hardly been investigated. However, the explanatory approach of <xref ref-type="bibr" rid="ref20">Holden and Newcombe (2013)</xref> was recently taken up by <xref ref-type="bibr" rid="ref40">Siepmann et al. (2020)</xref> in a study indicating effects of sound positions in maps as cues for spatial memory performance.</p>
<p>Orientation by smell is mainly associated with species other than humans. In the animal kingdom, the ability to orientate by olfactory information has been demonstrated primarily in desert ants (e.g., <xref ref-type="bibr" rid="ref44">Steck et al., 2009</xref>, <xref ref-type="bibr" rid="ref45">2011</xref>; <xref ref-type="bibr" rid="ref43">Steck, 2012</xref>), rats (e.g., <xref ref-type="bibr" rid="ref38">Rossier and Schenk, 2003</xref>) and dogs (<xref ref-type="bibr" rid="ref19">Hepper and Wells, 2005</xref>; <xref ref-type="bibr" rid="ref36">Reddy et al., 2022</xref>). Even untrained ring-tailed lemurs are able to track odor plumes, disproving the traditional belief that primates are unable to do so (<xref ref-type="bibr" rid="ref8">Cunningham et al., 2021</xref>). Our own research has repeatedly addressed this bias towards vision in human spatial cognition research (e.g., <xref ref-type="bibr" rid="ref16">Hamburger and Knauff, 2019</xref>) and demonstrated that humans are also able to orient themselves with auditory, visual verbal (i.e., words visually presented on screen) as well as olfactory cues (e.g., <xref ref-type="bibr" rid="ref37">R&#x00F6;ser et al., 2011</xref>; <xref ref-type="bibr" rid="ref18">Hamburger and R&#x00F6;ser, 2014</xref>; <xref ref-type="bibr" rid="ref23">Karimpur and Hamburger, 2016</xref>; <xref ref-type="bibr" rid="ref16">Hamburger and Knauff, 2019</xref>).</p>
<p>Beyond wayfinding research, studies in several other fields are concerned with switching costs. Switching costs, or more precisely within-task switching costs, are costs that arise when information from a certain task is presented to the user in a different sensory modality than expected (<xref ref-type="bibr" rid="ref24">Kotowick and Shah, 2018</xref>). In addition to within-task switching costs, there are also costs for switching between tasks, in which a different task has to be performed than the one that was initially learned (<xref ref-type="bibr" rid="ref1">Arbuthnott and Woodward, 2002</xref>). This means that the modality remains the same, but the task changes. However, in wayfinding, information is not always available in the same modality in which we learned it (i.e., unimodal processing). So, what happens when the task stays the same (e.g., finding the correct path) but the modality switches (e.g., from visual to auditory information) within this task? What cognitive costs occur when we need to switch from one processing modality to another? In wayfinding research, it has been shown that there are no or hardly any switching costs in wayfinding performance (i.e., correct route decisions) when comparing visual and auditory landmark information within a wayfinding task (<xref ref-type="bibr" rid="ref17">Hamburger and R&#x00F6;ser, 2011</xref>). However, as mentioned above, olfactory information may also be of relevance and should not be underestimated (e.g., <xref ref-type="bibr" rid="ref15">Hamburger and Karimpur, 2017</xref>). Are people able to alternate, i.e., switch modality, between vision and olfaction without additional cognitive costs, i.e., more time required or more errors? In the following, wayfinding with modality switches between visual and olfactory landmarks is compared to wayfinding without a modality switch.
The results could be of interest especially in the field of interventions for elderly people and people with impaired vision, for whom it is necessary to deal with a specific modality, which is often required especially in unfamiliar environments (<xref ref-type="bibr" rid="ref14">Hamburger, 2020</xref>).</p>
<p>People orientate themselves to their immediate environment in order to arrive at their destination. A core element of human orientation is the use of reference points, so-called landmarks (for a review, see <xref ref-type="bibr" rid="ref49">Yesiltepe et al., 2021</xref>). A landmark is described by <xref ref-type="bibr" rid="ref26">Lynch (1960)</xref> as any object that potentially serves as a reference point. Accordingly, a variety of different reference points can serve as landmarks, including trees and traffic lights, but also buildings or other man-made objects (for an overview, see for example <xref ref-type="bibr" rid="ref26">Lynch, 1960</xref>; <xref ref-type="bibr" rid="ref12">Golledge, 1999</xref>).</p>
<p>The fact that landmarks can have a positive effect on wayfinding performance was shown by <xref ref-type="bibr" rid="ref39">Sharma et al. (2017)</xref>. In this study, participants were given a wayfinding task that included a condition with and a condition without landmarks. Participants in the landmark condition made fewer mistakes and required less time on average than participants in the condition without landmarks (<xref ref-type="bibr" rid="ref39">Sharma et al., 2017</xref>).</p>
<p>The relevance of visual landmarks was demonstrated by, for instance, <xref ref-type="bibr" rid="ref10">Denis et al. (2014)</xref> who compared routes with and without visual orientation points. Students learned either a route through an urban environment without visual references or a route in a neighborhood with many local stores and urban objects. Participants exposed to the landmark-rich environment with photographs of scenes along the route provided higher recognition scores and shorter decision times than participants who were not presented with visual references. In this case, visual landmarks had a positive impact on participants&#x2019; performance.</p>
<p>Human wayfinding with different sensory modalities than vision was tested by <xref ref-type="bibr" rid="ref18">Hamburger and R&#x00F6;ser (2014)</xref> who used different modalities to guide participants through a virtual maze. Their participants were divided into three experimental groups (visual, verbal or acoustic) and had to remember a route with the help of either visual, verbal or acoustic landmarks coupled with directional information. In the wayfinding phase, they had to indicate the correct direction at each intersection based on the landmark information given in the previous learning phase. Contrary to what might be expected, the participants showed a similar level of wayfinding performance for all three conditions. Visual, verbal, and acoustic information successfully constituted landmark information. Thus, human wayfinding can be supported not only through visual (e.g., <xref ref-type="bibr" rid="ref10">Denis et al., 2014</xref>), but also non-visual landmark information (e.g., <xref ref-type="bibr" rid="ref18">Hamburger and R&#x00F6;ser, 2014</xref>).</p>
<p>This again supports the assumption that visual landmarks are not the only helpful means for finding one&#x2019;s way. Therefore, other modalities should also be taken into account. Unfortunately, studies on human olfaction are rare in spatial cognition research. Nevertheless, to illustrate the current state of research on human wayfinding including olfactory landmarks, we provide a few exceptions here. <xref ref-type="bibr" rid="ref9">Dahmani et al. (2018)</xref> found an intrinsic relationship between olfaction and spatial memory which is probably rooted in the parallel evolution of the olfactory and hippocampal systems. <xref ref-type="bibr" rid="ref31">Porter et al. (2007)</xref> found that humans, just like rats and dogs, are able to follow a scent trail and to become better at it with practice. Furthermore, <xref ref-type="bibr" rid="ref22">Jacobs et al. (2015)</xref> showed that humans are able to return to a previously learned location on a map with the help of olfactory cues only. This finding suggests that humans might use such an odor map as a mechanism for navigation, too. An experiment by <xref ref-type="bibr" rid="ref16">Hamburger and Knauff (2019)</xref> has shown that olfactory landmark information can also be considered in the context of human wayfinding. They investigated this question in order to gain a more comprehensive understanding of human wayfinding ability with the help of olfactory information. In their study, participants were walked through a virtual maze in which odors were presented as landmark information. At each intersection, they had to memorize and later recall the olfactory information. It was demonstrated that participants were able to use the olfactory information to find their way (i.e., wayfinding performance was clearly above chance level). 
Further, olfactory landmarks have also been addressed in studies on how visually impaired people navigate in everyday life (<xref ref-type="bibr" rid="ref25">Koutsoklenis and Papadopoulos, 2011</xref>).</p>
<p>Another relevant aspect regarding landmark-based wayfinding is modality switching and possibly associated switching costs. In the following, we refer to the within-task switching costs mentioned above (<xref ref-type="bibr" rid="ref24">Kotowick and Shah, 2018</xref>) that arise when the task remains the same but the used modality changes (e.g., a picture of a clove of garlic is learned, but orientation must be based on the smell of garlic).</p>
<p><xref ref-type="bibr" rid="ref17">Hamburger and R&#x00F6;ser (2011)</xref> dealt with the question of whether a modality switch between learning and recalling routes results in additional cognitive costs, i.e., more time required for route decisions or more incorrect decisions. They contrasted different constellations of a modality switch between visual, acoustic, and (visual) verbal landmarks. In the learning phase, either animal words, sounds, or pictures had to be learned. In a subsequent wayfinding phase, with or without a modality switch (e.g., visual &#x2794; acoustic vs. visual &#x2794; visual), the landmarks had to be recalled and used to find the way. In none of the constellations did additional switching costs occur. Only the comparison of visual and (visual) verbal landmarks revealed differences in decision times.</p>
<p>Furthermore, <xref ref-type="bibr" rid="ref23">Karimpur and Hamburger (2016)</xref> also investigated the wayfinding performance of participants using animal pictures and sounds. The difference from the previous study, however, was that they were concerned not just with unimodal but also with multimodal processing. Similar wayfinding performances were found independent of whether participants were confronted with congruent stimuli (e.g., image of a dog paired with the barking of a dog) or incongruent stimuli (e.g., image of a dog paired with the chirping of a bird). Improved performance was demonstrated in the multimodal condition compared to the unimodal condition, which, according to <xref ref-type="bibr" rid="ref23">Karimpur and Hamburger (2016)</xref>, could be due to activation of both the visual and auditory sensory channels, resulting in more elaborate representations or simply better access to the stored information.</p>
<p><xref ref-type="bibr" rid="ref24">Kotowick and Shah (2018)</xref> also addressed the issue of modality switching. More specifically, they investigated the question of whether switching modality during navigation using navigation devices has certain advantages. They examined a system that switches between visual and haptic navigation guidance. Temporarily, performance deteriorated, but switching modalities seems to be beneficial for longer navigation tasks and to reduce both habituation effects and stimulus-specific adaptation.</p>
<p>In the following study, switching between visual and olfactory landmark information is contrasted with no-switch conditions in order to shed light on possible modality switching costs in landmark-based wayfinding.</p>
<p>Based on the theoretical and empirical background, it can be assumed that a modality switch is accompanied by no or only marginal switching costs. However, it matters whether switching costs are defined in terms of correct route decisions (i.e., correct turns) or in terms of the time required for decision-making. The time required can be divided into the initial processing time and the time needed to retrieve the correct route decision. Studies show that response times in the olfactory system range from 600 to 1,200&#x2009;ms (<xref ref-type="bibr" rid="ref6">Cain, 1976</xref>), which is significantly longer than the roughly 200&#x2009;ms observed for visual, auditory, and tactile stimuli (<xref ref-type="bibr" rid="ref42">Spence et al., 2000</xref>). People can respond to visual stimuli presented as little as 100&#x2009;ms apart (<xref ref-type="bibr" rid="ref32">Posner and Cohen, 1984</xref>), whereas the perception of odors is typically studied at intervals of 20&#x2013;30&#x2009;s; the temporal resolution is therefore roughly 200 times coarser for odor perception than for visual perception. The initial processing time is thus longer for olfactory than for visual inputs (<xref ref-type="bibr" rid="ref6">Cain, 1976</xref>; <xref ref-type="bibr" rid="ref42">Spence et al., 2000</xref>), whereas there should be little difference in the time required to retrieve the correct response, given the previous research in this area (e.g., <xref ref-type="bibr" rid="ref17">Hamburger and R&#x00F6;ser, 2011</xref>).</p>
<p>For this reason, it was hypothesized that a modality switch in the &#x201C;switch&#x201D; condition will result in (1) significantly higher decision times compared to the &#x201C;no switch&#x201D; conditions. Furthermore, based on previous findings in other modalities [auditory, visual, and (visual) verbal], it was expected that the &#x201C;switch&#x201D; condition will result in (2) the same relative number of correct decisions as the &#x201C;no switch&#x201D; conditions. The experiment was based on a one-factorial between-subjects design with four levels. The independent variable was whether a modality switch occurred or not (&#x201C;switch&#x201D; vs. &#x201C;no switch&#x201D;). In the &#x201C;no switch&#x201D; condition, olfactory landmarks were presented to one group and visual landmarks to another in both the learning and wayfinding phases. The &#x201C;switch&#x201D; condition was likewise divided into two groups that differed in the modality at learning and test (olfactory &#x2794; visual vs. visual &#x2794; olfactory). The dependent variables were the participants&#x2019; decision times on the one hand and the relative number of correct decisions on the other.</p>
</sec>
<sec id="sec2" sec-type="materials|methods">
<title>Materials and Methods</title>
<sec id="sec3">
<title>Participants</title>
<p>A total of 30 students (17 females and 13 males) of the Justus Liebig University were tested. The age range of the participants was 19&#x2013;66&#x2009;years (<italic>M</italic>&#x2009;=&#x2009;24.80, <italic>SD</italic>&#x2009;=&#x2009;8.53).</p>
<p>Exclusion criteria included any type of restriction in the ability to smell, such as respiratory problems or flu-like infections. Further exclusion criteria included epilepsy and non-corrected visual impairment. Participants were informed in advance to avoid spicy food and smoking on the day before the experiment, as this could have impaired the ability to smell. In addition, the participants were not supposed to use perfume before and during the experiment to ensure that no distraction due to additional odors occurred. Participation was voluntary and was compensated with course credits if required. All participants were na&#x00EF;ve with respect to the research question and provided informed written consent prior to participation. The study was approved by the local ethics committee (Department of Psychology, JLU; 2014-0017).</p>
</sec>
<sec id="sec4">
<title>Material</title>
<p>The program OpenSesame 3.2.8 (<xref ref-type="bibr" rid="ref27">Math&#x00F4;t et al., 2019</xref>) was used to present routes with 12 orthogonal intersections and for data recording. In total, there were three different route sequences. Participants were pseudo-randomly assigned to the different experimental conditions.</p>
<p>For the creation of the routes, a screenshot of an empty intersection taken from two studies by <xref ref-type="bibr" rid="ref17">Hamburger and R&#x00F6;ser (2011</xref>, <xref ref-type="bibr" rid="ref18">2014)</xref> was used (see <xref rid="fig1" ref-type="fig">Figure 1</xref>). Furthermore, additional images and the corresponding odor samples were required for the purpose of the study. The odors were taken from the study by Hamburger and Knauff (i.e., garlic, strawberry, cinnamon, aftershave, etc.; for further details, such as an evaluation of the odors, see <xref ref-type="bibr" rid="ref16">Hamburger and Knauff, 2019</xref>). The odors used were those with the highest identification rates from a set of 44 odors and were stored in amber glass vials. Since the odors and the images had to match, images of objects corresponding to these odors were taken with a Samsung NX1000 SLR camera. Participants were presented either with the screenshot containing an embedded object image (visual landmark condition) or with the screenshot of the empty intersection alone (<xref rid="fig1" ref-type="fig">Figure 1</xref>; olfactory landmark condition). The visual landmarks were placed in the center of the upper half of the virtual room to give the impression that the image was hanging from the ceiling of the intersection in front of the participant. If a participant was assigned to the olfactory condition, the experimenter manually presented an odor instead of the visual landmark while she was looking at the empty intersection. The sequence was randomized in advance.</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption><p>Screenshot of an (empty) intersection taken from <xref ref-type="bibr" rid="ref17">Hamburger and R&#x00F6;ser (2011</xref>, <xref ref-type="bibr" rid="ref18">2014)</xref>.</p></caption>
<graphic xlink:href="fpsyg-13-888871-g001.tif"/>
</fig>
<p>In addition, a self-generated light gray arrow was inserted at each intersection (in the visual as well as the olfactory landmark condition) to indicate the direction. The arrow was also located in the center, but in the lower half of the virtual space. The direction in which the arrow pointed at each intersection was also pseudo-randomized.</p>
<p>The experiment was run on an Acer Aspire V17 Nitro with a 7th-generation Intel Core i7 processor (16&#x2009;GB RAM). The laptop was connected to a Samsung 74-inch 4K LED flat screen <italic>via</italic> HDMI. A large screen was deliberately chosen to make the environment more realistic and to ensure a stronger immersion effect. Participants provided their decisions using the numeric keypad of an external computer keyboard (1&#x2009;=&#x2009;left, 2&#x2009;=&#x2009;straight ahead, 3&#x2009;=&#x2009;right).</p>
</sec>
<sec id="sec5">
<title>Procedure</title>
<p>Upon arrival, participants were asked to sit at a table at the end of the room, where the screen was placed. The distance between the participants and the TV was approximately 60&#x2009;cm. Only the position of the computer keyboard in front of the participant was adjusted so that it was easily reachable with their hands. In addition to the informed written consent form and an instruction about the experiment, demographic data were collected. Regardless of the condition the participants were assigned to, the main experiment consisted of four phases: the practice phase (1), the learning phase (2), the wayfinding phase (3), and a randomized control phase (4). For clarification, the complete sequence of the main phases is visualized in <xref rid="fig2" ref-type="fig">Figure 2</xref>. Before each of these phases, the participants were presented with detailed instructions, which they were asked to repeat orally in their own words to ensure that they had understood them. The instructions included an explanation of the duration of the experiment and of the number and sequence of the phases. In addition, each instruction included an explanation of the use of the numeric keypad and a reminder both to focus attention on the center of the screen and to make decisions as quickly and accurately as possible.</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption><p>Example sequence of the main phases of the experiment in the visual condition (visual &#x2794; visual).</p></caption>
<graphic xlink:href="fpsyg-13-888871-g002.tif"/>
</fig>
<p>(1) The first phase of the experiment was a practice phase. Each participant was led through nine trials in which she was presented only with the screenshot of the intersection with a light gray arrow in the middle. There was no presentation of visual or olfactory landmarks in the practice phase. The arrows pointed equally often either to the left, straight ahead, or to the right. Before each intersection, participants were presented with a fixation dot for 3&#x2009;s to direct their attention to the following intersection. The task was to respond correctly to the presented arrow (1&#x2009;=&#x2009;left, 2&#x2009;=&#x2009;straight ahead, 3&#x2009;=&#x2009;right) using the numeric keypad. This gave the participants the opportunity to familiarize themselves with the procedure, the virtual room, and the required material, i.e., the numeric keypad of the keyboard. At the end of the practice phase, each participant was presented with feedback showing the average decision time of the trials and the percentage of correct route decisions. The values of the feedback had no influence on the main part of the experiment that was carried out afterwards.</p>
<p>(2) The practice phase was followed by the learning phase, in which the participants were presented with 12 intersections. In this phase, as well as in each subsequent phase, the participants first saw a blank gray screen for 5&#x2009;s, during which attention to the screen was not yet required. After that, the participants were presented with a fixation dot for another 3&#x2009;s, to which they were asked to direct their attention. Subsequently, the respective intersection of the participant&#x2019;s individually assigned route appeared. Depending on the assigned condition, participants were presented with either visual or olfactory landmark information. The task was to remember the presented landmark information with the associated direction, with each landmark (either visual or olfactory) being presented for 5&#x2009;s. This procedure was based on <xref ref-type="bibr" rid="ref23">Karimpur and Hamburger (2016)</xref>, who gave the participants a maximum of 5&#x2009;s to decide on directional information in a similar experiment. This was done for each of the 12 intersections. As soon as the participant had completed all 12 intersections, the learning phase ended (for an example trial of the learning phase in the visual condition, see <xref rid="fig3" ref-type="fig">Figure 3</xref>; for the olfactory condition, see <xref rid="fig4" ref-type="fig">Figure 4</xref>; for a schematic illustration of the wayfinding phase, see also <xref ref-type="bibr" rid="ref16">Hamburger and Knauff, 2019</xref>).</p>
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption><p>Example sequence of a single pass in the learning phase of the visual condition (visual &#x2794; visual).</p></caption>
<graphic xlink:href="fpsyg-13-888871-g003.tif"/>
</fig>
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption><p>Example sequence of a single pass in the learning phase of the olfactory condition (olfactory &#x2794; olfactory).</p></caption>
<graphic xlink:href="fpsyg-13-888871-g004.tif"/>
</fig>
<p>(3) The next phase was the so-called wayfinding phase. Here, the participants were presented with the same route sequence as in the learning phase. The difference, however, was that in the wayfinding phase (either visual or olfactory, see <xref rid="fig3" ref-type="fig">Figures 3</xref>, <xref rid="fig4" ref-type="fig">4</xref>) the presentation of the arrows, i.e., the directional information, was omitted. Participants in the &#x201C;no switch&#x201D; condition were presented with landmark information in the same modality, while participants in the &#x201C;switch&#x201D; condition were presented with corresponding landmark information in the other modality. Once the landmark was presented, the participant&#x2019;s task was to respond with the associated direction key. In this phase, the landmark information (either visual or olfactory) was presented for a maximum of 10&#x2009;s. If the participant made a decision before the time expired, the experiment continued without interruption: the gray screen appeared, followed by the fixation dot and the next intersection. The same applied if the participant did not make the correct decision; the experiment likewise continued without interruption.</p>
<p>(4) The final phase of the experiment was the randomized control phase. Here, the previously learned intersections (i.e., combination of landmark and directional information) were tested again within the same modality as in the wayfinding phase, but in a randomized order. The randomization of the intersections made it possible to compare the third and fourth phase and to check whether the respondent had linked the landmarks to the directions or had learned the path sequentially. After the last phase with again 12 intersections, the main experiment was completed. The duration of the experiment was between 30 and 45&#x2009;min.</p>
</sec>
</sec>
<sec id="sec6" sec-type="results">
<title>Results</title>
<sec id="sec7">
<title>Switch vs. No Switch</title>
<p>For the question of whether switching costs occur when switching between visual and olfactory landmark information, the following results were obtained. Significances, as well as effect sizes, are reported. Because of the small sample size in each group, the assumption of normal distribution was not met for all groups and conditions. However, while the normal-distribution assumption is theoretically important for unpaired <italic>t</italic>-tests, numerous studies have shown that <italic>t</italic>-tests are in practice relatively robust to violations of this assumption (e.g., <xref ref-type="bibr" rid="ref35">Rasch and Guiard, 2004</xref>; <xref ref-type="bibr" rid="ref48">Wilcox, 2012</xref>). Therefore, independent <italic>t</italic>-tests are still reported in this study. Additionally, non-parametric Mann&#x2013;Whitney U-tests were calculated for the most important results of the study and are reported in brackets.</p>
<p>Wayfinding performance, in terms of the relative number of correct decisions, for the &#x201C;no-switch&#x201D; condition (<italic>M</italic>&#x2009;=&#x2009;0.78, SEM&#x2009;=&#x2009;0.06) was higher than for the &#x201C;switch&#x201D; condition (<italic>M</italic>&#x2009;=&#x2009;0.50, SEM&#x2009;=&#x2009;0.05). These findings are visualized in <xref rid="fig5" ref-type="fig">Figure 5</xref>. The collected data were analyzed using an independent two-tailed <italic>t</italic>-test, which revealed significant differences between the &#x201C;no-switch&#x201D; and &#x201C;switch&#x201D; condition, <italic>t</italic>(27.35)&#x2009;=&#x2009;3.38, <italic>p</italic>&#x2009;=&#x2009;0.002, <italic>d</italic>&#x2009;=&#x2009;0.931 [<italic>U</italic>&#x2009;=&#x2009;46.00, <italic>Z</italic>&#x2009;=&#x2009;&#x2212;2.78, <italic>p</italic>&#x2009;=&#x2009;0.005; according to <xref ref-type="bibr" rid="ref7">Cohen, 1988</xref> effect sizes are interpreted as follows: small effect size <italic>d</italic>&#x2009;=&#x2009;0.2, medium effect size <italic>d</italic>&#x2009;=&#x2009;0.5, large effect size <italic>d</italic>&#x2009;=&#x2009;0.8].</p>
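<p>The comparison above can be reproduced in principle with standard statistical software. The following is a minimal sketch in Python (not the study&#x2019;s analysis code, and using randomly generated placeholder data rather than the raw data): it runs Welch&#x2019;s unequal-variance <italic>t</italic>-test, which yields the fractional degrees of freedom of the kind reported above, computes Cohen&#x2019;s <italic>d</italic> from the pooled standard deviation, and adds the non-parametric Mann&#x2013;Whitney U-test reported in brackets.</p>

```python
# Illustrative sketch only: hypothetical per-participant proportions of
# correct decisions for a "no-switch" and a "switch" group. These are
# placeholder values, NOT the study's raw data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
no_switch = rng.normal(0.78, 0.22, size=16).clip(0, 1)  # hypothetical group 1
switch = rng.normal(0.50, 0.20, size=14).clip(0, 1)     # hypothetical group 2

# Welch's t-test (equal_var=False) produces fractional df as in the article.
t, p = stats.ttest_ind(no_switch, switch, equal_var=False)

# Cohen's d based on the pooled standard deviation.
n1, n2 = len(no_switch), len(switch)
sp = np.sqrt(((n1 - 1) * no_switch.var(ddof=1)
              + (n2 - 1) * switch.var(ddof=1)) / (n1 + n2 - 2))
d = (no_switch.mean() - switch.mean()) / sp

# Non-parametric check, as reported in brackets in the text.
u, p_u = stats.mannwhitneyu(no_switch, switch, alternative="two-sided")
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}; U = {u:.1f}, p = {p_u:.3f}")
```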
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption><p>Relative number of correct decisions with respect to the &#x201C;switch&#x201D; and &#x201C;no switch&#x201D; condition of the tested experiment (<italic>N</italic>&#x2009;=&#x2009;30, error bars&#x2009;=&#x2009;SEM).</p></caption>
<graphic xlink:href="fpsyg-13-888871-g005.tif"/>
</fig>
<p>In general, it turns out that a modality switch between visual and olfactory landmark information is possible, since performance is significantly above chance level, as shown by a one-sample <italic>t</italic>-test, <italic>t</italic>(13)&#x2009;=&#x2009;3.535, <italic>p</italic>&#x2009;=&#x2009;0.004, <italic>d</italic>&#x2009;=&#x2009;0.186. This result is independent of the switch direction, as an independent two-tailed <italic>t</italic>-test revealed no significant differences between the &#x201C;visual &#x2794; olfactory&#x201D; and &#x201C;olfactory &#x2794; visual&#x201D; &#x201C;switch&#x201D; conditions, <italic>t</italic>(12)&#x2009;=&#x2009;&#x2212;1.357, <italic>p</italic>&#x2009;=&#x2009;0.20, <italic>d</italic>&#x2009;=&#x2009;0.180 (<italic>U</italic>&#x2009;=&#x2009;15.00, <italic>Z</italic>&#x2009;=&#x2009;&#x2212;1.236, <italic>p</italic>&#x2009;=&#x2009;0.217). However, the &#x201C;switch&#x201D; condition seems to be associated with further cognitive costs in terms of a lower number of correct decisions (<xref rid="fig6" ref-type="fig">Figure 6</xref>).</p>
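<p>The chance-level comparison can likewise be sketched: with three response options (left, straight ahead, right), chance level is 1/3, and a one-sample <italic>t</italic>-test checks whether mean accuracy exceeds it. The data below are hypothetical placeholders, not the study&#x2019;s values.</p>

```python
# Sketch with hypothetical data: is "switch"-group accuracy above chance?
# Chance level is 1/3 because each trial has three response options.
import numpy as np
from scipy import stats

chance = 1 / 3
# Hypothetical proportions correct for 14 "switch" participants.
acc = np.array([0.42, 0.50, 0.33, 0.58, 0.67, 0.42, 0.50,
                0.58, 0.33, 0.42, 0.50, 0.67, 0.58, 0.50])

t, p = stats.ttest_1samp(acc, chance)           # one-sample t-test vs. 1/3
d = (acc.mean() - chance) / acc.std(ddof=1)     # one-sample Cohen's d
print(f"t({len(acc) - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```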
<fig position="float" id="fig6">
<label>Figure 6</label>
<caption><p>Relative number of correct decisions with respect to the &#x201C;visual-olfactory&#x201D; and &#x201C;olfactory-visual&#x201D; switch condition of the tested experiment (<italic>N</italic>&#x2009;=&#x2009;30, error bars&#x2009;=&#x2009;SEM).</p></caption>
<graphic xlink:href="fpsyg-13-888871-g006.tif"/>
</fig>
<p>Besides the higher performance in terms of correct decisions, there are also shorter decision times for the &#x201C;no-switch&#x201D; condition (<italic>M</italic>&#x2009;=&#x2009;2557.70&#x2009;ms, SEM&#x2009;=&#x2009;446.98&#x2009;ms) compared to the &#x201C;switch&#x201D; condition (<italic>M</italic>&#x2009;=&#x2009;3827.84&#x2009;ms, SEM&#x2009;=&#x2009;395.21&#x2009;ms; <xref rid="fig7" ref-type="fig">Figure 7</xref>). The collected data were also analyzed in terms of mean decision times using an independent two-tailed <italic>t</italic>-test, which revealed significant differences between the &#x201C;no-switch&#x201D; and &#x201C;switch&#x201D; condition, <italic>t</italic>(28)&#x2009;=&#x2009;&#x2212;2.10, <italic>p</italic>&#x2009;=&#x2009;0.045, <italic>d</italic>&#x2009;=&#x2009;&#x2212;0.769 (<italic>U</italic>&#x2009;=&#x2009;60.00, <italic>Z</italic>&#x2009;=&#x2009;&#x2212;2.162, <italic>p</italic>&#x2009;=&#x2009;0.031).</p>
<fig position="float" id="fig7">
<label>Figure 7</label>
<caption><p>Mean decision time in ms with respect to the &#x201C;switch&#x201D; and &#x201C;no switch&#x201D; condition of the tested experiment (<italic>N</italic>&#x2009;=&#x2009;30, error bars&#x2009;=&#x2009;SEM in ms).</p></caption>
<graphic xlink:href="fpsyg-13-888871-g007.tif"/>
</fig>
<p>In this case, it is also possible to switch between olfactory and visual landmark information, but this is associated with longer decision times (<xref rid="fig8" ref-type="fig">Figure 8</xref>).</p>
<fig position="float" id="fig8">
<label>Figure 8</label>
<caption><p>Mean decision time in ms with respect to the &#x201C;visual-olfactory&#x201D; and &#x201C;olfactory-visual&#x201D; condition of the tested experiment (<italic>N</italic>&#x2009;=&#x2009;30, error bars&#x2009;=&#x2009;SEM in ms).</p></caption>
<graphic xlink:href="fpsyg-13-888871-g008.tif"/>
</fig>
<p>To test the extent to which participants oriented themselves using the landmark information rather than learning the route sequentially, a paired-samples <italic>t</italic>-test between wayfinding performance, in terms of the relative number of correct decisions, in the wayfinding phase and the control phase was conducted and showed a non-significant result, <italic>t</italic>(29)&#x2009;=&#x2009;0.872, <italic>p</italic>&#x2009;=&#x2009;0.391. This implies that performance in the wayfinding phase and the subsequent randomized control phase was comparable and thus sequential learning of the route can be ruled out. In the case of sequential learning, participants would have learned only the directional information without a connection to the presented landmark information, and with randomized presentation of the landmark information, control-phase performance would have dropped to chance level.</p>
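<p>A minimal sketch of this sequential-learning check (again with hypothetical placeholder data, not the study&#x2019;s raw values): a paired-samples <italic>t</italic>-test compares each participant&#x2019;s accuracy in the wayfinding and control phases; a non-significant result is consistent with landmark&#x2013;direction learning rather than sequential route learning.</p>

```python
# Sketch with hypothetical data: per-participant accuracy in the
# wayfinding phase and the randomized control phase (paired design).
import numpy as np
from scipy import stats

wayfinding = np.array([0.75, 0.58, 0.83, 0.67, 0.92, 0.50, 0.75, 0.67,
                       0.83, 0.58, 0.92, 0.75, 0.67, 0.83, 0.58])
control = np.array([0.75, 0.67, 0.75, 0.67, 0.83, 0.58, 0.75, 0.58,
                    0.92, 0.50, 0.92, 0.83, 0.67, 0.75, 0.67])

# Paired-samples t-test: each participant contributes one value per phase.
t, p = stats.ttest_rel(wayfinding, control)
print(f"t({len(wayfinding) - 1}) = {t:.3f}, p = {p:.3f}")
```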
</sec>
<sec id="sec8">
<title>Comparison of All Levels</title>
<p>In addition to comparing the no-switch and switch conditions, comparisons were also made between all four groups, both for the relative number of correct decisions, with <italic>F</italic>(3,26)&#x2009;=&#x2009;22.425, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, <italic>f</italic>&#x2009;=&#x2009;1.608 [according to <xref ref-type="bibr" rid="ref7">Cohen, 1988</xref> effect sizes are interpreted as follows: small effect size <italic>f</italic>&#x2009;=&#x2009;0.1, medium effect size <italic>f</italic>&#x2009;=&#x2009;0.25, large effect size <italic>f</italic>&#x2009;=&#x2009;0.4], and for the mean decision times, with <italic>F</italic>(3,26)&#x2009;=&#x2009;10.362, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, <italic>f</italic>&#x2009;=&#x2009;1.094.</p>
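<p>The omnibus comparison can be sketched as a one-way ANOVA over the four groups, with Cohen&#x2019;s <italic>f</italic> derived from eta squared. The group data below are hypothetical placeholders, not the study&#x2019;s values.</p>

```python
# Sketch with hypothetical data: one-way ANOVA over four groups
# (visual no-switch, olfactory no-switch, visual->olfactory,
# olfactory->visual), then Cohen's f = sqrt(eta2 / (1 - eta2)).
import numpy as np
from scipy import stats

groups = [
    np.array([0.92, 1.00, 0.92, 1.00, 1.00, 1.00, 0.92]),        # visual, no switch
    np.array([0.50, 0.58, 0.42, 0.58, 0.50, 0.58, 0.50]),        # olfactory, no switch
    np.array([0.42, 0.33, 0.42, 0.50, 0.42, 0.33, 0.42, 0.50]),  # visual -> olfactory
    np.array([0.58, 0.50, 0.58, 0.67, 0.50, 0.58, 0.58, 0.50]),  # olfactory -> visual
]

F, p = stats.f_oneway(*groups)  # omnibus one-way ANOVA

# Effect size: eta squared = SS_between / SS_total, then Cohen's f.
all_vals = np.concatenate(groups)
grand = all_vals.mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_total = ((all_vals - grand) ** 2).sum()
eta2 = ss_between / ss_total
f_cohen = np.sqrt(eta2 / (1 - eta2))
print(f"F = {F:.2f}, p = {p:.4g}, f = {f_cohen:.2f}")
```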
<p>The comparison of the relative number of correct decisions between the visual (<italic>M</italic>&#x2009;=&#x2009;0.972, SEM&#x2009;=&#x2009;0.048) and olfactory group (<italic>M</italic>&#x2009;=&#x2009;0.524, SEM&#x2009;=&#x2009;0.054) in the no-switch condition showed a significant difference, <italic>t</italic>(26)&#x2009;=&#x2009;&#x2212;6.176, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, <italic>r</italic>&#x2009;=&#x2009;0.771 (<italic>U</italic>&#x2009;=&#x2009;0.000, <italic>Z</italic>&#x2009;=&#x2009;&#x2212;3.440, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001) [according to <xref ref-type="bibr" rid="ref7">Cohen, 1988</xref> effect sizes are interpreted as follows: small effect size <italic>r</italic>&#x2009;=&#x2009;0.1, medium effect size <italic>r</italic>&#x2009;=&#x2009;0.3, large effect size <italic>r</italic>&#x2009;=&#x2009;0.5]. Based on the higher relative number of correct decisions, it can be concluded that visual landmarks are better suited for wayfinding than olfactory landmark information.</p>
<p>The participants of the visual &#x201C;no-switch&#x201D; condition also seem to have performed better (i.e., achieved a higher number of correct decisions) than the participants of the visual-olfactory (&#x201C;switch&#x201D;) condition, <italic>t</italic>(26)&#x2009;=&#x2009;&#x2212;7.324, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, <italic>r</italic>&#x2009;=&#x2009;0.821 (<italic>U</italic>&#x2009;=&#x2009;0.000, <italic>Z</italic>&#x2009;=&#x2009;&#x2212;3.440, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001), and than the participants of the olfactory-visual (&#x201C;switch&#x201D;) condition, <italic>t</italic>(26)&#x2009;=&#x2009;&#x2212;5.520, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, <italic>r</italic>&#x2009;=&#x2009;0.735 (<italic>U</italic>&#x2009;=&#x2009;6.00, <italic>Z</italic>&#x2009;=&#x2009;&#x2212;2.829, <italic>p</italic>&#x2009;=&#x2009;0.005).</p>
<p>The mean decision times of the two &#x201C;no-switch&#x201D; conditions visual (<italic>M</italic>&#x2009;=&#x2009;1249.19, SEM&#x2009;=&#x2009;414.90) and olfactory (<italic>M</italic>&#x2009;=&#x2009;4240.07, SEM&#x2009;=&#x2009;470.40) also differed significantly from each other, <italic>t</italic>(26)&#x2009;=&#x2009;4.769, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, <italic>r</italic>&#x2009;=&#x2009;0.683 (<italic>U</italic>&#x2009;=&#x2009;0.000, <italic>Z</italic>&#x2009;=&#x2009;&#x2212;3.334, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001).</p>
</sec>
</sec>
<sec id="sec9" sec-type="discussions">
<title>Discussion</title>
<p>In general, it turns out that a modality switch between visual and olfactory landmark information is possible, since performance is significantly above chance level. However, in contrast to switches between modalities other than olfaction, a modality switch between visual and olfactory landmark information seems to be associated with additional cognitive costs in terms of a lower number of correct decisions.</p>
<p>First, it can be said that it was possible for participants to switch between visual and olfactory landmark information in a wayfinding task. Our results imply that humans may very well use their sense of smell to orient themselves and navigate. According to <xref ref-type="bibr" rid="ref8">Cunningham et al. (2021)</xref>, the ability to track olfactory plumes may have been an important skill in foraging. However, this switch incurred additional cognitive costs, which manifested themselves in the form of a lower relative number of correct decisions and higher mean decision times. This is surprising given the empirical data on switching costs in other modalities: <xref ref-type="bibr" rid="ref17">Hamburger and R&#x00F6;ser (2011)</xref>, for instance, found no switching costs in performance when switching from the auditory to the visual modality and vice versa.</p>
<p>This could be explained by the fact that auditory information engages both the phonological loop and the visuospatial sketchpad of working memory (<xref ref-type="bibr" rid="ref2">Baddeley and Hitch, 1974</xref>; <xref ref-type="bibr" rid="ref46">Tranel et al., 2003</xref>). This would mean that sounds are initially also processed in a second modality, namely as mental images. Thus, there would be an advantage for switching between these two modalities within a wayfinding task, as no additional cognitive resources would be required to transfer the learned information into the other modality. In this case a facilitation effect would arise (<xref ref-type="bibr" rid="ref18">Hamburger and R&#x00F6;ser, 2014</xref>), which does not seem to be the case for olfactory information. Consistent with this assumption, <xref ref-type="bibr" rid="ref17">Hamburger and R&#x00F6;ser (2011)</xref> found no switching costs between auditory and visual landmark information. It is possible that neither odors nor images are initially processed in the respective other modality, which could explain the poorer performance of the participants in the &#x201C;switch&#x201D; condition, as additional cognitive effort would be required to transfer the information to the other modality. On the other hand, our findings illustrated in <xref rid="fig6" ref-type="fig">Figure 6</xref> also show that it was easier for the participants to switch from olfactory to visual stimuli than vice versa. This suggests an advantage for switching from olfactory to visual stimuli, since less cognitive effort is required to transfer the olfactory information into the visual modality. In addition to the above explanation, these results also allow the possibility that odors are initially processed in the visual modality as well (i.e., as mental images), whereas images are not mentally represented in the olfactory modality, which would explain the participants&#x2019; poorer performance in the &#x201C;visual to olfactory&#x201D; switching condition. If this were the case, it would mean an initial double-coding for the first case but a single-coding for the second. Thus, additional cognitive resources would only be required when the visual information needs to be transferred to the olfactory modality during retrieval.</p>
<p>Overall, it can be concluded that humans are able to orient themselves even when switching between visual and olfactory landmark information, but their performance decreases compared to a switch between visual and auditory information.</p>
<p>In addition to a lower relative number of correct decisions, decision times were higher in the &#x201C;switch&#x201D; condition than in the &#x201C;no-switch&#x201D; condition, which is consistent with the hypothesis about switching costs in terms of decision times. However, the decision times for olfactory stimuli should not be overinterpreted, given ambiguous findings in the literature. Since the literature shows that response times for the olfactory system are significantly longer than for visual stimuli (<xref ref-type="bibr" rid="ref6">Cain, 1976</xref>; <xref ref-type="bibr" rid="ref42">Spence et al., 2000</xref>), this alone could explain the differences in response time that we report. On the one hand, according to <xref ref-type="bibr" rid="ref34">Radil and Wysocki (1998)</xref>, the sense of smell is a diffuse sense, which is why an exact localization of olfactory stimuli proves difficult. On the other hand, <xref ref-type="bibr" rid="ref31">Porter et al. (2007)</xref> also investigated the human sense of smell and the ability to track scents based on odors. According to them, humans are able to follow olfactory traces and even improve with practice.</p>
<sec id="sec10">
<title>Limitations</title>
<p>It is unclear whether the presentation of olfactory stimuli also triggers increased activation in visual cortex as described above, as is the case with auditory stimuli. To investigate this further, imaging techniques would have to be utilized after the application of odors. It could then be clarified whether olfactory information is initially also processed in the visual or another modality. This could provide further insight into landmark-based wayfinding as well.</p>
<p>Another explanation could be of a methodological nature. The odors presented to the participants as landmark information were administered by hand, which is why no standardized presentation of the stimuli was possible. Since the focus was on the investigation of switching costs, this did not pose a problem for answering the question under examination. However, <xref ref-type="bibr" rid="ref21">Jacobs (2012)</xref> showed that varying distances between odor and nose result in different perceived intensities, which may affect the performance and decision times of the experimental participants. Therefore, for future research, and especially for a time-accurate interpretation of odors, it would be useful to utilize devices that allow a standardized presentation of odors. The problem could be circumvented by using an olfactometer, which is capable of rapidly delivering discrete odor stimuli without tactile, thermal, or auditory variations (<xref ref-type="bibr" rid="ref13">Gottfried et al., 2002</xref>) and which would allow a more valid interpretation of the decision times. Moreover, the presentation of odors by hand while an empty intersection is shown on screen limits the ecological validity. Since this study serves primarily as fundamental research, the focus here was on whether navigation with a modality switch is possible at all. Future application studies should use a more realistic implementation of the odor cues, for example an open-field study in which the olfactory landmark cues are located along a real-world route.</p>
<p>In general, participants in all conditions in which odors were included showed poorer performance compared to the participants in the purely visual (&#x201C;no-switch&#x201D;) condition. However, it must be emphasized that this is due to the increased difficulty of using olfactory cues in visual environments compared to using visual cues, and not to a general inability of humans to orient and navigate using olfactory landmark information, as evidenced by several studies mentioned above (e.g., <xref ref-type="bibr" rid="ref22">Jacobs et al., 2015</xref>; <xref ref-type="bibr" rid="ref16">Hamburger and Knauff, 2019</xref>).</p>
<p>In addition to the already mentioned higher initial processing times of olfactory than for visual inputs (<xref ref-type="bibr" rid="ref6">Cain, 1976</xref>; <xref ref-type="bibr" rid="ref42">Spence et al., 2000</xref>), the emotionality of the participants could possibly provide an explanation. Emotions generally have an influence in wayfinding as well, as demonstrated by <xref ref-type="bibr" rid="ref30">Palmiero and Piccardi (2017)</xref>. In this study participants who saw visual emotional landmark information showed better orientation performance than participants who saw neutral emotional landmarks, which is in line with the results of <xref ref-type="bibr" rid="ref3">Balaban et al. (2017)</xref>. Accordingly, an emotional association appears to have an impact on wayfinding performance when visual landmark information is presented, but whether this is also the case for olfactory landmark information is unclear. Moreover, <xref ref-type="bibr" rid="ref4">Bestgen et al. (2015)</xref> showed that the emotional quality of odors predicts odor identification. However, it is yet unclear, whether odor quality might have an impact on (spatial) memory performance and therefore human wayfinding as well.</p>
<p>It is equally possible that the <italic>Proust effect</italic> (e.g., <xref ref-type="bibr" rid="ref47">Van Campen, 2014</xref>) applies to olfactory landmark information. This effect occurs when odors induce episodic memories. Here, odors evoke different memories and could thus lead not only to a higher load on the cognitive system (i.e., working memory) but also to a distraction from the actual wayfinding task. Accordingly, this could cause longer decision times. This would mean that if a certain odor induced a specific memory from the past, working memory would be under greater load and additional cognitive effort would result. On the one hand, the load on working memory could lead to a greater depth of processing; on the other hand, triggered memories could also distract and thus impair performance (e.g., attention), which the results tend to suggest. Thus, future research interest extends to landmark-based wayfinding with olfactory cues that carry an emotional component. Specifically, one could investigate whether a specific emotional meaning of the stimuli, i.e., positive, negative, or neutral, leads to differences in orientation performance.</p>
<p>Closely related to this is the salience of odors. <xref ref-type="bibr" rid="ref5">Caduff and Timpf (2008)</xref> focused on the concept of saliency, which refers to features that are relatively distinct, salient, or obvious compared to other features. Visual salience dominates visual attention during indoor wayfinding (<xref ref-type="bibr" rid="ref11">Dong et al., 2020</xref>). It remains an open question whether the salience of olfactory information also influences participants&#x2019; wayfinding.</p>
</sec>
<sec id="sec11">
<title>Conclusion</title>
<p>With this study, we demonstrated that a modality switch between visual and olfactory landmark information has a significant impact on wayfinding. For this reason, we again underline the necessity to consider different approaches to study the role of the different modalities in landmark-based wayfinding, in order to achieve a more comprehensive understanding of the underlying cognitive processes in human spatial orientation.</p>
</sec>
</sec>
<sec id="sec12" sec-type="data-availability">
<title>Data Availability Statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="sec13">
<title>Ethics Statement</title>
<p>The studies involving human participants were reviewed and approved by FB06, JLU Giessen; 2014-0017. The participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="sec14">
<title>Author Contributions</title>
<p>KH contributed to conception and design of the study and organized the database. KH and MS performed the statistical analysis. MS wrote all sections of the manuscript. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec id="conf1" sec-type="COI-statement">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="sec16" sec-type="disclaimer">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ack>
<p>We thank Emma Fuchs for her help with the conception of the experiment and for data collection. We thank the reviewers for their careful and detailed reviews, which helped to significantly improve our manuscript.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="ref1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arbuthnott</surname> <given-names>K. D.</given-names></name> <name><surname>Woodward</surname> <given-names>T. S.</given-names></name></person-group> (<year>2002</year>). <article-title>The influence of cue-task association and location on switch cost and alternating-switch cost</article-title>. <source>Can. J. Exp. Psychol.</source> <volume>56</volume>, <fpage>18</fpage>&#x2013;<lpage>29</lpage>. doi: <pub-id pub-id-type="doi">10.1037/h0087382</pub-id>, PMID: <pub-id pub-id-type="pmid">11901958</pub-id></citation></ref>
<ref id="ref2"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Baddeley</surname> <given-names>A. D.</given-names></name> <name><surname>Hitch</surname> <given-names>G.</given-names></name></person-group> (<year>1974</year>). &#x201C;<article-title>Working memory</article-title>,&#x201D; in <source>The Psychology of Learning and Motivation: Advances in Research and Theory.</source> ed. <person-group person-group-type="editor"><name><surname>Bower</surname> <given-names>G. H.</given-names></name></person-group>, (<publisher-loc>New York</publisher-loc>: <publisher-name>Academic Press</publisher-name>) <fpage>47</fpage>&#x2013;<lpage>89</lpage>.</citation></ref>
<ref id="ref3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Balaban</surname> <given-names>C. Z.</given-names></name> <name><surname>Karimpur</surname> <given-names>H.</given-names></name> <name><surname>R&#x00F6;ser</surname> <given-names>F.</given-names></name> <name><surname>Hamburger</surname> <given-names>K.</given-names></name></person-group> (<year>2017</year>). <article-title>Turn left where you felt unhappy: how affect influences landmark-based wayfinding</article-title>. <source>Cogn. Process.</source> <volume>18</volume>, <fpage>135</fpage>&#x2013;<lpage>144</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10339-017-0790-0</pub-id>, PMID: <pub-id pub-id-type="pmid">28070686</pub-id></citation></ref>
<ref id="ref4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bestgen</surname> <given-names>A.</given-names></name> <name><surname>Schulze</surname> <given-names>P.</given-names></name> <name><surname>Kuchinke</surname> <given-names>L.</given-names></name></person-group> (<year>2015</year>). <article-title>Odor emotional quality predicts odor identification</article-title>. <source>Chem. Senses</source> <volume>40</volume>, <fpage>517</fpage>&#x2013;<lpage>523</lpage>. doi: <pub-id pub-id-type="doi">10.1093/chemse/bjv037</pub-id>, PMID: <pub-id pub-id-type="pmid">26142420</pub-id></citation></ref>
<ref id="ref5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Caduff</surname> <given-names>D.</given-names></name> <name><surname>Timpf</surname> <given-names>S.</given-names></name></person-group> (<year>2008</year>). <article-title>On the assessment of landmark salience for human navigation</article-title>. <source>Cogn. Process.</source> <volume>9</volume>, <fpage>249</fpage>&#x2013;<lpage>267</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10339-007-0199-2</pub-id>, PMID: <pub-id pub-id-type="pmid">17999102</pub-id></citation></ref>
<ref id="ref6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cain</surname> <given-names>W. S.</given-names></name></person-group> (<year>1976</year>). <article-title>Olfaction and the common chemical sense: some psychophysical contrasts</article-title>. <source>Sens. Process.</source> <volume>1</volume>, <fpage>57</fpage>&#x2013;<lpage>67</lpage>.</citation></ref>
<ref id="ref7"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Cohen</surname> <given-names>J.</given-names></name></person-group> (<year>1988</year>). <source>Statistical Power Analysis for the Behavioral Sciences</source>, <edition>(2nd Edn.)</edition>, <publisher-loc>Hillsdale, NJ</publisher-loc>: <publisher-name>Lawrence Earlbaum Associates</publisher-name>.</citation></ref>
<ref id="ref8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cunningham</surname> <given-names>E. P.</given-names></name> <name><surname>Edmonds</surname> <given-names>D.</given-names></name> <name><surname>Stalter</surname> <given-names>L.</given-names></name> <name><surname>Janal</surname> <given-names>M. N.</given-names></name></person-group> (<year>2021</year>). <article-title>Ring-tailed lemurs (<italic>Lemur catta</italic>) use olfaction to locate distant fruit</article-title>. <source>Am. J. Phys. Anthropol.</source> <volume>175</volume>, <fpage>300</fpage>&#x2013;<lpage>307</lpage>. doi: <pub-id pub-id-type="doi">10.1002/ajpa.24255</pub-id>, PMID: <pub-id pub-id-type="pmid">33624841</pub-id></citation></ref>
<ref id="ref9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dahmani</surname> <given-names>L.</given-names></name> <name><surname>Patel</surname> <given-names>R. M.</given-names></name> <name><surname>Yang</surname> <given-names>Y.</given-names></name> <name><surname>Chakravarty</surname> <given-names>M. M.</given-names></name> <name><surname>Fellows</surname> <given-names>L. K.</given-names></name> <name><surname>Bohbot</surname> <given-names>V. D.</given-names></name></person-group> (<year>2018</year>). <article-title>An intrinsic association between olfactory identification and spatial memory in humans</article-title>. <source>Nat. Commun.</source> <volume>9</volume>:<fpage>4162</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s41467-018-06569-4</pub-id>, PMID: <pub-id pub-id-type="pmid">30327469</pub-id></citation></ref>
<ref id="ref10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Denis</surname> <given-names>M.</given-names></name> <name><surname>Mores</surname> <given-names>C.</given-names></name> <name><surname>Gras</surname> <given-names>D.</given-names></name> <name><surname>Gyselinck</surname> <given-names>V.</given-names></name> <name><surname>Daniel</surname> <given-names>M. P.</given-names></name></person-group> (<year>2014</year>). <article-title>Is memory for routes enhanced by an environment's richness in visual landmarks?</article-title> <source>Spat. Cogn. Comput.</source> <volume>14</volume>, <fpage>284</fpage>&#x2013;<lpage>305</lpage>. doi: <pub-id pub-id-type="doi">10.1080/13875868.2014.945586</pub-id></citation></ref>
<ref id="ref11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dong</surname> <given-names>W.</given-names></name> <name><surname>Qin</surname> <given-names>T.</given-names></name> <name><surname>Liao</surname> <given-names>H.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Liu</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>Comparing the roles of landmark visual salience and semantic salience in visual guidance during indoor wayfinding</article-title>. <source>Cartogr. Geogr. Inf. Sci.</source> <volume>47</volume>, <fpage>229</fpage>&#x2013;<lpage>243</lpage>. doi: <pub-id pub-id-type="doi">10.1080/15230406.2019.1697965</pub-id></citation></ref>
<ref id="ref12"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Golledge</surname> <given-names>R. G.</given-names></name></person-group> (<year>1999</year>). <source>Wayfinding Behavior: Cognitive Mapping and other Spatial Processes.</source> <publisher-loc>Baltimore, MD</publisher-loc>: <publisher-name>The Johns Hopkins University Press</publisher-name>.</citation></ref>
<ref id="ref13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gottfried</surname> <given-names>J. A.</given-names></name> <name><surname>Deichmann</surname> <given-names>R.</given-names></name> <name><surname>Winston</surname> <given-names>J. S.</given-names></name> <name><surname>Dolan</surname> <given-names>R. J.</given-names></name></person-group> (<year>2002</year>). <article-title>Functional heterogeneity in human olfactory cortex: an event-related functional magnetic resonance imaging study</article-title>. <source>J. Neurosci.</source> <volume>22</volume>, <fpage>10819</fpage>&#x2013;<lpage>10828</lpage>. doi: <pub-id pub-id-type="doi">10.1523/JNEUROSCI.22-24-10819.2002</pub-id>, PMID: <pub-id pub-id-type="pmid">12486175</pub-id></citation></ref>
<ref id="ref14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hamburger</surname> <given-names>K.</given-names></name></person-group> (<year>2020</year>). <article-title>Visual landmarks are exaggerated: a theoretical and empirical view on the meaning of landmarks in human wayfinding</article-title>. <source>K&#x00FC;nstl. Intell.</source> <volume>34</volume>, <fpage>557</fpage>&#x2013;<lpage>562</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s13218-020-00668-5</pub-id></citation></ref>
<ref id="ref15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hamburger</surname> <given-names>K.</given-names></name> <name><surname>Karimpur</surname> <given-names>H.</given-names></name></person-group> (<year>2017</year>). <article-title>A psychological approach to olfactory information as cues in our environment</article-title>. <source>J. Biourban.</source> <volume>6</volume>, <fpage>59</fpage>&#x2013;<lpage>73</lpage>.</citation></ref>
<ref id="ref16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hamburger</surname> <given-names>K.</given-names></name> <name><surname>Knauff</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>Odors can serve as landmarks in human wayfinding</article-title>. <source>Cogn. Sci.</source> <volume>43</volume>:<fpage>e12798</fpage>. doi: <pub-id pub-id-type="doi">10.1111/cogs.12798</pub-id>, PMID: <pub-id pub-id-type="pmid">31742755</pub-id></citation></ref>
<ref id="ref17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hamburger</surname> <given-names>K.</given-names></name> <name><surname>R&#x00F6;ser</surname> <given-names>F.</given-names></name></person-group> (<year>2011</year>). <article-title>The meaning of Gestalt for human wayfinding&#x2014;how much does it cost to switch modalities?</article-title> <source>Gestalt Theory</source> <volume>33</volume>, <fpage>363</fpage>&#x2013;<lpage>382</lpage>.</citation></ref>
<ref id="ref18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hamburger</surname> <given-names>K.</given-names></name> <name><surname>R&#x00F6;ser</surname> <given-names>F.</given-names></name></person-group> (<year>2014</year>). <article-title>The role of landmark modality and familiarity in human wayfinding</article-title>. <source>Swiss J. Psychol.</source> <volume>73</volume>, <fpage>205</fpage>&#x2013;<lpage>213</lpage>. doi: <pub-id pub-id-type="doi">10.1024/1421-0185/a000139</pub-id></citation></ref>
<ref id="ref19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hepper</surname> <given-names>P. G.</given-names></name> <name><surname>Wells</surname> <given-names>D. L.</given-names></name></person-group> (<year>2005</year>). <article-title>How many footsteps do dogs need to determine the direction of an odour trail?</article-title> <source>Chem. Senses</source> <volume>30</volume>, <fpage>291</fpage>&#x2013;<lpage>298</lpage>. doi: <pub-id pub-id-type="doi">10.1093/chemse/bji023</pub-id>, PMID: <pub-id pub-id-type="pmid">15741595</pub-id></citation></ref>
<ref id="ref20"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Holden</surname> <given-names>M. P.</given-names></name> <name><surname>Newcombe</surname> <given-names>N. S.</given-names></name></person-group> (<year>2013</year>). &#x201C;<article-title>The development of location coding: an adaptive combination account</article-title>,&#x201D; in <source>Handbook of Spatial Cognition.</source> eds. <person-group person-group-type="editor"><name><surname>Waller</surname> <given-names>D.</given-names></name> <name><surname>Nadel</surname> <given-names>L.</given-names></name></person-group> (<publisher-loc>Washington, DC</publisher-loc>: <publisher-name>APA</publisher-name>), <fpage>191</fpage>&#x2013;<lpage>209</lpage>.</citation></ref>
<ref id="ref21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jacobs</surname> <given-names>L. F.</given-names></name></person-group> (<year>2012</year>). <article-title>From chemotaxis to the cognitive map: the function of olfaction</article-title>. <source>Proc. Natl. Acad. Sci. U. S. A.</source> <volume>109</volume>, <fpage>10693</fpage>&#x2013;<lpage>10700</lpage>. doi: <pub-id pub-id-type="doi">10.1073/pnas.1201880109</pub-id>, PMID: <pub-id pub-id-type="pmid">22723365</pub-id></citation></ref>
<ref id="ref22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jacobs</surname> <given-names>L. F.</given-names></name> <name><surname>Arter</surname> <given-names>J.</given-names></name> <name><surname>Cook</surname> <given-names>A.</given-names></name> <name><surname>Sulloway</surname> <given-names>F. J.</given-names></name></person-group> (<year>2015</year>). <article-title>Olfactory orientation and navigation in humans</article-title>. <source>PLoS One</source> <volume>10</volume>:<fpage>e0129387</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pone.0129387</pub-id>, PMID: <pub-id pub-id-type="pmid">26083337</pub-id></citation></ref>
<ref id="ref23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Karimpur</surname> <given-names>H.</given-names></name> <name><surname>Hamburger</surname> <given-names>K.</given-names></name></person-group> (<year>2016</year>). <article-title>Multimodal integration of spatial information: the influence of object-related factors and self-reported strategies</article-title>. <source>Front. Psychol.</source> <volume>7</volume>:<fpage>1443</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2016.01443</pub-id>, PMID: <pub-id pub-id-type="pmid">27708608</pub-id></citation></ref>
<ref id="ref24"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Kotowick</surname> <given-names>K.</given-names></name> <name><surname>Shah</surname> <given-names>J.</given-names></name></person-group> (<year>2018</year>). &#x201C;<article-title>Modality switching for mitigation of sensory adaptation and habituation in personal navigation systems</article-title>,&#x201D; in <source>23rd International Conference on Intelligent User Interfaces.</source> eds. <person-group person-group-type="editor"><name><surname>Berkovsky</surname> <given-names>S.</given-names></name> <name><surname>Hijikata</surname> <given-names>Y.</given-names></name> <name><surname>Rekimoto</surname> <given-names>J.</given-names></name> <name><surname>Burnett</surname> <given-names>M.</given-names></name> <name><surname>Billinghurst</surname> <given-names>M.</given-names></name> <name><surname>Quigley</surname> <given-names>A.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>115</fpage>&#x2013;<lpage>127</lpage>.</citation></ref>
<ref id="ref25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koutsoklenis</surname> <given-names>A.</given-names></name> <name><surname>Papadopoulos</surname> <given-names>K.</given-names></name></person-group> (<year>2011</year>). <article-title>Olfactory cues used for wayfinding in urban environments by individuals with visual impairments</article-title>. <source>J. Vis. Impair Blind</source> <volume>105</volume>, <fpage>692</fpage>&#x2013;<lpage>702</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0145482X1110501015</pub-id></citation></ref>
<ref id="ref26"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Lynch</surname> <given-names>K.</given-names></name></person-group> (<year>1960</year>). <source>The Image of the City.</source> <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</citation></ref>
<ref id="ref27"><citation citation-type="other"><person-group person-group-type="author"><name><surname>Math&#x00F4;t</surname> <given-names>S.</given-names></name> <name><surname>Schreij</surname> <given-names>D.</given-names></name> <name><surname>Theeuwes</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>OpenSesame (3.2.8) [Software]</article-title>. Available at: <ext-link xlink:href="https://osdoc.cogsci.nl/3.2/" ext-link-type="uri">https://osdoc.cogsci.nl/3.2/</ext-link> (Accessed March 03, 2022).</citation></ref>
<ref id="ref28"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Montello</surname> <given-names>D. R.</given-names></name></person-group> (<year>2005</year>). &#x201C;<article-title>Navigation</article-title>,&#x201D; in <source>The Cambridge Handbook of Visuospatial Thinking.</source> eds. <person-group person-group-type="editor"><name><surname>Shah</surname> <given-names>P.</given-names></name> <name><surname>Miyake</surname> <given-names>A.</given-names></name></person-group> (<publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>), <fpage>257</fpage>&#x2013;<lpage>294</lpage>.</citation></ref>
<ref id="ref29"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Montello</surname> <given-names>D. R.</given-names></name> <name><surname>Sas</surname> <given-names>C.</given-names></name></person-group> (<year>2006</year>). &#x201C;<article-title>Human factors of wayfinding in navigation</article-title>,&#x201D; in <source>International Encyclopedia of Ergonomics and Human Factors, Second Edition&#x2014;3 Volume Set.</source> ed. <person-group person-group-type="editor"><name><surname>Karwowski</surname> <given-names>W.</given-names></name></person-group> (<publisher-loc>Boca Raton, FL</publisher-loc>: <publisher-name>CRC Press</publisher-name>).</citation></ref>
<ref id="ref30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Palmiero</surname> <given-names>M.</given-names></name> <name><surname>Piccardi</surname> <given-names>L.</given-names></name></person-group> (<year>2017</year>). <article-title>The role of emotional landmarks on topographical memory</article-title>. <source>Front. Psychol.</source> <volume>8</volume>:<fpage>763</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2017.00763</pub-id>, PMID: <pub-id pub-id-type="pmid">28539910</pub-id></citation></ref>
<ref id="ref31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Porter</surname> <given-names>J.</given-names></name> <name><surname>Craven</surname> <given-names>B.</given-names></name> <name><surname>Khan</surname> <given-names>R. M.</given-names></name> <name><surname>Chang</surname> <given-names>S.-J.</given-names></name> <name><surname>Kang</surname> <given-names>I.</given-names></name> <name><surname>Judkewitz</surname> <given-names>B.</given-names></name> <etal/></person-group>. (<year>2007</year>). <article-title>Mechanisms of scent-tracking in humans</article-title>. <source>Nat. Neurosci.</source> <volume>10</volume>, <fpage>27</fpage>&#x2013;<lpage>29</lpage>. doi: <pub-id pub-id-type="doi">10.1038/nn1819</pub-id>, PMID: <pub-id pub-id-type="pmid">17173046</pub-id></citation></ref>
<ref id="ref32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Posner</surname> <given-names>M. I.</given-names></name> <name><surname>Cohen</surname> <given-names>Y.</given-names></name></person-group> (<year>1984</year>). <article-title>Components of visual orienting</article-title>. <source>Attention and Performance</source> <volume>32</volume>, <fpage>531</fpage>&#x2013;<lpage>556</lpage>.</citation></ref>
<ref id="ref33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Presson</surname> <given-names>C. C.</given-names></name> <name><surname>Montello</surname> <given-names>D. R.</given-names></name></person-group> (<year>1988</year>). <article-title>Points of reference in spatial cognition: stalking the elusive landmark</article-title>. <source>Br. J. Dev. Psychol.</source> <volume>6</volume>, <fpage>378</fpage>&#x2013;<lpage>381</lpage>. doi: <pub-id pub-id-type="doi">10.1111/j.2044-835X.1988.tb01113.x</pub-id></citation></ref>
<ref id="ref34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Radil</surname> <given-names>T.</given-names></name> <name><surname>Wysocki</surname> <given-names>C. J.</given-names></name></person-group> (<year>1998</year>). <article-title>Spatiotemporal masking in pure olfaction</article-title>. <source>Ann. N. Y. Acad. Sci.</source> <volume>855</volume>, <fpage>641</fpage>&#x2013;<lpage>644</lpage>. doi: <pub-id pub-id-type="doi">10.1111/j.1749-6632.1998.tb10638.x</pub-id>, PMID: <pub-id pub-id-type="pmid">9929664</pub-id></citation></ref>
<ref id="ref35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rasch</surname> <given-names>D.</given-names></name> <name><surname>Guiard</surname> <given-names>V.</given-names></name></person-group> (<year>2004</year>). <article-title>The robustness of parametric statistical methods</article-title>. <source>Psychol. Sci.</source> <volume>46</volume>, <fpage>175</fpage>&#x2013;<lpage>208</lpage>.</citation></ref>
<ref id="ref36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reddy</surname> <given-names>G.</given-names></name> <name><surname>Murthy</surname> <given-names>V. N.</given-names></name> <name><surname>Vergassola</surname> <given-names>M.</given-names></name></person-group> (<year>2022</year>). <article-title>Olfactory sensing and navigation in turbulent environments</article-title>. <source>Annu. Rev. Condens. Matter Phys.</source> <volume>13</volume>, <fpage>191</fpage>&#x2013;<lpage>213</lpage>. doi: <pub-id pub-id-type="doi">10.1146/annurev-conmatphys-031720-032754</pub-id></citation></ref>
<ref id="ref37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>R&#x00F6;ser</surname> <given-names>F.</given-names></name> <name><surname>Hamburger</surname> <given-names>K.</given-names></name> <name><surname>Knauff</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>The Giessen virtual environment laboratory: human wayfinding and landmark salience</article-title>. <source>Cogn. Process.</source> <volume>12</volume>, <fpage>209</fpage>&#x2013;<lpage>214</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10339-011-0390-3</pub-id>, PMID: <pub-id pub-id-type="pmid">21279666</pub-id></citation></ref>
<ref id="ref38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rossier</surname> <given-names>J.</given-names></name> <name><surname>Schenk</surname> <given-names>F.</given-names></name></person-group> (<year>2003</year>). <article-title>Olfactory and/or visual cues for spatial navigation through ontogeny: olfactory cues enable the use of visual cues</article-title>. <source>Behav. Neurosci.</source> <volume>117</volume>, <fpage>412</fpage>&#x2013;<lpage>425</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0735-7044.117.3.412</pub-id></citation></ref>
<ref id="ref39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sharma</surname> <given-names>G.</given-names></name> <name><surname>Kaushal</surname> <given-names>Y.</given-names></name> <name><surname>Chandra</surname> <given-names>S.</given-names></name> <name><surname>Singh</surname> <given-names>V.</given-names></name> <name><surname>Mittal</surname> <given-names>A. P.</given-names></name> <name><surname>Dutt</surname> <given-names>V.</given-names></name></person-group> (<year>2017</year>). <article-title>Influence of landmarks on wayfinding and brain connectivity in immersive virtual reality environment</article-title>. <source>Front. Psychol.</source> <volume>8</volume>:<fpage>1220</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2017.01220</pub-id>, PMID: <pub-id pub-id-type="pmid">28775698</pub-id></citation></ref>
<ref id="ref40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Siepmann</surname> <given-names>N.</given-names></name> <name><surname>Edler</surname> <given-names>D.</given-names></name> <name><surname>Keil</surname> <given-names>J.</given-names></name> <name><surname>Kuchinke</surname> <given-names>L.</given-names></name> <name><surname>Dickmann</surname> <given-names>F.</given-names></name></person-group> (<year>2020</year>). <article-title>The position of sound in audiovisual maps: an experimental study of performance in spatial memory</article-title>. <source>Cartographica: The International Journal for Geographic Information and Geovisualization</source> <volume>55</volume>, <fpage>136</fpage>&#x2013;<lpage>150</lpage>. doi: <pub-id pub-id-type="doi">10.3138/cart-2019-0008</pub-id></citation></ref>
<ref id="ref41"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Sorrows</surname> <given-names>M. E.</given-names></name> <name><surname>Hirtle</surname> <given-names>S. C.</given-names></name></person-group> (<year>1999</year>). &#x201C;<article-title>The nature of landmarks for real and electronic spaces</article-title>.&#x201D; in <source>Spatial Information Theory: Cognitive and Computational Foundations of Geographic Information Science, International Conference COSIT '99</source>, August 25&#x2013;29, 1999; <publisher-loc>Stade, Germany</publisher-loc>.</citation></ref>
<ref id="ref42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Spence</surname> <given-names>C.</given-names></name> <name><surname>Lloyd</surname> <given-names>D.</given-names></name> <name><surname>McGlone</surname> <given-names>F.</given-names></name> <name><surname>Nicholls</surname> <given-names>M. E.</given-names></name> <name><surname>Driver</surname> <given-names>J.</given-names></name></person-group> (<year>2000</year>). <article-title>Inhibition of return is supramodal: a demonstration between all possible pairings of vision, touch, and audition</article-title>. <source>Exp. Brain Res.</source> <volume>134</volume>, <fpage>42</fpage>&#x2013;<lpage>48</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s002210000442</pub-id>, PMID: <pub-id pub-id-type="pmid">11026724</pub-id></citation></ref>
<ref id="ref43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Steck</surname> <given-names>K.</given-names></name></person-group> (<year>2012</year>). <article-title>Just follow your nose: homing by olfactory cues in ants</article-title>. <source>Curr. Opin. Neurobiol.</source> <volume>22</volume>, <fpage>231</fpage>&#x2013;<lpage>235</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.conb.2011.11.011</pub-id>, PMID: <pub-id pub-id-type="pmid">22137100</pub-id></citation></ref>
<ref id="ref44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Steck</surname> <given-names>K.</given-names></name> <name><surname>Hansson</surname> <given-names>B. S.</given-names></name> <name><surname>Knaden</surname> <given-names>M.</given-names></name></person-group> (<year>2009</year>). <article-title>Smells like home: desert ants, <italic>Cataglyphis fortis</italic>, use olfactory landmarks to pinpoint the nest</article-title>. <source>Front. Zool.</source> <volume>6</volume>:<fpage>5</fpage>. doi: <pub-id pub-id-type="doi">10.1186/1742-9994-6-5</pub-id>, PMID: <pub-id pub-id-type="pmid">19250516</pub-id></citation></ref>
<ref id="ref45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Steck</surname> <given-names>K.</given-names></name> <name><surname>Hansson</surname> <given-names>B. S.</given-names></name> <name><surname>Knaden</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>Desert ants benefit from combining visual and olfactory landmarks</article-title>. <source>J. Exp. Biol.</source> <volume>214</volume>, <fpage>1307</fpage>&#x2013;<lpage>1312</lpage>. doi: <pub-id pub-id-type="doi">10.1242/jeb.053579</pub-id>, PMID: <pub-id pub-id-type="pmid">21430208</pub-id></citation></ref>
<ref id="ref46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tranel</surname> <given-names>D.</given-names></name> <name><surname>Damasio</surname> <given-names>H.</given-names></name> <name><surname>Eichhorn</surname> <given-names>G. R.</given-names></name> <name><surname>Grabowski</surname> <given-names>T.</given-names></name> <name><surname>Ponto</surname> <given-names>L. L. B.</given-names></name> <name><surname>Hichwa</surname> <given-names>R. D.</given-names></name></person-group> (<year>2003</year>). <article-title>Neural correlates of naming animals from their characteristic sounds</article-title>. <source>Neuropsychologia</source> <volume>41</volume>, <fpage>847</fpage>&#x2013;<lpage>854</lpage>. doi: <pub-id pub-id-type="doi">10.1016/S0028-3932(02)00223-3</pub-id>, PMID: <pub-id pub-id-type="pmid">12631534</pub-id></citation></ref>
<ref id="ref47"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Van Campen</surname> <given-names>C.</given-names></name></person-group> (<year>2014</year>). <source>The Proust Effect: The Senses as Doorways to Lost Memories.</source> <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</citation></ref>
<ref id="ref48"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Wilcox</surname> <given-names>R.</given-names></name></person-group> (<year>2012</year>). <source>Introduction to Robust Estimation and Hypothesis Testing.</source> <edition>(3rd Edn.)</edition>, <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Academic Press</publisher-name>.</citation></ref>
<ref id="ref49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yesiltepe</surname> <given-names>D.</given-names></name> <name><surname>Conroy Dalton</surname> <given-names>R.</given-names></name> <name><surname>Ozbil Torun</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Landmarks in wayfinding: a review of the existing literature</article-title>. <source>Cogn. Process.</source> <volume>22</volume>, <fpage>369</fpage>&#x2013;<lpage>410</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10339-021-01012-x</pub-id>, PMID: <pub-id pub-id-type="pmid">33682034</pub-id></citation></ref>
</ref-list>
</back>
</article>