<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="editorial" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Virtual Real.</journal-id>
<journal-title>Frontiers in Virtual Reality</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Virtual Real.</abbrev-journal-title>
<issn pub-type="epub">2673-4192</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">578080</article-id>
<article-id pub-id-type="doi">10.3389/frvir.2021.578080</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Virtual Reality</subject>
<subj-group>
<subject>Specialty Grand Challenge</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Grand Challenges for Augmented Reality</article-title>
<alt-title alt-title-type="left-running-head">Billinghurst</alt-title>
<alt-title alt-title-type="right-running-head">Grand Challenges for Augmented Reality</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Billinghurst</surname>
<given-names>Mark</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/135107/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<label>
<sup>1</sup>
</label>STEM, University of South Australia, <addr-line>Mawson Lakes</addr-line>, <addr-line>SA</addr-line>, <country>Australia</country>
</aff>
<aff id="aff2">
<label>
<sup>2</sup>
</label>Auckland Bioengineering Institute, University of Auckland, <addr-line>Auckland</addr-line>, <country>New Zealand</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited and reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1114/overview">Mel Slater</ext-link>, University of Barcelona, Spain</p>
</fn>
<fn>
<p>This article was submitted to Augmented Reality, a section of the journal Frontiers in Virtual Reality</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Mark Billinghurst, <email>mark.billinghurst@unisa.edu.au</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>05</day>
<month>03</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>2</volume>
<elocation-id>578080</elocation-id>
<history>
<date date-type="received">
<day>30</day>
<month>06</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>28</day>
<month>01</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2021 Billinghurst.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Billinghurst</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<kwd-group>
<kwd>augmented reality</kwd>
<kwd>grand challenge</kwd>
<kwd>display</kwd>
<kwd>Interaction</kwd>
<kwd>tracking</kwd>
<kwd>collaboration</kwd>
<kwd>ethics</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>In his 1965 article, The Ultimate Display, Ivan Sutherland imagined a future computer interface that blurred the separation between the digital and physical worlds (<xref ref-type="bibr" rid="B42">Sutherland, 1965</xref>). At the time, he was making this vision a reality, creating a see-through head mounted display (HMD) that allowed users to see virtual images superimposed over the real world (<xref ref-type="bibr" rid="B41">Sutherland, 1968</xref>). The user&#x2019;s head position was tracked, so the virtual content appeared fixed in space, and a handheld wand could be used to interact with it.</p>
<p>Although the term was not coined until decades later, Sutherland&#x2019;s system was the first working Augmented Reality (AR) interface. AR is technology with three key characteristics (<xref ref-type="bibr" rid="B3">Azuma, 1997</xref>): 1) it combines real and virtual images, 2) it is interactive in real time, and 3) its virtual imagery is registered in three dimensions. Sutherland&#x2019;s work had all three properties, but over 50&#xa0;years later his vision of the Ultimate Display still has not been achieved, and more research is needed.</p>
<p>Azuma&#x2019;s definition of AR provides guidance on the technology required to create an AR experience. To combine real and virtual images, display technology is needed. To support interaction in real time, user interface technologies are required. To register AR content in three dimensions, tracking technology is needed.</p>
<p>These technologies were once available only in research labs, but today they are in people&#x2019;s hands. Current mobile phones, with cameras, GPS and inertial sensors, high resolution screens, fast networking, and powerful CPUs and graphics processors, are the most common way that people experience AR. Compatible with hundreds of millions of devices, Apple&#x2019;s ARKit (<xref ref-type="bibr" rid="B2">Apple, 2020</xref>) and Google&#x2019;s ARCore (<xref ref-type="bibr" rid="B12">Google, 2020a</xref>) provide accurate AR tracking for mobile devices. A user can look at the camera view on their phone screen and see virtual objects in their real world. Mobile AR applications such as Pokemon Go have been downloaded over a billion times (<xref ref-type="bibr" rid="B26">NintendoSoup, 2019</xref>), showing how readily accessible the technology is.</p>
<p>However, the user experience provided by a phone is very different from Sutherland&#x2019;s vision of hands-free interaction, stereo graphics, and virtual imagery always in a person&#x2019;s field of view. Mobile AR provides an easily accessible entry point, but the true potential of AR will be achieved through head mounted displays, richer interaction, and better tracking techniques. In each of these areas there are important Grand Challenges that need research, as discussed below.</p>
<sec id="s1-1">
<title>Research in Display Technology</title>
<p>Sutherland used miniature cathode ray tubes mounted on the head with optical combiners to create a stereo see-through AR display. However, this had a limited field of view, resolution and refresh rate. One Grand Challenge is to create a wide field of view, high resolution, see-through display in a socially acceptable form factor. There are a number of factors that need to be addressed before HMDs can become a replacement for smartphones. These include creating a sunglass-like form factor, providing sufficient brightness and contrast, having a high resolution and wide field of view, addressing eyestrain, and enabling people to see each other&#x2019;s eyes (<xref ref-type="bibr" rid="B4">Azuma, 2017</xref>). Research is ongoing in many of these areas. For example, a pinhole screen can be used to create a wide field of view see-through AR display (<xref ref-type="bibr" rid="B20">Maimone et al., 2014</xref>), and holographic projection can be used to achieve full color, high contrast AR images in an eye-glass form factor (<xref ref-type="bibr" rid="B19">Maimone et al., 2017</xref>).</p>
<p>Other areas are also important, such as the vergence-accommodation problem: because a display has only a single focal plane, people cannot keep AR content in focus while also focusing on real-world objects at a different distance. Variable focal planes can enable users to view virtual content at different focal lengths (<xref ref-type="bibr" rid="B18">Liu et al., 2008</xref>). Light field displays provide one way to show photorealistic content to the user and are a prerequisite for creating &#x201c;True Augmented Reality&#x201d; (<xref ref-type="bibr" rid="B35">Sandor et al., 2015</xref>). There are also interesting innovations happening in the commercial sector, such as from companies like Mojo Vision (<xref ref-type="bibr" rid="B24">Mojo Vision, 2020</xref>), which is developing AR-enabled contact lenses, but these are many years away from commercialization.</p>
</sec>
<sec id="s1-2">
<title>Research in Interaction</title>
<p>Sutherland&#x2019;s system supported simple interaction with a handheld wand. Another Grand Challenge is to enable people to interact with AR content as easily as they do with real objects. Many researchers are exploring natural user interfaces, such as using tangible objects to interact with AR content (Tangible AR interfaces (<xref ref-type="bibr" rid="B7">Billinghurst et al., 2008</xref>)) or free-hand gesture manipulation (<xref ref-type="bibr" rid="B36">Sharp et al., 2015</xref>). Modern AR displays such as the Hololens2 (<xref ref-type="bibr" rid="B22">Microsoft, 2020a</xref>) support natural two-handed gesture input, allowing people to reach out and grab virtual content. However, it is possible to go beyond this and combine speech and gesture to create multimodal interfaces in which the strengths of one modality compensate for the weaknesses of another (<xref ref-type="bibr" rid="B27">Nizam et al., 2018</xref>). The addition of eye-tracking, full-body input, and other non-verbal cues can provide even more intuitive multimodal interaction. Research also needs to be conducted into interaction methods using techniques not possible in the real world. Brain-computer interfaces enable AR content to be selected using brain activity (<xref ref-type="bibr" rid="B37">Si-Mohammad et al., 2018</xref>), and other physiological sensors can enable AR to respond to a user&#x2019;s heart rate or emotional state. There are many opportunities to create even better AR interaction methods.</p>
</sec>
<sec id="s1-3">
<title>Research in Tracking</title>
<p>A key feature of AR systems is that the content appears to be fixed in space, which requires the user&#x2019;s viewpoint to be continuously tracked. Sutherland achieved this by using mechanical and ultrasonic trackers to measure where the user&#x2019;s HMD was and render the virtual imagery from that same position. Tracking technology has improved significantly, but another Grand Challenge is to precisely locate a user&#x2019;s position in any location. There has been a significant amount of research on computer vision methods for tracking the user&#x2019;s viewpoint without prior knowledge of any visual features (<xref ref-type="bibr" rid="B14">Kim et al., 2018</xref>). Hybrid approaches that combine vision-based SLAM tracking with GPS and inertial sensors can be used for a more robust result (<xref ref-type="bibr" rid="B17">Liu et al., 2016</xref>). However, one area that has not been well explored is hybrid approaches for very large-scale tracking. Wide area tracking can be achieved using sensor fusion from a dynamic combination of mobile and stationary tracking (<xref ref-type="bibr" rid="B32">Pustka and Klinker, 2008</xref>). Deep Learning could be used to coordinate multiple tracking systems and provide some scene understanding (<xref ref-type="bibr" rid="B11">Garon and Lalonde, 2017</xref>). Finally, there is a recent trend toward AR cloud-based tracking, where features captured by a user&#x2019;s device are uploaded to the cloud and fused to provide a ubiquitous tracking service. HoloRoyale is one of the first examples of using city-scale AR tracking from an AR cloud service to enable collaborative gaming (<xref ref-type="bibr" rid="B34">Rompapas et al., 2019</xref>). Commercial software from companies such as Ubiquity6 (<xref ref-type="bibr" rid="B43">Ubiquity6, 2020</xref>) enables large-scale AR cloud tracking. However, none of these systems yet provide large-scale precise tracking, so more work is needed.</p>
</sec>
<sec id="s1-4">
<title>Research in Perception and Neuroscience</title>
<p>In addition to Grand Challenges in fundamental technology, there are other areas of AR that need to be addressed, such as exploring perceptual and neuroscience issues. AR systems create an illusion to convince the brain that virtual content actually exists in the real world. There are a number of perceptual problems that can occur in AR, classified into environmental, capturing, augmentation, display device, and user issues (<xref ref-type="bibr" rid="B16">Kruijff et al., 2010</xref>). Considerable research has been conducted on how to make AR content appear the same as real objects, including the use of virtual lighting (<xref ref-type="bibr" rid="B1">Agusanto et al., 2003</xref>), shadows (<xref ref-type="bibr" rid="B40">Sugano et al., 2003</xref>), real object occlusion (<xref ref-type="bibr" rid="B8">Breen et al., 1996</xref>) and similar methods. The goal is to create digital objects that have strong &#x201c;Object Presence&#x201d; and appear to be really there (<xref ref-type="bibr" rid="B39">Stevens and Jerrams-Smith, 2000</xref>). However, unlike Presence in Virtual Reality, Object Presence in AR has not been well studied. Most of these systems are evaluated using subjective measures, but EEG can be used as an objective measure to evaluate the quality of experience (<xref ref-type="bibr" rid="B5">Bauman and Seeling, 2018</xref>). EEG could also be used to explore the cognitive load of using AR interfaces, measure emotional response to AR stimuli, monitor shared brain activity in collaborative AR experiences, and more. So, there is significant opportunity to use neuroscience to understand the perceptual and psychological basis of AR.</p>
</sec>
<sec id="s1-5">
<title>Research in Collaboration</title>
<p>There are also many application areas that could be studied in more detail. One important area is using AR to enable remote people to work together as easily as if they were face to face. Early experiments showed that AR views of video avatars provided a significantly higher degree of Social Presence than traditional video conferencing (<xref ref-type="bibr" rid="B6">Billinghurst and Kato, 2002</xref>). More recently, Microsoft&#x2019;s Holoportation captured full 3D models of people in real time and showed them as life-sized AR avatars in a user&#x2019;s real environment, enabling the sharing of rich communication cues (<xref ref-type="bibr" rid="B28">Orts-Escolano et al., 2016</xref>). The company Spatial provides a commercial application that can superimpose AR avatars over the real world in a very natural way (<xref ref-type="bibr" rid="B38">Spatial.io, 2020</xref>).</p>
<p>There are also many examples of how wearable AR systems can be used to enable a remote expert to see through a local user&#x2019;s eyes and provide AR cues to help them perform real-world tasks (<xref ref-type="bibr" rid="B15">Kim et al., 2019</xref>). Microsoft&#x2019;s Remote Assist product (<xref ref-type="bibr" rid="B23">Microsoft, 2020b</xref>), and others, have made this type of experience commercially available. The emerging field of Empathic Computing (<xref ref-type="bibr" rid="B31">Piumsomboon et al., 2017</xref>) goes beyond this to explore how physiological cues can be combined with AR in collaborative interfaces to enable remote people to share what they are seeing, hearing and feeling. There is also an opportunity to study how to support viewing large-scale social networks in AR interfaces, including using visual and spatial cues to separate out dozens of social contacts (<xref ref-type="bibr" rid="B25">Nassani et al., 2017</xref>). However, there is still very little research on collaborative AR. A survey of 10&#xa0;years of user studies up to 2015 found that only 15 of the 369 AR studies reviewed were collaborative studies, and only seven of these used AR HMDs (<xref ref-type="bibr" rid="B10">Dey et al., 2018</xref>).</p>
</sec>
<sec id="s1-6">
<title>Research in Social and Ethical Issues</title>
<p>Finally, there are social and ethical issues that need to be addressed. The difficulty that Google Glass (<xref ref-type="bibr" rid="B13">Google, 2020b</xref>) and other AR displays have had in gaining consumer acceptance shows that widespread use of HMD-based AR may depend more on social than technical issues. Rauschnabel explored the technology acceptance drivers of AR smart glasses (<xref ref-type="bibr" rid="B33">Rauschnabel and Ro, 2016</xref>), while Pascoal studied acceptance in outdoor environments (<xref ref-type="bibr" rid="B29">Pascoal et al., 2018</xref>).</p>
<p>When AR devices become more widely used, a number of ethical issues may arise. Who should be allowed to place AR content in a person&#x2019;s view, and what are the ethics around AR advertising? What is the consequence of people having different views of the same real environment? Brinkman discusses the privacy implications of AR as an extension of the home and AR advertising (<xref ref-type="bibr" rid="B9">Brinkman, 2014</xref>). Pase lists a number of questionable ethical uses of pervasive AR, such as deception, surveillance, behavior modification, and punishment (<xref ref-type="bibr" rid="B30">Pase et al., 2012</xref>). AR technology could be used to create mediated reality experiences, removing certain parts of the real world from view, which could have public safety implications (<xref ref-type="bibr" rid="B21">Mann, 2002</xref>). Users capturing and sharing their surroundings for AR cloud tracking or remote collaboration could also raise significant privacy concerns. Wassom has written about the legal, ethical and privacy issues of AR (<xref ref-type="bibr" rid="B44">Wassom, 2014</xref>), but there is still much more research needed.</p>
</sec>
</sec>
<sec sec-type="conclusion" id="s2">
<title>Conclusion</title>
<p>Over 50 years ago, Sutherland provided a compelling vision of how the physical and digital worlds could be seamlessly combined. However, significant research is still needed to make this vision a reality. Grand Challenges exist in the fundamental display, interaction and tracking technologies, as well as in the perception and neuroscience of AR, the use of AR for collaboration, and its social and ethical aspects. Addressing these topics will enable Augmented Reality to reach its full potential as a transformative technology.</p>
</sec>
</body>
<back>
<sec id="s3">
<title>Author Contributions</title>
<p>The author confirms being the sole contributor of this work and has approved it for publication.</p>
</sec>
<sec sec-type="COI-statement" id="s4">
<title>Conflict of Interest</title>
<p>The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Agusanto</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Chuangui</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Sing</surname>
<given-names>N. W.</given-names>
</name>
</person-group> (<year>2003</year>). &#x201c;<article-title>Photorealistic rendering for augmented reality using environment illumination</article-title>,&#x201d; in <conf-name>The second IEEE and ACM international symposium on mixed and augmented reality</conf-name>, <conf-loc>Tokyo, Japan</conf-loc>, <conf-date>October 10, 2003</conf-date> (<publisher-name>Piscataway, New Jersey, USA: IEEE</publisher-name>), <fpage>208</fpage>&#x2013;<lpage>216</lpage>. </citation>
</ref>
<ref id="B2">
<citation citation-type="book">
<collab>Apple</collab> (<year>2020</year>). <source>ARKit</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://developer.apple.com/augmented-reality/">https://developer.apple.com/augmented-reality/</ext-link> (Accessed June 27, 2020).</citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Azuma</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>1997</year>). <article-title>A survey of augmented reality</article-title>. <source>Presence</source> <volume>6</volume> (<issue>4</issue>), <fpage>355</fpage>&#x2013;<lpage>385</lpage>. <pub-id pub-id-type="doi">10.1162/pres.1997.6.4.355</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Azuma</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2017</year>). &#x201c;<article-title>Making augmented reality a reality</article-title>,&#x201d; in <conf-name>Applied industrial optics: spectroscopy, imaging and metrology</conf-name>, <conf-loc>San Francisco, CA</conf-loc>, <conf-date>June 26&#x2013;29, 2019</conf-date> (<publisher-name>Washington, D.C, USA: Optical Society of America</publisher-name>), <fpage>JTu1F-1</fpage>. </citation>
</ref>
<ref id="B5">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Bauman</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Seeling</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Evaluation of EEG-based predictions of image QoE in augmented reality scenarios</article-title>,&#x201d; in <conf-name>2018 IEEE 88th vehicular technology conference (VTC-Fall)</conf-name>, <conf-loc>Chicago, IL</conf-loc>, <conf-date>August 27&#x2013;30, 2018</conf-date> (<publisher-name>Piscataway, New Jersey, USA: IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>5</lpage>. </citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Billinghurst</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kato</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2002</year>). <article-title>Collaborative augmented reality</article-title>. <source>Commun. ACM</source> <volume>45</volume> (<issue>7</issue>), <fpage>64</fpage>&#x2013;<lpage>70</lpage>. <pub-id pub-id-type="doi">10.1145/514236.514265</pub-id> </citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Billinghurst</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kato</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Poupyrev</surname>
<given-names>I.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Tangible augmented reality</article-title>. <source>ACM SIGGRAPH Asia</source> <volume>7</volume> (<issue>2</issue>), <fpage>1</fpage>&#x2013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1145/1508044.1508051</pub-id> </citation>
</ref>
<ref id="B8">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Breen</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Whitaker</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Rose</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Tuceryan</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>1996</year>). <article-title>Interactive occlusion and automatic object placement for augmented reality</article-title>. <source>Comput. Graphics Forum</source> <volume>15</volume> (<issue>3</issue>), <fpage>11</fpage>&#x2013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1111/1467-8659.1530011</pub-id> </citation>
</ref>
<ref id="B9">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Brinkman</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2014</year>). &#x201c;<article-title>Ethics and pervasive augmented reality: some challenges and approaches</article-title>,&#x201d; in <source>Emerging pervasive information and communication technologies (PICT). Law, governance and technology series</source>. Editor <person-group person-group-type="editor">
<name>
<surname>Pimple</surname>
<given-names>K.</given-names>
</name>
</person-group> (<publisher-loc>Dordrecht</publisher-loc>: <publisher-name>Springer</publisher-name>), <volume>Vol. 11</volume>, <fpage>149</fpage>&#x2013;<lpage>175</lpage>. </citation>
</ref>
<ref id="B10">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dey</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Billinghurst</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lindeman</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Swan</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>A systematic review of 10 years of augmented reality usability studies: 2005 to 2014</article-title>. <source>Front. Robot AI</source> <volume>5</volume>, <fpage>37</fpage>. <pub-id pub-id-type="doi">10.3389/frobt.2018.00037</pub-id> </citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Garon</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lalonde</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Deep 6-DOF tracking</article-title>. <source>IEEE Trans. Vis. Comput. Graph</source> <volume>23</volume> (<issue>11</issue>), <fpage>2410</fpage>&#x2013;<lpage>2418</lpage>. <pub-id pub-id-type="doi">10.1109/TVCG.2017.2734599</pub-id> </citation>
</ref>
<ref id="B12">
<citation citation-type="web">
<collab>Google</collab> (<year>2020a</year>). <article-title>ARCore</article-title>. Available at: <ext-link ext-link-type="uri" xlink:href="https://developers.google.com/ar">https://developers.google.com/ar</ext-link> (<comment>Accessed December 15, 2020</comment>). </citation>
</ref>
<ref id="B13">
<citation citation-type="web">
<collab>Google</collab> (<year>2020b</year>). <article-title>Google glass</article-title>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.google.com/glass/start/">https://www.google.com/glass/start/</ext-link> (<comment>Accessed February 4, 2020</comment>). </citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kim</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Billinghurst</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bruder</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Duh</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Welch</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Revisiting trends in augmented reality research: a review of the 2nd decade of ISMAR (2008&#x2013;2017)</article-title>. <source>IEEE Trans. Vis. Comput. Graph</source> <volume>24</volume> (<issue>11</issue>), <fpage>2947</fpage>&#x2013;<lpage>2962</lpage>. <pub-id pub-id-type="doi">10.1109/TVCG.2018.2868591</pub-id> </citation>
</ref>
<ref id="B15">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kim</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Woo</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Billinghurst</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2019</year>). &#x201c;<article-title>Evaluating the combination of visual communication cues for HMD-based mixed reality remote collaboration</article-title>,&#x201d; in <conf-name>Proceedings of the 2019 CHI conference on human factors in computing systems</conf-name>, <conf-loc>Glasgow Scotland, United Kingdom</conf-loc>, <conf-date>May, 2019</conf-date> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>13</lpage>. </citation>
</ref>
<ref id="B16">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kruijff</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Swan</surname>
<given-names>J. E.</given-names>
</name>
<name>
<surname>Feiner</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2010</year>). &#x201c;<article-title>Perceptual issues in augmented reality revisited</article-title>,&#x201d; in <conf-name>IEEE international symposium on mixed and augmented reality</conf-name>, <conf-loc>Seoul, Korea</conf-loc>, <conf-date>October 13&#x2013;16, 2010</conf-date> (<publisher-name>Piscataway, New Jersey, USA: IEEE</publisher-name>), <fpage>3</fpage>&#x2013;<lpage>12</lpage>. </citation>
</ref>
<ref id="B17">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Bao</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2016</year>). &#x201c;<article-title>Robust keyframe-based monocular SLAM for augmented reality</article-title>,&#x201d; in <conf-name>IEEE international symposium on mixed and augmented reality (ISMAR)</conf-name>, <conf-loc>Merida, Mexico</conf-loc>, <conf-date>September 19&#x2013;23, 2016</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>10</lpage>. </citation>
</ref>
<ref id="B18">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Cheng</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Hua</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2008</year>). &#x201c;<article-title>An optical see-through head mounted display with addressable focal planes</article-title>,&#x201d; in <conf-name>2008 7th IEEE/ACM international symposium on mixed and augmented reality</conf-name>, <conf-loc>Cambridge, United Kingdom</conf-loc>, <conf-date>September 15&#x2013;19, 2008</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>33</fpage>&#x2013;<lpage>42</lpage>. </citation>
</ref>
<ref id="B19">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maimone</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Georgiou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kollin</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Holographic near-eye displays for virtual and augmented reality</article-title>. <source>ACM Trans. Graph.</source> <volume>36</volume> (<issue>4</issue>), <fpage>1</fpage>&#x2013;<lpage>16</lpage>. <pub-id pub-id-type="doi">10.1145/3072959.3073624</pub-id> </citation>
</ref>
<ref id="B20">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Maimone</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Lanman</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Rathinavel</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Keller</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Luebke</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Fuchs</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2014</year>). &#x201c;<article-title>Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources</article-title>,&#x201d; in <conf-name>ACM SIGGRAPH 2014 emerging technologies</conf-name>, <conf-loc>Vancouver, Canada</conf-loc>, <conf-date>August, 2014</conf-date> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>1</fpage>. </citation>
</ref>
<ref id="B21">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Mann</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2002</year>). <source>Mediated reality with implementations for everyday life</source>. <publisher-name>Presence Connect</publisher-name>, <fpage>1</fpage>.</citation>
</ref>
<ref id="B22">
<citation citation-type="web">
<collab>Microsoft</collab> (<year>2020a</year>). <source>HoloLens 2</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.microsoft.com/en-us/hololens/">https://www.microsoft.com/en-us/hololens/</ext-link> (<comment>Accessed December 7, 2019</comment>).</citation>
</ref>
<ref id="B23">
<citation citation-type="web">
<collab>Microsoft</collab> (<year>2020b</year>). <source>Remote assist</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://dynamics.microsoft.com/mixed-reality/remote-assist/">https://dynamics.microsoft.com/mixed-reality/remote-assist/</ext-link> (<comment>Accessed May 28, 2020</comment>).</citation>
</ref>
<ref id="B24">
<citation citation-type="web">
<collab>Mojo Vision</collab> (<year>2020</year>). <article-title>Mojo Vision</article-title>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.mojo.vision/">https://www.mojo.vision/</ext-link> (<comment>Accessed April 29, 2020</comment>). </citation>
</ref>
<ref id="B25">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Nassani</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Billinghurst</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Langlotz</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Lindeman</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2017</year>). &#x201c;<article-title>Using visual and spatial cues to represent social contacts in AR</article-title>,&#x201d; in <conf-name>SIGGRAPH Asia 2017 mobile graphics and interactive applications</conf-name>, <conf-loc>Bangkok, Thailand</conf-loc>, <conf-date>November 2017</conf-date> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>6</lpage>. </citation>
</ref>
<ref id="B26">
<citation citation-type="web">
<collab>NintendoSoup</collab> (<year>2019</year>). <article-title>Pokemon go officially hits 1 billion downloads worldwide</article-title>. Available at: <ext-link ext-link-type="uri" xlink:href="https://nintendosoup.com/pokemon-go-officially-hits-1-billion-downloads-worldwide/">https://nintendosoup.com/pokemon-go-officially-hits-1-billion-downloads-worldwide/</ext-link> (<comment>Accessed April 11, 2019</comment>). </citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nizam</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Abidin</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Hashim</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Lam</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Arshad</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Majid</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>A review of multimodal interaction technique in augmented reality environment</article-title>. <source>Int. J. Adv. Sci. Eng. Inf. Technol.</source> <volume>8</volume> (<issue>4&#x2013;2</issue>), <fpage>1460</fpage>. <pub-id pub-id-type="doi">10.18517/ijaseit.8.4-2.6824</pub-id> </citation>
</ref>
<ref id="B28">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Orts-Escolano</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Rhemann</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Fanello</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Kowdle</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Degtyarev</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Tankovich</surname>
<given-names>V.</given-names>
</name>
</person-group> (<year>2016</year>). &#x201c;<article-title>Holoportation: virtual 3D teleportation in real-time</article-title>,&#x201d; in <conf-name>Proceedings of the 29th annual symposium on user interface software and technology</conf-name>, <conf-loc>Tokyo, Japan</conf-loc>, <conf-date>October 2016</conf-date> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>741</fpage>&#x2013;<lpage>754</lpage>. </citation>
</ref>
<ref id="B29">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Pascoal</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Alturas</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>de Almeida</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Sofia</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>A survey of augmented reality: making technology acceptable in outdoor environments</article-title>,&#x201d; in <conf-name>2018 13th Iberian conference on information systems and technologies (CISTI)</conf-name>, <conf-loc>Caceres, Spain</conf-loc>, <conf-date>June 13&#x2013;16, 2018</conf-date> (<publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>6</lpage>. </citation>
</ref>
<ref id="B30">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Pase</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2012</year>). &#x201c;<article-title>Ethical considerations in augmented reality applications</article-title>,&#x201d; in <conf-name>Proceedings of the international conference on e-learning, e-business, enterprise information systems, and e-government (EEE)</conf-name>, <conf-loc>Las Vegas, NV</conf-loc>, <conf-date>July 16&#x2013;19, 2012</conf-date> (<publisher-name>The Steering Committee of the World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp)</publisher-name>), <fpage>1</fpage>. </citation>
</ref>
<ref id="B31">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Piumsomboon</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Dey</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Billinghurst</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2017</year>). &#x201c;<article-title>Empathic mixed reality: sharing what you feel and interacting with what you see</article-title>,&#x201d; in <conf-name>International symposium on ubiquitous virtual reality (ISUVR)</conf-name>, <conf-loc>Nara, Japan</conf-loc>, <conf-date>June 27&#x2013;29, 2017</conf-date> (<publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>38</fpage>&#x2013;<lpage>41</lpage>. </citation>
</ref>
<ref id="B32">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Pustka</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Klinker</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2008</year>). &#x201c;<article-title>Dynamic gyroscope fusion in ubiquitous tracking environments</article-title>,&#x201d; in <conf-name>2008 7th IEEE/ACM international symposium on mixed and augmented reality</conf-name>, <conf-loc>Cambridge, United Kingdom</conf-loc>, <conf-date>September 15&#x2013;18, 2008</conf-date> (<publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>13</fpage>&#x2013;<lpage>20</lpage>. </citation>
</ref>
<ref id="B33">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rauschnabel</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Ro</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Augmented reality smart glasses: an investigation of technology acceptance drivers</article-title>. <source>Int. J. Technol. Mark.</source> <volume>11</volume> (<issue>2</issue>), <fpage>123</fpage>&#x2013;<lpage>148</lpage>. <pub-id pub-id-type="doi">10.1504/IJTMKT.2016.075690</pub-id> </citation>
</ref>
<ref id="B34">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rompapas</surname>
<given-names>D. C.</given-names>
</name>
<name>
<surname>Sandor</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Plopski</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Saakes</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Shin</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Taketomi</surname>
<given-names>T.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). <article-title>Towards large scale high fidelity collaborative augmented reality</article-title>. <source>Comput. Graph.</source> <volume>84</volume>, <fpage>24</fpage>&#x2013;<lpage>41</lpage>. <pub-id pub-id-type="doi">10.1016/j.cag.2019.08.007</pub-id> </citation>
</ref>
<ref id="B35">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sandor</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Fuchs</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Cassinelli</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Newcombe</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Yamamoto</surname>
<given-names>G.</given-names>
</name>
<etal/>
</person-group> (<year>2015</year>). <article-title>Breaking the barriers to true augmented reality</article-title>. <source>arXiv:1512.05471</source>. </citation>
</ref>
<ref id="B36">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Sharp</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Keskin</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Robertson</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Taylor</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Shotton</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>D.</given-names>
</name>
<etal/>
</person-group> (<year>2015</year>). &#x201c;<article-title>Accurate, robust, and flexible real-time hand tracking</article-title>,&#x201d; in <conf-name>Proceedings of the 33rd annual ACM conference on human factors in computing systems</conf-name>, <conf-loc>Seoul, South Korea</conf-loc>, <conf-date>April 2015</conf-date> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>3633</fpage>&#x2013;<lpage>3642</lpage>. </citation>
</ref>
<ref id="B37">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Si-Mohammed</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Petit</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Jeunet</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Argelaguet</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Spindler</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Evain</surname>
<given-names>A.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>Towards BCI-based interfaces for augmented reality: feasibility, design and evaluation</article-title>. <source>IEEE Trans. Vis. Comput. Graph.</source> <volume>26</volume> (<issue>3</issue>), <fpage>1608</fpage>&#x2013;<lpage>1621</lpage>. <pub-id pub-id-type="doi">10.1109/TVCG.2018.2873737</pub-id> </citation>
</ref>
<ref id="B38">
<citation citation-type="web">
<collab>Spatial.io</collab> (<year>2020</year>). <article-title>Spatial</article-title>. Available at: <ext-link ext-link-type="uri" xlink:href="https://spatial.io/">https://spatial.io/</ext-link> (<comment>Accessed May 1, 2020</comment>). </citation>
</ref>
<ref id="B39">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Stevens</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Jerrams-Smith</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2000</year>). &#x201c;<article-title>The sense of object-presence with projection-augmented models</article-title>,&#x201d; in <conf-name>International workshop on haptic human-computer interaction</conf-name>, <conf-loc>Glasgow, United Kingdom</conf-loc>, <conf-date>August 31&#x2013;September 1, 2000</conf-date> (<publisher-loc>Berlin, Heidelberg</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>194</fpage>&#x2013;<lpage>198</lpage>. </citation>
</ref>
<ref id="B40">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Sugano</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Kato</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Tachibana</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2003</year>). &#x201c;<article-title>The effects of shadow representation of virtual objects in augmented reality</article-title>,&#x201d; in <conf-name>The second IEEE and ACM international symposium on mixed and augmented reality</conf-name>, <conf-loc>Tokyo, Japan</conf-loc>, <conf-date>October 10, 2003</conf-date> (<publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>76</fpage>&#x2013;<lpage>83</lpage>. </citation>
</ref>
<ref id="B41">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Sutherland</surname>
<given-names>I.</given-names>
</name>
</person-group> (<year>1968</year>). &#x201c;<article-title>A head-mounted three dimensional display</article-title>,&#x201d; in <conf-name>Fall joint computer conference (Fall, part I)</conf-name>, <conf-loc>San Francisco, California</conf-loc>, <conf-date>December 9&#x2013;11, 1968</conf-date> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM Press</publisher-name>), <fpage>757</fpage>&#x2013;<lpage>764</lpage>. </citation>
</ref>
<ref id="B42">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sutherland</surname>
<given-names>I.</given-names>
</name>
</person-group> (<year>1965</year>). <article-title>The ultimate display</article-title>. <source>Proc. IFIP Congress</source> <volume>2</volume>, <fpage>506</fpage>&#x2013;<lpage>508</lpage>. </citation>
</ref>
<ref id="B43">
<citation citation-type="web">
<collab>Ubiquity6</collab> (<year>2020</year>). <article-title>Ubiquity6</article-title>. Available at: <ext-link ext-link-type="uri" xlink:href="https://ubiquity6.com/">https://ubiquity6.com/</ext-link> (<comment>Accessed January 14, 2020</comment>). </citation>
</ref>
<ref id="B44">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Wassom</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2014</year>). <source>Augmented reality law, privacy, and ethics: law, society, and emerging AR technologies</source>. <publisher-loc>Waltham, MA</publisher-loc>: <publisher-name>Syngress</publisher-name>, <fpage>360</fpage>.</citation>
</ref>
</ref-list>
</back>
</article>