AUTHOR=Su Hang, Qi Wen, Chen Jiahao, Yang Chenguang, Sandoval Juan, Laribi Med Amine
TITLE=Recent advancements in multimodal human–robot interaction
JOURNAL=Frontiers in Neurorobotics
VOLUME=17
YEAR=2023
URL=https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2023.1084000
DOI=10.3389/fnbot.2023.1084000
ISSN=1662-5218
ABSTRACT=Robotics has advanced significantly over the decades, and human–robot interaction (HRI) now plays an important part in delivering the best experience to users, reducing laborious tasks, and raising public acceptance of robots. To further the evolution of robots, new HRI approaches are necessary, and a more natural and flexible manner of interaction is clearly the most crucial requirement. As a newly emerging approach, multimodal HRI allows individuals to communicate with a robot through a variety of modalities, including voice, image, writing, eye movement, touch, and emotion, as well as bio-signals such as EEG and ECG. It is a broad field closely tied to cognitive science, ergonomics, multimedia technology, and VR, with numerous applications emerging each year. However, a systematic review of the state of the art in multimodal HRI applications has yet to be conducted. To this end, this paper examines the current advancements and emerging directions of multimodal HRI by summarizing the latest research articles on its applications. It also reviews research developments in terms of input signals, output signals, and the practical applications of multimodal HRI.