AUTHOR=Ribeiro-Gomes José, Gaspar José, Bernardino Alexandre
TITLE=Event-based feature tracking in a visual inertial odometry framework
JOURNAL=Frontiers in Robotics and AI
VOLUME=10
YEAR=2023
URL=https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2023.994488
DOI=10.3389/frobt.2023.994488
ISSN=2296-9144
ABSTRACT=Event cameras report pixel-wise brightness changes at high temporal resolution, allowing for high-speed tracking of features in visual inertial odometry (VIO) estimation, but they require a paradigm shift, as common practices from past decades of conventional cameras, such as feature detection and tracking, do not translate directly. One method for feature detection and tracking is the Event-based Kanade-Lucas-Tomasi tracker (EKLT), a hybrid approach that combines frames with events to provide high-speed tracking of features. Despite the high temporal resolution of the events, the local nature of the feature registration imposes conservative limits on the camera motion speed. Our proposed approach expands on EKLT by relying on the concurrent use of the event-based feature tracker with a visual inertial odometry system performing pose estimation, leveraging frames, events, and Inertial Measurement Unit (IMU) information to improve tracking. The inertial information, despite being prone to drift over time, allows keeping track of the features. In turn, feature tracking synergistically helps estimate and minimize the drift. The problem of temporally combining high-rate IMU information with asynchronous event cameras is solved by means of an asynchronous probabilistic filter, in particular an Unscented Kalman Filter (UKF). The proposed method of feature tracking based on EKLT takes into account the state estimate of the pose estimator, which runs in parallel, and provides this information to the feature tracker, resulting in a synergy that can improve not only the feature tracking, but also the pose estimation. This approach can be seen as feedback, where the state estimate of the filter is fed back into the tracker, which then produces visual information for the filter, creating a “closed loop”. To the best of our knowledge, this is the first work proposing the fusion of visual and inertial information using event cameras by means of a UKF, as well as the use of EKLT in the context of pose estimation. Furthermore, our closed-loop approach proved to be an improvement over the base EKLT, resulting in better feature tracking and pose estimation. The method is tested on rotational motions only.
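
Below is a minimal, illustrative sketch (not the authors' code) of the asynchronous fusion scheme the abstract describes: an Unscented Kalman Filter whose state is propagated with each incoming IMU sample and corrected with each event-driven feature observation. The process model f, measurement model h, noise levels, and dimensions are all hypothetical placeholders chosen for clarity, not taken from the paper.

import numpy as np

class AsyncUKF:
    """Generic UKF that processes IMU samples and feature observations
    in timestamp order, regardless of their (asynchronous) rates."""

    def __init__(self, x0, P0, Q, R, f, h, alpha=1e-3, beta=2.0, kappa=0.0):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R
        self.f, self.h = f, h                      # assumed process / measurement models
        n = x0.size
        lam = alpha**2 * (n + kappa) - n
        self.gamma = np.sqrt(n + lam)
        self.Wm = np.full(2 * n + 1, 0.5 / (n + lam))
        self.Wc = self.Wm.copy()
        self.Wm[0] = lam / (n + lam)
        self.Wc[0] = lam / (n + lam) + 1.0 - alpha**2 + beta
        self.t = 0.0                               # time of the current estimate

    def _sigma_points(self):
        S = np.linalg.cholesky(self.P)             # P must remain positive definite
        X = [self.x]
        for i in range(self.x.size):
            X.append(self.x + self.gamma * S[:, i])
            X.append(self.x - self.gamma * S[:, i])
        return np.array(X)

    def predict(self, u, t):
        """Propagate the state to time t using a high-rate IMU sample u."""
        dt = t - self.t
        Xp = np.array([self.f(xi, u, dt) for xi in self._sigma_points()])
        self.x = self.Wm @ Xp
        d = Xp - self.x
        self.P = d.T @ (self.Wc[:, None] * d) + self.Q * dt  # simple noise scaling
        self.t = t

    def update(self, z):
        """Correct the state with an event-driven feature observation z."""
        X = self._sigma_points()
        Z = np.array([self.h(xi) for xi in X])
        zbar = self.Wm @ Z
        dz, dx = Z - zbar, X - self.x
        Pzz = dz.T @ (self.Wc[:, None] * dz) + self.R
        Pxz = dx.T @ (self.Wc[:, None] * dz)
        K = Pxz @ np.linalg.inv(Pzz)               # Kalman gain
        self.x = self.x + K @ (z - zbar)
        self.P = self.P - K @ Pzz @ K.T

# Toy usage with placeholder models (purely illustrative):
f = lambda x, u, dt: x + dt * u                    # e.g., integrate a rate measurement
h = lambda x: x[:2]                                # e.g., observe part of the state
ukf = AsyncUKF(np.zeros(3), 0.1 * np.eye(3), 1e-3 * np.eye(3), 1e-2 * np.eye(2), f, h)
ukf.predict(np.array([0.1, 0.0, 0.0]), t=0.01)     # IMU sample at t = 10 ms
ukf.update(np.array([0.001, 0.0]))                 # asynchronous feature observation

In the paper's setting the state would encode (at least) the camera orientation and gyro bias, h would predict feature locations, and after each step the estimate ukf.x would be fed back to the EKLT-style tracker as a motion prior, realizing the “closed loop” the abstract describes; those details are beyond this sketch.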