<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2014.00009</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Asynchronous visual event-based time-to-contact</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Clady</surname> <given-names>Xavier</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Clercq</surname> <given-names>Charles</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Ieng</surname> <given-names>Sio-Hoi</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Houseini</surname> <given-names>Fouzhan</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Randazzo</surname> <given-names>Marco</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Natale</surname> <given-names>Lorenzo</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Bartolozzi</surname> <given-names>Chiara</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Benosman</surname> <given-names>Ryad</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Vision Institute, Universit&#x000E9; Pierre et Marie Curie, UMR S968 Inserm, UPMC, CNRS UMR 7210, CHNO des Quinze-Vingts</institution> <country>Paris, France</country></aff>
<aff id="aff2"><sup>2</sup><institution>iCub Facility, Istituto Italiano di Tecnologia</institution> <country>Genoa, Italy</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Jennifer Hasler, Georgia Institute of Technology, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Ueli Rutishauser, California Institute of Technology, USA; Leslie S. Smith, University of Stirling, UK; Scott M. Koziol, Baylor University, USA</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Xavier Clady, Vision Institute, Universit&#x000E9; Pierre et Marie Curie, UMR S968 Inserm, UPMC, CNRS UMR 7210, CHNO des Quinze-Vingts, 17 rue Moreau, 75012 Paris, France e-mail: <email>xavier.clady&#x00040;upmc.fr</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Neuromorphic Engineering, a section of the journal Frontiers in Neuroscience.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>07</day>
<month>02</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>8</volume>
<elocation-id>9</elocation-id>
<history>
<date date-type="received">
<day>30</day>
<month>09</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>16</day>
<month>01</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2014 Clady, Clercq, Ieng, Houseini, Randazzo, Natale, Bartolozzi and Benosman.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>Reliable and fast sensing of the environment is a fundamental requirement for autonomous mobile robotic platforms. Unfortunately, the frame-based acquisition paradigm underlying mainstream artificial perceptive systems is limited by low temporal dynamics and a redundant data flow, leading to high computational costs. Conventional sensing and the associated computation are therefore incompatible with the design of high-speed sensor-based reactive control for mobile applications, which pose strict limits on energy consumption and computational load. This paper introduces a fast obstacle avoidance method based on the output of an asynchronous event-based time-encoded imaging sensor. The proposed method relies on an event-based Time To Contact (TTC) computation derived from visual event-based motion flow. The approach is event-based in the sense that every incoming event contributes to the computation, allowing fast avoidance responses. The method is validated indoors on a mobile robot by comparing the event-based TTC with a laser range finder TTC, showing that event-based sensing offers new perspectives for mobile robotics sensing.</p></abstract>
<kwd-group>
<kwd>neuromorphic vision</kwd>
<kwd>event-based computation</kwd>
<kwd>time to contact</kwd>
<kwd>robotics</kwd>
<kwd>computer vision</kwd>
</kwd-group>
<counts>
<fig-count count="10"/>
<table-count count="1"/>
<equation-count count="11"/>
<ref-count count="30"/>
<page-count count="10"/>
<word-count count="6113"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>1. Introduction</title>
<p>A fundamental navigation task for autonomous mobile robots is to detect and avoid obstacles in their path. This paper introduces a full methodology for the event-based computation of Time To Contact (TTC) for obstacle avoidance, using an asynchronous event-based sensor.</p>
<p>Sensors such as ultrasonic sensors, laser range finders or infrared sensors are often mounted on board robotic platforms to provide the distance to obstacles. These active devices measure signals transmitted by the sensor and reflected by the obstacle(s). Their performance essentially depends on how the transmitted energy (ultrasonic waves, light,&#x02026;) interacts with the environment Everett (<xref ref-type="bibr" rid="B8">1995</xref>); Ge (<xref ref-type="bibr" rid="B10">2010</xref>).</p>
<p>These sensors have limitations. Ultrasonic sensors can produce measurement artifacts at corners and oblique surfaces, or even under temperature variations. Infrared-based sensors (including the recently emerged Time-Of-Flight or RGB-D cameras) are sensitive to sunlight and can fail if the obstacle absorbs the signal. Laser range finder readings may also be erroneous because of specular reflections; additionally, potential eye-safety problems limit the use of many laser sensors to environments where humans are not present. Moreover, most of these sensors are restricted in field-of-view and/or spatial resolution, requiring a mechanical scanning system or a network of several sensors. This leads to severe restrictions in terms of temporal responsiveness and computational load.</p>
<p>Vision can potentially overcome many of these restrictions; visual sensors often provide better resolution and a wider range at faster rates than active scanning sensors. Their capacity to detect the natural light reflected by objects or the surrounding areas paves the way for biologically-inspired approaches.</p>
<p>Several navigation strategies using vision have been proposed; the most common consists in extracting depth information from visual input. Stereo-vision techniques can produce accurate depth maps if the stability of the calibration parameters and a sufficiently large inter-camera distance can be ensured. However, these are strong requirements for high-speed and small robots. Another class of algorithms (Lorigo et al., <xref ref-type="bibr" rid="B20">1997</xref>; Ulrich and Nourbakhsh, <xref ref-type="bibr" rid="B28">2000</xref>) is based on color or texture segmentation of the ground plane. Even though this approach works on a single image, it requires the assumptions that the robot operates on a flat, uniformly colored/textured surface and that all objects have their bases on the ground.</p>
<p>Another extensively studied strategy is based on the evaluation of the TTC, denoted &#x003C4;. This measure, introduced by Lee (<xref ref-type="bibr" rid="B17">1976</xref>), corresponds to the time that would elapse before the robot reaches an obstacle if the current relative motion between the robot and the obstacle were to continue unchanged. Since the robot can navigate through the environment along a trajectory decomposed into straight lines (a classic and efficient strategy for autonomous robots in most environments), a general definition of the TTC can be expressed as follows:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mi>Z</mml:mi><mml:mrow><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>Z</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
<p>where <italic>Z</italic> is the distance between the camera and the obstacle, and <inline-formula><mml:math id="M12"><mml:mrow><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>Z</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:math></inline-formula> corresponds to the relative speed.</p>
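<p>The following minimal sketch illustrates Equation 1 numerically; the distance and speed values are purely illustrative and not taken from the experiments.</p>

```python
# Minimal numeric illustration of Equation (1): tau = -Z / (dZ/dt).
# The values below are purely illustrative, not experimental data.
def time_to_contact(Z, dZ_dt):
    """Return the TTC in seconds from distance Z (m) and its derivative dZ/dt (m/s)."""
    return -Z / dZ_dt

# A robot 2 m from an obstacle and closing in at 0.5 m/s (dZ/dt = -0.5)
# would reach it in 4 s if the relative motion continued unchanged:
tau = time_to_contact(2.0, -0.5)
print(tau)  # 4.0
```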
<p>The time-to-contact can be computed from visual information alone, without extracting relative depth or speed, as demonstrated by Camus (<xref ref-type="bibr" rid="B6">1995</xref>) (see Section 3.2). Its computation has the advantage of working with a single camera, without camera calibration or binding assumptions about the environment. Several techniques for measuring the TTC have been proposed. In Negre et al. (<xref ref-type="bibr" rid="B23">2006</xref>); Alyena et al. (<xref ref-type="bibr" rid="B1">2009</xref>), it is approximated by measuring the local scale change of the obstacle, under the assumption that the obstacle is planar and parallel to the image plane. This approach requires either precisely segmenting the obstacle in the image or computing complex features in multi-scale representations of the image. Most TTC methods rely on the estimation of optical flow. Optical flow conveys all the necessary information about the environment Gibson (<xref ref-type="bibr" rid="B11">1978</xref>), but its estimation in natural scenes is a notoriously difficult problem. Existing techniques are computationally expensive and are mostly used offline (Negahdaripour and Ganesan, <xref ref-type="bibr" rid="B22">1992</xref>; Horn et al., <xref ref-type="bibr" rid="B14">2007</xref>). Real-time implementations, whether gradient-based, feature-matching-based (Tomasi and Shi, <xref ref-type="bibr" rid="B27">1994</xref>) or differential, do not handle large displacements. Multi-scale processing, as proposed by Weber and Malik (<xref ref-type="bibr" rid="B29">1995</xref>), can cope with this limitation, but at the cost of computing time and the hardware memory needed to store and process frames at different scales and timings.</p>
<p>Rind and Simmons (<xref ref-type="bibr" rid="B26">1999</xref>) proposed a bio-inspired neural network modeling the lobula giant movement detector (LGMD), a visual part of the optic lobe of the locust that responds most strongly to approaching objects. In order to process the frames provided by a conventional camera, the existing implementations proposed by Blanchard et al. (<xref ref-type="bibr" rid="B3">2000</xref>) and Yue and Rind (<xref ref-type="bibr" rid="B30">2006</xref>) required a distributed computing environment (three PCs connected via Ethernet). Another promising approach consists in VLSI architectures implementing functional models of similar neural networks, but going beyond single proofs of concept&#x02014;such as the 1-D architecture of 25 pixels proposed by Indiveri (<xref ref-type="bibr" rid="B15">1998</xref>) modeling the locust descending contralateral movement detector (DCMD) neurons&#x02014;will require substantial investment. The hardware systems built in Manchester and Heidelberg, described respectively by Bruderle et al. (<xref ref-type="bibr" rid="B5">2011</xref>) and Furber et al. (<xref ref-type="bibr" rid="B9">2012</xref>), could be an answer to this issue.</p>
<p>Globally, most of these approaches suffer from the limitations imposed by the frame-based acquisition of visual information in conventional cameras, which output a large and redundant data flow at a relatively low temporal frequency. Most of the calculations are operated on uninformative parts of the images, or are dedicated to compensating for the lack of temporal precision. Existing implementations are often a trade-off between accuracy and efficiency and are restricted to mobile robots moving relatively slowly. For example, Low and Wyeth (<xref ref-type="bibr" rid="B21">2005</xref>) and Guzel and Bicker (<xref ref-type="bibr" rid="B13">2010</xref>) present navigation experiments on a wheeled mobile robotic platform using optical-flow-based TTC computation with an embedded conventional camera. Their software runs at approximately 5 Hz and the maximal speed of the mobile robot is limited to 0.2 m/s.</p>
<p>In this perspective, the frame-free acquisition of neuromorphic cameras (Guo et al., <xref ref-type="bibr" rid="B12">2007</xref>; Lichtsteiner et al., <xref ref-type="bibr" rid="B19">2008</xref>; Lenero-Bardallo et al., <xref ref-type="bibr" rid="B18">2011</xref>; Posch et al., <xref ref-type="bibr" rid="B25">2011</xref>) can introduce significant improvements in robotic applications. Such sensors are based on independent pixels that asynchronously collect and send their own data when the processed signal exceeds a tunable threshold. The resulting compressed stream of events includes the spatial location of active pixels and an accurate time stamp at which a given signal change occurs. Events can be processed locally while encoding the additional temporal dynamics of the scene.</p>
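<p>To make the event-stream representation concrete, the following sketch models such an asynchronous stream and processes it event by event; the field names and values are assumptions for illustration, not the sensor's actual interface.</p>

```python
# Illustrative model of an address-event stream: each active pixel
# independently emits (x, y, timestamp) records, so the output is a sparse,
# time-ordered sequence rather than dense frames. Names/values are invented.
from collections import namedtuple

Event = namedtuple("Event", ["x", "y", "t"])  # t: timestamp in microseconds

stream = [Event(12, 40, 100), Event(13, 40, 105), Event(12, 41, 230)]

def process(stream, on_event):
    """Event-driven processing: update state per incoming event,
    instead of re-scanning whole frames."""
    for ev in sorted(stream, key=lambda e: e.t):
        on_event(ev)

active = set()
process(stream, lambda ev: active.add((ev.x, ev.y)))
print(sorted(active))  # only pixels that actually changed appear
```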
<p>This article presents an event-based methodology to measure the TTC from the event stream provided by a neuromorphic vision sensor mounted on a wheeled robotic platform. The TTC is computed and then updated for each incoming event, minimizing the computational load of the robot. The performance of the developed event-based TTC is compared with that of a laser range finder, showing that event-driven sensing and computation, with their microsecond temporal resolution and inherent redundancy suppression, are a promising solution to vision-based technology for high-speed robots.</p>
<p>In the following we briefly introduce the neuromorphic vision sensor used (Section 2), describe the event-based approach proposed to compute the TTC (Section 3) and present experimental results validating the accuracy and robustness of the proposed technique on a mobile robot moving in an indoor environment (Section 4).</p>
</sec>
<sec>
<title>2. Time encoded imaging</title>
<p>Biomimetic, event-based cameras are a novel type of vision device that&#x02014;like their biological counterparts&#x02014;are driven by &#x0201C;events&#x0201D; happening within the scene, and not by artificially created timing and control signals (i.e., the frame clock of conventional image sensors) that have no relation whatsoever with the source of the visual information. Over the past few years, a variety of these event-based devices, reviewed in Delbruck et al. (<xref ref-type="bibr" rid="B7">2010</xref>), have been implemented, including temporal contrast vision sensors that are sensitive to relative light intensity change, gradient-based sensors sensitive to static edges, edge-orientation sensitive devices and optical-flow sensors. Most of these vision sensors encode visual information about the scene in the form of asynchronous address events (AER) Boahen (<xref ref-type="bibr" rid="B4">2000</xref>), using time rather than voltage, charge or current.</p>
<p>The ATIS (&#x0201C;Asynchronous Time-based Image Sensor&#x0201D;) used in this work is a time-domain encoding image sensor with QVGA resolution Posch et al. (<xref ref-type="bibr" rid="B25">2011</xref>). It contains an array of fully autonomous pixels that combine an illuminance change detector circuit and a conditional exposure measurement block.</p>
<p>As shown in the functional diagram of the ATIS pixel in Figure <xref ref-type="fig" rid="F1">1</xref>, the change detector individually and asynchronously initiates the measurement of an exposure/gray scale value only if a brightness change of a certain magnitude has been detected in the field-of-view of the respective pixel. The exposure measurement circuit encodes the absolute instantaneous pixel illuminance into the timing of asynchronous event pulses, more precisely into inter-event intervals.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Functional diagram of an ATIS pixel Posch (<xref ref-type="bibr" rid="B24">2010</xref>)</bold>. Two types of asynchronous events, encoding change and brightness information, are generated and transmitted individually by each pixel in the imaging array.</p></caption>
<graphic xlink:href="fnins-08-00009-g0001.tif"/>
</fig>
<p>Since the ATIS is not clocked, the timing of events can be conveyed with a very accurate temporal resolution in the order of microseconds. The time-domain encoding of the intensity information automatically optimizes the exposure time separately for each pixel instead of imposing a fixed integration time for the entire array, resulting in an exceptionally high dynamic range and an improved signal to noise ratio. The pixel-individual change detector driven operation yields almost ideal temporal redundancy suppression, resulting in a sparse encoding of the image data.</p>
<p>Figure <xref ref-type="fig" rid="F2">2</xref> shows the general principle of asynchronous imaging in a spatio-temporal representation. Frames are absent from this acquisition process. They can, however, be reconstructed when needed, at frequencies limited only by the temporal resolution of the pixel circuits (up to hundreds of kiloframes per second) (Figure <xref ref-type="fig" rid="F2">2</xref> top). Static objects and background information, if required, can be recorded as a snapshot at the start of an acquisition. From then on, moving objects in the visual scene describe a spatio-temporal surface at very high temporal resolution (Figure <xref ref-type="fig" rid="F2">2</xref> bottom).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Lower part: the spatio-temporal space of imaging events; static objects and scene background are acquired first</bold>. Then, dynamic objects trigger pixel-individual, asynchronous gray-level events after each change. Frames are absent from this acquisition process. Samples of images generated from the presented spatio-temporal space are shown in the upper part of the figure.</p></caption>
<graphic xlink:href="fnins-08-00009-g0002.tif"/>
</fig>
</sec>
<sec>
<title>3. Event-based TTC computation</title>
<sec>
<title>3.1. Event-based visual motion flow</title>
<p>The stream of events from the silicon retina can be mathematically defined as follows: let <italic>e</italic>(<bold>p</bold>, <italic>t</italic>) &#x0003D; (<bold>p</bold>, <italic>t</italic>)<sup><italic>T</italic></sup> be a triplet giving the position <bold>p</bold> &#x0003D; (<italic>x, y</italic>)<sup><italic>T</italic></sup> and the time <italic>t</italic> of an event. We can then define locally the function &#x003A3;<sub><italic>e</italic></sub> that maps each <bold>p</bold> to its time <italic>t</italic>:</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mtext>&#x000A0;&#x000A0;</mml:mtext><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo>:</mml:mo><mml:msup><mml:mi>&#x02115;</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mo>&#x02192;</mml:mo><mml:mi>&#x0211D;</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>p</mml:mtext><mml:mo>&#x021A6;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mtext>p</mml:mtext><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>t</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
<p>Time being an increasing function, &#x003A3;<sub><italic>e</italic></sub> is a monotonically increasing surface in the direction of the motion.</p>
<p>We then set the first partial derivatives with respect to the parameters as: <inline-formula><mml:math id="M13"><mml:mrow><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mrow><mml:msub><mml:mi>e</mml:mi><mml:mi>x</mml:mi></mml:msub></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mi>x</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="M14"><mml:mrow><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mrow><mml:msub><mml:mi>e</mml:mi><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mi>y</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:math></inline-formula> (see Figure <xref ref-type="fig" rid="F3">3</xref>). We can then write &#x003A3;<sub><italic>e</italic></sub> as:</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M3"><mml:mrow><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>+</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x02207;</mml:mo><mml:msubsup><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi><mml:mi>T</mml:mi></mml:msubsup><mml:mi>&#x00394;</mml:mi><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>+</mml:mo><mml:mi>o</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>&#x0007C;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>with <inline-formula><mml:math id="M15"><mml:mrow><mml:mo>&#x02207;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mi>x</mml:mi></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:mfrac><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mi>y</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mi>T</mml:mi></mml:msup></mml:mrow></mml:math></inline-formula>.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>General principle of visual flow computation, the surface of active events &#x003A3;<sub><italic>e</italic></sub> is derived to provide an estimation of orientation and amplitude of motion</bold>.</p></caption>
<graphic xlink:href="fnins-08-00009-g0003.tif"/>
</fig>
<p>The partial functions of &#x003A3;<sub><italic>e</italic></sub> are functions of a single variable, either <italic>x</italic> or <italic>y</italic>. Time being a strictly increasing function, &#x003A3;<sub><italic>e</italic></sub> is a surface with nonzero derivatives at every point. It is then possible to use the inverse function theorem to write, around a location <bold>p</bold> &#x0003D; (<italic>x, y</italic>)<sup><italic>T</italic></sup>:</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M4"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mi>x</mml:mi></mml:mrow></mml:mfrac><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo><mml:mfrac><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mi>y</mml:mi></mml:mrow></mml:mfrac><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>T</mml:mi></mml:msup><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mtext>d</mml:mtext><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mrow><mml:mi>y</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mtext>d</mml:mtext><mml:mi>x</mml:mi></mml:mrow></mml:mfrac><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo><mml:mfrac><mml:mrow><mml:mtext>d</mml:mtext><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mtext>d</mml:mtext><mml:mi>y</mml:mi></mml:mrow></mml:mfrac><mml:mo stretchy='false'>(</mml:mo><mml:mi>y</mml:mi><mml:mo 
stretchy='false'>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>T</mml:mi></mml:msup></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:msub><mml:mrow></mml:mrow><mml:mi>x</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:msub><mml:mrow></mml:mrow><mml:mi>y</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>T</mml:mi></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p><inline-formula><mml:math id="M17"><mml:mrow><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M18"><mml:mrow><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mrow><mml:mi>y</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> being &#x003A3;<sub><italic>e</italic></sub> restricted respectively to <italic>x</italic> &#x0003D; <italic>x</italic><sub>0</sub> and <italic>y</italic> &#x0003D; <italic>y</italic><sub>0</sub>, and <bold>v</bold><sub>n</sub>(<italic>x, y</italic>) &#x0003D; (<italic>v<sub>nx</sub>, v<sub>ny</sub></italic>)<sup><italic>T</italic></sup> represents the normal component of the visual motion flow; it is perpendicular to the object boundary (describing the local surface &#x003A3;<sub><italic>e</italic></sub>).</p>
<p>The gradient of &#x003A3;<sub><italic>e</italic></sub> or &#x02207;&#x003A3;<sub><italic>e</italic></sub>, is then:</p>
<disp-formula id="E5"><label>(5)</label><mml:math id="M5"><mml:mrow><mml:mo>&#x02207;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mtext>p</mml:mtext><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:msub><mml:mrow></mml:mrow><mml:mi>x</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:msub><mml:mrow></mml:mrow><mml:mi>y</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mi>T</mml:mi></mml:msup></mml:mrow></mml:math></disp-formula>
<p>The vector &#x02207;&#x003A3;<sub><italic>e</italic></sub> measures the rate and the direction of change of time with respect to space; its components are the inverses of the components of the velocity vector estimated at <bold>p</bold>.</p>
<p>The flow definition given by Equation 5 is sensitive to noise, since it consists in estimating the partial derivatives of &#x003A3;<sub><italic>e</italic></sub> at each individual event. One way to make the flow estimation robust against noise is to add a regularization process to the estimation. To achieve this, we assume local velocity constancy. This hypothesis is satisfied in practice for small clusters of events. It is then equivalent to assuming &#x003A3;<sub><italic>e</italic></sub> to be locally planar: since its partial spatial derivatives are the inverse of the speed, constant velocities produce a constant spatial rate of change in &#x003A3;<sub><italic>e</italic></sub>. Finally, the slope of the fitted plane with respect to the time axis is directly proportional to the motion velocity. The regularization also compensates for absent events in the neighborhood of the active events where motion is being computed: the plane fitting provides an approximation of the timing of not-yet-active spatial locations, due to the non-idealities and the asynchronous nature of the sensor. The reader interested in the computation of motion flow can refer to Benosman et al. (<xref ref-type="bibr" rid="B2">2014</xref>) for more details. A full characterization of its computational cost is proposed there; it shows that the event-based calculation requires much less computation time than the frame-based one.</p>
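<p>The local plane fitting can be sketched as a least-squares fit of <italic>t</italic> &#x0003D; <italic>ax</italic> + <italic>by</italic> + <italic>c</italic> over a small cluster of events, the inverse slopes giving the normal flow components of Equation 5. This is a simplified illustration under the local-velocity-constancy assumption, not the full method of Benosman et al. (2014).</p>

```python
# Simplified sketch of the local-plane regularization: fit t = a*x + b*y + c
# to a small spatio-temporal cluster of events by least squares; the inverses
# of the slopes a and b estimate the normal flow components (Equation 5).
import numpy as np

def fit_plane_flow(events):
    """events: (N, 3) array of rows (x, y, t). Returns (vx, vy)."""
    A = np.column_stack([events[:, 0], events[:, 1], np.ones(len(events))])
    (a, b, _), *_ = np.linalg.lstsq(A, events[:, 2], rcond=None)
    # Guard against a flat direction of the plane (no timing gradient along
    # that axis means the corresponding speed component tends to infinity).
    vx = 1.0 / a if abs(a) > 1e-9 else float("inf")
    vy = 1.0 / b if abs(b) > 1e-9 else float("inf")
    return vx, vy

# Synthetic edge sweeping along x at 2 pixels per time unit (t = 0.5 * x):
evts = np.array([[x, y, 0.5 * x] for x in range(5) for y in range(3)], float)
vx, vy = fit_plane_flow(evts)
print(vx)  # approximately 2.0
```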
</sec>
<sec>
<title>3.2. Time-to-contact</title>
<p>Assuming parts of the environment are static while the camera moves forward, the motion flow diverges around a point called the focus of expansion (FOE). The visual motion flow field and the corresponding focus of expansion can be used to determine the time-to-contact (TTC), or time-to-collision. If the camera is embedded on an autonomous robot moving with a constant velocity, the TTC can be determined without any knowledge of the distance to be traveled or of the robot&#x00027;s velocity.</p>
<p>We assume the obstacle is at <bold>P</bold> &#x0003D; (<italic>X<sub>c</sub>, Y<sub>c</sub>, Z<sub>c</sub></italic>)<sup><italic>T</italic></sup> in the camera coordinate frame and <bold>p</bold> &#x0003D; (<italic>x, y</italic>)<sup><italic>T</italic></sup> is its projection into the camera&#x00027;s focal plane coordinate frame (see Figure <xref ref-type="fig" rid="F4">4</xref>). The velocity vector <bold>V</bold> is also projected into the focal plane as <bold>v</bold> &#x0003D; (<inline-formula><mml:math id="M19"><mml:mrow><mml:mover accent='true'><mml:mi>x</mml:mi><mml:mo>&#x002D9;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mover accent='true'><mml:mi>y</mml:mi><mml:mo>&#x002D9;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula>)<sup><italic>T</italic></sup>.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>3D obstacle velocity <italic>V</italic> projected into the camera focal plane as <italic>v</italic></bold>. Dotted variables denote the temporal derivative of each component.</p></caption>
<graphic xlink:href="fnins-08-00009-g0004.tif"/>
</fig>
<p>By differentiating the pinhole model&#x00027;s equations, Camus (<xref ref-type="bibr" rid="B6">1995</xref>) demonstrates that, if the coordinates <bold>p</bold><sub><italic>f</italic></sub> &#x0003D; (<italic>x<sub>f</sub>, y<sub>f</sub></italic>)<sup><italic>T</italic></sup> of the FOE are known, the following relation is satisfied:</p>
<disp-formula id="E6"><label>(6)</label><mml:math id="M6"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mtext>&#x000A0;</mml:mtext><mml:mi>&#x003C4;</mml:mi><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>Z</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>Z</mml:mi><mml:mo>&#x002D9;</mml:mo></mml:mover><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mi>f</mml:mi></mml:msub></mml:mrow><mml:mover accent='true'><mml:mi>y</mml:mi><mml:mo>&#x002D9;</mml:mo></mml:mover></mml:mfrac><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>x</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>f</mml:mi></mml:msub></mml:mrow><mml:mover accent='true'><mml:mi>x</mml:mi><mml:mo>&#x002D9;</mml:mo></mml:mover></mml:mfrac><mml:mo>,</mml:mo><mml:mtext>where</mml:mtext></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mover accent='true'><mml:mi>Z</mml:mi><mml:mo>&#x002D9;</mml:mo></mml:mover><mml:mi>c</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mi>Z</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:mtext>&#x02009;&#x02009;</mml:mtext><mml:mover accent='true'><mml:mi>x</mml:mi><mml:mo>&#x002D9;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:mtext>&#x02009;&#x02009;</mml:mtext><mml:mover 
accent='true'><mml:mi>y</mml:mi><mml:mo>&#x002D9;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>With our notation, this is equivalent to:</p>
<disp-formula id="E7"><label>(7)</label><mml:math id="M7"><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mtext>v</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mi>f</mml:mi></mml:msub></mml:mrow></mml:math></disp-formula>
<p>The TTC is then obtained at pixel <bold>p</bold> according to the relation:</p>
<disp-formula id="E8"><label>(8)</label><mml:math id="M8"><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msup><mml:mtext>v</mml:mtext><mml:mi>T</mml:mi></mml:msup><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mi>f</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:mtext>v</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msup><mml:mo>&#x0007C;</mml:mo><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
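<p>Equation 8 can be checked numerically against the pinhole model (a self-contained sketch; the function name and the numeric values are illustrative, not taken from the experiments):</p>

```python
import numpy as np

def ttc_full_flow(p, p_f, v):
    """Equation 8: tau = v^T (p - p_f) / ||v||^2, using the full
    image velocity v at pixel p and the FOE p_f."""
    p, p_f, v = (np.asarray(a, dtype=float) for a in (p, p_f, v))
    return float(v @ (p - p_f)) / float(v @ v)

# Synthetic check (focal length f = 1): a static point at
# (X, Z) = (1, 10) while the camera advances with Z_dot = -2,
# so the ground-truth TTC is -Z / Z_dot = 5. For pure forward
# translation the FOE sits at the principal point (0, 0).
f, X, Z, Z_dot = 1.0, 1.0, 10.0, -2.0
p = (f * X / Z, 0.0)                 # projection of the point
v = (-f * X * Z_dot / Z**2, 0.0)     # its image velocity
tau = ttc_full_flow(p, (0.0, 0.0), v)  # close to the ground truth of 5
```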
<p>The TTC as defined is a signed real value because of the scalar product. Its sign indicates the direction of the motion: when &#x003C4; is positive the robot is moving toward the obstacle, and when &#x003C4; is negative it is moving away from it. This equality also shows that &#x003C4; can be determined only if the velocity <bold>v</bold> at <bold>p</bold> is known or can be estimated for any <bold>p</bold> at any time <italic>t</italic>. There is unfortunately no general technique for densely estimating the velocity <bold>v</bold> from the visual information. However, optical flow techniques allow the dense computation of the field of velocities normal to the edges, denoted <bold>v</bold><sub><italic>n</italic></sub>. The visual flow technique presented in Section 3.1 is well suited to compute &#x003C4;, not only because of its event-based formulation, but also because the component of <bold>v</bold> normal to the edge is sufficient to determine &#x003C4;. To see this, we take the scalar product of both sides of Equation 7 with &#x02207;&#x003A3;<sub><italic>e</italic></sub>:</p>
<disp-formula id="E9"><label>(9)</label><mml:math id="M9"><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mtext>v</mml:mtext><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mi>T</mml:mi></mml:msup><mml:mo>&#x02207;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mi>f</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mi>T</mml:mi></mml:msup><mml:mo>&#x02207;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Because <bold>v</bold> can be decomposed as the sum of a tangential vector <bold>v</bold><sub><italic>t</italic></sub> and a normal vector <bold>v</bold><sub><italic>n</italic></sub>, the left-hand side of Equation 9 simplifies into:</p>
<disp-formula id="E10"><label>(10)</label><mml:math id="M10"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mi>&#x003C4;</mml:mi><mml:msup><mml:mtext>v</mml:mtext><mml:mi>T</mml:mi></mml:msup><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02207;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>&#x003C4;</mml:mi><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mtext>v</mml:mtext><mml:mi>t</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:msub><mml:mtext>v</mml:mtext><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>T</mml:mi></mml:msup><mml:mo>&#x02207;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo 
stretchy='false'>)</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mo>=</mml:mo><mml:mi>&#x003C4;</mml:mi><mml:msubsup><mml:mtext>v</mml:mtext><mml:mi>n</mml:mi><mml:mi>T</mml:mi></mml:msubsup><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02207;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mi>&#x003C4;</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p><bold>v</bold><sup><italic>T</italic></sup><sub><italic>t</italic></sub> &#x02207; &#x003A3;<sub><italic>e</italic></sub> &#x0003D; 0 since the tangential component is orthogonal to &#x02207;&#x003A3;<sub><italic>e</italic></sub>. Therefore &#x003C4; is given by:</p>
<disp-formula id="E11"><label>(11)</label><mml:math id="M11"><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mi>f</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mi>T</mml:mi></mml:msup><mml:mo>&#x02207;</mml:mo><mml:msub><mml:mi>&#x003A3;</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
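<p>Equation 11 translates directly into code: since &#x02207;&#x003A3;<sub><italic>e</italic></sub> already stores the inverses of the normal flow components, no explicit velocity estimate is required (a minimal sketch; the function name is ours):</p>

```python
import numpy as np

def event_based_ttc(p, p_f, grad_sigma_e):
    """Equation 11: tau = 0.5 * (p - p_f)^T grad(Sigma_e),
    where grad_sigma_e holds the inverse normal-flow components
    of Equation 5."""
    p, p_f, g = (np.asarray(a, dtype=float) for a in (p, p_f, grad_sigma_e))
    return 0.5 * float((p - p_f) @ g)

# An event at p = (5, 5) receding from a FOE at (1, 1) with normal
# flow v_n = (2, 2) px/s gives grad(Sigma_e) = (0.5, 0.5), hence
# tau = 0.5 * (4 * 0.5 + 4 * 0.5) = 2 s, consistent with Equation 7.
tau = event_based_ttc((5.0, 5.0), (1.0, 1.0), (0.5, 0.5))
```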
</sec>
<sec>
<title>3.3. Focus of expansion</title>
<p>The FOE is the projection of the observer&#x00027;s direction of translation (or heading) onto the sensor&#x00027;s image plane. The radial pattern of flow depends only on the observer&#x00027;s heading and is independent of the 3D structure, while the magnitude of flow depends on both heading and depth. Thus, in principle, the FOE could be obtained by triangulating two vectors of a radial flow pattern. However, such a method would be vulnerable to noise. To calculate the FOE, we use the redundancy in the flow pattern to reduce errors.</p>
<p>The principle of the approach is described in Algorithm <xref ref-type="other" rid="A1">1</xref>. We consider a probability map of the visual field, where each point represents the likelihood of the FOE being located at the corresponding point in the field. Each flow vector provides an estimate of the location of the FOE: because the visual flow diverges from the FOE, the FOE lies in the negative semi-plane defined by the normal motion flow vector. So, for each incoming event, all the corresponding potential locations of the FOE are computed (step 3 in Algorithm <xref ref-type="other" rid="A1">1</xref>) and their likelihood is increased (step 4). The FOE is then shifted toward the location of the probability map with maximum value (step 6). This principle is illustrated in Figure <xref ref-type="fig" rid="F5">5A</xref>: the area with the maximum probability is highlighted as the intersection of the negative semi-planes defined by the normal motion flow vectors. Finally, an exponentially decreasing function is applied to the probability map (step 5); it updates the location of the FOE while giving more importance to the contributions of the most recent events and their associated flow.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>Computation of the focus of expansion: (A) the focus of expansion lies in the semi-plane defined by the normal flow, so each event votes for an area of the focal plane, shown in (B); the FOE is the maximum of this area. (C) Motion flow vectors obtained during a time period of &#x00394;<italic>t</italic> &#x0003D; 10 ms and superimposed over a corresponding snapshot (realized using the PWM grayscale events; the viewed pattern is the same as used in Experiment 1, cf. Figure <xref ref-type="fig" rid="F7">7</xref>)</bold>. Note that only the vectors with high amplitude are represented, to enhance the readability of the figure. Most of the motion flow vectors diverge from the estimated FOE. The white ellipse in the upper left corner shows a group of inconsistent motion flow vectors: they are probably due to a temporary micro-motion (vibration, or an unexpected roll, pitch, or yaw motion).</p></caption>
<graphic xlink:href="fnins-08-00009-g0005.tif"/>
</fig>
<table-wrap position="float" id="A1">
<label>Algorithm 1</label>
<caption><p><bold>Computation of the Focus Of Expansion</bold>.</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr>
<td align="left"><bold>Require</bold> <italic>M</italic><sub><italic>prob</italic></sub> &#x02208; &#x0211D;<sup><italic>m</italic></sup> &#x000D7; &#x0211D;<sup><italic>n</italic></sup> and <italic>M</italic><sub><italic>time</italic></sub> &#x02208; &#x0211D;<sup><italic>m</italic></sup> &#x000D7; &#x0211D;<sup><italic>n</italic></sup> (<italic>M</italic><sub><italic>prob</italic></sub> is the probability map and holds the likelihood for each spatial location and <italic>M</italic><sub><italic>time</italic></sub> the last time when its likelihood has been increased).</td>
</tr>
<tr>
<td align="left">1: Initialize the matrices <italic>M</italic><sub><italic>prob</italic></sub> and <italic>M</italic><sub><italic>time</italic></sub> to 0</td>
</tr>
<tr>
<td align="left">2: <bold>for</bold> every incoming <italic>e</italic>(<bold>p</bold>, <italic>t</italic>) at velocity <bold>v<sub>n</sub> do</bold></td>
</tr>
<tr>
<td align="left">3:&#x000A0;&#x000A0;&#x000A0;&#x000A0;Determine all spatial locations <bold>p</bold><italic><sub>i</sub></italic> such that (<bold>p</bold>&#x02212;<bold>p<sub>i</sub></bold>)<sup><italic>T</italic></sup>.<bold>v<sub>n</sub></bold> &#x0003E; 0</td>
</tr>
<tr>
<td align="left">4:&#x000A0;&#x000A0;&#x000A0;&#x000A0;for all <bold>p</bold><sub><italic>i</italic></sub>: <italic>M</italic><sub><italic>prob</italic></sub>(<bold>p</bold><sub><italic>i</italic></sub>) &#x0003D; <italic>M<sub>prob</sub></italic>(<bold>p</bold><sub><italic>i</italic></sub>) &#x0002B; 1 and <italic>M</italic><sub><italic>time</italic></sub>(<bold>p</bold><sub><italic>i</italic></sub>) &#x0003D; <italic>t</italic><sub><italic>i</italic></sub></td></tr>
<tr>
<td align="left">5:&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x02200; <bold>p</bold><sub><italic>i</italic></sub> &#x02208; &#x0211D;<sup><italic>m</italic></sup> &#x000D7; &#x0211D;<sup><italic>n</italic></sup>, update the probability map <italic>M</italic><sub><italic>prob</italic></sub>(<bold>p</bold><sub><italic>i</italic></sub>) &#x0003D; <italic>M</italic><sub><italic>prob</italic></sub>(<bold>p</bold><sub><italic>i</italic></sub>)<inline-formula><mml:math id="M16"><mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>t</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>M</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>m</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left">6:&#x000A0;&#x000A0;&#x000A0;&#x000A0;Find <bold>p</bold><sub><italic>f</italic></sub> &#x0003D; (<italic>x<sub>f</sub>, y<sub>f</sub></italic>)<sup><italic>T</italic></sup> the spatial location of the maximum value of <italic>M</italic><sub><italic>prob</italic></sub>, corresponding to the FOE location</td>
</tr>
<tr>
<td align="left">7: <bold>end for</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
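<p>A minimal dense implementation of Algorithm 1 could look as follows (a NumPy sketch of the voting and decay steps; the map size, time constant, and function name are our own choices, not the authors&#x00027; implementation):</p>

```python
import numpy as np

def update_foe(M_prob, M_time, p, v_n, t, dt=0.01):
    """One iteration of Algorithm 1 for event e(p, t) with normal
    flow v_n: vote for the semi-plane that can contain the FOE,
    decay stale votes, and return the current FOE estimate."""
    h, w = M_prob.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Step 3: locations p_i such that (p - p_i)^T v_n > 0.
    mask = (p[0] - xs) * v_n[0] + (p[1] - ys) * v_n[1] > 0
    # Step 4: increase their likelihood and stamp the update time.
    M_prob[mask] += 1.0
    M_time[mask] = t
    # Step 5: exponential decay favoring recent contributions.
    M_prob *= np.exp(-(t - M_time) / dt)
    # Step 6: the FOE is the location of the maximum likelihood.
    iy, ix = np.unravel_index(np.argmax(M_prob), M_prob.shape)
    return (ix, iy)  # (x_f, y_f)
```

<p>Feeding it a handful of events whose normal flows all point away from a common region concentrates the votes, and hence the FOE estimate, inside that region.</p>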
<p>Figures <xref ref-type="fig" rid="F5">5B</xref>,<xref ref-type="fig" rid="F5">C</xref> show real results obtained viewing a densely textured pattern (the same as used in Experiment 1, see Figure <xref ref-type="fig" rid="F7">7</xref>). Figure <xref ref-type="fig" rid="F5">5B</xref> shows the probability map, defined as an accumulative table, and the resulting FOE. The corresponding motion flow is given in Figure <xref ref-type="fig" rid="F5">5C</xref>; the normal motion vectors (with an amplitude above a threshold) computed in a time interval &#x00394;<italic>t</italic> &#x0003D; 10 ms are represented as yellow arrows. Globally, the estimated FOE is consistent with the motion flow. However, some small groups of vectors (an example is surrounded by a white dotted ellipse) seem to converge toward, instead of diverging from, the FOE. Such flow events do not occur at the same time as the others; they are most probably generated by a temporary micro-motion (vibration, or an unexpected roll, pitch, or yaw motion). The cumulative process filters out such noisy motions and keeps the FOE estimate stable.</p>
<p>Algorithm <xref ref-type="other" rid="A1">1</xref> above details how, for each incoming event <italic>e</italic>(<bold>p</bold>, <italic>t</italic>) with a velocity vector <bold>v</bold><sub>n</sub>, the FOE estimate is updated.</p>
</sec>
</sec>
<sec>
<title>4. Experimental results</title>
<p>The method proposed in the previous sections is validated in the experimental setup illustrated in Figure <xref ref-type="fig" rid="F6">6</xref>. The neuromorphic camera is mounted on a Pioneer 2 robotic platform, equipped with a Hokuyo laser range finder (LRF) providing the actual distance between the platform and the obstacles. In an experimental environment free of specular or transparent objects (as in the first experiment), the TTC based on the LRF can be estimated using Equation 1 and is used as the ground truth measure against which the event-based TTC is benchmarked.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p><bold>Experimental setup: (A) the Pioneer 2, (B) the asynchronous event-based ATIS camera, (C) the Hokuyo laser range finder (LRF)</bold>.</p></caption>
<graphic xlink:href="fnins-08-00009-g0006.tif"/>
</fig>
<p>In the first experiment, the robot moves forward and backward in the direction of a textured obstacle as shown in Figure <xref ref-type="fig" rid="F7">7</xref>; the corresponding TTC estimated by both sensors (LRF and ATIS) is shown in Figure <xref ref-type="fig" rid="F8">8A</xref>. The TTC is expressed in the coordinate system of the obstacle: the vertical axis corresponds to time and the horizontal axis to the size of the obstacle. The extremes (and the white parts of the plots) correspond to the changes of direction of the robot: when its speed tends to 0, the LRF-based TTC tends to infinity and the vision-based TTC cannot be computed because too few events are generated. In order to show comparable results, only the TTC values obtained with a robot speed above 0.1 m/s are shown; below this value, the robot&#x00027;s motion is relatively unstable, the platform tilting during the acceleration periods.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><bold>First experiment: (A) setup and location of the coordinate system (<italic>X<sub>O</sub>,Y<sub>O</sub>,Z<sub>O</sub></italic>) related to the obstacle; (B) distance between the robot and the obstacle, velocity of the robot, and the corresponding estimated TTC over time, computed from the robot&#x00027;s odometer</bold>. Only the TTC computed while the velocity of the robot is above 0.1 m/s is given, because it tends to infinity when the velocity tends to 0.</p></caption>
<graphic xlink:href="fnins-08-00009-g0007.tif"/>
</fig>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p><bold>Comparison of the results obtained while the robot is moving forward and backward in the direction of an obstacle</bold>. Results are expressed with respect to time and the coordinate system of the obstacle. <bold>(A)</bold> TTC computed using the LRF (right) and the ATIS (left). <bold>(B)</bold> Relative errors between both TTC estimations, illustrated using a color map (blue to red for increasing values).</p></caption>
<graphic xlink:href="fnins-08-00009-g0008.tif"/>
</fig>
<p>Figure <xref ref-type="fig" rid="F8">8B</xref> shows the relative error of the event-based TTC with respect to the ground truth given by the LRF-based TTC. The error is large during the phases of positive and negative acceleration of the robot. There are two potential explanations: the estimation of the speed of the robot based on the LRF is relatively inaccurate during changes of velocity, and abrupt changes of velocity can generate fast pitch motions which produce an unstable FOE. Globally, more than 60% of the relative errors are below 20%, showing that the event-based approach is robust and accurate when the motion of the robot is stable.</p>
<p>In the second experiment, the robot moves along a corridor. In these conditions, multiple objects reflect the light from the LRF, which fails to detect obstacles; the event-based algorithm, on the contrary, succeeds in estimating the TTC relative to the obstacles. Figure <xref ref-type="fig" rid="F9">9</xref> shows the robot&#x00027;s trajectory: during the first stage the robot navigates toward an obstacle (portion A-B of the trajectory). An avoidance maneuver is performed during portion B-C, after which the robot continues its trajectory to enter the warehouse (portion C-D). The estimated TTC to the closest obstacle is shown as red plots in Figure <xref ref-type="fig" rid="F9">9</xref> and compared to the ground truth given by the odometer&#x00027;s data (in blue). It corresponds to the TTC collected in a region of interest (ROI) of 60 &#x000D7; 60 pixels matching the closest obstacle. The image plane is segmented into four ROIs of 60 &#x000D7; 60 pixels (the four squares represented in Figure <xref ref-type="fig" rid="F10">10</xref>) around the x-coordinate of the FOE. Only the normal flow vectors in the lower ROIs whose activity, expressed as the number of events per second, is above a threshold (&#x0003E;5000 events/s) are considered, assuming that the closest obstacle is on the ground and is therefore viewed in the bottom part of the visual field.</p>
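<p>The activity gating of the ROIs can be sketched as a sliding-window event counter (an illustrative sketch; the class name and the window length are our own choices, while the 5000 events/s threshold comes from the text):</p>

```python
from collections import deque

class RoiActivityGate:
    """Keep an ROI only while its event rate exceeds a threshold
    (the text uses 5000 events/s over 60x60-pixel regions)."""

    def __init__(self, threshold_ev_per_s=5000.0, window_s=0.01):
        self.threshold = threshold_ev_per_s
        self.window = window_s
        self.stamps = deque()  # timestamps of recent events (seconds)

    def add_event(self, t):
        """Record one event falling inside the ROI at time t."""
        self.stamps.append(t)
        # Drop timestamps that fell out of the sliding window.
        while self.stamps and t - self.stamps[0] > self.window:
            self.stamps.popleft()

    def is_active(self):
        """True when the windowed event rate exceeds the threshold."""
        return len(self.stamps) / self.window > self.threshold
```

<p>An ROI receiving events every 100 &#x003BC;s (10,000 events/s) passes the gate, while one receiving events every millisecond (1000 events/s) does not.</p>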
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p><bold>Results of the second experiment: the top figure represents the trajectory followed by the robot, on a schematic view of the warehouse; the two middle figures represent data collected from the odometer (the trajectory and the speed of the robot); the bottom figures represent the time-to-contact estimated during the two time intervals; the red curves correspond to the TTC estimated from the neuromorphic camera&#x00027;s data, compared to an estimation of the TTC (blue curves) using the odometer&#x00027;s data and the knowledge of the obstacles&#x00027; locations in the map</bold>.</p></caption>
<graphic xlink:href="fnins-08-00009-g0009.tif"/>
</fig>
<fig id="F10" position="float">
<label>Figure 10</label>
<caption><p><bold>TTC computation: the yellow arrows represent the motion flow vectors obtained during a time period of 1 ms</bold>. These flow vectors are superimposed over a corresponding image of the scene (realized using the PWM grayscale events). In order to enhance the readability of the figure, only the 10% of vectors with the highest amplitudes and orientations close to &#x000B1;&#x003C0;/2 have been drawn. The red square corresponds to the ROI where the TTC is estimated.</p></caption>
<graphic xlink:href="fnins-08-00009-g0010.tif"/>
</fig>
<p>The small shift between them can be explained by the drift in the odometer&#x00027;s data, especially after the avoidance maneuver (Everett, <xref ref-type="bibr" rid="B8">1995</xref>; Ge, <xref ref-type="bibr" rid="B10">2010</xref>); a difference of 0.5 m has been observed between the real position of the robot and the odometer-based estimate of the ending location. This is an expected effect, as odometry typically drifts in such proportions (Ivanjko et al., <xref ref-type="bibr" rid="B16">2007</xref>). In addition, the estimations are slightly less precise once the robot is in the warehouse, where the poor environment, with white walls lacking texture or objects, produces fewer events and the computation degrades. This shows the robustness of the technique even in poorly textured environments.</p>
<p>All programs have been written in C&#x0002B;&#x0002B; under Linux and run in real time. We estimated the average time spent per event to compute the time-to-contact: it is approximately 20 &#x003BC;s per event on the computer used in the experiments (Intel Core i7 at 2.40 GHz). When the robot is at its maximum speed, the data stream acquired during 1 s is processed in 0.33 s. The estimation of the visual flow is the most computationally expensive task (&#x0003E;99% of the total computational cost), but it could easily be parallelized to further accelerate the processing.</p>
<p>The most significant result of this work is that the TTC can be computed at an unprecedented rate and with a low computational cost. The output frequency of our method exceeds 16 kHz, far above what can be expected from conventional cameras, which are limited by their frame-based acquisition and by the processing load needed to handle full frames.</p>
</sec>
<sec>
<title>5. Conclusions and perspectives</title>
<p>Vision-based navigation using conventional frame-based cameras is impractical given the limited resources usually embedded on autonomous robots. The correspondingly large amount of data to process is not compatible with fast and reactive navigation commands, especially when part of the processing is allocated to extracting the useful information. Such computational requirements are out of the reach of most small robots. Additionally, the temporal resolution of frame-based cameras trades off against the quantity of data that needs to be processed, posing limits on the robot&#x00027;s speed and computational demand. In this paper, we gave an example of a simple collision avoidance technique based on the estimation of the TTC, combining the use of an event-based vision sensor and a recently developed event-based optical flow. We showed that event-based techniques can solve vision tasks more efficiently than traditional approaches, which rely on complex and computationally hungry algorithms.</p>
<p>One remarkable highlight of this work is how well the event-based optical flow presented in Benosman et al. (<xref ref-type="bibr" rid="B2">2014</xref>) helped in estimating the TTC. This is because we have ensured the preservation of the high temporal dynamics of the signal from its acquisition to its processing. The precise timing conveyed by the neuromorphic camera allows processing locally around each event at a low computational cost, while ensuring a precise computation of the visual motion flow and thus of the TTC. The experiments carried out on a wheeled robotic platform support this statement, as the results are as reliable as the ones obtained with a laser range finder, at a much higher frequency. With event-based vision, the motion behavior of a robot could be controlled with a time delay far below the one inherent to the frame-based acquisition of conventional cameras.</p>
<p>The method described in this work relies on the constant velocity hypothesis, since Equation 1 is derived from that assumption. For this reason, the velocity normal to the edges is sufficient for the TTC estimation. For more general motion, the proposed method should be modified, for example by assuming the velocity to be constant only locally.</p>
<p>This work supports the observation that event-driven (bio-inspired) asynchronous sensing and computing open promising perspectives for autonomous robotic applications. Event-based approaches would allow small robots to avoid obstacles in natural environments at speeds that have never been achieved until now. Extending our approach to more complex scenarios than those exposed in this paper, and proposing a complete navigation system able to deal with motion or uncontrolled environments, requires combining the visual information with information provided by top-down processes and proprioceptive sensing, as humans and animals do.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
</sec>
</body>
<back>
<ack>
<p>This work benefitted from the fruitful discussions and collaborations fostered by the CapoCaccia Cognitive Neuromorphic Engineering Workshop and the NSF Telluride Neuromorphic Cognition workshops.</p>
</ack>
<sec>
<title>Funding</title>
<p>This work has been supported by the EU grant eMorph (ICT-FET-231467). The authors are also grateful to the Lifesense Labex.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Alyena</surname> <given-names>G.</given-names></name> <name><surname>Negre</surname> <given-names>A.</given-names></name> <name><surname>Crowley</surname> <given-names>J. L.</given-names></name></person-group>. (<year>2009</year>). <article-title>Time to contact for obstacle avoidance</article-title>, in <source>European Conference on Mobile Robotics</source> (<publisher-loc>Dubrovnik</publisher-loc>).</citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Benosman</surname> <given-names>R.</given-names></name> <name><surname>Clercq</surname> <given-names>C.</given-names></name> <name><surname>Lagorce</surname> <given-names>X.</given-names></name> <name><surname>Ieng</surname> <given-names>S.-H.</given-names></name> <name><surname>Bartolozzi</surname> <given-names>C.</given-names></name></person-group> (<year>2014</year>). <article-title>Event-based visual flow</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst</source>. <volume>25</volume>, <fpage>407</fpage>&#x02013;<lpage>417</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2013.2273537</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blanchard</surname> <given-names>M.</given-names></name> <name><surname>Rind</surname> <given-names>F.</given-names></name> <name><surname>Verschure</surname> <given-names>P. F.</given-names></name></person-group> (<year>2000</year>). <article-title>Collision avoidance using a model of the locust LGMD neuron</article-title>. <source>Robot. Auton. Syst</source>. <volume>30</volume>, <fpage>17</fpage>&#x02013;<lpage>38</lpage>. <pub-id pub-id-type="doi">10.1016/S0921-8890(99)00063-9</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boahen</surname> <given-names>K. A.</given-names></name></person-group> (<year>2000</year>). <article-title>Point-to-point connectivity between neuromorphic chips using address-events</article-title>. <source>IEEE Trans. Circuits Syst. II: Analog Digit. Signal Process</source>. <volume>47</volume>, <fpage>416</fpage>&#x02013;<lpage>434</lpage>. <pub-id pub-id-type="doi">10.1109/82.842110</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bruderle</surname> <given-names>D.</given-names></name> <name><surname>Petrovici</surname> <given-names>M. A.</given-names></name> <name><surname>Vogginger</surname> <given-names>B.</given-names></name> <name><surname>Ehrlich</surname> <given-names>M.</given-names></name> <name><surname>Pfeil</surname> <given-names>T.</given-names></name> <name><surname>Millner</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems</article-title>. <source>Biol. Cybern</source>. <volume>104</volume>, <fpage>263</fpage>&#x02013;<lpage>296</lpage>. <pub-id pub-id-type="doi">10.1007/s00422-011-0435-9</pub-id><pub-id pub-id-type="pmid">21618053</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Camus</surname> <given-names>T.</given-names></name></person-group> (<year>1995</year>). <article-title>Calculating time-to-contact using real-time quantized optical flow</article-title>. <source>National Institute of Standards and Technology NISTIR 5609</source>.</citation>
</ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Delbruck</surname> <given-names>T.</given-names></name> <name><surname>Linares-Barranco</surname> <given-names>B.</given-names></name> <name><surname>Culurciello</surname> <given-names>E.</given-names></name> <name><surname>Posch</surname> <given-names>C.</given-names></name></person-group> (<year>2010</year>). <article-title>Activity-driven, event-based vision sensors</article-title>, in <source>IEEE International Symposium on Circuits and Systems</source> (<publisher-loc>Paris</publisher-loc>), <fpage>2426</fpage>&#x02013;<lpage>2429</lpage>. <pub-id pub-id-type="doi">10.1109/ISCAS.2010.5537149</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Everett</surname> <given-names>H.</given-names></name></person-group> (<year>1995</year>). <source>Sensors for Mobile Robots: Theory and Applications</source>. <publisher-loc>Natick, MA</publisher-loc>: <publisher-name>A K Peters/CRC Press</publisher-name>.</citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Furber</surname> <given-names>S.</given-names></name> <name><surname>Lester</surname> <given-names>D.</given-names></name> <name><surname>Plana</surname> <given-names>L.</given-names></name> <name><surname>Garside</surname> <given-names>J.</given-names></name> <name><surname>Painkras</surname> <given-names>E.</given-names></name> <name><surname>Temple</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>Overview of the SpiNNaker system architecture</article-title>. <source>IEEE Trans. Comput</source>. <volume>62</volume>, <fpage>2454</fpage>&#x02013;<lpage>2467</lpage>. <pub-id pub-id-type="doi">10.1109/TC.2012.142</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ge</surname> <given-names>S.</given-names></name></person-group> (<year>2010</year>). <source>Autonomous Mobile Robots: Sensing, Control, Decision Making and Applications</source>. Automation and Control Engineering. <publisher-loc>Boca Raton, FL</publisher-loc>: <publisher-name>Taylor and Francis</publisher-name>.</citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gibson</surname> <given-names>J. J.</given-names></name></person-group> (<year>1978</year>). <article-title>The ecological approach to the visual perception of pictures</article-title>. <source>Leonardo</source> <volume>11</volume>, <fpage>227</fpage>&#x02013;<lpage>235</lpage>. <pub-id pub-id-type="doi">10.2307/1574154</pub-id><pub-id pub-id-type="pmid">18606293</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guo</surname> <given-names>X.</given-names></name> <name><surname>Qi</surname> <given-names>X.</given-names></name> <name><surname>Harris</surname> <given-names>J.</given-names></name></person-group> (<year>2007</year>). <article-title>A time-to-first-spike CMOS image sensor</article-title>. <source>IEEE Sens. J</source>. <volume>7</volume>, <fpage>1165</fpage>&#x02013;<lpage>1175</lpage>. <pub-id pub-id-type="doi">10.1109/JSEN.2007.900937</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Guzel</surname> <given-names>M.</given-names></name> <name><surname>Bicker</surname> <given-names>R.</given-names></name></person-group> (<year>2010</year>). <article-title>Optical flow based system design for mobile robots</article-title>, in <source>Robotics Automation and Mechatronics (RAM), 2010 IEEE Conference on</source> (<publisher-loc>Singapore</publisher-loc>), <fpage>545</fpage>&#x02013;<lpage>550</lpage>. <pub-id pub-id-type="doi">10.1109/RAMECH.2010.5513134</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Horn</surname> <given-names>B.</given-names></name> <name><surname>Fang</surname> <given-names>Y.</given-names></name> <name><surname>Masaki</surname> <given-names>I.</given-names></name></person-group> (<year>2007</year>). <article-title>Time to contact relative to a planar surface</article-title>, in <source>Intelligent Vehicles Symposium, 2007 IEEE</source> (<publisher-loc>Istanbul</publisher-loc>), <fpage>68</fpage>&#x02013;<lpage>74</lpage>. <pub-id pub-id-type="doi">10.1109/IVS.2007.4290093</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Indiveri</surname> <given-names>G.</given-names></name></person-group> (<year>1998</year>). <article-title>Analog VLSI model of locust DCMD neuron response for computation of object approach</article-title>. <source>Prog. Neural Process</source>. <volume>10</volume>, <fpage>47</fpage>&#x02013;<lpage>60</lpage>. <pub-id pub-id-type="doi">10.1142/9789812816535_0005</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ivanjko</surname> <given-names>E.</given-names></name> <name><surname>Komsic</surname> <given-names>I.</given-names></name> <name><surname>Petrovic</surname> <given-names>I.</given-names></name></person-group> (<year>2007</year>). <article-title>Simple off-line odometry calibration of differential drive mobile robots</article-title>, in <source>Proceedings of 16th Int. Workshop on Robotics in Alpe-Adria-Danube Region-RAAD</source>. (<publisher-loc>Ljubljana</publisher-loc>).</citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>D. N.</given-names></name></person-group> (<year>1976</year>). <article-title>A theory of visual control of braking based on information about time-to-collision</article-title>. <source>Perception</source> <volume>5</volume>, <fpage>437</fpage>&#x02013;<lpage>459</lpage>. <pub-id pub-id-type="doi">10.1068/p050437</pub-id><pub-id pub-id-type="pmid">1005020</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lenero-Bardallo</surname> <given-names>J.</given-names></name> <name><surname>Serrano-Gotarredona</surname> <given-names>T.</given-names></name> <name><surname>Linares-Barranco</surname> <given-names>B.</given-names></name></person-group> (<year>2011</year>). <article-title>A 3.6 &#x003BC;s latency asynchronous frame-free event-driven dynamic-vision-sensor</article-title>. <source>J. Solid-State Circ</source>. <volume>46</volume>, <fpage>1443</fpage>&#x02013;<lpage>1455</lpage>. <pub-id pub-id-type="doi">10.1109/JSSC.2011.2118490</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lichtsteiner</surname> <given-names>P.</given-names></name> <name><surname>Posch</surname> <given-names>C.</given-names></name> <name><surname>Delbruck</surname> <given-names>T.</given-names></name></person-group> (<year>2008</year>). <article-title>A 128&#x000D7;128 120 dB 15 &#x003BC;s latency asynchronous temporal contrast vision sensor</article-title>. <source>J. Solid-State Circ</source>. <volume>43</volume>, <fpage>566</fpage>&#x02013;<lpage>576</lpage>. <pub-id pub-id-type="doi">10.1109/JSSC.2007.914337</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Lorigo</surname> <given-names>L.</given-names></name> <name><surname>Brooks</surname> <given-names>R.</given-names></name> <name><surname>Grimson</surname> <given-names>W. E. L.</given-names></name></person-group> (<year>1997</year>). <article-title>Visually-guided obstacle avoidance in unstructured environments</article-title>, in <source>Proceedings of the IEEE International Conference on Intelligent Robots and Systems</source> <volume>Vol. 1</volume> (<publisher-loc>Grenoble</publisher-loc>), <fpage>373</fpage>&#x02013;<lpage>379</lpage>. <pub-id pub-id-type="doi">10.1109/IROS.1997.649086</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Low</surname> <given-names>T.</given-names></name> <name><surname>Wyeth</surname> <given-names>G.</given-names></name></person-group> (<year>2005</year>). <article-title>Obstacle detection using optical flow</article-title>, in <source>Proceedings of Australasian Conference on Robotics and Automation</source> (<publisher-loc>Sydney, NSW</publisher-loc>).</citation>
</ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Negahdaripour</surname> <given-names>S.</given-names></name> <name><surname>Ganesan</surname> <given-names>V.</given-names></name></person-group> (<year>1992</year>). <article-title>Simple direct computation of the FOE with confidence measures</article-title>, in <source>Computer Vision and Pattern Recognition</source> (<publisher-loc>Champaign, IL</publisher-loc>), <fpage>228</fpage>&#x02013;<lpage>235</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.1992.22327</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Negre</surname> <given-names>A.</given-names></name> <name><surname>Braillon</surname> <given-names>C.</given-names></name> <name><surname>Crowley</surname> <given-names>J. L.</given-names></name> <name><surname>Laugier</surname> <given-names>C.</given-names></name></person-group> (<year>2006</year>). <article-title>Real-time time-to-collision from variation of intrinsic scale</article-title>, in <source>International Symposium of Experimental Robotics</source> (<publisher-loc>Rio de Janeiro</publisher-loc>), <fpage>75</fpage>&#x02013;<lpage>84</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-540-77457-0_8</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Posch</surname> <given-names>C.</given-names></name></person-group> (<year>2010</year>). <article-title>High-DR frame-free PWM imaging with asynchronous AER intensity encoding and focal-plane temporal redundancy suppression</article-title>, in <source>Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on Circuits and Systems</source> (<publisher-loc>Paris</publisher-loc>). <pub-id pub-id-type="doi">10.1109/ISCAS.2010.5537150</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Posch</surname> <given-names>C.</given-names></name> <name><surname>Matolin</surname> <given-names>D.</given-names></name> <name><surname>Wohlgenannt</surname> <given-names>R.</given-names></name></person-group> (<year>2011</year>). <article-title>A QVGA 143 dB dynamic range frame-free PWM image sensor with lossless pixel-level video compression and time-domain CDS</article-title>. <source>J. Solid-State Circ</source>. <volume>46</volume>, <fpage>259</fpage>&#x02013;<lpage>275</lpage>. <pub-id pub-id-type="doi">10.1109/JSSC.2010.2085952</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rind</surname> <given-names>F. C.</given-names></name> <name><surname>Simmons</surname> <given-names>P. J.</given-names></name></person-group> (<year>1999</year>). <article-title>Seeing what is coming: building collision-sensitive neurones</article-title>. <source>Trends Neurosci</source>. <volume>22</volume>, <fpage>215</fpage>&#x02013;<lpage>220</lpage>. <pub-id pub-id-type="doi">10.1016/S0166-2236(98)01332-0</pub-id><pub-id pub-id-type="pmid">10322494</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Tomasi</surname> <given-names>C.</given-names></name> <name><surname>Shi</surname> <given-names>J.</given-names></name></person-group> (<year>1994</year>). <article-title>Good features to track</article-title>, in <source>Proceedings CVPR &#x00027;94., IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1994</source>, (<publisher-loc>Seattle, WA</publisher-loc>), <fpage>593</fpage>&#x02013;<lpage>600</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.1994.323794</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ulrich</surname> <given-names>I.</given-names></name> <name><surname>Nourbakhsh</surname> <given-names>I. R.</given-names></name></person-group> (<year>2000</year>). <article-title>Appearance-based obstacle detection with monocular color vision</article-title>, in <source>Proceedings of the International Conference on AAAI/IAAI</source> (<publisher-loc>Austin, TX</publisher-loc>), <fpage>866</fpage>&#x02013;<lpage>871</lpage>.</citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Weber</surname> <given-names>J.</given-names></name> <name><surname>Malik</surname> <given-names>J.</given-names></name></person-group> (<year>1995</year>). <article-title>Robust computation of optical flow in a multi-scale differential framework</article-title>. <source>Int. J. Comput. Vis</source>. <volume>14</volume>, <fpage>67</fpage>&#x02013;<lpage>81</lpage>. <pub-id pub-id-type="doi">10.1007/BF01421489</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yue</surname> <given-names>S.</given-names></name> <name><surname>Rind</surname> <given-names>F. C.</given-names></name></person-group> (<year>2006</year>). <article-title>Collision detection in complex dynamic scenes using an LGMD-based visual neural network with feature enhancement</article-title>. <source>Neural Netw. IEEE Trans</source>. <volume>17</volume>, <fpage>705</fpage>&#x02013;<lpage>716</lpage>. <pub-id pub-id-type="doi">10.1109/TNN.2006.873286</pub-id><pub-id pub-id-type="pmid">16722174</pub-id></citation>
</ref>
</ref-list>
</back>
</article>
