<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Robot. AI</journal-id>
<journal-title>Frontiers in Robotics and AI</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Robot. AI</abbrev-journal-title>
<issn pub-type="epub">2296-9144</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">898075</article-id>
<article-id pub-id-type="doi">10.3389/frobt.2022.898075</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Robotics and AI</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>GradTac: Spatio-Temporal Gradient Based Tactile Sensing</article-title>
<alt-title alt-title-type="left-running-head">Ganguly et al.</alt-title>
<alt-title alt-title-type="right-running-head">GradTac: Spatio-Temporal Tactile Sensing</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Ganguly</surname>
<given-names>Kanishka</given-names>
</name>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1701927/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Mantripragada</surname>
<given-names>Pavan</given-names>
</name>
<uri xlink:href="https://loop.frontiersin.org/people/1803878/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Parameshwara</surname>
<given-names>Chethan M.</given-names>
</name>
<uri xlink:href="https://loop.frontiersin.org/people/1803949/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ferm&#xfc;ller</surname>
<given-names>Cornelia</given-names>
</name>
<uri xlink:href="https://loop.frontiersin.org/people/299990/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sanket</surname>
<given-names>Nitin J.</given-names>
</name>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Aloimonos</surname>
<given-names>Yiannis</given-names>
</name>
<uri xlink:href="https://loop.frontiersin.org/people/134547/overview"/>
</contrib>
</contrib-group>
<aff>
<institution>Perception and Robotics Group</institution>, <institution>University of Maryland</institution>, <addr-line>College Park</addr-line>, <addr-line>MD</addr-line>, <country>United States</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/687608/overview">Shan Luo</ext-link>, University of Liverpool, United Kingdom</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/533059/overview">Qiang Li</ext-link>, Bielefeld University, Germany</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1781582/overview">Guanqun Cao</ext-link>, University of Liverpool, United Kingdom</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Kanishka Ganguly, <email>kganguly@terpmail.umd.edu</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Robot Learning and Evolution, a section of the journal Frontiers in Robotics and AI</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>17</day>
<month>06</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>9</volume>
<elocation-id>898075</elocation-id>
<history>
<date date-type="received">
<day>16</day>
<month>03</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>18</day>
<month>05</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2022 Ganguly, Mantripragada, Parameshwara, Ferm&#xfc;ller, Sanket and Aloimonos.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Ganguly, Mantripragada, Parameshwara, Ferm&#xfc;ller, Sanket and Aloimonos</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Tactile sensing for robotics is achieved through a variety of mechanisms, including magnetic, optical-tactile, and conductive fluid. Currently, the fluid-based sensors have struck the right balance between anthropomorphic sizes and shapes and accuracy of tactile response measurement. However, this design suffers from a low Signal to Noise Ratio (SNR), because the fluid-based sensing mechanism &#x201c;damps&#x201d; the measurements in ways that are hard to model. To this end, we present a spatio-temporal gradient representation of the data obtained from fluid-based tactile sensors, which is inspired by neuromorphic principles of event-based sensing. We present a novel algorithm (GradTac) that converts discrete data points from spatial tactile sensors into spatio-temporal surfaces and tracks tactile contours across these surfaces. Processing the tactile data in the proposed spatio-temporal domain is robust, less susceptible to the inherent noise of the fluid-based sensors, and allows more accurate tracking of regions of touch than using the raw data. We successfully evaluate and demonstrate the efficacy of GradTac on many real-world experiments performed using the Shadow Dexterous Hand, equipped with the BioTac SP sensors. Specifically, we use it for tracking tactile input across the sensor&#x2019;s surface, measuring relative forces, detecting linear and rotational slip, and for edge tracking. We also release an accompanying task-agnostic dataset for the BioTac SP, which we hope will provide a resource to compare and quantify various novel approaches, and motivate further research.</p>
</abstract>
<kwd-group>
<kwd>tactile-sensing</kwd>
<kwd>tactile-events</kwd>
<kwd>active-perception</kwd>
<kwd>event-based</kwd>
<kwd>bio-inspired</kwd>
</kwd-group>
<contract-num rid="cn001">BCS 1824198 OISE 2020624</contract-num>
<contract-num rid="cn002">W911NF2120076</contract-num>
<contract-sponsor id="cn001">National Science Foundation<named-content content-type="fundref-id">10.13039/100000001</named-content>
</contract-sponsor>
<contract-sponsor id="cn002">Army Research Laboratory<named-content content-type="fundref-id">10.13039/100006754</named-content>
</contract-sponsor>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>Computational tactile sensing has myriad applications in robotics, especially in tasks related to grasping and manipulation. The robotics community has put a significant amount of effort into the design of hardware and algorithms to equip robots with tactile sensing capabilities that rival those of human skin. Decades of research have established fluid-based sensing mechanisms as the gold standard for striking a balance between anthropomorphic shapes, sizes, and responses. However, even as computational algorithms have come to use such sensors widely, some largely unexplored issues persist because of the sensors&#x2019; non-linear behavior, observed in both spatial and temporal responses, caused by external factors that are hard to model <xref ref-type="bibr" rid="B22">Wettels et al. (2008)</xref>.</p>
<p>Primarily, these sensors have a low Signal to Noise Ratio (SNR), owing to the fluid-based transmission of forces from the skin to the sensing electronics, which &#x201c;damps&#x201d; the values. Secondly, because of the non-uniform distribution of the sensing elements inside the mechanical construction, each sensing element has a different sensing range and its own bias. These issues have prohibited the development of a standard representation of the data, and processing techniques have instead been engineered for a particular set of tasks rather than being general.</p>
<p>Many approaches have been proposed for interpreting the sensor data, with highly accurate computer models on one end <xref ref-type="bibr" rid="B14">Narang et al. (2021b</xref>,<xref ref-type="bibr" rid="B13">a)</xref>, and a variety of signal processing techniques <xref ref-type="bibr" rid="B21">Wettels and Loeb (2011)</xref> on the raw data on the other. Both approaches are computationally expensive and need extensive hand-crafted calibration procedures to be operational.</p>
<p>On the contrary, biological systems calibrate for these environmental factors on-the-fly by processing tactile information as spikes or events, which provides advantages for transmission and processing along with built-in robustness. This ideology inspired neuromorphic engineers to develop sensors and low-power hardware <xref ref-type="bibr" rid="B1">Brandli et al. (2014)</xref> that record and process events, as well as algorithms to compute events <xref ref-type="bibr" rid="B11">Mitrokhin et al. (2018)</xref>; <xref ref-type="bibr" rid="B4">Gallego et al. (2020)</xref>; <xref ref-type="bibr" rid="B15">Sanket et al. (2020)</xref>. Recently, event-based hardware has become available to the research community. The best known among these is a vision sensor called the DVS <xref ref-type="bibr" rid="B1">Brandli et al. (2014)</xref>; <xref ref-type="bibr" rid="B8">Lichtsteiner et al. (2008)</xref>; another sensor is the event-based audio cochlea <xref ref-type="bibr" rid="B23">Yang et al. (2016)</xref>. Event-based processing has also been introduced to the olfactory domain <xref ref-type="bibr" rid="B6">Jing et al. (2016)</xref> and for tactile data <xref ref-type="bibr" rid="B5">Janotte et al. (2021)</xref>.</p>
<p>We propose a novel intermediate representation computed directly from the raw fluid-based tactile data such as that of the BioTac SP sensor. Instead of accurately simulating the deformations and forces on the sensor, as in <xref ref-type="bibr" rid="B13">Narang et al. (2021a</xref>,<xref ref-type="bibr" rid="B14">b)</xref>, we compute robust features from the spatio-temporal changes in the tactile data, which carry essential information about the sensor&#x2019;s deformation and forces at the location of touch. The approach is computationally inexpensive and sufficiently accurate to perform a series of tasks.</p>
<p>The main idea is to compute, from a sequence of raw data, the significant changes in data values from individual sensors, which we call <italic>Tactile Events</italic>, and then compute the essential tactile features from these events via a spatial interpolation. Specifically, by temporally accumulating the tactile events we construct surface contours that can be used as a generic representation for tracking touch across the BioTac SP skin. Our approach handles the challenges mentioned above, <italic>i</italic>.<italic>e</italic>., it can account for noise and individual sensor biases.</p>
<sec id="s1-1">
<title>1.1 Problem Formulation and Contribution</title>
<p>The question we tackle in this work can be summarised as <italic>&#x201c;What representation do we need to handle noisy data from a Fluid Based Tactile Sensor (FBTS)?&#x201d;</italic>. To answer this question, we draw inspiration from neuromorphic computing and propose a computational model for representing tactile data using spatio-temporal gradients. Our contributions are formally described next.<list list-type="simple">
<list-item>
<p>&#x2022; We present an intuition for the relationship between spatio-temporal gradients and the volumetric deformations of the skin and fluid in a fluid-based tactile sensor. We further discuss why our method can robustly compute the maximal region of deformation.</p>
</list-item>
<list-item>
<p>&#x2022; We present a computational model to convert raw tactile signals from an FBTS into an interpolated spatio-temporal surface. This is then used to track regions of applied stimulus across the sensor&#x2019;s skin surface which corresponds to the regions of touch.</p>
</list-item>
<list-item>
<p>&#x2022; We demonstrate the capabilities of our proposed approach on several real-world experiments, including detecting slippage during grasp, detecting relative direction of motion between fingers, and following planar shape contours.</p>
</list-item>
<list-item>
<p>&#x2022; We release a novel dataset containing the various experiments we perform on the BioTac SP. It can be used not only to validate our method, but also to compare other tracking algorithms for the BioTac SP, helping to push the field forward.</p>
</list-item>
</list>
</p>
</sec>
<sec id="s1-2">
<title>1.2 Prior Work</title>
<p>Tactile sensors fall into several broad categories, including but not limited to piezoresistive, piezoelectric, optical, capacitive, and elastoresistive. We further group them into two main classes based on their sensing modality: optical-tactile (<italic>i</italic>.<italic>e</italic>., indirect) and direct. This categorization is based on whether the sensing element makes direct contact with the surface being touched. The main tasks performed with tactile data in the literature include: 1) estimation of the contact location and the net force vector, 2) estimation of high-density deformations on the sensor surface, 3) slip detection and classification, and 4) tracking object edges. We next discuss state-of-the-art work on using the various classes of tactile sensors to solve tasks related to those mentioned above.</p>
<p>Studies that perform estimation directly on the sensor data include <xref ref-type="bibr" rid="B9">Lin et al. (2013)</xref>, who present an analytical method to estimate the 3D point of contact and net force acting on the BioTac sensor based on electrode values, where they assume that electrodes measure force in the direction of their normals. <xref ref-type="bibr" rid="B16">Su et al. (2015)</xref> discuss several methods for force estimation from tactile data, including Locally Weighted Projection Regression and neural network based regression. They also present a signal processing technique for slip detection using the BioTac, comparing their results against an IMU. <xref ref-type="bibr" rid="B17">Sundaralingam et al. (2019)</xref> introduce a method to infer forces from tactile data using a learning-based approach. They implement a 3D voxel grid to maintain spatial relations of the data, and use a convolutional neural network to map forces to tactile signals.</p>
<p>Recently, some studies have modeled a mapping between sensor readings and the field of deformations over the whole sensor surface. <xref ref-type="bibr" rid="B13">Narang Y. S. et al. (2021)</xref> presented a finite element model for the 19-taxel BioTac sensor and demonstrated the most accurate simulations of the sensor thus far. They relate forces applied at specific locations to the sensor&#x2019;s skin deformation. Using data they collected, they learn the mapping from 3D contact locations and net-force vectors to the 19 taxel readings, and then, by combining the FEM simulation and experimental data, they extrapolate a mapping between taxel measurements and skin deformations and vice versa. In <xref ref-type="bibr" rid="B13">Narang Y. et al. (2021)</xref> the authors extended this work using variational autoencoder networks to represent both FEM deformations and electrode signals as low-dimensional latent variables, and performed cross-modal learning over these latent variables. This enhanced the accuracy of the previously obtained mapping between taxel readings and skin deformations. However, they also showed that for unseen indenter shapes these methods generalise poorly when predicting deformation magnitudes and distributions from electrode values.</p>
<p>
<xref ref-type="bibr" rid="B7">Lepora et al. (2019)</xref>, using a TacTip optical-tactile sensor (<xref ref-type="bibr" rid="B20">Ward-Cherrier et al. (2018)</xref>), learn a CNN to perform reliable edge detection, and then use it in a visual servoing control policy for tracking and moving across object contours. In related work (<xref ref-type="bibr" rid="B3">Cramphorn et al. (2018)</xref>), the authors present a Voronoi tessellation based processing pipeline to predict contact location, as well as shear direction and magnitude, on the surface of the sensor. This method is novel in that it uses no classification or regression techniques and is purely analytical in nature.</p>
<p>
<xref ref-type="bibr" rid="B18">Taunyazoz et al. (2020)</xref> use the NeuTouch, a novel event-based tactile sensor, along with a Visual-Tactile Spiking Neural Network to perform object classification and rotational slip detection. They also perform ablation studies with an event-based visual camera, and compare their spiking neural networks to traditional network architectures such as 3D convolutional networks and Gated Recurrent Units.</p>
<p>We use the prior work described above as a source of motivation for our pipeline, and we attempt to use the validated experiments in them as a proof of concept of our approach. We perform slip detection experiments as in <xref ref-type="bibr" rid="B16">Su et al. (2015)</xref>, perform edge tracking using visual servoing as in <xref ref-type="bibr" rid="B7">Lepora et al. (2019)</xref> and compute forces from touch as described by <xref ref-type="bibr" rid="B17">Sundaralingam et al. (2019)</xref>.</p>
</sec>
<sec id="s1-3">
<title>1.3 Organization of the Paper</title>
<p>Our paper is organized as follows: In <xref ref-type="sec" rid="s2">Section 2</xref>, we present the motivation for using the BioTac SP sensor for tactile sensing, and how our method is a practical solution to the challenges posed by this particular type of sensor. We describe in detail why the spatio-temporal gradients are an intuitive way for computing features of deformation on the BioTac SP.</p>
<p>
<xref ref-type="sec" rid="s3">Section 3</xref> discusses our high-level pipeline, and our experimental setup. We then go into detail regarding our algorithm to generate spatio-temporal gradients, <italic>i</italic>.<italic>e</italic>. events from raw tactile data, and then discuss how we generate contour surfaces from these events. Lastly, in this section we discuss how we use these spatio-temporal surfaces to track touch stimulus across the BioTac SP skin.</p>
<p>In <xref ref-type="sec" rid="s5">Section 5</xref> we demonstrate our pipeline on three distinctly different tasks, and discuss their results and outputs. We first show that our contour surfaces are able to accurately track tactile stimulus in motion across the surface of the BioTac SP skin. We then discuss results on experiments involving varying applied forces on the BioTac SP, where we show that our contour surfaces can distinguish between various levels of force. Lastly, we employ our algorithm on a more practical task of slippage detection during grasping, where we detect time of slippage, and distinguish between longitudinal and rotational slippage types.</p>
</sec>
</sec>
<sec id="s2">
<title>2 Methods</title>
<sec id="s2-1">
<title>2.1 Motivation</title>
<p>We consider for our work the BioTac SP tactile sensor, which has a unique sensing mechanism compared to other contemporary tactile sensors. Tactile sensing mechanisms, as they are available commercially today, lie on a spectrum ranging from <italic>accurate sensing capabilities</italic> on one side to <italic>biomimetic form-factors</italic> on the other. Most sensors on this spectrum trade off form factor to provide high accuracy. The BioTac SP strikes the right balance, sitting in the middle of this range: its physical shape and sensing mechanism are very close to those of the human fingertip, but this comes at the cost of accuracy and fidelity of sensing.</p>
</sec>
<sec id="s2-2">
<title>2.2 Challenges With Fluid-Conductive Sensors</title>
<p>Unlike optical-tactile, magnetic or capacitive tactile sensors, fluid based tactile sensors use a conductive fluid to transmit electrical impulses from spatially distributed excitation electrodes to a few sensing locations (taxels) distributed over a solid core. The values generated by the taxels are thus primarily dependent on the characteristics of the fluid, specifically its conductivity.</p>
<p>The conductivity of a fluid, such as the electrolytic solution in the BioTac SP sensor, is non-linearly related to various external factors. These include, but are not limited to, the temperature of the fluid, the humidity of the surroundings, the area of and distance between the excitation and sensing electrodes, and the concentration of the conductive fluid. Each of these factors contributes non-linearly (<xref ref-type="bibr" rid="B22">Wettels et al. (2008)</xref>) to the noise of the individual taxels. Furthermore, the noise characteristics of the sensor electronics are also non-linear, which further exacerbates the situation.</p>
<p>We also need to consider sources of noise in the electronic implementation of each taxel&#x2019;s sensing mechanism, which include amplification and analog-to-digital conversion circuitry among others.</p>
</sec>
<sec id="s2-3">
<title>2.3 Modelling Fluid-Conductive Sensors</title>
<p>While it might be feasible to model each of the aforementioned sources of noise independently and in isolation, their combination and interactions within the full system make this an arduous task. There have been several attempts to develop a physical model of the BioTac sensor, the most recent of which is presented in the work by <xref ref-type="bibr" rid="B13">Narang Y. et al. (2021)</xref>. In that work, the authors present a finite element model (FEM) of an ideal BioTac sensor, and provide an accurate simulation of the skin, the sensing core, and the internal fluid. While the FEM approach provides a physically accurate measurement of the deformation of the skin and fluid in response to force stimuli, it does not account for the sources of noise described earlier, because the sensor electronics are not modeled and fluid modelling remains computationally challenging. Currently, to the best of our knowledge, there is no mathematical model relating sensor readings to skin deformations, which inhibits research that utilizes raw sensor measurements.</p>
</sec>
<sec id="s2-4">
<title>2.4 Bio-Inspired Motivation for Logarithmic Change</title>
<p>In our work, we draw inspiration from nature regarding how changes over the skin surfaces may be related to location of touch and relative forces. To this end, we break away from the core robotics ideology that one requires a complex and accurate mathematical model or a very high quality sensor to perform useful tasks. In particular, we are driven by nature&#x2019;s efficient and parsimonious implementations which perform amazingly well with minimal quality sensors and very simple computing.</p>
<p>To build such an efficient data representation for fluid based sensors, we turn to the Weber-Fechner laws of psychophysics, which state that the perceived intensity of a stimulus on any of the human senses is related logarithmically to the actual stimulus intensity. As a result, humans perceive stimuli such as touch, sound, or light as changes in the logarithm between existing values and new ones. It is thus not surprising that the manufacturers of the BioTac SP sensor, who designed it to be as anthropomorphic as possible, also recommend that the best way to process data from such fluid-based sensors is to use relative changes instead of raw taxel values.</p>
<p>In practice, the two main challenges with the BioTac SP sensor are that 1) the different taxels do not share the same baseline value, and 2) the taxel values exhibit a low signal to noise ratio. By computing only the taxel changes on a logarithmic scale, our values become independent of the baseline and are more robust to noise, thus tackling both aforementioned issues.</p>
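<p>The baseline-independence of logarithmic changes can be illustrated with a short sketch (a minimal example with hypothetical taxel values, not the authors&#x2019; released code):</p>

```python
import numpy as np

def log_change(prev, curr):
    """Logarithmic change between consecutive readings of one taxel."""
    return np.log(curr) - np.log(prev)

# Two hypothetical taxels with very different baselines but the same
# 10% relative increase produce identical log-changes:
taxel_a = log_change(2000.0, 2200.0)  # high-baseline taxel
taxel_b = log_change(50.0, 55.0)      # low-baseline taxel
assert np.isclose(taxel_a, taxel_b)   # both equal ln(1.1)
```

Because the baseline cancels inside the ratio, only the relative change survives, which is exactly the property exploited in the event representation.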
</sec>
<sec id="s2-5">
<title>2.5 Computing Events From Raw Data</title>
<p>One of the primary steps of our pipeline is to generate &#x201c;events&#x201d; from raw tactile data. The concept of an event is inspired by the neuromorphic research community: an event is essentially a data point in time that is &#x201c;fired&#x201d; only when the stimulus changes by more than a specified threshold.</p>
<p>We consider two consecutive packets of taxel data, at times <italic>t</italic> and <italic>t</italic> &#x2b; <italic>&#x3b4;</italic> respectively. Each of these packets contains the raw taxel values <inline-formula id="inf1">
<mml:math id="m1">
<mml:msubsup>
<mml:mrow>
<mml:mi>X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2026;</mml:mo>
<mml:mn>24</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula> and <inline-formula id="inf2">
<mml:math id="m2">
<mml:msubsup>
<mml:mrow>
<mml:mi>X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2026;</mml:mo>
<mml:mn>24</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b4;</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>. We then compute the logarithmic change between consecutive readings of each taxel <italic>j</italic> &#x2208; [1, 24], and fire an event when the logarithm of the value at a taxel increases or decreases by a threshold value <italic>&#x3c4;</italic>. That is, when:<disp-formula id="e1">
<mml:math id="m3">
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>ln</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b4;</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>ln</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mo>&#x3e;</mml:mo>
<mml:mi>&#x3c4;</mml:mi>
</mml:math>
<label>(1)</label>
</disp-formula>In other words, a positive event is said to be &#x201c;fired&#x201d; when<disp-formula id="e2">
<mml:math id="m4">
<mml:msubsup>
<mml:mrow>
<mml:mi>X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b4;</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3e;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:msubsup>
<mml:mrow>
<mml:mi>X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:math>
<label>(2)</label>
</disp-formula>and a negative event when<disp-formula id="e3">
<mml:math id="m5">
<mml:msubsup>
<mml:mrow>
<mml:mi>X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3e;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:msubsup>
<mml:mrow>
<mml:mi>X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b4;</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:math>
<label>(3)</label>
</disp-formula>This gives us intermediate taxel values between times <italic>t</italic> and <italic>t</italic> &#x2b; <italic>&#x3b4;</italic>, and their respective timestamps for each taxel <italic>j</italic> &#x2208; [1, 24].</p>
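<p>The event-firing rule of Eqs 1&#x2013;3 can be sketched as follows (a hedged illustration; the variable names and vectorized form are our own, not the released implementation):</p>

```python
import numpy as np

def fire_events(x_t, x_t_delta, tau):
    """Return +1/-1/0 per taxel: positive event, negative event, or none.

    x_t, x_t_delta: raw readings of the taxels at times t and t + delta.
    tau: log-domain threshold from Eq. 1.
    """
    x_t = np.asarray(x_t, dtype=float)
    x_t_delta = np.asarray(x_t_delta, dtype=float)
    diff = np.log(x_t_delta) - np.log(x_t)
    events = np.zeros_like(diff, dtype=int)
    events[diff > tau] = 1    # positive event (Eq. 2)
    events[diff < -tau] = -1  # negative event (Eq. 3)
    return events

# Example: taxel 0 rises sharply, taxel 1 falls, taxel 2 barely changes.
print(fire_events([100, 100, 100], [150, 60, 101], tau=0.2))  # [ 1 -1  0]
```

In the full pipeline this test is applied per taxel for all 24 channels of each consecutive packet pair.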
<p>We know from the design of the sensor, as well as from ideal sensor simulations, that the largest change in taxel values correlates with the region of highest tactile stimulus. This change also depends on the forces already present at the region of touch, and the raw values reach saturation and exhibit hysteresis. Our algorithm accounts for this by non-linearly interpolating the taxel data on the log scale. The previously obtained taxel events thus give us a temporal gradient over the change in taxel values, caused by the deformation of the skin and fluid under the applied force stimulus. Intuitively, these intermediate events between two discrete taxel data values signify the change in localized volume of the skin and fluid over time due to the applied tactile stimulus.</p>
<p>As part of our algorithm, we then process these spatially discrete events for each of the 24 taxel locations and convert them into a continuous, interpolated surface. We use a Voronoi tessellation of the discrete and irregular grid, and perform Natural Neighbors Interpolation to construct an event surface that gives us an interpolated event value at each point. This surface indirectly depicts the deformation of the skin and fluid due to the applied stimulus. Since the deformation due to applied forces on the BioTac SP is greatest at the region of touch, we generate iso-contours of the event surface, and consider only the maximal valued contour as the region of touch.</p>
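<p>As a rough stand-in for the Voronoi/Natural Neighbors step (here we substitute a simple inverse-distance-weighted interpolation in plain NumPy, and the taxel coordinates and event counts are randomly generated placeholders, not the real BioTac SP layout), the event surface and its maximal region can be sketched as:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
taxel_xy = rng.uniform(0, 1, size=(24, 2))   # hypothetical 2D taxel layout
event_counts = rng.integers(0, 10, size=24)  # aggregated events per taxel

def idw_surface(points, values, res=50, power=2.0):
    """Interpolate scattered taxel events onto a regular grid.

    Inverse-distance weighting, a simple substitute for the
    Natural Neighbors interpolation used in the paper.
    """
    gx, gy = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res))
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    surf = (w * values).sum(axis=1) / w.sum(axis=1)
    return surf.reshape(res, res)

surface = idw_surface(taxel_xy, event_counts)
# Treat the top decile of the surface as the maximal "region of touch".
touch_mask = surface >= np.quantile(surface, 0.9)
```

The interpolated values stay within the range of the taxel event counts, and thresholding near the surface maximum plays the role of selecting the maximal iso-contour.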
</sec>
</sec>
<sec id="s3">
<title>3 Our Approach</title>
<sec id="s3-1">
<title>3.1 Pipeline</title>
<p>
<xref ref-type="fig" rid="F1">Figure 1</xref> shows an overview of the proposed framework, where we start with 24 points of raw tactile data from the BioTac SP sensors and generate a contact trajectory as output. The pipeline involves converting the raw data into events, aggregating said events into spatial clusters, performing Voronoi tessellation on the aggregated events, and then using the interpolated values to generate a contour plot whose centroid is tracked over time. We elaborate on each step of our pipeline below.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>High-level organization of our pipeline.</p>
</caption>
<graphic xlink:href="frobt-09-898075-g001.tif"/>
</fig>
</sec>
<sec id="s3-2">
<title>3.2 Setup and Methodology</title>
<p>Our hardware setup consists of a UR-10 manipulator equipped with the Shadow Dexterous Hand, which has a BioTac SP sensor attached to each fingertip. The BioTac SP provides a ROS interface to obtain the raw data at a rate of 100&#xa0;Hz. This data consists of 24 electrode values, which we term &#x201c;taxels&#x201d; (tactile elements), as well as overall fluid pressure and temperature flux. For our pipeline, we use only the 24 taxel readings. These readings are the result of contact forces and the resultant compression of the skin and the enclosed fluid. The nature of our pipeline allows for processing readings from any other tactile sensor, as long as the sensing elements are spatially distributed across some surface and timestamps for each data packet are provided. We perform basic min-max normalization and Savitzky-Golay filtering before using the data. Our event-generation algorithm is influenced by principles of event-based sensors, which record logarithmic changes of signal on individual sensing elements, independently and asynchronously.</p>
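<p>The preprocessing described above (min-max normalization followed by Savitzky-Golay filtering) can be sketched with SciPy; the window length and polynomial order below are illustrative choices, not the parameters used in the paper:</p>

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess(raw, window=11, polyorder=3):
    """Min-max normalize each taxel channel, then smooth along time.

    raw: array of shape (T, 24) -- T timesteps of the 24 taxel readings.
    """
    lo = raw.min(axis=0, keepdims=True)
    hi = raw.max(axis=0, keepdims=True)
    norm = (raw - lo) / np.maximum(hi - lo, 1e-9)  # per-channel min-max
    # Savitzky-Golay smoothing applied along the time axis.
    return savgol_filter(norm, window_length=window, polyorder=polyorder, axis=0)

# Synthetic stand-in for 1 s of 100 Hz taxel data (24 channels).
t = np.linspace(0, 1, 100)
rng = np.random.default_rng(1)
raw = 2000 + 100 * np.sin(2 * np.pi * t)[:, None] + rng.normal(0, 5, (100, 24))
smooth = preprocess(raw)
```

Smoothing along the time axis preserves the per-channel structure needed by the downstream event generation.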
</sec>
<sec id="s3-3">
<title>3.3 Generating Events From Raw Tactile Data</title>
<p>In <xref ref-type="sec" rid="s2-1">Section 2.1</xref>, we established that our approach does not approximate the entire sensor&#x2019;s surface but only the regions with maximum tactile stimulus. The data from the BioTac SP sensor is obtained at a rate of 100&#xa0;Hz, or one packet of data every 0.01&#xa0;s. Our method computes the number of events at each taxel, where each event corresponds to a change by some threshold value <italic>&#x3c4;</italic>. This threshold decides the granularity of change we are interested in measuring; more events correspond to a larger change, which is correlated with the amount of force applied to a particular region. For each event triggered, we also generate a corresponding timestamp between <italic>t</italic> and <italic>t</italic> &#x2b; <italic>&#x3b4;</italic>. Taking inspiration from the Weber-Fechner laws of psychophysics mentioned earlier in <xref ref-type="sec" rid="s1">Section 1</xref>, we trigger events based on the natural log of the threshold <italic>&#x3c4;</italic>. Intuitively, this means that the frequency of events is higher initially at time <italic>t</italic>, and gradually tapers off as we approach the value at time <italic>t</italic> &#x2b; <italic>&#x3b4;</italic>.</p>
<p>Once the events have been computed for all the raw tactile data points, we aggregate them into temporal frames. The size of the temporal window used for aggregation is an important heuristic that can be fine-tuned to favor robustness to noise or to allow for a more sensitive tactile response.</p>
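The event generation and temporal aggregation steps can be sketched as below. The logarithmic spacing of event timestamps is one plausible reading of the Weber-Fechner-inspired scheme described above, and the threshold and window values are illustrative only:

```python
import numpy as np

def generate_events(values, times, tau=0.05):
    """Convert one taxel's filtered signal into threshold-crossing events.

    Each event marks a change of tau in the signal. Event timestamps between
    consecutive samples are spaced logarithmically, so events fire densely
    right after t and taper off toward t + delta (Weber-Fechner inspired).
    Returns a list of (timestamp, polarity) tuples.
    """
    events = []
    for k in range(1, len(values)):
        dv = values[k] - values[k - 1]
        n = int(abs(dv) / tau)          # number of events in this interval
        if n == 0:
            continue
        t0, delta = times[k - 1], times[k] - times[k - 1]
        pol = 1 if dv > 0 else -1
        for i in range(1, n + 1):
            # Logarithmic spacing: dense near t0, sparse near t0 + delta.
            ts = t0 + delta * np.log1p(i) / np.log1p(n)
            events.append((ts, pol))
    return events

def aggregate_events(events, t_start, t_end, window=0.1):
    """Bin event counts into temporal frames of the given window (seconds)."""
    edges = np.arange(t_start, t_end + window, window)
    counts, _ = np.histogram([t for t, _ in events], bins=edges)
    return counts
```
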
</sec>
<sec id="s3-4">
<title>3.4 Natural Neighbors Based Interpolation</title>
<p>The 24 taxels are located in some 3D space inside the BioTac SP, as per the sensor&#x2019;s design. We project these ellipsoidal locations onto a 2D surface, shown in <xref ref-type="fig" rid="F2">Figure 2A</xref>, to get an irregular grid of locations on a plane. For each of these 24 2D points, we have the aggregate event counts, as shown in <xref ref-type="fig" rid="F2">Figure 2B</xref>.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>Contour Generation Pipeline. <bold>(A)</bold> 24 taxel locations, projected onto 2D plane, <bold>(B)</bold> Initial event aggregates per taxel, <bold>(C)</bold> Voronoi tessellation of the grid, <bold>(D)</bold> Contours generated from the interpolated surface.</p>
</caption>
<graphic xlink:href="frobt-09-898075-g002.tif"/>
</fig>
<p>We proceed to perform a Voronoi tessellation of this grid, based on the aggregate event values, shown in <xref ref-type="fig" rid="F2">Figure 2C</xref>. Compared to other methods of interpolation, such as Inverse Distance Weighting or Gaussian interpolation, Voronoi tessellation provides a more accurate representation of the underlying function we are trying to interpolate. Given the unstructured nature of our data, i.e., an irregular grid of taxels, traditional methods of interpolation do not take into account the different areas of influence of each taxel when computing the interpolated function. Voronoi tessellation partitions the space proportionally to the &#x201c;strength&#x201d; of each sample point, by &#x201c;stealing&#x201d; some area from the neighboring polygons any time a new point is interpolated <xref ref-type="bibr" rid="B10">Lucas (2021)</xref>.</p>
<p>This is mathematically represented by:<disp-formula id="e4">
<mml:math id="m6">
<mml:mi>G</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>f</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(4)</label>
</disp-formula>
<disp-formula id="e5">
<mml:math id="m7">
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
</mml:math>
<label>(5)</label>
</disp-formula>where <italic>G</italic>(<bold>x</bold>) is the estimate computed at <bold>x</bold>, and <italic>w</italic>
<sub>
<italic>i</italic>
</sub> are weights, and <italic>f</italic> (<bold>x</bold>
<sub>
<italic>i</italic>
</sub>) are the known data values at <bold>x</bold>
<sub>
<italic>i</italic>
</sub>, which are obtained from the 24 event aggregate values. <italic>A</italic>(<italic>x</italic>) is the area of the new cell centered at <italic>x</italic>, and <italic>A</italic>(<italic>x</italic>
<sub>
<italic>i</italic>
</sub>) is the area of the intersection between the new cell centered at <italic>x</italic> and the old cell centered at <italic>x</italic>
<sub>
<italic>i</italic>
</sub>.</p>
<p>Owing to the irregular structure of the sensing locations (taxels) within the BioTac SP, we want to employ a method of interpolation that gives each taxel location a weight proportional to the applied stimulus. Intuitively, Voronoi tessellation partitions the space into irregularly shaped polygons that are proportional to (or representative of) the tactile stimulus exerted on each taxel location. This is better than, say, nearest-neighbors interpolation, which interpolates force values uniformly around each taxel. Although similar to a weighted-average interpolation, Natural Neighbors interpolation weights values by their proportionate areas instead of just the raw values at each taxel. The resulting interpolation is a more &#x201c;truthful&#x201d; representation of the underlying surface function than other methods provide. The results of the Voronoi tessellation are used to interpolate points on the 2D surface of the BioTac SP, resulting in a continuous surface (<xref ref-type="fig" rid="F2">Figure 2D</xref>) whose values correspond to the amount of force on each taxel and, consequently, the deformation of that region of the BioTac SP skin.</p>
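A minimal sketch of the interpolation step follows. It approximates Natural Neighbors interpolation on a discrete pixel grid: each pixel is first assigned to its nearest taxel (a discrete Voronoi tessellation), and for each query pixel the &#x201c;stolen&#x201d; areas of Eq. 5 are measured by counting the pixels that would switch owner if the query were inserted as a new site. The grid resolution is an illustrative parameter, not a value from this work:

```python
import numpy as np
from scipy.spatial import cKDTree

def natural_neighbor_surface(taxel_xy, taxel_vals, res=24):
    """Discrete Natural Neighbors interpolation over the taxel grid.

    taxel_xy: (N, 2) projected 2D taxel locations.
    taxel_vals: (N,) aggregate event counts at each taxel.
    Returns a (res, res) interpolated event surface.
    """
    pts = np.asarray(taxel_xy, float)
    vals = np.asarray(taxel_vals, float)
    tree = cKDTree(pts)

    xs = np.linspace(pts[:, 0].min(), pts[:, 0].max(), res)
    ys = np.linspace(pts[:, 1].min(), pts[:, 1].max(), res)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.column_stack([gx.ravel(), gy.ravel()])

    d_owner, owner = tree.query(grid)      # discrete Voronoi labels per pixel

    surface = np.empty(len(grid))
    for qi, q in enumerate(grid):
        d_q = np.linalg.norm(grid - q, axis=1)
        stolen = d_q < d_owner             # pixels closer to q than to their owner
        if not stolen.any():               # q coincides with an existing site
            surface[qi] = vals[owner[qi]]
            continue
        # Stolen pixel counts per original cell act as the weights of Eq. 5.
        w = np.bincount(owner[stolen], minlength=len(pts)).astype(float)
        surface[qi] = (w / w.sum()) @ vals  # area-weighted estimate of Eq. 4
    return surface.reshape(res, res)
```
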
</sec>
</sec>
<sec id="s4">
<title>4 Dataset</title>
<p>There is a lack of standardized datasets in the tactile sensing community, especially where sensors like the BioTac SP are concerned. Most datasets available today are task-specific or come from optical-tactile sensors. This makes quantitative comparisons difficult for novel algorithms being introduced to the field.</p>
<p>As part of our work, we are releasing an accompanying dataset on tactile motion on the BioTac SP sensor, which is independent of any particular task. The dataset samples include the following:<list list-type="simple">
<list-item>
<p>&#x2022; Tactile responses from various indenter sizes, applied at different forces</p>
</list-item>
<list-item>
<p>&#x2022; Motion across the sensor surface in various directional trajectories. We include 1) top-to-bottom, 2) bottom-to-top, 3) left-to-right, 4) right-to-left, 5) diagonal top-to-bottom, 6) diagonal bottom-to-top, 7) clockwise and 8) counter-clockwise data samples.</p>
</list-item>
<list-item>
<p>&#x2022; Longitudinal slippage for various objects from a labelled list of objects, as well as the ground-truth timestamps for slip events.</p>
</list-item>
<list-item>
<p>&#x2022; Rotational slippage for a cylindrical object on a constant-speed turntable, as well as the ground-truth timestamps for slip events.</p>
</list-item>
</list>
</p>
<p>All our data is provided in both NumPy and CSV formats, and includes all 24 raw taxel values as well as their timestamps. For ease of adoption and use, we eschew the ROS Bag format in this dataset, but it may be made available on request.</p>
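A hypothetical loader for the NumPy samples might look as follows; the file layout assumed here (a timestamp column followed by the 24 taxel columns) is an illustration only, and the actual layout of the released dataset may differ:

```python
import numpy as np

def load_sample_npy(path):
    """Load one recording from a .npy file.

    Assumed (hypothetical) layout: shape (T, 25), where column 0 holds the
    timestamp and columns 1..24 hold the raw taxel values.
    Returns (timestamps, (T, 24) taxel array).
    """
    data = np.load(path)
    return data[:, 0], data[:, 1:]
```
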
</sec>
<sec id="s5">
<title>5 Experiments and Results</title>
<sec id="s5-1">
<title>5.1 Experimental Setup</title>
<p>The hardware used to perform all experiments, shown in <xref ref-type="fig" rid="F3">Figure 3</xref>, consists of a UR-10 manipulator equipped with a Shadow Dexterous Hand, with one BioTac SP sensor attached to each of the five finger tips.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Hardware Setup: Shadow Hand mounted on UR-10 manipulator.</p>
</caption>
<graphic xlink:href="frobt-09-898075-g003.tif"/>
</fig>
<p>Alongside the 24 taxel values from each BioTac SP sensor, the setup also provides us with the 6 DoF pose of the arm and each finger, relative to a world coordinate system at the base of the manipulator. This information is used in the shape tracking experiments.</p>
<p>Replication of the experiments as described in this work is only feasible with access to the BioTac SP hardware; however, our algorithm can be applied to, and modified for, other sensors. We will release an accompanying dataset with the labelled data collected for each experiment, along with their respective ground truth values.</p>
</sec>
<sec id="s5-2">
<title>5.2 Tracking Location of Touch</title>
<p>We collected data from the BioTac SP at a rate of 100&#xa0;Hz by making contact at different sensor locations. Three different probes with varying indenter diameters (1, 2, and 5&#xa0;mm, respectively) were used to gather this dataset. The taxel values are time-synchronized with an RGB camera feed, which provides us with visual ground truth of the contact location at every instant. This data was then used to generate events according to the method described in <xref ref-type="sec" rid="s3-3">Section 3.3</xref>.</p>
<p>We evaluate our method of tracking contact by comparing it qualitatively with the ground truth trajectories of the probes obtained from the RGB images. We hand-label several marker locations (shown in <xref ref-type="fig" rid="F4">Figure 4</xref>) on the physical sensor and align them in image coordinate space to the 2D projected locations of the taxels. We used 8 different trajectories, as shown in <xref ref-type="fig" rid="F5">Figure 5</xref>.</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>Touch Tracking Ground Truth Marker Locations. <bold>(A)</bold> Markers for tracking horizontal trajectory, <bold>(B)</bold> Markers for tracking vertical trajectory, <bold>(C)</bold> Markers for tracking diagonal trajectory, and <bold>(D)</bold> Markers for tracking circular trajectory.</p>
</caption>
<graphic xlink:href="frobt-09-898075-g004.tif"/>
</fig>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>Touch tracking trajectories. <bold>(A)</bold> Up and down trajectories, <bold>(B)</bold> left and right trajectories, <bold>(C)</bold> diagonal trajectories, <bold>(D)</bold> circular trajectories.</p>
</caption>
<graphic xlink:href="frobt-09-898075-g005.tif"/>
</fig>
<p>We move the indenters on various trajectories along the surface of the BioTac SP, as shown in <xref ref-type="fig" rid="F5">Figure 5</xref>: top to bottom and bottom to top (<xref ref-type="fig" rid="F5">Figure 5A</xref>), left to right and right to left (<xref ref-type="fig" rid="F5">Figure 5B</xref>), diagonally top to bottom and diagonally bottom to top (<xref ref-type="fig" rid="F5">Figure 5C</xref>), and circular clockwise and circular counter-clockwise (<xref ref-type="fig" rid="F5">Figure 5D</xref>).</p>
<p>
<xref ref-type="fig" rid="F6">Figure 6</xref> shows the outputs from two sample trajectories: diagonal motion from bottom left to top right, and counter-clockwise circular motion. <xref ref-type="fig" rid="F6">Figure 6A</xref> and <xref ref-type="fig" rid="F6">Figure 6C</xref> show the trajectories overlaid on the contour surfaces generated from the event aggregates stacked along the time axis. In both outputs, we can clearly see the event aggregates in red representing the current region of touch. Tracking these across time, we can generate a trajectory of touch across the skin surface.</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>Touch Tracking Trajectory Plots. The red line denotes the ground truth trajectory. <bold>(A)</bold> Diagonal Trajectory using Events Data, <bold>(B)</bold> Diagonal Trajectory using Raw Data, <bold>(C)</bold> Circular Trajectory using Events Data, <bold>(D)</bold> Circular Trajectory using Raw Data.</p>
</caption>
<graphic xlink:href="frobt-09-898075-g006.tif"/>
</fig>
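The centroid tracking that produces these trajectories can be sketched as below. Thresholding each interpolated surface at a fraction of its peak stands in for extracting the maximal-valued contour; the threshold level is an illustrative choice:

```python
import numpy as np

def touch_centroid(surface, level=0.9):
    """Centroid of the maximal iso-region of an interpolated event surface.

    Keeps the pixels at or above `level` times the peak value (a stand-in
    for the maximal-valued contour) and returns their (x, y) centroid in
    grid coordinates, or None for an inactive frame.
    """
    peak = surface.max()
    if peak <= 0:
        return None
    ys, xs = np.nonzero(surface >= level * peak)
    return xs.mean(), ys.mean()

def track_trajectory(frames, level=0.9):
    """Apply touch_centroid to a stack of per-window surfaces over time."""
    return [touch_centroid(f, level) for f in frames]
```
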
<p>For comparison, in <xref ref-type="fig" rid="F6">Figures 6B,D</xref> we show the outputs obtained from the filtered, but otherwise unprocessed, raw data from the BioTac SP. It is clear that our approach, shown in <xref ref-type="fig" rid="F6">Figure 6</xref>, produces smoother trajectories with reduced noise.</p>
<p>
<xref ref-type="fig" rid="F7">Figure 7</xref> quantifies the median error in the touch tracking results for each of the waypoints, for each of the four classes of trajectories (horizontal, vertical, diagonal, and circular). Our results are most accurate for the waypoints in the center of the BioTac SP, as compared to those near the edges, due to the shape and fluid density of the underlying sensor.</p>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption>
<p>Touch Tracking Error Plots. The middle bar represents the median error, the width of each bar is the interquartile range (IQR), and the fence widths are 1.5&#xd7; IQR. <bold>(A)</bold> Average Deviation for Vertical Trajectory, <bold>(B)</bold> Average Deviation for Horizontal Trajectory, <bold>(C)</bold> Average Deviation for Diagonal Trajectory, <bold>(D)</bold> Average Deviation for Circular Trajectory.</p>
</caption>
<graphic xlink:href="frobt-09-898075-g007.tif"/>
</fig>
<p>
<xref ref-type="table" rid="T1">Table 1</xref> provides a comparison of the average pixel-wise error in tracking the known waypoints (<xref ref-type="fig" rid="F4">Figure 4</xref>), computed with three different methods. First, we take the 24 taxel values corresponding to the timestamp at which the indenter is on each of the known waypoints 1 through 5, and train a fully connected neural network to regress the predicted locations. The average pixel-wise errors are reported under the <italic>Raw MLP</italic> heading. Similarly, we obtain the average pixel-wise errors for each of the five waypoints using the contours from raw data and from event data. These are reported under the <italic>Raw Contours</italic> and <italic>Event Contours</italic> headers, respectively, in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Mean errors, expressed as the ratio of the contact-location error to the BioTac SP width, at each waypoint over different trajectories. The values in bold refer to the best results for a given trajectory.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th colspan="2" align="left"/>
<th align="center">
<bold>1</bold>
</th>
<th align="center">
<bold>2</bold>
</th>
<th align="center">
<bold>3</bold>
</th>
<th align="center">
<bold>4</bold>
</th>
<th align="center">
<bold>5</bold>
</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td rowspan="3" align="left">Circle</td>
<td align="left">
<italic>Raw MLP</italic>
</td>
<td align="char" char=".">0.76</td>
<td align="char" char=".">0.47</td>
<td align="char" char=".">0.78</td>
<td align="char" char=".">0.45</td>
<td align="char" char=".">0.75</td>
</tr>
<tr>
<td align="left">
<italic>Raw Contours</italic>
</td>
<td align="char" char=".">0.07</td>
<td align="char" char=".">
<bold>0.06</bold>
</td>
<td align="char" char=".">0.35</td>
<td align="char" char=".">0.15</td>
<td align="char" char=".">0.31</td>
</tr>
<tr>
<td align="left">
<italic>Event Contours</italic>
</td>
<td align="char" char=".">
<bold>0.04</bold>
</td>
<td align="char" char=".">0.11</td>
<td align="char" char=".">
<bold>0.29</bold>
</td>
<td align="char" char=".">
<bold>0.12</bold>
</td>
<td align="char" char=".">
<bold>0.02</bold>
</td>
</tr>
<tr>
<td rowspan="3" align="left">Diagonal</td>
<td align="left">
<italic>Raw MLP</italic>
</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.30</td>
<td align="char" char=".">
<bold>0.01</bold>
</td>
<td align="char" char=".">0.28</td>
<td align="char" char=".">0.75</td>
</tr>
<tr>
<td align="left">
<italic>Raw Contours</italic>
</td>
<td align="char" char=".">
<bold>0.02</bold>
</td>
<td align="char" char=".">0.24</td>
<td align="char" char=".">0.17</td>
<td align="char" char=".">0.05</td>
<td align="char" char=".">0.02</td>
</tr>
<tr>
<td align="left">
<italic>Event Contours</italic>
</td>
<td align="char" char=".">0.10</td>
<td align="char" char=".">
<bold>0.11</bold>
</td>
<td align="char" char=".">0.05</td>
<td align="char" char=".">
<bold>0.05</bold>
</td>
<td align="char" char=".">
<bold>0.01</bold>
</td>
</tr>
<tr>
<td rowspan="3" align="left">Horizontal</td>
<td align="left">
<italic>Raw MLP</italic>
</td>
<td align="char" char=".">0.47</td>
<td align="char" char=".">0.33</td>
<td align="char" char=".">
<bold>0.01</bold>
</td>
<td align="char" char=".">0.30</td>
<td align="char" char=".">0.45</td>
</tr>
<tr>
<td align="left">
<italic>Raw Contours</italic>
</td>
<td align="char" char=".">
<bold>0.04</bold>
</td>
<td align="char" char=".">0.10</td>
<td align="char" char=".">0.32</td>
<td align="char" char=".">0.40</td>
<td align="char" char=".">0.50</td>
</tr>
<tr>
<td align="left">
<italic>Event Contours</italic>
</td>
<td align="char" char=".">0.13</td>
<td align="char" char=".">
<bold>0.09</bold>
</td>
<td align="char" char=".">0.21</td>
<td align="char" char=".">
<bold>0.04</bold>
</td>
<td align="char" char=".">
<bold>0.01</bold>
</td>
</tr>
<tr>
<td rowspan="3" align="left">Vertical</td>
<td align="left">
<italic>Raw MLP</italic>
</td>
<td align="char" char=".">0.56</td>
<td align="char" char=".">0.28</td>
<td align="char" char=".">
<bold>0.02</bold>
</td>
<td align="char" char=".">0.22</td>
<td align="char" char=".">0.40</td>
</tr>
<tr>
<td align="left">
<italic>Raw Contours</italic>
</td>
<td align="char" char=".">0.59</td>
<td align="char" char=".">0.32</td>
<td align="char" char=".">0.17</td>
<td align="char" char=".">0.14</td>
<td align="char" char=".">0.10</td>
</tr>
<tr>
<td align="left">
<italic>Event Contours</italic>
</td>
<td align="char" char=".">
<bold>0.05</bold>
</td>
<td align="char" char=".">
<bold>0.11</bold>
</td>
<td align="char" char=".">
<bold>0.02</bold>
</td>
<td align="char" char=".">
<bold>0.12</bold>
</td>
<td align="char" char=".">
<bold>0.07</bold>
</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s5-3">
<title>5.3 Magnitude of Force</title>
<p>We applied varying forces on the BioTac SP skin using a 2&#xa0;mm indenter to demonstrate the ability of our event contours to measure the correlation between the magnitude of contact force and the area of the maximal contour. The ground truth forces were measured with the help of a calibrated and accurate force sensor.</p>
<p>In order to obtain the relationship between contour areas and applied force, we trained a fully connected neural network with 7 hidden layers, with layer widths of 8, 16, 32, 64, 32, 16, and 8, respectively, using L2 loss. This network was used to compute a regression curve mapping the forces to the contour areas. We applied a logistic activation function and used an inversely scaled learning rate. The network was trained for 5,000 epochs on 200 data points. As points of comparison, we applied two other regression methods: a stochastic gradient descent regression with ElasticNet regularization and log loss, and an L2-regularized regression with Huber loss. We used the mean absolute percentage error, defined as<disp-formula id="e6">
<mml:math id="m8">
<mml:mi mathvariant="normal">M</mml:mi>
<mml:mi mathvariant="normal">A</mml:mi>
<mml:mi mathvariant="normal">P</mml:mi>
<mml:mi mathvariant="normal">E</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="double-struck">E</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo stretchy="false">&#x7c;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>max</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">&#x7c;</mml:mo>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:math>
<label>(6)</label>
</disp-formula>where <inline-formula id="inf3">
<mml:math id="m9">
<mml:mi mathvariant="double-struck">E</mml:mi>
</mml:math>
</inline-formula> is the expectation operator.</p>
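The metric of Eq. 6 can be implemented directly; the `eps` argument below plays the role of &#x3f5; in the denominator:

```python
import numpy as np

def mean_absolute_percentage_error(y_true, y_pred, eps=1e-9):
    """MAPE as in Eq. 6: E[ |y_i - yhat_i| / max(eps, |y_i|) ]."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return np.mean(np.abs(y_true - y_pred) / np.maximum(eps, np.abs(y_true)))
```
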
<p>Each of these methods was trained for 1,000 epochs over 200 data points, but for brevity we display only 100 epochs in <xref ref-type="fig" rid="F8">Figure 8</xref>. The figure also shows the results from the same regression techniques applied to the contours generated from raw data. It is evident from the plots that, across all learning algorithms, the event-based data shows a better validation loss curve during training than the raw data, and has an overall lower loss score at testing.</p>
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption>
<p>Comparing various force-area regression methods for raw vs. event data.</p>
</caption>
<graphic xlink:href="frobt-09-898075-g008.tif"/>
</fig>
<p>
<xref ref-type="fig" rid="F9">Figure 9</xref> shows a visual representation of the contour regions correlated with the applied forces. As is qualitatively evident, higher forces correspond to larger regions of tactile stimulus, as shown by the highest contour regions in red.</p>
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption>
<p>Contour region areas correlated with applied force. <bold>(A)</bold> 3N applied force, <bold>(B)</bold> 6N applied force, <bold>(C)</bold> 12N applied force.</p>
</caption>
<graphic xlink:href="frobt-09-898075-g009.tif"/>
</fig>
</sec>
<sec id="s5-4">
<title>5.4 Slippage Detection and Classification</title>
<p>Slippage detection has been achieved with the BioTac SP in many different ways (<xref ref-type="bibr" rid="B16">Su et al. (2015)</xref>; <xref ref-type="bibr" rid="B19">Veiga et al. (2020)</xref>; <xref ref-type="bibr" rid="B2">Calandra et al. (2018)</xref>; <xref ref-type="bibr" rid="B12">Naeini et al. (2019)</xref>), with most methods specifically designed for the task. Here we show that our generic method of spatio-temporal contours can also be used for slippage detection and classification, demonstrating that our approach is highly adaptable.</p>
<p>By tracking the contours spatio-temporally, we are able to detect both the time at which slippage occurs and its directionality. In the case of longitudinal slip, i.e., when the object moves linearly between the fingers, we can measure the direction as &#x201c;up&#x201d; or &#x201c;down&#x201d;. In the case of rotational slip, i.e., when the object rotates between the fingers, we can distinguish clockwise from counter-clockwise rotation.</p>
<p>We do this by comparing the event contours from fingers on opposing sides of the object, while the object is fully grasped by the Shadow Hand, as shown in <xref ref-type="fig" rid="F10">Figure 10</xref>. By tracking and comparing the trajectories generated by the contours on the first finger and the thumb, we can deduce both the time at which slippage occurs and its direction. In the case of longitudinal slippage, as in <xref ref-type="fig" rid="F11">Figure 11C</xref>, given the orientation of the BioTac SP sensors with respect to the object, both contour trajectories have the same direction of motion. In the case of rotational slippage, as in <xref ref-type="fig" rid="F11">Figure 11F</xref>, because of the opposing shear forces experienced on the thumb versus the first finger, the contour trajectories have opposing directions of motion.</p>
<fig id="F10" position="float">
<label>FIGURE 10</label>
<caption>
<p>Objects Used for Longitudinal and Rotational Slip Detection. <bold>(A)</bold> Box shape, <bold>(B)</bold> Spherical shape, <bold>(C)</bold> Cylinder shape, <bold>(D)</bold> Tumbler on constant-speed turntable.</p>
</caption>
<graphic xlink:href="frobt-09-898075-g010.tif"/>
</fig>
<fig id="F11" position="float">
<label>FIGURE 11</label>
<caption>
<p>Examples of trajectories during longitudinal and rotational slippage. <bold>(A)</bold>, <bold>(B)</bold> First Finger and Thumb trajectories for longitudinal slippage, <bold>(C)</bold> Directional Diagram for longitudinal slippage, <bold>(D)</bold>, <bold>(E)</bold> First Finger and Thumb trajectories for rotational slippage, <bold>(F)</bold> Directional Diagram for rotational slippage.</p>
</caption>
<graphic xlink:href="frobt-09-898075-g011.tif"/>
</fig>
<p>For the longitudinal slippage scenario, the object is allowed to slide down and is then gradually pulled back up, while maintaining a stable grasp. This can be seen in the contours moving from left to right spatially across the sensor&#x2019;s surface, and then from right back to left. As is evident from the trajectories, because the shear forces are in the same direction for both sensors, the directions of the respective trajectories are also the same. For the rotational slippage scenario, the object is affixed to a constant-speed turntable and allowed to rotate slowly between the opposing fingers. In this motion, due to the resulting opposing shear forces, the contour trajectories for the first finger and the thumb have clearly opposite directions.</p>
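The direction comparison described above can be sketched as a simple classifier over the two contour trajectories; the small-motion threshold and the label strings are illustrative, not part of the released pipeline:

```python
import numpy as np

def classify_slip(traj_finger, traj_thumb, min_motion=1e-6):
    """Classify slippage from contour trajectories of opposing fingers.

    Compares net displacement directions of the first finger and the thumb:
    aligned motion indicates longitudinal slip, opposing motion indicates
    rotational slip. Trajectories are sequences of (x, y) contour centroids.
    """
    d_finger = np.subtract(traj_finger[-1], traj_finger[0])  # net finger motion
    d_thumb = np.subtract(traj_thumb[-1], traj_thumb[0])     # net thumb motion
    if np.linalg.norm(d_finger) < min_motion or np.linalg.norm(d_thumb) < min_motion:
        return "no-slip"
    # Same direction -> longitudinal slip; opposite direction -> rotational slip.
    return "longitudinal" if np.dot(d_finger, d_thumb) > 0 else "rotational"
```
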
<p>The event contour outputs of these experiments, longitudinal and rotational slippage, are shown in <xref ref-type="fig" rid="F11">Figures 11A,B</xref> and <xref ref-type="fig" rid="F11">Figures 11D,E</xref>, respectively. In both cases the contours on the BioTac SP sensor are tracked over time, separately for the first finger and the thumb, which as per the directional diagrams (<xref ref-type="fig" rid="F11">Figures 11C,F</xref>) have different orientations. We obtain ground truth for our experiments using a 6-axis IMU mounted on each object, and use time-synchronized outputs from the IMU to compute the time of slip. We compare our event-based approach to a regression slope computed on the raw data, and the results of one such experiment are shown in <xref ref-type="fig" rid="F12">Figure 12</xref>.</p>
<fig id="F12" position="float">
<label>FIGURE 12</label>
<caption>
<p>Slip Detection Comparison Plots. From top to bottom, we have a low-pass filtered acceleration on the <italic>z</italic>-axis, the regression slope on the raw data, and the binary slip detection results from event contours.</p>
</caption>
<graphic xlink:href="frobt-09-898075-g012.tif"/>
</fig>
</sec>
<sec id="s5-5">
<title>5.5 Tracking Edges Using Contact Location</title>
<p>As another application of our contour tracking pipeline, we demonstrate a simple controller that takes the contour location relative to the UR-10 manipulator and outputs a motion vector for the finger to follow. The controller is based on a simple tactile servoing algorithm, in which we try to maintain the contour location at the center of the surface frame.</p>
<p>As the finger and the attached sensor move over the edge, only one portion of the BioTac SP is in contact with the edge surface. This can be detected and tracked by our controller, and since we start the controller execution with the sensor&#x2019;s center touching the edge, any deviation of the contours from this center is compensated by an opposing motion vector sent to the UR-10 manipulator as a control command.</p>
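A minimal proportional form of this tactile servoing rule is sketched below; the gain is an illustrative tuning parameter, not a value from this work:

```python
import numpy as np

def servo_command(contour_xy, center_xy, gain=0.5):
    """One proportional tactile-servoing step for edge following.

    Returns a planar motion vector for the manipulator that opposes the
    contour's deviation from the sensor-frame center, driving the contact
    back toward the middle of the BioTac SP surface. The gain is an
    illustrative assumption.
    """
    error = np.subtract(contour_xy, center_xy)  # deviation in the sensor frame
    return -gain * error                        # move opposite the deviation
```

For example, a contour detected to the right of the sensor center yields a leftward motion command, re-centering the contact as the finger traces the edge.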
<p>We track the edges of various non-trivial patterns, namely circle, spiral, triangle, and zig-zag. We overlay the ground truth image of our shapes over the trajectory that is tracked from the robot&#x2019;s pose data for the finger. Barring minor alignment issues between the surface and the finger, and some sliding experienced during the execution, the controller is able to guide the finger across the edges with relative accuracy. The results, with the ground truth shapes, are shown in <xref ref-type="fig" rid="F13">Figure 13</xref>. In each of the plots, we have the trajectory of the BioTac SP in world coordinate space in red, and the black polygons denote the inner and outer diameters of the edges of the shapes we track, also in world coordinate space measured in millimeters. We perform pixel-wise trajectory alignment to align the sensor pose to the ground truth boundary.</p>
<fig id="F13" position="float">
<label>FIGURE 13</label>
<caption>
<p>Edge tracking shapes, and results. <bold>(A)</bold> Circular edge, <bold>(B)</bold> Triangular edge, <bold>(C)</bold> spiral edge, <bold>(D)</bold> Zig-Zag Edge.</p>
</caption>
<graphic xlink:href="frobt-09-898075-g013.tif"/>
</fig>
</sec>
</sec>
<sec id="s6">
<title>6 Discussion</title>
<p>In conclusion, this work proposes a novel method to convert raw tactile data from the BioTac SP sensor into a spatio-temporal gradient (event) surface that closely tracks the regions of maximum tactile stimulus. Our algorithm approximates the region of touch on the skin of the BioTac SP sensor accurately enough to perform various tactile feedback tasks. Specifically, we demonstrated the usefulness of the new representation experimentally for the tasks of tracking tactile stimulus across the sensor, measuring relative force, detecting slippage and classifying its direction, and tracking edges on a plane. In comparison to other methods for processing data from fluid-based tactile sensors, our method runs in real time and requires minimal computational overhead. Our approach provides a robust, analytical method for detecting and tracking the location of tactile stimulus on the BioTac SP from just 24 data points, improves the signal-to-noise ratio of the raw data, and is independent of the baseline taxel values. The benefits of this approach should be even more apparent if hardware-based implementations of our algorithm are considered, since event-based processing inherently transmits only changes in tactile stimulus. Lastly, our approach is independent of any particular sensor type, and we present an accompanying dataset of task-agnostic data samples gathered with the BioTac SP sensor. These include motion tracking over known trajectories and their time-synchronized RGB images, force sensor readings for varying forces applied to the surface using different indenter diameters, and slippage data for several objects with accompanying ground-truth timestamps.</p>
</sec>
</body>
<back>
<sec id="s7">
<title>Data Availability Statement</title>
<p>The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="s8">
<title>Author Contributions</title>
<p>KG and PM were the primary researchers involved in presenting the work. KG conceptualized the algorithms and prepared the codebase, performed the experiments, and prepared the drafts of the manuscript. PM collected data, performed experiments, prepared plots, and analyzed results for comparisons and the dataset. CP suggested modifications to the experiments and presentation of the drafts, as well as provided domain knowledge. CF suggested the topic of this study, was involved in discussions on the work and contributed to the paper writing. NS suggested modifications to all sections of the manuscript, was involved in proofreading, and provided overall structural changes to the final manuscript. YA was the academic advisor, and provided expert advice on the work. All authors contributed to manuscript revision, read, and approved the submitted version.</p>
</sec>
<sec id="s9">
<title>Funding</title>
<p>The support of the National Science Foundation under grants BCS 1824198 and OISE 2020624, and the Maryland Robotics Center for a Graduate Student Fellowship to KG are gratefully acknowledged.</p>
</sec>
<sec sec-type="COI-statement" id="s10">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s11">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ack>
<p>The authors would like to acknowledge Behzad Sadrfaridpour and Chahat Deep Singh for their contributions to the presented work. This work has been submitted to arXiv as a pre-print.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brandli</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Berner</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>S.-C.</given-names>
</name>
<name>
<surname>Delbruck</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>A 240 &#xd7; 180 130 dB 3 &#x3bc;s Latency Global Shutter Spatiotemporal Vision Sensor</article-title>. <source>IEEE J. Solid-State Circuits</source> <volume>49</volume>, <fpage>2333</fpage>&#x2013;<lpage>2341</lpage>. <pub-id pub-id-type="doi">10.1109/jssc.2014.2342715</pub-id> </citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Calandra</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Owens</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Jayaraman</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Malik</surname>
<given-names>J.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>More Than a Feeling: Learning to Grasp and Regrasp Using Vision and Touch</article-title>. <source>IEEE Robot. Autom. Lett.</source> <volume>3</volume>, <fpage>3300</fpage>&#x2013;<lpage>3307</lpage>. <pub-id pub-id-type="doi">10.1109/lra.2018.2852779</pub-id> </citation>
</ref>
<ref id="B3">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Cramphorn</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Lloyd</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Lepora</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Voronoi Features for Tactile Sensing: Direct Inference of Pressure, Shear, and Contact Locations</article-title>,&#x201d; in <conf-name>2018 IEEE International Conference on Robotics and Automation (ICRA)</conf-name>, <conf-loc>Brisbane, QLD</conf-loc>, <conf-date>May 21&#x2013;25, 2018</conf-date>. <publisher-name>Institute of Electrical and Electronics Engineers (IEEE)</publisher-name>. <pub-id pub-id-type="doi">10.1109/ICRA.2018.8460644</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gallego</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Delbr&#xfc;ck</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Orchard</surname>
<given-names>G. M.</given-names>
</name>
<name>
<surname>Bartolozzi</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Taba</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Censi</surname>
<given-names>A.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>Event-based Vision: A Survey</article-title>. <source>IEEE Trans. Pattern Anal. Mach. Intell.</source> <volume>PP</volume>, <fpage>154</fpage>&#x2013;<lpage>180</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2020.3008413</pub-id> </citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Janotte</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Mastella</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Chicca</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Bartolozzi</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Touch in Robots: A Neuromorphic Approach</article-title>. <source>ERCIM News</source> <volume>2021</volume>, <fpage>34</fpage>&#x2013;<lpage>51</lpage>. </citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jing</surname>
<given-names>Y.-Q.</given-names>
</name>
<name>
<surname>Meng</surname>
<given-names>Q.-H.</given-names>
</name>
<name>
<surname>Qi</surname>
<given-names>P.-F.</given-names>
</name>
<name>
<surname>Zeng</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y.-J.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Signal Processing Inspired from the Olfactory Bulb for Electronic Noses</article-title>. <source>Meas. Sci. Technol.</source> <volume>28</volume>, <fpage>015105</fpage>. <pub-id pub-id-type="doi">10.1088/1361-6501/28/1/015105</pub-id> </citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lepora</surname>
<given-names>N. F.</given-names>
</name>
<name>
<surname>Church</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>De Kerckhove</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Hadsell</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Lloyd</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>From Pixels to Percepts: Highly Robust Edge Perception and Contour Following Using Deep Learning and an Optical Biomimetic Tactile Sensor</article-title>. <source>IEEE Robot. Autom. Lett.</source> <volume>4</volume>, <fpage>2101</fpage>&#x2013;<lpage>2107</lpage>. <pub-id pub-id-type="doi">10.1109/LRA.2019.2899192</pub-id> </citation>
</ref>
<ref id="B8">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lichtsteiner</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Posch</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Delbruck</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>A 128 &#xd7; 128 120 dB 15 &#x3bc;s Latency Asynchronous Temporal Contrast Vision Sensor</article-title>. <source>IEEE J. Solid-State Circuits</source> <volume>43</volume>, <fpage>566</fpage>&#x2013;<lpage>576</lpage>. <pub-id pub-id-type="doi">10.1109/jssc.2007.914337</pub-id> </citation>
</ref>
<ref id="B9">
<citation citation-type="web">
<person-group person-group-type="author">
<name>
<surname>Lin</surname>
<given-names>C.-h.</given-names>
</name>
<name>
<surname>Fishel</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Loeb</surname>
<given-names>G. E.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Estimating Point of Contact, Force and Torque in a Biomimetic Tactile Sensor with Deformable Skin</article-title>. <comment>Available at: <ext-link ext-link-type="uri" xlink:href="https://syntouchinc.com/wp-content/uploads/2016/12/2013_Lin_Analytical-1.pdf">https://syntouchinc.com/wp-content/uploads/2016/12/2013_Lin_Analytical-1.pdf</ext-link>.</comment> </citation>
</ref>
<ref id="B10">
<citation citation-type="web">
<person-group person-group-type="author">
<name>
<surname>Lucas</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>A Fast and Accurate Algorithm for Natural Neighbor Interpolation</article-title>. <comment>Available at: <ext-link ext-link-type="uri" xlink:href="https://gwlucastrig.github.io/TinfourDocs/NaturalNeighborTinfourAlgorithm/index.html">https://gwlucastrig.github.io/TinfourDocs/NaturalNeighborTinfourAlgorithm/index.html</ext-link>.</comment> </citation>
</ref>
<ref id="B11">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Mitrokhin</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ferm&#xfc;ller</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Parameshwara</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Aloimonos</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Event-based Moving Object Detection and Tracking</article-title>,&#x201d; in <conf-name>2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</conf-name> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1109/iros.2018.8593805</pub-id> </citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Naeini</surname>
<given-names>F. B.</given-names>
</name>
<name>
<surname>AlAli</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Al-Husari</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Rigi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Al-Sharman</surname>
<given-names>M. K.</given-names>
</name>
<name>
<surname>Makris</surname>
<given-names>D.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). <article-title>A Novel Dynamic-Vision-Based Approach for Tactile Sensing Applications</article-title>. <source>IEEE Trans. Instrum. Meas.</source> <volume>69</volume>, <fpage>1881</fpage>&#x2013;<lpage>1893</lpage>. </citation>
</ref>
<ref id="B13">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Narang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Sundaralingam</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Macklin</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Mousavian</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Fox</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2021a</year>). <source>Sim-to-real for Robotic Tactile Sensing via Physics-Based Simulation and Learned Latent Projections</source>. <publisher-name>arXiv preprint arXiv:2103.16747</publisher-name>. </citation>
</ref>
<ref id="B14">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Narang</surname>
<given-names>Y. S.</given-names>
</name>
<name>
<surname>Sundaralingam</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Van Wyk</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Mousavian</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Fox</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2021b</year>). <source>Interpreting and Predicting Tactile Signals for the Syntouch Biotac</source>. <publisher-name>arXiv preprint arXiv:2101.05452</publisher-name>. </citation>
</ref>
<ref id="B15">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Sanket</surname>
<given-names>N. J.</given-names>
</name>
<name>
<surname>Parameshwara</surname>
<given-names>C. M.</given-names>
</name>
<name>
<surname>Singh</surname>
<given-names>C. D.</given-names>
</name>
<name>
<surname>Kuruttukulam</surname>
<given-names>A. V.</given-names>
</name>
<name>
<surname>Ferm&#xfc;ller</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Scaramuzza</surname>
<given-names>D.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). &#x201c;<article-title>Evdodgenet: Deep Dynamic Obstacle Dodging with Event Cameras</article-title>,&#x201d; in <conf-name>2020 IEEE International Conference on Robotics and Automation (ICRA)</conf-name> (<publisher-name>IEEE</publisher-name>), <fpage>10651</fpage>&#x2013;<lpage>10657</lpage>. <pub-id pub-id-type="doi">10.1109/icra40945.2020.9196877</pub-id> </citation>
</ref>
<ref id="B16">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Su</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Hausman</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Chebotar</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Molchanov</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Loeb</surname>
<given-names>G. E.</given-names>
</name>
<name>
<surname>Sukhatme</surname>
<given-names>G. S.</given-names>
</name>
<etal/>
</person-group> (<year>2015</year>). &#x201c;<article-title>Force Estimation and Slip Detection/classification for Grip Control Using a Biomimetic Tactile Sensor</article-title>,&#x201d; in <conf-name>IEEE-RAS International Conference on Humanoid Robots (Humanoids)</conf-name> (<publisher-name>IEEE</publisher-name>), <fpage>297</fpage>&#x2013;<lpage>303</lpage>. <pub-id pub-id-type="doi">10.1109/humanoids.2015.7363558</pub-id> </citation>
</ref>
<ref id="B17">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Sundaralingam</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Lambert</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Handa</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Boots</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Hermans</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Birchfield</surname>
<given-names>S.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). <source>Robust Learning of Tactile Force Estimation through Robot Interaction</source>. <publisher-name>arXiv:1810.06187</publisher-name>. </citation>
</ref>
<ref id="B18">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Taunyazov</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Sng</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>See</surname>
<given-names>H. H.</given-names>
</name>
<name>
<surname>Lim</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Kuan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ansari</surname>
<given-names>A. F.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). &#x201c;<article-title>Event-driven Visual-Tactile Sensing and Learning for Robots</article-title>,&#x201d; in <conf-name>Proceedings of Robotics: Science and Systems</conf-name> (<publisher-name>IEEE</publisher-name>). </citation>
</ref>
<ref id="B19">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Veiga</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Edin</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Peters</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Grip Stabilization through Independent Finger Tactile Feedback Control</article-title>. <source>Sensors</source> <volume>20</volume>, <fpage>1748</fpage>. <pub-id pub-id-type="doi">10.3390/s20061748</pub-id> </citation>
</ref>
<ref id="B20">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ward-Cherrier</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Pestell</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Cramphorn</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Winstone</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Giannaccini</surname>
<given-names>M. E.</given-names>
</name>
<name>
<surname>Rossiter</surname>
<given-names>J.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>The Tactip Family: Soft Optical Tactile Sensors with 3d-Printed Biomimetic Morphologies</article-title>. <source>Soft Robot.</source> <volume>5</volume>, <fpage>216</fpage>&#x2013;<lpage>227</lpage>. <pub-id pub-id-type="doi">10.1089/soro.2017.0052</pub-id> </citation>
</ref>
<ref id="B21">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Wettels</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Loeb</surname>
<given-names>G. E.</given-names>
</name>
</person-group> (<year>2011</year>). &#x201c;<article-title>Haptic Feature Extraction from a Biomimetic Tactile Sensor: Force, Contact Location and Curvature</article-title>,&#x201d; in <conf-name>2011 IEEE International Conference on Robotics and Biomimetics</conf-name> (<publisher-name>IEEE</publisher-name>), <fpage>2471</fpage>&#x2013;<lpage>2478</lpage>. <pub-id pub-id-type="doi">10.1109/robio.2011.6181676</pub-id> </citation>
</ref>
<ref id="B22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wettels</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Santos</surname>
<given-names>V. J.</given-names>
</name>
<name>
<surname>Johansson</surname>
<given-names>R. S.</given-names>
</name>
<name>
<surname>Loeb</surname>
<given-names>G. E.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Biomimetic Tactile Sensor Array</article-title>. <source>Adv. Robot.</source> <volume>22</volume>, <fpage>829</fpage>&#x2013;<lpage>849</lpage>. <pub-id pub-id-type="doi">10.1163/156855308x314533</pub-id> </citation>
</ref>
<ref id="B23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Chien</surname>
<given-names>C.-H.</given-names>
</name>
<name>
<surname>Delbruck</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>S.-C.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>A 0.5 V 55 &#x3bc;W 64 &#xd7; 2 Channel Binaural Silicon Cochlea for Event-Driven Stereo-Audio Sensing</article-title>. <source>IEEE J. Solid-State Circuits</source> <volume>51</volume>, <fpage>2554</fpage>&#x2013;<lpage>2569</lpage>. <pub-id pub-id-type="doi">10.1109/jssc.2016.2604285</pub-id> </citation>
</ref>
</ref-list>
</back>
</article>