<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2022.858329</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>EventHD: Robust and efficient hyperdimensional learning with neuromorphic sensor</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Zou</surname> <given-names>Zhuowen</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1643701/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Alimohamadi</surname> <given-names>Haleh</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1692018/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Kim</surname> <given-names>Yeseong</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1527394/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Najafi</surname> <given-names>M. Hassan</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1643666/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Srinivasa</surname> <given-names>Narayan</given-names></name>
<xref ref-type="aff" rid="aff5"><sup>5</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Imani</surname> <given-names>Mohsen</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1392248/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Computer Science, University of California, Irvine</institution>, <addr-line>Irvine, CA</addr-line>, <country>United States</country></aff>
<aff id="aff2"><sup>2</sup><institution>School of Engineering, University of California, Los Angeles</institution>, <addr-line>Los Angeles, CA</addr-line>, <country>United States</country></aff>
<aff id="aff3"><sup>3</sup><institution>Daegu Gyeongbuk Institute of Science and Technology</institution>, <addr-line>Daegu</addr-line>, <country>South Korea</country></aff>
<aff id="aff4"><sup>4</sup><institution>School of Computing and Informatics, University of Louisiana</institution>, <addr-line>Lafayette, LA</addr-line>, <country>United States</country></aff>
<aff id="aff5"><sup>5</sup><institution>Intel Labs</institution>, <addr-line>Santa Clara, CA</addr-line>, <country>United States</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Cornelia Fermuller, University of Maryland, College Park, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Qinru Qiu, Syracuse University, United States; Tomas Teijeiro, Swiss Federal Institute of Technology Lausanne, Switzerland</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Mohsen Imani <email>m.imani&#x00040;uci.edu</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Neuromorphic Engineering, a section of the journal Frontiers in Neuroscience</p></fn></author-notes>
<pub-date pub-type="epub">
<day>27</day>
<month>07</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>16</volume>
<elocation-id>858329</elocation-id>
<history>
<date date-type="received">
<day>19</day>
<month>01</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>27</day>
<month>07</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2022 Zou, Alimohamadi, Kim, Najafi, Srinivasa and Imani.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Zou, Alimohamadi, Kim, Najafi, Srinivasa and Imani</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Brain-inspired computing models have shown great potential to outperform today&#x00027;s deep learning solutions in terms of robustness and energy efficiency. Particularly, Hyper-Dimensional Computing (HDC) has shown promising results in enabling efficient and robust cognitive learning. In this study, we exploit HDC as an alternative computational model that mimics important brain functionalities toward high-efficiency and noise-tolerant neuromorphic computing. We present <sans-serif>EventHD</sans-serif>, an end-to-end learning framework based on HDC for robust, efficient learning from neuromorphic sensors. We first introduce a spatial and temporal encoding scheme to map event-based neuromorphic data into high-dimensional space. Then, we leverage HDC mathematics to support learning and cognitive tasks over the encoded data, such as information association and memorization. <sans-serif>EventHD</sans-serif> also provides a notion of confidence for each prediction, thus enabling self-learning from unlabeled data. We evaluate <sans-serif>EventHD</sans-serif>&#x00027;s efficiency over data collected from Dynamic Vision Sensors (DVS). Our results indicate that <sans-serif>EventHD</sans-serif> can provide online learning and cognitive support while operating over raw DVS data without the costly preprocessing step. In terms of efficiency, <sans-serif>EventHD</sans-serif> provides 14.2&#x000D7; faster computation and 19.8&#x000D7; higher energy efficiency than state-of-the-art learning algorithms while improving computational robustness by 5.9&#x000D7;.</p></abstract>
<kwd-group>
<kwd>hyperdimensional computing</kwd>
<kwd>neuromorphic sensor</kwd>
<kwd>brain-inspired computing</kwd>
<kwd>Dynamic Vision Sensor</kwd>
<kwd>machine learning</kwd>
</kwd-group>
<contract-sponsor id="cn001">National Science Foundation<named-content content-type="fundref-id">10.13039/100000001</named-content></contract-sponsor>
<contract-sponsor id="cn002">Office of Naval Research<named-content content-type="fundref-id">10.13039/100000006</named-content></contract-sponsor>
<contract-sponsor id="cn003">Semiconductor Research Corporation<named-content content-type="fundref-id">10.13039/100000028</named-content></contract-sponsor>
<contract-sponsor id="cn004">Cisco Systems<named-content content-type="fundref-id">10.13039/100004351</named-content></contract-sponsor>
<contract-sponsor id="cn005">Air Force Office of Scientific Research<named-content content-type="fundref-id">10.13039/100000181</named-content></contract-sponsor>
<counts>
<fig-count count="8"/>
<table-count count="1"/>
<equation-count count="3"/>
<ref-count count="43"/>
<page-count count="14"/>
<word-count count="8840"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Many applications run machine learning algorithms to assimilate the data collected in the swarm of devices on the Internet of Things (IoT). Sending all the data to the cloud for processing is not scalable and cannot guarantee a real-time response. However, the high computational complexity and memory requirements of existing machine learning models hinder their usability in a wide variety of real-life embedded applications, where device resources and power budgets are limited (Denil et al., <xref ref-type="bibr" rid="B1">2013</xref>; Zaslavsky et al., <xref ref-type="bibr" rid="B41">2013</xref>; Sun et al., <xref ref-type="bibr" rid="B37">2016</xref>; Xiang and Kim, <xref ref-type="bibr" rid="B40">2019</xref>). Therefore, we need alternative learning methods that can train on less-powerful IoT devices while ensuring robustness and generalization.</p>
<p>System efficiency comes from both sensing and data processing. Unlike classical vision systems, neuromorphic systems try to capture a notion of seeing motion efficiently. Although bio-inspired learning methods, i.e., spiking neural networks (SNNs) (Schemmel et al., <xref ref-type="bibr" rid="B35">2006</xref>; Liu et al., <xref ref-type="bibr" rid="B23">2014</xref>), address issues related to energy efficiency (Huh and Sejnowski, <xref ref-type="bibr" rid="B10">2017</xref>; Neftci et al., <xref ref-type="bibr" rid="B26">2019</xref>), these systems have yet to provide robustness and brain-like cognitive support. For example, existing bio-inspired methods cannot integrate perception and action.</p>
<p>To achieve real-time performance with high energy efficiency and robustness, our approach redesigns learning algorithms using strategies that closely model <italic>the human brain</italic> at an abstract level. We exploit Hyper-Dimensional Computing (HDC) as an alternative computational model that mimics important brain functionalities toward high-efficiency and noise-tolerant computation (Kanerva, <xref ref-type="bibr" rid="B16">2009</xref>; Rahimi et al., <xref ref-type="bibr" rid="B32">2016b</xref>; Pale et al., <xref ref-type="bibr" rid="B27">2021</xref>, <xref ref-type="bibr" rid="B28">2022</xref>; Zou et al., <xref ref-type="bibr" rid="B43">2021</xref>). HDC supports operators that emulate the behavior of associative memory and enables higher cognitive functionalities (Gayler, <xref ref-type="bibr" rid="B5">2004</xref>; Kanerva, <xref ref-type="bibr" rid="B16">2009</xref>; Poduval et al., <xref ref-type="bibr" rid="B29">2022</xref>). In HDC, objects are encoded with high-dimensional vectors, called <italic>hypervectors</italic>, which have thousands of elements (Kanerva, <xref ref-type="bibr" rid="B16">2009</xref>; Rahimi et al., <xref ref-type="bibr" rid="B32">2016b</xref>; Imani et al., <xref ref-type="bibr" rid="B12">2019c</xref>). HDC incorporates learning capability along with the typical memory functions of storing/loading information. HDC is well suited to enable efficient and robust learning because: (i) HDC models are computationally efficient to train, highly parallel at heart, and amenable to hardware-level optimization (Wu et al., <xref ref-type="bibr" rid="B39">2018</xref>; Imani et al., <xref ref-type="bibr" rid="B13">2019b</xref>), (ii) HDC supports single-pass learning tasks using a small amount of data (Rahimi et al., <xref ref-type="bibr" rid="B30">2016a</xref>), and (iii) HDC exploits a redundant and holographic representation with significant robustness to noise and failure in hardware (Li et al., <xref ref-type="bibr" rid="B22">2016</xref>).</p>
<p>A few recent studies have tried to exploit HDC to process neuromorphic sensors (Mitrokhin et al., <xref ref-type="bibr" rid="B25">2019</xref>; Hersche et al., <xref ref-type="bibr" rid="B9">2020</xref>). However, these solutions are not end-to-end, as they operate over preprocessed data. Preprocessing is a costly <italic>time-image</italic> feature extraction that maps noisy neuromorphic data to a small number of features. This preprocessing has the following drawbacks: (1) it dominates the entire computation cost (Mitrokhin et al., <xref ref-type="bibr" rid="B25">2019</xref>); (2) it reduces the necessity of using HDC-based learning, as a less-sophisticated learning algorithm can also provide acceptable accuracy over the extracted features; (3) it requires heterogeneous data processing and a non-uniform data flow to accelerate the preprocessing and HDC-based steps; and (4) it suffers from low computational robustness, as the preprocessing step operates over the original data with high sensitivity to noise (Hersche et al., <xref ref-type="bibr" rid="B9">2020</xref>; Imani et al., <xref ref-type="bibr" rid="B14">2020</xref>).</p>
<p>In this article, we propose <sans-serif>EventHD</sans-serif>, a neurally-inspired hyperdimensional system for real-time learning from a neuromorphic sensor. To the best of our knowledge, <sans-serif>EventHD</sans-serif> is the first HDC-based algorithm that provides robust and efficient learning by operating over raw spike data from a neuromorphic sensor. The main contributions of the article are as follows:</p>
<list list-type="bullet">
<list-item><p>We propose a novel hyperdimensional encoding module that receives neuromorphic data and maps it to holographic hyperdimensional spikes with highly sparse representation. Our encoding preserves the spatial and temporal correlation between the input events to naturally keep their similarity in high dimensions. In addition, our encoding module preserves asynchrony from the neuromorphic devices.</p></list-item>
<list-item><p>We enable supervised and semi-supervised learning using HDC-based algorithms. Our solution enables single-pass training, where the HDC model can be updated in real-time by looking at each training sample only once. <sans-serif>EventHD</sans-serif> also defines a confidence for each prediction and enables self-learning from unlabeled data.</p></list-item>
<list-item><p>We show <sans-serif>EventHD</sans-serif>&#x00027;s capability to memorize associated perception-action pairs and define the theoretical capacity of this model to reason based on prior knowledge.</p></list-item>
</list>
<p>We evaluate <sans-serif>EventHD</sans-serif>&#x00027;s efficiency and accuracy over various data collected from DVS sensors. Our results indicate that <sans-serif>EventHD</sans-serif> can provide real-time learning and cognitive support while operating over raw DVS data without the costly preprocessing step. Furthermore, <sans-serif>EventHD</sans-serif> on a single node provides 14.2&#x000D7; faster computation and 19.8&#x000D7; higher energy efficiency than state-of-the-art learning algorithms while improving computational robustness by 5.9&#x000D7;.</p>
</sec>
<sec id="s2">
<title>2. Preliminary and overview</title>
<sec>
<title>2.1. Hyperdimensional learning</title>
<p>The brain&#x00027;s circuits are massive in terms of numbers of neurons and synapses, suggesting that large circuits are fundamental to the brain&#x00027;s computing. Hyperdimensional computing (HDC) (Kanerva, <xref ref-type="bibr" rid="B16">2009</xref>) explores this idea by looking at computing with ultra-wide words&#x02014;i.e., with very high-dimensional vectors or hypervectors. The fundamental computation units in HDC are high-dimensional representations of data known as &#x0201C;hypervectors&#x0201D; constructed from raw signals using an encoding procedure. There exists a huge number of different, nearly orthogonal hypervectors when the dimensionality is in the thousands (Kanerva, <xref ref-type="bibr" rid="B15">1998</xref>). This lets us combine two such hypervectors into a new hypervector using well-defined vector space operations while keeping the information of both with high probability. Hypervectors are holographic, that is, the information encoded into a hypervector is distributed &#x0201C;equally&#x0201D; over all of its components. In our case, encoding is done using (pseudo)random hypervectors with i.i.d. components as the ingredients. A hypervector contains all the information combined and spread across all its components in a full holistic representation, so that no component is more responsible for storing any piece of information than another.</p>
<p>In HDC, hypervectors are compositional&#x02014;they enable computation in superposition, unlike standard neural representations (Kanerva, <xref ref-type="bibr" rid="B16">2009</xref>). These HDC operations allow us to reason about and search through images that satisfy pre-specified constraints. These composite representations can be combined using HDC operations to encode temporal information or complex hierarchical relationships. This capability is especially powerful for understanding the relationship between objects in images in both time and space. These operations are simple in HDC and require only trivial element-wise arithmetic. By contrast, to achieve the same effect in a neural network, e.g., spiking neural networks (Wang et al., <xref ref-type="bibr" rid="B38">2018</xref>), we would need to assign images corresponding to composite classes a new label and train a separate model for prediction. HDC also provides a natural way to preserve temporal information using a permutation operator (Rahimi et al., <xref ref-type="bibr" rid="B32">2016b</xref>). For example, we encode a sequence of video frames while preserving the temporal structure. This would allow us to efficiently compute a similarity score for entire sequences of video using a standard HDC similarity search, which is extremely efficient in hardware (Li et al., <xref ref-type="bibr" rid="B22">2016</xref>).</p>
</sec>
<sec>
<title>2.2. Hyperdimensional primitives</title>
<p>Let us assume <inline-formula><mml:math id="M1"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="M2"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> are two randomly generated hypervectors (<inline-formula><mml:math id="M3"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mrow><mml:mi>D</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula>) with <inline-formula><mml:math id="M4"><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02243;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>, where &#x003B4; is the cosine similarity function, <inline-formula><mml:math id="M5"><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mo stretchy="false">&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo stretchy="false">&#x02225;</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mo stretchy="false">&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo stretchy="false">&#x02225;</mml:mo></mml:mrow></mml:mfrac></mml:math></inline-formula>.</p>
<p><bold>Binding (&#x0002A;)</bold> of two hypervectors <inline-formula><mml:math id="M6"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="M7"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> is done by component-wise multiplication (<monospace>XOR</monospace> in binary) and denoted as <inline-formula><mml:math id="M8"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002A;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>. The result of the operation is a new hypervector that is dissimilar to its constituent vectors i.e., <inline-formula><mml:math id="M9"><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002A;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02248;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>; thus, binding is well suited for associating two hypervectors. Binding is used for variable-value association and, more generally, for mapping.</p>
<p><bold>Bundling (&#x0002B;)</bold> operation is done <italic>via</italic> component-wise addition of hypervectors, denoted as <inline-formula><mml:math id="M10"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>. The bundling is a memorization function that keeps the information of input data in a bundled vector. The bundled hypervectors preserve similarity to their component hypervectors i.e., <inline-formula><mml:math id="M11"><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0003E;</mml:mo><mml:mo>&#x0003E;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>. Hence, a bundling of hypervectors is well suited for representing the set of elements corresponding to the hypervectors that are bundled, and we may test their membership by a similarity check.</p>
<p><bold>Permutation (</bold>&#x003C1;<bold>)</bold> operation, <inline-formula><mml:math id="M12"><mml:msup><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, shuffles components of <inline-formula><mml:math id="M13"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula> with <italic>n</italic>-bit(s) rotation. The intriguing property of the permutation is that it creates a near-orthogonal and <italic>reversible</italic> hypervector to <inline-formula><mml:math id="M14"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula>, i.e., <inline-formula><mml:math id="M15"><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02243;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula> when <italic>n</italic> &#x02260; 0 and <inline-formula><mml:math id="M16"><mml:msup><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula>. Thus, we can use it to represent <italic>sequences</italic> and <italic>orders</italic>.</p>
<p><bold>Reasoning</bold> is done by measuring the similarity of hypervectors. We design the encoding of the hypervectors such that the similarity between the hypervectors reflects the similarity between the entities that they represent.</p>
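<p>As an illustration only (not part of the original article), the following numpy sketch shows one possible realization of these primitives for bipolar hypervectors; the dimensionality <monospace>D</monospace>, the random seed, and the helper names are our own choices.</p>
<preformat>
import numpy as np

D = 10000                                  # dimensionality in the thousands
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (*): component-wise multiplication; nearly orthogonal to a and b."""
    return a * b

def bundle(hvs):
    """Bundling (+): component-wise addition; stays similar to each component."""
    return np.sum(hvs, axis=0)

def permute(a, n=1):
    """Permutation (rho^n): reversible n-position rotational shift."""
    return np.roll(a, n)

def cos_sim(a, b):
    """delta: cosine similarity between two hypervectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

H1, H2 = random_hv(), random_hv()
print(cos_sim(H1, H2))                # ~0: random hypervectors are near-orthogonal
print(cos_sim(bind(H1, H2), H1))      # ~0: binding is dissimilar to its operands
print(cos_sim(bundle([H1, H2]), H1))  # >> 0: bundling preserves membership
print(np.array_equal(permute(permute(H1, 3), -3), H1))  # True: permutation reverses
</preformat>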
</sec>
<sec>
<title>2.3. Overview</title>
<p>This article focuses on learning over data collected by the Dynamic Vision Sensor (DVS). Unlike a normal camera that captures data synchronously and frame-based, a DVS camera mimics the mechanics of the human retina by detecting and recording changes in the illumination of a pixel asynchronously, sending a stream of events to memory. This leads to sparse data, because only a small subset of pixels reports events at any time, with rich temporal information due to the asynchrony, which makes the data much more difficult to train on. Nevertheless, DVS data has been actively studied in the context of neuromorphic computing, e.g., in conjunction with spiking neural networks, for various image-related tasks such as gesture recognition and object classification (Massa et al., <xref ref-type="bibr" rid="B24">2020</xref>).</p>
<p>In this article, we present <sans-serif>EventHD</sans-serif>, an end-to-end framework for robust, efficient hyperdimensional learning from the neuromorphic sensor. Unlike all prior works that operate over preprocessed data, to the best of our knowledge, <sans-serif>EventHD</sans-serif> is the first HDC-based solution that directly operates over raw neuromorphic data. We first develop a novel hyperdimensional encoding scheme to map event-based neuromorphic data into high-dimensional space. <sans-serif>EventHD</sans-serif> exploits hyperdimensional mathematics to preserve spatial and temporal information from raw sensor data (Section 3). Next, we introduce novel algorithmic solutions to perform classification and self-learning over the encoded data (Section 4). This includes enabling single-pass classification and supporting association and memorization over the perception-action space (Section 5).</p>
</sec>
</sec>
<sec id="s3">
<title>3. <sans-serif>EventHD</sans-serif> spatial encoding</title>
<p>We exploit hyperdimensional computing mathematics to design a novel encoding module that receives event-based spiking data and generates high-dimensional data. Our HDC mapping is not a random projection. Instead, it preserves the temporal and spatial correlation between the input data. The goal of this encoder is to represent spikes in a holographic representation; thus, a single noisy spike in the original data is represented as a pattern of neural activity in high-dimensional space. The holographic representation means that the information of each original spike is uniformly distributed over all dimensions of our encoded hypervector. In addition, given that our encoding is purely event-based, it can also operate in an asynchronous setting, reacting to DVS events and thus preserving asynchrony.</p>
<p>Let us assume the output of the DVS camera is in the form of <italic>E</italic><sub><italic>k</italic></sub> &#x0003D; (<bold>x</bold><sub><italic>k</italic></sub>, <italic>t</italic><sub><italic>k</italic></sub>, <italic>p</italic><sub><italic>k</italic></sub>), signaling at time <italic>t</italic><sub><italic>k</italic></sub> and location <bold>x</bold><sub><italic>k</italic></sub> &#x0003D; (<italic>x</italic><sub><italic>k</italic></sub>, <italic>y</italic><sub><italic>k</italic></sub>) that the illumination change has surpassed a threshold <italic>p</italic><sub><italic>k</italic></sub>&#x000B7;<italic>C</italic>, where <italic>p</italic><sub><italic>k</italic></sub> &#x02208; {&#x02212;1, 1} indicates the polarity of the change and <italic>C</italic> is a predetermined threshold. For simplicity, we first explain how our encoder preserves the spatial correlation of spikes in holographic high-dimensional space. Then, we add temporal locality as a memorization term to our encoder.</p>
<sec>
<title>3.1. Base generation</title>
<p>Hyperdimensional computing encoding is performed based on a set of base or seed hypervectors. The base hypervectors represent the basic alphabet of the data. For DVS data, for example, the alphabet consists of illumination changes and the positions of events. <xref ref-type="fig" rid="F1">Figures 1A,B</xref> show the <sans-serif>EventHD</sans-serif> base generation procedure.</p>
<list list-type="bullet">
<list-item><p><bold>Illumination hypervector:</bold> The illumination change has two possibilities, increase or decrease. This information can be represented using a random hypervector, where <inline-formula><mml:math id="M17"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mrow><mml:mi>D</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> and <inline-formula><mml:math id="M18"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> (<xref ref-type="fig" rid="F1">Figure 1A</xref>).</p></list-item>
<list-item><p><bold>Event-position hypervector:</bold> The information of event positions can be represented using a set of position hypervectors <inline-formula><mml:math id="M19"><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mspace width="0.3em" class="thinspace"/><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>, where the indices represent the row and column location of an event in the input (<italic>r</italic> &#x000D7; <italic>c</italic> pixels in the DVS camera). The position hypervectors cannot be generated randomly, as they need to preserve the spatial correlation between neighboring events (<xref ref-type="fig" rid="F1">Figure 1B</xref>). In other words, events with a closer physical distance have a higher correlation. Using techniques introduced in Gallant and Culliton (<xref ref-type="bibr" rid="B4">2016</xref>) and Kim et al. (<xref ref-type="bibr" rid="B18">2021</xref>), we generate position hypervectors in three steps (see the sketch after this list): (1) partition the events into smaller non-overlapping <italic>k</italic> &#x000D7; <italic>k</italic> windows; (2) generate random hypervectors for the pixels located at the corners of the windows. For example, we generate random hypervectors for <inline-formula><mml:math id="M20"><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>,</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>. This repeats over all <italic>k</italic> &#x000D7; <italic>k</italic> windows. Since these vectors are randomly chosen and they are in high-dimensional space, they are nearly orthogonal (<inline-formula><mml:math id="M21"><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02243;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>). (3) For all intermediate pixels, we perform interpolation to generate correlated hypervectors. Each pixel gets partial dimensions from the position hypervectors located at the four corners of its <italic>k</italic> &#x000D7; <italic>k</italic> window. The number of dimensions to take from each corner hypervector depends on the relative position of the pixel within the window, such that the generated position hypervectors preserve the 2D spatial correlation between events&#x00027; positions. For an in-depth description of the spatial interpolation, readers are referred to Gallant and Culliton (<xref ref-type="bibr" rid="B4">2016</xref>) and Kim et al. (<xref ref-type="bibr" rid="B18">2021</xref>).</p></list-item>
</list>
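<p>The sketch below illustrates this base generation (a minimal sketch, not from the original article; the grid size, window size <monospace>k</monospace>, and function names are illustrative assumptions): random hypervectors are assigned to window corners, and every intermediate pixel takes shares of dimensions from its four surrounding corners in proportion to bilinear proximity.</p>
<preformat>
import numpy as np

D, k = 10000, 8                  # dimensionality and window size (illustrative)
rows, cols = 33, 33              # pixel grid; corners at multiples of k
rng = np.random.default_rng(0)

# Steps 1-2: random (near-orthogonal) hypervectors at window corners only.
corner = {(r, c): rng.choice([-1, 1], size=D)
          for r in range(0, rows, k) for c in range(0, cols, k)}

def position_hv(i, j):
    """Step 3: interpolate pixel (i, j) from the four corners of its window.
    Each corner contributes a block of dimensions sized by its bilinear
    weight, so physically close pixels share most of their components."""
    r0 = min((i // k) * k, rows - 1 - k)   # clamp so (r0 + k, c0 + k) exists
    c0 = min((j // k) * k, cols - 1 - k)
    ar, ac = (i - r0) / k, (j - c0) / k    # relative position inside window
    weights = [((r0, c0), (1 - ar) * (1 - ac)), ((r0, c0 + k), (1 - ar) * ac),
               ((r0 + k, c0), ar * (1 - ac)), ((r0 + k, c0 + k), ar * ac)]
    hv, start = np.empty(D, dtype=np.int64), 0
    for key, w in weights:
        stop = min(start + int(round(w * D)), D)
        hv[start:stop] = corner[key][start:stop]
        start = stop
    hv[start:] = corner[(r0 + k, c0 + k)][start:]  # absorb rounding remainder
    return hv

# Nearby pixels get similar hypervectors; distant ones are near-orthogonal.
a, b = position_hv(3, 3), position_hv(3, 4)
print(np.dot(a, b) / D)          # high similarity for neighboring pixels
</preformat>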
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><sans-serif>EventHD</sans-serif> spatial encoding: <bold>(A)</bold> base generation for illumination, <bold>(B)</bold> base generation for event position to keep 2D correlation of pixels in neuromorphic image, and <bold>(C)</bold> spatial encoding that associates and memorizes illumination and position hypervectors.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-16-858329-g0001.tif"/>
</fig>
</sec>
<sec>
<title>3.2. Spatial encoding</title>
<p>At a given time, our encoder looks at the neuromorphic data as an image with a few activated spikes/events. The goal of the encoder is to map this data into high-dimensional space using the pre-generated base hypervectors. The encoding is performed in two steps, as shown in <xref ref-type="fig" rid="F1">Figure 1C</xref>:</p>
<p><bold>Associating event-illumination:</bold> For every activated event, our encoder exploits the binding operation to associate the event position with the corresponding illumination hypervector. For example, if an event at position [<italic>i, j</italic>] is activated, our encoder associates the information using: <inline-formula><mml:math id="M22"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002A;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, where <inline-formula><mml:math id="M23"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> can be <inline-formula><mml:math id="M24"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> or <inline-formula><mml:math id="M25"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> depending on the illumination direction. The bound hypervector preserves the position and illumination information in a new hypervector that is nearly orthogonal to its operands. We perform the same association for all activated events.</p>
<p><bold>Event memorization:</bold> In HDC, bundling acts as memorization. We exploit this feature to memorize the information of all activated events at a given time. Our solution bundles the associated hypervectors of all activated events: <inline-formula><mml:math id="M26"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msubsup><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002A;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>. The memorization and summation only happen for pixels (<italic>i, j</italic>) that have a spike event.</p>
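<p>Putting the two steps together, a minimal sketch of the spatial encoder could look as follows (illustrative only; it assumes the <monospace>position_hv</monospace> interpolation sketched in Section 3.1 and bipolar illumination base hypervectors).</p>
<preformat>
import numpy as np

D = 10000
rng = np.random.default_rng(1)
L_plus = rng.choice([-1, 1], size=D)   # illumination increase
L_minus = -L_plus                      # illumination decrease

def spatial_encode(events, position_hv):
    """events: iterable of (i, j, p) tuples for one time window, with
    polarity p in {-1, +1}. Bind each event's position hypervector with
    its illumination hypervector, then bundle (sum) over all events."""
    S = np.zeros(D, dtype=np.int64)
    for i, j, p in events:
        L = L_plus if p > 0 else L_minus
        S += position_hv(i, j) * L     # binding, then bundling
    return S
</preformat>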
</sec>
<sec>
<title>3.3. <sans-serif>EventHD</sans-serif> temporal encoding</title>
<p>Let us consider actual neuromorphic data with temporal spikes/events. As explained in Section 3.2, for all events that happen in a time window, we exploit spatial encoding to map them into a single hypervector. As time moves on, the information of new events needs to be encoded into a new hypervector. <xref ref-type="fig" rid="F2">Figure 2</xref> shows two solutions for memorizing signals and keeping temporal information.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><sans-serif>EventHD</sans-serif> temporal encoding: <bold>(A)</bold> permutation-based encoding, <bold>(B,C)</bold> correlative time hypervector used for associated-based encoding.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-16-858329-g0002.tif"/>
</fig>
<p><bold>Permutation-based:</bold> To incorporate a notion of time, our encoder represents the position of each time slot using a single permutation. The permutation in HDC is defined as a rotational shift, where the permuted hypervector is nearly orthogonal to its original vector. <xref ref-type="fig" rid="F2">Figure 2A</xref> shows how <italic>n</italic> spatial-encoded data can be temporally combined through time. For example, to memorize three consecutive encoded signals, <inline-formula><mml:math id="M27"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>, <inline-formula><mml:math id="M28"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>, and <inline-formula><mml:math id="M29"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>, we encode them into a single hypervector by <inline-formula><mml:math id="M30"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002A;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002A;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, where binding (&#x0002A;) and permutation (&#x003C1;) memorize sensor value and position. This encoding preserves the temporal information of events. Although permutation can preserve sequence information, it is a very exclusive operation that loses the information of continuous-time. 
For example, even when the events in two consecutive time slots are identical, their temporal encodings are nearly orthogonal (<inline-formula><mml:math id="M31"><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mover accent="true"><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mover accent="true"><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02243;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>).</p>
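<p>A minimal sketch of this permutation-based encoding (assuming numpy&#x00027;s rotational <monospace>roll</monospace> as the permutation &#x003C1; and a sign binarization of each bundled window before binding; both are our choices for illustration):</p>
<preformat>
import numpy as np

def permutation_encode(S_list):
    """Permutation-based temporal encoding of n consecutive windows:
    H = rho^1(S_1) * rho^2(S_2) * ... * rho^n(S_n), where each bundled
    window S_i is first binarized with sign() so binding stays bipolar."""
    H = np.ones_like(S_list[0])
    for i, S in enumerate(S_list, start=1):
        H = H * np.roll(np.sign(S), i)   # permutation rho^i, then binding
    return H
</preformat>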
<p><bold>Association-based:</bold> To give a notion of continuous time, we exploit binding operations to keep temporal correlative information. Our temporal encoding is performed using the following steps: (1) Similar to spatial encoding, we generate a set of correlated hypervectors to preserve temporal correlation. As <xref ref-type="fig" rid="F2">Figure 2B</xref> shows, our solution splits time into smaller <italic>t</italic>-size windows. We generate a random hypervector for each time that is a multiple of <italic>t</italic>. For example, we generate random hypervectors representing <inline-formula><mml:math id="M32"><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">T</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">T</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">T</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mspace width="0.3em" class="thinspace"/><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">T</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>, where the indices represent time steps. (2) We perform interpolation to generate a correlated hypervector representing intermediate times. Given time <italic>t</italic><sub>0</sub> &#x02208; [<italic>jt</italic>, (<italic>j</italic> &#x0002B; 1)<italic>t</italic>] for some non-negative integer <italic>j</italic>, <italic>T</italic><sub><italic>t</italic><sub>0</sub></sub> is generated by taking components from <italic>T</italic><sub><italic>jt</italic></sub> and <italic>T</italic><sub>(<italic>j</italic>&#x0002B;1)<italic>t</italic></sub> in a ((<italic>j</italic> &#x0002B; 1)<italic>t</italic> &#x02212; <italic>t</italic><sub>0</sub>) : (<italic>t</italic><sub>0</sub> &#x02212; <italic>jt</italic>) ratio, such that the similarity between the three reflects their original correlation. For example, with <italic>t</italic> &#x0003D; 3, <inline-formula><mml:math id="M33"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">T</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> will be 66.6% similar to <inline-formula><mml:math id="M34"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">T</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> and 33.3% similar to <inline-formula><mml:math id="M35"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">T</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>. Our temporal correlation goes beyond a single window; hypervectors in two neighboring windows are also correlated.</p>
<p>As <xref ref-type="fig" rid="F2">Figure 2</xref> shows, we exploit the time-based correlated hypervectors to preserve temporal information. Let us assume <inline-formula><mml:math id="M36"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the hypervector of events happening in time slot <italic>i</italic>. Our encoding preserves the temporal correlation of <italic>p</italic> time slots using: <inline-formula><mml:math id="M37"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">T</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002A;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>.</p>
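<p>A minimal sketch of this association-based encoding (the anchor spacing <monospace>t</monospace>, the time horizon, and the sign binarization are illustrative assumptions, not the article&#x00027;s implementation):</p>
<preformat>
import numpy as np

D, t = 10000, 3                  # dimensionality and time-window size
rng = np.random.default_rng(2)
# Random anchors T_0, T_t, T_2t, ... at multiples of t (horizon of ~100 slots).
anchor = {j: rng.choice([-1, 1], size=D) for j in range(0, 101, t)}

def time_hv(t0):
    """Interpolate T_{t0} between anchors T_{jt} and T_{(j+1)t}: take the
    first ((j+1)t - t0)/t share of dimensions from the earlier anchor and
    the rest from the later one. With t = 3, T_1 is ~66.6% similar to T_0
    and ~33.3% similar to T_3."""
    j = (t0 // t) * t
    if t0 == j:
        return anchor[j]
    share = int(round(D * (j + t - t0) / t))
    return np.concatenate([anchor[j][:share], anchor[j + t][share:]])

def temporal_encode(S_list):
    """H = sum_i (T_i * sign(S_i)) over the p time slots."""
    return sum(time_hv(i) * np.sign(S) for i, S in enumerate(S_list, start=1))
</preformat>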
</sec>
</sec>
<sec id="s4">
<title>4. <sans-serif>EventHD</sans-serif> classification</title>
<p>In this section, we introduce HDC-based classification algorithms that can directly learn from encoded query data. This includes developing algorithms that can effectively learn from both labeled and unlabeled data.</p>
<sec>
<title>4.1. Supervised learning</title>
<p><sans-serif>EventHD</sans-serif> supports two types of classification: <italic>accumulative</italic> and <italic>adaptive</italic> learning. Both methods are single-pass approaches that can construct a learning model by looking at the training data only once. The single-pass model is fast and efficient and enables learning from the data stream with no need for off-chip memory.</p>
<p><bold>Accumulative training (single-class update):</bold> In hyperdimensional computing, the model evaluated at inference time is a copy of the memory component accumulated from the training dataset. To find the common pattern of each class in the training dataset, the trainer module linearly combines the hypervectors belonging to each class, i.e., adding the hypervectors to create a single hypervector per class. After combining all hypervectors, we treat the per-class accumulated hypervectors, called <italic>class hypervectors</italic>, as the learned model. <xref ref-type="fig" rid="F3">Figure 3</xref> shows HDC functionality during single-pass training. Assuming a problem with <italic>k</italic> classes, the model is represented as: <inline-formula><mml:math id="M38"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">M</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mspace width="0.3em" class="thinspace"/><mml:mo>,</mml:mo><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>. For example, after generating the encoded hypervectors of all inputs belonging to class/label <italic>l</italic>, the class hypervector <inline-formula><mml:math id="M39"><mml:mover accent="true"><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula> can be updated using: <inline-formula><mml:math id="M40"><mml:mover accent="true"><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">J</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x000D7;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, where there are <inline-formula><mml:math id="M41"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">J</mml:mi></mml:mrow></mml:math></inline-formula> inputs with label <italic>l</italic>. This weighted accumulation continues over all training data available in each class. Accumulative training gives a rough estimate of the pattern of each class hypervector. However, it has no opportunity to adjust the class hypervectors for marginal predictions, which makes the HDC model sensitive to possible noise in the input data.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Hyperdimensional classification: Overview of <sans-serif>EventHD</sans-serif> for training and inference <bold>(left)</bold> and the routine for single-pass training <bold>(right)</bold>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-16-858329-g0003.tif"/>
</fig>
<p><bold>Adaptive training (multi-class update):</bold> We propose adaptive training that not only accumulates each training sample into its correct class but also updates the class hypervectors involved in a possible marginal match. <sans-serif>EventHD</sans-serif> checks the similarity of each encoded query with all class hypervectors. If the model mispredicts an encoded query <inline-formula><mml:math id="M42"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula> corresponding to label <italic>l</italic> as label <italic>l</italic>&#x02032;, it updates the 2 &#x000D7; <italic>i</italic> neighboring classes using the following equation:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M43"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mo>&#x000B1;</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02190;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mo>&#x000B1;</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mtext>&#x000A0;</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x000D7;</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>&#x000B1;</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02190;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>&#x000B1;</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mtext>&#x000A0;</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x000D7;</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M44"><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo>,</mml:mo><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> and <inline-formula><mml:math id="M45"><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo>,</mml:mo><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> are the similarity of data with correct and miss-predicted classes, respectively. Unlike the accumulative training, our adaptive update provides two main features: (i) it updates multiple class hypervectors, which are centered around correct and miss-predicted classes. The neighbor class hypervector gets updated depending on its physical distance to the query (&#x003B7;<sub><italic>i</italic></sub> sets the update ratio). This method ensures that class hypervectors have a smoother pattern of similarity; thus, a small noise in the input data cannot cause miss-prediction. (ii) Adaptive training also ensures that we update the model adaptively based on how far a train data point is miss-classified with the current model. In case of a far miss-prediction, <inline-formula><mml:math id="M46"><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mo>&#x0003E;</mml:mo><mml:mo>&#x0003E;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, retraining makes major changes to the mode. While for marginal miss-prediction, <inline-formula><mml:math id="M47"><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mo>&#x02243;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, the update makes smaller changes to the model.</p>
<p><bold>Inference:</bold> <sans-serif>EventHD</sans-serif> checks the similarity of each encoded test sample with the class hypervectors in two steps. The first step encodes the input to produce a query hypervector <inline-formula><mml:math id="M48"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula>. Then, as <xref ref-type="fig" rid="F3">Figure 3</xref> shows, we compute the similarity (&#x003B4;) of <inline-formula><mml:math id="M49"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula> with all class hypervectors. The query receives the label of the class with the highest similarity.</p>
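<p>Inference thus reduces to a similarity search over the class hypervectors, as in the following one-function sketch (again reusing <italic>cos_sim</italic> from above):</p>
<preformat>
def predict(C, H):
    """Return the label of the class hypervector most similar to the
    encoded query H."""
    return int(np.argmax([cos_sim(H, c) for c in C]))
</preformat>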
</sec>
<sec>
<title>4.2. Self-learning</title>
<p><sans-serif>EventHD</sans-serif> also supports online self-learning where only a small portion of the training data is labeled. <sans-serif>EventHD</sans-serif> exploits the HDC model transparency to improve the quality of the model. Using the techniques introduced in Imani et al. (<xref ref-type="bibr" rid="B11">2019a</xref>), it checks the similarity of each unlabeled sample with the already trained model, obtaining the confidence level of the classification result. If the confidence level is higher than a threshold (e.g., &#x003B1; &#x0003E; 90%), <sans-serif>EventHD</sans-serif> updates the model by embedding the encoded data into the corresponding class hypervector, as: <inline-formula><mml:math id="M50"><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo>&#x000D7;</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula>, where <inline-formula><mml:math id="M51"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">H</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula> is the query data and <inline-formula><mml:math id="M52"><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> is the class with the maximum similarity to the query. <sans-serif>EventHD</sans-serif> exploits the same technique to update the model based on the user&#x00027;s feedback on the inference results. Given the absence of labels in the majority of observations, we assume that users are willing to provide feedback when they are not satisfied, and we tune the confidence threshold accordingly.</p>
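<p>A minimal sketch of this confidence-gated update, reusing the earlier definitions, is shown below. We take the best similarity as the confidence measure for illustration; the exact confidence computation follows Imani et al. (2019a) and may differ.</p>
<preformat>
def self_learn(C, unlabeled_encoded, alpha=0.9):
    """Semi-supervised update: when the model is sufficiently confident
    about an unlabeled sample, embed it into the most similar class,
    C_max = C_max + alpha x H; otherwise ignore the sample."""
    for H in unlabeled_encoded:
        sims = np.array([cos_sim(H, c) for c in C])
        m = int(np.argmax(sims))
        if sims[m] &gt; alpha:        # confidence above the threshold
            C[m] += sims[m] * H
    return C
</preformat>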
</sec>
</sec>
<sec id="s5">
<title>5. Cognitive support</title>
<p>There is a process in the brain by which the perceptual system constructs an internal representation of the world. This assumption has led past studies in robotics and artificial intelligence to rely on the input data and their complex representation in the system for most cognitive tasks. However, recent studies in human cognition show that cognition is <italic>enactive</italic>: perceiving is a way of acting, and our perception not only depends on but is also constituted by sensorimotor knowledge (Mitrokhin et al., <xref ref-type="bibr" rid="B25">2019</xref>). This makes it essential to associate the perception and the action of a model when accomplishing cognitive tasks.</p>
<sec>
<title>5.1. Perception-action association</title>
<p>Hyperdimensional computing can naturally correlate perception and action in high-dimensional space (Mitrokhin et al., <xref ref-type="bibr" rid="B25">2019</xref>). This association enables <sans-serif>EventHD</sans-serif> to reason about each prediction by giving the system prior knowledge. Let us consider a system with <italic>n</italic> perception features (<inline-formula><mml:math id="M53"><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mspace width="0.3em" class="thinspace"/><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>) and <italic>m</italic> output actions (<inline-formula><mml:math id="M54"><mml:mover accent="true"><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mspace width="0.3em" class="thinspace"/><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>) in the original space. Our approach encodes both perception and action data into high-dimensional space. For perception, we exploit the proposed encoding, explained in Section 3, which preserves the spatial correlation of events. The output actions, however, are often independent and do not have any spatial correlations. Therefore, our encoding method generates the position hypervectors randomly, rather than generating correlated position hypervectors as it does for image data: <inline-formula><mml:math id="M55"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">X</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002A;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula>, where <inline-formula><mml:math id="M56"><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0007E;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>.</p>
<p><sans-serif>EventHD</sans-serif> also encodes the output action into high-dimensional space. The action is often a single output signal. Our method linearly or non-linearly quantizes the action signal and assigns a hypervector to each quantization level, <inline-formula><mml:math id="M57"><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">A</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">A</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mspace width="0.3em" class="thinspace"/><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">A</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>. Our solution naturally associates each pair of perception and action by binding their corresponding hypervectors. The accumulation of the bound vectors over prior observations gives native HDC-based memorization to the system: <inline-formula><mml:math id="M58"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">X</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002A;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">A</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. Let us assume each reference hypervector stores <italic>N</italic> encoded perception-action hypervectors: <inline-formula><mml:math id="M59"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">R</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. 
We can predict an action for a perception <inline-formula><mml:math id="M60"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">X</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, using:</p>
<disp-formula id="E2"><mml:math id="M61"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">A</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02243;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">X</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002A;</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">R</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle displaystyle="true"><mml:munder><mml:mrow><mml:mstyle displaystyle="true"><mml:munder accentunder="false"><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">X</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002A;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">X</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0FE38;</mml:mo></mml:munder></mml:mstyle></mml:mrow><mml:mrow><mml:mn mathvariant="bold">1</mml:mn></mml:mrow></mml:munder></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">A</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle displaystyle="true"><mml:munder><mml:mrow><mml:mstyle displaystyle="true"><mml:munder accentunder="false"><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">X</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002A;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">X</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0FE38;</mml:mo></mml:munder></mml:mstyle></mml:mrow><mml:mrow><mml:mtext class="textit" mathvariant="bold">Noise</mml:mtext></mml:mrow></mml:munder></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">A</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M62"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">A</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is an interpolation between all actions that their perceptions have high similarity to <inline-formula><mml:math id="M63"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">X</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. <xref ref-type="fig" rid="F4">Figure 4A</xref> shows <sans-serif>EventHD</sans-serif> selecting between two discrete actions. Depending on the confidence, i.e., the similarity of a query to memorized perceptions, <sans-serif>EventHD</sans-serif> picks one of the actions. In continuous space, the selection translates to interpolation between the actions, depending on the perceptions similarity in HDC space.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Information association and memorization: <bold>(A)</bold> perception-action association. Depending on the confidence of a query with respect to the memorized perceptions, measured by similarity, <sans-serif>EventHD</sans-serif> picks the action with the highest confidence. <bold>(B,C)</bold> Distributions of signal and noise when a reference hypervector with <italic>D</italic> &#x0003D; 4<italic>k</italic> stores <italic>N</italic> &#x0003D; 10<sup>3</sup> and <italic>N</italic> &#x0003D; 10<sup>4</sup> orthogonal patterns. When the number of stored patterns is low, as in <bold>(B)</bold>, the similarity distributions of signal and noise are separable, implying perfect signal detection; when the number of patterns is high, the distributions overlap and signal detection loses accuracy. <bold>(D)</bold> The capacity of reference hypervectors of different dimensions storing orthogonal and correlated hypervectors. Compared to orthogonal hypervectors, correlated hypervectors require less capacity to store, resulting in a higher detection probability (bluer) for a fixed dimension and number of patterns.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-16-858329-g0004.tif"/>
</fig>
</sec>
<sec>
<title>5.2. Memorization in perception-action space</title>
<p>In HDC, bundling acts as a memory, storing the information of multiple encoded hypervectors into a single reference hypervector. <sans-serif>EventHD</sans-serif> exploits bundling to memorize the associated perception-action, <inline-formula><mml:math id="M64"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">R</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. The reference hypervector has limited capacity and, thus, cannot store the information of unlimited encoded data. The capacity depends on the dimensionality and the orthogonality of the encoded hypervectors. For a given query data, <sans-serif>EventHD</sans-serif> can refer to memory in order to retrieve the system&#x00027;s prior knowledge. For example, let us assume <italic>q</italic> is a perception with <inline-formula><mml:math id="M65"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">Q</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula> being its encoded data. <sans-serif>EventHD</sans-serif> can retrieve information about possible actions by checking the similarity of the query with the reference model:</p>
<disp-formula id="E3"><mml:math id="M66"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">R</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">Q</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munder><mml:mrow><mml:mstyle displaystyle="true"><mml:munder accentunder="false"><mml:mrow><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi>Q</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x0FE38;</mml:mo></mml:munder></mml:mstyle></mml:mrow><mml:mrow><mml:mtext class="textit" mathvariant="bold">Signal</mml:mtext></mml:mrow></mml:munder></mml:mstyle><mml:mo>&#x0002B;</mml:mo><mml:mstyle displaystyle="true"><mml:munder><mml:mrow><mml:mstyle displaystyle="true"><mml:munder accentunder="false"><mml:mrow><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>i</mml:mi><mml:mo>&#x02260;</mml:mo><mml:mi>&#x003BB;</mml:mi></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi>Q</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x0FE38;</mml:mo></mml:munder></mml:mstyle></mml:mrow><mml:mrow><mml:mtext class="textit" mathvariant="bold">Noise</mml:mtext></mml:mrow></mml:munder></mml:mstyle></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>If <inline-formula><mml:math id="M67"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi>Q</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula> for some &#x003BB;, the output of the retrieval will be <inline-formula><mml:math id="M68"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">A</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. For reference patterns that do not match the query, the similarity is nearly zero, <inline-formula><mml:math id="M69"><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi>Q</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02243;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>. Thus, we can check the existence of a query <inline-formula><mml:math id="M70"><mml:mover accent="true"><mml:mrow><mml:mi>Q</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula> in <inline-formula><mml:math id="M71"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">R</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula> using the following criterion: <inline-formula><mml:math id="M72"><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">R</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi>Q</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:mi>D</mml:mi><mml:mo>&#x0003E;</mml:mo><mml:mi>T</mml:mi></mml:math></inline-formula>, where <italic>T</italic> is a threshold and <inline-formula><mml:math id="M73"><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">R</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi>Q</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:mi>D</mml:mi></mml:math></inline-formula> is called the <italic>decision score</italic>.</p>
<p><xref ref-type="fig" rid="F4">Figures 4B,C</xref> show the normalized distribution of signal and noise in <sans-serif>EventHD</sans-serif> information retrieval (using <italic>D</italic> &#x0003D; 10<italic>k</italic>). These Gaussian distributions determine the capacity of each reference hypervector in memorizing the information. As our mathematical model indicated, the noise is getting a wider distribution when increasing the number of patterns stored in <inline-formula><mml:math id="M74"><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">R</mml:mi></mml:mrow><mml:mo>&#x02192;</mml:mo></mml:mover></mml:math></inline-formula>. When the noise overlaps with the signal, there is no threshold <italic>T</italic> that can separate noise, thus resulting in information loss (<xref ref-type="fig" rid="F4">Figure 4C</xref>). <xref ref-type="fig" rid="F4">Figure 4D</xref> shows the capacity of reference hypervector with <italic>D</italic> dimensionality in storing <italic>N</italic> nearly orthogonal and correlative patterns. Our evaluation shows that the capacity of the reference hypervector increases with dimensionality. For example, <sans-serif>EventHD</sans-serif> with <italic>D</italic> &#x0003D; 4<italic>k</italic> can stored <italic>N</italic> &#x0003D; 10<sup>3</sup> (<italic>N</italic> &#x0003D; 10<sup>4</sup>) orthogonal patterns with less than 0.5% (10%) information loss. Note that, in practice, the reference hypervector has a much higher capacity as <sans-serif>EventHD</sans-serif> encoder keeps the correlation between input signals. As <xref ref-type="fig" rid="F4">Figure 4D</xref> shows, a reference hypervector provides significantly higher capacity when the <sans-serif>EventHD</sans-serif> encoder preserves correlation in high-dimensional space. For a more in-depth analysis of the memory capacity, readers are referred to Frady et al. (<xref ref-type="bibr" rid="B3">2018</xref>).</p>
</sec>
<sec>
<title>5.3. Other applications: Beyond memorization</title>
<p><sans-serif>EventHD</sans-serif> similarity search on the memorized model gives an estimate of the output action. <sans-serif>EventHD</sans-serif> uses this estimate as prior knowledge to decide how much to trust a prediction. If the prediction is relatively far from the memorized action, <sans-serif>EventHD</sans-serif> assigns very low confidence to that prediction. This approach enables us to reason about each prediction and potentially provides a more explainable learning solution.</p>
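<p>A sketch of this confidence gating, reusing the perception-action memory above: the learner&#x00027;s predicted action is trusted only to the extent that it agrees with the action recalled from memory. The predicted action hypervector <italic>A_pred</italic> is a hypothetical input of this sketch.</p>
<preformat>
def gated_confidence(R, X_query, A_pred):
    """Prior-knowledge check: similarity between the action recalled
    from memory for this perception and the model prediction. A low
    value signals a prediction that should receive low confidence."""
    recalled = X_query * R          # unbind the memorized association
    return cos_sim(recalled, A_pred)
</preformat>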
</sec>
</sec>
<sec id="s6">
<title>6. Evaluation</title>
<sec>
<title>6.1. Experimental setup</title>
<p>We implement <sans-serif>EventHD</sans-serif> in software, in hardware, and at the system level. In software, we verified <sans-serif>EventHD</sans-serif> training and testing using a C&#x0002B;&#x0002B; implementation. For hardware, we designed the <sans-serif>EventHD</sans-serif> functionality in Verilog and synthesized it using the Xilinx Vivado Design Suite (Feist, <xref ref-type="bibr" rid="B2">2012</xref>). The synthesized design was implemented on the Kintex-7 FPGA KC705 Evaluation Kit. We ensure our efficiency is higher than that of the automated FPGA implementation in Salamat et al. (<xref ref-type="bibr" rid="B34">2019</xref>).</p>
<p>We evaluate <sans-serif>EventHD</sans-serif> accuracy and efficiency on two datasets: the Neuromorphic MNIST (N-MNIST) and the Multi-Vehicle Stereo Event Camera (MVSEC) dataset. N-MNIST is an event-based version of the MNIST dataset, containing event-stream recordings of the 60,000 training digits and 10,000 testing digits. The MVSEC dataset collects event-based DVS camera recordings from a self-driving car during the day and at night (Zhu et al., <xref ref-type="bibr" rid="B42">2018</xref>; Mitrokhin et al., <xref ref-type="bibr" rid="B25">2019</xref>). This dataset is designed for regression tasks that predict the car velocity based on DVS data. The experiments correspond to mDAVIS-346B cameras with 346&#x000D7;260 pixel resolution. To obtain ground-truth velocity values, the car is equipped with IMU and GPS sensors. The evaluation is performed for five activities, two recorded during the day and three in the evening/night. The results are reported using two metrics: Average Relative Pose Error (ARPE) and Average End-point Error (AEE). ARPE measures the average angular error between translational vectors while ignoring the scale (Mitrokhin et al., <xref ref-type="bibr" rid="B25">2019</xref>), while AEE measures the absolute error in 2D linear space. As with other error metrics, lower ARPE and AEE indicate a higher quality of learning. All results are reported for MVSEC data unless stated otherwise. We use a simple DNN with one 512-neuron hidden layer as our baseline from conventional neural networks. <sans-serif>EventHD</sans-serif> is configured with a hyperdimension of <italic>D</italic> &#x0003D; 4,000, a window size of <italic>k</italic> &#x0003D; 5 for positional encoding, and a time window of <italic>t</italic> &#x0003D; 50 ms across all experiments, as this configuration leads to the best average performance.</p>
</sec>
<sec>
<title>6.2. <sans-serif>EventHD</sans-serif> accuracy</title>
<p><xref ref-type="fig" rid="F5">Figure 5A</xref> compares <sans-serif>EventHD</sans-serif> quality of learning over classification task, using both day and night data. We compare <sans-serif>EventHD</sans-serif> with state-of-the-art HDC methods working on event-based sensors: DNN, Dense HDC (DenseHD) (Mitrokhin et al., <xref ref-type="bibr" rid="B25">2019</xref>), and Sparse HDC (SparseHD) (Hersche et al., <xref ref-type="bibr" rid="B9">2020</xref>). All three baseline approaches operate over six extracted features by the preprocessing method. In contrast, <sans-serif>EventHD</sans-serif> is an end-to-end framework that directly operates over raw neuromorphic data. Note that other algorithms, i.e., DNN, DenseHD, and SparseHD, provide close to a random prediction when processing the raw neuromorphic data. For <sans-serif>EventHD</sans-serif>, we report the results for single-class (Single-C) and multi-class (Mult-C) updates using both ARPE and AEE metrics. Our evaluation shows that <sans-serif>EventHD</sans-serif> using both accuracy metrics provides comparable or better quality of learning compared to the state-of-the-art solutions. For example, <sans-serif>EventHD</sans-serif> ARPE (AEE) error metric is, on average, 0.1% and 4.8% (37.0% and 14.1%) lower than DenseHD and SparseHD, respectively. These metrics indicate <sans-serif>EventHD</sans-serif> higher quality of learning. Note that <sans-serif>EventHD</sans-serif> efficiency and robustness are significantly higher than all baseline methods due to eliminating costly preprocessing (detailed evaluation in Section 6.3).</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><sans-serif>EventHD</sans-serif> quality of learning over the MVSEC and N-MNIST datasets <bold>(A,B)</bold>. The results are compared to state-of-the-art HDC-based approaches. For DNN, SNN, SparseHD, and DenseHD, we use their original implementations and allow preprocessing as needed. For <sans-serif>EventHD</sans-serif>, we report the results for single-class (Single-C) and multi-class (Mult-C) updates. For multi-class, we also report results for permutation-based temporal encoding (&#x003C1;) and association-based temporal encoding (&#x0002A;). Evaluations for MVSEC are measured by Average Relative Pose Error (ARPE) and Average End-point Error (AEE), and that for N-MNIST by classification accuracy. <sans-serif>EventHD</sans-serif> provides comparable or better quality of learning compared to state-of-the-art solutions.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-16-858329-g0005.tif"/>
</fig>
<p><xref ref-type="fig" rid="F5">Figure 5B</xref> also evaluates the <sans-serif>EventHD</sans-serif> quality of learning on the N-MNIST dataset. The results are compared to SNN and HDC-based neuromorphic approaches. Unlike <sans-serif>EventHD</sans-serif> which operates over raw neuromorphic data, SparseHD and DenseHD rely on preprocessing algorithms to extract spatial-temporal information. Our evaluation shows that <sans-serif>EventHD</sans-serif> provides significantly higher classification accuracy than existing HDC-based algorithms, i.e., SparseHD and DenseHD.</p>
<p><bold>Temporal encoding:</bold> <xref ref-type="fig" rid="F5">Figure 5</xref> also compares <sans-serif>EventHD</sans-serif> accuracy using permutation-based (&#x003C1;) and association-based (&#x0002A;) temporal encoding. Our evaluation shows that association-based encoding provides a lower error rate by enabling a notion of continuous-time dynamics, while permutation-based encoding only preserves the order of events. For example, <sans-serif>EventHD</sans-serif> using association-based encoding provides 10.2% (17.2%) lower ARPE (AEE) than the permutation-based solution on the MVSEC dataset.</p>
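<p>The two temporal encodings can be contrasted in a short sketch. The exact constructions are given in Section 3; this sketch assumes bipolar window hypervectors <italic>W</italic><sub><italic>t</italic></sub>, a cyclic shift for the permutation &#x003C1;, and pre-generated time hypervectors <italic>T</italic><sub><italic>t</italic></sub> whose pairwise similarity decays with temporal distance.</p>
<preformat>
def temporal_permutation(windows):
    """Permutation-based encoding: H = sum_t rho^t(W_t). The shift count
    encodes only the order of the windows, not their time distance."""
    return sum(np.roll(W, t) for t, W in enumerate(windows))

def temporal_association(windows, T_hvs):
    """Association-based encoding: H = sum_t W_t * T_t. Because nearby
    time hypervectors T_t stay similar, the encoding carries a notion
    of continuous-time dynamics rather than bare ordering."""
    return sum(W * T for W, T in zip(windows, T_hvs))
</preformat>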
<p><bold>Single vs. multi-class:</bold> <xref ref-type="fig" rid="F6">Figure 6</xref> visually compares <sans-serif>EventHD</sans-serif> classification accuracy in two configurations over the MVSEC dataset: a single-class and a multi-class update. In both configurations, we show the final prediction (<xref ref-type="fig" rid="F6">Figure 6A</xref>) and the similarity of a query with the different class hypervectors (<xref ref-type="fig" rid="F6">Figure 6B</xref>). <sans-serif>EventHD</sans-serif> with a single-class update creates a weak learning model with high sensitivity to noise and variation in the input data. Therefore, during inference, it may deviate toward the wrong class. In contrast, our multi-class update keeps the correlation between the predicted speeds and the strengths of the class hypervectors, thus providing higher learning accuracy. The box in <xref ref-type="fig" rid="F6">Figure 6</xref> clearly shows the capability of the <sans-serif>EventHD</sans-serif> multi-class update to strengthen the signal in related class hypervectors and provide a higher quality of prediction.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Visualization of <sans-serif>EventHD</sans-serif> classification results with the single-class and multi-class update configurations for the z-axis linear speed of MVSEC outdoor night 1: the model is trained on the first 1,000 ground-truth samples and then used to predict up to 2,500 samples, as indicated by the time axis. <bold>(A)</bold> <sans-serif>EventHD</sans-serif> final prediction for the linear speed along the z-axis compared to the ground truth and <bold>(B)</bold> the similarity between the query and each class hypervector.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-16-858329-g0006.tif"/>
</fig>
<p><bold>Robustness to variation:</bold> Unlike prior HDC-based approaches that do not keep the correlation, <sans-serif>EventHD</sans-serif> encoding is asynchronous, thus preserving both temporal and spatial correlation over event-based data. We perform an experiment to show <sans-serif>EventHD</sans-serif> capability to respond to noisy data. <xref ref-type="fig" rid="F7">Figure 7A</xref> shows <sans-serif>EventHD</sans-serif> and HDC quality of learning when the activated events in each timestamp are randomly shifted in an arbitrary direction. Our evaluation shows that <sans-serif>EventHD</sans-serif> is highly robust against such possible variational data, as it provides the maximum accuracy even using a 5% shift. In contrast, the state-of-the-art HDC solutions do not keep the correlation between neighbor pixels (spatial correlation). Therefore, a single shift operation can generate a signal which is entirely orthogonal to the non-shifted version. As <xref ref-type="fig" rid="F7">Figure 7A</xref> shows, this makes the existing HDC solutions, DenseHD and SparseHD, very sensitive to possible noise or variation in the input signal.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><sans-serif>EventHD</sans-serif> robustness and self-learning capability: <bold>(A)</bold> robustness of <sans-serif>EventHD</sans-serif> and other HDC-based algorithms to pixel variation and <bold>(B)</bold> <sans-serif>EventHD</sans-serif> self-learning over unlabeled data (semi-supervised).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-16-858329-g0007.tif"/>
</fig>
<p><bold>Self-learning:</bold> <xref ref-type="fig" rid="F7">Figure 7B</xref> shows <sans-serif>EventHD</sans-serif> classification accuracy over the self-learning iterations. The results are reported when <sans-serif>EventHD</sans-serif> has been trained with supervision over 10% of the training data and without supervision over the other 90%. Our evaluation shows that <sans-serif>EventHD</sans-serif> can adaptively improve classification accuracy during self-learning. This advantage comes from the <sans-serif>EventHD</sans-serif> capability of computing a confidence for each prediction: <sans-serif>EventHD</sans-serif> trusts high-confidence data for model updates while ignoring low-confidence data. On the other hand, a higher confidence threshold increases the number of training samples required to converge to maximum accuracy.</p>
</sec>
<sec>
<title>6.3. <sans-serif>EventHD</sans-serif> efficiency and robustness</title>
<p>We compare <sans-serif>EventHD</sans-serif> efficiency and robustness to state-of-the-art HDC solutions. The results are for the total processing time, including both preprocessing and learning. The existing HDC solutions use an image-to-time transformation as a preprocessing step for feature extraction from the event-based information. The feature extraction yields only a few features (six in our example). The preprocessing makes the learning task simple enough that even a basic learning solution, e.g., linear regression or a perceptron, can provide acceptable accuracy. However, the cost of this complex preprocessing step negates the effectiveness of HDC in enhancing system efficiency. In contrast, <sans-serif>EventHD</sans-serif> is an end-to-end solution that directly operates over the raw data received by the event-based camera. Our solution eliminates the costly preprocessing step by enabling HDC encoding to preserve both the temporal and spatial locality of the raw data. This not only improves <sans-serif>EventHD</sans-serif> computation efficiency but also provides significant computational robustness.</p>
<p><bold>Efficiency:</bold> <xref ref-type="fig" rid="F8">Figure 8</xref> compares <sans-serif>EventHD</sans-serif> computation efficiency with the existing HDC solutions running on FPGA. The results are reported for both the training and inference phases. For DNN, we used DNNWeaver V2.0 (Sharma et al., <xref ref-type="bibr" rid="B36">2016</xref>) for inference and FPDeep (Geng et al., <xref ref-type="bibr" rid="B6">2018</xref>) for training on a single FPGA device. For DenseHD and SparseHD, we use the F5-HD (Salamat et al., <xref ref-type="bibr" rid="B34">2019</xref>) framework for the FPGA implementation. All FPGA implementations are optimized to maximize performance by utilizing the FPGA resources. All results in <xref ref-type="fig" rid="F8">Figure 8</xref> are relative to DNN performance and energy efficiency. During training, <sans-serif>EventHD</sans-serif> achieves, on average, 10.6&#x000D7; faster and 16.3&#x000D7; more energy-efficient computation than the FPGA-based DNN implementation. The high training efficiency of <sans-serif>EventHD</sans-serif> comes from its capability of (i) creating an initial model that significantly lowers the number of required retraining iterations and (ii) eliminating the costly gradient computation for the model update. This results in higher <sans-serif>EventHD</sans-serif> efficiency even in terms of a single training iteration. In inference, <sans-serif>EventHD</sans-serif> provides 4.3&#x000D7; faster computation and 6.8&#x000D7; higher energy efficiency than the FPGA-based DNN implementation. As compared to SparseHD (DNN), <sans-serif>EventHD</sans-serif> provides 1.9&#x000D7; and 2.1&#x000D7; (14.2&#x000D7; and 19.8&#x000D7;) faster and more energy-efficient training, respectively. The main computation efficiency comes from eliminating the costly preprocessing step and replacing it with HDC encoding.</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>Efficiency analysis: comparison of <sans-serif>EventHD</sans-serif> performance speedup and energy efficiency with state-of-the-art algorithms on the FPGA platform. The results are reported for both the training and inference phases and are normalized relative to DNN performance and energy efficiency. During training (inference), <sans-serif>EventHD</sans-serif> achieves, on average, 10.6&#x000D7; (14.2&#x000D7;) faster and 16.3&#x000D7; (19.8&#x000D7;) more energy-efficient computation than the FPGA-based DNN implementation.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnins-16-858329-g0008.tif"/>
</fig>
<p><bold>Robustness:</bold> Noise in today&#x00027;s technology comes from multiple sources. Unfortunately, existing data representations have very low robustness to noise in hardware. An error bit in the exponent or the Most Significant Bits (MSBs) results in a major change in a weight value, while an error in the Least Significant Bits (LSBs) adds only minor changes to the computation. The randomness of the noise makes traditional data representations vulnerable to errors in hardware. One of the main advantages of <sans-serif>EventHD</sans-serif> is its high robustness to noise and failure. In <sans-serif>EventHD</sans-serif>, hypervectors are random and holographic with i.i.d. components. Each hypervector stores information across all its components, so that no component is more responsible for storing any piece of information than another. This makes a hypervector robust against errors in its components. <sans-serif>EventHD</sans-serif> robustness depends on the hypervector dimensionality, which determines the hypervector capacity and redundancy. <xref ref-type="table" rid="T1">Table 1</xref> compares <sans-serif>EventHD</sans-serif> robustness with existing HDC and learning solutions that operate the preprocessing or the entire learning task over the original data representation. The results indicate that the <sans-serif>EventHD</sans-serif> quality of learning stays almost constant, even under 5% noise. In contrast, even a small amount of error can cause significant quality loss in the existing solutions. For example, under 20% random noise, <sans-serif>EventHD</sans-serif> using <italic>D</italic> &#x0003D; 4<italic>k</italic> provides 17.8% and 14.7% higher accuracy than DenseHD and SparseHD, respectively. Note that <sans-serif>EventHD</sans-serif> robustness increases with its dimensionality (as shown in <xref ref-type="table" rid="T1">Table 1</xref>). However, higher dimensionality results in lower computation efficiency.</p>
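<p>The source of this robustness can be illustrated with a small noise-injection sketch, reusing <italic>cos_sim</italic> and <italic>D</italic> from the earlier sketches. This simplified model flips a fraction of the components of a bipolar hypervector as a stand-in for random hardware errors; real errors act on the stored bit representation, so this is only an approximation of the experiment in Table 1.</p>
<preformat>
def flip_components(v, p, rng):
    """Flip a random fraction p of the components of a bipolar
    hypervector, emulating random hardware errors."""
    mask = rng.random(v.shape) &lt; p
    out = v.copy()
    out[mask] = -out[mask]
    return out

rng = np.random.default_rng(2)
v = rng.choice((-1.0, 1.0), size=D)
for p in (0.01, 0.05, 0.20):
    print(p, cos_sim(v, flip_components(v, p, rng)))
# Flipping a fraction p of the components lowers the cosine similarity
# only to about 1 - 2p, so class rankings, and hence predictions,
# change very little for small error rates.
</preformat>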
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Robustness analysis of different learning algorithms to hardware error rate.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Error rate</bold></th>
<th valign="top" align="left"><bold>1%</bold></th>
<th valign="top" align="left"><bold>2%</bold></th>
<th valign="top" align="left"><bold>5%</bold></th>
<th valign="top" align="left"><bold>10%</bold></th>
<th valign="top" align="left"><bold>15%</bold></th>
<th valign="top" align="left"><bold>20%</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">DNN</td>
<td valign="top" align="left">0.7%</td>
<td valign="top" align="left">1.9%</td>
<td valign="top" align="left">3.7%</td>
<td valign="top" align="left">11.5%</td>
<td valign="top" align="left">21.3%</td>
<td valign="top" align="left">38.6%</td>
</tr>
<tr>
<td valign="top" align="left">Dense HDC (Mitrokhin et al., <xref ref-type="bibr" rid="B25">2019</xref>)</td>
<td valign="top" align="left">0.2%</td>
<td valign="top" align="left">1.0%</td>
<td valign="top" align="left">1.7%</td>
<td valign="top" align="left">6.8%</td>
<td valign="top" align="left">14.2%</td>
<td valign="top" align="left">18.3%</td>
</tr>
<tr>
<td valign="top" align="left">Sparse HDC (Hersche et al., <xref ref-type="bibr" rid="B9">2020</xref>)</td>
<td valign="top" align="left">0.2%</td>
<td valign="top" align="left">0.9%</td>
<td valign="top" align="left">1.9%</td>
<td valign="top" align="left">6.3%</td>
<td valign="top" align="left">12.8%</td>
<td valign="top" align="left">21.4%</td>
</tr>
<tr>
<td valign="top" align="left"><sans-serif>EventHD</sans-serif> (<italic>D</italic> = 4 k)</td>
<td valign="top" align="left">0.0%</td>
<td valign="top" align="left">0.0%</td>
<td valign="top" align="left">0.2%</td>
<td valign="top" align="left">0.8%</td>
<td valign="top" align="left">1.2%</td>
<td valign="top" align="left">3.6%</td>
</tr>
<tr>
<td valign="top" align="left"><sans-serif>EventHD</sans-serif> (<italic>D</italic> = 8 k)</td>
<td valign="top" align="left">0.0%</td>
<td valign="top" align="left">0.0%</td>
<td valign="top" align="left">0.1%</td>
<td valign="top" align="left">0.6%</td>
<td valign="top" align="left">0.8%</td>
<td valign="top" align="left">2.4%</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>For each experiment, random bit flips of the model parameters are applied according to the error rates, and the average absolute accuracy drop over N-MNIST classification is reported.</p>
</table-wrap-foot>
</table-wrap>
</sec>
</sec>
<sec id="s7">
<title>7. Related study</title>
<p>In recent years, HDC has been employed in a range of applications, such as text classification (Kanerva et al., <xref ref-type="bibr" rid="B17">2000</xref>), activity recognition (Kim et al., <xref ref-type="bibr" rid="B19">2018</xref>), biomedical signal processing (Rahimi et al., <xref ref-type="bibr" rid="B31">2018</xref>), multimodal sensor fusion (R&#x000E4;s&#x000E4;nen and Saarinen, <xref ref-type="bibr" rid="B33">2015</xref>), and distributed sensors (Kleyko and Osipov, <xref ref-type="bibr" rid="B20">2014</xref>; Kleyko et al., <xref ref-type="bibr" rid="B21">2018</xref>). A key HDC advantage is its capability to train in a single pass, where object categories are learned in one shot rather than over many iterations. HDC has achieved comparable or higher accuracy than state-of-the-art machine learning models with lower execution energy. Much research also exploits the memory-centric nature of HDC to design in-memory acceleration platforms (Li et al., <xref ref-type="bibr" rid="B22">2016</xref>; Halawani et al., <xref ref-type="bibr" rid="B7">2021a</xref>,<xref ref-type="bibr" rid="B8">b</xref>). However, existing HDC algorithms are often ineffective at encoding complex image data or keeping a notion of continuous time. In contrast, we propose a novel method to preserve spatial-temporal correlation, where spatial encoding keeps the correlation of events in 2D space while temporal encoding defines correlation in a continuous-time dynamic.</p>
<p>In the context of neuromorphic computing, the studies in Mitrokhin et al. (<xref ref-type="bibr" rid="B25">2019</xref>) and Hersche et al. (<xref ref-type="bibr" rid="B9">2020</xref>) exploited HDC mathematics to learn from event-based neuromorphic sensors. However, these designs face the following challenges: (i) they rely on an expensive preprocessing step to extract information from event-based sensors, (ii) they lack computational robustness, as the preprocessing step operates over original data with high sensitivity to noise, and (iii) they require heterogeneous data processing and a non-uniform data flow to accelerate HDC together with the preprocessing step. In contrast, to the best of our knowledge, <sans-serif>EventHD</sans-serif> is the first HDC-based solution that directly operates over the raw data received by event-based sensors. <sans-serif>EventHD</sans-serif> not only enhances learning efficiency but also delivers significantly higher computational robustness to noise in the input or the underlying hardware.</p>
</sec>
<sec id="s8">
<title>8. Conclusion and future study</title>
<p>In this article, we present <sans-serif>EventHD</sans-serif>, an end-to-end framework based on hyperdimensional computing for robust, efficient learning from neuromorphic sensors. <sans-serif>EventHD</sans-serif> introduces a novel encoding scheme that maps event-based neuromorphic data into high-dimensional space while preserving spatial and temporal correlation. <sans-serif>EventHD</sans-serif> then exploits HDC mathematics to support learning and cognitive tasks over the encoded data, leveraging its inherent associative and memorization capabilities. Finally, we introduce a scalable learning framework that distributes <sans-serif>EventHD</sans-serif> computation over devices in IoT networks.</p>
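<p>As a minimal sketch of this single-pass, associative learning flow (assuming a generic <monospace>encode</monospace> function standing in for the <sans-serif>EventHD</sans-serif> encoder; none of these names are taken from the article):</p>
<preformat>
import numpy as np

def train_single_pass(encode, samples, labels, n_classes, D=4096):
    """One-pass HDC training: bundle (add) each encoded sample into its
    class hypervector, i.e., memorization by superposition, no iterations."""
    classes = np.zeros((n_classes, D), dtype=np.float32)
    for x, y in zip(samples, labels):
        classes[y] += encode(x)
    return classes

def classify(encode, classes, x):
    """Associative recall: the class whose hypervector is most similar
    (cosine) to the encoded query."""
    h = encode(x)
    sims = classes @ h / (np.linalg.norm(classes, axis=1)
                          * np.linalg.norm(h) + 1e-9)
    return int(sims.argmax())
</preformat>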
<p>Our future study will exploit <sans-serif>EventHD</sans-serif> encoding to enhance current spiking neural networks (SNNs). Both SNNs and HDC have shown promising results in enabling efficient and robust cognitive learning. Despite their individual successes, these two brain-inspired models are complementary: while SNNs mimic the physical properties of the brain, HDC models the human brain at a more abstract, functional level. Our goal is to exploit <sans-serif>EventHD</sans-serif> encoding to fundamentally combine SNNs and HDC into a scalable and robust cognitive learning system that better mimics brain functionality.</p>
</sec>
<sec sec-type="data-availability" id="s9">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.</p>
</sec>
<sec id="s10">
<title>Author contributions</title>
<p>ZZ and MI conceived the research. ZZ, HA, YK, MN, NS, and MI conducted the research and analyzed the data. ZZ, HA, YK, and MI wrote the manuscript. All authors reviewed the manuscript and agreed on the contents of the manuscript.</p>
</sec>
<sec sec-type="funding-information" id="s11">
<title>Funding</title>
<p>This study received funding from the National Science Foundation (NSF), grants &#x00023;2127780 and &#x00023;2019511; the Semiconductor Research Corporation (SRC), Task No. 2988.001; the Department of the Navy, Office of Naval Research, grants &#x00023;N00014-21-1-2225 and &#x00014;N00014-22-1-2067; the Air Force Office of Scientific Research, grant &#x00023;22RT0060; the Louisiana Board of Regents Support Fund &#x00023;LEQSF(2020-23)-RD-A-26; and generous gifts from Cisco. The funders were not involved in the study design; the collection, analysis, or interpretation of data; the writing of this article; or the decision to submit it for publication.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>NS was employed by the company Intel Labs. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s13">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Denil</surname> <given-names>M.</given-names></name> <name><surname>Shakibi</surname> <given-names>B.</given-names></name> <name><surname>Dinh</surname> <given-names>L.</given-names></name> <name><surname>De Freitas</surname> <given-names>N.</given-names></name></person-group> (<year>2013</year>). <article-title>Predicting parameters in deep learning</article-title>, in <source>Advances in Neural Information Processing Systems</source> (<publisher-loc>Lake Tahoe</publisher-loc>), <fpage>2148</fpage>&#x02013;<lpage>2156</lpage>. <pub-id pub-id-type="pmid">34139437</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Feist</surname> <given-names>T.</given-names></name></person-group> (<year>2012</year>). <article-title>Vivado design suite</article-title>. <source>White Paper 5</source>.</citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Frady</surname> <given-names>E. P.</given-names></name> <name><surname>Kleyko</surname> <given-names>D.</given-names></name> <name><surname>Sommer</surname> <given-names>F. T.</given-names></name></person-group> (<year>2018</year>). <article-title>A theory of sequence indexing and working memory in recurrent neural networks</article-title>. <source>Neural Comput</source>. <volume>30</volume>, <fpage>1449</fpage>&#x02013;<lpage>1513</lpage>. <pub-id pub-id-type="doi">10.1162/neco_a_01084</pub-id><pub-id pub-id-type="pmid">29652585</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gallant</surname> <given-names>S. I.</given-names></name> <name><surname>Culliton</surname> <given-names>P.</given-names></name></person-group> (<year>2016</year>). <article-title>Positional binding with distributed representations</article-title>, in <source>2016 International Conference on Image, Vision and Computing (ICIVC)</source> (<publisher-loc>Portsmouth, UK</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>108</fpage>&#x02013;<lpage>113</lpage>.</citation>
</ref>
<ref id="B5">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gayler</surname> <given-names>R. W.</given-names></name></person-group> (<year>2004</year>). <article-title>Vector symbolic architectures answer jackendoff&#x00027;s challenges for cognitive neuroscience</article-title>. <source>arXiv preprint cs/0412059</source>. <pub-id pub-id-type="doi">10.48550/arXiv.cs/0412059</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Geng</surname> <given-names>T.</given-names></name> <name><surname>Wang</surname> <given-names>T.</given-names></name> <name><surname>Sanaullah</surname> <given-names>A.</given-names></name> <name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Xu</surname> <given-names>R.</given-names></name> <name><surname>Patel</surname> <given-names>R.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Fpdeep: acceleration and load balancing of cnn training on fpga clusters</article-title>, in <source>2018 IEEE 26th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)</source> (<publisher-loc>Boulder, CO</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>81</fpage>&#x02013;<lpage>84</lpage>.</citation>
</ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Halawani</surname> <given-names>Y.</given-names></name> <name><surname>Hassan</surname> <given-names>E.</given-names></name> <name><surname>Mohammad</surname> <given-names>B.</given-names></name> <name><surname>Saleh</surname> <given-names>H.</given-names></name></person-group> (<year>2021a</year>). <article-title>Fused rram-based shift-add architecture for efficient hyperdimensional computing paradigm</article-title>, in <source>2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS)</source> (<publisher-loc>Lansing, MI</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>179</fpage>&#x02013;<lpage>182</lpage>.</citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Halawani</surname> <given-names>Y.</given-names></name> <name><surname>Kilani</surname> <given-names>D.</given-names></name> <name><surname>Hassan</surname> <given-names>E.</given-names></name> <name><surname>Tesfai</surname> <given-names>H.</given-names></name> <name><surname>Saleh</surname> <given-names>H.</given-names></name> <name><surname>Mohammad</surname> <given-names>B.</given-names></name></person-group> (<year>2021b</year>). <article-title>Rram-based cam combined with time-domain circuits for hyperdimensional computing</article-title>. <source>Sci. Rep</source>. <volume>11</volume>, <fpage>19848</fpage>. <pub-id pub-id-type="doi">10.21203/rs.3.rs-608660/v1</pub-id><pub-id pub-id-type="pmid">34615915</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hersche</surname> <given-names>M.</given-names></name> <name><surname>Rella</surname> <given-names>E. M.</given-names></name> <name><surname>Mauro</surname> <given-names>A. D.</given-names></name> <name><surname>Benini</surname> <given-names>L.</given-names></name> <name><surname>Rahimi</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Integrating event-based dynamic vision sensors with sparse hyperdimensional computing: a low-power accelerator with online learning capability</article-title>, in <source>ISLPED</source> (<publisher-loc>Boston, MA</publisher-loc>), <fpage>169</fpage>&#x02013;<lpage>174</lpage>. <pub-id pub-id-type="doi">10.1145/3370748.3406560</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huh</surname> <given-names>D.</given-names></name> <name><surname>Sejnowski</surname> <given-names>T. J.</given-names></name></person-group> (<year>2017</year>). <article-title>Gradient descent for spiking neural networks</article-title>. <source>arXiv preprint arXiv, 1706.04698</source>. <pub-id pub-id-type="doi">10.48550/arXiv.1706.04698</pub-id><pub-id pub-id-type="pmid">23500504</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Imani</surname> <given-names>M.</given-names></name> <name><surname>Bosch</surname> <given-names>S.</given-names></name> <name><surname>Javaheripi</surname> <given-names>M.</given-names></name> <name><surname>Rouhani</surname> <given-names>B. D.</given-names></name> <name><surname>Wu</surname> <given-names>X.</given-names></name> <name><surname>Koushanfar</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2019a</year>). <article-title>Semihd: semi-supervised learning using hyperdimensional computing</article-title>, in <source>ICCAD</source> (<publisher-loc>Westminster, CO</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>.</citation>
</ref>
<ref id="B12">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Imani</surname> <given-names>M.</given-names></name> <name><surname>Kim</surname> <given-names>Y.</given-names></name> <name><surname>Riazi</surname> <given-names>S.</given-names></name> <name><surname>Messerly</surname> <given-names>J.</given-names></name> <name><surname>Liu</surname> <given-names>P.</given-names></name> <name><surname>Koushanfar</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2019c</year>). <article-title>A framework for collaborative learning in secure high-dimensional space</article-title>, in <source>2019 IEEE 12th International Conference on Cloud Computing (CLOUD)</source> (<publisher-loc>Milan</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>435</fpage>&#x02013;<lpage>446</lpage>.</citation>
</ref>
<ref id="B13">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Imani</surname> <given-names>M.</given-names></name> <name><surname>Morris</surname> <given-names>J.</given-names></name> <name><surname>Messerly</surname> <given-names>J.</given-names></name> <name><surname>Shu</surname> <given-names>H.</given-names></name> <name><surname>Deng</surname> <given-names>Y.</given-names></name> <name><surname>Rosing</surname> <given-names>T.</given-names></name></person-group> (<year>2019b</year>). <article-title>Bric: locality-based encoding for energy-efficient brain-inspired hyperdimensional computing</article-title>, in <source>DAC</source> (<publisher-loc>San Francisco, CA</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B14">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Imani</surname> <given-names>M.</given-names></name> <name><surname>Pampana</surname> <given-names>S.</given-names></name> <name><surname>Gupta</surname> <given-names>S.</given-names></name> <name><surname>Zhou</surname> <given-names>M.</given-names></name> <name><surname>Kim</surname> <given-names>Y.</given-names></name> <name><surname>Rosing</surname> <given-names>T.</given-names></name></person-group> (<year>2020</year>). <article-title>Dual: acceleration of clustering algorithms using digital-based processing in-memory</article-title>, in <source>Proceedings of the 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO)</source> (<publisher-loc>Athens</publisher-loc>: <publisher-name>IEEE Computer Society</publisher-name>).</citation>
</ref>
<ref id="B15">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kanerva</surname> <given-names>P.</given-names></name></person-group> (<year>1998</year>). <article-title>Encoding structure in boolean space</article-title>, in <source>ICANN 98</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>387</fpage>&#x02013;<lpage>392</lpage>.</citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kanerva</surname> <given-names>P.</given-names></name></person-group> (<year>2009</year>). <article-title>Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors</article-title>. <source>Cognit. Comput</source>. <volume>1</volume>, <fpage>139</fpage>&#x02013;<lpage>159</lpage>. <pub-id pub-id-type="doi">10.1007/s12559-009-9009-8</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kanerva</surname> <given-names>P.</given-names></name> <name><surname>Kristofersson</surname> <given-names>J.</given-names></name> <name><surname>Holst</surname> <given-names>A.</given-names></name></person-group> (<year>2000</year>). <article-title>Random indexing of text samples for latent semantic analysis</article-title>, in <source>Proceedings of the 22nd Annual Conference of the Cognitive Science Society, Vol. 1036</source> (<publisher-loc>Philadelphia, PA</publisher-loc>: <publisher-name>Citeseer</publisher-name>).</citation>
</ref>
<ref id="B18">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>J.</given-names></name> <name><surname>Lee</surname> <given-names>H.</given-names></name> <name><surname>Imani</surname> <given-names>M.</given-names></name> <name><surname>Kim</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <article-title>Efficient brain-inspired hyperdimensional learning with spatiotemporal structured data</article-title>, in <source>2021 29th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)</source> (<publisher-loc>Houston, TX</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>.</citation>
</ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>Y.</given-names></name> <name><surname>Imani</surname> <given-names>M.</given-names></name> <name><surname>Rosing</surname> <given-names>T. S.</given-names></name></person-group> (<year>2018</year>). <article-title>Efficient human activity recognition using hyperdimensional computing</article-title>, in <source>Proceedings of the 8th International Conference on the Internet of Things</source> (<publisher-loc>Santa Barbara, CA</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>38</fpage>.</citation>
</ref>
<ref id="B20">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kleyko</surname> <given-names>D.</given-names></name> <name><surname>Osipov</surname> <given-names>E.</given-names></name></person-group> (<year>2014</year>). <article-title>Brain-like classifier of temporal patterns</article-title>, in <source>2014 International Conference on Computer and Information Sciences (ICCOINS)</source> (<publisher-loc>Kuala Lumpur</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kleyko</surname> <given-names>D.</given-names></name> <name><surname>Osipov</surname> <given-names>E.</given-names></name> <name><surname>Papakonstantinou</surname> <given-names>N.</given-names></name> <name><surname>Vyatkin</surname> <given-names>V.</given-names></name></person-group> (<year>2018</year>). <article-title>Hyperdimensional computing in industrial systems: the use-case of distributed fault isolation in a power plant</article-title>. <source>IEEE Access</source> <volume>6</volume>, <fpage>30766</fpage>&#x02013;<lpage>30777</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2018.2840128</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>H.</given-names></name> <name><surname>Wu</surname> <given-names>T. F.</given-names></name> <name><surname>Rahimi</surname> <given-names>A.</given-names></name> <name><surname>Li</surname> <given-names>K. S.</given-names></name> <name><surname>Rusch</surname> <given-names>M.</given-names></name> <name><surname>Lin</surname> <given-names>C. H.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Hyperdimensional computing with 3d vrram in-memory kernels: device-architecture co-design for energy-efficient, error-resilient language recognition</article-title>, in <source>2016 IEEE International Electron Devices Meeting (IEDM)</source> (<publisher-loc>San Francisco, CA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>16</fpage>&#x02013;<lpage>1</lpage>.</citation>
</ref>
<ref id="B23">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>S.-C.</given-names></name> <name><surname>Delbruck</surname> <given-names>T.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name> <name><surname>Whatley</surname> <given-names>A.</given-names></name> <name><surname>Douglas</surname> <given-names>R.</given-names></name></person-group> (<year>2014</year>). <source>Event-Based Neuromorphic Systems</source>. John Wiley &#x00026; Sons.</citation>
</ref>
<ref id="B24">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Massa</surname> <given-names>R.</given-names></name> <name><surname>Marchisio</surname> <given-names>A.</given-names></name> <name><surname>Martina</surname> <given-names>M.</given-names></name> <name><surname>Shafique</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>An efficient spiking neural network for recognizing gestures with a dvs camera on the loihi neuromorphic processor</article-title>, in <source>2020 International Joint Conference on Neural Networks (IJCNN)</source> (<publisher-loc>Glasgow, UK</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>9</lpage>.</citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mitrokhin</surname> <given-names>A.</given-names></name> <name><surname>Ferm&#x000FC;ller</surname> <given-names>C.</given-names></name> <name><surname>Aloimonos</surname> <given-names>Y.</given-names></name></person-group> (<year>2019</year>). <article-title>Learning sensorimotor control with neuromorphic sensors: toward hyperdimensional active perception</article-title>. <source>Sci. Rob</source>. <volume>4</volume>, <fpage>6736</fpage>. <pub-id pub-id-type="doi">10.1126/scirobotics.aaw6736</pub-id><pub-id pub-id-type="pmid">33137724</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Neftci</surname> <given-names>E. O.</given-names></name> <name><surname>Mostafa</surname> <given-names>H.</given-names></name> <name><surname>Zenke</surname> <given-names>F.</given-names></name></person-group> (<year>2019</year>). <article-title>Surrogate gradient learning in spiking neural networks: bringing the power of gradient-based optimization to spiking neural networks</article-title>. <source>IEEE Signal Process Mag</source>. <volume>36</volume>, <fpage>51</fpage>&#x02013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.1109/MSP.2019.2931595</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pale</surname> <given-names>U.</given-names></name> <name><surname>Teijeiro</surname> <given-names>T.</given-names></name> <name><surname>Atienza</surname> <given-names>D.</given-names></name></person-group> (<year>2021</year>). <article-title>Multi-centroid hyperdimensional computing approach for epileptic seizure detection</article-title>. <source>arXiv preprint arXiv, 2111.08463</source>. <pub-id pub-id-type="doi">10.3389/fneur.2022.816294</pub-id><pub-id pub-id-type="pmid">35432152</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pale</surname> <given-names>U.</given-names></name> <name><surname>Teijeiro</surname> <given-names>T.</given-names></name> <name><surname>Atienza</surname> <given-names>D.</given-names></name></person-group> (<year>2022</year>). <article-title>Exploration of hyperdimensional computing strategies for enhanced learning on epileptic seizure detection</article-title>. <source>arXiv preprint arXiv,2201.09759</source>. <pub-id pub-id-type="doi">10.48550/arXiv.2201.09759</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Poduval</surname> <given-names>P.</given-names></name> <name><surname>Zakeri</surname> <given-names>A.</given-names></name> <name><surname>Imani</surname> <given-names>F.</given-names></name> <name><surname>Alimohamadi</surname> <given-names>H.</given-names></name> <name><surname>Imani</surname> <given-names>M.</given-names></name></person-group> (<year>2022</year>). <article-title>Graphd: graph-based hyperdimensional memorization for brain-like cognitive learning</article-title>. <source>Front. Neurosci</source>. <volume>5</volume>, <fpage>757125</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2022.757125</pub-id><pub-id pub-id-type="pmid">35185456</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rahimi</surname> <given-names>A.</given-names></name> <name><surname>Benatti</surname> <given-names>S.</given-names></name> <name><surname>Kanerva</surname> <given-names>P.</given-names></name> <name><surname>Benini</surname> <given-names>L.</given-names></name> <name><surname>Rabaey</surname> <given-names>J. M.</given-names></name></person-group> (<year>2016a</year>). <article-title>Hyperdimensional biosignal processing: a case study for emg-based hand gesture recognition</article-title>, in <source>2016 IEEE International Conference on Rebooting Computing (ICRC)</source> (<publisher-loc>San Diego, CA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>.</citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rahimi</surname> <given-names>A.</given-names></name> <name><surname>Kanerva</surname> <given-names>P.</given-names></name> <name><surname>Benini</surname> <given-names>L.</given-names></name> <name><surname>Rabaey</surname> <given-names>J. M.</given-names></name></person-group> (<year>2018</year>). <article-title>Efficient biosignal processing using hyperdimensional computing: network templates for combined learning and classification of exg signals</article-title>. <source>Proc. IEEE</source> <volume>107</volume>, <fpage>123</fpage>&#x02013;<lpage>143</lpage>. <pub-id pub-id-type="doi">10.1109/JPROC.2018.2871163</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rahimi</surname> <given-names>A.</given-names></name> <name><surname>Kanerva</surname> <given-names>P.</given-names></name> <name><surname>Rabaey</surname> <given-names>J. M.</given-names></name></person-group> (<year>2016b</year>). <article-title>A robust and energy-efficient classifier using brain-inspired hyperdimensional computing</article-title>, in <source>ISLPED</source> (<publisher-loc>ACM</publisher-loc>), <fpage>64</fpage>&#x02013;<lpage>69</lpage>. <pub-id pub-id-type="doi">10.1145/2934583.2934624</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>R&#x000E4;s&#x000E4;nen</surname> <given-names>O. J.</given-names></name> <name><surname>Saarinen</surname> <given-names>J. P.</given-names></name></person-group> (<year>2015</year>). <article-title>Sequence prediction with sparse distributed hyperdimensional coding applied to the analysis of mobile phone use patterns</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst</source>. <volume>27</volume>, <fpage>1878</fpage>&#x02013;<lpage>1889</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2015.2462721</pub-id><pub-id pub-id-type="pmid">26285224</pub-id></citation></ref>
<ref id="B34">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Salamat</surname> <given-names>S.</given-names></name> <name><surname>Imani</surname> <given-names>M.</given-names></name> <name><surname>Khaleghi</surname> <given-names>B.</given-names></name> <name><surname>Rosing</surname> <given-names>T.</given-names></name></person-group> (<year>2019</year>). <article-title>F5-HD: fast flexible fpga-based framework for refreshing hyperdimensional computing</article-title>, in <source>FPGA</source> (<publisher-loc>La Jolla, San Diego, CA</publisher-loc>), <fpage>53</fpage>&#x02013;<lpage>62</lpage>. <pub-id pub-id-type="doi">10.1145/3289602.3293913</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Grubl</surname> <given-names>A.</given-names></name> <name><surname>. Meier</surname> <given-names>K.</given-names></name> <name><surname>Mueller</surname> <given-names>E.</given-names></name></person-group> (<year>2006</year>). <article-title>Implementing synaptic plasticity in a vlsi spiking neural network model</article-title>, in <source>The 2006 IEEE International Joint Conference on Neural Network Proceedings</source> (<publisher-loc>Vancouver, BC</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B36">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sharma</surname> <given-names>H.</given-names></name> <name><surname>Park</surname> <given-names>J.</given-names></name> <name><surname>Mahajan</surname> <given-names>D.</given-names></name> <name><surname>Amaro</surname> <given-names>E.</given-names></name> <name><surname>Kim</surname> <given-names>J. K.</given-names></name> <name><surname>Shao</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>From high-level deep neural models to fpgas</article-title>, in <source>2016 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO)</source> (<publisher-loc>Taipei</publisher-loc>: <publisher-name>IEEEE</publisher-name>), <fpage>17</fpage>.</citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>Y.</given-names></name> <name><surname>Song</surname> <given-names>H.</given-names></name> <name><surname>Jara</surname> <given-names>A. J.</given-names></name> <name><surname>Bie</surname> <given-names>R.</given-names></name></person-group> (<year>2016</year>). <article-title>Internet of things and big data analytics for smart and connected communities</article-title>. <source>IEEE Access</source> <volume>4</volume>, <fpage>766</fpage>&#x02013;<lpage>773</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2016.2529723</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>W.</given-names></name> <name><surname>Pedretti</surname> <given-names>G.</given-names></name> <name><surname>Milo</surname> <given-names>V.</given-names></name> <name><surname>Carboni</surname> <given-names>R.</given-names></name> <name><surname>Calderoni</surname> <given-names>A.</given-names></name> <name><surname>Ramaswamy</surname> <given-names>N.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Learning of spatiotemporal patterns in a spiking neural network with resistive switching synapses</article-title>. <source>Sci. Adv</source>. <volume>4</volume>, <fpage>eaat4752</fpage>. <pub-id pub-id-type="doi">10.1126/sciadv.aat4752</pub-id><pub-id pub-id-type="pmid">30214936</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>T. F.</given-names></name> <name><surname>Li</surname> <given-names>H.</given-names></name> <name><surname>Huang</surname> <given-names>P.-C.</given-names></name> <name><surname>Rahimi</surname> <given-names>A.</given-names></name> <name><surname>Rabaey</surname> <given-names>J. M.</given-names></name> <name><surname>Wong</surname> <given-names>H.-S. P.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Brain-inspired computing exploiting carbon nanotube fets and resistive ram: hyperdimensional computing case study</article-title>, in <source>2018 IEEE International Solid-State Circuits Conference-(ISSCC)</source> (<publisher-loc>San Francisco, CA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>492</fpage>&#x02013;<lpage>494</lpage>.</citation>
</ref>
<ref id="B40">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Xiang</surname> <given-names>Y.</given-names></name> <name><surname>Kim</surname> <given-names>H.</given-names></name></person-group> (<year>2019</year>). <article-title>Pipelined data-parallel cpu/gpu scheduling for multi-dnn real-time inference</article-title>, in <source>2019 IEEE Real-Time Systems Symposium (RTSS)</source> (<publisher-loc>Hong Kong</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>392</fpage>&#x02013;<lpage>405</lpage>.</citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zaslavsky</surname> <given-names>A.</given-names></name> <name><surname>Perera</surname> <given-names>C.</given-names></name> <name><surname>Georgakopoulos</surname> <given-names>D.</given-names></name></person-group> (<year>2013</year>). <article-title>Sensing as a service and big data</article-title>. <source>arXiv preprint arXiv, 1301.0159</source>. <pub-id pub-id-type="doi">10.48550/arXiv.1301.0159</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname> <given-names>A. Z.</given-names></name> <name><surname>Thakur</surname> <given-names>D.</given-names></name> <name><surname>&#x000D6;zaslan</surname> <given-names>T.</given-names></name> <name><surname>Pfrommer</surname> <given-names>B.</given-names></name> <name><surname>Kumar</surname> <given-names>V.</given-names></name> <name><surname>Daniilidis</surname> <given-names>K.</given-names></name></person-group> (<year>2018</year>). <article-title>The multivehicle stereo event camera dataset: an event camera dataset for 3d perception</article-title>. <source>IEEE Rob. Autom. Lett</source>. <volume>3</volume>, <fpage>2032</fpage>&#x02013;<lpage>2039</lpage>. <pub-id pub-id-type="doi">10.1109/LRA.2018.2800793</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zou</surname> <given-names>Z.</given-names></name> <name><surname>Kim</surname> <given-names>Y.</given-names></name> <name><surname>Imani</surname> <given-names>F.</given-names></name> <name><surname>Alimohamadi</surname> <given-names>H.</given-names></name> <name><surname>Cammarota</surname> <given-names>R.</given-names></name> <name><surname>Imani</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>Scalable edge-based hyperdimensional learning system with brain-like neural adaptation</article-title>, in <source>Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis</source> (<publisher-loc>San Diego, CA</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>15</lpage>.</citation>
</ref>
</ref-list>
</back>
</article>