<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurorobot.</journal-id>
<journal-title>Frontiers in Neurorobotics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurorobot.</abbrev-journal-title>
<issn pub-type="epub">1662-5218</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnbot.2021.631159</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Neuromorphic NEF-Based Inverse Kinematics and PID Control</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Zaidel</surname> <given-names>Yuval</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1147016/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Shalumov</surname> <given-names>Albert</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1147006/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Volinski</surname> <given-names>Alex</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Supic</surname> <given-names>Lazar</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Ezra Tsur</surname> <given-names>Elishai</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/359800/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, Open University of Israel</institution>, <addr-line>Ra&#x00027;anana</addr-line>, <country>Israel</country></aff>
<aff id="aff2"><sup>2</sup><institution>Accenture Labs</institution>, <addr-line>San Francisco, CA</addr-line>, <country>United States</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Jose De Jesus Rubio, Instituto Polit&#x000E9;cnico Nacional (IPN), Mexico</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Yulia Sandamirskaya, Intel, Germany; Lea Steffen, Research Center for Information Technology, Germany; Fernando Perez-Pe&#x000F1;a, University of C&#x000E1;diz, Spain; Yiming Wu, Nankai University, China</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Elishai Ezra Tsur <email>elishai&#x00040;NBEL-lab.com</email></corresp>
<fn fn-type="other" id="fn001"><p>&#x02020;These authors have contributed equally to this work</p></fn></author-notes>
<pub-date pub-type="epub">
<day>03</day>
<month>02</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>15</volume>
<elocation-id>631159</elocation-id>
<history>
<date date-type="received">
<day>19</day>
<month>11</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>01</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2021 Zaidel, Shalumov, Volinski, Supic and Ezra Tsur.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Zaidel, Shalumov, Volinski, Supic and Ezra Tsur</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license></permissions>
<abstract><p>Neuromorphic implementation of robotic control has been shown to outperform conventional control paradigms in terms of robustness to perturbations and adaptation to varying conditions. Two main ingredients of robotics are inverse kinematics and Proportional&#x02013;Integral&#x02013;Derivative (PID) control. Inverse kinematics is used to compute an appropriate state in a robot&#x00027;s configuration space, given a target position in task space. PID control applies responsive correction signals to a robot&#x00027;s actuators, allowing it to reach its target accurately. The Neural Engineering Framework (NEF) offers a theoretical framework for the neuromorphic encoding of mathematical constructs with spiking neurons, enabling the implementation of functional large-scale neural networks. In this work, we developed NEF-based neuromorphic algorithms for inverse kinematics and PID control, which we used to manipulate a 6 degrees of freedom robotic arm. We used online learning for inverse kinematics and signal integration and differentiation for PID, offering high-performing and energy-efficient neuromorphic control. Algorithms were evaluated in simulation as well as on Intel&#x00027;s Loihi neuromorphic hardware.</p></abstract>
<kwd-group>
<kwd>neural engineering framework</kwd>
<kwd>robotic control software</kwd>
<kwd>Loihi</kwd>
<kwd>neuromorphic engineering</kwd>
<kwd>spiking neural networks</kwd>
<kwd>robotic arm</kwd>
</kwd-group>
<counts>
<fig-count count="6"/>
<table-count count="0"/>
<equation-count count="9"/>
<ref-count count="29"/>
<page-count count="12"/>
<word-count count="6920"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>While computational motion planning and sensing have emerged as focal points for countless state-of-the-art robotic systems, in many ways, they are inadequate when compared with biological systems, particularly in terms of energy efficiency, robustness, versatility, and adaptivity (DeWolf et al., <xref ref-type="bibr" rid="B10">2016</xref>). Consequently, neuromorphic (brain-inspired) computing hardware and algorithms have been used in numerous robotic applications (Krichmar and Wagatsuma, <xref ref-type="bibr" rid="B19">2011</xref>). A typical neuromorphic processor comprises densely connected, physically implemented computing elements that communicate with spikes and emulate the computational principles of biological neurons (Tsur and Rivlin-Etzion, <xref ref-type="bibr" rid="B28">2020</xref>). However, designing algorithms with spiking neurons is a challenging endeavor, as it requires the encoding, decoding, and transformation of mathematical constructs without a central processing unit or address-based memory. One theoretical framework that allows for efficient data encoding and decoding with spiking neurons is the Neural Engineering Framework (NEF) (Eliasmith and Anderson, <xref ref-type="bibr" rid="B11">2003</xref>). NEF is one of the most utilized theoretical frameworks in neuromorphic computing, and it has been used to design neuromorphic systems capable of perception, memory, and motor control (DeWolf et al., <xref ref-type="bibr" rid="B9">2020</xref>). It serves as the foundation for Nengo, a Python-based &#x0201C;neural compiler,&#x0201D; which translates high-level descriptions to low-level neural models (Bekolay et al., <xref ref-type="bibr" rid="B3">2014</xref>). 
A version of NEF has been compiled to work on the most prominent neuromorphic hardware architectures available, including TrueNorth (Fischl et al., <xref ref-type="bibr" rid="B12">2018</xref>), developed by IBM Research; Loihi (Lin et al., <xref ref-type="bibr" rid="B22">2018</xref>), developed by Intel Labs; NeuroGrid (Boahen, <xref ref-type="bibr" rid="B4">2017</xref>), developed at Stanford University; and SpiNNaker (Mundy et al., <xref ref-type="bibr" rid="B25">2015</xref>), developed at the University of Manchester.</p>
<p>A robot&#x00027;s state can be defined in configuration space by a set of joint angles defining the orientation of each limb segment. Forward Kinematics (FK) refers to the computation used to transform the robot&#x00027;s configuration into its End-Effector&#x00027;s (EE) Cartesian coordinates. Inverse Kinematics (IK) refers to the opposite transformation, in which a robot&#x00027;s joint configuration is computed from its EE location. While FK can be solved analytically using transformation matrices or trigonometry, IK is usually solved numerically, since several joint configurations can often produce the same EE position. Many numerical optimization methods have been developed for IK, ranging from the Jacobian inverse (Lynch, <xref ref-type="bibr" rid="B24">2017</xref>) and fuzzy logic techniques (Hagras, <xref ref-type="bibr" rid="B15">2004</xref>) to artificial neural networks (Koker et al., <xref ref-type="bibr" rid="B18">2004</xref>). Once a target configuration is derived, the robot&#x00027;s actuators are controlled to approach it accurately. The most widely used paradigm for robotic control is to continuously actuate the robot so as to minimize the distance between its current EE location and its designated target. A PID controller applies correction signals based on the error&#x00027;s Proportional, Integral, and Derivative terms (Ang et al., <xref ref-type="bibr" rid="B1">2005</xref>). Robust neuromorphic implementations of IK and PID are an essential milestone for neurorobotics.</p>
<p>In this work, we propose NEF-based neuromorphic algorithms for IK and PID. Algorithms were designed with Nengo and evaluated both in simulation and on Intel&#x00027;s Loihi neuromorphic hardware (Davies et al., <xref ref-type="bibr" rid="B8">2018</xref>). We used real-time learning for IK, and signal integration and differentiation for PID. The algorithms were used to control a 6 Degrees of Freedom (DOF) robotic arm. Our implementations offer high-performing and energy-efficient neuromorphic robot control, which can be compiled over various neuromorphic hardware architectures.</p></sec>
<sec sec-type="materials and methods" id="s2">
<title>Materials and Methods</title>
<sec>
<title>Robotic Arm</title>
<p>The robotic arm we used in this research comprises nine servo actuators (7 &#x000D7; Dynamixel&#x00027;s XM540-W270, 2 &#x000D7; Dynamixel&#x00027;s XM430-W350). All actuators are capable of a 40 N radial load and have an embedded Cortex-M3 microcontroller. The M3 is coupled with contactless 12-bit absolute encoders, allowing the retrieval of the actuator&#x00027;s position, velocity, and trajectory as feedback for position estimation. The XM540 actuators are used for arm movements and have a stall torque of 10.6 Nm (at 12 V input). The XM430 actuators are used to manipulate the EE (grasping, rotating) and have a stall torque of 4.1 Nm (at 12 V input). Actuators are manufactured by ROBOTIS (Korea). The actuators do not have torque sensors and were therefore actuated by current specifications. The relation between the drive current and the generated rotational velocity is nonlinear, as it must account for friction. In this work, the current-speed association was estimated as described below. The arm chassis is based on 3D-printed grippers (allowing function-tailored customization), rigid and lightweight T-slot extruded aluminum arms, and aluminum brackets. The chassis is connected to the actuators with industrial-grade slewing bearings, and it was assembled by Interbotix (Downers Grove, Illinois). Motion control was evaluated on the Nvidia Xavier chip (Jetson AGX Xavier) and then realized on Intel&#x00027;s Loihi chip. Communication with the daisy-chained servos was based on TTL half-duplex asynchronous serial communication, handled by Dynamixel&#x00027;s U2D2 control board. Overall, the arm design provides 6 DOF, 82 cm reach, 1.64 m span, 1 mm accuracy, and a 750 g payload.</p></sec>
<sec>
<title>Robot Simulation</title>
<p>To simulate the robot described above, we used the Multi-Joint dynamics with Contact (MuJoCo) physics simulation framework. The dynamics of the robotic arm and its joints were accurately specified using a CAD-derived mechanical description together with inertia and mass matrices. CAD and dynamic specifications were provided by Trossen Robotics (USA). The simulation was developed using Nengo, a Python package for building, testing, and deploying NEF-based neural networks.</p></sec>
<sec>
<title>Forward and Inverse Kinematics</title>
<p>FK transforms a robot&#x00027;s configuration to the Cartesian coordinates of its EE. Here, it was implemented using transformation matrices, which characterize the relative transformation (rotation, translation) from each joint to the next. For our five-joint robot, FK takes the form <italic>T</italic> &#x0003D; <italic>T</italic><sub>01</sub><italic>T</italic><sub>12</sub><italic>T</italic><sub>23</sub><italic>T</italic><sub>34</sub><italic>T</italic><sub>45</sub><italic>T</italic><sub>56</sub>, where <italic>T</italic><sub><italic>ij</italic></sub> is the transformation matrix in homogeneous coordinates from the reference frame at joint <italic>i</italic> to the reference frame at joint <italic>j</italic> (indices 0 and 6 refer to the world&#x00027;s and the EE&#x00027;s coordinate reference frames, respectively). Initializing <italic>T</italic> with the appropriate set of rotations and translations and multiplying it by the origin [0, 0, 0, 1]<sup><italic>T</italic></sup> (in homogeneous coordinates) results in our FK model <italic>T</italic><sub><italic>x</italic></sub>(<italic>q</italic>):</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mtext>T</mml:mtext></mml:mrow><mml:mrow><mml:mtext>x</mml:mtext></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mtable style="text-align:axis;" equalrows="false" columnlines="none none none none none none none none none" equalcolumns="false" class="array"><mml:mtr><mml:mtd><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>2</mml:mn><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>4</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>4</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>3</mml:mn><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>06</mml:mn><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>2</mml:mn><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>4</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>4</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>3</mml:mn><mml:mrow><mml:mo 
stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>06</mml:mn><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>118</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>2</mml:mn><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>4</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>4</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>3</mml:mn><mml:mrow><mml:mo 
stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>06</mml:mn><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Where <italic>s</italic><sub><italic>x</italic></sub> &#x0003D; <italic>sin</italic>(<italic>q</italic><sub><italic>x</italic></sub>), <italic>c</italic><sub><italic>x</italic></sub> &#x0003D; <italic>cos</italic>(<italic>q</italic><sub><italic>x</italic></sub>), and <italic>q</italic><sub><italic>x</italic></sub> is the rotation angle of actuator <italic>x</italic>. <italic>T</italic><sub><italic>x</italic></sub>(<italic>q</italic>) returns the EE position in the world&#x00027;s coordinate system, whose origin is at the robot&#x00027;s base. The numerical coefficients were derived by calculating the transformations while taking into account the robot&#x00027;s geometry, retrieved from the robot&#x00027;s CAD file.</p>
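<p>As an illustration of this construction, FK can be sketched by chaining homogeneous transformation matrices and mapping the origin [0, 0, 0, 1]<sup><italic>T</italic></sup> through them. The rotation axes and link offsets below are simplified placeholders (a single rotation axis per joint), not the actual CAD-derived values of this arm, and the function names are ours:</p>

```python
import numpy as np

def rot_z(q):
    # Homogeneous rotation about the z-axis by angle q (radians).
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans(dx, dy, dz):
    # Homogeneous translation by (dx, dy, dz).
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

def forward_kinematics(q, links):
    # Chain T = T01 T12 ... and multiply by the origin [0, 0, 0, 1]^T
    # to obtain the EE position in world coordinates.
    T = np.eye(4)
    for qi, offset in zip(q, links):
        T = T @ rot_z(qi) @ trans(*offset)
    return (T @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]

# Illustrative link offsets in meters (the real values come from the CAD file).
LINKS = [(0.0, 0.0, 0.118), (0.06, 0.0, 0.0), (0.3, 0.0, 0.0),
         (0.2, 0.0, 0.0), (0.0, 0.0, 0.0)]
ee = forward_kinematics(np.zeros(5), LINKS)  # EE position at the zero configuration
```

<p>With all joint angles at zero, every rotation is the identity, and the EE position reduces to the sum of the link offsets.</p>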
<p>IK refers to the transformation in which a robot&#x00027;s configuration is computed from its EE&#x00027;s desired location. Generally, IK cannot be analytically solved, and it is, therefore, usually numerically optimized. Here we used the Jacobian inverse for IK. We calculate the Jacobian <italic>J</italic> of <italic>T</italic>, which relates the change of the EE position <italic>x</italic> to the change of joint angles <italic>q</italic>: <inline-formula><mml:math id="M2"><mml:mi>J</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B4;</mml:mi><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003B4;</mml:mi><mml:mi>q</mml:mi></mml:mrow></mml:mfrac></mml:math></inline-formula>. The Jacobian relates a change in robot configuration <inline-formula><mml:math id="M3"><mml:mover accent="true"><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mo>.</mml:mo></mml:mover></mml:math></inline-formula> to a change in EE position &#x01E8B; with:</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M4"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>.</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mi>J</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mo>.</mml:mo></mml:mover></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Equation (2) allows us to specify a target in task space&#x02014;that is, the Cartesian space centered on the EE&#x00027;s origin&#x02014;rather than in the space that can be directly controlled, the configuration space. Note that the Jacobian has to be recalculated along the trajectory. With our robotic system, the calculated Jacobian has shape (3, 5), where 3 is the number of space dimensions (task space) and 5 is the number of joints (configuration space). To compute IK, we need to invert Equation (2). Since the Jacobian is not necessarily invertible, a common practice is to use its pseudo-inverse form <italic>J</italic><sup>&#x0002B;</sup>, allowing us to compute <inline-formula><mml:math id="M5"><mml:mover accent="true"><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mo>.</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mi>J</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x0002B;</mml:mo></mml:mrow></mml:msup><mml:mi>&#x01E8B;</mml:mi></mml:math></inline-formula>. Therefore, given an error in space coordinates <italic>x</italic><sub><italic>d</italic></sub>, defined as the difference between the EE&#x00027;s current position <italic>x</italic><sub><italic>c</italic></sub> and its target position <italic>x</italic><sub><italic>t</italic></sub>, the appropriate change in joint space can be computed using:</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M6"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>d</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mi>J</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x0002B;</mml:mo></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Where <italic>d</italic> (<italic>q</italic>) is the change in joint angles that brings the robot&#x00027;s EE closer to its target. In each iteration, this equation is re-evaluated until <italic>x</italic><sub><italic>d</italic></sub> is within some accuracy threshold. Once the joint configuration for a given target point is concluded, control signals that achieve it can be calculated using PID control. Further details are provided in Lynch (<xref ref-type="bibr" rid="B24">2017</xref>).</p></sec>
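<p>The iteration described by Equation (3) can be sketched as follows. The code assumes a generic FK callable fk(q) that returns a 3D EE position and approximates the Jacobian by finite differences; the helper names and the toy two-link arm are illustrative, not the paper&#x00027;s implementation:</p>

```python
import numpy as np

def numerical_jacobian(fk, q, eps=1e-6):
    # Finite-difference approximation of J(q) = dx/dq (shape 3 x len(q)).
    x0 = fk(q)
    J = np.zeros((3, len(q)))
    for i in range(len(q)):
        dq = np.zeros(len(q))
        dq[i] = eps
        J[:, i] = (fk(q + dq) - x0) / eps
    return J

def ik_step(fk, q, x_target, gain=0.5):
    # One Jacobian pseudo-inverse update: d(q) = J+(q) x_d, as in Equation (3).
    x_d = x_target - fk(q)
    J_plus = np.linalg.pinv(numerical_jacobian(fk, q))
    return q + gain * (J_plus @ x_d)

def solve_ik(fk, q0, x_target, tol=1e-4, max_iter=200):
    # Re-evaluate the update until x_d is within the accuracy threshold.
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        if np.linalg.norm(x_target - fk(q)) < tol:
            break
        q = ik_step(fk, q, x_target)
    return q

def toy_fk(q):
    # Toy planar two-link arm with unit links, used only to exercise the solver.
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1]),
                     0.0])

q_sol = solve_ik(toy_fk, [0.5, 0.5], np.array([1.0, 1.0, 0.0]))
```

<p>The gain damps each step, which trades convergence speed for stability near poorly conditioned configurations.</p>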
<sec>
<title>PID Control</title>
<p>PID control is used universally in applications requiring accurate control (Ang et al., <xref ref-type="bibr" rid="B1">2005</xref>). Given a target position, a PID controller will continuously reduce an error signal by providing the robot&#x00027;s actuator with the appropriate control signal to bring it closer to its target. To do so, the PID controller generates a signal <italic>u</italic> (<italic>t</italic>) that is proportional to the value of the error signal <italic>e</italic> (<italic>t</italic>) (accounting for the <italic>current</italic> value of the error), to the error&#x00027;s integrated value over time (accounting for the <italic>past</italic> values of the error), and to the error&#x00027;s derivative (accounting for the <italic>projected</italic> value of the error), using:</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M7"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>u</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>K</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mi>e</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>K</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mstyle displaystyle="true"><mml:msubsup><mml:mrow><mml:mo>&#x0222B;</mml:mo></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mstyle><mml:mi>e</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>K</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Where <italic>K</italic><sub><italic>p</italic></sub>, <italic>K</italic><sub><italic>i</italic></sub>, and <italic>K</italic><sub><italic>d</italic></sub> are the proportional, integral, and derivative gain coefficients, respectively.</p></sec>
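<p>As a point of reference, Equation (4) can be sketched in discrete time. The following is a minimal, conventional (non-spiking) sketch; the gain values and the toy first-order plant are illustrative assumptions, not the parameters used in this work:</p>

```python
def pid_step(error, state, Kp, Ki, Kd, dt):
    """One discrete-time PID update; state carries (integral, previous error)."""
    integral, prev_error = state
    integral += error * dt                  # past error (integral term)
    derivative = (error - prev_error) / dt  # projected error (derivative term)
    u = Kp * error + Ki * integral + Kd * derivative
    return u, (integral, error)

# Drive a toy first-order plant dx/dt = u toward a unit target.
x, state, dt = 0.0, (0.0, 0.0), 0.01
for _ in range(2000):
    u, state = pid_step(1.0 - x, state, Kp=2.0, Ki=1.0, Kd=0.1, dt=dt)
    x += u * dt
```

<p>In the neuromorphic implementation described below, each of these three terms is instead computed by populations of spiking neurons.</p>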
<sec>
<title>Neural Engineering Framework</title>
<sec>
<title>Neuromorphic Representation</title>
<p>To represent a computation in a form suitable for neuromorphic hardware, we represent numerical input vectors (or stimuli) with spikes. Stimulus <italic>x</italic> can be represented as <italic>a</italic> using <italic>a</italic> &#x0003D; <italic>f</italic> (<italic>x</italic>), where <italic>a</italic> takes the form of <italic>a</italic> &#x0003D; <italic>G</italic> (<italic>J</italic> (<italic>x</italic>)). <italic>G</italic> is a spiking neuron model and <italic>J</italic> is its input current. Here, we used the leaky-integrate-and-fire (LIF) model (Burkitt, <xref ref-type="bibr" rid="B6">2006</xref>) for <italic>G</italic>. A distributed neuron representation, where each neuron <italic>i</italic> responds independently to <italic>x</italic>, will take the form <italic>a</italic><sub><italic>i</italic></sub> &#x0003D; <italic>G</italic><sub><italic>i</italic></sub> (<italic>J</italic><sub><italic>i</italic></sub> (<italic>x</italic>)). Since neurons usually have some preferred stimuli <italic>e</italic>, to which they respond with a high frequency of spikes, <italic>J</italic> can be defined using: <italic>J</italic> &#x0003D; &#x003B1;<italic>x</italic>&#x000B7;<italic>e</italic>&#x0002B;<italic>J</italic><sup><italic>bias</italic></sup>, where &#x003B1; is a gain term, and <italic>J</italic><sup><italic>bias</italic></sup> is a fixed background current. Note that both <italic>x</italic> and <italic>e</italic> are vectors. For unit vectors, <italic>x</italic>&#x000B7;<italic>e</italic> equals 1 when <italic>x</italic> and <italic>e</italic> point in the same direction, &#x02212;1 when they oppose each other, and 0 when they are orthogonal, where &#x000B7; is the dot product. With NEF, a neuron firing rate &#x003B4;<sub><italic>i</italic></sub> is defined using (rate coding):</p>
<disp-formula id="E5"><label>(5)</label><mml:math id="M8"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>G</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:msup><mml:mrow><mml:msub><mml:mrow><mml:mi>J</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>b</mml:mi><mml:mi>i</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>An ensemble of neurons, in which each neuron has its gain and preferred direction, can distributively represent a high-dimensional stimulus <italic>x</italic>. The represented stimulus <inline-formula><mml:math id="M9"><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula> (which is an approximation of <italic>x</italic>) can be linearly decoded using:</p>
<disp-formula id="E6"><label>(6)</label><mml:math id="M10"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:munder></mml:mstyle><mml:msub><mml:mrow><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mtext>&#x000A0;</mml:mtext><mml:mo>*</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mi>h</mml:mi><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Where <italic>d</italic><sub><italic>i</italic></sub> are linear decoders, which are optimized to reproduce <italic>x</italic> using least-squares optimization, and <italic>a</italic><sub><italic>i</italic></sub>&#x0002A;<italic>h</italic> is the spiking activity <italic>a</italic><sub><italic>i</italic></sub> convolved with filter <italic>h</italic> (both are functions of time). Equations (5) and (6) specify the encoding and decoding of mathematical constructs using neuronal ensembles&#x00027; distributed activity.</p></sec>
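<p>Equations (5) and (6) can be sketched together with rate-based LIF neurons and least-squares decoders. This is a minimal NumPy sketch under simplifying assumptions (a 1-D stimulus, noiseless steady-state rates, and illustrative tuning parameters), not the spiking Nengo implementation used in this work:</p>

```python
import numpy as np

tau_ref, tau_rc = 0.002, 0.02  # LIF refractory and membrane time constants

def lif_rate(J):
    """Steady-state LIF firing rate G[J]; the threshold current is J = 1."""
    J = np.asarray(J, dtype=float)
    rates = np.zeros_like(J)
    above = J > 1
    rates[above] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (J[above] - 1.0)))
    return rates

rng = np.random.default_rng(0)
n = 100
e = rng.choice([-1.0, 1.0], n)    # preferred directions e_i (1-D stimulus)
c = rng.uniform(-0.9, 0.9, n)     # intercepts: stimulus at which firing begins
r = rng.uniform(200.0, 300.0, n)  # firing rate when x = e_i
# Solve gain alpha_i and background current J_bias_i from (r, c):
# J = 1 at the intercept and lif_rate(J_max) = r at x . e = 1.
J_max = 1.0 + 1.0 / np.expm1((1.0 / r - tau_ref) / tau_rc)
alpha = (J_max - 1.0) / (1.0 - c)
J_bias = 1.0 - alpha * c

def activities(x):
    """Equation (5): a_i(x) = G[alpha_i e_i . x + J_bias_i]."""
    return lif_rate(alpha * e * np.atleast_1d(x)[:, None] + J_bias)

# Least-squares decoders d_i for Equation (6), fit over sampled stimuli
xs = np.linspace(-1, 1, 200)
d, *_ = np.linalg.lstsq(activities(xs), xs, rcond=None)
x_hat = (activities(0.5) @ d)[0]  # decoded estimate of the stimulus x = 0.5
```
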
<sec>
<title>Neuromorphic Transformation and Online Learning</title>
<p>A key aspect of neuromorphic computing is activity propagation, or the transformation of represented values, implemented by connecting neuron ensembles with a weighted matrix of synaptic connections. The resulting activity transformation is a function of <italic>x</italic>. Notably, it was shown that any function <italic>f</italic> (<italic>x</italic>) can be approximated using some set of decoding weights d<sup>f</sup> (Eliasmith and Anderson, <xref ref-type="bibr" rid="B11">2003</xref>). Here we use it to compute Equations (2)&#x02013;(4) to provide a neuromorphic implementation of IK and PID control. In NEF, <italic>f</italic> (<italic>x</italic>) can be defined by connecting two neuronal ensembles A and B via synaptic connection weights <italic>w</italic><sub><italic>ij</italic></sub>(<italic>x</italic>) using:</p>
<disp-formula id="E7"><label>(7)</label><mml:math id="M11"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02297;</mml:mo><mml:msub><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Where <italic>i</italic> is the neuron index in ensemble <italic>A</italic>, <italic>j</italic> is the neuron index in ensemble <italic>B</italic>, <italic>d</italic><sub><italic>i</italic></sub> are the decoders of ensemble A, <italic>e</italic><sub><italic>j</italic></sub> are the encoders of ensemble B, which represents <italic>f</italic> (<italic>x</italic>), and &#x02297; is the outer product operation.</p>
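<p>Equation (7)&#x00027;s factorization can be checked numerically: the full weight matrix assembled from decoders and encoders delivers the same input to ensemble B as first decoding the represented value from A and then re-encoding it (a toy NumPy check with arbitrary random values):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
d = rng.normal(size=40) / 40     # decoders d_i of ensemble A (1-D value)
e = rng.choice([-1.0, 1.0], 30)  # encoders e_j of ensemble B (1-D value)
W = np.outer(e, d)               # w_ij = d_i (outer product) e_j, a B-by-A matrix
a = rng.uniform(0.0, 200.0, 40)  # firing activities of ensemble A
via_weights = W @ a              # input through the full weight matrix
via_factors = e * (d @ a)        # decode x_hat from A, then encode into B
```
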
<p>Connection weights, which govern the transformation between one representation to another, can also be adapted or learned in real-time, rather than optimized during model building. Weight adaptation in real-time is of particular interest in robotics, where unknown perturbations from the environment can affect the error. One efficient way to implement real-time learning with NEF is using the Prescribed Error Sensitivity (PES) learning rule. PES is a biologically plausible supervised learning rule that modifies a connection&#x00027;s decoders <italic>d</italic> to minimize an error signal <italic>e</italic> calculated as the difference between the stimulus and its approximated representation: <inline-formula><mml:math id="M12"><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mo>-</mml:mo><mml:mi>x</mml:mi></mml:math></inline-formula>. The PES applies the update rule: &#x00394;<italic>d</italic> &#x0003D; &#x003BA;<italic>e&#x003B4;</italic>, where &#x003BA; is the learning rate. Notably, it was shown that when 1 &#x02212; &#x003BA;&#x02016;&#x003B4;&#x02016;<sup>2</sup> (denoted &#x003B3;) is larger than &#x02212;1, the error <italic>e</italic> goes to 0 exponentially with rate &#x003B3;. PES is described at length in Voelker (<xref ref-type="bibr" rid="B29">2015</xref>).</p></sec>
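<p>The PES update can be sketched for a single, fixed stimulus (an illustrative NumPy sketch: the activity vector, target value, and learning rate are arbitrary assumptions, and the error is written as the decoded estimate minus its target):</p>

```python
import numpy as np

rng = np.random.default_rng(2)
delta = rng.uniform(0.0, 1.0, 50)  # fixed firing activities for one stimulus
d = np.zeros(50)                   # decoders, adapted online
target = 0.8                       # value the decoded output should converge to
kappa = 0.9 / (delta @ delta)      # chosen so gamma = 1 - kappa*||delta||^2 = 0.1

errors = []
for _ in range(30):
    err = d @ delta - target       # error between decoded estimate and target
    d -= kappa * err * delta       # PES decoder update
    errors.append(abs(err))
# |error| shrinks geometrically by the factor gamma = 0.1 at every step
```
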
<sec>
<title>Neuromorphic Dynamical System and Integration</title>
<p>System dynamics is a theoretical framework concerning the non-linear behavior of complex systems over time. Dynamics is the third basic principle of NEF, and it provides the framework with the capacity to use Spiking Neural Networks (SNN) to solve differential equations. It is essentially a combination of the first two principles of NEF, representation and transformation, where transformation is used in a recurrent scheme. Following Equation (6), a recurrent connection (connecting a neural ensemble back to itself) is defined using: <italic>x</italic> (<italic>t</italic>) &#x0003D; <italic>f</italic> (<italic>x</italic> (<italic>t</italic>)) &#x0002A;<italic>h</italic>(<italic>t</italic>). A canonical description of a linear error-correcting feedback loop can be given using: <inline-formula><mml:math id="M13"><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mi>A</mml:mi><mml:mi>x</mml:mi><mml:mrow><mml:mtext>&#x000A0;</mml:mtext><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mi>B</mml:mi><mml:mi>u</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, where <italic>x</italic> (<italic>t</italic>) is a state vector, which summarizes the effect of all past inputs, <italic>u</italic> (<italic>t</italic>) is the input vector, <italic>B</italic> is the input matrix, and <italic>A</italic> is the dynamics matrix. In NEF, this standard control can be realized using:</p>
<disp-formula id="E8"><label>(8)</label><mml:math id="M14"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mi>x</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msup><mml:mrow><mml:mi>B</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mi>u</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Where <italic>A</italic>&#x02032; is the recurrent connection, which is defined as &#x003C4;<italic>A</italic> &#x0002B; <italic>I</italic>, where <italic>I</italic> is the identity matrix, and <italic>B</italic>&#x02032; is the input connection, which is defined as &#x003C4;<italic>B</italic> (Eliasmith and Anderson, <xref ref-type="bibr" rid="B11">2003</xref>). This neural implementation can be used to implement a neuromorphic integrator. For an integrator, input (e.g., velocity) <italic>u</italic> is integrated to define <italic>x</italic> (e.g., position), where <inline-formula><mml:math id="M15"><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mi>u</mml:mi></mml:math></inline-formula>. In terms of Equation (8), <italic>A</italic> &#x0003D; 0 and <italic>B</italic> &#x0003D; 1, resulting in a recurrent connection of <italic>A</italic>&#x02032; &#x0003D; 1 and <italic>B</italic>&#x02032; &#x0003D; &#x003C4;.</p></sec></sec>
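<p>The resulting integrator can be sketched as an idealized, non-spiking loop: both the recurrent connection (<italic>A</italic>&#x02032; = 1) and the input connection (<italic>B</italic>&#x02032; = &#x003C4;) pass through a first-order synaptic filter with time constant &#x003C4; (an illustrative discrete-time sketch; the actual implementation uses spiking ensembles):</p>

```python
import numpy as np

dt, tau = 0.001, 0.1
steps = 1000
u = np.zeros(steps)
u[:500] = 1.0              # a 0.5 s pulse of unit "velocity" input
x = 0.0                    # decoded value of the recurrent ensemble
trace = np.empty(steps)
for t in range(steps):
    drive = 1.0 * x + tau * u[t]   # recurrent A' = 1 plus input B' = tau
    x += (dt / tau) * (drive - x)  # first-order synapse, time constant tau
    trace[t] = x
# x integrates u: it ramps up during the pulse, then holds its value
```
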
<sec>
<title>Loihi Chip</title>
<p>In this work, we&#x00027;ve implemented IK and PID on Intel&#x00027;s neuromorphic research chip Loihi (Davies et al., <xref ref-type="bibr" rid="B8">2018</xref>). NEF was compiled on the board using the nengo_loihi library (version 0.19) (Lin et al., <xref ref-type="bibr" rid="B22">2018</xref>). The nengo_loihi library was designed to execute Nengo models on Loihi boards. It contains a Loihi emulator backend for rapid model development and a hardware backend for running models on the board itself. Nengo Loihi&#x00027;s hardware backend uses Intel&#x00027;s NxSDK API to interact with the host and configure the board. Each Loihi chip comprises 128 neuron cores, each simulating 1,024 neurons and having 4,096 ports. Each chip also has x86 cores, which are used for spike routing and monitoring. Communication was established via an SSH channel between our local computer and a virtual machine installed on Intel&#x00027;s neuromorphic research cloud.</p></sec></sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Simulating Neuromorphic Inverse Kinematics</title>
<p>IK was implemented with NEF using Nengo and tested on our robotic arm. Our model schematic is shown in <xref ref-type="fig" rid="F1">Figure 1A</xref>. The configuration of the robot, in terms of joint angles, was introduced through a node into a neuron ensemble denoted <italic>Current q</italic>. Ensemble <italic>Current q</italic> is fully connected to another neuron ensemble, denoted <italic>Target q</italic>. These synaptic weights are modulated to minimize the value decoded from neuron ensemble <italic>Error</italic> using PES optimization. Ensemble <italic>Distance to target</italic> encodes the EE distance from its designated target by subtracting the current position of the EE (calculated using Equation 1, applied to the value decoded from <italic>Target q</italic>) from its desired position (given as an input through node <italic>xyz Target</italic>). Both the <italic>Target q</italic> and <italic>Distance to target</italic> ensembles are connected to a 5D ensemble, allowing for non-linear computation across the robot&#x00027;s five joint states. The difference between the current and the desired joint configuration is calculated through Equation (3), which is implemented through the connection to ensemble <italic>Error</italic>. When the <italic>Error</italic> decoded value is 0, ensemble <italic>Target q</italic> encodes the desired robot configuration. The <italic>Error</italic> ensemble is connected to an <italic>inhibition</italic> signaling node. Upon actuation (initiated once a sufficiently accurate result is achieved), the error signal is zeroed; thus, <italic>Target q</italic> is stabilized at its current state. Here, we performed IK on our robot&#x00027;s 5 joints, starting from an initial configuration where all joints were zeroed (<xref ref-type="fig" rid="F1">Figure 1B</xref>). 
As learning progresses, the new robot configuration is calculated (<xref ref-type="fig" rid="F1">Figure 1C</xref>), while the error is continually minimized (<xref ref-type="fig" rid="F1">Figure 1D</xref>). Raster plots of ensembles <italic>q, target q</italic>, and <italic>error</italic> are shown in <xref ref-type="fig" rid="F1">Figure 1E</xref>. While the spiking pattern, which represents the initial joint configuration, is constant, the target&#x00027;s spiking pattern changes as the error spiking pattern becomes more amorphous, indicating convergence to zero.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Neuromorphic implementation of IK. <bold>(A)</bold> A model for neuromorphic IK with online learning. The initial joint configuration is represented with neural ensemble <italic>Current q</italic> and transformed to a target joint configuration, represented with neural ensemble <italic>Target q</italic>. This transformation is modulated (or learned) by minimizing an error term, represented with neural ensemble <italic>Error</italic>, defined by the distance to target. Nodes, which were used here to introduce signals, are shown as rounded squares; ensembles, which represent groups of spiking neurons, are shown as groups of five circles; <bold>(B)</bold> Initialized zeroed states of the five joint angles; <bold>(C)</bold> Monitored target joint angles as the algorithm optimized arm reaching to point [0.246, 0.62, 0.373] in task space. Each color represents a different joint angle, where bottom to top curves (orange to green) correspond to the base to top joints of the robotic arm; <bold>(D)</bold> Monitored error, demonstrating error flattening as the arm is reaching its target (learning rate is 0.001); <bold>(E)</bold> Raster plots of ensembles <italic>Current q, Target q</italic> and <italic>Error</italic>.</p></caption>
<graphic xlink:href="fnbot-15-631159-g0001.tif"/>
</fig>
<p>We further analyzed the model by modulating neurons&#x00027; encoders and learning rates. Each neuron&#x00027;s intercept defines the part of the representation space in which the neuron responds by firing, and it is reflected in the neuron&#x00027;s tuning curve (firing rate as a function of input). Note that the intercept is the input value at which the neuron initiates spiking at a high rate. Distributing intercepts uniformly between &#x02212;1 and 1 makes sense for 1D ensembles, for which it creates a uniform spanning of that space. This is not the case for high dimensional ensembles. In our implementation, we used high dimensional ensembles to represent the DOF of our robotic system. With uniformly distributed intercepts, the resulting tuning curves are uniformly distributed, yielding an inefficient spanning of the representation space. As a result, the system does not converge to its target, as is evident from the error&#x00027;s non-decreasing value (<xref ref-type="fig" rid="F2">Figure 2A</xref>). Changing the intercept distribution to follow a triangular pattern modulates the neurons&#x00027; tuning curve distribution such that the representation space is adequately spanned. As a result, the system converges to its target, as is evident from the decreasing error (<xref ref-type="fig" rid="F2">Figure 2B</xref>) (Gosmann and Eliasmith, <xref ref-type="bibr" rid="B14">2016</xref>). This modification of the intercept distribution is crucial for accurate representation in 5D space, and it is briefly described in DeWolf et al. (<xref ref-type="bibr" rid="B9">2020</xref>) and Gosmann and Eliasmith (<xref ref-type="bibr" rid="B14">2016</xref>). Our model relies on PES-based optimization, and it is therefore constrained to a prespecified learning rate. We tested our model with three different learning rates, and as expected, error flattening is slower as we decrease the learning rate (<xref ref-type="fig" rid="F2">Figure 2C</xref>). 
If we permit our system to keep optimizing, the ensembles&#x00027; fluctuating encoded values induce continuously changing results, where changes in one joint&#x00027;s angle are compensated by changes in other joints (convergence is driven toward a zero-gradient potential field). Therefore, we inhibit learning once some accuracy threshold is reached, thus holding the computed weights constant and providing a stable target configuration (<xref ref-type="fig" rid="F2">Figure 2D</xref>). Once the desired configuration is calculated, the robot should be actuated accordingly by using, for example, a PID controller.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Analysis of neuromorphic IK. <bold>(A)</bold> Histogram of uniformly distributed intercepts (left). The uniformly distributed intercepts as reflected in the tuning curves&#x00027; distribution, in which spike rate as a function of input is presented for each neuron. Each color represents a different neuron in the ensemble (middle). These uniformly distributed intercepts lead to a non-uniform spanning of the representation space, thus driving non-zero-converging errors. Each color represents a joint&#x00027;s error following the colors&#x00027; mapping taken in <xref ref-type="fig" rid="F1">Figure 1</xref> and panel D below (right); <bold>(B)</bold> Histogram of triangularly distributed intercepts (left). Tuning curves&#x00027; distribution (middle). These triangularly distributed intercepts lead to a uniform spanning of the representation space, thus driving zero-converging errors. Each color represents a joint&#x00027;s error (right); <bold>(C)</bold> Error value for the base joint (indicated otherwise in orange) with three learning rates: 0.02, 0.001, and 0.0005 (left to right), demonstrating that lower learning rates induce slower error convergence. <bold>(D)</bold> Learned joint configuration with non-inhibited learning leads to an unstable joint configuration, as the arm is continuously trying to improve its conformation in space (left). The introduction of an inhibition signal (marked red) zeroes the error signal (middle). Learned joint configuration with inhibited learning leads to a stable joint configuration (right).</p></caption>
<graphic xlink:href="fnbot-15-631159-g0002.tif"/>
</fig></sec>
<sec>
<title>Simulating Neuromorphic PID Controller</title>
<p>A PID controller integrates three error modulations to provide the desired actuation, such that the system would approach a target position. These three signals are described in Equation (4) and are schematically presented in <xref ref-type="fig" rid="F3">Figure 3A</xref>. Our PID controller incorporates a model of engine actuation. Here, we modeled the actuators using a basic speed-torque (implemented with a driving current) model, which corresponds to our physical actuators. In our model, the actuator experiences static friction, and it responds exponentially fast once its gears become active, saturating at some maximum speed. As we stop driving the engine, it loses momentum due to dynamic friction. When actuation is reversed (current in the reversed direction), the position is changed accordingly. The actuation model is shown in <xref ref-type="fig" rid="F3">Figure 3B</xref>. In this work, we implemented the PID with spiking neurons using NEF. The model schematic is shown in <xref ref-type="fig" rid="F3">Figure 3C</xref>. The robot&#x00027;s current configuration is introduced through node <italic>Current q</italic>. We subtract a feedback signal <italic>y</italic> (<italic>t</italic>) from it to compute an error signal. This error signal is propagated to the output ensemble through three paths: 1. Proportional path, in which the error is proportionally transformed through a gain factor <italic>k</italic><sub><italic>p</italic></sub>, producing signal <italic>e</italic><sub><italic>p</italic></sub>(<italic>t</italic>); 2. Integration path, in which the error is integrated using a neuromorphic integrator (see Methods for further details). The result is scaled by a gain factor <italic>k</italic><sub><italic>i</italic></sub>, producing signal <italic>e</italic><sub><italic>i</italic></sub> (<italic>t</italic>); and 3. A derivative path, implemented by connecting the error ensemble to a 2D derivative ensemble. 
To implement differentiation, the error is propagated through two synapses: one with a short time scale (&#x003C4;) and the other with a longer one. The two values are subtracted and scaled by a gain factor <italic>k</italic><sub><italic>d</italic></sub>, producing signal <italic>e</italic><sub><italic>d</italic></sub> (<italic>t</italic>). These error signals <italic>e</italic><sub><italic>p</italic></sub>(<italic>t</italic>), <italic>e</italic><sub><italic>i</italic></sub>(<italic>t</italic>), <italic>e</italic><sub><italic>d</italic></sub>(<italic>t</italic>) are summed in the output ensemble and delivered to the engine as <italic>u</italic> (<italic>t</italic>), which is also fed back as <italic>y</italic> (<italic>t</italic>). A running example of this model is shown in <xref ref-type="fig" rid="F3">Figure 3D</xref>. Given a target angle position for the engine (normalized to 1), the error is quickly reduced as the engine&#x00027;s location approaches its target. Raster plots for <italic>y</italic> (<italic>t</italic>) and <italic>u</italic> (<italic>t</italic>) are shown in <xref ref-type="fig" rid="F3">Figure 3E</xref>. These stable spiking dynamics of the control signals reflect fast convergence to the target.</p>
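<p>The two-synapse differentiation described above can be sketched with a pair of first-order filters: the difference between a fast and a slow low-pass, rescaled by the gap between their time constants, approximates the signal&#x00027;s derivative (an illustrative non-spiking NumPy sketch with assumed time constants):</p>

```python
import numpy as np

dt = 0.001
tau_fast, tau_slow = 0.01, 0.1  # short and long synaptic time constants
t = np.arange(0.0, 2.0, dt)
sig = t.copy()                  # a unit-slope ramp: the true derivative is 1

def lowpass(x, tau):
    """First-order synaptic filter with time constant tau."""
    y, out = 0.0, np.empty_like(x)
    for i, v in enumerate(x):
        y += (dt / tau) * (v - y)
        out[i] = y
    return out

# fast minus slow low-pass, rescaled by the time-constant gap
deriv = (lowpass(sig, tau_fast) - lowpass(sig, tau_slow)) / (tau_slow - tau_fast)
# once the filters settle, deriv approaches the ramp's true slope of 1
```
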
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Neuromorphic implementation of PID control. <bold>(A)</bold> The canonical schematic of a PID controller, comprising integrated [<italic>e</italic><sub><italic>i</italic></sub>(<italic>t</italic>)], proportional [<italic>e</italic><sub><italic>p</italic></sub>(<italic>t</italic>)], and differential [<italic>e</italic><sub><italic>d</italic></sub>(<italic>t</italic>)] error terms; <bold>(B)</bold> Engine actuation model, which is used by our PID controller to induce motion. The engine is driven by a current (blue), generating rotational speed (orange) in a non-linear fashion, as it takes both static and dynamic gear friction into account; <bold>(C)</bold> Schematic of a neuromorphic PID controller; <bold>(D)</bold> Neuromorphic PID controller in action. The engine is actuated such that its position approaches the target while reducing the error; <bold>(E)</bold> Raster plots of ensembles <italic>y</italic> (<italic>t</italic>) (feedback) and <italic>u</italic> (<italic>t</italic>) (robotic control).</p></caption>
<graphic xlink:href="fnbot-15-631159-g0003.tif"/>
</fig>
<p>To further analyze our neuromorphic PID control, we examined it as a P (proportional path enabled), a PI (proportional and integrative paths enabled), and a PD (proportional and derivative paths enabled) controller. By implementing all three models, we demonstrated the classic PID characteristics in a neuromorphic implementation. Particularly, the P controller was shown to fall short of reaching the target, the PI controller reached the target with inefficient dynamics, and the PD controller had improved reaching dynamics but failed to reach the target accurately (<xref ref-type="fig" rid="F4">Figure 4A</xref>). We further examined our model by changing the number of neurons. Allocating 250 neurons per ensemble per dimension produced accurate results. Reducing the number of allocated neurons dramatically affected performance and stability (<xref ref-type="fig" rid="F4">Figure 4B</xref>). This result is compatible with the noise characteristics of NEF-based representation, in which the decoder-induced static noise is proportional to <inline-formula><mml:math id="M16"><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:math></inline-formula>, where <italic>N</italic> is the number of neurons (Eliasmith and Anderson, <xref ref-type="bibr" rid="B11">2003</xref>). Synaptic time constants also constrain neuromorphic implementations. Reducing these time constants inhibits the integration dynamics (Equation 8), as demonstrated in <xref ref-type="fig" rid="F4">Figure 4C</xref>.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Analysis of neuromorphic PID control. <bold>(A)</bold> Implemented P, PI and PD control (left to right), where &#x003C4;<sub><italic>i</italic></sub> &#x0003D; 1, &#x003C4;<sub><italic>d</italic></sub> &#x0003D; 1, <italic>k</italic><sub><italic>p</italic></sub> &#x0003D; 2, <italic>k</italic><sub><italic>i</italic></sub> &#x0003D; 1, <italic>k</italic><sub><italic>d</italic></sub> &#x0003D; 0.4; <bold>(B)</bold> PID control implemented with 250, 15 and 5 neurons per ensemble per dimension (left to right); <bold>(C)</bold> PID control, implemented with &#x003C4;<sub><italic>i</italic></sub> &#x0003D; 1, 0.1, 0.01 (left to right).</p></caption>
<graphic xlink:href="fnbot-15-631159-g0004.tif"/>
</fig></sec>
<sec>
<title>Robotic Control</title>
<p>Control was evaluated on a physical 6 DOF robotic arm (described in the Methods). To demonstrate robot performance, we utilized the MuJoCo physics simulator, in which we accurately described the dynamic and mechanical characteristics of our physical arm. Here, we used it to demonstrate the integration of neuromorphic IK and PID control in a physical setting. While IK was used to derive the robot configuration from the desired EE location in space, PID was used to actuate the robot, generating an EE trajectory toward the target. We created 10,000 uniformly distributed target points in a 2 &#x000D7; 2 &#x000D7; 1 m volume (<xref ref-type="fig" rid="F5">Figure 5A</xref>). We tested each point for reachability using IK, constructing a 3D reachability map, where a black point designates a reachable point with an accuracy of at least 1 mm (<xref ref-type="fig" rid="F5">Figure 5B</xref>). The arm base is located at the origin (0, 0, 0). For demonstration, we randomly chose two points and used PID control to generate robot motion. The selected points, the final arm configuration, and the generated EE trajectories are shown in <xref ref-type="fig" rid="F5">Figure 5C</xref>. Trajectories are linear (minimal path), as expected. The distance-to-target curve is shown in <xref ref-type="fig" rid="F5">Figure 5D</xref>, demonstrating fast convergence to the target.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Robot control. <bold>(A)</bold> 10,000 uniformly distributed target points in a 2 &#x000D7; 2 &#x000D7; 1 m volume; <bold>(B)</bold> IK reachability map, where a black point designates a reachable point with an accuracy of at least 1 mm and a red point designates a non-reachable point. The arm base is located at the origin (0, 0, 0). A 3D reachability map is shown in the left panel, and a cross-section at y = 0 is shown in the right panel; <bold>(C)</bold> Two random points (red) were chosen among the reachable points (semi-transparent black). The arm configurations, resolved for each of the two target points, are shown on the right, and the corresponding EE trajectories are shown on the bottom (target is indicated with a red point, EE-origin with a black point, and trajectory in blue); <bold>(D)</bold> Distance to the furthest target while reaching it using PID control.</p></caption>
<graphic xlink:href="fnbot-15-631159-g0005.tif"/>
</fig></sec>
<sec>
<title>Loihi Implementation</title>
<p>We implemented both IK and PID control on Intel&#x00027;s Loihi chip. When implementing IK with different learning rates, the same error convergence pattern appears in simulation and on the board (<xref ref-type="fig" rid="F6">Figures 6A,B</xref>). However, superimposed results showed that the Loihi converges better, as its error is reduced faster than in the simulated model (<xref ref-type="fig" rid="F6">Figure 6C</xref>). We found this to be consistent across various learning rates (<xref ref-type="fig" rid="F6">Figure 6D</xref>). When implementing PID control on the Loihi, we found it hard to implement the derivative pathway, as the chip currently cannot support long synaptic time constants. Defining a long time constant is essential, as the generated control signal fluctuates, affecting our capacity to estimate the signal&#x00027;s rate of change accurately. However, for a time constant of 10 ms and a configuration of <italic>k</italic><sub><italic>p</italic></sub>= &#x02212;1, <italic>k</italic><sub><italic>i</italic></sub> = &#x02212;0.1, <italic>k</italic><sub><italic>d</italic></sub>= 0.35, the Loihi was able to converge faster than simulation to the desired target, highlighting its embedded learning accelerator (Davies et al., <xref ref-type="bibr" rid="B8">2018</xref>).</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Comparative analysis between simulation and the Loihi. <bold>(A)</bold> IK error trace in simulation with three different learning rates: 0.001, 0.005, and 0.0095 (left). PID control in simulation, where <italic>k</italic><sub><italic>p</italic></sub>= 1, <italic>k</italic><sub><italic>i</italic></sub> = 0.1, <italic>k</italic><sub><italic>d</italic></sub>= 0.35, and &#x003C4; = 0.1 (right); <bold>(B)</bold> IK error trace (left) and PID control (right) on the Loihi, with the same parameters used in simulation; <bold>(C)</bold> Superimposed IK error traces for simulation and the Loihi with a learning rate of 0.0095; <bold>(D)</bold> Mean squared error for IK across different learning rates in both simulation and on the Loihi.</p></caption>
<graphic xlink:href="fnbot-15-631159-g0006.tif"/>
</fig></sec></sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>IK and PID control are two of the most fundamental algorithms for robotic control. While IK allows trajectories to be defined in task space and implemented in configuration space, PID provides a canonical way of efficiently approaching a target. Neuromorphic control algorithms may acquire some of the advantages of biological motor control. These neuromorphic algorithms may closely emulate key features of their neurophysiological analogs, such as cerebro-cerebellar inverse models (Ishikawa et al., <xref ref-type="bibr" rid="B16">2016</xref>) in the case of IK, and vestibular and oculomotor circuits (Lenz et al., <xref ref-type="bibr" rid="B20">2008</xref>) in the case of PID control. Notably, the cerebellum is known for maintaining internal forward and inverse models for motion control. Moreover, it was shown that the vestibulo-ocular reflex integrates inertial and proportional visual information to drive the eyes in the direction opposite to head motion, achieving retinal image stabilization. However, from a pure engineering perspective, executing control models with energy-efficient hardware is an important endeavor, regardless of its biological plausibility. For example, SpikeProp is one of the most widely utilized back-propagation-based learning rules for SNNs (Bohte et al., <xref ref-type="bibr" rid="B5">2000</xref>), regardless of whether backpropagation itself is biologically plausible (Lillicrap et al., <xref ref-type="bibr" rid="B21">2020</xref>).</p>
<p>The notion of utilizing artificial neural networks for inverse kinematics and robot control was explored back in 1993 (Jack et al., <xref ref-type="bibr" rid="B17">1993</xref>) and more recently revisited by Csiszar et al. (<xref ref-type="bibr" rid="B7">2017</xref>). Neuromorphic implementations, which are based on SNNs, have gained tremendous traction in the past decade due to the increased attention to neurorobotics and, more recently, the emerging availability of neuromorphic software and hardware frameworks. Accordingly, neuromorphic implementation of IK and PID control has been addressed in several studies. For example, Folgheraiter et al. (<xref ref-type="bibr" rid="B13">2019</xref>) utilized LIF neurons to implement a learning algorithm for adaptive motion control. Barhen and Gulati (<xref ref-type="bibr" rid="B2">1991</xref>) demonstrated neuromorphic inverse kinematics, concentrating on redundant manipulators, using terminal attractors. More recently, PID controllers have been neuromorphically implemented on an FPGA board by Linares-Barranco et al. (<xref ref-type="bibr" rid="B23">2020</xref>) and on the Loihi chip by Stagsted et al. (<xref ref-type="bibr" rid="B26">2020</xref>). Interestingly, Tieck et al. (<xref ref-type="bibr" rid="B27">2009</xref>) demonstrated a neuromorphic PID-based control scheme with no need for inverse kinematics or planning. These approaches, however, are specific to particular hardware/software frameworks. NEF has the advantage of being deployable on numerous neuromorphic hardware platforms. IK with NEF was demonstrated by DeWolf et al. in their work on the REACH adaptive controller (DeWolf et al., <xref ref-type="bibr" rid="B10">2016</xref>) and, more recently, in DeWolf et al. (<xref ref-type="bibr" rid="B9">2020</xref>). REACH uses adaptive signals computed online (using PES learning) to modulate arm movement to adapt to unexpected conditions. 
Our implementation takes a more direct approach, aiming specifically at neuromorphic IK by transforming task space to configuration space with a single adjustable connection.</p>
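The PES-trained single adjustable connection can be sketched outside a spiking simulator: PES shifts the connection's decoders in proportion to presynaptic activity and an error signal. The target function (f(x) = x&#x000B2;), the rectified-linear neuron model, and the learning rate below are illustrative assumptions for the sketch, not the task-to-configuration-space map used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                     # neurons in the presynaptic ensemble

# NEF-style rectified-linear tuning curves a_i(x) = max(0, gain_i*(enc_i*x - intercept_i))
enc = rng.choice([-1.0, 1.0], n)            # 1D encoders
intercept = rng.uniform(-0.9, 0.9, n)       # tuning-curve intercepts
gain = rng.uniform(0.5, 2.0, n)             # tuning-curve gains

def rates(x):
    return np.maximum(0.0, gain * (enc * x - intercept))

target_fn = lambda x: x * x                 # stand-in target function (assumed)
d = np.zeros(n)                             # decoders of the adjustable connection
kappa = 3e-4                                # learning rate (assumed)

for _ in range(50000):
    x = rng.uniform(-1.0, 1.0)
    a = rates(x)
    error = a @ d - target_fn(x)            # decoded output minus desired output
    d -= kappa * error * a                  # PES update: delta d_i = -kappa * error * a_i

xs = np.linspace(-1.0, 1.0, 101)
test_err = float(np.mean([abs(rates(x) @ d - target_fn(x)) for x in xs]))
```

In a spiking NEF implementation, the same update is applied to the decoders of the learned connection, with the error signal supplied by a dedicated error population.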
<p>Neuromorphic systems are fundamentally constrained by the number of neurons, the encoding error, and the synaptic time constants. In this work, we addressed these constraints in the context of robotic control. NEF-based representation is subject to a distortion error, induced by the decoders themselves. The representation error is expressed as:</p>
<disp-formula id="E9"><label>(9)</label><mml:math id="M17"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>E</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:msubsup><mml:mrow><mml:mo>&#x0222B;</mml:mo></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:mstyle><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>-</mml:mo><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mi>d</mml:mi><mml:mi>x</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:msubsup><mml:mrow><mml:mo>&#x0222B;</mml:mo></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:mstyle><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>-</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msub><mml:mrow><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mi>d</mml:mi><mml:mi>x</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Where <italic>x</italic> is the encoded stimulus, <inline-formula><mml:math id="M18"><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:math></inline-formula> is the represented stimulus, <italic>a</italic><sub><italic>i</italic></sub> is the activity of neuron <italic>i</italic>, <italic>n</italic> is the number of neurons, and <italic>d</italic><sub><italic>i</italic></sub> are the computed decoders, derived by minimizing <italic>E</italic>. This static distortion decreases with the number of neurons, according to <inline-formula><mml:math id="M19"><mml:mi>E</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x02248;</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:math></inline-formula> (Eliasmith and Anderson, <xref ref-type="bibr" rid="B11">2003</xref>). As we increase the number of neurons, the representation error is reduced (<xref ref-type="fig" rid="F4">Figure 4B</xref>). However, the number of neurons is not the only factor. The selection of the encoders and the distribution of the neurons&#x00027; tuning curves (intercept, maximal firing rate) have a drastic effect on the representation, especially in higher dimensions. Distributing intercepts uniformly between &#x02212;1 and 1 makes sense for 1D ensembles. However, in a uniformly occupied 2D space, a neuron with an intercept of 0.75 fires spikes for only 7.2% of the represented space. In higher dimensions, the proportions become exponentially smaller (or larger for negatively encoded neurons). In high dimensions, the naive distribution of intercepts results in many neurons that rarely produce spikes or are always active, providing a poor representation (see <xref ref-type="fig" rid="F2">Figure 2A</xref>). 
A rational distribution of encoders, particularly choosing encoders following a triangular distribution (DeWolf et al., <xref ref-type="bibr" rid="B9">2020</xref>), dramatically improves the representation, as demonstrated in <xref ref-type="fig" rid="F2">Figure 2B</xref>. Our design is also constrained by the synaptic time constants, which govern the PID&#x00027;s integral and derivative pathways, and by the learning rate, which regulates the learning pace of the IK model. The time-constant constraints on the PID&#x00027;s integrative path were explored in <xref ref-type="fig" rid="F4">Figure 4C</xref>, and the effect of the learning rate on the IK model was demonstrated in <xref ref-type="fig" rid="F2">Figure 2C</xref>. While these time constants can be defined arbitrarily across time scales in simulation, and the biological counterparts of these signals extend from just a few milliseconds to minutes and hours, current neuromorphic hardware does not provide the same flexibility. This might have a dramatic effect when the derivative of a noisy signal has to be calculated. The Loihi chip, for example, only supports time constants of up to 100 ms. Working with such short time constants forces a more accurate representation. However, implementing the model on the Loihi suggests that its embedded learning circuitry (Davies et al., <xref ref-type="bibr" rid="B8">2018</xref>) allows it to converge to the target faster than the simulated model (<xref ref-type="fig" rid="F6">Figure 6B</xref>). The Loihi&#x00027;s representation accuracy is also demonstrated in the IK model, where it performed consistently better than the simulated model across different learning rates (<xref ref-type="fig" rid="F6">Figure 6D</xref>).</p>
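The decoder optimization in Equation (9), and the shrinkage of the static distortion as the population grows, can be reproduced numerically with a least-squares fit over rate-based tuning curves. The rectified-linear neurons and parameter ranges below are illustrative assumptions, not the LIF configuration used in the experiments:

```python
import numpy as np

def representation_rmse(n, seed=0):
    """Least-squares decoders for a 1D ensemble (as in Eq. 9) and the resulting RMS error."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-1.0, 1.0, 500)                 # sampled stimulus space
    enc = rng.choice([-1.0, 1.0], n)                # 1D encoders
    intercept = rng.uniform(-0.9, 0.9, n)           # tuning-curve intercepts
    gain = rng.uniform(0.5, 2.0, n)                 # tuning-curve gains
    # Rate activities a_i(x); rectified-linear neurons stand in for LIF rates (assumed)
    A = np.maximum(0.0, gain * (x[:, None] * enc - intercept))
    d, *_ = np.linalg.lstsq(A, x, rcond=None)       # decoders minimizing the squared error
    return float(np.sqrt(np.mean((x - A @ d) ** 2)))

errors = {n: representation_rmse(n) for n in (10, 40, 160)}
```

Re-running with larger `n` shows the residual error shrinking steadily with population size, in line with the scaling discussed above.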
<p>In this article, we presented SNNs capable of PID control and learning-based IK. We explored their implementation both in simulation and on neuromorphic hardware, thus demonstrating NEF-based models for neurorobotics. Our implementations use neuromorphic learning for IK and signal integration and differentiation for PID, offering high-performing and energy-efficient neuromorphic robotic control.</p></sec>
<sec sec-type="data-availability-statement" id="s5">
<title>Data Availability Statement</title>
<p>The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.</p></sec>
<sec id="s6">
<title>Author Contributions</title>
<p>YZ and AS designed and implemented the algorithms and analyzed the results. LS and AV contributed to the discussions and revised the manuscript. EE conceptualized the research, designed the algorithms, and wrote the manuscript. All authors contributed to the article and approved the submitted version.</p></sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>LS is employed by Accenture Labs (San Francisco, USA). The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
</body>
<back>
<ack><p>The authors would like to thank the Applied Brain Research (ABR) team, and particularly Travis DeWolf, for their support; Intel Labs for granting us access to their neuromorphic cloud and for the technical support; and Timothy Shea from Accenture Labs and Tamara Pearlman Tsur for their insightful comments.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ang</surname> <given-names>K. H.</given-names></name> <name><surname>Chong</surname> <given-names>G.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name></person-group> (<year>2005</year>). <article-title>PID control system analysis, design, and technology</article-title>. <source>IEEE Trans. Control Syst. Technol.</source> <volume>13</volume>, <fpage>559</fpage>&#x02013;<lpage>576</lpage>. <pub-id pub-id-type="doi">10.1109/TCST.2005.847331</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Barhen</surname> <given-names>J.</given-names></name> <name><surname>Gulati</surname> <given-names>S.</given-names></name></person-group> (<year>1991</year>). <article-title>Self-organizing neuromorphic architecture for manipulator inverse kinematics</article-title>, in <source>Sensor-Based Robots: Algorithms and Architectures</source>, ed <person-group person-group-type="editor"><name><surname>George Lee</surname> <given-names>C. S.</given-names></name></person-group> (<publisher-loc>Berlin; Heidelberg</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>179</fpage>&#x02013;<lpage>202</lpage>.</citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bekolay</surname> <given-names>T.</given-names></name> <name><surname>Bergstra</surname> <given-names>J.</given-names></name> <name><surname>Hunsberger</surname> <given-names>E.</given-names></name> <name><surname>DeWolf</surname> <given-names>T.</given-names></name> <name><surname>Stewart</surname> <given-names>T.</given-names></name> <name><surname>Rasmussen</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Nengo: a Python tool for building large-scale functional brain models</article-title>. <source>Front. Neuroinform.</source> <volume>7</volume>:<fpage>48</fpage>. <pub-id pub-id-type="doi">10.3389/fninf.2013.00048</pub-id><pub-id pub-id-type="pmid">24431999</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boahen</surname> <given-names>K.</given-names></name></person-group> (<year>2017</year>). <article-title>A neuromorph&#x00027;s prospectus</article-title>. <source>Comput. Sci. Eng</source>. <volume>19</volume>, <fpage>14</fpage>&#x02013;<lpage>28</lpage>. <pub-id pub-id-type="doi">10.1109/MCSE.2017.33</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bohte</surname> <given-names>S. M.</given-names></name> <name><surname>Kok</surname> <given-names>J. N.</given-names></name> <name><surname>La Poutre</surname> <given-names>J. A.</given-names></name></person-group> (<year>2000</year>). <article-title>SpikeProp: backpropagation for networks of spiking neurons</article-title>. <source>ESAN</source> <volume>48</volume>, <fpage>17</fpage>&#x02013;<lpage>37</lpage>. <pub-id pub-id-type="doi">10.1016/S0925-2312(01)00658-0</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Burkitt</surname> <given-names>A.</given-names></name></person-group> (<year>2006</year>). <article-title>A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input</article-title>. <source>Biol. Cybern.</source> <volume>95</volume>, <fpage>1</fpage>&#x02013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1007/s00422-006-0068-6</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Csiszar</surname> <given-names>A.</given-names></name> <name><surname>Eilers</surname> <given-names>J.</given-names></name> <name><surname>Verl</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>On solving the inverse kinematics problem using neural networks</article-title>, in <source>International Conference on Mechatronics and Machine Vision in Practice</source>.</citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davies</surname> <given-names>M.</given-names></name> <name><surname>Srinivasa</surname> <given-names>N.</given-names></name> <name><surname>Lin</surname> <given-names>T.-H.</given-names></name> <name><surname>Chinya</surname> <given-names>G.</given-names></name> <name><surname>Cao</surname> <given-names>Y.</given-names></name> <name><surname>Choday</surname> <given-names>S. H.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Loihi: a neuromorphic manycore processor with on-chip learning</article-title>. <source>IEEE Micro.</source> <volume>38</volume>, <fpage>82</fpage>&#x02013;<lpage>99</lpage>. <pub-id pub-id-type="doi">10.1109/MM.2018.112130359</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>DeWolf</surname> <given-names>T.</given-names></name> <name><surname>Jaworski</surname> <given-names>P.</given-names></name> <name><surname>Eliasmith</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>Nengo and low-power AI hardware for robust, embedded neurorobotics</article-title>. <source>Front. Neurorobot</source>. <volume>14</volume>:<fpage>568359</fpage>. <pub-id pub-id-type="doi">10.3389/fnbot.2020.568359</pub-id><pub-id pub-id-type="pmid">33162886</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>DeWolf</surname> <given-names>T.</given-names></name> <name><surname>Stewart</surname> <given-names>T. C.</given-names></name> <name><surname>Slotine</surname> <given-names>J.-J.</given-names></name> <name><surname>Eliasmith</surname> <given-names>C.</given-names></name></person-group> (<year>2016</year>). <article-title>A spiking neural model of adaptive arm control</article-title>. <source>Proc. R. Soc. B Biol. Sci.</source> <volume>283</volume>, <fpage>2016</fpage>&#x02013;<lpage>2134</lpage>. <pub-id pub-id-type="doi">10.1098/rspb.2016.2134</pub-id><pub-id pub-id-type="pmid">27903878</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Eliasmith</surname> <given-names>C.</given-names></name> <name><surname>Anderson</surname> <given-names>C. H.</given-names></name></person-group> (<year>2003</year>). <source>Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems</source>. <publisher-loc>London</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</citation></ref>
<ref id="B12">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Fischl</surname> <given-names>K.</given-names></name> <name><surname>Andreou</surname> <given-names>A.</given-names></name> <name><surname>Stewart</surname> <given-names>T.</given-names></name> <name><surname>Fair</surname> <given-names>K.</given-names></name></person-group> (<year>2018</year>). <article-title>Implementation of the neural engineering framework on the TrueNorth neurosynaptic system</article-title>, in <source>IEEE Biomedical Circuits and Systems Conference (BioCAS)</source>.</citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Folgheraiter</surname> <given-names>M.</given-names></name> <name><surname>Keldibek</surname> <given-names>A.</given-names></name> <name><surname>Aubakir</surname> <given-names>B.</given-names></name> <name><surname>Gini</surname> <given-names>G.</given-names></name> <name><surname>Franchi</surname> <given-names>A. M.</given-names></name> <name><surname>Bana</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>A neuromorphic control architecture for a biped robot</article-title>. <source>Robot. Auton. Syst.</source> <volume>120</volume>:<fpage>103244</fpage>. <pub-id pub-id-type="doi">10.1016/j.robot.2019.07.014</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gosmann</surname> <given-names>J.</given-names></name> <name><surname>Eliasmith</surname> <given-names>C.</given-names></name></person-group> (<year>2016</year>). <article-title>Optimizing semantic pointer representations for symbol-like processing in spiking neural networks</article-title>. <source>PLoS ONE</source> <volume>11</volume>:<fpage>e0149928</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0149928</pub-id><pub-id pub-id-type="pmid">26900931</pub-id></citation></ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hagras</surname> <given-names>H.</given-names></name></person-group> (<year>2004</year>). <article-title>A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots</article-title>. <source>IEEE Trans Fuzzy Syst.</source> <volume>12</volume>, <fpage>524</fpage>&#x02013;<lpage>539</lpage>. <pub-id pub-id-type="doi">10.1109/TFUZZ.2004.832538</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ishikawa</surname> <given-names>T.</given-names></name> <name><surname>Tomatsu</surname> <given-names>S.</given-names></name> <name><surname>Izawa</surname> <given-names>J.</given-names></name> <name><surname>Kakei</surname> <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>The cerebro-cerebellum: Could it be loci of forward models?</article-title> <source>Neurosci. Res.</source> <volume>104</volume>, <fpage>72</fpage>&#x02013;<lpage>79</lpage>. <pub-id pub-id-type="doi">10.1016/j.neures.2015.12.003</pub-id><pub-id pub-id-type="pmid">26704591</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jack</surname> <given-names>H.</given-names></name> <name><surname>Lee</surname> <given-names>D.</given-names></name> <name><surname>Buchal</surname> <given-names>R.</given-names></name> <name><surname>Elmaraghy</surname> <given-names>W. H.</given-names></name></person-group> (<year>1993</year>). <article-title>Neural networks and the inverse kinematics problem</article-title>. <source>J. Intell. Manuf.</source> <volume>4</volume>, <fpage>43</fpage>&#x02013;<lpage>66</lpage>. <pub-id pub-id-type="doi">10.1007/BF00124980</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koker</surname> <given-names>R.</given-names></name> <name><surname>Oz</surname> <given-names>C.</given-names></name> <name><surname>Cakar</surname> <given-names>T.</given-names></name> <name><surname>Ekiz</surname> <given-names>H.</given-names></name></person-group> (<year>2004</year>). <article-title>A study of neural network based inverse kinematics solution for a three-joint robot</article-title>. <source>Robot. Auton. Syst.</source> <volume>49</volume>, <fpage>227</fpage>&#x02013;<lpage>234</lpage>. <pub-id pub-id-type="doi">10.1016/j.robot.2004.09.010</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Krichmar</surname> <given-names>J. L.</given-names></name> <name><surname>Wagatsuma</surname> <given-names>H.</given-names></name></person-group> (<year>2011</year>). <source>Neuromorphic and Brain-Based Robots</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lenz</surname> <given-names>A.</given-names></name> <name><surname>Balakrishnan</surname> <given-names>T.</given-names></name> <name><surname>Pipe</surname> <given-names>A. G.</given-names></name> <name><surname>Melhuish</surname> <given-names>C.</given-names></name></person-group> (<year>2008</year>). <article-title>An adaptive gaze stabilization controller inspired by the vestibulo-ocular reflex</article-title>. <source>Bioinspir. Biomim.</source> <volume>3</volume>:<fpage>035001</fpage>. <pub-id pub-id-type="doi">10.1088/1748-3182/3/3/035001</pub-id><pub-id pub-id-type="pmid">18583732</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lillicrap</surname> <given-names>T. P.</given-names></name> <name><surname>Santoro</surname> <given-names>A.</given-names></name> <name><surname>Marris</surname> <given-names>L.</given-names></name> <name><surname>Akerman</surname> <given-names>C. J.</given-names></name> <name><surname>Hinton</surname> <given-names>G.</given-names></name></person-group> (<year>2020</year>). <article-title>Backpropagation and the brain</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>21</volume>:<fpage>335</fpage>&#x02013;<lpage>346</lpage>. <pub-id pub-id-type="doi">10.1038/s41583-020-0277-3</pub-id><pub-id pub-id-type="pmid">32303713</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lin</surname> <given-names>C.-K.</given-names></name> <name><surname>Wild</surname> <given-names>A.</given-names></name> <name><surname>Chinya</surname> <given-names>G.</given-names></name> <name><surname>Cao</surname> <given-names>Y.</given-names></name> <name><surname>Davies</surname> <given-names>M.</given-names></name> <name><surname>Lavery</surname> <given-names>D. M.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Programming spiking neural networks on intel&#x00027;s loihi</article-title>. <source>Computer</source> <volume>51</volume>, <fpage>52</fpage>&#x02013;<lpage>61</lpage>. <pub-id pub-id-type="doi">10.1109/MC.2018.157113521</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Linares-Barranco</surname> <given-names>A.</given-names></name> <name><surname>Perez-Pena</surname> <given-names>F.</given-names></name> <name><surname>Jimenez-Fernandez</surname> <given-names>A.</given-names></name> <name><surname>Chicca</surname> <given-names>E.</given-names></name></person-group> (<year>2020</year>). <article-title>ED-BioRob: a neuromorphic robotic arm with FPGA-based infrastructure for bio-inspired spiking motor controllers</article-title>. <source>Front. Neurorobot.</source> <volume>14</volume>:<fpage>590163</fpage>. <pub-id pub-id-type="doi">10.3389/fnbot.2020.590163</pub-id><pub-id pub-id-type="pmid">33328951</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Lynch</surname> <given-names>K. M.</given-names></name> <name><surname>Park</surname> <given-names>F. C.</given-names></name></person-group> (<year>2017</year>). <source>Modern Robotics</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation></ref>
<ref id="B25">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mundy</surname> <given-names>A.</given-names></name> <name><surname>Knight</surname> <given-names>J. S. T.</given-names></name> <name><surname>Furber</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>An efficient SpiNNaker implementation of the neural engineering framework</article-title>, in <source>International Joint Conference on Neural Networks (IJCNN)</source>.</citation></ref>
<ref id="B26">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Stagsted</surname> <given-names>R. K.</given-names></name> <name><surname>Vitale</surname> <given-names>A.</given-names></name> <name><surname>Binz</surname> <given-names>J.</given-names></name> <name><surname>Larsen</surname> <given-names>L. B.</given-names></name> <name><surname>Sandamirskaya</surname> <given-names>Y.</given-names></name></person-group> (<year>2020</year>). <article-title>Towards neuromorphic control: a spiking neural network based PID controller for UAV</article-title>, in <source>Robotics: Science and Systems XVI</source> (<publisher-loc>Corvallis, OR</publisher-loc>: <publisher-name>MIT Press</publisher-name>).</citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tieck</surname> <given-names>J. C. V.</given-names></name> <name><surname>Steffen</surname> <given-names>L.</given-names></name> <name><surname>Kaiser</surname> <given-names>J.</given-names></name> <name><surname>Reichard</surname> <given-names>D.</given-names></name> <name><surname>Roennau</surname> <given-names>A.</given-names></name> <name><surname>Dillmann</surname> <given-names>R.</given-names></name></person-group> (<year>2009</year>). <article-title>Combining motor primitives for perception driven target reaching with spiking neurons</article-title>. <source>Int. J. Cogn. Inform. Nat. Intell.</source> <volume>13</volume>, <fpage>1</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.4018/IJCINI.2019010101</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsur</surname> <given-names>E. E.</given-names></name> <name><surname>Rivlin-Etzion</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>Neuromorphic implementation of motion detection using oscillation interference</article-title>. <source>Neurocomputing</source> <volume>374</volume>, <fpage>54</fpage>&#x02013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2019.09.072</pub-id></citation></ref>
<ref id="B29">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Voelker</surname> <given-names>A. R.</given-names></name></person-group> (<year>2015</year>). <source>A Solution to the Dynamics of the Prescribed Error Sensitivity Learning Rule.</source> <publisher-loc>Waterloo</publisher-loc>: <publisher-name>Centre for Theoretical Neuroscience</publisher-name>.</citation></ref>
</ref-list>
<fn-group>
<fn fn-type="financial-disclosure"><p><bold>Funding.</bold> This research was funded by Accenture Labs as part of Intel&#x00027;s INRC (Intel Neuromorphic Research Community) initiative and by the Open University of Israel research grant.</p>
</fn>
</fn-group>
</back>
</article>