<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2024.1402646</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Synchronized stepwise control of firing and learning thresholds in a spiking randomly connected neural network toward hardware implementation</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes" equal-contrib="yes">
<name><surname>Nomura</surname> <given-names>Kumiko</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<xref ref-type="author-notes" rid="fn0001"><sup>&#x2020;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2634676/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/visualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
</contrib>
<contrib contrib-type="author" equal-contrib="yes">
<name><surname>Nishi</surname> <given-names>Yoshifumi</given-names></name>
<xref ref-type="author-notes" rid="fn0001"><sup>&#x2020;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2757415/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/supervision/"/>
<role content-type="https://credit.niso.org/contributor-roles/validation/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff><institution>Frontier Research Laboratory, Corporate Research and Development Center, Toshiba Corporation</institution>, <addr-line>Kawasaki</addr-line>, <country>Japan</country></aff>
<author-notes>
<fn fn-type="edited-by" id="fn0002">
<p>Edited by: Qinru Qiu, Syracuse University, United States</p>
</fn>
<fn fn-type="edited-by" id="fn0003">
<p>Reviewed by: Seenivasan M. A., National Institute of Technology Meghalaya, India</p>
<p>Jingang Jin, Syracuse University, United States</p>
</fn>
<corresp id="c001">&#x002A;Correspondence: Kumiko Nomura, <email>kumiko.nomura@toshiba.co.jp</email></corresp>
<fn fn-type="equal" id="fn0001">
<p><sup>&#x2020;</sup>These authors have contributed equally to this work and share first authorship</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>13</day>
<month>11</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>18</volume>
<elocation-id>1402646</elocation-id>
<history>
<date date-type="received">
<day>18</day>
<month>03</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>22</day>
<month>10</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2024 Nomura and Nishi.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Nomura and Nishi</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Spiking randomly connected neural network (RNN) hardware is promising as an ultimately low-power device for temporal data processing at the edge. Although the potential of RNNs for temporal data processing has been demonstrated, the randomness of the network architecture often degrades performance. To mitigate such degradation, a self-organization mechanism using intrinsic plasticity (IP) and synaptic plasticity (SP) should be implemented in the spiking RNN. Therefore, we propose hardware-oriented models of these functions. To implement the IP function, a variable firing threshold that changes stepwise in accordance with the neuron&#x2019;s activity is introduced to each excitatory neuron in the RNN. We also define other thresholds for SP that synchronize with the firing threshold and determine the direction of the stepwise synaptic update executed on receiving a pre-synaptic spike. To demonstrate the effectiveness of our model, we perform simulations of temporal data learning and anomaly detection with a spiking RNN using publicly available electrocardiograms (ECGs). We observe that the spiking RNN with our IP and SP models achieves a true positive rate of 1 while suppressing the false positive rate to 0, which is not achieved otherwise. Furthermore, we find that these thresholds as well as the synaptic weights can be reduced to binary if the RNN architecture is appropriately designed. This contributes to minimizing the circuit of the neuronal system having IP and SP.</p>
</abstract>
<kwd-group>
<kwd>spiking neural network</kwd>
<kwd>intrinsic plasticity</kwd>
<kwd>synaptic plasticity</kwd>
<kwd>randomly connected neural network</kwd>
<kwd>neuromorphic chip</kwd>
</kwd-group>
<counts>
<fig-count count="9"/>
<table-count count="1"/>
<equation-count count="7"/>
<ref-count count="64"/>
<page-count count="11"/>
<word-count count="8662"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Neuromorphic Engineering</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>Randomly connected neural networks (RNNs), which have been studied as a simplified theoretical model of the nervous system of biological brains (<xref ref-type="bibr" rid="ref55">Sompolinsky et al., 1988</xref>; <xref ref-type="bibr" rid="ref36">Lazar et al., 2009</xref>; <xref ref-type="bibr" rid="ref31">Kadmon and Sompolinsky, 2015</xref>; <xref ref-type="bibr" rid="ref6">Bourdoukan and Deneve, 2015</xref>; <xref ref-type="bibr" rid="ref59">Tetzlaff et al., 2015</xref>; <xref ref-type="bibr" rid="ref60">Thalmeier et al., 2016</xref>; <xref ref-type="bibr" rid="ref35">Landau and Sompolinsky, 2018</xref>; <xref ref-type="bibr" rid="ref22">Frenkel and Indiveri, 2022</xref>), are attracting much attention as a promising artificial intelligence (AI) technique that can perform prediction and anomaly detection of time series data in real time without executing sophisticated AI algorithms (<xref ref-type="bibr" rid="ref30">Jaeger, 2001</xref>; <xref ref-type="bibr" rid="ref41">Maass et al., 2002</xref>; <xref ref-type="bibr" rid="ref58">Sussillo and Abbott, 2009</xref>; <xref ref-type="bibr" rid="ref45">Nicola and Clopath, 2017</xref>; <xref ref-type="bibr" rid="ref15">Das et al., 2018</xref>; <xref ref-type="bibr" rid="ref5">Bauer et al., 2019</xref>). In particular, hardware implementation of RNNs is expected to reduce the power consumption of time series data processing, enabling intelligent operations of edge systems in our society. While the potential of RNNs has been well demonstrated in previous works (<xref ref-type="bibr" rid="ref30">Jaeger, 2001</xref>; <xref ref-type="bibr" rid="ref41">Maass et al., 2002</xref>; <xref ref-type="bibr" rid="ref58">Sussillo and Abbott, 2009</xref>; <xref ref-type="bibr" rid="ref45">Nicola and Clopath, 2017</xref>; <xref ref-type="bibr" rid="ref15">Das et al., 2018</xref>; <xref ref-type="bibr" rid="ref5">Bauer et al., 2019</xref>; <xref ref-type="bibr" rid="ref12">Covi et al., 2021</xref>), the inherent randomness sometimes causes uncontrollable data inference failures, leading to low reliability of the technique. A self-organization mechanism, which can be realized by including intrinsic plasticity (IP) and synaptic plasticity (SP) in the neuronal operation model, improves this reliability (<xref ref-type="bibr" rid="ref36">Lazar et al., 2009</xref>). IP is a homeostatic mechanism of biological neurons that controls neuron firing frequencies within a certain range. It has been shown to be indispensable for unsupervised learning in neuromorphic systems (<xref ref-type="bibr" rid="ref18">Desai et al., 1999</xref>; <xref ref-type="bibr" rid="ref57">Steil, 2007</xref>; <xref ref-type="bibr" rid="ref4">Bartolozzi et al., 2008</xref>; <xref ref-type="bibr" rid="ref36">Lazar et al., 2009</xref>; <xref ref-type="bibr" rid="ref19">Diehl and Cook, 2015</xref>; <xref ref-type="bibr" rid="ref54">Qiao et al., 2017</xref>; <xref ref-type="bibr" rid="ref16">Davies et al., 2018</xref>; <xref ref-type="bibr" rid="ref49">Payvand et al., 2022</xref>). 
SP is a mechanism where a synapse changes its own weight in accordance with incoming signals and the post-synaptic neuron&#x2019;s activity, known as the fundamental principle of learning in biological brains (<xref ref-type="bibr" rid="ref37">Legenstein et al., 2005</xref>; <xref ref-type="bibr" rid="ref50">Pfister et al., 2006</xref>; <xref ref-type="bibr" rid="ref52">Ponulak and Kasinski, 2010</xref>; <xref ref-type="bibr" rid="ref34">Kuzum et al., 2012</xref>; <xref ref-type="bibr" rid="ref46">Ning et al., 2015</xref>; <xref ref-type="bibr" rid="ref53">Prezioso et al., 2015</xref>; <xref ref-type="bibr" rid="ref1">Ambrogio et al., 2016</xref>; <xref ref-type="bibr" rid="ref11">Covi et al., 2016</xref>; <xref ref-type="bibr" rid="ref33">Kreiser et al., 2017</xref>; <xref ref-type="bibr" rid="ref56">Srinivasan et al., 2017</xref>; <xref ref-type="bibr" rid="ref2">Ambrogio et al., 2018</xref>; <xref ref-type="bibr" rid="ref20">Faria et al., 2018</xref>; <xref ref-type="bibr" rid="ref38">Li et al., 2018</xref>; <xref ref-type="bibr" rid="ref3">Amirshahi and Hashemi, 2019</xref>; <xref ref-type="bibr" rid="ref8">Cai et al., 2020</xref>; <xref ref-type="bibr" rid="ref62">Yongqiang et al., 2020</xref>; <xref ref-type="bibr" rid="ref13">Dalgaty et al., 2021</xref>; <xref ref-type="bibr" rid="ref21">Frenkel et al., 2023</xref>).</p>
<p>Since computing resources may be limited at the edge, we focus on analog spiking neural network (SNN) hardware with ultimately high power efficiency for edge AI devices (<xref ref-type="bibr" rid="ref46">Ning et al., 2015</xref>; <xref ref-type="bibr" rid="ref16">Davies et al., 2018</xref>; <xref ref-type="bibr" rid="ref49">Payvand et al., 2022</xref>). The most general neuron model for SNNs is the leaky integrate-and-fire (LIF) model (<xref ref-type="bibr" rid="ref27">Holt and Koch, 1997</xref>). For a LIF neuron, the IP function may be added by adjusting the time constant of its membrane potential <inline-formula>
<mml:math id="M1">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> according to its own firing rate <inline-formula>
<mml:math id="M2">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. If we are to design LIF neurons with analog circuitry, tunable capacitor and resistor are required to control the time constant. The former is difficult because no practical device element having variable capacitance has been invented. For the latter, <xref ref-type="bibr" rid="ref49">Payvand et al. (2022)</xref> proposed an IP circuit using memristors, namely, variable resistors. However, this circuit requires an auxiliary unit for memristor control, whose details are not yet discussed. Considering large device-to-device variability of memristors, each unit must be tuned according to the respective memristor&#x2019;s characteristics, which would result in a complicated circuit system with large overhead (<xref ref-type="bibr" rid="ref48">Payvand et al., 2020</xref>; <xref ref-type="bibr" rid="ref17">Demirag et al., 2021</xref>; <xref ref-type="bibr" rid="ref44">Moro et al., 2022</xref>; <xref ref-type="bibr" rid="ref47">Payvand et al., 2023</xref>).</p>
<p>Alternative method for controlling <inline-formula>
<mml:math id="M3">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> is to adjust the firing threshold <inline-formula>
<mml:math id="M4">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> itself (<xref ref-type="bibr" rid="ref19">Diehl and Cook, 2015</xref>; <xref ref-type="bibr" rid="ref63">Zhang and Li, 2019</xref>; <xref ref-type="bibr" rid="ref64">Zhang et al., 2021</xref>). For a LIF neuron designed with analog circuitry, <inline-formula>
<mml:math id="M5">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> is given as a reference voltage applied to a comparator connected to the neuron&#x2019;s membrane capacitor (<xref ref-type="bibr" rid="ref10">Chicca et al., 2014</xref>; <xref ref-type="bibr" rid="ref46">Ning et al., 2015</xref>; <xref ref-type="bibr" rid="ref9">Chicca and Indiveri, 2020</xref>; <xref ref-type="bibr" rid="ref49">Payvand et al., 2022</xref>); hence, IP can be implemented by adding a circuit that can change the reference voltage in accordance with <inline-formula>
<mml:math id="M6">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. It would be straightforward to employ a variable voltage source, but considerable effort would be needed to design a voltage source compact enough to be added to every neuron. Instead, we may prepare several fixed voltages and multiplex them to the comparator according to neuronal activity. This is the motivation for this study. What we are interested in is (i) whether or not stepwise control of the threshold voltage is effective for the IP function in a spiking RNN (SRNN) for temporal data learning and (ii) if it is, how far we can go in reducing the number of voltage lines.</p>
<p>When we introduce a variable <inline-formula>
<mml:math id="M7">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>, we need to take SP into account in the hardware design. With regard to SP implementation, spike-timing dependent plasticity (STDP; <xref ref-type="bibr" rid="ref37">Legenstein et al., 2005</xref>; <xref ref-type="bibr" rid="ref46">Ning et al., 2015</xref>; <xref ref-type="bibr" rid="ref56">Srinivasan et al., 2017</xref>) is the most popular synaptic update rule. STDP is a comprehensive synaptic update rule that obeys Hebb&#x2019;s law, but it is not hardware-friendly; it requires every synapse to have a mechanism to measure the elapsed time from the arrival of a spike. Alternatively, we employ spike-driven synaptic plasticity (SDSP; <xref ref-type="bibr" rid="ref7">Brader et al., 2007</xref>; <xref ref-type="bibr" rid="ref42">Mitra et al., 2009</xref>; <xref ref-type="bibr" rid="ref46">Ning et al., 2015</xref>; <xref ref-type="bibr" rid="ref23">Frenkel et al., 2019</xref>; <xref ref-type="bibr" rid="ref26">Gurunathan and Iyer, 2020</xref>; <xref ref-type="bibr" rid="ref49">Payvand et al., 2022</xref>; <xref ref-type="bibr" rid="ref21">Frenkel et al., 2023</xref>), which is much more convenient for hardware implementation. It is a rule where an incoming spike changes the synaptic weight depending on whether <inline-formula>
<mml:math id="M8">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> of the post-synaptic neuron is higher than a threshold <inline-formula>
<mml:math id="M9">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">UP</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> or lower than another threshold <inline-formula>
<mml:math id="M10">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>. The magnitude relationship <inline-formula>
<mml:math id="M11">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:msubsup>
<mml:mo>&#x2264;</mml:mo>
</mml:math>
</inline-formula> <inline-formula>
<mml:math id="M12">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">UP</mml:mi>
</mml:msubsup>
<mml:mo>&#x003C;</mml:mo>
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> is essential for correct learning; hence, <inline-formula>
<mml:math id="M13">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M14">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">UP</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> should be defined according to <inline-formula>
<mml:math id="M15">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>.</p>
<p>In this work we study an SRNN with IP and SP where <inline-formula>
<mml:math id="M16">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M17">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">UP</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>, and <inline-formula>
<mml:math id="M18">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">Down</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> are discretized and synchronized. In order to make our model hardware-oriented, synaptic weights <inline-formula>
<mml:math id="M19">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula> are also discretized so that we can assume conventional digital memory circuits for storing weights. We perform simulations of learning and anomaly detection tasks for publicly available electrocardiograms (ECGs; <xref ref-type="bibr" rid="ref40">Liu et al., 2013</xref>; <xref ref-type="bibr" rid="ref32">Kiranyaz et al., 2016</xref>; <xref ref-type="bibr" rid="ref15">Das et al., 2018</xref>; <xref ref-type="bibr" rid="ref3">Amirshahi and Hashemi, 2019</xref>; <xref ref-type="bibr" rid="ref5">Bauer et al., 2019</xref>; <xref ref-type="bibr" rid="ref61">Wang et al., 2019</xref>) and show the effectiveness of our model. In particular, we discuss how far we can reduce the number of discretized levels of <inline-formula>
<mml:math id="M20">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M21">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula>, which is an essential aspect for hardware implementation.</p>
</sec>
<sec sec-type="methods" id="sec2">
<label>2</label>
<title>Methods</title>
<sec id="sec3">
<label>2.1</label>
<title>LIF neuron model</title>
<p>The neuron model we employ in this work is the LIF model (<xref ref-type="bibr" rid="ref27">Holt and Koch, 1997</xref>), which is one of the best-known spiking neuron models due to its computational effectiveness and mathematical simplicity. The membrane potential <inline-formula>
<mml:math id="M22">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> of neuron <inline-formula>
<mml:math id="M23">
<mml:mi>i</mml:mi>
</mml:math>
</inline-formula> is given as</p>
<disp-formula id="E1">
<mml:math id="M24">
<mml:mi>C</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mi mathvariant="italic">in</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
<mml:mi>R</mml:mi>
</mml:mfrac>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M25">
<mml:mi>C</mml:mi>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M26">
<mml:mi>R</mml:mi>
</mml:math>
</inline-formula>, and <inline-formula>
<mml:math id="M27">
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mi mathvariant="italic">in</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> denote the membrane capacitance, resistance, and the sum of the input current flowing into the neuron, respectively. If <inline-formula>
<mml:math id="M28">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> exceeds the firing threshold <inline-formula>
<mml:math id="M29">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>, neuron <inline-formula>
<mml:math id="M30">
<mml:mi>i</mml:mi>
</mml:math>
</inline-formula> fires and transfers a spike signal to the next neurons connected via a synapse. Then, neuron <inline-formula>
<mml:math id="M31">
<mml:mi>i</mml:mi>
</mml:math>
</inline-formula> resets <inline-formula>
<mml:math id="M32">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> to <inline-formula>
<mml:math id="M33">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">reset</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> and enters a refractory state for time <inline-formula>
<mml:math id="M34">
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>f</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>, during which <inline-formula>
<mml:math id="M35">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> stays at <inline-formula>
<mml:math id="M36">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">reset</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> regardless of <inline-formula>
<mml:math id="M37">
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mi mathvariant="italic">in</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. The LIF neuron is hardware-friendly because it can be implemented in analog circuits using industrially manufacturable complementary-metal-oxide-semiconductor (CMOS) devices (<xref ref-type="bibr" rid="ref29">Indiveri et al., 2011</xref>), as illustrated in <xref ref-type="fig" rid="fig1">Figure 1A</xref>.</p>
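<p>As an illustration of the dynamics described above, the following is a minimal sketch in plain Python (not the analog circuit of <xref ref-type="fig" rid="fig1">Figure 1A</xref> and not our simulation code) of a forward-Euler integration of the membrane equation with firing, reset, and refractory behavior. The values of <italic>R</italic> and <italic>C</italic> follow <xref ref-type="table" rid="tab1">Table 1</xref>; the remaining parameters are placeholders.</p>
<preformat>
import numpy as np

# Forward-Euler integration of C dV/dt = I_in - V/R with threshold, reset,
# and refractory period. Parameter values other than R and C are placeholders.
C = 10e-12        # membrane capacitance [F] (Table 1)
R = 400e6         # membrane resistance [Ohm] (Table 1)
V_thr = 20e-3     # firing threshold [V], placeholder
V_reset = 0.0     # reset potential [V], placeholder
t_ref = 2e-3      # refractory period [s], placeholder
dt = 0.1e-3       # integration time step [s]

def simulate_lif(I_in, n_steps):
    """Return the membrane trace and spike times for a constant input current."""
    V = V_reset
    refractory_left = 0.0
    trace, spikes = [], []
    for step in range(n_steps):
        if refractory_left > 0.0:
            refractory_left -= dt          # V stays at V_reset regardless of I_in
            V = V_reset
        else:
            V += dt * (I_in - V / R) / C   # leaky integration
            if V > V_thr:                  # fire, reset, enter refractory state
                spikes.append(step * dt)
                V = V_reset
                refractory_left = t_ref
        trace.append(V)
    return np.array(trace), spikes

trace, spikes = simulate_lif(I_in=100e-12, n_steps=5000)
print(f"{len(spikes)} spikes in {5000 * dt * 1e3:.0f} ms")
</preformat>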
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>Model and behavior of each component of SRNN. <bold>(A)</bold> LIF neuron circuit diagram. <bold>(B)</bold> Schematic diagram of synaptic weight variation. <bold>(C)</bold> Behavior of <inline-formula>
<mml:math id="M38">
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M39">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> depending on <inline-formula>
<mml:math id="M40">
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>.</p>
</caption>
<graphic xlink:href="fnins-18-1402646-g001.tif"/>
</fig>
</sec>
<sec id="sec4">
<label>2.2</label>
<title>Synapse and SDSP</title>
<p>A synapse receives spikes from neurons and external input nodes. When a spike comes, a synapse converts the spike into a synaptic current <inline-formula>
<mml:math id="M41">
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> proportional to <inline-formula>
<mml:math id="M42">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula> defined as</p>
<disp-formula id="E2">
<mml:math id="M43">
<mml:msub>
<mml:mi>&#x03C4;</mml:mi>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfrac>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
<mml:mi>W</mml:mi>
<mml:mi>&#x03B4;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi mathvariant="italic">spike</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mtext>,</mml:mtext>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M44">
<mml:msub>
<mml:mi>&#x03C4;</mml:mi>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M45">
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi mathvariant="italic">spike</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> denote a time constant and the spike arrival time, respectively, and <inline-formula>
<mml:math id="M46">
<mml:mi>&#x03B1;</mml:mi>
</mml:math>
</inline-formula> is an appropriately defined constant. This synapse model is also compatible with the CMOS design.</p>
<p>As mentioned above, we employ SDSP as the synaptic update rule for SP. The synaptic weight <inline-formula>
<mml:math id="M47">
<mml:mi>W</mml:mi>
<mml:mfenced open="(" close=")" separators=",">
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> between pre-synaptic neuron <inline-formula>
<mml:math id="M48">
<mml:mi>i</mml:mi>
</mml:math>
</inline-formula> and post-synaptic neuron <inline-formula>
<mml:math id="M49">
<mml:mi>j</mml:mi>
</mml:math>
</inline-formula> increases or decreases if <inline-formula>
<mml:math id="M50">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mi>j</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> is higher or lower than the learning threshold <inline-formula>
<mml:math id="M51">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">UP</mml:mi>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>j</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> or <inline-formula>
<mml:math id="M52">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>j</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> when the pre-synaptic neuron <inline-formula>
<mml:math id="M53">
<mml:mi>i</mml:mi>
</mml:math>
</inline-formula> fires, as follows:</p>
<disp-formula id="E3">
<mml:math id="M54">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mi>W</mml:mi>
<mml:msub>
<mml:mfenced open="(" close=")" separators=",">
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mfenced>
<mml:mi mathvariant="italic">new</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfenced close="" open="{">
<mml:mtable equalrows="true" equalcolumns="true">
<mml:mtr>
<mml:mtd>
<mml:mi>W</mml:mi>
<mml:mfenced open="(" close=")" separators=",">
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mfenced>
<mml:mo>+</mml:mo>
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
<mml:mspace width="1em"/>
<mml:mi mathvariant="italic">if</mml:mi>
<mml:mspace width="0.91em"/>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mi>j</mml:mi>
</mml:msubsup>
<mml:mo>&#x003E;</mml:mo>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">UP</mml:mi>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>j</mml:mi>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>W</mml:mi>
<mml:mfenced open="(" close=")" separators=",">
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
<mml:mspace width="1em"/>
<mml:mi mathvariant="italic">if</mml:mi>
<mml:mspace width="0.91em"/>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mi>j</mml:mi>
</mml:msubsup>
<mml:mo>&#x003C;</mml:mo>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>j</mml:mi>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="normal">when</mml:mi>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="normal">a</mml:mi>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="normal">spike arrives</mml:mi>
<mml:mtext>,</mml:mtext>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M55">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> is the learning step, which is set to a constant value, as illustrated in <xref ref-type="fig" rid="fig1">Figure 1B</xref>.</p>
<p>In practice, the range of <inline-formula>
<mml:math id="M56">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula> is finite, <inline-formula>
<mml:math id="M57">
<mml:mn>0</mml:mn>
<mml:mo>&#x2264;</mml:mo>
<mml:mi>W</mml:mi>
<mml:mo>&#x2264;</mml:mo>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mtext>max</mml:mtext>
</mml:msub>
</mml:math>
</inline-formula>, hence <inline-formula>
<mml:math id="M58">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> defines the resolution of <inline-formula>
<mml:math id="M59">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula>. Higher resolution is favorable for better performance in general, but this leads to a larger circuit area for storing <inline-formula>
<mml:math id="M60">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula> values. Emerging memory elements such as memristors and phase change memory devices may be employed to avoid this issue (<xref ref-type="bibr" rid="ref36">Lazar et al., 2009</xref>; <xref ref-type="bibr" rid="ref39">Li and Li, 2013</xref>), but practical use of these emerging technologies is still a big challenge. In this work, we assume conventional CMOS digital memory cells for storing <inline-formula>
<mml:math id="M61">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula>, raising our interest in how much we can reduce the resolution of <inline-formula>
<mml:math id="M62">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula> for practical application task. In this view, we discuss the feasibility of binary <inline-formula>
<mml:math id="M63">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula>, which is ideal for hardware implementation, later in this work.</p>
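<p>For illustration, a minimal sketch (plain Python, not our simulation code) of the SDSP update above combined with the finite weight range is given below; the values of the learning step and the upper weight bound are placeholders.</p>
<preformat>
# SDSP update executed when a pre-synaptic spike arrives, with the weight
# clipped to the finite range [0, W_max]. LR_SDSP and W_MAX are placeholders.
LR_SDSP = 0.1   # learning step
W_MAX = 1.0     # upper bound of the synaptic weight

def sdsp_update(W_ij, V_mem_j, V_lthr_up_j, V_lthr_down_j):
    """Return the new weight W(i,j) after a spike from pre-synaptic neuron i."""
    if V_mem_j > V_lthr_up_j:        # potentiate
        W_ij += LR_SDSP
    elif V_lthr_down_j > V_mem_j:    # depress
        W_ij -= LR_SDSP
    # otherwise the weight is left unchanged
    return min(max(W_ij, 0.0), W_MAX)   # clip to the finite range
</preformat>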
<p>A circuit that determines whether <inline-formula>
<mml:math id="M64">
<mml:mi>W</mml:mi>
<mml:mfenced open="(" close=")" separators=",">
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> should be potentiated, depressed, or unchanged can be designed with two comparator circuits: one compares <inline-formula>
<mml:math id="M65">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mi>j</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> with <inline-formula>
<mml:math id="M66">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">UP</mml:mi>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>j</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> and the other with <inline-formula>
<mml:math id="M67">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>j</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> (see <xref rid="SM1" ref-type="supplementary-material">Supplementary materials</xref>). Note that it is sufficient for each neuron to have such a determinator; it is not necessary for each synapse to have one.</p>
</sec>
<sec id="sec5">
<label>2.3</label>
<title>Event-driven stepwise IP</title>
<p>The IP model we employ executes a stepwise change of the firing threshold voltage <inline-formula>
<mml:math id="M68">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> of neuron <inline-formula>
<mml:math id="M69">
<mml:mi>i</mml:mi>
</mml:math>
</inline-formula> in an event-driven manner as</p>
<disp-formula id="E4">
<mml:math id="M70">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mi mathvariant="italic">new</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mfenced close="" open="{">
<mml:mtable equalrows="true" equalcolumns="true">
<mml:mtr>
<mml:mtd>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mspace width="1em"/>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="italic">if</mml:mi>
<mml:mspace width="thickmathspace"/>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
<mml:mo>&#x003E;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>+</mml:mo>
<mml:mi>&#x03C3;</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mspace width="1em"/>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="italic">if</mml:mi>
<mml:mspace width="thickmathspace"/>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
<mml:mo>&#x003C;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03C3;</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="normal">when neuron</mml:mi>
<mml:mspace width="thickmathspace"/>
<mml:mi>i</mml:mi>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="normal">fires</mml:mi>
<mml:mtext>,</mml:mtext>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M71">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> denotes the changing step of <inline-formula>
<mml:math id="M72">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> in a single IP operation, <inline-formula>
<mml:math id="M73">
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> a parameter that measures the activity of neuron <inline-formula>
<mml:math id="M74">
<mml:mi>i</mml:mi>
</mml:math>
</inline-formula>, and <inline-formula>
<mml:math id="M75">
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> a constant corresponding to the target activity. <inline-formula>
<mml:math id="M76">
<mml:mi>&#x03C3;</mml:mi>
</mml:math>
</inline-formula> is a parameter that defines a healthy regime of <inline-formula>
<mml:math id="M77">
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M78">
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03C3;</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x003C;</mml:mo>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
<mml:mo>&#x003C;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>+</mml:mo>
<mml:mi>&#x03C3;</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>, where the IP operation is not executed (see <xref rid="SM1" ref-type="supplementary-material">Supplementary materials</xref> for details; <xref ref-type="bibr" rid="ref49">Payvand et al., 2022</xref>). <inline-formula>
<mml:math id="M79">
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> is often referred to as a calcium potential (<xref ref-type="bibr" rid="ref7">Brader et al., 2007</xref>; <xref ref-type="bibr" rid="ref28">Indiveri and Fusi, 2007</xref>; <xref ref-type="bibr" rid="ref46">Ning et al., 2015</xref>), defined as</p>
<disp-formula id="E5">
<mml:math id="M80">
<mml:msub>
<mml:mi>&#x03C4;</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfrac>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:munder>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi mathvariant="italic">Firings</mml:mi>
<mml:mspace width="0.25em"/>
<mml:mi mathvariant="italic">of</mml:mi>
<mml:mspace width="0.25em"/>
<mml:mi mathvariant="italic">Neuron</mml:mi>
<mml:mspace width="0.25em"/>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi>&#x03B4;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msubsup>
<mml:mi>t</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mtext>,</mml:mtext>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M81">
<mml:msub>
<mml:mi>&#x03C4;</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> is a constant and <inline-formula>
<mml:math id="M82">
<mml:msubsup>
<mml:mi>t</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> represents all the firing times of neuron <inline-formula>
<mml:math id="M83">
<mml:mi>i</mml:mi>
</mml:math>
</inline-formula> (note that all the firing times are summed up). The behavior of <inline-formula>
<mml:math id="M84">
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> is illustrated in <xref ref-type="fig" rid="fig1">Figure 1C</xref>, showing that it can be used as an indicator of the neuron activity if the threshold <inline-formula>
<mml:math id="M85">
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> is appropriately determined.</p>
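<p>A minimal sketch (plain Python, not our simulation code) of the event-driven stepwise IP rule and the calcium-like trace defined above is given below; all parameter values are placeholders.</p>
<preformat>
# Calcium-like activity trace: decays with time constant tau_IP and jumps by
# 1/tau_IP at each firing of the neuron (the integral of the delta term), so
# that its time average roughly equals the firing rate in Hz. The firing
# threshold is moved one step LR_thr whenever C_fire leaves the healthy regime
# between (1 - sigma/2)*C_IP and (1 + sigma/2)*C_IP. All values are placeholders.
tau_IP = 100e-3   # decay time constant of C_fire [s]
C_IP   = 20.0     # target activity
sigma  = 0.5      # width of the healthy regime
LR_thr = 1e-3     # stepwise change of V_thr [V]
dt     = 0.1e-3   # time step [s]

def update_calcium(C_fire, fired):
    """Advance C_fire by one time step; add the firing contribution if needed."""
    C_fire += dt * (-C_fire) / tau_IP
    if fired:
        C_fire += 1.0 / tau_IP
    return C_fire

def ip_update(V_thr, C_fire):
    """Event-driven IP: executed only when the neuron fires."""
    if C_fire > (1.0 + sigma / 2.0) * C_IP:      # too active: raise the threshold
        V_thr += LR_thr
    elif (1.0 - sigma / 2.0) * C_IP > C_fire:    # too quiet: lower the threshold
        V_thr -= LR_thr
    return V_thr
</preformat>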
<p>The firing threshold of a LIF neuron is given as a reference voltage applied to a comparator connected to the membrane capacitor. Stepwise change of <inline-formula>
<mml:math id="M86">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> is advantageous for hardware implementation because we do not need to design a compact voltage source circuit that can tune the output continuously. Instead, we need to prepare several fixed voltage lines and select one of them using a multiplexer, which is not a difficult task.</p>
</sec>
<sec id="sec6">
<label>2.4</label>
<title>Synchronization of IP and SP thresholds</title>
<p>If the SDSP thresholds <inline-formula>
<mml:math id="M87">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">UP</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula> are kept constant, the IP rule introduced above interferes with SP because it changes the magnitude relationship between <inline-formula>
<mml:math id="M88">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">UP</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M89">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>. For example, let us assume that <inline-formula>
<mml:math id="M90">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> is lowered by IP and comes below <inline-formula>
<mml:math id="M91">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>. In this case, <inline-formula>
<mml:math id="M92">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula> decreases every time a spike comes and finally reaches zero because <inline-formula>
<mml:math id="M93">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> is always less than <inline-formula>
<mml:math id="M94">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> and never exceeds <inline-formula>
<mml:math id="M95">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">UP</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>. This would lead to incorrect learning of the input information.</p>
<p>To operate both IP and SP at the same time correctly, we synchronize the three thresholds of neuron <inline-formula>
<mml:math id="M96">
<mml:mi>i</mml:mi>
</mml:math>
</inline-formula>, that is, <inline-formula>
<mml:math id="M97">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M98">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">UP</mml:mi>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula>, and <inline-formula>
<mml:math id="M99">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> so that the magnitude relationship <inline-formula>
<mml:math id="M100">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:msubsup>
<mml:mo>&#x003C;</mml:mo>
</mml:math>
</inline-formula> <inline-formula>
<mml:math id="M101">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">UP</mml:mi>
</mml:msubsup>
<mml:mo>&#x003C;</mml:mo>
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> is maintained during IP operations. Along with the firing threshold <inline-formula>
<mml:math id="M102">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>, the learning thresholds <inline-formula>
<mml:math id="M103">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">UP</mml:mi>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M104">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> are updated by IP as follows,</p>
<disp-formula id="E6">
<mml:math id="M105">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">UP</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
<mml:mo>=</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mfenced close="" open="{">
<mml:mtable equalrows="true" equalcolumns="true">
<mml:mtr>
<mml:mtd>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">UP</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
<mml:mo>+</mml:mo>
<mml:mi>L</mml:mi>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">UP</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mspace width="0.91em"/>
<mml:mi mathvariant="italic">if</mml:mi>
<mml:mspace width="thickmathspace"/>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
<mml:mo>&#x003E;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>+</mml:mo>
<mml:mi>&#x03C3;</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">UP</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>L</mml:mi>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">UP</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mspace width="0.58em"/>
<mml:mi mathvariant="italic">if</mml:mi>
<mml:mspace width="thickmathspace"/>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mi mathvariant="italic">fire</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
<mml:mo>&#x003C;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03C3;</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="normal">when neuron</mml:mi>
<mml:mspace width="thickmathspace"/>
<mml:mi>i</mml:mi>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="normal">fires</mml:mi>
<mml:mtext>,</mml:mtext>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M106">
<mml:mi>L</mml:mi>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">UP</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula> are the change widths of the learning thresholds.</p>
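<p>A minimal sketch (plain Python, not our simulation code) of the synchronized stepwise update of the firing threshold and the two learning thresholds is given below; the step sizes are placeholders and are chosen equal here only for simplicity.</p>
<preformat>
# Synchronized stepwise IP update: the firing threshold and both learning
# thresholds move together when neuron i fires, so that the ordering
# V_Lthr_DOWN, V_Lthr_UP, V_thr is preserved. Step sizes are placeholders.
LR_thr       = 1e-3   # step of the firing threshold [V]
LR_Lthr_UP   = 1e-3   # step of the UP learning threshold [V]
LR_Lthr_DOWN = 1e-3   # step of the DOWN learning threshold [V]

def synchronized_ip_update(V_thr, V_up, V_down, C_fire, C_IP, sigma):
    """Move the three thresholds of a neuron together in one IP event."""
    if C_fire > (1.0 + sigma / 2.0) * C_IP:
        V_thr, V_up, V_down = V_thr + LR_thr, V_up + LR_Lthr_UP, V_down + LR_Lthr_DOWN
    elif (1.0 - sigma / 2.0) * C_IP > C_fire:
        V_thr, V_up, V_down = V_thr - LR_thr, V_up - LR_Lthr_UP, V_down - LR_Lthr_DOWN
    return V_thr, V_up, V_down
</preformat>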
</sec>
<sec id="sec7">
<label>2.5</label>
<title>Network model</title>
<p><xref ref-type="fig" rid="fig2">Figure 2A</xref> shows the architecture of the SRNN system we study in this work. It consists of an input layer, a middle layer, and an output layer. The middle layer (M-SRNN) is an RNN with random connections and synaptic weights, consisting of two neuron types which are excitatory and inhibitory neurons. The M-SRNN in this work consists of 80% excitatory and 20% inhibitory neurons. Input-layer neurons send Poisson spikes to the neurons of the M-SRNN at a frequency corresponding to the value of the input data. The input-layer neurons connect with excitatory neurons of M-SRNN with a probability of <inline-formula>
<mml:math id="M107">
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi mathvariant="italic">in</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>, which is 0.1 in this work. Note that they have no connections to inhibitory neurons. The excitatory neurons connect with other excitatory neurons with probability <inline-formula>
<mml:math id="M108">
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> and with inhibitory neurons with probability <inline-formula>
<mml:math id="M109">
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mi>I</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>. Inhibitory neurons connect with excitatory neurons with probability <inline-formula>
<mml:math id="M110">
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi mathvariant="italic">IE</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> and do not connect with other inhibitory neurons. Output-layer neurons receive connections from all excitatory neurons of the M-SRNN. Because of the random nature, not every M-SRNN will give the desired result, so the parameters related to the structure of the M-SRNN must be set carefully (<xref ref-type="bibr" rid="ref49">Payvand et al., 2022</xref>). With the self-organization mechanism provided by IP and SP, the reconstruction of the M-SRNN is performed automatically using the spike signals from the input-layer neurons.</p>
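<p>A minimal sketch (plain Python/NumPy, not our simulation code) of the random connectivity just described is given below. The neuron counts and the probabilities follow the text and <xref ref-type="table" rid="tab1">Table 1</xref> where available; the number of input neurons and the probabilities not listed there are placeholders.</p>
<preformat>
import numpy as np

# Random connectivity masks of the M-SRNN: input neurons project only to
# excitatory neurons (P_in), excitatory neurons project to excitatory (P_EE)
# and inhibitory (P_EI) neurons, inhibitory neurons project back to excitatory
# neurons (P_IE), and there are no inhibitory-to-inhibitory connections.
# N_in, P_EI, and P_IE are placeholders; the other values follow the text and Table 1.
rng = np.random.default_rng(0)
N_in, N_E, N_I = 50, 160, 40
P_in, P_EE, P_EI, P_IE = 0.1, 0.05, 0.2, 0.2

def random_mask(n_pre, n_post, p):
    """Boolean connection mask with connection probability p."""
    return p > rng.random((n_pre, n_post))

mask_in_E = random_mask(N_in, N_E, P_in)   # input to excitatory
mask_EE   = random_mask(N_E, N_E, P_EE)    # excitatory to excitatory
mask_EI   = random_mask(N_E, N_I, P_EI)    # excitatory to inhibitory
mask_IE   = random_mask(N_I, N_E, P_IE)    # inhibitory to excitatory
# no input-to-inhibitory and no inhibitory-to-inhibitory connections
</preformat>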
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Hardware implementation for an SRNN. <bold>(A)</bold> SRNN consists of input, middle (M-SRNN), and output layers. The M-SRNN consists of excitatory (<italic>E</italic>, black) and inhibitory (<italic>I</italic>, blue) sub-population layers. <bold>(B)</bold> Hardware implementation for M-SRNN.</p>
</caption>
<graphic xlink:href="fnins-18-1402646-g002.tif"/>
</fig>
<p>The M-SRNN can be implemented as a crossbar architecture (<xref ref-type="bibr" rid="ref36">Lazar et al., 2009</xref>), as shown in <xref ref-type="fig" rid="fig2">Figure 2B</xref>. There, each row line is connected to a neuron of the M-SRNN, and each column line is connected to either an input neuron emitting spikes in response to external inputs or a recurrent input from an M-SRNN neuron. Each cross point is a synapse, where spikes from the column line are converted to a synaptic current flowing into the row line. Some of the synapses are set inactive to realize the random connectivity of the RNN.</p>
</sec>
</sec>
<sec id="sec8">
<label>3</label>
<title>Simulation and results</title>
<sec id="sec9">
<label>3.1</label>
<title>Simulation configuration and parameters</title>
<p>The effectiveness of our M-SRNN model with the IP and SP mechanisms explained above is evaluated using the Brian simulator (<xref ref-type="bibr" rid="ref25">Goodman and Brette, 2008</xref>) on an ECG anomaly detection benchmark (<xref ref-type="bibr" rid="ref51">PhysioNet, 1999</xref>; <xref ref-type="bibr" rid="ref24">Goldberger et al., 2000</xref>; <xref ref-type="bibr" rid="ref43">Moody and Mark, 2001</xref>) with the parameters listed in <xref ref-type="table" rid="tab1">Table 1</xref>. Input-layer neurons convert the ECG data to Poisson spikes and send them to the excitatory neurons in the M-SRNN. The simulation consists of three phases. Phase 1 is the unsupervised learning phase of the M-SRNN, using the training ECG data. Thresholds (<inline-formula>
<mml:math id="M111">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>, and <inline-formula>
<mml:math id="M112">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">UP</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>) of excitatory neurons and synaptic weights (<inline-formula>
<mml:math id="M113">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula>) between excitatory neurons in the M-SRNN are learned by IP and SP, respectively. Phase 2 is the readout learning phase: the synaptic weights between neurons inside the M-SRNN are kept fixed, while the synaptic weights between the excitatory neurons of the M-SRNN and the output-layer neurons are calculated by linear regression in a supervised fashion. Phase 3 is the test phase: using test ECG data, the anomaly detection performance of the M-SRNN determined in Phase 1 is evaluated.</p>
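<p>As a hedged sketch of Phase 2, the readout weights can be obtained by ordinary least squares on recorded excitatory firing rates; the array names and sizes below are illustrative placeholders, not the actual simulation code.</p>
<preformat>
import numpy as np

# Phase 2 sketch: supervised readout by linear regression, assuming the firing
# rates of the 160 excitatory neurons have been recorded for each input step.
rates = np.random.rand(1000, 160)      # (input steps, excitatory neurons); placeholder data
targets = np.random.rand(1000, 1)      # desired outputs, e.g., the next-step input rate; placeholder

X = np.hstack([rates, np.ones((rates.shape[0], 1))])   # append a bias column
W_out, *_ = np.linalg.lstsq(X, targets, rcond=None)    # least-squares readout weights

F_out = X @ W_out   # predicted output-layer firing frequency per input step
</preformat>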
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>Initial values in SRNN simulations.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="top" colspan="3">Neurons</th>
<th/>
<th align="center" valign="top" colspan="2">Synapses</th>
</tr>
<tr>
<th/>
<th align="center" valign="top">Excitatory</th>
<th align="center" valign="top">Inhibitory</th>
<th/>
<th align="center" valign="top"><italic>W</italic></th>
<th align="center" valign="top">1.0</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top"># of Neurons</td>
<td align="center" valign="top">160</td>
<td align="center" valign="top">40</td>
<td/>
<td align="center" valign="top" colspan="2">SRNN</td>
</tr>
<tr>
<td align="left" valign="top"><italic>R</italic> (<italic>M&#x03A9;</italic>)</td>
<td align="center" valign="top">400</td>
<td align="center" valign="top">400</td>
<td/>
<td align="center" valign="top"><italic>P<sub>EE</sub></italic></td>
<td align="center" valign="top">5%</td>
</tr>
<tr>
<td align="left" valign="top"><italic>C</italic> (<italic>pF</italic>)</td>
<td align="center" valign="top">10</td>
<td align="center" valign="top">10</td>
<td/>
<td align="center" valign="top"><italic>P<sub>II</sub></italic></td>
<td align="center" valign="top">0%</td>
</tr>
<tr>
<td align="left" valign="top"><italic>&#x03C4;<sub>ca</sub></italic> (<italic>ms</italic>)</td>
<td align="center" valign="top">100</td>
<td align="center" valign="top">100</td>
<td/>
<td align="center" valign="top"><italic>P<sub>EI</sub></italic></td>
<td align="center" valign="top">2%</td>
</tr>
<tr>
<td align="left" valign="top"><italic>V<sub>th</sub></italic> (<italic>V</italic>)</td>
<td align="center" valign="top">0.2</td>
<td align="center" valign="top">0.2</td>
<td/>
<td align="center" valign="top"><italic>P<sub>IE</sub></italic></td>
<td align="center" valign="top">10%</td>
</tr>
<tr>
<td align="left" valign="top"><italic>C<sub>ip</sub></italic> (#of fires /sec)</td>
<td align="center" valign="top">15</td>
<td align="center" valign="top">-</td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top"><italic>&#x03C4;<sub>ip</sub></italic> (<italic>ms</italic>)</td>
<td align="center" valign="top">100</td>
<td align="center" valign="top">100</td>
<td/>
<td/>
<td/>
</tr>
</tbody>
</table>
</table-wrap>
<p>In the simulation, the learning step <inline-formula>
<mml:math id="M114">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> and the firing threshold change width <inline-formula>
<mml:math id="M115">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> are selected from <inline-formula>
<mml:math id="M116">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>R</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfenced close="}" open="{" separators=",,,,">
<mml:mn>0.1</mml:mn>
<mml:mn>0.2</mml:mn>
<mml:mn>0.5</mml:mn>
<mml:mn>1.0</mml:mn>
<mml:mn>2.0</mml:mn>
</mml:mfenced>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M117">
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfenced close="}" open="{">
<mml:mrow>
<mml:mn>0.025</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="0.58em"/>
<mml:mn>0.05</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="0.58em"/>
<mml:mn>0.1</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="0.58em"/>
<mml:mn>0.3</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mtext>,</mml:mtext>
</mml:math>
</inline-formula> respectively. The ranges of <inline-formula>
<mml:math id="M118">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M119">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> are <inline-formula>
<mml:math id="M120">
<mml:mn>0</mml:mn>
<mml:mo>&#x2264;</mml:mo>
<mml:mi>W</mml:mi>
<mml:mo>&#x2264;</mml:mo>
<mml:mn>2</mml:mn>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M121">
<mml:mn>0.125</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi>V</mml:mi>
<mml:mo>&#x2264;</mml:mo>
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2264;</mml:mo>
<mml:mn>0.4</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi>V</mml:mi>
</mml:math>
</inline-formula>. With regard to the SP synchronization with IP, we set <inline-formula>
<mml:math id="M122">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">UP</mml:mi>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">/</mml:mo>
<mml:mn>2</mml:mn>
</mml:math>
</inline-formula> throughout this work, hence <inline-formula>
<mml:math id="M123">
<mml:mi>L</mml:mi>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">UP</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>=<inline-formula>
<mml:math id="M124">
<mml:mi>L</mml:mi>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mi mathvariant="italic">DOWN</mml:mi>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="true">/</mml:mo>
<mml:mn>2</mml:mn>
</mml:math>
</inline-formula>. All initial synaptic weights between excitatory neurons are set to <inline-formula>
<mml:math id="M125">
<mml:mn>1.0</mml:mn>
</mml:math>
</inline-formula>, and the initial firing threshold is set to 0.2&#x2009;V for all neurons. All other synaptic weights are set randomly. The validity of our method is also confirmed through the Counting Task Benchmark (<xref ref-type="bibr" rid="ref36">Lazar et al., 2009</xref>; <xref ref-type="bibr" rid="ref49">Payvand et al., 2022</xref>), as shown in <xref rid="SM1" ref-type="supplementary-material">Supplementary materials 4.1</xref>.</p>
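<p>The following Python sketch summarizes the parameter candidates, the range clipping, and the synchronized learning-threshold rule stated above; it is an illustration under the stated settings, not the simulation code itself.</p>
<preformat>
import numpy as np

# Sketch of the parameter candidates and range clipping used in the simulations.
S_LR  = [0.1, 0.2, 0.5, 1.0, 2.0]      # candidate values for LR_SDSP
P_thr = [0.025, 0.05, 0.1, 0.3]        # candidate values for LR_thr (V)

W_MIN, W_MAX       = 0.0, 2.0          # allowed synaptic-weight range
VTHR_MIN, VTHR_MAX = 0.125, 0.4        # allowed firing-threshold range (V)

def clip_weight(w):
    return float(np.clip(w, W_MIN, W_MAX))

def clip_threshold(v):
    return float(np.clip(v, VTHR_MIN, VTHR_MAX))

def learning_thresholds(v_thr_i):
    """Synchronized SP thresholds: V_Lthr^UP(i) = V_Lthr^DOWN(i) = V_thr^i / 2."""
    return v_thr_i / 2.0, v_thr_i / 2.0

# The synchronized change widths follow directly: LR_Lthr^{UP/DOWN} = LR_thr / 2.
LR_thr  = P_thr[0]
LR_Lthr = LR_thr / 2.0
</preformat>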
</sec>
<sec id="sec10">
<label>3.2</label>
<title>ECG anomaly detection</title>
<p>For ECG anomaly detection, we use the MIT-BIH arrhythmia database (<xref ref-type="bibr" rid="ref51">PhysioNet, 1999</xref>; <xref ref-type="bibr" rid="ref24">Goldberger et al., 2000</xref>; <xref ref-type="bibr" rid="ref43">Moody and Mark, 2001</xref>). Using the PhysioBank ATM provided by <xref ref-type="bibr" rid="ref51">PhysioNet (1999)</xref>, we download and use MIT-BIH Long-Term ECG record No. 14046 for performance evaluation. <xref ref-type="fig" rid="fig3">Figure 3A</xref> shows a part of the normal ECG waveform used as training data. As test data, we use <inline-formula>
<mml:math id="M126">
<mml:mn>10</mml:mn>
</mml:math>
</inline-formula> hours of waveform data of No. 14046 that partially include multiple abnormal waveforms. <xref ref-type="fig" rid="fig3">Figure 3B</xref> shows a part of the ECG waveform data used in the test. To perform anomaly detection, the SRNN is used as an inference machine. The values of the data points of the ECG waveform are input to the SRNN one by one in time order. At the <inline-formula>
<mml:math id="M127">
<mml:mi>k</mml:mi>
</mml:math>
</inline-formula>-th input, it predicts the next <inline-formula>
<mml:math id="M128">
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula>-st. The firing frequency <inline-formula>
<mml:math id="M129">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">out</mml:mi>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> of the output-layer neuron at the <inline-formula>
<mml:math id="M130">
<mml:mi>k</mml:mi>
</mml:math>
</inline-formula>-th input is compared to the firing frequency of the input neuron at the <inline-formula>
<mml:math id="M131">
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula>-st input <inline-formula>
<mml:math id="M132">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">in</mml:mi>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula>. Here, we define the abnormality judgment level <inline-formula>
<mml:math id="M133">
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> to detect anomalies; if the absolute difference <inline-formula>
<mml:math id="M134">
<mml:mi>D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:mspace width="0.33em"/>
<mml:mo stretchy="true">|</mml:mo>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">out</mml:mi>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">in</mml:mi>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo stretchy="true">|</mml:mo>
</mml:math>
</inline-formula> is greater than a predefined level <inline-formula>
<mml:math id="M135">
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>, the <inline-formula>
<mml:math id="M136">
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula>-st input data is regarded as abnormal.</p>
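<p>A minimal sketch of this judgment rule, assuming the per-step firing frequencies are available as arrays (the function and variable names are illustrative):</p>
<preformat>
import numpy as np

# Sketch of the abnormality judgment: D(k+1) = |F_out(k) - F_in(k+1)| is compared
# with the judgment level D_thr; inputs are 1-D arrays of per-step firing frequencies.
def detect_anomalies(F_out, F_in, D_thr):
    F_out = np.asarray(F_out, dtype=float)
    F_in = np.asarray(F_in, dtype=float)
    D = np.abs(F_out[:-1] - F_in[1:])   # D(k+1) for k = 0, 1, ...
    return D, D > D_thr                 # abnormal where the difference exceeds D_thr
</preformat>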
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>A part of ECG benchmark waveform No. 14046 used in the simulation. <bold>(A)</bold> A part of Normal ECG waveform used in training for M-SRNN. <bold>(B)</bold> A part of ECG waveform with abnormal points (labeled with <italic>AB</italic>). <bold>(C)</bold> The test results <inline-formula>
<mml:math id="M137">
<mml:mi>D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> of input <bold>(B)</bold>.</p>
</caption>
<graphic xlink:href="fnins-18-1402646-g003.tif"/>
</fig>
<p><xref ref-type="fig" rid="fig3">Figure 3C</xref> shows the anomaly detection results <inline-formula>
<mml:math id="M138">
<mml:mi>D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> using M-SRNN reconstructed by our proposed method when the waveform data in <xref ref-type="fig" rid="fig3">Figure 3B</xref> is input. For highly accurate abnormality detection, <inline-formula>
<mml:math id="M139">
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> must be set between <inline-formula>
<mml:math id="M140">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">no</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M141">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">ab</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>, where <inline-formula>
<mml:math id="M142">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">no</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> is the highest peak of <inline-formula>
<mml:math id="M143">
<mml:mi>D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> for normal data input points, and <inline-formula>
<mml:math id="M144">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">ab</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> is the lowest peak of <inline-formula>
<mml:math id="M145">
<mml:mi>D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> for the abnormal points (<xref ref-type="fig" rid="fig3">Figure 3C</xref>). In other words, <inline-formula>
<mml:math id="M146">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">no</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> is the smallest <inline-formula>
<mml:math id="M147">
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> that does not misdetect normal data points, and <inline-formula>
<mml:math id="M148">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">ab</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> is the smallest <inline-formula>
<mml:math id="M149">
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> that does not overlook any anomalies. Note that <inline-formula>
<mml:math id="M150">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">ab</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> is unknown in practical use; it is defined here for discussion purposes. The window <inline-formula>
<mml:math id="M151">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">ab</mml:mi>
</mml:msubsup>
<mml:mo>&#x2212;</mml:mo>
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">no</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> represents the judgment margin, which should be large enough for correct detection without overlooking anomalies or misdetecting normal points.</p>
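<p>For illustration, the margin can be computed from labeled test points as sketched below; the labels are assumed to be available only for this evaluation, and the names are placeholders.</p>
<preformat>
import numpy as np

# Sketch of the judgment margin Delta_thr = D_thr^ab - D_thr^no, assuming D(k)
# and ground-truth normal/abnormal labels of the test points are available.
def judgment_margin(D, is_abnormal):
    D = np.asarray(D, dtype=float)
    is_abnormal = np.asarray(is_abnormal, dtype=bool)
    D_thr_no = D[~is_abnormal].max()    # highest peak of D(k) over normal points
    D_thr_ab = D[is_abnormal].min()     # lowest peak of D(k) over abnormal points
    return D_thr_ab - D_thr_no          # positive margin: a perfect D_thr exists
</preformat>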
<p>Since the raw ECG data <inline-formula>
<mml:math id="M152">
<mml:msub>
<mml:mi>E</mml:mi>
<mml:mi mathvariant="italic">input</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> is given by time-series data of electric potential in <italic>mV</italic>, the input-layer neurons convert the potential <inline-formula>
<mml:math id="M153">
<mml:msub>
<mml:mi>E</mml:mi>
<mml:mi mathvariant="italic">input</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> to the firing frequency <inline-formula>
<mml:math id="M154">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">in</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> as follows,</p>
<disp-formula id="E7">
<mml:math id="M155">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">in</mml:mi>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">poisson</mml:mi>
</mml:msub>
<mml:mo>&#x00D7;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>4</mml:mn>
<mml:mo>+</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#x00D7;</mml:mo>
<mml:msub>
<mml:mi>E</mml:mi>
<mml:mi mathvariant="italic">input</mml:mi>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:mrow>
<mml:mn>5</mml:mn>
</mml:mfrac>
<mml:mtext>,</mml:mtext>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M156">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> is the conversion coefficient. Since an input-layer neuron fires with Poisson probability <inline-formula>
<mml:math id="M157">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">in</mml:mi>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula>, a single input is required to be kept for a certain duration (<inline-formula>
<mml:math id="M158">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>) to generate a desired Poisson spike train.</p>
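<p>A short sketch of this encoding, assuming illustrative values for the ECG potential, the conversion coefficient, and the bin length (these values are not prescribed by the text beyond the formula above):</p>
<preformat>
import numpy as np

# Sketch of the input encoding: the ECG potential (mV) is converted into a firing
# frequency F_in(k), and a Poisson spike train is drawn over one bin of length T_bin.
rng = np.random.default_rng(0)

def input_rate(E_input_k, F_poisson):
    """F_in(k) = F_poisson * (4 + 2 * E_input(k)) / 5."""
    return F_poisson * (4.0 + 2.0 * E_input_k) / 5.0

def poisson_spike_times(rate_hz, T_bin_s):
    """Draw Poisson spike times within one bin of length T_bin for the given rate."""
    n_spikes = rng.poisson(rate_hz * T_bin_s)
    return np.sort(rng.uniform(0.0, T_bin_s, n_spikes))

# Example (illustrative values): a 0.5 mV data point held for T_bin = 150 ms.
spike_times = poisson_spike_times(input_rate(0.5, 150.0), 0.150)
</preformat>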
</sec>
<sec id="sec11">
<label>3.3</label>
<title>Simulation results</title>
<sec id="sec12">
<label>3.3.1</label>
<title>Effectiveness of proposed method on anomaly detection</title>
<p>Anomaly detection results of the initial M-SRNN and the M-SRNN reconstructed with both SP and IP are shown in <xref ref-type="fig" rid="fig4">Figures 4A</xref>,<xref ref-type="fig" rid="fig4">B</xref>, respectively. For reconstruction of the M-SRNN, we use the waveform data from <inline-formula>
<mml:math id="M159">
<mml:mn>0</mml:mn>
</mml:math>
</inline-formula> to <inline-formula>
<mml:math id="M160">
<mml:mn>10</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="italic">ms</mml:mi>
</mml:math>
</inline-formula> of ECG waveform No. 14046, which does not include anomalies. The blue and orange lines represent the probability of detecting an abnormal point as abnormal (true positive rate, TPR) and the probability of misdetecting a normal point as abnormal (false positive rate, FPR) at each <inline-formula>
<mml:math id="M161">
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>, respectively. These probabilities are obtained statistically from the 10&#x2009;h data of No. 14046. As shown in <xref ref-type="fig" rid="fig4">Figure 4A</xref>, the initial M-SRNN cannot detect anomalies correctly because <inline-formula>
<mml:math id="M162">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> is negative; the misdetection rate (orange) is always larger than the correct detection rate (blue) at any <inline-formula>
<mml:math id="M163">
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>. On the other hand, since <inline-formula>
<mml:math id="M164">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> is positive, the reconstructed M-SRNN can correctly detect anomalies (<xref ref-type="fig" rid="fig4">Figure 4B</xref>). Indeed, if <inline-formula>
<mml:math id="M165">
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> is selected within the window <inline-formula>
<mml:math id="M166">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>, the <inline-formula>
<mml:math id="M167">
<mml:mn>100</mml:mn>
<mml:mo>%</mml:mo>
</mml:math>
</inline-formula> accuracy of the anomaly detection can be achieved while the misdetection rate is suppressed to <inline-formula>
<mml:math id="M168">
<mml:mn>0</mml:mn>
<mml:mo>%</mml:mo>
</mml:math>
</inline-formula>. <xref ref-type="fig" rid="fig4">Figures 4C</xref>,<xref ref-type="fig" rid="fig4">D</xref> show Receiver Operating Characteristic (ROC) curves of the initial M-SRNN and the reconstructed M-SRNN, respectively. Since <inline-formula>
<mml:math id="M169">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x003C;</mml:mo>
<mml:mn>0</mml:mn>
</mml:math>
</inline-formula> in the case of the initial M-SRNN, the ideal condition for anomaly detection, TPR&#x2009;=&#x2009;1.0 and FPR&#x2009;=&#x2009;0.0, cannot be achieved (<xref ref-type="fig" rid="fig4">Figure 4C</xref>). On the other hand, such a condition is realized for the reconstructed M-SRNN because <inline-formula>
<mml:math id="M170">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x003E;</mml:mo>
<mml:mn>0</mml:mn>
</mml:math>
</inline-formula> (<xref ref-type="fig" rid="fig4">Figure 4D</xref>). Therefore, our proposed method for the M-SRNN reconstruction is effective for detecting abnormalities in periodic waveform data (in practical use of this method, <inline-formula>
<mml:math id="M171">
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> may be defined as an arbitrary value slightly larger than <inline-formula>
<mml:math id="M172">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">no</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> because the actual value of <inline-formula>
<mml:math id="M173">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">ab</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula>, and hence <inline-formula>
<mml:math id="M174">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> is unknown). Note that the M-SRNN should be reconstructed for each individual ECG record (in this case No. 14046). If we are to execute detection tasks for another data set, we need to reconstruct the M-SRNN using a normal part of the target data set prior to the detection task.</p>
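<p>The TPR and FPR curves discussed here can be computed as sketched below; the ground-truth labels of the test points and the grid of candidate levels are assumed inputs for the evaluation.</p>
<preformat>
import numpy as np

# Sketch of the TPR/FPR curves of Figure 4: for each candidate level D_thr,
# count the detected anomalies and the misdetected normal points.
def tpr_fpr_curves(D, is_abnormal, thresholds):
    D = np.asarray(D, dtype=float)
    is_abnormal = np.asarray(is_abnormal, dtype=bool)
    tpr, fpr = [], []
    for thr in thresholds:
        flagged = D > thr
        tpr.append(flagged[is_abnormal].mean())     # detected anomalies / all anomalies
        fpr.append(flagged[~is_abnormal].mean())    # misdetected normals / all normal points
    return np.array(tpr), np.array(fpr)             # plotting (FPR, TPR) gives the ROC curve
</preformat>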
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption>
<p>Analysis of anomaly detection capability in the case of using initial M-SRNN and reconstructed M-SRNN with <inline-formula>
<mml:math id="M175">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>150</mml:mn>
<mml:mspace width="0.25em"/>
<mml:mi mathvariant="italic">ms</mml:mi>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M176">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>2.0</mml:mn>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M177">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.025</mml:mn>
<mml:mspace width="0.25em"/>
<mml:mi>V</mml:mi>
</mml:math>
</inline-formula>, and <inline-formula>
<mml:math id="M178">
<mml:mi>&#x03C3;</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0.3</mml:mn>
</mml:math>
</inline-formula>. <bold>(A,B)</bold> The probability of detecting an abnormal point as abnormal (TPR, blue) and the probability of misdetecting a normal point (FPR, orange) at each <inline-formula>
<mml:math id="M179">
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> using initial M-SRNN and reconstructed M-SRNN, respectively. <bold>(C,D)</bold> ROC for initial M-SRNN and reconstructed M-SRNN, respectively.</p>
</caption>
<graphic xlink:href="fnins-18-1402646-g004.tif"/>
</fig>
</sec>
<sec id="sec13">
<label>3.3.2</label>
<title>Reduction of parameter resolutions toward hardware implementation</title>
<p><xref ref-type="fig" rid="fig5">Figure 5</xref> shows a heat map of <inline-formula>
<mml:math id="M180">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> at each <inline-formula>
<mml:math id="M181">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>R</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M182">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> when the processing time <inline-formula>
<mml:math id="M183">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> per one ECG data point for reconstruction is set to be <inline-formula>
<mml:math id="M184">
<mml:mn>7</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="italic">ms</mml:mi>
</mml:math>
</inline-formula> (A), <inline-formula>
<mml:math id="M185">
<mml:mn>150</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="italic">ms</mml:mi>
</mml:math>
</inline-formula> (B) and <inline-formula>
<mml:math id="M186">
<mml:mn>600</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="italic">ms</mml:mi>
</mml:math>
</inline-formula> (C). These figures show that <inline-formula>
<mml:math id="M187">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> becomes larger as the operation time <inline-formula>
<mml:math id="M188">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> increases, which is a reasonable result because the longer <inline-formula>
<mml:math id="M189">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> becomes, the more information is learned from the data point, leading to higher accuracy of the anomaly detection. In fact, as can be seen in <xref ref-type="fig" rid="fig6">Figure 6</xref>, which shows <inline-formula>
<mml:math id="M190">
<mml:mi>D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> patterns for an abnormal waveform obtained with the M-SRNN reconfigured by <inline-formula>
<mml:math id="M191">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.1</mml:mn>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M192">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.3</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi>V</mml:mi>
</mml:math>
</inline-formula> for each <inline-formula>
<mml:math id="M193">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M194">
<mml:mi>D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> becomes smoother and <inline-formula>
<mml:math id="M195">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mi mathvariant="italic">no</mml:mi>
<mml:mtext>max</mml:mtext>
</mml:msubsup>
</mml:math>
</inline-formula> becomes lower as <inline-formula>
<mml:math id="M196">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> is set longer.</p>
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption>
<p>Heatmaps of <inline-formula>
<mml:math id="M197">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>. <inline-formula>
<mml:math id="M198">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mi mathvariant="bold">A</mml:mi>
</mml:mfenced>
<mml:mn>7</mml:mn>
<mml:mspace width="0.25em"/>
<mml:mi mathvariant="italic">ms</mml:mi>
<mml:mo>,</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mi mathvariant="bold">B</mml:mi>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mn>150</mml:mn>
<mml:mspace width="0.25em"/>
<mml:mi mathvariant="italic">ms</mml:mi>
<mml:mtext>,</mml:mtext>
</mml:math>
</inline-formula> and <bold>(C)</bold> <inline-formula>
<mml:math id="M199">
<mml:mn>600</mml:mn>
<mml:mspace width="0.25em"/>
<mml:mi mathvariant="italic">ms</mml:mi>
</mml:math>
</inline-formula>, respectively. (ECG benchmark No. 14046).</p>
</caption>
<graphic xlink:href="fnins-18-1402646-g005.tif"/>
</fig>
<fig position="float" id="fig6">
<label>Figure 6</label>
<caption>
<p><inline-formula>
<mml:math id="M200">
<mml:mi>D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> when abnormal ECG waveform No. 14046 is detected in SRNN reconstructed with <inline-formula>
<mml:math id="M201">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.1</mml:mn>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M202">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.3</mml:mn>
<mml:mi>V</mml:mi>
</mml:math>
</inline-formula>. <inline-formula>
<mml:math id="M203">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mi mathvariant="bold">A</mml:mi>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mn>7</mml:mn>
<mml:mspace width="0.25em"/>
<mml:mi mathvariant="italic">ms</mml:mi>
<mml:mo>,</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mi mathvariant="bold">B</mml:mi>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mn>150</mml:mn>
<mml:mspace width="0.25em"/>
<mml:mi mathvariant="italic">ms</mml:mi>
<mml:mtext>,</mml:mtext>
</mml:math>
</inline-formula> and <bold>(C)</bold> <inline-formula>
<mml:math id="M204">
<mml:mn>600</mml:mn>
<mml:mspace width="0.25em"/>
<mml:mi mathvariant="italic">ms</mml:mi>
</mml:math>
</inline-formula>, respectively.</p>
</caption>
<graphic xlink:href="fnins-18-1402646-g006.tif"/>
</fig>
</sec>
<sec id="sec14">
<label>3.3.3</label>
<title>Real-time operation for practical applications</title>
<p>For practical applications, it is desirable that abnormal data be detected the moment they occur, and thus real-time operation is required. In this sense, <inline-formula>
<mml:math id="M205">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> should be as short as possible. In the case of ECG anomaly detection, the data are collected at 128 steps/s. Therefore, the learning process and anomaly detection must be performed within <inline-formula>
<mml:math id="M206">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>7</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="italic">ms</mml:mi>
</mml:math>
</inline-formula>. However, as discussed above, such short <inline-formula>
<mml:math id="M207">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> leads to small <inline-formula>
<mml:math id="M208">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> because the learning duration for each data point is insufficient.</p>
<p>We now note that employing a longer <inline-formula>
<mml:math id="M209">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> is effectively equivalent to increasing the number of IP and SP operations within a short <inline-formula>
<mml:math id="M210">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. To increase the number of IP and SP operations, we have to enhance the activities of the neurons, which leaves two options. The first is to enhance the parallelism of the inputs; we increase the number of neurons in the input layer <inline-formula>
<mml:math id="M211">
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi mathvariant="italic">input</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> so that an M-SRNN neuron connected to the input layer receives more spike signals during a short <inline-formula>
<mml:math id="M212">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. The other is to enhance the seriality of the input neuron signals; we increase the rate of Poisson spikes <inline-formula>
<mml:math id="M213">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">Poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> from the input layer. The effects of these two methods are verified by simulation.</p>
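<p>A back-of-the-envelope sketch of why both options help, assuming the input-to-excitatory connection probability of 0.1 from Table 1 and illustrative rate and bin values:</p>
<preformat>
# Sketch: expected number of input spikes one excitatory M-SRNN neuron receives
# within T_bin, assuming P_in = 0.1 and a per-input-neuron rate F_in (illustrative).
def expected_input_spikes(N_input, F_in_hz, T_bin_s, P_in=0.1):
    """Mean spike count ~ (connected input neurons) * rate * bin length."""
    return N_input * P_in * F_in_hz * T_bin_s

# At T_bin = 7 ms and F_in = 150 Hz, raising N_input from 10 to 100 raises the
# expected count per bin tenfold (about 1 spike to about 10 spikes); raising
# F_Poisson, and hence F_in, has the analogous serial effect.
print(expected_input_spikes(10, 150.0, 0.007), expected_input_spikes(100, 150.0, 0.007))
</preformat>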
<p><xref ref-type="fig" rid="fig7">Figure 7</xref> shows the heatmaps of <inline-formula>
<mml:math id="M214">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> for <inline-formula>
<mml:math id="M215">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>7</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="italic">ms</mml:mi>
</mml:math>
</inline-formula> in the cases of <inline-formula>
<mml:math id="M216">
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi mathvariant="italic">input</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>10</mml:mn>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M217">
<mml:mn>100</mml:mn>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M218">
<mml:mn>200</mml:mn>
</mml:math>
</inline-formula>. We observe that <inline-formula>
<mml:math id="M219">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> increases with <inline-formula>
<mml:math id="M220">
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi mathvariant="italic">input</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> in general, indicating that our first idea is effective; real-time anomaly detection without false positive detection is possible by increasing <inline-formula>
<mml:math id="M221">
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi mathvariant="italic">input</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. Note that the binary <inline-formula>
<mml:math id="M222">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M223">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula>, i.e., <inline-formula>
<mml:math id="M224">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>2.0</mml:mn>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M225">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.3</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi>V</mml:mi>
</mml:math>
</inline-formula> result in sufficiently large <inline-formula>
<mml:math id="M226">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> even with <inline-formula>
<mml:math id="M227">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>7</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="italic">ms</mml:mi>
</mml:math>
</inline-formula> in the case of <inline-formula>
<mml:math id="M228">
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi mathvariant="italic">input</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>100</mml:mn>
</mml:math>
</inline-formula>. Thus, a highly parallelized input layer has been shown to be effective for performance improvement with a short <inline-formula>
<mml:math id="M229">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. However, when <inline-formula>
<mml:math id="M230">
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi mathvariant="italic">input</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> is increased too much, the effect becomes negative. As can be seen in <xref ref-type="fig" rid="fig7">Figure 7C</xref>, where <inline-formula>
<mml:math id="M231">
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi mathvariant="italic">input</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>200</mml:mn>
</mml:math>
</inline-formula>, the M-SRNN does not work appropriately when <inline-formula>
<mml:math id="M232">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>2.0</mml:mn>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M233">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2265;</mml:mo>
<mml:mn>0.2</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi>V</mml:mi>
</mml:math>
</inline-formula>. Since the M-SRNN neurons that receive input spikes are always very close to saturation in the case of a large <inline-formula>
<mml:math id="M234">
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi mathvariant="italic">input</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>, precise control of the parameters such as <inline-formula>
<mml:math id="M235">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M236">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula> is required.</p>
<fig position="float" id="fig7">
<label>Figure 7</label>
<caption>
<p>Heatmaps of <inline-formula>
<mml:math id="M237">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> in case of <inline-formula>
<mml:math id="M238">
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi mathvariant="italic">input</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mi mathvariant="bold">A</mml:mi>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mn>10</mml:mn>
<mml:mo>,</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mi mathvariant="bold">B</mml:mi>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mn>100</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="normal">and</mml:mi>
<mml:mspace width="0.25em"/>
<mml:mfenced open="(" close=")">
<mml:mi mathvariant="bold">C</mml:mi>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mn>200</mml:mn>
</mml:math>
</inline-formula> (<inline-formula>
<mml:math id="M239">
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi mathvariant="italic">bin</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>7</mml:mn>
<mml:mi mathvariant="italic">ms</mml:mi>
</mml:math>
</inline-formula>), respectively. (ECG benchmark No. 14046).</p>
</caption>
<graphic xlink:href="fnins-18-1402646-g007.tif"/>
</fig>
<p>To examine the latter idea, we perform the anomaly detection tasks with <inline-formula>
<mml:math id="M240">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">Poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> being varied. In the center of <xref ref-type="fig" rid="fig8">Figure 8</xref>, we plot obtained <inline-formula>
<mml:math id="M241">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> as a function of <inline-formula>
<mml:math id="M242">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">Poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. If increasing <inline-formula>
<mml:math id="M243">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">Poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> does not play an effective role in performance improvement, <inline-formula>
<mml:math id="M244">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> increases just linearly with <inline-formula>
<mml:math id="M245">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">Poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>, as indicated by the red dotted line. In fact, however, we obtain <inline-formula>
<mml:math id="M246">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> above the red line up to <inline-formula>
<mml:math id="M247">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">Poisson</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>1200</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="italic">Hz</mml:mi>
</mml:math>
</inline-formula>, indicating that raising <inline-formula>
<mml:math id="M248">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">Poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> improves the anomaly detection performance of an M-SRNN.</p>
<fig position="float" id="fig8">
<label>Figure 8</label>
<caption>
<p><inline-formula>
<mml:math id="M249">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M250">
<mml:mi>D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> in the case of <inline-formula>
<mml:math id="M251">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>2.0</mml:mn>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M252">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.3</mml:mn>
<mml:mi>V</mml:mi>
</mml:math>
</inline-formula>. The center graph shows the <inline-formula>
<mml:math id="M253">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> against <inline-formula>
<mml:math id="M254">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. The outer diagrams represent <inline-formula>
<mml:math id="M255">
<mml:mi>D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> corresponding to <bold>(A&#x2013;D)</bold> points in the center diagram. <inline-formula>
<mml:math id="M256">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">poisson</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mi mathvariant="bold">A</mml:mi>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mn>150</mml:mn>
<mml:mo>,</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mi mathvariant="bold">B</mml:mi>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mn>750</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="0.35em"/>
<mml:mfenced open="(" close=")">
<mml:mi mathvariant="bold">C</mml:mi>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mn>1200</mml:mn>
</mml:math>
</inline-formula>, and <bold>(D)</bold> <inline-formula>
<mml:math id="M257">
<mml:mn>1500</mml:mn>
<mml:mi mathvariant="italic">Hz</mml:mi>
</mml:math>
</inline-formula>, respectively. (ECG benchmark No. 14046).</p>
</caption>
<graphic xlink:href="fnins-18-1402646-g008.tif"/>
</fig>
<p>We observe in <xref ref-type="fig" rid="fig8">Figures 8A</xref>&#x2013;<xref ref-type="fig" rid="fig8">C</xref> that increasing <inline-formula>
<mml:math id="M258">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">Poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> elevates the base line of <inline-formula>
<mml:math id="M259">
<mml:mi>D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> and magnifies the peaks. This is reasonable because the more input spikes arrive, the more frequently the neurons in the M-SRNN fire, and hence <inline-formula>
<mml:math id="M260">
<mml:mi>D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> is scaled with <inline-formula>
<mml:math id="M261">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">Poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. At the same time, it smooths the variation of <inline-formula>
<mml:math id="M262">
<mml:mi>D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula>, indicating improved learning performance due to the increased IP and SP operations. This results in <inline-formula>
<mml:math id="M263">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> being larger than the red dotted line. When <inline-formula>
<mml:math id="M264">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">Poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> is increased further to <inline-formula>
<mml:math id="M265">
<mml:mn>1500</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="italic">Hz</mml:mi>
</mml:math>
</inline-formula>, the peaks corresponding to the abnormal data in the original waveform saturate, as can be seen in <xref ref-type="fig" rid="fig8">Figure 8D</xref>. This is because of the refractory period of the neurons. Since a neuron cannot fire at intervals shorter than its refractory period, its firing frequency has an upper limit. The saturation observed in <xref ref-type="fig" rid="fig8">Figure 8D</xref> is interpreted as the case where the firing frequency at the anomaly data points reaches this limit. As a result, <inline-formula>
<mml:math id="M266">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> at <inline-formula>
<mml:math id="M267">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">poisson</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>1500</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="italic">Hz</mml:mi>
</mml:math>
</inline-formula> is suppressed and falls below the red dotted line. This behavior can be seen clearly in <xref ref-type="fig" rid="fig9">Figure 9</xref>, which shows the evolution of <inline-formula>
<mml:math id="M268">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">no</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M269">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">ab</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> with <inline-formula>
<mml:math id="M270">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> of the input neurons. We observe that <inline-formula>
<mml:math id="M271">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mi mathvariant="italic">no</mml:mi>
<mml:mtext>max</mml:mtext>
</mml:msubsup>
</mml:math>
</inline-formula> increases linearly, while <inline-formula>
<mml:math id="M272">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">ab</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> increases only up to <inline-formula>
<mml:math id="M273">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">poisson</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>1200</mml:mn>
<mml:mi mathvariant="italic">Hz</mml:mi>
<mml:mtext>.</mml:mtext>
</mml:math>
</inline-formula> For <inline-formula>
<mml:math id="M274">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">poisson</mml:mi>
</mml:msub>
<mml:mo>&#x2265;</mml:mo>
<mml:mn>1200</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi mathvariant="italic">Hz</mml:mi>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M275">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">ab</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> reaches its limit and only <inline-formula>
<mml:math id="M276">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">no</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> increases, resulting in a smaller <inline-formula>
<mml:math id="M277">
<mml:msub>
<mml:mi>&#x0394;</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>. We note that the results shown in <xref ref-type="fig" rid="fig8">Figure 8</xref> are obtained with <inline-formula>
<mml:math id="M278">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>2.0</mml:mn>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M279">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.3</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi>V</mml:mi>
</mml:math>
</inline-formula> i.e., binarized <inline-formula>
<mml:math id="M280">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M281">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula>.</p>
<fig position="float" id="fig9">
<label>Figure 9</label>
<caption>
<p><inline-formula>
<mml:math id="M282">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">ab</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M283">
<mml:msubsup>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mi mathvariant="italic">no</mml:mi>
</mml:msubsup>
</mml:math>
</inline-formula> with <inline-formula>
<mml:math id="M284">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi mathvariant="italic">SDSP</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>2.0</mml:mn>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M285">
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.3</mml:mn>
<mml:mi>V</mml:mi>
</mml:math>
</inline-formula> against <inline-formula>
<mml:math id="M286">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. (ECG benchmark No. 14046).</p>
</caption>
<graphic xlink:href="fnins-18-1402646-g009.tif"/>
</fig>
<p>Notably, we have found that binary <inline-formula>
<mml:math id="M287">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M288">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula> may be employed if the input layer is optimized. This is highly advantageous for hardware implementation. For <inline-formula>
<mml:math id="M289">
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> (and also for <inline-formula>
<mml:math id="M290">
<mml:msubsup>
<mml:mi>V</mml:mi>
<mml:mi mathvariant="italic">Lthr</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">UP</mml:mi>
<mml:mo stretchy="true">/</mml:mo>
<mml:mi mathvariant="italic">Down</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>), it suffices to prepare the smallest 2-input multiplexers and only two voltage lines (see <xref rid="SM1" ref-type="supplementary-material">Supplementary materials</xref>). More remarkably, <inline-formula>
<mml:math id="M291">
<mml:mi>W</mml:mi>
</mml:math>
</inline-formula> can be reduced to binary. This means that the synapses require neither an area-hungry multi-bit SRAM array nor emerging analog memories; small 1-bit latches suffice (see <xref rid="SM1" ref-type="supplementary-material">Supplementary materials</xref>). Since the number of synapses scales with the square of the number of neurons, this result has a large impact on the SRNN chip size.</p>
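<p>As a rough illustration of this scaling argument, the following Python sketch compares the synapse-memory footprint of multi-bit and binary weights and models the binarized threshold as a choice between two voltage lines; the neuron count, bit width, and voltage values are illustrative assumptions, not design values of this work.</p>
<preformat>
# A back-of-the-envelope sketch (not a circuit model): the synapse memory of
# a randomly connected network grows as N^2, so every bit saved per weight is
# saved N^2 times, and a binarized V_thr needs only a 2-input multiplexer per
# neuron. N, the multi-bit width, and the voltages are illustrative assumptions.

def synapse_storage_bits(n_neurons, bits_per_weight):
    """Total weight storage for an N x N randomly connected synapse array."""
    return n_neurons ** 2 * bits_per_weight

N = 1024
print(synapse_storage_bits(N, bits_per_weight=8))   # multi-bit SRAM-style weights
print(synapse_storage_bits(N, bits_per_weight=1))   # binary weights in 1-bit latches

def select_threshold(select_bit, v_line_low, v_line_high):
    """Binarized V_thr: a 2-input multiplexer choosing one of two voltage lines."""
    return v_line_high if select_bit else v_line_low

print(select_threshold(1, v_line_low=0.3, v_line_high=0.6))
</preformat>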
<p>Thus, optimization of the input has a large impact on both the performance and the physical chip size of the SRNN. Whether we optimize <inline-formula>
<mml:math id="M292">
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi mathvariant="italic">input</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> or <inline-formula>
<mml:math id="M293">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi mathvariant="italic">Poisson</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> may be left to engineering convenience, and it is also possible to optimize both. As we have seen in <xref ref-type="fig" rid="fig7">Figures 7</xref>, <xref ref-type="fig" rid="fig8">8</xref>, the former smooths the normal data region better than the latter. From the viewpoint of hardware implementation, on the other hand, the latter is more favorable because the former requires a physical extension of the input layer. For the latter, we only have to tune the conversion rate from raw input data to spike trains, which may be done externally. Therefore, the parameters of the input layer should be designed carefully, taking the conditions discussed above into consideration.</p>
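<p>As a minimal illustration of the latter option, the Python sketch below converts normalized raw samples into Poisson spike trains whose maximum rate acts as the tunable conversion rate; the function name, duration, and values are illustrative assumptions and do not reproduce the input stage of this work.</p>
<preformat>
import numpy as np

# A minimal sketch, under assumed names and values, of tuning the serial side
# of the input: raw samples normalized to [0, 1] are converted externally into
# Poisson spike trains whose maximum rate f_poisson_hz is a free knob, so no
# additional physical input channels are needed.

def poisson_encode(samples, f_poisson_hz, duration_s=0.1, dt_s=1e-3, rng=None):
    """Return an (n_steps, n_channels) boolean spike raster."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(round(duration_s / dt_s))
    rates = np.clip(np.asarray(samples, dtype=float), 0.0, 1.0) * f_poisson_hz
    p_spike = rates * dt_s                     # per-step spike probability
    return rng.random((n_steps, len(rates))) &lt; p_spike

# usage: one normalized sample fanned out to four input channels at 1,200 Hz
spikes = poisson_encode([0.7, 0.7, 0.7, 0.7], f_poisson_hz=1200.0)
print(spikes.shape, spikes.sum())
</preformat>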
</sec>
</sec>
</sec>
<sec sec-type="discussion" id="sec15">
<label>4</label>
<title>Discussion</title>
<p>Lazar et al. proposed introducing two plasticity mechanisms, SP and IP, into an RNN to reconstruct its network structure during the training phase (<xref ref-type="bibr" rid="ref36">Lazar et al., 2009</xref>). While software implementation of SP and IP is quite simple, their hardware implementation requires some effort.</p>
<p>With regard to the IP operation, Lazar et al. adjusted the firing threshold of each neuron according to its firing rate at every time step. In a hardware implementation, constantly controlling the thresholds of all of the <italic>N</italic> neurons is not realistic. Therefore, we proposed a mechanism that regulates the threshold of a neuron in an event-driven way: each neuron changes its firing threshold when it fires, according to whether its activity is higher or lower than predetermined levels. This event-driven mechanism frees us from designing a circuit for precise control of the thresholds. As discussed by Lazar et al., the thresholds must be controlled with an accuracy of <inline-formula>
<mml:math id="M294">
<mml:mn>1</mml:mn>
<mml:mo stretchy="true">/</mml:mo>
<mml:mn>1000</mml:mn>
</mml:math>
</inline-formula> if it is done constantly, which would require substantial hardware resources and consume considerable power as well. Our event-driven method, on the other hand, has been shown to allow stepwise control of the thresholds with only a few gradations, which is highly advantageous for hardware implementation.</p>
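<p>The following Python sketch illustrates the event-driven, stepwise threshold control described above; the number of gradations, the activity bounds, and the update logic are illustrative assumptions rather than the exact rule used in this work.</p>
<preformat>
import numpy as np

# A minimal sketch of event-driven intrinsic plasticity: the firing threshold
# is touched only when the neuron fires, and it moves by one step among a few
# allowed gradations. Level values and activity bounds are assumptions.

THR_LEVELS = np.array([0.3, 0.6, 0.9, 1.2])   # a few allowed V_thr gradations (V)
RATE_HIGH = 20.0                               # upper activity bound (Hz), assumed
RATE_LOW = 5.0                                 # lower activity bound (Hz), assumed

def on_spike(neuron):
    """Update the firing threshold only at spike events, never at every time step."""
    rate = neuron["spike_count"] / neuron["window_s"]      # recent firing rate
    if rate &gt; RATE_HIGH:
        neuron["thr_index"] = min(neuron["thr_index"] + 1, len(THR_LEVELS) - 1)
    elif rate &lt; RATE_LOW:
        neuron["thr_index"] = max(neuron["thr_index"] - 1, 0)
    neuron["V_thr"] = THR_LEVELS[neuron["thr_index"]]

# usage
neuron = {"spike_count": 12, "window_s": 0.5, "thr_index": 1, "V_thr": 0.6}
on_spike(neuron)          # rate = 24 Hz, above the upper bound
print(neuron["V_thr"])    # threshold raised one step, to 0.9
</preformat>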
<p>Another way to realize the IP mechanism is to regulate the current of a LIF neuron (<xref ref-type="bibr" rid="ref27">Holt and Koch, 1997</xref>). In previous studies, the current value was adjusted by changing resistance values (<xref ref-type="bibr" rid="ref14">Dalgaty et al., 2019</xref>; <xref ref-type="bibr" rid="ref64">Zhang et al., 2021</xref>). This can be achieved by using variable resistors such as memristors (<xref ref-type="bibr" rid="ref14">Dalgaty et al., 2019</xref>; <xref ref-type="bibr" rid="ref49">Payvand et al., 2022</xref>) or by selecting among several fixed resistors prepared in advance. For the former method, precise control of the resistance is the central technical issue, and it remains a big challenge because current memristors show large variation (<xref ref-type="bibr" rid="ref14">Dalgaty et al., 2019</xref>). Payvand et al. argued that the variation and the stochasticity of rewriting may even lead to better performance, but further studies, including practical hardware implementation and general verification, are yet to be done. The latter method requires a set of large resistors (~<inline-formula>
<mml:math id="M295">
<mml:mn>100</mml:mn>
<mml:mspace width="thickmathspace"/>
<mml:mi>M</mml:mi>
<mml:mi>&#x03A9;</mml:mi>
</mml:math>
</inline-formula>) for each neuron, which is not favorable for hardware implementation because such resistors occupy a large chip area. We believe that a stepwise change of the firing threshold is the most favorable implementation of IP.</p>
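<p>For completeness, the generic LIF sketch below shows why this alternative route requires controlling resistances of that magnitude: the firing rate depends strongly on the leak resistance for a fixed input current. All parameter values (membrane capacitance, threshold, input current) are illustrative assumptions, not those of the neuron model used in this work.</p>
<preformat>
# A generic leaky integrate-and-fire sketch, with assumed parameters, showing
# how changing the (leak) resistance R shifts the steady-state potential I*R
# and hence the firing rate, which is the mechanism behind resistance-based IP.

def lif_rate(i_in, r_leak, c_mem=100e-12, v_thr=0.3, v_reset=0.0, t_sim=1.0, dt=1e-5):
    """Firing rate (Hz) of a LIF neuron, simulated with a simple Euler loop."""
    v, spikes = v_reset, 0
    tau = r_leak * c_mem
    for _ in range(int(t_sim / dt)):
        v += dt * (-v / tau + i_in / c_mem)
        if v &gt;= v_thr:
            v, spikes = v_reset, spikes + 1
    return spikes / t_sim

# the same input current gives very different rates for different resistances
for r in (50e6, 100e6, 200e6):          # ~100 MOhm-scale resistors, as in the text
    print(r, lif_rate(i_in=5e-9, r_leak=r))
</preformat>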
<p>For implementation of the SP mechanism, STDP (<xref ref-type="bibr" rid="ref37">Legenstein et al., 2005</xref>; <xref ref-type="bibr" rid="ref46">Ning et al., 2015</xref>; <xref ref-type="bibr" rid="ref56">Srinivasan et al., 2017</xref>) is widely known as a biologically plausible synaptic update rule, but it is not hardware friendly, as discussed in the introduction. Hence, recent neuromorphic chips tend to employ SDSP (<xref ref-type="bibr" rid="ref7">Brader et al., 2007</xref>; <xref ref-type="bibr" rid="ref42">Mitra et al., 2009</xref>; <xref ref-type="bibr" rid="ref46">Ning et al., 2015</xref>; <xref ref-type="bibr" rid="ref23">Frenkel et al., 2019</xref>; <xref ref-type="bibr" rid="ref26">Gurunathan and Iyer, 2020</xref>; <xref ref-type="bibr" rid="ref49">Payvand et al., 2022</xref>; <xref ref-type="bibr" rid="ref21">Frenkel et al., 2023</xref>). However, SDSP in its original form cannot be implemented concurrently with threshold-controlled IP, because the latter may push the upper limit of the membrane potential (i.e., the firing threshold) below the synaptic potentiation threshold. Our proposal, in which the synaptic update thresholds are synchronized with the firing threshold, realizes concurrent implementation of the two, and their interplay led to successful learning and anomaly detection on ECG benchmark data (<xref ref-type="bibr" rid="ref51">PhysioNet, 1999</xref>; <xref ref-type="bibr" rid="ref24">Goldberger et al., 2000</xref>; <xref ref-type="bibr" rid="ref43">Moody and Mark, 2001</xref>), even with binary thresholds and weights, provided that the parallelism and the serial rate of the input are well optimized. This is highly advantageous for analog circuit implementation from the viewpoints of circuit complexity and size.</p>
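<p>The Python sketch below conveys the idea of synchronizing the synaptic update thresholds with the firing threshold for a binary SDSP rule; the threshold fractions and values are illustrative assumptions, and the calcium-based stop-learning condition of full SDSP is omitted.</p>
<preformat>
# A minimal sketch, with assumed parameter names, of binary SDSP updates whose
# potentiation/depression thresholds track the current firing threshold, so
# that event-driven IP (which moves V_thr) cannot push V_thr below the
# potentiation threshold. The fractions are illustrative assumptions.

UP_FRACTION = 0.8     # V_Lthr^UP as a fraction of V_thr (assumed)
DOWN_FRACTION = 0.4   # V_Lthr^Down as a fraction of V_thr (assumed)

def sdsp_update(w, v_mem, v_thr):
    """One binary SDSP step, applied at a pre-synaptic spike event.

    w     -- current binary weight (0 or 1)
    v_mem -- post-synaptic membrane potential at the spike time
    v_thr -- current firing threshold of the post-synaptic neuron
    """
    v_up = UP_FRACTION * v_thr        # learning thresholds synchronized
    v_down = DOWN_FRACTION * v_thr    # with the firing threshold
    if v_mem &gt;= v_up:
        return 1                      # potentiate to the high binary state
    if v_mem &lt;= v_down:
        return 0                      # depress to the low binary state
    return w                          # otherwise keep the weight

# usage: even after IP steps the threshold down to 0.6, the potentiation
# threshold (0.48) stays below it, so learning remains possible
print(sdsp_update(w=0, v_mem=0.5, v_thr=0.6))   # -&gt; 1
</preformat>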
</sec>
</body>
<back>
<sec sec-type="data-availability" id="sec16">
<title>Data availability statement</title>
<p>Publicly available datasets were analyzed in this study. This data can be found here: <ext-link xlink:href="https://physionet.org/content/ltdb/1.0.0/" ext-link-type="uri">https://physionet.org/content/ltdb/1.0.0/</ext-link>.</p>
</sec>
<sec sec-type="author-contributions" id="sec17">
<title>Author contributions</title>
<p>KN: Conceptualization, Formal analysis, Methodology, Visualization, Writing &#x2013; original draft. YN: Conceptualization, Formal analysis, Methodology, Supervision, Validation, Writing &#x2013; original draft, Writing &#x2013; review &#x0026; editing.</p>
</sec>
<sec sec-type="funding-information" id="sec18">
<title>Funding</title>
<p>The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.</p>
</sec>
<sec sec-type="COI-statement" id="sec19">
<title>Conflict of interest</title>
<p>KN and YN were employed by Toshiba Corporation.</p>
</sec>
<sec sec-type="disclaimer" id="sec20">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec sec-type="supplementary-material" id="sec21">
<title>Supplementary material</title>
<p>The Supplementary material for this article can be found online at: <ext-link xlink:href="https://www.frontiersin.org/articles/10.3389/fnins.2024.1402646/full#supplementary-material" ext-link-type="uri">https://www.frontiersin.org/articles/10.3389/fnins.2024.1402646/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.PDF" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ambrogio</surname> <given-names>S.</given-names></name> <name><surname>Ciocchini</surname> <given-names>N.</given-names></name> <name><surname>Laudato</surname> <given-names>M.</given-names></name> <name><surname>Milo</surname> <given-names>V.</given-names></name> <name><surname>Pirovano</surname> <given-names>A.</given-names></name> <name><surname>Fantini</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Unsupervised learning by spike timing dependent plasticity in phase change memory (pcm) synapses</article-title>. <source>Front. Neurosci.</source> <volume>10</volume>:<fpage>56</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnins.2016.00056</pub-id>, PMID: <pub-id pub-id-type="pmid">27013934</pub-id></citation>
</ref>
<ref id="ref2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ambrogio</surname> <given-names>S.</given-names></name> <name><surname>Narayanan</surname> <given-names>P.</given-names></name> <name><surname>Tsai</surname> <given-names>H.</given-names></name> <name><surname>Shelby</surname> <given-names>R. M.</given-names></name> <name><surname>Boybat</surname> <given-names>I.</given-names></name> <name><surname>Nolfo</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Equivalent-accuracy accelerated neural-network training using analogue memory</article-title>. <source>Nature</source> <volume>558</volume>, <fpage>60</fpage>&#x2013;<lpage>67</lpage>. doi: <pub-id pub-id-type="doi">10.1038/s41586-018-0180-5</pub-id>, PMID: <pub-id pub-id-type="pmid">29875487</pub-id></citation>
</ref>
<ref id="ref3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amirshahi</surname> <given-names>A.</given-names></name> <name><surname>Hashemi</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>ECG classification algorithm based on STDP and R-STDP neural networks for real-time monitoring on ultra low-power personal wearable devices</article-title>. <source>IEEE Trans. Biomed. Circ. Syst.</source> <volume>13</volume>, <fpage>1483</fpage>&#x2013;<lpage>1493</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TBCAS.2019.2948920</pub-id>, PMID: <pub-id pub-id-type="pmid">31647445</pub-id></citation>
</ref>
<ref id="ref4">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Bartolozzi</surname> <given-names>C.</given-names></name> <name><surname>Nikolayeva</surname> <given-names>O.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name></person-group> (<year>2008</year>). &#x201C;Implementing homeostatic plasticity in VLSI networks of spiking neurons.&#x201D; in <italic>Proc. of 15th IEEE International Conference on Electronics, Circuits and Systems</italic>. pp. 682&#x2013;685.</citation>
</ref>
<ref id="ref5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bauer</surname> <given-names>F. C.</given-names></name> <name><surname>Muir</surname> <given-names>D. R.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name></person-group> (<year>2019</year>). <article-title>Real-time ultra-low power ECG anomaly detection using an event-driven neuromorphic processor</article-title>. <source>IEEE Trans. Biomed. Circ. Syst.</source> <volume>13</volume>, <fpage>1575</fpage>&#x2013;<lpage>1582</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TBCAS.2019.2953001</pub-id>, PMID: <pub-id pub-id-type="pmid">31715572</pub-id></citation>
</ref>
<ref id="ref6">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Bourdoukan</surname> <given-names>R.</given-names></name> <name><surname>Deneve</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). Enforcing balance allows local supervised learning in spiking recurrent networks. Advances in Neural Information Processing Systems.</citation>
</ref>
<ref id="ref7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brader</surname> <given-names>J. M.</given-names></name> <name><surname>Senn</surname> <given-names>W.</given-names></name> <name><surname>Fusi</surname> <given-names>S.</given-names></name></person-group> (<year>2007</year>). <article-title>Learning real-world stimuli in a neural network with spike-driven synaptic dynamics</article-title>. <source>Neural Comput.</source> <volume>19</volume>, <fpage>2881</fpage>&#x2013;<lpage>2912</lpage>. doi: <pub-id pub-id-type="doi">10.1162/neco.2007.19.11.2881</pub-id>, PMID: <pub-id pub-id-type="pmid">17883345</pub-id></citation>
</ref>
<ref id="ref8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cai</surname> <given-names>F.</given-names></name> <name><surname>Kumar</surname> <given-names>S.</given-names></name> <name><surname>Vaerenbergh</surname> <given-names>T. V.</given-names></name> <name><surname>Sheng</surname> <given-names>X.</given-names></name> <name><surname>Liu</surname> <given-names>R.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Power-efficient combinatorial optimization using intrinsic noise in memristor hopfield neural networks</article-title>. <source>Nat. Elect.</source> <volume>3</volume>, <fpage>409</fpage>&#x2013;<lpage>418</lpage>. doi: <pub-id pub-id-type="doi">10.1038/s41928-020-0436-6</pub-id></citation>
</ref>
<ref id="ref9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chicca</surname> <given-names>E.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name></person-group> (<year>2020</year>). <article-title>A recipe for creating ideal hybrid memristive-CMOS neuromorphic processing systems</article-title>. <source>Appl. Phys. Lett.</source> <volume>116</volume>:<fpage>120501</fpage>. doi: <pub-id pub-id-type="doi">10.1063/1.5142089</pub-id></citation>
</ref>
<ref id="ref10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chicca</surname> <given-names>E.</given-names></name> <name><surname>Stefanini</surname> <given-names>F.</given-names></name> <name><surname>Bartolozzi</surname> <given-names>C.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name></person-group> (<year>2014</year>). <article-title>Neuromorphic electronic circuits for building autonomous cognitive systems</article-title>. <source>Proc. IEEE</source> <volume>102</volume>, <fpage>1367</fpage>&#x2013;<lpage>1388</lpage>. doi: <pub-id pub-id-type="doi">10.1109/JPROC.2014.2313954</pub-id></citation>
</ref>
<ref id="ref11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Covi</surname> <given-names>E.</given-names></name> <name><surname>Brivio</surname> <given-names>S.</given-names></name> <name><surname>Serb</surname> <given-names>A.</given-names></name> <name><surname>Prodromakis</surname> <given-names>T.</given-names></name> <name><surname>Fanciulli</surname> <given-names>M.</given-names></name> <name><surname>Spiga</surname> <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>Analog memristive synapse in spiking networks implementing unsupervised learning</article-title>. <source>Front. Neurosci.</source> <volume>10</volume>:<fpage>482</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnins.2016.00482</pub-id>, PMID: <pub-id pub-id-type="pmid">27826226</pub-id></citation>
</ref>
<ref id="ref12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Covi</surname> <given-names>E.</given-names></name> <name><surname>Donati</surname> <given-names>E.</given-names></name> <name><surname>Liang</surname> <given-names>X.</given-names></name> <name><surname>Kappel</surname> <given-names>D.</given-names></name> <name><surname>Heidari</surname> <given-names>H.</given-names></name> <name><surname>Payvand</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Adaptive extreme edge computing for wearable devices</article-title>. <source>Front. Neurosci.</source> <volume>15</volume>:<fpage>611300</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnins.2021.611300</pub-id>, PMID: <pub-id pub-id-type="pmid">34045939</pub-id></citation>
</ref>
<ref id="ref13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dalgaty</surname> <given-names>T.</given-names></name> <name><surname>Castellani</surname> <given-names>N.</given-names></name> <name><surname>Turck</surname> <given-names>C.</given-names></name> <name><surname>Harabi</surname> <given-names>K.-E.</given-names></name> <name><surname>Querlioz</surname> <given-names>D.</given-names></name> <name><surname>Vianello</surname> <given-names>E.</given-names></name></person-group> (<year>2021</year>). <article-title>In situ learning using intrinsic memristor variability via Markov chain Monte Carlo sampling</article-title>. <source>Nat. Elect.</source> <volume>4</volume>, <fpage>151</fpage>&#x2013;<lpage>161</lpage>. doi: <pub-id pub-id-type="doi">10.1038/s41928-020-00523-3</pub-id></citation>
</ref>
<ref id="ref14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dalgaty</surname> <given-names>T.</given-names></name> <name><surname>Payvand</surname> <given-names>M.</given-names></name> <name><surname>Moro</surname> <given-names>F.</given-names></name> <name><surname>Ly</surname> <given-names>D. R. B.</given-names></name> <name><surname>Pebay-Peyroula</surname> <given-names>F.</given-names></name> <name><surname>Casas</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Hybrid neuromorphic circuits exploiting non-conventional properties of RRAM for massively parallel local plasticity mechanisms</article-title>. <source>APL Materials</source> <volume>7</volume>:<fpage>8663</fpage>. doi: <pub-id pub-id-type="doi">10.1063/1.5108663</pub-id></citation>
</ref>
<ref id="ref15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Das</surname> <given-names>A.</given-names></name> <name><surname>Pradhapan</surname> <given-names>P.</given-names></name> <name><surname>Groenendaal</surname> <given-names>W.</given-names></name> <name><surname>Adiraju</surname> <given-names>P.</given-names></name> <name><surname>Rajan</surname> <given-names>R. T.</given-names></name> <name><surname>Catthoor</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Unsupervised heart-rate estimation in wearables with liquid states and a probabilistic readout</article-title>. <source>Neural Netw.</source> <volume>99</volume>, <fpage>134</fpage>&#x2013;<lpage>147</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neunet.2017.12.015</pub-id>, PMID: <pub-id pub-id-type="pmid">29414535</pub-id></citation>
</ref>
<ref id="ref16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davies</surname> <given-names>M.</given-names></name> <name><surname>Srinivasa</surname> <given-names>N.</given-names></name> <name><surname>Lin</surname> <given-names>T. H.</given-names></name> <name><surname>Chinya</surname> <given-names>G.</given-names></name> <name><surname>Cao</surname> <given-names>Y.</given-names></name> <name><surname>Choday</surname> <given-names>S. H.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Loihi: a neuromorphic manycore processor with on-chip learning</article-title>. <source>IEEE Micro.</source> <volume>38</volume>, <fpage>82</fpage>&#x2013;<lpage>99</lpage>. doi: <pub-id pub-id-type="doi">10.1109/MM.2018.112130359</pub-id></citation>
</ref>
<ref id="ref17">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Demirag</surname> <given-names>Y.</given-names></name> <name><surname>Moro</surname> <given-names>F.</given-names></name> <name><surname>Dalgaty</surname> <given-names>T.</given-names></name> <name><surname>Navarro</surname> <given-names>G.</given-names></name> <name><surname>Frenkel</surname> <given-names>C.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2021</year>). &#x201C;PCM-trace: scalable synaptic eligibility traces with resistivity drift of phase-change materials.&#x201D; in <italic>Proc. of 2021 IEEE International Symposium on Circuits and Systems</italic>. pp. 1&#x2013;5.</citation>
</ref>
<ref id="ref18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Desai</surname> <given-names>N. S.</given-names></name> <name><surname>Rutherford</surname> <given-names>L. C.</given-names></name> <name><surname>Turrigiano</surname> <given-names>G. G.</given-names></name></person-group> (<year>1999</year>). <article-title>Plasticity in the intrinsic excitability of cortical pyramidal neurons</article-title>. <source>Nat. Neurosci.</source> <volume>2</volume>, <fpage>515</fpage>&#x2013;<lpage>520</lpage>. doi: <pub-id pub-id-type="doi">10.1038/9165</pub-id>, PMID: <pub-id pub-id-type="pmid">10448215</pub-id></citation>
</ref>
<ref id="ref19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Diehl</surname> <given-names>P. U.</given-names></name> <name><surname>Cook</surname> <given-names>M.</given-names></name></person-group> (<year>2015</year>). <article-title>Unsupervised learning of digit recognition using spike-timing-dependent plasticity</article-title>. <source>Front. Comput. Neurosci.</source> <volume>9</volume>:<fpage>99</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fncom.2015.00099</pub-id>, PMID: <pub-id pub-id-type="pmid">26941637</pub-id></citation>
</ref>
<ref id="ref20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Faria</surname> <given-names>R.</given-names></name> <name><surname>Camsari</surname> <given-names>K. Y.</given-names></name> <name><surname>Datta</surname> <given-names>S.</given-names></name></person-group> (<year>2018</year>). <article-title>Implementing bayesian networks with embedded stochastic MRAM</article-title>. <source>AIP Adv.</source> <volume>8</volume>:<fpage>1332</fpage>. doi: <pub-id pub-id-type="doi">10.1063/1.5021332</pub-id></citation>
</ref>
<ref id="ref21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Frenkel</surname> <given-names>C.</given-names></name> <name><surname>Bol</surname> <given-names>D.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name></person-group> (<year>2023</year>). <article-title>Bottom-up and top-down neural processing systems design: neuromorphic intelligence as the convergence of natural and artificial intelligence</article-title>. <source>Proc. IEEE</source> <volume>28</volume>:<fpage>1288</fpage>. doi: <pub-id pub-id-type="doi">10.48550/arXiv.2106.01288</pub-id></citation>
</ref>
<ref id="ref22">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Frenkel</surname> <given-names>C.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name></person-group> (<year>2022</year>). ReckOn: a 28nm sub-mm2 task-agnostic spiking recurrent neural network processor enabling on-Chip learning over second-long timescales. IEEE International Solid-State Circuits Conference.</citation>
</ref>
<ref id="ref23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Frenkel</surname> <given-names>C.</given-names></name> <name><surname>Lefebvre</surname> <given-names>M.</given-names></name> <name><surname>Legat</surname> <given-names>J. D.</given-names></name> <name><surname>Bol</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>A 0.086-mm<sup>2</sup> 12.7-pJ/SOP 64k-synapse 256-neuron online-learning digital spiking neuromorphic processor in 28nm CMOS</article-title>. <source>IEEE Trans. Biomed. Circ. Syst.</source> <volume>13</volume>, <fpage>145</fpage>&#x2013;<lpage>158</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TBCAS.2018.2880425</pub-id>, PMID: <pub-id pub-id-type="pmid">30418919</pub-id></citation>
</ref>
<ref id="ref24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goldberger</surname> <given-names>A.</given-names></name> <name><surname>Amaral</surname> <given-names>L.</given-names></name> <name><surname>Glass</surname> <given-names>L.</given-names></name> <name><surname>Hausdorff</surname> <given-names>J.</given-names></name> <name><surname>Ivanov</surname> <given-names>P. C.</given-names></name> <name><surname>Mark</surname> <given-names>R.</given-names></name> <etal/></person-group>. (<year>2000</year>). <article-title>PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals</article-title>. <source>Circulation</source> <volume>101</volume>, <fpage>e215</fpage>&#x2013;<lpage>e220</lpage>. doi: <pub-id pub-id-type="doi">10.1161/01.cir.101.23.e215</pub-id>, PMID: <pub-id pub-id-type="pmid">10851218</pub-id></citation>
</ref>
<ref id="ref25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goodman</surname> <given-names>D. F. M.</given-names></name> <name><surname>Brette</surname> <given-names>R.</given-names></name></person-group> (<year>2008</year>). <article-title>Brian: a simulator for spiking neural networks in python</article-title>. <source>Front. Neuroinform.</source> <volume>2</volume>:<fpage>5</fpage>. doi: <pub-id pub-id-type="doi">10.3389/neuro.11.005.2008</pub-id>, PMID: <pub-id pub-id-type="pmid">19115011</pub-id></citation>
</ref>
<ref id="ref26">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Gurunathan</surname> <given-names>A.</given-names></name> <name><surname>Iyer</surname> <given-names>L.</given-names></name></person-group> (<year>2020</year>). &#x201C;Spurious learning in networks with spike driven synaptic plasticity.&#x201D; in <italic>International Conference on Neuromorphic Systems</italic>. pp. 1&#x2013;8.</citation>
</ref>
<ref id="ref27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Holt</surname> <given-names>G. R.</given-names></name> <name><surname>Koch</surname> <given-names>C.</given-names></name></person-group> (<year>1997</year>). <article-title>Shunting inhibition does not have a divisive effect on firing rates</article-title>. <source>Neural Comput.</source> <volume>9</volume>, <fpage>1001</fpage>&#x2013;<lpage>1013</lpage>. doi: <pub-id pub-id-type="doi">10.1162/neco.1997.9.5.1001</pub-id>, PMID: <pub-id pub-id-type="pmid">9188191</pub-id></citation>
</ref>
<ref id="ref28">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Indiveri</surname> <given-names>G.</given-names></name> <name><surname>Fusi</surname> <given-names>S.</given-names></name></person-group> (<year>2007</year>). &#x201C;Spike-based learning in VLSI networks of integrate-and-fire neurons.&#x201D; in <italic>Proc. of 2007 IEEE International Symposium on Circuits and Systems</italic>.</citation>
</ref>
<ref id="ref29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Indiveri</surname> <given-names>G.</given-names></name> <name><surname>Linares-Barranco</surname> <given-names>B.</given-names></name> <name><surname>Hamilton</surname> <given-names>T. J.</given-names></name> <name><surname>van Schaik</surname> <given-names>A.</given-names></name> <name><surname>Etienne-Cummings</surname> <given-names>R.</given-names></name> <name><surname>Delbruck</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>Neuromorphic silicon neuron circuits</article-title>. <source>Front. Neurosci.</source> <volume>5</volume>:<fpage>73</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnins.2011.00073</pub-id>, PMID: <pub-id pub-id-type="pmid">21747754</pub-id></citation>
</ref>
<ref id="ref30">
<citation citation-type="other"><person-group person-group-type="author">
<name><surname>Jaeger</surname> <given-names>H.</given-names></name>
</person-group> (<year>2001</year>) The echo state approach to analysing and training recurrent neural networks with an erratum note. GMD Report.</citation>
</ref>
<ref id="ref31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kadmon</surname> <given-names>J.</given-names></name> <name><surname>Sompolinsky</surname> <given-names>H.</given-names></name></person-group> (<year>2015</year>). <article-title>Transition to chaos in random neuronal networks</article-title>. <source>Phys. Rev. X</source> <volume>5</volume>:<fpage>041030</fpage>. doi: <pub-id pub-id-type="doi">10.1103/PhysRevX.5.041030</pub-id>, PMID: <pub-id pub-id-type="pmid">37656915</pub-id></citation>
</ref>
<ref id="ref32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kiranyaz</surname> <given-names>S.</given-names></name> <name><surname>Ince</surname> <given-names>T.</given-names></name> <name><surname>Gabbouj</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Real-time patient-specific ECG classification by 1-D convolutional neural networks</article-title>. <source>IEEE Trans. Biomed. Eng.</source> <volume>63</volume>, <fpage>664</fpage>&#x2013;<lpage>675</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TBME.2015.2468589</pub-id>, PMID: <pub-id pub-id-type="pmid">26285054</pub-id></citation>
</ref>
<ref id="ref33">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Kreiser</surname> <given-names>R.</given-names></name> <name><surname>Moraitis</surname> <given-names>T.</given-names></name> <name><surname>Sandamirskaya</surname> <given-names>Y.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name></person-group>, (<year>2017</year>). &#x201C;On-chip unsupervised learning in winner-take-all networks on spiking neurons.&#x201D; in <italic>Proc. of 2017 IEEE Biomedical Circuits and Systems Conference</italic>. pp. 1&#x2013;4.</citation>
</ref>
<ref id="ref34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuzum</surname> <given-names>D.</given-names></name> <name><surname>Jeyasingh</surname> <given-names>R.-G.-D.</given-names></name> <name><surname>Lee</surname> <given-names>B.</given-names></name> <name><surname>Wong</surname> <given-names>H.-S. P.</given-names></name></person-group> (<year>2012</year>). <article-title>Nanoelectronic programmable synapses based on phase change materials for brain-inspired computing</article-title>. <source>Nano Lett.</source> <volume>12</volume>, <fpage>2179</fpage>&#x2013;<lpage>2186</lpage>. doi: <pub-id pub-id-type="doi">10.1021/nl201040y</pub-id>, PMID: <pub-id pub-id-type="pmid">21668029</pub-id></citation>
</ref>
<ref id="ref35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Landau</surname> <given-names>I. D.</given-names></name> <name><surname>Sompolinsky</surname> <given-names>H.</given-names></name></person-group> (<year>2018</year>). <article-title>Coherent chaos in a recurrent neural network with structured connectivity</article-title>. <source>PLoS Comput. Biol.</source> <volume>14</volume>:<fpage>e1006309</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pcbi.1006309</pub-id>, PMID: <pub-id pub-id-type="pmid">30543634</pub-id></citation>
</ref>
<ref id="ref36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lazar</surname> <given-names>A.</given-names></name> <name><surname>Pipa</surname> <given-names>G.</given-names></name> <name><surname>Triesch</surname> <given-names>J.</given-names></name></person-group> (<year>2009</year>). <article-title>SORN: a self-organizing recurrent neural network</article-title>. <source>Front. Comput. Neurosci.</source> <volume>3</volume>:<fpage>23</fpage>. doi: <pub-id pub-id-type="doi">10.3389/neuro.10.023.2009</pub-id>, PMID: <pub-id pub-id-type="pmid">19893759</pub-id></citation>
</ref>
<ref id="ref37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Legenstein</surname> <given-names>R.</given-names></name> <name><surname>Naeger</surname> <given-names>C.</given-names></name> <name><surname>Maass</surname> <given-names>W.</given-names></name></person-group> (<year>2005</year>). <article-title>What can a neuron learn with spike-timing-dependent plasticity?</article-title> <source>Neural Comput.</source> <volume>17</volume>, <fpage>2337</fpage>&#x2013;<lpage>2382</lpage>. doi: <pub-id pub-id-type="doi">10.1162/0899766054796888</pub-id>, PMID: <pub-id pub-id-type="pmid">16156932</pub-id></citation>
</ref>
<ref id="ref38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Belkin</surname> <given-names>D.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Yan</surname> <given-names>P.</given-names></name> <name><surname>Hu</surname> <given-names>M.</given-names></name> <name><surname>Ge</surname> <given-names>N.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Efficient and self-adaptive in-situ learning in multilayer memristor neural networks</article-title>. <source>Nat. Commun.</source> <volume>9</volume>:<fpage>2385</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s41467-018-04484-2</pub-id>, PMID: <pub-id pub-id-type="pmid">29921923</pub-id></citation>
</ref>
<ref id="ref39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name></person-group> (<year>2013</year>). <article-title>A spike-based model of neuronal intrinsic plasticity</article-title>. <source>IEEE Trans. Auton. Ment. Dev.</source> <volume>5</volume>, <fpage>62</fpage>&#x2013;<lpage>73</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TAMD.2012.2211101</pub-id>, PMID: <pub-id pub-id-type="pmid">35069102</pub-id></citation>
</ref>
<ref id="ref40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>S.-H.</given-names></name> <name><surname>Cheng</surname> <given-names>D.-C.</given-names></name> <name><surname>Lin</surname> <given-names>C.-M.</given-names></name></person-group> (<year>2013</year>). <article-title>Arrhythmia identification with two-lead electrocardiograms using artificial neural networks and support vector machines for a portable ECG monitor system</article-title>. <source>Sensors</source> <volume>13</volume>, <fpage>813</fpage>&#x2013;<lpage>828</lpage>. doi: <pub-id pub-id-type="doi">10.3390/s130100813</pub-id>, PMID: <pub-id pub-id-type="pmid">23303379</pub-id></citation>
</ref>
<ref id="ref41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maass</surname> <given-names>W.</given-names></name> <name><surname>Natschl&#x00E4;ger</surname> <given-names>T.</given-names></name> <name><surname>Markram</surname> <given-names>H.</given-names></name></person-group> (<year>2002</year>). <article-title>Real-time computing without stable states: a new framework for neural computation based on perturbations</article-title>. <source>Neural Comput.</source> <volume>14</volume>, <fpage>2531</fpage>&#x2013;<lpage>2560</lpage>. doi: <pub-id pub-id-type="doi">10.1162/089976602760407955</pub-id>, PMID: <pub-id pub-id-type="pmid">12433288</pub-id></citation>
</ref>
<ref id="ref42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mitra</surname> <given-names>S.</given-names></name> <name><surname>Fusi</surname> <given-names>S.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name></person-group> (<year>2009</year>). <article-title>Real-time classification of complex patterns using spike-based learning in neuromorphic VLSI</article-title>. <source>IEEE Trans. Biomed. Circ. Syst.</source> <volume>3</volume>, <fpage>32</fpage>&#x2013;<lpage>42</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TBCAS.2008.2005781</pub-id>, PMID: <pub-id pub-id-type="pmid">23853161</pub-id></citation>
</ref>
<ref id="ref43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Moody</surname> <given-names>G. B.</given-names></name> <name><surname>Mark</surname> <given-names>R. G.</given-names></name></person-group> (<year>2001</year>). <article-title>The impact of the MIT-BIH arrhythmia database</article-title>. <source>IEEE Eng. Med. Biol. Mag.</source> <volume>20</volume>, <fpage>45</fpage>&#x2013;<lpage>50</lpage>. doi: <pub-id pub-id-type="doi">10.1109/51.932724</pub-id>, PMID: <pub-id pub-id-type="pmid">11446209</pub-id></citation>
</ref>
<ref id="ref44">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Moro</surname> <given-names>F.</given-names></name> <name><surname>Esmanhotto</surname> <given-names>E.</given-names></name> <name><surname>Hirtzlin</surname> <given-names>T.</given-names></name> <name><surname>Castellani</surname> <given-names>N.</given-names></name> <name><surname>Trabelsi</surname> <given-names>A.</given-names></name> <name><surname>Dalgaty</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2022</year>). &#x201C;Hardware calibrated learning to compensate heterogeneity in analog RRAM-based spiking neural networks.&#x201D; <italic>IEEE International Symposium on Circuits and Systems (ISCAS)</italic>. pp. 380&#x2013;383.</citation>
</ref>
<ref id="ref45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nicola</surname> <given-names>W.</given-names></name> <name><surname>Clopath</surname> <given-names>C.</given-names></name></person-group> (<year>2017</year>). <article-title>Supervised learning in spiking neural networks with force training</article-title>. <source>Nat. Commun.</source> <volume>8</volume>:<fpage>2208</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s41467-017-01827-3</pub-id>, PMID: <pub-id pub-id-type="pmid">29263361</pub-id></citation>
</ref>
<ref id="ref46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ning</surname> <given-names>Q.</given-names></name> <name><surname>Hesham</surname> <given-names>M.</given-names></name> <name><surname>Federico</surname> <given-names>C.</given-names></name> <name><surname>Marc</surname> <given-names>O.</given-names></name> <name><surname>Stefanini</surname> <given-names>F.</given-names></name> <name><surname>Sumislawska</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses</article-title>. <source>Front. Neurosci.</source> <volume>9</volume>:<fpage>141</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnins.2015.00141</pub-id>, PMID: <pub-id pub-id-type="pmid">25972778</pub-id></citation>
</ref>
<ref id="ref47">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Payvand</surname> <given-names>M.</given-names></name> <name><surname>D&#x2019;Agostino</surname> <given-names>S.</given-names></name> <name><surname>Moro</surname> <given-names>F.</given-names></name> <name><surname>Demirag</surname> <given-names>Y.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name> <name><surname>Vianello</surname> <given-names>E.</given-names></name></person-group> (<year>2023</year>). &#x201C;Dendritic computation through exploiting resistive memory as both delays and weights.&#x201D; in <italic>Proc. of the 2023 International Conference on Neuromorphic Systems</italic>. p. 27.</citation>
</ref>
<ref id="ref48">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Payvand</surname> <given-names>M.</given-names></name> <name><surname>Demirag</surname> <given-names>Y.</given-names></name> <name><surname>Dalgathy</surname> <given-names>T.</given-names></name> <name><surname>Vianello</surname> <given-names>E.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name></person-group> (<year>2020</year>). &#x201C;Analog weight updates with compliance current modulation of binary ReRAMs for on-chip learning.&#x201D; in <italic>Proc. of 2020 IEEE International Symposium on Circuits and Systems</italic>. pp. 1&#x2013;5.</citation>
</ref>
<ref id="ref49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Payvand</surname> <given-names>M.</given-names></name> <name><surname>Moro</surname> <given-names>F.</given-names></name> <name><surname>Nomura</surname> <given-names>K.</given-names></name> <name><surname>Dalgaty</surname> <given-names>T.</given-names></name> <name><surname>Vianello</surname> <given-names>E.</given-names></name> <name><surname>Nishi</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Self-organization of an inhomogeneous memristive hardware for sequence learning</article-title>. <source>Nat. Commun.</source> <volume>13</volume>:<fpage>5793</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s41467-022-33476-6</pub-id>, PMID: <pub-id pub-id-type="pmid">36184665</pub-id></citation>
</ref>
<ref id="ref50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfister</surname> <given-names>J.</given-names></name> <name><surname>Toyoizumi</surname> <given-names>T.</given-names></name> <name><surname>Barber</surname> <given-names>D.</given-names></name> <name><surname>Gerstner</surname> <given-names>W.</given-names></name></person-group> (<year>2006</year>). <article-title>Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning</article-title>. <source>Neural Comput.</source> <volume>18</volume>, <fpage>1318</fpage>&#x2013;<lpage>1348</lpage>. doi: <pub-id pub-id-type="doi">10.1162/neco.2006.18.6.1318</pub-id>, PMID: <pub-id pub-id-type="pmid">16764506</pub-id></citation>
</ref>
<ref id="ref51">
<citation citation-type="other"><person-group person-group-type="author">
<collab id="coll1">PhysioNet</collab>
</person-group> (<year>1999</year>). MIT-BIH Long-Term ECG Database (Version 1.0). Available at: <ext-link xlink:href="https://physionet.org/content/ltdb/1.0.0/" ext-link-type="uri">https://physionet.org/content/ltdb/1.0.0/</ext-link> [Accessed August 3, 1999].</citation>
</ref>
<ref id="ref52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ponulak</surname> <given-names>F.</given-names></name> <name><surname>Kasinski</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting</article-title>. <source>Neural Comput.</source> <volume>22</volume>, <fpage>467</fpage>&#x2013;<lpage>510</lpage>. doi: <pub-id pub-id-type="doi">10.1162/neco.2009.11-08-901</pub-id>, PMID: <pub-id pub-id-type="pmid">19842989</pub-id></citation>
</ref>
<ref id="ref53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Prezioso</surname> <given-names>M.</given-names></name> <name><surname>Merrikh-Bayat</surname> <given-names>F.</given-names></name> <name><surname>Hoskins</surname> <given-names>B. D.</given-names></name> <name><surname>Adam</surname> <given-names>G. C.</given-names></name> <name><surname>Likharev</surname> <given-names>K. K.</given-names></name> <name><surname>Strukov</surname> <given-names>D. B.</given-names></name></person-group> (<year>2015</year>). <article-title>Training and operation of an integrated neuromorphic network based on metal-oxide memristors</article-title>. <source>Nature</source> <volume>521</volume>, <fpage>61</fpage>&#x2013;<lpage>64</lpage>. doi: <pub-id pub-id-type="doi">10.1038/nature14441</pub-id>, PMID: <pub-id pub-id-type="pmid">25951284</pub-id></citation>
</ref>
<ref id="ref54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Qiao</surname> <given-names>N.</given-names></name> <name><surname>Bartolozzi</surname> <given-names>C.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name></person-group> (<year>2017</year>). <article-title>An ultralow leakage synaptic scaling homeostatic plasticity circuit with configurable time scales up to 100ks</article-title>. <source>IEEE Trans. Biomed. Circ. Syst.</source> <volume>11</volume>, <fpage>1271</fpage>&#x2013;<lpage>1277</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TBCAS.2017.2754383</pub-id>, PMID: <pub-id pub-id-type="pmid">29293423</pub-id></citation>
</ref>
<ref id="ref55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sompolinsky</surname> <given-names>H.</given-names></name> <name><surname>Crisanti</surname> <given-names>A.</given-names></name> <name><surname>Sommers</surname> <given-names>H. J.</given-names></name></person-group> (<year>1988</year>). <article-title>Chaos in random neural networks</article-title>. <source>Phys. Rev. Lett.</source> <volume>61</volume>, <fpage>259</fpage>&#x2013;<lpage>262</lpage>. doi: <pub-id pub-id-type="doi">10.1103/PhysRevLett.61.259</pub-id>, PMID: <pub-id pub-id-type="pmid">39242659</pub-id></citation>
</ref>
<ref id="ref56">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Srinivasan</surname> <given-names>G.</given-names></name> <name><surname>Roy</surname> <given-names>S.</given-names></name> <name><surname>Raghunathan</surname> <given-names>V.</given-names></name> <name><surname>Roy</surname> <given-names>K.</given-names></name></person-group> (<year>2017</year>). &#x201C;Spike timing dependent plasticity based enhanced self-learning for efficient pattern recognition in spiking neural networks.&#x201D; in <italic>Proc. of 2017 International Joint Conference on Neural Networks</italic>. pp. 1847&#x2013;1854.</citation>
</ref>
<ref id="ref57">
<citation citation-type="journal"><person-group person-group-type="author">
<name><surname>Steil</surname> <given-names>J. J.</given-names></name>
</person-group> (<year>2007</year>). <article-title>Online reservoir adaptation by intrinsic plasticity for backpropagation-decorrelation and echo state learning</article-title>. <source>Neural Netw.</source> <volume>20</volume>, <fpage>353</fpage>&#x2013;<lpage>364</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neunet.2007.04.011</pub-id>, PMID: <pub-id pub-id-type="pmid">17517491</pub-id></citation>
</ref>
<ref id="ref58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sussillo</surname> <given-names>D.</given-names></name> <name><surname>Abbott</surname> <given-names>L. F.</given-names></name></person-group> (<year>2009</year>). <article-title>Generating coherent patterns of activity from chaotic neural networks</article-title>. <source>Neuron</source> <volume>63</volume>, <fpage>544</fpage>&#x2013;<lpage>557</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neuron.2009.07.018</pub-id>, PMID: <pub-id pub-id-type="pmid">19709635</pub-id></citation>
</ref>
<ref id="ref59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tetzlaff</surname> <given-names>C.</given-names></name> <name><surname>Dasgupta</surname> <given-names>S.</given-names></name> <name><surname>Kulvicius</surname> <given-names>T.</given-names></name> <name><surname>W&#x00F6;rg&#x00F6;tter</surname> <given-names>F.</given-names></name></person-group> (<year>2015</year>). <article-title>The use of Hebbian cell assemblies for nonlinear computation</article-title>. <source>Sci. Rep.</source> <volume>5</volume>:<fpage>12866</fpage>. doi: <pub-id pub-id-type="doi">10.1038/srep12866</pub-id>, PMID: <pub-id pub-id-type="pmid">26249242</pub-id></citation>
</ref>
<ref id="ref60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thalmeier</surname> <given-names>D.</given-names></name> <name><surname>Uhlmann</surname> <given-names>M.</given-names></name> <name><surname>Kappen</surname> <given-names>J. H.</given-names></name> <name><surname>Memmesheimer</surname> <given-names>R. M.</given-names></name></person-group> (<year>2016</year>). <article-title>Learning universal computations with spikes</article-title>. <source>PLoS Comput. Biol.</source> <volume>12</volume>:<fpage>e1004895</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pcbi.1004895</pub-id>, PMID: <pub-id pub-id-type="pmid">27309381</pub-id></citation>
</ref>
<ref id="ref61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>N.</given-names></name> <name><surname>Zhou</surname> <given-names>J.</given-names></name> <name><surname>Dai</surname> <given-names>G.</given-names></name> <name><surname>Huang</surname> <given-names>J.</given-names></name> <name><surname>Xie</surname> <given-names>Y.</given-names></name></person-group> (<year>2019</year>). <article-title>Energy-efficient intelligent ECG monitoring for wearable devices</article-title>. <source>IEEE Trans. Biomed. Circ. Syst.</source> <volume>13</volume>, <fpage>1112</fpage>&#x2013;<lpage>1121</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TBCAS.2019.2930215</pub-id>, PMID: <pub-id pub-id-type="pmid">31329129</pub-id></citation>
</ref>
<ref id="ref62">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Yongqiang</surname> <given-names>M.</given-names></name> <name><surname>Donati</surname> <given-names>E.</given-names></name> <name><surname>Chen</surname> <given-names>B.</given-names></name> <name><surname>Ren</surname> <given-names>P.</given-names></name> <name><surname>Zheng</surname> <given-names>N.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name></person-group> (<year>2020</year>). &#x201C;Neuromorphic implementation of a recurrent neural network for EMG classification.&#x201D; in <italic>Proc. of 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)</italic>. pp. 69&#x2013;73.</citation>
</ref>
<ref id="ref63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>W.</given-names></name> <name><surname>Li</surname> <given-names>P.</given-names></name></person-group> (<year>2019</year>). <article-title>Information-theoretic intrinsic plasticity for online unsupervised learning in spiking neural networks</article-title>. <source>Front. Neurosci.</source> <volume>13</volume>:<fpage>31</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnins.2019.00031</pub-id>, PMID: <pub-id pub-id-type="pmid">30804736</pub-id></citation>
</ref>
<ref id="ref64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>A.</given-names></name> <name><surname>Li</surname> <given-names>X.</given-names></name> <name><surname>Gao</surname> <given-names>Y.</given-names></name> <name><surname>Niu</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <article-title>Event-driven intrinsic plasticity for spiking convolutional neural networks</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst.</source> <volume>33</volume>, <fpage>1986</fpage>&#x2013;<lpage>1995</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TNNLS.2021.3084955</pub-id>, PMID: <pub-id pub-id-type="pmid">34106868</pub-id></citation>
</ref>
</ref-list>
</back>
</article>