<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2016.00482</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Analog Memristive Synapse in Spiking Networks Implementing Unsupervised Learning</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Covi</surname> <given-names>Erika</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/296395/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Brivio</surname> <given-names>Stefano</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/360943/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Serb</surname> <given-names>Alexander</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/118994/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Prodromakis</surname> <given-names>Themis</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/70165/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Fanciulli</surname> <given-names>Marco</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Spiga</surname> <given-names>Sabina</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/235910/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Laboratorio MDM, Istituto per la Microelettronica e i Microsistemi - Consiglio Nazionale delle Ricerche (CNR)</institution> <country>Agrate Brianza, Italy</country></aff>
<aff id="aff2"><sup>2</sup><institution>Nano Group, Department of Electronics and Computer Science, University of Southampton</institution> <country>Southampton, UK</country></aff>
<aff id="aff3"><sup>3</sup><institution>Dipartimento di Scienza Dei Materiali, Universit&#x000E0; di Milano Bicocca</institution> <country>Milano, MI, Italy</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Gert Cauwenberghs, University of California, San Diego, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Siddharth Joshi, University of California, San Diego, USA; Khaled Nabil Salama, King Abdullah University of Science and Technology, Saudi Arabia</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Erika Covi <email>erika.covi&#x00040;mdm.imm.cnr.it</email></p></fn>
<fn fn-type="corresp" id="fn002"><p>Sabina Spiga <email>sabina.spiga&#x00040;mdm.imm.cnr.it</email></p></fn>
<fn fn-type="other" id="fn003"><p>This article was submitted to Neuromorphic Engineering, a section of the journal Frontiers in Neuroscience</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>25</day>
<month>10</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<year>2016</year>
</pub-date>
<volume>10</volume>
<elocation-id>482</elocation-id>
<history>
<date date-type="received">
<day>16</day>
<month>05</month>
<year>2016</year>
</date>
<date date-type="accepted">
<day>07</day>
<month>10</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2016 Covi, Brivio, Serb, Prodromakis, Fanciulli and Spiga.</copyright-statement>
<copyright-year>2016</copyright-year>
<copyright-holder>Covi, Brivio, Serb, Prodromakis, Fanciulli and Spiga</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>Emerging brain-inspired architectures call for devices that can emulate the functionality of biological synapses in order to implement new efficient computational schemes able to solve ill-posed problems. Various devices and solutions are still under investigation and, in this respect, identifying the optimal candidate remains an open challenge for researchers in the field. Indeed, the optimal candidate is a device able to reproduce the complete functionality of a synapse, i.e., the typical synaptic process underlying learning in biological systems (activity-dependent synaptic plasticity). This implies a device able to change its resistance (synaptic strength, or weight) upon proper electrical stimuli (synaptic activity) and showing several stable resistive states throughout its dynamic range (analog behavior). Moreover, it should be able to perform spike timing dependent plasticity (STDP), an associative homosynaptic plasticity learning rule based on the delay time between the spikes of the two neurons the synapse is connected to. This rule is a fundamental learning protocol in state-of-the-art networks, because it allows unsupervised learning. Notwithstanding this fact, STDP-based unsupervised learning has mostly been demonstrated with binary synapses or with multilevel synapses composed of many binary memristors. This paper proposes an HfO<sub>2</sub>-based analog memristor as a synaptic element which performs STDP within a small spiking neuromorphic network performing unsupervised learning for character recognition. The trained network is able to recognize five characters even when incomplete or noisy images are displayed, and it is robust to a device-to-device variability of up to &#x000B1;30%.</p></abstract>
<kwd-group>
<kwd>memristor</kwd>
<kwd>resistive switching</kwd>
<kwd>HfO<sub>2</sub></kwd>
<kwd>artificial synapse</kwd>
<kwd>synaptic plasticity</kwd>
<kwd>spike time dependent plasticity</kwd>
<kwd>spiking neuromorphic network</kwd>
<kwd>unsupervised learning</kwd>
</kwd-group>
<contract-num rid="cn001">612058</contract-num>
<contract-sponsor id="cn001">Seventh Framework Programme<named-content content-type="fundref-id">10.13039/501100004963</named-content></contract-sponsor>
<counts>
<fig-count count="8"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="39"/>
<page-count count="13"/>
<word-count count="8968"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>The human brain is a massively parallel, fault-tolerant, adaptive system integrating storage and computation (Kuzum et al., <xref ref-type="bibr" rid="B19">2013</xref>; Matveyev et al., <xref ref-type="bibr" rid="B22">2015</xref>). Moreover, it is able to visually recognize a large number of living beings and objects and to process huge volumes of data in real time (Kuzum et al., <xref ref-type="bibr" rid="B19">2013</xref>; Yu et al., <xref ref-type="bibr" rid="B36">2013a</xref>; Wang et al., <xref ref-type="bibr" rid="B34">2015</xref>). Therefore, biologically-inspired systems are attracting a lot of interest as vehicles toward the implementation of real-time adaptive systems for a variety of applications. In such applications, the system is required to continuously adapt to time-varying external stimuli in an autonomous way, so on-line learning without external supervision is preferable (Serb et al., <xref ref-type="bibr" rid="B31">2016</xref>). In neuromorphic hardware, learning is obtained through reconfiguration of the connectivity of a network through local modulation of synaptic weights. The adjustment of the weight of a single synapse, i.e., plasticity, should follow simple update rules that can be implemented uniformly across the entire network and allow unsupervised learning. In this respect, spike timing dependent plasticity (STDP) has been recognized as one of the most promising, because it establishes that the weight of a synapse is adjusted according to the timing of the spikes fired by the connected neurons (Serrano-Gotarredona et al., <xref ref-type="bibr" rid="B32">2013</xref>; Bill and Legenstein, <xref ref-type="bibr" rid="B6">2014</xref>; Ambrogio et al., <xref ref-type="bibr" rid="B3">2016b</xref>).</p>
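The pair-based STDP rule described above can be sketched as follows. The exponential form and all parameters (a_plus, a_minus, tau_ms) are illustrative textbook choices, not the device data presented later in the paper:

```python
import math

def stdp_dw(dt_ms, a_plus=1.0, a_minus=1.0, tau_ms=20.0):
    """Weight change for a pre/post spike-time difference dt = t_post - t_pre.

    dt > 0 (pre fires before post) gives potentiation, dt < 0 depression,
    with a magnitude that decays exponentially with |dt|.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0
```

Because the update depends only on the two connected neurons' spike times, the same rule can be applied locally and uniformly across an entire network, which is what makes it attractive for unsupervised hardware learning.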
<p>Recently, the implementation of artificial synapses with memristor devices has been proposed. Memristors (memory &#x0002B; resistor) are compact two-terminal devices that change their resistance when subjected to voltage stimulation. The memristor resistance state can be considered inversely proportional to the synaptic weight. Various practical implementations have been proposed, such as phase change (Kuzum et al., <xref ref-type="bibr" rid="B18">2012</xref>; Ambrogio et al., <xref ref-type="bibr" rid="B3">2016b</xref>), ferroelectric (Du et al., <xref ref-type="bibr" rid="B13">2015</xref>; Nishitani et al., <xref ref-type="bibr" rid="B23">2015</xref>), spin transfer torque (Querlioz et al., <xref ref-type="bibr" rid="B29">2015</xref>) devices, and oxide-based resistive switching memristors (Wang et al., <xref ref-type="bibr" rid="B34">2015</xref>; Ambrogio et al., <xref ref-type="bibr" rid="B1">2016a</xref>). When memristors are employed in neuromorphic networks, two main operational modes are used, binary and analog. The former relies on memristors featuring only two states, high resistance state (HRS) or low resistance state (LRS), and has proved effective in specific applications (Suri et al., <xref ref-type="bibr" rid="B33">2013</xref>; Wang et al., <xref ref-type="bibr" rid="B34">2015</xref>; Ambrogio et al., <xref ref-type="bibr" rid="B1">2016a</xref>). On the other hand, analog evolution of device resistance is desirable to improve the robustness of the network (Bill and Legenstein, <xref ref-type="bibr" rid="B6">2014</xref>; Garbin et al., <xref ref-type="bibr" rid="B17">2015</xref>; Park et al., <xref ref-type="bibr" rid="B24">2015</xref>), but the difficulty of operating memristors in an analog fashion still renders hardware implementations of networks with analog synapses challenging (Garbin et al., <xref ref-type="bibr" rid="B17">2015</xref>). 
Indeed, several memristors show only a partial analog behavior, either when increasing the resistance (synaptic depression), which is common in filamentary devices such as oxide-based memristors (Kuzum et al., <xref ref-type="bibr" rid="B19">2013</xref>; Yu et al., <xref ref-type="bibr" rid="B36">2013a</xref>), or when decreasing the resistance (synaptic potentiation), as in some kinds of phase change memristors (Eryilmaz et al., <xref ref-type="bibr" rid="B14">2014</xref>). Well-established protocols to obtain analog behavior require controlling the current flow through the memristor (Yu et al., <xref ref-type="bibr" rid="B38">2011</xref>; Ambrogio et al., <xref ref-type="bibr" rid="B2">2013</xref>), or modulating either the time width (Park et al., <xref ref-type="bibr" rid="B25">2013</xref>; Mandal et al., <xref ref-type="bibr" rid="B21">2014</xref>) or the voltage (Kuzum et al., <xref ref-type="bibr" rid="B18">2012</xref>; Park et al., <xref ref-type="bibr" rid="B25">2013</xref>) of the spike. However, this device programming requires extra circuit elements for monitoring the state of the memristor and shaping the spike accordingly. A second proposed approach is to consider multi-memristor synapses (compound synapses with stochastic programming) (Bill and Legenstein, <xref ref-type="bibr" rid="B6">2014</xref>; Burr et al., <xref ref-type="bibr" rid="B10">2015</xref>; Garbin et al., <xref ref-type="bibr" rid="B17">2015</xref>; Prezioso et al., <xref ref-type="bibr" rid="B27">2015</xref>), at the expense of increased area consumption. 
Only recently have some works demonstrated analog behavior in both potentiation and depression without current or voltage control (Park et al., <xref ref-type="bibr" rid="B25">2013</xref>; Covi et al., <xref ref-type="bibr" rid="B11">2015</xref>, <xref ref-type="bibr" rid="B12">2016</xref>; Matveyev et al., <xref ref-type="bibr" rid="B22">2015</xref>; Brivio et al., <xref ref-type="bibr" rid="B7">2016</xref>; Serb et al., <xref ref-type="bibr" rid="B31">2016</xref>).</p>
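The resistance-to-weight convention used above (weight proportional to conductance, i.e., inversely proportional to resistance) can be sketched as a simple normalization. The bounds r_min and r_max below are illustrative placeholders, not measured device values:

```python
def weight_from_resistance(r, r_min=1e4, r_max=1e5):
    """Normalized synaptic weight in [0, 1] for a resistance r in [r_min, r_max].

    The weight is taken proportional to conductance (inversely proportional
    to resistance), as in the convention described in the text; r_min and
    r_max are illustrative bounds for the device dynamic range.
    """
    g, g_min, g_max = 1.0 / r, 1.0 / r_max, 1.0 / r_min
    return (g - g_min) / (g_max - g_min)
```

Under this convention, an analog memristor with many stable resistive states directly yields a many-valued synaptic weight, whereas a binary device yields only w = 0 or w = 1.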
<p>Within this class of devices, unsupervised learning based on STDP has been successfully demonstrated and analyzed in detail for binary synapses or compound synapses (with binary memristors) (Suri et al., <xref ref-type="bibr" rid="B33">2013</xref>; Bill and Legenstein, <xref ref-type="bibr" rid="B6">2014</xref>; Ambrogio et al., <xref ref-type="bibr" rid="B1">2016a</xref>,<xref ref-type="bibr" rid="B3">b</xref>). Some works deal with networks utilizing analog resistance transitions in only one direction, either in depression (Yu et al., <xref ref-type="bibr" rid="B37">2013b</xref>) or in potentiation (Eryilmaz et al., <xref ref-type="bibr" rid="B14">2014</xref>). Only a few works use analog synapses to simulate neuromorphic networks, for example Querlioz et al. (<xref ref-type="bibr" rid="B28">2013</xref>), Yu et al. (<xref ref-type="bibr" rid="B35">2015</xref>), and Serb et al. (<xref ref-type="bibr" rid="B31">2016</xref>). The latter, in particular, proposes a network realized in part with real hardware analog memristors and in part with software simulation.</p>
<p>In this framework, we propose a fully analog oxide-filamentary device as a memristive synapse for networks with deterministic neurons implementing unsupervised learning. The proposed memristor features an analog modulation of its resistance under various long-term functional plasticity spiking conditions and emulates a type of homosynaptic STDP learning rule. To prove its usefulness in deterministic STDP-based networks, a simple fully-connected spiking neuromorphic network (SNN) for pattern recognition is conceived and simulated. The SNN consists of 30 neurons (25 pre-neurons arranged in a 5 &#x000D7; 5 layer and 5 post-neurons) and 125 synapses. The network is trained with an associative unsupervised STDP-based learning protocol. After training, the SNN is able to recognize five characters displayed as 5 &#x000D7; 5 black-and-white pixel images even when incomplete or noisy characters (noise intended as purely additive) are displayed. Moreover, the SNN is shown to be robust against device-to-device variability.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>2. Materials and methods</title>
<p>The device stack is made of 40 nm TiN/5 nm HfO<sub>2</sub>/10 nm Ti/40 nm TiN layers and the area of the device is 40 &#x000D7; 40 &#x003BC;m<sup>2</sup>. Ti and TiN layers are deposited by magnetron sputtering and the HfO<sub>2</sub> layer is deposited by atomic layer deposition at 300 &#x000B0;C, as described elsewhere (Brivio et al., <xref ref-type="bibr" rid="B8">2015</xref>; Frascaroli et al., <xref ref-type="bibr" rid="B15">2015</xref>). The switching mechanism of the proposed memristor is filamentary (Brivio et al., <xref ref-type="bibr" rid="B9">2014</xref>), i.e., it is based on the disruption and the restoration of a conductive filament formed inside the oxide.</p>
<p>The electrical DC characterizations are performed using Source Measuring Units (B1511B and B1517A) of a B1500A Semiconductor Device Parameter Analyzer by Keysight. Figure <xref ref-type="fig" rid="F1">1</xref> shows a typical I-V curve of the device. In its pristine state, the device has a conductance of tens of nS (not shown). A forming operation (DC current sweep up to 150 &#x003BC;A) at around 1.8 V (data not shown) is needed to bring the device into its LRS for the first time. To switch the device from LRS to HRS, and vice versa, DC sweeps from 0 V to 1 V (LRS to HRS) and from 0 V to &#x02212;0.7 V (HRS to LRS) are applied. The maximum resistance ratio (read at 100 mV) obtainable in DC is about one order of magnitude, which is in agreement with the literature (Garbin et al., <xref ref-type="bibr" rid="B17">2015</xref>; Matveyev et al., <xref ref-type="bibr" rid="B22">2015</xref>; Wang et al., <xref ref-type="bibr" rid="B34">2015</xref>).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>DC characterization of the device</bold>. Transition from LRS to HRS is obtained with a DC sweep from 0 V to 1 V, transition from HRS to LRS is obtained with a DC sweep from 0 V to &#x02212;0.7 V.</p></caption>
<graphic xlink:href="fnins-10-00482-g0001.tif"/>
</fig>
<p>The device response to spike stimulation has been characterized either by trains of pulses with increasing amplitude and fixed time width or by repetition of the same spike. In the former case, the spike amplitude ranges from 0.1 to 1.2 V during depression and from &#x02212;0.1 to &#x02212;0.65 V during potentiation. The same spike is repeated 5 times before the amplitude is incremented by 50 mV (decremented by 50 mV for negative voltages) and the pulse duration is fixed at 100 &#x003BC;s. Measurements are performed using the custom instrument described in Berdan et al. (<xref ref-type="bibr" rid="B4">2015</xref>). In the second experiment, the trains of identical pulses consist of 300 repetitions of &#x02212;550 mV-high, 25 &#x003BC;s-long pulses for potentiation and 300 repetitions of 700 mV-high, 20 &#x003BC;s-long pulses for depression. This second pulse scheme is implemented by a custom setup interfacing a High Voltage Semiconductor Pulse Generator Unit (B1525A) with the Source Measuring Units of a B1500A. The motivation for the choice of the spike parameters will be given in Section 3. In both experimental procedures, the reading operation is carried out using a voltage amplitude which induces no change in the device resistance.</p>
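The first (ramped) stimulation protocol can be sketched as a small generator of (amplitude, width) pairs. The helper name staircase_train and its return format are illustrative; the amplitude ranges, the 50 mV step, the 5 repetitions per level, and the 100 &#x003BC;s width follow the text:

```python
def staircase_train(v_start, v_stop, v_step, repeats=5, width_us=100.0):
    """Generate (amplitude_V, width_us) pairs for the ramped spike trains:
    each amplitude is applied `repeats` times before being stepped by
    `v_step`, as in the described experiments."""
    n_levels = int(round((v_stop - v_start) / v_step)) + 1
    train = []
    for i in range(n_levels):
        v = round(v_start + i * v_step, 3)  # avoid float drift in amplitudes
        train.extend([(v, width_us)] * repeats)
    return train

# Depression: 0.1 V to 1.2 V; potentiation: -0.1 V to -0.65 V (per the text).
depression = staircase_train(0.1, 1.2, 0.05)
potentiation = staircase_train(-0.1, -0.65, -0.05)
```

The depression ramp thus contains 23 amplitude levels of 5 spikes each, and the potentiation ramp 12 levels of 5 spikes each.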
<p>STDP experiments are carried out by placing the device between two spiking channels, i.e., two Waveform Generator/Fast Measurement Units (B1530A) of the already mentioned B1500A, acting as spiking neurons. The relative timing between the two overlapping spikes from the two neurons is mapped into a voltage amplitude, as will be described in Section 3.2.</p>
<p>The SNN is developed and simulated in the <sc>Matlab</sc>&#x000AE; environment. The network is a simple fully connected winner-take-all SNN of 30 integrate-and-fire neurons, of which 25 are pre-neurons and 5 are post-neurons. The pre-neurons are arranged in a 5 &#x000D7; 5 layer and each pre-neuron is connected to all the post-neurons, through 125 artificial synapses in total. The learning method is unsupervised and the experimental STDP data used to update the synaptic weights during learning are collected in a look-up table. The operating principle of the network will be described in detail in Section 3.3. Using the same <sc>Matlab</sc>&#x000AE; software, a graphical user interface (GUI) is developed to enhance the software usability (further details in the Supplementary Figure <xref ref-type="supplementary-material" rid="SM3">1</xref>).</p>
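The winner-take-all topology and the look-up-table weight update can be sketched as follows (in Python rather than the authors' Matlab). The two-entry LUT, the firing threshold, and the initial weight range are illustrative placeholders standing in for the experimental STDP data:

```python
import numpy as np

# Schematic sketch of the described winner-take-all SNN: 25 pre-neurons fully
# connected to 5 post-neurons through 125 synapses. LUT entries, threshold,
# and initial weights are illustrative, not the experimental data.
rng = np.random.default_rng(0)
W = rng.uniform(0.3, 0.7, size=(5, 25))        # normalized synaptic weights
LUT = {"potentiate": 0.05, "depress": -0.05}   # placeholder for measured STDP data
THRESHOLD = 4.0                                 # illustrative firing threshold

def present_pattern(pattern, W):
    """One training step: integrate a 5x5 binary pattern, let the most strongly
    driven post-neuron fire (winner-take-all), then potentiate its synapses
    from active pixels and depress those from inactive ones."""
    x = np.asarray(pattern, dtype=float).ravel()  # 25 pixel inputs
    drive = W @ x                                 # integrate-and-fire drive
    winner = int(np.argmax(drive))
    if drive[winner] >= THRESHOLD:
        W[winner] += np.where(x > 0, LUT["potentiate"], LUT["depress"])
        np.clip(W[winner], 0.0, 1.0, out=W[winner])
    return winner
```

Repeatedly presenting the five training characters makes each post-neuron specialize on one pattern, which is the associative unsupervised behavior described in Section 3.3.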
</sec>
<sec sec-type="results" id="s3">
<title>3. Results</title>
<p>The tests described in the following are carried out in order to provide a thorough overview of the device behavior, which is finally exploited in a simple example of neuromorphic computation. The present section is therefore divided into three parts. In the first, long-term functional plasticity is investigated through two different spiking algorithms; in the second, these algorithms are exploited to achieve a form of STDP learning rule. Finally, a SNN is presented.</p>
<sec>
<title>3.1. Long-term functional synaptic plasticity</title>
<p>The plasticity of the device is investigated through two different spiking stimulations, which are fundamental to achieve the shape of STDP required in learning.</p>
<p>Figure <xref ref-type="fig" rid="F2">2A</xref> shows the evolution of the device resistance during some potentiation and depression cycles (top panel) using trains of spikes of fixed time width and increasing amplitude (bottom panel). The maximum voltages for potentiation (&#x02212;650 mV) and depression (1.2 V) are those leading to a maximum resistance change of about one order of magnitude and are close to the maximum voltages used in DC operation (Figure <xref ref-type="fig" rid="F1">1</xref>). During depression (resistance increase, green circles), the first spikes, corresponding to lower voltages (see bottom panel of Figure <xref ref-type="fig" rid="F2">2A</xref>), do not induce any resistance change up to a voltage threshold, which can be identified at about 550 mV. Once the threshold is exceeded, the resistance starts increasing gradually. The device therefore presents several intermediate resistive states throughout the programming window. Similarly, during potentiation (resistance decrease, orange circles), several intermediate states are reached between the maximum and minimum resistances using spikes with increasing voltage amplitude. It can be noted that, in this case, the resistance change begins at different voltage levels from cycle to cycle, but for voltage magnitudes higher than 500 mV a resistance decrease can always be observed. Therefore, &#x02212;500 mV is considered the voltage threshold for potentiation. It is worth noting that time widths, as well as voltages, influence the resistance evolution, as already reported by Covi et al. (<xref ref-type="bibr" rid="B11">2015</xref>) for similar devices. On the other hand, resistance changes are more sensitive to voltage variations than to time-width variations, so that for time widths in the range of 10 to 100 &#x003BC;s roughly the same voltages can be applied to obtain the same resistance evolution. 
It has to be mentioned that a staircase-like algorithm, like the one used here, is not practical to implement in a real large-scale system, because it requires neurons to keep track of previous activity. On the other hand, the testing procedure reported in Figure <xref ref-type="fig" rid="F2">2A</xref> is useful for characterizing the device and for clarifying the operating principle of the STDP implementation described below, which has actually been proposed as a learning rule for practical implementations of neuromorphic hardware (Sa&#x000EF;ghi et al., <xref ref-type="bibr" rid="B30">2015</xref>).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>(A)</bold> Potentiation (orange) and depression (green) cycles using ramped trains of spikes. Spike time width 100 &#x003BC;s, 5 repetitions. Potentiation: ramps from &#x02212;0.1 V to &#x02212;0.65 V. Depression: ramps from 0.1 V to 1.2 V. Upper graph: device resistance after spikes; lower graph: spike amplitude. <bold>(B)</bold> Potentiation (orange) and depression (green) cycles using trains of 300 identical spikes. Potentiation: spike amplitude &#x02212;0.55 V, time width 25 &#x003BC;s. Depression: spike amplitude 0.7 V, time width 20 &#x003BC;s. Upper graph: device resistance after spikes; lower graph: spike amplitude.</p></caption>
<graphic xlink:href="fnins-10-00482-g0002.tif"/>
</fig>
<p>In the set of measurements shown in Figure <xref ref-type="fig" rid="F2">2B</xref>, plasticity is investigated as a function of trains of identical spikes. Some depression/potentiation cycles are performed. During both potentiation and depression, the resistance changes gradually, featuring several intermediate states between the LRS and the HRS. In all the cycles, the rate of resistance change is not constant with respect to the number of spikes. Indeed, for both potentiation and depression the resistance change is faster for the first spikes. In general, analog resistance variation due to trains of identical spikes can be found for voltage values close to the voltage thresholds identified by the results of voltage staircase stimulation for similar time widths (such as that shown in Figure <xref ref-type="fig" rid="F2">2A</xref>). Indeed, gradual resistance change is achievable as an intermediate regime between a low-voltage stimulation, which does not affect the resistance, and a high-voltage stimulation, which induces a digital behavior (Covi et al., <xref ref-type="bibr" rid="B11">2015</xref>). The resistance window obtained through identical pulses is a factor of about 2, which has been considered sufficient when dealing with neuromorphic systems (Kuzum et al., <xref ref-type="bibr" rid="B18">2012</xref>; Prezioso et al., <xref ref-type="bibr" rid="B26">2016</xref>).</p>
</sec>
<sec>
<title>3.2. Homosynaptic input-specific plasticity toward learning</title>
<p>Based on the plasticity results described in Section 3.1 as a function of voltage modulation and spike repetition, STDP experiments relying on the engineering of pre- and post-spike superimposition are carried out. Indeed, the voltage drop on the memristor is modulated according to the voltage difference resulting from the superimposition of the pre- and post-spike waveforms, which depends on their relative timing. To this aim, the pre-spike is shaped as a triangular-like pulse (Figure <xref ref-type="fig" rid="F3">3A</xref>), thus acting as a bias performing the voltage-to-time mapping. The rectangular-like shape of the post-spike (Figure <xref ref-type="fig" rid="F3">3A</xref>) determines the supra-threshold spike width. Figure <xref ref-type="fig" rid="F3">3B</xref> reports two examples of the superimposition of pre- and post-spikes giving either potentiation or depression, and Figure <xref ref-type="fig" rid="F3">3C</xref> reports the quantitative voltage-to-delay-time mapping. In particular, the resulting maximum voltage dropped across the device depends on &#x00394;t and varies between &#x02212;650 mV for potentiation and 800 mV for depression.</p>
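The voltage-to-delay-time mapping can be sketched functionally as follows. The extreme values (&#x02212;650 mV for potentiation, 800 mV for depression) are taken from the text; the linear decay with |&#x00394;t| and the 400 &#x003BC;s overlap window are illustrative assumptions, not the measured mapping of Figure 3C:

```python
def device_voltage(dt_us, v_pot=-0.65, v_dep=0.80, t_max=400.0):
    """Peak voltage across the device as a function of the spike delay dt.

    dt > 0 yields a negative (potentiating) drop, dt < 0 a positive
    (depressing) drop; the magnitude is largest for small |dt| and vanishes
    once the two spikes no longer overlap (|dt| >= t_max). Linear decay and
    t_max are illustrative assumptions for this sketch.
    """
    if dt_us == 0.0 or abs(dt_us) >= t_max:
        return 0.0
    frac = 1.0 - abs(dt_us) / t_max
    return (v_pot if dt_us > 0 else v_dep) * frac
```

In the experiments, this mapping arises physically from the superimposition of the triangular pre-spike and the rectangular post-spike, rather than from an explicit function as in this sketch.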
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>(A)</bold> Setup for Spike Time Dependent Plasticity and waveforms used as pre-spike (left) and post-spike (right) in STDP experiments. <bold>(B)</bold> Overlapping of pre-spike and post-spike to obtain a potentiation (left) and a depression (right). <bold>(C)</bold> Voltage-to-delay time mapping. Resulting voltage across the artificial synapse as a function of &#x00394;t.</p></caption>
<graphic xlink:href="fnins-10-00482-g0003.tif"/>
</fig>
<p>To emulate STDP with &#x00394;t &#x0003E; 0 (&#x00394;t &#x0003C; 0), first the device is brought into its HRS (LRS) with a DC sweep, then 250 identical pairs of pre- and post-spikes are applied to the top and bottom electrodes of the device, respectively, keeping &#x00394;t constant. The experiment is repeated for different delay times (&#x00394;t) and each time the parameter &#x00394;t is varied, the device is reinitialized accordingly. Figures <xref ref-type="fig" rid="F4">4A,B</xref> show the device resistance evolution as a function of spike pair repetitions for different delay times in both potentiation (Figure <xref ref-type="fig" rid="F4">4A</xref>) and depression (Figure <xref ref-type="fig" rid="F4">4B</xref>). During potentiation and for every delay time, the resistance decreases quickly in the initial phase (&#x0007E;25 repetitions) before slowing down markedly in later phases (note that the vertical scale is <italic>R</italic><sub>0</sub>/<italic>R</italic>, which increases with the number of spikes, in qualitative agreement with Figure <xref ref-type="fig" rid="F2">2B</xref>). The same qualitative trend also holds during depression (Figure <xref ref-type="fig" rid="F4">4B</xref>): the first 10&#x02013;20 spike pair repetitions significantly change the resistance, whereas the following ones are less effective, until a saturation level is reached after &#x0007E;150&#x02013;200 spikes. In both potentiation and depression, the variation of &#x00394;t, i.e., of the voltage drop, drives the amplitude of the resistance change: the longer the delay time, the lower the change in resistance. Moreover, &#x00394;t affects the resistance change rate in the initial stage of the plasticity operation, i.e., the smaller the delay time (i.e., the higher the voltage drop), the sharper the resistance evolution (e.g., compare the blue and pink curves of Figures <xref ref-type="fig" rid="F4">4A,B</xref>).</p>
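The saturating evolution described above (a fast initial change followed by saturation after roughly 150&#x02013;200 spike pairs) can be approximated with a simple exponential model. The values of r0, r_inf, and the characteristic repetition number n_c below are placeholders, not fitted device parameters:

```python
import math

def resistance_after_n(n, r0=1e5, r_inf=1e4, n_c=25.0):
    """Illustrative saturating model of the resistance under repeated
    identical spike pairs: a fast change over roughly the first n_c
    repetitions, then gradual saturation toward r_inf. All parameter
    values are placeholders, not fitted device data."""
    return r_inf + (r0 - r_inf) * math.exp(-n / n_c)
```

A smaller n_c mimics the sharper initial evolution observed for shorter delay times (i.e., higher voltage drops), while r_inf mimics the &#x00394;t-dependent saturation level.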
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>(A)</bold> Potentiation and <bold>(B)</bold> depression dynamics with 250 identical spikes. Different voltage amplitudes and delay times are explored. The values of both voltage amplitude and &#x00394;t are written nearby each curve. Insets: detail of the first 12 spikes. <bold>(C,D)</bold> Spike Time Dependent Plasticity learning curve for different number of pre- post-spikes pair repetitions (&#x00394;t &#x0003E; 0 and &#x00394;t &#x0003C; 0). <italic>R</italic><sub>0</sub> is the initial HRS <bold>(C)</bold> and LRS <bold>(D)</bold>.</p></caption>
<graphic xlink:href="fnins-10-00482-g0004.tif"/>
</fig>
<p>Figures <xref ref-type="fig" rid="F4">4C,D</xref> show the STDP curve represented as the normalized resistance change as a function of the spike delay (and consequently of the voltage amplitude, as shown in the top x-axis of Figures <xref ref-type="fig" rid="F4">4C,D</xref>) for a few representative fixed numbers of spike pair repetitions (1, 10, 25, 50, 100, and 150). The plots, which are derived from the aforementioned results, qualitatively follow the biological STDP curve shown in Bi and Poo (<xref ref-type="bibr" rid="B5">1998</xref>). In accordance with Figure <xref ref-type="fig" rid="F4">4A</xref>, Figure <xref ref-type="fig" rid="F4">4C</xref> shows that when &#x00394;t is positive and small, the first spike pair induces a resistance variation equal to 75% of the dynamic range. As a consequence, the following repetitions have a reduced effect in further changing the device resistance. On the contrary, when &#x00394;t is longer and the resulting spike voltage amplitude is lower, the spike repetitions play an important role in the evolution of the device resistance; indeed, their effect becomes progressively more pronounced with increasing &#x00394;t. This holds up to the point where &#x00394;t is so large that the voltage drop across the device does not exceed the device threshold and no further changes in device resistance are induced, regardless of the number of applied spikes. The same effect is also shown in Figure <xref ref-type="fig" rid="F4">4D</xref>, where results for negative &#x00394;t are plotted, even though here the effect is less pronounced. Indeed, a change in the synaptic weight is present even for &#x00394;t &#x0003D; &#x02212;400 &#x003BC;s. This result is in agreement both with Figure <xref ref-type="fig" rid="F2">2A</xref>, where the effect of the voltage amplitude on the device resistance is shown, and with Figure <xref ref-type="fig" rid="F2">2B</xref>, where it is demonstrated that the weight change progressively decreases with increasing spike repetition number.</p>
<p>It is worth mentioning that when the device behavior is tested for &#x00394;t &#x0003E; 0 (&#x00394;t &#x0003C; 0), the device is first brought into its HRS (LRS). If a memristor in the LRS (HRS) were subjected to pulses with &#x00394;t &#x0003E; 0 (&#x00394;t &#x0003C; 0), no change in its resistance would occur, since the synapse is already fully potentiated (depressed). This is explicitly shown in Figures <xref ref-type="fig" rid="F4">4C,D</xref>, where no resistance change appears for negative (positive) delay times.</p>
<p>From Figures <xref ref-type="fig" rid="F4">4C,D</xref>, a behavioral difference between potentiation and depression dynamics emerges. Although in both cases the final resistance is strongly influenced by the applied voltage amplitude, during potentiation the applied voltage affects the change in device resistance from the very first spike pair, whereas during depression the effect of the voltage becomes evident only from the second spike pair onwards. This asymmetry of the curve, although it could in principle be reduced by optimizing the spike shapes, does not prevent the use of the STDP rule in a neuromorphic network.</p>
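The interplay between spike delay, effective voltage, and saturation with repetition described above can be sketched in a toy Python model. This is an illustrative model, not the authors' measured device: the constants V_PEAK, V_TH, TAU, and lr are hypothetical, chosen only to reproduce the qualitative behavior of Figures 4C,D (sub-threshold delays leave the weight untouched; short delays saturate quickly).

```python
import math

# Illustrative model (not the measured device): a synapse whose conductance
# update depends on the spike delay dt through the effective overlap voltage
# across the device, and saturates with repeated spike pairs.
# V_PEAK, V_TH, TAU, and lr are hypothetical values.
V_PEAK = 0.8    # peak overlap voltage as dt -> 0 (V), assumed
V_TH = 0.55     # switching threshold of the device (V), assumed
TAU = 200e-6    # decay constant of the overlap voltage with |dt| (s), assumed

def effective_voltage(dt):
    """Overlap voltage across the device for spike delay dt (seconds)."""
    return V_PEAK * math.exp(-abs(dt) / TAU)

def apply_pair(w, dt, lr=0.25, w_min=0.0, w_max=1.0):
    """One spike pair: potentiate for dt > 0, depress for dt < 0.
    Sub-threshold voltages leave the weight unchanged; each update is a
    fraction of the remaining dynamic range, so repetitions saturate."""
    if effective_voltage(dt) < V_TH:
        return w                           # |dt| too large: no change
    if dt > 0:
        return w + lr * (w_max - w)        # potentiation
    return w - lr * (w - w_min)            # depression

w = 0.1
for _ in range(10):                        # short positive delay, 10 pairs
    w = apply_pair(w, 50e-6)
print(round(w, 3))                         # → 0.949 (saturating growth)
```

With dt = 400 μs the overlap voltage stays below V_TH in this sketch and the weight is left untouched, mirroring the cut-off discussed above.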
</sec>
<sec>
<title>3.3. Associative unsupervised learning in spiking neuromorphic networks</title>
<p>The goal of the following Section is to demonstrate the operation of a small unsupervised network that uses the plastic response of the memristor described above to emulate the functionality of a synapse. To this end, we restrict ourselves for simplicity to a network with fixed timings, i.e., to a subset of the STDP data presented in Section 3.2. More specifically, the curve with &#x00394;t &#x0003D; 300 &#x003BC;s of Figure <xref ref-type="fig" rid="F4">4A</xref> is selected for potentiation and the one with &#x00394;t &#x0003D; &#x02212;50 &#x003BC;s of Figure <xref ref-type="fig" rid="F4">4B</xref> for depression. Of course, the shape of the STDP curve provides additional degrees of freedom that can be exploited for more biologically plausible learning algorithms, e.g., for the treatment of gray-scale or color images. However, such applications go beyond the scope of the present manuscript.</p>
<p>Figure <xref ref-type="fig" rid="F5">5</xref> shows the proposed SNN. For ease of visualization, only a limited number of the connections between pre- and post-neurons is shown in Figure <xref ref-type="fig" rid="F5">5</xref>. Each of the 25 pixels composing the images is associated with a different pre-neuron. Initially, the network is untrained and a learning phase is executed. At the end, the SNN is able to recognize five capital characters (<italic>A, E, I, O</italic>, and <italic>U</italic>; Figure <xref ref-type="fig" rid="F5">5</xref>, inset) given as 5 &#x000D7; 5 pixel black-and-white images. The network learns through an unsupervised STDP protocol. Once the training session is over, the network is able to recognize incomplete or noisy images representing any of the characters, following a winner-take-all approach.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>Proposed fully connected SNN</bold>. 25 pre-neurons are connected to 5 post-neurons through a layer of 125 artificial synapses. Each pixel of the images shown to the network is associated with a pre-neuron. Inset: images shown to the SNN during the training phase.</p></caption>
<graphic xlink:href="fnins-10-00482-g0005.tif"/>
</fig>
<p>The plasticity of the memristor plays the central role in the learning session of the SNN. The training is performed one character at a time. As an example, the procedure used to make the network learn letter <italic>A</italic> is described; the same procedure is then used for all the other characters. The spiking diagram of the neurons is shown in Figure <xref ref-type="fig" rid="F6">6A</xref> and is explained together with the unsupervised learning protocol.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p><bold>(A)</bold> Training session: spiking diagram of one epoch. The training character is shown at 0 s and the duration of an epoch is about 2.15 ms. <bold>(B)</bold> Image shown to the network (top left panel), synaptic weights after 200 epochs (top right panel), and detailed synaptic weight evolution during training session of character <italic>A</italic> (bottom panel). Black lines represent the synapses which are being depressed during the session and orange lines the ones potentiated. <bold>(C)</bold> Example of synaptic weight changes during a learning session. Each 5 &#x000D7; 5 matrix represents the group of 25 synapses contributing to the firing of neurons &#x003B1; to &#x003F5;. Color bar on the right indicates the conductance range of the synapses. Increasing the number of epochs (from top to bottom), the SNN specializes each post-neuron to recognize a different character. <bold>(D)</bold> Distribution of the synaptic weights during the training session.</p></caption>
<graphic xlink:href="fnins-10-00482-g0006.tif"/>
</fig>
<p>At first, character <italic>A</italic> is shown to the network. Black pixels stimulate the associated pre-neurons (Figure <xref ref-type="fig" rid="F5">5</xref>), which fire toward all the post-neurons (Figure <xref ref-type="fig" rid="F6">6A</xref>, top panel). The post-neurons integrate the signals, and the one which first reaches its threshold voltage (e.g., post-neuron &#x003B3;), which is fixed and equal for all post-neurons, fires back to all the pre-neurons (Figure <xref ref-type="fig" rid="F6">6A</xref>, middle panel). The fired spike has three effects: (i) the discharge of all the other post-neurons, following the winner-take-all rule; (ii) the potentiation of the synapses connecting pre-neurons associated with black pixels to post-neuron &#x003B3; (&#x00394;t &#x0003E; 0); (iii) the triggering of the firing of the pre-neurons associated with white pixels (Figure <xref ref-type="fig" rid="F6">6A</xref>, bottom panel). Afterward, about 500 &#x003BC;s after the first spike, post-neuron &#x003B3; fires again (Figure <xref ref-type="fig" rid="F6">6A</xref>, middle panel), thus depressing the synapses connecting it to the firing pre-neurons (&#x00394;t &#x0003C; 0). Pre-neurons associated with black pixels are in their absolute refractory period, so the second spike from post-neuron &#x003B3; has no effect on them. This neuron-handshaking procedure, lasting about 2.15 ms, is called an epoch, and it occurs each time an image is presented to the SNN during the training session. To reach successful learning (i.e., each post-neuron specialized for a different character) with a probability of 99%, the same character is shown to the network up to 200 times (epochs).</p>
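The net effect of one such epoch can be sketched in a few lines of Python. This is a minimal sketch, not the simulated circuit: the "largest integrated drive wins" rule is an illustrative proxy for the first-to-threshold race, and the learning-rate constant LR stands in for the measured memristive update.

```python
# Minimal sketch of one training epoch of the network described above.
# The "largest integrated drive wins" rule is an illustrative proxy for the
# first-to-threshold dynamics; LR is a hypothetical learning rate standing
# in for the measured memristive weight update.
G_MIN, G_MAX = 0.8e-3, 2.5e-3   # conductance bounds (S), from the text
LR = 0.05                        # per-epoch update fraction (assumed)

def train_epoch(weights, image):
    """weights[post][pre] in siemens; image: 25 booleans (True = black).
    Returns the index of the winning post-neuron."""
    # Black pixels drive their pre-neurons; the post-neuron collecting the
    # largest total conductance charges fastest and wins (winner-take-all).
    drive = [sum(g for g, px in zip(row, image) if px) for row in weights]
    winner = max(range(len(weights)), key=drive.__getitem__)
    # Net effect of the handshake: the winner's black-pixel synapses are
    # potentiated (first post-spike, dt > 0) and its white-pixel synapses
    # depressed (second post-spike, dt < 0); other rows are untouched.
    for pre, black in enumerate(image):
        g = weights[winner][pre]
        if black:
            weights[winner][pre] = g + LR * (G_MAX - g)
        else:
            weights[winner][pre] = g - LR * (g - G_MIN)
    return winner

# demo: deterministic near-uniform initial weights and a hypothetical glyph
weights = [[2.0e-3 + 1e-8 * (25 * post + pre) for pre in range(25)]
           for post in range(5)]
glyph = [i % 5 == 0 or i < 5 for i in range(25)]
for _ in range(50):
    winner = train_epoch(weights, glyph)
print(winner)  # the same post-neuron wins every epoch and specializes
```

Because the winner's black-pixel conductances only grow while the other rows stay fixed, the same post-neuron keeps winning for a given character, which is what drives specialization.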
<p>Figure <xref ref-type="fig" rid="F6">6B</xref> shows an example of a training session for letter <italic>A</italic>. The Figure is an excerpt from the video <xref ref-type="supplementary-material" rid="SM1">VideoS1.mp4</xref>, available in the Supplementary Material, which summarizes the 200 epochs required to specialize the SNN to recognize character <italic>A</italic>. Panel (i) represents the image shown to the network. Panel (ii) shows the synaptic weights, after 200 epochs, of the subset of synapses contributing to the firing of post-neuron &#x003B3;. The potentiated synapses are the orange squares in the panel, whereas the depressed ones are colored in black. A close correspondence between panels (i) and (ii) is evident, which underlies the relationship between potentiated synapses and the character learned. Panel (iii) shows the weight evolution as a function of the number of epochs. The depressed synapses (black lines) tend to converge to the lowest conductance value of about 800 &#x003BC;S. On the contrary, the potentiated synapses (orange lines) show a very slight change in conductance, if any, due to the limit imposed by the initial condition of the synaptic layer. Indeed, the initial conductance of each synapse is set in the range from 1.8 to 2.5 mS. The initial distribution is the result of a potentiation operation and it simulates the device-to-device variability plausible in a real network. Both the width and the average value of the initial weight distribution are fundamental to allow the SNN to uniquely specialize the post-neurons during the learning session. The variability in the initial resistance, which is in fact unavoidable for real devices, allows one post-neuron to be favored with respect to the others and therefore to fire first. The narrower the distribution of initial synaptic weights toward high conductance values, the higher the probability of success during learning. This holds up to the unrealistic situation where all the synapses have the same weight, in which case all post-neurons would fire simultaneously and the learning task would fail. Similarly, widening the initial state range leads to a situation where two similar characters, e.g., <italic>E</italic> and <italic>U</italic>, fall into the basin of attraction of the same post-neuron, resulting in unsuccessful learning (i.e., the SNN forgets the former character and specializes the same post-neuron to recognize the last character presented). The same erroneous behavior is obtained if the average value of the initial distribution is moved toward lower conductances.</p>
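The initialization described above can be sketched as follows. This is a minimal sketch: the uniform draw is an illustrative assumption (the text specifies only the 1.8–2.5 mS range), and the function name is hypothetical.

```python
import random

# Sketch of the initial synaptic state discussed above: 125 conductances
# (25 pre-neurons x 5 post-neurons) in the 1.8-2.5 mS range, emulating the
# device-to-device variability left by an initial potentiation. The uniform
# draw is an illustrative assumption; the text specifies only the range.
def init_weights(rng, n_post=5, n_pre=25, g_lo=1.8e-3, g_hi=2.5e-3):
    return [[rng.uniform(g_lo, g_hi) for _ in range(n_pre)]
            for _ in range(n_post)]

weights = init_weights(random.Random(0))
# a nonzero spread is what lets one post-neuron charge first and win;
# a perfectly uniform layer would make all post-neurons fire together
spread = max(map(max, weights)) - min(map(min, weights))
assert spread > 0
```

Narrowing `g_lo`/`g_hi` toward the same high conductance, or lowering both, reproduces the failure modes discussed above.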
<p>An example of a complete training session is illustrated in Figure <xref ref-type="fig" rid="F6">6C</xref> (an animation of the first 50 epochs is shown in Supplementary Material, <xref ref-type="supplementary-material" rid="SM2">VideoS2.mp4</xref>). Each 5 &#x000D7; 5 matrix in Figure <xref ref-type="fig" rid="F6">6C</xref> represents the group of 25 synapses contributing to the firing of post-neurons &#x003B1; to &#x003F5;. Initially, all the weights are randomly distributed between 1.8 and 2.5 mS. As the number of epochs increases (the Figure shows the initial state and the 5th, 50th, and 200th epochs), the weight of each synapse gradually changes until, at the 200th epoch, the SNN is trained and the characters are recognizable also in the synaptic layer. In addition, Figure <xref ref-type="fig" rid="F6">6D</xref> shows the distribution of all 125 synaptic weights in the initial state and after 5, 50, and 200 epochs. It can be noted that during the session the initial distribution, which is unimodal and grouped toward the highest conductance values, splits into two groups, one for depressed synapses and one for potentiated ones, consistent with the results shown in Figure <xref ref-type="fig" rid="F6">6B</xref>, panel (iii).</p>
<p>Similar to the training session, during recognition, when an image is shown to the SNN, the stimulated pre-neurons fire toward all the post-neurons. The post-neuron which is first charged above its threshold fires, thereby both recognizing the character shown and discharging the other post-neurons.</p>
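Recognition can be sketched with the same largest-drive proxy used in the training sketch; this is an assumption standing in for the first-to-threshold race, and the matrix in the demo is a hypothetical trained state, not data from the paper.

```python
# Recognition sketch under a largest-drive proxy (an assumption standing in
# for the first-to-threshold race): the post-neuron receiving the largest
# total conductance from the stimulated pre-neurons fires first and
# classifies the image, while winner-take-all discharges the others.
def recognize(weights, image):
    """weights[post][pre] in siemens; image: booleans (True = black)."""
    drive = [sum(g for g, px in zip(row, image) if px) for row in weights]
    return max(range(len(drive)), key=drive.__getitem__)

# demo with a hypothetical trained matrix: post-neuron 1 is potentiated on
# the template's pixels; one missing black pixel does not change the outcome
high, low = 2.5e-3, 0.8e-3
template = [i < 10 for i in range(25)]
weights = [[low] * 25 for _ in range(5)]
weights[1] = [high if px else low for px in template]
degraded = list(template)
degraded[3] = False          # remove one black pixel
print(recognize(weights, degraded))   # → 1
```

The margin between potentiated and depressed conductances is what tolerates missing or noisy pixels, which is why the separation of the two weight distributions matters for the recognition rate.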
<p>The recognition tests are carried out on 100 network configurations resulting from the same number of learning simulations with different initial synaptic weights. The test set can be divided into two classes of images, one with missing pixels and one with additive noise (Supplementary Figure <xref ref-type="supplementary-material" rid="SM3">2</xref>). In the first test, several images with missing black pixels are shown to the SNN. The results demonstrate that in the worst case the network is always (100% recognition rate) able to recognize the character if the percentage of missing pixels is equal to or lower than 21% for character <italic>A</italic>, 27% for character <italic>E</italic>, 20% for character <italic>I</italic>, 33% for character <italic>O</italic>, and 18% for character <italic>U</italic>. In the second test, noisy images are shown to the network. The test images are chosen among those considered most critical for the SNN to recognize, so that worst cases could be explored. Further details about the images shown and the choice criterion can be found in the Supplementary Figures <xref ref-type="supplementary-material" rid="SM3">2</xref>, <xref ref-type="supplementary-material" rid="SM3">3</xref>, and Supplementary Table <xref ref-type="supplementary-material" rid="SM3">1</xref>. The network recognition rate was 85.71% for images with up to 4 noise pixels. However, the recognition rate is correlated with the number of epochs in the training session. As already mentioned, a training session for a character consists of 200 epochs and almost always leads to successful learning. If the number of epochs during training is reduced, both the success rate of the learning session and the recognition rate decrease. Simulations of learning sessions with different numbers of epochs (200, 50, 10, 8, and 5) are carried out. With 8 epochs, 2 learning sessions out of 3 failed, and with 5 epochs the SNN never achieves successful learning. After concluding a successful learning session, the same test images (see Supplementary Figure <xref ref-type="supplementary-material" rid="SM3">2</xref>) are shown to the SNN during recognition. The recognition rate decreases from 88.22% (200 epochs) to 82.61% (50 epochs), 75.29% (10 epochs), and 72.03% (8 epochs). This means that, when a limited number of epochs is performed, the synapses may be insufficiently depressed, and during recognition they may conflict with the potentiated ones, resulting in incorrect recognition.</p>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>4. Discussion</title>
<p>In Section 3 a filamentary HfO<sub>2</sub> memristor featuring analog behavior is presented. The proposed device is able to emulate both long-term plasticity and the STDP learning rule. Moreover, a simple fully-connected SNN which takes advantage of the memristor's plastic behavior and uses an associative unsupervised STDP-based learning protocol is simulated. After a training session, the network is able to recognize five characters, even when the images displayed are incomplete or noisy. It should be mentioned that non-ideal elements, such as parasitics or jitter, are deliberately not considered in the proposed network, because the performed investigation focuses on the basic principles of a network with analog memristors, where a study at a high level of abstraction is mandatory before considering practical implementations.</p>
<p>We demonstrate long-term functional plasticity with two different spiking algorithms, which have already been used in the literature (Park et al., <xref ref-type="bibr" rid="B25">2013</xref>; Yu et al., <xref ref-type="bibr" rid="B36">2013a</xref>; Li et al., <xref ref-type="bibr" rid="B20">2014</xref>; Zhao et al., <xref ref-type="bibr" rid="B39">2014</xref>) to emulate plasticity. The two algorithms allow an investigation of the device behavior as a function of the voltage amplitude (Figure <xref ref-type="fig" rid="F2">2A</xref>) and of its integrative response when stimulated by identical spikes (Figure <xref ref-type="fig" rid="F2">2B</xref>). An algorithm that modulates the spike voltage applied to the device is not easy to implement in a system, as dedicated read-out and variable voltage biasing circuits are required. On the other hand, the voltage on the memristor in a system could be modulated through the superimposition of long spikes, as proposed several times in the literature (Serrano-Gotarredona et al., <xref ref-type="bibr" rid="B32">2013</xref>; Sa&#x000EF;ghi et al., <xref ref-type="bibr" rid="B30">2015</xref>). This method allows the neuron to always fire the same spike and lets the delay times between spikes determine the actual voltage on the device.</p>
<p>The combined results of the measurements shown in Figure <xref ref-type="fig" rid="F2">2</xref> are used to engineer the shape of the pre- and post-spikes used to emulate homosynaptic plasticity and to conceive a biologically plausible STDP curve (Figures <xref ref-type="fig" rid="F4">4C,D</xref>), which takes advantage of both the relative timing between the two spikes (&#x00394;t) and the plasticity given by spike pair repetition. It should be noted that, though analog changes can be obtained around the previously found thresholds for potentiation and depression, the device can be operated in an analog fashion over a voltage range of some hundreds of mV (Figure <xref ref-type="fig" rid="F4">4</xref>). From Figures <xref ref-type="fig" rid="F4">4A,B</xref>, it can be observed that voltages from 580 to 800 mV for depression and from &#x02212;440 to &#x02212;650 mV for potentiation allow a resistance evolution as a function of the repetition of identical spikes. In particular, Figures <xref ref-type="fig" rid="F4">4C,D</xref> show that the dynamic range decreases with decreasing applied voltage, but the resistance still changes gradually. In a network, it can be expected that different devices show analog transitions for ranges of voltages whose end values (V<sub><italic>min</italic></sub>, V<sub><italic>max</italic></sub>) differ from device to device, but in general a sub-range of voltages allowing analog resistance modulation is shared by many devices. A threshold difference between devices (provided it is within a few tens to one hundred mV) would not prevent analog behavior, as demonstrated in Figure <xref ref-type="fig" rid="F7">7</xref>, which shows the behavior of 3 different devices during potentiation (Figure <xref ref-type="fig" rid="F7">7A</xref>) and depression (Figure <xref ref-type="fig" rid="F7">7B</xref>) when stimulated by trains of 300 identical spikes. In both Figures <xref ref-type="fig" rid="F7">7A,B</xref>, the mean value of 10 repetitions of the same train of spikes is represented by symbols and the shaded area indicates the standard deviation of the measurements. It can be noted that potentiation suffers from greater variability than depression. Nevertheless, despite the device-to-device variability, all the devices show analog behavior in both operations. In addition, different resistance evolutions due to different device thresholds are compensated in SNNs by the high parallelism of the architecture itself, which enhances the network's tolerance to device variability (Yu et al., <xref ref-type="bibr" rid="B36">2013a</xref>). In this respect, the performance of the presented SNN against variability is tested by adding &#x000B1;10% (Figures <xref ref-type="fig" rid="F8">8A,B</xref>) and &#x000B1;30% (Figures <xref ref-type="fig" rid="F8">8C,D</xref>) device-to-device variability to the behavior of the artificial synapses, i.e., the look-up table associated with each synapse is multiplied by a random factor drawn between 0.9 and 1.1, or between 0.7 and 1.3, respectively. Figure <xref ref-type="fig" rid="F8">8A</xref> summarizes the synaptic weight evolution during the training session of all the characters as a function of the epoch number when a variability of &#x000B1;10% is set. Each graph shows the weight evolution of the group of synapses contributing to the firing of a specific post-neuron. During learning, depression (black) and potentiation (orange) of synapses occur, but the weight evolution with and without variability (as in Figure <xref ref-type="fig" rid="F6">6B</xref>) differs, because in the former case, for some presentations of the images to the network, some groups of synapses are not updated (green lines for the synapses connecting to the post-neurons finally specialized for characters <italic>A</italic> and <italic>O</italic>). This is explained as follows.
In the examples reported in Figure <xref ref-type="fig" rid="F8">8</xref>, first <italic>O</italic> is presented, and post-neuron <italic>O</italic> (meaning the post-neuron that finally specializes to recognize <italic>O</italic>) starts firing and updating its associated synapses in the first epoch. However, variability causes the weights to be adjusted in such a way that, from epoch 2 to 6, a different post-neuron fires and the synaptic weights associated with post-neuron <italic>O</italic> are frozen. Specialization then proceeds, with each post-neuron specializing for only one character. The success of the learning session demonstrates the robustness of the network against device-to-device variability, in accordance with Yu et al. (<xref ref-type="bibr" rid="B35">2015</xref>), provided analog behavior holds in each device. Figure <xref ref-type="fig" rid="F8">8B</xref> shows the weight distribution of the synaptic matrix during training. As the number of epochs increases, the initial synaptic weight distribution tends to separate into two groups, one for depressed synapses and one for potentiated synapses, as also happens in Figure <xref ref-type="fig" rid="F6">6D</xref>. However, in the case of Figure <xref ref-type="fig" rid="F8">8B</xref>, the two distributions are wider than in the case where no variability factor is considered. The same observations also hold when the variability is increased to &#x000B1;30%, as shown in Figures <xref ref-type="fig" rid="F8">8C,D</xref>. Indeed, Figure <xref ref-type="fig" rid="F8">8C</xref> also shows, in the bottom two graphs, some epochs where the synaptic weight is not updated. Moreover, in the &#x000B1;30% variability test, the final distribution of the synaptic weights is wider than that achieved for &#x000B1;10% variability (17% wider for depression and 136% wider for potentiation).
In this respect, it is worth analyzing the recognition rate on the test set shown in Supplementary Figure <xref ref-type="supplementary-material" rid="SM3">2</xref> as a function of the number of epochs carried out during learning. Figure <xref ref-type="fig" rid="F8">8E</xref> shows the recognition rate (blue circles) as a function of the number of epochs in an SNN neglecting device variability. Each circle is the average recognition rate over 100 simulations (i.e., 100 learning sessions, each starting with a different initial configuration of the synaptic weights), and the results of each simulation lie in the gray shaded area delimited by the best simulation result (dotted red line) and the worst one (dashed green line). Increasing the number of epochs during learning improves the average recognition rate and decreases the spread of the results. Indeed, the recognition rate varies between 43.75 and 93.75% at 8 learning epochs, whereas it varies between 75 and 100% at 200 learning epochs. As already mentioned in Section 3.3, the recognition rate is closely related to the distribution of the synaptic weights at the end of the training session: the nearer the distributions of the potentiated and depressed synapses, the lower the recognition rate. As a consequence, increasing the number of learning epochs enhances the separation of the two above-mentioned distributions and, therefore, improves the recognition rate. It is interesting to note that, in this respect, the impact of device-to-device variability is almost negligible. Indeed, we performed the same recognition tests with the same methodology also for SNNs with &#x000B1;10 and &#x000B1;30% device-to-device variability. Figure <xref ref-type="fig" rid="F8">8F</xref> shows the average recognition rate as a function of the number of learning epochs for 0% (blue circles), &#x000B1;10% (red squares), and &#x000B1;30% (green triangles) device-to-device variability.
The vertical bars indicate the standard deviation &#x003C3;. The same increasing trend can be noted for all the curves, regardless of the variability. In accordance with Figure <xref ref-type="fig" rid="F8">8E</xref>, in each curve &#x003C3; also decreases with an increasing number of epochs, but for a given number of learning epochs &#x003C3; increases with increasing variability. Nevertheless, the network proves to be robust even for variability up to &#x000B1;30%. The network's robustness lies in the gradual synaptic weight update: for every post-neuron spike, the weight is adjusted by a small amount. If an erroneous spike occurs (such as a post-neuron responding to two different characters), the weight change is small enough that the following epochs can recover from the error.</p>
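The way the variability tests are set up, i.e., scaling each synapse's look-up table by one random factor, can be sketched as follows. The uniform draw and the table contents are illustrative assumptions; the text specifies only the factor ranges (0.9–1.1 and 0.7–1.3).

```python
import random

# Sketch of how the ±10% / ±30% device-to-device variability tests can be
# set up: the look-up table of weight updates associated with each synapse
# is scaled by a single random factor per device, drawn between 0.9 and 1.1
# (or 0.7 and 1.3). The uniform draw and the table contents are
# illustrative assumptions; only the factor ranges come from the text.
def perturb_lut(lut, variability, rng):
    factor = rng.uniform(1.0 - variability, 1.0 + variability)
    return [x * factor for x in lut]

rng = random.Random(42)
base_lut = [1.00, 0.82, 0.67, 0.55]         # hypothetical update table
lut_10 = perturb_lut(base_lut, 0.10, rng)   # one synapse at ±10%
lut_30 = perturb_lut(base_lut, 0.30, rng)   # one synapse at ±30%
assert all(0.9 * b <= x <= 1.1 * b for x, b in zip(lut_10, base_lut))
```

Because a single factor scales the whole table, each device keeps its analog, gradual response; only its effective update magnitude shifts, which matches the robustness argument above.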
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><bold>Variability in the behavior of 3 different devices for (A) potentiation and (B) depression when stimulated by trains of 300 identical spikes</bold>. Potentiation: voltage amplitude &#x02212;0.55 V, time width 25 &#x003BC;s. Depression: voltage amplitude 0.75 V, time width 20 &#x003BC;s. Symbols indicate the mean value of 10 repetitions of the same train of spikes and the shaded area indicates the standard deviation.</p></caption>
<graphic xlink:href="fnins-10-00482-g0007.tif"/>
</fig>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p><bold>Simulation of the training session including &#x000B1;10% (A,B) and &#x000B1;30% (C,D) of variability in synaptic behavior. (A,C)</bold> Detailed synaptic weight evolution during training session of all characters. Black lines represent the synapses which are being depressed during the session and orange lines the ones potentiated. Green lines indicate that the neuron did not fire in the corresponding epoch. <bold>(B,D)</bold> Distribution of the synaptic weights during the training session. <bold>(E)</bold> Recognition rate as a function of the number of epochs in the learning session. The blue circles represent the average recognition rate from 100 simulations where device-to-device variability is not taken into account. The red dotted line and the green dashed one indicate the best and worst results obtained in the simulations, respectively, whereas the other results lie in the shaded gray area. <bold>(F)</bold> Average recognition rate of 100 simulations as a function of the number of epochs in the learning session with device-to-device variability of 0% (blue circles), &#x000B1;10% (red squares), and &#x000B1;30% (green triangles). Error bars show the standard deviation of the results.</p></caption>
<graphic xlink:href="fnins-10-00482-g0008.tif"/>
</fig>
<p>Given the observations above, we would like to stress that, in deterministic networks, it is fundamental to have analog synapses even though, as in the proposed SNN, the images shown are only black and white. In a system with deterministic neurons, as in the proposed one, binary deterministic memristors would lead to fast learning (only a few epochs would be necessary to complete the training session), but also to fast forgetting (Fusi and Abbott, <xref ref-type="bibr" rid="B16">2007</xref>). Indeed, if a noisy image were shown to a trained SNN employing binary synapses, the network would classify that image and, therefore, would adjust the synaptic matrix also according to the pixels which are not representative of that image, disrupting learning. In the case of analog synapses, the same permanent and significant change leading to failure would result only if the same noisy image were shown to the network for several epochs, which is statistically improbable.</p>
<p>In the presented SNN, using two fixed delay times (one for potentiation and one for depression) in the STDP is sufficient as a proof of concept. In this respect, two values are selected (&#x00394;t &#x0003D; 300 &#x003BC;s and &#x00394;t &#x0003D; &#x02212;50 &#x003BC;s) which are coherent with a post-neuron firing as a consequence of stimulation by the pre-neuron (synaptic potentiation, &#x00394;t &#x0003D; 300 &#x003BC;s) and with a pre-neuron firing because of stimulation by the activated post-neuron (synaptic depression, &#x00394;t &#x0003D; &#x02212;50 &#x003BC;s). On the other hand, a network also exploiting variable delay times between pre- and post-spikes would increase the number of available resistance states and, therefore, improve the network robustness even further. As an example, in the case of input-specific associative learning rules for pattern recognition, the possibility of combining different parameters (&#x00394;t and spike pair repetition) to achieve various resistive states with different evolution histories offers a further degree of freedom. A possible application could be networks where images have different colors or shades of gray, which can be linked to different delay times. In this case, at the end of a learning session with a certain number of epochs, the weight distribution of the synaptic matrix would give an indication of the common features of the various images presented to the network. More specifically, the more a group of synapses is potentiated, the more they are stimulated, i.e., the potentiated group identifies a common feature in the set of displayed images.</p>
</sec>
<sec sec-type="conclusions" id="s5">
<title>5. Conclusion</title>
<p>In summary, a thorough analysis of the synaptic features of the proposed oxide-based memristor is carried out. Initially, the device's ability to emulate long-term functional potentiation and depression is proved upon stimulation with spikes of increasing amplitude (staircase-like) and with trains of identical spikes. These experiments show that the memristor tunes its resistance in an analog fashion and can reach a dynamic range of up to one order of magnitude, depending on the spiking algorithm employed. Then, homosynaptic plasticity is tested through STDP experiments, which demonstrate the device's biological-like behavior when subjected to synaptic activity. Finally, the possibility of developing deterministic networks using unsupervised learning is investigated. A subset of the collected STDP data is used to simulate a simple fully-connected SNN featuring an associative unsupervised STDP-based learning protocol. After a training session, the network is able to recognize the five characters, even when partially incomplete or noisy letters are displayed. Therefore, the SNN proves that the proposed memristor can be used to emulate the functionality of an artificial synapse in future neuromorphic architectures with deterministic neurons and analog memristive synapses, making use of unsupervised learning for real-time applications.</p>
</sec>
<sec id="s6">
<title>Author contributions</title>
<p>EC, SB, and SS conceived the experiments and wrote the manuscript. SB and SS developed the memristor device. EC and SB collected the data on synaptic plasticity, in collaboration with AS. EC and SB performed the STDP experiments. EC and SB developed the SNN, in collaboration with AS. All authors discussed the results and contributed to manuscript preparation.</p>
</sec>
<sec id="s7">
<title>Funding</title>
<p>The work has been partially supported by the FP7 European project RAMP (grant agreement n. 612058).</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The reviewer SJ and handling Editor declared their shared affiliation, and the handling Editor states that the process nevertheless met the standards of a fair and objective review.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>The authors acknowledge Dr. M. Alia for his support in device fabrication.</p>
</ack>
<sec sec-type="supplementary-material" id="s8">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="http://journal.frontiersin.org/article/10.3389/fnins.2016.00482">http://journal.frontiersin.org/article/10.3389/fnins.2016.00482</ext-link></p>
<supplementary-material xlink:href="Video1.MP4" id="SM1" mimetype="video/mp4" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Video2.MP4" id="SM2" mimetype="video/mp4" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="DataSheet1.PDF" id="SM3" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ambrogio</surname> <given-names>S.</given-names></name> <name><surname>Balatti</surname> <given-names>S.</given-names></name> <name><surname>Milo</surname> <given-names>V.</given-names></name> <name><surname>Carboni</surname> <given-names>R.</given-names></name> <name><surname>Wang</surname> <given-names>Z. Q.</given-names></name> <name><surname>Calderoni</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2016a</year>). <article-title>Neuromorphic learning and recognition with one-transistor-one-resistor synapses and bistable metal oxide RRAM</article-title>. <source>IEEE Trans. Electr. Dev.</source> <volume>63</volume>, <fpage>1508</fpage>&#x02013;<lpage>1515</lpage>. <pub-id pub-id-type="doi">10.1109/TED.2016.2526647</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ambrogio</surname> <given-names>S.</given-names></name> <name><surname>Balatti</surname> <given-names>S.</given-names></name> <name><surname>Nardi</surname> <given-names>F.</given-names></name> <name><surname>Facchinetti</surname> <given-names>S.</given-names></name> <name><surname>Ielmini</surname> <given-names>D.</given-names></name></person-group> (<year>2013</year>). <article-title>Spike-timing dependent plasticity in a transistor-selected resistive switching memory</article-title>. <source>Nanotechnology</source> <volume>24</volume>:<fpage>384012</fpage>. <pub-id pub-id-type="doi">10.1088/0957-4484/24/38/384012</pub-id><pub-id pub-id-type="pmid">23999495</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ambrogio</surname> <given-names>S.</given-names></name> <name><surname>Ciocchini</surname> <given-names>N.</given-names></name> <name><surname>Laudato</surname> <given-names>M.</given-names></name> <name><surname>Milo</surname> <given-names>V.</given-names></name> <name><surname>Pirovano</surname> <given-names>A.</given-names></name> <name><surname>Fantini</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2016b</year>). <article-title>Unsupervised learning by spike timing dependent plasticity in phase change memory (PCM) synapses</article-title>. <source>Front. Neurosci.</source> <volume>10</volume>:<fpage>56</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2016.00056</pub-id><pub-id pub-id-type="pmid">27013934</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berdan</surname> <given-names>R.</given-names></name> <name><surname>Serb</surname> <given-names>A.</given-names></name> <name><surname>Khiat</surname> <given-names>A.</given-names></name> <name><surname>Regoutz</surname> <given-names>A.</given-names></name> <name><surname>Papavassiliou</surname> <given-names>C.</given-names></name> <name><surname>Prodromakis</surname> <given-names>T.</given-names></name></person-group> (<year>2015</year>). <article-title>A &#x003BC;-controller-based system for interfacing selector-less RRAM crossbar arrays</article-title>. <source>IEEE Trans. Electr. Dev.</source> <volume>62</volume>, <fpage>2190</fpage>&#x02013;<lpage>2196</lpage>. <pub-id pub-id-type="doi">10.1109/TED.2015.2433676</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bi</surname> <given-names>G.-Q.</given-names></name> <name><surname>Poo</surname> <given-names>M.-M.</given-names></name></person-group> (<year>1998</year>). <article-title>Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type</article-title>. <source>J. Neurosci.</source> <volume>18</volume>, <fpage>10464</fpage>&#x02013;<lpage>10472</lpage>. <pub-id pub-id-type="pmid">9852584</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bill</surname> <given-names>J.</given-names></name> <name><surname>Legenstein</surname> <given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>A compound memristive synapse model for statistical learning through stdp in spiking neural networks</article-title>. <source>Front. Neurosci.</source> <volume>8</volume>:<fpage>412</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2014.00412</pub-id><pub-id pub-id-type="pmid">25565943</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brivio</surname> <given-names>S.</given-names></name> <name><surname>Covi</surname> <given-names>E.</given-names></name> <name><surname>Serb</surname> <given-names>A.</given-names></name> <name><surname>Prodromakis</surname> <given-names>T.</given-names></name> <name><surname>Fanciulli</surname> <given-names>M.</given-names></name> <name><surname>Spiga</surname> <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>Experimental study of gradual/abrupt dynamics of HfO<sub>2</sub>-based memristive devices</article-title>. <source>Appl. Phys. Lett.</source> <volume>109</volume>, <fpage>133504</fpage>. <pub-id pub-id-type="doi">10.1063/1.4963675</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brivio</surname> <given-names>S.</given-names></name> <name><surname>Frascaroli</surname> <given-names>J.</given-names></name> <name><surname>Spiga</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>Role of metal-oxide interfaces in the multiple resistance switching regimes of Pt/HfO<sub>2</sub>/TiN devices</article-title>. <source>Appl. Phys. Lett.</source> <volume>107</volume>, <fpage>023504</fpage>. <pub-id pub-id-type="doi">10.1063/1.4926340</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brivio</surname> <given-names>S.</given-names></name> <name><surname>Tallarida</surname> <given-names>G.</given-names></name> <name><surname>Cianci</surname> <given-names>E.</given-names></name> <name><surname>Spiga</surname> <given-names>S.</given-names></name></person-group> (<year>2014</year>). <article-title>Formation and disruption of conductive filaments in a HfO<sub>2</sub>/TiN structure</article-title>. <source>Nanotechnology</source> <volume>25</volume>:<fpage>385705</fpage>. <pub-id pub-id-type="doi">10.1088/0957-4484/25/38/385705</pub-id><pub-id pub-id-type="pmid">25181606</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Burr</surname> <given-names>G. W.</given-names></name> <name><surname>Shelby</surname> <given-names>R. M.</given-names></name> <name><surname>Sidler</surname> <given-names>S.</given-names></name> <name><surname>di Nolfo</surname> <given-names>C.</given-names></name> <name><surname>Jang</surname> <given-names>J.</given-names></name> <name><surname>Boybat</surname> <given-names>I.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Experimental demonstration and tolerancing of a large-scale neural network (165 000 Synapses) using phase-change memory as the synaptic weight element</article-title>. <source>IEEE Trans. Electr. Dev.</source> <volume>62</volume>, <fpage>3498</fpage>&#x02013;<lpage>3507</lpage>. <pub-id pub-id-type="doi">10.1109/TED.2015.2439635</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Covi</surname> <given-names>E.</given-names></name> <name><surname>Brivio</surname> <given-names>S.</given-names></name> <name><surname>Fanciulli</surname> <given-names>M.</given-names></name> <name><surname>Spiga</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>Synaptic potentiation and depression in Al:HfO<sub>2</sub>-based memristor</article-title>. <source>Microelectron. Eng.</source> <volume>147</volume>, <fpage>41</fpage>&#x02013;<lpage>44</lpage>. <pub-id pub-id-type="doi">10.1016/j.mee.2015.04.052</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Covi</surname> <given-names>E.</given-names></name> <name><surname>Brivio</surname> <given-names>S.</given-names></name> <name><surname>Serb</surname> <given-names>A.</given-names></name> <name><surname>Prodromakis</surname> <given-names>T.</given-names></name> <name><surname>Fanciulli</surname> <given-names>M.</given-names></name> <name><surname>Spiga</surname> <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>HfO<sub>2</sub>-based memristors for neuromorphic applications</article-title>, in <source>2016 IEEE International Symposium on Circuits and Systems (ISCAS)</source> (<publisher-loc>Montreal, QC</publisher-loc>), <fpage>393</fpage>&#x02013;<lpage>396</lpage>.</citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Du</surname> <given-names>N.</given-names></name> <name><surname>Kiani</surname> <given-names>M.</given-names></name> <name><surname>Mayr</surname> <given-names>C. G.</given-names></name> <name><surname>You</surname> <given-names>T.</given-names></name> <name><surname>B&#x000FC;rger</surname> <given-names>D.</given-names></name> <name><surname>Skorupa</surname> <given-names>I.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Single pairing spike-timing dependent plasticity in BiFeO<sub>3</sub> memristors with a time window of 25 ms to 125 &#x003BC;s</article-title>. <source>Front. Neurosci.</source> <volume>9</volume>:<fpage>227</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2015.00227</pub-id><pub-id pub-id-type="pmid">26175666</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eryilmaz</surname> <given-names>S. B.</given-names></name> <name><surname>Kuzum</surname> <given-names>D.</given-names></name> <name><surname>Jeyasingh</surname> <given-names>R.</given-names></name> <name><surname>Kim</surname> <given-names>S.</given-names></name> <name><surname>BrightSky</surname> <given-names>M.</given-names></name> <name><surname>Lam</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Brain-like associative learning using a nanoscale non-volatile phase change synaptic device array</article-title>. <source>Front. Neurosci.</source> <volume>8</volume>:<fpage>205</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2014.00205</pub-id><pub-id pub-id-type="pmid">25100936</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Frascaroli</surname> <given-names>J.</given-names></name> <name><surname>Brivio</surname> <given-names>S.</given-names></name> <name><surname>Ferrarese Lupi</surname> <given-names>F.</given-names></name> <name><surname>Seguini</surname> <given-names>G.</given-names></name> <name><surname>Boarino</surname> <given-names>L.</given-names></name> <name><surname>Perego</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Resistive switching in high-density nanodevices fabricated by block copolymer self-assembly</article-title>. <source>ACS Nano</source> <volume>9</volume>, <fpage>2518</fpage>&#x02013;<lpage>2529</lpage>. <pub-id pub-id-type="doi">10.1021/nn505131b</pub-id><pub-id pub-id-type="pmid">25743480</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fusi</surname> <given-names>S.</given-names></name> <name><surname>Abbott</surname> <given-names>L. F.</given-names></name></person-group> (<year>2007</year>). <article-title>Limits on the memory storage capacity of bounded synapses</article-title>. <source>Nat. Neurosci.</source> <volume>10</volume>, <fpage>485</fpage>&#x02013;<lpage>493</lpage>. <pub-id pub-id-type="doi">10.1038/nn1859</pub-id><pub-id pub-id-type="pmid">17351638</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Garbin</surname> <given-names>D.</given-names></name> <name><surname>Vianello</surname> <given-names>E.</given-names></name> <name><surname>Bichler</surname> <given-names>O.</given-names></name> <name><surname>Rafhay</surname> <given-names>Q.</given-names></name> <name><surname>Gamrat</surname> <given-names>C.</given-names></name> <name><surname>Ghibaudo</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>HfO<sub>2</sub>-based oxRAM devices as synapses for convolutional neural networks</article-title>. <source>IEEE Trans. Electr. Dev.</source> <volume>62</volume>, <fpage>2494</fpage>&#x02013;<lpage>2501</lpage>. <pub-id pub-id-type="doi">10.1109/TED.2015.2440102</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuzum</surname> <given-names>D.</given-names></name> <name><surname>Jeyasingh</surname> <given-names>R. G. D.</given-names></name> <name><surname>Lee</surname> <given-names>B.</given-names></name> <name><surname>Wong</surname> <given-names>H.-S. P.</given-names></name></person-group> (<year>2012</year>). <article-title>Nanoelectronic programmable synapses based on phase change materials for brain-inspired computing</article-title>. <source>Nano Lett.</source> <volume>12</volume>, <fpage>2179</fpage>&#x02013;<lpage>2186</lpage>. <pub-id pub-id-type="doi">10.1021/nl201040y</pub-id><pub-id pub-id-type="pmid">21668029</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuzum</surname> <given-names>D.</given-names></name> <name><surname>Yu</surname> <given-names>S.</given-names></name> <name><surname>Wong</surname> <given-names>H.-S. P.</given-names></name></person-group> (<year>2013</year>). <article-title>Synaptic electronics: materials, devices and applications</article-title>. <source>Nanotechnology</source> <volume>24</volume>:<fpage>382001</fpage>. <pub-id pub-id-type="doi">10.1088/0957-4484/24/38/382001</pub-id><pub-id pub-id-type="pmid">23999572</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Zhong</surname> <given-names>Y.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Xu</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>Q.</given-names></name> <name><surname>Sun</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Activity-dependent synaptic plasticity of a chalcogenide electronic synapse for neuromorphic systems</article-title>. <source>Sci. Rep.</source> <volume>4</volume>, <fpage>1</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1038/srep04906</pub-id><pub-id pub-id-type="pmid">24809396</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mandal</surname> <given-names>S.</given-names></name> <name><surname>El-Amin</surname> <given-names>A.</given-names></name> <name><surname>Alexander</surname> <given-names>K.</given-names></name> <name><surname>Rajendran</surname> <given-names>B.</given-names></name> <name><surname>Jha</surname> <given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>Novel synaptic memory device for neuromorphic computing</article-title>. <source>Sci. Rep.</source> <volume>4</volume>:<fpage>5333</fpage>. <pub-id pub-id-type="doi">10.1038/srep05333</pub-id><pub-id pub-id-type="pmid">24939247</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Matveyev</surname> <given-names>Y.</given-names></name> <name><surname>Egorov</surname> <given-names>K.</given-names></name> <name><surname>Markeev</surname> <given-names>A.</given-names></name> <name><surname>Zenkevich</surname> <given-names>A.</given-names></name></person-group> (<year>2015</year>). <article-title>Resistive switching and synaptic properties of fully atomic layer deposition grown TiN/HfO<sub>2</sub>/TiN devices</article-title>. <source>J. Appl. Phys.</source> <volume>117</volume>, <fpage>044901</fpage>. <pub-id pub-id-type="doi">10.1063/1.4905792</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nishitani</surname> <given-names>Y.</given-names></name> <name><surname>Kaneko</surname> <given-names>Y.</given-names></name> <name><surname>Ueda</surname> <given-names>M.</given-names></name></person-group> (<year>2015</year>). <article-title>Supervised learning using spike-timing-dependent plasticity of memristive synapses</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst.</source> <volume>26</volume>, <fpage>2999</fpage>&#x02013;<lpage>3008</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2015.2399491</pub-id><pub-id pub-id-type="pmid">26595417</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Park</surname> <given-names>S.</given-names></name> <name><surname>Chu</surname> <given-names>M.</given-names></name> <name><surname>Kim</surname> <given-names>J.</given-names></name> <name><surname>Noh</surname> <given-names>J.</given-names></name> <name><surname>Jeon</surname> <given-names>M.</given-names></name> <name><surname>Hun Lee</surname> <given-names>B.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Electronic system with memristive synapses for pattern recognition</article-title>. <source>Sci. Rep.</source> <volume>5</volume>, <fpage>1</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1038/srep10123</pub-id><pub-id pub-id-type="pmid">25941950</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Park</surname> <given-names>S.</given-names></name> <name><surname>Sheri</surname> <given-names>A.</given-names></name> <name><surname>Kim</surname> <given-names>J. H.</given-names></name> <name><surname>Noh</surname> <given-names>J.</given-names></name> <name><surname>Jang</surname> <given-names>J.</given-names></name> <name><surname>Jeon</surname> <given-names>M. G.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>Neuromorphic speech systems using advanced ReRAM-based synapse</article-title>, in <source>Proceedings of IEEE International Electron Devices Meeting (IEDM)</source> (<publisher-loc>Washington, DC</publisher-loc>).</citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Prezioso</surname> <given-names>M.</given-names></name> <name><surname>Merrikh-Bayat</surname> <given-names>F.</given-names></name> <name><surname>Hoskins</surname> <given-names>B.</given-names></name> <name><surname>Likharev</surname> <given-names>K.</given-names></name> <name><surname>Strukov</surname> <given-names>D.</given-names></name></person-group> (<year>2016</year>). <article-title>Self-adaptive spike-time-dependent plasticity of metal-oxide memristors</article-title>. <source>Sci. Rep.</source> <volume>6</volume>:<fpage>21331</fpage>. <pub-id pub-id-type="doi">10.1038/srep21331</pub-id><pub-id pub-id-type="pmid">26893175</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Prezioso</surname> <given-names>M.</given-names></name> <name><surname>Merrikh-Bayat</surname> <given-names>F.</given-names></name> <name><surname>Hoskins</surname> <given-names>B. D.</given-names></name> <name><surname>Adam</surname> <given-names>G. C.</given-names></name> <name><surname>Likharev</surname> <given-names>K. K.</given-names></name> <name><surname>Strukov</surname> <given-names>D. B.</given-names></name></person-group> (<year>2015</year>). <article-title>Training and operation of an integrated neuromorphic network based on metal-oxide memristors</article-title>. <source>Nature</source> <volume>521</volume>, <fpage>61</fpage>&#x02013;<lpage>64</lpage>. <pub-id pub-id-type="doi">10.1038/nature14441</pub-id><pub-id pub-id-type="pmid">25951284</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Querlioz</surname> <given-names>D.</given-names></name> <name><surname>Bichler</surname> <given-names>O.</given-names></name> <name><surname>Dollfus</surname> <given-names>P.</given-names></name> <name><surname>Gamrat</surname> <given-names>C.</given-names></name></person-group> (<year>2013</year>). <article-title>Immunity to device variations in a spiking neural network with memristive nanodevices</article-title>. <source>IEEE Trans. Nanotechnol.</source> <volume>12</volume>, <fpage>288</fpage>&#x02013;<lpage>295</lpage>. <pub-id pub-id-type="doi">10.1109/TNANO.2013.2250995</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Querlioz</surname> <given-names>D.</given-names></name> <name><surname>Bichler</surname> <given-names>O.</given-names></name> <name><surname>Vincent</surname> <given-names>A. F.</given-names></name> <name><surname>Gamrat</surname> <given-names>C.</given-names></name></person-group> (<year>2015</year>). <article-title>Bioinspired programming of memory devices for implementing an inference engine</article-title>. <source>Proc. IEEE</source> <volume>103</volume>, <fpage>1398</fpage>&#x02013;<lpage>1416</lpage>. <pub-id pub-id-type="doi">10.1109/JPROC.2015.2437616</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sa&#x000EF;ghi</surname> <given-names>S.</given-names></name> <name><surname>Mayr</surname> <given-names>C. G.</given-names></name> <name><surname>Serrano-Gotarredona</surname> <given-names>T.</given-names></name> <name><surname>Schmidt</surname> <given-names>H.</given-names></name> <name><surname>Lecerf</surname> <given-names>G.</given-names></name> <name><surname>Tomas</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Plasticity in memristive devices for spiking neural networks</article-title>. <source>Front. Neurosci.</source> <volume>9</volume>:<fpage>51</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2015.00051</pub-id><pub-id pub-id-type="pmid">25784849</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Serb</surname> <given-names>A.</given-names></name> <name><surname>Bill</surname> <given-names>J.</given-names></name> <name><surname>Khiat</surname> <given-names>A.</given-names></name> <name><surname>Berdan</surname> <given-names>R.</given-names></name> <name><surname>Legenstein</surname> <given-names>R.</given-names></name> <name><surname>Prodromakis</surname> <given-names>T.</given-names></name></person-group> (<year>2016</year>). <article-title>Unsupervised learning in probabilistic neural networks with multi-state metal-oxide memristive synapses</article-title>. <source>Nat. Commun.</source> <volume>7</volume>:<fpage>12611</fpage>. <pub-id pub-id-type="doi">10.1038/ncomms12611</pub-id><pub-id pub-id-type="pmid">27681181</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Serrano-Gotarredona</surname> <given-names>T.</given-names></name> <name><surname>Masquelier</surname> <given-names>T.</given-names></name> <name><surname>Prodromakis</surname> <given-names>T.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name> <name><surname>Linares-Barranco</surname> <given-names>B.</given-names></name></person-group> (<year>2013</year>). <article-title>STDP and STDP Variations with Memristors</article-title>. <source>Front. Neurosci.</source> <volume>7</volume>:<fpage>2</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2013.00002</pub-id><pub-id pub-id-type="pmid">23423540</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Suri</surname> <given-names>M.</given-names></name> <name><surname>Querlioz</surname> <given-names>D.</given-names></name> <name><surname>Bichler</surname> <given-names>O.</given-names></name> <name><surname>Palma</surname> <given-names>G.</given-names></name> <name><surname>Vianello</surname> <given-names>E.</given-names></name> <name><surname>Vuillaume</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>Bio-inspired stochastic computing using binary CBRAM synapses</article-title>. <source>IEEE Trans. Electr. Dev.</source> <volume>60</volume>, <fpage>2402</fpage>&#x02013;<lpage>2409</lpage>. <pub-id pub-id-type="doi">10.1109/TED.2013.2263000</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Ambrogio</surname> <given-names>S.</given-names></name> <name><surname>Balatti</surname> <given-names>S.</given-names></name> <name><surname>Ielmini</surname> <given-names>D.</given-names></name></person-group> (<year>2015</year>). <article-title>A 2-transistor/1-resistor artificial synapse capable of communication and stochastic learning for neuromorphic systems</article-title>. <source>Front. Neurosci.</source> <volume>8</volume>:<fpage>438</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2014.00438</pub-id><pub-id pub-id-type="pmid">25642161</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Yu</surname> <given-names>S.</given-names></name> <name><surname>Chen</surname> <given-names>P. Y.</given-names></name> <name><surname>Cao</surname> <given-names>Y.</given-names></name> <name><surname>Xia</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Wu</surname> <given-names>H.</given-names></name></person-group> (<year>2015</year>). <article-title>Scaling-up resistive synaptic arrays for neuro-inspired architecture: challenges and prospect</article-title>, in <source>2015 IEEE International Electron Devices Meeting (IEDM)</source> (<publisher-loc>Washington, DC</publisher-loc>). <pub-id pub-id-type="doi">10.1109/IEDM.2015.7409718</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yu</surname> <given-names>S.</given-names></name> <name><surname>Gao</surname> <given-names>B.</given-names></name> <name><surname>Fang</surname> <given-names>Z.</given-names></name> <name><surname>Yu</surname> <given-names>H.</given-names></name> <name><surname>Kang</surname> <given-names>J.</given-names></name> <name><surname>Wong</surname> <given-names>H.-S. P.</given-names></name></person-group> (<year>2013a</year>). <article-title>A low energy oxide-based electronic synaptic device for neuromorphic visual systems with tolerance to device variation</article-title>. <source>Adv. Mater.</source> <volume>25</volume>, <fpage>1774</fpage>&#x02013;<lpage>1779</lpage>. <pub-id pub-id-type="doi">10.1002/adma.201203680</pub-id><pub-id pub-id-type="pmid">23355110</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yu</surname> <given-names>S.</given-names></name> <name><surname>Gao</surname> <given-names>B.</given-names></name> <name><surname>Fang</surname> <given-names>Z.</given-names></name> <name><surname>Yu</surname> <given-names>H.</given-names></name> <name><surname>Kang</surname> <given-names>J.</given-names></name> <name><surname>Wong</surname> <given-names>H.-S. P.</given-names></name></person-group> (<year>2013b</year>). <article-title>Stochastic learning in oxide binary synaptic device for neuromorphic computing</article-title>. <source>Front. Neurosci.</source> <volume>7</volume>:<fpage>186</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2013.00186</pub-id><pub-id pub-id-type="pmid">24198752</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yu</surname> <given-names>S.</given-names></name> <name><surname>Wu</surname> <given-names>Y.</given-names></name> <name><surname>Jeyasingh</surname> <given-names>R.</given-names></name> <name><surname>Kuzum</surname> <given-names>D.</given-names></name> <name><surname>Wong</surname> <given-names>H. S. P.</given-names></name></person-group> (<year>2011</year>). <article-title>An electronic synapse device based on metal oxide resistive switching memory for neuromorphic computation</article-title>. <source>IEEE Trans. Electr. Dev.</source> <volume>58</volume>, <fpage>2729</fpage>&#x02013;<lpage>2737</lpage>. <pub-id pub-id-type="doi">10.1109/TED.2011.2147791</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>L.</given-names></name> <name><surname>Chen</surname> <given-names>H.-Y.</given-names></name> <name><surname>Wu</surname> <given-names>S.-C.</given-names></name> <name><surname>Jiang</surname> <given-names>Z.</given-names></name> <name><surname>Yu</surname> <given-names>S.</given-names></name> <name><surname>Hou</surname> <given-names>T.-H.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Multi-level control of conductive nano-filament evolution in HfO<sub>2</sub> ReRAM by pulse-train operations</article-title>. <source>Nanoscale</source> <volume>6</volume>, <fpage>5698</fpage>&#x02013;<lpage>5702</lpage>. <pub-id pub-id-type="doi">10.1039/c4nr00500g</pub-id><pub-id pub-id-type="pmid">24769626</pub-id></citation>
</ref>
</ref-list>
</back>
</article>