<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="ppub">1662-4548</issn>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Research Foundation</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2012.00090</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Is a 4-Bit Synaptic Weight Resolution Enough? &#x02013; Constraints on Enabling Spike-Timing Dependent Plasticity in Neuromorphic Hardware</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Pfeil</surname> <given-names>Thomas</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001">&#x0002A;</xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Potjans</surname> <given-names>Tobias C.</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Schrader</surname> <given-names>Sven</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Potjans</surname> <given-names>Wiebke</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Schemmel</surname> <given-names>Johannes</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Diesmann</surname> <given-names>Markus</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<xref ref-type="aff" rid="aff5"><sup>5</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Meier</surname> <given-names>Karlheinz</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg</institution> <country>Heidelberg, Germany</country></aff>
<aff id="aff2"><sup>2</sup><institution>Computational and Systems Neuroscience (INM-6), Institute of Neuroscience and Medicine, Research Center J&#x000FC;lich</institution> <country>J&#x000FC;lich, Germany</country></aff>
<aff id="aff3"><sup>3</sup><institution>Brain and Neural Systems Team, RIKEN Computational Science Research Program</institution> <country>Wako-shi, Japan</country></aff>
<aff id="aff4"><sup>4</sup><institution>RIKEN Brain Science Institute</institution> <country>Wako-shi, Japan</country></aff>
<aff id="aff5"><sup>5</sup><institution>RWTH Aachen University</institution> <country>Aachen, Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Stefano Fusi, Columbia University, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Florentin W&#x000F6;rg&#x000F6;tter, University Goettingen, Germany</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Thomas Pfeil, Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 227, 69120 Heidelberg, Germany. e-mail: <email>thomas.pfeil&#x00040;kip.uni-heidelberg.de</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Frontiers in Neuromorphic Engineering, a specialty of Frontiers in Neuroscience.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>17</day>
<month>07</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<volume>6</volume>
<elocation-id>90</elocation-id>
<history>
<date date-type="received">
<day>27</day>
<month>01</month>
<year>2012</year>
</date>
<date date-type="accepted">
<day>04</day>
<month>06</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2012 Pfeil, Potjans, Schrader, Potjans, Schemmel, Diesmann and Meier.</copyright-statement>
<copyright-year>2012</copyright-year>
<license license-type="open-access" xlink:href="http://www.frontiersin.org/licenseagreement"><p>This is an open-access article distributed under the terms of the <uri xlink:href="http://creativecommons.org/licenses/by/3.0/">Creative Commons Attribution License</uri>, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.</p></license>
</permissions>
<abstract>
<p>Large-scale neuromorphic hardware systems typically face a trade-off between the level of detail and the required chip resources. Especially when implementing spike-timing dependent plasticity, saving resources imposes limitations compared to floating-point precision. A natural design modification that saves resources is to reduce the synaptic weight resolution. In this study, we estimate the impact of synaptic weight discretization on several levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a <italic>good hardware verification practice</italic> may give rise to synergy effects between hardware developers and neuroscientists.</p>
</abstract>
<kwd-group>
<kwd>neuromorphic hardware</kwd>
<kwd>wafer-scale integration</kwd>
<kwd>large-scale spiking neural networks</kwd>
<kwd>spike-timing dependent plasticity</kwd>
<kwd>synaptic weight resolution</kwd>
<kwd>circuit variations</kwd>
<kwd>PyNN</kwd>
<kwd>NEST</kwd>
</kwd-group>
<counts>
<fig-count count="9"/>
<table-count count="8"/>
<equation-count count="6"/>
<ref-count count="78"/>
<page-count count="19"/>
<word-count count="14593"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction">
<label>1</label> <title>Introduction</title>
<p>Computer simulations have become an important tool to study cortical networks (e.g. Markram et al., <xref ref-type="bibr" rid="B45">1997</xref>; Brunel, <xref ref-type="bibr" rid="B10">2000</xref>; Morrison et al., <xref ref-type="bibr" rid="B51">2005</xref>, <xref ref-type="bibr" rid="B49">2007</xref>; Brette et al., <xref ref-type="bibr" rid="B8">2007</xref>; Johansson and Lansner, <xref ref-type="bibr" rid="B35">2007</xref>; Vogelstein et al., <xref ref-type="bibr" rid="B75">2008</xref>; Kunkel et al., <xref ref-type="bibr" rid="B39">2011</xref>; Yger et al., <xref ref-type="bibr" rid="B77">2011</xref>). While they provide insight into activity dynamics that cannot otherwise be measured <italic>in vivo</italic> or calculated analytically, the simulations themselves can be very time-consuming and consequently unsuitable for statistical analyses, especially of learning neural networks (Morrison et al., <xref ref-type="bibr" rid="B49">2007</xref>). Even the ongoing enhancement of the von Neumann computer architecture is not likely to reduce simulation runtimes significantly, as both single- and multi-core scaling face limits in terms of transistor size (Thompson and Parthasarathy, <xref ref-type="bibr" rid="B68">2006</xref>), energy consumption (Esmaeilzadeh et al., <xref ref-type="bibr" rid="B22">2011</xref>), and communication (Perrin, <xref ref-type="bibr" rid="B55">2011</xref>).</p>
<p>Neuromorphic hardware systems are an alternative to von Neumann computers that alleviates these limitations. Their underlying VLSI microcircuits are especially designed to solve neuron dynamics and can be highly accelerated compared to biological time (Indiveri et al., <xref ref-type="bibr" rid="B30">2011</xref>). For neuron models whose dynamics can be stated analytically, the model equations can be evaluated either digitally (Plana et al., <xref ref-type="bibr" rid="B56">2007</xref>) by means of numerical methods or with analog circuits that solve the neuron equations intrinsically (Millner et al., <xref ref-type="bibr" rid="B48">2010</xref>). The analog approach has the advantage of maximal parallelism, as all neuron circuits evolve simultaneously in continuous time. Furthermore, high acceleration factors compared to biological time (e.g. up to 10<sup>5</sup>, reported by Millner et al., <xref ref-type="bibr" rid="B48">2010</xref>) can be achieved by reducing the size of the analog neuron circuits. Nevertheless, many neuromorphic hardware systems are developed for operation in real-time to be applied in sensor applications or medical implants (Fromherz, <xref ref-type="bibr" rid="B24">2002</xref>; Vogels et al., <xref ref-type="bibr" rid="B74">2005</xref>; Levi et al., <xref ref-type="bibr" rid="B40">2008</xref>).</p>
<p>Typically, the large number of programmable and possibly plastic synapses accounts for the major part of chip resources in neuromorphic hardware systems (Figure <xref ref-type="fig" rid="F1">1</xref>). Hence, the limited chip area requires a trade-off between the number and size of neurons and their synapses, while still providing sufficiently complex dynamics. For example, decreasing the resolution of synaptic weights reduces the area required per synapse and therefore allows more synapses on a chip, at the cost of discretized synaptic weights.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Photograph of the HICANN (High Input Count Analog Neural Network) chip, the basic building block of the FACETS wafer-scale hardware system</bold>. Notice the large area occupied by mixed-signal synapse circuits (yellow boxes) compared to neuron circuits (orange boxes). A digital communication infrastructure (area between red and green boxes) ensures a high density of connections between neurons on the same and to other HICANN chips.</p></caption>
<graphic xlink:href="fnins-06-00090-g001.tif"/>
</fig>
<p>In this study, we will analyze the consequences of such a weight discretization and propose generic configuration strategies for spike-timing dependent plasticity on discrete weights. Deviations from the original models caused by this discretization are quantified by particular benchmarks. In addition, we will investigate further hardware restrictions specific to the <italic>FACETS</italic><xref ref-type="fn" rid="fn1"><sup>1</sup></xref> <italic>wafer-scale hardware system</italic> (FACETS, <xref ref-type="bibr" rid="B23">2010</xref>), a pioneering neuromorphic device that implements a large number of both configurable and plastic synapses (Schemmel et al., <xref ref-type="bibr" rid="B62">2008</xref>, <xref ref-type="bibr" rid="B60">2010</xref>; Br&#x000FC;derle et al., <xref ref-type="bibr" rid="B9">2011</xref>). To this end, custom hardware-inspired synapse models are integrated into a network benchmark using the simulation tool NEST (Gewaltig and Diesmann, <xref ref-type="bibr" rid="B28">2007</xref>). The objective is to determine the smallest hardware implementation of synapses that does not distort the behavior of theoretical network models that have been validated by computer simulations.</p>
</sec>
<sec sec-type="materials|methods" id="s1">
<label>2</label> <title>Materials and Methods</title>
<sec>
<label>2.1</label> <title>Spike-timing dependent plasticity</title>
<p>Here, Spike-Timing Dependent Plasticity (STDP) is treated as a pair-based update rule as reviewed by e.g. Morrison et al. (<xref ref-type="bibr" rid="B50">2008</xref>). Most pair-based STDP models (Song et al., <xref ref-type="bibr" rid="B67">2000</xref>; van Rossum et al., <xref ref-type="bibr" rid="B72">2000</xref>; G&#x000FC;tig et al., <xref ref-type="bibr" rid="B29">2003</xref>; Morrison et al., <xref ref-type="bibr" rid="B49">2007</xref>) separate weight modifications &#x003B4;<italic>w</italic> into a spike-timing dependent factor <italic>x</italic>(&#x00394;<italic>t</italic>) and a weight-dependent factor <italic>F</italic>(<italic>w</italic>):</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M10"><mml:mi>&#x003B4;</mml:mi><mml:mi>w</mml:mi><mml:mrow><mml:mo class="MathClass-open">(</mml:mo><mml:mrow><mml:mi>w</mml:mi><mml:mo class="MathClass-punc">,</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mo class="MathClass-close">)</mml:mo></mml:mrow><mml:mo class="MathClass-rel">=</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:mo class="MathClass-open">(</mml:mo><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo class="MathClass-close">)</mml:mo></mml:mrow><mml:mi>x</mml:mi><mml:mrow><mml:mo class="MathClass-open">(</mml:mo><mml:mrow><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mo class="MathClass-close">)</mml:mo></mml:mrow><mml:mo class="MathClass-punc">,</mml:mo></mml:math></disp-formula>
<p>where &#x00394;<italic>t</italic>&#x02009;&#x0003D;&#x02009;<italic>t</italic><sub>i</sub>&#x02009;&#x02212;&#x02009;<italic>t</italic><sub>j</sub> denotes the interval between spike times <italic>t<sub>j</sub></italic> and <italic>t<sub>i</sub></italic> at the pre- and postsynaptic terminal, respectively. Typically, <italic>x</italic>(&#x00394;<italic>t</italic>) is chosen to be exponentially decaying (e.g. Gerstner et al., <xref ref-type="bibr" rid="B27">1996</xref>; Kempter et al., <xref ref-type="bibr" rid="B36">1999</xref>).</p>
<p>In contrast, the weight-dependence <italic>F</italic>(<italic>w</italic>), which is divided into <italic>F</italic><sub>&#x0002B;</sub>(<italic>w</italic>) for a causal and <italic>F</italic><sub>&#x02212;</sub>(<italic>w</italic>) for an anti-causal spike-timing-dependence, differs between STDP models. Examples are given in Table <xref ref-type="table" rid="T1">1</xref>. As <italic>F</italic><sub>&#x0002B;</sub>(<italic>w</italic>) is positive and <italic>F</italic><sub>&#x02212;</sub>(<italic>w</italic>) negative for all these STDP models, causal relationships (&#x00394;<italic>t</italic>&#x02009;&#x0003E;&#x02009;0) between pre- and postsynaptic spikes potentiate and anti-causal relationships (&#x00394;<italic>t</italic>&#x02009;&#x0003C;&#x02009;0) depress synaptic weights.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>Weight- and spike-timing-dependence of pair-based STDP models: additive, multiplicative, G&#x000FC;tig, van Rossum, and power law model</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left">Model name</th>
<th align="left"><italic>F</italic><sub>&#x0002B;</sub>(<italic>w</italic>)</th>
<th align="left"><italic>F</italic><sub>&#x02212;</sub>(<italic>w</italic>)</th>
<th align="left"><italic>x</italic>(&#x00394;<italic>t</italic>)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Additive (Song et al., <xref ref-type="bibr" rid="B67">2000</xref>)</td>
<td align="left">&#x003BB;</td>
<td align="left">&#x02212;&#x003BB;&#x003B1;</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Multiplicative (Turrigiano et al., <xref ref-type="bibr" rid="B71">1998</xref>)</td>
<td align="left">&#x003BB;(1&#x02009;&#x02212;&#x02009;<italic>w</italic>)</td>
<td align="left">&#x02212;&#x003BB;&#x003B1;<italic>w</italic></td>
<td align="left"/>
</tr>
<tr>
<td align="left">G&#x000FC;tig (G&#x000FC;tig et al., <xref ref-type="bibr" rid="B29">2003</xref>)</td>
<td align="left">&#x003BB;(1&#x02009;&#x02212;&#x02009;<italic>w</italic>)<sup>&#x003BC;</sup></td>
<td align="left">&#x02212;&#x003BB;&#x003B1;<italic>w</italic><sup>&#x003BC;</sup></td>
<td align="left"><inline-formula><mml:math id="M1"><mml:mrow><mml:mi>e</mml:mi><mml:mi>x</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo class="MathClass-open">(</mml:mo><mml:mrow><mml:mo class="MathClass-bin">-</mml:mo><mml:mfrac><mml:mrow><mml:mo class="MathClass-rel">|</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi><mml:mo class="MathClass-rel">|</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext>STDP</mml:mtext></mml:mstyle></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo class="MathClass-close">)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left">van Rossum (van Rossum et al., <xref ref-type="bibr" rid="B72">2000</xref>)</td>
<td align="left"><italic>c</italic><sub>p</sub></td>
<td align="left">&#x02212;<italic>c</italic><sub>d</sub><italic>w</italic></td>
<td align="left"/>
</tr>
<tr>
<td align="left">Power law (Morrison et al., <xref ref-type="bibr" rid="B49">2007</xref>)</td>
<td align="left">&#x003BB;<italic>w</italic><sup>&#x003BC;</sup></td>
<td align="left">&#x02212;&#x003BB;&#x003B1;<italic>w</italic></td>
<td align="left"/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic><italic>F</italic><sub>&#x0002B;</sub> in case of a causal spike-timing-dependence (&#x00394;<italic>t</italic>&#x02009;&#x0003E;&#x02009;0) and <italic>F<sub>&#x02212;</sub></italic> in the anti-causal case (&#x00394;<italic>t</italic>&#x02009;&#x0003C;&#x02009;0). Throughout this study, the model proposed by G&#x000FC;tig et al. is applied with parameters &#x003B1;&#x02009;&#x0003D;&#x02009;1.05, &#x003BB;&#x02009;&#x0003D;&#x02009;0.005, &#x003BC;&#x02009;&#x0003D;&#x02009;0.04, and &#x003C4;<sub>STDP</sub>&#x02009;&#x0003D;&#x02009;20&#x02009;ms in accordance with Song et al. (<xref ref-type="bibr" rid="B67">2000</xref>), van Rossum et al. (<xref ref-type="bibr" rid="B72">2000</xref>), Rubin et al. (<xref ref-type="bibr" rid="B59">2001</xref>), G&#x000FC;tig et al. (<xref ref-type="bibr" rid="B29">2003</xref>), Morrison et al. (<xref ref-type="bibr" rid="B50">2008</xref>)</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>In this study, the <italic>intermediate G&#x000FC;tig STDP model</italic> (bounded to the weight range [0, 1]) is chosen as an example STDP model. It represents a mixture of the multiplicative (&#x003BC;&#x02009;&#x0003D;&#x02009;1) and additive (&#x003BC;&#x02009;&#x0003D;&#x02009;0) STDP models and has been shown to provide stability in competitive synaptic learning (G&#x000FC;tig et al., <xref ref-type="bibr" rid="B29">2003</xref>). Nevertheless, the following analyses can be applied to any pair-based STDP model with an exponentially decaying time-dependence, e.g. all models listed in Table <xref ref-type="table" rid="T1">1</xref>.</p>
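<p>To make the update rule concrete, a single pair-based update of equation (1) for the intermediate G&#x000FC;tig model can be sketched in a few lines of Python. This is our own illustration with the parameters from Table 1, not hardware code; the helper name delta_w is hypothetical:</p>

```python
import math

# Guetig model parameters as given in Table 1 (footnote)
ALPHA, LAM, MU, TAU_STDP = 1.05, 0.005, 0.04, 20.0  # tau_STDP in ms

def delta_w(w, dt):
    """Pair-based weight update (equation 1): delta_w = F(w) * x(dt).

    w  -- current synaptic weight in [0, 1]
    dt -- t_post minus t_pre in ms (positive for a causal pairing)
    """
    x = math.exp(-abs(dt) / TAU_STDP)     # exponentially decaying time factor
    if dt > 0:
        return LAM * (1.0 - w) ** MU * x  # F_plus(w): potentiation
    return -LAM * ALPHA * w ** MU * x     # F_minus(w): depression
```

<p>A causal pairing thus always potentiates and an anti-causal one always depresses, with magnitudes on the order of &#x003BB;&#x000B7;<italic>x</italic>(&#x00394;<italic>t</italic>); at the bounds <italic>w</italic>&#x02009;&#x0003D;&#x02009;1 (potentiation) and <italic>w</italic>&#x02009;&#x0003D;&#x02009;0 (depression) the update vanishes.</p>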
</sec>
<sec>
<label>2.2</label> <title>Synapses in large-scale hardware systems</title>
<p>The FACETS wafer-scale hardware system (Schemmel et al., <xref ref-type="bibr" rid="B62">2008</xref>, <xref ref-type="bibr" rid="B60">2010</xref>; Br&#x000FC;derle et al., <xref ref-type="bibr" rid="B9">2011</xref>) is an example of such a synapse size reduction in neuromorphic hardware systems. Figure <xref ref-type="fig" rid="F2">2</xref> schematizes the hardware implementation of a synapse enabling STDP, similar to that presented in Schemmel et al. (<xref ref-type="bibr" rid="B63">2006</xref>) and Schemmel et al. (<xref ref-type="bibr" rid="B61">2007</xref>). It provides the functionality to store the value of the synaptic weight, to measure the spike-timing-dependence between pre- and postsynaptic spikes, and to update the synaptic weight according to this measurement. Synapse density is maximized by separating the <italic>accumulation</italic> of the spike-timing-dependence <italic>x</italic>(&#x00394;<italic>t</italic>) from the <italic>weight update controller</italic>, which is the hardware implementation of <italic>F</italic>(<italic>w</italic>). This allows 4&#x000B7;10<sup>7</sup> synapses on a single wafer (Schemmel et al., <xref ref-type="bibr" rid="B60">2010</xref>).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Schematic drawing of local hardware synapses which are consecutively processed by a global weight update controller</bold>. Analog circuits are highlighted in red (with solid frame) and digital circuits in green (dashed frames). The spike-timing-dependence (here one standard spike pair (SSP) with &#x00394;<italic>t</italic><sub>s</sub>, see text) between the pre- and postsynaptic neuron is (a) measured (here <italic>a</italic><sub>SSP</sub>) and (b) accumulated (here to <italic>a</italic><sub>c</sub> in case of a causal spike pair, <italic>a</italic><sub>a</sub> for anti-causal spike pairs is not affected). Then, the global weight update controller evaluates the accumulated spike-timing-dependence by means of a crossed threshold <italic>a</italic><sub>th</sub> (here <italic>a</italic><sub>c</sub>&#x02009;&#x0003E;&#x02009;<italic>a</italic><sub>th</sub>) and modifies the digital weight of the hardware synapse accordingly. The new synaptic weight <italic>w<sub>n&#x02009;&#x0002B;&#x02009;1</sub></italic> is retrieved from the LUT according to the accumulated spike-timing-dependence and the current weight <italic>w<sub>n</sub></italic> and is written back to the hardware synapse. The analog measurement and accumulation circuit is furthermore minimized by using the reduced symmetric nearest-neighbor spike pairing scheme (Morrison et al., <xref ref-type="bibr" rid="B50">2008</xref>): instead of considering all past and future spikes (all-to-all spike pairing scheme), only the latest and the following spike at both terminals of the synapse are taken into account.</p></caption>
<graphic xlink:href="fnins-06-00090-g002.tif"/>
</fig>
<p>Synaptic dynamics in the FACETS wafer-scale hardware system exploits the fact that weight dynamics typically evolves slower than electrical neuronal activity (Morrison et al., <xref ref-type="bibr" rid="B49">2007</xref>; Kunkel et al., <xref ref-type="bibr" rid="B39">2011</xref>). Therefore, weight updates can be divided into two steps (Figure <xref ref-type="fig" rid="F2">2</xref>). First, a measurement and accumulation step locally determines the relative spike times between pairs of neurons and thus <italic>x</italic>(&#x00394;<italic>t</italic>). This stage is designed in analog hardware (red area in Figure <xref ref-type="fig" rid="F2">2</xref>), as analog measurement and accumulation circuits require less chip resources than digital realizations thereof. Second, the digital weight update controller (upper green area in Figure <xref ref-type="fig" rid="F2">2</xref>) implements <italic>F</italic>(<italic>w</italic>) based on the previous analog result. A global weight update controller<xref ref-type="fn" rid="fn2"><sup>2</sup></xref> is responsible for the consecutive updates of many synapses (Schemmel et al., <xref ref-type="bibr" rid="B63">2006</xref>) and hence limits the maximal rate at which a synapse can be updated, the update controller frequency <italic>v<sub>c</sub></italic>.</p>
<p>Sharing one weight update controller reduces synapses to small analog measurement and accumulation circuits as well as a digital circuit that stores the synaptic weight (Figure <xref ref-type="fig" rid="F2">2</xref>). The area required to implement these digital weights with a resolution of <italic>r</italic> bits is proportional to 2<italic><sup>r</sup></italic>, the number of discrete weights. Consequently, assuming the analog circuits to be fixed in size, the size of a synapse is dominated by its weight storage, which grows exponentially with the weight resolution. For example, the FACETS wafer-scale hardware system has a weight resolution of <italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits, which leaves the previously described analog and digital circuits equally sized on the chip.</p>
<p>Modifications in the layout of synapse circuits are time-consuming and involve expensive re-manufacturing of chips. Thus, the configuration of connections between neurons is designed to be flexible enough to avoid such modifications and to provide a general-purpose modeling environment (Schemmel et al., <xref ref-type="bibr" rid="B60">2010</xref>). For the same reason, the STDP implementation conforms to the majority of available update rules. The STDP models listed in Table <xref ref-type="table" rid="T1">1</xref> share the same time-dependence <italic>x</italic>(&#x00394;<italic>t</italic>). Its exponential shape is mimicked by a small analog circuit, which does not allow for other time-dependences (Schemmel et al., <xref ref-type="bibr" rid="B63">2006</xref>, <xref ref-type="bibr" rid="B61">2007</xref>). The widely differing weight-dependences <italic>F</italic>(<italic>w</italic>), on the other hand, are programmable into the weight update controller. Due to limited weight update controller resources, arithmetic operations <italic>F</italic>(<italic>w</italic>) as listed in Table <xref ref-type="table" rid="T1">1</xref> are not realizable and are replaced by a programmable look-up table (LUT; Schemmel et al., <xref ref-type="bibr" rid="B63">2006</xref>).</p>
<p>Such a LUT lists, for each discrete weight, the resulting weight in case of a causal or anti-causal spike-timing-dependence between pre- and postsynaptic spikes. Instead of performing arithmetic operations during each weight update (equation <xref ref-type="disp-formula" rid="E1">1</xref>), LUTs are used as a recallable memory consisting of precalculated weight modifications. Hence, LUTs do not limit the flexibility of weight updates as long as their weight-dependence (Table <xref ref-type="table" rid="T1">1</xref>) does not change over time. Throughout this study, we prefer the concept of LUTs to arithmetic operations, because we want to focus on the discretized weight space, a state space of limited dimension.</p>
<p>In addition to STDP, the FACETS wafer-scale hardware system also supports a variant of short-term plasticity (Tsodyks and Markram, <xref ref-type="bibr" rid="B70">1997</xref>; Bi and Poo, <xref ref-type="bibr" rid="B4">1998</xref>; Schemmel et al., <xref ref-type="bibr" rid="B61">2007</xref>), which, however, leaves synaptic weights unchanged and therefore lies outside the scope of this study.</p>
</sec>
<sec id="s13">
<label>2.3</label> <title>Discretization of synaptic weights</title>
<p>Continuous weight values <italic>w<sub>c</sub>&#x02009;</italic>&#x02208;&#x02009;[0, 1], as assumed for the STDP models listed in Table <xref ref-type="table" rid="T1">1</xref>, are transformed into <italic>r</italic>-bit coded discrete weight values <italic>w</italic><sub>d</sub>:</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M11"><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext>d</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mo class="MathClass-rel">=</mml:mo><mml:mi>c</mml:mi><mml:mfenced separators="" open="&#x0230A;" close="&#x0230B;"><mml:mrow><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext>c</mml:mtext></mml:mstyle></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:mfrac><mml:mo class="MathClass-bin">&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac></mml:mrow></mml:mfenced><mml:mspace width="1em" class="quad"/><mml:mstyle class="text"><mml:mtext>for</mml:mtext></mml:mstyle><mml:mspace width="2.77695pt" class="tmspace"/><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext>c</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mo class="MathClass-rel">&#x02208;</mml:mo><mml:mi>I</mml:mi></mml:math></disp-formula>
<p>where <italic>c</italic>&#x02009;&#x0003D;&#x02009;1/(2<italic><sup>r</sup></italic>&#x02009;&#x02212;&#x02009;1) denotes the width of a bin and <inline-formula><mml:math id="M2"><mml:mrow><mml:mrow><mml:mo class="MathClass-open">&#x0230A;</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext class="textit" mathvariant="italic">x</mml:mtext></mml:mstyle></mml:mrow><mml:mo class="MathClass-close">&#x0230B;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> the floor function, i.e. the largest integer less than or equal to <italic>x</italic>. This procedure divides the range of weight values <italic>I</italic>&#x02009;&#x0003D;&#x02009;[0, 1] into 2<italic><sup>r</sup></italic> bins. The term 1/2 allows for a correct discretization of weight values near the borders of <italic>I</italic>, effectively halving the width of the two outermost bins (otherwise, only <italic>w</italic><sub>c</sub>&#x02009;&#x0003D;&#x02009;1 would be mapped to <italic>w</italic><sub>d</sub>&#x02009;&#x0003D;&#x02009;1).</p>
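<p>Equation (2) amounts to rounding onto an <italic>r</italic>-bit grid with spacing <italic>c</italic>. As a minimal Python sketch (our own helper for illustration, not part of any hardware interface):</p>

```python
import math

def discretize(w_c, r):
    """Map a continuous weight w_c in [0, 1] onto 2**r discrete values (equation 2)."""
    c = 1.0 / (2 ** r - 1)                # bin width
    # the added 1/2 halves the two outermost bins, so both 0 and 1 are reachable
    return c * math.floor(w_c / c + 0.5)
```

<p>For <italic>r</italic>&#x02009;&#x0003D;&#x02009;4 the grid points are <italic>k</italic>/15 for <italic>k</italic>&#x02009;&#x0003D;&#x02009;0, &#x02026;, 15; for example, any <italic>w</italic><sub>c</sub> from 29/30 upward is mapped to <italic>w</italic><sub>d</sub>&#x02009;&#x0003D;&#x02009;1, not only <italic>w</italic><sub>c</sub>&#x02009;&#x0003D;&#x02009;1.</p>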
</sec>
<sec id="s14">
<label>2.4</label> <title>Discretization of spike-timing dependent plasticity</title>
<p>A single weight update, resulting from a pre- and postsynaptic spike, might be too fine-grained to be captured by a low weight resolution (equation <xref ref-type="disp-formula" rid="E2">2</xref>). Therefore, it is necessary to accumulate the effect of weight updates over several consecutive spike pairs in order to reach the next discrete weight value (equation <xref ref-type="disp-formula" rid="E2">2</xref>; Figure <xref ref-type="fig" rid="F2">2</xref>). This is equivalent to stating that the implementation of the STDP model assumes weight updates to be additive over millisecond-range intervals. To this end, we define a <italic>standard spike pair</italic> (SSP) as a spike pair with a time interval between a pre- and postsynaptic spike of &#x00394;<italic>t</italic><sub>s</sub>&#x02009;&#x0003D;&#x02009;10&#x02009;ms, in accordance with biological measurements by Bi and Poo (<xref ref-type="bibr" rid="B5">2001</xref>), Sj&#x000F6;str&#x000F6;m et al. (<xref ref-type="bibr" rid="B66">2001</xref>), and Markram (<xref ref-type="bibr" rid="B44">2006</xref>), in order to provide a standardized measure for the spike-timing-dependence. This time interval is chosen arbitrarily and defines only the granularity (fine enough for the weight resolutions of interest); it is valid for both pre-post and post-pre spike pairs, as <italic>x</italic>(&#x00394;<italic>t</italic>) depends only on the absolute value of &#x00394;<italic>t</italic>.</p>
<p>The values for a LUT are constructed as follows. First, the parameters <italic>r</italic> (weight resolution) and <italic>n</italic> (number of SSPs consecutively applied for an accumulated weight update) as well as the STDP rule-specific parameters &#x003C4;<sub>STDP</sub>, &#x003BB;, &#x003BC;, and &#x003B1; (Table <xref ref-type="table" rid="T1">1</xref>) are chosen. Next, starting from a discrete weight <italic>w</italic><sub>d</sub>, weight updates <italic>&#x003B4;w</italic>(<italic>w</italic>, &#x00394;<italic>t</italic><sub>s</sub>) specified by equation (<xref ref-type="disp-formula" rid="E1">1</xref>) are recursively applied <italic>n</italic> times in continuous weight space, using either exclusively <italic>F</italic><sub>&#x0002B;</sub>(<italic>w</italic>) or exclusively <italic>F</italic><sub>&#x02212;</sub>(<italic>w</italic>). This results in two accumulated weight updates &#x00394;<italic>w</italic><sub>&#x0002B;/&#x02212;</sub>, one for each weight-dependence <italic>F</italic><sub>&#x0002B;/&#x02212;</sub>(<italic>w</italic>). Finally, the resulting weight value in continuous space is transformed back to its discrete representation according to equation (<xref ref-type="disp-formula" rid="E2">2</xref>). This process is carried out for each possible discrete weight value <italic>w</italic><sub>d</sub> (Table <xref ref-type="table" rid="T2">2</xref>). We will further compare different LUTs, treating <italic>n</italic> as a free parameter. In the following, a <italic>weight update</italic> refers to &#x00394;<italic>w</italic>, if not specified otherwise.</p>
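<p>This construction procedure can be sketched in Python. The following is our own illustration, assuming the intermediate G&#x000FC;tig model with the parameters of Table 1 and the SSP interval &#x00394;<italic>t</italic><sub>s</sub>&#x02009;&#x0003D;&#x02009;10&#x02009;ms; with <italic>r</italic>&#x02009;&#x0003D;&#x02009;2 and <italic>n</italic>&#x02009;&#x0003D;&#x02009;100 it reproduces the entries of Table 2:</p>

```python
import math

# Guetig model parameters (Table 1) and the standard spike pair interval
ALPHA, LAM, MU, TAU_STDP = 1.05, 0.005, 0.04, 20.0
DT_S = 10.0  # ms

def discretize(w_c, r):
    """Equation 2: map a continuous weight in [0, 1] onto the r-bit grid."""
    c = 1.0 / (2 ** r - 1)
    return c * math.floor(w_c / c + 0.5)

def build_lut(r, n):
    """Return (lut_plus, lut_minus): for every discrete start weight, the
    discrete weight reached after n consecutive causal (F_plus) or
    anti-causal (F_minus) SSP updates applied in continuous weight space."""
    x = math.exp(-DT_S / TAU_STDP)        # x(dt_s) of one standard spike pair
    lut_plus, lut_minus = [], []
    for k in range(2 ** r):
        w_d = k / (2 ** r - 1)
        # causal branch, weights clipped to the model's range [0, 1]
        w = w_d
        for _ in range(n):
            w = min(1.0, w + LAM * (1.0 - w) ** MU * x)
        lut_plus.append(discretize(w, r))
        # anti-causal branch
        w = w_d
        for _ in range(n):
            w = max(0.0, w - LAM * ALPHA * w ** MU * x)
        lut_minus.append(discretize(w, r))
    return lut_plus, lut_minus
```

<p>Calling build_lut(2, 100) yields (0, 1/3, 2/3, 1) &#x02192; (1/3, 2/3, 1, 1) for causal and (0, 1/3, 2/3, 1) &#x02192; (0, 0, 1/3, 2/3) for anti-causal accumulated updates, i.e. the entries of Table 2; since the weight-dependence is fixed, the table needs to be computed only once and then serves as recallable memory.</p>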
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p><bold>Example look-up table for a weight resolution of <italic>r</italic>&#x02009;&#x0003D;&#x02009;2 bits and <italic>n</italic>&#x02009;&#x0003D;&#x02009;100 SSPs</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left"><italic>w</italic><sub>d</sub></th>
<th align="left"><italic>w</italic><sub>&#x0002B;</sub></th>
<th align="left"><italic>w</italic><sub>&#x02212;</sub></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">0</td>
<td align="left"><inline-formula><mml:math id="M21"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left">0</td>
</tr>
<tr>
<td align="left"><inline-formula><mml:math id="M22"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left"><inline-formula><mml:math id="M23"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left">0</td>
</tr>
<tr>
<td align="left"><inline-formula><mml:math id="M24"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left">1</td>
<td align="left"><inline-formula><mml:math id="M25"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left">1</td>
<td align="left">1</td>
<td align="left"><inline-formula><mml:math id="M26"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Discrete weight <italic>w</italic><sub>d</sub> and the resulting updated weights <italic>w</italic><sub>&#x0002B;/&#x02212;</sub>&#x02009;&#x0003D;&#x02009;<italic>w</italic><sub>d</sub>&#x02009;&#x0002B;&#x02009;&#x00394;<italic>w</italic><sub>&#x0002B;/&#x02212;</sub> for causal and anti-causal weight-dependences</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>Although we are focusing on the G&#x000FC;tig STDP model, updated weight values can in general under- or overrun the allowed weight interval <italic>I</italic> due to finite weight updates &#x00394;<italic>w</italic>. In this case, the weight is clipped to its minimum or maximum value, respectively.</p>
</sec>
<sec id="s15">
<label>2.5</label> <title>Equilibrium weight distributions</title>
<p>We analyze long-term effects of weight discretization by studying the equilibrium weight distribution of a synapse that is subject to Poissonian pre- and postsynaptic firing. Thus, potentiation and depression are equally probable (<italic>p</italic><sub>d</sub>&#x02009;&#x0003D;&#x02009;<italic>p</italic><sub>p</sub>&#x02009;&#x0003D;&#x02009;1/2). Equilibrium weight distributions in discrete weight space of low resolution (between 2 and 10 bits) are compared to those with high resolution (16 bits) via the mean squared error <italic>MSE</italic><sub>eq</sub>. Consecutive weight updates are performed based on precalculated LUTs.</p>
<p>Equilibrium weight distributions of discrete weights for a given weight resolution of <italic>r</italic> bits are calculated as follows. First, a LUT for 2<italic><sup>r</sup></italic> discrete weights is configured with <italic>n</italic> SSPs. Initially, all 2<italic><sup>r</sup></italic> discrete weight values <italic>w<sub>i</sub></italic> have the same probability <italic>P</italic><sub>i,0</sub>&#x02009;&#x0003D;&#x02009;1/2<italic><sup>r</sup></italic>. For a compact description, the discrete weights <italic>w<sub>i</sub></italic> are mapped to a 2<italic><sup>r</sup></italic> dimensional space with unit vectors <inline-formula><mml:math id="M3"><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>e</mml:mi><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mtext>i</mml:mtext></mml:msub><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mi>&#x02115;</mml:mi><mml:mrow><mml:msup><mml:mn>2</mml:mn><mml:mi>r</mml:mi></mml:msup></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula>. Then, for each iteration cycle <italic>j</italic>, the probability distribution is defined by <inline-formula><mml:math id="M4"><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>P</mml:mi><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mtext>j</mml:mtext></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:msubsup><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mtext>i</mml:mtext><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:msup><mml:mn>2</mml:mn><mml:mi>r</mml:mi></mml:msup><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mrow><mml:mtext>i</mml:mtext><mml:mo>,</mml:mo><mml:mtext>j</mml:mtext><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mtext>P</mml:mtext></mml:msub><mml:msub><mml:mover 
accent='true'><mml:mi>e</mml:mi><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mtext>c</mml:mtext></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mtext>d</mml:mtext></mml:msub><mml:msub><mml:mover accent='true'><mml:mi>e</mml:mi><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mtext>a</mml:mtext></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mstyle></mml:mrow></mml:math></inline-formula> where <italic>P<sub>i,j&#x02212;1</sub></italic> is the probability for each discrete weight value <italic>w</italic><sub>i</sub> of the previous iteration cycle <italic>j</italic>&#x02009;&#x02212;&#x02009;1. The indices of <inline-formula><mml:math id="M5"><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>e</mml:mi><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mtext>c</mml:mtext></mml:msub></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="M6"><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>e</mml:mi><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mtext>a</mml:mtext></mml:msub></mml:mrow></mml:math></inline-formula> are those of the resulting discrete weight values <italic>w</italic><sub>i</sub> in case of a causal and anti-causal weight update, respectively, and are represented by the LUT. We define an equilibrium state as reached if the Euclidean norm <inline-formula><mml:math id="M48"><mml:mrow><mml:mrow><mml:mo>&#x02016;</mml:mo><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>P</mml:mi><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mrow><mml:mtext>j</mml:mtext><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x02212;</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>P</mml:mi><mml:mo>&#x02192;</mml:mo></mml:mover><mml:mtext>j</mml:mtext></mml:msub></mml:mrow><mml:mo>&#x02016;</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula> is smaller than a threshold <italic>h</italic>&#x02009;&#x0003D;&#x02009;10<sup>&#x02212;12</sup>.</p>
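<p>The iteration described above is a Markov chain on the 2<italic><sup>r</sup></italic> discrete weights whose transitions are given by the LUT. A minimal sketch, assuming the LUT is represented as two index mappings (one per weight-dependence):</p>

```python
import numpy as np

def equilibrium_distribution(lut_idx_plus, lut_idx_minus, p_p=0.5, p_d=0.5, h=1e-12):
    """Iterate the probability distribution over discrete weights under
    potentiation (probability p_p) and depression (probability p_d) until
    the Euclidean norm of the change falls below the threshold h.
    lut_idx_plus/minus map each discrete weight index to its post-update index."""
    m = len(lut_idx_plus)
    P = np.full(m, 1.0 / m)                    # uniform initial distribution
    while True:
        P_new = np.zeros(m)
        for i, p in enumerate(P):
            P_new[lut_idx_plus[i]] += p_p * p   # causal weight update
            P_new[lut_idx_minus[i]] += p_d * p  # anti-causal weight update
        if np.linalg.norm(P_new - P) < h:       # equilibrium criterion
            return P_new
        P = P_new
```

<p>For the LUT of Table <xref ref-type="table" rid="T2">2</xref> (index mappings 0&#x02009;&#x02192;&#x02009;1, 1&#x02009;&#x02192;&#x02009;2, 2&#x02009;&#x02192;&#x02009;3, 3&#x02009;&#x02192;&#x02009;3 for potentiation and 0&#x02009;&#x02192;&#x02009;0, 1&#x02009;&#x02192;&#x02009;0, 2&#x02009;&#x02192;&#x02009;1, 3&#x02009;&#x02192;&#x02009;2 for depression), the uniform distribution is already stationary.</p>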
<p>An analytical approach for obtaining equilibrium weight distributions is derived in Section 6.1.</p>
</sec>
<sec>
<label>2.6</label> <title>Spiking network benchmarks</title>
<p>In addition to the behavior under Poissonian noise, we study the impact of discretized weights with a software implementation of hardware synapses, enabling us to analyze synapses in isolation as well as in network benchmarks. The design of our simulation environment is flexible enough to take further hardware constraints and biological applications into account.</p>
<sec id="s16">
<label>2.6.1</label> <title>Software implementation of hardware synapses</title>
<p>The hardware constraints considered in this study are implemented as a customized synapse model within the framework of the NEST simulation tool (Gewaltig and Diesmann, <xref ref-type="bibr" rid="B28">2007</xref>), allowing their well-controlled application in simulator-based studies on large-scale neural networks. The basic properties of such a <italic>hardware-inspired synapse model</italic> are described as follows and are illustrated in Figures <xref ref-type="fig" rid="F2">2</xref> and <xref ref-type="fig" rid="F5">5</xref>.</p>
<p>For each LUT configuration defined by its weight resolution <italic>r</italic> and number <italic>n</italic> of SSPs, the threshold for allowing weight updates is set to</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M12"><mml:msub><mml:mrow><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext>th</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mo class="MathClass-rel">=</mml:mo><mml:mi>n</mml:mi><mml:mo class="MathClass-bin">&#x022C5;</mml:mo><mml:msub><mml:mrow><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext>SSP</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mo class="MathClass-punc">,</mml:mo></mml:math></disp-formula>
<p>defining <italic>a</italic>&#x02009;&#x0003D;&#x02009;&#x02211;<italic><sub>i</sub>x</italic>(&#x00394;<italic>t<sub>i</sub></italic>) as the <italic>spike pair accumulation</italic> for arbitrary intervals. Here, a single SSP is used, setting <italic>a</italic>&#x02009;&#x0003D;&#x02009;<italic>a</italic><sub>SSP</sub>&#x02009;&#x0003D;&#x02009;<italic>x</italic>(&#x00394;<italic>t</italic><sub>s</sub>). If either the causal or anti-causal spike pair accumulation <italic>a</italic><sub>c/a</sub> crosses the threshold <italic>a</italic><sub>th</sub>, the synapse is &#x0201C;tagged&#x0201D; for a weight update. At the next cycle of the weight update controller all tagged synapses are updated according to the LUT. Afterward, the spike pair accumulation (causal or anti-causal) is reset to zero. Untagged synapses remain unprocessed by the update controller, and spike pairs are further accumulated without performing any weight update. If a synapse accumulates <italic>a</italic><sub>c</sub> and <italic>a</italic><sub>a</sub> above threshold between two cycles of the weight update controller, both are reset to zero without updating the synaptic weight.</p>
<p>This threshold mechanism implies that the frequency <italic>v</italic><sub>w</sub> of weight updates depends not only on <italic>n</italic>, which determines the threshold <italic>a</italic><sub>th</sub>, but also on the firing rates and on the correlation between the pre- and postsynaptic spike trains. In general, <italic>a</italic> increases faster for higher firing rates or stronger correlations. To circumvent these dependencies on the network dynamics, we will use <italic>n</italic> as a generalized description of the weight update frequency <italic>v</italic><sub>w</sub>. The weight update frequency <italic>v</italic><sub>w</sub> should not be confused with the update controller frequency <italic>v</italic><sub>c</sub>, at which the spike pair accumulations are checked for threshold crossings and which hence limits <italic>v</italic><sub>w</sub>.</p>
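<p>The accumulate-and-threshold scheme described above can be sketched as follows; the class and method names are hypothetical, and the LUT is again represented as two index mappings:</p>

```python
class HardwareInspiredSynapse:
    """Sketch of the tagging/update scheme of the hardware-inspired
    synapse model: spike pairs are accumulated, and at each cycle of the
    weight update controller a crossed threshold triggers a LUT update."""

    def __init__(self, lut_plus, lut_minus, w_idx, a_th):
        self.lut_plus, self.lut_minus = lut_plus, lut_minus
        self.w_idx = w_idx   # index into the table of discrete weights
        self.a_th = a_th     # threshold a_th = n * a_SSP
        self.a_c = 0.0       # causal spike pair accumulation
        self.a_a = 0.0       # anti-causal spike pair accumulation

    def accumulate(self, x_dt, causal):
        """Add a spike pair contribution x(dt) to one accumulation."""
        if causal:
            self.a_c += x_dt
        else:
            self.a_a += x_dt

    def controller_cycle(self):
        """One cycle of the global weight update controller."""
        above_c = self.a_c >= self.a_th
        above_a = self.a_a >= self.a_th
        if above_c and above_a:
            # both crossed between two cycles: reset without weight update
            self.a_c = self.a_a = 0.0
        elif above_c:
            self.w_idx = self.lut_plus[self.w_idx]
            self.a_c = 0.0
        elif above_a:
            self.w_idx = self.lut_minus[self.w_idx]
            self.a_a = 0.0
```

<p>Untagged synapses (neither accumulation above threshold) pass through the controller cycle unchanged and keep accumulating, as in the model description above.</p>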
<p>Furthermore, we have implemented a <italic>reference synapse model</italic> in NEST, which is based on G&#x000FC;tig et al. (<xref ref-type="bibr" rid="B29">2003</xref>). It is reduced in that it employs nearest-neighbor instead of all-to-all spike pairing (Morrison et al., <xref ref-type="bibr" rid="B50">2008</xref>).</p>
<p>All simulations involving synapses are carried out with NEST. Spike trains are applied to built-in <italic>parrot neurons</italic>, which simply repeat their input, in order to control the pre- and postsynaptic spike trains of the interconnecting synapses.</p>
</sec>
<sec id="s17">
<label>2.6.2</label> <title>Single synapse benchmark</title>
<p>We compare the weight evolutions of hardware-inspired and reference synapses receiving correlated pre- and postsynaptic spike trains, drawn from a multiple interaction process (MIP; Kuhn et al., <xref ref-type="bibr" rid="B38">2003</xref>). This process introduces excess synchrony between two realizations by randomly thinning a template Poisson process. SSPs are then obtained by shifting one of the processes by &#x00394;<italic>t</italic><sub>s</sub>.</p>
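<p>A pair of MIP realizations can be sketched as follows; the rate, correlation coefficient, and seed are illustrative choices:</p>

```python
import numpy as np

def mip_pair(rate, c, t_max, dt_shift=0.01, rng=None):
    """Sketch of a multiple interaction process (Kuhn et al., 2003):
    a mother Poisson train of rate rate/c is thinned independently with
    probability c for each child train, yielding two correlated trains of
    the target rate; shifting one child by dt_shift produces spike pairs
    with a fixed latency."""
    rng = rng or np.random.default_rng(0)
    n = rng.poisson(rate / c * t_max)               # mother spike count
    mother = np.sort(rng.uniform(0.0, t_max, n))    # mother Poisson train
    child1 = mother[rng.random(n) < c]              # independent thinning
    child2 = mother[rng.random(n) < c] + dt_shift   # thinned and shifted
    return child1, child2
```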
<p>In this first scenario the spike pair accumulation <italic>a</italic> is checked for crossings of <italic>a</italic><sub>th</sub> with a frequency of <italic>v</italic><sub>c</sub>&#x02009;&#x0003D;&#x02009;10&#x02009;Hz in order to focus on the effects of discrete weights only. The corresponding update interval is equal to the simulation step size, preventing the spike pair accumulation from overshooting the threshold <italic>a</italic><sub>th</sub> without eliciting a weight update.</p>
<p>Synaptic weights are recorded in time steps of 3&#x02009;s over a total period of 150&#x02009;s and are averaged over 30 random MIP realizations. Afterward, the mean weight at each recorded time step is compared between the hardware-inspired and the reference synapse model by means of the mean squared error <italic>MSE<sub>w</sub></italic>.</p>
</sec>
<sec id="s2">
<label>2.6.3</label> <title>Network benchmarks</title>
<p>The detection of presynaptic synchrony is taken as a benchmark for synapse implementations. Two populations of 10 neurons each converge, via either hardware-inspired or reference synapses, onto an integrate-and-fire neuron with exponentially decaying synaptic conductances (see schematic in Figure <xref ref-type="fig" rid="F7">7</xref>A and model description in Tables <xref ref-type="table" rid="T7">7</xref> and <xref ref-type="table" rid="T8">8</xref>). These synapses are excitatory, and their initial weights are drawn randomly from a uniform distribution over [0, 1). The amplitude of the postsynaptic conductance is <italic>wg</italic><sub>max</sub> with <italic>g</italic><sub>max</sub>&#x02009;&#x0003D;&#x02009;100&#x02009;nS. One population draws its spikes from a MIP with correlation coefficient <italic>c</italic> (Kuhn et al., <xref ref-type="bibr" rid="B38">2003</xref>), the other from a Poisson process (a MIP with <italic>c</italic>&#x02009;&#x02192;&#x02009;0). We choose presynaptic firing rates of 7.2&#x02009;Hz such that the target neuron settles at a firing rate of 2&#x02013;22&#x02009;Hz, depending on the synapse model. The exact postsynaptic firing rate is of minor importance as long as the synaptic weights reach an equilibrium state. The synaptic weights are recorded for 2,000&#x02009;s with a sampling frequency of 0.1&#x02009;Hz. The two resulting weight distributions are compared by applying the Mann&#x02013;Whitney U test (Mann and Whitney, <xref ref-type="bibr" rid="B43">1947</xref>).</p>
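<p>For reference, the Mann&#x02013;Whitney U statistic underlying this comparison counts the pairs (<italic>x<sub>i</sub></italic>, <italic>y<sub>j</sub></italic>) with <italic>x<sub>i</sub></italic>&#x02009;&#x0003E;&#x02009;<italic>y<sub>j</sub></italic>, ties counting one half. A minimal sketch of the statistic (without the p-value computation of the full test):</p>

```python
import numpy as np

def mann_whitney_u(x, y):
    """U statistic (Mann and Whitney, 1947): number of pairs (x_i, y_j)
    with x_i > y_j, ties counted as 1/2.  In the benchmark, x and y would
    be the equilibrium weight samples of the correlated and the
    uncorrelated presynaptic population, respectively."""
    x, y = np.asarray(x), np.asarray(y)
    greater = (x[:, None] > y[None, :]).sum()   # strict wins of x over y
    ties = (x[:, None] == y[None, :]).sum()     # ties count one half
    return greater + 0.5 * ties
```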
<sec>
<label>2.6.3.1</label> <title>Further constraints</title>
<p>Not only the discretization of synaptic weights, but also the update controller frequency <italic>v</italic><sub>c</sub> and the reset behavior are constraints of the FACETS wafer-scale hardware system.</p>
<p>To study effects caused by a limited update controller frequency, we choose <italic>v</italic><sub>c</sub> such that the interval between subsequent cycles is a multiple of the simulator time step. Consequently, weight updates can only occur on a time grid.</p>
<p>A <italic>common reset</italic> means that both the causal and the anti-causal spike pair accumulation are reset, although only one of <italic>a</italic><sub>c</sub> and <italic>a</italic><sub>a</sub> has crossed <italic>a</italic><sub>th</sub>. Because the common reset requires only one reset line instead of two, it reduces the chip resources required per synapse and is implemented in the current FACETS wafer-scale hardware system.</p>
<p>As a basis for a possible compensation mechanism for the common reset, we suggest analog-to-digital converters (ADCs) with a 4-bit resolution that read out the spike pair accumulations. Such ADCs require only a small chip area in the global weight update controller compared to the large area occupied by additional reset lines covering all synapses, and are therefore a resource-saving alternative to second reset lines. An ADC makes it possible to compare the spike pair accumulations against multiple thresholds. Implementations of the common reset as well as of ADCs are added to the existing software model. For multiple thresholds, an equal number of LUTs is needed, which have to be chosen carefully. To provide symmetry within the order of consecutive causal and anti-causal weight updates, the spike pair accumulation (causal or anti-causal) that dominates, in the sense of crossing a higher threshold, is applied first.</p>
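<p>One possible reading of the common reset with an ADC-based readout is sketched below; the linear 4-bit mapping of the accumulation range and the tie-breaking rule are assumptions made for illustration only:</p>

```python
def adc_read(a, a_max, bits=4):
    """Sketch of a 4-bit ADC readout of a spike pair accumulation,
    assuming a linear mapping of [0, a_max) onto 2**bits levels."""
    level = int(a / a_max * (1 << bits))
    return min(level, (1 << bits) - 1)

def common_reset_cycle(a_c, a_a, a_max, bits=4):
    """Sketch of one controller cycle with a common reset: if any ADC
    level is nonzero, the dominant accumulation (higher level) selects
    the update, and BOTH accumulations are cleared by the single reset."""
    lc, la = adc_read(a_c, a_max, bits), adc_read(a_a, a_max, bits)
    if lc == la == 0:
        return None, a_c, a_a       # no threshold crossed, keep accumulating
    update = 'causal' if lc >= la else 'anti-causal'
    return update, 0.0, 0.0         # common reset clears both accumulations
```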
</sec>
<sec>
<label>2.6.3.2</label> <title>Peri-stimulus-time-histograms</title>
<p>The difference between static and STDP synapses in eliciting postsynaptic spikes in the above network benchmark can be analyzed with peri-stimulus-time-histograms (PSTHs). Here, PSTHs show the probability of postsynaptic spike occurrences as a function of the delay between a presynaptic trigger and the following postsynaptic spike. Spike times are recorded within the last third of an elongated simulation of 3,000&#x02009;s with <italic>c</italic>&#x02009;&#x0003D;&#x02009;0.025. During the last 1,000&#x02009;s the mean weights are already in their equilibrium state, but still fluctuate around it. The first spike of any two presynaptic spikes within a time window of &#x00394;<italic>t</italic><sub>on</sub>&#x02009;&#x0003D;&#x02009;1&#x02009;ms is used as a trigger. The window &#x00394;<italic>t</italic><sub>on</sub> is chosen small compared to the membrane time constant &#x003C4;<sub>m</sub>&#x02009;&#x0003D;&#x02009;15&#x02009;ms, such that the excitatory postsynaptic potentials of both presynaptic spikes overlap and increase the probability of eliciting a postsynaptic spike. On the other hand, &#x00394;<italic>t</italic><sub>on</sub> is chosen large enough to include not only the simultaneous spikes generated by the MIP, but also coincident spikes within the uncorrelated presynaptic population.</p>
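<p>The trigger detection and PSTH construction can be sketched as follows; the bin width and the observation window are illustrative choices:</p>

```python
import numpy as np

def psth(pre_spikes, post_spikes, dt_on=0.001, bin_width=0.001, t_max=0.05):
    """Sketch of the PSTH construction: the first spike of any two
    presynaptic spikes within dt_on serves as a trigger; the delays from
    each trigger to the next postsynaptic spike are histogrammed and
    normalized to a probability per bin."""
    pre = np.sort(np.asarray(pre_spikes, dtype=float))
    post = np.sort(np.asarray(post_spikes, dtype=float))
    triggers = pre[:-1][np.diff(pre) <= dt_on]   # first spike of close pairs
    delays = []
    for t in triggers:
        following = post[post > t]               # next postsynaptic spike
        if following.size:
            delays.append(following[0] - t)
    bins = np.arange(0.0, t_max + bin_width, bin_width)
    counts, _ = np.histogram(delays, bins=bins)
    return counts / max(len(triggers), 1)        # probability per delay bin
```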
</sec>
</sec>
</sec>
<sec>
<label>2.7</label> <title>Hardware variations</title>
<p>In contrast to arithmetic operations in software models, analog circuits vary due to the manufacturing process, although they are identically designed. The choice of precision for all building blocks should be governed by those that distort network functionality most. In this study, we assume that variations within the analog measurement and accumulation circuits are likely to be decisive for these choices, as they operate on the lowest level of STDP. Circuit variations are measured and compared between the causal and anti-causal part within a synapse and between synapses. All measurements are carried out with the FACETS chip-based hardware system (Schemmel et al., <xref ref-type="bibr" rid="B63">2006</xref>, <xref ref-type="bibr" rid="B61">2007</xref>) with hardware parameters listed in Table <xref ref-type="table" rid="T6">6</xref>. The FACETS chip-based hardware system shares a conceptually nearly identical STDP circuit with the FACETS wafer-scale hardware system (for details see Section <xref ref-type="sec" rid="s1">2</xref>), which was still in the assembly process during the course of this study. The hardware measurements are written in PyNN (Davison and Fr&#x000E9;gnac, <xref ref-type="bibr" rid="B17">2006</xref>) and use the workflow described in Br&#x000FC;derle et al. (<xref ref-type="bibr" rid="B9">2011</xref>).</p>
<sec id="s3">
<label>2.7.1</label> <title>Measurement</title>
<p>The circuit variations due to production imperfections are measured by recording <italic>STDP curves</italic> and comparing their integrals for &#x00394;<italic>t</italic>&#x02009;&#x0003E;&#x02009;0 and &#x00394;<italic>t</italic>&#x02009;&#x0003C;&#x02009;0. The curves are recorded by applying equidistant pairs of pre- and postsynaptic spikes with a predefined latency &#x00394;<italic>t</italic>. Presynaptic spikes can be fed into the hardware precisely. However, in contrast to NEST&#x02019;s parrot neurons, postsynaptic spikes are not directly adjustable and therefore have to be evoked by several synchronous external triggers (for details see Section 6.3). After discarding the first 10 spike pairs to ensure regular firing, the pre- and postsynaptic spike trains are shifted until the desired latency &#x00394;<italic>t</italic> is measured. Due to the low spike pair frequency of 10&#x02009;Hz, only the correlations within and not between the spike pairs are accumulated. The number <italic>N</italic> of consecutive spike pairs is increased until the threshold is crossed and hence a correlation flag is set (Figure <xref ref-type="fig" rid="F8">8</xref>A). The inverse of this number plotted versus &#x00394;<italic>t</italic> is called an STDP curve. Such curves were recorded for 252 synapses within one synapse column; the remaining 4 synapses in this column were discarded.</p>
<p>For each STDP curve the total area <italic>A</italic><sub>t</sub>&#x02009;&#x0003D;&#x02009;<italic>A</italic><sub>a</sub>&#x02009;&#x0002B;&#x02009;<italic>A</italic><sub>c</sub> is calculated and normalized by the mean <inline-formula><mml:math id="M7"><mml:mrow><mml:mover accent='true'><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mtext>abs</mml:mtext></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy='true'>&#x000AF;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula> of the absolute area <italic>A</italic><sub>abs</sub>&#x02009;&#x0003D;&#x02009;|<italic>A</italic><sub>a</sub>|&#x02009;&#x0002B;&#x02009;|<italic>A</italic><sub>c</sub>| over all STDP curves. Ideally, <italic>A</italic><sub>t</sub> would vanish if both circuits were manufactured identically. The standard deviation &#x003C3;<sub>a</sub> (assuming Gaussian distributed measurement data) of these normalized total areas <italic>A</italic><sub>t</sub> is taken as one measure for circuit variations. Besides this asymmetry, which measures the variation <italic>within</italic> a synapse, a measure for the variation <italic>across</italic> synapses is the standard deviation &#x003C3;<sub>t</sub> of the absolute areas <italic>A</italic><sub>abs</sub>. To this end, the absolute areas <italic>A</italic><sub>abs</sub> under each STDP curve are again normalized by <inline-formula><mml:math id="M8"><mml:mrow><mml:mover accent='true'><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mtext>abs</mml:mtext></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy='true'>&#x000AF;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula>, and the mean of all these normalized absolute areas is subtracted.</p>
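<p>The two variation measures can be sketched as follows, assuming the causal and anti-causal areas are given per synapse:</p>

```python
import numpy as np

def circuit_variation_measures(A_c, A_a):
    """Sketch of the variation measures: A_c and A_a are arrays of causal
    and anti-causal STDP-curve areas, one entry per synapse."""
    A_c, A_a = np.asarray(A_c, dtype=float), np.asarray(A_a, dtype=float)
    A_abs = np.abs(A_a) + np.abs(A_c)       # absolute area per curve
    mean_abs = A_abs.mean()                 # normalization constant
    # asymmetry within a synapse: normalized total area A_t
    sigma_a = ((A_a + A_c) / mean_abs).std()
    # variation across synapses: normalized absolute areas, mean removed
    A_n = A_abs / mean_abs
    sigma_t = (A_n - A_n.mean()).std()
    return sigma_a, sigma_t
```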
</sec>
<sec id="s4">
<label>2.7.2</label> <title>Software analysis</title>
<p>In order to predict the effects of the previously measured variations on the network benchmark, these variations are integrated into computer simulations. The thresholds for the causal and anti-causal spike pair accumulations are drawn from two overlaying Gaussian distributions defined by the ideal thresholds (equation <xref ref-type="disp-formula" rid="E3">3</xref>) and their variations &#x003C3;<sub>t</sub>, &#x003C3;<sub>a</sub>. Again, the same network benchmark as described above is used, but with a fixed correlation coefficient of <italic>c</italic>&#x02009;&#x0003D;&#x02009;0.025 and an 8-bit LUT configured with <italic>n</italic>&#x02009;&#x0003D;&#x02009;12 SSPs.</p>
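<p>One way to realize such threshold variations in a simulation is sketched below; the decomposition into a per-synapse magnitude term (scaled by &#x003C3;<sub>t</sub>) and a causal/anti-causal asymmetry term (scaled by &#x003C3;<sub>a</sub>) is an assumption of this sketch:</p>

```python
import numpy as np

def jittered_thresholds(n_synapses, a_th, sigma_t, sigma_a, rng=None):
    """Sketch: draw per-synapse causal and anti-causal thresholds around
    the ideal a_th.  sigma_t scales the common magnitude variation across
    synapses, sigma_a the asymmetry between the two circuits of a synapse."""
    rng = rng or np.random.default_rng(1)
    common = rng.normal(a_th, sigma_t * a_th, n_synapses)  # across synapses
    asym = rng.normal(0.0, sigma_a * a_th, n_synapses)     # within a synapse
    return common + asym / 2, common - asym / 2            # causal, anti-causal
```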
</sec>
</sec>
</sec>
<sec id="s5">
<label>3</label> <title>Results</title>
<p>Synaptic weights of the FACETS wafer-scale hardware system (Schemmel et al., <xref ref-type="bibr" rid="B60">2010</xref>) have a 4-bit resolution. We show that such a weight resolution is enough to exhibit learning in a neural network benchmark for synchrony detection. To this end, we analyze the effects of weight discretization in three steps as summarized in Table <xref ref-type="table" rid="T3">3</xref>.</p>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p><bold>Outline of analyses on the effects of weight discretization and further hardware constraints</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left">Description</th>
<th align="left">Results</th>
<th align="left">Methods</th>
</tr>
</thead>
<tbody>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>LOOK-UP TABLE ANALYSIS</bold></td>
</tr>
<tr>
<td align="left">Basic analyses on the configuration of STDP on discrete weights by means of look-up tables (A)</td>
<td align="left">A) Section <xref ref-type="sec" rid="s6">3.1</xref></td>
<td align="left">A) Sections <xref ref-type="sec" rid="s13">2.3</xref> and <xref ref-type="sec" rid="s14">2.4</xref></td>
</tr>
<tr>
<td align="left">and their long-term dynamics (B).</td>
<td align="left">B) Section <xref ref-type="sec" rid="s7">3.2</xref></td>
<td align="left">B) Section <xref ref-type="sec" rid="s15">2.5</xref></td>
</tr>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>SPIKING NETWORK BENCHMARKS</bold></td>
</tr>
<tr>
<td align="left">Software implementation of hardware-inspired synapses with discrete weights for application in spiking neural environments (C).</td>
<td align="left"/>
<td align="left">C) Section <xref ref-type="sec" rid="s16">2.6.1</xref></td>
</tr>
<tr>
<td align="left">Analyses of their effects on short-term weight dynamics in single synapses (D)</td>
<td align="left">D) Section <xref ref-type="sec" rid="s9">3.3.1</xref></td>
<td align="left">D) Section <xref ref-type="sec" rid="s17">2.6.2</xref></td>
</tr>
<tr>
<td align="left">and neural networks (E).</td>
<td align="left">E) Section <xref ref-type="sec" rid="s10">3.3.2</xref></td>
<td align="left">E) Section <xref ref-type="sec" rid="s2">2.6.3</xref></td>
</tr>
<tr>
<td align="left">Analyses on how additional hardware constraints affect the network benchmark (F).</td>
<td align="left">F) Section <xref ref-type="sec" rid="s11">3.3.3</xref></td>
<td align="left">F) Section <xref ref-type="sec" rid="s2">2.6.3</xref></td>
</tr>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>HARDWARE MEASUREMENTS</bold></td>
</tr>
<tr>
<td align="left">Measurement of hardware variations (G)</td>
<td align="left">G) Section <xref ref-type="sec" rid="s12">3.4</xref></td>
<td align="left">G) Section <xref ref-type="sec" rid="s3">2.7.1</xref></td>
</tr>
<tr>
<td align="left">and computer simulations analyzing their effects on the network benchmark (H).</td>
<td align="left">H) Section <xref ref-type="sec" rid="s12">3.4</xref></td>
<td align="left">H) Section <xref ref-type="sec" rid="s4">2.7.2</xref></td>
</tr>
</tbody>
</table>
</table-wrap>
<sec id="s6">
<label>3.1</label> <title>Dynamic range of STDP on discrete weights</title>
<p>We choose the configuration of STDP on discrete weights according to Sections <xref ref-type="sec" rid="s13">2.3</xref> and <xref ref-type="sec" rid="s14">2.4</xref> to obtain weight dynamics comparable to those in continuous weight space. Each configuration can be described by a LUT &#x0201C;projecting&#x0201D; each discrete weight to new values, one for potentiation and one for depression (Table <xref ref-type="table" rid="T4">4</xref>). For a given weight resolution <italic>r</italic> the free configuration parameter <italic>n</italic> (number of SSPs) has to be adjusted to avoid a further reduction of the usable weight resolution by <italic>dead discrete weights</italic>. Dead discrete weights are defined as weights that project to themselves under both potentiation and depression, or that do not receive any projections from other discrete weights. The percentage of dead discrete weights <italic>d</italic> defines the lower and upper limit of feasible values for <italic>n</italic>, the <italic>dynamic range</italic>. The absolute value of the interval within an SSP (&#x00394;<italic>t</italic><sub>s</sub>) is an arbitrary choice merely defining the granularity and does not affect the results (not shown). Note that spike-timing precision <italic>in vivo</italic>, which is observed for high dimensional input such as dense noise and natural scenes, rarely goes beyond 5&#x02013;10&#x02009;ms (Butts et al., <xref ref-type="bibr" rid="B11">2007</xref>; Desbordes et al., <xref ref-type="bibr" rid="B19">2008</xref>, <xref ref-type="bibr" rid="B18">2010</xref>; Marre et al., <xref ref-type="bibr" rid="B46">2009</xref>; J. Fr&#x000E9;gnac, personal communication), and the choice of 10&#x02009;ms as a granular step is thus justified biologically.</p>
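<p>Dead discrete weights can be detected directly from a LUT given as two index mappings; a minimal sketch:</p>

```python
def dead_weights(lut_plus, lut_minus):
    """Sketch: return the indices of dead discrete weights, i.e., weights
    projecting only to themselves under both potentiation and depression,
    or never receiving a projection from another discrete weight."""
    m = len(lut_plus)
    self_proj = {i for i in range(m)
                 if lut_plus[i] == i and lut_minus[i] == i}
    reached = set()                      # targets of non-trivial projections
    for i in range(m):
        if lut_plus[i] != i:
            reached.add(lut_plus[i])
        if lut_minus[i] != i:
            reached.add(lut_minus[i])
    unreached = set(range(m)) - reached
    return self_proj | unreached
```

<p>Applied to the LUTs of Table <xref ref-type="table" rid="T4">4</xref> (as index mappings over the four discrete weights), variant (a) yields no dead weights, while in variant (b) the weights with indices 1 and 2 are flagged as projecting exclusively to themselves.</p>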
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p><bold>Look-up tables for different numbers <italic>n</italic> of SSPs</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left"><italic>w<sub>d</sub></italic></th>
<th align="left"><italic>w<sub>&#x0002B;</sub></italic></th>
<th align="left"><italic>w<sub>&#x02212;</sub></italic></th>
<th align="left"><italic>w<sub>d</sub></italic></th>
<th align="left"><italic>w<sub>&#x0002B;</sub></italic></th>
<th align="left"><italic>w<sub>&#x02212;</sub></italic></th>
<th align="left"><italic>w<sub>d</sub></italic></th>
<th align="left"><italic>w<sub>&#x0002B;</sub></italic></th>
<th align="left"><italic>w<sub>&#x02212;</sub></italic></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">0</td>
<td align="left"><inline-formula><mml:math id="M27"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left">0</td>
<td align="left">0</td>
<td align="left"><inline-formula><mml:math id="M28"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left">0</td>
<td align="left">0</td>
<td align="left"><inline-formula><mml:math id="M29"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left">0</td>
</tr>
<tr>
<td align="left"><inline-formula><mml:math id="M30"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left"><inline-formula><mml:math id="M31"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left">0</td>
<td align="left"><inline-formula><mml:math id="M32"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left"><inline-formula><mml:math id="M33"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left"><inline-formula><mml:math id="M34"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left"><inline-formula><mml:math id="M35"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left">1</td>
<td align="left">0</td>
</tr>
<tr>
<td align="left"><inline-formula><mml:math id="M36"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left">1</td>
<td align="left"><inline-formula><mml:math id="M37"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left"><inline-formula><mml:math id="M38"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left"><inline-formula><mml:math id="M39"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left"><inline-formula><mml:math id="M40"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left"><inline-formula><mml:math id="M41"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left">1</td>
<td align="left">0</td>
</tr>
<tr>
<td align="left">1</td>
<td align="left">1</td>
<td align="left"><inline-formula><mml:math id="M42"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left">1</td>
<td align="left">1</td>
<td align="left"><inline-formula><mml:math id="M43"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula></td>
<td align="left">1</td>
<td align="left">1</td>
<td align="left">0</td>
</tr>
<tr>
<td colspan="3" align="center">(a)</td>
<td colspan="3" align="center">(b)</td>
<td colspan="3" align="center">(c)</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>(a) As in Table <xref ref-type="table" rid="T2">2</xref> (<italic>n</italic>&#x02009;&#x0003D;&#x02009;100), which results in a LUT as expected: weights are either potentiated or depressed throughout the entire table. (b) <italic>n</italic>&#x02009;&#x0003D;&#x02009;60 is too low, because the discrete weights <inline-formula><mml:math id="M44"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="M45"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula> project exclusively to themselves. (c) <italic>n</italic>&#x02009;&#x0003D;&#x02009;350 is too large, because for <italic>w<sub>&#x0002B;</sub></italic> the discrete weight 0 is mapped directly to <inline-formula><mml:math id="M51"><mml:mrow><mml:mfrac><mml:mn>2</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula> (and for <italic>w</italic><sub>-</sub> the weight 1 is mapped to 0), so that <inline-formula><mml:math id="M46"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula> is never reached</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>Generally, low values of <italic>n</italic> yield frequent, small weight updates. If <italic>n</italic> is too low, however, some discrete weights may project to themselves (see rounding in equation <xref ref-type="disp-formula" rid="E2">2</xref>), preventing synaptic weights from evolving dynamically (see Table <xref ref-type="table" rid="T4">4</xref>; <italic>n</italic>&#x02009;&#x0003D;&#x02009;15 in Figure <xref ref-type="fig" rid="F3">3</xref>A).</p>
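To make the self-projection effect concrete, the following sketch builds a potentiation LUT on discrete weights for a soft-bound multiplicative update rule and flags entries that map back to themselves. The update rule and the parameter values (lam, mu) are illustrative assumptions, not the exact hardware model; only the rounding step plays the role of equation (2).

```python
# Hypothetical sketch: potentiation LUT on discrete weights. An entry that
# maps to itself is a "dead" weight that cannot evolve under potentiation.
# Update rule and parameters (lam, mu) are illustrative assumptions.

def build_potentiation_lut(r_bits, n, lam=0.005, mu=1.0):
    levels = 2 ** r_bits
    lut = []
    for i in range(levels):
        w = i / (levels - 1)                    # discrete weight in [0, 1]
        w_new = w + n * lam * (1.0 - w) ** mu   # n accumulated spike pairs
        w_new = min(max(w_new, 0.0), 1.0)
        lut.append(round(w_new * (levels - 1)))  # rounding, cf. eq. (2)
    return lut

lut = build_potentiation_lut(r_bits=4, n=15)
# Intermediate entries projecting to themselves ("dead" weights):
dead = [i for i in range(1, len(lut) - 1) if lut[i] == i]
```

With these toy parameters, a 4-bit resolution and a small accumulated update (n = 15) already produce several self-projecting intermediate entries, mirroring the lower limit of the dynamic range discussed above.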
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>The dynamic range for configurations of STDP on discrete weights</bold>. <bold>(A)</bold> Equilibrium weight distributions for a 4-bit weight resolution: Intermediate discrete weights partly project to themselves (<italic>n</italic>&#x02009;&#x0003D;&#x02009;15). The equilibrium weight distribution widens with an increasing number of SSPs (<italic>n</italic>&#x02009;&#x0003D;&#x02009;40 and 70). For a large number of SSPs (<italic>n</italic>&#x02009;&#x0003D;&#x02009;225 and 500) the intermediate discrete weights do not receive projections from others. <bold>(B)</bold> Percentage of dead discrete weights <italic>d</italic>. The limits of the dynamic range (<italic>d</italic>&#x02009;&#x0003D;&#x02009;0%) are highlighted in red. The limit toward low numbers of SSPs (<italic>n</italic>&#x02009;&#x0003D;&#x02009;15 in case of <italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits) is caused by rounding effects (equation <xref ref-type="disp-formula" rid="E2">2</xref>), whereas the upper limit (<italic>n</italic>&#x02009;&#x0003D;&#x02009;206 in case of <italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits) is caused by too large weight updates. Green dashed lines indicate cross sections shown in <bold>(C,D)</bold>. <bold>(C)</bold> Cross section of <bold>(B)</bold> at a 4-bit weight resolution. The histograms shown in <bold>(A)</bold> are depicted with arrows. <bold>(D)</bold> Cross section of <bold>(B)</bold> at <italic>n</italic>&#x02009;&#x0003D;&#x02009;1.</p></caption>
<graphic xlink:href="fnins-06-00090-g003.tif"/>
</fig>
<p>On the other hand, if <italic>n</italic> exceeds the upper limit of the dynamic range, intermediate discrete weights may not be reached from other weights. Rare, large weight updates favor projections to discrete weights near the borders of the weight range <italic>I</italic> and lead to a bimodal equilibrium weight distribution, as shown in Table <xref ref-type="table" rid="T4">4</xref> and Figure <xref ref-type="fig" rid="F3">3</xref>A (<italic>n</italic>&#x02009;&#x0003D;&#x02009;500).</p>
<p>The lower limit of the dynamic range decreases with increasing resolution (Figure <xref ref-type="fig" rid="F3">3</xref>B). Compared to a 4-bit weight resolution, an 8-bit weight resolution is sufficiently high to resolve weight updates down to a single SSP (Figure <xref ref-type="fig" rid="F3">3</xref>D). This allows frequent weight updates, comparable to weight evolutions in continuous weight space. The upper limit of the dynamic range does not change with increasing weight resolution, but is critical for limited update controller frequencies, as investigated in Section <xref ref-type="sec" rid="s8">3.3</xref>.</p>
</sec>
<sec id="s7">
<label>3.2</label> <title>Equilibrium weight distributions</title>
<p>Studying learning in neural networks may span long periods of time. We therefore analyze equilibrium weight distributions, i.e., the long-term limit under Poisson-distributed pre- and postsynaptic spiking. These distributions are obtained by applying random walks on LUTs with uniformly distributed occurrences of potentiations and depressions (Section <xref ref-type="sec" rid="s15">2.5</xref>). Figure <xref ref-type="fig" rid="F4">4</xref>A shows, among other effects, boundary effects caused by LUTs configured within the upper part of the dynamic range. For <italic>n</italic>&#x02009;&#x0003D;&#x02009;144, for example, the relative frequencies of both boundary values are increased due to large weight steps (red and cyan distributions). Frequent weights, in turn, increase the probability of the weights to which they project (according to the LUT). Due to the random nature of the stimulus, however, this effect decreases with the number of look-ups, causing intermediate weight values to occur with higher probability.</p>
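The random-walk procedure can be sketched as follows. The LUTs below are toy single-step tables (not derived from the STDP model), and the equal potentiation/depression probabilities correspond to the uniform-occurrence assumption above; everything else is an illustrative assumption.

```python
# Hypothetical sketch: equilibrium weight distribution estimated by a
# random walk on potentiation/depression LUTs with equal probabilities.
# The LUTs here are toy single-step tables, not the STDP-derived ones.
import random

def equilibrium_distribution(lut_pot, lut_dep, iterations=100_000, seed=42):
    rng = random.Random(seed)
    counts = [0] * len(lut_pot)
    i = len(lut_pot) // 2               # start at an intermediate weight
    for _ in range(iterations):
        i = lut_pot[i] if rng.random() < 0.5 else lut_dep[i]
        counts[i] += 1
    return [c / iterations for c in counts]

levels = 16                              # 4-bit weight resolution
lut_pot = [min(i + 1, levels - 1) for i in range(levels)]
lut_dep = [max(i - 1, 0) for i in range(levels)]
dist = equilibrium_distribution(lut_pot, lut_dep)
```

The number of iterations is chosen large so that the relative frequencies converge; the study uses 10<sup>5</sup> look-ups per distribution for the same reason.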
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Equilibrium weight distributions (long-term weight evolutions) for configurations of STDP on discrete weights</bold>. <bold>(A)</bold> Equilibrium weight distributions for weight resolutions of <italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits (red) and <italic>r</italic>&#x02009;&#x0003D;&#x02009;16 bits (cyan). Both distributions are displayed in 4-bit sampling, for better comparison. Black curves depict the analytical approach. We have chosen <italic>j</italic>&#x02009;&#x0003D;&#x02009;10<sup>5</sup> iterations for generating each discrete weight distribution to ensure convergence to the equilibrium state. <bold>(B)</bold> Mean squared error <italic>MSE</italic><sub>eq</sub> between the equilibrium weight distributions for weight resolutions <italic>r</italic> and the reference weight resolution of 16 bits versus the number <italic>n</italic> of SSPs. <bold>(C,D)</bold> Cross sections of <bold>(B)</bold> at <italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits and <italic>n</italic>&#x02009;&#x0003D;&#x02009;36, respectively.</p></caption>
<graphic xlink:href="fnins-06-00090-g004.tif"/>
</fig>
<p>The impact of weight discretization on long-term weight dynamics is quantified by comparing equilibrium weight distributions between low and high weight resolutions. Weight discretization involves distortions caused by rounding effects for small <italic>n</italic> (equation <xref ref-type="disp-formula" rid="E2">2</xref>; Figure <xref ref-type="fig" rid="F3">3</xref>) and boundary effects for high <italic>n</italic> (Figures <xref ref-type="fig" rid="F4">4</xref>A,C). High weight resolutions can compensate for rounding effects, but not for boundary effects (Figure <xref ref-type="fig" rid="F4">4</xref>B).</p>
<p>This analysis of long-term weight dynamics (Figure <xref ref-type="fig" rid="F4">4</xref>C) refines the choice of <italic>n</italic> roughly estimated from the dynamic range (Figure <xref ref-type="fig" rid="F3">3</xref>C).</p>
</sec>
<sec id="s8">
<label>3.3</label> <title>Spiking network benchmarks</title>
<p>We extend the above studies on temporal limits by analyzing short-term dynamics with unequal probabilities for potentiation <italic>p</italic><sub>p</sub> and depression <italic>p</italic><sub>d</sub>. A hardware-inspired synapse model is used in computer simulations of spiking neural networks; an example of its typical dynamics is shown in Figure <xref ref-type="fig" rid="F5">5</xref>. As the pre- and postsynaptic spike trains are correlated in a causal fashion, the causal spike pair accumulation increases faster than the anti-causal one (Figure <xref ref-type="fig" rid="F5">5</xref>A). It crosses the threshold twice, evoking two potentiation steps (at around 7 and 13&#x02009;s), before the anti-causal spike pair accumulation evokes a depression at around 14&#x02009;s (Figures <xref ref-type="fig" rid="F5">5</xref>A,B). The first two potentiations project to the subsequent entry of the LUT, whereas the following depression rounds to the next-but-one discrete weight (skipping one LUT entry) due to the asymmetry measure &#x003B1; in the STDP model by G&#x000FC;tig et al. (<xref ref-type="bibr" rid="B29">2003</xref>).</p>
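The accumulate-and-threshold mechanism described above can be sketched in a few lines. Causal and anti-causal spike pairs are accumulated separately; whenever an accumulation crosses the threshold, it is reset and one discrete weight update is read from the corresponding LUT. The threshold value and the single-step LUTs below are illustrative assumptions, not the hardware configuration.

```python
# Hedged sketch of the accumulate-and-threshold scheme. Each spike pair
# increments its accumulation; a crossing of a_th resets that accumulation
# and triggers exactly one LUT look-up. Toy LUTs and threshold.

def process_spike_pairs(pairs, a_th, lut_pot, lut_dep, w_index):
    """pairs: sequence of 'causal' / 'anticausal' spike-pair events."""
    a_causal = a_anticausal = 0.0
    for kind in pairs:
        if kind == 'causal':
            a_causal += 1.0
            if a_causal >= a_th:
                w_index = lut_pot[w_index]   # one potentiation step
                a_causal = 0.0
        else:
            a_anticausal += 1.0
            if a_anticausal >= a_th:
                w_index = lut_dep[w_index]   # one depression step
                a_anticausal = 0.0
    return w_index

levels = 16                                  # 4-bit weight resolution
lut_pot = [min(i + 1, levels - 1) for i in range(levels)]
lut_dep = [max(i - 1, 0) for i in range(levels)]
# Five causal pairs with a_th = 2 yield two potentiation steps:
w = process_spike_pairs(['causal'] * 5, 2.0, lut_pot, lut_dep, 0)
```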
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>Software implementation of STDP on discrete weights in spiking neural networks</bold>. <bold>(A)</bold> Temporal evolution of spike pair accumulations <italic>a</italic> (dimensionless) for causal (black) and anti-causal (gray) spike-timing-dependences. If <italic>a</italic> crosses the threshold <italic>a</italic><sub>th</sub> (cyan), the weight is updated and <italic>a</italic> is reset to zero. Pre- and postsynaptic spike trains are generated by a MIP with <italic>c</italic>&#x02009;&#x0003D;&#x02009;0.5 and <italic>r</italic>&#x02009;&#x0003D;&#x02009;10&#x02009;Hz. <bold>(B)</bold> Corresponding weight evolution (solid red) for a 4-bit weight resolution and a LUT configured with <italic>n</italic>&#x02009;&#x0003D;&#x02009;30. The weight evolution of the reference synapse model with continuous weights, but a reduced symmetric nearest-neighbor spike pairing scheme is depicted in solid blue. It differs from that of a synapse model with continuous weights and an all-to-all spike pairing scheme (dashed green).</p></caption>
<graphic xlink:href="fnins-06-00090-g005.tif"/>
</fig>
<sec id="s9">
<label>3.3.1</label> <title>Single synapse benchmark</title>
<p>This benchmark compares single weight traces between hardware-inspired and reference synapses (Section <xref ref-type="sec" rid="s17">2.6.2</xref>). A synapse receives correlated pre- and postsynaptic input (Figure <xref ref-type="fig" rid="F6">6</xref>A), resulting in weight dynamics as shown in Figure <xref ref-type="fig" rid="F6">6</xref>B. The standard deviation for discrete weights (hardware-inspired synapse model) is larger than that for continuous weights (reference model). This difference is caused by rare, large weight jumps (induced by high <italic>n</italic>), which are also responsible for the broadening of equilibrium weight distributions (Figure <xref ref-type="fig" rid="F4">4</xref>A). Consequently, the standard deviation increases further with decreasing weight resolution (not shown here).</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p><bold>Weight evolution of a single synapse with discrete weights</bold>. <bold>(A)</bold> Network layout for single synapse analyses. An STDP synapse (arrow) connects two neurons receiving correlated spike trains with correlation coefficient <italic>c</italic> (correlated spikes in red bars). <bold>(B)</bold> Example weight traces for the hardware-inspired (<italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits, <italic>n</italic>&#x02009;&#x0003D;&#x02009;36 in red) and reference synapse model (blue). Means and standard deviations over 30 realizations are plotted as bold lines and shaded areas, respectively. The single weight traces for one arbitrarily chosen random seed are depicted as thin lines. We applied a correlation coefficient <italic>c</italic>&#x02009;&#x0003D;&#x02009;0.2, an initial weight <italic>w</italic><sub>0</sub>&#x02009;&#x0003D;&#x02009;0.5 and firing rates of 10&#x02009;Hz. The results persist qualitatively for differing values staying within biologically relevant ranges (not shown here). <bold>(C)</bold> Mean squared error <italic>MSE<sub>w</sub></italic> between the mean weight traces as shown in <bold>(A)</bold> over the weight resolution <italic>r</italic> and the number <italic>n</italic> of SSPs. The parameters <italic>c</italic>, <italic>w</italic><sub>0</sub>, and the firing rates are chosen as in <bold>(B)</bold>. Other values for <italic>c</italic> and <italic>w</italic><sub>0</sub> do not change the results qualitatively. <bold>(D,E)</bold> Cross sections of <bold>(C)</bold> at <italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits and <italic>n</italic>&#x02009;&#x0003D;&#x02009;36 in green. Red curves are adapted from Figures <xref ref-type="fig" rid="F4">4</xref>C,D.</p></caption>
<graphic xlink:href="fnins-06-00090-g006.tif"/>
</fig>
<p>The dependence of the deviation between discrete and continuous weight traces on the weight resolution <italic>r</italic> and the number <italic>n</italic> of SSPs is qualitatively comparable to that of comparisons between equilibrium weight distributions (Figures <xref ref-type="fig" rid="F6">6</xref>D,E). This similarity, especially in dependence on <italic>n</italic> (Figure <xref ref-type="fig" rid="F6">6</xref>D), emphasizes the crucial impact of LUT configurations on both short- and long-term weight dynamics.</p>
<p>To further illustrate the underlying rounding effects in configuring LUTs, consider the asymmetry value &#x003B1; in G&#x000FC;tig&#x02019;s STDP model. In an extreme case, both the potentiation and the depression step are rounded down (compare the weight step sizes for potentiation and depression in Figure <xref ref-type="fig" rid="F5">5</xref>B). This would drastically amplify the originally slight asymmetry and therefore enlarge the distortion caused by weight discretization.</p>
<p>The weight update frequency <italic>v</italic><sub>w</sub> is determined by the weight resolution <italic>r</italic> and the number <italic>n</italic> of SSPs. High frequencies are beneficial for keeping up with weight evolutions in continuous weight space. They can be realized by small numbers of SSPs, which lower the threshold <italic>a</italic><sub>th</sub> (equation <xref ref-type="disp-formula" rid="E3">3</xref>). On the other hand, rounding effects in the LUT configuration deteriorate for too small numbers of SSPs (Figure <xref ref-type="fig" rid="F6">6</xref>D). For a weight resolution of <italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits (<italic>r</italic>&#x02009;&#x0003D;&#x02009;8 bits), choosing <italic>n</italic>&#x02009;&#x0003D;&#x02009;36 (<italic>n</italic>&#x02009;&#x0003D;&#x02009;12) for the LUT configuration represents a good balance between a high weight update frequency and proper short- and long-term weight dynamics (Figures <xref ref-type="fig" rid="F3">3</xref>B, <xref ref-type="fig" rid="F4">4</xref>B and <xref ref-type="fig" rid="F6">6</xref>C). Note that <italic>n</italic> can be chosen smaller for higher weight resolutions, because the distorting impact of rounding effects decreases.</p>
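The trade-off above can be summarized in a back-of-the-envelope sketch: assuming, as suggested by the text around equation (3), that the threshold a_th scales linearly with n, the expected update frequency scales inversely with n. The scaling assumption and all numbers are illustrative, not the hardware formula.

```python
# Back-of-the-envelope sketch (assumption): a_th grows linearly with the
# number n of SSPs, so for a fixed spike-pair rate the weight update
# frequency v_w scales as 1/n. All parameter values are illustrative.

def update_frequency(pair_rate, n, a_per_pair=1.0):
    a_th = n * a_per_pair                  # assumed linear scaling with n
    return pair_rate * a_per_pair / a_th   # expected crossings per second

f_n36 = update_frequency(pair_rate=10.0, n=36)   # e.g. r = 4 bits, n = 36
f_n12 = update_frequency(pair_rate=10.0, n=12)   # e.g. r = 8 bits, n = 12
```

Under this assumption, the n = 12 configuration updates three times as often as the n = 36 one, which is why higher resolutions can afford smaller n and hence more frequent updates.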
</sec>
<sec id="s10">
<label>3.3.2</label> <title>Network benchmark: synchrony detection</title>
<p>Not only the exact weight traces of single synapses (Section <xref ref-type="sec" rid="s8">3.3</xref>), but also those of synapse populations are crucial for fulfilling tasks such as the detection of synchronous firing within neural networks. Synchrony detection is a crucial feature of various neural networks with plasticity, as reported, e.g., by Senn et al. (<xref ref-type="bibr" rid="B64">1998</xref>), Kuba et al. (<xref ref-type="bibr" rid="B37">2002</xref>), Davison et al. (<xref ref-type="bibr" rid="B16">2009</xref>), and El Boustani et al. (<xref ref-type="bibr" rid="B21">2012</xref>). Here, it is introduced by means of an elementary benchmark neural network (Figure <xref ref-type="fig" rid="F7">7</xref>A; Section <xref ref-type="sec" rid="s5">3</xref>), using either the hardware-inspired or the reference synapse model.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><bold>Learning with discrete weights in a neural network benchmark for synchrony detection</bold>. <bold>(A)</bold> Layout of the network benchmark. Two populations of presynaptic neurons are connected to a postsynaptic neuron. On the right, example spike trains of the presynaptic neurons are shown. Red spikes indicate correlated firing due to shared spikes. <bold>(B)</bold> PSTH for static synapses and STDP reference synapses. The light gray histogram shows the difference between a simulation with STDP reference synapses (black) and static synapses (dark gray). <bold>(C)</bold> The mean weight traces (thick lines) and their standard deviations (shaded areas) for both populations of afferent synapses using the reference synapse model. Thin lines represent single synapses randomly chosen for each population. <bold>(D)</bold> As in <bold>(B)</bold>, but with the hardware-inspired synapse model (<italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits and <italic>n</italic>&#x02009;&#x0003D;&#x02009;36). <bold>(E)</bold> The probability (<italic>p</italic>-value of the Mann&#x02013;Whitney U test) of having the same median of weights within both groups of synapses (with correlated and uncorrelated input) at <italic>t</italic>&#x02009;&#x0003D;&#x02009;2,000&#x02009;s versus the correlation coefficient <italic>c</italic>. The hardware-inspired synapse model is represented in red (<italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits and <italic>n</italic>&#x02009;&#x0003D;&#x02009;36), green (<italic>r</italic>&#x02009;&#x0003D;&#x02009;8 bits and <italic>n</italic>&#x02009;&#x0003D;&#x02009;36) and blue (<italic>r</italic>&#x02009;&#x0003D;&#x02009;8 bits and <italic>n</italic>&#x02009;&#x0003D;&#x02009;12). Black depicts the reference synapse model (<italic>r</italic>&#x02009;&#x0003D;&#x02009;64 bits). The background shading represents the significance levels: <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.05, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.01, and <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.001. <bold>(F)</bold> Dependence of the <italic>p</italic>-value on the update controller frequency <italic>v</italic><sub>c</sub> for <italic>c</italic>&#x02009;&#x0003D;&#x02009;0.025. Colors as in <bold>(E)</bold>. <bold>(G)</bold> Black and red traces as in <bold>(E)</bold>. Additionally, <italic>p</italic>-values for hardware-inspired synapses with common resets are plotted in yellow (<italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits and <italic>n</italic>&#x02009;&#x0003D;&#x02009;36) and magenta (<italic>r</italic>&#x02009;&#x0003D;&#x02009;8 bits and <italic>n</italic>&#x02009;&#x0003D;&#x02009;12). Compensations with ADCs are depicted in gray (<italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits and <italic>n</italic>&#x02009;&#x0003D;&#x02009;15&#x02013;45 in steps of 2) and cyan (<italic>r</italic>&#x02009;&#x0003D;&#x02009;8 bits and <italic>n</italic>&#x02009;&#x0003D;&#x02009;1&#x02013;46 in steps of 3).</p></caption>
<graphic xlink:href="fnins-06-00090-g007.tif"/>
</fig>
<p>Figure <xref ref-type="fig" rid="F7">7</xref>B shows the distribution of postsynaptic spike delays relative to the onset of synchronous presynaptic firing (Section <xref ref-type="sec" rid="s1">2</xref>). For the shown range of &#x00394;<italic>t</italic><sub>del</sub>, the postsynaptic neuron is more likely to fire if connected with static (dark gray trace) instead of STDP (black trace) synapses. The correlated population causes its afferent synapses to strengthen more than those from the uncorrelated population. This can be seen in Figure <xref ref-type="fig" rid="F7">7</xref>C, where <italic>w</italic> saturates at different values (<italic>t</italic>&#x02009;&#x02248;&#x02009;700&#x02009;s). The same effect can be observed for discretized weights in Figure <xref ref-type="fig" rid="F7">7</xref>D. For &#x00394;<italic>t</italic><sub>del</sub>&#x02009;&#x0003E;&#x02009;170&#x02009;ms the delay distribution for static synapses is larger than that for STDP synapses (not shown here), because such delayed postsynaptic spikes are barely influenced by their presynaptic counterparts. This is due to the small time constants of the postsynaptic neuron (see &#x003C4;<sub>m</sub>&#x02009;&#x0003D;&#x02009;<italic>C</italic><sub>m</sub>/<italic>g</italic><sub>L</sub> and &#x003C4;<sub>syn</sub> in Tables <xref ref-type="table" rid="T7">7</xref> and <xref ref-type="table" rid="T8">8</xref>) compared to &#x00394;<italic>t</italic><sub>del</sub>.</p>
<p>Figure <xref ref-type="fig" rid="F7">7</xref>E shows the <italic>p</italic>-values of the Mann&#x02013;Whitney U test applied to both groups of synaptic weights at <italic>t</italic>&#x02009;&#x0003D;&#x02009;2,000&#x02009;s for different configurations of weight resolution <italic>r</italic> and number <italic>n</italic> of SSPs. Generally, <italic>p</italic>-values (the probability of having the same median within both groups of weights) decrease with an increasing correlation coefficient. Although previously selected &#x0201C;healthy&#x0201D; LUT configurations are applied, weight discretization changes the correlation coefficient required to reach the significance level (gray shaded areas). Incrementing the weight resolution while retaining the number <italic>n</italic> of SSPs does not change the <italic>p</italic>-values significantly. Low weight resolutions cause larger spacings between discrete weights, which can further facilitate the distinction between both medians (for <italic>n</italic>&#x02009;&#x0003D;&#x02009;36 compare <italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits to <italic>r</italic>&#x02009;&#x0003D;&#x02009;8 bits in Figure <xref ref-type="fig" rid="F7">7</xref>E). However, reducing <italic>n</italic> for high weight resolutions shortens the accumulation period and consequently allows the synapses to capture fluctuations in <italic>a</italic> on smaller time scales. This improves the <italic>p</italic>-value, but is not viable for low weight resolutions, because such LUT configurations do not yield the desired weight dynamics (Figures <xref ref-type="fig" rid="F3">3</xref>, <xref ref-type="fig" rid="F4">4</xref> and <xref ref-type="fig" rid="F6">6</xref>).</p>
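The statistical comparison between the two groups of synaptic weights can be reproduced in outline. The weights below are invented toy values, and a real analysis would use scipy.stats.mannwhitneyu, which also returns the p-value; here only the U statistic is computed by hand to keep the sketch self-contained.

```python
# Hedged sketch of the Mann-Whitney U comparison between the weights of
# synapses with correlated vs. uncorrelated input. The weight values are
# invented toy numbers; in practice scipy.stats.mannwhitneyu would be used
# to obtain the p-value as well.

def mann_whitney_u(x, y):
    """U = number of pairs (xi, yj) with xi > yj; ties count as 1/2."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

weights_correlated = [0.80, 0.90, 0.85, 0.95]     # toy group, higher median
weights_uncorrelated = [0.30, 0.40, 0.35, 0.50]   # toy group, lower median
u = mann_whitney_u(weights_correlated, weights_uncorrelated)
# Complete separation of the groups gives the maximal U = len(x) * len(y)
```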
</sec>
<sec id="s11">
<label>3.3.3</label> <title>Network benchmark: further constraints</title>
<p>In addition to the discretization of synaptic weights analyzed so far, we consider further hardware constraints of the FACETS wafer-scale system (Section <xref ref-type="sec" rid="s2">2.6.3</xref>). This allows us to compare the effects of these constraints to those of weight discretization.</p>
<p>First, we take into account a limited update controller frequency <italic>v</italic><sub>c</sub>. Figure <xref ref-type="fig" rid="F7">7</xref>F shows that low frequencies (&#x0003C;1&#x02009;Hz) distort the weight dynamics drastically and deteriorate the distinction between correlated and uncorrelated inputs. Ideally, a weight update would be performed whenever a spike pair accumulation crosses the threshold (Figure <xref ref-type="fig" rid="F5">5</xref>A). However, these weight updates of frequency <italic>v</italic><sub>w</sub> are now limited to a time grid with frequency <italic>v</italic><sub>c</sub>. The longer the latency between a threshold crossing and the arrival of the weight update controller, the further the threshold is likely to be exceeded. Hence, weight updates are underestimated and delayed. Low weight resolutions are less affected, because a high ratio <italic>v</italic><sub>c</sub>/<italic>v</italic><sub>w</sub> reduces threshold overruns and hence distortions: a low resolution requires a high number of SSPs, which in turn increases the threshold <italic>a</italic><sub>th</sub> (equation <xref ref-type="disp-formula" rid="E3">3</xref>) and thus decreases the weight update frequency <italic>v</italic><sub>w</sub>.</p>
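The effect of the limited controller frequency can be sketched as follows. The accumulation keeps growing between controller visits; at each grid visit at most one LUT step is applied and the accumulation is reset, so any threshold overrun is discarded. Accumulation rate, threshold, and LUT are toy values, not the hardware parameters.

```python
# Hedged sketch: weight updates restricted to the time grid of an update
# controller with frequency v_c. At each visit at most one LUT step is
# applied and the accumulation is reset, discarding any overrun. Toy values.

def run_controller(pair_rate, a_th, t_end, v_c, lut, w_index):
    a, t = 0.0, 0.0
    dt = 1.0 / v_c                     # controller visits every 1/v_c seconds
    while t < t_end:
        t += dt
        a += pair_rate * dt            # spike pairs accumulated meanwhile
        if a >= a_th:                  # threshold possibly overrun...
            w_index = lut[w_index]     # ...but only one update is applied
            a = 0.0                    # reset discards the overrun
    return w_index

levels = 16
lut_pot = [min(i + 1, levels - 1) for i in range(levels)]
w_fast = run_controller(1.0, 2.0, 10.0, v_c=1.0, lut=lut_pot, w_index=0)
w_slow = run_controller(1.0, 2.0, 10.0, v_c=0.25, lut=lut_pot, w_index=0)
```

With these toy numbers, a fast controller (v_c = 1 Hz) realizes five updates over 10 s, whereas a slow one (v_c = 0.25 Hz) realizes only three: the weight updates are underestimated and delayed, as described above.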
<p>Second, hardware-inspired synapses limited to common reset lines cease to discriminate between correlated and uncorrelated input (Figure <xref ref-type="fig" rid="F7">7</xref>G, yellow and magenta traces). A threshold crossing by one spike pair accumulation resets the other (Figure <xref ref-type="fig" rid="F5">5</xref>) and suppresses its further weight updates, leading to an underestimation of weight updates for synapses with less correlated input.</p>
<p>To compensate for common resets, we suggest ADCs that allow spike pair accumulations to be compared against multiple thresholds. Nevertheless, ADCs compensate for common resets only at high weight resolutions (Figure <xref ref-type="fig" rid="F7">7</xref>G). Again, for low weight resolutions and hence high numbers of SSPs, fluctuations cannot be taken into account (Figure <xref ref-type="fig" rid="F7">7</xref>G, gray values). This is the case for a 4-bit weight resolution, whereas an 8-bit weight resolution is high enough to resolve small fluctuations down to single SSPs (Figure <xref ref-type="fig" rid="F7">7</xref>G, cyan values). Each threshold has its own LUT configured with a number of SSPs that matches the dynamic range (Figure <xref ref-type="fig" rid="F3">3</xref>). The upper limit of <italic>n</italic> is chosen according to the results of Section <xref ref-type="sec" rid="s7">3.2</xref>. The update controller frequency is chosen low enough (<italic>v</italic><sub>c</sub>&#x02009;&#x0003D;&#x02009;0.2&#x02009;Hz) to enable all thresholds to be hit.</p>
</sec>
</sec>
<sec id="s12">
<label>3.4</label> <title>Hardware variations</title>
<p>So far, we have neglected production imperfections in real hardware systems. However, fixed pattern noise induced by these imperfections is a crucial limitation at the transistor level and may distort the functionality of the analog synapse circuit, rendering higher weight resolutions unnecessary. The smaller and denser the transistors, the larger the discrepancies from their theoretical properties (Pelgrom et al., <xref ref-type="bibr" rid="B54">1989</xref>). Using the protocol illustrated in Figure <xref ref-type="fig" rid="F8">8</xref>A, we recorded STDP curves on the FACETS chip-based hardware system (Figures <xref ref-type="fig" rid="F8">8</xref>B,C; Section <xref ref-type="sec" rid="s3">2.7.1</xref>). Variations within (&#x003C3;<sub>a</sub>) and between (&#x003C3;<sub>t</sub>) individual synapses are shown as distributions in Figures <xref ref-type="fig" rid="F8">8</xref>D,E, both suggesting variations of around 20%. Both variations are incorporated into computer simulations of the network benchmark (Figure <xref ref-type="fig" rid="F7">7</xref>A; Section <xref ref-type="sec" rid="s4">2.7.2</xref>) to analyze their effects on synchrony detection. The <italic>p</italic>-value (as in Figures <xref ref-type="fig" rid="F7">7</xref>E&#x02013;G) rises with increasing asymmetry within synapses, but is hardly affected by variations between synapses (Figure <xref ref-type="fig" rid="F8">8</xref>F).</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p><bold>Measurement of hardware synapse variations and their effects on learning in the neural network benchmark</bold>. <bold>(A)</bold> Setup for recording STDP curves. At the top, spike trains of the pre- and postsynaptic neuron. Spike pairs with latency &#x00394;<italic>t</italic> are repeated with frequency 1/<italic>T</italic>. At the bottom, a spike pair accumulation that crosses the threshold <italic>a</italic><sub>th</sub> (arrow). The inverse of the number of SSPs until crossing <italic>a</italic><sub>th</sub> (here <italic>n</italic>&#x02009;&#x0003D;&#x02009;3) is plotted in <bold>(B)</bold>. <bold>(B)</bold> STDP curves of 252 hardware synapses within one synapse column (gray) and their mean with error (blue). A speed-up factor of 10<sup>5</sup> is assumed. These curves correspond to <italic>x</italic>(&#x00394;<italic>t</italic>) in equation (<xref ref-type="disp-formula" rid="E1">1</xref>), whereas <italic>F</italic>(<italic>w</italic>) is realized by the LUT. <bold>(C)</bold> One arbitrarily chosen STDP curve (over 5 trials) showing the areas for &#x00394;<italic>t</italic>&#x02009;&#x0003C;&#x02009;0 (<italic>A</italic><sub>a</sub> in red) and &#x00394;<italic>t</italic>&#x02009;&#x0003E;&#x02009;0 (<italic>A</italic><sub>c</sub> in blue). <bold>(D)</bold> Asymmetry between <italic>A</italic><sub>a</sub> and <italic>A</italic><sub>c</sub> within synapses (&#x003C3;<sub>a</sub>&#x02009;&#x0003D;&#x02009;21%). <bold>(E)</bold> Variation of the absolute areas between synapses (&#x003C3;<sub>t</sub>&#x02009;&#x0003D;&#x02009;17%). <bold>(F)</bold> The <italic>p</italic>-value (as in Figures <xref ref-type="fig" rid="F7">7</xref>E&#x02013;G) as a function of &#x003C3;<sub>a</sub> and &#x003C3;<sub>t</sub>. The values from <bold>(D,E)</bold> are marked with an asterisk.</p></caption>
<graphic xlink:href="fnins-06-00090-g008.tif"/>
</fig>
</sec>
</sec>
<sec sec-type="discussion">
<label>4</label> <title>Discussion</title>
<sec>
<label>4.1</label> <title>Configuration of STDP on discrete weights</title>
<p>In this study, we demonstrate generic strategies to configure STDP on discrete weights as, e.g., implemented in neuromorphic hardware systems. The resulting weight dynamics depend critically on the frequency of weight updates, which has to be adjusted to the available weight resolution. Choosing a frequency within the dynamic range (Figure <xref ref-type="fig" rid="F3">3</xref>) is a prerequisite for exploiting the discrete weight space and ensures proper weight dynamics. Analyses of long-term dynamics using Poisson-driven equilibrium weight distributions help to refine this choice (Figure <xref ref-type="fig" rid="F4">4</xref>). The obtained configuration space is similar to that of short-term dynamics, i.e., the evolution of single synaptic weights (Figure <xref ref-type="fig" rid="F6">6</xref>). This similarity confirms the crucial impact of the LUT configuration, and hence of rounding effects, on weight dynamics. Based on these results, we have chosen two example LUT configurations (<italic>r</italic>&#x02009;&#x0003D;&#x02009;4 bits, <italic>n</italic>&#x02009;&#x0003D;&#x02009;36 and <italic>r</italic>&#x02009;&#x0003D;&#x02009;8 bits, <italic>n</italic>&#x02009;&#x0003D;&#x02009;12) for further analysis, both realizable on the FACETS wafer-scale hardware system. High weight resolutions allow for higher frequencies of weight updates, approximating the ideal model, but still occasionally require several spike pairs to evoke a weight update. Correspondingly, in the associative-pairing literature, a minimal number of associations is required to detect functional changes (expressed by the spiking or postsynaptic potential response), varying from study to study between a few and several tens (Cassenaer and Laurent, <xref ref-type="bibr" rid="B12">2007</xref>, <xref ref-type="bibr" rid="B13">2012</xref>).</p>
<p>Discretization not only affects the accuracy of weights, but also broadens their equilibrium weight distributions (Figure <xref ref-type="fig" rid="F4">4</xref>), which have been shown to be narrow in large-scale neural networks (Morrison et al., <xref ref-type="bibr" rid="B49">2007</xref>). This broadening can distort the functionality of neural networks; e.g., it deteriorates the distinction between the two groups of weights (of synapses originating from the correlated or uncorrelated population) within the network benchmark (compare Figures <xref ref-type="fig" rid="F7">7</xref>C,D). On the other hand, weight discretization can also be advantageous for synchrony detection if, e.g., groups of weights separate due to large step sizes between neighboring discrete weights (compare red and green in Figure <xref ref-type="fig" rid="F7">7</xref>E).</p>
<p>In summary, these analyses of STDP on discrete weights are necessary for obtaining appropriate configurations for a variety of STDP models and weight resolutions.</p>
</sec>
<sec>
<label>4.2</label> <title>4-bit weight resolution</title>
<p>Simulations of the network benchmark show that a 4-bit weight resolution is sufficient for a significant detection of synchronous presynaptic firing (Figure <xref ref-type="fig" rid="F7">7</xref>). Groups of synapses receiving correlated input strengthen and in turn increase the probability that synchronous presynaptic activity elicits postsynaptic spikes, as compared to static synapses (Figure <xref ref-type="fig" rid="F7">7</xref>B). Thus, the weight distribution within the network reflects synchrony within sub-populations of presynaptic neurons. Increasing the weight resolution causes both weight distributions, for the correlated and the uncorrelated input, to narrow and separate from each other. Consequently, an 8-bit resolution is sufficient to reproduce the <italic>p</italic>-values of continuous weights with floating point precision (corresponding to discrete weights with <italic>r</italic>&#x02009;&#x0003D;&#x02009;64 bits, Figure <xref ref-type="fig" rid="F7">7</xref>E). This resolution requires the combination of two hardware synapses and is under development (Schemmel et al., <xref ref-type="bibr" rid="B60">2010</xref>). On the other hand, increasing the weight resolution while retaining the frequency of weight updates (number of SSPs) results in weight distributions of comparable width and consequently does not improve the <italic>p</italic>-values significantly (Figure <xref ref-type="fig" rid="F7">7</xref>E).</p>
<p>Other neuromorphic hardware systems implement bistable synapses corresponding to a 1-bit weight resolution (Badoni et al., <xref ref-type="bibr" rid="B3">2006</xref>; Indiveri et al., <xref ref-type="bibr" rid="B31">2010</xref>). Bistable synapse models have been shown to be sufficient for memory formation (Amit and Fusi, <xref ref-type="bibr" rid="B1">1994</xref>; Fusi et al., <xref ref-type="bibr" rid="B25">2005</xref>; Brader et al., <xref ref-type="bibr" rid="B7">2007</xref>; Clopath et al., <xref ref-type="bibr" rid="B14">2008</xref>). However, these models not only employ spike timings (Levy and Steward, <xref ref-type="bibr" rid="B41">1983</xref>; Markram, <xref ref-type="bibr" rid="B44">2006</xref>; Mu and Poo, <xref ref-type="bibr" rid="B52">2006</xref>; Cassenaer and Laurent, <xref ref-type="bibr" rid="B12">2007</xref>; Bill et al., <xref ref-type="bibr" rid="B6">2010</xref>), but also read out the postsynaptic membrane potential (Sj&#x000F6;str&#x000F6;m et al., <xref ref-type="bibr" rid="B66">2001</xref>; Trachtenberg et al., <xref ref-type="bibr" rid="B69">2002</xref>), which requires additional hardware resources. So far, there is no consensus on a general synapse model, and neuromorphic hardware systems are mostly limited to subclasses of these models.</p>
<p>Studies on weight discretization are not limited to the FACETS hardware systems, but are applicable to other backends for neural network simulations. For example, our results can be applied to the fully digital neuromorphic hardware system described by Jin et al. (<xref ref-type="bibr" rid="B34">2010b</xref>), who also report STDP with a reduced weight resolution. Furthermore, weight discretization may also be an approach to reducing the memory consumption of &#x0201C;classical&#x0201D; neural simulators.</p>
</sec>
<sec>
<label>4.3</label> <title>Further hardware constraints</title>
<p>In addition to a limited weight resolution, we have studied further constraints of the current FACETS wafer-scale hardware system with the network benchmark.</p>
<p>A limited update controller frequency, implying a minimum time interval between subsequent weight updates, does not affect the <italic>p</italic>-values down to a critical frequency <italic>v</italic><sub>c</sub>&#x02009;&#x02248;&#x02009;1&#x02009;Hz (Figure <xref ref-type="fig" rid="F7">7</xref>F). The update controller frequency decreases linearly with the number of hardware synapses enabled for STDP. Assuming a hardware acceleration factor of 10<sup>3</sup>, all synapses can be enabled for STDP without falling below this critical frequency. However, the number of STDP synapses should be decreased if a higher update controller frequency is required, e.g., for a configuration with an 8-bit weight resolution and a small number of SSPs.</p>
<p>Common resets of spike pair accumulations save synapse chip resources by requiring one instead of two reset lines, but they suppress synaptic depression and bias the weight evolution toward potentiation. This is due to the feed-forward network architecture, in which causal relationships between pre- and postsynaptic spikes are more likely than anti-causal ones. Long accumulation periods (large numbers of SSPs) lower the probability of synaptic depression. Hence, all weights tend to saturate at the maximum weight value, impeding a distinction between the two populations of synapses within the network benchmark (Figure <xref ref-type="fig" rid="F7">7</xref>G). The probability of synaptic depression can be increased by high weight update frequencies (small numbers of SSPs), which shorten the accumulation periods (equation <xref ref-type="disp-formula" rid="E3">3</xref>) and thereby approach the behavior of independent resets. However, high weight update frequencies require high weight resolutions and thus high update controller frequencies, which decrease the number of synapses available for STDP.</p>
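The potentiation bias introduced by a common reset line can be illustrated with a toy accumulator model. The sketch below is our own simplification, not a circuit-level description: each synapse counts causal and anti-causal spike pairs between visits of the weight update controller, and a common reset discards the pending anti-causal count whenever a potentiation update fires.

```python
import random

# Toy model (an assumption for illustration, not circuit-level): each
# synapse accumulates causal and anti-causal spike pairs between visits of
# the weight update controller. With a single common reset line, BOTH
# accumulators are cleared after a weight update; pending anti-causal pairs
# are discarded, which biases the weight evolution toward potentiation
# whenever causal pairs dominate, as in the feed-forward benchmark.

def run(n_visits, p_causal, a_th, common_reset, seed=42):
    rng = random.Random(seed)
    acc_c = acc_a = 0          # causal / anti-causal accumulators
    pot = dep = 0              # counted potentiation / depression updates
    for _ in range(n_visits):
        # a handful of spike pairs arrives between controller visits
        for _ in range(5):
            if rng.random() < p_causal:
                acc_c += 1
            else:
                acc_a += 1
        if acc_c >= a_th:      # accumulation threshold crossed: potentiate
            pot += 1
            acc_c = 0
            if common_reset:
                acc_a = 0      # pending anti-causal pairs are lost
        elif acc_a >= a_th:    # otherwise check for depression
            dep += 1
            acc_a = 0
            if common_reset:
                acc_c = 0
    return pot, dep

pot_common, dep_common = run(1000, p_causal=0.6, a_th=10, common_reset=True)
pot_indep, dep_indep = run(1000, p_causal=0.6, a_th=10, common_reset=False)
# With common resets, depression updates become much rarer than with
# independent resets, although anti-causal pairs occur at the same rate.
```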
<p>As a compensation for common resets, we suggest expanding the single spike pair accumulation threshold to multiple thresholds implemented as ADCs. In comparison to synapses with common resets, ADCs improve <italic>p</italic>-values significantly only for an 8-bit weight resolution (Figure <xref ref-type="fig" rid="F7">7</xref>G, compare cyan to magenta values). However, the combination of two 4-bit hardware synapses makes it possible to mimic independent resets and hence yields <italic>p</italic>-values comparable to those of 8-bit synapses using ADCs (Figure <xref ref-type="fig" rid="F7">7</xref>G, compare red to cyan values). Mimicking independent resets is under development for the FACETS wafer-scale hardware system. Each of the two combined synapses will be configured to accumulate either only causal or only anti-causal spike pairs, while both synapses are updated in a common process. This requires only minor hardware design changes within the weight update controller and should be preferred to the more expensive changes needed to realize ADCs. The implementation of real second reset lines is not possible without major hardware design changes, but is considered for future chip revisions.</p>
<p>Benchmark simulations incorporating the measured variations within and between synapse circuits due to production imperfections result in <italic>p</italic>-values worse (higher) than those for a 4-bit weight resolution (compare the asterisk in Figure <xref ref-type="fig" rid="F8">8</xref>F to the red value for <italic>c</italic>&#x02009;&#x0003D;&#x02009;0.025 in Figure <xref ref-type="fig" rid="F7">7</xref>E). Consequently, a 4-bit weight resolution is sufficient for the current implementation of the measurement and accumulation circuits. We suppose that the effects of production imperfections and weight discretization, analyzed in isolation here, add up and mutually limit the best achievable <italic>p</italic>-value. Analyses of combined hardware restrictions would quantify how these effects add up and are left for further studies. However, hardware variations can also be regarded as a transistor-level limitation that renders higher weight resolutions unnecessary.</p>
<p>Figure <xref ref-type="fig" rid="F9">9</xref> summarizes the results on how to configure STDP on discrete weights. For a given weight resolution <italic>r</italic>, the number <italic>n</italic> of SSPs has to be chosen as low as possible to allow for high weight update frequencies <italic>v</italic><sub>w</sub>. However, <italic>n</italic> must be high enough to ensure STDP dynamics comparable to those of continuous weights (lightest gray shaded area) and to stay within the configuration space realizable by the FACETS wafer-scale hardware system. The hardware system limits the update controller frequency <italic>v</italic><sub>c</sub> and hence distorts STDP, especially for low <italic>n</italic>.</p>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p><bold>The configuration space of STDP on discrete weights spanned by the weight resolution <italic>r</italic> and the number <italic>n</italic> of SSPs that is inversely proportional to the weight update frequency <italic>v</italic><sub>w</sub></bold>. The darkest gray area depicts the configurations with dead discrete weights (Figure <xref ref-type="fig" rid="F3">3</xref>). The lower limits of configurations for proper equilibrium weight distributions (Figure <xref ref-type="fig" rid="F4">4</xref>) and single synapse dynamics (Figure <xref ref-type="fig" rid="F6">6</xref>) are shown with brighter shades. The dashed rectangle marks configurations realizable by the FACETS wafer-scale hardware system (assuming an acceleration factor of 10<sup>3</sup>, all synapses enabled for STDP and SSPs applied with 10&#x02009;Hz). The working points for a 4-bit (<italic>n</italic>&#x02009;&#x0003D;&#x02009;36) and 8-bit (<italic>n</italic>&#x02009;&#x0003D;&#x02009;12) weight resolution are highlighted as a triangle and circle, respectively.</p></caption>
<graphic xlink:href="fnins-06-00090-g009.tif"/>
</fig>
</sec>
<sec>
<label>4.4</label> <title>Outlook</title>
<p>Currently, STDP in neuromorphic hardware systems is enabled for only 10 to a few 10,000 synapses in real-time (Arthur and Boahen, <xref ref-type="bibr" rid="B2">2006</xref>; Zou et al., <xref ref-type="bibr" rid="B78">2006</xref>; Daouzli et al., <xref ref-type="bibr" rid="B15">2008</xref>; Ramakrishnan et al., <xref ref-type="bibr" rid="B57">2011</xref>). Large-scale systems either do not implement long-term plasticity (Merolla and Boahen, <xref ref-type="bibr" rid="B47">2006</xref>; Vogels et al., <xref ref-type="bibr" rid="B73">2011</xref>) or operate in real-time only (Jin et al., <xref ref-type="bibr" rid="B33">2010a</xref>). Enabling a large-scale (over 4&#x000B7;10<sup>7</sup> synapses) and highly accelerated neuromorphic hardware system (the FACETS wafer-scale hardware system) with configurable STDP requires trade-offs between the number and size of synapses, which raises constraints on their implementation (Schemmel et al., <xref ref-type="bibr" rid="B63">2006</xref>, <xref ref-type="bibr" rid="B60">2010</xref>). Table <xref ref-type="table" rid="T5">5</xref> summarizes these trade-offs and gives an impression of the hardware costs and the effects on STDP.</p>
<table-wrap position="float" id="T5">
<label>Table 5</label>
<caption><p><bold>Possible design modifications of hardware synapses, their reduction in terms of required chip resources and their effects on STDP</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left">Modification</th>
<th align="left">Resource reduction</th>
<th align="left">Effect on STDP</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Global weight update controller</td>
<td align="left">&#x0002B;&#x0002B;&#x0002B;</td>
<td align="left">Latency between synapse processings; spike pair accumulations necessary</td>
</tr>
<tr>
<td align="left">Analog measurement of spike-timing-dependence</td>
<td align="left">&#x0002B;&#x0002B;</td>
<td align="left">Analog measurements are affected by production imperfections</td>
</tr>
<tr>
<td align="left">Reduced spike pairing scheme</td>
<td align="left">&#x0002B;&#x0002B;</td>
<td align="left">n.a.</td>
</tr>
<tr>
<td align="left">Decreased weight resolution</td>
<td align="left">&#x0002B;&#x0002B;</td>
<td align="left">Loss in synapse dynamics and competition; large weight steps require spike pair accumulations</td>
</tr>
<tr>
<td align="left">Operation frequency <italic>v</italic><sub>c</sub> of the weight update controller (overall frequency could be increased by implementing multiple controllers)</td>
<td align="left">&#x0002B;&#x0002B;</td>
<td align="left">Threshold over-shootings distorts synchrony detection</td>
</tr>
<tr>
<td align="left">Common reset line</td>
<td align="left">&#x0002B;</td>
<td align="left">No synchrony detection possible</td>
</tr>
<tr>
<td align="left">LUTs (compared to arithmetic operations)</td>
<td align="left">&#x0002B;</td>
<td align="left">None</td>
</tr>
<tr>
<td align="left">ADCs as compensation for common resets</td>
<td align="left">&#x02212;</td>
<td align="left">No significant compensation in case of 4-bit synapses</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The modifications are listed in descending order of their resource reduction, inspired by the FACETS wafer-scale hardware system and its production process. A larger reduction of chip resources allows more synapses on a single chip</italic>.</p>
</table-wrap-foot>
</table-wrap>
<table-wrap position="float" id="T6">
<label>Table 6</label>
<caption><p><bold>Applied hardware parameters</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left">Parameter</th>
<th align="left">Description</th>
<th align="left">Value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><italic>V</italic><sub>clrc</sub></td>
<td align="left">Amount of charge that will be accumulated on the capacitor <italic>C</italic><sub>1</sub> (Schemmel et al., <xref ref-type="bibr" rid="B63">2006</xref>) in case of causal spike time correlations, corresponds to <italic>x</italic>(&#x00394;<italic>t</italic>)</td>
<td align="left">0.90&#x02009;V</td>
</tr>
<tr>
<td align="left"><italic>V</italic><sub>clra</sub></td>
<td align="left">See <italic>V</italic><sub>clrc</sub>, but for the anti-causal circuit</td>
<td align="left">0.94&#x02009;V</td>
</tr>
<tr>
<td align="left"><italic>V</italic><sub>ctlow</sub></td>
<td align="left">Lower spike pair accumulation threshold</td>
<td align="left">0.85&#x02009;V</td>
</tr>
<tr>
<td align="left"><italic>V</italic><sub>cthigh</sub></td>
<td align="left">Higher spike pair accumulation threshold</td>
<td align="left">1.0&#x02009;V</td>
</tr>
<tr>
<td align="left"><italic>Adjdel</italic></td>
<td align="left">Adjustable delay between the pre- and postsynaptic spike</td>
<td align="left">2.5&#x02009;&#x003BC;A</td>
</tr>
<tr>
<td align="left"><italic>V</italic><sub>m</sub></td>
<td align="left">Parameter to stretch the STDP time constant &#x003C4;<sub>STDP</sub></td>
<td align="left">0.0&#x02009;V</td>
</tr>
<tr>
<td align="left"><italic>I</italic><sub>bcorreadb</sub></td>
<td align="left">Bias current that influences timing issues during read outs</td>
<td align="left">2.0&#x02009;&#x003BC;A</td>
</tr>
<tr>
<td align="left"><italic>drvI</italic><sub>rise</sub></td>
<td align="left">Rise time of synaptic conductance</td>
<td align="left">1.0&#x02009;V</td>
</tr>
<tr>
<td align="left"><italic>drvI</italic><sub>fall</sub></td>
<td align="left">Fall time of synaptic conductance</td>
<td align="left">1.0&#x02009;V</td>
</tr>
<tr>
<td align="left"><italic>V</italic><sub>start</sub></td>
<td align="left">Start value of synaptic conductance, need for small rise times</td>
<td align="left">0.25&#x02009;V</td>
</tr>
<tr>
<td align="left"><italic>drvI</italic><sub>out</sub></td>
<td align="left">Maximum value of synaptic conductance, corresponds to <italic>g</italic><sub>max</sub></td>
<td align="left">Variable</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The difference <italic>V</italic><sub>cthigh</sub>&#x02009;&#x02212;&#x02009;<italic>V</italic><sub>ctlow</sub> corresponds to the threshold <italic>a</italic><sub>th</sub>. All data is recorded with the FACETS chip-based hardware system using chip number 444 and synapse column 4</italic>.</p>
</table-wrap-foot>
</table-wrap>
<table-wrap position="float" id="T7">
<label>Table 7</label>
<caption><p><bold>Model description of the network benchmark using the reference synapse model</bold>. After Nordlie et al. (<xref ref-type="bibr" rid="B53">2009</xref>).</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>A: MODEL SUMMARY</bold></td>
</tr>
<tr>
<td align="left">Populations</td>
<td align="left">Three: uncorrelated input (U), correlated input (C), target (T)</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Topology</td>
<td align="left">Feed-forward</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Connectivity</td>
<td align="left">All-to-one</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Neuron model</td>
<td align="left">Leaky integrate-and-fire, fixed voltage threshold, fixed absolute refractory period (voltage clamp)</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Synapse model</td>
<td align="left">Exponential-shaped postsynaptic conductances</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Plasticity</td>
<td align="left">Intermediate G&#x000FC;tig spike-timing dependent plasticity</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Input</td>
<td align="left">Fixed-rate Poisson (for U) and multiple interaction process (for C) spike trains</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Measurements</td>
<td align="left">Synaptic weights</td>
<td align="left"/>
</tr>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>B: POPULATIONS</bold></td>
</tr>
<tr>
<td align="left">Name</td>
<td align="left">Elements</td>
<td align="left">Population size</td>
</tr>
<tr>
<td align="left">U</td>
<td align="left">Parrot neurons</td>
<td align="left"><italic>N</italic><sub>u</sub></td>
</tr>
<tr>
<td align="left">C</td>
<td align="left">Parrot neurons</td>
<td align="left"><italic>N</italic><sub>c</sub></td>
</tr>
<tr>
<td align="left">T</td>
<td align="left">IAF neurons</td>
<td align="left"><italic>N</italic><sub>T</sub></td>
</tr>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>C: CONNECTIVITY</bold></td>
</tr>
<tr>
<td align="left">Source</td>
<td align="left">Target</td>
<td align="left">Pattern</td>
</tr>
<tr>
<td align="left">U</td>
<td align="left">T</td>
<td align="left">All-to-all, uniformly distributed initial weights <italic>w</italic>, STDP, delay <italic>d</italic></td>
</tr>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>D: NEURON AND SYNAPSE MODEL</bold></td>
</tr>
<tr>
<td align="left">Name</td>
<td align="left">IAF neuron</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Type</td>
<td align="left">Leaky integrate-and-fire, exponential-shaped synaptic conductances</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Sub-threshold dynamics</td>
<td align="left"><italic>C</italic><sub>m</sub> d<italic>V</italic>/d<italic>t</italic>&#x02009;&#x0003D;&#x02009;<italic>g<sub>L</sub></italic> (<italic>E<sub>L</sub></italic>&#x02009;&#x02212;&#x02009;<italic>V</italic>)&#x02009;&#x0002B;&#x02009;<italic>g</italic>(<italic>t</italic>) (<italic>E</italic><sub>e</sub>&#x02009;&#x02212;&#x02009;<italic>V</italic>) if <italic>t</italic>&#x02009;&#x0003E;&#x02009;<italic>t&#x0002A;</italic>&#x02009;&#x0002B;&#x02009;&#x003C4;<sub>ref</sub> <italic>V</italic>(<italic>t</italic>)&#x02009;&#x0003D;&#x02009;<italic>V</italic><sub>reset</sub> else <italic>g</italic>(<italic>t</italic>)&#x02009;&#x0003D;&#x02009;<italic>wg</italic><sub>max</sub> exp(&#x02212;<italic>t</italic>/&#x003C4;<sub>syn</sub>)</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Spiking</td>
<td align="left">If <italic>V</italic>(<italic>t</italic>&#x02212;)&#x02009;&#x0003C;&#x02009;&#x000FE;eta&#x02009;&#x02227;&#x02009;<italic>V</italic>(<italic>t</italic>&#x0002B;)&#x02009;&#x02265;&#x02009;&#x003B8; 1. Set <italic>t</italic>&#x0002A;&#x02009;&#x0003D;&#x02009;<italic>t</italic>, 2. Emit spike with time stamp <italic>t</italic>&#x0002A;</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Name</td>
<td align="left">Parrot neuron</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Type</td>
<td align="left">Repeats input spikes with delay <italic>d</italic></td>
<td align="left"/>
</tr>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>E: PLASTICITY</bold></td>
</tr>
<tr>
<td align="left">Name</td>
<td align="left">Intermediate G&#x000FC;tig STDP</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Spike pairing scheme</td>
<td align="left">Reduced symmetric nearest-neighbor</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Weight dynamics</td>
<td align="left"><inline-formula><mml:math id="M9"><mml:mrow><mml:mi>&#x003B4;</mml:mi><mml:mi>w</mml:mi><mml:mrow><mml:mo class="MathClass-open">(</mml:mo><mml:mrow><mml:mi>w</mml:mi><mml:mo class="MathClass-punc">,</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mo class="MathClass-close">)</mml:mo></mml:mrow><mml:mo class="MathClass-rel">=</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:mo class="MathClass-open">(</mml:mo><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo class="MathClass-close">)</mml:mo></mml:mrow><mml:mi>x</mml:mi><mml:mrow><mml:mo class="MathClass-open">(</mml:mo><mml:mrow><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mo class="MathClass-close">)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula></td>
<td align="left"/>
</tr>
<tr>
<td align="left"/>
<td align="left"><italic>x</italic>(&#x00394;<italic>t</italic>)&#x02009;&#x0003D;&#x02009;exp(&#x02212;|&#x00394;<italic>t</italic>|/&#x003C4;<sub>STDP</sub>)</td>
<td align="left"/>
</tr>
<tr>
<td align="left"/>
<td align="left"><italic>F</italic>(<italic>w</italic>)&#x02009;&#x0003D;&#x02009;&#x003BB;(1&#x02009;&#x02212;&#x02009;<italic>w</italic>)<sup>&#x003BC;</sup> if &#x00394;<italic>t</italic>&#x02009;&#x0003E;&#x02009;0</td>
<td align="left"/>
</tr>
<tr>
<td align="left"/>
<td align="left"><italic>F</italic>(<italic>w</italic>)&#x02009;&#x0003D;&#x02009;&#x02212;&#x003BB;&#x003B1;<italic>w</italic><sup>&#x003BC;</sup> if &#x00394;<italic>t</italic>&#x02009;&#x0003C;&#x02009;0</td>
<td align="left"/>
</tr>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>F: INPUT</bold></td>
</tr>
<tr>
<td align="left">Type</td>
<td align="left">Target</td>
<td align="left">Description</td>
</tr>
<tr>
<td align="left">Poisson generators</td>
<td align="left">U</td>
<td align="left">Independent Poisson spike trains with firing rate <italic>&#x003C1;</italic></td>
</tr>
<tr>
<td align="left">MIP generators</td>
<td align="left">C</td>
<td align="left">Spike trains with correlation <italic>c</italic> and firing rate <italic>&#x003C1;</italic></td>
</tr>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>G: MEASUREMENTS</bold></td>
</tr>
<tr>
<td colspan="3" align="left">evolution and final distribution of all synaptic weights</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>For details about the hardware-inspired synapse model see Section</italic> <xref ref-type="sec" rid="s16">2.6.1</xref>.</p>
</table-wrap-foot>
</table-wrap>
<table-wrap position="float" id="T8">
<label>Table 8</label>
<caption><p><bold>Parameter specification</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left">Name</th>
<th align="left">Value</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>B: POPULATIONS</bold></td>
</tr>
<tr>
<td align="left"><italic>N</italic><sub>u</sub></td>
<td align="left">10</td>
<td align="left">Number of neurons in uncorrelated input population</td>
</tr>
<tr>
<td align="left"><italic>N</italic><sub>c</sub></td>
<td align="left">10</td>
<td align="left">Number of neurons in correlated input population</td>
</tr>
<tr>
<td align="left"><italic>N</italic><sub>T</sub></td>
<td align="left">1</td>
<td align="left">Number of neurons in target population</td>
</tr>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>C: CONNECTIVITY</bold></td>
</tr>
<tr>
<td align="left"><italic>w</italic></td>
<td align="left">Uniformly distributed over [0, 1]</td>
<td align="left">Number of neurons in uncorrelated input population</td>
</tr>
<tr>
<td align="left"><italic>d</italic></td>
<td align="left">0.1&#x02009;ms</td>
<td align="left">Synaptic transmission delays</td>
</tr>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>D: NEURON AND SYNAPSE MODEL</bold></td>
</tr>
<tr>
<td align="left"><italic>C</italic><sub>m</sub></td>
<td align="left">250&#x02009;pF</td>
<td align="left">Membrane capacity</td>
</tr>
<tr>
<td align="left"><italic>g</italic><sub>L</sub></td>
<td align="left">16.6667&#x02009;nS</td>
<td align="left">Leakage conductance</td>
</tr>
<tr>
<td align="left"><italic>E</italic><sub>L</sub></td>
<td align="left">&#x02212;70&#x02009;mV</td>
<td align="left">Leakage reversal potential</td>
</tr>
<tr>
<td align="left">&#x003B8;</td>
<td align="left">&#x02212;55&#x02009;mV</td>
<td align="left">Fixed firing threshold</td>
</tr>
<tr>
<td align="left"><italic>V</italic><sub>reset</sub></td>
<td align="left">&#x02212;60&#x02009;mV</td>
<td align="left">Reset potential</td>
</tr>
<tr>
<td align="left">&#x003C4;<sub>ref</sub></td>
<td align="left">2&#x02009;ms</td>
<td align="left">Absolute refractory period</td>
</tr>
<tr>
<td align="left"><italic>E</italic><sub>e</sub></td>
<td align="left">0&#x02009;mV</td>
<td align="left">Excitatory reversal potential</td>
</tr>
<tr>
<td align="left"><italic>g</italic><sub>max</sub></td>
<td align="left">100&#x02009;nS</td>
<td align="left">Postsynaptic maximum conductance</td>
</tr>
<tr>
<td align="left">&#x003C4;<sub>syn</sub></td>
<td align="left">0.2&#x02009;ms</td>
<td align="left">Postsynaptic conductance time constant</td>
</tr>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>E: PLASTICITY</bold></td>
</tr>
<tr>
<td align="left">&#x003B1;</td>
<td align="left">1.05</td>
<td align="left">Asymmetry</td>
</tr>
<tr>
<td align="left">&#x003BB;</td>
<td align="left">0.005</td>
<td align="left">learning rate</td>
</tr>
<tr>
<td align="left">&#x003BC;</td>
<td align="left">0.4</td>
<td align="left">Exponent</td>
</tr>
<tr>
<td align="left">&#x003C4;<sub>STDP</sub></td>
<td align="left">20&#x02009;ms</td>
<td align="left">STDP time constant</td>
</tr>
<tr>
<td style="background-color:DarkGray;" colspan="3" align="left"><bold>F: INPUT</bold></td>
</tr>
<tr>
<td align="left">&#x003C1;</td>
<td align="left">7.2&#x02009;Hz</td>
<td align="left">Firing rate</td>
</tr>
<tr>
<td align="left"><italic>c</italic></td>
<td align="left">[0.005, 0.05]</td>
<td align="left">Pair-wise correlation between spike trains</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The categories refer to the model description in Table <xref ref-type="table" rid="T7">7</xref></italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>In this study, we introduced novel analysis tools that allow the investigation of hardware constraints and can thereby verify and improve the hardware design without the need for expensive and time-consuming prototyping. Ideally, this validation process should be shifted to an earlier stage of hardware design, combining expertise from Computational Neuroscience and Neuromorphic Engineering, as, e.g., published by Linares-Barranco et al. (<xref ref-type="bibr" rid="B42">2011</xref>). This kind of research is crucial for researchers to use and understand work executed on neuromorphic hardware systems, and thereby to transform such systems into a tool substituting for von Neumann computers in Computational Neuroscience. Br&#x000FC;derle et al. (<xref ref-type="bibr" rid="B9">2011</xref>) report the development of a <italic>virtual hardware</italic>, a simulation tool replicating the functionality and configuration space of the entire FACETS wafer-scale hardware system. This tool will allow further analyses of hardware constraints, e.g., in the communication infrastructure and configuration space.</p>
<p>The presented results verify the current implementation of the FACETS wafer-scale hardware system in terms of the balance between weight resolution, update controller frequency, and circuit variations. Further improvement of the existing hardware implementation would require advances in all of these aspects. The only substantial bottleneck identified is the common reset, which has already led to design improvements of the wafer-scale system.</p>
<p>Although all presented studies refer to the intermediate G&#x000FC;tig STDP model, any other STDP model relying on equation (<xref ref-type="disp-formula" rid="E1">1</xref>) and an exponentially decaying time-dependence can be investigated with the existing software tools in a generic way, e.g. those models listed in Table <xref ref-type="table" rid="T1">1</xref>. In contrast to the fixed exponential time-dependence implemented as analog circuits in the FACETS wafer-scale hardware system, the weight-dependence is freely programmable and stored in a LUT.</p>
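Because only the weight-dependence enters the LUT, switching STDP models amounts to regenerating the table with a different <italic>F</italic>(<italic>w</italic>). The following sketch illustrates this; the function names and the exact parameterizations of the additive and van-Rossum-like variants are illustrative assumptions, and bounds on the weight range are imposed here for simplicity.

```python
# Sketch: the spike-timing factor x(dt) is fixed (analog circuitry), while
# the weight-dependence F(w) is programmable. Regenerating the LUT from a
# different F(w) is all that is needed to switch STDP models. The variant
# parameterizations below are illustrative assumptions; weights are clamped
# to [0, 1] for this sketch, although some of these models are unbounded.

def guetig_F(w, causal, lam=0.005, alpha=1.05, mu=0.4):
    return lam * (1.0 - w) ** mu if causal else -lam * alpha * w ** mu

def additive_F(w, causal, lam=0.005, alpha=1.05):
    return lam if causal else -lam * alpha

def multiplicative_F(w, causal, lam=0.005, alpha=1.05):
    # van-Rossum-like: additive potentiation, depression proportional to w
    return lam if causal else -lam * alpha * w

def make_lut(F, causal, r=4, n=36):
    levels = 2 ** r
    lut = []
    for i in range(levels):
        w = i / (levels - 1)
        for _ in range(n):                 # accumulate n spike pairs
            w = min(1.0, max(0.0, w + F(w, causal)))
        lut.append(round(w * (levels - 1)))
    return lut

luts = {name: (make_lut(F, True), make_lut(F, False))
        for name, F in [("guetig", guetig_F),
                        ("additive", additive_F),
                        ("multiplicative", multiplicative_F)]}
```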
<p>Ideally, a high resolution is required in the weight range of highest plausibility, i.e., a high <italic>effective resolution</italic>. Bounded STDP models (e.g., the intermediate G&#x000FC;tig STDP model applied in this study) are well suited to a 4-bit weight resolution and allow a linear mapping of continuous to discrete weights. A 4-bit weight resolution causes large weight updates and hence broadens the weight distribution to span the whole weight range, resulting in a high effective resolution. Unbounded STDP models (e.g., the power law and van Rossum STDP models), on the other hand, have long tails toward high weights. Cutting off the tail by mapping only low weights to discrete weights would inflate the frequency of the highest discrete weight. A possible solution is a non-linear mapping of continuous to discrete weights: large distances between high discrete weights and small distances between low discrete weights. However, a variable distance between discrete weights would require additional hardware effort.</p>
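Such a non-linear mapping can be sketched as follows. The power-law spacing with exponent 2 and the exponential weight sample are our own illustrative assumptions, not hardware specifications; the point is only that a grid dense at low weights reduces the average quantization error for a long-tailed distribution.

```python
import random

# Sketch of the proposed non-linear weight mapping: for unbounded STDP
# models with long-tailed weight distributions, a grid that is dense at
# low weights and coarse at high weights can lower the average
# quantization error. The power-law spacing (exponent gamma=2) and the
# exponential test sample are illustrative assumptions.

def linear_grid(levels, w_max):
    return [w_max * i / (levels - 1) for i in range(levels)]

def powerlaw_grid(levels, w_max, gamma=2.0):
    # small indices lie close together, large indices far apart
    return [w_max * (i / (levels - 1)) ** gamma for i in range(levels)]

def quantize(w, grid):
    return min(grid, key=lambda g: abs(g - w))

def mean_abs_error(ws, grid):
    return sum(abs(w - quantize(w, grid)) for w in ws) / len(ws)

rng = random.Random(0)
# long-tailed sample: most weights small, a few large
weights = [min(rng.expovariate(5.0), 4.0) for _ in range(5000)]
lin = linear_grid(16, 4.0)
pwr = powerlaw_grid(16, 4.0)
err_lin = mean_abs_error(weights, lin)
err_pwr = mean_abs_error(weights, pwr)
# For a sample concentrated at low weights, the non-uniform grid typically
# yields the smaller mean quantization error.
```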
<p>An all-to-all spike pairing scheme applied to the reference synapses within the network benchmark results in <italic>p</italic>-values worse (higher) than those for synapses implementing a reduced symmetric nearest-neighbor spike pairing scheme (not shown, but comparable to the 4-bit discrete weights in Figure <xref ref-type="fig" rid="F7">7</xref>E, red values). Detailed analyses of different spike pairing schemes are left for further studies.</p>
<p>As a next step, our hardware synapse model can replace the regular STDP synapses in simulations of established neural networks to test their robustness and applicability for physical emulation on the FACETS wafer-scale hardware system. The synapse model is available in the following NEST release and can easily be used in NEST or PyNN network descriptions. If neural networks, or modifications of them, qualitatively reproduce their original behavior in these simulations, they can be applied to the hardware system, from which similar results can then be expected. Thus, the presented simulation tools allow network architectures to be modified beforehand to ensure compatibility with the hardware system.</p>
<p>With respect to more complex long-term plasticity models, the hardware system is currently being extended by a programmable microprocessor that controls all weight modifications. This processor allows synapse rows to be combined in order to compensate for common resets. With access to further neuron or network properties, the processor would allow for more complex plasticity rules, e.g., those of Clopath et al. (<xref ref-type="bibr" rid="B14">2008</xref>) and Vogelstein et al. (<xref ref-type="bibr" rid="B76">2007</xref>). Even modifications spanning multiple neurons are feasible, a phenomenon observed in experiments with neuromodulators (Eckhorn et al., <xref ref-type="bibr" rid="B20">1990</xref>; Itti and Koch, <xref ref-type="bibr" rid="B32">2001</xref>; Reynolds and Wickens, <xref ref-type="bibr" rid="B58">2002</xref>; Shmuel et al., <xref ref-type="bibr" rid="B65">2005</xref>). Nevertheless, more experimental data and consensus about neuromodulator models and their applications are required before the processor can be customized further. New hardware revisions are expensive and should consequently only cover established models that have been prepared for hardware implementation by dedicated studies.</p>
<p>The presented evaluation of the FACETS wafer-scale hardware system is meant to encourage neuroscientists to benefit from neuromorphic hardware without leaving their accustomed environment in terms of neuron, synapse, and network models. We further advocate that, toward an efficient exploitation of hardware resources, the design of synapse models be influenced by hardware implementations rather than by their mathematical tractability alone (e.g. Badoni et al., <xref ref-type="bibr" rid="B3">2006</xref>).</p>
</sec>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<app-group>
<app id="A1">
<label>A</label> <title>Appendix</title>
<sec>
<label>A.1</label> <title>Analytical distributions</title>
<p>Weight evolutions can be described by asymmetric Markov processes with boundary conditions. Following van Rossum et al. (<xref ref-type="bibr" rid="B72">2000</xref>), the weight distribution <italic>P</italic>(<italic>w</italic>) can be expressed by a Taylor expansion of the underlying master equation</p>
<disp-formula id="E4"><mml:math id="M13"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd class="eqnarray-1"><mml:mfrac><mml:mrow><mml:mi>&#x02202;</mml:mi><mml:mi>P</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>w</mml:mi><mml:mo class="MathClass-punc">,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mfenced></mml:mrow><mml:mrow><mml:mi>&#x02202;</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac></mml:mtd><mml:mtd class="eqnarray-2"><mml:mo class="MathClass-rel">=</mml:mo><mml:mo class="MathClass-bin">-</mml:mo><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext>d</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mi>P</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>w</mml:mi><mml:mo class="MathClass-punc">,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mfenced><mml:mo class="MathClass-bin">-</mml:mo><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext>p</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mi>P</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>w</mml:mi><mml:mo class="MathClass-punc">,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mfenced><mml:mo class="MathClass-bin">&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext>d</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mi>P</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>w</mml:mi><mml:mo class="MathClass-bin">&#x0002B;</mml:mo><mml:mi>&#x00394;</mml:mi><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext>d</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mo class="MathClass-punc">,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mfenced></mml:mtd><mml:mtd class="eqnarray-3"></mml:mtd><mml:mtd class="eqnarray-4"><mml:mtext class="eqnarray"></mml:mtext></mml:mtd></mml:mtr><mml:mtr><mml:mtd 
class="eqnarray-1"></mml:mtd><mml:mtd class="eqnarray-2"><mml:mspace width="1em" class="quad"/><mml:mo class="MathClass-bin">&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext>p</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mi>P</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>w</mml:mi><mml:mo class="MathClass-bin">-</mml:mo><mml:mi>&#x00394;</mml:mi><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext>p</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mo class="MathClass-punc">,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mfenced><mml:mo class="MathClass-punc">.</mml:mo></mml:mtd><mml:mtd class="eqnarray-3"></mml:mtd><mml:mtd class="eqnarray-4"><mml:mtext class="eqnarray">(A1)</mml:mtext></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>In contrast to van Rossum et al. (<xref ref-type="bibr" rid="B72">2000</xref>), this study defines a weight step &#x00394;<italic>w</italic> by a sequence of <italic>n</italic> weight updates &#x003B4;<italic>w</italic> as described by equation (<xref ref-type="disp-formula" rid="E1">1</xref>). Hence the weight steps &#x00394;<italic>w</italic> can be written as &#x00394;<italic>w</italic><sub>d</sub>(<italic>w</italic>)&#x02009;&#x0003D;&#x02009;(<italic>w</italic>&#x02009;&#x0002B;&#x02009;<italic>F</italic><sub>&#x02212;</sub>(<italic>w</italic>))<italic><sub>n</sub></italic>&#x02009;&#x02212;&#x02009;<italic>w</italic> and &#x00394;<italic>w</italic><sub>p</sub>(<italic>w</italic>)&#x02009;&#x0003D;&#x02009;(<italic>w</italic>&#x02009;&#x0002B;&#x02009;<italic>F</italic><sub>&#x0002B;</sub>(<italic>w</italic>))<italic><sub>n</sub></italic>&#x02009;&#x02212;&#x02009;<italic>w</italic>, where <italic>f</italic>(<italic>w</italic>)<italic><sub>n</sub></italic> is the <italic>n</italic>-th recursive evaluation of <italic>f</italic>(<italic>w</italic>).</p>
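The recursive definition above can be made concrete in a short Python sketch. The update functions below are hypothetical stand-ins for <italic>F</italic><sub>+</sub> and <italic>F</italic><sub>&#x02212;</sub> of equation (1) (a simple multiplicative rule with arbitrary learning rate eta and maximum weight w_max), chosen only to illustrate the <italic>n</italic>-th recursive evaluation:

```python
def iterate(f, w, n):
    """n-th recursive evaluation f(w)_n: apply f to w n times."""
    for _ in range(n):
        w = f(w)
    return w

eta, w_max = 0.1, 1.0                     # arbitrary illustration values
F_plus  = lambda w: eta * (w_max - w)     # hypothetical potentiation update F+
F_minus = lambda w: -eta * w              # hypothetical depression update F-

def weight_step(F, w, n):
    """Accumulated weight step Delta_w(w) = (w + F(w))_n - w,
    i.e. n consecutive applications of the single-pair update."""
    return iterate(lambda v: v + F(v), w, n) - w
```

For example, two consecutive potentiation updates starting from w = 0.5 accumulate to a step of 0.095 rather than 2 &#x000D7; 0.05, because the second update sees the already increased weight.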
<p>According to van Rossum et al. (<xref ref-type="bibr" rid="B72">2000</xref>) this Taylor expansion results in the Fokker&#x02013;Planck equation</p>
<disp-formula id="E5"><label>(A2)</label><mml:math id="M14"><mml:mfrac><mml:mrow><mml:mi>&#x02202;</mml:mi><mml:mi>P</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>w</mml:mi><mml:mo class="MathClass-punc">,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mfenced></mml:mrow><mml:mrow><mml:mi>&#x02202;</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo class="MathClass-rel">=</mml:mo><mml:mo class="MathClass-bin">-</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x02202;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02202;</mml:mi><mml:mi>w</mml:mi></mml:mrow></mml:mfrac><mml:mfenced separators="" open="[" close="]"><mml:mrow><mml:mi>A</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:mfenced><mml:mi>P</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>w</mml:mi><mml:mo class="MathClass-punc">,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mfenced></mml:mrow></mml:mfenced><mml:mo class="MathClass-bin">&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mfrac><mml:mrow><mml:msup><mml:mrow><mml:mi>&#x02202;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mi>&#x02202;</mml:mi><mml:msup><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mfrac><mml:mfenced separators="" open="[" close="]"><mml:mrow><mml:mi>B</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:mfenced><mml:mi>P</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>w</mml:mi><mml:mo class="MathClass-punc">,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mfenced></mml:mrow></mml:mfenced></mml:math></disp-formula>
<p>with jump moments <italic>A</italic>(<italic>w</italic>)&#x02009;&#x0003D;&#x02009;<italic>p</italic><sub>d</sub>&#x00394;<italic>w</italic><sub>d</sub>(<italic>w</italic>)&#x02009;&#x0002B;&#x02009;<italic>p</italic><sub>p</sub>&#x00394;<italic>w</italic><sub>p</sub>(<italic>w</italic>) and <italic>B</italic>(<italic>w</italic>)&#x02009;&#x0003D;&#x02009;<italic>p</italic><sub>d</sub>&#x00394;<italic>w</italic><sub>d</sub>(<italic>w</italic>)<sup>2</sup>&#x02009;&#x0002B;&#x02009;<italic>p</italic><sub>p</sub>&#x00394;<italic>w</italic><sub>p</sub>(<italic>w</italic>)<sup>2</sup>, which has the following solution for reflecting boundary conditions (Gardiner, <xref ref-type="bibr" rid="B26">2009</xref>):</p>
<disp-formula id="E6"><label>(A3)</label><mml:math id="M15"><mml:mi>P</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:mfenced><mml:mo class="MathClass-rel">=</mml:mo><mml:mfrac><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>B</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:mfenced></mml:mrow></mml:mfrac><mml:mo class="qopname">exp</mml:mo><mml:mfenced separators="" open="[" close="]"><mml:mrow><mml:mn>2</mml:mn><mml:msubsup><mml:mrow><mml:mo class="qopname">&#x0222B;</mml:mo></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msubsup><mml:mspace width="0.3em" class="thinspace"/><mml:mfrac><mml:mrow><mml:mi>A</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:msup><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:mfenced></mml:mrow><mml:mrow><mml:mi>B</mml:mi><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:msup><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:mfenced></mml:mrow></mml:mfrac><mml:mi>d</mml:mi><mml:msup><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:mfenced><mml:mo class="MathClass-punc">,</mml:mo></mml:math></disp-formula>
<p>with <italic>N</italic> as a normalization factor. For small <italic>n</italic> this equation can be solved analytically, but it is integrated numerically here in order to also cover large <italic>n</italic>.</p>
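The numerical evaluation of equation (A3) can be sketched as follows; the jump moments A and B are passed in as callables, so the sketch makes no assumption about the underlying STDP model:

```python
import numpy as np

def stationary_distribution(A, B, w_grid):
    """Numerically evaluate equation (A3),
    P(w) = N / B(w) * exp[ 2 * int_0^w A(w') / B(w') dw' ],
    with the normalization N chosen so that P integrates to one."""
    ratio = A(w_grid) / B(w_grid)
    dw = np.diff(w_grid)
    # cumulative trapezoidal integral of A/B from the lower boundary
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (ratio[1:] + ratio[:-1]) * dw)))
    P = np.exp(2.0 * integral) / B(w_grid)
    P /= np.sum(0.5 * (P[1:] + P[:-1]) * dw)  # trapezoidal normalization
    return P
```

As a sanity check, a linear drift A(w) = &#x02212;(w &#x02212; 0.5) with constant diffusion yields, up to discretization, a Gaussian centered at w = 0.5, as expected for an Ornstein&#x02013;Uhlenbeck process with reflecting boundaries far from the mean.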
<p>However, this analytical approach fails, because the Taylor expansion in combination with the boundary conditions does not hold for large <italic>n</italic> (absorbing boundary conditions do not improve the results).</p>
</sec>
<sec>
<label>A.2</label> <title>STDP in the FACETS chip-based hardware system</title>
<p>The STDP mechanism of the FACETS chip-based hardware system differs from that of the FACETS wafer-scale hardware system as follows. The major difference is the comparison of spike pair accumulations with thresholds. The wafer-scale system analyzed in this study compares each of the two spike pair accumulations with a threshold (the threshold can be set independently for both accumulations, but they are assumed to be equal in this study). A weight update is performed if a single accumulation crosses this threshold. In contrast, the chip-based system used for all measurements subtracts the two spike pair accumulations and compares the absolute value of their difference |<italic>a</italic><sub>c</sub>&#x02009;&#x02212;&#x02009;<italic>a</italic><sub>a</sub>| with a single threshold. If this threshold is crossed, the sign of the difference sig(<italic>a</italic><sub>c</sub>&#x02009;&#x02212;&#x02009;<italic>a</italic><sub>a</sub>) determines whether the causal or the anti-causal accumulation prevails, and the weight is updated accordingly. However, this difference between the two hardware systems can be neglected here, because both STDP mechanisms are identical if exclusively causal or exclusively anti-causal spike pairs are accumulated, which is the case for the measurement protocol of STDP curves.</p>
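The two decision rules can be contrasted in a minimal Python sketch. This is an illustration only: the behavior exactly at threshold, and the direction chosen when both accumulations cross it, are assumptions rather than documented hardware properties.

```python
def wafer_scale_update(a_c, a_a, theta):
    """Wafer-scale rule: each accumulation is compared with the
    (here shared) threshold; any crossing triggers an update."""
    if a_c >= theta:
        return +1      # causal accumulation crossed -> potentiation
    if a_a >= theta:
        return -1      # anti-causal accumulation crossed -> depression
    return 0           # no update

def chip_based_update(a_c, a_a, theta):
    """Chip-based rule: |a_c - a_a| is compared with a single
    threshold; the sign of the difference selects the direction."""
    diff = a_c - a_a
    if abs(diff) >= theta:
        return +1 if diff > 0 else -1
    return 0
```

With exclusively causal pairs (a_a = 0), both rules coincide, which is why the difference can be neglected for the STDP curve measurements; with a_c = 5 and a_a = 4 at threshold 4, however, the wafer-scale rule potentiates while the chip-based rule performs no update.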
</sec>
<sec>
<label>A.3</label> <title>Generating spike pairs in hardware</title>
<p>Spike pairs in the FACETS chip-based hardware system are generated as follows. Presynaptic spike times can be set precisely, whereas postsynaptic spikes need to be triggered by presynaptic input. Therefore, a presynaptic spike (via the measured synapse) and <italic>m</italic> trigger spikes (eliciting a postsynaptic spike) are fed into a single neuron occupying <italic>m</italic>&#x02009;&#x0002B;&#x02009;1 synapses. The synaptic weights as well as the synapse driver strengths of the trigger synapses are proportional to the synaptic peak conductance and are adjusted such that exactly one postsynaptic spike is evoked. The highest reliability of spike times within a hardware run and between runs is achieved for <italic>m</italic>&#x02009;&#x0003D;&#x02009;4 trigger synapses (not shown here). The synapse driver strength is set to the intermediate value between the two limiting cases in which a single trigger alone evokes either no or multiple postsynaptic spikes. The synaptic weight of the measured synapse is set to zero, so that the measured synapse has no influence on the elicitation of postsynaptic spikes.</p>
</sec>
</app>
</app-group>
<ack>
<p>The research leading to these results has received funding from the European Union 6th and 7th Framework Program under grant agreement no. 15879 (FACETS) and no. 269921 (BrainScaleS). Special thanks to Yves Fr&#x000E9;gnac, Daniel Br&#x000FC;derle, and Andreas Gr&#x000FC;bl for helpful discussions and technical support.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amit</surname> <given-names>D.</given-names></name> <name><surname>Fusi</surname> <given-names>S.</given-names></name></person-group> (<year>1994</year>). <article-title>Learning in neural networks with material synapses</article-title>. <source>Neural Comput.</source> <volume>6</volume>, <fpage>957</fpage>&#x02013;<lpage>982</lpage>.<pub-id pub-id-type="doi">10.1162/neco.1994.6.5.957</pub-id></citation></ref>
<ref id="B2"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Arthur</surname> <given-names>J. V.</given-names></name> <name><surname>Boahen</surname> <given-names>K.</given-names></name></person-group> (<year>2006</year>). <article-title>&#x0201C;Learning in silicon: timing is everything,&#x0201D;</article-title> in <conf-name>Advances in Neural Information Processing Systems (NIPS)</conf-name>, Vol. <volume>18</volume>. <conf-loc>Vancouver</conf-loc>: <conf-sponsor>MIT Press</conf-sponsor>, <fpage>75</fpage>&#x02013;<lpage>82</lpage>.</citation></ref>
<ref id="B3"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Badoni</surname> <given-names>D.</given-names></name> <name><surname>Giulioni</surname> <given-names>M.</given-names></name> <name><surname>Dante</surname> <given-names>V.</given-names></name> <name><surname>Del Giudice</surname> <given-names>P.</given-names></name></person-group> (<year>2006</year>). <article-title>&#x0201C;An aVLSI recurrent network of spiking neurons with reconfigurable and plastic synapses,&#x0201D;</article-title> in <conf-name>Proceedings of the 2006 International Symposium on Circuits and Systems (ISCAS)</conf-name> (<conf-loc>Island of Kos</conf-loc>: <conf-sponsor>IEEE Press</conf-sponsor>), <fpage>4</fpage>.<pub-id pub-id-type="pmid">16793158</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bi</surname> <given-names>G.</given-names></name> <name><surname>Poo</surname> <given-names>M.</given-names></name></person-group> (<year>1998</year>). <article-title>Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type</article-title>. <source>J. Neurosci.</source> <volume>18</volume>, <fpage>10464</fpage>&#x02013;<lpage>10472</lpage>.<pub-id pub-id-type="pmid">9852584</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bi</surname> <given-names>G.</given-names></name> <name><surname>Poo</surname> <given-names>M.</given-names></name></person-group> (<year>2001</year>). <article-title>Synaptic modification by correlated activity: Hebb&#x02019;s postulate revisited</article-title>. <source>Annu. Rev. Neurosci.</source> <volume>24</volume>, <fpage>139</fpage>&#x02013;<lpage>166</lpage>.<pub-id pub-id-type="doi">10.1146/annurev.neuro.24.1.139</pub-id><pub-id pub-id-type="pmid">11283308</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bill</surname> <given-names>J.</given-names></name> <name><surname>Schuch</surname> <given-names>K.</given-names></name> <name><surname>Br&#x000FC;derle</surname> <given-names>D.</given-names></name> <name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Maass</surname> <given-names>W.</given-names></name> <name><surname>Meier</surname> <given-names>K.</given-names></name></person-group> (<year>2010</year>). <article-title>Compensating inhomogeneities of neuromorphic VLSI devices via short-term synaptic plasticity</article-title>. <source>Front. Comput. Neurosci.</source> <volume>4</volume>:<fpage>129</fpage>.<pub-id pub-id-type="doi">10.3389/fncom.2010.00129</pub-id><pub-id pub-id-type="pmid">21031027</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brader</surname> <given-names>J. M.</given-names></name> <name><surname>Senn</surname> <given-names>W.</given-names></name> <name><surname>Fusi</surname> <given-names>S.</given-names></name></person-group> (<year>2007</year>). <article-title>Learning real world stimuli in a neural network with spike-driven synaptic dynamics</article-title>. <source>Neural Comput.</source> <volume>19</volume>, <fpage>2881</fpage>&#x02013;<lpage>2912</lpage>.<pub-id pub-id-type="doi">10.1162/neco.2007.19.11.2881</pub-id><pub-id pub-id-type="pmid">17883345</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brette</surname> <given-names>R.</given-names></name> <name><surname>Rudolph</surname> <given-names>M.</given-names></name> <name><surname>Carnevale</surname> <given-names>T.</given-names></name> <name><surname>Hines</surname> <given-names>M.</given-names></name> <name><surname>Beeman</surname> <given-names>D.</given-names></name> <name><surname>Bower</surname> <given-names>J. M.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name> <name><surname>Morrison</surname> <given-names>A.</given-names></name> <name><surname>Goodman</surname> <given-names>P. H.</given-names></name> <name><surname>Harris</surname> <given-names>F. C.</given-names> <suffix>Jr.</suffix></name> <name><surname>Zirpe</surname> <given-names>M.</given-names></name> <name><surname>Natschl&#x000E4;ger</surname> <given-names>T.</given-names></name> <name><surname>Pecevski</surname> <given-names>D.</given-names></name> <name><surname>Ermentrout</surname> <given-names>B.</given-names></name> <name><surname>Djurfeldt</surname> <given-names>M.</given-names></name> <name><surname>Lansner</surname> <given-names>A.</given-names></name> <name><surname>Rochel</surname> <given-names>O.</given-names></name> <name><surname>Vieville</surname> <given-names>T.</given-names></name> <name><surname>Muller</surname> <given-names>E.</given-names></name> <name><surname>Davison</surname> <given-names>A. P.</given-names></name> <name><surname>El Boustani</surname> <given-names>S.</given-names></name> <name><surname>Destexhe</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>Simulation of networks of spiking neurons: a review of tools and strategies</article-title>. <source>J. Comput. 
Neurosci.</source> <volume>23</volume>, <fpage>349</fpage>&#x02013;<lpage>398</lpage>.<pub-id pub-id-type="doi">10.1007/s10827-007-0038-6</pub-id><pub-id pub-id-type="pmid">17629781</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Br&#x000FC;derle</surname> <given-names>D.</given-names></name> <name><surname>Petrovici</surname> <given-names>M.</given-names></name> <name><surname>Vogginger</surname> <given-names>B.</given-names></name> <name><surname>Ehrlich</surname> <given-names>M.</given-names></name> <name><surname>Pfeil</surname> <given-names>T.</given-names></name> <name><surname>Millner</surname> <given-names>S.</given-names></name> <name><surname>Gr&#x000FC;bl</surname> <given-names>A.</given-names></name> <name><surname>Wendt</surname> <given-names>K.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>E.</given-names></name> <name><surname>Schwartz</surname> <given-names>M.-O.</given-names></name> <name><surname>Husmann de Oliveira</surname> <given-names>D.</given-names></name> <name><surname>Jeltsch</surname> <given-names>S.</given-names></name> <name><surname>Fieres</surname> <given-names>J.</given-names></name> <name><surname>Schilling</surname> <given-names>M.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>P.</given-names></name> <name><surname>Breitwieser</surname> <given-names>O.</given-names></name> <name><surname>Petkov</surname> <given-names>V.</given-names></name> <name><surname>Muller</surname> <given-names>L.</given-names></name> <name><surname>Davison</surname> <given-names>A. 
P.</given-names></name> <name><surname>Krishnamurthy</surname> <given-names>P.</given-names></name> <name><surname>Kremkow</surname> <given-names>J.</given-names></name> <name><surname>Lundqvist</surname> <given-names>M.</given-names></name> <name><surname>Muller</surname> <given-names>E.</given-names></name> <name><surname>Partzsch</surname> <given-names>J.</given-names></name> <name><surname>Scholze</surname> <given-names>S.</given-names></name> <name><surname>Z&#x000FC;hl</surname> <given-names>L.</given-names></name> <name><surname>Destexhe</surname> <given-names>A.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name> <name><surname>Potjans</surname> <given-names>T. C.</given-names></name> <name><surname>Lansner</surname> <given-names>A.</given-names></name> <name><surname>Sch&#x000FC;ffny</surname> <given-names>R.</given-names></name> <name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Meier</surname> <given-names>K.</given-names></name></person-group> (<year>2011</year>). <article-title>A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems</article-title>. <source>Biol. Cybern.</source> <volume>104</volume>, <fpage>263</fpage>&#x02013;<lpage>296</lpage>.<pub-id pub-id-type="doi">10.1007/s00422-011-0435-9</pub-id><pub-id pub-id-type="pmid">21618053</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brunel</surname> <given-names>N.</given-names></name></person-group> (<year>2000</year>). <article-title>Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons</article-title>. <source>J. Comput. Neurosci.</source> <volume>8</volume>, <fpage>183</fpage>&#x02013;<lpage>208</lpage>.<pub-id pub-id-type="doi">10.1023/A:1008925309027</pub-id><pub-id pub-id-type="pmid">10809012</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Butts</surname> <given-names>D. A.</given-names></name> <name><surname>Weng</surname> <given-names>C.</given-names></name> <name><surname>Jin</surname> <given-names>J.</given-names></name> <name><surname>Yeh</surname> <given-names>C.-I.</given-names></name> <name><surname>Lesica</surname> <given-names>N. A.</given-names></name> <name><surname>Alonso</surname> <given-names>J.-M.</given-names></name> <name><surname>Stanley</surname> <given-names>G. B.</given-names></name></person-group> (<year>2007</year>). <article-title>Temporal precision in the neural code and the timescales of natural vision</article-title>. <source>Nature</source> <volume>449</volume>, <fpage>92</fpage>&#x02013;<lpage>95</lpage>.<pub-id pub-id-type="doi">10.1038/nature06105</pub-id><pub-id pub-id-type="pmid">17805296</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cassenaer</surname> <given-names>S.</given-names></name> <name><surname>Laurent</surname> <given-names>G.</given-names></name></person-group> (<year>2007</year>). <article-title>Hebbian STDP in mushroom bodies facilitates the synchronous flow of olfactory information in locusts</article-title>. <source>Nature</source> <volume>448</volume>, <fpage>709</fpage>&#x02013;<lpage>713</lpage>.<pub-id pub-id-type="doi">10.1038/nature05973</pub-id><pub-id pub-id-type="pmid">17581587</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cassenaer</surname> <given-names>S.</given-names></name> <name><surname>Laurent</surname> <given-names>G.</given-names></name></person-group> (<year>2012</year>). <article-title>Conditional modulation of spike-timing-dependent plasticity for olfactory learning</article-title>. <source>Nature</source> <volume>482</volume>, <fpage>47</fpage>&#x02013;<lpage>52</lpage>.<pub-id pub-id-type="doi">10.1038/nature10776</pub-id><pub-id pub-id-type="pmid">22278062</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clopath</surname> <given-names>C.</given-names></name> <name><surname>Ziegler</surname> <given-names>L.</given-names></name> <name><surname>Vasilaki</surname> <given-names>E.</given-names></name> <name><surname>B&#x000FC;sing</surname> <given-names>L.</given-names></name> <name><surname>Gerstner</surname> <given-names>W.</given-names></name></person-group> (<year>2008</year>). <article-title>Tag-Trigger-Consolidation: a model of early and late long-term-potentiation and depression</article-title>. <source>PLoS Comput. Biol.</source> <volume>4</volume>, <fpage>e1000248</fpage>.<pub-id pub-id-type="doi">10.1371/journal.pcbi.1000248</pub-id></citation></ref>
<ref id="B15"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Daouzli</surname> <given-names>A.</given-names></name> <name><surname>Saighi</surname> <given-names>S.</given-names></name> <name><surname>Buhry</surname> <given-names>L.</given-names></name> <name><surname>Bornat</surname> <given-names>Y.</given-names></name> <name><surname>Renaud</surname> <given-names>S.</given-names></name></person-group> (<year>2008</year>). <article-title>&#x0201C;Weights convergence and spikes correlation in an adaptive neural network implemented on VLSI,&#x0201D;</article-title> in <conf-name>Proceedings of the International Conference on Bio-inspired Systems and Signal Processing</conf-name>, <conf-loc>Funchal</conf-loc>, <fpage>286</fpage>&#x02013;<lpage>291</lpage>.</citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davison</surname> <given-names>A.</given-names></name> <name><surname>Br&#x000FC;derle</surname> <given-names>D.</given-names></name> <name><surname>Eppler</surname> <given-names>J. M.</given-names></name> <name><surname>Kremkow</surname> <given-names>J.</given-names></name> <name><surname>Muller</surname> <given-names>E.</given-names></name> <name><surname>Pecevski</surname> <given-names>D.</given-names></name> <name><surname>Perrinet</surname> <given-names>L.</given-names></name> <name><surname>Yger</surname> <given-names>P.</given-names></name></person-group> (<year>2009</year>). <article-title>PyNN: a common interface for neuronal network simulators</article-title>. <source>Front. Neuroinformatics</source> <volume>2</volume>:<fpage>11</fpage>.<pub-id pub-id-type="doi">10.3389/neuro.11.011.2008</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davison</surname> <given-names>A. P.</given-names></name> <name><surname>Fr&#x000E9;gnac</surname> <given-names>Y.</given-names></name></person-group> (<year>2006</year>). <article-title>Learning cross-modal spatial transformations through spike timing-dependent plasticity</article-title>. <source>J. Neurosci.</source> <volume>26</volume>, <fpage>5604</fpage>&#x02013;<lpage>5615</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.5263-05.2006</pub-id><pub-id pub-id-type="pmid">16723517</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Desbordes</surname> <given-names>G.</given-names></name> <name><surname>Jin</surname> <given-names>J.</given-names></name> <name><surname>Alonso</surname> <given-names>J.-M.</given-names></name> <name><surname>Stanley</surname> <given-names>G. B.</given-names></name></person-group> (<year>2010</year>). <article-title>Modulation of temporal precision in thalamic population responses to natural visual stimuli</article-title>. <source>Front. Syst. Neurosci.</source> <volume>4</volume>:<fpage>151</fpage>.<pub-id pub-id-type="doi">10.3389/fnsys.2010.00151</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Desbordes</surname> <given-names>G.</given-names></name> <name><surname>Jin</surname> <given-names>J.</given-names></name> <name><surname>Weng</surname> <given-names>C.</given-names></name> <name><surname>Lesica</surname> <given-names>N. A.</given-names></name> <name><surname>Stanley</surname> <given-names>G. B.</given-names></name> <name><surname>Alonso</surname> <given-names>J.-M.</given-names></name></person-group> (<year>2008</year>). <article-title>Timing precision in population coding of natural scenes in the early visual system</article-title>. <source>PLoS Biol.</source> <volume>6</volume>, <fpage>e324</fpage>.<pub-id pub-id-type="doi">10.1371/journal.pbio.0060324</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eckhorn</surname> <given-names>R.</given-names></name> <name><surname>Reitb&#x000F6;ck</surname> <given-names>H.-J.</given-names></name> <name><surname>Arndt</surname> <given-names>M.</given-names></name> <name><surname>Dicke</surname> <given-names>P.</given-names></name></person-group> (<year>1990</year>). <article-title>Feature linking via synchronization among distributed assemblies: results from cat visual cortex and from simulations</article-title>. <source>Neural Comput.</source> <volume>2</volume>, <fpage>293</fpage>&#x02013;<lpage>307</lpage>.<pub-id pub-id-type="doi">10.1162/neco.1990.2.3.293</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>El Boustani</surname> <given-names>S.</given-names></name> <name><surname>Yger</surname> <given-names>P.</given-names></name> <name><surname>Fr&#x000E9;gnac</surname> <given-names>Y.</given-names></name> <name><surname>Destexhe</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>Stable learning in stochastic network states</article-title>. <source>J. Neurosci.</source> <volume>32</volume>, <fpage>194</fpage>&#x02013;<lpage>214</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2496-11.2012</pub-id><pub-id pub-id-type="pmid">22219282</pub-id></citation></ref>
<ref id="B22"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Esmaeilzadeh</surname> <given-names>H.</given-names></name> <name><surname>Blem</surname> <given-names>E.</given-names></name> <name><surname>St Amant</surname> <given-names>R.</given-names></name> <name><surname>Sankaralingam</surname> <given-names>K.</given-names></name> <name><surname>Burger</surname> <given-names>D.</given-names></name></person-group> (<year>2011</year>). <article-title>&#x0201C;Dark silicon and the end of multicore scaling,&#x0201D;</article-title> in <conf-name>Proceedings of the 2011 International Symposium on Computer Architecture (ISCA)</conf-name> (<conf-loc>San Jose</conf-loc>: <conf-sponsor>ACM Press</conf-sponsor>), <fpage>365</fpage>&#x02013;<lpage>376</lpage>.</citation></ref>
<ref id="B23"><citation citation-type="web"><collab>FACETS</collab>. (<year>2010</year>). <source>Fast Analog Computing with Emergent Transient States</source>. Available at: <uri xlink:href="http://www.facets-project.org">http://www.facets-project.org</uri></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fromherz</surname> <given-names>P.</given-names></name></person-group> (<year>2002</year>). <article-title>Electrical interfacing of nerve cells and semiconductor chips</article-title>. <source>Chemphyschem</source> <volume>3</volume>, <fpage>276</fpage>&#x02013;<lpage>284</lpage>.<pub-id pub-id-type="doi">10.1002/1439-7641(20020315)3:3&#x0003C;276::AID-CPHC276&#x0003E;3.0.CO;2-A</pub-id><pub-id pub-id-type="pmid">12503174</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fusi</surname> <given-names>S.</given-names></name> <name><surname>Drew</surname> <given-names>P. J.</given-names></name> <name><surname>Abbott</surname> <given-names>L. F.</given-names></name></person-group> (<year>2005</year>). <article-title>Cascade models of synaptically stored memories</article-title>. <source>Neuron</source> <volume>45</volume>, <fpage>599</fpage>&#x02013;<lpage>611</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuron.2005.02.001</pub-id><pub-id pub-id-type="pmid">15721245</pub-id></citation></ref>
<ref id="B26"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Gardiner</surname> <given-names>C.</given-names></name></person-group> (<year>2009</year>). <source>Stochastic Methods: A Handbook for the Natural and Social Sciences</source> (<edition>4th Edn.</edition>). <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>.</citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gerstner</surname> <given-names>W.</given-names></name> <name><surname>Kempter</surname> <given-names>R.</given-names></name> <name><surname>van Hemmen</surname> <given-names>J. L.</given-names></name> <name><surname>Wagner</surname> <given-names>H.</given-names></name></person-group> (<year>1996</year>). <article-title>A neuronal learning rule for sub-millisecond temporal coding</article-title>. <source>Nature</source> <volume>383</volume>, <fpage>76</fpage>&#x02013;<lpage>78</lpage>.<pub-id pub-id-type="doi">10.1038/383076a0</pub-id><pub-id pub-id-type="pmid">8779718</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gewaltig</surname> <given-names>M.-O.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>NEST (NEural Simulation Tool)</article-title>. <source>Scholarpedia J.</source> <volume>2</volume>, <fpage>1430</fpage>.<pub-id pub-id-type="doi">10.4249/scholarpedia.1430</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>G&#x000FC;tig</surname> <given-names>R.</given-names></name> <name><surname>Aharonov</surname> <given-names>R.</given-names></name> <name><surname>Rotter</surname> <given-names>S.</given-names></name> <name><surname>Sompolinsky</surname> <given-names>H.</given-names></name></person-group> (<year>2003</year>). <article-title>Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity</article-title>. <source>J. Neurosci.</source> <volume>23</volume>, <fpage>3697</fpage>&#x02013;<lpage>3714</lpage>.<pub-id pub-id-type="pmid">12736341</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Indiveri</surname> <given-names>G.</given-names></name> <name><surname>Linares-Barranco</surname> <given-names>B.</given-names></name> <name><surname>Hamilton</surname> <given-names>T. J.</given-names></name> <name><surname>van Schaik</surname> <given-names>A.</given-names></name> <name><surname>Etienne-Cummings</surname> <given-names>R.</given-names></name> <name><surname>Delbruck</surname> <given-names>T.</given-names></name> <name><surname>Liu</surname> <given-names>S.-C.</given-names></name> <name><surname>Dudek</surname> <given-names>P.</given-names></name> <name><surname>H&#x000E4;fliger</surname> <given-names>P.</given-names></name> <name><surname>Renaud</surname> <given-names>S.</given-names></name> <name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Cauwenberghs</surname> <given-names>G.</given-names></name> <name><surname>Arthur</surname> <given-names>J.</given-names></name> <name><surname>Hynna</surname> <given-names>K.</given-names></name> <name><surname>Folowosele</surname> <given-names>F.</given-names></name> <name><surname>Saighi</surname> <given-names>S.</given-names></name> <name><surname>Serrano-Gotarredona</surname> <given-names>T.</given-names></name> <name><surname>Wijekoon</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Boahen</surname> <given-names>K.</given-names></name></person-group> (<year>2011</year>). <article-title>Neuromorphic silicon neuron circuits</article-title>. <source>Front. Neurosci.</source> <volume>5</volume>:<fpage>73</fpage>.<pub-id pub-id-type="doi">10.3389/fnins.2011.00073</pub-id><pub-id pub-id-type="pmid">21747754</pub-id></citation></ref>
<ref id="B31"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Indiveri</surname> <given-names>G.</given-names></name> <name><surname>Stefanini</surname> <given-names>F.</given-names></name> <name><surname>Chicca</surname> <given-names>E.</given-names></name></person-group> (<year>2010</year>). <article-title>&#x0201C;Spike-based learning with a generalized integrate and fire silicon neuron,&#x0201D;</article-title> in <conf-name>Proceedings of the 2010 International Symposium on Circuits and Systems (ISCAS)</conf-name> (<conf-loc>Paris</conf-loc>: <conf-sponsor>IEEE Press</conf-sponsor>), <fpage>1951</fpage>&#x02013;<lpage>1954</lpage>.</citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Itti</surname> <given-names>L.</given-names></name> <name><surname>Koch</surname> <given-names>C.</given-names></name></person-group> (<year>2001</year>). <article-title>Computational modeling of visual attention</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>2</volume>, <fpage>194</fpage>&#x02013;<lpage>203</lpage>.<pub-id pub-id-type="doi">10.1038/35058500</pub-id><pub-id pub-id-type="pmid">11256080</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jin</surname> <given-names>X.</given-names></name> <name><surname>Lujan</surname> <given-names>M.</given-names></name> <name><surname>Plana</surname> <given-names>L.</given-names></name> <name><surname>Davies</surname> <given-names>S.</given-names></name> <name><surname>Temple</surname> <given-names>S.</given-names></name> <name><surname>Furber</surname> <given-names>S.</given-names></name></person-group> (<year>2010a</year>). <article-title>Modeling spiking neural networks on SpiNNaker</article-title>. <source>Comput. Sci. Eng.</source> <volume>12</volume>, <fpage>91</fpage>&#x02013;<lpage>97</lpage>.<pub-id pub-id-type="doi">10.1109/MCSE.2010.112</pub-id></citation></ref>
<ref id="B34"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Jin</surname> <given-names>X.</given-names></name> <name><surname>Rast</surname> <given-names>A.</given-names></name> <name><surname>Galluppi</surname> <given-names>F.</given-names></name> <name><surname>Davies</surname> <given-names>S.</given-names></name> <name><surname>Furber</surname> <given-names>S.</given-names></name></person-group> (<year>2010b</year>). <article-title>&#x0201C;Implementing spike-timing-dependent plasticity on SpiNNaker neuromorphic hardware,&#x0201D;</article-title> in <conf-name>Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN)</conf-name> (<conf-loc>Barcelona</conf-loc>: <conf-sponsor>IEEE Press</conf-sponsor>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>.</citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Johansson</surname> <given-names>C.</given-names></name> <name><surname>Lansner</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>Towards cortex sized artificial neural systems</article-title>. <source>Neural Netw.</source> <volume>20</volume>, <fpage>48</fpage>&#x02013;<lpage>61</lpage>.<pub-id pub-id-type="doi">10.1016/j.neunet.2006.05.029</pub-id><pub-id pub-id-type="pmid">16860539</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kempter</surname> <given-names>R.</given-names></name> <name><surname>Gerstner</surname> <given-names>W.</given-names></name> <name><surname>van Hemmen</surname> <given-names>J. L.</given-names></name></person-group> (<year>1999</year>). <article-title>Hebbian learning and spiking neurons</article-title>. <source>Phys. Rev. E</source> <volume>59</volume>, <fpage>4498</fpage>&#x02013;<lpage>4514</lpage>.<pub-id pub-id-type="doi">10.1103/PhysRevE.59.4498</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuba</surname> <given-names>H.</given-names></name> <name><surname>Koyano</surname> <given-names>K.</given-names></name> <name><surname>Ohmori</surname> <given-names>H.</given-names></name></person-group> (<year>2002</year>). <article-title>Synaptic depression improves coincidence detection in the nucleus laminaris in brainstem slices of the chick embryo</article-title>. <source>Eur. J. Neurosci.</source> <volume>15</volume>, <fpage>984</fpage>&#x02013;<lpage>990</lpage>.<pub-id pub-id-type="doi">10.1046/j.1460-9568.2002.01933.x</pub-id><pub-id pub-id-type="pmid">11918658</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuhn</surname> <given-names>A.</given-names></name> <name><surname>Aertsen</surname> <given-names>A.</given-names></name> <name><surname>Rotter</surname> <given-names>S.</given-names></name></person-group> (<year>2003</year>). <article-title>Higher-order statistics of input ensembles and the response of simple model neurons</article-title>. <source>Neural Comput.</source> <volume>15</volume>, <fpage>67</fpage>&#x02013;<lpage>101</lpage>.<pub-id pub-id-type="doi">10.1162/089976603321043702</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kunkel</surname> <given-names>S.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name> <name><surname>Morrison</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Limits to the development of feed-forward structures in large recurrent neuronal networks</article-title>. <source>Front. Comput. Neurosci.</source> <volume>4</volume>:<fpage>160</fpage>.<pub-id pub-id-type="doi">10.3389/fncom.2010.00160</pub-id></citation></ref>
<ref id="B40"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Levi</surname> <given-names>T.</given-names></name> <name><surname>Lewis</surname> <given-names>N.</given-names></name> <name><surname>Saighi</surname> <given-names>S.</given-names></name> <name><surname>Tomas</surname> <given-names>J.</given-names></name> <name><surname>Bornat</surname> <given-names>Y.</given-names></name> <name><surname>Renaud</surname> <given-names>S.</given-names></name></person-group> (<year>2008</year>). <source>VLSI Circuits for Biomedical Applications</source>, Chap. 12. <publisher-loc>Norwood</publisher-loc>: <publisher-name>Artech House</publisher-name>, <fpage>241</fpage>&#x02013;<lpage>264</lpage>.</citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Levy</surname> <given-names>W. B.</given-names></name> <name><surname>Steward</surname> <given-names>O.</given-names></name></person-group> (<year>1983</year>). <article-title>Temporal contiguity requirements for long-term associative potentiation/depression in the hippocampus</article-title>. <source>Neuroscience</source> <volume>8</volume>, <fpage>791</fpage>&#x02013;<lpage>797</lpage>.<pub-id pub-id-type="doi">10.1016/0306-4522(83)90011-8</pub-id><pub-id pub-id-type="pmid">6306504</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Linares-Barranco</surname> <given-names>B.</given-names></name> <name><surname>Serrano-Gotarredona</surname> <given-names>T.</given-names></name> <name><surname>Camunas-Mesa</surname> <given-names>L. A.</given-names></name> <name><surname>Perez-Carrasco</surname> <given-names>J. A.</given-names></name> <name><surname>Zamarreno-Ramos</surname> <given-names>C.</given-names></name> <name><surname>Masquelier</surname> <given-names>T.</given-names></name></person-group> (<year>2011</year>). <article-title>On spike-timing-dependent-plasticity, memristive devices, and building a self-learning visual cortex</article-title>. <source>Front. Neurosci.</source> <volume>5</volume>:<fpage>26</fpage>.<pub-id pub-id-type="doi">10.3389/fnins.2011.00026</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mann</surname> <given-names>H. B.</given-names></name> <name><surname>Whitney</surname> <given-names>D. R.</given-names></name></person-group> (<year>1947</year>). <article-title>On a test of whether one of two random variables is stochastically larger than the other</article-title>. <source>Ann. Math. Stat.</source> <volume>18</volume>, <fpage>50</fpage>&#x02013;<lpage>60</lpage>.<pub-id pub-id-type="doi">10.1214/aoms/1177730491</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Markram</surname> <given-names>H.</given-names></name></person-group> (<year>2006</year>). <article-title>The blue brain project</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>7</volume>, <fpage>153</fpage>&#x02013;<lpage>160</lpage>.<pub-id pub-id-type="doi">10.1038/nrn1848</pub-id><pub-id pub-id-type="pmid">16429124</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Markram</surname> <given-names>H.</given-names></name> <name><surname>L&#x000FC;bke</surname> <given-names>J.</given-names></name> <name><surname>Frotscher</surname> <given-names>M.</given-names></name> <name><surname>Sakmann</surname> <given-names>B.</given-names></name></person-group> (<year>1997</year>). <article-title>Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs</article-title>. <source>Science</source> <volume>275</volume>, <fpage>213</fpage>&#x02013;<lpage>215</lpage>.<pub-id pub-id-type="doi">10.1126/science.275.5297.213</pub-id><pub-id pub-id-type="pmid">8985014</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marre</surname> <given-names>O.</given-names></name> <name><surname>Yger</surname> <given-names>P.</given-names></name> <name><surname>Davison</surname> <given-names>A. P.</given-names></name> <name><surname>Fr&#x000E9;gnac</surname> <given-names>Y.</given-names></name></person-group> (<year>2009</year>). <article-title>Reliable recall of spontaneous activity patterns in cortical networks</article-title>. <source>J. Neurosci.</source> <volume>29</volume>, <fpage>14596</fpage>&#x02013;<lpage>14606</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.0753-09.2009</pub-id><pub-id pub-id-type="pmid">19923292</pub-id></citation></ref>
<ref id="B47"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Merolla</surname> <given-names>P.</given-names></name> <name><surname>Boahen</surname> <given-names>K.</given-names></name></person-group> (<year>2006</year>). <article-title>&#x0201C;Dynamic computation in a recurrent network of heterogeneous silicon neurons,&#x0201D;</article-title> in <conf-name>Proceedings of the 2006 International Symposium on Circuits and Systems (ISCAS)</conf-name> (<conf-loc>Island of Kos</conf-loc>: <conf-sponsor>IEEE Press</conf-sponsor>), <fpage>4539</fpage>&#x02013;<lpage>4542</lpage>.</citation></ref>
<ref id="B48"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Millner</surname> <given-names>S.</given-names></name> <name><surname>Gr&#x000FC;bl</surname> <given-names>A.</given-names></name> <name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Meier</surname> <given-names>K.</given-names></name> <name><surname>Schwartz</surname> <given-names>M.-O.</given-names></name></person-group> (<year>2010</year>). <article-title>&#x0201C;A VLSI implementation of the adaptive exponential integrate-and-fire neuron model,&#x0201D;</article-title> in <conf-name>Advances in Neural Information Processing Systems (NIPS)</conf-name>, Vol. <volume>23</volume>, <conf-loc>Vancouver</conf-loc>, <fpage>1642</fpage>&#x02013;<lpage>1650</lpage>.</citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morrison</surname> <given-names>A.</given-names></name> <name><surname>Aertsen</surname> <given-names>A.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Spike-timing dependent plasticity in balanced random networks</article-title>. <source>Neural Comput.</source> <volume>19</volume>, <fpage>1437</fpage>&#x02013;<lpage>1467</lpage>.<pub-id pub-id-type="doi">10.1162/neco.2007.19.6.1437</pub-id><pub-id pub-id-type="pmid">17444756</pub-id></citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morrison</surname> <given-names>A.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name> <name><surname>Gerstner</surname> <given-names>W.</given-names></name></person-group> (<year>2008</year>). <article-title>Phenomenological models of synaptic plasticity based on spike-timing</article-title>. <source>Biol. Cybern.</source> <volume>98</volume>, <fpage>459</fpage>&#x02013;<lpage>478</lpage>.<pub-id pub-id-type="doi">10.1007/s00422-008-0233-1</pub-id><pub-id pub-id-type="pmid">18491160</pub-id></citation></ref>
<ref id="B51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morrison</surname> <given-names>A.</given-names></name> <name><surname>Mehring</surname> <given-names>C.</given-names></name> <name><surname>Geisel</surname> <given-names>T.</given-names></name> <name><surname>Aertsen</surname> <given-names>A.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <article-title>Advancing the boundaries of high connectivity network simulation with distributed computing</article-title>. <source>Neural Comput.</source> <volume>17</volume>, <fpage>1776</fpage>&#x02013;<lpage>1801</lpage>.<pub-id pub-id-type="doi">10.1162/0899766054026648</pub-id><pub-id pub-id-type="pmid">15969917</pub-id></citation></ref>
<ref id="B52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mu</surname> <given-names>Y.</given-names></name> <name><surname>Poo</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>Spike timing-dependent LTP/LTD mediates visual experience-dependent plasticity in a developing retinotectal system</article-title>. <source>Neuron</source> <volume>50</volume>, <fpage>115</fpage>&#x02013;<lpage>125</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuron.2006.03.009</pub-id><pub-id pub-id-type="pmid">16600860</pub-id></citation></ref>
<ref id="B53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nordlie</surname> <given-names>E.</given-names></name> <name><surname>Gewaltig</surname> <given-names>M.-O.</given-names></name> <name><surname>Plesser</surname> <given-names>H. E.</given-names></name></person-group> (<year>2009</year>). <article-title>Towards reproducible descriptions of neuronal network models</article-title>. <source>PLoS Comput. Biol.</source> <volume>5</volume>, <fpage>e1000456</fpage>.<pub-id pub-id-type="doi">10.1371/journal.pcbi.1000456</pub-id></citation></ref>
<ref id="B54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pelgrom</surname> <given-names>M.</given-names></name> <name><surname>Duinmaijer</surname> <given-names>A.</given-names></name> <name><surname>Welbers</surname> <given-names>A.</given-names></name></person-group> (<year>1989</year>). <article-title>Matching properties of MOS transistors</article-title>. <source>IEEE J. Solid-State Circuits</source> <volume>24</volume>, <fpage>1433</fpage>&#x02013;<lpage>1439</lpage>.<pub-id pub-id-type="doi">10.1109/JSSC.1989.572629</pub-id></citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perrin</surname> <given-names>D.</given-names></name></person-group> (<year>2011</year>). <article-title>Complexity and high-end computing in biology and medicine</article-title>. <source>Adv. Exp. Med. Biol.</source> <volume>696</volume>, <fpage>377</fpage>&#x02013;<lpage>384</lpage>.<pub-id pub-id-type="doi">10.1007/978-1-4419-7046-6_38</pub-id><pub-id pub-id-type="pmid">21431578</pub-id></citation></ref>
<ref id="B56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Plana</surname> <given-names>L. A.</given-names></name> <name><surname>Furber</surname> <given-names>S. B.</given-names></name> <name><surname>Temple</surname> <given-names>S.</given-names></name> <name><surname>Khan</surname> <given-names>M.</given-names></name> <name><surname>Shi</surname> <given-names>Y.</given-names></name> <name><surname>Wu</surname> <given-names>J.</given-names></name> <name><surname>Yang</surname> <given-names>S.</given-names></name></person-group> (<year>2007</year>). <article-title>A GALS infrastructure for a massively parallel multiprocessor</article-title>. <source>IEEE Des. Test Comput.</source> <volume>24</volume>, <fpage>454</fpage>&#x02013;<lpage>463</lpage>.<pub-id pub-id-type="doi">10.1109/MDT.2007.149</pub-id></citation></ref>
<ref id="B57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ramakrishnan</surname> <given-names>S.</given-names></name> <name><surname>Hasler</surname> <given-names>P.</given-names></name> <name><surname>Gordon</surname> <given-names>C.</given-names></name></person-group> (<year>2011</year>). <article-title>Floating gate synapses with spike-time-dependent plasticity</article-title>. <source>IEEE Trans. Biomed. Circuits Syst.</source> <volume>5</volume>, <fpage>244</fpage>&#x02013;<lpage>252</lpage>.<pub-id pub-id-type="doi">10.1109/TBCAS.2011.2109000</pub-id></citation></ref>
<ref id="B58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reynolds</surname> <given-names>J. N.</given-names></name> <name><surname>Wickens</surname> <given-names>J. R.</given-names></name></person-group> (<year>2002</year>). <article-title>Dopamine-dependent plasticity of corticostriatal synapses</article-title>. <source>Neural Netw.</source> <volume>15</volume>, <fpage>507</fpage>&#x02013;<lpage>521</lpage>.<pub-id pub-id-type="doi">10.1016/S0893-6080(02)00045-X</pub-id><pub-id pub-id-type="pmid">12371508</pub-id></citation></ref>
<ref id="B59"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rubin</surname> <given-names>J.</given-names></name> <name><surname>Lee</surname> <given-names>D.</given-names></name> <name><surname>Sompolinsky</surname> <given-names>H.</given-names></name></person-group> (<year>2001</year>). <article-title>Equilibrium properties of temporally asymmetric Hebbian plasticity</article-title>. <source>Phys. Rev. Lett.</source> <volume>86</volume>, <fpage>364</fpage>&#x02013;<lpage>367</lpage>.<pub-id pub-id-type="doi">10.1103/PhysRevLett.86.364</pub-id><pub-id pub-id-type="pmid">11177832</pub-id></citation></ref>
<ref id="B60"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Br&#x000FC;derle</surname> <given-names>D.</given-names></name> <name><surname>Gr&#x000FC;bl</surname> <given-names>A.</given-names></name> <name><surname>Hock</surname> <given-names>M.</given-names></name> <name><surname>Meier</surname> <given-names>K.</given-names></name> <name><surname>Millner</surname> <given-names>S.</given-names></name></person-group> (<year>2010</year>). <article-title>&#x0201C;A wafer-scale neuromorphic hardware system for large-scale neural modeling,&#x0201D;</article-title> in <conf-name>Proceedings of the 2010 International Symposium on Circuits and Systems (ISCAS)</conf-name> (<conf-loc>Paris</conf-loc>: <conf-sponsor>IEEE Press</conf-sponsor>), <fpage>1947</fpage>&#x02013;<lpage>1950</lpage>.</citation></ref>
<ref id="B61"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Br&#x000FC;derle</surname> <given-names>D.</given-names></name> <name><surname>Meier</surname> <given-names>K.</given-names></name> <name><surname>Ostendorf</surname> <given-names>B.</given-names></name></person-group> (<year>2007</year>). <article-title>&#x0201C;Modeling synaptic plasticity within networks of highly accelerated I&#x00026;F neurons,&#x0201D;</article-title> in <conf-name>Proceedings of the 2007 International Symposium on Circuits and Systems (ISCAS)</conf-name> (<conf-loc>New Orleans</conf-loc>: <conf-sponsor>IEEE Press</conf-sponsor>).</citation></ref>
<ref id="B62"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Fieres</surname> <given-names>J.</given-names></name> <name><surname>Meier</surname> <given-names>K.</given-names></name></person-group> (<year>2008</year>). <article-title>&#x0201C;Wafer-scale integration of analog neural networks,&#x0201D;</article-title> in <conf-name>Proceedings of the 2008 International Joint Conference on Neural Networks (IJCNN)</conf-name> (<conf-loc>Hong Kong</conf-loc>: <conf-sponsor>IEEE Press</conf-sponsor>).</citation></ref>
<ref id="B63"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Gruebl</surname> <given-names>A.</given-names></name> <name><surname>Meier</surname> <given-names>K.</given-names></name> <name><surname>Mueller</surname> <given-names>E.</given-names></name></person-group> (<year>2006</year>). <article-title>&#x0201C;Implementing synaptic plasticity in a VLSI spiking neural network model,&#x0201D;</article-title> in <conf-name>Proceedings of the 2006 International Joint Conference on Neural Networks (IJCNN)</conf-name> (<conf-loc>Vancouver</conf-loc>: <conf-sponsor>IEEE Press</conf-sponsor>), <fpage>1</fpage>&#x02013;<lpage>6</lpage>.</citation></ref>
<ref id="B64"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Senn</surname> <given-names>W.</given-names></name> <name><surname>Segev</surname> <given-names>I.</given-names></name> <name><surname>Tsodyks</surname> <given-names>M.</given-names></name></person-group> (<year>1998</year>). <article-title>Reading neuronal synchrony with depressing synapses</article-title>. <source>Neural Comput.</source> <volume>10</volume>, <fpage>815</fpage>&#x02013;<lpage>819</lpage>.<pub-id pub-id-type="doi">10.1162/089976698300017494</pub-id><pub-id pub-id-type="pmid">9573406</pub-id></citation></ref>
<ref id="B65"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shmuel</surname> <given-names>A.</given-names></name> <name><surname>Korman</surname> <given-names>M.</given-names></name> <name><surname>Sterkin</surname> <given-names>A.</given-names></name> <name><surname>Harel</surname> <given-names>M.</given-names></name> <name><surname>Ullman</surname> <given-names>S.</given-names></name> <name><surname>Malach</surname> <given-names>R.</given-names></name> <name><surname>Grinvald</surname> <given-names>A.</given-names></name></person-group> (<year>2005</year>). <article-title>Retinotopic axis specificity and selective clustering of feedback projections from V2 to V1 in the owl monkey</article-title>. <source>J. Neurosci.</source> <volume>25</volume>, <fpage>2117</fpage>&#x02013;<lpage>2131</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4137-04.2005</pub-id><pub-id pub-id-type="pmid">15728852</pub-id></citation></ref>
<ref id="B66"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sj&#x000F6;str&#x000F6;m</surname> <given-names>P.</given-names></name> <name><surname>Turrigiano</surname> <given-names>G.</given-names></name> <name><surname>Nelson</surname> <given-names>S.</given-names></name></person-group> (<year>2001</year>). <article-title>Rate, timing, and cooperativity jointly determine cortical synaptic plasticity</article-title>. <source>Neuron</source> <volume>32</volume>, <fpage>1149</fpage>&#x02013;<lpage>1164</lpage>.<pub-id pub-id-type="doi">10.1016/S0896-6273(01)00542-6</pub-id><pub-id pub-id-type="pmid">11754844</pub-id></citation></ref>
<ref id="B67"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Song</surname> <given-names>S.</given-names></name> <name><surname>Miller</surname> <given-names>K. D.</given-names></name> <name><surname>Abbott</surname> <given-names>L. F.</given-names></name></person-group> (<year>2000</year>). <article-title>Competitive Hebbian learning through spike-timing-dependent synaptic plasticity</article-title>. <source>Nat. Neurosci.</source> <volume>3</volume>, <fpage>919</fpage>&#x02013;<lpage>926</lpage>.<pub-id pub-id-type="doi">10.1038/78829</pub-id><pub-id pub-id-type="pmid">10966623</pub-id></citation></ref>
<ref id="B68"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thompson</surname> <given-names>S. E.</given-names></name> <name><surname>Parthasarathy</surname> <given-names>S.</given-names></name></person-group> (<year>2006</year>). <article-title>Moore&#x02019;s law: the future of Si microelectronics</article-title>. <source>Mater. Today</source> <volume>9</volume>, <fpage>20</fpage>&#x02013;<lpage>25</lpage>.<pub-id pub-id-type="doi">10.1016/S1369-7021(06)71385-2</pub-id></citation></ref>
<ref id="B69"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Trachtenberg</surname> <given-names>J. T.</given-names></name> <name><surname>Chen</surname> <given-names>B. E.</given-names></name> <name><surname>Knott</surname> <given-names>G. W.</given-names></name> <name><surname>Feng</surname> <given-names>G.</given-names></name> <name><surname>Sanes</surname> <given-names>J. R.</given-names></name> <name><surname>Welker</surname> <given-names>E.</given-names></name> <name><surname>Svoboda</surname> <given-names>K.</given-names></name></person-group> (<year>2002</year>). <article-title>Long-term in vivo imaging of experience-dependent synaptic plasticity in adult cortex</article-title>. <source>Nature</source> <volume>420</volume>, <fpage>788</fpage>&#x02013;<lpage>794</lpage>.<pub-id pub-id-type="doi">10.1038/nature01273</pub-id><pub-id pub-id-type="pmid">12490942</pub-id></citation></ref>
<ref id="B70"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsodyks</surname> <given-names>M. V.</given-names></name> <name><surname>Markram</surname> <given-names>H.</given-names></name></person-group> (<year>1997</year>). <article-title>The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A.</source> <volume>94</volume>, <fpage>719</fpage>&#x02013;<lpage>723</lpage>.<pub-id pub-id-type="doi">10.1073/pnas.94.2.719</pub-id><pub-id pub-id-type="pmid">9012851</pub-id></citation></ref>
<ref id="B71"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Turrigiano</surname> <given-names>G. G.</given-names></name> <name><surname>Leslie</surname> <given-names>K. R.</given-names></name> <name><surname>Desai</surname> <given-names>N. S.</given-names></name> <name><surname>Rutherford</surname> <given-names>L. C.</given-names></name> <name><surname>Nelson</surname> <given-names>S. B.</given-names></name></person-group> (<year>1998</year>). <article-title>Activity-dependent scaling of quantal amplitude in neocortical neurons</article-title>. <source>Nature</source> <volume>391</volume>, <fpage>892</fpage>&#x02013;<lpage>896</lpage>.<pub-id pub-id-type="doi">10.1038/36103</pub-id><pub-id pub-id-type="pmid">9495341</pub-id></citation></ref>
<ref id="B72"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>van Rossum</surname> <given-names>M. C. W.</given-names></name> <name><surname>Bi</surname> <given-names>G.</given-names></name> <name><surname>Turrigiano</surname> <given-names>G. G.</given-names></name></person-group> (<year>2000</year>). <article-title>Stable Hebbian learning from spike timing-dependent plasticity</article-title>. <source>J. Neurosci.</source> <volume>20</volume>, <fpage>8812</fpage>&#x02013;<lpage>8821</lpage>.<pub-id pub-id-type="pmid">11102489</pub-id></citation></ref>
<ref id="B73"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vogels</surname> <given-names>T.</given-names></name> <name><surname>Sprekeler</surname> <given-names>H.</given-names></name> <name><surname>Zenke</surname> <given-names>F.</given-names></name> <name><surname>Clopath</surname> <given-names>C.</given-names></name> <name><surname>Gerstner</surname> <given-names>W.</given-names></name></person-group> (<year>2011</year>). <article-title>Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks</article-title>. <source>Science</source> <volume>334</volume>, <fpage>1569</fpage>&#x02013;<lpage>1573</lpage>.<pub-id pub-id-type="doi">10.1126/science.1211095</pub-id><pub-id pub-id-type="pmid">22075724</pub-id></citation></ref>
<ref id="B74"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vogels</surname> <given-names>T. P.</given-names></name> <name><surname>Rajan</surname> <given-names>K.</given-names></name> <name><surname>Abbott</surname> <given-names>L. F.</given-names></name></person-group> (<year>2005</year>). <article-title>Neural network dynamics</article-title>. <source>Annu. Rev. Neurosci.</source> <volume>28</volume>, <fpage>357</fpage>&#x02013;<lpage>376</lpage>.<pub-id pub-id-type="doi">10.1146/annurev.neuro.28.061604.135637</pub-id><pub-id pub-id-type="pmid">16022600</pub-id></citation></ref>
<ref id="B75"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vogelstein</surname> <given-names>R. J.</given-names></name> <name><surname>Tenore</surname> <given-names>F.</given-names></name> <name><surname>Guevremont</surname> <given-names>L.</given-names></name> <name><surname>Etienne-Cummings</surname> <given-names>R.</given-names></name> <name><surname>Mushahwar</surname> <given-names>V.</given-names></name></person-group> (<year>2008</year>). <article-title>A silicon central pattern generator controls locomotion in vivo</article-title>. <source>IEEE Trans. Biomed. Circuits Syst.</source> <volume>2</volume>, <fpage>212</fpage>&#x02013;<lpage>222</lpage>.<pub-id pub-id-type="doi">10.1109/TBCAS.2008.2001867</pub-id></citation></ref>
<ref id="B76"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vogelstein</surname> <given-names>R. J.</given-names></name> <name><surname>Mallik</surname> <given-names>U.</given-names></name> <name><surname>Culurciello</surname> <given-names>E.</given-names></name> <name><surname>Cauwenberghs</surname> <given-names>G.</given-names></name> <name><surname>Etienne-Cummings</surname> <given-names>R.</given-names></name></person-group> (<year>2007</year>). <article-title>A multichip neuromorphic system for spike-based visual information processing</article-title>. <source>Neural Comput.</source> <volume>19</volume>, <fpage>2281</fpage>&#x02013;<lpage>2300</lpage>.<pub-id pub-id-type="doi">10.1162/neco.2007.19.9.2281</pub-id><pub-id pub-id-type="pmid">17650061</pub-id></citation></ref>
<ref id="B77"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yger</surname> <given-names>P.</given-names></name> <name><surname>El Boustani</surname> <given-names>S.</given-names></name> <name><surname>Destexhe</surname> <given-names>A.</given-names></name> <name><surname>Fr&#x000E9;gnac</surname> <given-names>Y.</given-names></name></person-group> (<year>2011</year>). <article-title>Topologically invariant macroscopic statistics in balanced networks of conductance-based integrate-and-fire neurons</article-title>. <source>J. Comput. Neurosci.</source> <volume>31</volume>, <fpage>229</fpage>&#x02013;<lpage>245</lpage>.<pub-id pub-id-type="doi">10.1007/s10827-010-0310-z</pub-id><pub-id pub-id-type="pmid">21222148</pub-id></citation></ref>
<ref id="B78"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zou</surname> <given-names>Q.</given-names></name> <name><surname>Bornat</surname> <given-names>Y.</given-names></name> <name><surname>Saighi</surname> <given-names>S.</given-names></name> <name><surname>Tomas</surname> <given-names>J.</given-names></name> <name><surname>Renaud</surname> <given-names>S.</given-names></name> <name><surname>Destexhe</surname> <given-names>A.</given-names></name></person-group> (<year>2006</year>). <article-title>Analog-digital simulations of full conductance-based networks of spiking neurons with spike timing dependent plasticity</article-title>. <source>Network</source> <volume>17</volume>, <fpage>211</fpage>&#x02013;<lpage>233</lpage>.<pub-id pub-id-type="doi">10.1080/09548980600711124</pub-id><pub-id pub-id-type="pmid">17162612</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn id="fn1"><p><sup>1</sup>FACETS: Fast Analog Computing with Emergent Transient States.</p></fn>
<fn id="fn2"><p><sup>2</sup>One weight update controller for all 256 neurons with 224 synapses each.</p></fn>
</fn-group>
</back>
</article>
