<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neuroinform.</journal-id>
<journal-title>Frontiers in Neuroinformatics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neuroinform.</abbrev-journal-title>
<issn pub-type="epub">1662-5196</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fninf.2021.659005</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>PyGeNN: A Python Library for GPU-Enhanced Neural Networks</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Knight</surname> <given-names>James C.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/215376/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Komissarov</surname> <given-names>Anton</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1263480/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Nowotny</surname> <given-names>Thomas</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/28940/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex</institution>, <addr-line>Brighton</addr-line>, <country>United Kingdom</country></aff>
<aff id="aff2"><sup>2</sup><institution>Bernstein Center for Computational Neuroscience Berlin</institution>, <addr-line>Berlin</addr-line>, <country>Germany</country></aff>
<aff id="aff3"><sup>3</sup><institution>Department of Engineering and Computer Science, Technische Universit&#x000E4;t Berlin</institution>, <addr-line>Berlin</addr-line>, <country>Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Gaute T. Einevoll, Norwegian University of Life Sciences, Norway</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Mikael Djurfeldt, Royal Institute of Technology, Sweden; Alexander K. Kozlov, Royal Institute of Technology, Sweden</p></fn>
<corresp id="c001">&#x0002A;Correspondence: James C. Knight <email>j.c.knight&#x00040;sussex.ac.uk</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>22</day>
<month>04</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>15</volume>
<elocation-id>659005</elocation-id>
<history>
<date date-type="received">
<day>26</day>
<month>01</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>15</day>
<month>03</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2021 Knight, Komissarov and Nowotny.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Knight, Komissarov and Nowotny</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>More than half of the Top 10 supercomputing sites worldwide use GPU accelerators and they are becoming ubiquitous in workstations and edge computing devices. GeNN is a C&#x0002B;&#x0002B; library for generating efficient spiking neural network simulation code for GPUs. However, until now, the full flexibility of GeNN could only be harnessed by writing model descriptions and simulation code in C&#x0002B;&#x0002B;. Here we present PyGeNN, a Python package which exposes all of GeNN&#x00027;s functionality to Python with minimal overhead. This provides an alternative, arguably more user-friendly, way of using GeNN and allows modelers to use GeNN within the growing Python-based machine learning and computational neuroscience ecosystems. In addition, we demonstrate that, in both Python and C&#x0002B;&#x0002B; GeNN simulations, the overheads of recording spiking data can strongly affect runtimes and show how a new spike recording system can reduce these overheads by up to 10&#x000D7;. Using the new recording system, we demonstrate that by using PyGeNN on a modern GPU, we can simulate a full-scale model of a cortical column faster even than real-time neuromorphic systems. Finally, we show that long simulations of a smaller model with complex stimuli and a custom three-factor learning rule defined in PyGeNN can be simulated almost two orders of magnitude faster than real-time.</p></abstract>
<kwd-group>
<kwd>GPU</kwd>
<kwd>high-performance computing</kwd>
<kwd>parallel computing</kwd>
<kwd>benchmarking</kwd>
<kwd>computational neuroscience</kwd>
<kwd>spiking neural networks</kwd>
<kwd>python</kwd>
</kwd-group>
<contract-num rid="cn001">EP/P006094/1</contract-num>
<contract-num rid="cn001">EP/S030964/1</contract-num>
<contract-num rid="cn002">945539</contract-num>
<contract-sponsor id="cn001">UK Research and Innovation<named-content content-type="fundref-id">10.13039/100014013</named-content></contract-sponsor>
<contract-sponsor id="cn002">Horizon 2020<named-content content-type="fundref-id">10.13039/501100007601</named-content></contract-sponsor>
<counts>
<fig-count count="6"/>
<table-count count="0"/>
<equation-count count="12"/>
<ref-count count="37"/>
<page-count count="12"/>
<word-count count="8137"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>A wide range of spiking neural network (SNN) simulators are available, each with their own application domains. NEST (Gewaltig and Diesmann, <xref ref-type="bibr" rid="B12">2007</xref>) is widely used for large-scale point neuron simulations on distributed computing systems; NEURON (Carnevale and Hines, <xref ref-type="bibr" rid="B7">2006</xref>) and Arbor (Akar et al., <xref ref-type="bibr" rid="B1">2019</xref>) specialize in the simulation of complex multi-compartmental models; NeuroKernel (Givon and Lazar, <xref ref-type="bibr" rid="B13">2016</xref>) is focused on emulating fly brain circuits using Graphics Processing Units (GPUs); and CARLsim (Chou et al., <xref ref-type="bibr" rid="B8">2018</xref>), ANNarchy (Vitay et al., <xref ref-type="bibr" rid="B36">2015</xref>), Spice (Bautembach et al., <xref ref-type="bibr" rid="B4">2021</xref>), NeuronGPU (Golosio et al., <xref ref-type="bibr" rid="B14">2021</xref>), and GeNN (Yavuz et al., <xref ref-type="bibr" rid="B37">2016</xref>) use GPUs to accelerate point neuron models. For performance reasons, many of these simulators are written in C&#x0002B;&#x0002B; and, especially amongst the older simulators, users describe their models either using a Domain-Specific Language (DSL) or directly in C&#x0002B;&#x0002B;. For programming language purists, fully custom DSLs such as the HOC network description language in NEURON (Carnevale and Hines, <xref ref-type="bibr" rid="B7">2006</xref>) or the NestML (Plotnikov et al., <xref ref-type="bibr" rid="B29">2016</xref>) neuron modeling language may be elegant solutions and, for simulator developers, using C&#x0002B;&#x0002B; directly and not having to add bindings to another language is convenient. However, both choices act as a barrier to potential users. 
Therefore, with both the computational neuroscience and machine learning communities gradually coalescing toward a Python-based ecosystem with a wealth of mature libraries for scientific computing (Hunter, <xref ref-type="bibr" rid="B18">2007</xref>; Millman and Aivazis, <xref ref-type="bibr" rid="B24">2011</xref>; Van Der Walt et al., <xref ref-type="bibr" rid="B35">2011</xref>), exposing spiking neural network simulators to Python with minimal domain-specific modifications seems like a pragmatic choice. NEST (Eppler et al., <xref ref-type="bibr" rid="B11">2009</xref>), NEURON (Hines et al., <xref ref-type="bibr" rid="B15">2009</xref>), and CARLsim (Balaji et al., <xref ref-type="bibr" rid="B3">2020</xref>) have all taken this route and now all offer Python interfaces. Furthermore, newer simulators such as Arbor and Brian2 (Stimberg et al., <xref ref-type="bibr" rid="B32">2019</xref>) have been designed from the ground up with a Python interface.</p>
<p>Our GeNN simulator can already be used as a backend for the Python-based Brian2 simulator (Stimberg et al., <xref ref-type="bibr" rid="B32">2019</xref>) using the Brian2GeNN interface (Stimberg et al., <xref ref-type="bibr" rid="B33">2020</xref>) which modifies the C&#x0002B;&#x0002B; backend &#x0201C;cpp_standalone&#x0201D; of Brian 2 to generate C&#x0002B;&#x0002B; input files for GeNN. As for cpp_standalone, initialization of simulations is mostly done in C&#x0002B;&#x0002B; on the CPU and recording data is saved into binary files and re-imported into Python using Brian 2&#x00027;s native methods. While we have recently demonstrated some very competitive performance results (Knight and Nowotny, <xref ref-type="bibr" rid="B21">2018</xref>, <xref ref-type="bibr" rid="B22">2020</xref>) using GeNN in C&#x0002B;&#x0002B;, and through the Brian2GeNN interface (Stimberg et al., <xref ref-type="bibr" rid="B33">2020</xref>), GeNN could so far not be used directly from Python and it is not possible to expose all of GeNN&#x00027;s unique features through the Brian2 API. Specifically, GeNN not only allows users to easily define their own neuron and synapse models but also &#x0201C;snippets&#x0201D; for offloading the potentially costly initialization of model parameters and connectivity onto the GPU. Additionally, GeNN provides a lot of freedom for users to integrate their own code into the simulation loop. In this paper we describe the implementation of PyGeNN&#x02014;a Python package which aims to expose the full range of GeNN functionality with minimal performance overheads. Unlike the majority of other SNN simulators, PyGeNN allows users to define bespoke neuron and synapse models directly from Python without requiring them to extend the underlying C&#x0002B;&#x0002B; code. Below, we demonstrate the flexibility and performance of PyGeNN in two scenarios where minimizing performance overheads is particularly critical.</p>
<list list-type="bullet">
<list-item><p>In a simulation of a large, highly-connected model of a cortical microcircuit (Potjans and Diesmann, <xref ref-type="bibr" rid="B30">2014</xref>) with small simulation timesteps. Here the cost of copying spike data off the GPU from a large number of neurons every timestep can become a bottleneck.</p></list-item>
<list-item><p>In a simulation of a much smaller model of Pavlovian conditioning (Izhikevich, <xref ref-type="bibr" rid="B20">2007</xref>) where learning occurs over 1 h of biological time and stimuli are delivered&#x02014;following a complex scheme&#x02014;throughout the simulation. Here any overheads are multiplied by a large number of timesteps and copying stimuli to the GPU can become a bottleneck.</p></list-item>
</list>
<p>Using the facilities provided by PyGeNN, we show that both scenarios can be simulated from Python with only minimal overheads over a pure C&#x0002B;&#x0002B; implementation.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>2. Materials and Methods</title>
<sec>
<title>2.1. GeNN</title>
<p>GeNN (Yavuz et al., <xref ref-type="bibr" rid="B37">2016</xref>) is a library for generating CUDA (NVIDIA et al., <xref ref-type="bibr" rid="B27">2020</xref>) code for the simulation of spiking neural network models. GeNN handles much of the complexity of using CUDA directly and automatically performs device-specific optimizations so as to maximize performance. GeNN consists of a main library&#x02014;implementing the API used to define models as well as the generic parts of the code generator&#x02014;and an additional library for each backend (currently there is a reference C&#x0002B;&#x0002B; backend for generating CPU code and a CUDA backend; an OpenCL backend is under development). Users describe their model by implementing a <inline-formula><mml:math id="M1"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>modelDefinition</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> function within a C&#x0002B;&#x0002B; file. For example, a model consisting of four Izhikevich neurons with heterogeneous parameters, driven by a constant input current, might be defined as follows:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0001.tif"/></p>
<p>The <italic>genn-buildmodel</italic> command line tool is then used to compile this file; link it against the main GeNN library and the desired backend library; and finally run the resultant executable to generate the source code required to build a simulation dynamic library (a .dll file on Windows or a .so file on Linux and Mac). This dynamic library can then either be linked against a simulation loop provided by the user or dynamically loaded by the user&#x00027;s simulation code. To demonstrate this latter approach, the following example uses the <inline-formula><mml:math id="M2"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>SharedLibraryModel</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> helper class supplied with GeNN to dynamically load the previously defined model, initialize the heterogeneous neuron parameters and print each neuron&#x00027;s membrane voltage every timestep:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0002.tif"/></p>
</sec>
<sec>
<title>2.2. SWIG</title>
<p>In order to use GeNN from Python, both the model creation API and the <inline-formula><mml:math id="M3"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>SharedLibraryModel</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> functionality need to be &#x0201C;wrapped&#x0201D; so they can be called from Python. While this is possible using the API built into Python itself, wrapper functions would need to be manually implemented for each GeNN function to be exposed, which would result in a lot of maintenance overhead. Instead, we chose to use SWIG (Beazley, <xref ref-type="bibr" rid="B5">1996</xref>) to automatically generate wrapper functions and classes. SWIG generates Python modules based on special interface files which can directly include C&#x0002B;&#x0002B; code as well as special &#x0201C;directives&#x0201D; which control SWIG. For example, the following SWIG interface file would wrap the C&#x0002B;&#x0002B; code in test.h in a Python module called <inline-formula><mml:math id="M4"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>test_module</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> within a Python package called <inline-formula><mml:math id="M5"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>test_package</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula>:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0003.tif"/></p>
<p>The <monospace>%</monospace><inline-formula><mml:math id="M6"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>module</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> directive sets the name of the generated module and the package it will be located in and the <monospace>%</monospace><inline-formula><mml:math id="M7"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>include</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> directive parses and automatically generates wrapper functions for the C&#x0002B;&#x0002B; header file. We use SWIG in this manner to wrap both the model building and <inline-formula><mml:math id="M8"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>SharedLibraryModel</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> APIs described in section 2.1. However, key parts of GeNN&#x00027;s API such as the <inline-formula><mml:math id="M9"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>ModelSpec</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>::</monospace><inline-formula><mml:math id="M55"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>addNeuronPopulation</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> method employed in section 2.1, rely on C&#x0002B;&#x0002B; templates which are not directly translatable to Python. Instead, valid template instantiations need to be given a unique name in Python using the <monospace>%</monospace><inline-formula><mml:math id="M10"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>template</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> SWIG directive:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0004.tif"/></p>
<p>Having to manually add these directives whenever a model is added to GeNN would be exactly the sort of maintenance overhead we were trying to avoid by using SWIG. Therefore, when building the Python wrapper, we instead search the GeNN header files for the macros used to declare models in C&#x0002B;&#x0002B; and automatically generate SWIG <monospace>%</monospace><inline-formula><mml:math id="M11"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>template</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> directives.</p>
<p>As previously discussed, a key feature of GeNN is the ease with which it allows users to define their own neuron and synapse models as well as &#x0201C;snippets&#x0201D; defining how variables and connectivity should be initialized. Beneath the syntactic sugar described in our previous work (Knight and Nowotny, <xref ref-type="bibr" rid="B21">2018</xref>), new models are defined by simply writing a new C&#x0002B;&#x0002B; class derived from, for example, the <inline-formula><mml:math id="M12"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>NeuronModels</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>::</monospace><inline-formula><mml:math id="M56"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>Base</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> class. Being able to define such classes from Python was a key requirement of PyGeNN. However, to support this, GeNN&#x00027;s C&#x0002B;&#x0002B; code generator would need to be able to call through to the methods in the Python class used by the user to implement a model. SWIG makes this easy by generating all of the boilerplate code required to make C&#x0002B;&#x0002B; classes inheritable from Python using a single SWIG &#x0201C;director&#x0201D; directive:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0005.tif"/></p>
</sec>
<sec>
<title>2.3. PyGeNN</title>
<p>While GeNN <italic>could</italic> be used from Python via the wrapper generated using SWIG, the resultant code would be unpleasant to use directly. For example, rather than being able to specify neuron parameters using native Python types such as lists or dictionaries, one would have to use a wrapped type such as <inline-formula><mml:math id="M13"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>DoubleVector</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>([0.25, 10.0, 0.0, 0.0, 20.0, 2.0, 0.5])</monospace>. Therefore, in order to provide a more user-friendly and pythonic interface, we have built PyGeNN on top of the wrapper generated by SWIG. PyGeNN combines the separate model building and simulation stages of building a GeNN model in C&#x0002B;&#x0002B; into a single API, likely to be more familiar to users of existing Python-based model description languages such as PyNEST (Eppler et al., <xref ref-type="bibr" rid="B11">2009</xref>) or PyNN (Davison et al., <xref ref-type="bibr" rid="B9">2008</xref>). By combining the two stages together, PyGeNN can provide a unified dictionary-based API for initializing homogeneous and heterogeneous parameters as shown in this re-implementation of the previous example:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0006.tif"/></p>
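<p>The idea behind this dictionary-based initialization can be illustrated with a minimal sketch (this is not PyGeNN's actual implementation; the helper name <monospace>expand_params</monospace> is hypothetical): scalar entries are treated as homogeneous across the population, while sequences supply one value per neuron:</p>

```python
import numpy as np

def expand_params(params, n_neurons):
    """Expand a parameter dictionary to one value per neuron (sketch)."""
    expanded = {}
    for name, value in params.items():
        if np.isscalar(value):
            # Homogeneous: a single scalar is shared by the whole population
            expanded[name] = np.full(n_neurons, float(value))
        else:
            # Heterogeneous: a sequence supplies one value per neuron
            arr = np.asarray(value, dtype=float)
            assert arr.shape == (n_neurons,), "expected one value per neuron"
            expanded[name] = arr
    return expanded

# Mirrors the mixed homogeneous/heterogeneous Izhikevich parameters above
izk_params = expand_params({"a": 0.02, "b": 0.2,
                            "c": [-65.0, -55.0, -65.0, -50.0]}, 4)
```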
<p>Initialization of variables with homogeneous values&#x02014;such as the neurons&#x00027; membrane potential&#x02014;is performed by initialization kernels generated by GeNN, whereas the initial values of variables with heterogeneous values&#x02014;such as the <inline-formula><mml:math id="M15"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>a</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M16"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>b</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula>, and <inline-formula><mml:math id="M17"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>c</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> parameters&#x02014;are copied to the GPU by PyGeNN after the model is loaded. While the PyGeNN API is more pythonic and, hopefully, more user-friendly than the C&#x0002B;&#x0002B; interface, it still provides users with the same low-level control over the simulation. 
Furthermore, by using SWIG&#x00027;s numpy (Van Der Walt et al., <xref ref-type="bibr" rid="B35">2011</xref>) interface, the host memory allocated by GeNN can be accessed directly from Python using the <inline-formula><mml:math id="M18"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>pop</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>.</monospace><inline-formula><mml:math id="M57"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>vars</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>[</monospace><inline-formula><mml:math id="M58"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#3b8032"><mml:mtext>&#x00022;V&#x00022;</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>].</monospace><inline-formula><mml:math id="M59"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>view</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> syntax meaning that no potentially expensive additional copying of data is required.</p>
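<p>The zero-copy behavior can be sketched with plain ctypes and numpy standing in for GeNN-allocated memory (an illustration of the principle, not PyGeNN's internals): a numpy array constructed over an existing native buffer is a view, so writes through it reach the original allocation without any copying:</p>

```python
import ctypes
import numpy as np

# A buffer owned by native code stands in for GeNN's host allocation
n_neurons = 4
native_buffer = (ctypes.c_double * n_neurons)()

# np.frombuffer wraps the existing allocation rather than copying it
view = np.frombuffer(native_buffer, dtype=np.float64)
view[:] = -65.0          # e.g., write initial membrane voltages via the view

print(native_buffer[0])  # -65.0: the native allocation sees the change
```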
<p>As illustrated in the previously-defined model, for convenience, PyGeNN allows users to access GeNN&#x00027;s built-in models. However, one of PyGeNN&#x00027;s most powerful features is that it enables users to easily define their own neuron and synapse models from within Python. For example, an Izhikevich neuron model (Izhikevich, <xref ref-type="bibr" rid="B19">2003</xref>) can be defined using the <inline-formula><mml:math id="M19"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>create_custom_neuron_class</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> helper function which provides some syntactic sugar over directly inheriting from the SWIG director class:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0007.tif"/></p>
<p>The <inline-formula><mml:math id="M20"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>param_names</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> list defines the real-valued parameters that are constant across the whole population of neurons and the <inline-formula><mml:math id="M21"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>var_name_types</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> list defines the model state variables and their type (the <inline-formula><mml:math id="M22"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>scalar</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> type is an alias for either single or double-precision floating point, depending on the precision passed to the <inline-formula><mml:math id="M23"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>GeNNModel</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> constructor). The behavior of the model is then defined using a number of code strings. Unlike in tools like Brian 2 (Stimberg et al., <xref ref-type="bibr" rid="B32">2019</xref>), these code strings are specified in a C-like language rather than using differential equations. This language provides standard C control flow statements as well as the transcendental functions from the standard maths library. 
Additionally, variables provided by GeNN, such as the membrane voltage in the model above, can be accessed using the <inline-formula><mml:math id="M24"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>$</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>(</monospace><inline-formula><mml:math id="M60"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>V</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>)</monospace> syntax and functions provided by GeNN can be called using the <inline-formula><mml:math id="M25"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>$</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>(</monospace><inline-formula><mml:math id="M61"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>F</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>, 1, 2)</monospace> syntax (where <inline-formula><mml:math id="M26"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>F</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> is a two-argument function). Using C-like code strings allows expert users to choose their own solver for models described in terms of differential equations and to programmatically define models such as spike sources. For example, in the model presented above, we chose to implement the neuron using the idiosyncratic forward Euler integration scheme employed by Izhikevich (<xref ref-type="bibr" rid="B19">2003</xref>). 
Finally, the <inline-formula><mml:math id="M27"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>threshold_condition_code</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> expression defines <italic>when</italic> the neuron will spike whereas the <inline-formula><mml:math id="M28"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>reset_code</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> code string defines how the state variables should be reset after a spike.</p>
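<p>As a concrete illustration of the update such code strings describe, the Izhikevich step can be sketched in plain numpy (an illustrative re-implementation, not the CUDA code GeNN generates): following Izhikevich (2003), the membrane voltage is advanced in two half-steps, and the threshold and reset correspond to the threshold condition and reset code strings:</p>

```python
import numpy as np

def izhikevich_step(V, U, I, a, b, c, d, dt=1.0):
    """One forward Euler timestep of the Izhikevich model (sketch only)."""
    for _ in range(2):  # two half-steps of dt/2, as in Izhikevich (2003)
        V = V + 0.5 * dt * (0.04 * V ** 2 + 5.0 * V + 140.0 - U + I)
    U = U + dt * a * (b * V - U)
    spiked = V >= 30.0                 # plays the role of threshold_condition_code
    V = np.where(spiked, c, V)         # plays the role of reset_code
    U = np.where(spiked, U + d, U)
    return V, U, spiked

# Regular-spiking parameters (a, b, c, d) driven by a constant input current
V, U, n_spikes = -65.0, -13.0, 0
for _ in range(200):
    V, U, spiked = izhikevich_step(V, U, 10.0, 0.02, 0.2, -65.0, 8.0)
    n_spikes += int(spiked)
```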
</sec>
<sec>
<title>2.4. Spike Recording System</title>
<p>Internally, GeNN stores the spikes emitted by a neuron population during one simulation timestep in an array containing the indices of the neurons that spiked alongside a counter of how many spikes have been emitted overall. Previously, recording spikes in GeNN was very similar to the recording of voltages shown in the previous example code&#x02014;the array of neuron indices was simply copied from the GPU to the CPU every timestep. However, especially when simulating models with a small simulation timestep, such frequent synchronization between the CPU and GPU is costly&#x02014;particularly if a slower, interpreted language such as Python is involved. Furthermore, biological neurons typically spike at a low rate (in the cortex, the average firing rate is only around 3 Hz; Buzs&#x000E1;ki and Mizuseki, <xref ref-type="bibr" rid="B6">2014</xref>) meaning that the amount of spike data transferred every timestep is typically very small. One solution to these inefficiencies is to store many timesteps&#x00027; worth of spike data on the GPU and use more infrequent, larger transfers to copy it to the CPU.</p>
<p>When a model includes delays, the array of indices and the counter used to store spikes internally are duplicated for each delay &#x0201C;slot.&#x0201D; Additional delay slots could be artificially added to the neuron population so that this data structure could be re-used to also store spike data for subsequent recording. However, the array containing the indices has memory allocated for all neurons to handle the worst case where all neurons in the population fire in the same time step. Therefore, while this data structure is ideal for efficient spike propagation, using it to store many timesteps&#x00027; worth of spikes would be very wasteful of memory. At low firing rates, the most memory-efficient solution would be to simply store the indices of neurons which spiked each timestep, for example in a data structure similar to a Yale sparse matrix with each &#x0201C;row&#x0201D; representing a timestep (Eisenstat et al., <xref ref-type="bibr" rid="B10">1977</xref>). However, not only would the efficiency of this approach rely on GeNN <italic>only</italic> being used for models with biologically-plausible firing rates, but the amount of memory required to store the spikes for a given number of timesteps could not be determined ahead of time. Therefore, either GeNN or the user would need to regularly check the level of usage to determine whether the buffer was exhausted, leading to exactly the type of host-synchronization overheads the spike recording system is designed to alleviate. Instead, we represent the spikes emitted by a population of <italic>N</italic> neurons in a single simulation timestep as an <italic>N</italic>-bit bitfield where a &#x0201C;1&#x0201D; represents a spike and a &#x0201C;0&#x0201D; the absence of one. Spiking data over multiple timesteps is then represented by a circular buffer of these bitfields. 
While at very low firing rates, this approach uses more memory than storing the indices of the neurons which spiked, it still allows the spiking output of relatively large models running for many timesteps to be stored in a small amount of memory. For example, the spiking output of a model with 100 &#x000D7; 10<sup>3</sup> neurons running for 10 &#x000D7; 10<sup>3</sup> simulation timesteps requires &#x0003C;120 MB&#x02014;a small fraction of the memory on a modern GPU. While efficiently handling spikes stored in a bitfield is a little trickier than working with a list of neuron indices, GeNN provides an efficient C&#x0002B;&#x0002B; helper function for saving the spikes stored in a bitfield to a text file and a numpy-based method for decoding them in PyGeNN.</p>
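<p>The bitfield representation and its numpy-based decoding can be sketched as follows (an illustrative reconstruction, not GeNN's actual helper functions). Each row packs one timestep into one bit per neuron, so 100,000 neurons over 10,000 timesteps occupy 10<sup>9</sup> bits, i.e., just under 120 MiB, matching the figure quoted above:</p>

```python
import numpy as np

def encode_spikes(spiked):
    """Pack a boolean (timesteps x neurons) array, one bit per neuron."""
    return np.packbits(spiked, axis=1)

def decode_spikes(bitfields, n_neurons):
    """Recover (timestep_indices, neuron_indices) from packed bitfields."""
    bits = np.unpackbits(bitfields, axis=1)[:, :n_neurons]
    return np.nonzero(bits)

# Three timesteps of a 10-neuron population with two spikes in total
spiked = np.zeros((3, 10), dtype=bool)
spiked[1, 4] = True
spiked[2, 9] = True
times, ids = decode_spikes(encode_spikes(spiked), 10)
```

Decoding a whole recording thus becomes a single vectorized operation rather than a per-timestep transfer.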
</sec>
<sec>
<title>2.5. Cortical Microcircuit Model</title>
<p>Potjans and Diesmann (<xref ref-type="bibr" rid="B30">2014</xref>) developed the cortical microcircuit model of 1 mm<sup>2</sup> of early-sensory cortex illustrated in <xref ref-type="fig" rid="F1">Figure 1</xref>. The model consists of 77,169 LIF neurons, divided into separate populations representing the excitatory and inhibitory populations in each of four cortical layers (2/3, 4, 5, and 6). The membrane voltage <italic>V</italic><sub><italic>i</italic></sub> of each neuron <italic>i</italic> is modeled as:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M29"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">m</mml:mtext></mml:mrow></mml:msub><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">rest</mml:mtext></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">m</mml:mtext></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">syn</mml:mtext></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ext</mml:mtext></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003C4;<sub>m</sub> &#x0003D; 10 ms and <italic>R</italic><sub>m</sub> &#x0003D; 40 M&#x003A9; represent the time constant and resistance of the neuron&#x00027;s cell membrane, <italic>V</italic><sub>rest</sub> &#x0003D; &#x02212;65 mV defines the resting potential, <italic>I</italic><sub>syn<sub><italic>i</italic></sub></sub> represents the synaptic input current and <italic>I</italic><sub>ext<sub><italic>i</italic></sub></sub> represents an external input current. When the membrane voltage crosses a threshold <italic>V</italic><sub>th</sub> &#x0003D; &#x02212;50 mV, a spike is emitted, the membrane voltage is reset to <italic>V</italic><sub>rest</sub> and updating of <italic>V</italic> is suspended for a refractory period &#x003C4;<sub>ref</sub> &#x0003D; 2 ms. Neurons in each population are connected randomly with numbers of synapses derived from an extensive review of the anatomical literature. These synapses are current-based, i.e., presynaptic spikes lead to exponentially-decaying input currents <italic>I</italic><sub>syn<sub><italic>i</italic></sub></sub>:</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M30"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">syn</mml:mtext></mml:mrow></mml:msub><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">syn</mml:mtext></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">syn</mml:mtext></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:munder></mml:mstyle><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003C4;<sub>syn</sub> &#x0003D; 0.5 ms represents the synaptic time constant, <italic>w</italic><sub><italic>ij</italic></sub> represents the synaptic weight and <italic>t</italic><sub><italic>j</italic></sub> are the arrival times of incoming spikes from <italic>n</italic> presynaptic neurons. Within each synaptic projection, all synaptic strengths and transmission delays are normally distributed using the parameters presented in Potjans and Diesmann (<xref ref-type="bibr" rid="B30">2014</xref>, Table 5) and, in total, the model has approximately 0.3 &#x000D7; 10<sup>9</sup> synapses. As well as receiving synaptic input, each neuron in the network receives an independent Poisson input current, representing input from neighboring cortical regions that are not explicitly modeled. The Poisson input is delivered to each neuron via <italic>I</italic><sub>ext<sub><italic>i</italic></sub></sub> with</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M31"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">syn</mml:mtext></mml:mrow></mml:msub><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ext</mml:mtext></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ext</mml:mtext></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ext</mml:mtext></mml:mrow></mml:msub><mml:mtext class="textrm" mathvariant="normal">Poisson</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003BD;</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ext</mml:mtext></mml:mrow></mml:msub><mml:mo>&#x00394;</mml:mo><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003BD;<sub>ext</sub> represents the mean input rate and <italic>w</italic><sub>ext</sub> represents the weight. The ordinary differential Equations (1), (2), and (3) are solved with an exponential Euler algorithm. For a full description of the model parameters, please refer to Potjans and Diesmann (<xref ref-type="bibr" rid="B30">2014</xref>, Tables 4, 5) and for a description of the strategies used by GeNN to parallelize the initialization and subsequent simulation of this network, please refer to Knight and Nowotny (<xref ref-type="bibr" rid="B21">2018</xref>, section 2.3). This model requires simulation using a relatively small timestep of 0.1 ms, making the overheads of copying spikes from the GPU every timestep particularly problematic.</p>
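<p>To make the neuron update concrete, the following self-contained Python sketch advances Equation (1) by one exponential-Euler step, treating the total input current as constant over the timestep. It is an illustration of the scheme, not GeNN&#x00027;s generated code, and all names are ours:</p>

```python
import math

# Parameters from the model description (units: ms, MOhm, mV, nA)
TAU_M, R_M = 10.0, 40.0
V_REST, V_TH = -65.0, -50.0
TAU_REF, DT = 2.0, 0.1

def lif_step(v, i_syn, i_ext, refrac_left):
    """One exponential-Euler step of Equation (1).

    Returns (new_v, new_refrac_left, spiked).  R_M in MOhm times current in
    nA gives mV, so no unit conversions are needed."""
    if refrac_left > 0.0:                        # V is clamped while refractory
        return v, refrac_left - DT, False
    v_inf = V_REST + R_M * (i_syn + i_ext)       # steady state for constant input
    v = v_inf + (v - v_inf) * math.exp(-DT / TAU_M)
    if v >= V_TH:                                # threshold crossing: spike + reset
        return V_REST, TAU_REF, True
    return v, 0.0, False
```

<p>With zero input the membrane stays at <italic>V</italic><sub>rest</sub>; with a constant 1 nA input the steady state (&#x02212;25 mV) lies above threshold, so the neuron fires regularly at a rate set by &#x003C4;<sub>m</sub> and &#x003C4;<sub>ref</sub>.</p>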
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Illustration of the microcircuit model. Blue triangles represent excitatory populations, red circles represent inhibitory populations, and the number beneath each symbol shows the number of neurons in each population. Connection probabilities are shown in small bold numbers at the appropriate point in the connection matrix. All excitatory synaptic weights are normally distributed with a mean of 0.0878 nA (unless otherwise indicated in green) and a standard deviation of 0.00878 nA. All inhibitory synaptic weights are normally distributed with a mean of 0.3512 nA and a standard deviation of 0.03512 nA.</p></caption>
<graphic xlink:href="fninf-15-659005-g0001.tif"/>
</fig>
</sec>
<sec>
<title>2.6. Pavlovian Conditioning Model</title>
<p>The cortical microcircuit model described in the previous section is ideal for exploring the performance of short simulations of relatively large models. However, the performance of longer simulations of smaller models is equally vital. Such models can be particularly troublesome for GPU simulation as not only might they not offer enough parallelism to fully occupy the device, but each timestep can be simulated so quickly that overheads such as kernel launches can dominate. Additional overheads can be incurred when models require injecting external stimuli throughout the simulation. Longer simulations are particularly useful when exploring synaptic plasticity so, to explore the performance of PyGeNN in this scenario, we simulate a model of Pavlovian conditioning using a three-factor Spike-Timing-Dependent Plasticity (STDP) learning rule (Izhikevich, <xref ref-type="bibr" rid="B20">2007</xref>).</p>
<sec>
<title>2.6.1. Neuron Model</title>
<p>The model illustrated in <xref ref-type="fig" rid="F2">Figure 2</xref> consists of an 800-neuron excitatory population and a 200-neuron inhibitory population, within which each neuron <italic>i</italic> is modeled using the Izhikevich model (Izhikevich, <xref ref-type="bibr" rid="B19">2003</xref>) whose dimensionless membrane voltage <italic>V</italic><sub><italic>i</italic></sub> and adaptation variable <italic>U</italic><sub><italic>i</italic></sub> evolve such that:</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M32"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>04</mml:mn><mml:msubsup><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:mn>5</mml:mn><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mn>140</mml:mn><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>U</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">syn</mml:mtext></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ext</mml:mtext></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E5"><label>(5)</label><mml:math id="M33"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mrow><mml:mi>U</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>b</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>U</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>When the membrane voltage rises above 30, a spike is emitted and <italic>V</italic><sub><italic>i</italic></sub> is reset to <italic>c</italic> and <italic>d</italic> is added to <italic>U</italic><sub><italic>i</italic></sub>. Excitatory neurons use the regular-spiking parameters (Izhikevich, <xref ref-type="bibr" rid="B19">2003</xref>) where <italic>a</italic> &#x0003D; 0.02, <italic>b</italic> &#x0003D; 0.2, <italic>c</italic> &#x0003D; &#x02212;65.0, <italic>d</italic> &#x0003D; 8.0 and inhibitory neurons use the fast-spiking parameters (Izhikevich, <xref ref-type="bibr" rid="B19">2003</xref>) where <italic>a</italic> &#x0003D; 0.1, <italic>b</italic> &#x0003D; 0.2, <italic>c</italic> &#x0003D; &#x02212;65.0, <italic>d</italic> &#x0003D; 2.0. Again, <italic>I</italic><sub>syn<sub><italic>i</italic></sub></sub> represents the synaptic input current and <italic>I</italic><sub>ext<sub><italic>i</italic></sub></sub> represents an external input current. While there are numerous ways to solve Equations (4) and (5) (Humphries and Gurney, <xref ref-type="bibr" rid="B17">2007</xref>; Hopkins and Furber, <xref ref-type="bibr" rid="B16">2015</xref>; Pauli et al., <xref ref-type="bibr" rid="B28">2018</xref>), we chose to use the idiosyncratic forward Euler integration scheme employed by Izhikevich (<xref ref-type="bibr" rid="B19">2003</xref>) in the original work (Izhikevich, <xref ref-type="bibr" rid="B20">2007</xref>). Under this scheme, Equation (4) is first integrated for two 0.5 ms timesteps and then, based on the updated value of <italic>V</italic><sub><italic>i</italic></sub>, Equation (5) is integrated for a single 1 ms timestep.</p>
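<p>This integration scheme can be written out explicitly. The sketch below (our own illustration, using the regular-spiking parameters; the spike check is placed after the update, a minor reordering of Izhikevich&#x00027;s original loop) advances one neuron by 1 ms:</p>

```python
def izhikevich_step(v, u, i, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One 1 ms update of Equations (4) and (5): V is integrated with two
    0.5 ms forward-Euler sub-steps, then U with a single 1 ms step."""
    for _ in range(2):
        v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + i)
    u += a * (b * v - u)          # uses the updated value of V
    if v >= 30.0:                 # spike: reset V and increment U
        return c, u + d, True
    return v, u, False
```

<p>With a constant input of 10, the regular-spiking neuron fires tonically; with no input, it settles below threshold without spiking.</p>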
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Illustration of the balanced random network model. The blue triangle represents the excitatory population, the red circle represents the inhibitory population, and the numbers beneath each symbol show the number of neurons in each population. Connection probabilities are shown in small bold numbers at the appropriate point in the connection matrix. All excitatory synaptic weights are plastic and initialized to 1 and all inhibitory synaptic weights are initialized to &#x02212;1.</p></caption>
<graphic xlink:href="fninf-15-659005-g0002.tif"/>
</fig>
</sec>
<sec>
<title>2.6.2. Synapse Models</title>
<p>The excitatory and inhibitory neural populations are connected recurrently, as shown in <xref ref-type="fig" rid="F2">Figure 2</xref>, with instantaneous current-based synapses:</p>
<disp-formula id="E6"><label>(6)</label><mml:math id="M34"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">syn</mml:mtext></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:munder></mml:mstyle><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>t</italic><sub><italic>j</italic></sub> are the arrival times of incoming spikes from <italic>n</italic> presynaptic neurons. Inhibitory synapses are static with <italic>w</italic><sub><italic>ij</italic></sub> &#x0003D; &#x02212;1.0 and excitatory synapses are plastic. Each plastic synapse has an eligibility trace <italic>C</italic><sub><italic>ij</italic></sub> as well as a synaptic weight <italic>w</italic><sub><italic>ij</italic></sub> and these evolve according to a three-factor STDP learning rule (Izhikevich, <xref ref-type="bibr" rid="B20">2007</xref>):</p>
<disp-formula id="E7"><label>(7)</label><mml:math id="M35"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>&#x0002B;</mml:mo><mml:mtext class="textrm" mathvariant="normal">STDP</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">pre/post</mml:mtext></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E8"><label>(8)</label><mml:math id="M36"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003C4;<sub><italic>c</italic></sub> &#x0003D; 1,000 ms represents the decay time constant of the eligibility trace and STDP(&#x00394;<italic>t</italic>) describes the magnitude of changes made to the eligibility trace in response to the relative timing of a pair of pre and postsynaptic spikes with temporal difference &#x00394;<italic>t</italic> &#x0003D; <italic>t</italic><sub><italic>post</italic></sub> &#x02212; <italic>t</italic><sub><italic>pre</italic></sub>. These changes are only applied to the trace at the times of pre and postsynaptic spikes as indicated by the Dirac delta function &#x003B4;(<italic>t</italic> &#x02212; <italic>t</italic><sub>pre/post</sub>). Here, a double exponential STDP kernel is employed such that:</p>
<disp-formula id="E9"><label>(9)</label><mml:math id="M37"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mtext class="textrm" mathvariant="normal">STDP</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x0002B;</mml:mo></mml:mrow></mml:msub><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x0002B;</mml:mo></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mtext class="textrm" mathvariant="normal">if</mml:mtext><mml:mo>&#x00394;</mml:mo><mml:mi>t</mml:mi><mml:mo>&#x0003E;</mml:mo><mml:mn>0</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo></mml:mrow></mml:msub><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mtext class="textrm" mathvariant="normal">if</mml:mtext><mml:mo>&#x00394;</mml:mo><mml:mi>t</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:mtext class="textrm" 
mathvariant="normal">otherwise</mml:mtext></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where the time constants of the STDP window &#x003C4;<sub>&#x0002B;</sub> &#x0003D; &#x003C4;<sub>&#x02212;</sub> &#x0003D; 20 ms and the strength of potentiation and depression are <italic>A</italic><sub>&#x0002B;</sub> &#x0003D; 0.1 and <italic>A</italic><sub>&#x02212;</sub> &#x0003D; 0.15, respectively. Finally, each excitatory neuron has an additional variable <italic>D</italic><sub><italic>j</italic></sub> which describes extracellular dopamine concentration:</p>
<disp-formula id="E10"><label>(10)</label><mml:math id="M38"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>&#x0002B;</mml:mo><mml:mtext class="textrm" mathvariant="normal">DA</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003C4;<sub><italic>d</italic></sub> &#x0003D; 200 ms represents the time constant of dopamine uptake and DA(<italic>t</italic>) the dopamine input over time.</p>
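<p>Because Equation (10) is linear, <italic>D</italic><sub><italic>j</italic></sub> does not need to be stepped every timestep: it can be decayed in closed form whenever dopamine is injected or the value is read. A minimal sketch of this event-driven scheme (names are ours, not GeNN&#x00027;s):</p>

```python
import math

TAU_D = 200.0  # ms, dopamine uptake time constant

class DopamineTrace:
    """Event-driven solution of Equation (10): exponential decay between
    events, with impulses added only when dopamine is injected."""
    def __init__(self):
        self.d = 0.0       # trace value at the time of the last event
        self.t_last = 0.0  # time of the last event (ms)

    def value(self, t):
        """Decay the trace from its value at the last event to time t."""
        return self.d * math.exp(-(t - self.t_last) / TAU_D)

    def inject(self, t, amount):
        """Add a dopamine impulse at time t, first decaying the old value."""
        self.d = self.value(t) + amount
        self.t_last = t
```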
</sec>
<sec>
<title>2.6.3. PyGeNN Implementation of Three-Factor STDP</title>
<p>The first step in implementing this learning rule in PyGeNN is to implement the STDP updates and decay of <italic>C</italic><sub><italic>ij</italic></sub> using GeNN&#x00027;s event-driven plasticity system, the implementation of which was described in our previous work (Knight and Nowotny, <xref ref-type="bibr" rid="B21">2018</xref>). Using a similar syntax to that described in section 2.3, we first create a new &#x0201C;weight update model&#x0201D; with the learning rule parameters and the <italic>w</italic><sub><italic>ij</italic></sub> and <italic>C</italic><sub><italic>ij</italic></sub> state variables:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0008.tif"/></p>
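<p>The listing above is reproduced as an image; purely as a hypothetical indication of its shape, the learning rule parameters and the two per-synapse state variables might be declared as follows before being passed to <monospace>create_custom_weight_update_class</monospace> (all names here are illustrative, not necessarily those used in the listing):</p>

```python
# Hypothetical sketch only: parameters and state variables for a weight
# update model implementing Equations (7)-(9); names are illustrative.
stdp_param_names = ["tauPlus", "tauMinus", "tauC", "tauD", "aPlus", "aMinus"]
stdp_var_name_types = [("g", "scalar"),   # synaptic weight w_ij
                       ("c", "scalar")]   # eligibility trace C_ij
```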
<p>We then instruct GeNN to record the times of current and previous pre and postsynaptic spikes. The current spike time will equal the current time if a spike of this sort is being processed in the current timestep whereas the previous spike time only tracks spikes which have occurred <italic>before</italic> the current timestep:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0009.tif"/></p>
<p>Next we define the &#x0201C;sim code&#x0201D; which is called whenever presynaptic spikes arrive at the synapse. This code first implements Equation (6)&#x02014;adding the synaptic weight (<italic>w</italic><sub><italic>ij</italic></sub>) to the postsynaptic neuron&#x00027;s input (<italic>I</italic><sub>syn<sub><italic>i</italic></sub></sub>) using the <inline-formula><mml:math id="M39"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>$</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>(</monospace><inline-formula><mml:math id="M62"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>addToInSyn</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>,</monospace><inline-formula><mml:math id="M63"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>)</monospace> function.</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0010.tif"/></p>
<p>Within the sim code we also need to calculate the time that has elapsed since the last update of <italic>C</italic><sub><italic>ij</italic></sub> using the spike times we previously requested that GeNN record. Within a timestep, GeNN processes presynaptic spikes before postsynaptic spikes so the time of the last update to <italic>C</italic><sub><italic>ij</italic></sub> will be the latest time either type of spike was processed in previous timesteps:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0011.tif"/></p>
<p>Using this time, we can now calculate how much to decay <italic>C</italic><sub><italic>ij</italic></sub> using the closed-form solution to Equation (7):</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0012.tif"/></p>
<p>To complete the sim code we calculate the depression case of Equation (9) (here we use the <italic>current</italic> postsynaptic spike time as, if a postsynaptic and presynaptic spike occur in the same timestep, there should be no update).</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0013.tif"/></p>
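<p>Putting the three sim-code steps together, the presynaptic half of the rule behaves like the following self-contained Python sketch (constants from Equations (7) and (9); an illustration of the logic, not GeNN&#x00027;s generated code):</p>

```python
import math

TAU_C = 1000.0                   # ms, eligibility trace time constant
TAU_MINUS, A_MINUS = 20.0, 0.15  # depression branch of Equation (9)

def on_pre_spike(c, t, prev_pre_t, prev_post_t, cur_post_t):
    """Event-driven update of the eligibility trace C_ij when a presynaptic
    spike is processed at time t (all times in ms)."""
    tc = max(prev_pre_t, prev_post_t)  # time of the last update to C_ij
    c *= math.exp(-(t - tc) / TAU_C)   # closed-form decay of Equation (7)
    dt = cur_post_t - t                # current post time: no update if the
    if dt < 0.0:                       # spikes fall in the same timestep
        c -= A_MINUS * math.exp(dt / TAU_MINUS)
    return c
```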
<p>Finally, we define the &#x0201C;learn post code&#x0201D; which is called whenever a postsynaptic spike arrives at the synapse. Other than implementing the potentiation case of Equation (9) and using the <italic>current</italic> presynaptic spike time when calculating the time since the last update of <italic>C</italic><sub><italic>ij</italic></sub>&#x02014;in order to correctly handle presynaptic updates made in the same timestep&#x02014;this code is very similar to the sim code:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0014.tif"/></p>
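<p>The corresponding postsynaptic update can be sketched in the same style; the only differences are the potentiation branch and the use of the <italic>current</italic> presynaptic spike time, so that a presynaptic update made earlier in the same timestep is accounted for (again an illustration, not GeNN&#x00027;s generated code):</p>

```python
import math

TAU_C = 1000.0                 # ms, eligibility trace time constant
TAU_PLUS, A_PLUS = 20.0, 0.1   # potentiation branch of Equation (9)

def on_post_spike(c, t, cur_pre_t, prev_post_t):
    """Event-driven update of the eligibility trace C_ij when a postsynaptic
    spike is processed at time t (all times in ms)."""
    tc = max(cur_pre_t, prev_post_t)  # if a presynaptic spike occurred this
                                      # timestep, its update has already run
    c *= math.exp(-(t - tc) / TAU_C)
    dt = t - cur_pre_t                # potentiation case of Equation (9);
    if dt > 0.0:                      # zero for same-timestep spike pairs
        c += A_PLUS * math.exp(-dt / TAU_PLUS)
    return c
```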
<p>Adding the synaptic weight <italic>w</italic><sub><italic>ij</italic></sub> update described by Equation (8) requires two further additions to the model. As well as the pre and postsynaptic spikes, the weight update model needs to receive events whenever dopamine is injected via DA. GeNN supports such events via the &#x0201C;spike-like event&#x0201D; system which allows events to be triggered based on an expression evaluated on the presynaptic neuron. In this case, this expression simply tests an <inline-formula><mml:math id="M40"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>injectDopamine</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> flag which gets set by the dopamine injection logic in our presynaptic neuron model:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0015.tif"/></p>
<p>In order to extend our event-driven update of <italic>C</italic><sub><italic>ij</italic></sub> to include spike-like events we need to instruct GeNN to record the times at which they occur:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0016.tif"/></p>
<p>The spike-like events can now be handled using a final &#x0201C;event code&#x0201D; string:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0017.tif"/></p>
<p>After updating the previously defined calculations of <inline-formula><mml:math id="M41"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>tc</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> in the sim code and learn post code in the same way to also include the times of spike-like events, all that remains is to update <italic>w</italic><sub><italic>ij</italic></sub>. Mikaitis et al. (<xref ref-type="bibr" rid="B23">2018</xref>) showed how Equation (8) could be solved algebraically, allowing <italic>w</italic><sub><italic>ij</italic></sub> to be updated in an event-driven manner with:</p>
<disp-formula id="E11"><label>(11)</label><mml:math id="M42"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mo>&#x00394;</mml:mo><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>C</mml:mi><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mrow><mml:mo 
stretchy="true">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>-</mml:mo><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:m
i><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>-</mml:mo><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M43"><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>, <inline-formula><mml:math id="M44"><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>, and <inline-formula><mml:math id="M45"><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> represent the last times at which <italic>C</italic><sub><italic>ij</italic></sub>, <italic>W</italic><sub><italic>ij</italic></sub>, and <italic>D</italic><sub><italic>j</italic></sub>, respectively, were updated. Because we will always update <italic>w</italic><sub><italic>ij</italic></sub> and <italic>C</italic><sub><italic>ij</italic></sub> together when presynaptic, postsynaptic and spike-like events occur, <inline-formula><mml:math id="M46"><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>, and Equation (11) can be simplified to:</p>
<disp-formula id="E12"><label>(12)</label><mml:math id="M47"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mo>&#x00394;</mml:mo><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>C</mml:mi><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mrow><mml:mo 
stretchy="true">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>-</mml:mo><mml:msubsup><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>and this update can now be added to each of our three event handling code strings to complete the implementation of the learning rule.</p>
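<p>To make the closed-form update concrete, the following plain-Python sketch transcribes Equation (12) directly. The function name and the default time constants (&#x003C4;<sub><italic>c</italic></sub> = 1,000 ms, &#x003C4;<sub><italic>d</italic></sub> = 200 ms, the values used by Izhikevich, 2007) are illustrative assumptions; in PyGeNN this calculation would appear inside the C-like event handling code strings rather than as a Python function.</p>

```python
import math

def delta_w(t, t_c_last, t_d_last, c_last, d_last, tau_c=1000.0, tau_d=200.0):
    """Plain-Python transcription of Equation (12): the weight change
    accumulated since the eligibility trace C and the dopamine level D
    were last updated. tau_c and tau_d defaults are assumed values."""
    # The product C(t)D(t) decays with combined rate 1/tau_c + 1/tau_d,
    # so integrating it between updates gives the closed form below
    scale = (c_last * d_last) / -(1.0 / tau_c + 1.0 / tau_d)
    return scale * (math.exp(-(t - t_c_last) / tau_c)
                    * math.exp(-(t - t_d_last) / tau_d)
                    - math.exp(-(t_c_last - t_d_last) / tau_d))
```

<p>Note that, for positive <italic>C</italic> and <italic>D</italic>, the bracketed term is negative and the negative prefactor makes the resulting weight change positive, as expected for reward-modulated potentiation.</p>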
</sec>
<sec>
<title>2.6.4. PyGeNN Implementation of Pavlovian Conditioning Experiment</title>
<p>To perform the Pavlovian conditioning experiment described by Izhikevich (<xref ref-type="bibr" rid="B20">2007</xref>) using this model, we chose 100 random groups of 50 neurons (representing stimuli <italic>S</italic><sub>1</sub>&#x02026;<italic>S</italic><sub>100</sub>) from amongst the two neural populations. Stimuli are presented to the network in a random order, separated by intervals sampled from <italic>U</italic>(100, 300)ms. The neurons associated with an active stimulus are stimulated for a single 1 ms simulation timestep with a current of 40.0 nA, in addition to the random background current of <italic>U</italic>(&#x02212;6.5, 6.5)nA, delivered to each neuron via <italic>I</italic><sub>ext<sub><italic>i</italic></sub></sub> throughout the simulation. <italic>S</italic><sub>1</sub> is arbitrarily chosen as the Conditioned Stimulus (CS) and, whenever this stimulus is presented, a reward in the form of an increase in dopamine is delivered by setting DA(<italic>t</italic>) &#x0003D; 0.5 after a delay sampled from <italic>U</italic>(0, 1000)ms. This delay period is large enough to allow a few irrelevant stimuli, which act as distractors, to be presented. The simplest way to implement this stimulation regime is to add a current source to the excitatory and inhibitory neuron populations which adds the uniformly-distributed input current to an externally-controllable per-neuron current. In PyGeNN, the following model can be defined to do just that:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0018.tif"/></p>
<p>where the <inline-formula><mml:math id="M48"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>n</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> parameter sets the magnitude of the background noise, the <inline-formula><mml:math id="M49"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>$</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>(</monospace><inline-formula><mml:math id="M66"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>injectCurrent</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>,</monospace> <inline-formula><mml:math id="M64"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>I</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>)</monospace> function injects a current of <italic>I</italic> nA into the neuron and <inline-formula><mml:math id="M50"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>$</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>(</monospace><inline-formula><mml:math id="M65"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>gennrand_uniform</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula><monospace>)</monospace> samples from <italic>U</italic>(0, 1) using the &#x0201C;XORWOW&#x0201D; pseudo-random number generator provided by cuRAND (NVIDIA Corporation, <xref ref-type="bibr" rid="B25">2019</xref>). 
Once a current source population using this model has been instantiated and a memory view to <inline-formula><mml:math id="M51"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>iExt</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> obtained in the manner described in section 2.3, in timesteps when stimulus injection is required, current can be injected into the list of neurons contained in <inline-formula><mml:math id="M52"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>stimuli_input_set</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> with:</p>
<p><inline-graphic xlink:href="fninf-15-659005-i0019.tif"/></p>
<p>The same approach can then be used to zero the current afterwards.</p>
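<p>On the host side, this pattern amounts to fancy indexing into the memory view followed by a push of the variable to the device. The sketch below uses a plain NumPy array as a stand-in for the PyGeNN memory view; the array size, the index set, and the variable name are illustrative assumptions, and the device push is elided because it depends on the PyGeNN objects.</p>

```python
import numpy as np

# Stand-in for the iExt memory view obtained from PyGeNN (in a real
# simulation this array would alias the host copy of the variable)
i_ext_view = np.zeros(1000)

# Hypothetical set of neuron indices belonging to the active stimulus
stimuli_input_set = np.array([3, 17, 42, 650])

# Timestep with stimulus: write the 40 nA stimulation current into the
# selected entries (in PyGeNN, the variable would then be pushed to the
# GPU before stepping the simulation)
i_ext_view[stimuli_input_set] = 40.0

# A timestep later: zero the same entries again
i_ext_view[stimuli_input_set] = 0.0
```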
</sec>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>3. Results</title>
<p>In the following subsections we will analyse the performance of the models introduced in sections 2.5 and 2.6 on a representative selection of NVIDIA GPU hardware:</p>
<list list-type="bullet">
<list-item><p>Jetson Xavier NX&#x02014;a low-power embedded system with a GPU based on the Volta architecture with 8 GB of shared memory.</p></list-item>
<list-item><p>GeForce GTX 1050Ti&#x02014;a low-end desktop GPU based on the Pascal architecture with 4 GB of dedicated memory.</p></list-item>
<list-item><p>GeForce GTX 1650&#x02014;a low-end desktop GPU based on the Turing architecture with 4 GB of dedicated memory.</p></list-item>
<list-item><p>Titan RTX&#x02014;a high-end workstation GPU based on the Turing architecture with 24 GB of dedicated memory.</p></list-item>
</list>
<p>All of these systems run Ubuntu 18, apart from the system with the GeForce GTX 1050 Ti, which runs Windows 10.</p>
<sec>
<title>3.1. Cortical Microcircuit Model Performance</title>
<p><xref ref-type="fig" rid="F3">Figure 3</xref> shows the simulation times for the full-scale microcircuit model. We measured the total simulation time by querying the <inline-formula><mml:math id="M53"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>std</mml:mtext></mml:mstyle><mml:mstyle mathvariant="monospace" mathcolor="#231f20"><mml:mtext>::</mml:mtext></mml:mstyle><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>chrono</mml:mtext></mml:mstyle><mml:mstyle mathvariant="monospace" mathcolor="#231f20"><mml:mtext>::</mml:mtext></mml:mstyle><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>high_resolution_clock</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> in C&#x0002B;&#x0002B; and the <inline-formula><mml:math id="M54"><mml:mrow><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>time</mml:mtext></mml:mstyle><mml:mstyle mathvariant="monospace" mathcolor="#231f20"><mml:mtext>.</mml:mtext></mml:mstyle><mml:mstyle mathvariant="monospace" mathcolor="#006fb9"><mml:mtext>perf_counter</mml:mtext></mml:mstyle></mml:mrow></mml:math></inline-formula> function in Python before and after the simulation loop; and used CUDA&#x00027;s own event timing system (NVIDIA Corporation, <xref ref-type="bibr" rid="B26">2021</xref>, Section 3.2.6.7) to record the time taken by the neuron and synapse kernels. As one might predict, the Jetson Xavier NX is slower than the three desktop GPUs but, considering that it only consumes a maximum of 15 W compared to 75 or 320 W for the GeForce cards and Titan RTX, respectively, it still performs impressively. The times taken to actually simulate the models (&#x0201C;Neuron simulation&#x0201D; and &#x0201C;Synapse simulation&#x0201D;) are the same when using PyGeNN and GeNN, as all optimisation options are exposed to PyGeNN. 
Interestingly, when simulating <italic>this</italic> model, the larger L1 cache and architectural improvements present in the Turing-based GTX 1650 do not result in significantly improved performance over the Pascal-based GTX 1050 Ti. Instead, the small performance advantage of the GTX 1650 can probably be explained by its 128 additional CUDA cores.</p>
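<p>The Python-side measurement described above reduces to bracketing the simulation loop with <monospace>time.perf_counter</monospace> calls. A minimal sketch follows; the <monospace>step</monospace> callable is a placeholder for one simulation timestep (e.g., a model's step function), not a PyGeNN API call.</p>

```python
import time

def time_simulation(step, n_steps):
    """Measure the wall-clock duration of a simulation loop, mirroring
    how the Python timings in Figure 3 were taken (perf_counter read
    before and after the loop)."""
    start = time.perf_counter()
    for _ in range(n_steps):
        step()
    return time.perf_counter() - start

# Example with a trivial stand-in for one simulation timestep
elapsed = time_simulation(lambda: None, 10000)
```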
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Simulation times of the microcircuit model running on various GPU hardware for 1 s of biological time. &#x0201C;Overhead&#x0201D; refers to time spent in simulation loop but not within CUDA kernels. The dashed horizontal line indicates realtime performance.</p></caption>
<graphic xlink:href="fninf-15-659005-g0003.tif"/>
</fig>
<p>Without the recording system described in section 2.4, the CPU and GPU need to be synchronized after every timestep so that spike data can be copied off the GPU and stored in a suitable data structure. The &#x0201C;overheads&#x0201D; shown in <xref ref-type="fig" rid="F3">Figure 3</xref> indicate the time taken by these processes as well as the unavoidable overheads of launching CUDA kernels, etc. Because Python is an interpreted language, updating the spike data structures is somewhat slower and this is particularly noticeable on devices with a slower CPU such as the Jetson Xavier NX. However, unlike on the desktop GPUs, the Jetson Xavier NX&#x00027;s 8 GB of memory is shared between the GPU and the CPU, meaning that data does not need to be copied between their memories and can instead be accessed by both. While using this shared memory for recording spikes reduces the overhead of copying data off the device, the GPU and CPU caches are not coherent, so caching must be disabled on this memory, which reduces the performance of the neuron kernel. Although the Windows machine has a relatively powerful CPU, the overheads measured in both the PyGeNN and GeNN simulations run on this system are extremely large due to additional queuing between the application and the GPU driver caused by the Windows Display Driver Model (WDDM). When small simulation timesteps are used&#x02014;in this case 0.1 ms&#x02014;this makes per-timestep synchronization disproportionately expensive.</p>
<p>However, when the spike recording system described in section 2.4 is used, spike data is kept in GPU memory until the end of the simulation and overheads are reduced by up to 10&#x000D7;. Because synchronization with the CPU is no longer required every timestep, simulations run approximately twice as fast on the Windows machine. Furthermore, on the high-end desktop GPU, the simulation now runs faster than real-time in both PyGeNN and GeNN versions&#x02014;significantly faster than other recently published GPU simulators (Golosio et al., <xref ref-type="bibr" rid="B14">2021</xref>) and even specialized neuromorphic systems (Rhodes et al., <xref ref-type="bibr" rid="B31">2020</xref>).</p>
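<p>The recording system keeps spikes on the GPU compactly, with one bit per neuron per timestep packed into 32-bit words. The host-side decoding of such a buffer might look like the following sketch; the exact word layout assumed here (neuron bits packed little-endian into words, timesteps stored row by row, little-endian host) is an illustrative assumption, not GeNN's documented format.</p>

```python
import numpy as np

def decode_spike_bitfield(words, num_neurons, dt=1.0):
    """Unpack a per-timestep bitfield spike buffer (one bit per neuron,
    padded to 32-bit words; layout assumed for illustration) into
    arrays of spike times and neuron ids. Assumes a little-endian host."""
    words_per_step = (num_neurons + 31) // 32
    data = np.asarray(words, dtype=np.uint32).reshape(-1, words_per_step)
    # View each 32-bit word as 4 bytes, then expand to individual bits
    bits = np.unpackbits(data.view(np.uint8), axis=1, bitorder="little")
    steps, ids = np.nonzero(bits[:, :num_neurons])
    return steps * dt, ids
```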
</sec>
<sec>
<title>3.2. Pavlovian Conditioning Performance</title>
<p><xref ref-type="fig" rid="F4">Figure 4</xref> shows the results of an example simulation of the Pavlovian conditioning model. At the beginning of each simulation (<xref ref-type="fig" rid="F4">Figure 4A</xref>), the neurons representing every stimulus respond equally. However, after 1 h of simulation, the response to the CS becomes much stronger (<xref ref-type="fig" rid="F4">Figure 4B</xref>)&#x02014;showing that these neurons have been selectively associated with the stimulus even in the presence of the distractors and the delayed reward. In <xref ref-type="fig" rid="F5">Figure 5</xref>, we show the runtime performance of simulations of the Pavlovian conditioning model, running on the GPUs described above using PyGeNN with and without the recording system described in section 2.4. These PyGeNN results are compared to a GeNN simulation which also uses the recording system. Because each simulation timestep only takes a few &#x003BC;s, the overhead of using CUDA timing events significantly alters the performance, so for this model we only measure the duration of the simulation loop using the approaches described in the previous section. Although we only record the spiking activity during the first and last 50 s, using the recording system still significantly improves the overall performance on all devices&#x02014;especially on the Jetson Xavier NX with its slower CPU. Interestingly, the Titan RTX and GTX 1650 perform almost identically in this benchmark, with speedups ranging from 62&#x000D7; to 72&#x000D7; real-time. This is because, as discussed previously, this model is simply not large enough to fill the 4,608 CUDA cores present on the Titan RTX. Therefore, as the two GPUs share the same Turing architecture and have very similar clock speeds (1,350&#x02013;1,770 MHz for the Titan RTX and 1,485&#x02013;1,665 MHz for the GTX 1650), their performance is very similar. 
As in the simulations of the microcircuit model, the Jetson Xavier NX performs more slowly than the desktop GPUs but still achieves speedups of up to 31&#x000D7;.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Results of Pavlovian conditioning experiment. Raster plot and spike density function (SDF) (Sz&#x000FC;cs, <xref ref-type="bibr" rid="B34">1998</xref>) showing the activity centered around the first delivery of the Conditioned Stimulus (CS) during initial <bold>(A)</bold> and final <bold>(B)</bold> 50 s of simulation. Downward green arrows indicate times at which the CS is delivered and downward black arrows indicate times when other, un-rewarded stimuli are delivered. Vertical dashed lines indicate times at which dopamine is delivered. The population SDF was calculated by convolving the spikes with a Gaussian kernel of &#x003C3; &#x0003D; 10 ms width.</p></caption>
<graphic xlink:href="fninf-15-659005-g0004.tif"/>
</fig>
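<p>The population spike density function shown in Figure 4 is a straightforward convolution of binned spike counts with a Gaussian kernel of &#x003C3; = 10 ms. A sketch of this calculation follows; the binning (one bin per timestep), the kernel truncation at &#x000B1;3&#x003C3;, and the normalisation are assumed choices for illustration.</p>

```python
import numpy as np

def spike_density(spike_counts, dt=1.0, sigma=10.0):
    """Smooth per-timestep population spike counts with a Gaussian
    kernel (sigma in ms; 10 ms as in Figure 4). The kernel is
    normalised so that the SDF integrates to the total spike count."""
    t = np.arange(-3.0 * sigma, 3.0 * sigma + dt, dt)
    kernel = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum() * dt  # integrates to 1 over time
    return np.convolve(spike_counts, kernel, mode="same")
```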
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Simulation times of the Pavlovian Conditioning model running on various GPU hardware for 1 h of biological time. &#x0201C;GPU recording&#x0201D; indicates simulations where the new recording system is employed. Times are taken from averages calculated over 5 runs of each model.</p></caption>
<graphic xlink:href="fninf-15-659005-g0005.tif"/>
</fig>
<p>Unlike in the simulations of the microcircuit model, here the GTX 1050 Ti behaves rather differently. Although its clock speed is approximately the same as that of the other GPUs (1,290&#x02013;1,392 MHz) and it has a similar number of CUDA cores to the GTX 1650, its performance is significantly worse. The difference in performance across all configurations is likely to be due to architectural differences between the older Pascal and the newer Volta and Turing architectures. Specifically, Pascal GPUs have a single type of Arithmetic Logic Unit (ALU) which handles both integer and floating point arithmetic, whereas the newer Volta and Turing architectures have equal numbers of dedicated integer and floating point ALUs as well as significantly larger L1 caches. As discussed in our previous work (Knight and Nowotny, <xref ref-type="bibr" rid="B21">2018</xref>), these architectural features are particularly beneficial for SNN simulations with STDP, where a large amount of floating point computation is required to update the synaptic state <italic>and</italic> additional integer arithmetic is required to calculate the indices into the sparse matrix data structures.</p>
<p>The difference between the speeds of the PyGeNN and GeNN simulations of the Pavlovian conditioning model (<xref ref-type="fig" rid="F5">Figure 5</xref>) <italic>appears</italic> much larger than that for the microcircuit model (<xref ref-type="fig" rid="F3">Figure 3</xref>). However, as <xref ref-type="fig" rid="F6">Figure 6</xref> illustrates, for individual timesteps the excess time due to overheads is approximately the same for both models and consistent with the cost of a small number of Python to C&#x0002B;&#x0002B; function calls (Apache Crail, <xref ref-type="bibr" rid="B2">2019</xref>). Depending on the size and complexity of the model as well as the hardware used, this overhead may or may not be important. For example, when simulating the microcircuit model for 1 s on the Titan RTX, the overhead of using PyGeNN is &#x0003C;0.2% but, when simulating the Pavlovian conditioning model on the same device, it is almost 31%.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Comparison of the durations of individual timesteps in PyGeNN and GeNN simulations of the microcircuit and Pavlovian conditioning experiments. Times are averages calculated over 5 runs using the GPU recording system.</p></caption>
<graphic xlink:href="fninf-15-659005-g0006.tif"/>
</fig>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>4. Discussion</title>
<p>In this paper we have introduced PyGeNN, a Python interface to the C&#x0002B;&#x0002B; based GeNN library for GPU accelerated spiking neural network simulations.</p>
<p>Uniquely, the new interface provides access to all the features of GeNN, without leaving the comparative simplicity of Python and with, as we have shown, typically negligible overheads from the Python bindings. PyGeNN also allows bespoke neuron and synapse models to be defined from within Python, making PyGeNN much more flexible and broadly applicable than, for instance, the Python interface to NEST (Eppler et al., <xref ref-type="bibr" rid="B11">2009</xref>) or the PyNN model description language used to expose CARLsim to Python (Balaji et al., <xref ref-type="bibr" rid="B3">2020</xref>).</p>
<p>In many ways, the new interface resembles elements of the Python-based Brian 2 simulator (Stimberg et al., <xref ref-type="bibr" rid="B32">2019</xref>) (and its Brian2GeNN backend; Stimberg et al., <xref ref-type="bibr" rid="B33">2020</xref>) with two key differences. Unlike in Brian 2, bespoke models in PyGeNN are defined with &#x0201C;C-like&#x0201D; code snippets. This has the advantage of unparalleled flexibility for the expert user, but comes at the cost of more complexity, as the code for a timestep update needs to include a suitable solver and not merely differential equations. The second difference lies in how data structures are handled. Whereas simulations run using Brian 2&#x00027;s C&#x0002B;&#x0002B; or Brian2GeNN backends use files to exchange data with Python, the underlying GeNN data structures are directly accessible from PyGeNN, meaning that no disk access is involved.</p>
<p>As we have demonstrated, the PyGeNN wrapper, exactly like native GeNN, can be used on a variety of hardware, from data center scale down to mobile devices such as the NVIDIA Jetson. This allows the same code to be used in large-scale brain simulations and in embedded and embodied spiking neural network research. Supporting the popular Python language in this interface makes this ecosystem available to a wider audience of researchers in Computational Neuroscience, bio-mimetic machine learning, and autonomous robotics.</p>
<p>The new interface also opens up opportunities to support researchers that work with other Python based systems. In the Computational Neuroscience and Neuromorphic computing communities, we can now build a PyNN (Davison et al., <xref ref-type="bibr" rid="B9">2008</xref>) interface on top of PyGeNN and, in fact, a prototype of such an interface is in development. Furthermore, for the burgeoning spike-based machine learning community, we can use PyGeNN as the basis for a spike-based machine learning framework akin to TensorFlow or PyTorch for rate-based models. A prototype interface of this sort called mlGeNN is in development and close to release.</p>
<p>In this work we have introduced a new spike recording system for GeNN and have shown that, using this system, we can now simulate the Potjans microcircuit model (Potjans and Diesmann, <xref ref-type="bibr" rid="B30">2014</xref>) faster than real-time and, to the best of our knowledge, faster than any other system. Finally, the excellent performance we have demonstrated using low-end Turing architecture GPUs is very exciting in terms of increasing the accessibility of GPU accelerated Computational Neuroscience and SNN machine learning research.</p>
</sec>
<sec sec-type="data-availability-statement" id="s5">
<title>Data Availability Statement</title>
<p>All models, data and analysis scripts used for this study can be found in <ext-link ext-link-type="uri" xlink:href="https://github.com/BrainsOnBoard/pygenn_paper">https://github.com/BrainsOnBoard/pygenn_paper</ext-link>. All experiments were carried out using GeNN 4.4.0, which is fully open source and available from <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.5281/zenodo.4419159">https://doi.org/10.5281/zenodo.4419159</ext-link>.</p>
</sec>
<sec id="s6">
<title>Author Contributions</title>
<p>JK and TN wrote the paper. TN was the original developer of GeNN. AK is the original developer of PyGeNN. JK is currently the primary developer of both GeNN and PyGeNN, responsible for implementing the spike recording system, and performed the experiments and the analysis of the results that are presented in this work. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack><p>We would like to thank Malin Sandstr&#x000F6;m and everyone else at the International Neuroinformatics Coordinating Facility (INCF) for their hard work running the Google Summer of Code mentoring organization every year. Without them, this and many other exciting Neuroinformatics projects would not be possible.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Akar</surname> <given-names>N. A.</given-names></name> <name><surname>Cumming</surname> <given-names>B.</given-names></name> <name><surname>Karakasis</surname> <given-names>V.</given-names></name> <name><surname>Kusters</surname> <given-names>A.</given-names></name> <name><surname>Klijn</surname> <given-names>W.</given-names></name> <name><surname>Peyser</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>&#x0201C;Arbor&#x02013;A morphologically-detailed neural network simulation library for contemporary high-performance computing architectures,&#x0201D;</article-title> in <source>2019 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)</source> (<publisher-loc>Pavia</publisher-loc>), <fpage>274</fpage>&#x02013;<lpage>282</lpage>. <pub-id pub-id-type="doi">10.1109/EMPDP.2019.8671560</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="book"><person-group person-group-type="author"><collab>Apache Crail</collab></person-group> (<year>2019</year>). <source>Crail Python API: Python -&#x0003E; C/C&#x0002B;&#x0002B; Call Overhead</source>.</citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Balaji</surname> <given-names>A.</given-names></name> <name><surname>Adiraju</surname> <given-names>P.</given-names></name> <name><surname>Kashyap</surname> <given-names>H. J.</given-names></name> <name><surname>Das</surname> <given-names>A.</given-names></name> <name><surname>Krichmar</surname> <given-names>J. L.</given-names></name> <name><surname>Dutt</surname> <given-names>N. D.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>PyCARL: a PyNN interface for hardware-software co-simulation of spiking neural network</article-title>. <source>arXiv:2003.09696</source>. <pub-id pub-id-type="doi">10.1109/IJCNN48605.2020.9207142</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bautembach</surname> <given-names>D.</given-names></name> <name><surname>Oikonomidis</surname> <given-names>I.</given-names></name> <name><surname>Argyros</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Multi-GPU SNN simulation with perfect static load balancing</article-title>. <source>arXiv:2102.04681</source>.</citation>
</ref>
<ref id="B5">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Beazley</surname> <given-names>D. M.</given-names></name></person-group> (<year>1996</year>). <article-title>&#x0201C;Using SWIG to control, prototype, and debug C programs with Python,&#x0201D;</article-title> in <source>Proc. 4th Int. Python Conf</source> (<publisher-loc>Livermore, CA</publisher-loc>).</citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Buzs&#x000E1;ki</surname> <given-names>G.</given-names></name> <name><surname>Mizuseki</surname> <given-names>K.</given-names></name></person-group> (<year>2014</year>). <article-title>The log-dynamic brain: how skewed distributions affect network operations</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>15</volume>, <fpage>264</fpage>&#x02013;<lpage>278</lpage>. <pub-id pub-id-type="doi">10.1038/nrn3687</pub-id><pub-id pub-id-type="pmid">24569488</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Carnevale</surname> <given-names>N. T.</given-names></name> <name><surname>Hines</surname> <given-names>M. L.</given-names></name></person-group> (<year>2006</year>). <source>The NEURON Book</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. <pub-id pub-id-type="doi">10.1017/CBO9780511541612</pub-id></citation></ref>
<ref id="B8">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chou</surname> <given-names>T.-s.</given-names></name> <name><surname>Kashyap</surname> <given-names>H. J.</given-names></name> <name><surname>Xing</surname> <given-names>J.</given-names></name> <name><surname>Listopad</surname> <given-names>S.</given-names></name> <name><surname>Rounds</surname> <given-names>E. L.</given-names></name> <name><surname>Beyeler</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>&#x0201C;CARLsim 4: An open source library for large scale, biologically detailed spiking neural network simulation using heterogeneous clusters,&#x0201D;</article-title> in <source>2018 International Joint Conference on Neural Networks (IJCNN)</source> (<publisher-loc>Rio de Janeiro</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1109/IJCNN.2018.8489326</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davison</surname> <given-names>A. P.</given-names></name> <name><surname>Br&#x000FC;derle</surname> <given-names>D.</given-names></name> <name><surname>Eppler</surname> <given-names>J.</given-names></name> <name><surname>Kremkow</surname> <given-names>J.</given-names></name> <name><surname>Muller</surname> <given-names>E.</given-names></name> <name><surname>Pecevski</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2008</year>). <article-title>PyNN: A common interface for neuronal network simulators</article-title>. <source>Front. Neuroinform</source>. <volume>2</volume>:<fpage>11</fpage>. <pub-id pub-id-type="doi">10.3389/neuro.11.011.2008</pub-id><pub-id pub-id-type="pmid">19194529</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Eisenstat</surname> <given-names>S. C.</given-names></name> <name><surname>Gursky</surname> <given-names>M.</given-names></name> <name><surname>Schultz</surname> <given-names>M. H.</given-names></name> <name><surname>Sherman</surname> <given-names>A. H.</given-names></name></person-group> (<year>1977</year>). <source>Yale Sparse Matrix Package. I. The Symmetric Codes</source>. Technical report, Yale University, Department of Computer Science. <publisher-loc>New Haven, CT</publisher-loc>. <pub-id pub-id-type="doi">10.21236/ADA047725</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eppler</surname> <given-names>J. M.</given-names></name> <name><surname>Helias</surname> <given-names>M.</given-names></name> <name><surname>Muller</surname> <given-names>E.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name> <name><surname>Gewaltig</surname> <given-names>M. O.</given-names></name></person-group> (<year>2009</year>). <article-title>PyNEST: A convenient interface to the NEST simulator</article-title>. <source>Front. Neuroinform</source>. <volume>2</volume>:<fpage>12</fpage>. <pub-id pub-id-type="doi">10.3389/neuro.11.012.2008</pub-id><pub-id pub-id-type="pmid">19198667</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gewaltig</surname> <given-names>M.-O.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>NEST (NEural Simulation Tool)</article-title>. <source>Scholarpedia</source> <volume>2</volume>:<fpage>1430</fpage>. <pub-id pub-id-type="doi">10.4249/scholarpedia.1430</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Givon</surname> <given-names>L. E.</given-names></name> <name><surname>Lazar</surname> <given-names>A. A.</given-names></name></person-group> (<year>2016</year>). <article-title>Neurokernel: An open source platform for emulating the fruit fly brain</article-title>. <source>PLoS ONE</source> <volume>11</volume>:<fpage>e146581</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0146581</pub-id><pub-id pub-id-type="pmid">26751378</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Golosio</surname> <given-names>B.</given-names></name> <name><surname>Tiddia</surname> <given-names>G.</given-names></name> <name><surname>De Luca</surname> <given-names>C.</given-names></name> <name><surname>Pastorelli</surname> <given-names>E.</given-names></name> <name><surname>Simula</surname> <given-names>F.</given-names></name> <name><surname>Paolucci</surname> <given-names>P. S.</given-names></name></person-group> (<year>2021</year>). <article-title>Fast simulations of highly-connected spiking cortical models using GPUs</article-title>. <source>Front. Comput. Neurosci</source>. <volume>15</volume>:<fpage>627620</fpage>. <pub-id pub-id-type="doi">10.3389/fncom.2021.627620</pub-id><pub-id pub-id-type="pmid">33679358</pub-id></citation></ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hines</surname> <given-names>M. L.</given-names></name> <name><surname>Davison</surname> <given-names>A. P.</given-names></name> <name><surname>Muller</surname> <given-names>E.</given-names></name></person-group> (<year>2009</year>). <article-title>NEURON and Python</article-title>. <source>Front. Neuroinform</source>. <volume>3</volume>:<fpage>1</fpage>. <pub-id pub-id-type="doi">10.3389/neuro.11.001.2009</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hopkins</surname> <given-names>M.</given-names></name> <name><surname>Furber</surname> <given-names>S. B.</given-names></name></person-group> (<year>2015</year>). <article-title>Accuracy and efficiency in fixed-point neural ODE solvers</article-title>. <source>Neural Comput</source>. <volume>27</volume>, <fpage>2148</fpage>&#x02013;<lpage>2182</lpage>. <pub-id pub-id-type="doi">10.1162/NECO_a_00772</pub-id><pub-id pub-id-type="pmid">26313605</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Humphries</surname> <given-names>M. D.</given-names></name> <name><surname>Gurney</surname> <given-names>K.</given-names></name></person-group> (<year>2007</year>). <article-title>Solution methods for a new class of simple model neurons</article-title>. <source>Neural Comput</source>. <volume>19</volume>, <fpage>3216</fpage>&#x02013;<lpage>3225</lpage>. <pub-id pub-id-type="doi">10.1162/neco.2007.19.12.3216</pub-id><pub-id pub-id-type="pmid">17970650</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hunter</surname> <given-names>J. D.</given-names></name></person-group> (<year>2007</year>). <article-title>Matplotlib: a 2D graphics environment</article-title>. <source>Comput. Sci. Eng</source>. <volume>9</volume>, <fpage>90</fpage>&#x02013;<lpage>95</lpage>. <pub-id pub-id-type="doi">10.1109/MCSE.2007.55</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Izhikevich</surname> <given-names>E. M.</given-names></name></person-group> (<year>2003</year>). <article-title>Simple model of spiking neurons</article-title>. <source>IEEE Trans. Neural Netw</source>. <volume>14</volume>, <fpage>1569</fpage>&#x02013;<lpage>1572</lpage>. <pub-id pub-id-type="doi">10.1109/TNN.2003.820440</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Izhikevich</surname> <given-names>E. M.</given-names></name></person-group> (<year>2007</year>). <article-title>Solving the distal reward problem through linkage of STDP and dopamine signaling</article-title>. <source>Cereb. Cortex</source> <volume>17</volume>, <fpage>2443</fpage>&#x02013;<lpage>2452</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhl152</pub-id><pub-id pub-id-type="pmid">17220510</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Knight</surname> <given-names>J. C.</given-names></name> <name><surname>Nowotny</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>GPUs outperform current HPC and neuromorphic solutions in terms of speed and energy when simulating a highly-connected cortical model</article-title>. <source>Front. Neurosci</source>. <volume>12</volume>:<fpage>941</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2018.00941</pub-id><pub-id pub-id-type="pmid">30618570</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Knight</surname> <given-names>J. C.</given-names></name> <name><surname>Nowotny</surname> <given-names>T.</given-names></name></person-group> (<year>2020</year>). <article-title>Larger GPU-accelerated brain simulations with procedural connectivity</article-title>. <source>bioRxiv</source> [Preprint]. <pub-id pub-id-type="doi">10.1101/2020.04.27.063693</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mikaitis</surname> <given-names>M.</given-names></name> <name><surname>Pineda Garc&#x000ED;a</surname> <given-names>G.</given-names></name> <name><surname>Knight</surname> <given-names>J. C.</given-names></name> <name><surname>Furber</surname> <given-names>S. B.</given-names></name></person-group> (<year>2018</year>). <article-title>Neuromodulated synaptic plasticity on the SpiNNaker neuromorphic system</article-title>. <source>Front. Neurosci</source>. <volume>12</volume>:<fpage>105</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2018.00105</pub-id><pub-id pub-id-type="pmid">29535600</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Millman</surname> <given-names>K. J.</given-names></name> <name><surname>Aivazis</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>Python for scientists and engineers</article-title>. <source>Comput. Sci. Eng</source>. <volume>13</volume>, <fpage>9</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1109/MCSE.2011.36</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="book"><person-group person-group-type="author"><collab>NVIDIA Corporation</collab></person-group> (<year>2019</year>). <source>cuRAND Library</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://docs.nvidia.com/cuda/pdf/CURAND_Library.pdf">https://docs.nvidia.com/cuda/pdf/CURAND_Library.pdf</ext-link></citation>
</ref>
<ref id="B26">
<citation citation-type="book"><person-group person-group-type="author"><collab>NVIDIA Corporation</collab></person-group> (<year>2021</year>). <source>CUDA C Programming Guide</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://docs.nvidia.com/cuda/pdf/CUDA_C_Programming_Guide.pdf">https://docs.nvidia.com/cuda/pdf/CUDA_C_Programming_Guide.pdf</ext-link></citation></ref>
<ref id="B27">
<citation citation-type="book"><person-group person-group-type="author"><collab>NVIDIA</collab> <name><surname>Vingelmann</surname> <given-names>P.</given-names></name> <name><surname>Fitzek</surname> <given-names>F. H.</given-names></name></person-group> (<year>2020</year>). <source>CUDA Toolkit</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://nvidia.com/cuda-toolkit">https://nvidia.com/cuda-toolkit</ext-link></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pauli</surname> <given-names>R.</given-names></name> <name><surname>Weidel</surname> <given-names>P.</given-names></name> <name><surname>Kunkel</surname> <given-names>S.</given-names></name> <name><surname>Morrison</surname> <given-names>A.</given-names></name></person-group> (<year>2018</year>). <article-title>Reproducing polychronization: a guide to maximizing the reproducibility of spiking network models</article-title>. <source>Front. Neuroinform</source>. <volume>12</volume>:<fpage>46</fpage>. <pub-id pub-id-type="doi">10.3389/fninf.2018.00046</pub-id><pub-id pub-id-type="pmid">30123121</pub-id></citation></ref>
<ref id="B29">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Plotnikov</surname> <given-names>D.</given-names></name> <name><surname>Blundell</surname> <given-names>I.</given-names></name> <name><surname>Ippen</surname> <given-names>T.</given-names></name> <name><surname>Eppler</surname> <given-names>J. M.</given-names></name> <name><surname>Rumpe</surname> <given-names>B.</given-names></name> <name><surname>Morrison</surname> <given-names>A.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;NESTML: a modeling language for spiking neurons,&#x0201D;</article-title> in <source>Lecture Notes in Informatics (LNI), Vol. P-254, Modellierung 2016</source> (<publisher-loc>Karlsruhe</publisher-loc>: <publisher-name>Gesellschaft f&#x000FC;r Informatik e.V.</publisher-name>), <fpage>93</fpage>&#x02013;<lpage>108</lpage>.</citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Potjans</surname> <given-names>T. C.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <article-title>The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model</article-title>. <source>Cereb. Cortex</source> <volume>24</volume>, <fpage>785</fpage>&#x02013;<lpage>806</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhs358</pub-id><pub-id pub-id-type="pmid">23203991</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rhodes</surname> <given-names>O.</given-names></name> <name><surname>Peres</surname> <given-names>L.</given-names></name> <name><surname>Rowley</surname> <given-names>A. G. D.</given-names></name> <name><surname>Gait</surname> <given-names>A.</given-names></name> <name><surname>Plana</surname> <given-names>L. A.</given-names></name> <name><surname>Brenninkmeijer</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Real-time cortical simulation on neuromorphic hardware</article-title>. <source>Philos. Trans. R. Soc. A Math. Phys. Eng. Sci</source>. <volume>378</volume>:<fpage>20190160</fpage>. <pub-id pub-id-type="doi">10.1098/rsta.2019.0160</pub-id><pub-id pub-id-type="pmid">31865885</pub-id></citation></ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stimberg</surname> <given-names>M.</given-names></name> <name><surname>Brette</surname> <given-names>R.</given-names></name> <name><surname>Goodman</surname> <given-names>D. F.</given-names></name></person-group> (<year>2019</year>). <article-title>Brian 2, an intuitive and efficient neural simulator</article-title>. <source>eLife</source> <volume>8</volume>, <fpage>1</fpage>&#x02013;<lpage>41</lpage>. <pub-id pub-id-type="doi">10.7554/eLife.47314</pub-id><pub-id pub-id-type="pmid">31429824</pub-id></citation></ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stimberg</surname> <given-names>M.</given-names></name> <name><surname>Goodman</surname> <given-names>D. F.</given-names></name> <name><surname>Nowotny</surname> <given-names>T.</given-names></name></person-group> (<year>2020</year>). <article-title>Brian2GeNN: accelerating spiking neural network simulations with graphics hardware</article-title>. <source>Sci. Rep</source>. <volume>10</volume>, <fpage>1</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-019-54957-7</pub-id><pub-id pub-id-type="pmid">31941893</pub-id></citation></ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sz&#x000FC;cs</surname> <given-names>A.</given-names></name></person-group> (<year>1998</year>). <article-title>Applications of the spike density function in analysis of neuronal firing patterns</article-title>. <source>J. Neurosci. Methods</source> <volume>81</volume>, <fpage>159</fpage>&#x02013;<lpage>167</lpage>. <pub-id pub-id-type="doi">10.1016/S0165-0270(98)00033-8</pub-id><pub-id pub-id-type="pmid">9696321</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Der Walt</surname> <given-names>S.</given-names></name> <name><surname>Colbert</surname> <given-names>S. C.</given-names></name> <name><surname>Varoquaux</surname> <given-names>G.</given-names></name></person-group> (<year>2011</year>). <article-title>The NumPy array: a structure for efficient numerical computation</article-title>. <source>Comput. Sci. Eng</source>. <volume>13</volume>, <fpage>22</fpage>&#x02013;<lpage>30</lpage>. <pub-id pub-id-type="doi">10.1109/MCSE.2011.37</pub-id></citation></ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vitay</surname> <given-names>J.</given-names></name> <name><surname>Dinkelbach</surname> <given-names>H.</given-names></name> <name><surname>Hamker</surname> <given-names>F.</given-names></name></person-group> (<year>2015</year>). <article-title>ANNarchy: a code generation approach to neural simulations on parallel hardware</article-title>. <source>Front. Neuroinform</source>. <volume>9</volume>:<fpage>19</fpage>. <pub-id pub-id-type="doi">10.3389/fninf.2015.00019</pub-id><pub-id pub-id-type="pmid">26283957</pub-id></citation></ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yavuz</surname> <given-names>E.</given-names></name> <name><surname>Turner</surname> <given-names>J.</given-names></name> <name><surname>Nowotny</surname> <given-names>T.</given-names></name></person-group> (<year>2016</year>). <article-title>GeNN: a code generation framework for accelerated brain simulations</article-title>. <source>Sci. Rep</source>. <volume>6</volume>:<fpage>18854</fpage>. <pub-id pub-id-type="doi">10.1038/srep18854</pub-id><pub-id pub-id-type="pmid">26740369</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn fn-type="financial-disclosure"><p><bold>Funding.</bold> This work was funded by the EPSRC (Brains on Board project, grant number EP/P006094/1; ActiveAI project, grant number EP/S030964/1), the European Union&#x00027;s Horizon 2020 research and innovation program under Grant Agreement 945539 (HBP SGA3), and a Google Summer of Code grant to AK.</p>
</fn>
</fn-group>
</back>
</article> 