<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Comput. Neurosci.</journal-id>
<journal-title>Frontiers in Computational Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Comput. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5188</issn>
<publisher>
<publisher-name>Frontiers Research Foundation</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fncom.2011.00026</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Information Diversity in Structure and Dynamics of Simulated Neuronal Networks</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>M&#x000E4;ki-Marttunen</surname> <given-names>Tuomo</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="author-notes" rid="fn001">&#x0002A;</xref>
</contrib>
<contrib contrib-type="author">
<name><surname>A&#x00107;imovi&#x00107;</surname> <given-names>Jugoslava</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Nykter</surname> <given-names>Matti</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Kesseli</surname> <given-names>Juha</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Ruohonen</surname> <given-names>Keijo</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Yli-Harja</surname> <given-names>Olli</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Linne</surname> <given-names>Marja-Leena</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Signal Processing, Tampere University of Technology</institution> <country>Tampere, Finland</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Mathematics, Tampere University of Technology</institution> <country>Tampere, Finland</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Arvind Kumar, Albert-Ludwig University, Germany</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Birgit Kriener, Norwegian University of Life Sciences, Norway; Kanaka Rajan, Princeton University, USA</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Tuomo M&#x000E4;ki-Marttunen, Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere, Finland. e-mail: <email>tuomo.maki-marttunen&#x00040;tut.fi</email></p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>01</day>
<month>06</month>
<year>2011</year>
</pub-date>
<pub-date pub-type="collection">
<year>2011</year>
</pub-date>
<volume>5</volume>
<elocation-id>26</elocation-id>
<history>
<date date-type="received">
<day>15</day>
<month>10</month>
<year>2010</year>
</date>
<date date-type="accepted">
<day>17</day>
<month>05</month>
<year>2011</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2011 M&#x000E4;ki-Marttunen, A&#x00107;imovi&#x00107;, Nykter, Kesseli, Ruohonen, Yli-Harja and Linne.</copyright-statement>
<copyright-year>2011</copyright-year>
<license license-type="open-access" xlink:href="http://www.frontiersin.org/licenseagreement"><p>This is an open-access article subject to a non-exclusive license between the authors and Frontiers Media SA, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and other Frontiers conditions are complied with.</p></license>
</permissions>
<abstract>
<p>Neuronal networks exhibit a wide diversity of structures, which contributes to the diversity of the dynamics therein. The presented work applies an information theoretic framework to simultaneously analyze structure and dynamics in neuronal networks. Information diversity within the structure and dynamics of a neuronal network is studied using the normalized compression distance. To describe the structure, a scheme for generating distance-dependent networks with identical in-degree distribution but variable strength of dependence on distance is presented. The resulting network structure classes possess differing path length and clustering coefficient distributions. In parallel, comparable realistic neuronal networks are generated with the NETMORPH simulator and a similar analysis is performed on them. To describe the dynamics, network spike trains are simulated using different network structures and their bursting behaviors are analyzed. For the simulation of the network activity, the Izhikevich model of spiking neurons is used together with the Tsodyks model of dynamical synapses. We show that the structure of the simulated neuronal networks affects the spontaneous bursting activity when measured with the bursting frequency and a set of intraburst measures: the more locally connected networks produce more and longer bursts than the more random networks. The information diversity of the structure of a network is greatest in the most locally connected networks, smallest in random networks, and intermediate in the networks between order and disorder. As for the dynamics, the most locally connected networks and some of the in-between networks produce the most complex intraburst spike trains. The same result also holds for the sparser of the two considered network densities in the case of full spike trains.</p>
</abstract>
<kwd-group>
<kwd>information diversity</kwd>
<kwd>neuronal network</kwd>
<kwd>structure-dynamics relationship</kwd>
<kwd>complexity</kwd>
</kwd-group>
<counts>
<fig-count count="10"/>
<table-count count="5"/>
<equation-count count="14"/>
<ref-count count="39"/>
<page-count count="17"/>
<word-count count="11808"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction">
<label>1</label>
<title>Introduction</title>
<p>Neuronal networks exhibit diverse structural organization, which has been demonstrated in studies of both neuronal microcircuits and large-scale connectivity (Fr&#x000E9;gnac et al., <xref ref-type="bibr" rid="B10">2007</xref>; Voges et al., <xref ref-type="bibr" rid="B36">2010</xref>; Sporns, <xref ref-type="bibr" rid="B33">2011</xref>). Network structure, the connectivity pattern between elements contained in the network, constrains the interaction between these elements, and consequently, the overall dynamics of the system. The relationship between network structure and dynamics has been extensively considered in theoretical studies (Albert and Barab&#x000E1;si, <xref ref-type="bibr" rid="B2">2002</xref>; Newman, <xref ref-type="bibr" rid="B25">2003</xref>; Boccaletti et al., <xref ref-type="bibr" rid="B5">2006</xref>; Galas et al., <xref ref-type="bibr" rid="B11">2010</xref>). In networks of neurons, the pattern of interneuronal connectivity is only one of the components that affect the overall network dynamics, together with the non-linear activity of individual neurons and synapses. Therefore, the constraints that structure imposes on dynamics in such systems are difficult to infer, and reliable methods to quantify this relationship are needed. Several previous studies employed cross-correlation in this context (Kriener et al., <xref ref-type="bibr" rid="B18">2008</xref>; Ostojic et al., <xref ref-type="bibr" rid="B27">2009</xref>), while the study reported in Soriano et al. (<xref ref-type="bibr" rid="B32">2008</xref>) proposed a method to infer structure from recorded activity by estimating the moment in network development when all of the neurons become fully connected into a giant component.</p>
<p>The structure and activity can be examined in a simplified and easily tractable neuronal system, namely in dissociated cultures of cortical neurons. Neurons placed in a culture are capable of developing and self-organizing into functional networks that exhibit spontaneous bursting behavior (Kriegstein and Dichter, <xref ref-type="bibr" rid="B17">1983</xref>; Marom and Shahaf, <xref ref-type="bibr" rid="B23">2002</xref>; Wagenaar et al., <xref ref-type="bibr" rid="B37">2006</xref>). The structure of such networks can be manipulated by changing the physical characteristics of the environment in which the neurons live (Wheeler and Brewer, <xref ref-type="bibr" rid="B39">2010</xref>), while the activity is recorded using multielectrode array chips. Networks of spiking neurons have been systematically analyzed in the literature (for example, see Brunel, <xref ref-type="bibr" rid="B6">2000</xref>; Tuckwell, <xref ref-type="bibr" rid="B35">2006</xref>; Kumar et al., <xref ref-type="bibr" rid="B19">2008</xref>; Ostojic et al., <xref ref-type="bibr" rid="B27">2009</xref>). In addition, models aiming to study neocortical cultures are presented in Latham et al. (<xref ref-type="bibr" rid="B20">2000</xref>) and Benayon et al. (<xref ref-type="bibr" rid="B4">2010</xref>), among others.</p>
<p>In this work, we follow the modeling approach of a recent study (Gritsun et al., <xref ref-type="bibr" rid="B13">2010</xref>) in simulating the activity of a neuronal system. The model is composed of Izhikevich model neurons (Izhikevich, <xref ref-type="bibr" rid="B15">2003</xref>) and a synapse model with short-term dynamics (Tsodyks et al., <xref ref-type="bibr" rid="B34">2000</xref>). We employ an information theoretic framework presented in Galas et al. (<xref ref-type="bibr" rid="B11">2010</xref>) in order to estimate the information diversity in both the structure and dynamics of simulated neuronal networks. This framework utilizes the normalized compression distance (NCD), which employs an approximation of Kolmogorov complexity (KC) to evaluate the difference in information content between a pair of data sequences. Both network dynamics in the form of spike trains and network structure described as a directed unweighted graph can be represented as binary sequences and analyzed using the NCD. KC is maximized for random sequences that cannot be compressed and is small for regular sequences with many repetitions. Contrary to KC, a complexity measure taking into account the context-dependence of data gives small values for random and regular strings and is maximized for strings that reflect both regularity and randomness, i.e., that correspond to systems between order and disorder (Galas et al., <xref ref-type="bibr" rid="B11">2010</xref>; Sporns, <xref ref-type="bibr" rid="B33">2011</xref>). The notion of KC has been employed before to analyze experimentally recorded spike trains, i.e., recordings of network dynamics, and to extract relevant features, as in Amig&#x000F3; et al. (<xref ref-type="bibr" rid="B3">2004</xref>) and Christen et al. (<xref ref-type="bibr" rid="B8">2006</xref>). Further examples of the application of information theoretic methods to the analysis of spike train data can be found in Paninski (<xref ref-type="bibr" rid="B28">2003</xref>).</p>
<p>The NCD has been used for the analysis of Boolean networks in Nykter et al. (<xref ref-type="bibr" rid="B26">2008</xref>), where it demonstrated the capability to discriminate between different network dynamics, i.e., between critical, subcritical, and supercritical networks. The present study employs the NCD in a more challenging context. As already mentioned, in neuronal networks the influence of structure on dynamics is not straightforward to discern, since both the network elements (neurons) and the connections between them (synapses) possess their own non-linear dynamics that contribute to the overall network dynamics in a non-trivial manner. The obtained results show that random and regular networks are separable by their NCD distributions, while the networks between order and disorder cover the continuum of values between the two extremes. The applied information theoretic framework is novel in the field of neuroscience, and introduces a measure of information diversity capable of assessing both the structure and dynamics of neuronal networks.</p>
</sec>
<sec sec-type="materials|methods">
<label>2</label>
<title>Materials and Methods</title>
<sec>
<label>2.1</label>
<title>Network structure</title>
<p>Different types of network structures are considered in this study. In locally connected networks (LCN) with regular structure, every node is preferentially connected to its spatially closest neighbors; only for sufficiently high connectivity does a node connect to more distant neighbors. In random Erd&#x00151;s&#x02013;R&#x000E9;nyi (RN) networks, every pair of nodes is connected with equal probability regardless of their location. Finally, networks with partially local and partially random connectivity (PLCN) possess both order and disorder in their structure. In Algorithm 1, we describe a unified scheme for generating these three types of networks.</p>
<table-wrap position="float" id="TU1">
<label>Algorithm 1</label>
<caption><p><bold>Scheme for generating distance-dependent networks</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<td align="left"><bold>for</bold> node index <italic>i</italic>&#x02009;&#x02208;&#x02009;{1,&#x02026;,<italic>N</italic>} <bold>do</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left">&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;Take number of in-neighbors <italic>n<sub>i</sub></italic>&#x02009;&#x0223C;&#x02009;<italic>Bin</italic>(<italic>N</italic>&#x02009;&#x02212;&#x02009;1, <italic>p</italic>).</td>
</tr>
<tr>
<td align="left">&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;<bold>for</bold> in-neighbor index <italic>j</italic>&#x02009;&#x02208;&#x02009;{1,&#x02026;,<italic>n<sub>i</sub></italic>} <bold>do</bold></td>
</tr>
<tr>
<td align="left"><list list-type="order"><list-item><p>Give weights <italic>w<sub>k</sub></italic> to all nodes <italic>k</italic>&#x02009;&#x02260;&#x02009;<italic>i</italic> that are not yet connected to <italic>i</italic> s.t. <inline-formula><mml:math id="M1"><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mi>D</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>W</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo></mml:mrow></mml:math></inline-formula> where <italic>D<sub>ik</sub></italic> is the spatial distance between nodes <italic>i</italic> and <italic>k</italic>.</p></list-item>
<list-item><p>Normalize by <inline-formula><mml:math id="M2"><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>k</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle scriptlevel='+1'><mml:mfrac><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mo>&#x02211;</mml:mo><mml:mi>k</mml:mi></mml:msub><mml:msub><mml:mi>w</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>,</mml:mo></mml:mrow></mml:math></inline-formula> where <italic>P</italic>(<italic>k</italic>) represents the probability to draw node <italic>k</italic>.</p></list-item>
<list-item><p>Randomly pick <italic>k</italic> from the probability mass distribution <italic>P</italic> and create a connection from <italic>k</italic> to <italic>i</italic>.</p></list-item></list></td>
</tr>
<tr>
<td align="left">&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;<bold>end for</bold></td>
</tr>
<tr>
<td align="left">&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;<bold>end for</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The scheme uses three parameters: the probability of connection between a pair of nodes <italic>p</italic>&#x02009;&#x02208;&#x02009;[0,1], the factor that defines the dependence on distance <italic>W</italic>&#x02009;&#x02265;&#x02009;0, and the spatial node-to-node distance matrix <italic>D</italic>&#x02009;&#x02208;&#x02009;&#x0211D;<sup><italic>N</italic>&#x02009;&#x000D7;&#x02009;<italic>N</italic></sup>. The matrix <italic>D</italic> is presumed positive and symmetric. For <italic>W</italic>&#x02009;&#x0003D;&#x02009;0 the scheme results in an RN, while for <italic>W</italic>&#x02009;&#x0003D;&#x02009;&#x0221E; we obtain an LCN. The latter networks are the limit case of an arbitrarily large factor <italic>W</italic>: when choosing an in-neighbor, one always picks the spatially closest node that has not yet been chosen as an in-neighbor. Randomness in the choice of in-neighbors enters only when two or more candidate in-neighbors lie at exactly the minimal distance from the considered node; in these cases, the in-neighbor is chosen at random.</p>
<p>Notably, regardless of the choice of the distance-dependence factor <italic>W</italic>, the scheme results in a network whose in-degree is binomially distributed as <italic>Bin</italic>(<italic>N</italic>&#x02009;&#x02212;&#x02009;1, <italic>p</italic>). The identical in-degree distribution makes the considered networks comparable: each network has, on average, the same number of neurons with a high number of synaptic inputs as well as of those with a low number. This property does not hold in most studied models of networks with varying distance-dependence, such as Watts&#x02013;Strogatz networks (Watts and Strogatz, <xref ref-type="bibr" rid="B38">1998</xref>) or Erd&#x00151;s&#x02013;R&#x000E9;nyi based models in which the probability of connection is modulated by the spatial distance between the nodes (see e.g., Itzhack and Louzoun, <xref ref-type="bibr" rid="B14">2010</xref>).</p>
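<p>As an illustration, the following Python sketch implements Algorithm 1. It is not part of the original simulation code: the function and variable names are ours, NumPy is assumed, and the handling of the LCN limit <italic>W</italic>&#x02009;&#x0003D;&#x02009;&#x0221E; (where the weights would overflow numerically) follows the verbal description above.</p>
<preformat><![CDATA[
import numpy as np

def generate_network(D, p, W, rng=None):
    """Algorithm 1: distance-dependent network generation (sketch).

    D : (N, N) positive, symmetric node-to-node distance matrix
    p : connection probability in [0, 1]
    W : distance-dependence factor (W = 0 gives an RN,
        W = np.inf gives an LCN, 0 < W < inf gives a PLCN)
    Returns M with M[k, i] = 1 denoting an edge from node k to node i.
    """
    if rng is None:
        rng = np.random.default_rng()
    N = D.shape[0]
    M = np.zeros((N, N), dtype=int)
    for i in range(N):
        # In-degree is Bin(N - 1, p) regardless of W.
        for _ in range(rng.binomial(N - 1, p)):
            free = (M[:, i] == 0)
            free[i] = False          # no autapses
            if np.isinf(W):
                # LCN limit: always take the closest free node
                # (exact-distance ties would be broken at random).
                k = np.flatnonzero(free)[np.argmin(D[free, i])]
            else:
                w = np.zeros(N)
                w[free] = D[free, i] ** (-float(W))
                k = rng.choice(N, p=w / w.sum())
            M[k, i] = 1
    return M
]]></preformat>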
<sec>
<label>2.1.1</label>
<title>NETMORPH: a neuronal morphology simulator</title>
<p>In addition to the networks described above, we study biologically realistic neuronal networks. NETMORPH is a simulator that combines various models of neuronal growth (Koene et al., <xref ref-type="bibr" rid="B16">2009</xref>). The simulator allows monitoring the evolution of the network from isolated cells with mere stubs of neurites into a dense neuronal network, and observing the network structure, as determined by the synapses, at given time instants <italic>in vitro</italic>. It simulates a given number of neurons that grow independently of each other and form synapses whenever an axon of one neuron and a dendrite of another neuron come close enough to each other. The neurite segments are static in the sense that once they are put in their places, they are not allowed to move for the rest of the simulation.</p>
<p>The growth of the axons and dendrites is described by three processes: elongation, turning, and branching, all of which are only applied to the terminal segments of the dendritic and axonal trees. The elongation of a terminal segment obeys the equation</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M3"><mml:mrow><mml:msub><mml:mi>&#x003BD;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x003BD;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:msubsup><mml:mi>n</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>F</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where <italic>v<sub>i</sub></italic> is the elongation rate at time instant <italic>t<sub>i</sub></italic>, <italic>v</italic><sub>0</sub> is the initial elongation rate, <italic>n<sub>i</sub></italic> is the number of terminal segments in the arbor that the considered terminal segment belongs to, and <italic>F</italic> is a parameter that describes the dependence of the elongation rate on the size of the arbor.</p>
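<p>In code, Eq. <xref ref-type="disp-formula" rid="E1">1</xref> amounts to a one-line power law; the following minimal Python sketch (names ours) makes the dependence on arbor size explicit.</p>
<preformat><![CDATA[
def elongation_rate(v0, n_i, F):
    """Eq. 1: elongation rate of a terminal segment belonging to an
    arbor with n_i terminal segments; v0 is the initial rate."""
    return v0 * n_i ** (-F)
]]></preformat>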
<p>The terminal segments continue to grow until a turning or branching occurs. The probability that a terminal segment <italic>j</italic> changes direction during time interval (<italic>t<sub>i</sub></italic>, <italic>t<sub>i</sub></italic>&#x02009;&#x0002B;&#x02009;&#x00394;<italic>t</italic>) obeys equation</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M4"><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mi>L</mml:mi></mml:msub><mml:mo>&#x00394;</mml:mo><mml:msub><mml:mi>L</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>t</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where &#x00394;<italic>L<sub>j</sub></italic>(<italic>t<sub>i</sub></italic>) is the total increase in the length of the terminal segment during the considered time interval and <italic>r<sub>L</sub></italic> is a parameter that describes the frequency of turnings. The new direction of growth is obtained by adding a perturbation to a weighted mean of the previous growth directions, where the most recent growth directions are given more weight than the earliest ones.</p>
<p>The probability that a terminal segment branches is given by</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M5"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mi>n</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mi>B</mml:mi><mml:mi>&#x0221E;</mml:mi></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>t</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>/</mml:mo><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>t</mml:mi><mml:mo>/</mml:mo><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msup><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>S</mml:mi><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow></mml:msup><mml:mo>/</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where <italic>n<sub>i</sub></italic> is the number of terminal segments in the whole neuron at time <italic>t<sub>i</sub></italic> and <italic>E</italic> is a parameter describing the dependence of branching probability on the number of terminal segments. Parameters <italic>B</italic><sub>&#x0221E;</sub> and &#x003C4; describe the overall branching probability and the dependence of branching probability on time, respectively &#x02013; the bigger the constant &#x003C4;, the longer the branching events will continue to occur. The variable &#x003B3;<italic><sub>j</sub></italic> is the <italic>order</italic> of the terminal segment <italic>j</italic>, i.e., how many segments there are between the terminal segment and the cell soma, and <italic>S</italic> is the parameter describing the effect of the order. Finally, the probability is normalized using the variable <inline-formula><mml:math id="M6"><mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle scriptlevel='+1'><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mstyle><mml:msubsup><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:msubsup><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>S</mml:mi><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:mrow></mml:math></inline-formula></p>
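<p>As a concrete reading of Eqs <xref ref-type="disp-formula" rid="E2">2</xref> and <xref ref-type="disp-formula" rid="E3">3</xref>, the following Python sketch (our own illustration, not NETMORPH code) evaluates the turning and branching probabilities of a terminal segment.</p>
<preformat><![CDATA[
import numpy as np

def turning_probability(r_L, dL_j):
    """Eq. 2: probability that terminal segment j turns during
    (t_i, t_i + dt), where dL_j is its length increase over that
    interval."""
    return r_L * dL_j

def branching_probability(t_i, dt, gammas, j, B_inf, tau, E, S):
    """Eq. 3: probability that terminal segment j branches during
    (t_i, t_i + dt). `gammas` lists the order gamma_k of every
    terminal segment of the neuron, so n_i = len(gammas)."""
    gammas = np.asarray(gammas, dtype=float)
    n_i = len(gammas)
    # Normalization C_{n_i} = (1 / n_i) * sum_k 2^(-S * gamma_k)
    C = np.mean(2.0 ** (-S * gammas))
    return (n_i ** (-E) * B_inf * np.exp(-t_i / tau)
            * (np.exp(dt / tau) - 1.0) * 2.0 ** (-S * gammas[j]) / C)
]]></preformat>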
<p>Whenever an axon and a dendrite of two separate neurons grow close enough to each other, a synapse may be formed. The simulator outputs the data on these synapse formations, which describes the network connectivity. Technical information on the simulator and the model parameter values used in this study are listed in Appendix 6.1.</p>
</sec>
<sec id="s4">
<label>2.1.2</label>
<title>Structural properties of a network</title>
<p>In this study we consider the network structure as a directed unweighted graph. These graphs can be represented by connectivity matrices <italic>M</italic>&#x02009;&#x02208;&#x02009;{0, 1}<sup><italic>N</italic>&#x02009;&#x000D7;&#x02009;<italic>N</italic></sup>, where <italic>M<sub>ij</sub></italic>&#x02009;&#x0003D;&#x02009;1 when there is an edge from node <italic>i</italic> to node <italic>j</italic>. The most crucial single measure characterizing such graphs is probably the degree of the graph, i.e., the average number of in- or out-connections of its nodes. When studying large networks, not only the average number but also the distributions of the numbers of in- and out-connections, i.e., the in- and out-degree distributions, are of interest.</p>
<p>Further measures of network structure considered here are the shortest path length and the clustering coefficient (Newman, <xref ref-type="bibr" rid="B25">2003</xref>). We choose these two standard measures in order to show differences in the average distance between the nodes and in the overall degree of clustering in the network. The shortest path length <italic>l<sub>ij</sub></italic> (referred to as path length from now on) from node <italic>i</italic> to node <italic>j</italic> is the minimum number of edges that have to be traversed to get from <italic>i</italic> to <italic>j</italic>. The mean path length of the network is calculated as <inline-formula><mml:math id="M7"><mml:mrow><mml:mi>L</mml:mi><mml:mo>=</mml:mo><mml:mstyle scriptlevel='+1'><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msup><mml:mi>N</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac></mml:mstyle><mml:msubsup><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:msubsup><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:mrow></mml:math></inline-formula> where a path length <italic>l<sub>ij</sub></italic> is taken to be 0 if no path from <italic>i</italic> to <italic>j</italic> exists. The clustering coefficient <italic>c<sub>i</sub></italic> of node <italic>i</italic> is defined as follows. Consider &#x1D4A9;<italic><sub>i</sub></italic> as the set of neighbors of node <italic>i</italic>, i.e., the nodes that share an edge with node <italic>i</italic> in at least one direction. The clustering coefficient of node <italic>i</italic> is the proportion of traversable triangular paths that start and end at node <italic>i</italic> to the maximal number of such paths. This maximal number corresponds to the case where the subnetwork &#x1D4A9;<italic><sub>i</sub></italic>&#x02009;&#x0222A;&#x02009;{<italic>i</italic>} is fully connected. The clustering coefficient can thus be written as</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M8"><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mo>|</mml:mo> <mml:mrow><mml:mrow><mml:mo>{</mml:mo> <mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x02208;</mml:mo><mml:msubsup><mml:mi mathvariant='script'>N</mml:mi><mml:mi>i</mml:mi><mml:mn>2</mml:mn></mml:msubsup><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>M</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02227;</mml:mo><mml:msub><mml:mi>M</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02227;</mml:mo><mml:msub><mml:mi>M</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow> <mml:mo>}</mml:mo></mml:mrow></mml:mrow> <mml:mo>|</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi mathvariant='script'>N</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi mathvariant='script'>N</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>As the connections to self (autapses) are prohibited, one can use the diagonal values of the third power of connectivity matrix <italic>M</italic> to rewrite Eq. <xref ref-type="disp-formula" rid="E4">4</xref> as</p>
<disp-formula id="E5"><label>(5)</label><mml:math id="M9"><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msup><mml:mi>M</mml:mi><mml:mn>3</mml:mn></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi mathvariant='script'>N</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi mathvariant='script'>N</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>This definition of clustering coefficient is an extension of Eq. <xref ref-type="disp-formula" rid="E5">5</xref> in Newman (<xref ref-type="bibr" rid="B25">2003</xref>) to directed graphs. The clustering coefficient of the network is calculated as the average of those <italic>c<sub>i</sub></italic> for which |&#x1D4A9;<italic><sub>i</sub></italic>|&#x02009;&#x0003E;&#x02009;1.</p>
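<p>Equation <xref ref-type="disp-formula" rid="E5">5</xref> translates directly into matrix operations. The Python sketch below (our own, assuming NumPy) computes the per-node clustering coefficients from the connectivity matrix and averages them over the nodes with |&#x1D4A9;<italic><sub>i</sub></italic>|&#x02009;&#x0003E;&#x02009;1.</p>
<preformat><![CDATA[
import numpy as np

def clustering_coefficient(M):
    """Clustering coefficients of a directed graph per Eq. 5.

    M : (N, N) binary connectivity matrix with zero diagonal
        (no autapses); M[i, j] = 1 denotes an edge from i to j.
    Returns (per-node coefficients, network-level average).
    """
    M = np.asarray(M)
    # (M^3)_ii counts the triangular paths i -> j -> k -> i.
    triangles = np.diagonal(np.linalg.matrix_power(M, 3)).astype(float)
    # |N_i|: nodes sharing an edge with i in at least one direction.
    n = ((M + M.T) > 0).sum(axis=1)
    c = np.full(M.shape[0], np.nan)
    valid = n > 1
    c[valid] = triangles[valid] / (n[valid] * (n[valid] - 1.0))
    return c, np.nanmean(c)
]]></preformat>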
<p>Examples of connectivity patterns of the different network structure classes, including a network produced by NETMORPH, are illustrated in Figure <xref ref-type="fig" rid="F1">1</xref>. The figure shows the connectivity of a single cell (upper row), the full connectivity matrix with black dots representing ones (middle row), and a zoomed-in segment of the connectivity matrix (bottom row). The structure classes shown are an RN, three examples of PLCN obtained for different values of the distance-dependence factor <italic>W</italic>, an LCN, and a NETMORPH network. Connection probability <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.1 is used in all network types. Note the variability in the spread of the neighbors within the different networks: for RNs they are spread totally at random, whereas for LCNs they are distributed around the considered neuron. Due to the boundary conditions the spread of the out-neighbors in an LCN is not circular, as the nodes near the border have on average more distant in-neighbors than the ones located at the center. In NETMORPH networks the spread of the out-neighbors is largely dictated by the direction of the axonal growth.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Upper: Examples of the connectivity patterns</bold>. White: target cell, red: cells having output to the target cell, black: cells receiving input from the target cell; Middle: Connectivity matrix. Y-axis: From-neuron index, X-axis: To-neuron index; Lower: Selected part of the connectivity matrix magnified.</p></caption>
<graphic xlink:href="fncom-05-00026-g001.tif"/>
</fig>
</sec>
</sec>
<sec id="s3">
<label>2.2</label>
<title>Network dynamics</title>
<sec>
<label>2.2.1</label>
<title>Model</title>
<p>To study the network activity we follow the modeling approach presented in Gritsun et al. (<xref ref-type="bibr" rid="B13">2010</xref>). We implement the Izhikevich model (Izhikevich, <xref ref-type="bibr" rid="B15">2003</xref>) of spiking neurons defined by the following membrane potential and recovery variable dynamics</p>
<disp-formula id="E6"><label>(6)</label><mml:math id="M10"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mn>0.04</mml:mn><mml:msup><mml:mi>v</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:mn>5</mml:mn><mml:mi>v</mml:mi><mml:mo>+</mml:mo><mml:mn>140</mml:mn><mml:mo>&#x02212;</mml:mo><mml:mi>r</mml:mi><mml:mo>+</mml:mo><mml:mi>I</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>b</mml:mi><mml:mi>v</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>r</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>and the resetting scheme</p>
<disp-formula id="E7"><label>(7)</label><mml:math id="M11"><mml:mrow><mml:mtext>if&#x02009;</mml:mtext><mml:mi>v</mml:mi><mml:mo>&#x02265;</mml:mo><mml:mn>30</mml:mn><mml:mo>,</mml:mo><mml:mtext>&#x02009;&#x02009;then&#x02009;&#x02009;</mml:mtext><mml:mrow><mml:mo>{</mml:mo> <mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mi>v</mml:mi><mml:mo>&#x02190;</mml:mo><mml:mi>c</mml:mi></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mi>r</mml:mi><mml:mo>&#x02190;</mml:mo><mml:mi>r</mml:mi><mml:mo>+</mml:mo><mml:mi>d</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Parameters <italic>a</italic>, <italic>b</italic>, <italic>c</italic>, and <italic>d</italic> are model parameters and</p>
<disp-formula id="E8"><label>(8)</label><mml:math id="M12"><mml:mrow><mml:mi>I</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>y</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>G</mml:mi></mml:msub></mml:mrow></mml:math></disp-formula>
<p>is an input term consisting of both synaptic input from other modeled neurons and a Gaussian noise term. The synaptic input to neuron <italic>j</italic> is described by Tsodyks&#x02019; dynamical synapse model (Tsodyks et al., <xref ref-type="bibr" rid="B34">2000</xref>) as</p>
<disp-formula id="E9"><label>(9)</label><mml:math id="M13"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>s</mml:mi><mml:mi>y</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mstyle><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>The parameter <italic>A<sub>ij</sub></italic> accounts for the strength and sign (positive for excitatory, negative for inhibitory) of the synapse whose presynaptic cell is <italic>i</italic> and postsynaptic cell <italic>j</italic> &#x02013; note the permuted roles of <italic>i</italic> and <italic>j</italic> compared to those in Tsodyks et al. (<xref ref-type="bibr" rid="B34">2000</xref>). The variable <italic>y<sub>ij</sub></italic> represents the fraction of synaptic resources in the active state and obeys the following dynamics:</p>
<disp-formula id="E10"><label>(10)</label><mml:math id="M14"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mfrac><mml:mi>z</mml:mi><mml:mrow><mml:msub><mml:mi>&#x003C4;</mml:mi><mml:mrow><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>&#x02212;</mml:mo><mml:mi>u</mml:mi><mml:mi>x</mml:mi><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mi>y</mml:mi><mml:mrow><mml:msub><mml:mi>&#x003C4;</mml:mi><mml:mi>I</mml:mi></mml:msub></mml:mrow></mml:mfrac><mml:mo>+</mml:mo><mml:mi>u</mml:mi><mml:mi>x</mml:mi><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>z</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mfrac><mml:mi>y</mml:mi><mml:mrow><mml:msub><mml:mi>&#x003C4;</mml:mi><mml:mi>I</mml:mi></mml:msub></mml:mrow></mml:mfrac><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mi>z</mml:mi><mml:mrow><mml:msub><mml:mi>&#x003C4;</mml:mi><mml:mrow><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Variables <italic>x</italic> and <italic>z</italic> are the fractions of synaptic resources in the recovered and inactive states, respectively, and &#x003C4;<italic><sub>rec</sub></italic> and &#x003C4;<italic><sub>I</sub></italic> are synaptic model parameters. The time instant <italic>t<sub>sp</sub></italic> stands for a spike time of the presynaptic cell; the spike causes a fraction <italic>u</italic> of the recovered resources to become active. For excitatory synapses the fraction <italic>u</italic> is a constant <italic>U</italic>, whereas for inhibitory synapses the dynamics of the fraction <italic>u</italic> is described by</p>
<disp-formula id="E11"><label>(11)</label><mml:math id="M15"><mml:mrow><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mi>u</mml:mi><mml:mrow><mml:msub><mml:mi>&#x003C4;</mml:mi><mml:mrow><mml:mi>f</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>+</mml:mo><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>To solve the differential equations we apply the Euler method to Eqs <xref ref-type="disp-formula" rid="E6">6</xref> and <xref ref-type="disp-formula" rid="E7">7</xref> and exact integration (see e.g., Rotter and Diesmann, <xref ref-type="bibr" rid="B30">1999</xref>) to Eqs <xref ref-type="disp-formula" rid="E10">10</xref> and <xref ref-type="disp-formula" rid="E11">11</xref>. The simulation setup for the activity model described above is discussed further in Section <xref ref-type="sec" rid="s1">3.1</xref>. Values of the model parameters and the initial conditions of the model are given in Appendix 6.2. Figure <xref ref-type="fig" rid="F2">2</xref> illustrates typical population spike trains of the different network classes with connection probability 0.1, and a magnified view of one of their bursts.</p>
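<p>The following Python sketch (an illustration under our own naming and step-size assumptions, not the authors' simulation code) shows one forward-Euler update of Eqs <xref ref-type="disp-formula" rid="E6">6</xref> and <xref ref-type="disp-formula" rid="E7">7</xref>, and an exact-integration propagator for the linear, between-spike part of Eq. <xref ref-type="disp-formula" rid="E10">10</xref>; Eq. <xref ref-type="disp-formula" rid="E11">11</xref> can be treated analogously, and on a presynaptic spike a fraction <italic>u</italic> of the recovered resources is moved from <italic>x</italic> to <italic>y</italic>.</p>
<preformat><![CDATA[
import numpy as np
from scipy.linalg import expm

def izhikevich_step(v, r, I, a, b, c, d, dt=0.5):
    """One forward-Euler step of Eqs 6-7 (dt in ms, illustrative).
    Returns the updated (v, r) and a spike flag; the reset of Eq. 7
    is applied when v reaches the 30 mV threshold."""
    v_new = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - r + I)
    r_new = r + dt * a * (b * v - r)
    spiked = v_new >= 30.0
    if spiked:
        v_new, r_new = c, r_new + d
    return v_new, r_new, spiked

def tsodyks_propagator(dt, tau_rec, tau_I):
    """Exact-integration propagator of the linear part of Eq. 10 for
    the state vector s = [x, y, z]; between spikes, s <- P @ s."""
    A = np.array([[0.0,  0.0,          1.0 / tau_rec],
                  [0.0, -1.0 / tau_I,  0.0          ],
                  [0.0,  1.0 / tau_I, -1.0 / tau_rec]])
    return expm(A * dt)

def synaptic_spike(s, u):
    """Spike-triggered update of Eq. 10: a fraction u of the
    recovered resources x becomes active."""
    x, y, z = s
    return np.array([x - u * x, y + u * x, z])
]]></preformat>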
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>An example of population spike trains of four different networks with a selected burst magnified</bold>. The distance-dependence factor <italic>W</italic>&#x02009;&#x0003D;&#x02009;1 is used in the PLCN network. <bold>(A)</bold>: The full population spike train, <bold>(B&#x02013;D)</bold>: The spiking pattern of the selected burst illustrated by different orderings of the neurons, and <bold>(E)</bold>: The selected region in <bold>(D)</bold> magnified. In <bold>(B)</bold> the neurons are primarily ordered by their type and secondarily by their location in the grid such that the lower spike trains represent the excitatory neurons and the upper spike trains the inhibitory neurons. In <bold>(C)</bold> the neurons are ordered by the time of their first spike in the selected burst, i.e., the lower the spike train is, the earlier its first spike occurred. In <bold>(D)</bold> the neurons are ordered purely by their location in the grid.</p></caption>
<graphic xlink:href="fncom-05-00026-g002.tif"/>
</fig>
</sec>
<sec>
<label>2.2.2</label>
<title>Synchronicity analysis</title>
<p>Given a population spike train, we follow the network burst detection procedure presented in Chiappalone et al. (<xref ref-type="bibr" rid="B7">2006</xref>), but using a minimum spike count minSpikes&#x02009;&#x0003D;&#x02009;400 and a maximal interspike interval ISI&#x02009;&#x0003D;&#x02009;10&#x02009;ms. Once the starting and ending times of the burst are identified, the spike train data of the burst are smoothed using a Gaussian window with deviation 2.5&#x02009;ms to obtain a continuous curve, as shown in Figure <xref ref-type="fig" rid="F3">3</xref>. The shape of the burst can be assessed with three statistics that are based on this curve: the maximum firing rate (mFr), the half-width of the rising slope (Rs), and the half-width of the falling slope (Fs) (Gritsun et al., <xref ref-type="bibr" rid="B13">2010</xref>).</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>Illustration of the meaning of variables mFr, Rs, and Fs</bold>. (as defined in Gritsun et al., <xref ref-type="bibr" rid="B13">2010</xref>).</p></caption>
<graphic xlink:href="fncom-05-00026-g003.tif"/>
</fig>
<p>In addition to the network burst analysis, we estimate the cross-correlations between spike trains of two neurons belonging to the same network. We follow the method presented in Shadlen and Newsome (<xref ref-type="bibr" rid="B31">1998</xref>), where the cross-correlation coefficient (CC) between spike trains of neurons <italic>j</italic> and <italic>k</italic> is defined as <inline-formula><mml:math id="M16"><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle scriptlevel='+1'><mml:mfrac><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msqrt><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msqrt></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>.</mml:mo></mml:mrow></mml:math></inline-formula> Here, the variable <italic>A<sub>jk</sub></italic> represents the area below the cross-correlogram and is computed as</p>
<disp-formula id="E12"><label>(12)</label><mml:math id="M17"><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mn>100</mml:mn></mml:mrow><mml:mrow><mml:mn>100</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msub><mml:mo>&#x003BB;</mml:mo><mml:mi>j</mml:mi></mml:msub><mml:msub><mml:mo>&#x003BB;</mml:mo><mml:mi>&#x003BA;</mml:mi></mml:msub></mml:mrow></mml:mfrac><mml:mstyle displaystyle='true'><mml:mrow><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>i</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>i</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x003C4;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mstyle></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mo>&#x00398;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003C4;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mstyle><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>The variable <italic>x<sub>j</sub></italic>(<italic>i</italic>) is 1 for the presence and 0 for the absence of a spike in the <italic>i</italic>th time bin of the spike train of neuron <italic>j</italic>, &#x003BB;<italic><sub>j</sub></italic> is the mean value of <italic>x<sub>j</sub></italic>(<italic>i</italic>) averaged over <italic>i</italic>, and <italic>T</italic> is the total number of time bins. The running variable &#x003C4; is the time lag between the two compared signals, and the weighting function &#x00398; is chosen to be triangular, &#x00398;(&#x003C4;)&#x02009;&#x0003D;&#x02009;<italic>T</italic>&#x02009;&#x02212;&#x02009;|&#x003C4;|.</p>
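<p>For concreteness, the cross-correlation coefficient can be computed as in the Python sketch below. This is our own illustration; in particular, the zero-padded treatment of bins shifted beyond the recording edges is an assumption, and both spike trains are assumed to contain at least one spike.</p>
<preformat><![CDATA[
import numpy as np

def cross_correlation_coefficient(x_j, x_k, max_lag=100):
    """CC_jk of two binned spike trains, per Eq. 12.

    x_j, x_k : binary arrays; 1 marks a spike in a time bin.
    """
    def area(a, b):
        lam_a, lam_b, T = a.mean(), b.mean(), len(a)
        total = 0.0
        for tau in range(-max_lag, max_lag + 1):
            # Raw cross-correlogram at lag tau (zero-padded shift).
            if tau >= 0:
                raw = np.dot(a[:T - tau], b[tau:]) if tau < T else 0.0
            else:
                raw = np.dot(a[-tau:], b[:T + tau])
            # Subtract the triangular weighting Theta(tau) = T - |tau|.
            total += raw / (lam_a * lam_b) - (T - abs(tau))
        return total

    return area(x_j, x_k) / np.sqrt(area(x_j, x_j) * area(x_k, x_k))
]]></preformat>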
</sec>
</sec>
<sec id="s5">
<label>2.3</label>
<title>Information diversity as a measure of data complexity</title>
<p>The complexity of different types of data and systems has been studied in numerous scientific disciplines, but no standard measure for it has been agreed upon. The most widely used measures are probably Shannon information (entropy) and the theoretical KC. Shannon information measures the information of a distribution, and is thus based on the underlying distribution of the observed random variable realizations. Unlike Shannon information, KC is not based on statistical properties but on the information content of the object itself (Li and Vitanyi, <xref ref-type="bibr" rid="B22">1997</xref>). Hence, KC can be defined without considering the origin of an object. This makes it more attractive for the proposed studies, as we can consider the information in individual network structures and their dynamics. The KC <italic>C</italic>(<italic>x</italic>) of a finite object <italic>x</italic> is defined as the length of the shortest binary program that, with no input, outputs <italic>x</italic> on a universal computer. Thereby, it is the minimum amount of information that is needed in order to generate <italic>x</italic>. Unfortunately, in practice this quantity is not computable (Li and Vitanyi, <xref ref-type="bibr" rid="B22">1997</xref>). While the exact computation of KC is not possible, an upper bound can be estimated using lossless compression. We utilize this approach to obtain approximations for KC.</p>
<p>In this work we study the complexity of an object by means of the diversity of the information it carries. The objects of our research are the structure of a neuronal network and the dynamics it produces. There are numerous existing measures for the complexity of a network (Neel and Orrison, <xref ref-type="bibr" rid="B24">2006</xref>), and a few measures exist also for the complexity of the output of a neuronal network (Rapp et al., <xref ref-type="bibr" rid="B29">1994</xref>), but no measure of complexity that could be used for both structure and dynamics has &#x02013; to the best of our knowledge &#x02013; been studied. To study the complexity of the structure we consider the connectivity matrix that represents the network, while for the complexity of the network activity we study the spike trains representing spontaneous activity in the neuronal network. We apply the same measure for assessing complexity in both structure and dynamics.</p>
<sec id="s2">
<label>2.3.1</label>
<title>Inferring complexity from NCD distribution</title>
<p>We use the NCD presented in Li et al. (<xref ref-type="bibr" rid="B21">2004</xref>) as a measure of information distance between two arbitrary strings. The NCD is a computable approximation of an information distance based on KC. The NCD between strings <italic>x</italic> and <italic>y</italic> is defined by</p>
<disp-formula id="E13"><label>(13)</label><mml:math id="M18"><mml:mrow><mml:mtext>NCD</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mi>y</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mtext>min</mml:mtext><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>y</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mtext>max</mml:mtext><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>y</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where <italic>C</italic>(<italic>x</italic>) and <italic>C</italic>(<italic>y</italic>) are the lengths of the strings <italic>x</italic> and <italic>y</italic> when compressed &#x02013; serving as approximations of the KCs of the respective strings &#x02013; and <italic>C</italic>(<italic>xy</italic>) is that of the concatenation of strings <italic>x</italic> and <italic>y</italic>. In our study we use standard lossless compression algorithms for data compression<xref ref-type="fn" rid="fn1"><sup>1</sup></xref>.</p>
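<p>A minimal Python implementation of Eq. <xref ref-type="disp-formula" rid="E13">13</xref> is sketched below. The choice of compressor (here bz2 from the standard library) is our assumption; the definition only requires a standard lossless compression algorithm.</p>
<preformat><![CDATA[
import bz2

def C(s):
    """Compressed length of byte string s: an upper-bound
    approximation of its Kolmogorov complexity."""
    return len(bz2.compress(s))

def ncd(x, y):
    """Normalized compression distance between byte strings, Eq. 13."""
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)
]]></preformat>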
<p>The NCD has recently been used, based on a statistical approach, in addressing the question of whether one set of data is as complex as another (Emmert-Streib and Scalas, <xref ref-type="bibr" rid="B9">2010</xref>). In another study (Galas et al., <xref ref-type="bibr" rid="B11">2010</xref>), the complexity of a set of strings is estimated using a notion of context-dependence, also assessable by means of the NCD. We follow the latter framework and aim at estimating the complexity of an independent set of data &#x02013; in our study, this set of data is either a set of connectivity patterns or a set of spike trains. In Galas et al. (<xref ref-type="bibr" rid="B11">2010</xref>) the <italic>set complexity</italic> measure is introduced; it can be formulated as</p>
<disp-formula id="E14"><label>(14)</label><mml:math id="M19"><mml:mrow><mml:mo mathvariant='bold'>&#x003A8;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mi>S</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:mrow><mml:mi>C</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mstyle><mml:mstyle displaystyle='true'><mml:mrow><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>&#x02260;</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:munder><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>N</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>N</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>g</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mstyle><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>To calculate the set complexity <bold>&#x003A8;</bold> one has to approximate the KCs of all strings <italic>x<sub>i</sub></italic> in the set <italic>S</italic>&#x02009;&#x0003D;&#x02009;{<italic>x</italic><sub>1</sub>,&#x02026;,<italic>x<sub>N</sub></italic>} and the NCDs <italic>d<sub>ij</sub></italic>&#x02009;&#x0003D;&#x02009;NCD(<italic>x<sub>i</sub></italic>,<italic>x<sub>j</sub></italic>) between the strings. The functions <italic>f</italic> and <italic>g</italic> of the NCD values are continuous on the interval [0,1], such that <italic>f</italic> reaches zero at 1 and <italic>g</italic> reaches zero at 0.</p>
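<p>Structurally, Eq. <xref ref-type="disp-formula" rid="E14">14</xref> can be read as the Python sketch below (reusing the functions C and ncd from the previous sketch). The weighting functions <italic>f</italic> and <italic>g</italic> are deliberately left as arguments, since any concrete choice beyond <italic>f</italic>(1)&#x02009;&#x0003D;&#x02009;0 and <italic>g</italic>(0)&#x02009;&#x0003D;&#x02009;0 would be an assumption on our part.</p>
<preformat><![CDATA[
def set_complexity(strings, f, g):
    """Set complexity Psi(S) of Eq. 14 for a list of byte strings.
    f and g weight each pairwise NCD value d_ij; Eq. 14 only fixes
    f(1) = 0 and g(0) = 0."""
    N = len(strings)
    total = 0.0
    for i in range(N):
        for j in range(N):
            if j == i:
                continue
            d = ncd(strings[i], strings[j])
            total += C(strings[i]) * f(d) * g(d) / (N * (N - 1))
    return total
]]></preformat>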
<p>In this study we, for reasons to follow, diverge from this definition. We define the complexity of a set of data as the <italic>magnitude of variation</italic> of the NCD between its elements: the wider the spread of NCD values, the more diverse the set is considered to be. That is, a complex set is thought to include pairs of elements that are close to each other from an information distance point of view, pairs of elements that are far from each other, and pairs whose distance is somewhere in between.</p>
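<p>The following sketch illustrates this notion, again reusing ncd from above. Taking the standard deviation as the magnitude of variation is one concrete choice among several, made here only for illustration; the NCD distribution itself can equally be inspected through its range or quantiles.</p>
<preformat><![CDATA[
import itertools
import numpy as np

def ncd_spread(strings):
    """Spread of the pairwise NCD values within a set of byte
    strings; a wider spread is read as a more diverse set."""
    d = [ncd(x, y) for x, y in itertools.combinations(strings, 2)]
    return np.std(d)
]]></preformat>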
<p>Although the variation of the NCD by no means captures all the properties that are required of a complexity measure and fulfilled by the set complexity <bold>&#x003A8;</bold>, it avoids the difficulty of determining the functions <italic>f</italic> and <italic>g</italic> in Eq. <xref ref-type="disp-formula" rid="E14">14</xref>. Let us consider this in more detail from the point of view that we do not know what the functions <italic>f</italic> and <italic>g</italic> should look like &#x02013; which is indeed the case, apart from the knowledge that they have roots at 1 and 0, respectively. Suppose we have two finite sets of strings, <inline-formula><mml:math id="M20"><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo> <mml:mrow><mml:msubsup><mml:mi>s</mml:mi><mml:mn>1</mml:mn><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mi>s</mml:mi><mml:mi>n</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup></mml:mrow> <mml:mo>}</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="M21"><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo> <mml:mrow><mml:msubsup><mml:mi>s</mml:mi><mml:mn>1</mml:mn><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>2</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mi>s</mml:mi><mml:mi>m</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>2</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup></mml:mrow> <mml:mo>}</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></inline-formula> Denote the NCDs between the strings of set <italic>S</italic><sub>1</sub> by <inline-formula><mml:math id="M22"><mml:mrow><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup><mml:mo>&#x02208;</mml:mo><mml:mi>&#x0211A;</mml:mi><mml:mo>,</mml:mo></mml:mrow></mml:math></inline-formula> where <italic>i</italic>,<italic>j</italic>&#x02009;&#x02208;&#x02009;{1,&#x02026;,<italic>n</italic>}, and accordingly, let <inline-formula><mml:math id="M23"><mml:mrow><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>2</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> be the NCDs between the strings of set <italic>S</italic><sub>2</sub>. 
If any of the NCDs <inline-formula><mml:math id="M24"><mml:mrow><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> (<italic>i</italic>&#x02260;<italic>j</italic>) is unique in the sense that it is unequal to all NCDs <inline-formula><mml:math id="M25"><mml:mrow><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>2</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> (<italic>k</italic>&#x02260;<italic>l</italic>), then we find an &#x003B5;-neighborhood <inline-formula><mml:math id="M26"><mml:mrow><mml:msub><mml:mi>B</mml:mi><mml:mi>&#x003B5;</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula> that contains an NCD value of <italic>S</italic><sub>1</sub> but none of those of <italic>S</italic><sub>2</sub>. Thereby, we can choose the functions <italic>f</italic> and <italic>g</italic> such that the value of <italic>f</italic>&#x02009;&#x000D7;&#x02009;<italic>g</italic> is arbitrarily large at <inline-formula><mml:math id="M27"><mml:mrow><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> and arbitrarily small outside <inline-formula><mml:math id="M28"><mml:mrow><mml:msub><mml:mi>B</mml:mi><mml:mi>&#x003B5;</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo></mml:mrow></mml:math></inline-formula> leading to <bold>&#x003A8;</bold>(<italic>S</italic><sub>1</sub>)&#x02009;&#x0003E;&#x02009;<bold>&#x003A8;</bold>(<italic>S</italic><sub>2</sub>). 
We can generalize this to the case of any finite number of sets <italic>S</italic><sub>1</sub>,&#x02026;,<italic>S<sub>N</sub></italic>: if for set <italic>S<sub>I</sub></italic>, <italic>I</italic>&#x02009;&#x02208;&#x02009;{1,&#x02026;,<italic>N</italic>}, there is an NCD value <inline-formula><mml:math id="M29"><mml:mrow><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>I</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> (<italic>i</italic>&#x02009;&#x02260;&#x02009;<italic>j</italic>) that is unequal to all other NCD values <inline-formula><mml:math id="M30"><mml:mrow><mml:msubsup><mml:mi>d</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>J</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> (<italic>J</italic>&#x02009;&#x02260;&#x02009;<italic>I</italic>, <italic>k</italic>&#x02009;&#x02260;&#x02009;<italic>l</italic>), then the functions <italic>f</italic> and <italic>g</italic> can be configured such that &#x02200;<italic>J</italic>&#x02009;&#x02260;&#x02009;<italic>I</italic>:<bold>&#x003A8;</bold>(<italic>S<sub>I</sub></italic>)&#x02009;&#x0003E;&#x02009;<bold>&#x003A8;</bold>(<italic>S<sub>J</sub></italic>). Hence, the lack of knowledge of the functions <italic>f</italic> and <italic>g</italic> severely restricts the eligibility of the set complexity <bold>&#x003A8;</bold> as such.</p>
<p>What is common to the proposals for <italic>f</italic> and <italic>g</italic> presented in Galas et al. (<xref ref-type="bibr" rid="B11">2010</xref>) is that the product function <italic>f</italic>&#x02009;&#x000D7;&#x02009;<italic>g</italic> forms only one peak in the domain [0,1]. The crucial question is where this peak should be located &#x02013; ultimately, this is the same as asking where the boundary between &#x0201C;random&#x0201D; and &#x0201C;ordered&#x0201D; sets of data lies. Adopting the wideness of the NCD distribution as a measure of data complexity is a way to bypass this problem: the wider the spread of NCD values, the more likely it is that some of the NCD values produce large values of <italic>f</italic>&#x02009;&#x000D7;&#x02009;<italic>g</italic>. Yet, difficulties arise when assigning a rigorous meaning to the &#x0201C;wideness&#x0201D; or &#x0201C;magnitude of variation&#x0201D; of the NCD distribution. In the present work the calculated NCD distributions are seemingly unimodal; we therefore use the <italic>standard deviation</italic> of the NCD distribution as the measure of the complexity of the set.</p>
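<p>To make the measure concrete, the following minimal sketch computes this set complexity for a set of binary strings. It is written in Python; the standard-library zlib compressor is used purely as a convenient stand-in for the 7zip compressor employed in this work, and the NCD is the standard form NCD(<italic>x</italic>,<italic>y</italic>)&#x02009;&#x0003D;&#x02009;(C(<italic>xy</italic>)&#x02009;&#x02212;&#x02009;min{C(<italic>x</italic>),C(<italic>y</italic>)})/max{C(<italic>x</italic>),C(<italic>y</italic>)}:</p>
<preformat>
import itertools
import zlib
from statistics import pstdev

def c(s: bytes) -> int:
    # Approximate the Kolmogorov complexity of s by its compressed length.
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance between the strings x and y.
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def set_complexity(strings) -> float:
    # Complexity of a set of strings: the standard deviation of the
    # NCDs between all pairs of distinct elements.
    distances = [ncd(a, b) for a, b in itertools.combinations(strings, 2)]
    return pstdev(distances)
</preformat>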
</sec>
<sec>
<label>2.3.2</label>
<title>Data representation for complexity analysis</title>
<p>Two different data analysis approaches to studying the complexity are possible (Emmert-Streib and Scalas, <xref ref-type="bibr" rid="B9">2010</xref>): one can assess (1) the complexity of the process that produces a data realization, or (2) the complexity of the data realization itself. In this study we apply approach (2) to estimate both the complexity of structure and the complexity of dynamics. To study the complexity in the context-dependent manner described in Section <xref ref-type="sec" rid="s2">2.3.1</xref> we divide the data into elements and represent each element as a string. For the structure, the rows of the connectivity matrix are read into strings, i.e., each string <italic>s</italic> shows the out-connection pattern of the corresponding neuron, with <italic>s<sub>i</sub></italic>&#x02009;&#x0003D;&#x02009;&#x0201C;0&#x0201D; if there is no output to neuron <italic>i</italic> and <italic>s<sub>i</sub></italic>&#x02009;&#x0003D;&#x02009;&#x0201C;1&#x0201D; if there is one. The NCDs are approximated between these strings. In order to compute the NCD of the dynamics, every spike train is converted into a binary sequence: each discrete time step is assigned a one if a spike is present in that time slot, and a zero otherwise. For example, the string &#x0201C;0000000000100101000&#x0201D; corresponds to a case where a neuron is at first silent and then spikes at times around 10&#x00394;<italic>t</italic>, 13&#x00394;<italic>t</italic>, and 15&#x00394;<italic>t</italic>, where &#x00394;<italic>t</italic> is the sampling interval. For the compression of strings we use the general purpose data compression algorithm 7zip<xref ref-type="fn" rid="fn2"><sup>2</sup></xref>. The compressor parameters and the motivation for this particular compression method are given in Appendix 6.3.</p>
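<p>As an illustration, the two conversions can be sketched as follows (Python; the helper names are ours and hypothetical, and the 0.5-ms sampling interval follows Section 3.1):</p>
<preformat>
def row_to_string(connectivity_row) -> bytes:
    # Out-connection pattern of one neuron as a "0"/"1" string.
    return b"".join(b"1" if entry else b"0" for entry in connectivity_row)

def spike_train_to_string(spike_times_ms, duration_ms, dt_ms=0.5) -> bytes:
    # Binarize a spike train: one character per sampling interval,
    # "1" if the interval contains a spike and "0" otherwise.
    n_bins = int(round(duration_ms / dt_ms))
    bits = bytearray(b"0" * n_bins)
    for t in spike_times_ms:
        bits[min(int(t / dt_ms), n_bins - 1)] = ord("1")
    return bytes(bits)
</preformat>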
</sec>
</sec>
</sec>
<sec>
<label>3</label>
<title>Results</title>
<sec id="s1">
<label>3.1</label>
<title>Simulation setup</title>
<p>In the present paper we study both structural and dynamical properties of networks of <italic>N</italic>&#x02009;&#x0003D;&#x02009;1600 neurons. Regarding the choice of the structure of the neuronal networks, we base our approach on the growth properties of the networks produced by the NETMORPH simulator. As a trade-off between biological realism and ease of comparison to other types of networks, we set the initial cell positions on a two-dimensional regular 40&#x02009;&#x000D7;&#x02009;40 grid. The present work does not use periodic boundaries, i.e., the physical distance between the neurons is the standard Euclidean distance. The distance between adjacent neurons is set to &#x02248;25&#x02009;&#x003BC;m, chosen such that the density of neurons corresponds to one of the culture densities examined in Wagenaar et al. (<xref ref-type="bibr" rid="B37">2006</xref>) (1600&#x02009;cells/mm<sup>2</sup>). Figure <xref ref-type="fig" rid="F4">4</xref> shows the average connection probability in a NETMORPH network as a function of time, where the average is taken over 16 simulation realizations. The standard deviation of the connection probability between different realizations is very small (&#x0003C;0.002), hence only mean values are plotted here.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Connection probability as a function of time in the structure of networks generated by the NETMORPH simulator</bold>.</p></caption>
<graphic xlink:href="fncom-05-00026-g004.tif"/>
</fig>
<p>The main emphasis throughout this article is on the connection probabilities <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.02, 0.05, 0.1, and 0.16, which, according to Figure <xref ref-type="fig" rid="F4">4</xref>, correspond to days 8, 11, 15, and 19 <italic>in vitro</italic>. The selected range of days <italic>in vitro</italic> is commonly considered in experimental studies of neuronal cultures. The connection probabilities 0.1 and 0.16, reached at 15 and 19 DIV, respectively, are in accordance with experimental studies that place the connectivity of a mature network at 10&#x02013;30% (Marom and Shahaf, <xref ref-type="bibr" rid="B23">2002</xref>).</p>
<p>The other networks considered are generated by Algorithm 1 using the abovementioned connection probabilities. The distance-dependence factors for these networks are chosen as <italic>W</italic>&#x02009;&#x0003D;&#x02009;0, 0.5, 1, 2, 4, 10, and &#x0221E;; an illustrative sketch of such distance-dependent wiring follows.</p>
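<p>Algorithm 1 itself is specified in Section 2.1. Purely as an illustration of how a distance-dependence factor can interpolate between RN and LCN, the following Python sketch (not a reproduction of Algorithm 1) draws the in-degree of each neuron from a binomial distribution, as noted in Section 3.2, and then selects the presynaptic sources with weights proportional to <italic>d</italic><sup>&#x02212;<italic>W</italic></sup>, so that <italic>W</italic>&#x02009;&#x0003D;&#x02009;0 yields a uniform choice and large <italic>W</italic> favors the spatially nearest sources:</p>
<preformat>
import numpy as np

rng = np.random.default_rng(1)

def generate_network(positions, p, W):
    # positions: float array of shape (N, 2), e.g., a 40 x 40 grid with
    # 25-um spacing.  Returns M with M[j, i] = 1 for a connection j -> i.
    n = len(positions)
    M = np.zeros((n, n), dtype=np.uint8)
    for i in range(n):
        k = rng.binomial(n - 1, p)              # binomial in-degree
        d = np.linalg.norm(positions - positions[i], axis=1)
        with np.errstate(divide="ignore"):
            w = d ** (-float(W))                # distance-dependent weight
        w[i] = 0.0                              # no self-connections
        sources = rng.choice(n, size=k, replace=False, p=w / w.sum())
        M[sources, i] = 1
    return M
</preformat>
<p>For <italic>W</italic>&#x02009;&#x0003D;&#x02009;&#x0221E; (LCN) the weighting degenerates; that case corresponds to deterministically picking the spatially nearest sources.</p>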
<p>The spiking activity in the abovementioned networks is studied by simulating the time series of the <italic>N</italic>&#x02009;&#x0003D;&#x02009;1600 individual neurons according to Section <xref ref-type="sec" rid="s3">2.2</xref>. The connectivity matrix of the modeled network defines which synaptic variables <italic>y<sub>ij</sub></italic> need to be modeled: the synaptic weights <italic>A<sub>ij</sub></italic> are non-zero only for non-zero connectivity matrix entries <italic>M<sub>ij</sub></italic>, hence for (<italic>i</italic>,<italic>j</italic>) such that <italic>M<sub>ij</sub></italic>&#x02009;&#x0003D;&#x02009;0 the synaptic variables <italic>y<sub>ij</sub></italic> can be ignored in Eq. <xref ref-type="disp-formula" rid="E9">9</xref>. In this article we disallow multiple synapses from one neuron to another. Throughout the paper the fraction of inhibitory neurons, picked at random, is fixed to 25%, and the sampling interval is fixed to 0.5&#x02009;ms.</p>
</sec>
<sec>
<label>3.2</label>
<title>Network structure classes differ in their graph theoretic properties</title>
<p>We first study the structural properties of the network classes presented above by means of the measures introduced in Section <xref ref-type="sec" rid="s4">2.1.2</xref>. The in-degree distributions of the networks generated by Algorithm 1 are always binomial, whereas the out-degree distributions vary. Empirically calculated out-degree distributions, path length distributions, and local clustering coefficient distributions are shown in Figure <xref ref-type="fig" rid="F5">5</xref>, together with the respective NETMORPH distributions.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>Out-degree, path length and clustering coefficient distributions plotted for different network classes and different connection probabilities</bold>. The power coefficient <italic>W</italic>&#x02009;&#x0003D;&#x02009;1 is used for PLCN networks. The mean path lengths and mean clustering coefficients are shown in association with the respective curves.</p></caption>
<graphic xlink:href="fncom-05-00026-g005.tif"/>
</fig>
<p>One can observe an increase in both the mean path length and the mean clustering coefficient with increasing distance-dependence factor <italic>W</italic>, i.e., when moving from RN toward LCN. The out-degree distributions of the NETMORPH networks are wider than those of any other type of network, but in terms of the width and mean of the path length and clustering coefficient distributions the NETMORPH networks always lie somewhere between RN and LCN.</p>
</sec>
<sec>
<label>3.3</label>
<title>Networks with different structure show variation in bursting behavior</title>
<p>For the dynamics, we simulate 61&#x02009;s of spike train recordings using the models described in Section <xref ref-type="sec" rid="s3">2.2</xref> and the model parameters described in Appendix 6.2. In all of our simulations a network burst occurs at the very beginning due to the transition into a steady state, which is why we ignore the first second of each simulation. We simulate a set of spike trains for the structure classes <italic>W</italic>&#x02009;&#x0003D;&#x02009;0, 1, &#x0221E; and NETMORPH using the connection probabilities <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.02, 0.05, 0.1, and 0.16. The average connection weight and all other model parameters stay constant; only the connectivity matrix varies between the different structure classes and connection probabilities. In the case of <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.02 none of the networks shows bursting behavior; for <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.05 a burst emerges in about one out of three 1-min simulations, while for <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.1 and <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.16 there are bursts in every 1-min recording. Table <xref ref-type="table" rid="T1">1</xref> shows the acquired mean bursting frequencies &#x02013; they are comparable to the ones obtained in Gritsun et al. (<xref ref-type="bibr" rid="B13">2010</xref>). In the following we concentrate on the two larger connection probabilities.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>Bursting rates of networks of different structure classes</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<td align="left"/>
<th align="left"><italic>W</italic>&#x02009;&#x0003D;&#x02009;0</th>
<th align="left"><italic>W</italic>&#x02009;&#x0003D;&#x02009;1</th>
<th align="left"><italic>W</italic>&#x02009;&#x0003D;&#x02009;&#x0221E;</th>
<th align="left">NETMORPH</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><italic>p</italic>&#x02009;&#x0003D;&#x02009;0.1</td>
<td align="char" char="&#x000B1;">4.1&#x02009;&#x000B1;&#x02009;1.3</td>
<td align="char" char="&#x000B1;">7.5&#x02009;&#x000B1;&#x02009;2.0</td>
<td align="char" char="&#x000B1;">13.3&#x02009;&#x000B1;&#x02009;0.9</td>
<td align="char" char="&#x000B1;">10.7&#x02009;&#x000B1;&#x02009;2.2</td>
</tr>
<tr>
<td align="left"><italic>p</italic>&#x02009;&#x0003D;&#x02009;0.16</td>
<td align="char" char="&#x000B1;">16.4&#x02009;&#x000B1;&#x02009;1.2</td>
<td align="char" char="&#x000B1;">17.6&#x02009;&#x000B1;&#x02009;1.2</td>
<td align="char" char="&#x000B1;">19.8&#x02009;&#x000B1;&#x02009;2.0</td>
<td align="char" char="&#x000B1;">19.0&#x02009;&#x000B1;&#x02009;1.6</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Values shown: mean</italic>&#x02009;&#x000B1;&#x02009;<italic>SD in bursts/min, calculated from 16 different 1-min recordings per table entry</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>The difference between the intraburst patterns of the different networks can already be observed in the magnified burst images in Figure <xref ref-type="fig" rid="F2">2</xref>, particularly in 2C, where the effect of the location of the neuron in the grid is neglected. We quantify the difference by studying the following burst statistics: the spike count per burst (SC) and the three burst shape statistics defined above (mFr, Rs, and Fs). Figure <xref ref-type="fig" rid="F6">6</xref> shows the distributions of these statistics in activity simulations of the different network structure classes. The means as well as the medians of the three latter measures for both the NETMORPH and <italic>W</italic>&#x02009;&#x0003D;&#x02009;1 networks consistently fall between the two extremes, LCN and RN. The same does not hold for SC.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p><bold>Distributions of spike count/burst (SC, far left), maximum firing rate in a burst (mFr, middle-left), rising slope length (Rs, middle-right), and falling slope length (Fs, far right) for spike trains obtained using different structure classes and different connection probabilities (upper: 0.1; lower: 0.16)</bold>. The distance-dependence factor <italic>W</italic>&#x02009;&#x0003D;&#x02009;1 is used for PLCN. Histograms are smoothed using a Gaussian window with standard deviation&#x02009;&#x0003D;&#x02009;range of values/100.</p></caption>
<graphic xlink:href="fncom-05-00026-g006.tif"/>
</fig>
<p>We test the differences between the medians of the different structure classes using the <italic>U</italic>-test. The null hypothesis is that the medians of the two considered distributions in a panel of Figure <xref ref-type="fig" rid="F6">6</xref> are equal. The distributions of each measure (SC, mFr, Rs, Fs) and each network density are tested pairwise between the different network types. The test does not reject the equality of the medians of the Rs distributions of the NETMORPH network and the PLCN at connection probability 0.1 (<italic>p</italic>-value&#x02009;&#x0003D;&#x02009;0.59), but does reject it at connection probability 0.16 (<italic>p</italic>-value&#x02009;&#x0003D;&#x02009;1.7&#x02009;&#x000D7;&#x02009;10<sup>&#x02212;13</sup>). The same holds for the medians of the Fs distributions of these networks, the respective <italic>p</italic>-values being 0.20 and 0.0027. In all other cases the null hypothesis of equal medians between any two distributions of different structure classes can be rejected, as none of the <italic>p</italic>-values exceeds 0.002. The variances of the distributions are not tested, but one can observe that LCNs clearly produce the widest SC, Rs, and Fs distributions.</p>
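<p>For reference, such a pairwise test can be sketched as follows (Python with SciPy; the sample arrays are hypothetical placeholders for the pooled burst statistics of two structure classes):</p>
<preformat>
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
# Hypothetical rising-slope (Rs) samples for two structure classes;
# in this work they would come from the simulated 1-min recordings.
rs_netmorph = rng.gamma(shape=4.0, scale=5.0, size=400)
rs_plcn = rng.gamma(shape=4.0, scale=5.5, size=400)

stat, p = mannwhitneyu(rs_netmorph, rs_plcn, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.3g}")
# Reject the null hypothesis of equal medians when p falls below 0.05.
</preformat>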
</sec>
<sec>
<label>3.4</label>
<title>Complexity results in structure and dynamics</title>
<p>We start by studying simultaneously the KC of the rows of a connectivity matrix and the KC of the spike trains of the corresponding neurons. We generate a network for each structure class (<italic>W</italic>&#x02009;&#x0003D;&#x02009;0, 0.5, 1, 2, 4, 10, &#x0221E;; NETMORPH) and a population spike train recording for each of these networks. A set of 80 neurons is randomly picked from the <italic>N</italic>&#x02009;&#x0003D;&#x02009;1600 neurons, retaining the proportion of excitatory and inhibitory neurons. This data set is considered representative of the whole set of neurons. Figure <xref ref-type="fig" rid="F7">7</xref> shows approximations of the KCs of both structure and dynamics for the different structure classes and connection probabilities. The value of C(struc) is the length of a compressed row of the connectivity matrix, while C(dyn) is the length of the compressed spike train data of the corresponding neuron.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><bold>Kolmogorov complexity approximation of a spike train of a neuron (dyn) versus the KC approximation of the respective row of the connectivity matrix (struc)</bold>. <bold>(A,C)</bold>: The full spike train of a neuron is read into the string to be compressed. <bold>(B,D)</bold>: Intraburst segments of the spike train are read into separate strings (i.e., several &#x0201C;C(dyn)&#x0201D; values are possible for each &#x0201C;C(struc)&#x0201D; value). In <bold>(A,B)</bold> the connection probability of the networks is 0.1, in <bold>(C,D)</bold> it is 0.16. The markers &#x0201C;&#x0002B;&#x0201D; represent excitatory neurons, while &#x0201C;&#x000B7;&#x0201D; represents inhibitory ones. The ellipses drawn represent 67% of the probability mass of a Gaussian distribution with the same mean and covariance matrix as the plotted data.</p></caption>
<graphic xlink:href="fncom-05-00026-g007.tif"/>
</fig>
<p>Figure <xref ref-type="fig" rid="F7">7</xref> shows that the mean of the compression lengths of the full spike train data decreases when moving from local to random networks, while the mean compression length of the rows of the connectivity matrix increases. The rise in C(struc) is in accord with the fact that random strings maximize the KC of a string, whereas the decrease in the mean values of C(dyn) can be explained by the decrease in the number of bursts. A similar, though weaker, trend is visible when studying the KC of intraburst spike trains, but more notably, the range of values of C(dyn) seems to decrease when moving from local to random networks.</p>
<p>However, as pointed out earlier, the KC alone does not tell much about the diversity of the data set, only about the information content of each element of the set in isolation. We wish to address the question of to what extent the information in one element is repeated, or nearly repeated, in the other elements. We first analyze the structure and dynamics data using alternative measures, namely the Hamming distance (HD) and the cross-correlation coefficient (CC). HD counts the proportion of differing bits in two binary sequences: it equals zero for identical sequences and one for sequences that are complements of each other. The same elements as those in Figures <xref ref-type="fig" rid="F7">7</xref>A,C are analyzed using HD, i.e., the rows of the connectivity matrix and the spike trains of the corresponding neurons. The same number of 80 sample neurons is picked randomly, and HD is computed between the <inline-formula><mml:math id="M31"><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mfrac linethickness='0'><mml:mn>80</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>3160</mml:mn></mml:mrow></mml:math></inline-formula> pairs of neurons. In addition, the dynamics is analyzed using the CC between pairs of spike trains. The CC measures the similarity between two spike trains and is capable of capturing time shifts between the signals. Hence the CC serves as an extension of the HD, or rather of its inverse (as cross-correlation measures similarity while HD measures dissimilarity). Figure <xref ref-type="fig" rid="F8">8</xref> shows the distribution of HD computed for both structure and dynamics (Figures <xref ref-type="fig" rid="F8">8</xref>A,C) and that of CC computed for dynamics versus HD computed for structure (Figures <xref ref-type="fig" rid="F8">8</xref>B,D).</p>
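<p>Before turning to those results, a minimal sketch of these two reference measures is given below (Python; the exact CC definition used in this work is given earlier in the methods, so the maximum over a &#x000B1;50-ms lag window taken here should be read as one plausible variant, not the paper&#x02019;s formula):</p>
<preformat>
import numpy as np

def hamming_distance(a, b):
    # Proportion of differing bits: 0 for identical binary sequences,
    # 1 for sequences that are complements of each other.
    a, b = np.asarray(a), np.asarray(b)
    return float(np.mean(a != b))

def cc_coefficient(a, b, max_lag=100):
    # Normalized cross-correlation of two equal-length spike vectors,
    # maximized over lags within +/- max_lag bins
    # (100 bins of 0.5 ms = +/- 50 ms).
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = (a - a.mean()) / (a.std() * np.sqrt(len(a)))
    b = (b - b.mean()) / (b.std() * np.sqrt(len(b)))
    full = np.correlate(a, b, mode="full")
    mid = len(full) // 2                 # zero-lag index
    return float(full[mid - max_lag : mid + max_lag + 1].max())
</preformat>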
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p><bold>(A,C):</bold> Hamming distances between spike trains (dyn) versus the HD between the corresponding rows of connectivity matrix (struc). <bold>(B,D)</bold>: Cross-correlation coefficient between spike trains (CC) versus the HD between the corresponding rows of connectivity matrix. In all panels the markers &#x0201C;&#x0002B;&#x0201D; represent comparisons of two excitatory neurons, &#x0201C;x&#x0201D; are comparisons between excitatory and inhibitory neurons, and &#x0201C;&#x000B7;&#x0201D; are comparisons between two inhibitory neurons. The distance-dependence factor <italic>W</italic>&#x02009;&#x0003D;&#x02009;1 is used for the PLCN.</p></caption>
<graphic xlink:href="fncom-05-00026-g008.tif"/>
</fig>
<p>In Figures <xref ref-type="fig" rid="F8">8</xref>A,C one can observe a widening of the HD distribution in both structure and dynamics when moving from random to local networks. The same applies to the CC distributions (Figures <xref ref-type="fig" rid="F8">8</xref>B,D). The HD(struc) distributions of the most locally connected networks are wider than that of RN because for each neuron there exist some neurons with many common out-neighbors (the spatially nearby neurons, small HD value) and some neurons with zero or near-zero common out-neighbors (the spatially distant neurons, large HD value). For some of the considered networks a bimodal distribution of HD(dyn) can be observed. In such cases, the peak closer to zero corresponds to comparisons of neurons of the same type (excitatory&#x02013;excitatory or inhibitory&#x02013;inhibitory), while the peak further from zero corresponds to comparisons of neurons of different types (excitatory&#x02013;inhibitory). This bimodality is due to the difference in intraburst patterns between excitatory and inhibitory neurons: Figure <xref ref-type="fig" rid="F2">2</xref>B shows that on average the inhibitory population starts and ends bursting later than the excitatory one. This effect is most visible in RNs, as can be observed in both Figures <xref ref-type="fig" rid="F2">2</xref> and <xref ref-type="fig" rid="F8">8</xref>A. As for the CCs, the distributions are unimodal. This indicates that the differences between the spiking patterns of inhibitory and excitatory neurons are observable on a small time scale (HD uses bins of width 0.5&#x02009;ms), but not on a large time scale (cross-correlations are integrated over an interval of &#x000B1;50&#x02009;ms). This is further supported by the fact that when the time window for the CC calculations is narrowed, the CCs between neurons of the same type become distinguishable from those between neurons of different types (data not shown).</p>
<p>Both HD and cross-correlation, however, assess the similarity between the data by observing only local differences. The HD determines the average difference between the data by comparing them at exactly the same locations, while the cross-correlation allows some shift in time. Both measures fail to capture similarities in the data if the similar patterns in the two considered data sequences lie too far from each other. This is also the case if the sequences contain more subtle similarities than time shifts, e.g., if one sequence is a miscellaneous combination of the other&#x00027;s subsequences. We therefore proceed to the information diversity framework presented in Section <xref ref-type="sec" rid="s2">2.3.1</xref>. We take the same elements as in Figure <xref ref-type="fig" rid="F7">7</xref> &#x02013; rows of the connectivity matrix and full spike train data of neurons or intraburst segments only &#x02013; and calculate the NCDs between these elements. These NCD distributions are plotted in Figure <xref ref-type="fig" rid="F9">9</xref>.</p>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p><bold>Normalized compression distances between spike trains of neurons (dyn) versus the NCD between the respective rows of the connectivity matrix (struc)</bold>. The four panels are arranged in the same manner as in Figure <xref ref-type="fig" rid="F7">7</xref>, i.e., in <bold>(A,C)</bold> full spike trains are considered, and in <bold>(B,D)</bold> intraburst spike trains are considered. The ellipses drawn represent 90% of the probability mass of a Gaussian distribution with the same mean and covariance matrix as the plotted data.</p></caption>
<graphic xlink:href="fncom-05-00026-g009.tif"/>
</fig>
<p>One can observe in Figure <xref ref-type="fig" rid="F9">9</xref> a gradual increase in the mean values of NCD(struc) with increasing randomness in the structure of the network. This is rather expected: the more randomness applied to the structure of the network, the further apart the connectivity data of different neurons are from each other. As for the NCD(dyn) values between full spike train data of neurons, one can observe a gradual decrease in the mean value with increasing randomness, and a similar, though weaker, trend is visible in the burst-wise calculations as well. Both this and the decrease in the deviation of the NCD(dyn) values are in accordance with the properties of intraburst spike patterns illustrated in Figure <xref ref-type="fig" rid="F2">2</xref>: the spike train data appear more diverse and more widely spread in the local networks than in the random ones. Furthermore, as the bursting frequency is higher in LCNs than in RNs (Table <xref ref-type="table" rid="T1">1</xref>), the analyzed LCN spike trains (1-min recordings) show more variability than those of RN. Consequently, the mean NCD(dyn) is visibly higher in LCN. When analyzing the dynamics of the intraburst interval only, the data variability is less pronounced and the difference between the means of the NCD(dyn) distributions is smaller.</p>
<p>We repeat the experiment of Figure <xref ref-type="fig" rid="F9">9</xref> ten times, and for each entry we calculate the standard deviations of the NCD distributions (i.e., the complexities in our definition) of both structure and dynamics. Table <xref ref-type="table" rid="T2">2</xref> shows the mean complexities and their standard deviations, and the network classes in which the complexity is significantly different from that of an RN. The table shows that the structural complexity decreases with increasing randomness in the structure. The complexity of full spike trains shows a less consistent trend. For sparser networks (<italic>p</italic>&#x02009;&#x0003D;&#x02009;0.1), the more locally connected networks produce more complex full spike trains than RNs, whereas for denser networks (<italic>p</italic>&#x02009;&#x0003D;&#x02009;0.16), only one of the PLCNs has a complexity statistically different from that of RNs. We consider the latter statistical difference an outlier, since a clear trend is absent. For the intraburst complexities the results suggest that the LCNs, together with NETMORPH and some of the most locally connected PLCNs, produce more complex dynamics than RNs.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p><bold>Complexities calculated as standard deviations of NCD distributions for both elements of structure and dynamics</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<td align="left"/>
<th align="left">STRUCT (<italic>p</italic>&#x02009;&#x0003D;&#x02009;0.1)</th>
<th align="left">DYN, full (<italic>p</italic>&#x02009;&#x0003D;&#x02009;0.1)</th>
<th align="left">DYN, bursts (<italic>p</italic>&#x02009;&#x0003D;&#x02009;0.1)</th>
<th align="left">STRUCT (<italic>p</italic>&#x02009;&#x0003D;&#x02009;0.16)</th>
<th align="left">DYN, full (<italic>p</italic>&#x02009;&#x0003D;&#x02009;0.16)</th>
<th align="left">DYN, bursts (<italic>p</italic>&#x02009;&#x0003D;&#x02009;0.16)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">LCN</td>
<td align="char" char="&#x000B1;">0.054&#x02009;&#x000B1;&#x02009;0.003&#x0002A;</td>
<td align="char" char="&#x000B1;">0.040&#x02009;&#x000B1;&#x02009;0.003&#x0002A;</td>
<td align="char" char="&#x000B1;">0.071&#x02009;&#x000B1;&#x02009;0.012&#x0002A;</td>
<td align="char" char="&#x000B1;">0.051&#x02009;&#x000B1;&#x02009;0.003&#x0002A;</td>
<td align="char" char="&#x000B1;">0.027&#x02009;&#x000B1;&#x02009;0.003</td>
<td align="char" char="&#x000B1;">0.057&#x02009;&#x000B1;&#x02009;0.006&#x0002A;</td>
</tr>
<tr>
<td align="left">W&#x02009;&#x0003D;&#x02009;10</td>
<td align="char" char="&#x000B1;">0.045&#x02009;&#x000B1;&#x02009;0.002&#x0002A;</td>
<td align="char" char="&#x000B1;">0.041&#x02009;&#x000B1;&#x02009;0.004&#x0002A;</td>
<td align="char" char="&#x000B1;">0.067&#x02009;&#x000B1;&#x02009;0.008&#x0002A;</td>
<td align="char" char="&#x000B1;">0.045&#x02009;&#x000B1;&#x02009;0.003&#x0002A;</td>
<td align="char" char="&#x000B1;">0.030&#x02009;&#x000B1;&#x02009;0.003</td>
<td align="char" char="&#x000B1;">0.053&#x02009;&#x000B1;&#x02009;0.003&#x0002A;</td>
</tr>
<tr>
<td align="left">W&#x02009;&#x0003D;&#x02009;4</td>
<td align="char" char="&#x000B1;">0.036&#x02009;&#x000B1;&#x02009;0.002&#x0002A;</td>
<td align="char" char="&#x000B1;">0.042&#x02009;&#x000B1;&#x02009;0.003&#x0002A;</td>
<td align="char" char="&#x000B1;">0.075&#x02009;&#x000B1;&#x02009;0.006&#x0002A;</td>
<td align="char" char="&#x000B1;">0.036&#x02009;&#x000B1;&#x02009;0.004&#x0002A;</td>
<td align="char" char="&#x000B1;">0.027&#x02009;&#x000B1;&#x02009;0.002</td>
<td align="char" char="&#x000B1;">0.057&#x02009;&#x000B1;&#x02009;0.007&#x0002A;</td>
</tr>
<tr>
<td align="left">W&#x02009;&#x0003D;&#x02009;2</td>
<td align="char" char="&#x000B1;">0.029&#x02009;&#x000B1;&#x02009;0.001&#x0002A;</td>
<td align="char" char="&#x000B1;">0.045&#x02009;&#x000B1;&#x02009;0.004&#x0002A;</td>
<td align="char" char="&#x000B1;">0.074&#x02009;&#x000B1;&#x02009;0.015&#x0002A;</td>
<td align="char" char="&#x000B1;">0.027&#x02009;&#x000B1;&#x02009;0.002&#x0002A;</td>
<td align="char" char="&#x000B1;">0.027&#x02009;&#x000B1;&#x02009;0.003</td>
<td align="char" char="&#x000B1;">0.048&#x02009;&#x000B1;&#x02009;0.004&#x0002A;</td>
</tr>
<tr>
<td align="left">W&#x02009;&#x0003D;&#x02009;1</td>
<td align="char" char="&#x000B1;">0.022&#x02009;&#x000B1;&#x02009;0.001&#x0002A;</td>
<td align="char" char="&#x000B1;">0.034&#x02009;&#x000B1;&#x02009;0.004&#x0002A;</td>
<td align="char" char="&#x000B1;">0.055&#x02009;&#x000B1;&#x02009;0.007&#x0002A;</td>
<td align="char" char="&#x000B1;">0.019&#x02009;&#x000B1;&#x02009;0.001&#x0002A;</td>
<td align="char" char="&#x000B1;">0.025&#x02009;&#x000B1;&#x02009;0.002&#x0002A;</td>
<td align="char" char="&#x000B1;">0.047&#x02009;&#x000B1;&#x02009;0.006</td>
</tr>
<tr>
<td align="left">W&#x02009;&#x0003D;&#x02009;0.5</td>
<td align="char" char="&#x000B1;">0.017&#x02009;&#x000B1;&#x02009;0.001&#x0002A;</td>
<td align="char" char="&#x000B1;">0.028&#x02009;&#x000B1;&#x02009;0.002</td>
<td align="char" char="&#x000B1;">0.048&#x02009;&#x000B1;&#x02009;0.020</td>
<td align="char" char="&#x000B1;">0.014&#x02009;&#x000B1;&#x02009;0.001&#x0002A;</td>
<td align="char" char="&#x000B1;">0.029&#x02009;&#x000B1;&#x02009;0.002</td>
<td align="char" char="&#x000B1;">0.042&#x02009;&#x000B1;&#x02009;0.003</td>
</tr>
<tr>
<td align="left">RN</td>
<td align="char" char="&#x000B1;">0.014&#x02009;&#x000B1;&#x02009;0.001</td>
<td align="char" char="&#x000B1;">0.028&#x02009;&#x000B1;&#x02009;0.002</td>
<td align="char" char="&#x000B1;">0.042&#x02009;&#x000B1;&#x02009;0.002</td>
<td align="char" char="&#x000B1;">0.013&#x02009;&#x000B1;&#x02009;0.001</td>
<td align="char" char="&#x000B1;">0.029&#x02009;&#x000B1;&#x02009;0.003</td>
<td align="char" char="&#x000B1;">0.044&#x02009;&#x000B1;&#x02009;0.005</td>
</tr>
<tr>
<td align="left">NETM</td>
<td align="char" char="&#x000B1;">0.040&#x02009;&#x000B1;&#x02009;0.004&#x0002A;</td>
<td align="char" char="&#x000B1;">0.039&#x02009;&#x000B1;&#x02009;0.006&#x0002A;</td>
<td align="char" char="&#x000B1;">0.053&#x02009;&#x000B1;&#x02009;0.008&#x0002A;</td>
<td align="char" char="&#x000B1;">0.033&#x02009;&#x000B1;&#x02009;0.002&#x0002A;</td>
<td align="char" char="&#x000B1;">0.027&#x02009;&#x000B1;&#x02009;0.003</td>
<td align="char" char="&#x000B1;">0.052&#x02009;&#x000B1;&#x02009;0.005&#x0002A;</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The three leftmost data columns are calculated from simulations with connection probability p&#x02009;&#x0003D;&#x02009;0.1, the three rightmost with p&#x02009;&#x0003D;&#x02009;0.16. Each entry represents the mean&#x02009;&#x000B1;&#x02009;standard deviation of the wideness of the NCD distribution, calculated over 10 repetitions. The entries whose median differs significantly (U-test, p&#x02009;&#x0003C;&#x02009;0.05) from the corresponding RN entry are marked with an asterisk (&#x0002A;)</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>The information diversity results of Table <xref ref-type="table" rid="T2">2</xref> do not clearly indicate which network produces the most complex dynamics. The <italic>p</italic>-values for testing whether the information diversity of the most complex full spike trains differs from that of the second most complex are 0.13 and 0.38 for the sparse and dense networks, respectively. For the intraburst complexities the respective <italic>p</italic>-values are 0.80 and 0.62. The results for the complexity of structure are qualitatively the same when considering the <italic>columns</italic> of the connectivity matrix, i.e., the in-connection patterns, instead of the rows (data not shown). Furthermore, the decrease in the complexity of the structure with increasing randomness is also present when the neurons in the connectivity matrix are randomly permuted, which shows that the trend in the structural complexity in Table <xref ref-type="table" rid="T2">2</xref> is not an artifact of the order in which the neuron connectivities are read into strings. A sketch of this control is given below.</p>
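<p>The control amounts to relabeling the neurons and re-computing the structural complexity (Python, reusing the hypothetical helpers row_to_string and set_complexity from the earlier sketches; the matrix here is a random stand-in for a generated connectivity matrix):</p>
<preformat>
import numpy as np

rng = np.random.default_rng(3)
M = rng.binomial(1, 0.1, size=(400, 400)).astype(np.uint8)  # stand-in matrix

def structural_complexity(M, sample=80):
    # Std of pairwise NCDs between the out-connection strings of a
    # random sample of neurons (cf. Section 2.3.2).
    idx = rng.choice(len(M), size=sample, replace=False)
    return set_complexity([row_to_string(M[i]) for i in idx])

# Control for read-out order: permute the neuron labels and re-compute.
perm = rng.permutation(len(M))
M_permuted = M[np.ix_(perm, perm)]
print(structural_complexity(M), structural_complexity(M_permuted))
</preformat>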
</sec>
<sec>
<label>3.5</label>
<title>Conclusion</title>
<p>The structure of RNs is characterized by low path length and low clustering coefficient, and further by high KC and low information diversity. The RN dynamics is characterized by short and relatively rare bursts, and hence a low KC of the spike train data. In contrast, the structure of LCNs shows a longer path length and a greater clustering coefficient, and the KC approximations of the structural data are small while the information diversity is large. The bursts in the LCN spike trains are longer and more frequent than in RN spike trains, and the KC approximations of the LCN spike train data are large on average. Based on the variation in NCD, the intraburst complexity is higher in LCN output than in that of RN, and for the sparser of the two network densities the same holds for the complexities of full spike trains.</p>
<p>The in-between networks, PLCNs, fall between the two extremes (RN and LCN) in terms of their structural properties as well as their bursting behavior. The same holds for the biologically realistic NETMORPH networks. The information diversity of the structure of these networks lies between that of RN and LCN. Similarly to LCNs, the intraburst dynamics of NETMORPH networks, as well as those of the most locally connected PLCNs, are more complex in terms of NCD variation than those of RNs.</p>
</sec>
</sec>
<sec sec-type="discussion">
<label>4</label>
<title>Discussion</title>
<p>In this work we present and apply an information diversity measure for assessing complexity in both structure and dynamics. According to this measure, neuronal networks with random structure produce less diverse spontaneous activity than networks in which the connectivity of neurons is more dependent on distance. The presented study focuses on only one neuronal activity model, i.e., Izhikevich-type neurons with a dynamical model of synapses (the Tsodyks model), and on networks of fixed size (<italic>N</italic>&#x02009;&#x0003D;&#x02009;1600). Further studies testing alternative models and examining the influence of network size are needed to confirm these findings. Still, the presented results demonstrate the capability of the employed measure to discriminate between different network types.</p>
<p>The basis for the present study is the question: if one changes the structure of a neuronal network but keeps the average degree (or even the whole in-degree distribution) constant, how does the spontaneous activity change? In the activity simulations, all model parameters remain constant and only the connectivity matrix is changed between the simulations of different network types; hence the variation in bursting properties emerges from the structure of the network alone. The selected algorithm for the generation of network structure is able to tune the distance-dependence on a continuous scale. As a result, we have not only fully locally connected networks, where a neuron always connects to its nearest spatial neighbors before the distant ones (<italic>W</italic>&#x02009;&#x0003D;&#x02009;&#x0221E;, i.e., LCN), and fully random networks (<italic>W</italic>&#x02009;&#x0003D;&#x02009;0, i.e., RN), but also everything in between (0&#x02009;&#x0003C;&#x02009;<italic>W</italic>&#x02009;&#x0003C;&#x02009;&#x0221E;, PLCN). The RNs correspond to directed Erd&#x00151;s&#x02013;R&#x000E9;nyi networks that are widely used in similar studies in the field. These networks are characterized by a binomial degree distribution; hence the choice of a binomial in-degree distribution for all network types. The only network class to violate this binomiality is that of the NETMORPH networks, which are included in order to increase the biological plausibility of the study. The finding that the NETMORPH networks are placed somewhere between the LCNs and RNs by most of their structural and dynamical properties also supports the use of Algorithm 1 for the network generation. If this were not the case, one would have to find another way to produce networks with as extreme properties as those of the NETMORPH networks. The range of networks between LCN and RN could also be produced with the Watts&#x02013;Strogatz algorithm (Watts and Strogatz, <xref ref-type="bibr" rid="B38">1998</xref>). The crucial difference is that in our in-between networks (PLCNs) the &#x0201C;long-range connections&#x0201D; become shorter as the parameter <italic>W</italic> grows, while in the Watts&#x02013;Strogatz model the long-range connections are (roughly) equally long on average in all in-between networks. By a long-range connection we mean any connection to neuron A from neuron B when A is not yet connected to all neurons that are spatially nearer than B.</p>
<p>The complexity framework presented in this paper is adopted from Nykter et al. (<xref ref-type="bibr" rid="B26">2008</xref>), where critical Boolean networks are found to have the most complex dynamics out of a set of various Boolean networks. Our method for estimating the complexity differs in that we apply the NCD measure between the elements of a set that represents the object whose complexity is to be estimated (connectivity matrix, population spike train), not between different output realizations as in Nykter et al. (<xref ref-type="bibr" rid="B26">2008</xref>). This allows the estimation of the complexity of the object itself, not of the set of objects generated by the same process. The complexity of the object is assessed by the diversity of the information it carries. Although technically applicable to any set of strings, this is not supposed to be a universal measure of complexity. Its use, however, avoids the difficulties that arise when applying the alternative set complexity measure defined in Galas et al. (<xref ref-type="bibr" rid="B11">2010</xref>), as discussed in Section <xref ref-type="sec" rid="s2">2.3.1</xref>. The said non-universality of our measure stems from the limited range of deviation values that an NCD distribution can have; on the other hand, the plain standard deviation might not be a good measure of wideness if the underlying distributions were multimodal. In this work all studied NCD distributions are unimodal. Furthermore, we only apply this complexity measure to data of comparable lengths and comparable characteristics, hence the resulting complexities are also comparable to each other. This would not necessarily hold otherwise; for example, spike trains of length 1&#x02009;s and 1&#x02009;h cannot be compared in an unbiased way.</p>
<p>In this study we show how the NCD values of both structure and dynamics of a neuronal network are distributed across the [0,1]&#x02009;&#x000D7;&#x02009;[0,1]-plane in the model networks (Figure <xref ref-type="fig" rid="F9">9</xref>), and calculate the mean information diversities of both structure and dynamics (Table <xref ref-type="table" rid="T2">2</xref>). Figures <xref ref-type="fig" rid="F9">9</xref>A,C themselves give a good overview of the interplay between structural and dynamical information diversity. They show that the NCD distributions computed for the considered types of networks follow a visible trajectory. This trajectory is not evident when observing the widths of the NCD distributions only, nor when computing simpler distance measures (e.g., HD). The trajectory follows an &#x0201C;L&#x0201D;-shape, which is slightly violated by the NETMORPH NCD distribution (see Figure <xref ref-type="fig" rid="F9">9</xref>C). Whether there exists a network of the same degree that would span the whole &#x0201C;L&#x0201D;-shaped domain, or a &#x0201C;superdiverse&#x0201D; network whose NCD values would also cover the unoccupied corner of the &#x0201C;L&#x0201D;-rectangle, remains an open question. We have shown that such networks do not exist among the model classes studied here, and in light of the results shown we also doubt the existence of such networks altogether, given the constraints of a binomial in-degree distribution and the selected connection probability.</p>
<p>The different networks are separable also by the KC approximations of their structure and dynamics (Figure <xref ref-type="fig" rid="F7">7</xref>). However, we consider the KC analysis alone insufficient because it lacks the notion of context-dependence: the KC of a spike train would be maximized when ones (i.e., spikes) are as frequent as zeros (i.e., silent time steps) and randomly distributed in time. The effect of the number of spikes on KC is already seen in Figures <xref ref-type="fig" rid="F7">7</xref>A,C: the KCs of spike trains of dense networks, where the number of spikes is greater, are on average greater than those of sparse networks. This is contrary to the case of context-dependent complexities, as shown in Table <xref ref-type="table" rid="T2">2</xref>, where the information diversities of full spike trains of dense networks are on average smaller than those of sparse networks. This leads to a profound question: how much spiking and bursting can there be before the activity is too random to contain any usable information? We believe that in order to address this question one has to apply a context-dependent measure of complexity instead of KC.</p>
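<p>The dependence of the KC approximation on spike density is easy to demonstrate: a balanced random binary string compresses worst, a sparse random string compresses better, and a periodic string compresses almost completely (a minimal sketch in Python, using zlib as the compressor):</p>
<preformat>
import random
import zlib

random.seed(4)
n = 100_000
balanced = "".join(random.choice("01") for _ in range(n)).encode()
sparse = "".join(random.choice("0000000001") for _ in range(n)).encode()
periodic = b"01" * (n // 2)

for name, s in [("balanced random", balanced),
                ("sparse random (10% ones)", sparse),
                ("periodic", periodic)]:
    # Compressed length as the KC approximation C(x).
    print(f"{name:25s} C(x) = {len(zlib.compress(s, 9))} bytes")
</preformat>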
<p>The complexity result in Table <xref ref-type="table" rid="T2">2</xref> concerning the information diversity of structure seems to contradict the general notion according to which the most regular structure should be less complex than a structure that possesses both regularity and randomness (Sporns, <xref ref-type="bibr" rid="B33">2011</xref>). It should be noted, however, that even the most regular networks studied in this work (LCNs) possess a degree of randomness, since their in-degree distribution is binomial and since the farthest neighbors of a neuron are picked at random from all equally distant ones. There is a multitude of possibilities for the most ordered structure other than the one chosen in this work. For example, in Sporns (<xref ref-type="bibr" rid="B33">2011</xref>) a fully connected network is suggested as a highly ordered neuronal system. Applying the information diversity measure to such a structure in the framework of Table <xref ref-type="table" rid="T2">2</xref> gives a structural complexity of &#x02248;0.0268, which is less than that of the majority of the studied PLCNs. Hence, we regard the proposed measure as eligible for assessing the complexity of the structure.</p>
<p>In addition to the analysis of well-defined models, the presented measure can be used for the analysis of experimental data. The method is straightforwardly applicable to neuronal activity recorded in the form of spike trains. The conversion of spike trains into binary sequences is described in the method section of this paper, as well as in previous studies (Christen et al., <xref ref-type="bibr" rid="B8">2006</xref>; Benayon et al., <xref ref-type="bibr" rid="B4">2010</xref>). It can be observed that the variability in the NCD distribution corresponds to the variability in spiking patterns within population bursts. Similar measures of entropy and KC have been applied before, but the capability of the NCD to capture context between different data makes it particularly suitable for assessing data complexity. The presented measure can be used to analyze different phases of neuronal network growth, where the structure is simulated by publicly available growth simulators (Koene et al., <xref ref-type="bibr" rid="B16">2009</xref>; Acimovic et al., 2011), using the procedure for the analysis of network structure described in this paper. The analysis of the structure and dynamics of the presented models can also be used in relation to <italic>in vitro</italic> studies with modulated network structure: the results of model analysis can help to predict and understand the recorded activity obtained for network structures imposed by the experimenter (Wheeler and Brewer, <xref ref-type="bibr" rid="B39">2010</xref>). Finally, the NCD variation as a measure of structural complexity can be applied to analyze the large-scale functional connectivity of brain networks, similarly to the examples pointed out in Sporns (<xref ref-type="bibr" rid="B33">2011</xref>).</p>
<p>The framework proposed in this study provides a measure of data complexity that is applicable to both the structure and the dynamics of neuronal networks. According to this measure, neuronal networks with random structure show consistently less diverse intraburst dynamics than the more locally connected ones. Future work will incorporate a larger spectrum of network structures in order to discover the extreme cases that more clearly maximize or minimize the complexity of dynamics.</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<app-group>
<app id="A1">
<label>6</label>
<title>Appendix</title>
<sec>
<label>6.1</label>
<title>Model parameters for netmorph</title>
<p>The parameters used for generating realistic networks with NETMORPH are listed in Table <xref ref-type="table" rid="TA1">A1</xref> in the Appendix. We use the implementation netmorph2D, which allows the simulation of strictly two-dimensional networks; version 20090224.1225 of the simulator is used. The parameters are obtained from Koene et al. (<xref ref-type="bibr" rid="B16">2009</xref>), where the axonal parameters were optimized to fit growth statistics obtained from real data. We are unaware of any similar parameter optimization study for dendritic growth, hence we use the dendritic parameters listed in the context of Figure 12D of Koene et al. (<xref ref-type="bibr" rid="B16">2009</xref>).</p>
<table-wrap position="float" id="TA1">
<label>Table A1</label>
<caption><p><bold>The parameter values used in NETMORPH</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left">Model selection parameters:</th>
<th align="left"/>
<th align="left">Growth and branching model parameters:</th>
<th align="left">Axon</th>
<th align="left">Dendrite</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">arbor_elongation_model</td>
<td align="left">van_Pelt</td>
<td align="left">growth_nu0</td>
<td align="left">0.00052083</td>
<td align="left">0.00013889 (&#x003BC;m/s)</td>
</tr>
<tr>
<td align="left">branching_model</td>
<td align="left">van_Pelt</td>
<td align="left">growth_F</td>
<td align="left">0.16</td>
<td align="left">0.39</td>
</tr>
<tr>
<td align="left">TSBM</td>
<td align="left">van_Pelt</td>
<td align="left">B_inf</td>
<td align="left">17.38</td>
<td align="left">4.75</td>
</tr>
<tr>
<td align="left">synapse_formation.PDF</td>
<td align="left">uniform</td>
<td align="left">E</td>
<td align="left">0.39</td>
<td align="left">0.5</td>
</tr>
<tr>
<td align="left">direction_model</td>
<td align="left">segment_history_tension</td>
<td align="left">S</td>
<td align="left">0</td>
<td align="left">0</td>
</tr>
<tr>
<td align="left">History_power</td>
<td align="left">2</td>
<td align="left">tau</td>
<td align="left">1209600</td>
<td align="left">319680 (s)</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>We take into account that not all synapses become functional by applying a 25% fraction of effective synapses, i.e., on average every fourth candidate synapse proposed by the simulator is accepted as a synapse. In addition, we place the cells in a fixed 40-by-40 grid where the distance between adjacent neuron somas is 24.99&#x02009;&#x003BC;m. To do this we had to recompile the simulator with our own extension that overrides the cell soma data created by the simulator (code not shown). For parameters not mentioned above we use the default values.</p>
</sec>
<sec>
<label>6.2</label>
<title>Activity model parameters</title>
<p>The activity simulations are run in MATLAB. The parameters for the Izhikevich model (Eqs 6 and 7) are obtained from Izhikevich (<xref ref-type="bibr" rid="B15">2003</xref>). They are listed in Table <xref ref-type="table" rid="TA2">A2</xref> in the Appendix.</p>
<table-wrap position="float" id="TA2">
<label>Table A2</label>
<caption><p><bold>Izhikevich model parameters</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left">Model parameters</th>
<th align="left">Excitatory</th>
<th align="left">Inhibitory</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><italic>a</italic></td>
<td align="left">0.02</td>
<td align="left">0.02&#x02009;&#x0002B;&#x02009;0.08<bold>r</bold><italic><sub>i</sub></italic></td>
</tr>
<tr>
<td align="left"><italic>b</italic></td>
<td align="left">0.2</td>
<td align="left">0.25&#x02009;&#x02212;&#x02009;0.05<bold>r</bold><italic><sub>i</sub></italic></td>
</tr>
<tr>
<td align="left"><italic>c</italic></td>
<td align="left">&#x02212;65&#x02009;&#x0002B;&#x02009;15<inline-formula><mml:math id="M32"><mml:mrow><mml:msubsup><mml:mi mathvariant='bold'>r</mml:mi><mml:mi>e</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula></td>
<td align="left">&#x02212;65</td>
</tr>
<tr>
<td align="left"><italic>d</italic></td>
<td align="left">8&#x02009;&#x02212;&#x02009;6<bold>r</bold><italic><sub>e</sub></italic></td>
<td align="left">2</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The parameters are randomized such that the random numbers <bold>r</bold><sub><bold>e</bold></sub> and <bold>r</bold><sub><bold>i</bold></sub> are drawn <italic>neuron</italic>-wise from a uniform distribution <italic>U</italic>(0,1). The noise term <italic>I<sub>G</sub></italic> (Eq. <xref ref-type="disp-formula" rid="E8">8</xref>) is a piecewise constant (constant over 1-ms time windows) zero-mean Gaussian variable with standard deviation <inline-formula><mml:math id="M33"><mml:mrow><mml:mn>8.8</mml:mn><mml:mstyle scriptlevel='+1'><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mtext>ms</mml:mtext></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>.</mml:mo></mml:mrow></mml:math></inline-formula> This value is chosen such that the spiking frequency during the silent periods is neither too low nor too high (see the inter-burst periods in Figure <xref ref-type="fig" rid="F2">2</xref> for the result). The simulation time step is set to 0.5&#x02009;ms.</p>
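<p>For clarity, the noise generation can be sketched as follows (Python; the function name is ours):</p>
<preformat>
import numpy as np

rng = np.random.default_rng(5)

def noise_current(t_total_ms, dt_ms=0.5, window_ms=1.0, sigma=8.8):
    # Zero-mean Gaussian noise held constant over 1-ms windows and
    # sampled on the 0.5-ms simulation grid.
    n_steps = int(round(t_total_ms / dt_ms))
    steps_per_window = int(round(window_ms / dt_ms))
    n_windows = -(-n_steps // steps_per_window)   # ceiling division
    values = rng.normal(0.0, sigma, size=n_windows)
    return np.repeat(values, steps_per_window)[:n_steps]
</preformat>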
<p>The synapse parameters (Eq. <xref ref-type="disp-formula" rid="E10">10</xref>) are taken from Tsodyks et al. (<xref ref-type="bibr" rid="B34">2000</xref>), and they are listed in Table <xref ref-type="table" rid="TA3">A3</xref> in Appendix.</p>
<table-wrap position="float" id="TA3">
<label>Table A3</label>
<caption><p><bold>Tsodyks model parameters</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<td align="left"/>
<th align="left">Excitatory</th>
<th align="left">Inhibitory</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" colspan="3"><bold>DYNAMICAL BEHAVIOR PARAMETERS</bold></td>
</tr>
<tr>
<td align="left">&#x003C4;<italic><sub>rec</sub></italic> (average)</td>
<td align="left">800&#x02009;ms</td>
<td align="left">100&#x02009;ms</td>
</tr>
<tr>
<td align="left">&#x003C4;<italic><sub>facil</sub></italic> (average)</td>
<td align="left">0</td>
<td align="left">1000&#x02009;ms</td>
</tr>
<tr>
<td align="left">&#x003C4;<sub><italic>I</italic></sub></td>
<td align="left">3&#x02009;ms</td>
<td align="left">3&#x02009;ms</td>
</tr>
<tr>
<td align="left" colspan="3"><bold>RESOURCE FRACTION PARAMETERS</bold></td>
</tr>
<tr>
<td align="left"><italic>U</italic> (average)</td>
<td align="left">0.5</td>
<td align="left">0.04</td>
</tr>
<tr>
<td align="left"><italic>U</italic><sub>min</sub></td>
<td align="left">0.1</td>
<td align="left">0.001</td>
</tr>
<tr>
<td align="left"><italic>U</italic><sub>max</sub></td>
<td align="left">0.9</td>
<td align="left">0.07</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>For each synapse the values of the variables &#x003C4;<italic><sub>rec</sub></italic> and &#x003C4;<italic><sub>facil</sub></italic> are first drawn from a Gaussian distribution with the shown mean and a standard deviation of half the mean. Values lower than 5&#x02009;ms are replaced by the minimum value of 5&#x02009;ms. The procedure is similar for the resource fraction parameter <italic>U</italic> (Eq. <xref ref-type="disp-formula" rid="E11">11</xref>), but for <italic>U</italic> both a minimum and a maximum value are applied. The minimum and maximum values are chosen following a test case in the NEST (Gewaltig and Diesmann, <xref ref-type="bibr" rid="B12">2007</xref>) simulator, although the simulator itself is not used due to difficulties in simultaneously implementing Tsodyks&#x02019; synapse model and Izhikevich&#x00027;s neuron model.</p>
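<p>The randomization procedure amounts to drawing from a Gaussian with a standard deviation of half the mean and clipping to the given bounds; a minimal sketch (Python, hypothetical helper name):</p>
<preformat>
import numpy as np

rng = np.random.default_rng(6)

def draw_clipped(mean, size, low=None, high=None):
    # Draw from N(mean, (mean/2)**2) and clip to [low, high].
    x = rng.normal(mean, 0.5 * mean, size=size)
    if low is not None:
        x = np.maximum(x, low)
    if high is not None:
        x = np.minimum(x, high)
    return x

# Excitatory synapses, for example:
tau_rec = draw_clipped(800.0, size=1000, low=5.0)        # ms
U = draw_clipped(0.5, size=1000, low=0.1, high=0.9)
</preformat>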
<p>For each neuron <italic>i</italic> the initial value of the membrane potential variable <italic>v<sub>i</sub></italic> is drawn from the uniform distribution <italic>U</italic>([<italic>c<sub>i</sub></italic>,30]), where <italic>c<sub>i</sub></italic> is the reset potential parameter of the neuron. The initial recovery variable of the neuron is set to <italic>b<sub>i</sub>v<sub>i</sub></italic>, where <italic>b<sub>i</sub></italic> is the sensitivity parameter of the neuron. The synaptic resources are initially in the recovered state, i.e., <italic>x<sub>ij</sub></italic>&#x02009;&#x0003D;&#x02009;1, <italic>y<sub>ij</sub></italic>&#x02009;&#x0003D;&#x02009;0, <italic>z<sub>ij</sub></italic>&#x02009;&#x0003D;&#x02009;0. The initial effective fraction variables <italic>u</italic> are set to <italic>u<sub>ij</sub></italic>&#x02009;&#x0003D;&#x02009;<italic>U<sub>ij</sub></italic>, where <italic>U<sub>ij</sub></italic> is the resource fraction constant of the synapse <italic>ij</italic>.</p>
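<p>Collecting the above, a sketch of the initialization (NumPy-based; the network size and the values of <italic>b<sub>i</sub></italic> and <italic>c<sub>i</sub></italic> are placeholders here, as the actual values are randomized as described above):</p>
<preformat>
import numpy as np

rng = np.random.default_rng(seed=3)
n = 1000                    # number of neurons (example value)

c = -65.0 * np.ones(n)      # reset potentials c_i (placeholder values)
b = 0.2 * np.ones(n)        # sensitivity parameters b_i (placeholder values)

v = rng.uniform(c, 30.0)    # v_i ~ U([c_i, 30]), drawn element-wise
u = b * v                   # initial recovery variables u_i = b_i * v_i

# Synaptic resources start in the recovered state; U holds the
# per-synapse resource fraction constants (placeholder values here).
U = 0.5 * np.ones((n, n))
x = np.ones((n, n))         # recovered fraction, x_ij = 1
y = np.zeros((n, n))        # active fraction,    y_ij = 0
z = np.zeros((n, n))        # inactive fraction,  z_ij = 0
u_syn = U.copy()            # effective fractions, u_ij = U_ij
</preformat>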
</sec>
<sec>
<label>6.3</label>
<title>Compression method</title>
<sec>
<label>6.3.1</label>
<title>A test study of compressors</title>
<p>The quality of the complexity estimation presented in Section <xref ref-type="sec" rid="s5">2.3</xref> depends heavily on the precision of the KC approximation. In this section we motivate our choice of compression method by examining different compressors in a simple test case. Recent studies applying NCD with data compressors have mostly used gzip and bzip2 (Li et al., <xref ref-type="bibr" rid="B21">2004</xref>; Emmert-Streib and Scalas, <xref ref-type="bibr" rid="B9">2010</xref>); in addition to these two we study 7zip<xref ref-type="fn" rid="fn3"><sup>3</sup></xref>. All compressors are run with their default parameters; in addition, 7zip is run in a heavy mode that requires more memory and computation time. The challenge in compressing the strings used in this study is the recognition of similar data patterns that may lie far from each other and differ from each other in more or less subtle ways. We test the performance of the above compressors on a simple problem in which the data to be compressed consist of a duplicated random string. Figure <xref ref-type="fig" rid="FA1">1</xref> in Appendix shows the compression rates of these strings, i.e., plots of <italic>C</italic>(<italic>xx</italic>)/<italic>L</italic>(<italic>x</italic>), where <italic>x</italic> is a random string with equal probabilities of occurrence of &#x0201C;0&#x0201D; and &#x0201C;1&#x0201D;, <italic>L</italic>(<italic>x</italic>) is the length of the uncompressed string <italic>x</italic>, and <italic>C</italic>(<italic>xx</italic>) is the length of the compressed duplicated string.</p>
<fig id="FA1" position="float">
<label>Figure A1</label>
<caption><p><bold>Compression efficiencies when compressing a duplicated random string</bold>.</p></caption>
<graphic xlink:href="fncom-05-00026-a001.tif"/>
</fig>
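<p>The test can be reproduced in outline, e.g., with the gzip, bz2, and lzma modules of the Python standard library; these stand in here for the command-line compressors actually used, and the 7zip heavy mode is not reproduced by this sketch:</p>
<preformat>
import bz2
import gzip
import lzma

import numpy as np

rng = np.random.default_rng(seed=4)

def compression_rates(length):
    # Return C(xx)/L(x) for a duplicated random "0"/"1" string,
    # for each of the three compressors.
    x = "".join(rng.choice(["0", "1"], size=length))
    data = (x + x).encode("ascii")
    return {
        "gzip": len(gzip.compress(data)) / length,
        "bzip2": len(bz2.compress(data)) / length,
        "lzma": len(lzma.compress(data)) / length,
    }

for L in (10**3, 10**4, 10**5, 10**6):
    print(L, compression_rates(L))
</preformat>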
<p>The mainly descending trend of the compression rates is due to a presumably constant-size coding overhead, whose proportion of the compressed code diminishes as the length of the string increases. One can observe a shift in the compression rate at some data length, when the block size is exceeded, in gzip, bzip2, and 7zip alike, but not in 7zip-heavy. The latter compressor also approaches most closely the limit compression efficiency of 1/8: each ASCII character occupies 8 bits but carries only the one bit needed to represent a &#x0201C;0&#x0201D; or a &#x0201C;1&#x0201D;, and the second, identical half of the string adds no new information, so ideally <italic>C</italic>(<italic>xx</italic>)&#x02248;<italic>L</italic>(<italic>x</italic>)/8. For compressors with a block size smaller than <italic>L</italic>(<italic>x</italic>) one would instead have <italic>C</italic>(<italic>xx</italic>)&#x02248;2<italic>C</italic>(<italic>x</italic>), giving 1/4 as the expected maximal mean compression efficiency &#x02013; this value is best approached by the compressor 7zip.</p>
<p>The reason we chose duplicated random strings as a test for compressors is that, from the compressibility point of view, similar strings arise when calculating, e.g., the NCD between two rows of a connectivity matrix. For example, in the case of an LCN the two halves of the string will not be fully identical, but most often merely shifted, with some of the bits replaced by their opposites. Likewise, when calculating the NCD between the spike trains of two neurons, most of the spikes will probably be clustered around the same time indices (the time index of a burst), surrounded by hundreds or thousands of &#x0201C;0&#x0201D;s (silent periods). Admittedly, our test case does not capture all the phenomena related to such compression challenges, where most but not all of the data in the two halves of the string coincide, but it shows at least that even small challenges &#x02013; compressing strictly duplicated data &#x02013; are handled far less efficiently by some compressors than by others. Based on this test, we choose the 7zip-heavy compressor to compute all the results presented in this work.</p>
</sec>
<sec>
<label>6.3.2</label>
<title>Compression software</title>
<p>For the compression of data strings we use the LZMA SDK 4.65 provided by the 7-zip website<xref ref-type="fn" rid="fn4"><sup>4</sup></xref>. To improve on the default compression rate we set the number of fast bytes to 273 (default 128), the dictionary size to 1&#x02009;GB (default 8&#x02009;MB), and the number of match finding cycles to 750 (default 10), while keeping the remaining parameters (the numbers of literal context and position bits) at their default values.</p>
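<p>The corresponding options are also exposed, e.g., by the lzma module of the Python standard library, where nice_len corresponds to the number of fast bytes and depth to the number of match finding cycles. The following is a sketch of equivalent settings, not the exact SDK 4.65 invocation used here; note that a 1-GB dictionary implies a memory requirement of several GB:</p>
<preformat>
import lzma

# LZMA2 filter options roughly matching the settings described above.
filters = [
    {
        "id": lzma.FILTER_LZMA2,
        "dict_size": 2**30,  # dictionary size: 1 GB
        "nice_len": 273,     # number of fast bytes (273 is the maximum)
        "depth": 750,        # match finding cycles
    }
]

def compress_heavy(data):
    # Compress a byte string with the heavy parameter set.
    return lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)

print(len(compress_heavy(b"01" * 50000)))
</preformat>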
</sec>
</sec>
</app>
</app-group>
<ack>
<title>Acknowledgments</title>
<p>The authors would like to acknowledge the following funding: TISE graduate school, Academy of Finland project grants no. 106030, no. 124615, and no. 132877, and Academy of Finland project no. 129657 (Center of Excellence in Signal Processing). The authors are grateful to the reviewers for their constructive comments, which pointed out the shortcomings in the first versions of the manuscript and helped to improve it.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>A&#x00107;imovi&#x00107;</surname> <given-names>J.</given-names></name> <name><surname>M&#x000E4;ki-Marttunen</surname> <given-names>T.</given-names></name> <name><surname>Havela</surname> <given-names>R.</given-names></name> <name><surname>Teppola</surname> <given-names>H.</given-names></name> <name><surname>Linne</surname> <given-names>M.-L.</given-names></name></person-group> (<year>2011</year>). <article-title>Modeling of neuronal growth in vitro: comparison of simulation tools NETMORPH and CX3D</article-title>. <source>EURASIP J. Bioinform. Syst. Biol.</source> <volume>2011</volume>, Article ID 616382.</citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Albert</surname> <given-names>R.</given-names></name> <name><surname>Barab&#x000E1;si</surname> <given-names>A.-L.</given-names></name></person-group> (<year>2002</year>). <article-title>Statistical mechanics of complex networks</article-title>. <source>Rev. Mod. Phys.</source> <volume>74</volume>, <fpage>47</fpage>&#x02013;<lpage>97</lpage>.</citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amig&#x000F3;</surname> <given-names>J. M.</given-names></name> <name><surname>Szczepa&#x00144;ski</surname> <given-names>J.</given-names></name> <name><surname>Wajnryb</surname> <given-names>E.</given-names></name> <name><surname>Sanchez-Vivez</surname> <given-names>M. V.</given-names></name></person-group> (<year>2004</year>). <article-title>Estimating the entropy rate of spike trains via Lempel-Ziv complexity</article-title>. <source>Neural Comput.</source> <volume>16</volume>, <fpage>717</fpage>&#x02013;<lpage>736</lpage>.<pub-id pub-id-type="doi">10.1162/089976604322860677</pub-id><pub-id pub-id-type="pmid">15025827</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Benayon</surname> <given-names>M.</given-names></name> <name><surname>Cowan</surname> <given-names>J. D.</given-names></name> <name><surname>van Drongelen</surname> <given-names>W.</given-names></name> <name><surname>Wallace</surname> <given-names>E.</given-names></name></person-group> (<year>2010</year>). <article-title>Avalanches in a stochastic model of spiking neurons</article-title>. <source>PLoS Comput. Biol.</source> <volume>6</volume>, <fpage>e1000846</fpage>.<pub-id pub-id-type="doi">10.1371/journal.pcbi.1000846</pub-id><pub-id pub-id-type="pmid">20628615</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boccaletti</surname> <given-names>S.</given-names></name> <name><surname>Latora</surname> <given-names>V.</given-names></name> <name><surname>Moreno</surname> <given-names>Y.</given-names></name> <name><surname>Chavez</surname> <given-names>M.</given-names></name> <name><surname>Hwang</surname> <given-names>D.-U.</given-names></name></person-group> (<year>2006</year>). <article-title>Complex networks: structure and dynamics</article-title>. <source>Phys. Rep.</source> <volume>424</volume>, <fpage>175</fpage>&#x02013;<lpage>308</lpage>.<pub-id pub-id-type="doi">10.1016/j.physrep.2005.10.009</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brunel</surname> <given-names>N.</given-names></name></person-group> (<year>2000</year>). <article-title>Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons</article-title>. <source>J. Comput. Neurosci.</source> <volume>8</volume>, <fpage>183</fpage>&#x02013;<lpage>208</lpage>.<pub-id pub-id-type="pmid">10809012</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chiappalone</surname> <given-names>M.</given-names></name> <name><surname>Bove</surname> <given-names>M.</given-names></name> <name><surname>Vato</surname> <given-names>A.</given-names></name> <name><surname>Tedesco</surname> <given-names>M.</given-names></name> <name><surname>Martinoia</surname> <given-names>S.</given-names></name></person-group> (<year>2006</year>). <article-title>Dissociated cortical networks show spontaneously correlated activity patterns during in vitro development</article-title>. <source>Brain Res.</source> <volume>1093</volume>, <fpage>41</fpage>&#x02013;<lpage>53</lpage>.<pub-id pub-id-type="doi">10.1016/j.brainres.2006.03.049</pub-id><pub-id pub-id-type="pmid">16712817</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Christen</surname> <given-names>M.</given-names></name> <name><surname>Kohn</surname> <given-names>A.</given-names></name> <name><surname>Ott</surname> <given-names>T.</given-names></name> <name><surname>Stoop</surname> <given-names>R.</given-names></name></person-group> (<year>2006</year>). <article-title>Measuring spike pattern variability with the Lempel-Ziv-distance</article-title>. <source>J. Neurosci. Methods</source> <volume>156</volume>, <fpage>342</fpage>&#x02013;<lpage>350</lpage>.<pub-id pub-id-type="pmid">16584787</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Emmert-Streib</surname> <given-names>F.</given-names></name> <name><surname>Scalas</surname> <given-names>E.</given-names></name></person-group> (<year>2010</year>). <article-title>Statistic complexity: combining Kolmogorov complexity with an ensemble approach</article-title>. <source>PLoS ONE</source> <volume>5</volume>, <fpage>e12256</fpage>.<pub-id pub-id-type="doi">10.1371/journal.pone.0012256</pub-id><pub-id pub-id-type="pmid">20865047</pub-id></citation></ref>
<ref id="B10"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Fr&#x000E9;gnac</surname> <given-names>Y.</given-names></name> <name><surname>Rudolph</surname> <given-names>M.</given-names></name> <name><surname>Davison</surname> <given-names>A. P.</given-names></name> <name><surname>Destexhe</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>&#x0201C;Complexity in neuronal networks,&#x0201D;</article-title> in <source>Biological Networks</source>, ed. <person-group person-group-type="editor"><name><surname>Fran&#x000E7;ois K&#x000E9;p&#x000E8;s</surname></name></person-group> (<publisher-loc>Singapore</publisher-loc>: <publisher-name>World Scientific</publisher-name>), <fpage>291</fpage>&#x02013;<lpage>338</lpage>.</citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Galas</surname> <given-names>D. J.</given-names></name> <name><surname>Nykter</surname> <given-names>M.</given-names></name> <name><surname>Carter</surname> <given-names>G. W.</given-names></name> <name><surname>Price</surname> <given-names>N. D.</given-names></name> <name><surname>Shmulevich</surname> <given-names>I.</given-names></name></person-group> (<year>2010</year>). <article-title>Biological information as set-based complexity</article-title>. <source>IEEE Trans. Inf. Theory</source> <volume>56</volume>, <fpage>667</fpage>&#x02013;<lpage>677</lpage>.</citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gewaltig</surname> <given-names>M.-O.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Nest (neural simulation tool)</article-title>. <source>Scholarpedia</source> <volume>2</volume>, <fpage>1430</fpage>&#x02013;<lpage>1434</lpage>.<pub-id pub-id-type="doi">10.4249/scholarpedia.1430</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gritsun</surname> <given-names>T. A.</given-names></name> <name><surname>Le Feber</surname> <given-names>J.</given-names></name> <name><surname>Stegenga</surname> <given-names>J.</given-names></name> <name><surname>Rutten</surname> <given-names>W. L. C.</given-names></name></person-group> (<year>2010</year>). <article-title>Network bursts in cortical cultures are best simulated using pacemaker neurons and adaptive synapses</article-title>. <source>Biol. Cybern.</source> <volume>102</volume>, <fpage>293</fpage>&#x02013;<lpage>310</lpage>.<pub-id pub-id-type="doi">10.1007/s00422-010-0366-x</pub-id><pub-id pub-id-type="pmid">20157725</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Itzhack</surname> <given-names>R.</given-names></name> <name><surname>Louzoun</surname> <given-names>Y.</given-names></name></person-group> (<year>2010</year>). <article-title>Random distance dependent attachment as a model for neural network generation in the <italic>Caenorhabditis elegans</italic></article-title>. <source>Bioinformatics</source> <volume>26</volume>, <fpage>647</fpage>&#x02013;<lpage>652</lpage>.<pub-id pub-id-type="doi">10.1093/bioinformatics/btq015</pub-id><pub-id pub-id-type="pmid">20081220</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Izhikevich</surname> <given-names>E. M.</given-names></name></person-group> (<year>2003</year>). <article-title>Simple model of spiking neurons</article-title>. <source>IEEE Trans. Neural Netw.</source> <volume>14</volume>, <fpage>1569</fpage>&#x02013;<lpage>1572</lpage>.<pub-id pub-id-type="doi">10.1109/TNN.2003.820440</pub-id><pub-id pub-id-type="pmid">18244602</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koene</surname> <given-names>R. A.</given-names></name> <name><surname>Tijms</surname> <given-names>B.</given-names></name> <name><surname>van Hees</surname> <given-names>P.</given-names></name> <name><surname>Postma</surname> <given-names>F.</given-names></name> <name><surname>de Ridder</surname> <given-names>A.</given-names></name> <name><surname>Ramakers</surname> <given-names>G. J. A.</given-names></name> <name><surname>van Pelt</surname> <given-names>J.</given-names></name> <name><surname>van Ooyen</surname> <given-names>A.</given-names></name></person-group> (<year>2009</year>). <article-title>NETMORPH: a framework for the stochastic generation of large scale neuronal networks with realistic neuron morphologies</article-title>. <source>Neuroinformatics</source> <volume>7</volume>, <fpage>195</fpage>&#x02013;<lpage>210</lpage>.<pub-id pub-id-type="doi">10.1007/s12021-009-9052-3</pub-id><pub-id pub-id-type="pmid">19672726</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kriegstein</surname> <given-names>A. R.</given-names></name> <name><surname>Dichter</surname> <given-names>M. A.</given-names></name></person-group> (<year>1983</year>). <article-title>Morphological classification of rat cortical neurons in cell culture</article-title>. <source>J. Neurosci.</source> <volume>3</volume>, <fpage>1634</fpage>&#x02013;<lpage>1647</lpage>.<pub-id pub-id-type="pmid">6875660</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kriener</surname> <given-names>B.</given-names></name> <name><surname>Tetzlaff</surname> <given-names>T.</given-names></name> <name><surname>Aertsen</surname> <given-names>A.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name> <name><surname>Rotter</surname> <given-names>S.</given-names></name></person-group> (<year>2008</year>). <article-title>Correlations and population dynamics in cortical networks</article-title>. <source>Neural Comput.</source> <volume>20</volume>, <fpage>2185</fpage>&#x02013;<lpage>2226</lpage>.<pub-id pub-id-type="doi">10.1162/neco.2008.02-07-474</pub-id><pub-id pub-id-type="pmid">18439141</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kumar</surname> <given-names>A.</given-names></name> <name><surname>Rotter</surname> <given-names>S.</given-names></name> <name><surname>Aertsen</surname> <given-names>A.</given-names></name></person-group> (<year>2008</year>). <article-title>Conditions for propagating synchronous spiking and asynchronous firing rates in a cortical network model</article-title>. <source>J. Neurosci.</source> <volume>28</volume>, <fpage>5268</fpage>&#x02013;<lpage>5280</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2542-07.2008</pub-id><pub-id pub-id-type="pmid">18480283</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Latham</surname> <given-names>P. E.</given-names></name> <name><surname>Richmond</surname> <given-names>B. J.</given-names></name> <name><surname>Nelson</surname> <given-names>P. G.</given-names></name> <name><surname>Nirenberg</surname> <given-names>S.</given-names></name></person-group> (<year>2000</year>). <article-title>Intrinsic dynamics in neuronal networks. I. Theory</article-title>. <source>J. Neurophysiol.</source> <volume>83</volume>, <fpage>808</fpage>&#x02013;<lpage>827</lpage>.<pub-id pub-id-type="pmid">10669496</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>M.</given-names></name> <name><surname>Chen</surname> <given-names>X.</given-names></name> <name><surname>Li</surname> <given-names>X.</given-names></name> <name><surname>Ma</surname> <given-names>B.</given-names></name> <name><surname>Vit&#x000E1;nyi</surname> <given-names>P. M. B.</given-names></name></person-group> (<year>2004</year>). <article-title>The similarity metric</article-title>. <source>IEEE Trans. Inf. Theory</source> <volume>50</volume>, <fpage>3250</fpage>&#x02013;<lpage>3264</lpage>.</citation></ref>
<ref id="B22"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>M.</given-names></name> <name><surname>Vitanyi</surname> <given-names>P.</given-names></name></person-group> (<year>1997</year>). <source>An Introduction to Kolmogorov Complexity and Its Applications</source>, <edition>2nd Edn</edition>. <publisher-loc>New York</publisher-loc>: <publisher-name>Springer-Verlag</publisher-name>.</citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marom</surname> <given-names>S.</given-names></name> <name><surname>Shahaf</surname> <given-names>G.</given-names></name></person-group> (<year>2002</year>). <article-title>Development, learning and memory in large random networks of cortical neurons: lessons beyond anatomy</article-title>. <source>Q. Rev. Biophys.</source> <volume>35</volume>, <fpage>63</fpage>&#x02013;<lpage>87</lpage>.<pub-id pub-id-type="pmid">11997981</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Neel</surname> <given-names>D. L.</given-names></name> <name><surname>Orrison</surname> <given-names>M. E.</given-names></name></person-group> (<year>2006</year>). <article-title>The linear complexity of a graph</article-title>. <source>Electron. J. Comb.</source> <volume>13</volume>, <fpage>1</fpage>&#x02013;<lpage>19</lpage>.</citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Newman</surname> <given-names>M. E. J.</given-names></name></person-group> (<year>2003</year>). <article-title>The structure and function of complex networks</article-title>. <source>SIAM Rev.</source> <volume>45</volume>, <fpage>167</fpage>&#x02013;<lpage>256</lpage>.<pub-id pub-id-type="doi">10.1137/S003614450342480</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nykter</surname> <given-names>M.</given-names></name> <name><surname>Price</surname> <given-names>N. D.</given-names></name> <name><surname>Larjo</surname> <given-names>A.</given-names></name> <name><surname>Aho</surname> <given-names>T.</given-names></name> <name><surname>Kauffman</surname> <given-names>S. A.</given-names></name> <name><surname>Yli-Harja</surname> <given-names>O.</given-names></name> <name><surname>Shmulevich</surname> <given-names>I.</given-names></name></person-group> (<year>2008</year>). <article-title>Critical networks exhibit maximal information diversity in structure-dynamics relationships</article-title>. <source>Phys. Rev. Lett.</source> <volume>100</volume>, <fpage>058702</fpage>.<pub-id pub-id-type="pmid">18352443</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ostojic</surname> <given-names>S.</given-names></name> <name><surname>Brunel</surname> <given-names>N.</given-names></name> <name><surname>Hakim</surname> <given-names>V.</given-names></name></person-group> (<year>2009</year>). <article-title>How connectivity, background activity, and synaptic properties shape the cross-correlations between spike trains</article-title>. <source>J. Neurosci.</source> <volume>29</volume>, <fpage>10234</fpage>&#x02013;<lpage>10253</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.1275-09.2009</pub-id><pub-id pub-id-type="pmid">19692598</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Paninski</surname> <given-names>L.</given-names></name></person-group> (<year>2003</year>). <article-title>Estimation of entropy and mutual information</article-title>. <source>Neural Comput.</source> <volume>15</volume>, <fpage>1191</fpage>&#x02013;<lpage>1253</lpage>.<pub-id pub-id-type="doi">10.1162/089976603321780272</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rapp</surname> <given-names>P. E.</given-names></name> <name><surname>Zimmerman</surname> <given-names>I. D.</given-names></name> <name><surname>Vining</surname> <given-names>E. P.</given-names></name> <name><surname>Cohen</surname> <given-names>N.</given-names></name> <name><surname>Albano</surname> <given-names>A. M.</given-names></name> <name><surname>Jimenez-Montano</surname> <given-names>M. A.</given-names></name></person-group> (<year>1994</year>). <article-title>The algorithmic complexity of neural spike trains increases during focal seizures</article-title>. <source>J. Neurosci.</source> <volume>14</volume>, <fpage>4731</fpage>&#x02013;<lpage>4739</lpage>.<pub-id pub-id-type="pmid">8046447</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rotter</surname> <given-names>S.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name></person-group> (<year>1999</year>). <article-title>Exact digital simulation of time-invariant linear systems with applications to neuronal modeling</article-title>. <source>Biol. Cybern.</source> <volume>81</volume>, <fpage>381</fpage>&#x02013;<lpage>402</lpage>.<pub-id pub-id-type="doi">10.1007/s004220050570</pub-id><pub-id pub-id-type="pmid">10592015</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shadlen</surname> <given-names>M. N.</given-names></name> <name><surname>Newsome</surname> <given-names>W. T.</given-names></name></person-group> (<year>1998</year>). <article-title>The variable discharge of cortical neurons: implications for connectivity, computation, and information coding</article-title>. <source>J. Neurosci.</source> <volume>18</volume>, <fpage>3870</fpage>&#x02013;<lpage>3896</lpage>.<pub-id pub-id-type="pmid">9570816</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Soriano</surname> <given-names>J.</given-names></name> <name><surname>Rodr&#x000ED;gez Martinez</surname> <given-names>M.</given-names></name> <name><surname>Tlusty</surname> <given-names>T.</given-names></name> <name><surname>Moses</surname> <given-names>E.</given-names></name></person-group> (<year>2008</year>). <article-title>Development of input connections in neural cultures</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A.</source> <volume>105</volume>, <fpage>13758</fpage>&#x02013;<lpage>13763</lpage>.<pub-id pub-id-type="pmid">18772389</pub-id></citation></ref>
<ref id="B33"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Sporns</surname> <given-names>O.</given-names></name></person-group> (<year>2011</year>). <source>Networks of the Brain</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>The MIT Press</publisher-name>.</citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsodyks</surname> <given-names>M.</given-names></name> <name><surname>Uziel</surname> <given-names>A.</given-names></name> <name><surname>Markram</surname> <given-names>H.</given-names></name></person-group> (<year>2000</year>). <article-title>Synchrony generation in recurrent networks with frequency-dependent synapses</article-title>. <source>J. Neurosci.</source> <volume>20</volume>, <fpage>1</fpage>&#x02013;<lpage>5</lpage>.<pub-id pub-id-type="pmid">10627575</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tuckwell</surname> <given-names>H. C.</given-names></name></person-group> (<year>2006</year>). <article-title>Cortical network modeling: analytical methods for firing rates and some properties of networks of LIF neurons</article-title>. <source>J. Physiol. Paris</source> <volume>100</volume>, <fpage>88</fpage>&#x02013;<lpage>99</lpage>.<pub-id pub-id-type="doi">10.1016/j.jphysparis.2006.09.001</pub-id><pub-id pub-id-type="pmid">17064883</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Voges</surname> <given-names>N.</given-names></name> <name><surname>Guijarro</surname> <given-names>C.</given-names></name> <name><surname>Aertsen</surname> <given-names>A.</given-names></name> <name><surname>Rotter</surname> <given-names>S.</given-names></name></person-group> (<year>2010</year>). <article-title>Models of cortical networks with long-range patchy projections</article-title>. <source>J. Comput. Neurosci.</source> <volume>28</volume>, <fpage>137</fpage>&#x02013;<lpage>154</lpage>.<pub-id pub-id-type="pmid">19866352</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wagenaar</surname> <given-names>D. A.</given-names></name> <name><surname>Pine</surname> <given-names>J.</given-names></name> <name><surname>Potter</surname> <given-names>S. M.</given-names></name></person-group> (<year>2006</year>). <article-title>An extremely rich repertoire of bursting patterns during the development of cortical cultures</article-title>. <source>BMC Neurosci.</source> <volume>7</volume>, <fpage>11</fpage>&#x02013;<lpage>29</lpage>.<pub-id pub-id-type="doi">10.1186/1471-2202-7-11</pub-id><pub-id pub-id-type="pmid">16464257</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Watts</surname> <given-names>D. J.</given-names></name> <name><surname>Strogatz</surname> <given-names>S. H.</given-names></name></person-group> (<year>1998</year>). <article-title>Collective dynamics of small-world networks</article-title>. <source>Nature</source> <volume>393</volume>, <fpage>440</fpage>&#x02013;<lpage>442</lpage>.<pub-id pub-id-type="doi">10.1038/30918</pub-id><pub-id pub-id-type="pmid">9623998</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wheeler</surname> <given-names>B. C.</given-names></name> <name><surname>Brewer</surname> <given-names>G. J.</given-names></name></person-group> (<year>2010</year>). <article-title>Designing neural networks in culture</article-title>. <source>Proc. IEEE</source> <volume>98</volume>, <fpage>398</fpage>&#x02013;<lpage>406</lpage>.<pub-id pub-id-type="doi">10.1109/JPROC.2009.2039029</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn id="fn1"><p><sup>1</sup><uri xlink:href="http://7-zip.org/">http://7-zip.org/</uri></p></fn>
<fn id="fn2"><p><sup>2</sup><uri xlink:href="http://7-zip.org/">http://7-zip.org/</uri></p></fn>
<fn id="fn3"><p><sup>3</sup><uri xlink:href="http://7-zip.org/">http://7-zip.org/</uri></p></fn>
<fn id="fn4"><p><sup>4</sup><uri xlink:href="http://7-zip.org/">http://7-zip.org/</uri></p></fn>
</fn-group>
</back>
</article>