<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Comput. Neurosci.</journal-id>
<journal-title>Frontiers in Computational Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Comput. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5188</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fncom.2023.1092185</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Jeon</surname> <given-names>Ikhwan</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/2090220/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Kim</surname> <given-names>Taegon</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1498039/overview"/>
</contrib>
</contrib-group>
<aff><institution>Brain Science Institute, Korea Institute of Science and Technology</institution>, <addr-line>Seoul</addr-line>, <country>Republic of Korea</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Jiyoung Kang, Pukyong National University, Republic of Korea</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Chang-Eop Kim, Gachon University, Republic of Korea; Seok Jun Hong, Sungkyunkwan University, Republic of Korea</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Taegon Kim <email>taegon.kim&#x00040;kist.re.kr</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>28</day>
<month>06</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>17</volume>
<elocation-id>1092185</elocation-id>
<history>
<date date-type="received">
<day>07</day>
<month>11</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>12</day>
<month>06</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2023 Jeon and Kim.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Jeon and Kim</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license> </permissions>
<abstract>
<p>Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on the understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we described recent attempts to build a biologically plausible neural network by following neuroscientifically similar strategies of neural network optimization or by implanting the outcome of the optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we proposed a formalism of the relationship between the set of objectives that neural networks attempt to achieve, and neural network classes categorized by how closely their architectural features resemble those of BNN. This formalism is expected to define the potential roles of top-down and bottom-up approaches for building a biologically plausible neural network and offer a map helping the navigation of the gap between neuroscience and AI engineering.</p></abstract>
<kwd-group>
<kwd>bottom-up approach</kwd>
<kwd>biologically plausible neural network</kwd>
<kwd>optimization of neural network</kwd>
<kwd>biological neural network supremacy</kwd>
<kwd>neural network architecture</kwd>
<kwd>balanced network</kwd>
<kwd>dendritic computation</kwd>
<kwd>Dale&#x00027;s principle</kwd>
</kwd-group>
<contract-sponsor id="cn001">National Research Foundation of Korea<named-content content-type="fundref-id">10.13039/501100003725</named-content></contract-sponsor>
<contract-sponsor id="cn002">Korea Institute of Science and Technology<named-content content-type="fundref-id">10.13039/501100003693</named-content></contract-sponsor>
<counts>
<fig-count count="3"/>
<table-count count="0"/>
<equation-count count="1"/>
<ref-count count="266"/>
<page-count count="18"/>
<word-count count="17568"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Turing&#x00027;s idea of building a thinking machine by replacing an organism with artifacts, part by part (Turing, <xref ref-type="bibr" rid="B241">1948</xref>), has inspired scientists and engineers because it was the first clear statement of a bottom-up approach toward building artificial intelligence (AI). In general, the term &#x0201C;bottom-up&#x0201D; refers to the directionality of an approach that begins with specifics or minutiae to arrive at a comprehensive solution. Thus, the bottom-up approach to developing a brain-like intelligence system begins with spatiotemporal local properties and their organized combinations. Local properties reside in neurons and synapses, namely, the single computational units, and their organized combinations directly constitute the connectivity and architecture of a neural circuit. Because these details and their effects are covered by the discipline of neuroscience, developing AI from the ground up using an understanding of neuroscience is straightforward. However, even the latest neuroscience lacks comprehensive knowledge of neural circuits, their functions, and the mapping between them, indicating that an operating principle of neural networks is absent in practice (Goodfellow et al., <xref ref-type="bibr" rid="B76">2016</xref>; Jonas and Kording, <xref ref-type="bibr" rid="B112">2017</xref>). Thus, experimental attempts to translate up-to-date piecemeal information on various characteristics of neurons, synapses, and neural circuits into AI are the only viable options under these circumstances. Given that replacing a component of an artificial neural network (ANN) with its counterpart from a biological neural network (BNN) generally does not outperform the original ANN and is often not very influential, a bottom-up approach appears infeasible and impractical, although this does not imply inherent impossibility, as Turing contended (Turing, <xref ref-type="bibr" rid="B241">1948</xref>).</p>
<p>Nonetheless, we believe exploring the gap between neuroscience and AI engineering using a bottom-up approach should be encouraged. Although no unified principle governing multiscale neural network features has been found, there are several useful models describing phenomena at different scales. Good examples include the Hebbian learning principle and its modifications, encompassing various forms of long-term synaptic plasticity (Dayan and Abbott, <xref ref-type="bibr" rid="B49">2001</xref>). Considering the history of AI development, it is unsurprising that an ANN incorporates specific principles from neuroscience and computational neuroscience. The birth of successful modern approaches, such as deep neural networks and their learning algorithms, is partly attributable to this type of strategy (Goodfellow et al., <xref ref-type="bibr" rid="B76">2016</xref>). Furthermore, given the massive amount of resources required to operate such systems (Schuman et al., <xref ref-type="bibr" rid="B203">2022</xref>), further information behind the efficient computation of the BNN should be uncovered and implanted into the ANN. To accelerate exploration using a bottom-up approach, cooperation between neuroscientists and AI engineers can be promoted through mutual benefits. One of the goals of neuroscience is to reveal the neural network mechanisms underlying a particular mental state or behavior, mechanisms that a neural network principle can encapsulate. This process requires confirmation by observations made in a controlled setting or laboratory experiments; however, because of their complexity, the brain and neural circuits are often inaccessible in a properly controlled manner. Furthermore, confirming a unified operating mechanism is challenging because of the low practicality of long-term and large-scale manipulation of the brain and neural system. 
AI engineering can serve as a valuable analogical model spanning several spatiotemporal scales, from a cellular level to behavioral consequences. Hence, an ANN based on the BNN features provides a proof-of-concept for a particular neural network principle, demonstrating how a neural circuit produces a specific behavior. On the other hand, the neural network principle contributes to a better understanding of how ANNs work. Considering that currently successful ANNs require improved explainability and interpretability (Gunning et al., <xref ref-type="bibr" rid="B83">2019</xref>; Vilone and Longo, <xref ref-type="bibr" rid="B249">2021</xref>; Nussberger et al., <xref ref-type="bibr" rid="B174">2022</xref>), bottom-up approaches equipped with neural network principles can help AI designers better understand the outcomes of their ANN models. Thus, this review preferentially introduces studies that focused on the conceptual similarity between the components of a given ANN and its corresponding BNN, regardless of the model&#x00027;s performance on the tasks designed for ANNs.</p>
<p>On the other hand, because other routes toward well-functioning intelligence systems have been successful, such as the recent advancement of large-scale language models (Devlin et al., <xref ref-type="bibr" rid="B54">2019</xref>; Brown et al., <xref ref-type="bibr" rid="B30">2020</xref>) and text-to-image models (Ramesh et al., <xref ref-type="bibr" rid="B189">2022</xref>; Rombach et al., <xref ref-type="bibr" rid="B197">2022</xref>), approaches under purely engineering goals seem to dispense with the need for a bottom-up approach. However, such approaches offer little explanation of how the brain achieves its many cognitive functions with the BNN, contrary to the mutual benefit expected from the bottom-up approach. Top-down approaches such as &#x0201C;brain-inspired&#x0201D; AI (Chen et al., <xref ref-type="bibr" rid="B41">2019</xref>; Robertazzi et al., <xref ref-type="bibr" rid="B194">2022</xref>; Zeng et al., <xref ref-type="bibr" rid="B256">2022</xref>) partly enhance our understanding of the brain, especially the cognitive process of a certain task, and simultaneously improve performance, although their goals do not reach the circuit-level mechanisms of the BNN. At the other extreme, attempts to emulate the BNN have been made to copy a mesoscopic neural circuit and demonstrate that the copied BNN indeed shows the same activity measured in experiments (Markram et al., <xref ref-type="bibr" rid="B150">2015</xref>). Such emulations are useful for replacing invasive experiments in the future and for simulating virtually controlled experiments. However, these detailed models are not directly applicable to AI systems because of their low cost-effectiveness and relatively simple output pattern, despite large-scale computation with a large number of parameters to be optimized. Therefore, this review focuses on studies that consider the mutual benefits between scientific and engineering goals at a proper level of BNN abstraction.</p>
<p>Considering the rudiments of deep neural networks, the first step is to construct a neural network and select a training algorithm after determining the task and training dataset. Unlike with an ANN, nature handles the search for a BNN architecture and builds the training strategy. Thus, we begin the review with the ANN&#x00027;s architecture search and training algorithm, which are inspired by the natural processes of network structure optimization and its updates. As the optimization process continues, the properties of the single computational units and the architecture of the neural circuit are updated, which can be viewed as the outcome of successful optimization. This implies that understanding BNN properties and their impact on computation can be advantageous because the BNN properties studied have already been refined by nature. Hence, the following sections of this review focus on the montage of useful BNN properties and the efforts related to the direct utilization of BNN properties in ANN design (summarized in <xref ref-type="fig" rid="F1">Figure 1</xref>).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Summary figure of the review. <bold>(Left)</bold> The optimization processes of a neural network. Arrows represent the involvement of each process with time. <bold>(Right)</bold> The outcome of the optimization.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-17-1092185-g0001.tif"/>
</fig>
<p>To search for proper links between neuroscience and AI engineering in a systematic manner, as opposed to randomly, we defined the set of objectives that neural networks try to achieve as &#x0201C;the problem space&#x0201D; and categorized neural network models based on how closely their architectural features resemble those of the BNN. Such formalization may offer an approximate map, including the limitations of the ANN and what we should aim for when constructing a biologically plausible neural network. Using this map, we proposed the potential roles of neuroscience and AI engineering and their cooperative workflow pipeline. We believe that this pipeline will encourage reciprocal advantages by demonstrating how top-down and bottom-up approaches from neuroscience can offer useful information for AI engineering and, conversely, how AI engineering advances our understanding of the brain and its function.</p>
</sec>
<sec id="s2">
<title>2. Optimization strategy: multiscale credit assignment</title>
<p>All biologically intelligent agents interact with their environments and attempt to survive and reproduce. A combination of hereditary mutations and epigenetic adaptations builds up a biological agent&#x00027;s fitness, and the agents are eventually evaluated for survival (or death) and reproduction (or nonproliferation). One of the essential organs in an individual agent is the brain, which is optimized using the same process (Tosches, <xref ref-type="bibr" rid="B231">2017</xref>). Although the entire optimization process can be understood in parts by dividing it into different temporal scales, each part still encounters the common conundrum of how much each spatiotemporal local parameter should be updated to improve fitness. Thus, this issue can be described as a multiscale credit assignment problem (Valiant, <xref ref-type="bibr" rid="B244">2013</xref>). Assuming that the properties of the computational units, network architecture, and overall performance of the network are the outcomes of BNN optimization, it is worthwhile to imitate this strategy to achieve superior biologically plausible neural networks. In this review, we simply hypothesized that a longer time-scale optimization relates to the architectural search process through evolution and development, whereas a shorter-scale optimization corresponds to the learning process in a neural circuit or brain.</p>
<sec>
<title>2.1. Architecture search: evolution and development</title>
<p>The process of evolution includes the development and learning of a neural circuit; therefore, it is a credit assignment process with the longest temporal scale. Genes that must be evaluated for fitness are prepared by mutations, and the neural circuit variants built from these genes are eventually tested by natural selection (Tosches, <xref ref-type="bibr" rid="B231">2017</xref>; Hasson et al., <xref ref-type="bibr" rid="B91">2020</xref>). The artificial counterpart of the mutation-selection process, namely, the evolutionary algorithm (EA), has been applied in numerous domains for decades, and &#x0201C;neuroevolution&#x0201D; refers to the application of EAs to neural networks (Yao and Liu, <xref ref-type="bibr" rid="B255">1998</xref>; Stanley et al., <xref ref-type="bibr" rid="B220">2019</xref>; Galv&#x000E1;n and Mooney, <xref ref-type="bibr" rid="B72">2021</xref>). Although the neuroevolution scheme simplifies or omits numerous aspects of the biological evolution process, it successfully captures the essentials and performs well in rediscovering BNN properties (Risi and Stanley, <xref ref-type="bibr" rid="B193">2014</xref>) and optimizing the ANN architecture (Liang et al., <xref ref-type="bibr" rid="B135">2018</xref>; Zoph et al., <xref ref-type="bibr" rid="B265">2018</xref>). In addition to structural connectivity, network architecture comprises the functional features of a network, such as the activation function of each neuron and its hyperparameters or initial synaptic weights. For example, the hyperparameters of different neuronal activation functions can be optimized using an EA (Cui et al., <xref ref-type="bibr" rid="B48">2019</xref>). In deep learning, EAs and reinforcement learning have been widely employed for automated network model selection, termed neural architecture search (NAS; Elsken et al., <xref ref-type="bibr" rid="B63">2019</xref>; Liu Y. et al., <xref ref-type="bibr" rid="B139">2021</xref>).</p>
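<p>As a minimal illustration of the mutation-selection loop underlying neuroevolution and EA-based hyperparameter search, consider the toy sketch below. It is our illustration, not a model from the studies cited: a flat parameter vector plays the role of a genome, and a hand-made quadratic fitness stands in for task performance.</p>

```python
import random

def fitness(genome):
    # Toy fitness: how well the genome (a parameter vector) matches a
    # hidden target; in a real NAS setting this would be task performance.
    target = [0.5, -1.2, 3.0]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=50, genome_len=3, generations=200, sigma=0.3, seed=0):
    rng = random.Random(seed)
    # Random initial population ("mutation-prepared" variants).
    pop = [[rng.uniform(-5, 5) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]          # truncation selection
        # Elites survive unchanged; the rest are Gaussian mutants of parents.
        pop = parents + [
            [g + rng.gauss(0, sigma) for g in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

best = evolve()
```

Because elites are carried over unchanged, the best fitness is monotonically non-decreasing; real neuroevolution systems add crossover, speciation, and indirect genome encodings on top of this skeleton.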
<p>In a BNN, developmental processes add diversity or constraints to neural networks through their stochastic nature or spatial arrangement, respectively (Smith, <xref ref-type="bibr" rid="B211">1999</xref>; Tosches, <xref ref-type="bibr" rid="B231">2017</xref>; Luo, <xref ref-type="bibr" rid="B146">2021</xref>), in addition to a genetic code-driven architecture search. During development, neurons are ready to grow and connect to others, controlled by internally produced proteins (genetic codes) and external cues. Biological studies have revealed sequentially proceeding developmental processes: neurulation, proliferation, cell migration, differentiation, synaptogenesis, synapse pruning, and myelination (Tierney and Nelson, <xref ref-type="bibr" rid="B228">2009</xref>). The first three steps indicate the orchestrated positioning of neuronal nodes in space, and the subsequent processes drive the formation of proper connections. Although genetic codes can drive the overall coordination of neuronal nodes in a three-dimensional space, chemical cues, such as morphogens, are constantly exposed to stochastic fluctuations (van Ooyen, <xref ref-type="bibr" rid="B246">2011</xref>; Goodhill, <xref ref-type="bibr" rid="B77">2018</xref>; Razetti et al., <xref ref-type="bibr" rid="B192">2018</xref>; Llorca et al., <xref ref-type="bibr" rid="B142">2019</xref>; Staii, <xref ref-type="bibr" rid="B219">2022</xref>). Additionally, considering that synaptogenesis randomly overproduces synapses and that connectivity is polished by pruning and myelination (van Ooyen, <xref ref-type="bibr" rid="B246">2011</xref>; Goodhill, <xref ref-type="bibr" rid="B77">2018</xref>; Razetti et al., <xref ref-type="bibr" rid="B192">2018</xref>), probabilistic diversification is highly likely to intervene in differentiating connectivity. Such stochasticity depends on the environment to which the brain is exposed. 
Thus, the common skeleton of the BNN architecture across individuals is an essential structure for performing naturalistic tasks stably, and the variability in each individual agent is a sign of adaptation to different environments. This implies that, by introducing such variability, we may be able to expand the search range in the parametric space of a neural network compared with relying only on genetic codes and mutations.</p>
<p>Although the evolution and development of a BNN have potential advantages for ANN construction, direct and thorough imitation of these processes does not necessarily guarantee better ANN performance. First, when nature searches for answers through evolution and development, it utilizes an extremely efficient parallel search by preparing variable groups of individuals and combinations between groups (Foster and Baker, <xref ref-type="bibr" rid="B67">2004</xref>; Traulsen and Nowak, <xref ref-type="bibr" rid="B236">2006</xref>). To emulate such a process on a conventional computer, each individual needs to be stored in memory and evolved through a series of calculations, greatly increasing the computational burden. Thus, some processes should be simplified, and we need to capture the essential parts, as the neuroevolution approach does, although an ensemble neural network strategy that shares the concept of group selection has been applied to construct and optimize ANNs (Krogh and Vedelsby, <xref ref-type="bibr" rid="B124">1994</xref>; Zhou et al., <xref ref-type="bibr" rid="B263">2002</xref>; Liu and Yao, <xref ref-type="bibr" rid="B140">2008</xref>; Zhang S. et al., <xref ref-type="bibr" rid="B258">2020</xref>). The second aspect is platform dependency; as mentioned above, the optimization processes occurring in the brain depend on the spatial arrangement of computing units and chemicals as well as genetic codes, which implies that the distance between neurons can limit wiring (van Ooyen, <xref ref-type="bibr" rid="B246">2011</xref>; Goodhill, <xref ref-type="bibr" rid="B77">2018</xref>). Because the spatial arrangement of neurons and wiring costs do not matter in the simulation of an ANN, the direct translation of evolutionary and developmental processes from the BNN is not an effective option. 
Thus, only when we construct an ANN on a platform where wiring cost can be defined may the emulation of BNN formation through direct imitation of evolution and development offer a better architecture search algorithm. Third, evolution and development are primarily driven by the environment. In contrast to the well-specified task and dataset of an ANN, the environment to which the BNN has to adapt is vast and carries an intensive amount of information, which blurs the boundary of the information essential for training specific neural circuits. A notable recent study circumvented these problems and demonstrated that simplified developmental and evolutionary processes can select a biologically plausible neural circuit (Hiratani and Latham, <xref ref-type="bibr" rid="B97">2022</xref>). This study utilized a rather simple feedforward neural network approximating olfactory information in an environment as a teacher network to train a student network corresponding to a biological olfactory circuit with an expansion-contraction coding architecture; eventually, such a simple approach successfully reproduced the scaling laws observed in BNNs. This study showed a model case of how a mutually beneficial investigation can be designed to enhance the understanding of both BNN and ANN.</p>
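<p>The teacher-student scheme used in such studies can be caricatured in a few lines. The sketch below is our hedged illustration of the general setup only, not the cited model: a fixed &#x0201C;teacher&#x0201D; map stands in for the environment's input-output statistics (a linear map here, whereas the actual study uses an expansion-contraction olfactory architecture), and a &#x0201C;student&#x0201D; of the same shape is trained solely on the teacher's responses.</p>

```python
import random

random.seed(1)

# Hypothetical teacher: a fixed linear map standing in for the
# environmental statistics the circuit must approximate.
teacher_w = [1.5, -2.0, 0.7]

def teacher(x):
    return sum(w * xi for w, xi in zip(teacher_w, x))

# Student: same shape as the teacher, but initialized at zero and trained
# only on the teacher's responses to random inputs (online LMS updates).
student_w = [0.0, 0.0, 0.0]
lr = 0.05
for step in range(2000):
    x = [random.gauss(0, 1) for _ in range(3)]
    err = sum(w * xi for w, xi in zip(student_w, x)) - teacher(x)
    student_w = [w - lr * err * xi for w, xi in zip(student_w, x)]
```

After training, the student's weights closely match the teacher's, which is the property such studies exploit to ask how performance scales with circuit size.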
</sec>
<sec>
<title>2.2. Learning algorithm</title>
<p>Once the fundamental architecture is determined by genetic codes and developmental processes, as described above, the BNN begins to be rapidly trained by interacting with the environment. Both structural and functional changes are involved in the biological implementation of this training process, which we call learning. Structural changes include neurogenesis, neuronal death, synaptogenesis, and pruning, while functional changes indicate the plasticity of neurons and synapses in the brain. Considering that local chemical and physiological mechanisms mediate these changes, achieving global adaptation through learning is a problem that the BNN must resolve, which we refer to as the populational credit assignment problem of computing units (Friedrich et al., <xref ref-type="bibr" rid="B69">2011</xref>; Zou et al., <xref ref-type="bibr" rid="B266">2023</xref>). Additionally, when instruction information for a proper change is provided by a circuit mechanism, such as a feedback connection, it is accompanied by an unavoidable delay that eventually causes a temporal credit assignment problem (Friedrich et al., <xref ref-type="bibr" rid="B69">2011</xref>; Zou et al., <xref ref-type="bibr" rid="B266">2023</xref>).</p>
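<p>One common way to formalize the temporal credit assignment problem caused by such delays is a three-factor rule with eligibility traces. The sketch below is our illustration of the general idea, not a model from the cited studies: local Hebbian coincidences are stored in a decaying per-synapse trace, and a later global reward signal converts the remaining trace into an actual weight change, bridging the delay between activity and its evaluation.</p>

```python
# Minimal three-factor rule sketch: pre/post coincidences accumulate in a
# decaying eligibility trace; a delayed reward gates the weight update.
def run_trial(events, reward_time, decay=0.9, lr=0.5):
    w, trace = 0.0, 0.0
    for t, (pre, post) in enumerate(events):
        trace = decay * trace + pre * post  # local coincidence -> eligibility
        if t == reward_time:
            w += lr * trace                 # global reward gates the update
    return w

# Coincident pre/post activity at t=0; the reward only arrives at t=3,
# yet the synapse is still credited (through the decayed trace).
w_delayed = run_trial([(1, 1), (0, 0), (0, 0), (0, 0)], reward_time=3)
# Reward without any preceding coincidence changes nothing.
w_none = run_trial([(0, 0), (0, 0), (0, 0), (0, 0)], reward_time=3)
```

The decay constant sets how long a synapse remains eligible, i.e., the temporal window over which delayed instruction signals can still assign credit.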
<sec>
<title>2.2.1. Local attributes: structural changes</title>
<p>Structural changes in the brain occur throughout the lifespan of an animal. However, considering that neurogenesis is a rare event and is observed, if at all, in confined brain regions in adults (Sorrells et al., <xref ref-type="bibr" rid="B216">2018</xref>, <xref ref-type="bibr" rid="B217">2021</xref>; Abdissa et al., <xref ref-type="bibr" rid="B1">2020</xref>; Moreno-Jim&#x000E9;nez et al., <xref ref-type="bibr" rid="B165">2021</xref>), and that significant neuronal death is expected to take place in old age or in a pathological brain (Mattson and Magnus, <xref ref-type="bibr" rid="B153">2006</xref>), simply assuming that the number of nodes in a neural network is determined by development is within the range of biological plausibility. In brain regions where significant neurogenesis can be observed, such as the dentate gyrus in the hippocampus, a notable study reported that newly added neuronal nodes could contribute to neural network performance by working as a neural regularizer to avoid overfitting (Tran et al., <xref ref-type="bibr" rid="B234">2022</xref>). In contrast to the addition of neuronal nodes, neuronal death may be superficially interpreted as the negative regulation of neural networks, as observed in the aging or degenerative pathology of the brain (Mattson and Magnus, <xref ref-type="bibr" rid="B153">2006</xref>). However, considering that some cognitive features can improve with age (Murman, <xref ref-type="bibr" rid="B167">2015</xref>; Ver&#x000ED;ssimo et al., <xref ref-type="bibr" rid="B248">2022</xref>), well-regulated neuronal death may not directly indicate the total dysfunction of a neural network. Two potential biological mechanisms account for this paradoxical positive regulation by removing neuronal nodes. 
First, as observed in biological studies (Kuhn et al., <xref ref-type="bibr" rid="B125">2001</xref>; Merlo et al., <xref ref-type="bibr" rid="B157">2019</xref>) and implied by computational studies (Barrett et al., <xref ref-type="bibr" rid="B16">2016</xref>; Tan et al., <xref ref-type="bibr" rid="B223">2020</xref>; Terziyan and Kaikova, <xref ref-type="bibr" rid="B225">2022</xref>), a biological system often prepares compensatory mechanisms against sudden changes, which can confer a temporary or partial advantage in neural computation. Another possibility is an intrinsic advantage achieved by removing neuronal nodes. In ANNs, similar negative structural regulation has already been utilized in the form of &#x0201C;drop-out&#x0201D; or &#x0201C;sparsification&#x0201D; by intentionally removing neuronal nodes (Goodfellow et al., <xref ref-type="bibr" rid="B76">2016</xref>; Tan et al., <xref ref-type="bibr" rid="B223">2020</xref>; Hoefler et al., <xref ref-type="bibr" rid="B99">2022</xref>). Because the cognitive advantages accompanying gradually increasing neuronal death, and their circuit mechanisms, remain largely unexplored, ANNs that include neuronal death and show partially or temporarily improved performance can offer new insights for both neuroscience and AI engineering.</p>
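<p>The drop-out analogy to transient neuronal removal can be made concrete with a minimal sketch (ours; it uses the classic formulation in which test-time activations are rescaled, one of several equivalent conventions): during training each unit is silenced with probability p, and at test time activations are scaled by (1 &#x02212; p) so that the expected output is unchanged.</p>

```python
import random

def dropout(activations, p, rng, train=True):
    """Classic dropout: silence units with probability p during training;
    rescale by (1 - p) at test time to preserve the expected activation."""
    if train:
        return [0.0 if rng.random() < p else a for a in activations]
    return [(1 - p) * a for a in activations]

rng = random.Random(0)
h = [1.0] * 10000
# Training pass: roughly half the units are "dead" on any given step.
avg_train = sum(dropout(h, 0.5, rng)) / len(h)
# Test pass: no units removed, activations deterministically scaled.
avg_test = sum(dropout(h, 0.5, rng, train=False)) / len(h)
```

Both averages come out near 0.5, illustrating why randomly removing nodes during training need not degrade, and can regularize, the network's expected computation.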
<p>Unlike the structural changes caused by neuronal addition or removal, new synapse formation and synaptic elimination by pruning, which are the addition and removal of edges in a neural network, occur more generally in the brain. The axon of a presynaptic neuron and the dendrite of a postsynaptic neuron should be within a proper distance before making a new synapse, and then a new synapse can be formed by Hebbian-type activity-dependent synaptogenesis (S&#x000FC;dhof, <xref ref-type="bibr" rid="B222">2018</xref>). However, the local mechanism of edge addition is insufficient for the optimization of an entire network and can result in excessive connectivity redundancy between activity-correlated neurons unless there is a regulatory mechanism. To participate in the optimization of a neural network, neurons must utilize information other than local synaptic activity. Negative regulatory mechanisms, such as synaptic elimination, are required to properly adjust the number of edges; such elimination is widely utilized as a sparsification algorithm to reduce model size (Luo, <xref ref-type="bibr" rid="B147">2020</xref>; Hoefler et al., <xref ref-type="bibr" rid="B99">2022</xref>). Adaptive synaptogenesis (Miller, <xref ref-type="bibr" rid="B160">1998</xref>; Thomas et al., <xref ref-type="bibr" rid="B226">2015</xref>), reinforcement signals from reward and punishment (Dos Santos et al., <xref ref-type="bibr" rid="B56">2017</xref>), or other types of neuromodulation (Garcia et al., <xref ref-type="bibr" rid="B73">2014</xref>; Speranza et al., <xref ref-type="bibr" rid="B218">2017</xref>) may achieve such orchestration between positive and negative regulation. 
The ANN counterparts of edge-number regulation by synapse formation and elimination are, respectively, the additive update of a synaptic weight from a zero-weight connection and the setting of a synaptic weight to zero, implying that structural changes in synapses can be interpreted as on-off switch-type functional changes. Interestingly, beyond the dichotomy of synapse or no synapse, a contact point between two neurons can stand ready to be switched on by a Hebbian-type learning rule in the form of a silent synapse (Kerchner and Nicoll, <xref ref-type="bibr" rid="B118">2008</xref>; Hanse et al., <xref ref-type="bibr" rid="B88">2013</xref>), which is also found in filopodia lacking AMPA receptors and containing NMDA receptors in the adult neocortex (Vardalaki et al., <xref ref-type="bibr" rid="B247">2022</xref>). Considering that the brain should adapt to an increase in the amount of information to be stored, such a substrate for readiness is a valuable mechanism (Fusi et al., <xref ref-type="bibr" rid="B71">2005</xref>; Vardalaki et al., <xref ref-type="bibr" rid="B247">2022</xref>). Additionally, because stable consolidation of acquired information into already stored information is accompanied by the rearrangement of synaptic weights and connectivity, on- and off-type regulation should be appropriately utilized (Jedlicka et al., <xref ref-type="bibr" rid="B109">2022</xref>). For ANN simulation on current computers, a zero-weight synapse costs roughly the same as any other weight value in the allowed range; in a BNN, however, physical wiring and its maintenance require additional resources. Hence, when constructing an ANN on a platform where cost can be reduced by eliminating connections, a NAS strategy based on the various types of structural changes in the BNN should be considered.</p>
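<p>The on-off switch reading of synaptic elimination corresponds to magnitude pruning in ANNs. A minimal sketch (ours, with an arbitrary threshold) sets small weights to exactly zero while keeping the zeroed entries in place, so that, like silent synapses, later learning could switch them back on:</p>

```python
def prune(weights, threshold):
    """Magnitude pruning: weights below threshold become exactly zero.
    Zeroed entries stay in the list, analogous to silent synapses that
    remain available for later reactivation by learning."""
    return [0.0 if abs(w) < threshold else w for w in weights]

w = [0.02, -0.8, 0.001, 1.3, -0.04]
sparse = prune(w, threshold=0.05)
# Fraction of edges still "on" after elimination.
density = sum(1 for x in sparse if x != 0) / len(sparse)
```

On a conventional computer the zeros cost as much memory as any other value, which is precisely the point made above: the savings materialize only on platforms where absent connections are genuinely cheaper.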
<p>Similar to our categorization, a recent review (Maile et al., <xref ref-type="bibr" rid="B149">2022</xref>) also regarded these structural changes after the developmental period as &#x0201C;structural learning,&#x0201D; which implies that NAS across multiple temporal scales needs to continue throughout life. In summary, structural changes in a neural network achieved by controlling the number of neurons or synapses are the key concepts that optimize a neural network architecture during its lifespan, and their implementation in an ANN can contribute to the construction of a better-performing neural network with reduced resource requirements on specific platforms.</p>
</sec>
<sec>
<title>2.2.2. Local attributes: functional changes</title>
<p>Although functional changes in a neural network are less explicit than physically expressed structural changes, they occur much more often in the brain and are essential for the fineness of adaptation. Various types of plasticity occurring at synapses or neurons are key components of the functional changes in a neural network.</p>
<p>Considering that a neuron transmits information as a spiking electrical signal, the so-called action potential, any change that alters the probability of generating action potentials under the same input indicates a change in neuronal excitability, which is called intrinsic plasticity. Thus, the intrinsic plasticity of a neuron can be interpreted as a transition from one state of neuronal excitability to a different state (Titley et al., <xref ref-type="bibr" rid="B229">2017</xref>; Debanne et al., <xref ref-type="bibr" rid="B50">2019</xref>). In a BNN, intrinsic plasticity is well suited to implementing memory mechanisms. Input-dependent stable changes in neuronal excitability can be directly paired with the hypothesis of the cellular-level memory engram (Titley et al., <xref ref-type="bibr" rid="B229">2017</xref>; Alejandre-Garc&#x000ED;a et al., <xref ref-type="bibr" rid="B8">2022</xref>). Additionally, because the parameters of synaptic plasticity are significantly affected by the average activities of both pre- and post-synaptic neurons, as indicated by the Bienenstock-Cooper-Munro (BCM) model (Bienenstock et al., <xref ref-type="bibr" rid="B24">1982</xref>; Dayan and Abbott, <xref ref-type="bibr" rid="B49">2001</xref>), intrinsic plasticity can also be interpreted as a means of metaplasticity (Sehgal et al., <xref ref-type="bibr" rid="B206">2013</xref>). Thus, implementing intrinsic plasticity in an ANN can improve the representability of the given information. In an ANN, neuronal excitability is expressed as the bias applied before the activation function determines a neuron&#x00027;s output. In many ANNs, the bias is treated as a constant shared within a layer or is even set to zero. Significantly better performance can be expected by introducing intrinsic plasticity into ANN or spiking neurons (Zhang and Li, <xref ref-type="bibr" rid="B260">2019</xref>; Zhang et al., <xref ref-type="bibr" rid="B259">2019</xref>). 
A similarity between the simplified intrinsic plasticity introduced in ANN and batch normalization has also been reported (Shaw et al., <xref ref-type="bibr" rid="B208">2020</xref>).</p>
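<p>A minimal sketch of this idea (a toy homeostatic rule of our own construction, not the algorithms of the cited studies): each unit carries a learnable bias standing in for neuronal excitability, and an intrinsic-plasticity update nudges the bias until the unit&#x00027;s average activity matches a target rate.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative parameters; the target rate plays the role of a homeostatic
# set point for each unit's excitability.
n_units, target_rate, eta = 5, 0.2, 0.05
bias = np.zeros(n_units)

for _ in range(2000):
    x = rng.normal(size=n_units)          # momentary input drive
    y = sigmoid(x + bias)                 # bias stands in for excitability
    bias += eta * (target_rate - y)       # push mean activity toward target

# After adaptation, average activity under the same input statistics
# should sit near the target rate.
mean_activity = sigmoid(rng.normal(size=(1000, n_units)) + bias).mean(axis=0)
```

<p>Because the target rate here is below one half, every learned bias ends up negative, i.e., the units become less excitable, mirroring a downward homeostatic adjustment.</p>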
<p>The concept of synaptic plasticity involves changes in the efficacy of synaptic transmission across multiple temporal scales. Because a neuron propagates information through spikes, the main mechanism of synaptic plasticity is expected to depend on spike timing rather than amplitude, considering the uniform voltage level of action potential firing. Although the extent to which a synaptic weight should be adjusted for a given timing difference between presynaptic and postsynaptic spikes varies with neuronal type, synaptic properties, or the presence of neuromodulation, synaptic plasticity driven by such timing differences can be categorized as spike-timing-dependent plasticity (STDP; Bi and Poo, <xref ref-type="bibr" rid="B188">1998</xref>). Under an ultra-sparse firing regime, STDP may be the sole mechanism implementing synaptic plasticity; it features Hebbian plasticity, in which neurons that fire together wire together (Song et al., <xref ref-type="bibr" rid="B215">2000</xref>; Caporale and Dan, <xref ref-type="bibr" rid="B35">2008</xref>). However, because information encoding is not always at the level of a single action potential, a description of synaptic plasticity at the level of each spike cannot explain the computational implications of the consequences of such plasticity. Thus, it is necessary to build a description of synaptic plasticity that depends on the momentary information transmitted through the synapse. Rate-dependent encoding occurs at longer timescales or under a denser spiking regime (Gerstner et al., <xref ref-type="bibr" rid="B74">1997</xref>). Classical computational neuroscience has already depicted such plasticity by formalizing and improving Hebbian plasticity with additional terms (Dayan and Abbott, <xref ref-type="bibr" rid="B49">2001</xref>). 
In fact, Hebbian plasticity and its variants can describe synaptic plasticity in the BNN well, and by introducing the concept of a sliding threshold, metaplasticity can be incorporated into the formalism (Abraham, <xref ref-type="bibr" rid="B2">2008</xref>; Laborieux et al., <xref ref-type="bibr" rid="B128">2021</xref>). However, because these phenomenological models focus on simple but accurate descriptions of the various synaptic plasticities in the BNN, they require ad hoc terms or modifications when more diverse dynamics in synaptic plasticity and metaplasticity are observed. In contrast, mechanistic models can be more useful for generalizing various types of synaptic plasticity by introducing the dynamics of biological synaptic components. For example, considering that short-term plasticity can be utilized to stably represent information for a short period in a buffer-like neural network, analogous cognitive mechanisms such as working memory can be modeled (Masse et al., <xref ref-type="bibr" rid="B152">2019</xref>), which may open up promising future applications to artificial memory systems by introducing more detailed synaptic components. Indeed, a mechanistic model of short-term plasticity, such as the Tsodyks-Markram model (Tsodyks and Markram, <xref ref-type="bibr" rid="B240">1997</xref>), has been utilized to explain working memory modulation (Rodriguez et al., <xref ref-type="bibr" rid="B195">2022</xref>) and may help to build better neuromorphic devices (Zhang et al., <xref ref-type="bibr" rid="B261">2017</xref>; Li et al., <xref ref-type="bibr" rid="B133">2023</xref>) or a better artificial working memory system (Averbeck, <xref ref-type="bibr" rid="B12">2022</xref>; Kozachkov et al., <xref ref-type="bibr" rid="B123">2022</xref>; Rodriguez et al., <xref ref-type="bibr" rid="B195">2022</xref>). 
The mechanistic description of long-term synaptic plasticity often comprises several processes responsible for multiple-timescale mechanisms, as in the cascade model of binary switches constructed from positive feedback loops with multiple time constants (Kawato et al., <xref ref-type="bibr" rid="B116">2011</xref>; Helfer and Shultz, <xref ref-type="bibr" rid="B92">2018</xref>; Smolen et al., <xref ref-type="bibr" rid="B212">2020</xref>). Although the readout of biological synaptic plasticity corresponds simply to weight adjustment in an ANN, such mechanistic models may greatly help in constructing new types of metaplasticity algorithms in ANN. Considering the recent spotlight on metaplasticity as one solution to catastrophic forgetting (Jedlicka et al., <xref ref-type="bibr" rid="B109">2022</xref>), it has become more important to understand how synapses in the BNN form their metastable states and how synaptic plasticity can exploit transitions between these states to enhance the representation of information (Fusi et al., <xref ref-type="bibr" rid="B71">2005</xref>; Benna and Fusi, <xref ref-type="bibr" rid="B19">2016</xref>; Abraham et al., <xref ref-type="bibr" rid="B3">2019</xref>).</p>
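<p>The pair-based STDP window discussed above can be sketched as follows (parameter values are illustrative, not fits to the cited experiments): the weight change for a pre-post spike pair depends on the timing difference through two exponential windows, potentiating when the presynaptic spike precedes the postsynaptic one and depressing otherwise.</p>

```python
import numpy as np

# Illustrative STDP parameters (amplitudes and time constants in ms).
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0

def stdp(dt):
    """Weight change for one pre-post spike pair, dt = t_post - t_pre (ms)."""
    if dt >= 0:                                 # pre before post -> potentiation
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)    # post before pre -> depression

# Apply the rule to a few spike pairs, clipping the weight to [0, 1].
w = 0.5
for t_pre, t_post in [(10, 15), (40, 38), (70, 71)]:
    w = np.clip(w + stdp(t_post - t_pre), 0.0, 1.0)
```

<p>The asymmetry between the potentiation and depression amplitudes is one common choice for keeping runaway potentiation in check; the precise shape of the window varies with neuronal type and synapse, as noted above.</p>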
</sec>
<sec>
<title>2.2.3. Global optimization</title>
<p>An orchestrated strategy is required for these local plasticity processes to result in the learning of a certain function. Learning is the adaptation of a neural network to approximate a function that maps inputs from the environment to target outputs, which is a global optimization process (Zhang H. et al., <xref ref-type="bibr" rid="B257">2020</xref>). To define this optimization, the optimization target and the algorithm for efficiently reaching it by combining local processes must be elucidated. Although how the brain optimizes neural networks and what kind of target it tries to minimize or maximize are generally unknown, several phenomena observed in the BNN can serve as hints or starting points for building biologically plausible optimization algorithms. For example, homeostatic control of neuronal activity has been observed in various neural networks across multiple spatiotemporal scales, from locally occurring Hebbian plasticity to global synaptic scaling or homeostatic intrinsic plasticity (Turrigiano et al., <xref ref-type="bibr" rid="B242">1998</xref>; Turrigiano and Nelson, <xref ref-type="bibr" rid="B243">2004</xref>; Naud&#x000E9; et al., <xref ref-type="bibr" rid="B170">2013</xref>; Toyoizumi et al., <xref ref-type="bibr" rid="B232">2014</xref>). The impact of locally occurring homeostatic plasticity (Naud&#x000E9; et al., <xref ref-type="bibr" rid="B170">2013</xref>) and the way global homeostatic plasticity regulates neural network dynamics (Zierenberg et al., <xref ref-type="bibr" rid="B264">2018</xref>) have been simulated in biological recurrent networks. However, homeostatic plasticity has not been tested as a means of improving ANN performance, and no attempt has been made to find a similar concept in current ANN optimization algorithms. 
Recent experimental confirmation also supports the idea that a neural network utilizes a plasticity rule that maximizes information (Toyoizumi et al., <xref ref-type="bibr" rid="B233">2005</xref>) or minimizes free energy (Isomura and Friston, <xref ref-type="bibr" rid="B104">2018</xref>; Gottwald and Braun, <xref ref-type="bibr" rid="B78">2020</xref>; Isomura et al., <xref ref-type="bibr" rid="B105">2022</xref>). Additionally, considering that wiring between neurons requires metabolic resources in the BNN, as mentioned in the NAS and structural learning sections, we can also define a cost function that includes the constraints introduced by limited physical resources (Chen et al., <xref ref-type="bibr" rid="B40">2006</xref>; Tomasi et al., <xref ref-type="bibr" rid="B230">2013</xref>; Rubinov et al., <xref ref-type="bibr" rid="B199">2015</xref>; Goulas et al., <xref ref-type="bibr" rid="B79">2019</xref>). Although the target functions a neural network optimizes are explicit in these examples, how the optimization results in learning a cognitive task remains elusive. Nevertheless, they have inspired ANN methods that learn relationships among data to approximate the probability distribution of inputs or latent variables, an example of the unsupervised learning paradigm (Goodfellow et al., <xref ref-type="bibr" rid="B76">2016</xref>; Pitkow and Angelaki, <xref ref-type="bibr" rid="B186">2017</xref>). On the other hand, supervised learning can be defined more easily by quantifying the difference between the function to learn and the current state of a neural network, generally called the loss function in an ANN (Goodfellow et al., <xref ref-type="bibr" rid="B76">2016</xref>). The strategy for minimizing the loss function and assigning an adjustment to each weight is characterized by the backpropagation algorithm (Rumelhart et al., <xref ref-type="bibr" rid="B200">1986</xref>). 
While no explicit evidence has been found that the brain uses error backpropagation for learning, a hypothetical learning algorithm class, &#x0201C;neural gradient representation by activity differences (NGRAD),&#x0201D; has been suggested, which states that the information of activity difference is reflected as synaptic change, driving the learning or behavioral change of the network (Lillicrap et al., <xref ref-type="bibr" rid="B136">2020</xref>). Considering that the backpropagation algorithm in ANN and error-dependent learning are not directly comparable because of the difference in encoding (scalar value vs. spikes) and the questionable existence of mandatory symmetric backward connections in BNN, organized feedback of error or target information is necessary for the implementation of NGRAD in a biologically plausible neural network (Guerguiev et al., <xref ref-type="bibr" rid="B82">2017</xref>; Sacramento et al., <xref ref-type="bibr" rid="B201">2018</xref>; Whittington and Bogacz, <xref ref-type="bibr" rid="B252">2019</xref>; Lillicrap et al., <xref ref-type="bibr" rid="B136">2020</xref>; Fern&#x000E1;ndez et al., <xref ref-type="bibr" rid="B64">2021</xref>). In a large neural network with physical constraints, relying only on the global feedback information provided through the environment is inefficient because of the long delay (Nijhawan, <xref ref-type="bibr" rid="B172">2008</xref>; Foerde and Shohamy, <xref ref-type="bibr" rid="B66">2011</xref>; Cameron et al., <xref ref-type="bibr" rid="B33">2014</xref>). For example, when an animal tries to visually follow a fast-moving prey, moving the eyeballs at the proper speed and forming a proper percept without mental preparation by predicting sensory consequences is difficult (Greve, <xref ref-type="bibr" rid="B81">2015</xref>; Palmer et al., <xref ref-type="bibr" rid="B177">2015</xref>; Sederberg et al., <xref ref-type="bibr" rid="B205">2018</xref>). 
Therefore, a neural system is known to utilize predictive coding, and the prediction error may be an appropriate teaching signal for optimizing each component in a hierarchical neural network (Rao and Ballard, <xref ref-type="bibr" rid="B190">1999</xref>; Millidge et al., <xref ref-type="bibr" rid="B161">2022</xref>; Pezzulo et al., <xref ref-type="bibr" rid="B184">2022</xref>). A recent study theoretically suggested and experimentally validated that even a single neuron can predict future activity and use a predictive learning rule to minimize surprises; this is derived from a contrastive Hebbian learning rule (Luczak et al., <xref ref-type="bibr" rid="B144">2022</xref>). Thus, this study has important implications for the bottom-up principle of local learning rules to form a learning algorithm for intelligent agents. The neuromodulatory system can participate in slower feedback or more implicit teaching signals (Johansen et al., <xref ref-type="bibr" rid="B110">2014</xref>; Liu Y. H. et al., <xref ref-type="bibr" rid="B141">2021</xref>; Mei et al., <xref ref-type="bibr" rid="B156">2022</xref>). In fact, the three-factor rule constructed by simply adding a factor, such as neuromodulation, to pairwise synaptic plasticity can include diverse information about reward or learning hyperparameters (Gil et al., <xref ref-type="bibr" rid="B75">1997</xref>; Nadim and Bucher, <xref ref-type="bibr" rid="B168">2014</xref>; Ku&#x0015B;mierz et al., <xref ref-type="bibr" rid="B145">2017</xref>; Brzosko et al., <xref ref-type="bibr" rid="B31">2019</xref>). 
Given the experimentally examined role of neurotransmitters in the neuromodulatory system and the local physiological dynamics affected by such neurotransmitters, the brain&#x00027;s mechanism for dealing with vast amounts of information from the natural environment can be explained by a combination of diverse modulatory inputs and the distinctive distribution of receptor subtypes (Noudoost and Moore, <xref ref-type="bibr" rid="B173">2011</xref>; Rogers, <xref ref-type="bibr" rid="B196">2011</xref>; Fischer and Ullsperger, <xref ref-type="bibr" rid="B65">2017</xref>; Doya et al., <xref ref-type="bibr" rid="B57">2021</xref>; Cools and Arnsten, <xref ref-type="bibr" rid="B46">2022</xref>). Investigating global optimization algorithms and understanding them across multiple scales is important not only for neuroscience, which pursues the mechanisms of neural processes in the brain, but also for constructing better biologically plausible neural networks capable of &#x0201C;general intelligence.&#x0201D;</p>
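<p>The three-factor rule mentioned above can be sketched as follows (a toy implementation with illustrative parameters, not a model from the cited studies): a Hebbian coincidence of pre- and postsynaptic spikes is stored in a decaying eligibility trace, and a weight change occurs only when a neuromodulator-like third factor, such as reward, arrives.</p>

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes and constants; spikes are Bernoulli events per step.
n_pre, n_post = 6, 3
W = np.zeros((n_post, n_pre))
elig = np.zeros_like(W)
tau_e, eta = 5.0, 0.1

for step in range(50):
    pre = (rng.random(n_pre) < 0.3).astype(float)    # presynaptic spikes
    post = (rng.random(n_post) < 0.3).astype(float)  # postsynaptic spikes
    elig += np.outer(post, pre)                      # Hebbian coincidence
    elig *= np.exp(-1.0 / tau_e)                     # trace decays over time
    reward = 1.0 if step % 10 == 9 else 0.0          # sparse third factor
    W += eta * reward * elig                         # learn only when rewarded
```

<p>Because the eligibility trace bridges the delay between a coincidence and the neuromodulatory signal, this form of rule can credit synapses for outcomes that arrive only later, which is one reading of the slower feedback role attributed to neuromodulation above.</p>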
</sec>
</sec>
</sec>
<sec id="s3">
<title>3. Outcome of optimization: single computational unit properties</title>
<p>As Cajal&#x00027;s (<xref ref-type="bibr" rid="B32">1888</xref>) confirmation of the neuron doctrine implied, McCulloch and Pitts&#x00027;s (<xref ref-type="bibr" rid="B154">1943</xref>) theory of artificial neurons shaped the idea that a neuron is the single unit of computation, and a synapse is the single communication channel between two neurons. Although neurons and synapses have been intensively studied, several fundamental questions remain to be answered, including those regarding the computational roles of neuronal and synaptic properties. In ANN, a representative precedent, such as introducing a rectified linear unit (ReLU; Fukushima, <xref ref-type="bibr" rid="B70">1975</xref>; Nair and Hinton, <xref ref-type="bibr" rid="B169">2010</xref>), helped dramatically advance the field. Because single computational units in BNN are largely unexplored owing to their diversity and nonlinear properties, carefully searching for computationally influential properties may enable us to build better neural networks.</p>
<sec>
<title>3.1. Representation of the activity and coding scheme of a single neuron</title>
<p>The governing dynamics of the electrical properties of a neuron have been well-described and integrated into Hodgkin and Huxley&#x00027;s (<xref ref-type="bibr" rid="B98">1952</xref>) monumental work. This set of nonlinear differential equations can regenerate the dynamic excitability and action potential firing. A simpler description of the dynamics using the leaky integrate-and-fire model (Hill, <xref ref-type="bibr" rid="B96">1936</xref>) can be utilized to reduce the complexity and extend the applicability to various types of firing patterns. In addition, direct reverse engineering of the spike parameters was successfully implemented (Izhikevich, <xref ref-type="bibr" rid="B108">2003</xref>). In these neuronal models of the BNN, two distinctive aspects are noticeable when compared with the ANN. One is that a set of continuous-time differential equations describes neuronal activities, and the other is that there is no explicit activation function except in the integrate-and-fire model and its variants. Although the information encoded in the spiking dynamics along continuous time in the BNN is not yet fully understood, several strategies that the BNN may utilize have been investigated. The well-known dichotomy of such strategies is the rate vs. temporal code (Gerstner et al., <xref ref-type="bibr" rid="B74">1997</xref>; Guo et al., <xref ref-type="bibr" rid="B84">2021</xref>). The rate code encodes target information using the firing rate, corresponding to a neuron&#x00027;s positive scalar value encoding in ANN. Temporal coding refers to an encoding strategy that utilizes the timing of spikes, and the specific coding scheme can vary depending on the timescale a neuron uses to represent information. 
For example, a period of silence is a candidate for inter-spike interval coding or time-to-first-spike coding (Dayan and Abbott, <xref ref-type="bibr" rid="B49">2001</xref>; Park et al., <xref ref-type="bibr" rid="B182">2020</xref>; Guo et al., <xref ref-type="bibr" rid="B84">2021</xref>), or the absolute timing of multiple sparse spikes can be used to convey information under a proper decoding scheme (Com&#x0015F;a et al., <xref ref-type="bibr" rid="B44">2021</xref>). The other aspect of the coding strategy, which extends the capacity for encoding, is to deploy a population of neurons to represent the information (Averbeck et al., <xref ref-type="bibr" rid="B13">2006</xref>; Panzeri et al., <xref ref-type="bibr" rid="B180">2015</xref>; Pan et al., <xref ref-type="bibr" rid="B179">2019</xref>). Because the spiking patterns in a population of neurons can be statistically interpreted by considering each spike in each neuron as a sample of a specific random variable, an abundant representation form can be implemented. Different types of information can be conveyed through multiplexing by alternating coding schemes or mixing up heterogeneous neurons in a population (Harvey et al., <xref ref-type="bibr" rid="B90">2013</xref>; Akam and Kullmann, <xref ref-type="bibr" rid="B5">2014</xref>; Lankarany et al., <xref ref-type="bibr" rid="B129">2019</xref>; Jun et al., <xref ref-type="bibr" rid="B113">2022</xref>). For example, a sensor that waits for sparsely occurring inputs of various intensities can encode the input by timely bursting spikes upon an input arrival (Guo et al., <xref ref-type="bibr" rid="B84">2021</xref>). Such a strategy is advantageous for richer dynamics and encoding capacity as well as lower power consumption by considering silence (off period) as another piece of information (Cao et al., <xref ref-type="bibr" rid="B34">2015</xref>; Pfeiffer and Pfeil, <xref ref-type="bibr" rid="B185">2018</xref>). 
Therefore, spiking neural networks (SNN) have become an essential type of ANN and are widely utilized in neuromorphic engineering (Kornijcuk et al., <xref ref-type="bibr" rid="B121">2019</xref>; Kabilan and Muthukumaran, <xref ref-type="bibr" rid="B114">2021</xref>; Parker et al., <xref ref-type="bibr" rid="B183">2022</xref>). Because various models can describe a neuron&#x00027;s spike activity and each spike can represent distinctive information depending on the coding scheme, we can expect a much larger diversity of neuronal activation processes compared to ANN. Exploring various coding schemes with diverse temporal and populational spike patterns (Com&#x0015F;a et al., <xref ref-type="bibr" rid="B44">2021</xref>; Guo et al., <xref ref-type="bibr" rid="B84">2021</xref>) and heterogeneous distributions of diverse types of neurons (St&#x000F6;ckl et al., <xref ref-type="bibr" rid="B221">2022</xref>) is necessary to better represent complex information and build more biologically plausible neural networks. Diverse types of neurons and their computational impacts have been tested and have demonstrated better performance in typical ANN by varying the type of activation function (Lee et al., <xref ref-type="bibr" rid="B132">2018</xref>). Although groundbreaking improvements are rarely achieved by changing the activation functions in the deep learning field (Goodfellow et al., <xref ref-type="bibr" rid="B76">2016</xref>), combinations of representations of activities in a neuron (spike), consequential spike-based synaptic plasticity (spike-timing-dependent plasticity and spike-driven synaptic plasticity), various coding schemes (temporal, rate, population, and phase), and heterogeneous neuronal types have not yet been fully examined.</p>
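<p>A leaky integrate-and-fire simulation can illustrate the time-to-first-spike coding scheme discussed above (all parameters are illustrative, not fits to a specific neuron type): a stronger constant input drives the membrane to threshold sooner, so intensity is encoded in first-spike latency, while a subthreshold input is encoded by silence.</p>

```python
import numpy as np

def first_spike_time(current, tau=20.0, v_rest=0.0, v_thresh=1.0,
                     dt=0.1, t_max=200.0):
    """Euler-integrated leaky integrate-and-fire membrane; returns the first
    threshold-crossing time in ms, or None if the input never reaches it."""
    v, t = v_rest, 0.0
    while t < t_max:
        v += dt / tau * (-(v - v_rest) + current)   # leaky integration
        t += dt
        if v >= v_thresh:
            return t
    return None

# Stronger drive -> shorter latency; subthreshold drive -> silence.
weak, strong = first_spike_time(1.2), first_spike_time(3.0)
silent = first_spike_time(0.5)   # steady state (0.5) stays below threshold
```

<p>The silent case makes the efficiency argument concrete: the off period itself carries information (the input was subthreshold) at zero spiking cost.</p>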
</sec>
<sec>
<title>3.2. Dale&#x00027;s principle and input balance</title>
<p>Although the strongest interpretation of Dale&#x00027;s principle, which posits one neurotransmitter type per neuron, has become outdated and been proven incorrect by accumulated experimental results (Osborne, <xref ref-type="bibr" rid="B175">1979</xref>), it still offers an important framework for analyzing neural networks: the distinction between excitatory and inhibitory neurons (Eccles et al., <xref ref-type="bibr" rid="B61">1976</xref>; Cornford et al., <xref ref-type="bibr" rid="B47">2021</xref>). If we compare synaptic efficacy in the BNN with that in the ANN, a direct correspondence is found in the weight of the connection from one neuron to another. However, a weight in an ANN can take either positive or negative values, and unlike a BNN neuron, an input (presynaptic) neuron can have outward connections with both positive and negative weights. Introducing the implications of Dale&#x00027;s principle to an ANN involves fixing a given neuron&#x00027;s identity as either excitatory or inhibitory, with the weights of its outward connections sharing the same sign. This is quite a strong constraint, but careful modification did not harm network performance (Cornford et al., <xref ref-type="bibr" rid="B47">2021</xref>) and provided more diverse computation (Tripp and Eliasmith, <xref ref-type="bibr" rid="B238">2016</xref>), although there was no dramatic improvement in performance. Practical computational implications of the segregation of excitation and inhibition have not yet been established; however, through mathematical treatment of such neural networks, optimal dynamics of the neural network (Catsigeras, <xref ref-type="bibr" rid="B36">2013</xref>) and efficient learning (Haber and Schneidman, <xref ref-type="bibr" rid="B85">2022</xref>) have been carefully suggested as benefits. 
In BNN, it has long been suggested that a stable but sensitive representation of information can be achieved by balancing excitatory and inhibitory inputs, the so-called E-I balance (Den&#x000E8;ve and Machens, <xref ref-type="bibr" rid="B53">2016</xref>; Hennequin et al., <xref ref-type="bibr" rid="B93">2017</xref>). The implications of the E-I balance can be roughly explained by comparing it with the two opposite extremes. In an excitatory-dominant regime, excessive firing interferes with the expressibility of information by a neuron, whereas in an inhibitory-dominant regime, the firing frequency drops, and the neuron cannot express information that lies within a certain timescale. In contrast, tightly balanced inputs can modulate a neuron to fire during tiny temporal discrepancies between excitation and inhibition. Consequently, with an optimal number of firings, a neuron can efficiently represent inputs at multiple timescales. The E-I balance has been restated and utilized to explain the performance and efficiency of biological neural circuit models (Den&#x000E8;ve et al., <xref ref-type="bibr" rid="B52">2017</xref>; Zhou and Yu, <xref ref-type="bibr" rid="B262">2018</xref>; Bhatia et al., <xref ref-type="bibr" rid="B22">2019</xref>; Sadeh and Clopath, <xref ref-type="bibr" rid="B202">2021</xref>) as well as the malfunctions of an imbalanced regime (Sohal and Rubenstein, <xref ref-type="bibr" rid="B213">2019</xref>). In ANN applications (Song et al., <xref ref-type="bibr" rid="B214">2016</xref>; Ingrosso and Abbott, <xref ref-type="bibr" rid="B102">2019</xref>; Tian et al., <xref ref-type="bibr" rid="B227">2020</xref>), balanced inputs are utilized to optimize neural networks for better performance, exploiting the advantages shown in BNN models. 
Because the concept of E-I balance covers a wide range of degrees of balance (Hennequin et al., <xref ref-type="bibr" rid="B93">2017</xref>), defining alternative types of balanced networks (Khajeh et al., <xref ref-type="bibr" rid="B119">2022</xref>) is also possible. Considering that balancing is not just an artificial constraint but also an outcome of optimization (Trapp et al., <xref ref-type="bibr" rid="B235">2018</xref>), applying excitatory-inhibitory segregation and its balance seems to be another prominent way to build better biologically plausible neural networks.</p>
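<p>A minimal sketch of imposing Dale&#x00027;s principle on an ANN weight matrix (the sign-times-magnitude parameterization shown here is one common choice; the cited models differ in detail): each presynaptic unit is assigned a fixed excitatory or inhibitory sign, and all of its outward weights inherit that sign.</p>

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative sizes; roughly 80% of units are made excitatory, echoing
# the excitatory majority typically reported in cortex.
n_pre, n_post = 8, 4
sign = np.where(rng.random(n_pre) < 0.8, 1.0, -1.0)   # +1 = E, -1 = I
magnitude = np.abs(rng.normal(size=(n_post, n_pre)))  # free nonnegative part

def dale_weights(magnitude, sign):
    """Column j carries the sign of presynaptic neuron j, so every outward
    connection of a unit shares the same sign (Dale-constrained weights)."""
    return magnitude * sign[None, :]

W = dale_weights(magnitude, sign)
```

<p>Training would then update only the nonnegative magnitudes (e.g., by projecting them back to zero whenever a gradient step makes them negative), leaving each neuron&#x00027;s excitatory or inhibitory identity fixed.</p>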
</sec>
<sec>
<title>3.3. Morphological effect: dendritic computation</title>
<p>The types of neurons in a BNN are extremely diverse; one criterion is their heterogeneous morphology (Kepecs and Fishell, <xref ref-type="bibr" rid="B117">2014</xref>; Cembrowski and Spruston, <xref ref-type="bibr" rid="B38">2019</xref>). Unlike in a point neuron model, spatially separated input, processor, and output units are implemented as dendrites, somas, and axons, respectively, in a BNN. Thus, the morphological effect refers to the emerging directionality of information flow and the information contents affected by each unit. Notably, the input part (dendrite) is spatially distributed over a larger space than the output pathway (axon) that is often found as a minimally branched fiber consisting of somewhat homogeneous segments with small cross-sectional areas (Chklovskii, <xref ref-type="bibr" rid="B42">2004</xref>). Hence, axonal fibers are expected to be primarily employed to faithfully convey the generated electrical signal (action potential) to distal postsynaptic neurons (Scott, <xref ref-type="bibr" rid="B204">1975</xref>). In contrast, dendrites have many branches with thicker shafts capable of accommodating complex cellular organelles, except the nucleus. The complex branching pattern and spacious cytosol indicate that intracellular processes also occur in dendrites and may be spatially heterogeneous (Shemer et al., <xref ref-type="bibr" rid="B209">2008</xref>; Dittmer et al., <xref ref-type="bibr" rid="B55">2019</xref>). Because synapses are distributed across such heterogeneous substrates, information processed through synapses can be highly heterogeneous even when exposed to uniform presynaptic activity. 
Specifically, given that the change in shaft thickness varies with the branching or distance from the soma (Harris and Spacek, <xref ref-type="bibr" rid="B89">2016</xref>), differentiating the electrical processing of each input from another is expected to depend on the location of the input (Guerguiev et al., <xref ref-type="bibr" rid="B82">2017</xref>; Sezener et al., <xref ref-type="bibr" rid="B207">2022</xref>; Pagkalos et al., <xref ref-type="bibr" rid="B176">2023</xref>). A simple but remarkable aspect of such a structure and implication is the sequential processing of inputs from the distal location toward the soma, as the directionality of the information flow in a passive cable indicates. As a single action potential from a presynaptic neuron can be interpreted as a Boolean activation input, a recent study attempted to simplify the dendritic processing of many inputs as a layered neural network by adding active dendritic computation to the directionality (Beniaguev et al., <xref ref-type="bibr" rid="B18">2021</xref>). This study highlighted the role of NMDA receptors capable of tuning the plasticity in each excitatory synapse and generating dendritic calcium spikes, which can be interpreted as the integration and firing of local inputs converging to a dendritic segment. Thus, each dendritic segment that generates spikes can be assumed to be a computing layer of converging Boolean inputs through a dendritic arbor, simplifying the complex information processing of a neuron and corresponding to the ANN. In neuroscience, there have been many observations of the active computation of dendrites via spike generation (Cook and Johnston, <xref ref-type="bibr" rid="B45">1997</xref>; Poirazi and Mel, <xref ref-type="bibr" rid="B187">2001</xref>; London and H&#x000E4;usser, <xref ref-type="bibr" rid="B143">2005</xref>; Johnston and Narayanan, <xref ref-type="bibr" rid="B111">2008</xref>). 
These examples also imply that various types of inputs are spatially and functionally segregated on distinctive branches or dendritic segments (Wybo et al., <xref ref-type="bibr" rid="B254">2019</xref>; Francioni and Harnett, <xref ref-type="bibr" rid="B68">2022</xref>); therefore, a neuron can work as a functional unit capable of more diverse performance than a point neuron. Because of the additional nonlinearity compared to a model point neuron, better expressibility can be expected (Wu et al., <xref ref-type="bibr" rid="B253">2018</xref>), and electrical compartmentalization and active dendritic properties can be applied to ANNs (Chavlis and Poirazi, <xref ref-type="bibr" rid="B39">2021</xref>; Iyer et al., <xref ref-type="bibr" rid="B107">2022</xref>; Sezener et al., <xref ref-type="bibr" rid="B207">2022</xref>). The segregated electrical properties also indicate that homeostatic control can occur separately in distinct dendritic branches (Tripodi et al., <xref ref-type="bibr" rid="B237">2008</xref>; Bird et al., <xref ref-type="bibr" rid="B25">2021</xref>; Shen et al., <xref ref-type="bibr" rid="B210">2021</xref>). Such an adjustment of weights in each dendritic branch toward a certain homeostatic level is similar to the normalization step in ANN (Shen et al., <xref ref-type="bibr" rid="B210">2021</xref>), which also improves learning in sparsely connected neural networks, such as BNN (Bird et al., <xref ref-type="bibr" rid="B25">2021</xref>). The typical structure of a cortical pyramidal neuron consists of two distinctive directions of dendritic outgrowth from the soma: basal and apical dendrites (DeFelipe and Farias, <xref ref-type="bibr" rid="B51">1992</xref>). These differ from each other not only in the direction of growth but also in the branching pattern. 
Additionally, owing to the vertical alignment of the dendrites of a cortical pyramidal neuron across the cortical laminar structure, basal and apical dendrites are exposed to inputs at different layers (Park et al., <xref ref-type="bibr" rid="B181">2019</xref>; Pagkalos et al., <xref ref-type="bibr" rid="B176">2023</xref>). Different branching patterns indicate distinctive information processing in the dendrites, as shown in the aforementioned study. Different input contents combined with different processing methods imply that diverse computations can occur at the microcircuit level, comprising several neurons. One remarkable application of this property is the assumption that a neuron processes both feedforward and feedback inputs simultaneously. By postulating that error-conveying feedback and feedforward inputs containing external information are processed separately in distinct dendritic branches, the problem of credit assignment can also be explained (Guerguiev et al., <xref ref-type="bibr" rid="B82">2017</xref>; Sacramento et al., <xref ref-type="bibr" rid="B201">2018</xref>), as discussed in Section 2.2.3. Considering that, in a biophysical model of a neuron, dendritic properties have been shown to orchestrate spontaneously to learn a nonlinear function (Bicknell and H&#x000E4;usser, <xref ref-type="bibr" rid="B23">2021</xref>), dendritic computation is no longer an assumption inferred from morphology but an essential governing principle of single-neuron information processing.</p>
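The layered view of dendritic processing described above can be sketched as a toy two-layer model in the spirit of Poirazi and Mel (2001) and Beniaguev et al. (2021): each dendritic segment nonlinearly integrates its own synaptic inputs, and the soma thresholds the converging segment outputs. All function and parameter names are illustrative assumptions, not taken from the cited studies.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def two_layer_dendritic_neuron(inputs, segment_weights, soma_weights, threshold=0.5):
    """Toy sketch: each row of segment_weights defines one dendritic segment
    that nonlinearly integrates its synapses (an NMDA-spike-like nonlinearity);
    the soma then combines the segment outputs and emits a Boolean spike,
    mimicking a small layered network inside a single neuron."""
    segment_drive = segment_weights @ inputs   # local synaptic integration per segment
    segment_output = sigmoid(segment_drive)    # dendritic nonlinearity (calcium-spike-like)
    soma_drive = soma_weights @ segment_output # segments converge on the soma
    return int(soma_drive > threshold)         # Boolean (spike / no-spike) output
```

With two segments of two synapses each, co-active inputs on one segment drive it past its nonlinearity before the soma ever sees them, which a point-neuron weighted sum cannot reproduce.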
</sec>
</sec>
<sec id="s4">
<title>4. Outcome of optimization: network architecture</title>
<p>Because single biological computing units exhibit numerous unexplored properties, large-scale combinations of these properties may enable neural networks to reveal complexities that can significantly affect neural network functions (Hermundstad et al., <xref ref-type="bibr" rid="B94">2011</xref>; Braganza and Beck, <xref ref-type="bibr" rid="B29">2018</xref>; Navlakha et al., <xref ref-type="bibr" rid="B171">2018</xref>). The complexity that underlies the BNN emerges from other characteristics, such as high heterogeneity (Liu, <xref ref-type="bibr" rid="B138">2020</xref>), overall sparse connectivity (Eavani et al., <xref ref-type="bibr" rid="B60">2015</xref>; Cayco-Gajic et al., <xref ref-type="bibr" rid="B37">2017</xref>), and hierarchical modularization (Meunier et al., <xref ref-type="bibr" rid="B158">2010</xref>; Hilgetag and Goulas, <xref ref-type="bibr" rid="B95">2020</xref>; D&#x00027;Souza et al., <xref ref-type="bibr" rid="B58">2022</xref>).</p>
<sec>
<title>4.1. General distinctive characteristics of the network structure in BNN</title>
<p>The construction and maintenance of hard wiring from one neuron to another involve metabolic and volumetric costs (Chen et al., <xref ref-type="bibr" rid="B40">2006</xref>; Tomasi et al., <xref ref-type="bibr" rid="B230">2013</xref>; Rubinov et al., <xref ref-type="bibr" rid="B199">2015</xref>; Goulas et al., <xref ref-type="bibr" rid="B79">2019</xref>); thus, in a BNN, dense connections such as the fully connected layers common in ANNs are hard to imagine. The sparse connectivity of the BNN has inspired lightweight deep learning architectures (Wang C. H. et al., <xref ref-type="bibr" rid="B250">2022</xref>). Model compression by sparsification of connectivity has led to a large reduction in power consumption while minimizing performance loss (Han et al., <xref ref-type="bibr" rid="B87">2015</xref>; Barlaud and Guyard, <xref ref-type="bibr" rid="B15">2021</xref>; Hoefler et al., <xref ref-type="bibr" rid="B99">2022</xref>) and, in some cases, even improving performance (Luo, <xref ref-type="bibr" rid="B147">2020</xref>). Identifying the sweet spot between optimized sparsity and performance is the next challenge (Hoefler et al., <xref ref-type="bibr" rid="B99">2022</xref>), and as explored in Section 2, an EA may be a suitable choice (Mocanu et al., <xref ref-type="bibr" rid="B164">2018</xref>). As the outcome of a properly chosen sparsification algorithm, the connectivity map of an optimal sparse network also directly improves the interpretability of the neural network, because the putatively essential connections for the task are presumably spared while the unnecessary connections are pruned (Hoefler et al., <xref ref-type="bibr" rid="B99">2022</xref>).</p>
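As a minimal illustration of sparsification by pruning (in the spirit of magnitude pruning as in Han et al., 2015, though not their exact algorithm), the following sketch removes the smallest-magnitude fraction of weights and keeps a Boolean mask of the spared, putatively essential connections:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Toy magnitude pruning: zero out the smallest-magnitude `sparsity`
    fraction of weights. Returns the pruned weights and the Boolean mask
    of surviving connections (the interpretable sparse connectivity map)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)              # number of connections to remove
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold         # ties at the threshold are also pruned
    return weights * mask, mask
```

In an iterative prune-retrain loop, the mask would be reapplied after each training step so that pruned connections stay removed.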
<p>Combining high heterogeneity with sparse connectivity results in modular structures (Mukherjee and Hill, <xref ref-type="bibr" rid="B166">2011</xref>; Miscouridou et al., <xref ref-type="bibr" rid="B162">2018</xref>), and the highly modular structure of the BNN shows the same set of advantages as sparse connectivity. The modular structure can be interpreted as an aggregation of computational units employed for the same function. These units (neurons) are usually located near each other and activated at the same developmental stage, which implies that the general wiring principle in the BNN, involving activity- and distance-dependent wiring, may shape the modular structure (van Ooyen, <xref ref-type="bibr" rid="B246">2011</xref>). In contrast to such constructive algorithms based on the developmental process, learning-based decomposition into modules is also possible (Kirsch et al., <xref ref-type="bibr" rid="B120">2018</xref>; Pan and Rajan, <xref ref-type="bibr" rid="B178">2020</xref>), enhancing interpretability and the convenience of troubleshooting. In addition, connecting modules that perform distinct functions enables the task-specific design of a comprehensive neural network (Amer and Maul, <xref ref-type="bibr" rid="B9">2019</xref>; Michaels et al., <xref ref-type="bibr" rid="B159">2020</xref>; Duan et al., <xref ref-type="bibr" rid="B59">2022</xref>). Because each module can be considered a building block of a neural network, an evolutionary strategy may perform best in identifying the entire architecture optimized for a certain task (Clune et al., <xref ref-type="bibr" rid="B43">2013</xref>; Lin et al., <xref ref-type="bibr" rid="B137">2021</xref>). Such a strategy eventually maximizes the functional performance of each building block and implies scalability without interfering with the performance of other modules (Ellefsen et al., <xref ref-type="bibr" rid="B62">2015</xref>), while maintaining a minimal number of additional connections. 
This example is directly related to the question of how the brain can acquire and store multiple memories with a finite number of hardware units, without corrupting old memories or interfering with new learning. Such a problem is characterized as catastrophic forgetting and interference during continual learning, and many candidate mechanisms that the brain may utilize to solve these problems have been suggested (Hadsell et al., <xref ref-type="bibr" rid="B86">2020</xref>; Jedlicka et al., <xref ref-type="bibr" rid="B109">2022</xref>). Modular structures combined with sparse representations are a more intuitive solution than others because they assign each piece of information to separate hardware, implying faster and more precise access to each memory unit. Although the number of neurons and synapses alone cannot accommodate all the information that an intelligent agent learns during its lifespan, the modular structure may play a key role in efficient continual learning by harnessing other mechanisms that exploit common information.</p>
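The intuition that modular, sparse assignment avoids interference can be shown with a toy sketch: disjoint module masks give each memory its own hardware, so writing a new pattern leaves old ones untouched. All names and sizes here are illustrative assumptions.

```python
import numpy as np

def make_module_masks(n_units, n_modules):
    """Partition units into disjoint modules so that each memory is written
    to separate hardware: a toy version of interference-free modular storage."""
    per = n_units // n_modules          # assume n_units is divisible by n_modules
    masks = []
    for m in range(n_modules):
        mask = np.zeros(n_units, dtype=bool)
        mask[m * per:(m + 1) * per] = True
        masks.append(mask)
    return masks

def store(memory, pattern, mask):
    """Write a pattern only into the units owned by one module,
    leaving the contents of every other module untouched."""
    out = memory.copy()
    out[mask] = pattern[mask]
    return out
```

Storing a second pattern in a second module provably cannot overwrite the first, which is the modular answer to catastrophic interference sketched in the text.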
</sec>
<sec>
<title>4.2. Connectivity in a specific brain region</title>
<p>Considering that the largest scale of the module structure is the functional modularization of the brain into each brain region, the most straightforward way for AI to acquire a certain function is to copy the connectivity of the specific brain region that regulates that particular function. Although the current brain-wide or regional wiring map is far from completion, several brain areas are known to have relatively organized connectivity and regulate well-defined functions.</p>
<p>One of these brain regions is the cerebellum. Because of its relatively simple and organized structure, the cerebellum was the first target of computational modeling, as attempted by Marr (<xref ref-type="bibr" rid="B151">1969</xref>) and Albus (<xref ref-type="bibr" rid="B6">1971</xref>). The major streams of cerebellar information processing can be divided into a feedforward network through granule cells and Purkinje cells, and a feedback connection from the inferior olive, to which a part of the cerebellar output projects. Because the feedforward stream conveys information from the cortex and the olivary feedback sends the error between sensory feedback and sensory prediction, the Purkinje cell, where these streams converge, has been assumed to adapt so as to minimize the error signal (Raymond and Medina, <xref ref-type="bibr" rid="B191">2018</xref>). This structure-based conjecture was directly applied to the cerebellar model articulation controller (CMAC; Albus, <xref ref-type="bibr" rid="B7">1975</xref>), which builds on the fact that the cerebellum is involved in smooth motor control. CMAC is still utilized with modifications (Tsa et al., <xref ref-type="bibr" rid="B239">2018</xref>; Le et al., <xref ref-type="bibr" rid="B130">2020</xref>). Because the cerebellum is not the sole motor controller, the whole motor control process should be analyzed by including the initial command generator and the motor plant. Considering that the cerebellum receives inputs from the cerebral cortex through the pontine nuclei and propagates outputs back to the cortex through the deep cerebellar nuclei and their thalamic projection, the loop between the cortex and the cerebellum can be interpreted as a continuous corrector of ongoing motor control. 
The importance of such brain-wide loop structures involving the cerebellum has recently been raised and integrated into ANN models (Iwadate et al., <xref ref-type="bibr" rid="B106">2014</xref>; Tanaka et al., <xref ref-type="bibr" rid="B224">2020</xref>; Boven et al., <xref ref-type="bibr" rid="B28">2023</xref>). Furthermore, in recent decades, our understanding of the cerebellum and its functions has deepened considerably, including the non-motor outputs of the cerebellum (Kang et al., <xref ref-type="bibr" rid="B115">2021</xref>; Hwang et al., <xref ref-type="bibr" rid="B101">2023</xref>) and its multi-dimensional structural organization (Apps et al., <xref ref-type="bibr" rid="B10">2018</xref>; Beckinghausen and Sillitoe, <xref ref-type="bibr" rid="B17">2019</xref>). Although we currently barely understand the detailed network architecture underlying such diverse functions and gross anatomy, further research will allow us to implement the control of broad behavioral modalities through the cerebellum.</p>
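The CMAC idea mentioned above can be sketched in a minimal one-dimensional form: several offset tilings coarsely quantize the input, the output sums one weight per tiling, and an olivary-like error signal drives an LMS-style update. This is a simplified illustration under our own assumptions, not Albus's (1975) original formulation.

```python
import numpy as np

class SimpleCMAC:
    """Minimal 1-D CMAC sketch: n_tilings slightly offset tilings each
    quantize the input into one of n_cells; the prediction is the sum of
    one weight per tiling, trained by distributing the error (LMS rule)."""

    def __init__(self, n_tilings=8, n_cells=32, lo=0.0, hi=1.0, lr=0.2):
        self.n_tilings, self.n_cells = n_tilings, n_cells
        self.lo, self.hi, self.lr = lo, hi, lr
        self.w = np.zeros((n_tilings, n_cells))

    def _active_cells(self, x):
        span = (self.hi - self.lo) / (self.n_cells - 1)
        for t in range(self.n_tilings):
            offset = t * span / self.n_tilings     # each tiling is slightly shifted
            idx = int((x - self.lo + offset) / span)
            yield t, min(idx, self.n_cells - 1)

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._active_cells(x))

    def train(self, x, target):
        err = target - self.predict(x)             # olivary-like error signal
        for t, i in self._active_cells(x):
            self.w[t, i] += self.lr * err / self.n_tilings
```

Because nearby inputs share most active cells while distant inputs share none, training at one point generalizes locally without disturbing remote parts of the input space, a sparse, granule-cell-like coding property.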
<p>The hippocampus is another brain area that deserves a brief introduction here. The hippocampus has well-defined functional roles in episodic memory and spatial cognition, and the overall information flow across its sub-regions is also known (Bird and Burgess, <xref ref-type="bibr" rid="B26">2008</xref>; Kov&#x000E1;cs, <xref ref-type="bibr" rid="B122">2020</xref>; Li et al., <xref ref-type="bibr" rid="B134">2020</xref>). The pursuit of improved artificial memory systems has drawn more attention to memory mechanisms and the implementation of memory circuits (Berger et al., <xref ref-type="bibr" rid="B21">2012</xref>; van de Ven et al., <xref ref-type="bibr" rid="B245">2020</xref>). Traditionally, the auto-associative connectivity in CA3 was characterized and inspired Hopfield-type memory networks (Hopfield, <xref ref-type="bibr" rid="B100">1982</xref>; Ishizuka et al., <xref ref-type="bibr" rid="B103">1990</xref>; Bennett et al., <xref ref-type="bibr" rid="B20">1994</xref>). In addition, considering that the well-known connections from CA3 to CA1 roughly form a hetero-associative network, the stored information can migrate along the feedforward organization within the hippocampus (Graham et al., <xref ref-type="bibr" rid="B80">2010</xref>; Miyata et al., <xref ref-type="bibr" rid="B163">2013</xref>). However, because such associative memory structures are known to have limited capacity (McEliece et al., <xref ref-type="bibr" rid="B155">1988</xref>; Kuo and Zhang, <xref ref-type="bibr" rid="B126">1994</xref>; Bosch and Kurfess, <xref ref-type="bibr" rid="B27">1998</xref>), additional structural or functional extensions are necessary to reach the biological level of memory capacity, which can store dense information over a whole lifespan. 
Considering that the hippocampus receives inputs from the cortex through the dentate gyrus and projects back to the cortex through the CA1 output, the interaction between the hippocampus and the cortex has been suggested to serve as a memory buffer and a substrate for consolidation (Rothschild et al., <xref ref-type="bibr" rid="B198">2017</xref>). In addition to the modular structure with sparse representation mentioned in the previous section, working mechanisms for this interplay, such as generative replay and metaplasticity, have been suggested (Hadsell et al., <xref ref-type="bibr" rid="B86">2020</xref>; van de Ven et al., <xref ref-type="bibr" rid="B245">2020</xref>; Jedlicka et al., <xref ref-type="bibr" rid="B109">2022</xref>), resolving how to efficiently reorganize the representation of information across the network over time. Considering that these mechanisms are inferred from observations of both functional data and architecture, their applications to ANNs (Hadsell et al., <xref ref-type="bibr" rid="B86">2020</xref>; van de Ven et al., <xref ref-type="bibr" rid="B245">2020</xref>; Wang L. et al., <xref ref-type="bibr" rid="B251">2022</xref>) call for closer collaboration between neuroscience and AI engineering toward a neural network design that combines bioplausibility with better performance.</p>
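The CA3-inspired auto-associative principle can be illustrated with a minimal Hopfield-type network (Hopfield, 1982): Hebbian outer-product storage of &#x000B1;1 patterns and iterative sign updates that pull a corrupted probe back to the stored attractor. The helper names are ours; this is a textbook sketch, not a hippocampal model.

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian outer-product rule over rows of +/-1 patterns
    (auto-association, loosely analogous to recurrent CA3 connectivity)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)                 # no self-connections
    return W / n

def hopfield_recall(W, probe, n_steps=10):
    """Synchronous sign updates until the state settles on an attractor."""
    s = probe.copy()
    for _ in range(n_steps):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s
```

The capacity limit discussed in the text (roughly 0.14 patterns per unit for this rule, per McEliece et al., 1988) is exactly what motivates the additional structures the paragraph calls for.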
<p>Besides the cerebellum and the hippocampus, other, as yet unexplored, brain areas can be used to build biologically plausible neural networks. Since recent advances in neuroscience have revealed not only maps of structural and functional connections within and across regions but also the relationship between structure and function, careful imitation of other brain areas, with proper simplification and interfacing, will be increasingly in demand.</p>
</sec>
</sec>
<sec sec-type="discussion" id="s5">
<title>5. Discussion</title>
<sec>
<title>5.1. The goal and limitation of a bottom-up approach</title>
<p>Putting aside the hardware issue and the question of intrinsic infeasibility, asking whether copying a BNN in an artificial substrate can generate the intelligence possessed by a human or animal directly raises the question of the goal and limitations of a bottom-up approach. While we have partially reviewed recent advances in bottom-up approaches to constructing neural networks, it should be noted that replacing only a certain part of an ANN with one from a BNN usually does not improve the performance measured by the criteria for ANNs. In other words, if we introduce a new concept from a BNN, the entire framework must be changed. For example, to utilize spike-timing-dependent plasticity, a change from an ANN to an SNN is necessary, and consequently, the task design needs to be modified. For certain tasks, such as predicting the digit annotation of images drawn from the MNIST dataset (LeCun et al., <xref ref-type="bibr" rid="B131">2010</xref>) after supervised learning, the ANN can achieve the best precision, while the SNN may not be able to outperform it. However, when implemented in hardware, SNNs have a considerably greater advantage in terms of power consumption, as observed in modern neuromorphic hardware (Cao et al., <xref ref-type="bibr" rid="B34">2015</xref>; Pfeiffer and Pfeil, <xref ref-type="bibr" rid="B185">2018</xref>; Cui et al., <xref ref-type="bibr" rid="B48">2019</xref>; Kornijcuk et al., <xref ref-type="bibr" rid="B121">2019</xref>; Kabilan and Muthukumaran, <xref ref-type="bibr" rid="B114">2021</xref>; Parker et al., <xref ref-type="bibr" rid="B183">2022</xref>). In addition, as mentioned in Section 3.1, SNNs may have an advantage in dealing with intermittently activated inputs (Pfeiffer and Pfeil, <xref ref-type="bibr" rid="B185">2018</xref>). Thus, this example prompts an alternative interpretation: the advantages of a certain neural network can vary with the type of problem that the neural network must solve.</p>
<p>To generalize these observations, we first define &#x0201C;the problem space,&#x0201D; the set of problems that neural networks try to solve. &#x0201C;A problem&#x0201D; (<italic>P</italic>) is defined by the task itself (<italic>T</italic>), including the dataset and goal, and by the performance measure of the task (<italic>R</italic>), including efficiency measures such as power consumption and the number of computations or platforms required to perform the task. These attributes map <italic>P</italic> to a point in the problem space &#x02119;. For a certain problem, if we set a naturalistic task and try to achieve an evaluation measure in the range of humans or animals, the problem is a point in the &#x0201C;natural problem space&#x0201D; in <xref ref-type="fig" rid="F2">Figure 2</xref>. By simply assuming that there is a subset of &#x02119; consisting of points mapped from the biological range of <italic>T, R</italic> (<italic>T</italic><sub><italic>bio</italic></sub>, <italic>R</italic><sub><italic>bio</italic></sub>), the set of natural problems (natural problem space) can be defined as <inline-formula><mml:math id="M1"><mml:mrow><mml:mi mathvariant='double-struck'>B</mml:mi></mml:mrow></mml:math></inline-formula>&#x02119; as follows:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M2"><mml:mrow><mml:mi mathvariant='double-struck'>B</mml:mi><mml:mi>&#x02119;</mml:mi><mml:mo>=</mml:mo><mml:mo>&#x0007B;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mi>&#x02119;</mml:mi><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>=</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0007D;</mml:mo><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Problem spaces and cover sets by neural network designs. <bold>(Left top)</bold> In the entire problem space (&#x02119;), natural problems can be defined as the green region where both the task (<italic>T</italic> which includes the dataset and the goal) and the performance measure (<italic>R</italic> which includes the efficiency measure, the number of required computations, and platforms to perform the task) are within the biological range. <bold>(Left bottom)</bold> Neural network class can be defined by comparing a designed neural network with a biological neural network. The similarity decides its class. <bold>(Right)</bold> Binary division of the problem space into the natural problem space (<italic>B&#x02119;</italic>) and artificial problem space (<italic>A&#x02119;</italic>) as aforementioned. Neural network classes are: <inline-formula><mml:math id="M3"><mml:mrow><mml:mi mathvariant="script">ANN</mml:mi></mml:mrow></mml:math></inline-formula>, artificial neural network; <inline-formula><mml:math id="M4"><mml:mrow><mml:mi mathvariant="script">BNN</mml:mi></mml:mrow></mml:math></inline-formula>, biological neural network; <inline-formula><mml:math id="M5"><mml:mrow><mml:mi mathvariant="script">BPNN</mml:mi></mml:mrow></mml:math></inline-formula>, biologically plausible neural network; <inline-formula><mml:math id="M6"><mml:mrow><mml:mi mathvariant="script">SNN</mml:mi></mml:mrow></mml:math></inline-formula>, spiking neural network. The black arrowhead represents the problems for ANN supremacy. Magenta: BNN supremacy; Purple: BPNN supremacy; Yellow: SNN supremacy, compared with ANN.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-17-1092185-g0002.tif"/>
</fig>
<p>and all non-natural problems belong to the &#x0201C;artificial problem space&#x0201D; (<italic>A&#x02119;</italic>). For example, tracking fast-moving prey without intensive pretraining is a natural problem, whereas identifying a fingerprint in a vast database is an artificial problem. In fact, determining the type of a problem can be handled by neuroscience, specifically by a top-down approach, because neuroscience ultimately determines whether the task is among those that can be done with the brain, in Turing&#x00027;s (<xref ref-type="bibr" rid="B241">1948</xref>) terms.</p>
<p>These problems in <italic>B&#x02119;</italic> or <italic>A&#x02119;</italic> can be solved using neural networks; however, the coverage differs depending on the class of the neural network. ANNs have shown powerful performance, at least for problems in <italic>A&#x02119;</italic>, and have also been employed to solve natural problems by reducing power consumption and minimizing training. Thus, as shown in the Venn diagram in <xref ref-type="fig" rid="F2">Figure 2</xref>, the <inline-formula><mml:math id="M7"><mml:mrow><mml:mi mathvariant="script">ANN</mml:mi></mml:mrow></mml:math></inline-formula> class covers some natural problems and a larger part of the artificial problem space. On the other hand, the <inline-formula><mml:math id="M8"><mml:mrow><mml:mi mathvariant="script">SNN</mml:mi></mml:mrow></mml:math></inline-formula> class has been utilized to solve more natural problems than artificial problems. For instance, through hardware implementation, an SNN can greatly reduce the required resources while achieving precision similar to that of an ANN in image classification, whereas an ANN can be better optimized for performance on a typical computer after training with a large dataset. Therefore, as shown in <xref ref-type="fig" rid="F2">Figure 2</xref>, the <inline-formula><mml:math id="M9"><mml:mrow><mml:mi mathvariant="script">SNN</mml:mi></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="M10"><mml:mrow><mml:mi mathvariant="script">ANN</mml:mi></mml:mrow></mml:math></inline-formula> classes intersect in both problem spaces, and the intersection in the natural problem space is a subset of the <inline-formula><mml:math id="M11"><mml:mrow><mml:mi mathvariant="script">SNN</mml:mi></mml:mrow></mml:math></inline-formula> class. 
By contrast, the <inline-formula><mml:math id="M12"><mml:mrow><mml:mi mathvariant="script">BNN</mml:mi></mml:mrow></mml:math></inline-formula> class is a subset of the natural problem space that covers most of the <italic>B&#x02119;</italic> region. Because we defined natural problems as those that the brain can solve, it is reasonable to assume that a BNN, as a unit of the brain, can be employed to process most natural problems not covered by other classes of neural networks. We call this relative complement in <inline-formula><mml:math id="M13"><mml:mrow><mml:mi mathvariant="script">BNN</mml:mi></mml:mrow></mml:math></inline-formula> the &#x0201C;BNN supremacy regime,&#x0201D; borrowing a phrase actively used in quantum computing (Arute et al., <xref ref-type="bibr" rid="B11">2019</xref>). Thus, when building a biologically plausible neural network, the task, its performance measure, and the neural network architecture all need to be changed to demonstrate that the designed neural network performs better than an ANN. Given the assumption that the class of biologically plausible neural networks, <inline-formula><mml:math id="M14"><mml:mrow><mml:mi mathvariant="script">BPNN</mml:mi></mml:mrow></mml:math></inline-formula>, is defined by similarity to BNN architecture, our practical short-term goal is not only to construct a BNN-like architecture but also to demonstrate &#x0201C;BPNN supremacy&#x0201D; by finding a proper problem in <italic>B&#x02119;</italic>. 
There have been attempts at formalization with similar motivations for SNNs (Maass, <xref ref-type="bibr" rid="B148">1996</xref>; Kwisthout and Donselaar, <xref ref-type="bibr" rid="B127">2020</xref>) and ANNs (Balcazar et al., <xref ref-type="bibr" rid="B14">1997</xref>), and the shortest-path problem has been identified as a problem in the relative complement of <inline-formula><mml:math id="M15"><mml:mrow><mml:mi mathvariant="script">ANN</mml:mi></mml:mrow></mml:math></inline-formula> in <inline-formula><mml:math id="M16"><mml:mrow><mml:mi mathvariant="script">SNN</mml:mi></mml:mrow></mml:math></inline-formula> (Aimone et al., <xref ref-type="bibr" rid="B4">2021</xref>). Eventually, formalization and a mathematical approach will be necessary to better define the problem spaces and investigate the spectrum within each set.</p>
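As a purely illustrative toy (all problem labels and coverage sets below are invented for this sketch), the set relations of Figure 2 can be written out directly, with a supremacy regime computed as a relative complement:

```python
# Toy model of the problem-space partition: problems are labels, problem
# spaces and neural network coverage are Python sets. Every label and
# coverage assignment here is an invented illustration, not data.
natural = {"prey_tracking", "one_shot_navigation", "low_power_vision"}
artificial = {"fingerprint_db_search", "mnist_benchmark"}

ann_covers = {"mnist_benchmark", "fingerprint_db_search", "low_power_vision"}
snn_covers = {"low_power_vision", "prey_tracking"}
bnn_covers = natural   # by definition, the brain handles the natural problems

# "Supremacy regime" = problems one class covers that a rival class does not.
snn_supremacy = (snn_covers - ann_covers) & natural
bnn_supremacy = bnn_covers - (ann_covers | snn_covers)
```

Under these invented assignments, the relative complements single out the problems where only the more biological class applies, which is exactly the region the text proposes to search for BPNN supremacy.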
</sec>
<sec>
<title>5.2. The role of neuroscience in the bottom-up approach to explore the BNN supremacy regime</title>
<p>How can we discover points in the problem spaces, specifically within the BNN or BPNN supremacy regime? Does a proper design of a BNN or BPNN always exist for certain problems? We have neither a concrete formalization scheme nor even a rough map of the problem spaces with which to answer these questions fundamentally through mathematical proofs, and we lack information regarding the proper design of neural networks. Thus, we suggest the pipeline shown in <xref ref-type="fig" rid="F3">Figure 3</xref>, which starts with neuroscientific discoveries and shows how to define a problem, specifically in the natural problem space. Accumulated neuroscientific data can help define the task goal and the corresponding dataset to train neural networks through a top-down approach, which pursues the neural network mechanism by starting from observations at the level of the cognitive behavior of an intelligent agent. Thus, a top-down approach may be able to define a point in the problem space and distinguish between points in <italic>B&#x02119;</italic> and <italic>A&#x02119;</italic>. Simultaneously, a bottom-up approach may enable the design of a neural network by combining many essential properties of the targeted BNN with the generalized principles of a neural network. However, because a problem in <italic>B&#x02119;</italic> may be too complicated to formulate completely, and it is difficult to judge whether a designed neural network can solve the problem before emulation, the defined problem and the hypothesized neural network design should be embedded in an already built scheme, such as an ANN or SNN, to utilize feasible engineering techniques. Such hybridization is necessary for estimating the solvability of the problem without a full emulation. 
We speculate that persistent exploration following the suggested pipeline will fill in the diagram shown in <xref ref-type="fig" rid="F2">Figure 2</xref>, eventually enabling a formal investigation to derive the set boundaries. We believe that this slow but straightforward bottom-up approach, in collaboration with a top-down approach and interfaced with current ANNs, will light the way toward building a machine that thinks like a human on the concrete foundation of neural circuit principles. Moreover, this pipeline could promote better communication between neuroscience and AI engineering.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Suggested pipeline to explore problem spaces and proper design of neural networks. The top-down approach defines the problem to solve based on the findings of neuroscience and the bottom-up approach designs a neural network. To determine whether the problem can be solved by a designed neural network without slow search, both need to be hybridized with feasible neural networks such as ANN or SNN.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-17-1092185-g0003.tif"/>
</fig>
</sec>
</sec>
<sec sec-type="author-contributions" id="s6">
<title>Author contributions</title>
<p>IJ and TK searched and analyzed the references. IJ wrote the draft. TK arranged the original idea and revised the draft. All authors contributed to the article and approved the submitted version.</p>
</sec>
</body>
<back>
<sec sec-type="funding-information" id="s7">
<title>Funding</title>
<p>This research was supported by the Original Technology Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (no. 2021M3F3A2A01037811) and by the KIST Institutional Program (project no. 2E32211).</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s8">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abdissa</surname> <given-names>D.</given-names></name> <name><surname>Hamba</surname> <given-names>N.</given-names></name> <name><surname>Gerbi</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Review article on adult neurogenesis in humans</article-title>. <source>Transl. Res. Anat</source>. <volume>20</volume>:<fpage>100074</fpage>. <pub-id pub-id-type="doi">10.1016/j.tria.2020.100074</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abraham</surname> <given-names>W. C.</given-names></name></person-group> (<year>2008</year>). <article-title>Metaplasticity: tuning synapses and networks for plasticity</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>9</volume>, <fpage>387</fpage>&#x02013;<lpage>387</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2356</pub-id><pub-id pub-id-type="pmid">18401345</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abraham</surname> <given-names>W. C.</given-names></name> <name><surname>Jones</surname> <given-names>O. D.</given-names></name> <name><surname>Glanzman</surname> <given-names>D. L.</given-names></name></person-group> (<year>2019</year>). <article-title>Is plasticity of synapses the mechanism of long-term memory storage?</article-title> <source>NPJ Sci. Learn</source>. <volume>4</volume>:<fpage>9</fpage>. <pub-id pub-id-type="doi">10.1038/s41539-019-0048-y</pub-id><pub-id pub-id-type="pmid">31285847</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Aimone</surname> <given-names>J. B.</given-names></name> <name><surname>Ho</surname> <given-names>Y.</given-names></name> <name><surname>Parekh</surname> <given-names>O.</given-names></name> <name><surname>Phillips</surname> <given-names>C. A.</given-names></name> <name><surname>Pinar</surname> <given-names>A.</given-names></name> <name><surname>Severa</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>&#x0201C;Provable advantages for graph algorithms in spiking neural networks,&#x0201D;</article-title> in <source>Proceedings of the 33rd ACM Symposium on Parallelism in Algorithms and Architectures, SPAA &#x00027;21</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>35</fpage>&#x02013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1145/3409964.3461813</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Akam</surname> <given-names>T.</given-names></name> <name><surname>Kullmann</surname> <given-names>D. M.</given-names></name></person-group> (<year>2014</year>). <article-title>Oscillatory multiplexing of population codes for selective communication in the mammalian brain</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>15</volume>, <fpage>111</fpage>&#x02013;<lpage>122</lpage>. <pub-id pub-id-type="doi">10.1038/nrn3668</pub-id><pub-id pub-id-type="pmid">24434912</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Albus</surname> <given-names>J. S.</given-names></name></person-group> (<year>1971</year>). <article-title>A theory of cerebellar function</article-title>. <source>Math. Biosci</source>. <volume>10</volume>, <fpage>25</fpage>&#x02013;<lpage>61</lpage>. <pub-id pub-id-type="doi">10.1016/0025-5564(71)90051-4</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Albus</surname> <given-names>J. S.</given-names></name></person-group> (<year>1975</year>). <article-title>A new approach to manipulator control: the cerebellar model articulation controller (CMAC)</article-title>. <source>J. Dyn. Syst. Measure. Control</source> <volume>97</volume>, <fpage>220</fpage>&#x02013;<lpage>227</lpage>. <pub-id pub-id-type="doi">10.1115/1.3426922</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alejandre-Garc&#x000ED;a</surname> <given-names>T.</given-names></name> <name><surname>Kim</surname> <given-names>S.</given-names></name> <name><surname>P&#x000E9;rez-Ortega</surname> <given-names>J.</given-names></name> <name><surname>Yuste</surname> <given-names>R.</given-names></name></person-group> (<year>2022</year>). <article-title>Intrinsic excitability mechanisms of neuronal ensemble formation</article-title>. <source>eLife</source> <volume>11</volume>:<fpage>e77470</fpage>. <pub-id pub-id-type="doi">10.7554/eLife.77470</pub-id><pub-id pub-id-type="pmid">35506662</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amer</surname> <given-names>M.</given-names></name> <name><surname>Maul</surname> <given-names>T.</given-names></name></person-group> (<year>2019</year>). <article-title>A review of modularization techniques in artificial neural networks</article-title>. <source>Artif. Intell. Rev</source>. <volume>52</volume>, <fpage>527</fpage>&#x02013;<lpage>561</lpage>. <pub-id pub-id-type="doi">10.1007/s10462-019-09706-7</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Apps</surname> <given-names>R.</given-names></name> <name><surname>Hawkes</surname> <given-names>R.</given-names></name> <name><surname>Aoki</surname> <given-names>S.</given-names></name> <name><surname>Bengtsson</surname> <given-names>F.</given-names></name> <name><surname>Brown</surname> <given-names>A. M.</given-names></name> <name><surname>Chen</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Cerebellar modules and their role as operational cerebellar processing units</article-title>. <source>Cerebellum</source> <volume>17</volume>, <fpage>654</fpage>&#x02013;<lpage>682</lpage>. <pub-id pub-id-type="doi">10.1007/s12311-018-0952-3</pub-id><pub-id pub-id-type="pmid">29931663</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arute</surname> <given-names>F.</given-names></name> <name><surname>Arya</surname> <given-names>K.</given-names></name> <name><surname>Babbush</surname> <given-names>R.</given-names></name> <name><surname>Bacon</surname> <given-names>D.</given-names></name> <name><surname>Bardin</surname> <given-names>J. C.</given-names></name> <name><surname>Barends</surname> <given-names>R.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Quantum supremacy using a programmable superconducting processor</article-title>. <source>Nature</source> <volume>574</volume>, <fpage>505</fpage>&#x02013;<lpage>510</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-019-1666-5</pub-id><pub-id pub-id-type="pmid">31645734</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Averbeck</surname> <given-names>B. B.</given-names></name></person-group> (<year>2022</year>). <article-title>Pruning recurrent neural networks replicates adolescent changes in working memory and reinforcement learning</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>119</volume>:<fpage>e2121331119</fpage>. <pub-id pub-id-type="doi">10.1073/pnas.2121331119</pub-id><pub-id pub-id-type="pmid">35622896</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Averbeck</surname> <given-names>B. B.</given-names></name> <name><surname>Latham</surname> <given-names>P. E.</given-names></name> <name><surname>Pouget</surname> <given-names>A.</given-names></name></person-group> (<year>2006</year>). <article-title>Neural correlations, population coding and computation</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>7</volume>, <fpage>358</fpage>&#x02013;<lpage>366</lpage>. <pub-id pub-id-type="doi">10.1038/nrn1888</pub-id><pub-id pub-id-type="pmid">16760916</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Balcazar</surname> <given-names>J.</given-names></name> <name><surname>Gavalda</surname> <given-names>R.</given-names></name> <name><surname>Siegelmann</surname> <given-names>H.</given-names></name></person-group> (<year>1997</year>). <article-title>Computational power of neural networks: a characterization in terms of Kolmogorov complexity</article-title>. <source>IEEE Trans. Inform. Theory</source> <volume>43</volume>, <fpage>1175</fpage>&#x02013;<lpage>1183</lpage>. <pub-id pub-id-type="doi">10.1109/18.605580</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Barlaud</surname> <given-names>M.</given-names></name> <name><surname>Guyard</surname> <given-names>F.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Learning sparse deep neural networks using efficient structured projections on convex constraints for green AI,&#x0201D;</article-title> in <source>2020 25th International Conference on Pattern Recognition (ICPR)</source> (<publisher-loc>Milan</publisher-loc>), <fpage>1566</fpage>&#x02013;<lpage>1573</lpage>. <pub-id pub-id-type="doi">10.1109/ICPR48806.2021.9412162</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barrett</surname> <given-names>D. G.</given-names></name> <name><surname>Den&#x000E8;ve</surname> <given-names>S.</given-names></name> <name><surname>Machens</surname> <given-names>C. K.</given-names></name></person-group> (<year>2016</year>). <article-title>Optimal compensation for neuron loss</article-title>. <source>eLife</source> <volume>5</volume>:<fpage>e12454</fpage>. <pub-id pub-id-type="doi">10.7554/eLife.12454</pub-id><pub-id pub-id-type="pmid">27935480</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Beckinghausen</surname> <given-names>J.</given-names></name> <name><surname>Sillitoe</surname> <given-names>R. V.</given-names></name></person-group> (<year>2019</year>). <article-title>Insights into cerebellar development and connectivity</article-title>. <source>Neurosci. Lett</source>. <volume>688</volume>, <fpage>2</fpage>&#x02013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1016/j.neulet.2018.05.013</pub-id><pub-id pub-id-type="pmid">29746896</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Beniaguev</surname> <given-names>D.</given-names></name> <name><surname>Segev</surname> <given-names>I.</given-names></name> <name><surname>London</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>Single cortical neurons as deep artificial neural networks</article-title>. <source>Neuron</source> <volume>109</volume>, <fpage>2727</fpage>&#x02013;<lpage>2739</lpage>.e3. <pub-id pub-id-type="doi">10.1016/j.neuron.2021.07.002</pub-id><pub-id pub-id-type="pmid">34380016</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Benna</surname> <given-names>M. K.</given-names></name> <name><surname>Fusi</surname> <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>Computational principles of synaptic memory consolidation</article-title>. <source>Nat. Neurosci</source>. <volume>19</volume>, <fpage>1697</fpage>&#x02013;<lpage>1706</lpage>. <pub-id pub-id-type="doi">10.1038/nn.4401</pub-id><pub-id pub-id-type="pmid">27694992</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bennett</surname> <given-names>M. R.</given-names></name> <name><surname>Gibson</surname> <given-names>W. G.</given-names></name> <name><surname>Robinson</surname> <given-names>J.</given-names></name></person-group> (<year>1994</year>). <article-title>Dynamics of the CA3 pyramidal neuron autoassociative memory network in the hippocampus</article-title>. <source>Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci</source>. <volume>343</volume>, <fpage>167</fpage>&#x02013;<lpage>187</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.1994.0019</pub-id><pub-id pub-id-type="pmid">8146234</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berger</surname> <given-names>T. W.</given-names></name> <name><surname>Song</surname> <given-names>D.</given-names></name> <name><surname>Chan</surname> <given-names>R. H. M.</given-names></name> <name><surname>Marmarelis</surname> <given-names>V. Z.</given-names></name> <name><surname>LaCoss</surname> <given-names>J.</given-names></name> <name><surname>Wills</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>A hippocampal cognitive prosthesis: multi-input, multi-output nonlinear modeling and VLSI implementation</article-title>. <source>IEEE Trans. Neural Syst. Rehabil. Eng</source>. <volume>20</volume>, <fpage>198</fpage>&#x02013;<lpage>211</lpage>. <pub-id pub-id-type="doi">10.1109/TNSRE.2012.2189133</pub-id><pub-id pub-id-type="pmid">22438335</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bhatia</surname> <given-names>A.</given-names></name> <name><surname>Moza</surname> <given-names>S.</given-names></name> <name><surname>Bhalla</surname> <given-names>U. S.</given-names></name></person-group> (<year>2019</year>). <article-title>Precise excitation-inhibition balance controls gain and timing in the hippocampus</article-title>. <source>eLife</source> <volume>8</volume>:<fpage>e43415</fpage>. <pub-id pub-id-type="doi">10.7554/eLife.43415</pub-id><pub-id pub-id-type="pmid">34165427</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bicknell</surname> <given-names>B. A.</given-names></name> <name><surname>H&#x000E4;usser</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>A synaptic learning rule for exploiting nonlinear dendritic computation</article-title>. <source>Neuron</source> <volume>109</volume>, <fpage>4001</fpage>&#x02013;<lpage>4017</lpage>.e10. <pub-id pub-id-type="doi">10.1016/j.neuron.2021.09.044</pub-id><pub-id pub-id-type="pmid">34715026</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bienenstock</surname> <given-names>E.</given-names></name> <name><surname>Cooper</surname> <given-names>L.</given-names></name> <name><surname>Munro</surname> <given-names>P.</given-names></name></person-group> (<year>1982</year>). <article-title>Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex</article-title>. <source>J. Neurosci</source>. <volume>2</volume>, <fpage>32</fpage>&#x02013;<lpage>48</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.02-01-00032.1982</pub-id><pub-id pub-id-type="pmid">7054394</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bird</surname> <given-names>A. D.</given-names></name> <name><surname>Jedlicka</surname> <given-names>P.</given-names></name> <name><surname>Cuntz</surname> <given-names>H.</given-names></name></person-group> (<year>2021</year>). <article-title>Dendritic normalisation improves learning in sparsely connected artificial neural networks</article-title>. <source>PLoS Comput. Biol</source>. <volume>17</volume>:<fpage>e1009202</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1009202</pub-id><pub-id pub-id-type="pmid">34370727</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bird</surname> <given-names>C. M.</given-names></name> <name><surname>Burgess</surname> <given-names>N.</given-names></name></person-group> (<year>2008</year>). <article-title>The hippocampus and memory: insights from spatial processing</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>9</volume>, <fpage>182</fpage>&#x02013;<lpage>194</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2335</pub-id><pub-id pub-id-type="pmid">18270514</pub-id></citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bosch</surname> <given-names>H.</given-names></name> <name><surname>Kurfess</surname> <given-names>F. J.</given-names></name></person-group> (<year>1998</year>). <article-title>Information storage capacity of incompletely connected associative memories</article-title>. <source>Neural Netw</source>. <volume>11</volume>, <fpage>869</fpage>&#x02013;<lpage>876</lpage>. <pub-id pub-id-type="doi">10.1016/S0893-6080(98)00035-5</pub-id><pub-id pub-id-type="pmid">12662789</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boven</surname> <given-names>E.</given-names></name> <name><surname>Pemberton</surname> <given-names>J.</given-names></name> <name><surname>Chadderton</surname> <given-names>P.</given-names></name> <name><surname>Apps</surname> <given-names>R.</given-names></name> <name><surname>Costa</surname> <given-names>R. P.</given-names></name></person-group> (<year>2023</year>). <article-title>Cerebro-cerebellar networks facilitate learning through feedback decoupling</article-title>. <source>Nat. Commun.</source> <volume>14</volume>, <fpage>1</fpage>&#x02013;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1038/s41467-022-35658-8</pub-id><pub-id pub-id-type="pmid">36599827</pub-id></citation></ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Braganza</surname> <given-names>O.</given-names></name> <name><surname>Beck</surname> <given-names>H.</given-names></name></person-group> (<year>2018</year>). <article-title>The circuit motif as a conceptual tool for multilevel neuroscience</article-title>. <source>Trends Neurosci</source>. <volume>41</volume>, <fpage>128</fpage>&#x02013;<lpage>136</lpage>. <pub-id pub-id-type="doi">10.1016/j.tins.2018.01.002</pub-id><pub-id pub-id-type="pmid">29397990</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Brown</surname> <given-names>T.</given-names></name> <name><surname>Mann</surname> <given-names>B.</given-names></name> <name><surname>Ryder</surname> <given-names>N.</given-names></name> <name><surname>Subbiah</surname> <given-names>M.</given-names></name> <name><surname>Kaplan</surname> <given-names>J. D.</given-names></name> <name><surname>Dhariwal</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>&#x0201C;Language models are few-shot learners,&#x0201D;</article-title> in <source>Advances in Neural Information Processing Systems, Vol. 33</source>, eds H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin (<publisher-loc>Red Hook, NY</publisher-loc>: <publisher-name>Curran Associates, Inc.</publisher-name>), <fpage>1877</fpage>&#x02013;<lpage>1901</lpage>.</citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brzosko</surname> <given-names>Z.</given-names></name> <name><surname>Mierau</surname> <given-names>S. B.</given-names></name> <name><surname>Paulsen</surname> <given-names>O.</given-names></name></person-group> (<year>2019</year>). <article-title>Neuromodulation of spike-timing-dependent plasticity: past, present, and future</article-title>. <source>Neuron</source> <volume>103</volume>, <fpage>563</fpage>&#x02013;<lpage>581</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2019.05.041</pub-id><pub-id pub-id-type="pmid">31437453</pub-id></citation></ref>
<ref id="B32">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cajal</surname> <given-names>R. Y.</given-names></name></person-group> (<year>1888</year>). <source>Revista trimestral de histolog&#x000ED;a normal y patol&#x000F3;gica</source>, Vol. 1. <publisher-loc>Barcelona</publisher-loc>: <publisher-name>Casa Provincial de la Caridad</publisher-name>.</citation></ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cameron</surname> <given-names>B.</given-names></name> <name><surname>de la Malla</surname> <given-names>C.</given-names></name> <name><surname>L&#x000F3;pez-Moliner</surname> <given-names>J.</given-names></name></person-group> (<year>2014</year>). <article-title>The role of differential delays in integrating transient visual and proprioceptive information</article-title>. <source>Front. Psychol</source>. <volume>5</volume>:<fpage>50</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2014.00050</pub-id><pub-id pub-id-type="pmid">24550870</pub-id></citation></ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cao</surname> <given-names>Y.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Khosla</surname> <given-names>D.</given-names></name></person-group> (<year>2015</year>). <article-title>Spiking deep convolutional neural networks for energy-efficient object recognition</article-title>. <source>Int. J. Comput. Vis</source>. <volume>113</volume>, <fpage>54</fpage>&#x02013;<lpage>66</lpage>. <pub-id pub-id-type="doi">10.1007/s11263-014-0788-3</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Caporale</surname> <given-names>N.</given-names></name> <name><surname>Dan</surname> <given-names>Y.</given-names></name></person-group> (<year>2008</year>). <article-title>Spike timing-dependent plasticity: a Hebbian learning rule</article-title>. <source>Annu. Rev. Neurosci</source>. <volume>31</volume>, <fpage>25</fpage>&#x02013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.31.060407.125639</pub-id><pub-id pub-id-type="pmid">18275283</pub-id></citation></ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Catsigeras</surname> <given-names>E.</given-names></name></person-group> (<year>2013</year>). <article-title>Dale&#x00027;s principle is necessary for an optimal neuronal network&#x00027;s dynamics</article-title>. <source>Appl. Math</source>. <volume>4</volume>, <fpage>15</fpage>&#x02013;<lpage>29</lpage>. <pub-id pub-id-type="doi">10.4236/am.2013.410A2002</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cayco-Gajic</surname> <given-names>N. A.</given-names></name> <name><surname>Clopath</surname> <given-names>C.</given-names></name> <name><surname>Silver</surname> <given-names>R. A.</given-names></name></person-group> (<year>2017</year>). <article-title>Sparse synaptic connectivity is required for decorrelation and pattern separation in feedforward networks</article-title>. <source>Nat. Commun</source>. <volume>8</volume>:<fpage>1116</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-017-01109-y</pub-id><pub-id pub-id-type="pmid">29061964</pub-id></citation></ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cembrowski</surname> <given-names>M. S.</given-names></name> <name><surname>Spruston</surname> <given-names>N.</given-names></name></person-group> (<year>2019</year>). <article-title>Heterogeneity within classical cell types is the rule: lessons from hippocampal pyramidal neurons</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>20</volume>, <fpage>193</fpage>&#x02013;<lpage>204</lpage>. <pub-id pub-id-type="doi">10.1038/s41583-019-0125-5</pub-id><pub-id pub-id-type="pmid">30778192</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chavlis</surname> <given-names>S.</given-names></name> <name><surname>Poirazi</surname> <given-names>P.</given-names></name></person-group> (<year>2021</year>). <article-title>Drawing inspiration from biological dendrites to empower artificial neural networks</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>70</volume>, <fpage>1</fpage>&#x02013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2021.04.007</pub-id><pub-id pub-id-type="pmid">34087540</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>B. L.</given-names></name> <name><surname>Hall</surname> <given-names>D. H.</given-names></name> <name><surname>Chklovskii</surname> <given-names>D. B.</given-names></name></person-group> (<year>2006</year>). <article-title>Wiring optimization can relate neuronal structure and function</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>103</volume>, <fpage>4723</fpage>&#x02013;<lpage>4728</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0506806103</pub-id><pub-id pub-id-type="pmid">16537428</pub-id></citation></ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>S.</given-names></name> <name><surname>Zhang</surname> <given-names>S.</given-names></name> <name><surname>Shang</surname> <given-names>J.</given-names></name> <name><surname>Chen</surname> <given-names>B.</given-names></name> <name><surname>Zheng</surname> <given-names>N.</given-names></name></person-group> (<year>2019</year>). <article-title>Brain-inspired cognitive model with attention for self-driving cars</article-title>. <source>IEEE Trans. Cogn. Dev. Syst</source>. <volume>11</volume>, <fpage>13</fpage>&#x02013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1109/TCDS.2017.2717451</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chklovskii</surname> <given-names>D. B.</given-names></name></person-group> (<year>2004</year>). <article-title>Synaptic connectivity and neuronal morphology: two sides of the same coin</article-title>. <source>Neuron</source> <volume>43</volume>, <fpage>609</fpage>&#x02013;<lpage>617</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(04)00498-2</pub-id><pub-id pub-id-type="pmid">15339643</pub-id></citation></ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clune</surname> <given-names>J.</given-names></name> <name><surname>Mouret</surname> <given-names>J.-B.</given-names></name> <name><surname>Lipson</surname> <given-names>H.</given-names></name></person-group> (<year>2013</year>). <article-title>The evolutionary origins of modularity</article-title>. <source>Proc. R. Soc. B: Biol. Sci</source>. <volume>280</volume>:<fpage>20122863</fpage>. <pub-id pub-id-type="doi">10.1098/rspb.2012.2863</pub-id><pub-id pub-id-type="pmid">23363632</pub-id></citation></ref>
<ref id="B44">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Com&#x00161;a</surname> <given-names>I.-M.</given-names></name> <name><surname>Potempa</surname> <given-names>K.</given-names></name> <name><surname>Versari</surname> <given-names>L.</given-names></name> <name><surname>Fischbacher</surname> <given-names>T.</given-names></name> <name><surname>Gesmundo</surname> <given-names>A.</given-names></name> <name><surname>Alakuijala</surname> <given-names>J.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Temporal coding in spiking neural networks with alpha synaptic function: learning with backpropagation,&#x0201D;</article-title> in <source>ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</source> (<publisher-loc>Barcelona</publisher-loc>), <fpage>8529</fpage>&#x02013;<lpage>8533</lpage>. <pub-id pub-id-type="doi">10.1109/ICASSP40776.2020.9053856</pub-id><pub-id pub-id-type="pmid">33900924</pub-id></citation></ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cook</surname> <given-names>E. P.</given-names></name> <name><surname>Johnston</surname> <given-names>D.</given-names></name></person-group> (<year>1997</year>). <article-title>Active dendrites reduce location-dependent variability of synaptic input trains</article-title>. <source>J. Neurophysiol</source>. <volume>78</volume>, <fpage>2116</fpage>&#x02013;<lpage>2128</lpage>. <pub-id pub-id-type="doi">10.1152/jn.1997.78.4.2116</pub-id><pub-id pub-id-type="pmid">9325379</pub-id></citation></ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cools</surname> <given-names>R.</given-names></name> <name><surname>Arnsten</surname> <given-names>A. F. T.</given-names></name></person-group> (<year>2022</year>). <article-title>Neuromodulation of prefrontal cortex cognitive function in primates: the powerful roles of monoamines and acetylcholine</article-title>. <source>Neuropsychopharmacology</source> <volume>47</volume>, <fpage>309</fpage>&#x02013;<lpage>328</lpage>. <pub-id pub-id-type="doi">10.1038/s41386-021-01100-8</pub-id><pub-id pub-id-type="pmid">34312496</pub-id></citation></ref>
<ref id="B47">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cornford</surname> <given-names>J.</given-names></name> <name><surname>Kalajdzievski</surname> <given-names>D.</given-names></name> <name><surname>Leite</surname> <given-names>M.</given-names></name> <name><surname>Lamarquette</surname> <given-names>A.</given-names></name> <name><surname>Kullmann</surname> <given-names>D. M.</given-names></name> <name><surname>Richards</surname> <given-names>B.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Learning to live with Dale&#x00027;s principle: ANNs with separate excitatory and inhibitory units,&#x0201D;</article-title> in <source>9th International Conference on Learning Representations</source> (<publisher-loc>Austria</publisher-loc>). <pub-id pub-id-type="doi">10.1101/2020.11.02.364968</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cui</surname> <given-names>P.</given-names></name> <name><surname>Shabash</surname> <given-names>B.</given-names></name> <name><surname>Wiese</surname> <given-names>K. C.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;EvoDNN - an evolutionary deep neural network with heterogeneous activation functions,&#x0201D;</article-title> in <source>2019 IEEE Congress on Evolutionary Computation (CEC)</source> (<publisher-loc>Wellington</publisher-loc>), <fpage>2362</fpage>&#x02013;<lpage>2369</lpage>. <pub-id pub-id-type="doi">10.1109/CEC.2019.8789964</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Dayan</surname> <given-names>P.</given-names></name> <name><surname>Abbott</surname> <given-names>L.</given-names></name></person-group> (<year>2001</year>). <source>Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Massachusetts Institute of Technology Press</publisher-name>.</citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Debanne</surname> <given-names>D.</given-names></name> <name><surname>Inglebert</surname> <given-names>Y.</given-names></name> <name><surname>Russier</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>Plasticity of intrinsic neuronal excitability</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>54</volume>, <fpage>73</fpage>&#x02013;<lpage>82</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2018.09.001</pub-id><pub-id pub-id-type="pmid">30243042</pub-id></citation></ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>DeFelipe</surname> <given-names>J.</given-names></name> <name><surname>Fari&#x000F1;as</surname> <given-names>I.</given-names></name></person-group> (<year>1992</year>). <article-title>The pyramidal neuron of the cerebral cortex: morphological and chemical characteristics of the synaptic inputs</article-title>. <source>Prog. Neurobiol</source>. <volume>39</volume>, <fpage>563</fpage>&#x02013;<lpage>607</lpage>. <pub-id pub-id-type="doi">10.1016/0301-0082(92)90015-7</pub-id><pub-id pub-id-type="pmid">1410442</pub-id></citation></ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Den&#x000E8;ve</surname> <given-names>S.</given-names></name> <name><surname>Alemi</surname> <given-names>A.</given-names></name> <name><surname>Bourdoukan</surname> <given-names>R.</given-names></name></person-group> (<year>2017</year>). <article-title>The brain as an efficient and robust adaptive learner</article-title>. <source>Neuron</source> <volume>94</volume>, <fpage>969</fpage>&#x02013;<lpage>977</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2017.05.016</pub-id><pub-id pub-id-type="pmid">28595053</pub-id></citation></ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Den&#x000E8;ve</surname> <given-names>S.</given-names></name> <name><surname>Machens</surname> <given-names>C. K.</given-names></name></person-group> (<year>2016</year>). <article-title>Efficient codes and balanced networks</article-title>. <source>Nat. Neurosci</source>. <volume>19</volume>, <fpage>375</fpage>&#x02013;<lpage>382</lpage>. <pub-id pub-id-type="doi">10.1038/nn.4243</pub-id><pub-id pub-id-type="pmid">26906504</pub-id></citation></ref>
<ref id="B54">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Devlin</surname> <given-names>J.</given-names></name> <name><surname>Chang</surname> <given-names>M.-W.</given-names></name> <name><surname>Lee</surname> <given-names>K.</given-names></name> <name><surname>Toutanova</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;BERT: pre-training of deep bidirectional transformers for language understanding,&#x0201D;</article-title> in <source>Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>, eds J. Burstein, C. Doran, and T. Solorio (<publisher-name>Association for Computational Linguistics</publisher-name>), <fpage>4171</fpage>&#x02013;<lpage>4186</lpage>. <pub-id pub-id-type="doi">10.18653/v1/n19-1423</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dittmer</surname> <given-names>P. J.</given-names></name> <name><surname>Dell&#x00027;Acqua</surname> <given-names>M. L.</given-names></name> <name><surname>Sather</surname> <given-names>W. A.</given-names></name></person-group> (<year>2019</year>). <article-title>Synaptic crosstalk conferred by a zone of differentially regulated Ca<sup>2+</sup> signaling in the dendritic shaft adjoining a potentiated spine</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>116</volume>, <fpage>13611</fpage>&#x02013;<lpage>13620</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1902461116</pub-id><pub-id pub-id-type="pmid">31209051</pub-id></citation></ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dos Santos</surname> <given-names>M.</given-names></name> <name><surname>Salery</surname> <given-names>M.</given-names></name> <name><surname>Forget</surname> <given-names>B.</given-names></name> <name><surname>Garcia Perez</surname> <given-names>M. A.</given-names></name> <name><surname>Betuing</surname> <given-names>S.</given-names></name> <name><surname>Boudier</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Rapid synaptogenesis in the nucleus accumbens is induced by a single cocaine administration and stabilized by mitogen-activated protein kinase interacting kinase-1 activity</article-title>. <source>Biol. Psychiatry</source> <volume>82</volume>, <fpage>806</fpage>&#x02013;<lpage>818</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsych.2017.03.014</pub-id><pub-id pub-id-type="pmid">28545678</pub-id></citation></ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Doya</surname> <given-names>K.</given-names></name> <name><surname>Miyazaki</surname> <given-names>K. W.</given-names></name> <name><surname>Miyazaki</surname> <given-names>K.</given-names></name></person-group> (<year>2021</year>). <article-title>Serotonergic modulation of cognitive computations</article-title>. <source>Curr. Opin. Behav. Sci</source>. <volume>38</volume>, <fpage>116</fpage>&#x02013;<lpage>123</lpage>. <pub-id pub-id-type="doi">10.1016/j.cobeha.2021.02.003</pub-id></citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>D&#x00027;Souza</surname> <given-names>R. D.</given-names></name> <name><surname>Wang</surname> <given-names>Q.</given-names></name> <name><surname>Ji</surname> <given-names>W.</given-names></name> <name><surname>Meier</surname> <given-names>A. M.</given-names></name> <name><surname>Kennedy</surname> <given-names>H.</given-names></name> <name><surname>Knoblauch</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Hierarchical and nonhierarchical features of the mouse visual cortical network</article-title>. <source>Nat. Commun</source>. <volume>13</volume>, <fpage>1</fpage>&#x02013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1038/s41467-022-28035-y</pub-id><pub-id pub-id-type="pmid">35082302</pub-id></citation></ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Duan</surname> <given-names>S.</given-names></name> <name><surname>Yu</surname> <given-names>S.</given-names></name> <name><surname>Pr&#x000ED;ncipe</surname> <given-names>J. C.</given-names></name></person-group> (<year>2022</year>). <article-title>Modularizing deep learning via pairwise learning with kernels</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst</source>. <volume>33</volume>, <fpage>1441</fpage>&#x02013;<lpage>1451</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2020.3042346</pub-id><pub-id pub-id-type="pmid">33400656</pub-id></citation></ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eavani</surname> <given-names>H.</given-names></name> <name><surname>Satterthwaite</surname> <given-names>T. D.</given-names></name> <name><surname>Filipovych</surname> <given-names>R.</given-names></name> <name><surname>Gur</surname> <given-names>R. E.</given-names></name> <name><surname>Gur</surname> <given-names>R. C.</given-names></name> <name><surname>Davatzikos</surname> <given-names>C.</given-names></name></person-group> (<year>2015</year>). <article-title>Identifying sparse connectivity patterns in the brain using resting-state fMRI</article-title>. <source>NeuroImage</source> <volume>105</volume>, <fpage>286</fpage>&#x02013;<lpage>299</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2014.09.058</pub-id><pub-id pub-id-type="pmid">25284301</pub-id></citation></ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eccles</surname> <given-names>J. C.</given-names></name> <name><surname>Jones</surname> <given-names>R. V.</given-names></name> <name><surname>Paton</surname> <given-names>W. D. M.</given-names></name></person-group> (<year>1976</year>). <article-title>From electrical to chemical transmission in the central nervous system: the closing address of the Sir Henry Dale Centennial Symposium, Cambridge, 19 September 1975</article-title>. <source>Notes Rec. R. Soc. Lond</source>. <volume>30</volume>, <fpage>219</fpage>&#x02013;<lpage>230</lpage>. <pub-id pub-id-type="doi">10.1098/rsnr.1976.0015</pub-id><pub-id pub-id-type="pmid">12152632</pub-id></citation></ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ellefsen</surname> <given-names>K. O.</given-names></name> <name><surname>Mouret</surname> <given-names>J.-B.</given-names></name> <name><surname>Clune</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <article-title>Neural modularity helps organisms evolve to learn new skills without forgetting old skills</article-title>. <source>PLoS Comput. Biol</source>. <volume>11</volume>:<fpage>e1004128</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1004128</pub-id><pub-id pub-id-type="pmid">25837826</pub-id></citation></ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Elsken</surname> <given-names>T.</given-names></name> <name><surname>Metzen</surname> <given-names>J. H.</given-names></name> <name><surname>Hutter</surname> <given-names>F.</given-names></name></person-group> (<year>2019</year>). <article-title>Neural architecture search: a survey</article-title>. <source>J. Mach. Learn. Res</source>. <volume>20</volume>, <fpage>1997</fpage>&#x02013;<lpage>2017</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-05318-5_3</pub-id></citation>
</ref>
<ref id="B64">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Fern&#x000E1;ndez</surname> <given-names>J. G.</given-names></name> <name><surname>Hortal</surname> <given-names>E.</given-names></name> <name><surname>Mehrkanoon</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Towards biologically plausible learning in neural networks,&#x0201D;</article-title> in <source>2021 IEEE Symposium Series on Computational Intelligence (SSCI)</source> (<publisher-loc>Orlando, FL</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1109/SSCI50451.2021.9659539</pub-id></citation>
</ref>
<ref id="B65">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fischer</surname> <given-names>A. G.</given-names></name> <name><surname>Ullsperger</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>An update on the role of serotonin and its interplay with dopamine for reward</article-title>. <source>Front. Hum. Neurosci</source>. <volume>11</volume>:<fpage>484</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2017.00484</pub-id><pub-id pub-id-type="pmid">29075184</pub-id></citation></ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Foerde</surname> <given-names>K.</given-names></name> <name><surname>Shohamy</surname> <given-names>D.</given-names></name></person-group> (<year>2011</year>). <article-title>Feedback timing modulates brain systems for learning in humans</article-title>. <source>J. Neurosci</source>. <volume>31</volume>, <fpage>13157</fpage>&#x02013;<lpage>13167</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.2701-11.2011</pub-id><pub-id pub-id-type="pmid">21917799</pub-id></citation></ref>
<ref id="B67">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Foster</surname> <given-names>S. A.</given-names></name> <name><surname>Baker</surname> <given-names>J. A.</given-names></name></person-group> (<year>2004</year>). <article-title>Evolution in parallel: new insights from a classic system</article-title>. <source>Trends Ecol. Evol</source>. <volume>19</volume>, <fpage>456</fpage>&#x02013;<lpage>459</lpage>. <pub-id pub-id-type="doi">10.1016/j.tree.2004.07.004</pub-id><pub-id pub-id-type="pmid">16701305</pub-id></citation></ref>
<ref id="B68">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Francioni</surname> <given-names>V.</given-names></name> <name><surname>Harnett</surname> <given-names>M. T.</given-names></name></person-group> (<year>2022</year>). <article-title>Rethinking single neuron electrical compartmentalization: dendritic contributions to network computation <italic>in vivo</italic></article-title>. <source>Neuroscience</source> <volume>489</volume>, <fpage>185</fpage>&#x02013;<lpage>199</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroscience.2021.05.038</pub-id><pub-id pub-id-type="pmid">34116137</pub-id></citation></ref>
<ref id="B69">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friedrich</surname> <given-names>J.</given-names></name> <name><surname>Urbanczik</surname> <given-names>R.</given-names></name> <name><surname>Senn</surname> <given-names>W.</given-names></name></person-group> (<year>2011</year>). <article-title>Spatio-temporal credit assignment in neuronal population learning</article-title>. <source>PLoS Comput. Biol</source>. <volume>7</volume>:<fpage>e1002092</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1002092</pub-id><pub-id pub-id-type="pmid">21738460</pub-id></citation></ref>
<ref id="B70">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fukushima</surname> <given-names>K.</given-names></name></person-group> (<year>1975</year>). <article-title>Cognitron: a self-organizing multilayered neural network</article-title>. <source>Biol. Cybern</source>. <volume>20</volume>, <fpage>121</fpage>&#x02013;<lpage>136</lpage>. <pub-id pub-id-type="doi">10.1007/BF00342633</pub-id><pub-id pub-id-type="pmid">1203338</pub-id></citation></ref>
<ref id="B71">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fusi</surname> <given-names>S.</given-names></name> <name><surname>Drew</surname> <given-names>P. J.</given-names></name> <name><surname>Abbott</surname> <given-names>L.</given-names></name></person-group> (<year>2005</year>). <article-title>Cascade models of synaptically stored memories</article-title>. <source>Neuron</source> <volume>45</volume>, <fpage>599</fpage>&#x02013;<lpage>611</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2005.02.001</pub-id><pub-id pub-id-type="pmid">15721245</pub-id></citation></ref>
<ref id="B72">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Galv&#x000E1;n</surname> <given-names>E.</given-names></name> <name><surname>Mooney</surname> <given-names>P.</given-names></name></person-group> (<year>2021</year>). <article-title>Neuroevolution in deep neural networks: current trends and future challenges</article-title>. <source>IEEE Trans. Artif. Intell</source>. <volume>2</volume>, <fpage>476</fpage>&#x02013;<lpage>493</lpage>. <pub-id pub-id-type="doi">10.1109/TAI.2021.3067574</pub-id></citation>
</ref>
<ref id="B73">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Garcia</surname> <given-names>I.</given-names></name> <name><surname>Quast</surname> <given-names>K.</given-names></name> <name><surname>Huang</surname> <given-names>L.</given-names></name> <name><surname>Herman</surname> <given-names>A.</given-names></name> <name><surname>Selever</surname> <given-names>J.</given-names></name> <name><surname>Deussing</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Local CRH signaling promotes synaptogenesis and circuit integration of adult-born neurons</article-title>. <source>Dev. Cell</source> <volume>30</volume>, <fpage>645</fpage>&#x02013;<lpage>659</lpage>. <pub-id pub-id-type="doi">10.1016/j.devcel.2014.07.001</pub-id><pub-id pub-id-type="pmid">25199688</pub-id></citation></ref>
<ref id="B74">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gerstner</surname> <given-names>W.</given-names></name> <name><surname>Kreiter</surname> <given-names>A. K.</given-names></name> <name><surname>Markram</surname> <given-names>H.</given-names></name> <name><surname>Herz</surname> <given-names>A. V. M.</given-names></name></person-group> (<year>1997</year>). <article-title>Neural codes: firing rates and beyond</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>94</volume>, <fpage>12740</fpage>&#x02013;<lpage>12741</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.94.24.12740</pub-id><pub-id pub-id-type="pmid">9398065</pub-id></citation></ref>
<ref id="B75">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gil</surname> <given-names>Z.</given-names></name> <name><surname>Connors</surname> <given-names>B. W.</given-names></name> <name><surname>Amitai</surname> <given-names>Y.</given-names></name></person-group> (<year>1997</year>). <article-title>Differential regulation of neocortical synapses by neuromodulators and activity</article-title>. <source>Neuron</source> <volume>19</volume>, <fpage>679</fpage>&#x02013;<lpage>686</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(00)80380-3</pub-id><pub-id pub-id-type="pmid">9331357</pub-id></citation></ref>
<ref id="B76">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Goodfellow</surname> <given-names>I.</given-names></name> <name><surname>Bengio</surname> <given-names>Y.</given-names></name> <name><surname>Courville</surname> <given-names>A.</given-names></name></person-group> (<year>2016</year>). <source>Deep Learning</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</citation>
</ref>
<ref id="B77">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goodhill</surname> <given-names>G. J.</given-names></name></person-group> (<year>2018</year>). <article-title>Theoretical models of neural development</article-title>. <source>iScience</source> <volume>8</volume>, <fpage>183</fpage>&#x02013;<lpage>199</lpage>. <pub-id pub-id-type="doi">10.1016/j.isci.2018.09.017</pub-id><pub-id pub-id-type="pmid">30321813</pub-id></citation></ref>
<ref id="B78">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gottwald</surname> <given-names>S.</given-names></name> <name><surname>Braun</surname> <given-names>D. A.</given-names></name></person-group> (<year>2020</year>). <article-title>The two kinds of free energy and the Bayesian revolution</article-title>. <source>PLoS Comput. Biol</source>. <volume>16</volume>:<fpage>e1008420</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1008420</pub-id><pub-id pub-id-type="pmid">33270644</pub-id></citation></ref>
<ref id="B79">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goulas</surname> <given-names>A.</given-names></name> <name><surname>Betzel</surname> <given-names>R. F.</given-names></name> <name><surname>Hilgetag</surname> <given-names>C. C.</given-names></name></person-group> (<year>2019</year>). <article-title>Spatiotemporal ontogeny of brain wiring</article-title>. <source>Sci. Adv</source>. <volume>5</volume>:<fpage>eaav9694</fpage>. <pub-id pub-id-type="doi">10.1126/sciadv.aav9694</pub-id><pub-id pub-id-type="pmid">31206020</pub-id></citation></ref>
<ref id="B80">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Graham</surname> <given-names>B. P.</given-names></name> <name><surname>Cutsuridis</surname> <given-names>V.</given-names></name> <name><surname>Hunter</surname> <given-names>R.</given-names></name></person-group> (<year>2010</year>). <source>Associative Memory Models of Hippocampal Areas CA1 and CA3</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer New York</publisher-name>, <fpage>459</fpage>&#x02013;<lpage>494</lpage>. <pub-id pub-id-type="doi">10.1007/978-1-4419-0996-1_16</pub-id></citation>
</ref>
<ref id="B81">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Greve</surname> <given-names>P. F.</given-names></name></person-group> (<year>2015</year>). <article-title>The role of prediction in mental processing: a process approach</article-title>. <source>New Ideas Psychol</source>. <volume>39</volume>, <fpage>45</fpage>&#x02013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1016/j.newideapsych.2015.07.007</pub-id></citation>
</ref>
<ref id="B82">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guerguiev</surname> <given-names>J.</given-names></name> <name><surname>Lillicrap</surname> <given-names>T. P.</given-names></name> <name><surname>Richards</surname> <given-names>B. A.</given-names></name></person-group> (<year>2017</year>). <article-title>Towards deep learning with segregated dendrites</article-title>. <source>eLife</source> <volume>6</volume>:<fpage>e22901</fpage>. <pub-id pub-id-type="doi">10.7554/eLife.22901</pub-id><pub-id pub-id-type="pmid">29205151</pub-id></citation></ref>
<ref id="B83">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gunning</surname> <given-names>D.</given-names></name> <name><surname>Stefik</surname> <given-names>M.</given-names></name> <name><surname>Choi</surname> <given-names>J.</given-names></name> <name><surname>Miller</surname> <given-names>T.</given-names></name> <name><surname>Stumpf</surname> <given-names>S.</given-names></name> <name><surname>Yang</surname> <given-names>G.-Z.</given-names></name></person-group> (<year>2019</year>). <article-title>XAI-explainable artificial intelligence</article-title>. <source>Sci. Robot</source>. <volume>4</volume>:<fpage>eaay7120</fpage>. <pub-id pub-id-type="doi">10.1126/scirobotics.aay7120</pub-id><pub-id pub-id-type="pmid">33137719</pub-id></citation></ref>
<ref id="B84">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guo</surname> <given-names>W.</given-names></name> <name><surname>Fouda</surname> <given-names>M. E.</given-names></name> <name><surname>Eltawil</surname> <given-names>A. M.</given-names></name> <name><surname>Salama</surname> <given-names>K. N.</given-names></name></person-group> (<year>2021</year>). <article-title>Neural coding in spiking neural networks: a comparative study for robust neuromorphic systems</article-title>. <source>Front. Neurosci</source>. <volume>15</volume>:<fpage>638474</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2021.638474</pub-id><pub-id pub-id-type="pmid">33746705</pub-id></citation></ref>
<ref id="B85">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Haber</surname> <given-names>A.</given-names></name> <name><surname>Schneidman</surname> <given-names>E.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;The computational and learning benefits of daleian neural networks,&#x0201D;</article-title> in <source>Advances in Neural Information Processing Systems 35: NeurIPS 2022, New Orleans, Louisiana, USA</source>, eds S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (<publisher-loc>Red Hook, NY</publisher-loc>: <publisher-name>Curran Associates Inc.</publisher-name>), <fpage>5194</fpage>&#x02013;<lpage>5206</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://proceedings.neurips.cc/paper_files/paper/2022/file/21cb5931c39d7bd21b34b3b8f14a125c-Paper-Conference.pdf">https://proceedings.neurips.cc/paper_files/paper/2022/file/21cb5931c39d7bd21b34b3b8f14a125c-Paper-Conference.pdf</ext-link></citation>
</ref>
<ref id="B86">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hadsell</surname> <given-names>R.</given-names></name> <name><surname>Rao</surname> <given-names>D.</given-names></name> <name><surname>Rusu</surname> <given-names>A. A.</given-names></name> <name><surname>Pascanu</surname> <given-names>R.</given-names></name></person-group> (<year>2020</year>). <article-title>Embracing change: continual learning in deep neural networks</article-title>. <source>Trends Cogn. Sci</source>. <volume>24</volume>, <fpage>1028</fpage>&#x02013;<lpage>1040</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2020.09.004</pub-id><pub-id pub-id-type="pmid">33158755</pub-id></citation></ref>
<ref id="B87">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Han</surname> <given-names>S.</given-names></name> <name><surname>Pool</surname> <given-names>J.</given-names></name> <name><surname>Tran</surname> <given-names>J.</given-names></name> <name><surname>Dally</surname> <given-names>W.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;Learning both weights and connections for efficient neural network,&#x0201D;</article-title> in <source>Proceedings of the 28th International Conference on Neural Information Processing Systems</source> (<publisher-loc>Montreal, QC</publisher-loc>), <fpage>1135</fpage>&#x02013;<lpage>1143</lpage>.</citation>
</ref>
<ref id="B88">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hanse</surname> <given-names>E.</given-names></name> <name><surname>Seth</surname> <given-names>H.</given-names></name> <name><surname>Riebe</surname> <given-names>I.</given-names></name></person-group> (<year>2013</year>). <article-title>AMPA-silent synapses in brain development and pathology</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>14</volume>, <fpage>839</fpage>&#x02013;<lpage>850</lpage>. <pub-id pub-id-type="doi">10.1038/nrn3642</pub-id><pub-id pub-id-type="pmid">24201185</pub-id></citation></ref>
<ref id="B89">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Harris</surname> <given-names>K. M.</given-names></name> <name><surname>Spacek</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;Dendrite structure,&#x0201D;</article-title> in <source>Dendrites</source>, eds G. Stuart, N. Spruston, and M. H&#x000E4;usser (<publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>). <pub-id pub-id-type="doi">10.1093/acprof:oso/9780198745273.003.0001</pub-id><pub-id pub-id-type="pmid">36389024</pub-id></citation></ref>
<ref id="B90">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Harvey</surname> <given-names>M. A.</given-names></name> <name><surname>Saal</surname> <given-names>H. P.</given-names></name> <name><surname>Dammann</surname> <given-names>J. F.</given-names> <suffix>III</suffix></name> <name><surname>Bensmaia</surname> <given-names>S. J.</given-names></name></person-group> (<year>2013</year>). <article-title>Multiplexing stimulus information through rate and temporal codes in primate somatosensory cortex</article-title>. <source>PLoS Biol</source>. <volume>11</volume>:<fpage>e1001558</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pbio.1001558</pub-id><pub-id pub-id-type="pmid">23667327</pub-id></citation></ref>
<ref id="B91">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hasson</surname> <given-names>U.</given-names></name> <name><surname>Nastase</surname> <given-names>S. A.</given-names></name> <name><surname>Goldstein</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Direct fit to nature: an evolutionary perspective on biological and artificial neural networks</article-title>. <source>Neuron</source> <volume>105</volume>, <fpage>416</fpage>&#x02013;<lpage>434</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2019.12.002</pub-id><pub-id pub-id-type="pmid">32027833</pub-id></citation></ref>
<ref id="B92">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Helfer</surname> <given-names>P.</given-names></name> <name><surname>Shultz</surname> <given-names>T. R.</given-names></name></person-group> (<year>2018</year>). <article-title>Coupled feedback loops maintain synaptic long-term potentiation: a computational model of PKMZETA synthesis and AMPA receptor trafficking</article-title>. <source>PLoS Comput. Biol</source>. <volume>14</volume>:<fpage>e1006147</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1006147</pub-id><pub-id pub-id-type="pmid">29813048</pub-id></citation></ref>
<ref id="B93">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hennequin</surname> <given-names>G.</given-names></name> <name><surname>Agnes</surname> <given-names>E. J.</given-names></name> <name><surname>Vogels</surname> <given-names>T. P.</given-names></name></person-group> (<year>2017</year>). <article-title>Inhibitory plasticity: balance, control, and codependence</article-title>. <source>Annu. Rev. Neurosci</source>. <volume>40</volume>, <fpage>557</fpage>&#x02013;<lpage>579</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-neuro-072116-031005</pub-id><pub-id pub-id-type="pmid">28598717</pub-id></citation></ref>
<ref id="B94">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hermundstad</surname> <given-names>A. M.</given-names></name> <name><surname>Brown</surname> <given-names>K. S.</given-names></name> <name><surname>Bassett</surname> <given-names>D. S.</given-names></name> <name><surname>Carlson</surname> <given-names>J. M.</given-names></name></person-group> (<year>2011</year>). <article-title>Learning, memory, and the role of neural network architecture</article-title>. <source>PLoS Comput. Biol</source>. <volume>7</volume>:<fpage>e1002063</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1002063</pub-id><pub-id pub-id-type="pmid">21738455</pub-id></citation></ref>
<ref id="B95">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hilgetag</surname> <given-names>C. C.</given-names></name> <name><surname>Goulas</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Hierarchy&#x0201D; in the organization of brain networks</article-title>. <source>Philos. Trans. R. Soc. B</source> <volume>375</volume>:<fpage>20190319</fpage>. <pub-id pub-id-type="doi">10.1098/rstb.2019.0319</pub-id><pub-id pub-id-type="pmid">32089116</pub-id></citation></ref>
<ref id="B96">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hill</surname> <given-names>A. V.</given-names></name></person-group> (<year>1936</year>). <article-title>Excitation and accommodation in nerve</article-title>. <source>Proc. R. Soc. Lond. Ser. B Biol. Sci</source>. <volume>119</volume>, <fpage>305</fpage>&#x02013;<lpage>355</lpage>. <pub-id pub-id-type="doi">10.1098/rspb.1936.0012</pub-id></citation>
</ref>
<ref id="B97">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hiratani</surname> <given-names>N.</given-names></name> <name><surname>Latham</surname> <given-names>P. E.</given-names></name></person-group> (<year>2022</year>). <article-title>Developmental and evolutionary constraints on olfactory circuit selection</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>119</volume>:<fpage>e2100600119</fpage>. <pub-id pub-id-type="doi">10.1073/pnas.2100600119</pub-id><pub-id pub-id-type="pmid">35263217</pub-id></citation></ref>
<ref id="B98">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hodgkin</surname> <given-names>A. L.</given-names></name> <name><surname>Huxley</surname> <given-names>A. F.</given-names></name></person-group> (<year>1952</year>). <article-title>A quantitative description of membrane current and its application to conduction and excitation in nerve</article-title>. <source>J. Physiol</source>. <volume>117</volume>, <fpage>500</fpage>&#x02013;<lpage>544</lpage>. <pub-id pub-id-type="doi">10.1113/jphysiol.1952.sp004764</pub-id><pub-id pub-id-type="pmid">2185861</pub-id></citation></ref>
<ref id="B99">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hoefler</surname> <given-names>T.</given-names></name> <name><surname>Alistarh</surname> <given-names>D.</given-names></name> <name><surname>Ben-Nun</surname> <given-names>T.</given-names></name> <name><surname>Dryden</surname> <given-names>N.</given-names></name> <name><surname>Peste</surname> <given-names>A.</given-names></name></person-group> (<year>2022</year>). <article-title>Sparsity in deep learning: pruning and growth for efficient inference and training in neural networks</article-title>. <source>J. Mach. Learn. Res</source>. <volume>22</volume>, <fpage>1</fpage>&#x02013;<lpage>124</lpage>.</citation>
</ref>
<ref id="B100">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hopfield</surname> <given-names>J. J.</given-names></name></person-group> (<year>1982</year>). <article-title>Neural networks and physical systems with emergent collective computational abilities</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>79</volume>, <fpage>2554</fpage>&#x02013;<lpage>2558</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.79.8.2554</pub-id><pub-id pub-id-type="pmid">6953413</pub-id></citation></ref>
<ref id="B101">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hwang</surname> <given-names>K.-D.</given-names></name> <name><surname>Baek</surname> <given-names>J.</given-names></name> <name><surname>Ryu</surname> <given-names>H.-H.</given-names></name> <name><surname>Lee</surname> <given-names>J.</given-names></name> <name><surname>Shim</surname> <given-names>H. G.</given-names></name> <name><surname>Kim</surname> <given-names>S. Y.</given-names></name> <etal/></person-group>. (<year>2023</year>). <article-title>Cerebellar nuclei neurons projecting to the lateral parabrachial nucleus modulate classical fear conditioning</article-title>. <source>Cell Rep</source>. <volume>42</volume>:<fpage>112291</fpage>. <pub-id pub-id-type="doi">10.1016/j.celrep.2023.112291</pub-id><pub-id pub-id-type="pmid">36952344</pub-id></citation></ref>
<ref id="B102">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ingrosso</surname> <given-names>A.</given-names></name> <name><surname>Abbott</surname> <given-names>L.</given-names></name></person-group> (<year>2019</year>). <article-title>Training dynamically balanced excitatory-inhibitory networks</article-title>. <source>PLoS ONE</source> <volume>14</volume>:<fpage>e0220547</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0220547</pub-id><pub-id pub-id-type="pmid">31393909</pub-id></citation></ref>
<ref id="B103">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ishizuka</surname> <given-names>N.</given-names></name> <name><surname>Weber</surname> <given-names>J.</given-names></name> <name><surname>Amaral</surname> <given-names>D. G.</given-names></name></person-group> (<year>1990</year>). <article-title>Organization of intrahippocampal projections originating from CA3 pyramidal cells in the rat</article-title>. <source>J. Comp. Neurol</source>. <volume>295</volume>, <fpage>580</fpage>&#x02013;<lpage>623</lpage>. <pub-id pub-id-type="doi">10.1002/cne.902950407</pub-id><pub-id pub-id-type="pmid">2358523</pub-id></citation></ref>
<ref id="B104">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Isomura</surname> <given-names>T.</given-names></name> <name><surname>Friston</surname> <given-names>K.</given-names></name></person-group> (<year>2018</year>). <article-title><italic>In vitro</italic> neural networks minimise variational free energy</article-title>. <source>Sci. Rep</source>. <volume>8</volume>:<fpage>16926</fpage>. <pub-id pub-id-type="doi">10.1038/s41598-018-35221-w</pub-id><pub-id pub-id-type="pmid">30446766</pub-id></citation></ref>
<ref id="B105">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Isomura</surname> <given-names>T.</given-names></name> <name><surname>Shimazaki</surname> <given-names>H.</given-names></name> <name><surname>Friston</surname> <given-names>K. J.</given-names></name></person-group> (<year>2022</year>). <article-title>Canonical neural networks perform active inference</article-title>. <source>Commun. Biol</source>. <volume>5</volume>:<fpage>55</fpage>. <pub-id pub-id-type="doi">10.1038/s42003-021-02994-2</pub-id><pub-id pub-id-type="pmid">35031656</pub-id></citation></ref>
<ref id="B106">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Iwadate</surname> <given-names>K.</given-names></name> <name><surname>Suzuki</surname> <given-names>I.</given-names></name> <name><surname>Watanabe</surname> <given-names>M.</given-names></name> <name><surname>Yamamoto</surname> <given-names>M.</given-names></name> <name><surname>Furukawa</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <article-title>&#x0201C;An artificial neural network based on the architecture of the cerebellum for behavior learning,&#x0201D;</article-title> in <source>Soft Computing in Artificial Intelligence</source> (<publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>143</fpage>&#x02013;<lpage>151</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-05515-2_13</pub-id></citation>
</ref>
<ref id="B107">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Iyer</surname> <given-names>A.</given-names></name> <name><surname>Grewal</surname> <given-names>K.</given-names></name> <name><surname>Velu</surname> <given-names>A.</given-names></name> <name><surname>Souza</surname> <given-names>L. O.</given-names></name> <name><surname>Forest</surname> <given-names>J.</given-names></name> <name><surname>Ahmad</surname> <given-names>S.</given-names></name></person-group> (<year>2022</year>). <article-title>Avoiding catastrophe: active dendrites enable multi-task learning in dynamic environments</article-title>. <source>Front. Neurorobot</source>. <volume>16</volume>:<fpage>846219</fpage>. <pub-id pub-id-type="doi">10.3389/fnbot.2022.846219</pub-id><pub-id pub-id-type="pmid">35574225</pub-id></citation></ref>
<ref id="B108">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Izhikevich</surname> <given-names>E.</given-names></name></person-group> (<year>2003</year>). <article-title>Simple model of spiking neurons</article-title>. <source>IEEE Trans. Neural Netw</source>. <volume>14</volume>, <fpage>1569</fpage>&#x02013;<lpage>1572</lpage>. <pub-id pub-id-type="doi">10.1109/TNN.2003.820440</pub-id><pub-id pub-id-type="pmid">18244602</pub-id></citation></ref>
<ref id="B109">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jedlicka</surname> <given-names>P.</given-names></name> <name><surname>Tomko</surname> <given-names>M.</given-names></name> <name><surname>Robins</surname> <given-names>A.</given-names></name> <name><surname>Abraham</surname> <given-names>W. C.</given-names></name></person-group> (<year>2022</year>). <article-title>Contributions by metaplasticity to solving the catastrophic forgetting problem</article-title>. <source>Trends Neurosci</source>. <volume>45</volume>, <fpage>656</fpage>&#x02013;<lpage>666</lpage>. <pub-id pub-id-type="doi">10.1016/j.tins.2022.06.002</pub-id><pub-id pub-id-type="pmid">35798611</pub-id></citation></ref>
<ref id="B110">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Johansen</surname> <given-names>J. P.</given-names></name> <name><surname>Diaz-Mataix</surname> <given-names>L.</given-names></name> <name><surname>Hamanaka</surname> <given-names>H.</given-names></name> <name><surname>Ozawa</surname> <given-names>T.</given-names></name> <name><surname>Ycu</surname> <given-names>E.</given-names></name> <name><surname>Koivumaa</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Hebbian and neuromodulatory mechanisms interact to trigger associative memory formation</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>111</volume>, <fpage>E5584</fpage>&#x02013;<lpage>E5592</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1421304111</pub-id><pub-id pub-id-type="pmid">25489081</pub-id></citation></ref>
<ref id="B111">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Johnston</surname> <given-names>D.</given-names></name> <name><surname>Narayanan</surname> <given-names>R.</given-names></name></person-group> (<year>2008</year>). <article-title>Active dendrites: colorful wings of the mysterious butterflies</article-title>. <source>Trends Neurosci</source>. <volume>31</volume>, <fpage>309</fpage>&#x02013;<lpage>316</lpage>. <pub-id pub-id-type="doi">10.1016/j.tins.2008.03.004</pub-id><pub-id pub-id-type="pmid">18471907</pub-id></citation></ref>
<ref id="B112">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jonas</surname> <given-names>E.</given-names></name> <name><surname>Kording</surname> <given-names>K. P.</given-names></name></person-group> (<year>2017</year>). <article-title>Could a neuroscientist understand a microprocessor?</article-title> <source>PLoS Comput. Biol</source>. <volume>13</volume>:<fpage>e1005268</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1005268</pub-id><pub-id pub-id-type="pmid">28081141</pub-id></citation></ref>
<ref id="B113">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jun</surname> <given-names>N. Y.</given-names></name> <name><surname>Ruff</surname> <given-names>D. A.</given-names></name> <name><surname>Kramer</surname> <given-names>L. E.</given-names></name> <name><surname>Bowes</surname> <given-names>B.</given-names></name> <name><surname>Tokdar</surname> <given-names>S. T.</given-names></name> <name><surname>Cohen</surname> <given-names>M. R.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Coordinated multiplexing of information about separate objects in visual cortex</article-title>. <source>eLife</source> <volume>11</volume>:<fpage>e76452</fpage>. <pub-id pub-id-type="doi">10.7554/eLife.76452</pub-id><pub-id pub-id-type="pmid">36444983</pub-id></citation></ref>
<ref id="B114">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kabilan</surname> <given-names>R.</given-names></name> <name><surname>Muthukumaran</surname> <given-names>N.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;A neuromorphic model for image recognition using SNN,&#x0201D;</article-title> in <source>2021 6th International Conference on Inventive Computation Technologies (ICICT)</source> (<publisher-loc>Coimbatore</publisher-loc>), <fpage>720</fpage>&#x02013;<lpage>725</lpage>. <pub-id pub-id-type="doi">10.1109/ICICT50816.2021.9358663</pub-id></citation>
</ref>
<ref id="B115">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kang</surname> <given-names>S.</given-names></name> <name><surname>Jun</surname> <given-names>S.</given-names></name> <name><surname>Baek</surname> <given-names>S.</given-names></name> <name><surname>Park</surname> <given-names>H.</given-names></name> <name><surname>Yamamoto</surname> <given-names>Y.</given-names></name> <name><surname>Tanaka-Yamamoto</surname> <given-names>K.</given-names></name></person-group> (<year>2021</year>). <article-title>Recent advances in the understanding of specific efferent pathways emerging from the cerebellum</article-title>. <source>Front. Neuroanat</source>. <volume>15</volume>:<fpage>759948</fpage>. <pub-id pub-id-type="doi">10.3389/fnana.2021.759948</pub-id><pub-id pub-id-type="pmid">34975418</pub-id></citation></ref>
<ref id="B116">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kawato</surname> <given-names>M.</given-names></name> <name><surname>Kuroda</surname> <given-names>S.</given-names></name> <name><surname>Schweighofer</surname> <given-names>N.</given-names></name></person-group> (<year>2011</year>). <article-title>Cerebellar supervised learning revisited: biophysical modeling and degrees-of-freedom control</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>21</volume>, <fpage>791</fpage>&#x02013;<lpage>800</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2011.05.014</pub-id><pub-id pub-id-type="pmid">21665461</pub-id></citation></ref>
<ref id="B117">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kepecs</surname> <given-names>A.</given-names></name> <name><surname>Fishell</surname> <given-names>G.</given-names></name></person-group> (<year>2014</year>). <article-title>Interneuron cell types are fit to function</article-title>. <source>Nature</source> <volume>505</volume>, <fpage>318</fpage>&#x02013;<lpage>326</lpage>. <pub-id pub-id-type="doi">10.1038/nature12983</pub-id><pub-id pub-id-type="pmid">24429630</pub-id></citation></ref>
<ref id="B118">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kerchner</surname> <given-names>G. A.</given-names></name> <name><surname>Nicoll</surname> <given-names>R. A.</given-names></name></person-group> (<year>2008</year>). <article-title>Silent synapses and the emergence of a postsynaptic mechanism for LTP</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>9</volume>, <fpage>813</fpage>&#x02013;<lpage>825</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2501</pub-id><pub-id pub-id-type="pmid">18854855</pub-id></citation></ref>
<ref id="B119">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khajeh</surname> <given-names>R.</given-names></name> <name><surname>Fumarola</surname> <given-names>F.</given-names></name> <name><surname>Abbott</surname> <given-names>L.</given-names></name></person-group> (<year>2022</year>). <article-title>Sparse balance: excitatory-inhibitory networks with small bias currents and broadly distributed synaptic weights</article-title>. <source>PLoS Comput. Biol</source>. <volume>18</volume>:<fpage>e1008836</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1008836</pub-id><pub-id pub-id-type="pmid">35139071</pub-id></citation></ref>
<ref id="B120">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kirsch</surname> <given-names>L.</given-names></name> <name><surname>Kunze</surname> <given-names>J.</given-names></name> <name><surname>Barber</surname> <given-names>D.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Modular networks: learning to decompose neural computation,&#x0201D;</article-title> in <source>Advances in Neural Information Processing Systems 31: NeurIPS 2018, Montr&#x000E9;al, QC</source>, eds S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (<publisher-loc>Red Hook, NY</publisher-loc>: <publisher-name>Curran Associates Inc.</publisher-name>), <fpage>2414</fpage>&#x02013;<lpage>2423</lpage>.</citation></ref>
<ref id="B121">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kornijcuk</surname> <given-names>V.</given-names></name> <name><surname>Park</surname> <given-names>J.</given-names></name> <name><surname>Kim</surname> <given-names>G.</given-names></name> <name><surname>Kim</surname> <given-names>D.</given-names></name> <name><surname>Kim</surname> <given-names>I.</given-names></name> <name><surname>Kim</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Reconfigurable spike routing architectures for on-chip local learning in neuromorphic systems</article-title>. <source>Adv. Mater. Technol</source>. <volume>4</volume>:<fpage>1800345</fpage>. <pub-id pub-id-type="doi">10.1002/admt.201800345</pub-id></citation>
</ref>
<ref id="B122">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kov&#x000E1;cs</surname> <given-names>K. A.</given-names></name></person-group> (<year>2020</year>). <article-title>Episodic memories: how do the hippocampus and the entorhinal ring attractors cooperate to create them?</article-title> <source>Front. Syst. Neurosci</source>. <volume>14</volume>:<fpage>559168</fpage>. <pub-id pub-id-type="doi">10.3389/fnsys.2020.559186</pub-id><pub-id pub-id-type="pmid">33013334</pub-id></citation></ref>
<ref id="B123">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kozachkov</surname> <given-names>L.</given-names></name> <name><surname>Tauber</surname> <given-names>J.</given-names></name> <name><surname>Lundqvist</surname> <given-names>M.</given-names></name> <name><surname>Brincat</surname> <given-names>S. L.</given-names></name> <name><surname>Slotine</surname> <given-names>J.-J.</given-names></name> <name><surname>Miller</surname> <given-names>E. K.</given-names></name></person-group> (<year>2022</year>). <article-title>Robust and brain-like working memory through short-term synaptic plasticity</article-title>. <source>PLoS Comput. Biol</source>. <volume>18</volume>:<fpage>e1010776</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1010776</pub-id><pub-id pub-id-type="pmid">36574424</pub-id></citation></ref>
<ref id="B124">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Krogh</surname> <given-names>A.</given-names></name> <name><surname>Vedelsby</surname> <given-names>J.</given-names></name></person-group> (<year>1994</year>). <article-title>&#x0201C;Neural network ensembles, cross validation, and active learning,&#x0201D;</article-title> in <source>Advances in Neural Information Processing Systems, Vol. 7</source>, eds G. Tesauro, D. Touretzky, and T. Leen (<publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>).</citation></ref>
<ref id="B125">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuhn</surname> <given-names>H. G.</given-names></name> <name><surname>Palmer</surname> <given-names>T. D.</given-names></name> <name><surname>Fuchs</surname> <given-names>E.</given-names></name></person-group> (<year>2001</year>). <article-title>Adult neurogenesis: a compensatory mechanism for neuronal damage</article-title>. <source>Eur. Arch. Psychiatry Clin. Neurosci</source>. <volume>251</volume>, <fpage>152</fpage>&#x02013;<lpage>158</lpage>. <pub-id pub-id-type="doi">10.1007/s004060170035</pub-id><pub-id pub-id-type="pmid">11697579</pub-id></citation></ref>
<ref id="B126">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kuo</surname> <given-names>I.-C.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name></person-group> (<year>1994</year>). <article-title>&#x0201C;Capacity of associative memory,&#x0201D;</article-title> in <source>Proceedings of 1994 IEEE International Symposium on Information Theory</source> (<publisher-loc>Trondheim</publisher-loc>), <fpage>222</fpage>. <pub-id pub-id-type="doi">10.1109/ISIT.1994.394746</pub-id></citation>
</ref>
<ref id="B127">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kwisthout</surname> <given-names>J.</given-names></name> <name><surname>Donselaar</surname> <given-names>N.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;On the computational power and complexity of spiking neural networks,&#x0201D;</article-title> in <source>Proceedings of the Neuro-Inspired Computational Elements Workshop, NICE &#x00027;20</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>). <pub-id pub-id-type="doi">10.1145/3381755.3381760</pub-id></citation>
</ref>
<ref id="B128">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Laborieux</surname> <given-names>A.</given-names></name> <name><surname>Ernoult</surname> <given-names>M.</given-names></name> <name><surname>Hirtzlin</surname> <given-names>T.</given-names></name> <name><surname>Querlioz</surname> <given-names>D.</given-names></name></person-group> (<year>2021</year>). <article-title>Synaptic metaplasticity in binarized neural networks</article-title>. <source>Nat. Commun</source>. <volume>12</volume>:<fpage>2549</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-021-22768-y</pub-id><pub-id pub-id-type="pmid">33953183</pub-id></citation></ref>
<ref id="B129">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lankarany</surname> <given-names>M.</given-names></name> <name><surname>Al-Basha</surname> <given-names>D.</given-names></name> <name><surname>Ratt</surname> <given-names>S.</given-names></name> <name><surname>Prescott</surname> <given-names>S. A.</given-names></name></person-group> (<year>2019</year>). <article-title>Differentially synchronized spiking enables multiplexed neural coding</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>116</volume>, <fpage>10097</fpage>&#x02013;<lpage>10102</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1812171116</pub-id><pub-id pub-id-type="pmid">31028148</pub-id></citation></ref>
<ref id="B130">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Le</surname> <given-names>T.-L.</given-names></name> <name><surname>Huynh</surname> <given-names>T.-T.</given-names></name> <name><surname>Hong</surname> <given-names>S.-K.</given-names></name> <name><surname>Lin</surname> <given-names>C.-M.</given-names></name></person-group> (<year>2020</year>). <article-title>Hybrid neural network cerebellar model articulation controller design for non-linear dynamic time-varying plants</article-title>. <source>Front. Neurosci</source>. <volume>14</volume>:<fpage>695</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2020.00695</pub-id><pub-id pub-id-type="pmid">32848536</pub-id></citation></ref>
<ref id="B131">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>LeCun</surname> <given-names>Y.</given-names></name> <name><surname>Cortes</surname> <given-names>C.</given-names></name> <name><surname>Burges</surname> <given-names>C.</given-names></name></person-group> (<year>2010</year>). <source>MNIST Handwritten Digit Database. ATT Labs [Online]</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://yann.lecun.com/exdb/mnist">http://yann.lecun.com/exdb/mnist</ext-link></citation>
</ref>
<ref id="B132">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>A.</given-names></name> <name><surname>Lam</surname> <given-names>B.</given-names></name> <name><surname>Li</surname> <given-names>W.</given-names></name> <name><surname>Lee</surname> <given-names>H.</given-names></name> <name><surname>Chen</surname> <given-names>W.</given-names></name> <name><surname>Chang</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Conditional activation for diverse neurons in heterogeneous networks</article-title>. <source>arXiv [preprint] arXiv:1803.05006</source>.</citation>
</ref>
<ref id="B133">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Chen</surname> <given-names>P.</given-names></name> <name><surname>Zhou</surname> <given-names>K.</given-names></name> <name><surname>Yu</surname> <given-names>J.</given-names></name> <name><surname>Wu</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2023</year>). <article-title>Short-term synaptic plasticity in emerging devices for neuromorphic computing</article-title>. <source>iScience</source> <volume>26</volume>:<fpage>106315</fpage>. <pub-id pub-id-type="doi">10.1016/j.isci.2023.106315</pub-id><pub-id pub-id-type="pmid">36950108</pub-id></citation></ref>
<ref id="B134">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>T.</given-names></name> <name><surname>Arleo</surname> <given-names>A.</given-names></name> <name><surname>Sheynikhovich</surname> <given-names>D.</given-names></name></person-group> (<year>2020</year>). <article-title>Modeling place cells and grid cells in multi-compartment environments: entorhinal-hippocampal loop as a multisensory integration circuit</article-title>. <source>Neural Netw</source>. <volume>121</volume>, <fpage>37</fpage>&#x02013;<lpage>51</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2019.09.002</pub-id><pub-id pub-id-type="pmid">31526953</pub-id></citation></ref>
<ref id="B135">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Liang</surname> <given-names>J.</given-names></name> <name><surname>Meyerson</surname> <given-names>E.</given-names></name> <name><surname>Miikkulainen</surname> <given-names>R.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Evolutionary architecture search for deep multitask networks,&#x0201D;</article-title> in <source>Proceedings of the Genetic and Evolutionary Computation Conference, GECCO &#x00027;18</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>466</fpage>&#x02013;<lpage>473</lpage>. <pub-id pub-id-type="doi">10.1145/3205455.3205489</pub-id></citation>
</ref>
<ref id="B136">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lillicrap</surname> <given-names>T. P.</given-names></name> <name><surname>Santoro</surname> <given-names>A.</given-names></name> <name><surname>Marris</surname> <given-names>L.</given-names></name> <name><surname>Akerman</surname> <given-names>C. J.</given-names></name> <name><surname>Hinton</surname> <given-names>G.</given-names></name></person-group> (<year>2020</year>). <article-title>Backpropagation and the brain</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>21</volume>, <fpage>335</fpage>&#x02013;<lpage>346</lpage>. <pub-id pub-id-type="doi">10.1038/s41583-020-0277-3</pub-id><pub-id pub-id-type="pmid">32303713</pub-id></citation></ref>
<ref id="B137">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lin</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>G.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Zhang</surname> <given-names>W.</given-names></name> <name><surname>Chen</surname> <given-names>B.</given-names></name> <name><surname>Tang</surname> <given-names>R.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>&#x0201C;ModularNAS: towards modularized and reusable neural architecture search,&#x0201D;</article-title> in <source>Proceedings of Machine Learning and Systems, Vol. 3</source>, eds A. Smola, A. Dimakis, and I. Stoica (Virtual), <fpage>413</fpage>&#x02013;<lpage>433</lpage>.</citation>
</ref>
<ref id="B138">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>T.</given-names></name></person-group> (<year>2020</year>). <article-title>BHN: a brain-like heterogeneous network</article-title>. <source>arXiv [preprint] arXiv:2005.12826</source>.</citation>
</ref>
<ref id="B139">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Sun</surname> <given-names>Y.</given-names></name> <name><surname>Xue</surname> <given-names>B.</given-names></name> <name><surname>Zhang</surname> <given-names>M.</given-names></name> <name><surname>Yen</surname> <given-names>G. G.</given-names></name> <name><surname>Tan</surname> <given-names>K. C.</given-names></name></person-group> (<year>2021</year>). <article-title>A survey on evolutionary neural architecture search</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst</source>. <volume>34</volume>, <fpage>550</fpage>&#x02013;<lpage>570</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2021.3100554</pub-id><pub-id pub-id-type="pmid">34357870</pub-id></citation></ref>
<ref id="B140">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Yao</surname> <given-names>X.</given-names></name></person-group> (<year>2008</year>). <article-title>Nature inspired neural network ensemble learning</article-title>. <source>J. Intell. Syst</source>. <volume>17</volume>(Suppl.), <fpage>5</fpage>&#x02013;<lpage>26</lpage>. <pub-id pub-id-type="doi">10.1515/JISYS.2008.17.S1.5</pub-id></citation>
</ref>
<ref id="B141">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>Y. H.</given-names></name> <name><surname>Smith</surname> <given-names>S.</given-names></name> <name><surname>Mihalas</surname> <given-names>S.</given-names></name> <name><surname>Shea-Brown</surname> <given-names>E.</given-names></name> <name><surname>S&#x000FC;mb&#x000FC;l</surname> <given-names>U.</given-names></name></person-group> (<year>2021</year>). <article-title>Cell-type-specific neuromodulation guides synaptic credit assignment in a spiking neural network</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>118</volume>:<fpage>e2111821118</fpage>. <pub-id pub-id-type="doi">10.1073/pnas.2111821118</pub-id><pub-id pub-id-type="pmid">34916291</pub-id></citation></ref>
<ref id="B142">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Llorca</surname> <given-names>A.</given-names></name> <name><surname>Ciceri</surname> <given-names>G.</given-names></name> <name><surname>Beattie</surname> <given-names>R.</given-names></name> <name><surname>Wong</surname> <given-names>F. K.</given-names></name> <name><surname>Diana</surname> <given-names>G.</given-names></name> <name><surname>Serafeimidou-Pouliou</surname> <given-names>E.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>A stochastic framework of neurogenesis underlies the assembly of neocortical cytoarchitecture</article-title>. <source>eLife</source> <volume>8</volume>:<fpage>e51381</fpage>. <pub-id pub-id-type="doi">10.7554/eLife.51381</pub-id><pub-id pub-id-type="pmid">31736464</pub-id></citation></ref>
<ref id="B143">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>London</surname> <given-names>M.</given-names></name> <name><surname>H&#x000E4;usser</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <article-title>Dendritic computation</article-title>. <source>Annu. Rev. Neurosci</source>. <volume>28</volume>, <fpage>503</fpage>&#x02013;<lpage>532</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.28.061604.135703</pub-id><pub-id pub-id-type="pmid">16033324</pub-id></citation></ref>
<ref id="B144">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luczak</surname> <given-names>A.</given-names></name> <name><surname>McNaughton</surname> <given-names>B. L.</given-names></name> <name><surname>Kubo</surname> <given-names>Y.</given-names></name></person-group> (<year>2022</year>). <article-title>Neurons learn by predicting future activity</article-title>. <source>Nat. Mach. Intell</source>. <volume>4</volume>, <fpage>62</fpage>&#x02013;<lpage>72</lpage>. <pub-id pub-id-type="doi">10.1038/s42256-021-00430-y</pub-id><pub-id pub-id-type="pmid">35814496</pub-id></citation></ref>
<ref id="B145">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>&#x00141;ukasz Ku&#x0015B;mierz Isomura</surname> <given-names>T.</given-names></name> <name><surname>Toyoizumi</surname> <given-names>T.</given-names></name></person-group> (<year>2017</year>). <article-title>Learning with three factors: modulating Hebbian plasticity with errors</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>46</volume>, <fpage>170</fpage>&#x02013;<lpage>177</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2017.08.020</pub-id><pub-id pub-id-type="pmid">28918313</pub-id></citation></ref>
<ref id="B146">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>L.</given-names></name></person-group> (<year>2021</year>). <article-title>Architectures of neuronal circuits</article-title>. <source>Science</source> <volume>373</volume>:<fpage>eabg7285</fpage>. <pub-id pub-id-type="doi">10.1126/science.abg7285</pub-id><pub-id pub-id-type="pmid">34516844</pub-id></citation></ref>
<ref id="B147">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>W.</given-names></name></person-group> (<year>2020</year>). <article-title>Improving neural network with uniform sparse connectivity</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>215705</fpage>&#x02013;<lpage>215715</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.3040943</pub-id></citation>
</ref>
<ref id="B148">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maass</surname> <given-names>W.</given-names></name></person-group> (<year>1996</year>). <article-title>Lower bounds for the computational power of networks of spiking neurons</article-title>. <source>Neural Comput</source>. <volume>8</volume>, <fpage>1</fpage>&#x02013;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1162/neco.1996.8.1.1</pub-id></citation>
</ref>
<ref id="B149">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Maile</surname> <given-names>K.</given-names></name> <name><surname>Herv&#x000E9;</surname> <given-names>L.</given-names></name> <name><surname>Wilson</surname> <given-names>D. G.</given-names></name></person-group> (<year>2022</year>). <article-title>Structural learning in artificial neural networks: a neural operator perspective</article-title>. <source>Trans. Mach. Learn. Res</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://openreview.net/forum?id=gzhEGhcsnN">https://openreview.net/forum?id=gzhEGhcsnN</ext-link></citation>
</ref>
<ref id="B150">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Markram</surname> <given-names>H.</given-names></name> <name><surname>Muller</surname> <given-names>E.</given-names></name> <name><surname>Ramaswamy</surname> <given-names>S.</given-names></name> <name><surname>Reimann</surname> <given-names>M.</given-names></name> <name><surname>Abdellah</surname> <given-names>M.</given-names></name> <name><surname>Sanchez</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Reconstruction and simulation of neocortical microcircuitry</article-title>. <source>Cell</source> <volume>163</volume>, <fpage>456</fpage>&#x02013;<lpage>492</lpage>. <pub-id pub-id-type="doi">10.1016/j.cell.2015.09.029</pub-id><pub-id pub-id-type="pmid">26451489</pub-id></citation></ref>
<ref id="B151">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marr</surname> <given-names>D.</given-names></name></person-group> (<year>1969</year>). <article-title>A theory of cerebellar cortex</article-title>. <source>J. Physiol</source>. <volume>202</volume>, <fpage>437</fpage>&#x02013;<lpage>470</lpage>. <pub-id pub-id-type="doi">10.1113/jphysiol.1969.sp008820</pub-id><pub-id pub-id-type="pmid">5784296</pub-id></citation></ref>
<ref id="B152">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Masse</surname> <given-names>N. Y.</given-names></name> <name><surname>Yang</surname> <given-names>G. R.</given-names></name> <name><surname>Song</surname> <given-names>H. F.</given-names></name> <name><surname>Wang</surname> <given-names>X.-J.</given-names></name> <name><surname>Freedman</surname> <given-names>D. J.</given-names></name></person-group> (<year>2019</year>). <article-title>Circuit mechanisms for the maintenance and manipulation of information in working memory</article-title>. <source>Nat. Neurosci</source>. <volume>22</volume>, <fpage>1159</fpage>&#x02013;<lpage>1167</lpage>. <pub-id pub-id-type="doi">10.1038/s41593-019-0414-3</pub-id><pub-id pub-id-type="pmid">31182866</pub-id></citation></ref>
<ref id="B153">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mattson</surname> <given-names>M. P.</given-names></name> <name><surname>Magnus</surname> <given-names>T.</given-names></name></person-group> (<year>2006</year>). <article-title>Ageing and neuronal vulnerability</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>7</volume>, <fpage>278</fpage>&#x02013;<lpage>294</lpage>. <pub-id pub-id-type="doi">10.1038/nrn1886</pub-id><pub-id pub-id-type="pmid">16552414</pub-id></citation></ref>
<ref id="B154">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McCulloch</surname> <given-names>W. S.</given-names></name> <name><surname>Pitts</surname> <given-names>W.</given-names></name></person-group> (<year>1943</year>). <article-title>A logical calculus of the ideas immanent in nervous activity</article-title>. <source>Bull. Math. Biophys</source>. <volume>5</volume>, <fpage>115</fpage>&#x02013;<lpage>133</lpage>. <pub-id pub-id-type="doi">10.1007/BF02478259</pub-id><pub-id pub-id-type="pmid">2185863</pub-id></citation></ref>
<ref id="B155">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>McEliece</surname> <given-names>R. J.</given-names></name> <name><surname>Posner</surname> <given-names>E. C.</given-names></name> <name><surname>Rodemich</surname> <given-names>E. R.</given-names></name> <name><surname>Venkatesh</surname> <given-names>S. S.</given-names></name></person-group> (<year>1988</year>). <source>The Capacity of the Hopfield Associative Memory</source>. <publisher-loc>Washington, DC</publisher-loc>: <publisher-name>IEEE Computer Society Press</publisher-name>, <fpage>100</fpage>&#x02013;<lpage>121</lpage>.<pub-id pub-id-type="pmid">18249815</pub-id></citation></ref>
<ref id="B156">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mei</surname> <given-names>J.</given-names></name> <name><surname>Muller</surname> <given-names>E.</given-names></name> <name><surname>Ramaswamy</surname> <given-names>S.</given-names></name></person-group> (<year>2022</year>). <article-title>Informing deep neural networks by multiscale principles of neuromodulatory systems</article-title>. <source>Trends Neurosci</source>. <volume>45</volume>, <fpage>237</fpage>&#x02013;<lpage>250</lpage>. <pub-id pub-id-type="doi">10.1016/j.tins.2021.12.008</pub-id><pub-id pub-id-type="pmid">35074219</pub-id></citation></ref>
<ref id="B157">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Merlo</surname> <given-names>S.</given-names></name> <name><surname>Spampinato</surname> <given-names>S. F.</given-names></name> <name><surname>Sortino</surname> <given-names>M. A.</given-names></name></person-group> (<year>2019</year>). <article-title>Early compensatory responses against neuronal injury: a new therapeutic window of opportunity for Alzheimer&#x00027;s disease?</article-title> <source>CNS Neurosci. Therap</source>. <volume>25</volume>, <fpage>5</fpage>&#x02013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1111/cns.13050</pub-id><pub-id pub-id-type="pmid">30101571</pub-id></citation></ref>
<ref id="B158">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meunier</surname> <given-names>D.</given-names></name> <name><surname>Lambiotte</surname> <given-names>R.</given-names></name> <name><surname>Bullmore</surname> <given-names>E. T.</given-names></name></person-group> (<year>2010</year>). <article-title>Modular and hierarchically modular organization of brain networks</article-title>. <source>Front. Neurosci</source>. <volume>4</volume>:<fpage>200</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2010.00200</pub-id><pub-id pub-id-type="pmid">21151783</pub-id></citation></ref>
<ref id="B159">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Michaels</surname> <given-names>J. A.</given-names></name> <name><surname>Schaffelhofer</surname> <given-names>S.</given-names></name> <name><surname>Agudelo-Toro</surname> <given-names>A.</given-names></name> <name><surname>Scherberger</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <article-title>A goal-driven modular neural network predicts parietofrontal neural dynamics during grasping</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>117</volume>, <fpage>32124</fpage>&#x02013;<lpage>32135</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.2005087117</pub-id><pub-id pub-id-type="pmid">33257539</pub-id></citation></ref>
<ref id="B160">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miller</surname> <given-names>K. D.</given-names></name></person-group> (<year>1998</year>). <article-title>Equivalence of a sprouting-and-retraction model and correlation-based plasticity models of neural development</article-title>. <source>Neural Comput</source>. <volume>10</volume>, <fpage>529</fpage>&#x02013;<lpage>547</lpage>. <pub-id pub-id-type="doi">10.1162/089976698300017647</pub-id><pub-id pub-id-type="pmid">9527832</pub-id></citation></ref>
<ref id="B161">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Millidge</surname> <given-names>B.</given-names></name> <name><surname>Seth</surname> <given-names>A.</given-names></name> <name><surname>Buckley</surname> <given-names>C. L.</given-names></name></person-group> (<year>2022</year>). <article-title>Predictive coding: a theoretical and experimental review</article-title>. <source>arXiv [preprint] arXiv:2107.12979</source>.</citation>
</ref>
<ref id="B162">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Miscouridou</surname> <given-names>X.</given-names></name> <name><surname>Caron</surname> <given-names>F.</given-names></name> <name><surname>Teh</surname> <given-names>Y. W.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Modelling sparsity, heterogeneity, reciprocity and community structure in temporal interaction data,&#x0201D;</article-title> in <source>Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS&#x00027;18</source> (<publisher-loc>Red Hook, NY</publisher-loc>: <publisher-name>Curran Associates Inc.</publisher-name>), <fpage>2349</fpage>&#x02013;<lpage>2358</lpage>.</citation>
</ref>
<ref id="B163">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miyata</surname> <given-names>R.</given-names></name> <name><surname>Ota</surname> <given-names>K.</given-names></name> <name><surname>Aonishi</surname> <given-names>T.</given-names></name></person-group> (<year>2013</year>). <article-title>Optimal design for hetero-associative memory: hippocampal CA1 phase response curve and spike-timing-dependent plasticity</article-title>. <source>PLoS ONE</source> <volume>8</volume>:<fpage>e77395</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0077395</pub-id><pub-id pub-id-type="pmid">24204822</pub-id></citation></ref>
<ref id="B164">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mocanu</surname> <given-names>D. C.</given-names></name> <name><surname>Mocanu</surname> <given-names>E.</given-names></name> <name><surname>Stone</surname> <given-names>P.</given-names></name> <name><surname>Nguyen</surname> <given-names>P. H.</given-names></name> <name><surname>Gibescu</surname> <given-names>M.</given-names></name> <name><surname>Liotta</surname> <given-names>A.</given-names></name></person-group> (<year>2018</year>). <article-title>Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science</article-title>. <source>Nat. Commun</source>. <volume>9</volume>, <fpage>1</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1038/s41467-018-04316-3</pub-id><pub-id pub-id-type="pmid">29921910</pub-id></citation></ref>
<ref id="B165">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Moreno-Jim&#x000E9;nez</surname> <given-names>E. P.</given-names></name> <name><surname>Terreros-Roncal</surname> <given-names>J.</given-names></name> <name><surname>Flor-Garc&#x000ED;a</surname> <given-names>M.</given-names></name> <name><surname>R&#x000E1;bano</surname> <given-names>A.</given-names></name> <name><surname>Llorens-Mart&#x000ED;n</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>Evidences for adult hippocampal neurogenesis in humans</article-title>. <source>J. Neurosci</source>. <volume>41</volume>, <fpage>2541</fpage>&#x02013;<lpage>2553</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.0675-20.2020</pub-id><pub-id pub-id-type="pmid">33762406</pub-id></citation></ref>
<ref id="B166">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mukherjee</surname> <given-names>S.</given-names></name> <name><surname>Hill</surname> <given-names>S. M.</given-names></name></person-group> (<year>2011</year>). <article-title>Network clustering: probing biological heterogeneity by sparse graphical models</article-title>. <source>Bioinformatics</source> <volume>27</volume>, <fpage>994</fpage>&#x02013;<lpage>1000</lpage>. <pub-id pub-id-type="doi">10.1093/bioinformatics/btr070</pub-id><pub-id pub-id-type="pmid">21317141</pub-id></citation></ref>
<ref id="B167">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Murman</surname> <given-names>D. L.</given-names></name></person-group> (<year>2015</year>). <article-title>The impact of age on cognition</article-title>. <source>Semin. Hear</source>. <volume>36</volume>, <fpage>111</fpage>&#x02013;<lpage>121</lpage>. <pub-id pub-id-type="doi">10.1055/s-0035-1555115</pub-id><pub-id pub-id-type="pmid">27516712</pub-id></citation></ref>
<ref id="B168">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nadim</surname> <given-names>F.</given-names></name> <name><surname>Bucher</surname> <given-names>D.</given-names></name></person-group> (<year>2014</year>). <article-title>Neuromodulation of neurons and synapses</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>29</volume>, <fpage>48</fpage>&#x02013;<lpage>56</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2014.05.003</pub-id><pub-id pub-id-type="pmid">24907657</pub-id></citation></ref>
<ref id="B169">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Nair</surname> <given-names>V.</given-names></name> <name><surname>Hinton</surname> <given-names>G. E.</given-names></name></person-group> (<year>2010</year>). <article-title>&#x0201C;Rectified linear units improve restricted Boltzmann machines,&#x0201D;</article-title> in <source>Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML&#x00027;10</source> (<publisher-loc>Madison, WI</publisher-loc>: <publisher-name>Omnipress</publisher-name>), <fpage>807</fpage>&#x02013;<lpage>814</lpage>.</citation>
</ref>
<ref id="B170">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Naud&#x000E9;</surname> <given-names>J.</given-names></name> <name><surname>Cessac</surname> <given-names>B.</given-names></name> <name><surname>Berry</surname> <given-names>H.</given-names></name> <name><surname>Delord</surname> <given-names>B.</given-names></name></person-group> (<year>2013</year>). <article-title>Effects of cellular homeostatic intrinsic plasticity on dynamical and computational properties of biological recurrent neural networks</article-title>. <source>J. Neurosci</source>. <volume>33</volume>, <fpage>15032</fpage>&#x02013;<lpage>15043</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.0870-13.2013</pub-id><pub-id pub-id-type="pmid">24048833</pub-id></citation></ref>
<ref id="B171">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Navlakha</surname> <given-names>S.</given-names></name> <name><surname>Bar-Joseph</surname> <given-names>Z.</given-names></name> <name><surname>Barth</surname> <given-names>A. L.</given-names></name></person-group> (<year>2018</year>). <article-title>Network design and the brain</article-title>. <source>Trends Cogn. Sci</source>. <volume>22</volume>, <fpage>64</fpage>&#x02013;<lpage>78</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2017.09.012</pub-id><pub-id pub-id-type="pmid">29054336</pub-id></citation></ref>
<ref id="B172">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nijhawan</surname> <given-names>R.</given-names></name></person-group> (<year>2008</year>). <article-title>Visual prediction: psychophysics and neurophysiology of compensation for time delays</article-title>. <source>Behav. Brain Sci</source>. <volume>31</volume>, <fpage>179</fpage>&#x02013;<lpage>198</lpage>. <pub-id pub-id-type="doi">10.1017/S0140525X08003804</pub-id><pub-id pub-id-type="pmid">18479557</pub-id></citation></ref>
<ref id="B173">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Noudoost</surname> <given-names>B.</given-names></name> <name><surname>Moore</surname> <given-names>T.</given-names></name></person-group> (<year>2011</year>). <article-title>The role of neuromodulators in selective attention</article-title>. <source>Trends Cogn. Sci</source>. <volume>15</volume>, <fpage>585</fpage>&#x02013;<lpage>591</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2011.10.006</pub-id><pub-id pub-id-type="pmid">22074811</pub-id></citation></ref>
<ref id="B174">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nussberger</surname> <given-names>A.-M.</given-names></name> <name><surname>Luo</surname> <given-names>L.</given-names></name> <name><surname>Celis</surname> <given-names>L. E.</given-names></name> <name><surname>Crockett</surname> <given-names>M. J.</given-names></name></person-group> (<year>2022</year>). <article-title>Public attitudes value interpretability but prioritize accuracy in artificial intelligence</article-title>. <source>Nat. Commun</source>. <volume>13</volume>:<fpage>5821</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-022-33417-3</pub-id><pub-id pub-id-type="pmid">36192416</pub-id></citation></ref>
<ref id="B175">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Osborne</surname> <given-names>N. N.</given-names></name></person-group> (<year>1979</year>). <article-title>Is Dale&#x00027;s principle valid?</article-title> <source>Trends Neurosci</source>. <volume>2</volume>, <fpage>73</fpage>&#x02013;<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1016/0166-2236(79)90031-6</pub-id></citation>
</ref>
<ref id="B176">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pagkalos</surname> <given-names>M.</given-names></name> <name><surname>Chavlis</surname> <given-names>S.</given-names></name> <name><surname>Poirazi</surname> <given-names>P.</given-names></name></person-group> (<year>2023</year>). <article-title>Introducing the dendrify framework for incorporating dendrites to spiking neural networks</article-title>. <source>Nat. Commun</source>. <volume>14</volume>:<fpage>131</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-022-35747-8</pub-id><pub-id pub-id-type="pmid">36627284</pub-id></citation></ref>
<ref id="B177">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Palmer</surname> <given-names>S. E.</given-names></name> <name><surname>Marre</surname> <given-names>O.</given-names></name> <name><surname>Berry</surname> <given-names>M. J.</given-names></name> <name><surname>Bialek</surname> <given-names>W.</given-names></name></person-group> (<year>2015</year>). <article-title>Predictive information in a sensory population</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>112</volume>, <fpage>6908</fpage>&#x02013;<lpage>6913</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1506855112</pub-id><pub-id pub-id-type="pmid">26038544</pub-id></citation></ref>
<ref id="B178">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Pan</surname> <given-names>R.</given-names></name> <name><surname>Rajan</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;On decomposing a deep neural network into modules,&#x0201D;</article-title> in <source>Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2020</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>889</fpage>&#x02013;<lpage>900</lpage>. <pub-id pub-id-type="doi">10.1145/3368089.3409668</pub-id></citation>
</ref>
<ref id="B179">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Pan</surname> <given-names>Z.</given-names></name> <name><surname>Wu</surname> <given-names>J.</given-names></name> <name><surname>Chua</surname> <given-names>Y.</given-names></name> <name><surname>Zhang</surname> <given-names>M.</given-names></name> <name><surname>Li</surname> <given-names>H.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Neural population coding for effective temporal classification,&#x0201D;</article-title> in <source>International Joint Conference on Neural Networks</source> (<publisher-loc>Budapest</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1109/IJCNN.2019.8851858</pub-id></citation>
</ref>
<ref id="B180">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Panzeri</surname> <given-names>S.</given-names></name> <name><surname>Macke</surname> <given-names>J. H.</given-names></name> <name><surname>Gross</surname> <given-names>J.</given-names></name> <name><surname>Kayser</surname> <given-names>C.</given-names></name></person-group> (<year>2015</year>). <article-title>Neural population coding: combining insights from microscopic and mass signals</article-title>. <source>Trends Cogn. Sci</source>. <volume>19</volume>, <fpage>162</fpage>&#x02013;<lpage>172</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2015.01.002</pub-id><pub-id pub-id-type="pmid">25670005</pub-id></citation></ref>
<ref id="B181">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Park</surname> <given-names>J.</given-names></name> <name><surname>Papoutsi</surname> <given-names>A.</given-names></name> <name><surname>Ash</surname> <given-names>R. T.</given-names></name> <name><surname>Marin</surname> <given-names>M. A.</given-names></name> <name><surname>Poirazi</surname> <given-names>P.</given-names></name> <name><surname>Smirnakis</surname> <given-names>S. M.</given-names></name></person-group> (<year>2019</year>). <article-title>Contribution of apical and basal dendrites to orientation encoding in mouse V1 L2/3 pyramidal neurons</article-title>. <source>Nat. Commun</source>. <volume>10</volume>:<fpage>5372</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-019-13029-0</pub-id><pub-id pub-id-type="pmid">31772192</pub-id></citation></ref>
<ref id="B182">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Park</surname> <given-names>S.</given-names></name> <name><surname>Kim</surname> <given-names>S.</given-names></name> <name><surname>Na</surname> <given-names>B.</given-names></name> <name><surname>Yoon</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;T2FSNN: deep spiking neural networks with time-to-first-spike coding,&#x0201D;</article-title> in <source>2020 57th ACM/IEEE Design Automation Conference (DAC)</source> (<publisher-loc>Virtual</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1109/DAC18072.2020.9218689</pub-id></citation>
</ref>
<ref id="B183">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Parker</surname> <given-names>L.</given-names></name> <name><surname>Chance</surname> <given-names>F.</given-names></name> <name><surname>Cardwell</surname> <given-names>S.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Benchmarking a bio-inspired SNN on a neuromorphic system,&#x0201D;</article-title> in <source>Neuro-Inspired Computational Elements Conference, NICE 2022</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>63</fpage>&#x02013;<lpage>66</lpage>. <pub-id pub-id-type="doi">10.1145/3517343.3517365</pub-id><pub-id pub-id-type="pmid">36440280</pub-id></citation></ref>
<ref id="B184">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pezzulo</surname> <given-names>G.</given-names></name> <name><surname>Parr</surname> <given-names>T.</given-names></name> <name><surname>Friston</surname> <given-names>K.</given-names></name></person-group> (<year>2022</year>). <article-title>The evolution of brain architectures for predictive coding and active inference</article-title>. <source>Philos. Trans. R. Soc. B Biol. Sci</source>. <volume>377</volume>:<fpage>20200531</fpage>. <pub-id pub-id-type="doi">10.1098/rstb.2020.0531</pub-id><pub-id pub-id-type="pmid">34957844</pub-id></citation></ref>
<ref id="B185">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfeiffer</surname> <given-names>M.</given-names></name> <name><surname>Pfeil</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>Deep learning with spiking neurons: opportunities and challenges</article-title>. <source>Front. Neurosci</source>. <volume>12</volume>:<fpage>774</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2018.00774</pub-id><pub-id pub-id-type="pmid">30410432</pub-id></citation></ref>
<ref id="B186">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pitkow</surname> <given-names>X.</given-names></name> <name><surname>Angelaki</surname> <given-names>D. E.</given-names></name></person-group> (<year>2017</year>). <article-title>Inference in the brain: statistics flowing in redundant population codes</article-title>. <source>Neuron</source> <volume>94</volume>, <fpage>943</fpage>&#x02013;<lpage>953</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2017.05.028</pub-id><pub-id pub-id-type="pmid">28595050</pub-id></citation></ref>
<ref id="B187">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Poirazi</surname> <given-names>P.</given-names></name> <name><surname>Mel</surname> <given-names>B. W.</given-names></name></person-group> (<year>2001</year>). <article-title>Impact of active dendrites and structural plasticity on the memory capacity of neural tissue</article-title>. <source>Neuron</source> <volume>29</volume>, <fpage>779</fpage>&#x02013;<lpage>796</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(01)00252-5</pub-id><pub-id pub-id-type="pmid">11301036</pub-id></citation></ref>
<ref id="B188">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bi</surname> <given-names>G.-Q.</given-names></name> <name><surname>Poo</surname> <given-names>M.-M.</given-names></name></person-group> (<year>1998</year>). <article-title>Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type</article-title>. <source>J. Neurosci</source>. <volume>18</volume>, <fpage>10464</fpage>&#x02013;<lpage>10472</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.18-24-10464.1998</pub-id><pub-id pub-id-type="pmid">9852584</pub-id></citation></ref>
<ref id="B189">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ramesh</surname> <given-names>A.</given-names></name> <name><surname>Dhariwal</surname> <given-names>P.</given-names></name> <name><surname>Nichol</surname> <given-names>A.</given-names></name> <name><surname>Chu</surname> <given-names>C.</given-names></name> <name><surname>Chen</surname> <given-names>M.</given-names></name></person-group> (<year>2022</year>). <article-title>Hierarchical text-conditional image generation with CLIP latents</article-title>. <source>arXiv [preprint] arXiv:2204.06125</source>.</citation>
</ref>
<ref id="B190">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rao</surname> <given-names>R. P. N.</given-names></name> <name><surname>Ballard</surname> <given-names>D. H.</given-names></name></person-group> (<year>1999</year>). <article-title>Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects</article-title>. <source>Nat. Neurosci</source>. <volume>2</volume>, <fpage>79</fpage>&#x02013;<lpage>87</lpage>. <pub-id pub-id-type="doi">10.1038/4580</pub-id><pub-id pub-id-type="pmid">10195184</pub-id></citation></ref>
<ref id="B191">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Raymond</surname> <given-names>J. L.</given-names></name> <name><surname>Medina</surname> <given-names>J. F.</given-names></name></person-group> (<year>2018</year>). <article-title>Computational principles of supervised learning in the cerebellum</article-title>. <source>Annu. Rev. Neurosci</source>. <volume>41</volume>, <fpage>233</fpage>&#x02013;<lpage>253</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-neuro-080317-061948</pub-id><pub-id pub-id-type="pmid">29986160</pub-id></citation></ref>
<ref id="B192">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Razetti</surname> <given-names>A.</given-names></name> <name><surname>Medioni</surname> <given-names>C.</given-names></name> <name><surname>Malandain</surname> <given-names>G.</given-names></name> <name><surname>Besse</surname> <given-names>F.</given-names></name> <name><surname>Descombes</surname> <given-names>X.</given-names></name></person-group> (<year>2018</year>). <article-title>A stochastic framework to model axon interactions within growing neuronal populations</article-title>. <source>PLoS Comput. Biol</source>. <volume>14</volume>:<fpage>e1006627</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1006627</pub-id><pub-id pub-id-type="pmid">30507939</pub-id></citation></ref>
<ref id="B193">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Risi</surname> <given-names>S.</given-names></name> <name><surname>Stanley</surname> <given-names>K. O.</given-names></name></person-group> (<year>2014</year>). <article-title>&#x0201C;Guided self-organization in indirectly encoded and evolving topographic maps,&#x0201D;</article-title> in <source>Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, GECCO &#x00027;14</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>713</fpage>&#x02013;<lpage>720</lpage>. <pub-id pub-id-type="doi">10.1145/2576768.2598369</pub-id></citation>
</ref>
<ref id="B194">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Robertazzi</surname> <given-names>F.</given-names></name> <name><surname>Vissani</surname> <given-names>M.</given-names></name> <name><surname>Schillaci</surname> <given-names>G.</given-names></name> <name><surname>Falotico</surname> <given-names>E.</given-names></name></person-group> (<year>2022</year>). <article-title>Brain-inspired meta-reinforcement learning cognitive control in conflictual inhibition decision-making task for artificial agents</article-title>. <source>Neural Netw</source>. <volume>154</volume>, <fpage>283</fpage>&#x02013;<lpage>302</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2022.06.020</pub-id><pub-id pub-id-type="pmid">35917665</pub-id></citation></ref>
<ref id="B195">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Rodriguez</surname> <given-names>H. G.</given-names></name> <name><surname>Guo</surname> <given-names>Q.</given-names></name> <name><surname>Moraitis</surname> <given-names>T.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Short-term plasticity neurons learning to learn and forget,&#x0201D;</article-title> in <source>International Conference on Machine Learning, Vol. 162</source>, eds K. Chaudhuri, S. Jegelka, L. Song, C. Szepesv&#x000E1;ri, G. Niu, and S. Sabato (<publisher-name>MLR Press</publisher-name>), <fpage>18704</fpage>&#x02013;<lpage>18722</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://proceedings.mlr.press/v162/rodriguez22b.html">https://proceedings.mlr.press/v162/rodriguez22b.html</ext-link></citation>
</ref>
<ref id="B196">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rogers</surname> <given-names>R. D.</given-names></name></person-group> (<year>2011</year>). <article-title>The roles of dopamine and serotonin in decision making: evidence from pharmacological experiments in humans</article-title>. <source>Neuropsychopharmacology</source> <volume>36</volume>, <fpage>114</fpage>&#x02013;<lpage>132</lpage>. <pub-id pub-id-type="doi">10.1038/npp.2010.165</pub-id><pub-id pub-id-type="pmid">20881944</pub-id></citation></ref>
<ref id="B197">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rombach</surname> <given-names>R.</given-names></name> <name><surname>Blattmann</surname> <given-names>A.</given-names></name> <name><surname>Lorenz</surname> <given-names>D.</given-names></name> <name><surname>Esser</surname> <given-names>P.</given-names></name> <name><surname>Ommer</surname> <given-names>B.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;High-resolution image synthesis with latent diffusion models,&#x0201D;</article-title> in <source>Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>New Orleans, LA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>10674</fpage>&#x02013;<lpage>10685</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR52688.2022.01042</pub-id></citation>
</ref>
<ref id="B198">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rothschild</surname> <given-names>G.</given-names></name> <name><surname>Eban</surname> <given-names>E.</given-names></name> <name><surname>Frank</surname> <given-names>L. M.</given-names></name></person-group> (<year>2017</year>). <article-title>A cortical-hippocampal-cortical loop of information processing during memory consolidation</article-title>. <source>Nat. Neurosci</source>. <volume>20</volume>, <fpage>251</fpage>&#x02013;<lpage>259</lpage>. <pub-id pub-id-type="doi">10.1038/nn.4457</pub-id><pub-id pub-id-type="pmid">27941790</pub-id></citation></ref>
<ref id="B199">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rubinov</surname> <given-names>M.</given-names></name> <name><surname>Ypma</surname> <given-names>R. J. F.</given-names></name> <name><surname>Watson</surname> <given-names>C.</given-names></name> <name><surname>Bullmore</surname> <given-names>E. T.</given-names></name></person-group> (<year>2015</year>). <article-title>Wiring cost and topological participation of the mouse brain connectome</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>112</volume>, <fpage>10032</fpage>&#x02013;<lpage>10037</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1420315112</pub-id><pub-id pub-id-type="pmid">26216962</pub-id></citation></ref>
<ref id="B200">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rumelhart</surname> <given-names>D. E.</given-names></name> <name><surname>Hinton</surname> <given-names>G. E.</given-names></name> <name><surname>Williams</surname> <given-names>R. J.</given-names></name></person-group> (<year>1986</year>). <article-title>Learning representations by back-propagating errors</article-title>. <source>Nature</source> <volume>323</volume>, <fpage>533</fpage>&#x02013;<lpage>536</lpage>. <pub-id pub-id-type="doi">10.1038/323533a0</pub-id></citation></ref>
<ref id="B201">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sacramento</surname> <given-names>J.</given-names></name> <name><surname>Ponte Costa</surname> <given-names>R.</given-names></name> <name><surname>Bengio</surname> <given-names>Y.</given-names></name> <name><surname>Senn</surname> <given-names>W.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Dendritic cortical microcircuits approximate the backpropagation algorithm,&#x0201D;</article-title> in <source>Advances in Neural Information Processing Systems 31: NeurIPS 2018, Montr&#x000E9;al, Canada</source>, eds S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (<publisher-loc>Red Hook, NY</publisher-loc>: <publisher-name>Curran Associates Inc.</publisher-name>), <volume>31</volume>, <fpage>8735</fpage>&#x02013;<lpage>8746</lpage>.</citation>
</ref>
<ref id="B202">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sadeh</surname> <given-names>S.</given-names></name> <name><surname>Clopath</surname> <given-names>C.</given-names></name></person-group> (<year>2021</year>). <article-title>Excitatory-inhibitory balance modulates the formation and dynamics of neuronal assemblies in cortical networks</article-title>. <source>Sci. Adv</source>. <volume>7</volume>:<fpage>eabg8411</fpage>. <pub-id pub-id-type="doi">10.1126/sciadv.abg8411</pub-id><pub-id pub-id-type="pmid">34731002</pub-id></citation></ref>
<ref id="B203">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schuman</surname> <given-names>C. D.</given-names></name> <name><surname>Kulkarni</surname> <given-names>S. R.</given-names></name> <name><surname>Parsa</surname> <given-names>M.</given-names></name> <name><surname>Mitchell</surname> <given-names>J. P.</given-names></name> <name><surname>Date</surname> <given-names>P.</given-names></name> <name><surname>Kay</surname> <given-names>B.</given-names></name></person-group> (<year>2022</year>). <article-title>Opportunities for neuromorphic computing algorithms and applications</article-title>. <source>Nat. Comput. Sci</source>. <volume>2</volume>, <fpage>10</fpage>&#x02013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1038/s43588-021-00184-y</pub-id></citation>
</ref>
<ref id="B204">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Scott</surname> <given-names>A. C.</given-names></name></person-group> (<year>1975</year>). <article-title>The electrophysics of a nerve fiber</article-title>. <source>Rev. Mod. Phys</source>. <volume>47</volume>, <fpage>487</fpage>&#x02013;<lpage>533</lpage>. <pub-id pub-id-type="doi">10.1103/RevModPhys.47.487</pub-id></citation>
</ref>
<ref id="B205">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sederberg</surname> <given-names>A. J.</given-names></name> <name><surname>MacLean</surname> <given-names>J. N.</given-names></name> <name><surname>Palmer</surname> <given-names>S. E.</given-names></name></person-group> (<year>2018</year>). <article-title>Learning to make external sensory stimulus predictions using internal correlations in populations of neurons</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>115</volume>, <fpage>1105</fpage>&#x02013;<lpage>1110</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1710779115</pub-id><pub-id pub-id-type="pmid">29348208</pub-id></citation></ref>
<ref id="B206">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sehgal</surname> <given-names>M.</given-names></name> <name><surname>Song</surname> <given-names>C.</given-names></name> <name><surname>Ehlers</surname> <given-names>V. L.</given-names></name> <name><surname>Moyer</surname> <given-names>J. R.</given-names></name></person-group> (<year>2013</year>). <article-title>Learning to learn-intrinsic plasticity as a metaplasticity mechanism for memory formation</article-title>. <source>Neurobiol. Learn. Mem</source>. <volume>105</volume>, <fpage>186</fpage>&#x02013;<lpage>199</lpage>. <pub-id pub-id-type="doi">10.1016/j.nlm.2013.07.008</pub-id><pub-id pub-id-type="pmid">27213810</pub-id></citation></ref>
<ref id="B207">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sezener</surname> <given-names>E.</given-names></name> <name><surname>Grabska-Barwi&#x00144;ska</surname> <given-names>A.</given-names></name> <name><surname>Kostadinov</surname> <given-names>D.</given-names></name> <name><surname>Beau</surname> <given-names>M.</given-names></name> <name><surname>Krishnagopal</surname> <given-names>S.</given-names></name> <name><surname>Budden</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>A rapid and efficient learning rule for biological neural circuits</article-title>. <source>bioRxiv</source>. <pub-id pub-id-type="doi">10.1101/2021.03.10.434756</pub-id></citation></ref>
<ref id="B208">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shaw</surname> <given-names>N. P.</given-names></name> <name><surname>Jackson</surname> <given-names>T.</given-names></name> <name><surname>Orchard</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>Biological batch normalisation: how intrinsic plasticity improves learning in deep neural networks</article-title>. <source>PLoS ONE</source> <volume>15</volume>:<fpage>e0238454</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0238454</pub-id><pub-id pub-id-type="pmid">32966302</pub-id></citation></ref>
<ref id="B209">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shemer</surname> <given-names>I.</given-names></name> <name><surname>Brinne</surname> <given-names>B.</given-names></name> <name><surname>Tegn&#x000E9;r</surname> <given-names>J.</given-names></name> <name><surname>Grillner</surname> <given-names>S.</given-names></name></person-group> (<year>2008</year>). <article-title>Electrotonic signals along intracellular membranes may interconnect dendritic spines and nucleus</article-title>. <source>PLoS Comput. Biol</source>. <volume>4</volume>:<fpage>e1000036</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1000036</pub-id><pub-id pub-id-type="pmid">18369427</pub-id></citation></ref>
<ref id="B210">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shen</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Navlakha</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>A correspondence between normalization strategies in artificial and biological neural networks</article-title>. <source>Neural Comput</source>. <volume>33</volume>, <fpage>3179</fpage>&#x02013;<lpage>3203</lpage>. <pub-id pub-id-type="doi">10.1162/neco_a_01439</pub-id><pub-id pub-id-type="pmid">34474484</pub-id></citation></ref>
<ref id="B211">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Smith</surname> <given-names>J. M.</given-names></name></person-group> (<year>1999</year>). <source>Shaping Life: Genes, Embryos, and Evolution</source>. Darwinism Today Series. <publisher-loc>New Haven, CT</publisher-loc>: <publisher-name>Yale University Press</publisher-name>.</citation>
</ref>
<ref id="B212">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Smolen</surname> <given-names>P.</given-names></name> <name><surname>Baxter</surname> <given-names>D. A.</given-names></name> <name><surname>Byrne</surname> <given-names>J. H.</given-names></name></person-group> (<year>2020</year>). <article-title>Comparing theories for the maintenance of late LTP and long-term memory: computational analysis of the roles of kinase feedback pathways and synaptic reactivation</article-title>. <source>Front. Comput. Neurosci</source>. <volume>14</volume>:<fpage>569349</fpage>. <pub-id pub-id-type="doi">10.3389/fncom.2020.569349</pub-id><pub-id pub-id-type="pmid">33390922</pub-id></citation></ref>
<ref id="B213">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sohal</surname> <given-names>V. S.</given-names></name> <name><surname>Rubenstein</surname> <given-names>J. L. R.</given-names></name></person-group> (<year>2019</year>). <article-title>Excitation-inhibition balance as a framework for investigating mechanisms in neuropsychiatric disorders</article-title>. <source>Mol. Psychiatry</source> <volume>24</volume>, <fpage>1248</fpage>&#x02013;<lpage>1257</lpage>. <pub-id pub-id-type="doi">10.1038/s41380-019-0426-0</pub-id><pub-id pub-id-type="pmid">31089192</pub-id></citation></ref>
<ref id="B214">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Song</surname> <given-names>H. F.</given-names></name> <name><surname>Yang</surname> <given-names>G. R.</given-names></name> <name><surname>Wang</surname> <given-names>X.-J.</given-names></name></person-group> (<year>2016</year>). <article-title>Training excitatory-inhibitory recurrent neural networks for cognitive tasks: a simple and flexible framework</article-title>. <source>PLoS Comput. Biol</source>. <volume>12</volume>:<fpage>e1004792</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1004792</pub-id><pub-id pub-id-type="pmid">26928718</pub-id></citation></ref>
<ref id="B215">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Song</surname> <given-names>S.</given-names></name> <name><surname>Miller</surname> <given-names>K. D.</given-names></name> <name><surname>Abbott</surname> <given-names>L. F.</given-names></name></person-group> (<year>2000</year>). <article-title>Competitive Hebbian learning through spike-timing-dependent synaptic plasticity</article-title>. <source>Nat. Neurosci</source>. <volume>3</volume>, <fpage>919</fpage>&#x02013;<lpage>926</lpage>. <pub-id pub-id-type="doi">10.1038/78829</pub-id><pub-id pub-id-type="pmid">10966623</pub-id></citation></ref>
<ref id="B216">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sorrells</surname> <given-names>S. F.</given-names></name> <name><surname>Paredes</surname> <given-names>M. F.</given-names></name> <name><surname>Cebrian-Silla</surname> <given-names>A.</given-names></name> <name><surname>Sandoval</surname> <given-names>K.</given-names></name> <name><surname>Qi</surname> <given-names>D.</given-names></name> <name><surname>Kelley</surname> <given-names>K. W.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Human hippocampal neurogenesis drops sharply in children to undetectable levels in adults</article-title>. <source>Nature</source> <volume>555</volume>, <fpage>377</fpage>&#x02013;<lpage>381</lpage>. <pub-id pub-id-type="doi">10.1038/nature25975</pub-id><pub-id pub-id-type="pmid">29873751</pub-id></citation></ref>
<ref id="B217">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sorrells</surname> <given-names>S. F.</given-names></name> <name><surname>Paredes</surname> <given-names>M. F.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>Kang</surname> <given-names>G.</given-names></name> <name><surname>Pastor-Alonso</surname> <given-names>O.</given-names></name> <name><surname>Biagiotti</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Positive controls in adults and children support that very few, if any, new neurons are born in the adult human hippocampus</article-title>. <source>J. Neurosci</source>. <volume>41</volume>, <fpage>2554</fpage>&#x02013;<lpage>2565</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.0676-20.2020</pub-id><pub-id pub-id-type="pmid">33762407</pub-id></citation></ref>
<ref id="B218">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Speranza</surname> <given-names>L.</given-names></name> <name><surname>Labus</surname> <given-names>J.</given-names></name> <name><surname>Volpicelli</surname> <given-names>F.</given-names></name> <name><surname>Guseva</surname> <given-names>D.</given-names></name> <name><surname>Lacivita</surname> <given-names>E.</given-names></name> <name><surname>Leopoldo</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Serotonin 5-ht7 receptor increases the density of dendritic spines and facilitates synaptogenesis in forebrain neurons</article-title>. <source>J. Neurochem</source>. <volume>141</volume>, <fpage>647</fpage>&#x02013;<lpage>661</lpage>. <pub-id pub-id-type="doi">10.1111/jnc.13962</pub-id><pub-id pub-id-type="pmid">28337771</pub-id></citation></ref>
<ref id="B219">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Staii</surname> <given-names>C.</given-names></name></person-group> (<year>2022</year>). <article-title>Stochastic models of neuronal growth</article-title>. <source>arXiv [preprint] arXiv:2205.10723</source>.</citation>
</ref>
<ref id="B220">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stanley</surname> <given-names>K. O.</given-names></name> <name><surname>Clune</surname> <given-names>J.</given-names></name> <name><surname>Lehman</surname> <given-names>J.</given-names></name> <name><surname>Miikkulainen</surname> <given-names>R.</given-names></name></person-group> (<year>2019</year>). <article-title>Designing neural networks through neuroevolution</article-title>. <source>Nat. Mach. Intell</source>. <volume>1</volume>, <fpage>24</fpage>&#x02013;<lpage>35</lpage>. <pub-id pub-id-type="doi">10.1038/s42256-018-0006-z</pub-id></citation>
</ref>
<ref id="B221">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>St&#x000F6;ckl</surname> <given-names>C.</given-names></name> <name><surname>Lang</surname> <given-names>D.</given-names></name> <name><surname>Maass</surname> <given-names>W.</given-names></name></person-group> (<year>2022</year>). <article-title>Structure induces computational function in networks with diverse types of spiking neurons</article-title>. <source>bioRxiv</source>. <pub-id pub-id-type="doi">10.1101/2021.05.18.444689</pub-id></citation>
</ref>
<ref id="B222">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>S&#x000FC;dhof</surname> <given-names>T. C.</given-names></name></person-group> (<year>2018</year>). <article-title>Towards an understanding of synapse formation</article-title>. <source>Neuron</source> <volume>100</volume>, <fpage>276</fpage>&#x02013;<lpage>293</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2018.09.040</pub-id><pub-id pub-id-type="pmid">30359597</pub-id></citation></ref>
<ref id="B223">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tan</surname> <given-names>S. Z. K.</given-names></name> <name><surname>Du</surname> <given-names>R.</given-names></name> <name><surname>Perucho</surname> <given-names>J. A. U.</given-names></name> <name><surname>Chopra</surname> <given-names>S. S.</given-names></name> <name><surname>Vardhanabhuti</surname> <given-names>V.</given-names></name> <name><surname>Lim</surname> <given-names>L. W.</given-names></name></person-group> (<year>2020</year>). <article-title>Dropout in neural networks simulates the paradoxical effects of deep brain stimulation on memory</article-title>. <source>Front. Aging Neurosci</source>. <volume>12</volume>:<fpage>273</fpage>. <pub-id pub-id-type="doi">10.3389/fnagi.2020.00273</pub-id><pub-id pub-id-type="pmid">33093830</pub-id></citation></ref>
<ref id="B224">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tanaka</surname> <given-names>H.</given-names></name> <name><surname>Ishikawa</surname> <given-names>T.</given-names></name> <name><surname>Lee</surname> <given-names>J.</given-names></name> <name><surname>Kakei</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>The cerebro-cerebellum as a locus of forward model: a review</article-title>. <source>Front. Syst. Neurosci</source>. <volume>14</volume>:<fpage>19</fpage>. <pub-id pub-id-type="doi">10.3389/fnsys.2020.00019</pub-id><pub-id pub-id-type="pmid">32327978</pub-id></citation></ref>
<ref id="B225">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Terziyan</surname> <given-names>V.</given-names></name> <name><surname>Kaikova</surname> <given-names>O.</given-names></name></person-group> (<year>2022</year>). <article-title>Neural networks with disabilities: an introduction to complementary artificial intelligence</article-title>. <source>Neural Comput</source>. <volume>34</volume>, <fpage>255</fpage>&#x02013;<lpage>290</lpage>. <pub-id pub-id-type="doi">10.1162/neco_a_01449</pub-id><pub-id pub-id-type="pmid">34710901</pub-id></citation></ref>
<ref id="B226">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thomas</surname> <given-names>B. T.</given-names></name> <name><surname>Blalock</surname> <given-names>D. W.</given-names></name> <name><surname>Levy</surname> <given-names>W. B.</given-names></name></person-group> (<year>2015</year>). <article-title>Adaptive synaptogenesis constructs neural codes that benefit discrimination</article-title>. <source>PLoS Comput. Biol</source>. <volume>11</volume>:<fpage>e1004299</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1004299</pub-id><pub-id pub-id-type="pmid">26176744</pub-id></citation></ref>
<ref id="B227">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tian</surname> <given-names>G.</given-names></name> <name><surname>Li</surname> <given-names>S.</given-names></name> <name><surname>Huang</surname> <given-names>T.</given-names></name> <name><surname>Wu</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Excitation-inhibition balanced neural networks for fast signal detection</article-title>. <source>Front. Comput. Neurosci</source>. <volume>14</volume>:<fpage>79</fpage>. <pub-id pub-id-type="doi">10.3389/fncom.2020.00079</pub-id><pub-id pub-id-type="pmid">33013343</pub-id></citation></ref>
<ref id="B228">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tierney</surname> <given-names>A. L.</given-names></name> <name><surname>Nelson</surname> <given-names>C. A.</given-names> <suffix>III</suffix></name></person-group> (<year>2009</year>). <article-title>Brain development and the role of experience in the early years</article-title>. <source>Zero Three</source> <volume>30</volume>, <fpage>9</fpage>&#x02013;<lpage>13</lpage>.<pub-id pub-id-type="pmid">23894221</pub-id></citation></ref>
<ref id="B229">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Titley</surname> <given-names>H. K.</given-names></name> <name><surname>Brunel</surname> <given-names>N.</given-names></name> <name><surname>Hansel</surname> <given-names>C.</given-names></name></person-group> (<year>2017</year>). <article-title>Toward a neurocentric view of learning</article-title>. <source>Neuron</source> <volume>95</volume>, <fpage>19</fpage>&#x02013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2017.05.021</pub-id><pub-id pub-id-type="pmid">28683265</pub-id></citation></ref>
<ref id="B230">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tomasi</surname> <given-names>D.</given-names></name> <name><surname>Wang</surname> <given-names>G.-J.</given-names></name> <name><surname>Volkow</surname> <given-names>N. D.</given-names></name></person-group> (<year>2013</year>). <article-title>Energetic cost of brain functional connectivity</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>110</volume>, <fpage>13642</fpage>&#x02013;<lpage>13647</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1303346110</pub-id><pub-id pub-id-type="pmid">23898179</pub-id></citation></ref>
<ref id="B231">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tosches</surname> <given-names>M. A.</given-names></name></person-group> (<year>2017</year>). <article-title>Developmental and genetic mechanisms of neural circuit evolution</article-title>. <source>Dev. Biol</source>. <volume>431</volume>, <fpage>16</fpage>&#x02013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1016/j.ydbio.2017.06.016</pub-id><pub-id pub-id-type="pmid">28645748</pub-id></citation></ref>
<ref id="B232">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Toyoizumi</surname> <given-names>T.</given-names></name> <name><surname>Kaneko</surname> <given-names>M.</given-names></name> <name><surname>Stryker</surname> <given-names>M.</given-names></name> <name><surname>Miller</surname> <given-names>K.</given-names></name></person-group> (<year>2014</year>). <article-title>Modeling the dynamic interaction of Hebbian and homeostatic plasticity</article-title>. <source>Neuron</source> <volume>84</volume>, <fpage>497</fpage>&#x02013;<lpage>510</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2014.09.036</pub-id><pub-id pub-id-type="pmid">25374364</pub-id></citation></ref>
<ref id="B233">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Toyoizumi</surname> <given-names>T.</given-names></name> <name><surname>Pfister</surname> <given-names>J.-P.</given-names></name> <name><surname>Aihara</surname> <given-names>K.</given-names></name> <name><surname>Gerstner</surname> <given-names>W.</given-names></name></person-group> (<year>2005</year>). <article-title>Generalized Bienenstock-Cooper-Munro rule for spiking neurons that maximizes information transmission</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>102</volume>, <fpage>5239</fpage>&#x02013;<lpage>5244</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0500495102</pub-id><pub-id pub-id-type="pmid">15795376</pub-id></citation></ref>
<ref id="B234">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tran</surname> <given-names>L. M.</given-names></name> <name><surname>Santoro</surname> <given-names>A.</given-names></name> <name><surname>Liu</surname> <given-names>L.</given-names></name> <name><surname>Josselyn</surname> <given-names>S. A.</given-names></name> <name><surname>Richards</surname> <given-names>B. A.</given-names></name> <name><surname>Frankland</surname> <given-names>P. W.</given-names></name></person-group> (<year>2022</year>). <article-title>Adult neurogenesis acts as a neural regularizer</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>119</volume>:<fpage>e2206704119</fpage>. <pub-id pub-id-type="doi">10.1073/pnas.2206704119</pub-id><pub-id pub-id-type="pmid">36322739</pub-id></citation></ref>
<ref id="B235">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Trapp</surname> <given-names>P.</given-names></name> <name><surname>Echeveste</surname> <given-names>R.</given-names></name> <name><surname>Gros</surname> <given-names>C.</given-names></name></person-group> (<year>2018</year>). <article-title>E-i balance emerges naturally from continuous Hebbian learning in autonomous neural networks</article-title>. <source>Sci. Rep</source>. <volume>8</volume>:<fpage>8939</fpage>. <pub-id pub-id-type="doi">10.1038/s41598-018-27099-5</pub-id><pub-id pub-id-type="pmid">29895972</pub-id></citation></ref>
<ref id="B236">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Traulsen</surname> <given-names>A.</given-names></name> <name><surname>Nowak</surname> <given-names>M. A.</given-names></name></person-group> (<year>2006</year>). <article-title>Evolution of cooperation by multilevel selection</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>103</volume>, <fpage>10952</fpage>&#x02013;<lpage>10955</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0602530103</pub-id><pub-id pub-id-type="pmid">16829575</pub-id></citation></ref>
<ref id="B237">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tripodi</surname> <given-names>M.</given-names></name> <name><surname>Evers</surname> <given-names>J. F.</given-names></name> <name><surname>Mauss</surname> <given-names>A.</given-names></name> <name><surname>Bate</surname> <given-names>M.</given-names></name> <name><surname>Landgraf</surname> <given-names>M.</given-names></name></person-group> (<year>2008</year>). <article-title>Structural homeostasis: compensatory adjustments of dendritic arbor geometry in response to variations of synaptic input</article-title>. <source>PLoS Biol</source>. <volume>6</volume>:<fpage>e260</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pbio.0060260</pub-id><pub-id pub-id-type="pmid">18959482</pub-id></citation></ref>
<ref id="B238">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tripp</surname> <given-names>B.</given-names></name> <name><surname>Eliasmith</surname> <given-names>C.</given-names></name></person-group> (<year>2016</year>). <article-title>Function approximation in inhibitory networks</article-title>. <source>Neural Netw</source>. <volume>77</volume>, <fpage>95</fpage>&#x02013;<lpage>106</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2016.01.010</pub-id><pub-id pub-id-type="pmid">26963256</pub-id></citation></ref>
<ref id="B239">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsao</surname> <given-names>Y.</given-names></name> <name><surname>Chu</surname> <given-names>H.-C.</given-names></name> <name><surname>Fang</surname> <given-names>S.-H.</given-names></name> <name><surname>Lee</surname> <given-names>J.</given-names></name> <name><surname>Lin</surname> <given-names>C.-M.</given-names></name></person-group> (<year>2018</year>). <article-title>Adaptive noise cancellation using deep cerebellar model articulation controller</article-title>. <source>IEEE Access</source> <volume>6</volume>, <fpage>37395</fpage>&#x02013;<lpage>37402</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2018.2827699</pub-id></citation>
</ref>
<ref id="B240">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsodyks</surname> <given-names>M.</given-names></name> <name><surname>Markram</surname> <given-names>H.</given-names></name></person-group> (<year>1997</year>). <article-title>The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>94</volume>, <fpage>719</fpage>&#x02013;<lpage>723</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.94.2.719</pub-id><pub-id pub-id-type="pmid">9012851</pub-id></citation></ref>
<ref id="B241">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Turing</surname> <given-names>A. M.</given-names></name></person-group> (<year>1948</year>). <source>Intelligent Machinery</source>. Report for the National Physical Laboratory.</citation>
</ref>
<ref id="B242">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Turrigiano</surname> <given-names>G. G.</given-names></name> <name><surname>Leslie</surname> <given-names>K. R.</given-names></name> <name><surname>Desai</surname> <given-names>N. S.</given-names></name> <name><surname>Rutherford</surname> <given-names>L. C.</given-names></name> <name><surname>Nelson</surname> <given-names>S. B.</given-names></name></person-group> (<year>1998</year>). <article-title>Activity-dependent scaling of quantal amplitude in neocortical neurons</article-title>. <source>Nature</source> <volume>391</volume>, <fpage>892</fpage>&#x02013;<lpage>896</lpage>. <pub-id pub-id-type="doi">10.1038/36103</pub-id><pub-id pub-id-type="pmid">9495341</pub-id></citation></ref>
<ref id="B243">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Turrigiano</surname> <given-names>G. G.</given-names></name> <name><surname>Nelson</surname> <given-names>S. B.</given-names></name></person-group> (<year>2004</year>). <article-title>Homeostatic plasticity in the developing nervous system</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>5</volume>, <fpage>97</fpage>&#x02013;<lpage>107</lpage>. <pub-id pub-id-type="doi">10.1038/nrn1327</pub-id><pub-id pub-id-type="pmid">14735113</pub-id></citation></ref>
<ref id="B244">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Valiant</surname> <given-names>L.</given-names></name></person-group> (<year>2013</year>). <source>Probably Approximately Correct: Nature&#x00027;s Algorithms for Learning and Prospering in a Complex World</source>. <publisher-loc>New York City, NY</publisher-loc>: <publisher-name>Basic Books, Inc</publisher-name>.</citation>
</ref>
<ref id="B245">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>van de Ven</surname> <given-names>G. M.</given-names></name> <name><surname>Siegelmann</surname> <given-names>H. T.</given-names></name> <name><surname>Tolias</surname> <given-names>A. S.</given-names></name></person-group> (<year>2020</year>). <article-title>Brain-inspired replay for continual learning with artificial neural networks</article-title>. <source>Nat. Commun</source>. <volume>11</volume>:<fpage>4069</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-020-17866-2</pub-id><pub-id pub-id-type="pmid">32792531</pub-id></citation></ref>
<ref id="B246">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>van Ooyen</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Using theoretical models to analyse neural development</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>12</volume>, <fpage>311</fpage>&#x02013;<lpage>326</lpage>. <pub-id pub-id-type="doi">10.1038/nrn3031</pub-id><pub-id pub-id-type="pmid">21587288</pub-id></citation></ref>
<ref id="B247">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vardalaki</surname> <given-names>D.</given-names></name> <name><surname>Chung</surname> <given-names>K.</given-names></name> <name><surname>Harnett</surname> <given-names>M. T.</given-names></name></person-group> (<year>2022</year>). <article-title>Filopodia are a structural substrate for silent synapses in adult neocortex</article-title>. <source>Nature</source> <volume>612</volume>, <fpage>323</fpage>&#x02013;<lpage>327</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-022-05483-6</pub-id><pub-id pub-id-type="pmid">36450984</pub-id></citation></ref>
<ref id="B248">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ver&#x000ED;ssimo</surname> <given-names>J.</given-names></name> <name><surname>Verhaeghen</surname> <given-names>P.</given-names></name> <name><surname>Goldman</surname> <given-names>N.</given-names></name> <name><surname>Weinstein</surname> <given-names>M.</given-names></name> <name><surname>Ullman</surname> <given-names>M. T.</given-names></name></person-group> (<year>2022</year>). <article-title>Evidence that ageing yields improvements as well as declines across attention and executive functions</article-title>. <source>Nat. Hum. Behav</source>. <volume>6</volume>, <fpage>97</fpage>&#x02013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1038/s41562-021-01169-7</pub-id><pub-id pub-id-type="pmid">34453127</pub-id></citation></ref>
<ref id="B249">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vilone</surname> <given-names>G.</given-names></name> <name><surname>Longo</surname> <given-names>L.</given-names></name></person-group> (<year>2021</year>). <article-title>Notions of explainability and evaluation approaches for explainable artificial intelligence</article-title>. <source>Inform. Fus</source>. <volume>76</volume>, <fpage>89</fpage>&#x02013;<lpage>106</lpage>. <pub-id pub-id-type="doi">10.1016/j.inffus.2021.05.009</pub-id><pub-id pub-id-type="pmid">34844219</pub-id></citation></ref>
<ref id="B250">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>C.-H.</given-names></name> <name><surname>Huang</surname> <given-names>K.-Y.</given-names></name> <name><surname>Yao</surname> <given-names>Y.</given-names></name> <name><surname>Chen</surname> <given-names>J.-C.</given-names></name> <name><surname>Shuai</surname> <given-names>H.-H.</given-names></name> <name><surname>Cheng</surname> <given-names>W.-H.</given-names></name></person-group> (<year>2022</year>). <article-title>Lightweight deep learning: an overview</article-title>. <source>IEEE Consum. Electron. Mag</source>. <fpage>1</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1109/MCE.2022.3181759</pub-id></citation>
</ref>
<ref id="B251">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>L.</given-names></name> <name><surname>Lei</surname> <given-names>B.</given-names></name> <name><surname>Li</surname> <given-names>Q.</given-names></name> <name><surname>Su</surname> <given-names>H.</given-names></name> <name><surname>Zhu</surname> <given-names>J.</given-names></name> <name><surname>Zhong</surname> <given-names>Y.</given-names></name></person-group> (<year>2022</year>). <article-title>Triple-memory networks: a brain-inspired method for continual learning</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst</source>. <volume>33</volume>, <fpage>1925</fpage>&#x02013;<lpage>1934</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2021.3111019</pub-id><pub-id pub-id-type="pmid">34529579</pub-id></citation></ref>
<ref id="B252">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Whittington</surname> <given-names>J. C.</given-names></name> <name><surname>Bogacz</surname> <given-names>R.</given-names></name></person-group> (<year>2019</year>). <article-title>Theories of error back-propagation in the brain</article-title>. <source>Trends Cogn. Sci</source>. <volume>23</volume>, <fpage>235</fpage>&#x02013;<lpage>250</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2018.12.005</pub-id><pub-id pub-id-type="pmid">30704969</pub-id></citation></ref>
<ref id="B253">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>X.</given-names></name> <name><surname>Liu</surname> <given-names>X.</given-names></name> <name><surname>Li</surname> <given-names>W.</given-names></name> <name><surname>Wu</surname> <given-names>Q.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Improved expressivity through dendritic neural networks,&#x0201D;</article-title> in <source>Advances in Neural Information Processing Systems 31: NeurIPS 2018, Montr&#x000E9;al, Canada</source>, eds S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (<publisher-loc>Red Hook, NY</publisher-loc>: <publisher-name>Curran Associates Inc.</publisher-name>), <volume>31</volume>, <fpage>8068</fpage>&#x02013;<lpage>8079</lpage>.</citation></ref>
<ref id="B254">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wybo</surname> <given-names>W. A.</given-names></name> <name><surname>Torben-Nielsen</surname> <given-names>B.</given-names></name> <name><surname>Nevian</surname> <given-names>T.</given-names></name> <name><surname>Gewaltig</surname> <given-names>M.-O.</given-names></name></person-group> (<year>2019</year>). <article-title>Electrical compartmentalization in neurons</article-title>. <source>Cell Rep</source>. <volume>26</volume>, <fpage>1759</fpage>&#x02013;<lpage>1773</lpage>.e7. <pub-id pub-id-type="doi">10.1016/j.celrep.2019.01.074</pub-id><pub-id pub-id-type="pmid">30759388</pub-id></citation></ref>
<ref id="B255">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yao</surname> <given-names>X.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name></person-group> (<year>1998</year>). <article-title>Towards designing artificial neural networks by evolution</article-title>. <source>Appl. Math. Comput</source>. <volume>91</volume>, <fpage>83</fpage>&#x02013;<lpage>90</lpage>. <pub-id pub-id-type="doi">10.1016/S0096-3003(97)10005-4</pub-id></citation>
</ref>
<ref id="B256">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zeng</surname> <given-names>Y.</given-names></name> <name><surname>Zhao</surname> <given-names>D.</given-names></name> <name><surname>Zhao</surname> <given-names>F.</given-names></name> <name><surname>Shen</surname> <given-names>G.</given-names></name> <name><surname>Dong</surname> <given-names>Y.</given-names></name> <name><surname>Lu</surname> <given-names>E.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>BrainCog: a spiking neural network based brain-inspired cognitive intelligence engine for brain-inspired AI and brain simulation</article-title>. <source>arXiv [preprint] arXiv:2207.08533</source>. <pub-id pub-id-type="doi">10.2139/ssrn.4278957</pub-id></citation>
</ref>
<ref id="B257">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>H.</given-names></name> <name><surname>Sun</surname> <given-names>J.</given-names></name> <name><surname>Xu</surname> <given-names>Z.</given-names></name></person-group> (<year>2020</year>). <article-title>Learning to be global optimizer</article-title>. <source>arXiv [preprint] arXiv:2003.04521</source>.</citation>
</ref>
<ref id="B258">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>S.</given-names></name> <name><surname>Liu</surname> <given-names>M.</given-names></name> <name><surname>Yan</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;The diversified ensemble neural network,&#x0201D;</article-title> in <source>Advances in Neural Information Processing Systems, Vol. 33</source>, eds H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin (<publisher-name>Curran Associates, Inc.</publisher-name>), <fpage>16001</fpage>&#x02013;<lpage>16011</lpage>.</citation>
</ref>
<ref id="B259">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>S.</given-names></name> <name><surname>Zhang</surname> <given-names>A.</given-names></name> <name><surname>Ma</surname> <given-names>Y.</given-names></name> <name><surname>Zhu</surname> <given-names>W.</given-names></name></person-group> (<year>2019</year>). <article-title>Intrinsic plasticity based inference acceleration for spiking multi-layer perceptron</article-title>. <source>IEEE Access</source> <volume>7</volume>, <fpage>73685</fpage>&#x02013;<lpage>73693</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2019.2914424</pub-id></citation>
</ref>
<ref id="B260">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>W.</given-names></name> <name><surname>Li</surname> <given-names>P.</given-names></name></person-group> (<year>2019</year>). <article-title>Information-theoretic intrinsic plasticity for online unsupervised learning in spiking neural networks</article-title>. <source>Front. Neurosci</source>. <volume>13</volume>:<fpage>31</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2019.00031</pub-id><pub-id pub-id-type="pmid">30804736</pub-id></citation></ref>
<ref id="B261">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Liu</surname> <given-names>S.</given-names></name> <name><surname>Zhao</surname> <given-names>X.</given-names></name> <name><surname>Wu</surname> <given-names>F.</given-names></name> <name><surname>Wu</surname> <given-names>Q.</given-names></name> <name><surname>Wang</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Emulating short-term and long-term plasticity of bio-synapse based on Cu/a-Si/Pt memristor</article-title>. <source>IEEE Electr. Device Lett</source>. <volume>38</volume>, <fpage>1208</fpage>&#x02013;<lpage>1211</lpage>. <pub-id pub-id-type="doi">10.1109/LED.2017.2722463</pub-id></citation>
</ref>
<ref id="B262">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhou</surname> <given-names>S.</given-names></name> <name><surname>Yu</surname> <given-names>Y.</given-names></name></person-group> (<year>2018</year>). <article-title>Synaptic E-I balance underlies efficient neural coding</article-title>. <source>Front. Neurosci</source>. <volume>12</volume>:<fpage>46</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2018.00046</pub-id><pub-id pub-id-type="pmid">29456491</pub-id></citation></ref>
<ref id="B263">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhou</surname> <given-names>Z.-H.</given-names></name> <name><surname>Wu</surname> <given-names>J.</given-names></name> <name><surname>Tang</surname> <given-names>W.</given-names></name></person-group> (<year>2002</year>). <article-title>Ensembling neural networks: many could be better than all</article-title>. <source>Artif. Intell</source>. <volume>137</volume>, <fpage>239</fpage>&#x02013;<lpage>263</lpage>. <pub-id pub-id-type="doi">10.1016/S0004-3702(02)00190-X</pub-id></citation>
</ref>
<ref id="B264">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zierenberg</surname> <given-names>J.</given-names></name> <name><surname>Wilting</surname> <given-names>J.</given-names></name> <name><surname>Priesemann</surname> <given-names>V.</given-names></name></person-group> (<year>2018</year>). <article-title>Homeostatic plasticity and external input shape neural network dynamics</article-title>. <source>Phys. Rev. X</source> <volume>8</volume>:<fpage>031018</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevX.8.031018</pub-id></citation>
</ref>
<ref id="B265">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zoph</surname> <given-names>B.</given-names></name> <name><surname>Vasudevan</surname> <given-names>V.</given-names></name> <name><surname>Shlens</surname> <given-names>J.</given-names></name> <name><surname>Le</surname> <given-names>Q. V.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Learning transferable architectures for scalable image recognition,&#x0201D;</article-title> in <source>2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Salt Lake City, UT</publisher-loc>), <fpage>8697</fpage>&#x02013;<lpage>8710</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2018.00907</pub-id></citation>
</ref>
<ref id="B266">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zou</surname> <given-names>W.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Huang</surname> <given-names>H.</given-names></name></person-group> (<year>2023</year>). <article-title>Ensemble perspective for understanding temporal credit assignment</article-title>. <source>Phys. Rev. E</source> <volume>107</volume>:<fpage>024307</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevE.107.024307</pub-id><pub-id pub-id-type="pmid">36932505</pub-id></citation></ref>
</ref-list> 
</back>
</article> 