<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Comput. Neurosci.</journal-id>
<journal-title>Frontiers in Computational Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Comput. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5188</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fncom.2025.1655701</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Autonomous retrieval for continuous learning in associative memory networks</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Saighi</surname> <given-names>Paul</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/3098499/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/data-curation/"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/project-administration/"/>
<role content-type="https://credit.niso.org/contributor-roles/software/"/>
<role content-type="https://credit.niso.org/contributor-roles/visualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Rozenberg</surname> <given-names>Marcelo</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/3141228/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/funding-acquisition/"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/project-administration/"/>
<role content-type="https://credit.niso.org/contributor-roles/resources/"/>
<role content-type="https://credit.niso.org/contributor-roles/supervision/"/>
<role content-type="https://credit.niso.org/contributor-roles/validation/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Laboratoire de Physique des Solides, CNRS, Universit&#x000E9; Paris-Saclay</institution>, <addr-line>Orsay</addr-line>, <country>France</country></aff>
<aff id="aff2"><sup>2</sup><institution>CNRS, Integrative Neuroscience and Cognition Center, Universit&#x000E9; Paris-Cit&#x000E9;</institution>, <addr-line>Paris</addr-line>, <country>France</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Fernando Montani, National Scientific and Technical Research Council (CONICET), Argentina</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Eugenio Urdapilleta, Bariloche Atomic Centre (CNEA), Argentina</p>
<p>Germ&#x000E1;n Mato, Bariloche Atomic Centre (CNEA), Argentina</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Paul Saighi <email>paul.saighi&#x00040;universite-paris-saclay.fr</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>26</day>
<month>08</month>
<year>2025</year>
</pub-date>
<pub-date pub-type="collection">
<year>2025</year>
</pub-date>
<volume>19</volume>
<elocation-id>1655701</elocation-id>
<history>
<date date-type="received">
<day>28</day>
<month>06</month>
<year>2025</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>08</month>
<year>2025</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2025 Saighi and Rozenberg.</copyright-statement>
<copyright-year>2025</copyright-year>
<copyright-holder>Saighi and Rozenberg</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>The brain&#x00027;s ability to assimilate and retain information, continually updating its memory while limiting the loss of valuable past knowledge, remains largely a mystery. We address this challenge related to continuous learning in the context of associative memory networks, where the sequential storage of correlated patterns typically requires non-local learning rules or external memory systems. Our work demonstrates how incorporating biologically inspired inhibitory plasticity enables networks to autonomously explore their attractor landscape. The algorithm presented here allows for the autonomous retrieval of stored patterns, enabling the progressive incorporation of correlated memories. This mechanism is reminiscent of memory consolidation during sleep-like states in the mammalian central nervous system. The resulting framework provides insights into how neural circuits might maintain memories through purely local interactions and takes a step toward a more biologically plausible mechanism for memory rehearsal and continuous learning.</p></abstract>
<kwd-group>
<kwd>neural networks</kwd>
<kwd>memory consolidation</kwd>
<kwd>continuous learning</kwd>
<kwd>catastrophic forgetting</kwd>
<kwd>unsupervised learning</kwd>
<kwd>neuromorphic computing</kwd>
<kwd>associative memory networks</kwd>
</kwd-group>
<counts>
<fig-count count="7"/>
<table-count count="0"/>
<equation-count count="6"/>
<ref-count count="28"/>
<page-count count="10"/>
<word-count count="6655"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1 Introduction</title>
<p>Continuous learning (CL) refers to a system&#x00027;s ability to maintain performance across multiple tasks when operating in environments that evolve over time, requiring adaptation to changing data distributions. To do so, the learning mechanism should avoid uncontrolled forgetting of previously acquired knowledge when adapting to new information or contexts. In associative memory networks, this challenge arises when storing new activity patterns sequentially deteriorates existing memory representations, a phenomenon called catastrophic forgetting.</p>
<p>Note that this work specifically addresses catastrophic forgetting in the context of sequential learning. This is a different challenge from the well-studied spin-glass phase transition that occurs in Hopfield networks at high memory loads.</p>
<p>Memory rehearsal is a method that addresses the challenge of catastrophic forgetting by periodically retraining the model on previously stored patterns. This process reinforces older memory representations, preventing their degradation when new information is incorporated (<xref ref-type="bibr" rid="B24">Robins, 1995</xref>; <xref ref-type="bibr" rid="B19">McCallum, 1998</xref>). In the mammalian nervous system, spontaneous memory replays occur during sleep (<xref ref-type="bibr" rid="B24">Robins, 1995</xref>; <xref ref-type="bibr" rid="B18">Louie and Wilson, 2001</xref>; <xref ref-type="bibr" rid="B22">Peyrache et al., 2009</xref>; <xref ref-type="bibr" rid="B6">Fauth and Van Rossum, 2019</xref>; <xref ref-type="bibr" rid="B26">Tononi and Cirelli, 2014</xref>), suggesting a biological mechanism analogous to rehearsal techniques in artificial networks (<xref ref-type="bibr" rid="B24">Robins, 1995</xref>). This parallel raises a fundamental question: what mechanisms enable these autonomous memory replays in biological systems?</p>
<p>Previous research on bioinspired neural networks demonstrates that short-term synaptic depression can facilitate spontaneous rehearsal of neural assemblies (<xref ref-type="bibr" rid="B6">Fauth and Van Rossum, 2019</xref>). However, a significant constraint of this approach is its dependence on minimal overlap between neural assemblies. The neuronal populations in these studies share few neurons, resulting in effectively decorrelated memory representations. By contrast, classical associative memory networks like Hopfield Networks can effectively store uncorrelated memories that share many units. Some learning algorithms even allow the storage of highly correlated patterns that share a majority of their units (<xref ref-type="bibr" rid="B5">Diederich and Opper, 1987</xref>). From a biological perspective, understanding how networks implement the rehearsal of correlated populations is crucial, as neural representations found in the cortex generally recruit extensively overlapping assemblies (<xref ref-type="bibr" rid="B9">Haxby et al., 2001</xref>; <xref ref-type="bibr" rid="B15">Kriegeskorte, 2008</xref>). The maintenance of overlapping assemblies is widely considered essential for cortical computation, as it supports stimulus generalization and the emergence of invariant, high-level concepts in which individual neurons participate in multiple but related representations (<xref ref-type="bibr" rid="B9">Haxby et al., 2001</xref>; <xref ref-type="bibr" rid="B23">Quiroga et al., 2005</xref>; <xref ref-type="bibr" rid="B15">Kriegeskorte, 2008</xref>). This problem is of similar importance in neuromorphic engineering, as highly correlated representations are an emerging feature of artificial neural networks (<xref ref-type="bibr" rid="B14">Kothapalli, 2023</xref>).</p>
<p>How the rehearsal of correlated memories takes place autonomously on neural substrates remains a largely unaddressed question. In this work, we focus on this issue and explore its potential application for continuous learning (CL). For the sake of bio-plausibility and potential implementation in neuromorphic substrates, we shall demand that our system exhibit the following features: 1) It stores memory states that can be highly correlated; 2) During pattern recovery, the network does not converge toward spurious attractors, which would constitute false memories; 3) Plasticity rules are local, meaning that the modification of a synaptic efficacy can be computed in terms of its pre- and post-synaptic neuron states; 4) It is autonomous, namely, it should retrieve all previously stored patterns from its own dynamics; the network does not have access to an external list of previously recorded memory states. To satisfy requirements (1) and (3), we shall adopt a perceptron-like algorithm inspired by the work of <xref ref-type="bibr" rid="B5">Diederich and Opper (1987)</xref>. For requirement (2), we shall use continuous Hopfield Networks (CHNs) (<xref ref-type="bibr" rid="B11">Hopfield, 1984</xref>). Our work demonstrates that, when kept below a certain memory load, CHNs converge exclusively to stored patterns during retrieval. This approach avoids both the spurious-state proliferation common in Discrete Hopfield Networks (DHNs) and the delicate fine-tuning of the temperature parameter inherent to Stochastic Hopfield Networks (SHNs) (<xref ref-type="bibr" rid="B1">Amit et al., 1985a</xref>).</p>
<p>The main contribution of the present work is an algorithm that addresses requirement (4). A crucial feature of our approach is the use of self-inhibition to shrink the basins of attraction of previously visited attractor states, thus allowing a sequential and thorough search and recovery of all previously stored correlated memory states. This recovery effectively enables the rehearsal of the stored patterns for CL purposes. The dynamics of the plastic recurrent inhibition are inspired by computational neuroscience work based on actual neurophysiological data (<xref ref-type="bibr" rid="B27">Vogels et al., 2011</xref>).</p></sec>
<sec sec-type="methods" id="s2">
<title>2 Methods</title>
<sec>
<title>2.1 Continuous Hopfield Network (CHN)</title>
<p>In contrast with the conventional DHN model (<xref ref-type="bibr" rid="B10">Hopfield, 1982</xref>), where neural states are defined as binary variables, in a CHN they are continuous (<xref ref-type="bibr" rid="B11">Hopfield, 1984</xref>). The dynamics of the network are defined by a set of differential equations, with each neuron unit described as a leaky integrator:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>c</mml:mi><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:munder></mml:mstyle><mml:msub><mml:mrow><mml:mi>W</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>u</italic><sub><italic>i</italic></sub> is the membrane potential of neuron <italic>i</italic>, <italic>c</italic> is the membrane capacitance, <italic>r</italic> is the leak resistance of each neuron, <italic>W</italic><sub><italic>ij</italic></sub> is the synaptic efficacy between neurons <italic>j</italic> and <italic>i</italic>, and <italic>v</italic><sub><italic>i</italic></sub> is the activity (or firing rate) of neuron <italic>i</italic> that depends solely on the potential as</p>
<disp-formula id="E2"><mml:math id="M2"><mml:mrow><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
<p>where &#x003C3; is a monotonically increasing function of <italic>u</italic> with saturation to prevent runaway dynamics. We adopt <inline-formula><mml:math id="M3"><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:mi>e</mml:mi><mml:mi>x</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mi>x</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:math></inline-formula>. Therefore, as &#x003C3;(0) &#x0003D; 0.5, each unit has a positive output at the resting state, allowing the network to have a baseline activity without external input. We shall refer to either the vector <bold>v</bold>(<italic>t</italic>) or <bold>u</bold>(<italic>t</italic>) as &#x0201C;states&#x0201D;, which should be clear from the context.</p>
<p>The convergence of the flow to stable states for the case of symmetric synaptic weights <italic>W</italic><sub><italic>ij</italic></sub> has been demonstrated (<xref ref-type="bibr" rid="B11">Hopfield, 1984</xref>). Throughout this work, whenever we integrate these equations using the Euler method, we do so until the network reaches convergence, defined as <inline-formula><mml:math id="M4"><mml:mo>|</mml:mo><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:msub><mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x0221E;</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0003C;</mml:mo><mml:mi>&#x003F5;</mml:mi></mml:math></inline-formula>, where &#x003F5; &#x0003D; 10<sup>&#x02212;6</sup>.</p></sec>
<sec>
<title>2.2 Pattern storage and reading</title>
<p>The states <bold>v</bold>(<italic>t</italic>) of the CHN evolve on [0, 1]<sup><italic>N</italic></sup> through continuous dynamics, with <italic>N</italic> the number of neurons. Any state in this space may represent a stored pattern. For simplicity, we restrict ourselves to storing binary patterns for which active neurons have a high firing rate, <italic>v</italic><sub><italic>i</italic></sub>&#x02248;1, and inactive neurons have a low firing rate, <italic>v</italic><sub><italic>i</italic></sub> &#x02248; 0.</p>
<p>Hence, a pattern <bold>x</bold><sup>&#x003BC;</sup> is defined as a binary vector such that <inline-formula><mml:math id="M5"><mml:mrow><mml:msup><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> where <inline-formula><mml:math id="M6"><mml:mrow><mml:msubsup><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> for each unit <italic>i</italic>. A pattern is read from the state of the network at time <italic>t</italic> using a threshold:</p>
<disp-formula id="E3"><label>(2)</label><mml:math id="M7"><mml:mrow><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>&#x003BC;</mml:mi></mml:msubsup><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mn>1</mml:mn></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mtext>if&#x000A0;</mml:mtext><mml:msub><mml:mi>v</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0003E;</mml:mo><mml:mn>0.5</mml:mn></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mn>0</mml:mn></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mtext>otherwise.</mml:mtext></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mrow></mml:math></disp-formula>
<p>Given a binary pattern <bold>x</bold><sup>&#x003BC;</sup>, it is convenient to define target potentials <inline-formula><mml:math id="M8"><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>u</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> as</p>
<disp-formula id="E4"><label>(3)</label><mml:math id="M9"><mml:mrow><mml:msubsup><mml:mover accent='true'><mml:mi>u</mml:mi><mml:mo>&#x002DC;</mml:mo></mml:mover><mml:mi>i</mml:mi><mml:mi>&#x003BC;</mml:mi></mml:msubsup><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mo>+</mml:mo><mml:msubsup><mml:mi>u</mml:mi><mml:mrow><mml:mtext>target</mml:mtext></mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:msubsup></mml:mrow></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mtext>if&#x000A0;</mml:mtext><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>&#x003BC;</mml:mi></mml:msubsup><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:msubsup><mml:mi>u</mml:mi><mml:mrow><mml:mtext>target</mml:mtext></mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:msubsup></mml:mrow></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mtext>if&#x000A0;</mml:mtext><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>&#x003BC;</mml:mi></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mrow></mml:math></disp-formula>
<p>We adopt here <italic>u</italic><sub>target</sub> &#x0003D; 6 to ensure proper pattern reading following the thresholding procedure.</p>
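<p>The mapping between binary patterns, target potentials, and thresholded readouts (Equations 2, 3) can be summarized in a few lines of Python (an illustrative sketch, not code from this article):</p>

```python
import numpy as np

U_TARGET = 6.0   # u_target = 6 pushes sigma(u) close enough to 0 or 1 for safe reading

def target_potentials(x):
    """Binary pattern x in {0,1}^N -> target potentials (Equation 3)."""
    return np.where(np.asarray(x) == 1, U_TARGET, -U_TARGET)

def read_pattern(v):
    """Read a binary pattern from the rates by thresholding at 0.5 (Equation 2)."""
    return (np.asarray(v) > 0.5).astype(int)

x = np.array([1, 0, 1, 1, 0])
u_tilde = target_potentials(x)
v = 1.0 / (1.0 + np.exp(-u_tilde))   # rates at the target potentials
recovered = read_pattern(v)          # thresholding recovers the original pattern
```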
<p>Inspired by previous work on DHN (<xref ref-type="bibr" rid="B5">Diederich and Opper, 1987</xref>), we introduce, in <xref ref-type="table" rid="T1">Algorithm 1</xref>, a perceptron-inspired learning algorithm for efficient storage of correlated patterns in CHNs. The algorithm minimizes the error between the target states and the network&#x00027;s equilibrium states, ensuring that each memory becomes a stable state. Gradient descent methods typically require small step sizes to prevent the optimization process from becoming unstable and to reduce oscillations around local minima. In our implementation, we select &#x003B1; &#x0003D; 0.0001 as the learning rate to ensure stable convergence of weight updates. The derivation of the weight update rule can be found in the <xref ref-type="supplementary-material" rid="SM1">Appendix 1</xref>. Although a rigorous proof of convergence for the algorithm is beyond the scope of this work, we expect that arguments demonstrating the convergence of the gradient descent algorithm (GDA) in the context of DHN could be adapted for this purpose (<xref ref-type="bibr" rid="B5">Diederich and Opper, 1987</xref>). Here, we rely on numerical evidence showing the network&#x00027;s ability to successfully query and revisit stored patterns.</p>
<table-wrap position="float" id="T1"> 
<label>Algorithm 1</label>
<caption><p>Gradient descent for the storage of correlated patterns (GDA)</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr><td align="left" valign="top"><monospace> 1: &#x000A0;Initialize <italic>W</italic><sub><italic>ij</italic></sub> &#x0003D; 0 for all <italic>i, j</italic></monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 2: &#x000A0;repeat</monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 3: &#x000A0;&#x000A0;&#x000A0;&#x000A0; for each pattern &#x003BC; <bold>do</bold></monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 4: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0; for each neuron <italic>i</italic> <bold>do</bold></monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 5: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0; Compute the target potential <inline-formula><mml:math id="M10"><mml:msubsup><mml:mrow><mml:mi>&#x00169;</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> (<xref ref-type="disp-formula" rid="E4">Equation 3</xref>) for each neuron <italic>j</italic> with <italic>j</italic>&#x02260;<italic>i</italic></monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 6: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0; Compute the expected potential: <inline-formula><mml:math id="M11"><mml:msubsup><mml:mrow><mml:mi>&#x000FB;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo></mml:math></inline-formula> <inline-formula><mml:math id="M12"><mml:mi>r</mml:mi><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:munder><mml:msub><mml:mrow><mml:mi>W</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mi>&#x00169;</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula></monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 7: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0; Update weights: <inline-formula><mml:math id="M13"><mml:msub><mml:mrow><mml:mi>W</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02190;</mml:mo><mml:msub><mml:mrow><mml:mi>W</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mi>&#x00169;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msubsup><mml:mo>-</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x000FB;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>r</mml:mi><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mi>&#x00169;</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula></monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 8: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0; end <bold>for</bold></monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 9: &#x000A0;&#x000A0;&#x000A0;&#x000A0; end <bold>for</bold></monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 10: &#x000A0;until ||&#x00394;<italic>W</italic>||<sub>&#x0221E;</sub> &#x0003C; &#x003F5;</monospace></td></tr>
</tbody>
</table>
</table-wrap>
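<p>Algorithm 1 can be sketched compactly in Python. The snippet below is an illustrative, vectorized rendition, not the authors' code: it updates one weight row at a time per pattern (equivalent to the per-neuron loop, since each expected potential depends only on its own row), uses a learning rate larger than the article's &#x003B1; = 0.0001 purely to keep the toy example fast, and descends the squared error between target and expected potentials:</p>

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_gda(patterns, u_target=6.0, r=1.0, alpha=1e-2, eps=1e-4, max_epochs=10_000):
    """Perceptron-like gradient descent that makes each target-potential
    vector an approximate fixed point of the CHN (sketch of Algorithm 1)."""
    patterns = np.asarray(patterns, dtype=float)
    p, n = patterns.shape
    U = np.where(patterns == 1, u_target, -u_target)   # target potentials (Equation 3)
    W = np.zeros((n, n))
    for _ in range(max_epochs):
        max_dw = 0.0
        for mu in range(p):
            s = sigma(U[mu])                  # rates at the target potentials
            u_hat = r * (W @ s)               # expected equilibrium potentials
            dW = alpha * np.outer(U[mu] - u_hat, r * s)
            np.fill_diagonal(dW, 0.0)         # no self-connections during storage
            W += dW
            max_dw = max(max_dw, np.abs(dW).max())
        if max_dw < eps:                      # ||dW||_inf < eps stopping rule
            break
    return W

# Two correlated 6-unit patterns sharing half of their active units.
pats = np.array([[1, 1, 1, 0, 0, 0],
                 [1, 1, 0, 1, 0, 0]])
W = train_gda(pats)
```

<p>After training, feeding each pattern's target rates through the learned weights reproduces the target potentials, so thresholding recovers the stored patterns despite their correlation.</p>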
 <p>Following the network training with the GDA, each activity vector <inline-formula><mml:math id="M14"><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>u</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> becomes an attractor. The network can now be queried, as the system reliably converges to the nearest stored state from a partial cue. The corresponding binary patterns <inline-formula><mml:math id="M15"><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> can then be accurately retrieved by thresholding at time <italic>t</italic><sub><italic>f</italic></sub> (<xref ref-type="disp-formula" rid="E3">Equation 2</xref>) when the system reaches convergence. The querying procedure is detailed in <xref ref-type="supplementary-material" rid="SM1">Appendix 2</xref> and illustrated in <xref ref-type="fig" rid="F1">Figure 1</xref>.</p>
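<p>A minimal querying sketch follows (illustrative Python, not code from this article): the network is initialized with a cue, relaxed to equilibrium, and the converged rates are thresholded. A hand-built two-unit weight matrix with two attractors stands in for a GDA-trained one, and the exact cue-initialization scheme of Appendix 2 is not reproduced here.</p>

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def query(W, u_cue, dt=0.01, eps=1e-6, max_steps=200_000):
    """Relax the network from a (possibly partial) cue, then read the
    retrieved binary pattern by thresholding the converged rates."""
    u = np.asarray(u_cue, dtype=float).copy()
    for _ in range(max_steps):
        dudt = W @ sigma(u) - u       # Equation 1 with c = r = 1
        u += dt * dudt
        if np.max(np.abs(dudt)) < eps:
            break
    return (sigma(u) > 0.5).astype(int)

# Hand-built weights with two attractors, (1, 0) and (0, 1),
# standing in for a trained matrix.
W = np.array([[8.0, -8.0],
              [-8.0, 8.0]])
retrieved = query(W, u_cue=np.array([2.0, -2.0]))   # cue biased toward (1, 0)
```

<p>From the complementary cue (&#x02212;2, 2), the same network settles to the other stored state, mirroring the two queries of Figure 1.</p>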
<fig position="float" id="F1">
<label>Figure 1</label>
<caption><p>Querying of two binary-pattern representations of the handwritten digits &#x0201C;3&#x0201D; and &#x0201C;4&#x0201D; from the MNIST dataset. The network is of size 20 &#x000D7; 16, as each neuron codes for a pixel. From black to white, the color represents the rate <italic>v</italic><sub><italic>i</italic></sub>(<italic>t</italic>) of each unit. At <italic>t</italic> &#x0003D; 0, the network is initialized with part of the units set to the target value, as described in <xref ref-type="supplementary-material" rid="SM1">Appendix 2</xref>. The figure displays snapshots of the evolution of the rates in time for each unit of the network. The network is queried twice: for the first query (Q1), it gradually settles to the nearest stored state, corresponding to the digit &#x0201C;3&#x0201D;; for the second (Q2), it settles to the stored state corresponding to the digit &#x0201C;4&#x0201D;.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-19-1655701-g0001.tif">
<alt-text>Two rows of four grayscale images each, labeled Q1 and Q2, show changes over time (t=0, t=20, t=40, t=60). Each image depicts different patterns of varying shades, from black to white, with a gradient scale on the right indicating values from 0.2 to 0.8.</alt-text>
</graphic>
</fig>
<p>It is worth emphasizing that, despite the continuous nature of our model, which theoretically allows for a richer state space, we deliberately restrict ourselves to binary patterns. The advantage of using a CHN is the simplicity it affords when implementing our pattern recovery mechanism (Section 2.4), limiting the appearance of false memories in the form of spurious states, which are often encountered in DHNs (<xref ref-type="bibr" rid="B2">Amit et al., 1985b</xref>). A variant of the DHN with stochastic units, the SHN (<xref ref-type="bibr" rid="B2">Amit et al., 1985b</xref>), would be a possible candidate for implementing our algorithm, as it tends to visit only the stored patterns when the annealing temperature is properly controlled. However, the stochastic properties of these networks would require a more complex setup.</p></sec>
<sec>
<title>2.3 Continuous incorporation of correlated memories</title>
<p>While the GDA effectively enables the storage of correlated memories, its implementation in <xref ref-type="table" rid="T1">Algorithm 1</xref> reveals a significant limitation: It requires multiple iterations over the entire set of patterns to achieve convergence. Without the ability to reprocess all patterns, the network would suffer from catastrophic forgetting, where learning a new pattern in isolation rapidly erodes previously stored memories (<xref ref-type="bibr" rid="B20">McCloskey and Cohen, 1989</xref>; <xref ref-type="bibr" rid="B24">Robins, 1995</xref>; <xref ref-type="bibr" rid="B12">Kirkpatrick et al., 2017</xref>; <xref ref-type="bibr" rid="B25">Shen et al., 2023</xref>). By repeatedly processing all patterns, the algorithm can find a weight matrix <bold>W</bold> that properly separates the patterns, despite their correlations (<xref ref-type="bibr" rid="B5">Diederich and Opper, 1987</xref>). Adding a new pattern, therefore, requires access to all previously stored patterns from an external source.</p>
<p>This requirement for external access to the complete memory dataset stands in contrast to biological learning systems, which must incorporate new information while maintaining past memories without relying on an explicit external copy of the already stored data. To overcome this external dependency and move toward more biologically plausible learning, a solution is to develop a mechanism that allows the network to internally recover its stored memories.</p>
<p>The development of such an autonomous retrieval mechanism would allow us to exploit the GDA&#x00027;s ability for the continuous incorporation of correlated memories. The continuous learning algorithm is formally defined in <xref ref-type="table" rid="T2">Algorithm 2</xref>. The act of retraining the network on the whole set is called a rehearsal. Recovering the whole set from the network to allow rehearsal is called retrieval.</p>
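<p>The rehearsal step just described admits a compact schematic rendering in Python. In the sketch below, <monospace>retrieve_fn</monospace> and <monospace>train_fn</monospace> are hypothetical placeholders standing in for the autonomous retrieval of Section 2.4 and for the GDA, respectively; they are not implementations of those procedures.</p>

```python
import numpy as np

def incorporate_pattern(weights, new_pattern, retrieve_fn, train_fn):
    """Schematic rendering of the rehearsal loop (Algorithm 2).

    retrieve_fn(weights)           -- returns the list of recovered binary patterns
                                      (placeholder for AR, Algorithm 3)
    train_fn(weights, pattern_set) -- returns retrained weights
                                      (placeholder for the GDA, Algorithm 1)
    """
    recovered = retrieve_fn(weights)               # retrieve the stored set internally
    pattern_set = {tuple(p) for p in recovered}    # deduplicate recovered patterns
    pattern_set.add(tuple(new_pattern))            # add the new pattern x^(p+1)
    return train_fn(weights, sorted(pattern_set))  # rehearse: retrain on the whole set
```

<p>The key point mirrored by the sketch is that the pattern set passed to the trainer is reconstructed from the network itself rather than read from an external list.</p>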
<table-wrap position="float" id="T2"> 
<label>Algorithm 2</label>
<caption><p>Continuous incorporation of correlated patterns through rehearsal of the whole memory set</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr><td align="left" valign="top"><monospace> 1: &#x000A0;Input: CHN trained on <italic>p</italic> patterns using the GDA</monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 2: &#x000A0;Given: New pattern <bold>x</bold><sup><italic>p</italic>&#x0002B;1</sup> to be stored</monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 3: &#x000A0;Retrieve set {<bold>x</bold>} of stored patterns through autonomous retrieval (AR) introduced in Section 2.4</monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 4: &#x000A0;Update pattern set: {<bold>x</bold>}&#x02190;{<bold>x</bold>}&#x0222A;{<bold>x</bold><sup><italic>p</italic>&#x0002B;1</sup>}</monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 5: &#x000A0;Apply GDA to store updated pattern set</monospace></td></tr>
</tbody>
</table>
</table-wrap></sec>
<sec>
<title>2.4 Autonomous retrieval</title>
<p>In this section, we present the autonomous retrieval (AR) mechanism, which allows the recovery of correlated patterns stored in a network.</p>
<p>Given a trained network initialized at the &#x0201C;neutral&#x0201D; state, <italic>u</italic><sub><italic>i</italic></sub> &#x0003D; 0 for each unit <italic>i</italic>, the network dynamics described by <xref ref-type="disp-formula" rid="E1">Equation 1</xref> converge deterministically to a given stored attractor, which is thus &#x0201C;retrieved&#x0201D;. This attractor can be seen as the dominant attractor from the neutral state. The goal now is to allow for the exploration of other states to permit a complete recovery of the stored memories. To do so, we introduce the adaptation terms <italic>A</italic><sub><italic>i</italic></sub>.</p>
<disp-formula id="E5"><label>(4)</label><mml:math id="M16"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>c</mml:mi><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac></mml:mtd><mml:mtd><mml:mo>=</mml:mo></mml:mtd><mml:mtd><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:munder></mml:mstyle><mml:msub><mml:mrow><mml:mi>W</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E6"><label>(5)</label><mml:math id="M17"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:mo>&#x02190;</mml:mo></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B2;</mml:mi><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>with <italic>v</italic><sub><italic>i</italic></sub>(<italic>t</italic><sub><italic>f</italic></sub>) the firing rate of neuron <italic>i</italic> after convergence of the dynamics. The parameter &#x003B2; represents the adaptation strength and is chosen to be small, typically below 0.1. The adaptation terms could correspond to various components commonly observed in the mammalian central nervous system. On short time scales, spike frequency adaptation (SFA) of excitatory neurons functions as plastic self-inhibition (<xref ref-type="bibr" rid="B8">Ha and Cheong, 2017</xref>; <xref ref-type="bibr" rid="B21">Peron and Gabbiani, 2009</xref>). Repeated stimulation progressively reduces a neuron&#x00027;s firing activity, mirroring the dynamics produced by our adaptation <italic>A</italic><sub><italic>i</italic></sub>. Alternatively, <italic>A</italic><sub><italic>i</italic></sub> can be interpreted as recurrent inhibition mediated by local interneurons, which frequently exhibit Hebbian plasticity (<xref ref-type="bibr" rid="B13">Kodangattil et al., 2013</xref>; <xref ref-type="bibr" rid="B4">D&#x00027;amour and Froemke, 2015</xref>).</p>
<p>Each time an attractor is visited, i.e., a memory is retrieved, its basin of attraction is shrunk by the update of the adaptation term (<xref ref-type="disp-formula" rid="E6">Equation 5</xref>). Once a pattern has been inhibited, the probability that the network converges to it from the neutral state is reduced. Resetting the network to the neutral state after each convergence-inhibition cycle thus allows the sequential recovery of the stored patterns.</p>
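<p>Numerically, one convergence-inhibition cycle amounts to integrating <xref ref-type="disp-formula" rid="E5">Equation 4</xref> from the neutral state and then applying <xref ref-type="disp-formula" rid="E6">Equation 5</xref>. The sketch below uses a forward-Euler scheme; the logistic rate function, the time step, and the fixed number of integration steps are assumptions of this sketch rather than choices taken from the paper.</p>

```python
import numpy as np

def rates(u):
    """Logistic rate function v = sigma(u) (an assumption of this sketch)."""
    return 1.0 / (1.0 + np.exp(-u))

def converge(W, A, c=1.0, r=1.0, dt=0.05, steps=4000):
    """Forward-Euler integration of c du_i/dt = sum_j W_ij v_j - A_i v_i - u_i / r
    (Equation 4), starting from the neutral state u = 0."""
    u = np.zeros(len(A))
    for _ in range(steps):
        v = rates(u)
        u = u + (dt / c) * (W @ v - A * v - u / r)
    return u, rates(u)

def update_adaptation(A, v_final, beta=0.05):
    """Potentiate the adaptation terms after convergence (Equation 5)."""
    return A + beta * v_final
```

<p>Resetting <monospace>u</monospace> to zero between calls and alternating <monospace>converge</monospace> with <monospace>update_adaptation</monospace> reproduces the convergence-inhibition cycle described above.</p>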
<p><xref ref-type="fig" rid="F2">Figure 2</xref> illustrates the sequential recovery of two stored patterns and the modification of their attractor basin during the procedure. The geometry of these attractor basins explains why a minimal inhibitory influence, resulting from a small &#x003B2; value, is sufficient to alter the trajectory. The neutral state resides near the separatrix that divides the attractor basins. Consequently, even a slight modification of the separatrix position, caused by inhibitory potentiation, can significantly redirect the network&#x00027;s trajectory.</p>
<fig position="float" id="F2">
<label>Figure 2</label>
<caption><p>Stream plots of the CHN in a 2D subspace spanned by two pattern states. We examine a network with two stored patterns, labeled &#x0201C;1&#x0201D; and &#x0201C;2&#x0201D;, and their stable states <inline-formula><mml:math id="M18"><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>u</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> and <inline-formula><mml:math id="M19"><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>u</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>; these define a two-dimensional subspace parametrized by <inline-formula><mml:math id="M20"><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>u</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>u</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle 
mathvariant="bold"><mml:mtext>u</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>. At each point <inline-formula><mml:math id="M21"><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>u</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> in this subspace, we compute the full <italic>N</italic>-dimensional time derivative via <xref ref-type="disp-formula" rid="E5">Equation 4</xref> and project it back onto the <inline-formula><mml:math id="M22"><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>u</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>u</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>-plane to obtain the plotted flow. 
<inline-formula><mml:math id="M23"><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>u</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> and <inline-formula><mml:math id="M24"><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>u</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> have minimal correlation as they are generated from random binary patterns <bold>x</bold><sup>1</sup> and <bold>x</bold><sup>2</sup> &#x02208; {0, 1}<sup><italic>N</italic></sup>. <bold>(left)</bold> Before training. <bold>(middle)</bold> After storing the two patterns using the GDA, each pattern state (small red dots) becomes a stable attractor; the neutral state <bold>u</bold><sub><italic>N</italic></sub> &#x0003D; <bold>0</bold> (large red dot) is on an unstable manifold. The red line corresponds to the trajectory of the network projected on the subspace. The trajectory follows the unstable manifold before arbitrarily falling into one of the two stored patterns. <bold>(right)</bold> Adaptation <bold>A</bold> is updated to apply enhanced inhibition to pattern 1, which shrinks its basin of attraction. This induces a shift in the position of the separatrix, favoring the flow to pattern 2. An exaggeratedly large inhibitory coefficient &#x003B2; &#x0003D; 0.5 is used to illustrate the modification of the vector field. Similar stream plots are obtained from networks storing patterns with various degrees of correlation using the GDA.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-19-1655701-g0002.tif">
<alt-text>Flow field diagrams illustrating vector changes across three stages: pre-training, post-training, and post-inhibition. The diagrams show directional arrows and paths, with red lines highlighting significant trajectories labeled 1 and 2. Axes are labeled &#x003BB;1 and &#x003BB;2.</alt-text>
</graphic>
</fig>
<p>The increase in adaptation tends to distort the stable states associated with stored patterns. As this distortion is minimal for small &#x003B2;, it is mostly compensated for by the thresholding mechanism used to read the network output (Section 2.2). To further reduce the impact of this distortion, we divide the convergence dynamics into two phases, a &#x0201C;biased&#x0201D; phase and a &#x0201C;free&#x0201D; phase. The biased phase guides the network to converge toward states that have not yet been retrieved; it corresponds to simulating the network with adaptation (<xref ref-type="disp-formula" rid="E5">Equation 4</xref>) until convergence. The optional free phase then allows the network to complete its convergence to an undistorted stored state; it corresponds to simulating the network without the inhibitory drive (<xref ref-type="disp-formula" rid="E1">Equation 1</xref>) until convergence. The whole procedure is detailed in <xref ref-type="table" rid="T3">Algorithm 3</xref>.</p>
<table-wrap position="float" id="T3"> 
<label>Algorithm 3</label>
<caption><p>Autonomous retrieval (AR)</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr><td align="left" valign="top"><monospace> 1: &#x000A0;Input: Trained network weights <italic>W</italic><sub><italic>ij</italic></sub>, number of iterations <italic>k</italic>, plasticity rate &#x003B2;</monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 2: &#x000A0;Initialize: <italic>A</italic><sub><italic>i</italic></sub> &#x0003D; 0, {<bold>x</bold>} &#x0003D; &#x02205;, <italic>j</italic> &#x0003D; 0</monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 3: &#x000A0;<bold>while</bold> <italic>j</italic>&#x0003C;<italic>k</italic> <bold>do</bold></monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 4: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0; Set neutral initial conditions: <bold>u</bold>(<italic>t</italic> &#x0003D; 0) &#x0003D; <bold>0</bold></monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 5: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0; Biased phase: Integrate <xref ref-type="disp-formula" rid="E5">Equation 4</xref> until convergence to the state <bold>u</bold>(<italic>t</italic><sub><italic>b</italic></sub>) &#x0003D; <bold>u</bold><sub><italic>b</italic></sub></monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 6: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0; Free phase: from the state <bold>u</bold>(<italic>t</italic><sub><italic>b</italic></sub>), integrate <xref ref-type="disp-formula" rid="E1">Equation 1</xref> until convergence &#x022B3; optional</monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 7: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0; Read pattern <bold>x</bold><sup>&#x003BC;</sup> from final state <bold>v</bold>(<italic>t</italic><sub><italic>f</italic></sub>) via thresholding (<xref ref-type="disp-formula" rid="E3">Equation 3</xref>)</monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 8: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0; Update retrieved pattern set: {<bold>x</bold>}&#x02190;{<bold>x</bold>}&#x0222A;{<bold>x</bold><sup>&#x003BC;</sup>}</monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 9: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0; Update inhibitory weights: <italic>A</italic><sub><italic>i</italic></sub>&#x02190;<italic>A</italic><sub><italic>i</italic></sub>&#x0002B;&#x003B2;<italic>v</italic><sub><italic>i</italic></sub>(<italic>t</italic><sub><italic>f</italic></sub>)</monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 10: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0; <italic>j</italic>&#x02190;<italic>j</italic>&#x0002B;1</monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 11: &#x000A0;<bold>end while</bold></monospace> </td></tr>
<tr><td align="left" valign="top"><monospace> 12: &#x000A0;Return: {<bold>x</bold>}</monospace></td></tr>
</tbody>
</table>
</table-wrap>
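<p>As a concrete, deliberately minimal instance of the procedure above, the Python sketch below runs the biased phase on a two-unit toy network whose weights define the two stable binary read-outs (1, 0) and (0, 1). The logistic rate function, the toy weight matrix, the small initial bias replacing the exact neutral state (where the symmetric toy dynamics would otherwise stall), and the exaggeratedly large &#x003B2; are all assumptions of this sketch; the optional free phase is omitted.</p>

```python
import numpy as np

def rates(u):
    return 1.0 / (1.0 + np.exp(-u))           # logistic rate function (assumption)

def autonomous_retrieval(W, k, beta, eps=0.1, dt=0.05, steps=3000):
    """Biased-phase-only sketch of the AR loop (Algorithm 3)."""
    N = W.shape[0]
    A = np.zeros(N)                           # adaptation terms start at zero
    retrieved = []
    for _ in range(k):
        u = np.zeros(N)
        u[0] = eps                            # near-neutral initial condition
        for _ in range(steps):                # biased phase: integrate Equation 4
            v = rates(u)
            u = u + dt * (W @ v - A * v - u)
        v = rates(u)
        retrieved.append(tuple((v > 0.5).astype(int)))  # threshold read-out
        A = A + beta * v                      # inhibit the retrieved attractor
    return retrieved

# Toy network: self-excitation and mutual inhibition make (1, 0) and (0, 1)
# the two stable binary read-outs.
W_toy = np.array([[6.0, -6.0],
                  [-6.0, 6.0]])
print(autonomous_retrieval(W_toy, k=2, beta=12.0))  # [(1, 0), (0, 1)]
```

<p>On the first iteration the network falls into the dominant attractor (1, 0); the adaptation update then shrinks its basin, and the second iteration converges to (0, 1), illustrating the sequential recovery of both stored states.</p>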
<p><xref ref-type="fig" rid="F3">Figure 3</xref> provides a visualization of AR for binary-pattern representations of handwritten digits from the MNIST dataset (<xref ref-type="bibr" rid="B17">LeCun et al., 1998</xref>). On the first iteration, the inhibitory drive is null, as no pattern has yet been retrieved. The pattern corresponding to the binary picture of a 3 is recovered. The network state is then reinitialized with the updated adaptation (<bold>A</bold>). The network now inhibits the recovery of the 3, which induces convergence toward a second pattern, here a 4. Adaptation is updated again so that it inhibits both the 3 and the 4, which in turn allows the recovery of the 5, and so on. For the recovery of MNIST binary digits, since a large inhibitory coefficient &#x003B2; has been chosen, the free phase is mandatory to reduce the distortion of the stored patterns.</p>
<fig position="float" id="F3">
<label>Figure 3</label>
<caption><p>Recovery of four binary-pattern representations of handwritten digits from the MNIST dataset: &#x0201C;3&#x0201D;, &#x0201C;4&#x0201D;, &#x0201C;6&#x0201D;, &#x0201C;5&#x0201D;. The network is of size 20 &#x000D7; 16, as each neuron codes for a pixel. <bold>Left</bold> (in red): evolution of the adaptation of each neuron <italic>A</italic><sub><italic>i</italic></sub>. <bold>Right</bold> (blue and yellow): snapshots of the evolution of the rates <italic>v</italic><sub><italic>i</italic></sub>(<italic>t</italic>) for each unit of the network. Each row corresponds to the start of a new memory retrieval in the CHN. The free phase corresponds to the removal of the inhibitory drive biasing the activity of each neuron, as described in <xref ref-type="table" rid="T3">Algorithm 3</xref>. The panels illustrate how the biased phase &#x0201C;orients&#x0201D; the evolution, after which the free phase provides the precise convergence to a stored pattern. An exaggeratedly large inhibitory coefficient &#x003B2; &#x0003D; 2 is used to illustrate the stable-state deformation at the end of the biased phase, highlighting the need for the free phase.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-19-1655701-g0003.tif">
<alt-text>Graphical comparison of two phases over four iterations. The left column displays the &#x0201C;Biased phase&#x0201D; in four red-tinted heatmap visuals, indicating varying amplitude (A) from 0 to 8. The right column presents the &#x0201C;Free phase&#x0201D; in black-and-white heatmaps. Each row represents a sequential increase in iterations from one to four, showcasing the changes in patterns between the phases.</alt-text>
</graphic>
</fig>
<p>The order in which patterns are visited is an emergent feature of the learning algorithm that we have not studied in this work. With each iteration of the AR algorithm, the inhibitory synapses undergo potentiation. This growth is constrained only by the number of iterations and the value of &#x003B2;. In theory, such unbounded growth could cause the adaptation <italic>A</italic><sub><italic>i</italic></sub> to disrupt the attractor dynamics induced by <bold>W</bold>. In practice, however, this potential issue can be managed by adjusting the value of &#x003B2; to the number of iterations: when more iterations are required, using a smaller &#x003B2; is sufficient to maintain robust pattern retrieval without compromising the attractor dynamics.</p>
<p>In <xref ref-type="fig" rid="F4">Figure 4</xref>, we illustrate the dynamics of the CHN during the retrieval of stored memory patterns in networks with various loads. The load corresponds to <italic>M</italic>/<italic>N</italic>, with <italic>M</italic> the number of stored memories and <italic>N</italic> the size of the network. For low loads (see <xref ref-type="fig" rid="F4">Figure 4</xref> top), the system converges sequentially to the attractors corresponding to the stored patterns. Once every memory has been recovered, if the simulation is not ended, the network falls back into the stored states without showing any false memories. The lower the value of &#x003B2;, the longer this dynamic can proceed without encountering spurious states. The free phase has no utility in this scenario: the attractors are well separated, and the disruption of the energy landscape by adaptation is minimal. For critical loads (see <xref ref-type="fig" rid="F4">Figure 4</xref> middle), the retrieval process becomes more challenging. The attractors exhibit reduced separation, and the network struggles to converge to the stored states once inhibition is applied. In this scenario, the free phase demonstrates its utility. At the end of the biased phase, during the second iteration of the AR, the network remains in a mixed state characterized by ambiguous correlations with multiple stored patterns. The free phase enables the network to resolve this ambiguity and ultimately converge to a stored state. At high loads (see <xref ref-type="fig" rid="F4">Figure 4</xref> bottom), the network successfully retrieves some stored states but mostly falls into spurious attractors. Under these conditions, even the free phase cannot rescue the recovery dynamics.</p>
<fig position="float" id="F4">
<label>Figure 4</label>
<caption><p>Visualization of AR (<xref ref-type="table" rid="T3">Algorithm 3</xref>). The x-axis shows the number of time steps for simulations of the CHN, with all query iterations of AR concatenated. The y-axis shows the correlation between the state of the network and each stored state, with each color corresponding to a specific memory. Vertical red dashed lines separate successive memory retrieval sequences. At the beginning of each memory recall, the adaptation <bold>A</bold> is updated and the network is reset to the neutral state. The yellow dashed lines indicate the end of the biased phase. Between the yellow and red lines is the free phase, during which self-inhibition is removed so that the network converges precisely into a stored state. The considered patterns have minimal correlation, as they are generated from random binary vectors. <bold>(Top)</bold> Recovery dynamics for a low-load network: 5 patterns for a network of 60 units. <bold>(Middle)</bold> Recovery dynamics for a critical-load network: 5 stored patterns for a network of size 30. <bold>(Bottom)</bold> Recovery dynamics for a high-load network: 16 stored patterns for a network of size 60.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-19-1655701-g0004.tif">
<alt-text>Three line graphs showing correlation data over time, labeled &#x0201C;Corr &#x02329;&#x003BC;i&#x003BC;i(t)&#x0232A;&#x0201D;. Each graph has multiple colored lines representing different datasets, with vertical dashed lines in red and yellow indicating significant points or transitions. The x-axis ranges from zero to five thousand, zero to eight thousand, and zero to twenty thousand, respectively, labeled with &#x0201C;t&#x0201D;. The y-axis ranges from 0.4 to 1.0 for correlation values.</alt-text>
</graphic>
</fig>
<p>These autonomous recovery dynamics are reminiscent of memory replays observed in the central nervous system of mammals during quiescent states, such as sleep or rest. This process is believed to function as a consolidation mechanism that mitigates or modulates memory forgetting (<xref ref-type="bibr" rid="B24">Robins, 1995</xref>; <xref ref-type="bibr" rid="B18">Louie and Wilson, 2001</xref>; <xref ref-type="bibr" rid="B22">Peyrache et al., 2009</xref>; <xref ref-type="bibr" rid="B6">Fauth and Van Rossum, 2019</xref>; <xref ref-type="bibr" rid="B26">Tononi and Cirelli, 2014</xref>).</p>
<p>The use of recurrent plastic inhibitory synapses has also been tested and shows the same qualitative dynamics as those observed for networks with units that undergo adaptation. The model and results are detailed in <xref ref-type="supplementary-material" rid="SM1">Appendix 4</xref>. As implementing adaptation requires less computation and fewer parameters, the following work focuses only on the adaptation model.</p></sec></sec>
<sec sec-type="results" id="s3">
<title>3 Results</title>
<p>In this section, we evaluate the ability of our algorithm to retrieve correlated patterns from a given network. We consider the retrieval of pattern sets with various amounts of correlation. Each set is assigned a random binary pattern, the parent pattern. Each pattern of a given set is generated by randomly choosing and randomizing a fraction (1&#x02212;&#x003C1;) of bits from the parent pattern, as described in Algorithm 5 (<xref ref-type="supplementary-material" rid="SM1">Appendix 3</xref>). A higher &#x003C1; therefore induces more correlation, while a lower &#x003C1; results in less correlation. Networks with various loads are tested. All retrieval dynamics are simulated without the free phase, as we observed that, for the retrieval of random binary patterns with small inhibitory potentiation values &#x003B2;, the free phase does not significantly improve retrieval performance.</p>
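<p>Our reading of this generation procedure can be sketched as follows; the function below is an interpretation of the description above, not a reproduction of Algorithm 5 from Appendix 3.</p>

```python
import numpy as np

def make_correlated_set(parent, n_patterns, rho, rng):
    """Generate binary patterns correlated with a parent pattern: each child
    copies the parent, then a randomly chosen fraction (1 - rho) of its bits
    is re-drawn uniformly at random (a sketch; Algorithm 5 may differ in detail)."""
    N = parent.size
    n_random = int(round((1.0 - rho) * N))        # number of bits to randomize
    patterns = []
    for _ in range(n_patterns):
        child = parent.copy()
        idx = rng.choice(N, size=n_random, replace=False)
        child[idx] = rng.integers(0, 2, size=n_random)
        patterns.append(child)
    return patterns

rng = np.random.default_rng(0)
parent = rng.integers(0, 2, size=500)
# rho = 1 randomizes no bits, so every child equals the parent;
# lower rho values yield lower average overlap with the parent.
identical = make_correlated_set(parent, 5, rho=1.0, rng=rng)
```

<p>Note that a randomized bit keeps its original value with probability 1/2, so the expected overlap with the parent is &#x003C1; &#x0002B; (1&#x02212;&#x003C1;)/2 rather than &#x003C1; itself.</p>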
<p><xref ref-type="fig" rid="F5">Figure 5</xref> illustrates, for various correlations, the loads for which AR allows the systematic recovery of all stored patterns without external cues or memory lists. This property enables the continuous incorporation of new correlated memories, as described in <xref ref-type="table" rid="T2">Algorithm 2</xref>. Retrieval improves with network size&#x02014;larger networks can reliably retrieve more stored patterns without encountering false memories. Rather surprisingly, a moderate amount of correlation (&#x003C1;&#x02248;0.5) tends to improve pattern retrieval compared with both highly correlated and minimally correlated pattern sets. This finding contrasts with traditional associative memory models, where correlation typically degrades performance (<xref ref-type="bibr" rid="B7">Fontanari and Theumann, 1990</xref>).</p>
<fig position="float" id="F5">
<label>Figure 5</label>
<caption><p>Recovery capacity of AR for various correlations, modulated by &#x003C1;. Data are averaged over 20 simulations with different pattern sets. Correlated patterns are generated using Algorithm 5 (<xref ref-type="supplementary-material" rid="SM1">Appendix 3</xref>). For networks of various sizes and numbers of stored patterns, we employ <xref ref-type="table" rid="T3">Algorithm 3</xref> until all patterns are recovered or until a spurious state is found. <bold>(Top row)</bold> The percentage of &#x0201C;full retrieval&#x0201D;, i.e., simulation runs in which no spurious state occurs before all stored patterns are recovered. <bold>(Bottom row)</bold> Number of iterations required to recover all patterns. Recovery is best for &#x003C1; &#x0003D; 0.5, which corresponds to an equal number of correlated and uncorrelated bits. For small &#x003C1; values, with few correlated bits, performance worsens. For highly correlated sets, only very low loads are recovered without false memories.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-19-1655701-g0005.tif">
<alt-text>Heatmaps compare network size against the number of stored patterns for different values of &#x003C1; (0, 0.25, 0.5, 0.75, 0.9). Colors range from light to dark, indicating low to high values on two different scales: 0 to 100 for the top row and 0 to 117 for the bottom row.</alt-text>
</graphic>
</fig>
<p>Our interpretation is that an intermediate amount of correlation helps drive the system in the &#x0201C;good&#x0201D; direction in the early stages of the evolution, i.e., while the state is still rather close to the neutral state and the energy landscape is rather featureless. Once driven to a point of the state space proximal to the stored patterns, the system can finish the convergence. As illustrated in <xref ref-type="supplementary-material" rid="SM1">Appendix 5</xref>, Appendix Figure 3, this push in the good direction can be measured through the average correlation between the synaptic drive of each unit and the stored patterns.</p>
<p>As our simulations show, to limit the appearance of false memories, the value of &#x003B2; must be relatively small compared with the target potential <italic>u</italic><sub>target</sub>. In our case, we found it convenient to keep &#x003B2; smaller than 0.1. By keeping the nudging of the convergence dynamics small, the emergence of false memories that might otherwise result from deformations of the energy landscape is reduced. <xref ref-type="fig" rid="F6">Figure 6</xref> indicates the existence of a trade-off: smaller &#x003B2; values require more iterations to retrieve all patterns but provide greater stability, while larger values accelerate pattern retrieval at the cost of an increased probability of encountering a spurious state. Values of &#x003B2; higher than those considered in this study lead to a catastrophic degradation of the recovery dynamics, in which no stored patterns are recovered; lower values only increase the number of iterations needed to recover the pattern set.</p>
<fig position="float" id="F6">
<label>Figure 6</label>
<caption><p>Similar to <xref ref-type="fig" rid="F5">Figure 5</xref> but for various &#x003B2; values. Correlated patterns are generated with &#x003C1; &#x0003D; 0.5. Lower &#x003B2; values require more iterations to recover the complete set of stored patterns. Higher &#x003B2; values reduce the number of iterations needed but increase the probability of encountering spurious states when the load is high.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-19-1655701-g0006.tif">
<alt-text>Two rows of color-coded heatmaps showing stored patterns versus network size at different beta values (0.025, 0.05, 0.1, 0.5, 10). The top row&#x00027;s color scale ranges from 0 to 100, while the bottom row&#x00027;s extends to 120. Each subplot is labeled with its respective beta value. Darker colors indicate higher values.</alt-text>
</graphic>
</fig>
<p>We now report some qualitative observations on the efficiency of our algorithm. As expected, the number of iterations required to successfully store patterns using <xref ref-type="table" rid="T1">Algorithm 1</xref> increases with the memory load (<xref ref-type="fig" rid="F7">Figure 7</xref>). Moreover, the higher the correlation between patterns, the higher the number of iterations needed for storage. Overall, our method requires significantly more iterations than previously documented for associative memory tasks in DHNs (<xref ref-type="bibr" rid="B5">Diederich and Opper, 1987</xref>). This increase in computational cost may stem from two factors. First, training a CHN through the GDA (<xref ref-type="table" rid="T1">Algorithm 1</xref>) inherently requires more iterations than training DHNs. Second, we observed that a smaller convergence parameter &#x003F5; in <xref ref-type="table" rid="T1">Algorithm 1</xref>, while computationally more demanding, yields superior retrieval performance. We hypothesize that this tighter convergence criterion induces stronger competition between pattern attractors at the neutral state, which results in an enhanced responsiveness of the network to the subtle modifications of attractor basins induced by W&#x00027; during retrieval. These observations led to the adoption of a very small &#x003F5; &#x0003D; 10<sup>&#x02212;6</sup>, which demanded more iterations when performing the GDA.</p>
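<p>For reference, the iterative storage rule of <xref ref-type="bibr" rid="B5">Diederich and Opper (1987)</xref> for DHNs, against which the iteration counts above are compared, can be sketched as follows for &#x000B1;1 patterns. This is our paraphrase of the classic rule, not the GDA of Algorithm 1.</p>

```python
import numpy as np

def diederich_opper_store(patterns, kappa=1.0, lam=None, max_sweeps=500):
    """Perceptron-like storage for a DHN: whenever the aligned local field
    xi_i * h_i of unit i on a pattern falls below the margin kappa, add a
    Hebbian increment to the incoming weights of that unit (a paraphrase of
    the Diederich-Opper rule)."""
    P, N = patterns.shape
    lam = 1.0 / N if lam is None else lam
    W = np.zeros((N, N))
    for sweep in range(max_sweeps):
        updated = False
        for xi in patterns:
            h = W @ xi                        # local fields for this pattern
            weak = kappa > xi * h             # units with insufficient stability
            if weak.any():
                W += lam * np.outer(weak * xi, xi)  # Hebbian increment, weak rows only
                np.fill_diagonal(W, 0.0)            # keep zero self-coupling
                updated = True
        if not updated:
            return W, sweep                   # every pattern stable with margin kappa
    return W, max_sweeps
```

<p>For loads well below capacity, this rule converges within a few sweeps over the pattern set, which gives a sense of the baseline iteration counts reported for DHNs.</p>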
<fig position="float" id="F7">
<label>Figure 7</label>
<caption><p>Convergence time of the GDA. Color intensity shows the number of iterations required to store a given number of patterns using GDA (<xref ref-type="table" rid="T1">Algorithm 1</xref>). Data are averaged over 20 simulations for various pattern sets.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-19-1655701-g0007.tif">
<alt-text>Three heat maps display the relationship between network size and the number of patterns, ranging from 1 to 25. The color gradient from yellow to dark red indicates the logarithm of the number of iterations, with yellow representing higher values and dark red representing lower values. Each map shows dense clustering in the upper left corner, gradually decreasing towards the bottom right.</alt-text>
</graphic>
</fig>
</sec>
<sec sec-type="discussion" id="s4">
<title>4 Discussion</title>
<p>Our work introduces a biologically inspired mechanism for the continuous incorporation of correlated patterns in associative networks. CL is made possible through the autonomous recovery of all stored patterns during a retrieval phase. By systematically retrieving memories, the network can incorporate new patterns while mitigating the forgetting of the ones already stored. Autonomous retrieval is made possible by adaptation, obviating the need to recall patterns from an external list.</p>
<p>Previous work in computational neuroscience indicates that inhibitory circuits may play a critical role in the regulation of neural activity and plasticity (<xref ref-type="bibr" rid="B27">Vogels et al., 2011</xref>; <xref ref-type="bibr" rid="B3">Barron et al., 2016</xref>). Here, we demonstrate that inhibitory plasticity or SFA could be one of the key mechanisms that allow the sequential reactivation of memories observed during sleep and resting states (<xref ref-type="bibr" rid="B28">Wilson and McNaughton, 1994</xref>; <xref ref-type="bibr" rid="B22">Peyrache et al., 2009</xref>). A property of associative networks highlighted by our approach is that subtle changes in self-inhibition can drive substantial shifts in network dynamics without disrupting the fundamental structure of stored attractors. Inhibition, therefore, allows context-dependent activity of the network, as observed in experimental setups (<xref ref-type="bibr" rid="B16">Kuchibhotla et al., 2017</xref>). Biological neural circuits might, therefore, employ similar mechanisms to navigate complex, correlated memory spaces. Our results indicate that larger networks experience fewer spurious state visits, suggesting improved reliability with scale. However, understanding how these dynamics extend to networks of biologically relevant sizes would require further investigation.</p>
<p>Traditional associative memory models typically suffer from decreased capacity when storing correlated patterns (<xref ref-type="bibr" rid="B2">Amit et al., 1985b</xref>). However, our findings reveal the rather unexpected feature that moderate correlation levels actually improve pattern retrieval in the context of autonomous retrieval. We argue that this result may arise from the way correlated structures shape the geometry of the attractor basins and the ensuing flow toward them. A more thorough theoretical analysis of this phenomenon could clarify the factors that influence the robustness of recovery dynamics in both artificial and biological systems.</p></sec>
<sec sec-type="conclusions" id="s5">
<title>5 Conclusion</title>
<p>In this work, we demonstrate how adaptation can enable autonomous exploration of attractor landscapes in continuous Hopfield networks. Our key finding reveals that, under a critical load, inhibitory plasticity allows networks to systematically retrieve the entire set of stored memories, even for highly correlated sets. This property allowed us to propose and test an algorithmic scheme for continuous learning leveraging the ability of gradient descent to store correlated patterns. The capacity for self-directed pattern exploration, emerging from inhibitory modulation, offers insights for both biological memory consolidation and neuromorphic computing.</p></sec>
</body>
<back>
<sec sec-type="data-availability" id="s6">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article/<xref ref-type="supplementary-material" rid="SM1">Supplementary material</xref>; further inquiries can be directed to the corresponding author.</p>
</sec>
<sec sec-type="author-contributions" id="s7">
<title>Author contributions</title>
<p>PS: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Software, Visualization, Writing &#x02013; original draft, Writing &#x02013; review &#x00026; editing. MR: Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Writing &#x02013; review &#x00026; editing.</p>
</sec>
<sec sec-type="funding-information" id="s8">
<title>Funding</title>
<p>The author(s) declare that financial support was received for the research and/or publication of this article. This project received financial support from the Ile-de-France region through the program DIM AI4IDF. MR acknowledges support from the French ANR project MemAI (ANR-23-CE30-0040-01) and the CNRS MITI program Osez2025.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="ai-statement" id="s9">
<title>Generative AI statement</title>
<p>The author(s) declare that no Gen AI was used in the creation of this manuscript.</p>
<p>Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.</p></sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec sec-type="supplementary-material" id="s11">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fncom.2025.1655701/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fncom.2025.1655701/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Table_1.pdf" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/></sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amit</surname> <given-names>D. J.</given-names></name> <name><surname>Gutfreund</surname> <given-names>H.</given-names></name> <name><surname>Sompolinsky</surname> <given-names>H.</given-names></name></person-group> (<year>1985a</year>). <article-title>Storing infinite numbers of patterns in a spin-glass model of neural networks</article-title>. <source>Phys. Rev. Lett</source>. <volume>55</volume>, <fpage>1530</fpage>&#x02013;<lpage>1533</lpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.55.1530</pub-id><pub-id pub-id-type="pmid">10031847</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amit</surname> <given-names>D. J.</given-names></name> <name><surname>Gutfreund</surname> <given-names>H.</given-names></name> <name><surname>Sompolinsky</surname> <given-names>H.</given-names></name></person-group> (<year>1985b</year>). <article-title>Spin-glass models of neural networks</article-title>. <source>Phys. Rev. A</source>. <volume>32</volume>, <fpage>1007</fpage>&#x02013;<lpage>1018</lpage>. <pub-id pub-id-type="doi">10.1103/PhysRevA.32.1007</pub-id><pub-id pub-id-type="pmid">9896156</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barron</surname> <given-names>H. C.</given-names></name> <name><surname>Vogels</surname> <given-names>T. P.</given-names></name> <name><surname>Emir</surname> <given-names>U. E.</given-names></name> <name><surname>Makin</surname> <given-names>T. R.</given-names></name> <name><surname>O&#x00027;Shea</surname> <given-names>J.</given-names></name> <name><surname>Clare</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Unmasking latent inhibitory connections in human cortex to reveal dormant cortical memories</article-title>. <source>Neuron</source> <volume>90</volume>, <fpage>191</fpage>&#x02013;<lpage>203</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2016.02.031</pub-id><pub-id pub-id-type="pmid">26996082</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>D&#x00027;amour</surname> <given-names>J.</given-names></name> <name><surname>Froemke</surname> <given-names>R.</given-names></name></person-group> (<year>2015</year>). <article-title>Inhibitory and excitatory spike-timing-dependent plasticity in the auditory cortex</article-title>. <source>Neuron</source> <volume>86</volume>, <fpage>514</fpage>&#x02013;<lpage>528</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2015.03.014</pub-id><pub-id pub-id-type="pmid">25843405</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Diederich</surname> <given-names>S.</given-names></name> <name><surname>Opper</surname> <given-names>M.</given-names></name></person-group> (<year>1987</year>). <article-title>Learning of correlated patterns in spin-glass networks by local learning rules</article-title>. <source>Phys. Rev. Lett</source>. <volume>58</volume>, <fpage>949</fpage>&#x02013;<lpage>952</lpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.58.949</pub-id><pub-id pub-id-type="pmid">10035080</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fauth</surname> <given-names>M. J.</given-names></name> <name><surname>Van Rossum</surname> <given-names>M. C.</given-names></name></person-group> (<year>2019</year>). <article-title>Self-organized reactivation maintains and reinforces memories despite synaptic turnover</article-title>. <source>Elife</source> <volume>8</volume>:<fpage>e43717</fpage>. <pub-id pub-id-type="doi">10.7554/eLife.43717</pub-id><pub-id pub-id-type="pmid">31074745</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fontanari</surname> <given-names>J. F.</given-names></name> <name><surname>Theumann</surname> <given-names>W. K.</given-names></name></person-group> (<year>1990</year>). <article-title>On the storage of correlated patterns in Hopfield&#x00027;s model</article-title>. <source>Journal de Physique</source>. <volume>51</volume>, <fpage>375</fpage>&#x02013;<lpage>386</lpage>. <pub-id pub-id-type="doi">10.1051/jphys:01990005105037500</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ha</surname> <given-names>G. E.</given-names></name> <name><surname>Cheong</surname> <given-names>E.</given-names></name></person-group> (<year>2017</year>). <article-title>Spike frequency adaptation in neurons of the central nervous system</article-title>. <source>Exp Neurobiol</source>. <volume>26</volume>, <fpage>179</fpage>&#x02013;<lpage>185</lpage>. <pub-id pub-id-type="doi">10.5607/en.2017.26.4.179</pub-id><pub-id pub-id-type="pmid">28912640</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haxby</surname> <given-names>J. V.</given-names></name> <name><surname>Gobbini</surname> <given-names>M. I.</given-names></name> <name><surname>Furey</surname> <given-names>M. L.</given-names></name> <name><surname>Ishai</surname> <given-names>A.</given-names></name> <name><surname>Schouten</surname> <given-names>J. L.</given-names></name> <name><surname>Pietrini</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2001</year>). <article-title>Distributed and overlapping representations of faces and objects in ventral temporal cortex</article-title>. <source>Science</source> <volume>293</volume>, <fpage>2425</fpage>&#x02013;<lpage>2430</lpage>. <pub-id pub-id-type="doi">10.1126/science.1063736</pub-id><pub-id pub-id-type="pmid">11577229</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hopfield</surname> <given-names>J. J.</given-names></name></person-group> (<year>1982</year>). <article-title>Neural networks and physical systems with emergent collective computational abilities</article-title>. <source>Proc. Nat. Acad. Sci</source>. <volume>79</volume>, <fpage>2554</fpage>&#x02013;<lpage>2558</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.79.8.2554</pub-id><pub-id pub-id-type="pmid">6953413</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hopfield</surname> <given-names>J. J.</given-names></name></person-group> (<year>1984</year>). <article-title>Neurons with graded response have collective computational properties like those of two-state neurons</article-title>. <source>Proc. Nat. Acad. Sci</source>. <volume>81</volume>, <fpage>3088</fpage>&#x02013;<lpage>3092</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.81.10.3088</pub-id><pub-id pub-id-type="pmid">6587342</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kirkpatrick</surname> <given-names>J.</given-names></name> <name><surname>Pascanu</surname> <given-names>R.</given-names></name> <name><surname>Rabinowitz</surname> <given-names>N.</given-names></name> <name><surname>Veness</surname> <given-names>J.</given-names></name> <name><surname>Desjardins</surname> <given-names>G.</given-names></name> <name><surname>Rusu</surname> <given-names>A. A.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Overcoming catastrophic forgetting in neural networks</article-title>. <source>Proc. Nat. Acad. Sci</source>. <volume>114</volume>, <fpage>3521</fpage>&#x02013;<lpage>3526</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1611835114</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kodangattil</surname> <given-names>J. N.</given-names></name> <name><surname>Dacher</surname> <given-names>M.</given-names></name> <name><surname>Authement</surname> <given-names>M. E.</given-names></name> <name><surname>Nugent</surname> <given-names>F. S.</given-names></name></person-group> (<year>2013</year>). <article-title>Spike timing-dependent plasticity at GABAergic synapses in the ventral tegmental area</article-title>. <source>J. Physiol</source>. <volume>591</volume>, <fpage>4699</fpage>&#x02013;<lpage>4710</lpage>. <pub-id pub-id-type="doi">10.1113/jphysiol.2013.257873</pub-id><pub-id pub-id-type="pmid">23897235</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kothapalli</surname> <given-names>V.</given-names></name></person-group> (<year>2023</year>). <article-title>Neural collapse: a review on modelling principles and generalization</article-title>. <source>arXiv</source> [preprint] arXiv:2206.04041. <pub-id pub-id-type="doi">10.48550/arXiv.2206.04041</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kriegeskorte</surname> <given-names>N.</given-names></name></person-group> (<year>2008</year>). <article-title>Representational similarity analysis and connecting the branches of systems neuroscience</article-title>. <source>Front. Syst. Neurosci</source>. <volume>2</volume>:<fpage>2008</fpage>. <pub-id pub-id-type="doi">10.3389/neuro.06.004.2008</pub-id><pub-id pub-id-type="pmid">19104670</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuchibhotla</surname> <given-names>K. V.</given-names></name> <name><surname>Gill</surname> <given-names>J. V.</given-names></name> <name><surname>Lindsay</surname> <given-names>G. W.</given-names></name> <name><surname>Papadoyannis</surname> <given-names>E. S.</given-names></name> <name><surname>Field</surname> <given-names>R. E.</given-names></name></person-group> (<year>2017</year>). <article-title>Parallel processing by cortical inhibition enables context-dependent behavior</article-title>. <source>Nat. Neurosci</source>. <volume>20</volume>, <fpage>62</fpage>&#x02013;<lpage>71</lpage>. <pub-id pub-id-type="doi">10.1038/nn.4436</pub-id><pub-id pub-id-type="pmid">27798631</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>LeCun</surname> <given-names>Y.</given-names></name> <name><surname>Cortes</surname> <given-names>C.</given-names></name> <name><surname>Burges</surname> <given-names>J. C. C.</given-names></name></person-group> (<year>1998</year>). <source>The MNIST Database of Handwritten Digits</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://yann.lecun.com/exdb/mnist/">http://yann.lecun.com/exdb/mnist/</ext-link></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Louie</surname> <given-names>K.</given-names></name> <name><surname>Wilson</surname> <given-names>M. A.</given-names></name></person-group> (<year>2001</year>). <article-title>Temporally structured replay of awake hippocampal ensemble activity during rapid eye movement sleep</article-title>. <source>Neuron</source> <volume>29</volume>, <fpage>145</fpage>&#x02013;<lpage>156</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(01)00186-6</pub-id><pub-id pub-id-type="pmid">11182087</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McCallum</surname> <given-names>S.</given-names></name></person-group> (<year>1998</year>). <article-title>Catastrophic forgetting and the pseudorehearsal solution in hopfield-type networks</article-title>. <source>Conn. Sci</source>. <volume>10</volume>, <fpage>121</fpage>&#x02013;<lpage>35</lpage>. <pub-id pub-id-type="doi">10.1080/095400998116530</pub-id><pub-id pub-id-type="pmid">10496474</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McCloskey</surname> <given-names>M.</given-names></name> <name><surname>Cohen</surname> <given-names>N. J.</given-names></name></person-group> (<year>1989</year>). Catastrophic interference in connectionist networks: the sequential learning problem. In: <italic>Psychology of Learning and Motivation</italic>. London: Elsevier. p. <fpage>109</fpage>&#x02013;<lpage>165</lpage>.</citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peron</surname> <given-names>S.</given-names></name> <name><surname>Gabbiani</surname> <given-names>F.</given-names></name></person-group> (<year>2009</year>). <article-title>Spike frequency adaptation mediates looming stimulus selectivity in a collision-detecting neuron</article-title>. <source>Nat. Neurosci</source>. <volume>12</volume>, <fpage>318</fpage>&#x02013;<lpage>326</lpage>. <pub-id pub-id-type="doi">10.1038/nn.2259</pub-id><pub-id pub-id-type="pmid">19198607</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peyrache</surname> <given-names>A.</given-names></name> <name><surname>Khamassi</surname> <given-names>M.</given-names></name> <name><surname>Benchenane</surname> <given-names>K.</given-names></name> <name><surname>Wiener</surname> <given-names>S. I.</given-names></name> <name><surname>Battaglia</surname> <given-names>F. P.</given-names></name></person-group> (<year>2009</year>). <article-title>Replay of rule-learning related neural patterns in the prefrontal cortex during sleep</article-title>. <source>Nat. Neurosci</source>. <volume>12</volume>, <fpage>919</fpage>&#x02013;<lpage>926</lpage>. <pub-id pub-id-type="doi">10.1038/nn.2337</pub-id><pub-id pub-id-type="pmid">19483687</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Quiroga</surname> <given-names>R. Q.</given-names></name> <name><surname>Reddy</surname> <given-names>L.</given-names></name> <name><surname>Kreiman</surname> <given-names>G.</given-names></name> <name><surname>Koch</surname> <given-names>C.</given-names></name> <name><surname>Fried</surname> <given-names>I.</given-names></name></person-group> (<year>2005</year>). <article-title>Invariant visual representation by single neurons in the human brain</article-title>. <source>Nature</source> <volume>435</volume>, <fpage>1102</fpage>&#x02013;<lpage>1107</lpage>. <pub-id pub-id-type="doi">10.1038/nature03687</pub-id><pub-id pub-id-type="pmid">15973409</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Robins</surname> <given-names>A.</given-names></name></person-group> (<year>1995</year>). <article-title>Catastrophic forgetting, rehearsal and pseudorehearsal</article-title>. <source>Conn. Sci</source>. <volume>7</volume>, <fpage>123</fpage>&#x02013;<lpage>146</lpage>. <pub-id pub-id-type="doi">10.1080/09540099550039318</pub-id><pub-id pub-id-type="pmid">33286958</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shen</surname> <given-names>Y.</given-names></name> <name><surname>Dasgupta</surname> <given-names>S.</given-names></name> <name><surname>Navlakha</surname> <given-names>S.</given-names></name></person-group> (<year>2023</year>). <article-title>Reducing catastrophic forgetting with associative learning: a lesson from fruit flies</article-title>. <source>Neural Comput</source>. <volume>35</volume>, <fpage>1797</fpage>&#x02013;<lpage>1819</lpage>. <pub-id pub-id-type="doi">10.1162/neco_a_01615</pub-id><pub-id pub-id-type="pmid">37725710</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tononi</surname> <given-names>G.</given-names></name> <name><surname>Cirelli</surname> <given-names>C.</given-names></name></person-group> (<year>2014</year>). <article-title>Sleep and the price of plasticity: from synaptic and cellular homeostasis to memory consolidation and integration</article-title>. <source>Neuron</source> <volume>81</volume>, <fpage>12</fpage>&#x02013;<lpage>34</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2013.12.025</pub-id><pub-id pub-id-type="pmid">24411729</pub-id></citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vogels</surname> <given-names>T. P.</given-names></name> <name><surname>Sprekeler</surname> <given-names>H.</given-names></name> <name><surname>Zenke</surname> <given-names>F.</given-names></name> <name><surname>Clopath</surname> <given-names>C.</given-names></name> <name><surname>Gerstner</surname> <given-names>W.</given-names></name></person-group> (<year>2011</year>). <article-title>Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks</article-title>. <source>Science</source> <volume>334</volume>, <fpage>1569</fpage>&#x02013;<lpage>1573</lpage>. <pub-id pub-id-type="doi">10.1126/science.1211095</pub-id><pub-id pub-id-type="pmid">22075724</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wilson</surname> <given-names>M. A.</given-names></name> <name><surname>McNaughton</surname> <given-names>B. L.</given-names></name></person-group> (<year>1994</year>). <article-title>Reactivation of hippocampal ensemble memories during sleep</article-title>. <source>Science</source> <volume>265</volume>, <fpage>676</fpage>&#x02013;<lpage>679</lpage>. <pub-id pub-id-type="doi">10.1126/science.8036517</pub-id><pub-id pub-id-type="pmid">8036517</pub-id></citation></ref>
</ref-list>
</back>
</article>