<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Phys.</journal-id>
<journal-title>Frontiers in Physics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Phys.</abbrev-journal-title>
<issn pub-type="epub">2296-424X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">1079624</article-id>
<article-id pub-id-type="doi">10.3389/fphy.2022.1079624</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Physics</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Partial quantisation scheme for optimising the performance of Hopfield network</article-title>
<alt-title alt-title-type="left-running-head">Song et al.</alt-title>
<alt-title alt-title-type="right-running-head">
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fphy.2022.1079624">10.3389/fphy.2022.1079624</ext-link>
</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Song</surname>
<given-names>Zhaoyang</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Qu</surname>
<given-names>Yingjie</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Li</surname>
<given-names>Ming</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Liang</surname>
<given-names>Junqing</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Ma</surname>
<given-names>Hongyang</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1569303/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>School of Information and Control Engineering</institution>, <institution>Qingdao University of Technology</institution>, <addr-line>Qingdao</addr-line>, <country>China</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>School of Science</institution>, <institution>Qingdao University of Technology</institution>, <addr-line>Qingdao</addr-line>, <country>China</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1556812/overview">Tianyu Ye</ext-link>, Zhejiang Gongshang University, China</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1997965/overview">Yumin Dong</ext-link>, Chongqing Normal University, China</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1556535/overview">Lihua Gong</ext-link>, Nanchang University, China</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Junqing Liang, <email>liangjunqing@qut.edu.cn</email>; Hongyang Ma, <email>hongyang_ma@aliyun.com</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Quantum Engineering and Technology, a section of the journal Frontiers in Physics</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>24</day>
<month>11</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>10</volume>
<elocation-id>1079624</elocation-id>
<history>
<date date-type="received">
<day>25</day>
<month>10</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>07</day>
<month>11</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2022 Song, Qu, Li, Liang and Ma.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Song, Qu, Li, Liang and Ma</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>The ideal Hopfield network would be able to remember information and recover missing information based on what has been remembered. It is expected to find applications in areas such as associative memory, pattern recognition, optimisation computation, and parallel implementation of VLSI and optical devices, but its limited memory capacity and tendency to generate pseudo-attractors mean the network can handle only a very small amount of data. To make the network more widely usable, we propose a scheme that optimises and improves its memory and recovery ability by introducing a quantum perceptron instead of the Hebbian rule to design its weight matrix. Compared with the classical Hopfield network, our scheme trains the threshold of each node in the network together with the weights, so the memory space of the Hopfield network changes from being composed of the weight matrix alone to being composed of the weight matrix and the threshold matrix together. This yields a dimensional increase in the memory capacity of the network and, to a great extent, solves the problems of insufficient memory capacity and the tendency to generate pseudo-attractors. To verify the feasibility of the proposed scheme, we compare it with the classical Hopfield network along four different dimensions: non-orthogonal simple matrix recovery, incomplete data recovery, memory capacity and model convergence speed. These experiments demonstrate that the improved Hopfield network with the quantum perceptron has significant advantages over the classical Hopfield network in terms of memory capacity and recovery ability, which makes practical application of the network possible.</p>
</abstract>
<kwd-group>
<kwd>Hopfield network</kwd>
<kwd>weight matrix</kwd>
<kwd>quantum perceptron</kwd>
<kwd>storage capacity</kwd>
<kwd>recovery capability</kwd>
</kwd-group>
<contract-num rid="cn001">Grant No. 61772295</contract-num>
<contract-num rid="cn002">Grant Nos. ZR2021MF049 and ZR2019YQ01</contract-num>
<contract-sponsor id="cn001">National Natural Science Foundation of China<named-content content-type="fundref-id">10.13039/501100001809</named-content>
</contract-sponsor>
<contract-sponsor id="cn002">Natural Science Foundation of Shanghai<named-content content-type="fundref-id">10.13039/100007219</named-content>
</contract-sponsor>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>Machine learning [<xref ref-type="bibr" rid="B1">1</xref>] is an important branch of artificial intelligence and a way to achieve it, i.e. machine learning is used as a means to solve problems in artificial intelligence. It is a multi-disciplinary field involving probability theory, statistics, convex optimisation, complexity theory and many other disciplines. Machine learning algorithms are a class of algorithms that analyse existing data to extract a pattern and use this pattern to make predictions about unknown data. They have been used with great success in many fields, including medicine [<xref ref-type="bibr" rid="B2">2</xref>], biology [<xref ref-type="bibr" rid="B3">3</xref>], chemistry [<xref ref-type="bibr" rid="B4">4</xref>], physics [<xref ref-type="bibr" rid="B5">5</xref>&#x2013;<xref ref-type="bibr" rid="B8">8</xref>] and mathematics [<xref ref-type="bibr" rid="B9">9</xref>]. Machine learning has proven to be one of the most successful approaches to artificial intelligence.</p>
<p>The perceptron [<xref ref-type="bibr" rid="B10">10</xref>] is a linear model for binary classification, which aims to find a hyperplane that linearly separates the training data. Its biggest advantage is that it is easy to implement. Suppose the training data set is <inline-formula id="inf1">
<mml:math id="m1">
<mml:mi>D</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mfenced open="{" close="}">
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3f1;</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3f1;</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3f1;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>, where <inline-formula id="inf2">
<mml:math id="m2">
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3f1;</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2286;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3f1;</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2208;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">{</mml:mo>
<mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">}</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>. The perceptron model is:<disp-formula id="e1">
<mml:math id="m3">
<mml:mi>f</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>sign</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(1)</label>
</disp-formula>where <inline-formula id="inf3">
<mml:math id="m4">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> and <italic>b</italic> are the model parameters of the perceptron, <inline-formula id="inf5">
<mml:math id="m6">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2208;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="normal">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>m</mml:mtext>
</mml:mrow>
</mml:msup>
</mml:math>
</inline-formula> is called the weight vector, and <italic>b</italic> &#x2208; R is called the bias. <inline-formula id="inf6">
<mml:math id="m7">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> represents the inner product of <inline-formula id="inf7">
<mml:math id="m8">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf8">
<mml:math id="m9">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula>. The sign function is defined as:<disp-formula id="e2">
<mml:math id="m10">
<mml:mi>sign</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="{" close="">
<mml:mrow>
<mml:mtable class="cases">
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2265;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3c;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(2)</label>
</disp-formula>The linear equation <inline-formula id="inf9">
<mml:math id="m11">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:math>
</inline-formula> is a hyperplane in the feature space, where <inline-formula id="inf10">
<mml:math id="m12">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> is the normal vector of the hyperplane and <italic>b</italic> is its intercept. The hyperplane divides the feature space into two parts: points above it satisfy <inline-formula id="inf11">
<mml:math id="m13">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo>&#x2a7e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:math>
</inline-formula>, while points below it satisfy <inline-formula id="inf12">
<mml:math id="m14">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo>&#x3c;</mml:mo>
<mml:mn>0</mml:mn>
</mml:math>
</inline-formula>. The model of the classical perceptron and its application to classification are illustrated in <xref ref-type="fig" rid="F1">Figure 1</xref>.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Model of the classical perceptron (left) and its application to classification (right).</p>
</caption>
<graphic xlink:href="fphy-10-1079624-g001.tif"/>
</fig>
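<p>As an illustrative sketch only (not part of the original paper), the perceptron of Eqs 1, 2 can be trained with the classical mistake-driven update rule; all function and variable names below are ours:</p>

```python
import numpy as np

def sign(x):
    # Symbolic function of Eq. 2: +1 where x >= 0, -1 otherwise.
    return np.where(x >= 0, 1, -1)

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Fit w, b so that sign(w . x + b) matches labels y in {+1, -1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if sign(w @ xi + b) != yi:   # misclassified point
                w += lr * yi * xi        # rotate hyperplane towards xi
                b += lr * yi             # shift intercept
    return w, b

# Linearly separable toy data (hypothetical example)
X = np.array([[2.0, 2.0], [1.5, 3.0], [-1.0, -2.0], [-2.0, -1.5]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
print(sign(X @ w + b))  # prints [ 1  1 -1 -1], matching y
```

<p>For linearly separable data, as assumed here, this update rule is guaranteed to converge to a separating hyperplane.</p>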
<p>Quantum information is a new discipline developed based on quantum physics and information technology, which mainly includes two fields: quantum communication and quantum computing. Quantum communication focuses on quantum cryptography [<xref ref-type="bibr" rid="B11">11</xref>,<xref ref-type="bibr" rid="B12">12</xref>], quantum teleportation [<xref ref-type="bibr" rid="B13">13</xref>&#x2013;<xref ref-type="bibr" rid="B16">16</xref>], and quantum direct communication [<xref ref-type="bibr" rid="B17">17</xref>], while quantum computing focuses on algorithms that fit quantum properties [<xref ref-type="bibr" rid="B18">18</xref>&#x2013;<xref ref-type="bibr" rid="B23">23</xref>]. This is an extremely active field, as it has the potential to disrupt classical informatics, communication technologies, and computing methods.</p>
<p>The quantum perceptron, the quantum counterpart of the classical perceptron model, belongs to the family of quantum machine learning algorithms [<xref ref-type="bibr" rid="B24">24</xref>,<xref ref-type="bibr" rid="B25">25</xref>]. Kapoor proved that quantum computation can provide significant improvements in the computational and statistical complexity of the perceptron model [<xref ref-type="bibr" rid="B26">26</xref>]; Schuld proposed a scalable quantum perceptron based on the quantum Fourier transform [<xref ref-type="bibr" rid="B27">27</xref>], which can be used as a component of other, more advanced networks [<xref ref-type="bibr" rid="B28">28</xref>]; Tacchino proposed a quantum perceptron model that can run on near-term quantum processing hardware [<xref ref-type="bibr" rid="B29">29</xref>]. Currently, quantum perceptron models are still at the exploratory stage and no authoritative model has been established. In our work, the quantum perceptron model based on the quantum phase estimation algorithm proposed by Schuld [<xref ref-type="bibr" rid="B27">27</xref>] is used. The inverse quantum Fourier transform and the gradient descent algorithm on a classical computer are used to train the weight matrix of the perceptron.</p>
<p>The Hopfield network (HNN) is a single-layer, fully connected feedback network [<xref ref-type="bibr" rid="B30">30</xref>], characterised by the fact that the output <italic>x</italic>
<sub>
<italic>i</italic>
</sub> of any neuron is fed back to all neurons <italic>x</italic>
<sub>
<italic>j</italic>
</sub> as input through the connection weights <italic>w</italic>
<sub>
<italic>ij</italic>
</sub>. The network usually uses the Hebbian rule [<xref ref-type="bibr" rid="B31">31</xref>] to design the weight matrix. The Hebbian rule is simple yet useful for this purpose. However, it sometimes cannot find an exact weight matrix even though such a matrix exists [<xref ref-type="bibr" rid="B32">32</xref>]. This is because the rule does not incorporate the thresholds of the HNN into the training, which can cause the attraction domains of different attractors to overlap or even cover one another. Moreover, the closer the stored vectors are to each other, the higher the probability of recall error.</p>
<p>Considering that the weight matrix designed by the Hebbian rule is not sufficient for the HNN to accomplish various practical tasks, we propose an improvement scheme that uses a quantum perceptron instead of the Hebbian rule to design the HNN weights. Firstly, the weights and thresholds of the Hopfield network are mapped into the weight matrix of the quantum perceptron; each node of the HNN serves as an input vector, and the weight matrix of the quantum perceptron is trained through the quantum circuit. The final weight matrix of the quantum perceptron then yields the weight matrix and threshold matrix of the HNN. The improved HNN has more memory storage space than under the Hebbian rule because the additional threshold matrix assists in storage, so memorised information can be stored better. Moreover, because the weight information is more accurate, the HNN also reaches a steady state more easily during iteration, so the recovery ability and model convergence speed are significantly improved. Currently, the most widespread uses of HNNs are information recovery and information matching. Our simulations and analysis show that the improved HNN provides a large improvement over the classical HNN in both, which makes it more usable in practice and is expected to open up applications in more fields, such as virus information identification, human brain simulation, and error correction of quantum noise [<xref ref-type="bibr" rid="B33">33</xref>].</p>
<p>In <xref ref-type="sec" rid="s2">Section 2</xref>, we describe in detail the HNN model, the Hebbian rule, the quantum Fourier transform and the quantum phase estimation algorithm used in this paper; <xref ref-type="sec" rid="s3">Section 3</xref> describes in detail the theory of our approach, including the correspondence between the HNN and the perceptron model, the quantum perceptron model and how the quantum perceptron model is used to train the HNN weights and thresholds; <xref ref-type="sec" rid="s4">Section 4</xref> presents our simulation experiments and analysis, in which we design experiments to verify the feasibility of our proposed scheme and its improvements and advantages over the classical scheme; <xref ref-type="sec" rid="s5">Section 5</xref> concludes the paper and provides predictions and analysis of the future of our proposed scheme.</p>
</sec>
<sec id="s2">
<title>2 Preliminaries</title>
<sec id="s2-1">
<title>2.1 Hopfield network</title>
<p>The HNN is a multi-input, thresholded, binary nonlinear dynamical system. The excitation function of a neuron is usually a step function, and the value of a neuron is &#x2212;1/1 or 0/1. When the value is 0 or &#x2212;1, the neuron is in the inhibited state; when the value is 1, it is in the activated state. The HNN is a single-layer neural network in which every neuron node is connected to all other neuron nodes. There is no self-feedback between the nodes, so the network forms a complete graph. A neuron node in the inhibited state enters the activated state when its stimulus exceeds a set threshold, i.e. it jumps from 0 or &#x2212;1 to 1.</p>
<p>Each node in a HNN has the same function, and the output of a single node corresponds to the final state of that node, denoted by <italic>x</italic>
<sub>
<italic>i</italic>
</sub>, with the states of all nodes forming the state of the network <inline-formula id="inf13">
<mml:math id="m15">
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2026;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msup>
</mml:math>
</inline-formula>. The topology and mode of operation are shown in <xref ref-type="fig" rid="F2">Figure 2</xref>. The network enters a steady state and produces an output when the rate of change of its energy function satisfies &#x394;<italic>E</italic> &#x3d; 0 or when a preset upper limit of iterations is reached. The energy function and its rate of change are as follows.<disp-formula id="e3">
<mml:math id="m16">
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mi>E</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi mathvariant="bold-italic">W</mml:mi>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi mathvariant="bold-italic">&#x3b8;</mml:mi>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>E</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>E</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(3)</label>
</disp-formula>where <inline-formula id="inf14">
<mml:math id="m17">
<mml:mi mathvariant="bold">W</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="{" close="}">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is the weight matrix, <inline-formula id="inf15">
<mml:math id="m18">
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="{" close="}">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is the network state and <inline-formula id="inf16">
<mml:math id="m19">
<mml:mi mathvariant="bold-italic">&#x3b8;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="{" close="}">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is the threshold matrix.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>HNN topology and mode of operation.</p>
</caption>
<graphic xlink:href="fphy-10-1079624-g002.tif"/>
</fig>
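<p>A minimal NumPy sketch of this mode of operation (illustrative only, not the authors' implementation; it assumes the common asynchronous update rule in which node <italic>i</italic> takes the value 1 when its local field reaches its threshold, and &#x2212;1 otherwise):</p>

```python
import numpy as np

def energy(W, theta, x):
    # E = -(1/2) x^T W x + x^T theta, as in Eq. 3.
    return -0.5 * x @ W @ x + x @ theta

def run_hnn(W, theta, x, max_iters=100):
    """Asynchronously update nodes until the energy stops changing (delta E = 0)."""
    x = x.copy()
    for _ in range(max_iters):
        e_before = energy(W, theta, x)
        for i in np.random.permutation(len(x)):
            h = W[i] @ x - theta[i]        # local field at node i
            x[i] = 1 if h >= 0 else -1     # step (threshold) activation
        if energy(W, theta, x) == e_before:  # steady state reached
            break
    return x
```

<p>With a symmetric weight matrix and zero self-feedback, each update can only lower the energy, so the iteration settles into an attractor state.</p>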
</sec>
<sec id="s2-2">
<title>2.2 Hebbian rule</title>
<p>The Hebbian rule describes the basic principle of synaptic plasticity, that is, continuous and repeated stimulation from presynaptic neurons to postsynaptic neurons can increase the efficiency of synaptic transmission.</p>
<p>The Hebbian rule is the oldest and simplest neuron learning rule. It is described by the following equation:<disp-formula id="e4">
<mml:math id="m20">
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>z</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:math>
<label>(4)</label>
</disp-formula>where <italic>w</italic>
<sub>
<italic>ij</italic>
</sub> is the connection weight from neuron <italic>j</italic> to neuron <italic>i</italic>, <italic>p</italic> is the number of training patterns, and <inline-formula id="inf17">
<mml:math id="m21">
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula> is the <italic>i</italic>th component of the <italic>z</italic>th training pattern.</p>
<p>In the HNN, the Hebbian rule can be used to design the weight matrix:<disp-formula id="e5">
<mml:math id="m22">
<mml:mi mathvariant="bold-italic">W</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msup>
</mml:math>
<label>(5)</label>
</disp-formula>Here <italic>w</italic>
<sub>
<italic>ii</italic>
</sub> &#x3d; 0, which means that there is no self-feedback between nodes. The equation is rewritten as follows:<disp-formula id="e6">
<mml:math id="m23">
<mml:mi mathvariant="bold-italic">W</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mtext>T</mml:mtext>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">I</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(6)</label>
</disp-formula>where <bold>
<italic>I</italic>
</bold> is the identity matrix and <bold>
<italic>X</italic>
</bold> is the system state of the HNN.</p>
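<p>As a minimal sketch of Eq. 6 (illustrative; the function name is ours), the Hebbian weight matrix with zero self-feedback can be computed as:</p>

```python
import numpy as np

def hebbian_weights(patterns):
    """Eq. 6: W = sum over p of [X^p (X^p)^T - I], removing self-feedback."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:                   # each stored pattern X^p
        W += np.outer(p, p) - np.eye(n)  # outer product minus identity
    return W

# Two hypothetical bipolar patterns to store
patterns = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]])
W = hebbian_weights(patterns)
print(np.allclose(np.diag(W), 0))  # True: diagonal is zero (no self-feedback)
```

<p>Subtracting the identity inside the sum zeroes the diagonal, since each bipolar component satisfies <italic>x</italic><sub><italic>i</italic></sub><sup>2</sup> &#x3d; 1; the resulting matrix is symmetric, as the HNN requires.</p>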
</sec>
<sec id="s2-3">
<title>2.3 HNN attractor and pseudo attractor</title>
<p>Suppose that the Hopfield network stores <italic>M</italic> mutually orthogonal samples <italic>X</italic>
<sup>
<italic>m</italic>
</sup>, then:<disp-formula id="e7">
<mml:math id="m24">
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mtext>T</mml:mtext>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="{" close="">
<mml:mrow>
<mml:mtable class="cases">
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi>m</mml:mi>
<mml:mo>&#x2260;</mml:mo>
<mml:mi>z</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mi>n</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi>m</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>z</mml:mi>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(7)</label>
</disp-formula>
<disp-formula id="e8">
<mml:math id="m25">
<mml:mi mathvariant="bold-italic">W</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mtext>T</mml:mtext>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">I</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:msup>
</mml:math>
<label>(8)</label>
</disp-formula>Since <italic>n</italic><inline-formula id="inf18">
<mml:math id="m26">
<mml:mo>&#x3e;</mml:mo>
</mml:math>
</inline-formula><italic>M</italic>, it follows that:<disp-formula id="e9">
<mml:math id="m27">
<mml:mtable class="align" columnalign="left">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mi>f</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold-italic">W</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:mi>f</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:mi>sgn</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(9)</label>
</disp-formula>According to <xref ref-type="disp-formula" rid="e9">Eq. 9</xref>, a given sample <bold>
<italic>X</italic>
</bold>
<sup>m</sup> is an ideal attractor and produces a certain attraction domain around it; any state within this domain will be &#x201c;captured&#x201d; by the attractor. However, the condition that the given samples be mutually orthogonal is too strict, so attraction domains eventually also form around some points outside the sample set; these points are regarded as pseudo attractors of the HNN.</p>
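Eqs. 8, 9 can be checked numerically on a small example. The sketch below (illustrative only; the two patterns are arbitrary mutually orthogonal bipolar vectors with n &gt; M) confirms that each stored sample is a fixed point of the update:

```python
# Numerical check of Eqs. 8-9: for mutually orthogonal bipolar samples and
# n > M, each sample satisfies W X^m = (n - M) X^m and hence sgn(W X^m) = X^m.
def sgn(v):
    return [1 if s >= 0 else -1 for s in v]

def matvec(W, x):
    return [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

patterns = [[1, 1, -1, -1], [1, -1, 1, -1]]   # orthogonal: inner product is 0
n, M = 4, 2
W = [[0] * n for _ in range(n)]
for x in patterns:
    for i in range(n):
        for j in range(n):
            if i != j:                         # Hebbian rule with zero diagonal
                W[i][j] += x[i] * x[j]

for x in patterns:
    h = matvec(W, x)
    assert h == [(n - M) * xi for xi in x]     # Eq. 8: W X^m = (n - M) X^m
    assert sgn(h) == x                         # Eq. 9: the sample is a fixed point
```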
</sec>
<sec id="s2-4">
<title>2.4 Quantum Fourier transform</title>
<p>The quantum Fourier transform is an efficient quantum algorithm for performing the Fourier transform on quantum amplitudes. It is not a quantum counterpart that speeds up the Fourier transform of classical data, but it enables an important task, phase estimation, i.e. estimating the eigenvalues of a unitary operator under certain conditions. The matrix representation of the quantum Fourier transform is as follows:<disp-formula id="e10">
<mml:math id="m28">
<mml:mi>Q</mml:mi>
<mml:mi>F</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mtable class="matrix">
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mo>&#x22ef;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mi>&#x3c9;</mml:mi>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mo>&#x22ef;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mo>&#x22ef;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msup>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mo>&#x22ee;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mo>&#x22ee;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mo>&#x22ee;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="center"/>
<mml:mtd columnalign="center">
<mml:mo>&#x22ee;</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mo>&#x22ef;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msup>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(10)</label>
</disp-formula>
</p>
<p>Where, <inline-formula id="inf19">
<mml:math id="m29">
<mml:mi>&#x3c9;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>cos</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>sin</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:math>
</inline-formula>.</p>
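As an illustrative sketch (not part of the article), the matrix of Eq. 10 can be built directly from &#x3c9; = e<sup>2&#x3c0;i/N</sup> in plain Python; N = 4 is an arbitrary example size:

```python
import cmath

# Sketch of the QFT matrix of Eq. 10: (QFT_N)_{jk} = omega^{jk} / sqrt(N),
# with omega = exp(2*pi*i/N).
def qft_matrix(N):
    omega = cmath.exp(2j * cmath.pi / N)
    s = 1 / N ** 0.5
    return [[s * omega ** (j * k) for k in range(N)] for j in range(N)]

F = qft_matrix(4)
# Unitarity check: F F^dagger = I (within floating-point tolerance).
for i in range(4):
    for j in range(4):
        dot = sum(F[i][k] * F[j][k].conjugate() for k in range(4))
        assert abs(dot - (1 if i == j else 0)) < 1e-9
```

Unitarity is what distinguishes the QFT from the classical DFT matrix: it guarantees the transform is a legal quantum operation.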
<p>In the classical Fourier transform, the transformation takes the following form:<disp-formula id="e11">
<mml:math id="m30">
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
<mml:mi>k</mml:mi>
<mml:mo>/</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msup>
</mml:math>
<label>(11)</label>
</disp-formula>
</p>
<p>The mathematical form of the quantum Fourier transform is similar to that of the discrete Fourier transform [<xref ref-type="bibr" rid="B34">34</xref>]. It is an operator defined on the orthonormal basis states &#x7c;0&#x27e9;, &#x7c;1&#x27e9;&#x22ef;&#x7c;<italic>N</italic> &#x2212; 1&#x27e9;, with the following action:<disp-formula id="e12">
<mml:math id="m31">
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
<mml:mi>k</mml:mi>
<mml:mo>/</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
</mml:math>
<label>(12)</label>
</disp-formula>An arbitrary quantum state action can be expressed as:<disp-formula id="e13">
<mml:math id="m32">
<mml:mtable class="align" columnalign="left">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>&#x3c8;</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:munder>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:mo>&#x2192;</mml:mo>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi>Q</mml:mi>
<mml:mi>F</mml:mi>
<mml:mi>T</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mi>j</mml:mi>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mi>j</mml:mi>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:mo>&#x3d;</mml:mo>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(13)</label>
</disp-formula>where the amplitude <inline-formula id="inf20">
<mml:math id="m33">
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:msubsup>
<mml:mrow>
<mml:mo movablelimits="false" form="prefix">&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mi>j</mml:mi>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msup>
</mml:math>
</inline-formula> is the value of the discrete Fourier transform of the amplitude <inline-formula id="inf21">
<mml:math id="m34">
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>.</p>
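Eq. 13 can be made concrete with a short sketch (illustrative only, not from the article): applying the QFT to a state with amplitudes x&#x303;<sub>j</sub> produces the discrete Fourier transform of the amplitude vector.

```python
import cmath

# Illustrative check of Eq. 13: y_k = (1/sqrt(N)) * sum_j x_j e^{2*pi*i*jk/N},
# the DFT of the amplitude vector.
def dft_amplitudes(x):
    N = len(x)
    return [sum(x[j] * cmath.exp(2j * cmath.pi * j * k / N)
                for j in range(N)) / N ** 0.5 for k in range(N)]

x = [0.5, 0.5, 0.5, 0.5]          # uniform superposition over N = 4 states
y = dft_amplitudes(x)
# The uniform superposition transforms to |0>: y_0 = 1, other amplitudes vanish.
assert abs(y[0] - 1) < 1e-9
assert all(abs(a) < 1e-9 for a in y[1:])
```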
<p>The transform has little direct value on its own, but it is an important subroutine of the quantum phase estimation algorithm. The quantum Fourier transform corresponds to the quantum circuit diagram (omitting the SWAP gates), where <inline-formula id="inf22">
<mml:math id="m35">
<mml:msub>
<mml:mrow>
<mml:mi>R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mtable class="matrix">
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msup>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula>. <xref ref-type="fig" rid="F3">Figure 3</xref> illustrates the quantum circuit of the quantum Fourier transform.</p>
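The phase gates R<sub>k</sub> used in this circuit can be sketched as 2 &#xd7; 2 matrices (an illustration, not the article's code); note that R<sub>1</sub> reduces to the Pauli-Z gate and R<sub>2</sub> to the phase gate S:

```python
import cmath

# Sketch of the QFT building block: R_k = [[1, 0], [0, e^{2*pi*i / 2^k}]].
def R(k):
    return [[1, 0], [0, cmath.exp(2j * cmath.pi / 2 ** k)]]

assert abs(R(1)[1][1] - (-1)) < 1e-9   # e^{i*pi} = -1  -> Pauli-Z
assert abs(R(2)[1][1] - 1j) < 1e-9     # e^{i*pi/2} = i -> phase gate S
```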
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Quantum circuits for quantum Fourier transform.</p>
</caption>
<graphic xlink:href="fphy-10-1079624-g003.tif"/>
</fig>
</sec>
<sec id="s2-5">
<title>2.5 Quantum phase estimation algorithm</title>
<p>The quantum phase estimation algorithm is key to many quantum algorithms [<xref ref-type="bibr" rid="B6">6</xref>,<xref ref-type="bibr" rid="B35">35</xref>], and its role is to estimate the phase in the eigenvalue corresponding to an eigenvector of a unitary matrix. The quantum circuit for quantum phase estimation is shown in <xref ref-type="fig" rid="F4">Figure 4</xref>. The algorithm uses two registers, the first of which contains <italic>&#x3c4;</italic> quantum bits with initial state &#x7c;0&#x27e9;. The value of <italic>&#x3c4;</italic> depends on the number of bits to be estimated accurately and the desired success probability. The second register has an initial state of <inline-formula id="inf23">
<mml:math id="m36">
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula>. The essence of the process is the ability to perform the inverse Fourier transform:<disp-formula id="e14">
<mml:math id="m37">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>&#x3c6;</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2192;</mml:mo>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>&#x3c6;</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(14)</label>
</disp-formula>where state <inline-formula id="inf24">
<mml:math id="m38">
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>&#x3c6;</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo stretchy="false">&#x232a;</mml:mo>
</mml:math>
</inline-formula> is the estimated value of <italic>&#x3c6;</italic>.</p>
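A toy simulation (an assumption-laden sketch, not the article's implementation) shows how the inverse QFT of Eq. 14 concentrates probability near &#x3c6;: after the inverse transform, outcome k has amplitude (1/2<sup>&#x3c4;</sup>) &#x2211;<sub>j</sub> e<sup>2&#x3c0;i(&#x3c6; &#x2212; k/2<sup>&#x3c4;</sup>)j</sup>, which peaks at k &#x2248; &#x3c6;&#xb7;2<sup>&#x3c4;</sup>. The values of &#x3c6; and &#x3c4; below are arbitrary examples:

```python
import cmath

# Toy phase-estimation readout: return the most likely tau-bit estimate of phi.
def estimate_phase(phi, tau):
    T = 2 ** tau
    probs = []
    for k in range(T):
        a = sum(cmath.exp(2j * cmath.pi * (phi - k / T) * j)
                for j in range(T)) / T
        probs.append(abs(a) ** 2)                  # outcome probability |a_k|^2
    return max(range(T), key=probs.__getitem__) / T

assert estimate_phase(0.25, 3) == 0.25             # phi exactly representable
assert abs(estimate_phase(0.3, 6) - 0.3) < 2 ** -6 # within one bit step
```

When &#x3c6; is exactly representable in &#x3c4; bits the estimate is exact with probability 1; otherwise the distribution peaks at the nearest &#x3c4;-bit fraction, which motivates choosing &#x3c4; from the desired accuracy and success probability.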
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>Quantum circuits for quantum phase estimation.</p>
</caption>
<graphic xlink:href="fphy-10-1079624-g004.tif"/>
</fig>
</sec>
</sec>
<sec sec-type="methods" id="s3">
<title>3 Methods</title>
<sec id="s3-1">
<title>3.1 Correspondence between perceptron models and HNN</title>
<p>First, we restrict the discussion to HNNs whose cells have non-zero thresholds and use a step function as the threshold function, which is by far the most common form of HNN. Second, two points of consensus need to be established: 1) the units in this network are perceptrons; 2) the perceptron can determine the weights and thresholds of the network for the problem to be learned. Consider consensus 1): based on the definitions of the HNN and the perceptron above, it is clear that each unit in an HNN is a perceptron.</p>
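Consensus 1) can be sketched in a few lines (a hypothetical example; the weights, thresholds, and state below are arbitrary): a single HNN unit computes a thresholded weighted sum of the other units' states, which is exactly a perceptron decision.

```python
# One HNN unit as a perceptron: x_i <- sign(sum_{j != i} w_ij x_j - theta_i).
def unit_update(W, theta, x, i):
    h = sum(W[i][j] * x[j] for j in range(len(x)) if j != i) - theta[i]
    return 1 if h > 0 else -1

# Hypothetical 3-cell network (symmetric weights, zero diagonal).
W = [[0, 1, -1], [1, 0, 1], [-1, 1, 0]]
theta = [0.5, -0.5, 0.0]
x = [1, -1, 1]
assert unit_update(W, theta, x, 0) == -1   # h = -1 - 1 - 0.5 < 0
```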
<p>Consider consensus 2): take an HNN with <italic>n</italic> cells, where <italic>W</italic> is the <italic>n</italic> &#xd7; <italic>n</italic> weight matrix, <italic>&#x3b8;</italic>
<sub>
<italic>i</italic>
</sub> denotes the threshold of cell <italic>i</italic>, and the state of the network is <bold>
<italic>X</italic>
</bold>. If this network is to reach a steady state, the following <italic>n</italic> inequalities must be satisfied:<disp-formula id="e15">
<mml:math id="m39">
<mml:mtable class="gathered">
<mml:mtr>
<mml:mtd>
<mml:mi>sign</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>12</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>13</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mo>&#x22ef;</mml:mo>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>sign</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>21</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>23</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mo>&#x22ef;</mml:mo>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x22ee;</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>sign</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mo>&#x22ef;</mml:mo>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(15)</label>
</disp-formula>Since the network has no self-feedback, only the <italic>n</italic> (<italic>n</italic> &#x2212; 1)/2 non-zero entries of the weight matrix <bold>
<italic>W</italic>
</bold> and the <italic>n</italic> thresholds of the cells appear in these inequalities. Let <bold>
<italic>u</italic>
</bold> denote the vector of dimension <italic>n</italic> &#x2b; <italic>n</italic> (<italic>n</italic> &#x2212; 1)/2 whose components are the off-diagonal elements of the weight matrix <italic>w</italic>
<sub>
<italic>ij</italic>
</sub> (<italic>i</italic> &#x3c; <italic>j</italic>) and the negatives of the <italic>n</italic> thresholds. The vector <bold>
<italic>u</italic>
</bold> is given by the following equation:<disp-formula id="e16">
<mml:math id="m40">
<mml:mi mathvariant="bold-italic">u</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>12</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>13</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>23</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>24</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(16)</label>
</disp-formula>The state vector is transformed into <italic>n</italic> auxiliary vectors <bold>
<italic>v</italic>
</bold>
<sub>1</sub>, <bold>
<italic>v</italic>
</bold>
<sub>2</sub>, <bold>
<italic>v</italic>
</bold>
<sub>3</sub>, &#x2026; , <bold>
<italic>v</italic>
</bold>
<sub>
<italic>n</italic>
</sub> of dimension <italic>n</italic> &#x2b; <italic>n</italic> (<italic>n</italic> &#x2212; 1)/2 given by the expression:<disp-formula id="e17">
<mml:math id="m41">
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:munder>
<mml:mrow>
<mml:munder accentunder="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>&#xfe38;</mml:mo>
</mml:munder>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munder>
<mml:mo>,</mml:mo>
<mml:mn>0,0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:munder>
<mml:mrow>
<mml:munder accentunder="false">
<mml:mrow>
<mml:mn>1,0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mo>&#xfe38;</mml:mo>
</mml:munder>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munder>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:munder>
<mml:mrow>
<mml:munder accentunder="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mo>&#xfe38;</mml:mo>
</mml:munder>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munder>
<mml:mo>,</mml:mo>
<mml:munder>
<mml:mrow>
<mml:munder accentunder="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>&#xfe38;</mml:mo>
</mml:munder>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:munder>
<mml:mo>,</mml:mo>
<mml:mn>0,0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:munder>
<mml:mrow>
<mml:munder accentunder="false">
<mml:mrow>
<mml:mn>0,1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mo>&#xfe38;</mml:mo>
</mml:munder>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munder>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:munder>
<mml:mrow>
<mml:munder accentunder="false">
<mml:mrow>
<mml:mn>0,0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>&#xfe38;</mml:mo>
</mml:munder>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munder>
<mml:mo>,</mml:mo>
<mml:munder>
<mml:mrow>
<mml:munder accentunder="false">
<mml:mrow>
<mml:mn>0,0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>&#xfe38;</mml:mo>
</mml:munder>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:munder>
<mml:mo>,</mml:mo>
<mml:mn>0,0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:munder>
<mml:mrow>
<mml:munder accentunder="false">
<mml:mrow>
<mml:mn>0,0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>&#xfe38;</mml:mo>
</mml:munder>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munder>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(17)</label>
</disp-formula>
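As an illustrative sketch of the mapping in Eqs. 16, 17 (our own minimal code with hypothetical function names, not part of the original derivation), the auxiliary vectors can be built so that the dot product of <bold><italic>v</italic></bold><sub><italic>i</italic></sub> with the parameter vector <bold><italic>u</italic></bold> reproduces the local field of node <italic>i</italic>:

```python
import numpy as np

def pair_index(i, j, n):
    # Position of the weight w_ij (i < j, 0-based) in the flattened weight
    # block of u; pairs are ordered (0,1),(0,2),...,(0,n-1),(1,2),...
    return i * n - i * (i + 1) // 2 + (j - i - 1)

def aux_vectors(x):
    # Build the n auxiliary vectors v_1..v_n of Eq. 17 (0-based here).
    # Each has dimension n(n-1)/2 + n: one slot per weight w_ij (i < j)
    # plus one slot per threshold term -theta_i, so v_i . u equals the
    # local field sum_j w_ij x_j - theta_i.
    n = len(x)
    m = n * (n - 1) // 2
    vs = []
    for i in range(n):
        v = np.zeros(m + n)
        for j in range(n):
            if j != i:
                a, b = min(i, j), max(i, j)
                v[pair_index(a, b, n)] = x[j]
        v[m + i] = 1.0        # multiplies the -theta_i entry of u
        vs.append(v)
    return vs
```

With <bold><italic>u</italic></bold> laid out as in Eq. 16 (upper-triangular weights followed by the negated thresholds), each <bold><italic>v</italic></bold><sub><italic>i</italic></sub> &#x22c5; <bold><italic>u</italic></bold> then equals &#x2211;<sub><italic>j</italic></sub><italic>w</italic><sub><italic>ij</italic></sub><italic>x</italic><sub><italic>j</italic></sub> &#x2212; <italic>&#x3b8;</italic><sub><italic>i</italic></sub>.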
<xref ref-type="disp-formula" rid="e15">Eq. 15</xref> can be rewritten in the following form:<disp-formula id="e18">
<mml:math id="m42">
<mml:mi>sign</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold-italic">v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x22c5;</mml:mo>
<mml:mi mathvariant="bold-italic">u</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:math>
<label>(18)</label>
</disp-formula>
<xref ref-type="disp-formula" rid="e18">Eq. 18</xref> shows that the solution to the original problem is found by computing a linear separation of the vectors <bold>
<italic>v</italic>
</bold>
<sub>
<italic>i</italic>
</sub>. The vectors belonging to the positive half-space are those with <inline-formula id="inf25">
<mml:math id="m43">
<mml:mi>sgn</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:math>
</inline-formula>, and those belonging to the negative half-space are those with <inline-formula id="inf26">
<mml:math id="m44">
<mml:mi>sgn</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:math>
</inline-formula>. This problem can be solved by perceptron learning, which yields the weight vector <bold>
<italic>u</italic>
</bold> required for the linear separation, from which the weight matrix <italic>W</italic> and the threshold vector <italic>&#x3b8;</italic> can be derived. <xref ref-type="fig" rid="F5">Figure 5</xref> shows the correspondence between the HNN and the perceptron model.</p>
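As a concrete illustration of this perceptron-learning step (a minimal classical sketch under our own naming, not the authors' implementation), the classic update rule moves the weight vector toward any constraint vector <italic>z</italic> &#x3d; sign(<italic>x</italic><sub><italic>i</italic></sub>)<bold><italic>v</italic></bold><sub><italic>i</italic></sub> that is not yet in the positive half-space:

```python
import numpy as np

def perceptron_train(Z, max_epochs=5000):
    # Classic perceptron rule for Eq. 18: every row z = sign(x_i) * v_i
    # must satisfy z . u > 0; whenever a constraint fails, move u toward z.
    u = np.zeros(Z.shape[1])
    for _ in range(max_epochs):
        updated = False
        for z in Z:
            if z @ u <= 0:
                u = u + z
                updated = True
        if not updated:
            return u          # every z now lies in the positive half-space
    raise RuntimeError("no separating weight vector found")
```

By the perceptron convergence theorem, this loop terminates whenever the constraint set is linearly separable.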
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>Correspondence between the HNN and the perceptron model.</p>
</caption>
<graphic xlink:href="fphy-10-1079624-g005.tif"/>
</fig>
</sec>
<sec id="s3-2">
<title>3.2 Quantum perceptron model</title>
<p>First, the <italic>&#x3c4;</italic>-qubit state &#x7c;0&#x27e9; is passed through Hadamard gates to obtain the superposition state <inline-formula id="inf27">
<mml:math id="m45">
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo stretchy="false">&#x232a;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2297;</mml:mo>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2192;</mml:mo>
<mml:mspace width="0.3333em"/>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:msubsup>
<mml:mrow>
<mml:mo movablelimits="false" form="prefix">&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>J</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>J</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
</mml:math>
</inline-formula>, where <italic>J</italic> is the integer form of the bit string <inline-formula id="inf28">
<mml:math id="m46">
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula>, i.e. <italic>J</italic> &#x3d; <italic>j</italic>
<sub>1</sub>2<sup>
<italic>&#x3c4;</italic>&#x2212;1</sup> &#x2b; <italic>j</italic>
<sub>2</sub>2<sup>
<italic>&#x3c4;</italic>&#x2212;2</sup> &#x2b; &#x22ef; &#x2b; <italic>j</italic>
<sub>
<italic>&#x3c4;</italic>
</sub>2<sup>0</sup>. Next, we apply an oracle operation <inline-formula id="inf29">
<mml:math id="m47">
<mml:mi mathvariant="script">O</mml:mi>
</mml:math>
</inline-formula>:<disp-formula id="e19">
<mml:math id="m48">
<mml:mtable class="gathered">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="script">O</mml:mi>
<mml:mo>:</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>J</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>J</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2192;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>J</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>J</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>J</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>J</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>J</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>&#x3d5;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>w</mml:mi>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
<mml:mi>J</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>J</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(19)</label>
</disp-formula>where <inline-formula id="inf30">
<mml:math id="m49">
<mml:msub>
<mml:mrow>
<mml:mi>U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:mi>U</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:msubsup>
<mml:mrow>
<mml:mo>&#x2297;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mrow>
<mml:mi>U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mtable class="matrix">
<mml:mtr>
<mml:mtd columnalign="center">
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>&#x3d5;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>n</mml:mi>
</mml:math>
</inline-formula>.</p>
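To make the action of these diagonal gates concrete, the following sketch (our own minimal illustration with hypothetical names; it omits the global phase factor <italic>e</italic><sup><italic>i&#x3c0;</italic></sup> of <italic>U</italic> and acts directly on a computational basis state) applies <italic>U</italic><sub><italic>k</italic></sub> qubit-wise, so the basis state picks up the phase <italic>e</italic><sup>2<italic>&#x3c0;i</italic>&#x394;<italic>&#x3d5;</italic>&#x2211;<sub><italic>k</italic></sub><italic>w</italic><sub><italic>k</italic></sub><italic>v</italic><sub><italic>k</italic></sub></sup> with <italic>v</italic><sub><italic>k</italic></sub> &#x3d; &#xb1;1:

```python
import numpy as np

def Uk(wk, dphi):
    # Single-qubit diagonal gate of Eq. 19: phase -2*pi*wk*dphi on |0>,
    # +2*pi*wk*dphi on |1>.
    return np.diag([np.exp(-2j * np.pi * wk * dphi),
                    np.exp(2j * np.pi * wk * dphi)])

def apply_U(w, bits, dphi):
    # Apply U_1 (x) ... (x) U_n to the computational basis state |bits>;
    # the global phase e^{i*pi} of the paper's U is omitted here.
    state = np.array([1.0 + 0j])
    for wk, b in zip(w, bits):
        e = np.zeros(2, dtype=complex)
        e[b] = 1.0
        state = np.kron(state, Uk(wk, dphi) @ e)
    return state
```

Each basis state is thus an eigenvector of the gate product, with an eigenphase linear in the weighted sum of the encoded &#xb1;1 values.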
<p>From <xref ref-type="disp-formula" rid="e13">Eqs. 13</xref>&#x2013;<xref ref-type="disp-formula" rid="e19">19</xref>:<disp-formula id="e20">
<mml:math id="m50">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>J</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>J</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>J</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>J</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>J</mml:mi>
<mml:mi>&#x3c6;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>J</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(20)</label>
</disp-formula>Finally, the inverse quantum Fourier transform yields the estimated phase <inline-formula id="inf31">
<mml:math id="m51">
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>&#x3c6;</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo stretchy="false">&#x232a;</mml:mo>
</mml:math>
</inline-formula>:<disp-formula id="equ1">
<mml:math id="m52">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>J</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>J</mml:mi>
<mml:mi>&#x3c6;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>J</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mrow>
<mml:mover>
<mml:mrow>
<mml:mo>&#x2192;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>Q</mml:mi>
<mml:mi>F</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mrow>
</mml:mover>
</mml:mrow>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>&#x3c6;</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:mo>&#x2297;</mml:mo>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
</disp-formula>
</p>
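For small <italic>&#x3c4;</italic> this phase-estimation step can be simulated classically. The sketch below (an illustration with our own function name; the FFT realizes the inverse QFT matrix up to normalization, since we act on the amplitude vector directly) prepares the amplitudes <italic>e</italic><sup>2<italic>&#x3c0;iJ&#x3c6;</italic></sup>/&#x221a;2<sup><italic>&#x3c4;</italic></sup> of Eq. 20 and recovers the <italic>&#x3c4;</italic>-bit estimate of <italic>&#x3c6;</italic>:

```python
import numpy as np

def estimate_phase(phi, tau):
    # Classical simulation of tau-qubit phase estimation: the register holds
    # amplitudes e^{2*pi*i*J*phi}/sqrt(2^tau); the inverse QFT has matrix
    # elements e^{-2*pi*i*J*k/2^tau}/sqrt(2^tau), which applied to the
    # amplitude vector is an ordinary DFT up to normalization.
    N = 2 ** tau
    J = np.arange(N)
    amps = np.exp(2j * np.pi * J * phi) / np.sqrt(N)
    out = np.fft.fft(amps) / np.sqrt(N)
    probs = np.abs(out) ** 2
    return int(np.argmax(probs)) / N
```

When <italic>&#x3c6;</italic> is an exact multiple of 1/2<sup><italic>&#x3c4;</italic></sup> the estimate is exact; otherwise the most probable outcome is the nearest <italic>&#x3c4;</italic>-bit fraction.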
</sec>
<sec id="s3-3">
<title>3.3 Obtaining parameter information using quantum perception</title>
<p>The connection between the HNN and the perceptron model was described above. We now explain how the weight matrix of the HNN can be designed using the quantum perceptron. First, <italic>&#x3c3;</italic> &#x3d; (<bold>
<italic>v</italic>
</bold>, <bold>
<italic>u</italic>
</bold>) is input to the quantum perceptron model as the initial parameter, and the update rule of the quantum perceptron is as follows:<disp-formula id="e21">
<mml:math id="m53">
<mml:mtable class="align" columnalign="left">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mi>U</mml:mi>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mo>&#x2297;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mrow>
<mml:mi>U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mo>&#x2297;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>u</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>&#x3d5;</mml:mi>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi>u</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msup>
<mml:msubsup>
<mml:mrow>
<mml:mo>&#x2297;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>&#x3d5;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msup>
<mml:msubsup>
<mml:mrow>
<mml:mo>&#x2297;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>&#x3d5;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msup>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(21)</label>
</disp-formula>From the above equation, it can be deduced that &#x7c;<italic>&#x3c3;</italic>&#x27e9; is an eigenvector of the matrix <italic>U</italic> and <italic>e</italic>
<sup>2<italic>&#x3c0;i</italic>&#x394;<italic>&#x3d5;h</italic>(<italic>u</italic>,<italic>v</italic>)</sup> is the corresponding eigenvalue. By choosing an appropriate value of <italic>&#x3c4;</italic> in the quantum perceptron, applying the inverse Fourier transform gives:<disp-formula id="e22">
<mml:math id="m54">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="true">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>J</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mi>J</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>J</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
<mml:mo>&#x2192;</mml:mo>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>&#x3c6;</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2297;</mml:mo>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo stretchy="false">&#x232a;</mml:mo>
</mml:math>
<label>(22)</label>
</disp-formula>This yields an estimate that is very close to the true phase and approaches it as <italic>&#x3c4;</italic> increases. Combining with <xref ref-type="disp-formula" rid="e19">Eq. 19</xref> gives:<disp-formula id="e23">
<mml:math id="m55">
<mml:mi>U</mml:mi>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mfenced open="|" close="&#x27e9;">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
<mml:mi>&#x3b8;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.5</mml:mn>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>&#x3d5;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2208;</mml:mo>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mn>0,1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(23)</label>
</disp-formula>Therefore the value of <italic>&#x3c3;</italic> &#x3d; (<bold>
<italic>v</italic>
</bold>, <bold>
<italic>u</italic>
</bold>) can be obtained by <inline-formula id="inf32">
<mml:math id="m56">
<mml:msup>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>&#x3c6;</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:math>
</inline-formula>. According to [], the weight update rule of the perceptron model is:<disp-formula id="e24">
<mml:math id="m57">
<mml:msub>
<mml:mrow>
<mml:mi>u</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3be;</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2254;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>u</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3be;</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2254;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>u</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3be;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x3b7;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x2212;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:msubsup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x2212;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:msubsup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(24)</label>
</disp-formula>where <inline-formula id="inf33">
<mml:math id="m58">
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>sgn</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold-italic">u</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>&#x3be;</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b7;</mml:mi>
</mml:math>
</inline-formula> is the learning rate. However, when training with a perceptron it is difficult to guarantee that the data are linearly separable. Therefore, our perceptron model is trained with the delta rule, i.e. a gradient descent algorithm that searches the space of possible weight vectors for the one that best fits the samples. This process is implemented with the aid of a classical computer. Its weight update rule takes the same form as <xref ref-type="disp-formula" rid="e24">Eq. 24</xref>, except that <bold>
<italic>Y</italic>
</bold>
<sup>
<italic>q</italic>
</sup> &#x3d; <bold>
<italic>u</italic>
</bold>(<italic>&#x3be;</italic>)<bold>
<italic>&#x3c3;</italic>
</bold>
<sup>
<italic>q</italic>
</sup>.</p>
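<p>To make the delta rule concrete, the following is a minimal classical sketch in NumPy; the function and variable names (delta_rule_step, eta) and the toy data are our own illustrative choices, not the authors&#x2019; code.</p>

```python
import numpy as np

def delta_rule_step(w, x, y_target, eta=0.1):
    """One delta-rule (gradient-descent) update for a linear unit.

    eta plays the role of the learning rate in the update rule above:
    the weights move by eta * (target - actual output) * input.
    """
    y = np.dot(w, x)                     # linear output for sample x
    return w + eta * (y_target - y) * x

# Toy usage: repeat the update until the output approaches the target.
w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])
for _ in range(100):
    w = delta_rule_step(w, x, y_target=1.0)
print(np.dot(w, x))  # converges towards 1.0
```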
</sec>
<sec id="s3-4">
<title>3.4 Computational complexity analysis</title>
<p>We analyse the computational complexity of the HNN in two steps: 1) the growth of the data to be trained after the HNN is converted into the perceptron model; 2) the computational complexity required to obtain the weight parameters by means of the quantum phase estimation algorithm. For 1), any HNN with <italic>n</italic> nodes satisfying the requirements of <xref ref-type="sec" rid="s3-1">Section 3.1</xref> can be converted into a perceptron model with <italic>n</italic>(<italic>n</italic> &#x2212; 1)/2 weight parameters. For 2), we consider two different algorithms for finding the weight parameters, namely the gradient-descent-based algorithm and the Grover fast weight-search algorithm. The time complexity of the gradient-descent-based algorithm is mainly determined by the number of steps needed to reach the accuracy <italic>&#x25b;</italic>, i.e. O &#x221d; <italic>&#x25b;</italic><sup>2</sup>; the time complexity of finding the parameters using the Grover algorithm can reach <italic>O</italic>(<italic>n</italic>) under certain conditions. It is clear from this analysis that the final computational complexity is <inline-formula id="inf34">
<mml:math id="m59">
<mml:mi>O</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">&#x3d2;</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula>, regardless of the algorithm used. However, quantum machine learning can process information using quantum effects: in this paper, we input the training set into the quantum perceptron model as a superposition of feature vectors, so that all samples are processed simultaneously, and this step is unaffected by the size of the model. The benefit of this step is small when the model is small, but becomes more apparent as the model grows, eventually dominating the overall computational complexity.</p>
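<p>The parameter count of the converted perceptron model can be checked directly; the helper below is purely illustrative.</p>

```python
def num_weights(n):
    # A converted HNN with n nodes has one weight per unordered node
    # pair (symmetric weights, zero self-connections): n*(n-1)/2.
    return n * (n - 1) // 2

print(num_weights(3))    # 3, as in the 3-node example of Section 4.2
print(num_weights(100))  # 4950
```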
</sec>
</sec>
<sec id="s4">
<title>4 Emulation analysis</title>
<p>The two most important applications of HNN are data matching and data recovery, which correspond to the accuracy of the HNN&#x2019;s weight matrix and to its memory capacity, respectively. The convergence speed of the HNN model is extremely important in both. To this end, we designed three experiments, namely a non-orthogonal simple matrix recovery test, a random-binary incomplete matrix recovery test, and a memory capacity test based on the recognizability of QR codes, to compare the effectiveness of our proposed improved HNN with that of the classical HNN; finally, we added a model convergence speed comparison experiment to compare the performance differences between the models.</p>
<p>Our simulation analysis is based on the PennyLane open-source framework. The framework embeds conversion routines between quantum and classical algorithms as well as parameter optimisation algorithms, eliminating the need for us to package the parameters and design the optimisation algorithms separately. With this framework, the measured weight parameters are iteratively updated by a gradient descent algorithm, and the relevant information is fed back into the quantum algorithm to update the perceptron weights. On this basis, we designed the following simulation experiments.</p>
<sec id="s4-1">
<title>4.1 Result</title>
<p>In the non-orthogonal simple matrix memory test, we demonstrate that our proposed solution can effectively cope with the memory confusion caused by non-orthogonal simple matrices. In the fragmented data recovery test, we demonstrate that, within the effective interval, our proposed QP-HNN improves the average recovery rate by 30.6%, and by up to 49.1%, compared with the Hebbian-rule Hopfield network (HR-HNN), making it more practical. In the memory stress test based on QR code recognisability, our proposed QP-HNN is 2.25 times more effective than HR-HNN.</p>
</sec>
<sec id="s4-2">
<title>4.2 Non-orthogonal simple matrix memory test</title>
<p>The non-orthogonal simple matrix memory test targets the Hebbian rule in the classical HNN: one of the prerequisites for designing the weight matrix with the Hebbian rule is that the input vectors be mutually orthogonal, and if they are not, the resulting weight matrix may be incorrect. We demonstrate the impact of this deficiency using two non-orthogonal 3D row vectors <italic>X</italic>
<sub>
<italic>v</italic>
</sub> &#x3d; [0, 1, 0] and <italic>X</italic>
<sub>
<italic>&#x3d1;</italic>
</sub> &#x3d; [1, 1, 1] as the input matrices for HR-HNN and QP-HNN, as shown in <xref ref-type="fig" rid="F6">Figure 6</xref>, where the trained weight matrix <bold>
<italic>W</italic>
</bold>
<sub>
<italic>HR</italic>
</sub> &#x3d; [[0, 1, 1] [1, 0, 1] [1, 1, 0]] for HR-HNN, the weight matrix <bold>
<italic>W</italic>
</bold>
<sub>
<italic>QP</italic>
</sub> &#x3d; [[0, 0.5, 0.3] [0.5, 0, 0] [0, 0, 0.2]] for QP-HNN and the threshold matrix <bold>
<italic>&#x3b8;</italic>
</bold>
<sub>
<italic>QP</italic>
</sub> &#x3d; [0.6, &#x2212;0.1, 0.2].</p>
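<p>The HR-HNN weight matrix quoted above is exactly what the Hebbian outer-product rule with a zeroed diagonal yields for these two input vectors; the short NumPy check below is our own illustration, not the authors&#x2019; code.</p>

```python
import numpy as np

X_v = np.array([0, 1, 0])      # first stored pattern
X_theta = np.array([1, 1, 1])  # second stored pattern

# Hebbian rule: sum the outer products of the stored patterns and
# zero the diagonal (no self-connections).
W_HR = np.outer(X_v, X_v) + np.outer(X_theta, X_theta)
np.fill_diagonal(W_HR, 0)
print(W_HR)  # [[0 1 1] [1 0 1] [1 1 0]], matching the matrix above
```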
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>Non-orthogonal simple matrix memory test.</p>
</caption>
<graphic xlink:href="fphy-10-1079624-g006.tif"/>
</fig>
</sec>
<sec id="s4-3">
<title>4.3 Random binary-based incomplete matrix recovery test</title>
<p>In this subsection, we test and compare the recoverability of three different HNNs: ClassicalPerceptron-Hopfield (CP-HNN), QP-HNN, and HR-HNN. First, a random number generator was used to generate 100 binary matrices of size 60 &#xd7; 60, <inline-formula id="inf35">
<mml:math id="m60">
<mml:mi>M</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="{" close="}">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mi>r</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mi>r</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2026;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2026;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mi>r</mml:mi>
<mml:mn>99</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mi>r</mml:mi>
<mml:mn>100</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula>, and a different number of binary matrices <italic>M</italic>
<sub>
<italic>&#x3b9;</italic>
</sub>, <italic>&#x3b9;</italic> &#x2208; {1, 2 &#x2026; 100}, were randomly selected from <italic>M</italic> as the weight-training matrices, and the weight matrices were designed using QuantumPerceptron, ClassicalPerceptron, and HebbianRule, respectively. A matrix <italic>M</italic>
<sub>
<italic>bri</italic>
</sub> was selected from <italic>M</italic>
<sub>
<italic>&#x3b9;</italic>
</sub> and generated <inline-formula id="inf36">
<mml:math id="m61">
<mml:msubsup>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>. One-third of the data in <inline-formula id="inf37">
<mml:math id="m62">
<mml:msubsup>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula> was inverted to simulate data residuals, and this matrix was used as the input matrix for the network to test the recovery rate of the above three HNNs. The example model is shown in <xref ref-type="fig" rid="F7">Figure 7</xref>, and the recovery rate for different numbers of memorised binary matrices is shown in <xref ref-type="fig" rid="F8">Figure 8</xref>.</p>
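<p>The damage model of this test (inverting one-third of the entries of a stored binary matrix) can be sketched as follows; the function name corrupt and the fixed random seeds are our own illustrative choices.</p>

```python
import numpy as np

def corrupt(pattern, fraction=1/3, seed=None):
    """Invert a given fraction of the entries of a binary (0/1)
    matrix to simulate data residuals."""
    rng = np.random.default_rng(seed)
    flat = pattern.copy().ravel()
    k = round(flat.size * fraction)
    idx = rng.choice(flat.size, size=k, replace=False)
    flat[idx] = 1 - flat[idx]          # flip the selected bits
    return flat.reshape(pattern.shape)

# A 60 x 60 binary matrix, as in the test set M.
M_bri = np.random.default_rng(0).integers(0, 2, size=(60, 60))
M_damaged = corrupt(M_bri, seed=1)
print((M_bri != M_damaged).mean())  # 0.333..., one third of the pixels differ
```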
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption>
<p>Example model of fragmented data recovery.</p>
</caption>
<graphic xlink:href="fphy-10-1079624-g007.tif"/>
</fig>
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption>
<p>Diagram corresponding to the number of binary matrices and the recovery rate.</p>
</caption>
<graphic xlink:href="fphy-10-1079624-g008.tif"/>
</fig>
<p>From <xref ref-type="fig" rid="F8">Figure 8</xref>, it can be seen that the resilience of the HR-HNN network decreases rapidly as <italic>&#x3b9;</italic> becomes larger: a larger <italic>&#x3b9;</italic> means that the orthogonality between the matrices in <italic>M</italic>
<sub>
<italic>&#x3b9;</italic>
</sub> decreases, and consequently memory confusion ensues. The resilience of the HR-HNN network essentially fails at <italic>&#x3b9;</italic> &#x3d; 20 and is completely lost at <italic>&#x3b9;</italic> &#x3d; 30. The ClassicalPerceptron-Hopfield (CP-HNN) network is highly similar to the QP-HNN network in terms of resilience and shows excellent robustness in the early and middle stages of <italic>&#x3b9;</italic> growth, because these networks also train the threshold. This is equivalent to enlarging the error-tolerance space and mitigating errors due to the non-orthogonality of the vectors in the matrix. As can be seen from the figure, these networks are still very resilient at <italic>&#x3b9;</italic> &#x3d; 20. However, as <italic>&#x3b9;</italic> increases further, the fault-tolerance space becomes saturated and the resilience decreases rapidly until it fails.</p>
</sec>
<sec id="s4-4">
<title>4.4 Memory capacity test based on the recognizability of QR codes</title>
<p>In order to visualise the memory capacity of the models, the differences between them are presented using QR codes. QR codes have several fault-tolerance levels, each specifying the proportion of erroneous pixels that can be tolerated in the code. For our tests we used fault-tolerance level L, which allows a maximum of 7% incorrect pixels.</p>
<p>The QR code <italic>q</italic>
<sub>1</sub> is generated and stored as the &#x201c;successful identification&#x201d; reference, generating a QR code set <inline-formula id="inf38">
<mml:math id="m63">
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="{" close="}">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2,3,4,5</mml:mn>
<mml:mo>&#x2026;</mml:mo>
</mml:math>
</inline-formula>. The information in <italic>q</italic>
<sub>
<italic>n</italic>
</sub> is an irregular string of numbers generated by a random number generator; <italic>m</italic> randomly selected QR codes from <italic>Q</italic> are used as interfering QR codes, and <italic>q</italic>
<sub>1</sub> is involved in the design of the HR-HNN and QP-HNN weight matrices. After 100 tests and statistical processing, the output matrix of HR-HNN can be successfully recognised when <italic>m</italic> &#x2264; 4, while the output matrix of QP-HNN can be successfully recognised when <italic>m</italic> &#x2264; 8. In <xref ref-type="fig" rid="F9">Figure 9</xref> we show a comparison of these two HNNs in terms of memory capacity.</p>
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption>
<p>QP-HNN compared to HR-HNN memory capacity.</p>
</caption>
<graphic xlink:href="fphy-10-1079624-g009.tif"/>
</fig>
</sec>
<sec id="s4-5">
<title>4.5 HNN recovery rate test</title>
<p>The usability of HNN is also affected by the number of iterations required for the model to converge, which in turn is affected by the completeness of the weights, threshold information and input data. Therefore, building on the previous subsection, we further investigate the number of iterations required for <italic>q</italic>&#x2032; to recover to the state <inline-formula id="inf39">
<mml:math id="m64">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula>, in which the information can be correctly identified, for different numbers of interfering QR codes, as shown in subplots <italic>a</italic> and <italic>b</italic> of <xref ref-type="fig" rid="F10">Figure 10</xref>. Subplot <italic>c</italic> shows the difference in the number of iterations required for <italic>q</italic>&#x2032; to recover to <inline-formula id="inf40">
<mml:math id="m65">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> with the same amount of information. As can be seen from the figure, QP-HNN possesses a significant advantage over HR-HNN for the <italic>q</italic>&#x2032; to <inline-formula id="inf41">
<mml:math id="m66">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> process, and this advantage becomes more pronounced as <italic>m</italic> grows.</p>
<fig id="F10" position="float">
<label>FIGURE 10</label>
<caption>
<p>HR-HNN and QP-HNN convergence and resilience tests.</p>
</caption>
<graphic xlink:href="fphy-10-1079624-g010.tif"/>
</fig>
<p>
<xref ref-type="table" rid="T1">Table 1</xref> reports the recovery-capacity limit of each HNN under a preset upper limit of 30,000 iterations. HR-HNN reaches its memory limit at <italic>m</italic> &#x3d; 4, i.e. at <italic>m</italic> &#x3d; 5, <italic>q</italic>&#x2032; cannot recover to <inline-formula id="inf42">
<mml:math id="m67">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> even if the number of iterations is increased, while in QP-HNN, the memory limit occurs at <italic>m</italic> &#x3d; 8.</p>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Percentage of information required for recovery.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">&#x3c; <italic>imgsrc</italic> &#x3d; <italic>&#x201c;FPHY</italic>
<sub>2</sub>022&#x2013;1079624<sub>
<italic>a</italic>
</sub>
<italic>rt</italic>0<italic>x</italic>.<italic>png&#x201d;alt</italic> &#x3d; <italic>&#x201c;PICT&#x201d;</italic> &#x3e; &#x3c;/<italic>img</italic> &#x3e;</th>
<th align="left">1</th>
<th align="left">2</th>
<th align="left">3</th>
<th align="left">4</th>
<th align="left">5</th>
<th align="left">6</th>
<th align="left">7</th>
<th align="left">8</th>
<th align="left">9</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">QP-HNN</td>
<td align="left">12%</td>
<td align="left">14%</td>
<td align="left">18%</td>
<td align="left">23%</td>
<td align="left">31%</td>
<td align="left">42%</td>
<td align="left">57%</td>
<td align="left">73%</td>
<td align="left">92%</td>
</tr>
<tr>
<td align="left">HR-HNN</td>
<td align="left">7%</td>
<td align="left">24%</td>
<td align="left">51%</td>
<td align="left">86%</td>
<td align="left">-</td>
<td align="left">-</td>
<td align="left">-</td>
<td align="left">-</td>
<td align="left">-</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec sec-type="conclusion" id="s5">
<title>5 Conclusion</title>
<p>We improve the original HNN weight design method by using a quantum perceptron instead of the Hebbian rule. The improved QP-HNN can better handle non-orthogonal matrices, and its information memory and recovery capabilities as well as model convergence speed are significantly improved compared to HR-HNN. It also opens up the possibility of further expanding the scope of applications in areas such as virus information recognition, human brain simulation, and error correction of quantum noise.</p>
<p>Our improved scheme is based on the proposed quantum perceptron model, which allows us to input all the data to be processed into the model simultaneously by transforming and preparing them as quantum entangled states. The current model is still a hybrid quantum-classical one, in which the optimal weight parameters are found by a classical computer; however, Kapoor et al. have shown that the weight parameters can be found much faster using the Grover algorithm, considerably increasing the efficiency of the weight search and compensating for the extra time that determining the weights costs compared with the Hebbian rule. Corresponding quantum models of HNNs already exist, and the combination of quantum perceptrons and quantum HNNs on pure quantum computers promises to be even more desirable than classical HNNs.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="s6">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="s7">
<title>Author contributions</title>
<p>ZS: Conceptualization, Methodology, Software, Writing&#x2014;Original draft preparation. YQ: Data curation, Writing&#x2014;Original draft preparation. ML: Visualization, Investigation. JL: Supervision, Writing&#x2014;Review and Editing. HM: Supervision, Writing&#x2014;Review and Editing, Project administration, Funding acquisition.</p>
</sec>
<sec id="s8">
<title>Funding</title>
<p>This work was supported by the National Natural Science Foundation of China (Grant No. 61772295), the Natural Science Foundation of Shandong Province, China (Grant Nos. ZR2021MF049 and ZR2019YQ01), and the Project of Shandong Provincial Natural Science Foundation Joint Fund (ZR202108020011).</p>
</sec>
<sec sec-type="COI-statement" id="s9">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<label>1.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carleo</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Cirac</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Cranmer</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Daudet</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Schuld</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Tishby</surname>
<given-names>N</given-names>
</name>
<etal/>
</person-group> <article-title>Machine learning and the physical sciences</article-title>. <source>Rev Mod Phys</source> (<year>2019</year>) <volume>91</volume>:<fpage>045002</fpage>. <pub-id pub-id-type="doi">10.1103/RevModPhys.91.045002</pub-id>
</citation>
</ref>
<ref id="B2">
<label>2.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liakos</surname>
<given-names>KG</given-names>
</name>
<name>
<surname>Busato</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Moshou</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Pearson</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Bochtis</surname>
<given-names>D</given-names>
</name>
</person-group>. <article-title>Machine learning in agriculture: A review</article-title>. <source>Sensors</source> (<year>2018</year>) <volume>18</volume>:<fpage>2674</fpage>. <pub-id pub-id-type="doi">10.3390/s18082674</pub-id>
</citation>
</ref>
<ref id="B3">
<label>3.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pinter</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Felde</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Mosavi</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Ghamisi</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Gloaguen</surname>
<given-names>R</given-names>
</name>
</person-group>. <article-title>Covid-19 pandemic prediction for Hungary; a hybrid machine learning approach</article-title>. <source>Mathematics</source> (<year>2020</year>) <volume>8</volume>:<fpage>890</fpage>. <pub-id pub-id-type="doi">10.3390/math8060890</pub-id>
</citation>
</ref>
<ref id="B4">
<label>4.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dral</surname>
<given-names>PO</given-names>
</name>
</person-group>. <article-title>Quantum chemistry in the age of machine learning</article-title>. <source>J Phys Chem Lett</source> (<year>2020</year>) <volume>11</volume>:<fpage>2336</fpage>&#x2013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1021/acs.jpclett.9b03664</pub-id>
</citation>
</ref>
<ref id="B5">
<label>5.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Radovic</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Williams</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Rousseau</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Kagan</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Bonacorsi</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Himmel</surname>
<given-names>A</given-names>
</name>
<etal/>
</person-group> <article-title>Machine learning at the energy and intensity frontiers of particle physics</article-title>. <source>Nature</source> (<year>2018</year>) <volume>560</volume>:<fpage>41</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-018-0361-2</pub-id>
</citation>
</ref>
<ref id="B6">
<label>6.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Haug</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Dumke</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Kwek</surname>
<given-names>L-C</given-names>
</name>
<name>
<surname>Miniatura</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Amico</surname>
<given-names>L</given-names>
</name>
</person-group>. <article-title>Machine-learning engineering of quantum currents</article-title>. <source>Phys Rev Res</source> (<year>2021</year>) <volume>3</volume>:<fpage>013034</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevResearch.3.013034</pub-id>
</citation>
</ref>
<ref id="B7">
<label>7.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Fei</surname>
<given-names>SM</given-names>
</name>
</person-group>. <article-title>Einstein-podolsky-rosen steering based on semisupervised machine learning</article-title>. <source>Phys Rev A (Coll Park)</source> (<year>2021</year>) <volume>104</volume>:<fpage>052427</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevA.104.052427</pub-id>
</citation>
</ref>
<ref id="B8">
<label>8.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jasinski</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Montaner</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Forrey</surname>
<given-names>RC</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>BH</given-names>
</name>
<name>
<surname>Stancil</surname>
<given-names>PC</given-names>
</name>
<name>
<surname>Balakrishnan</surname>
<given-names>N</given-names>
</name>
<etal/>
</person-group> <article-title>Machine learning corrected quantum dynamics calculations</article-title>. <source>Phys Rev Res</source> (<year>2020</year>) <volume>2</volume>:<fpage>032051</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevResearch.2.032051</pub-id>
</citation>
</ref>
<ref id="B9">
<label>9.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jumper</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Pritzel</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Green</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Figurnov</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Ronneberger</surname>
<given-names>O</given-names>
</name>
<etal/>
</person-group> <article-title>Highly accurate protein structure prediction with alphafold</article-title>. <source>Nature</source> (<year>2021</year>) <volume>596</volume>:<fpage>583</fpage>&#x2013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-021-03819-2</pub-id>
</citation>
</ref>
<ref id="B10">
<label>10.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rosenblatt</surname>
<given-names>F</given-names>
</name>
</person-group>. <article-title>The perceptron: A probabilistic model for information storage and organization in the brain</article-title>. <source>Psychol Rev</source> (<year>1958</year>) <volume>65</volume>:<fpage>386</fpage>&#x2013;<lpage>408</lpage>. <pub-id pub-id-type="doi">10.1037/h0042519</pub-id>
</citation>
</ref>
<ref id="B11">
<label>11.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhou</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Gong</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>G</given-names>
</name>
</person-group>. <article-title>Quantum image encryption scheme with iterative generalized arnold transforms and quantum image cycle shift operations</article-title>. <source>Quan Inf Process</source> (<year>2017</year>) <volume>16</volume>:<fpage>164</fpage>&#x2013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1007/s11128-017-1612-0</pub-id>
</citation>
</ref>
<ref id="B12">
<label>12.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yi Nuo</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Zhao Yang</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Yu Lin</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Nan</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Hong Yang</surname>
<given-names>M</given-names>
</name>
</person-group>. <article-title>Color image encryption algorithm based on dna code and alternating quantum random walk</article-title>. <source>Acta Phys Sin</source> (<year>2021</year>) <volume>70</volume>:<fpage>230302</fpage>&#x2013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.7498/aps.70.20211255</pub-id>
</citation>
</ref>
<ref id="B13">
<label>13.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ye</surname>
<given-names>TY</given-names>
</name>
<name>
<surname>Geng</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>TJ</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Y</given-names>
</name>
</person-group>. <article-title>Efficient semiquantum key distribution based on single photons in both polarization and spatial-mode degrees of freedom</article-title>. <source>Quan Inf Process</source> (<year>2022</year>) <volume>21</volume>:<fpage>123</fpage>&#x2013;<lpage>1</lpage>. <pub-id pub-id-type="doi">10.1007/s11128-022-03457-1</pub-id>
</citation>
</ref>
<ref id="B14">
<label>14.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>HY</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>ZW</given-names>
</name>
<name>
<surname>Fan</surname>
<given-names>XK</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>SM</given-names>
</name>
</person-group>. <article-title>The routing communication protocol for small quantum network based on quantum error correction code</article-title>. <source>Acta Electonica Sinica</source> (<year>2015</year>) <volume>43</volume>:<fpage>171</fpage>. <pub-id pub-id-type="doi">10.3969/j.issn.0372-2112.2015.01.027</pub-id>
</citation>
</ref>
<ref id="B15">
<label>15.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhou</surname>
<given-names>NR</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>KN</given-names>
</name>
<name>
<surname>Zou</surname>
<given-names>XF</given-names>
</name>
</person-group>. <article-title>Multi-party semi-quantum key distribution protocol with four-particle cluster states</article-title>. <source>Annalen der Physik</source> (<year>2019</year>) <volume>531</volume>:<fpage>1800520</fpage>. <pub-id pub-id-type="doi">10.1002/andp.201800520</pub-id>
</citation>
</ref>
<ref id="B16">
<label>16.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ye</surname>
<given-names>TY</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>HK</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>JL</given-names>
</name>
</person-group>. <article-title>Semi-quantum key distribution with single photons in both polarization and spatial-mode degrees of freedom</article-title>. <source>Int J Theor Phys (Dordr)</source> (<year>2020</year>) <volume>59</volume>:<fpage>2807</fpage>&#x2013;<lpage>15</lpage>. <pub-id pub-id-type="doi">10.1007/s10773-020-04540-y</pub-id>
</citation>
</ref>
<ref id="B17">
<label>17.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sheng</surname>
<given-names>YB</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Long</surname>
<given-names>GL</given-names>
</name>
</person-group>. <article-title>One-step quantum secure direct communication</article-title>. <source>Sci Bull</source> (<year>2022</year>) <volume>67</volume>:<fpage>367</fpage>&#x2013;<lpage>74</lpage>. <pub-id pub-id-type="doi">10.1016/j.scib.2021.11.002</pub-id>
</citation>
</ref>
<ref id="B18">
<label>18.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Noiri</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Takeda</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Nakajima</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Kobayashi</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Sammak</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Scappucci</surname>
<given-names>G</given-names>
</name>
<etal/>
</person-group> <article-title>Fast universal quantum gate above the fault-tolerance threshold in silicon</article-title>. <source>Nature</source> (<year>2022</year>) <volume>601</volume>:<fpage>338</fpage>&#x2013;<lpage>42</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-021-04182-y</pub-id>
</citation>
</ref>
<ref id="B19">
<label>19.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lloyd</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Mohseni</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Rebentrost</surname>
<given-names>P</given-names>
</name>
</person-group>. <article-title>Quantum principal component analysis</article-title>. <source>Nat Phys</source> (<year>2014</year>) <volume>10</volume>:<fpage>631</fpage>&#x2013;<lpage>3</lpage>. <pub-id pub-id-type="doi">10.1038/nphys3029</pub-id>
</citation>
</ref>
<ref id="B20">
<label>20.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Du</surname>
<given-names>J</given-names>
</name>
</person-group>. <article-title>Experimental realization of a quantum support vector machine</article-title>. <source>Phys Rev Lett</source> (<year>2015</year>) <volume>114</volume>:<fpage>140504</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.114.140504</pub-id>
</citation>
</ref>
<ref id="B21">
<label>21.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Low</surname>
<given-names>GH</given-names>
</name>
<name>
<surname>Yoder</surname>
<given-names>TJ</given-names>
</name>
<name>
<surname>Chuang</surname>
<given-names>IL</given-names>
</name>
</person-group>. <article-title>Quantum inference on Bayesian networks</article-title>. <source>Phys Rev A (Coll Park)</source> (<year>2014</year>) <volume>89</volume>:<fpage>062315</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevA.89.062315</pub-id>
</citation>
</ref>
<ref id="B22">
<label>22.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dong</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Tarn</surname>
<given-names>T-J</given-names>
</name>
</person-group>. <article-title>Quantum reinforcement learning</article-title>. <source>IEEE Trans Syst Man Cybern B</source> (<year>2008</year>) <volume>38</volume>:<fpage>1207</fpage>&#x2013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1109/TSMCB.2008.925743</pub-id>
</citation>
</ref>
<ref id="B23">
<label>23.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhou</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>TF</given-names>
</name>
<name>
<surname>Xie</surname>
<given-names>XW</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>JY</given-names>
</name>
</person-group>. <article-title>Hybrid quantum&#x2013;classical generative adversarial networks for image generation via learning discrete distribution</article-title>. <source>Signal Processing: Image Commun</source> (<year>2022</year>) <volume>2022</volume>:<fpage>116891</fpage>. <pub-id pub-id-type="doi">10.1016/j.image.2022.116891</pub-id>
</citation>
</ref>
<ref id="B24">
<label>24.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Biamonte</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Wittek</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Pancotti</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Rebentrost</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Wiebe</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Lloyd</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>Quantum machine learning</article-title>. <source>Nature</source> (<year>2017</year>) <volume>549</volume>:<fpage>195</fpage>&#x2013;<lpage>202</lpage>. <pub-id pub-id-type="doi">10.1038/nature23474</pub-id>
</citation>
</ref>
<ref id="B25">
<label>25.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Harrow</surname>
<given-names>AW</given-names>
</name>
<name>
<surname>Hassidim</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Lloyd</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>Quantum algorithm for linear systems of equations</article-title>. <source>Phys Rev Lett</source> (<year>2009</year>) <volume>103</volume>:<fpage>150502</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.103.150502</pub-id>
</citation>
</ref>
<ref id="B26">
<label>26.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kapoor</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Wiebe</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Svore</surname>
<given-names>K</given-names>
</name>
</person-group>. <article-title>Quantum perceptron models</article-title>. <source>Adv Neural Inf Process Syst</source> (<year>2016</year>) <volume>29</volume>.</citation>
</ref>
<ref id="B27">
<label>27.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Weinstein</surname>
<given-names>YS</given-names>
</name>
<name>
<surname>Pravia</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Fortunato</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Lloyd</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Cory</surname>
<given-names>DG</given-names>
</name>
</person-group>. <article-title>Implementation of the quantum Fourier transform</article-title>. <source>Phys Rev Lett</source> (<year>2001</year>) <volume>86</volume>:<fpage>1889</fpage>&#x2013;<lpage>91</lpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.86.1889</pub-id>
</citation>
</ref>
<ref id="B28">
<label>28.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schuld</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sinayskiy</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Petruccione</surname>
<given-names>F</given-names>
</name>
</person-group>. <article-title>Simulating a perceptron on a quantum computer</article-title>. <source>Phys Lett A</source> (<year>2015</year>) <volume>379</volume>:<fpage>660</fpage>&#x2013;<lpage>3</lpage>. <pub-id pub-id-type="doi">10.1016/j.physleta.2014.11.061</pub-id>
</citation>
</ref>
<ref id="B29">
<label>29.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tacchino</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Macchiavello</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Gerace</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Bajoni</surname>
<given-names>D</given-names>
</name>
</person-group>. <article-title>An artificial neuron implemented on an actual quantum processor</article-title>. <source>Npj Quan Inf</source> (<year>2019</year>) <volume>5</volume>:<fpage>26</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1038/s41534-019-0140-4</pub-id>
</citation>
</ref>
<ref id="B30">
<label>30.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hopfield</surname>
<given-names>JJ</given-names>
</name>
</person-group>. <article-title>Neural networks and physical systems with emergent collective computational abilities</article-title>. <source>Proc Natl Acad Sci U S A</source> (<year>1982</year>) <volume>79</volume>:<fpage>2554</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.79.8.2554</pub-id>
</citation>
</ref>
<ref id="B31">
<label>31.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Hebb</surname>
<given-names>DO</given-names>
</name>
</person-group>. <source>The organization of behavior: A neuropsychological theory</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Psychology Press</publisher-name> (<year>2005</year>). <pub-id pub-id-type="doi">10.4324/9781410612403</pub-id>
</citation>
</ref>
<ref id="B32">
<label>32.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wuensche</surname>
<given-names>A</given-names>
</name>
</person-group>. <article-title>Discrete dynamical networks and their attractor basins</article-title>. <source>Complex Syst</source> (<year>1998</year>) <volume>98</volume>:<fpage>3</fpage>&#x2013;<lpage>21</lpage>.</citation>
</ref>
<ref id="B33">
<label>33.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Song</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Tian</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>H</given-names>
</name>
</person-group>. <article-title>Target-generating quantum error correction coding scheme based on generative confrontation network</article-title>. <source>Quan Inf Process</source> (<year>2022</year>) <volume>21</volume>:<fpage>280</fpage>&#x2013;<lpage>17</lpage>. <pub-id pub-id-type="doi">10.1007/s11128-022-03616-4</pub-id>
</citation>
</ref>
<ref id="B34">
<label>34.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Harris</surname>
<given-names>FJ</given-names>
</name>
</person-group>. <article-title>On the use of windows for harmonic analysis with the discrete Fourier transform</article-title>. <source>Proc IEEE</source> (<year>1978</year>) <volume>66</volume>:<fpage>51</fpage>&#x2013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1109/PROC.1978.10837</pub-id>
</citation>
</ref>
<ref id="B35">
<label>35.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dorner</surname>
<given-names>U</given-names>
</name>
<name>
<surname>Demkowicz-Dobrzanski</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>BJ</given-names>
</name>
<name>
<surname>Lundeen</surname>
<given-names>JS</given-names>
</name>
<name>
<surname>Wasilewski</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Banaszek</surname>
<given-names>K</given-names>
</name>
<etal/>
</person-group>. <article-title>Optimal quantum phase estimation</article-title>. <source>Phys Rev Lett</source> (<year>2009</year>) <volume>102</volume>:<fpage>040403</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.102.040403</pub-id>
</citation>
</ref>
</ref-list>
</back>
</article>