<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neuroinform.</journal-id>
<journal-title>Frontiers in Neuroinformatics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neuroinform.</abbrev-journal-title>
<issn pub-type="epub">1662-5196</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fninf.2024.1395916</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Gershgorin circle theorem-based feature extraction for biomedical signal analysis</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Patel</surname> <given-names>Sahaj A.</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2069235/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/validation/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Smith</surname> <given-names>Rachel June</given-names></name>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/funding-acquisition/"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/project-administration/"/>
<role content-type="https://credit.niso.org/contributor-roles/resources/"/>
<role content-type="https://credit.niso.org/contributor-roles/supervision/"/>
<role content-type="https://credit.niso.org/contributor-roles/validation/"/>
<role content-type="https://credit.niso.org/contributor-roles/visualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Yildirim</surname> <given-names>Abidin</given-names></name>
<uri xlink:href="https://loop.frontiersin.org/people/2237698/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/project-administration/"/>
<role content-type="https://credit.niso.org/contributor-roles/resources/"/>
<role content-type="https://credit.niso.org/contributor-roles/supervision/"/>
<role content-type="https://credit.niso.org/contributor-roles/validation/"/>
<role content-type="https://credit.niso.org/contributor-roles/visualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff><institution>Department of Electrical and Computer Engineering, University of Alabama at Birmingham</institution>, <addr-line>Birmingham, AL</addr-line>, <country>United States</country></aff>
<author-notes>
<fn fn-type="edited-by" id="fn0001">
<p>Edited by: Pawel Oswiecimka, Polish Academy of Sciences, Poland</p>
</fn>
<fn fn-type="edited-by" id="fn0002">
<p>Reviewed by: Prasanta Panigrahi, Indian Institute of Science Education and Research Kolkata, India</p>
<p>Sarojini Manju Attili, George Mason University, United States</p>
</fn>
<corresp id="c001">&#x002A;Correspondence: Sahaj A. Patel, <email>sahaj432@uab.edu</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>16</day>
<month>05</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>18</volume>
<elocation-id>1395916</elocation-id>
<history>
<date date-type="received">
<day>04</day>
<month>03</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>02</day>
<month>05</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2024 Patel, Smith and Yildirim.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Patel, Smith and Yildirim</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Recently, graph theory has become a promising tool for biomedical signal analysis, wherein the signals are transformed into a graph network and represented as either adjacency or Laplacian matrices. However, as the size of the time series increases, the dimensions of the transformed matrices also expand, leading to a significant rise in the computational demand for analysis. Therefore, there is a critical need for efficient feature extraction methods with low computational time. This paper introduces a new feature extraction technique based on the Gershgorin Circle theorem applied to biomedical signals, termed Gershgorin Circle Feature Extraction (GCFE). The study makes use of two publicly available datasets: one including synthetic neural recordings, and the other consisting of EEG seizure data. In addition, the efficacy of GCFE is evaluated with two distinct visibility graphs and tested against seven other feature extraction methods. In the GCFE method, the features are extracted from a special modified weighted Laplacian matrix derived from the visibility graphs. This method was applied to classify three different types of neural spikes from one dataset, and to distinguish between seizure and non-seizure events in another. GCFE consistently outperformed the seven other algorithms, achieving a positive average accuracy difference of 2.67% across all experimental datasets. Furthermore, the GCFE method was more computationally efficient than the other feature extraction techniques. The GCFE method can also be employed in real-time biomedical signal classification tasks where visibility graphs are utilized, such as EKG signal classification.</p>
</abstract>
<kwd-group>
<kwd>Gershgorin circle theorem</kwd>
<kwd>visibility graph</kwd>
<kwd>weighted Laplacian matrix</kwd>
<kwd>biomedical signals</kwd>
<kwd>deep learning</kwd>
<kwd>feature extraction</kwd>
</kwd-group>
<counts>
<fig-count count="5"/>
<table-count count="5"/>
<equation-count count="9"/>
<ref-count count="44"/>
<page-count count="11"/>
<word-count count="7775"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>In recent years, there has been a substantial increase in the adoption of non-invasive devices for measuring brain activity, such as electroencephalography (EEG) (<xref ref-type="bibr" rid="ref26">Minguillon et al., 2017</xref>; <xref ref-type="bibr" rid="ref17">He et al., 2023</xref>). The non-invasiveness and high temporal resolution of EEG make it a convenient and essential tool for research and clinical diagnosis of neurological diseases (<xref ref-type="bibr" rid="ref33">Perez-Valero et al., 2021</xref>). EEG is measured by placing electrodes on the scalp, and it provides indispensable insights into the synchronous activity of populations of cortical neurons (<xref ref-type="bibr" rid="ref12">David et al., 2002</xref>). EEG signals can be used to understand the underlying neural dynamics of cognitive, motor, and pathological phenomena (<xref ref-type="bibr" rid="ref34">Rodriguez-Bermudez and Garcia-Laencina, 2015</xref>). For example, EEG signals are used in a wide variety of applications such as neuromarketing (<xref ref-type="bibr" rid="ref11">Costa-Feito et al., 2023</xref>), investigation of sleep architecture (<xref ref-type="bibr" rid="ref15">Gu et al., 2023</xref>), detection of neurodegenerative conditions such as Alzheimer&#x2019;s disease (<xref ref-type="bibr" rid="ref27">Modir et al., 2023</xref>), neurofeedback therapy (<xref ref-type="bibr" rid="ref38">Torres et al., 2023</xref>), and epileptic seizure detection (<xref ref-type="bibr" rid="ref25">Maher et al., 2023</xref>). Over time, various linear and non-linear methods have been developed for extracting distinct features from recorded time series signals. Linear methods of feature extraction encompass families of time-frequency domain techniques such as the Fourier transform, the Wavelet transform, and Empirical Mode Decomposition (<xref ref-type="bibr" rid="ref20">K&#x00F6;rner, 1988</xref>; <xref ref-type="bibr" rid="ref32">Percival and Walden, 2000</xref>). Non-linear methods, on the other hand, involve computations such as Lyapunov exponents and recurrence networks (<xref ref-type="bibr" rid="ref19">Kantz and Schreiber, 2003</xref>; <xref ref-type="bibr" rid="ref10">Campanharo et al., 2008</xref>). As EEG time series signals are inherently non-stationary and noisy, robust time-series analysis techniques are necessary to capture meaningful patterns and features in the signal.</p>
<p>In recent years, graph theory approaches have gained popularity as an alternative to traditional time-frequency domain methods for analyzing brain signals (<xref ref-type="bibr" rid="ref36">Stam and Van Straate, 2012</xref>). Graph networks can reveal non-linear characteristics of non-stationary and chaotic signals. In standard graph theory, a graph consists of sets of nodes and edges, where the nodes represent the samples or data points of a time series, and the edges represent the connections or distances between two data points. In 2006, <xref ref-type="bibr" rid="ref43">Zhang and Small (2006)</xref> introduced the representation of time series data as complex graph networks, revealing chaotic or fractal properties of the time series. In 2008, <xref ref-type="bibr" rid="ref22">Lacasa et al. (2008)</xref> presented the first natural visibility graph (NVG) that converted a time series into a graph network. Unlike standard graphs, which are typically constructed based on predefined relationships between data points, visibility graphs convert each data point in a time series into a node and then connect nodes with an edge if they can &#x2018;see&#x2019; each other, usually determined by a line-of-sight criterion over the time series data. The original NVG, as presented by Lacasa et al., had unweighted edges, meaning it did not consider the varying scales or magnitudes of the time series data&#x2014;this resulted in treating the data univariately. In contrast, standard graphs might not inherently represent temporal or sequential data and are often not designed to handle the dynamic scaling that visibility graphs can accommodate. In 2010, <xref ref-type="bibr" rid="ref2">Ahmadlou et al. (2010)</xref> implemented the first visibility graph on EEG signals for detecting Alzheimer&#x2019;s disease.</p>
<p>Beyond the NVG, several groups have developed different variants of visibility graphs, such as the Horizontal Visibility Graph (HVG) (<xref ref-type="bibr" rid="ref24">Luque et al., 2009</xref>), Weighted Visibility Graph (WVG) (<xref ref-type="bibr" rid="ref37">Supriya et al., 2016</xref>), Limited Penetrable Horizontal Visibility Graph (LPHVG) (<xref ref-type="bibr" rid="ref13">Gao et al., 2016</xref>), and Weighted Dual Perspective Visibility Graph (WDPVG) (<xref ref-type="bibr" rid="ref44">Zheng et al., 2021</xref>). Each of these methods constructs a distinct graph topology based on the provided time series data. To decode and interpret the topological characteristics of these graphs, they are transformed into a matrix form such as the Adjacency matrix or Laplacian matrix. Later, feature extraction and reduction techniques are applied to these matrices. For instance, <xref ref-type="bibr" rid="ref42">Zhang et al. (2022)</xref> used the weighted adjacency matrix as a feature representation for classifying different sleep stages using calcium imaging data. In contrast, <xref ref-type="bibr" rid="ref28">Mohammadpoory et al. (2023)</xref> experimented with various methods to extract features from adjacency matrices such as Graph Index Complexity (GIC), Characteristic Path Length (CPL), Global Efficiency (GE) (<xref ref-type="bibr" rid="ref23">Latora and Marchiori, 2001</xref>), Local Efficiency (LE) (<xref ref-type="bibr" rid="ref23">Latora and Marchiori, 2001</xref>), Clustering Coefficients (CC) (<xref ref-type="bibr" rid="ref35">Saram&#x00E4;ki et al., 2007</xref>), and Assortative Coefficient (AC) (<xref ref-type="bibr" rid="ref5">Artameeyanant et al., 2017</xref>). <xref ref-type="bibr" rid="ref37">Supriya et al. (2016)</xref> took a different approach and calculated two network properties: modularity (<xref ref-type="bibr" rid="ref7">Blondel, 2008</xref>) and an average weighted degree (<xref ref-type="bibr" rid="ref4">Antoniou and Tsompa, 2008</xref>) from the graph. Likewise, <xref ref-type="bibr" rid="ref16">Hao et al. (2016)</xref> classified EEG seizures by measuring the graph&#x2019;s &#x201C;<italic>Average Path Length</italic>&#x201D; and CC.</p>
<p>Although incorporating techniques that extract multiple features simultaneously characterizes the resulting graph more robustly, it also requires more computational time to perform feature extraction. In addition, as the number of samples rises, computational time increases proportionally. Therefore, in real-time EEG signal processing applications, preprocessing and feature extraction methodologies must have low computational cost without compromising accuracy. Driven by this need, this study presents a new feature extraction method with low computational cost for time series in biomedical signal processing. This study utilizes the Gershgorin Circle (GC) theorem (<xref ref-type="bibr" rid="ref14">Gershgorin, 1931</xref>) as the technique for primary feature extraction.</p>
<p>In 1931, mathematician S. A. Gershgorin introduced the Gershgorin Circle (GC) theorem, a pivotal method for estimating eigenvalue inclusions for a square matrix. The GC theorem offers a straightforward yet powerful technique to approximate the location of eigenvalues by defining circles in the complex plane, centered at the matrix&#x2019;s diagonal entries with radii determined by the sum of the absolute values of the off-diagonal entries in each row. This approach not only simplifies the understanding of a matrix&#x2019;s spectral properties but also requires fewer computational operations compared to other eigenvalue estimation methods (<xref ref-type="bibr" rid="ref39">Varga, 2010</xref>). As a result, the GC theorem has found extensive applications across various fields, such as stability analysis of nonlinear systems (<xref ref-type="bibr" rid="ref29">Ortega Bejarano et al., 2018</xref>), power grids (<xref ref-type="bibr" rid="ref41">Xie et al., 2022</xref>), and graph sampling (<xref ref-type="bibr" rid="ref40">Wang et al., 2020</xref>), demonstrating its versatility and effectiveness. Furthermore, subsequent advancements have refined the theorem, enhancing the precision of the eigenvalue inclusions and bringing them closer to the actual eigenvalues of a matrix. This evolution underscores the theorem&#x2019;s significant impact on the mathematical and engineering disciplines, offering a reliable and efficient tool for analyzing and interpreting the eigenvalues of square matrices.</p>
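<p>As a concrete illustration of the theorem (a sketch, not the implementation used in this study), the disc centers and radii of a small arbitrary matrix can be computed and checked against its eigenvalues:</p>

```python
import numpy as np

def gershgorin_discs(M):
    """Return (centers, radii) of the Gershgorin discs of a square matrix M.

    Disc i is centered at M[i, i] with radius equal to the sum of the
    absolute values of the off-diagonal entries in row i.
    """
    M = np.asarray(M, dtype=float)
    centers = np.diag(M)
    radii = np.abs(M).sum(axis=1) - np.abs(centers)
    return centers, radii

# Arbitrary example matrix (not from the paper's datasets).
M = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 5.0]])
centers, radii = gershgorin_discs(M)

# The GC theorem guarantees every eigenvalue lies in the union of the discs.
eigvals = np.linalg.eigvals(M)
inside = [any(abs(ev - c) <= r for c, r in zip(centers, radii))
          for ev in eigvals]
```

<p>Because only row sums are needed, the disc parameters cost O(N&#x00B2;) operations for an N&#x00D7;N matrix, compared with the substantially higher cost of a full eigendecomposition.</p>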
<p>This study introduces a new, computationally inexpensive feature extraction approach for time series in biomedical signal processing. In this approach, the GC theorem is used to extract features from a modified Weighted Laplacian (mWL) matrix. <xref ref-type="fig" rid="fig1">Figure 1</xref> shows a block diagram of the GCFE approach. The outline of this paper is as follows: Section 2 explains the proposed approach, which is divided into four subsections: signal pre-processing, signal-to-visibility-graph transformation, mWL matrix formation, and the GCFE and classification model. Section 3 presents a detailed overview of the datasets utilized in this study. The GCFE method results and discussion are described in Section 4. Finally, Section 5 articulates the conclusion of the proposed approach.</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>Overall representation of the GCFE framework: from raw signal recording to feature classification.</p>
</caption>
<graphic xlink:href="fninf-18-1395916-g001.tif"/>
</fig>
</sec>
<sec sec-type="methods" id="sec2">
<label>2</label>
<title>Methodology</title>
<p>The proposed approach can be distilled into four fundamental steps: preprocessing, transforming the signal into a visibility graph, forming the mWL matrix, and GCFE with feature classification.</p>
<sec id="sec3">
<label>2.1</label>
<title>Preprocessing</title>
<p>In the preprocessing stage, each dataset undergoes normalization, where the data are scaled between 0 and 1, referenced to the minimum and maximum values of the raw recording. After signal normalization, the whole time series is segmented with <inline-formula>
<mml:math id="M1">
<mml:mi>N</mml:mi>
</mml:math>
</inline-formula> samples, each segment having a vector size of <inline-formula>
<mml:math id="M2">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x00D7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. Each user-defined segment is called an epoch. In this study, we chose an epoch size of <inline-formula>
<mml:math id="M3">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x00D7;</mml:mo>
<mml:mn>56</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> for Dataset 1 and <inline-formula>
<mml:math id="M4">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x00D7;</mml:mo>
<mml:mn>1024</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mspace width="thickmathspace"/>
</mml:mrow>
</mml:math>
</inline-formula> for Dataset 2. An example in <xref ref-type="fig" rid="fig2">Figure 2</xref> shows the complete implementation of GCFE for a random time series with five samples, using WVG as the graph transformation method. The random time series (<italic>Q</italic>) in <xref ref-type="fig" rid="fig2">Figure 2A</xref> represents the normalized values, which range between 0 and 1, i.e., <italic>Q</italic>&#x2009;=&#x2009;[0.6, 0.4, 0.1, 0.5, 0.7].</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Practical implementation of GCFE on a random time series with five samples using WVG. <bold>(A)</bold> Shows a random time series with values over discrete time points. <bold>(B)</bold> Depicts the corresponding Weighted Visibility Graph (WVG) representation with weighted edges <inline-formula>
<mml:math id="M5">
<mml:mrow>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. <bold>(C)</bold> Illustrates the transition from a weighted adjacency matrix <inline-formula>
<mml:math id="M6">
<mml:mrow>
<mml:mspace width="0.25em"/>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, to an unweighted adjacency matrix <inline-formula>
<mml:math id="M7">
<mml:mrow>
<mml:mspace width="0.25em"/>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, and a degree matrix <inline-formula>
<mml:math id="M8">
<mml:mrow>
<mml:mspace width="0.25em"/>
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. <bold>(D)</bold> Presents a modified weighted Laplacian matrix <inline-formula>
<mml:math id="M9">
<mml:mrow>
<mml:mspace width="0.25em"/>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> derived from the graph. <bold>(E)</bold> Displays the two feature vectors extracted by the GC theorem.</p>
</caption>
<graphic xlink:href="fninf-18-1395916-g002.tif"/>
</fig>
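<p>A minimal sketch of this preprocessing stage (illustrative code, not the authors' implementation; dropping trailing samples that do not fill a complete epoch is an assumed convention):</p>

```python
import numpy as np

def preprocess(signal, epoch_size):
    """Min-max normalize a 1-D recording to [0, 1] using the raw recording's
    minimum and maximum, then segment it into non-overlapping
    (1 x epoch_size) epochs. Trailing samples that do not fill a complete
    epoch are dropped (an assumed convention)."""
    x = np.asarray(signal, dtype=float)
    x = (x - x.min()) / (x.max() - x.min())   # scale to [0, 1]
    n_epochs = len(x) // epoch_size
    return x[:n_epochs * epoch_size].reshape(n_epochs, epoch_size)

# Example: a 230-sample recording segmented into (1 x 56) epochs,
# matching the epoch size used for Dataset 1.
epochs = preprocess(np.sin(np.linspace(0.0, 10.0, 230)), 56)
```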
</sec>
<sec id="sec4">
<label>2.2</label>
<title>Signal to visibility graph</title>
<p>The next stage is the transformation of each epoch into a graph to expose the underlying nonlinear properties of the time series. Two different graph formation methods are utilized to evaluate the performance of the proposed approach across various graph types. Alternative visibility graph transformation techniques beyond WVG and WDPVG can also be integrated into this approach. Any visibility graph consists of a number of nodes and edges, where the nodes represent the data points of the time series, and the edges represent the distance between any two linked nodes. In a WVG, two nodes are connected with a weighted edge (denoted as <inline-formula>
<mml:math id="M10">
<mml:mrow>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>) only if the &#x201C;visibility&#x201D; between them satisfies <xref ref-type="disp-formula" rid="EQ1">Equation (1)</xref>.</p>
<disp-formula id="EQ1">
<label>(1)</label>
<mml:math id="M11">
<mml:mrow>
<mml:mi>Q</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x003C;</mml:mo>
<mml:mi>Q</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>Q</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>Q</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2217;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where, <inline-formula>
<mml:math id="M12">
<mml:mrow>
<mml:mi>Q</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M13">
<mml:mrow>
<mml:mi>Q</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula>
<mml:math id="M14">
<mml:mrow>
<mml:mi>Q</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> represents the datapoints of a time series with its timestamps <inline-formula>
<mml:math id="M15">
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M16">
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula>
<mml:math id="M17">
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, respectively. <xref ref-type="disp-formula" rid="EQ1">Equation (1)</xref> must be satisfied for every timestamp <inline-formula>
<mml:math id="M18">
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> lying between <inline-formula>
<mml:math id="M19">
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M20">
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, i.e., <inline-formula>
<mml:math id="M21">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x003C;</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>&#x003C;</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. Then, the weight for each edge is calculated based on <xref ref-type="disp-formula" rid="EQ2">Equation (2)</xref> (<xref ref-type="bibr" rid="ref44">Zheng et al., 2021</xref>).</p>
<disp-formula id="EQ2">
<label>(2)</label>
<mml:math id="M22">
<mml:mrow>
<mml:mi>W</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>t</mml:mi>
<mml:mspace width="thickmathspace"/>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>Q</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>Q</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>|</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>8</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where, <inline-formula>
<mml:math id="M23">
<mml:mrow>
<mml:mi>Q</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M24">
<mml:mrow>
<mml:mi>Q</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> are two nodes, <inline-formula>
<mml:math id="M25">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M26">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> are the time instants of nodes <italic>i</italic> and <italic>j</italic>. <xref ref-type="fig" rid="fig2">Figure 2B</xref> shows the conversion of the random time series into a WVG with weighted node connections <inline-formula>
<mml:math id="M27">
<mml:mrow>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. In this example, since node-4 and node-5 are visible from node-1, there are weighted links between node-1 and node-4 (with <inline-formula>
<mml:math id="M28">
<mml:mrow>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mn>14</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>) and node-1 and node-5 (with <inline-formula>
<mml:math id="M29">
<mml:mrow>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mn>15</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>). In contrast, node-3 and node-5 are not connected because node-5 is not visible from node-3.</p>
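<p>The WVG construction described above can be sketched as follows (illustrative code, not the authors' implementation):</p>

```python
import numpy as np

def weighted_visibility_graph(Q):
    """Build the weighted adjacency matrix of a WVG.

    Nodes x < y are linked when every intermediate node z satisfies the
    visibility criterion of Equation (1); the edge weight follows
    Equation (2): |slope between the two data points| + 1e-8.
    """
    Q = np.asarray(Q, dtype=float)
    n = len(Q)
    W = np.zeros((n, n))
    for x in range(n):
        for y in range(x + 1, n):
            # Equation (1): every point between x and y must fall strictly
            # below the straight line joining (x, Q[x]) and (y, Q[y]).
            visible = all(
                Q[z] < Q[y] + (Q[x] - Q[y]) * (y - z) / (y - x)
                for z in range(x + 1, y)
            )
            if visible:
                W[x, y] = W[y, x] = abs((Q[x] - Q[y]) / (x - y)) + 1e-8
    return W

# Five-sample example from Figure 2.
W = weighted_visibility_graph([0.6, 0.4, 0.1, 0.5, 0.7])
```

<p>On this series, node-1 is linked to node-4 and node-5, while node-3 and node-5 remain unconnected, matching the example above.</p>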
<p>Similar to the WVG, the WDPVG is generated by combining two distinct visibility graphs: the WVG and the Weighted Reflective Perspective Visibility Graph (WRPVG). To form the WDPVG, we first implement the WVG based on <xref ref-type="disp-formula" rid="EQ2">Equation (2)</xref>. Subsequently, the time-series signal is inverted (reflected), after which the nodes are connected again by <xref ref-type="disp-formula" rid="EQ2">Equation (2)</xref>. An illustrative representation of the WDPVG can be observed in <xref ref-type="fig" rid="fig1">Figure 1</xref>.</p>
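<p>A compact sketch of the dual-perspective combination step (taking the reflection to be the negated series is an assumption made for illustration; the visibility test is the same criterion as Equation 1):</p>

```python
import numpy as np

def visibility_edges(Q):
    """Return the set of node pairs (x, y) that satisfy Equation (1)."""
    Q = np.asarray(Q, dtype=float)
    return {
        (x, y)
        for x in range(len(Q)) for y in range(x + 1, len(Q))
        if all(Q[z] < Q[y] + (Q[x] - Q[y]) * (y - z) / (y - x)
               for z in range(x + 1, y))
    }

def wdpvg_edges(Q):
    """WDPVG edge set: union of the visibility edges of the series and of
    its reflection (assumed here to be the negated series)."""
    Q = np.asarray(Q, dtype=float)
    return visibility_edges(Q) | visibility_edges(-Q)

edges = wdpvg_edges([0.6, 0.4, 0.1, 0.5, 0.7])
```

<p>The reflected perspective can add links the original WVG misses: node-3 and node-5, which cannot see each other in the original series, become connected through the reflection.</p>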
</sec>
<sec id="sec5">
<label>2.3</label>
<title>Modified weighted Laplacian matrix</title>
<p>To operate on graph networks, it is often necessary to represent the graphs in matrix form. Popular representations of graph networks are the Weighted Adjacency (WA) matrix, the Unweighted Adjacency (UA) matrix, and the Laplacian matrix. This approach introduces a unique modified Weighted Laplacian (mWL) matrix, which is strictly diagonally dominant and consequently inherits Positive Semi-definite (PSD) properties. To generate the mWL matrix, first, the WA (represented by <inline-formula>
<mml:math id="M30">
<mml:mrow>
<mml:mspace width="thickmathspace"/>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> in <xref ref-type="fig" rid="fig2">Figure 2C</xref>) and UA (represented by <inline-formula>
<mml:math id="M31">
<mml:mrow>
<mml:mspace width="thickmathspace"/>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mspace width="thickmathspace"/>
</mml:mrow>
</mml:math>
</inline-formula> in <xref ref-type="fig" rid="fig2">Figure 2C</xref>) matrices are constructed from each WVG or WDPVG. The size of each WA and UA matrix depends upon the number of nodes, which is equivalent to the number of data points in each epoch (N). For instance, in <xref ref-type="fig" rid="fig2">Figure 2B</xref>, the WVG has five nodes, as there are five samples in the time series <italic>Q</italic>. Note that the WA and UA are square matrices with the size of <inline-formula>
<mml:math id="M32">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x00D7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. Both WA and UA matrices are generated according to <xref ref-type="disp-formula" rid="EQ3 EQ4">Equations (3, 4)</xref>, respectively.</p>
<disp-formula id="EQ3">
<label>(3)</label>
<mml:math id="M33">
<mml:mrow>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mspace width="0.25em"/>
<mml:mtext>if there is an edge</mml:mtext>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mspace width="0.25em"/>
<mml:mtext>otherwise</mml:mtext>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ4">
<label>(4)</label>
<mml:math id="M34">
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mspace width="0.25em"/>
<mml:mtext>if there is an edge</mml:mtext>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mspace width="0.25em"/>
<mml:mtext>otherwise</mml:mtext>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>In the WA matrix, the elements of <inline-formula>
<mml:math id="M35">
<mml:mrow>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are set to edge weights <inline-formula>
<mml:math id="M36">
<mml:mrow>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, if there is an edge between node <italic>i</italic> and <italic>j</italic>; otherwise, the elements are set to 0. Likewise, for the UA matrix, the elements of <inline-formula>
<mml:math id="M37">
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are assigned a value of &#x201C;1&#x201D; if there is an edge between node <italic>i</italic> and <italic>j</italic>, and &#x201C;0&#x201D; otherwise. Afterward, the Degree matrix <inline-formula>
<mml:math id="M38">
<mml:mrow>
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is calculated from the UA matrix as per <xref ref-type="disp-formula" rid="EQ5">Equation (5)</xref>. In the Degree matrix, each diagonal entry is the sum of the corresponding row of the UA matrix. In <xref ref-type="fig" rid="fig2">Figure 2C</xref>, an example is presented for <inline-formula>
<mml:math id="M39">
<mml:mrow>
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M40">
<mml:mrow>
<mml:mspace width="thickmathspace"/>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M41">
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> matrices (5 <inline-formula>
<mml:math id="M42">
<mml:mo>&#x00D7;</mml:mo>
</mml:math>
</inline-formula>5).</p>
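Equations (3)&#x2013;(5) translate directly into matrix constructions. The sketch below assumes the graph's edge set is given as a dictionary mapping node pairs to weights (a hypothetical intermediate representation, not prescribed by the paper):

```python
import numpy as np

def adjacency_matrices(edges, n):
    """Build the WA (A_ij, Equation 3) and UA (S_ij, Equation 4)
    matrices from an {(i, j): weight} edge dictionary.  Both are
    symmetric (N x N) matrices, zero where no edge exists."""
    A = np.zeros((n, n))
    S = np.zeros((n, n))
    for (i, j), w in edges.items():
        A[i, j] = A[j, i] = w
        S[i, j] = S[j, i] = 1.0
    return A, S

def degree_matrix(S):
    """Equation (5): D_ij is diagonal, holding the row sums of the
    unweighted adjacency matrix (i.e., each node's degree)."""
    return np.diag(S.sum(axis=1))
```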
<p>The mWL matrix <inline-formula>
<mml:math id="M43">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is computed by taking the difference between the Degree matrix (<inline-formula>
<mml:math id="M44">
<mml:mrow>
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mspace width="thickmathspace"/>
</mml:mrow>
</mml:math>
</inline-formula>and WA matrix <inline-formula>
<mml:math id="M45">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> according to <xref ref-type="disp-formula" rid="EQ6">Equation (6)</xref>. Note that the size of the mWL matrix is identical to that of the WA matrix, i.e., (<inline-formula>
<mml:math id="M46">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x00D7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>). For example, the mWL matrix <inline-formula>
<mml:math id="M47">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> shown in <xref ref-type="fig" rid="fig2">Figure 2D</xref> is strictly diagonally dominant because each diagonal element exceeds the sum of the absolute values of the off-diagonal entries in its row.</p>
<disp-formula id="EQ5">
<label>(5)</label>
<mml:math id="M48">
<mml:mrow>
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mspace width="0.25em"/>
<mml:mtext>if</mml:mtext>
<mml:mspace width="thickmathspace"/>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mspace width="0.25em"/>
<mml:mtext>otherwise</mml:mtext>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ6">
<label>(6)</label>
<mml:math id="M49">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
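Equation (6) reduces to a single matrix subtraction. A minimal sketch, together with a check of the strict diagonal dominance claimed above (an assumption here is that every edge weight is below 1, as in the Figure 2 example, so that each degree exceeds its row's weighted sum):

```python
import numpy as np

def modified_weighted_laplacian(D, A):
    """Equation (6): L_ij = D_ij - A_ij, i.e., the degree matrix of the
    unweighted graph minus the weighted adjacency matrix."""
    return D - A

def is_strictly_diagonally_dominant(L):
    """Verify the property claimed for the mWL matrix: each |diagonal|
    strictly exceeds the sum of |off-diagonal| entries in its row."""
    diag = np.abs(np.diag(L))
    off = np.abs(L).sum(axis=1) - diag
    return bool(np.all(diag > off))
```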
</sec>
<sec id="sec6">
<label>2.4</label>
<title>Gershgorin circle feature extraction</title>
<p>After computing the mWL matrix, the GC theorem is applied to extract features from each <inline-formula>
<mml:math id="M50">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mspace width="thickmathspace"/>
</mml:mrow>
</mml:math>
</inline-formula> matrix. The GC theorem states that all the eigenvalues of an (<inline-formula>
<mml:math id="M51">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x00D7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>) square matrix lie within the union of the Gershgorin disks (i.e., Gershgorin circles). Each Gershgorin disk is defined by a center point and a radius. The radius of each disk <inline-formula>
<mml:math id="M52">
<mml:mrow>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>L</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mspace width="0.25em"/>
<mml:mspace width="0.25em"/>
</mml:mrow>
</mml:math>
</inline-formula> is computed by summing the absolute values of the off-diagonal entries in the corresponding row of the (<inline-formula>
<mml:math id="M53">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x00D7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>) matrix as described in <xref ref-type="disp-formula" rid="EQ7">Equation (7)</xref>. The center of each disk <inline-formula>
<mml:math id="M54">
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the diagonal element of the corresponding row, as per <xref ref-type="disp-formula" rid="EQ8">Equation (8)</xref>,</p>
<disp-formula id="EQ7">
<label>(7)</label>
<mml:math id="M55">
<mml:mrow>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mfenced>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mo>\</mml:mo>
<mml:mfenced close="}" open="{">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:munder>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:mspace width="thickmathspace"/>
<mml:mfenced>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ8">
<label>(8)</label>
<mml:math id="M56">
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mspace width="0.25em"/>
<mml:mtext>where</mml:mtext>
<mml:mspace width="0.25em"/>
<mml:mspace width="0.25em"/>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>j</mml:mi>
<mml:mspace width="0.25em"/>
<mml:mspace width="0.25em"/>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where the sets of <inline-formula>
<mml:math id="M57">
<mml:mrow>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M58">
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> constitute the GCFE. The final output of each <inline-formula>
<mml:math id="M59">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mspace width="thickmathspace"/>
</mml:mrow>
</mml:math>
</inline-formula> matrix is arranged in vector form, with the set of GC radii first, followed by the GC centers. This leads to the transformation of the <inline-formula>
<mml:math id="M60">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mspace width="thickmathspace"/>
</mml:mrow>
</mml:math>
</inline-formula> matrix features, which are in (<inline-formula>
<mml:math id="M61">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x00D7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>) form, into ({<inline-formula>
<mml:math id="M62">
<mml:mrow>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> x <inline-formula>
<mml:math id="M63">
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>) or (2 <inline-formula>
<mml:math id="M64">
<mml:mrow>
<mml:mo>&#x00D7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>) vector form. For instance, the <italic>Q</italic> time series in <xref ref-type="fig" rid="fig2">Figure 2E</xref> yields 10 GC-extracted features in total, which were <inline-formula>
<mml:math id="M65">
<mml:mrow>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mn>0.258</mml:mn>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:mspace width="0.25em"/>
<mml:mspace width="0.25em"/>
<mml:mn>0.650</mml:mn>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:mn>0.700</mml:mn>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:mn>0.683</mml:mn>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:mn>0.325</mml:mn>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M66">
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mn>3</mml:mn>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:mn>4</mml:mn>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:mn>2</mml:mn>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:mn>4</mml:mn>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:mn>3</mml:mn>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, respectively.</p>
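The radius and center computations of Equations (7) and (8) can be sketched as follows, returning the (2 &#x00D7; N) GCFE vector with the radii first and the centers second, matching the ordering described above:

```python
import numpy as np

def gershgorin_features(L):
    """GCFE per Equations (7) and (8): the row-wise Gershgorin radii
    (sum of absolute off-diagonal entries) followed by the centers
    (diagonal entries), concatenated into a length-2N feature vector."""
    c = np.diag(L).astype(float)
    r = np.abs(L).sum(axis=1) - np.abs(c)   # off-diagonal absolute row sums
    return np.concatenate([r, c])
```

Because no eigendecomposition is required, the cost is a single pass over the matrix entries, O(N&#x00B2;) per epoch.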
</sec>
<sec id="sec7">
<label>2.5</label>
<title>Feature classification</title>
<p>With the ongoing advancements in machine learning and deep learning, numerous state-of-the-art algorithms have been developed for classifying features. Popular algorithms include, but are not limited to, Support Vector Machines (SVM), Decision Trees, and Convolutional Neural Networks (CNN). For this study, a 1D-CNN model was selected to classify the extracted features. However, the proposed method is not limited to CNN models for feature classification. Other classifiers, such as SVM, Decision Trees, and Artificial Neural Networks (ANN), can also be employed; however, these methods typically require more computational time as the length of the input time series or the size of the extracted feature vector increases.</p>
<p><xref ref-type="table" rid="tab1">Table 1</xref> summarizes the architecture of the 1D-CNN model. Dataset &#x2013; 1 and Dataset &#x2013; 2 employ the same architecture, distinguished only by the number of convolution and pooling layers. To classify the features, the GCFE sets are fed into the 1D-CNN model, which is then trained against the target labels. The size of the initial Input Layer depends upon the number of GCFE sets, i.e., batch size ({<inline-formula>
<mml:math id="M67">
<mml:mrow>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>L</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> x <inline-formula>
<mml:math id="M68">
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>L</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>), and channels. The Input Layer is followed by repeated Convolution Layer + Pooling Layer blocks: two for Dataset &#x2013; 1 and six for Dataset &#x2013; 2. Each Convolution Layer uses 32 filters with a kernel size of 3 and a ReLU activation function, and each Pooling Layer applies Max Pooling.</p>
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>1D-CNN architecture and each layer&#x2019;s configurations.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">No. of layers</th>
<th align="left" valign="top">Layer name</th>
<th align="left" valign="top">Layer configuration</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle">1</td>
<td align="left" valign="middle">Input layer</td>
<td align="left" valign="middle">(Batch size, rows, channels)</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="2">2 or 6</td>
<td align="left" valign="middle">Convolution layer</td>
<td align="left" valign="middle">Conv1D &#x2013; Kernel&#x2009;=&#x2009;3, Padding&#x2009;=&#x2009;&#x201C;same,&#x201D; Activation Function&#x2009;=&#x2009;&#x201C;ReLU,&#x201D; No. Filters&#x2009;=&#x2009;32</td>
</tr>
<tr>
<td align="left" valign="middle">Pooling layer</td>
<td align="left" valign="middle">MaxPooling1D</td>
</tr>
<tr>
<td align="left" valign="middle">1</td>
<td align="left" valign="middle">Flatten layer</td>
<td align="left" valign="middle">(Batch, Flatten last Pooling Layer input size)</td>
</tr>
<tr>
<td align="left" valign="middle">1</td>
<td align="left" valign="middle">Dense Layer 1</td>
<td align="left" valign="middle">(Batch Size, 100), Activation Function&#x2009;=&#x2009;&#x201C;ReLU&#x201D;</td>
</tr>
<tr>
<td align="left" valign="middle">1</td>
<td align="left" valign="middle">Dropout Layer</td>
<td align="left" valign="middle">0.1</td>
</tr>
<tr>
<td align="left" valign="middle">1</td>
<td align="left" valign="middle">Dense Layer 2</td>
<td align="left" valign="middle">(Batch Size, 100), Activation Function&#x2009;=&#x2009;&#x201C;ReLU&#x201D;</td>
</tr>
<tr>
<td align="left" valign="middle">1</td>
<td align="left" valign="middle">Output Layer</td>
<td align="left" valign="middle">(Batch Size, Classes), Activation Function&#x2009;=&#x2009;&#x201C;SoftMax&#x201D;</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>After the final Pooling Layer, a Flatten Layer transforms the feature maps (filters) into a 1D vector. Next, the Fully Connected Network (FCN) was built by stacking two Dense Layers with a Dropout Layer between them. Both Dense Layers consist of 100 artificial neurons with a ReLU activation function. To prevent overfitting, a dropout rate of 10% was chosen. The final layer of the FCN connects to the Output Layer, whose size varies with the number of dataset classes. For the Output Layer, the SoftMax activation function was used. Note that the 1D-CNN model uses &#x201C;SparseCategoricalCrossentropy&#x201D; as its loss function and &#x201C;Adam&#x201D; as the optimizer. A detailed mathematical exploration of CNNs can be found in <xref ref-type="bibr" rid="ref21">Krizhevsky et al. (2012)</xref>.</p>
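To illustrate how the feature-map size evolves through these blocks: with &#x201C;same&#x201D; padding the convolutions preserve the input length, so only the pooling layers shrink it. The sketch below assumes a pooling window of 2 (Table 1 does not state the window size; 2 is a common default) and shows how the Flatten Layer's input width follows from the epoch length and the number of blocks.

```python
def pooled_length(n_features, n_blocks, pool=2):
    """Feature-map length after the Conv1D + MaxPooling1D blocks of
    Table 1.  Conv1D with 'same' padding preserves the length, so only
    the pooling layers shrink it (floor division by the pool size)."""
    length = n_features
    for _ in range(n_blocks):
        length //= pool
    return length

def flatten_size(n_features, n_blocks, n_filters=32):
    """Input width of the Flatten Layer feeding the dense head."""
    return pooled_length(n_features, n_blocks) * n_filters

# Dataset - 1: 56-sample epochs give 2 x 56 = 112 GCFE values, 2 blocks
# Dataset - 2: 1,024-sample epochs give 2 x 1,024 = 2,048 values, 6 blocks
```

Under these assumptions the longer Dataset &#x2013; 2 vectors justify the deeper stack: six pooling steps reduce 2,048 features to a 32-wide map, while two steps suffice for the 112-feature Dataset &#x2013; 1 input.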
</sec>
</sec>
<sec id="sec8">
<label>3</label>
<title>Datasets</title>
<p>Two publicly available datasets are utilized to evaluate the performance of the proposed methodology. The selection of these datasets is strategic, aimed at validating the proposed method on distinct types of signals: simulator-generated action potentials for intracellular recordings and non-invasive EEG recordings, which typically feature a larger number of samples in each epoch. <xref ref-type="table" rid="tab2">Table 2</xref> details the total number of epochs for both datasets, facilitating a comprehensive assessment of the method&#x2019;s applicability to different biomedical signals.</p>
<table-wrap position="float" id="tab2">
<label>Table 2</label>
<caption>
<p>Epoch distribution across datasets and Signal-to-Noise Ratios (SNRs) for different classes and sets.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Dataset</th>
<th align="center" valign="top" colspan="4">No. of epochs</th>
</tr>
<tr>
<th align="left" valign="top">Dataset &#x2013; 1</th>
<th align="center" valign="top">N-class_1</th>
<th align="center" valign="top">N-class_2</th>
<th align="center" valign="top">N-class_3</th>
<th align="center" valign="top">N- class_4</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle">SNR0.5</td>
<td align="center" valign="middle">6,423</td>
<td align="center" valign="middle">6,595</td>
<td align="center" valign="middle">6,597</td>
<td align="center" valign="middle">6,000</td>
</tr>
<tr>
<td align="left" valign="middle">SNR1.25</td>
<td align="center" valign="middle">6,394</td>
<td align="center" valign="middle">6,587</td>
<td align="center" valign="middle">6,597</td>
<td align="center" valign="middle">6,000</td>
</tr>
<tr>
<td align="left" valign="middle">SNR2.0</td>
<td align="center" valign="middle">5,553</td>
<td align="center" valign="middle">6,633</td>
<td align="center" valign="middle">6,597</td>
<td align="center" valign="middle">6,000</td>
</tr>
<tr>
<td align="left" valign="middle">Dataset &#x2013; 2</td>
<td colspan="4"/>
</tr>
<tr>
<td align="left" valign="middle">Set A</td>
<td align="center" valign="middle" colspan="4">400</td>
</tr>
<tr>
<td align="left" valign="middle">Set B</td>
<td align="center" valign="middle" colspan="4">400</td>
</tr>
<tr>
<td align="left" valign="middle">Set C</td>
<td align="center" valign="middle" colspan="4">400</td>
</tr>
<tr>
<td align="left" valign="middle">Set D</td>
<td align="center" valign="middle" colspan="4">400</td>
</tr>
<tr>
<td align="left" valign="middle">Set E</td>
<td align="center" valign="middle" colspan="4">400</td>
</tr>
</tbody>
</table>
</table-wrap>
<sec id="sec9">
<label>3.1</label>
<title>Dataset &#x2013; 1</title>
<p>Dataset &#x2013; 1 consists of synthetic, simulated action potentials with additive Gaussian noise (<xref ref-type="bibr" rid="ref6">Bernert and Yvert, 2019</xref>). The action potentials were generated based on <xref ref-type="disp-formula" rid="EQ10">Equation (9)</xref>, and the details can be found elsewhere (<xref ref-type="bibr" rid="ref1">Adamos et al., 2008</xref>).</p>
<disp-formula id="EQ10">
<label>(9)</label>
<mml:math id="M69">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>A</mml:mi>
<mml:mi>cos</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x03C0;</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>&#x03C4;</mml:mi>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x03C4;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mn>2.3548</mml:mn>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x03C4;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>The selection of parameters for generating action potentials (<inline-formula>
<mml:math id="M70">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x03C4;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x03C4;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x03C4;</mml:mi>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>) are explained in <xref ref-type="bibr" rid="ref6">Bernert and Yvert (2019)</xref>. The dataset covers seven different Signal-to-Noise Ratios (SNRs): 0.5, 0.75, 1.0, 1.25, 1.50, 1.75, and 2.0, where each SNR value reflects the level of Gaussian noise added to the signal. Each set contains three action potential classes with different shapes and properties (N-class_1, N-class_2, and N-class_3), plus signal noise labeled N-class_4. For the experiment, only three SNR values were chosen for testing: 0.5, 1.25, and 2.0. The test signal was generated with a sampling frequency of 20&#x2009;kHz and a mean firing rate of 3.3&#x2009;Hz. Each SNR set included ten recordings of 200&#x2009;s. <xref ref-type="table" rid="tab2">Table 2</xref> shows the number of epochs for each class per SNR set. The size of each epoch is set to 56 samples. The preprocessed, segmented datasets were used (<xref ref-type="bibr" rid="ref31">Patel and Yildirim, 2023</xref>).</p>
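Equation (9) can be sketched as a small generator for one 56-sample epoch. The parameter values below are illustrative assumptions, not those used by Bernert and Yvert (2019):

```python
import numpy as np

def action_potential(t, A, tau1, tau2, tau_ph):
    """Synthetic action potential template of Equation (9): a cosine of
    period tau1 (phase-shifted by tau_ph) under a Gaussian envelope of
    width tau2 (2.3548 is the FWHM-to-sigma conversion factor)."""
    return (A * np.cos(2.0 * np.pi * (t - tau_ph) / tau1)
              * np.exp(-((2.3548 * t / tau2) ** 2)))

# one 56-sample epoch at 20 kHz, centered on the spike peak;
# A, tau1, tau2, tau_ph below are hypothetical illustration values
fs = 20_000.0
t = (np.arange(56) - 28) / fs
spike = action_potential(t, A=1.0, tau1=2e-3, tau2=1.5e-3, tau_ph=0.0)
```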
</sec>
<sec id="sec10">
<label>3.2</label>
<title>Dataset &#x2013; 2</title>
<p>Dataset &#x2013; 2 is a publicly available epilepsy EEG dataset recorded by the Department of Epileptology at Bonn University, Germany (<xref ref-type="bibr" rid="ref3">Andrzejak et al., 2001</xref>). It contains five different sets of recordings, labeled E &#x2013; Set_A, E &#x2013; Set_B, E &#x2013; Set_C, E &#x2013; Set_D, and E &#x2013; Set_E. Each set consists of 100 EEG channels sampled at 173.61&#x2009;Hz. Each recording lasts 23.6&#x2009;s, for a total of 4,096 data points, and each channel was segmented into 4 epochs of 1,024 samples, as shown in <xref ref-type="table" rid="tab2">Table 2</xref>. The EEG signals were bandpass filtered from 0.53&#x2009;Hz to 85&#x2009;Hz, and each set was treated as an individual class. Set A contains scalp EEG recordings from healthy participants with eyes open; Set B, scalp EEG from healthy participants with eyes closed; Set C, interictal (between-seizure) intracranial EEG recordings from the hippocampal formation contralateral to the epileptogenic zone in mesial temporal lobe epilepsy patients; Set D, interictal intracranial EEG recordings from within the epileptogenic zone in the same patient group; and Set E, ictal (seizure) intracranial EEG activity from mesial temporal lobe epilepsy patients.</p>
</sec>
</sec>
<sec sec-type="results" id="sec11">
<label>4</label>
<title>Results and discussion</title>
<p>In this paper, the GCFE method is compared with seven other feature extraction techniques. As shown in <xref ref-type="table" rid="tab1">Table 1</xref>, all feature classification experiments utilized the 1D-CNN model, with the batch size set to 32. All tests were conducted on a university supercomputer configured with 24 cores and 24&#x2009;GB of memory per core. This consistent experimental setup ensures a valid assessment of the GCFE approach relative to the other methods. The performance and accuracy metrics for each experiment were determined using cross-validation.</p>
<p>Dataset &#x2013; 1 was split into three groups: training, validation, and testing, with proportions of 70, 15, and 15%, respectively. Dataset &#x2013; 2 was split into only two groups, training (70%) and testing (30%); because of its limited number of epochs (instances), the testing set also served as the validation set. These split ratios were guided by methodologies commonly adopted in the research papers used as benchmarks for comparison. Finally, each experiment was trained for 30 iterations.</p>
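A minimal sketch of the index splits described above; the shuffling strategy and random seed are assumptions, as the source does not specify them:

```python
import numpy as np

def split_indices(n, train=0.70, val=0.15, seed=0):
    """Shuffled train/validation/test index split.  The defaults mirror
    the 70/15/15 split used for Dataset - 1; for Dataset - 2, a 70/30
    train/test split is obtained with val=0.0."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(n * train)
    n_val = int(n * val)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])
```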
<p><xref ref-type="fig" rid="fig3">Figures 3A</xref>,<xref ref-type="fig" rid="fig3">B</xref> show box plot distributions of the reduced features for the dataset classes, focusing on the GC radii and GC center values, respectively. These figures also report Wilcoxon rank sum tests comparing N-Class_4 from Dataset &#x2013; 1 and E-Set_E from Dataset &#x2013; 2 against the remaining classes within their respective dataset groups. The distributions were generated using the WVG method, incorporating 200 epochs randomly chosen from each class in Dataset &#x2013; 1 (SNR &#x2013; 0.5, 1.5, and 2.0) and from Dataset &#x2013; 2.</p>
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>Box plots of the GCFE distributions using WVG for each dataset class, with statistical significance determined by the Wilcoxon rank sum test. <bold>(A)</bold> Distribution of GC radii &#x2013; sum of weighted edges of <inline-formula>
<mml:math id="M71">
<mml:mrow>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> (row-wise) vs. dataset classes; <bold>(B)</bold> Distribution of GC center &#x2013; <inline-formula>
<mml:math id="M72">
<mml:mrow>
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> vs. Dataset Classes.</p>
</caption>
<graphic xlink:href="fninf-18-1395916-g003.tif"/>
</fig>
<p>From <xref ref-type="fig" rid="fig3">Figure 3A</xref>, it can be seen that the GC radii values differ among classes. Specifically, N-class_3, characterized by a higher action potential amplitude, has significantly (<italic>p</italic>&#x2009;&#x003C;&#x2009;0.05) higher radii values than the action potentials of N-class_1 and N-class_2. Additionally, the interquartile range (IQR) of N-class_4 is considerably smaller than that of the other classes. This can be attributed to the fact that N-class_4 represents noise, which has a lower amplitude than the other action potential classes. Consequently, the edges of the graph corresponding to N-class_4 are smaller, resulting in a smaller sum of weighted edges (<inline-formula>
<mml:math id="M73">
<mml:mrow>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>). The median <inline-formula>
<mml:math id="M74">
<mml:mrow>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> values for N-class_1, N-class_2, N-class_3, and N-class_4 were 0.12, 0.12, 0.16, and 0.08, respectively. A parallel pattern is noted in the GC centers for the Dataset-1 classes, as shown in <xref ref-type="fig" rid="fig3">Figure 3B</xref>, where N-Class_3 possesses the highest median <inline-formula>
<mml:math id="M75">
<mml:mrow>
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> value of 7, significantly distinct from the other classes (<italic>p</italic>&#x2009;&#x003C;&#x2009;0.05). This is because N-Class_3 represents the action potential with the highest amplitude. According to the visibility graph concept, a graph representing this class will have more edge connections with both near and far samples (nodes) of the signal, resulting in a higher degree <inline-formula>
<mml:math id="M76">
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>), that is, greater GC center values. The median GC center values for N-Class_1, N-Class_2, and N-Class_4 were 6, 6, and 5, respectively.</p>
<p>In Dataset-2, presented in <xref ref-type="fig" rid="fig3">Figure 3A</xref>, E-Set_B was characterized by increased, though not statistically significant (<italic>p</italic>&#x2009;&#x003E;&#x2009;0.05), radii values, with a median <inline-formula>
<mml:math id="M77">
<mml:mrow>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> value of 0.25, comparable to E-Set_E, which also had a median value of 0.25. The median GC radii values for E-Set_A, E-Set_C, and E-Set_D were 0.21, 0.14, and 0.15, respectively. Meanwhile, <xref ref-type="fig" rid="fig3">Figure 3B</xref> illustrates that, for Dataset-2, the GC centers for E-Set_D were distinct, although not statistically significant (<italic>p</italic>&#x2009;&#x003E;&#x2009;0.05), with a median value of 14. This pattern indicates that for non-stationary recordings with higher amplitude, both the GC radii and GC centers tend to exhibit higher median values. Conversely, for stationary or nearly stationary recordings with higher amplitude, the GC radii still display higher median values, but the GC centers tend to have lower median values. This observation follows standard weighted visibility graph theory, which differentiates the dynamic characteristics of recordings through their structural connectivity within the graph.</p>
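Under the Gershgorin circle theorem, each disc of a matrix has its center at the diagonal entry and a radius equal to the sum of the absolute off-diagonal entries in that row; applied to a graph's connectivity structure, the radii reduce to row-wise sums of edge weights (A_ij) and the centers to node degrees (D_ij), as discussed above. Below is a minimal sketch on a toy weighted graph; the adjacency values are illustrative and not drawn from either dataset, and the paper's exact graph construction (WVG/WDPVG) is not reproduced here.

```python
import numpy as np

def gershgorin_features(W):
    """Gershgorin-style features from a weighted adjacency matrix W:
    radii   = row-wise sums of edge weights (off-diagonal entries),
    centers = node degrees (count of incident edges)."""
    W = np.asarray(W, dtype=float)
    off = W - np.diag(np.diag(W))          # drop any self-loops
    radii = np.abs(off).sum(axis=1)        # one GC radius per node
    centers = (off != 0).sum(axis=1)       # one GC center (degree) per node
    return radii, centers

# Toy 4-node weighted graph
W = np.array([[0.0, 0.2, 0.0, 0.1],
              [0.2, 0.0, 0.3, 0.0],
              [0.0, 0.3, 0.0, 0.4],
              [0.1, 0.0, 0.4, 0.0]])
radii, centers = gershgorin_features(W)
print(radii)    # row sums: 0.3, 0.5, 0.7, 0.5
print(centers)  # every node has degree 2
```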
<p><xref ref-type="table" rid="tab3">Table 3</xref> presents eight different feature extraction studies, denoted F1 (<xref ref-type="bibr" rid="ref28">Mohammadpoory et al., 2023</xref>), F2 (<xref ref-type="bibr" rid="ref18">Javaid et al., 2022</xref>), F3 (<xref ref-type="bibr" rid="ref37">Supriya et al., 2016</xref>), F4 (<xref ref-type="bibr" rid="ref16">Hao et al., 2016</xref>), F5 (<xref ref-type="bibr" rid="ref8">Bose et al., 2020</xref>), F6 (<xref ref-type="bibr" rid="ref9">Cai et al., 2022</xref>), F7 (<xref ref-type="bibr" rid="ref2">Ahmadlou et al., 2010</xref>), and F8 (Proposed). It also lists the number of features extracted by each method per dataset; the features for each method were arranged in vector form. The F2 method extracted the most features, with 172 and 3,076 for Dataset &#x2013; 1 and Dataset &#x2013; 2, respectively. The GCFE (F8) extracted 112 and 2,048 features for Dataset &#x2013; 1 and Dataset &#x2013; 2, respectively. The F7 feature count for Dataset &#x2013; 1 was selected to match the F8 method, while for Dataset &#x2013; 2 a maximum of 800 features was selected.</p>
<table-wrap position="float" id="tab3">
<label>Table 3</label>
<caption>
<p>Feature extraction studies and their feature counts per dataset.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Study name</th>
<th align="left" valign="top">Features</th>
<th align="center" valign="top">No. of features for Dataset &#x2013; 1</th>
<th align="center" valign="top">No. of features for Dataset &#x2013; 2</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle">F1</td>
<td align="left" valign="middle">GIC, CPL, GE, LE, CC, AC</td>
<td align="center" valign="middle">61</td>
<td align="center" valign="middle">1,029</td>
</tr>
<tr>
<td align="left" valign="middle">F2</td>
<td align="left" valign="middle">CC, CPL, AC, WD, GE, LE, NBC</td>
<td align="center" valign="middle">172</td>
<td align="center" valign="middle">3,076</td>
</tr>
<tr>
<td align="left" valign="middle">F3</td>
<td align="left" valign="middle">Modularity, CC</td>
<td align="center" valign="middle">57</td>
<td align="center" valign="middle">1,025</td>
</tr>
<tr>
<td align="left" valign="middle">F4</td>
<td align="left" valign="middle">CPL, CC</td>
<td align="center" valign="middle">57</td>
<td align="center" valign="middle">1,025</td>
</tr>
<tr>
<td align="left" valign="middle">F5</td>
<td align="left" valign="middle">CC, GE, LE, Transitivity</td>
<td align="center" valign="middle">59</td>
<td align="center" valign="middle">1,027</td>
</tr>
<tr>
<td align="left" valign="middle">F6</td>
<td align="left" valign="middle">WD, CC</td>
<td align="center" valign="middle">112</td>
<td align="center" valign="middle">2,048</td>
</tr>
<tr>
<td align="left" valign="middle">F7</td>
<td align="left" valign="middle">PCA</td>
<td align="center" valign="middle">112</td>
<td align="center" valign="middle">800</td>
</tr>
<tr>
<td align="left" valign="middle">F8 (Proposed)</td>
<td align="left" valign="middle">GCFE</td>
<td align="center" valign="middle">112</td>
<td align="center" valign="middle">2,048</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="table" rid="tab4">Table 4</xref> reports the accuracy, sensitivity, and specificity of each feature extraction method across the two visibility graph types. Seven classification experiments were conducted using feature extraction methods F1 to F8. The F8 (proposed) method demonstrated superior performance over most of the other methods, F1 through F7. According to <xref ref-type="table" rid="tab4">Table 4</xref>, the F8 method outperformed the F2 method, which had the highest number of features. This outcome substantiates the assertion that increasing the number of features does not necessarily enhance classification performance; F8 achieved higher performance with fewer features, further illustrating the effectiveness of optimized feature extraction over mere quantity. Furthermore, average accuracy differences were computed to compare the proposed feature extraction method&#x2019;s performance against the others.</p>
<table-wrap position="float" id="tab4">
<label>Table 4</label>
<caption>
<p>Summary of performance metrics for feature extraction studies using weighted visibility graphs and weighted dual perspective visibility graphs. The metrics are arranged in rows, in the order of accuracy, sensitivity, and specificity.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Experiments</th>
<th align="center" valign="top">F1</th>
<th align="center" valign="top">F2</th>
<th align="center" valign="top">F3</th>
<th align="center" valign="top">F4</th>
<th align="center" valign="top">F5</th>
<th align="center" valign="top">F6</th>
<th align="center" valign="top">F7</th>
<th align="center" valign="top">F8</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle" colspan="9">
<italic>Weighted visibility graph</italic>
</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="3">SNR 0.5</td>
<td align="center" valign="middle">98.620</td>
<td align="center" valign="middle">99.661</td>
<td align="center" valign="middle">98.178</td>
<td align="center" valign="middle">98.412</td>
<td align="center" valign="middle">98.594</td>
<td align="center" valign="middle">99.687</td>
<td align="center" valign="middle">99.810</td>
<td align="center" valign="middle">99.578</td>
</tr>
<tr>
<td align="center" valign="middle">98.620</td>
<td align="center" valign="middle">99.661</td>
<td align="center" valign="middle">98.178</td>
<td align="center" valign="middle">98.412</td>
<td align="center" valign="middle">98.594</td>
<td align="center" valign="middle">99.687</td>
<td align="center" valign="middle">99.810</td>
<td align="center" valign="middle">99.578</td>
</tr>
<tr>
<td align="center" valign="middle">98.623</td>
<td align="center" valign="middle">99.662</td>
<td align="center" valign="middle">98.213</td>
<td align="center" valign="middle">98.412</td>
<td align="center" valign="middle">98.604</td>
<td align="center" valign="middle">99.688</td>
<td align="center" valign="middle">99.810</td>
<td align="center" valign="middle">99.579</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="3">SNR 1.25</td>
<td align="center" valign="middle">92.598</td>
<td align="center" valign="middle">98.227</td>
<td align="center" valign="middle">91.399</td>
<td align="center" valign="middle">91.920</td>
<td align="center" valign="middle">90.930</td>
<td align="center" valign="middle">97.367</td>
<td align="center" valign="middle">97.888</td>
<td align="center" valign="middle">97.888</td>
</tr>
<tr>
<td align="center" valign="middle">92.598</td>
<td align="center" valign="middle">98.227</td>
<td align="center" valign="middle">91.399</td>
<td align="center" valign="middle">91.920</td>
<td align="center" valign="middle">90.930</td>
<td align="center" valign="middle">97.367</td>
<td align="center" valign="middle">97.888</td>
<td align="center" valign="middle">97.888</td>
</tr>
<tr>
<td align="center" valign="middle">92.695</td>
<td align="center" valign="middle">98.234</td>
<td align="center" valign="middle">91.408</td>
<td align="center" valign="middle">91.947</td>
<td align="center" valign="middle">91.066</td>
<td align="center" valign="middle">97.460</td>
<td align="center" valign="middle">97.888</td>
<td align="center" valign="middle">97.888</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="3">SNR 2.0</td>
<td align="center" valign="middle">79.639</td>
<td align="center" valign="middle">89.429</td>
<td align="center" valign="middle">79.343</td>
<td align="center" valign="middle">80.473</td>
<td align="center" valign="middle">79.612</td>
<td align="center" valign="middle">88.784</td>
<td align="center" valign="middle">87.808</td>
<td align="center" valign="middle">89.454</td>
</tr>
<tr>
<td align="center" valign="middle">79.639</td>
<td align="center" valign="middle">89.429</td>
<td align="center" valign="middle">79.343</td>
<td align="center" valign="middle">80.473</td>
<td align="center" valign="middle">79.612</td>
<td align="center" valign="middle">88.784</td>
<td align="center" valign="middle">87.808</td>
<td align="center" valign="middle">89.454</td>
</tr>
<tr>
<td align="center" valign="middle">79.766</td>
<td align="center" valign="middle">89.397</td>
<td align="center" valign="middle">79.349</td>
<td align="center" valign="middle">80.351</td>
<td align="center" valign="middle">79.706</td>
<td align="center" valign="middle">88.770</td>
<td align="center" valign="middle">87.886</td>
<td align="center" valign="middle">89.539</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="3">Set A vs. E</td>
<td align="center" valign="middle">96.500</td>
<td align="center" valign="middle">96.500</td>
<td align="center" valign="middle">95.833</td>
<td align="center" valign="middle">97.500</td>
<td align="center" valign="middle">95.000</td>
<td align="center" valign="middle">97.083</td>
<td align="center" valign="middle">97.500</td>
<td align="center" valign="middle">98.333</td>
</tr>
<tr>
<td align="center" valign="middle">96.500</td>
<td align="center" valign="middle">96.500</td>
<td align="center" valign="middle">95.833</td>
<td align="center" valign="middle">97.500</td>
<td align="center" valign="middle">95.000</td>
<td align="center" valign="middle">97.083</td>
<td align="center" valign="middle">97.500</td>
<td align="center" valign="middle">98.333</td>
</tr>
<tr>
<td align="center" valign="middle">96.531</td>
<td align="center" valign="middle">96.342</td>
<td align="center" valign="middle">96.057</td>
<td align="center" valign="middle">97.559</td>
<td align="center" valign="middle">95.095</td>
<td align="center" valign="middle">97.118</td>
<td align="center" valign="middle">97.559</td>
<td align="center" valign="middle">98.391</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="3">Set B vs. E</td>
<td align="center" valign="middle">94.166</td>
<td align="center" valign="middle">94.583</td>
<td align="center" valign="middle">96.666</td>
<td align="center" valign="middle">96.666</td>
<td align="center" valign="middle">96.666</td>
<td align="center" valign="middle">97.916</td>
<td align="center" valign="middle">95.416</td>
<td align="center" valign="middle">97.916</td>
</tr>
<tr>
<td align="center" valign="middle">94.166</td>
<td align="center" valign="middle">94.583</td>
<td align="center" valign="middle">96.666</td>
<td align="center" valign="middle">96.666</td>
<td align="center" valign="middle">96.666</td>
<td align="center" valign="middle">97.916</td>
<td align="center" valign="middle">95.416</td>
<td align="center" valign="middle">97.916</td>
</tr>
<tr>
<td align="center" valign="middle">94.392</td>
<td align="center" valign="middle">95.151</td>
<td align="center" valign="middle">96.726</td>
<td align="center" valign="middle">96.795</td>
<td align="center" valign="middle">96.683</td>
<td align="center" valign="middle">97.950</td>
<td align="center" valign="middle">95.453</td>
<td align="center" valign="middle">98.006</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="3">Set C vs. E</td>
<td align="center" valign="middle">95.000</td>
<td align="center" valign="middle">97.916</td>
<td align="center" valign="middle">95.000</td>
<td align="center" valign="middle">96.250</td>
<td align="center" valign="middle">93.333</td>
<td align="center" valign="middle">98.750</td>
<td align="center" valign="middle">96.666</td>
<td align="center" valign="middle">97.916</td>
</tr>
<tr>
<td align="center" valign="middle">95.000</td>
<td align="center" valign="middle">97.916</td>
<td align="center" valign="middle">95.000</td>
<td align="center" valign="middle">96.250</td>
<td align="center" valign="middle">93.333</td>
<td align="center" valign="middle">98.750</td>
<td align="center" valign="middle">96.666</td>
<td align="center" valign="middle">97.916</td>
</tr>
<tr>
<td align="center" valign="middle">95.131</td>
<td align="center" valign="middle">97.950</td>
<td align="center" valign="middle">95.037</td>
<td align="center" valign="middle">96.342</td>
<td align="center" valign="middle">93.417</td>
<td align="center" valign="middle">98.754</td>
<td align="center" valign="middle">96.710</td>
<td align="center" valign="middle">97.950</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="3">Set D vs. E</td>
<td align="center" valign="middle">94.166</td>
<td align="center" valign="middle">96.250</td>
<td align="center" valign="middle">94.583</td>
<td align="center" valign="middle">91.666</td>
<td align="center" valign="middle">95.833</td>
<td align="center" valign="middle">96.666</td>
<td align="center" valign="middle">96.250</td>
<td align="center" valign="middle">98.333</td>
</tr>
<tr>
<td align="center" valign="middle">94.166</td>
<td align="center" valign="middle">96.250</td>
<td align="center" valign="middle">94.583</td>
<td align="center" valign="middle">91.666</td>
<td align="center" valign="middle">95.833</td>
<td align="center" valign="middle">96.666</td>
<td align="center" valign="middle">96.250</td>
<td align="center" valign="middle">98.333</td>
</tr>
<tr>
<td align="center" valign="middle">94.186</td>
<td align="center" valign="middle">96.286</td>
<td align="center" valign="middle">94.995</td>
<td align="center" valign="middle">91.692</td>
<td align="center" valign="middle">95.894</td>
<td align="center" valign="middle">96.726</td>
<td align="center" valign="middle">96.286</td>
<td align="center" valign="middle">98.333</td>
</tr>
<tr>
<td align="left" valign="middle" colspan="9">
<italic>Weighted dual perspective visibility graph</italic>
</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="3">SNR 0.5</td>
<td align="center" valign="middle">98.516</td>
<td align="center" valign="middle">99.609</td>
<td align="center" valign="middle">98.282</td>
<td align="center" valign="middle">98.230</td>
<td align="center" valign="middle">98.074</td>
<td align="center" valign="middle">99.661</td>
<td align="center" valign="middle">99.831</td>
<td align="center" valign="middle">99.493</td>
</tr>
<tr>
<td align="center" valign="middle">98.516</td>
<td align="center" valign="middle">99.609</td>
<td align="center" valign="middle">98.282</td>
<td align="center" valign="middle">98.230</td>
<td align="center" valign="middle">98.074</td>
<td align="center" valign="middle">99.661</td>
<td align="center" valign="middle">99.831</td>
<td align="center" valign="middle">99.493</td>
</tr>
<tr>
<td align="center" valign="middle">98.532</td>
<td align="center" valign="middle">99.611</td>
<td align="center" valign="middle">98.291</td>
<td align="center" valign="middle">98.238</td>
<td align="center" valign="middle">98.087</td>
<td align="center" valign="middle">99.662</td>
<td align="center" valign="middle">99.831</td>
<td align="center" valign="middle">99.498</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="3">SNR 1.25</td>
<td align="center" valign="middle">91.920</td>
<td align="center" valign="middle">98.123</td>
<td align="center" valign="middle">92.025</td>
<td align="center" valign="middle">91.842</td>
<td align="center" valign="middle">91.712</td>
<td align="center" valign="middle">97.993</td>
<td align="center" valign="middle">97.973</td>
<td align="center" valign="middle">98.163</td>
</tr>
<tr>
<td align="center" valign="middle">91.920</td>
<td align="center" valign="middle">98.123</td>
<td align="center" valign="middle">92.025</td>
<td align="center" valign="middle">91.842</td>
<td align="center" valign="middle">91.712</td>
<td align="center" valign="middle">97.993</td>
<td align="center" valign="middle">97.973</td>
<td align="center" valign="middle">98.163</td>
</tr>
<tr>
<td align="center" valign="middle">91.903</td>
<td align="center" valign="middle">98.132</td>
<td align="center" valign="middle">92.007</td>
<td align="center" valign="middle">91.947</td>
<td align="center" valign="middle">91.923</td>
<td align="center" valign="middle">98.000</td>
<td align="center" valign="middle">97.973</td>
<td align="center" valign="middle">98.163</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="3">SNR 2.0</td>
<td align="center" valign="middle">81.011</td>
<td align="center" valign="middle">89.133</td>
<td align="center" valign="middle">79.639</td>
<td align="center" valign="middle">79.747</td>
<td align="center" valign="middle">79.935</td>
<td align="center" valign="middle">89.187</td>
<td align="center" valign="middle">87.873</td>
<td align="center" valign="middle">89.346</td>
</tr>
<tr>
<td align="center" valign="middle">81.011</td>
<td align="center" valign="middle">89.133</td>
<td align="center" valign="middle">79.639</td>
<td align="center" valign="middle">79.747</td>
<td align="center" valign="middle">79.935</td>
<td align="center" valign="middle">89.187</td>
<td align="center" valign="middle">87.873</td>
<td align="center" valign="middle">89.346</td>
</tr>
<tr>
<td align="center" valign="middle">81.120</td>
<td align="center" valign="middle">89.394</td>
<td align="center" valign="middle">79.531</td>
<td align="center" valign="top">79.730</td>
<td align="center" valign="top">79.982</td>
<td align="center" valign="top">89.228</td>
<td align="center" valign="top">87.789</td>
<td align="center" valign="top">89.442</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="3">Set A vs. E</td>
<td align="center" valign="top">97.500</td>
<td align="center" valign="top">96.250</td>
<td align="center" valign="top">96.666</td>
<td align="center" valign="top">97.500</td>
<td align="center" valign="top">96.666</td>
<td align="center" valign="top">95.833</td>
<td align="center" valign="top">97.916</td>
<td align="center" valign="top">98.333</td>
</tr>
<tr>
<td align="center" valign="top">97.500</td>
<td align="center" valign="top">96.250</td>
<td align="center" valign="top">96.666</td>
<td align="center" valign="top">97.500</td>
<td align="center" valign="top">96.666</td>
<td align="center" valign="top">95.833</td>
<td align="center" valign="top">97.916</td>
<td align="center" valign="top">98.333</td>
</tr>
<tr>
<td align="center" valign="top">97.559</td>
<td align="center" valign="top">96.255</td>
<td align="center" valign="top">96.666</td>
<td align="center" valign="top">97.559</td>
<td align="center" valign="top">96.890</td>
<td align="center" valign="top">96.057</td>
<td align="center" valign="top">98.006</td>
<td align="center" valign="top">98.391</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="3">Set B vs. E</td>
<td align="center" valign="top">93.333</td>
<td align="center" valign="top">97.083</td>
<td align="center" valign="top">91.666</td>
<td align="center" valign="top">89.583</td>
<td align="center" valign="top">97.500</td>
<td align="center" valign="top">92.916</td>
<td align="center" valign="top">92.083</td>
<td align="center" valign="top">97.083</td>
</tr>
<tr>
<td align="center" valign="top">93.333</td>
<td align="center" valign="top">97.083</td>
<td align="center" valign="top">91.666</td>
<td align="center" valign="top">89.583</td>
<td align="center" valign="top">97.500</td>
<td align="center" valign="top">92.916</td>
<td align="center" valign="top">92.083</td>
<td align="center" valign="top">97.083</td>
</tr>
<tr>
<td align="center" valign="top">93.675</td>
<td align="center" valign="top">97.256</td>
<td align="center" valign="top">92.938</td>
<td align="center" valign="top">91.498</td>
<td align="center" valign="top">97.516</td>
<td align="center" valign="top">93.857</td>
<td align="center" valign="top">92.391</td>
<td align="center" valign="top">97.174</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="3">Set C vs. E</td>
<td align="center" valign="top">95.833</td>
<td align="center" valign="top">97.916</td>
<td align="center" valign="top">95.416</td>
<td align="center" valign="top">95.833</td>
<td align="center" valign="top">95.000</td>
<td align="center" valign="top">97.916</td>
<td align="center" valign="top">97.083</td>
<td align="center" valign="top">97.916</td>
</tr>
<tr>
<td align="center" valign="top">95.833</td>
<td align="center" valign="top">97.916</td>
<td align="center" valign="top">95.416</td>
<td align="center" valign="top">95.833</td>
<td align="center" valign="top">95.000</td>
<td align="center" valign="top">97.916</td>
<td align="center" valign="top">97.083</td>
<td align="center" valign="top">97.916</td>
</tr>
<tr>
<td align="center" valign="top">95.851</td>
<td align="center" valign="top">97.921</td>
<td align="center" valign="top">95.416</td>
<td align="center" valign="top">95.833</td>
<td align="center" valign="top">95.131</td>
<td align="center" valign="top">97.950</td>
<td align="center" valign="top">97.088</td>
<td align="center" valign="top">98.006</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="3">Set D vs. E</td>
<td align="center" valign="top">95.833</td>
<td align="center" valign="top">96.666</td>
<td align="center" valign="top">95.833</td>
<td align="center" valign="top">94.583</td>
<td align="center" valign="top">89.166</td>
<td align="center" valign="top">97.500</td>
<td align="center" valign="top">94.166</td>
<td align="center" valign="top">97.083</td>
</tr>
<tr>
<td align="center" valign="top">95.833</td>
<td align="center" valign="top">96.666</td>
<td align="center" valign="top">95.833</td>
<td align="center" valign="top">94.583</td>
<td align="center" valign="top">89.166</td>
<td align="center" valign="top">97.500</td>
<td align="center" valign="top">94.166</td>
<td align="center" valign="top">97.083</td>
</tr>
<tr>
<td align="center" valign="top">96.057</td>
<td align="center" valign="top">96.795</td>
<td align="center" valign="top">96.057</td>
<td align="center" valign="top">94.621</td>
<td align="center" valign="top">89.486</td>
<td align="center" valign="top">97.559</td>
<td align="center" valign="top">94.230</td>
<td align="center" valign="top">97.118</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="fig" rid="fig4">Figure 4</xref> presents a visual comparison of the average accuracy differences between the F8 method and the seven other studies, for both the WVG and WDPVG techniques. The mean accuracies for WVG and WDPVG were calculated for each experiment, followed by the average difference in accuracy between the F8 method and each other study. <xref ref-type="fig" rid="fig4">Figure 4</xref> shows that F8 consistently outperforms the F1 to F7 methods, with a positive average accuracy difference across all datasets. Among all the experiments, F8&#x2019;s performance for Set A vs. Set E had the lowest average accuracy difference. Additionally, in the SNR dataset experiments, F8 shows robustness and superior performance, especially as the signal becomes noisier (at SNR 2.0), compared to the other methodologies.</p>
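The average accuracy differences plotted in Figure 4 reduce to a per-experiment mean gap between F8 and each other method. As a simplified sketch, the snippet below uses only the WVG accuracy rows from Table 4 for F8 and F5 (the figure itself first averages the WVG and WDPVG accuracies per experiment):

```python
def avg_accuracy_difference(acc_a, acc_b):
    """Mean per-experiment accuracy gap between two methods."""
    return sum(a - b for a, b in zip(acc_a, acc_b)) / len(acc_a)

# WVG accuracies from Table 4 (SNR 0.5, 1.25, 2.0, then Sets A-D vs. E)
f8 = [99.578, 97.888, 89.454, 98.333, 97.916, 97.916, 98.333]
f5 = [98.594, 90.930, 79.612, 95.000, 96.666, 93.333, 95.833]
print(round(avg_accuracy_difference(f8, f5), 2))  # 4.21
```

A positive value, as here, means F8 outperforms the compared method on average across the seven experiments.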
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption>
<p>Comparative analysis of the average accuracy differences for the F8 (GCFE) method using WVG and WDPVG graph types across seven experiments.</p>
</caption>
<graphic xlink:href="fninf-18-1395916-g004.tif"/>
</fig>
<p><xref ref-type="fig" rid="fig5">Figure 5</xref> presents the average computational time of each feature extraction method on the two datasets: <xref ref-type="fig" rid="fig5">Figure 5A</xref> covers Dataset &#x2013; 1 and <xref ref-type="fig" rid="fig5">Figure 5B</xref> covers Dataset &#x2013; 2. In both panels, the <italic>x</italic>-axis denotes the number of features for each method, and the <italic>y</italic>-axis displays the average computation time in seconds on a logarithmic scale. The computation time for each method is averaged over 25,325 epochs for Dataset &#x2013; 1 and 800 epochs for Dataset &#x2013; 2.</p>
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption>
<p>Representation of average (avg.) computational time for both datasets vs. number of features in each feature extraction method: <bold>(A)</bold> Log-scale avg. computational time vs. number of features in Dataset &#x2013; 1; <bold>(B)</bold> Log-scale avg. computational time vs. number of features in Dataset &#x2013; 2.</p>
</caption>
<graphic xlink:href="fninf-18-1395916-g005.tif"/>
</fig>
<p>Similar trends were observed in <xref ref-type="fig" rid="fig5">Figure 5B</xref> for Dataset &#x2013; 2, where the F8 method was the most time-efficient, averaging 11.34&#x2009;s. In contrast, the F2 method was the most time-consuming, requiring an average of 336.69&#x2009;s. Despite having fewer features, as indicated in <xref ref-type="table" rid="tab3">Table 3</xref>, the F1, F3, F4, F5, and F7 methods still demanded more time than F8, with average times of 178.94&#x2009;s, 13.79&#x2009;s, 44.11&#x2009;s, 140.46&#x2009;s, and 130.39&#x2009;s, respectively. The F6 method was the only one that came close to F8 in computation time, averaging 13.06&#x2009;s. <xref ref-type="table" rid="tab4">Tables 4</xref>, <xref ref-type="table" rid="tab5">5</xref> reveal that a larger number of extracted features, as in the F2 method with the most features, does not enhance classification accuracy and increases computational time. All results and supporting code are available on GitHub (<xref ref-type="bibr" rid="ref30">Patel, 2023</xref>).</p>
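The per-method averages above come from timing each extractor over many epochs. A generic timing harness might look like the following; the helper name and the trivial stand-in extractor are illustrative, not the paper's benchmarking code.

```python
import time

def average_extraction_time(extract, epochs):
    """Average wall-clock seconds that `extract` takes per epoch."""
    start = time.perf_counter()
    for epoch in epochs:
        extract(epoch)
    return (time.perf_counter() - start) / len(epochs)

# Toy example: time a trivial feature extractor over 800 fake epochs
fake_epochs = [list(range(64)) for _ in range(800)]
avg_seconds = average_extraction_time(lambda e: sum(e), fake_epochs)
print(avg_seconds >= 0.0)  # True
```

Using `time.perf_counter` (a monotonic, high-resolution clock) avoids the skew that wall-clock adjustments would introduce over long runs.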
<table-wrap position="float" id="tab5">
<label>Table 5</label>
<caption>
<p>Computational times (in seconds) for feature extraction studies using weighted visibility graphs and weighted dual perspective visibility graphs across various experiments.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Experiments</th>
<th align="center" valign="top">F1</th>
<th align="center" valign="top">F2</th>
<th align="center" valign="top">F3</th>
<th align="center" valign="top">F4</th>
<th align="center" valign="top">F5</th>
<th align="center" valign="top">F6</th>
<th align="center" valign="top">F7</th>
<th align="center" valign="top">F8</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle" colspan="9">
<italic>Weighted visibility graph</italic>
</td>
</tr>
<tr>
<td align="left" valign="middle">SNR 0.5</td>
<td align="center" valign="middle">27.4</td>
<td align="center" valign="middle">33.7</td>
<td align="center" valign="middle">4.3</td>
<td align="center" valign="middle">4.5</td>
<td align="center" valign="middle">25.3</td>
<td align="center" valign="middle">2.6</td>
<td align="center" valign="middle">27.4</td>
<td align="center" valign="middle">1.5</td>
</tr>
<tr>
<td align="left" valign="middle">SNR 1.25</td>
<td align="center" valign="middle">22.4</td>
<td align="center" valign="middle">30.5</td>
<td align="center" valign="middle">4.1</td>
<td align="center" valign="middle">4.2</td>
<td align="center" valign="middle">19.9</td>
<td align="center" valign="middle">2.6</td>
<td align="center" valign="middle">26.5</td>
<td align="center" valign="middle">1.5</td>
</tr>
<tr>
<td align="left" valign="middle">SNR 2.0</td>
<td align="center" valign="middle">19.6</td>
<td align="center" valign="middle">26.6</td>
<td align="center" valign="middle">4.0</td>
<td align="center" valign="middle">4.1</td>
<td align="center" valign="middle">17.6</td>
<td align="center" valign="middle">2.5</td>
<td align="center" valign="middle">25.1</td>
<td align="center" valign="middle">1.5</td>
</tr>
<tr>
<td align="left" valign="middle">Set A vs. E</td>
<td align="center" valign="middle">104.7</td>
<td align="center" valign="middle">246.2</td>
<td align="center" valign="middle">13.2</td>
<td align="center" valign="middle">37.8</td>
<td align="center" valign="middle">76.7</td>
<td align="center" valign="middle">12.8</td>
<td align="center" valign="middle">128.3</td>
<td align="center" valign="middle">10.8</td>
</tr>
<tr>
<td align="left" valign="middle">Set B vs. E</td>
<td align="center" valign="middle">105.7</td>
<td align="center" valign="middle">247.2</td>
<td align="center" valign="middle">13.7</td>
<td align="center" valign="middle">37.7</td>
<td align="center" valign="middle">76.4</td>
<td align="center" valign="middle">13.6</td>
<td align="center" valign="middle">133.0</td>
<td align="center" valign="middle">11.3</td>
</tr>
<tr>
<td align="left" valign="middle">Set C vs. E</td>
<td align="center" valign="middle">124.6</td>
<td align="center" valign="middle">285.2</td>
<td align="center" valign="middle">13.9</td>
<td align="center" valign="middle">40.6</td>
<td align="center" valign="middle">94.9</td>
<td align="center" valign="middle">12.3</td>
<td align="center" valign="middle">129.5</td>
<td align="center" valign="middle">11.2</td>
</tr>
<tr>
<td align="left" valign="middle">Set D vs. E</td>
<td align="center" valign="middle">176.9</td>
<td align="center" valign="middle">345.3</td>
<td align="center" valign="middle">13.8</td>
<td align="center" valign="middle">42.2</td>
<td align="center" valign="middle">141.9</td>
<td align="center" valign="middle">12.2</td>
<td align="center" valign="middle">131.3</td>
<td align="center" valign="middle">11.5</td>
</tr>
<tr>
<td align="left" valign="middle" colspan="9">
<italic>Weighted dual perspective visibility graph</italic>
</td>
</tr>
<tr>
<td align="left" valign="middle">SNR 0.5</td>
<td align="center" valign="middle">35.2</td>
<td align="center" valign="middle">45.5</td>
<td align="center" valign="middle">5.1</td>
<td align="center" valign="middle">6.1</td>
<td align="center" valign="middle">34.3</td>
<td align="center" valign="middle">3.2</td>
<td align="center" valign="middle">27.6</td>
<td align="center" valign="middle">1.5</td>
</tr>
<tr>
<td align="left" valign="middle">SNR 1.25</td>
<td align="center" valign="middle">28.9</td>
<td align="center" valign="middle">38.2</td>
<td align="center" valign="middle">5.0</td>
<td align="center" valign="middle">6.2</td>
<td align="center" valign="middle">25.8</td>
<td align="center" valign="middle">2.8</td>
<td align="center" valign="middle">26.6</td>
<td align="center" valign="middle">1.5</td>
</tr>
<tr>
<td align="left" valign="middle">SNR 2.0</td>
<td align="center" valign="middle">26.6</td>
<td align="center" valign="middle">35.0</td>
<td align="center" valign="middle">4.1</td>
<td align="center" valign="middle">5.4</td>
<td align="center" valign="middle">23.9</td>
<td align="center" valign="middle">2.6</td>
<td align="center" valign="middle">24.9</td>
<td align="center" valign="middle">1.4</td>
</tr>
<tr>
<td align="left" valign="middle">Set A vs. E</td>
<td align="center" valign="middle">159.4</td>
<td align="center" valign="middle">314.9</td>
<td align="center" valign="middle">14.5</td>
<td align="center" valign="middle">44.8</td>
<td align="center" valign="middle">120.9</td>
<td align="center" valign="middle">12.9</td>
<td align="center" valign="middle">131.8</td>
<td align="center" valign="middle">11.5</td>
</tr>
<tr>
<td align="left" valign="middle">Set B vs. E</td>
<td align="center" valign="middle">164.3</td>
<td align="center" valign="middle">306.7</td>
<td align="center" valign="middle">13.1</td>
<td align="center" valign="middle">49.5</td>
<td align="center" valign="middle">119.3</td>
<td align="center" valign="middle">13.7</td>
<td align="center" valign="middle">131.6</td>
<td align="center" valign="middle">11.2</td>
</tr>
<tr>
<td align="left" valign="middle">Set C vs. E</td>
<td align="center" valign="middle">208.2</td>
<td align="center" valign="middle">346.9</td>
<td align="center" valign="middle">14.5</td>
<td align="center" valign="middle">49.0</td>
<td align="center" valign="middle">142.0</td>
<td align="center" valign="middle">13.9</td>
<td align="center" valign="middle">127.2</td>
<td align="center" valign="middle">11.4</td>
</tr>
<tr>
<td align="left" valign="middle">Set D vs. E</td>
<td align="center" valign="middle">387.7</td>
<td align="center" valign="middle">601.1</td>
<td align="center" valign="middle">13.6</td>
<td align="center" valign="middle">51.3</td>
<td align="center" valign="middle">351.6</td>
<td align="center" valign="middle">13.1</td>
<td align="center" valign="middle">130.4</td>
<td align="center" valign="middle">11.8</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec sec-type="conclusions" id="sec12">
<label>5</label>
<title>Conclusion</title>
<p>In conclusion, this paper demonstrated a new implementation of the GC theorem with the mWL matrix as a feature extraction methodology for biomedical signals. The results clearly show that the GCFE approach surpasses other feature reduction techniques, and GCFE delivered a consistently positive average accuracy difference across both datasets and both graph types. Furthermore, the proposed methodology was more computationally efficient than the other methods. The superior accuracy and reduced computational time of GCFE demonstrate that it is exceptionally well suited for real-time biomedical signal classification applications. However, the proposed GCFE is constrained to extracting a fixed number of features, converting an N &#x00D7; N Laplacian matrix to a 2 &#x00D7; N vector due to its non-parametric approach. Future research could expand the potential uses of GCFE by integrating alternative eigenvalue inclusion theorems or by modifying the GC theorem to predict more precise eigenvalue inclusions of random Laplacian matrices.</p>
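<p>The 2 &#x00D7; N reduction described above follows directly from the Gershgorin circle theorem: each row of an N &#x00D7; N matrix yields one circle, whose center is the diagonal entry and whose radius is the sum of the absolute off-diagonal entries in that row. A minimal NumPy sketch of this idea (not the authors' released GCFE code, which is available in the cited GitHub repository) is:</p>

```python
import numpy as np

def gershgorin_features(L):
    """Map an N x N matrix to a 2 x N array of Gershgorin circle centers and radii."""
    L = np.asarray(L, dtype=float)
    centers = np.diag(L)                                 # circle centers: diagonal entries
    radii = np.abs(L).sum(axis=1) - np.abs(centers)      # radii: absolute off-diagonal row sums
    return np.vstack([centers, radii])                   # shape (2, N)

# Toy weighted-graph example: Laplacian = degree matrix - weighted adjacency.
W = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])
L = np.diag(W.sum(axis=1)) - W
F = gershgorin_features(L)    # 2 x 3 feature array
```

<p>By the theorem, every eigenvalue of L lies in the union of the intervals [center &#x2212; radius, center + radius], so the 2 &#x00D7; N array bounds the spectrum without an O(N&#x00B3;) eigendecomposition.</p>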
</sec>
<sec sec-type="data-availability" id="sec13">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author.</p>
</sec>
<sec sec-type="author-contributions" id="sec14">
<title>Author contributions</title>
<p>SP: Conceptualization, Formal analysis, Methodology, Validation, Writing &#x2013; original draft, Writing &#x2013; review &#x0026; editing. RS: Formal analysis, Funding acquisition, Investigation, Project administration, Resources, Supervision, Validation, Visualization, Writing &#x2013; review &#x0026; editing. AY: Formal analysis, Investigation, Project administration, Resources, Supervision, Validation, Visualization, Writing &#x2013; review &#x0026; editing.</p>
</sec>
</body>
<back>
<sec sec-type="funding-information" id="sec15">
<title>Funding</title>
<p>The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. RS was funded by the American Epilepsy Society Grant 1042632, CURE Epilepsy Grant 1061181, and NIH BRAIN Initiative Grant UG3NS130202.</p>
</sec>
<sec sec-type="COI-statement" id="sec16">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="sec17">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec sec-type="supplementary-material" id="sec18">
<title>Supplementary material</title>
<p>The Supplementary material for this article can be found online at: <ext-link xlink:href="https://www.frontiersin.org/articles/10.3389/fninf.2024.1395916/full#supplementary-material" ext-link-type="uri">https://www.frontiersin.org/articles/10.3389/fninf.2024.1395916/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.PDF" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Adamos</surname> <given-names>D. A.</given-names></name> <name><surname>Kosmidis</surname> <given-names>E. K.</given-names></name> <name><surname>Theophilidis</surname> <given-names>G.</given-names></name></person-group> (<year>2008</year>). <article-title>Performance evaluation of PCA-based spike sorting algorithms</article-title>. <source>Comput. Methods Prog. Biomed.</source> <volume>91</volume>, <fpage>232</fpage>&#x2013;<lpage>244</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.cmpb.2008.04.011</pub-id>, PMID: <pub-id pub-id-type="pmid">18565614</pub-id></citation></ref>
<ref id="ref2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahmadlou</surname> <given-names>M.</given-names></name> <name><surname>Adeli</surname> <given-names>H.</given-names></name> <name><surname>Adeli</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>New diagnostic EEG markers of the Alzheimer&#x2019;s disease using visibility graph</article-title>. <source>J. Neural Transm.</source> <volume>117</volume>, <fpage>1099</fpage>&#x2013;<lpage>1109</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s00702-010-0450-3</pub-id>, PMID: <pub-id pub-id-type="pmid">20714909</pub-id></citation></ref>
<ref id="ref3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Andrzejak</surname> <given-names>R. G.</given-names></name> <name><surname>Lehnertz</surname> <given-names>K.</given-names></name> <name><surname>Mormann</surname> <given-names>F.</given-names></name> <name><surname>Rieke</surname> <given-names>C.</given-names></name> <name><surname>David</surname> <given-names>P.</given-names></name> <name><surname>Elger</surname> <given-names>C. E.</given-names></name></person-group> (<year>2001</year>). <article-title>Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: dependence on recording region and brain state</article-title>. <source>Phys. Rev. E</source> <volume>64</volume>:<fpage>061907</fpage>. doi: <pub-id pub-id-type="doi">10.1103/PhysRevE.64.061907</pub-id>, PMID: <pub-id pub-id-type="pmid">11736210</pub-id></citation></ref>
<ref id="ref4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Antoniou</surname> <given-names>I.</given-names></name> <name><surname>Tsompa</surname> <given-names>E.</given-names></name></person-group> (<year>2008</year>). <article-title>Statistical analysis of weighted networks</article-title>. <source>Discrete Dyn. Nature Soc.</source> <volume>2008</volume>, <fpage>1</fpage>&#x2013;<lpage>16</lpage>. doi: <pub-id pub-id-type="doi">10.1155/2008/375452</pub-id></citation></ref>
<ref id="ref5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Artameeyanant</surname> <given-names>P.</given-names></name> <name><surname>Sultornsanee</surname> <given-names>S.</given-names></name> <name><surname>Chamnongthai</surname> <given-names>K.</given-names></name></person-group> (<year>2017</year>). <article-title>Electroencephalography-based feature extraction using complex network for automated epileptic seizure detection</article-title>. <source>Expert. Syst.</source> <volume>34</volume>:<fpage>e12211</fpage>. doi: <pub-id pub-id-type="doi">10.1111/exsy.12211</pub-id></citation></ref>
<ref id="ref6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bernert</surname> <given-names>M.</given-names></name> <name><surname>Yvert</surname> <given-names>B.</given-names></name></person-group> (<year>2019</year>). <article-title>An attention-based spiking neural network for unsupervised spike-sorting</article-title>. <source>Int. J. Neural Syst.</source> <volume>29</volume>:<fpage>1850059</fpage>. doi: <pub-id pub-id-type="doi">10.1142/S0129065718500594</pub-id>, PMID: <pub-id pub-id-type="pmid">30776985</pub-id></citation></ref>
<ref id="ref7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blondel</surname> <given-names>V. D.</given-names></name></person-group> (<year>2008</year>). <article-title>Fast unfolding of communities in large networks</article-title>. <source>J. Stat. Mech.</source> <volume>2008</volume>:<fpage>P10008</fpage>. doi: <pub-id pub-id-type="doi">10.1088/1742-5468/2008/10/P10008</pub-id></citation></ref>
<ref id="ref8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bose</surname> <given-names>R.</given-names></name> <name><surname>Samanta</surname> <given-names>K.</given-names></name> <name><surname>Modak</surname> <given-names>S.</given-names></name> <name><surname>Chatterjee</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Augmenting neuromuscular disease detection using optimally parameterized weighted visibility graph</article-title>. <source>IEEE J. Biomed. Health Inform.</source> <volume>25</volume>, <fpage>685</fpage>&#x2013;<lpage>692</lpage>. doi: <pub-id pub-id-type="doi">10.1109/JBHI.2020.3001877</pub-id></citation></ref>
<ref id="ref9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cai</surname> <given-names>Q.</given-names></name> <name><surname>An</surname> <given-names>J. P.</given-names></name> <name><surname>Li</surname> <given-names>H. Y.</given-names></name> <name><surname>Guo</surname> <given-names>J. Y.</given-names></name> <name><surname>Gao</surname> <given-names>Z. K.</given-names></name></person-group> (<year>2022</year>). <article-title>Cross-subject emotion recognition using visibility graph and genetic algorithm-based convolution neural network Chaos: an Interdisciplinary</article-title>. <source>J. Nonlinear Sci.</source> <volume>32</volume>:<fpage>093110</fpage>. doi: <pub-id pub-id-type="doi">10.1063/5.0098454</pub-id>, PMID: <pub-id pub-id-type="pmid">36182360</pub-id></citation></ref>
<ref id="ref10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Campanharo</surname> <given-names>A.</given-names></name> <name><surname>Ramos</surname> <given-names>F. M.</given-names></name> <name><surname>Macau</surname> <given-names>E. E. N.</given-names></name> <name><surname>Rosa</surname> <given-names>R. R.</given-names></name> <name><surname>Bolzan</surname> <given-names>M. J. A.</given-names></name> <name><surname>S&#x00E1;</surname> <given-names>L. D. A.</given-names></name></person-group> (<year>2008</year>). <article-title>Searching chaos and coherent structures in the atmospheric turbulence above the Amazon forest</article-title>. <source>Philos. Trans. Phys. Sci. Eng.</source> <volume>366</volume>, <fpage>579</fpage>&#x2013;<lpage>589</lpage>. doi: <pub-id pub-id-type="doi">10.1098/rsta.2007.2118</pub-id>, PMID: <pub-id pub-id-type="pmid">17698463</pub-id></citation></ref>
<ref id="ref11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Costa-Feito</surname> <given-names>A.</given-names></name> <name><surname>Gonz&#x00E1;lez-Fern&#x00E1;ndez</surname> <given-names>A. M.</given-names></name> <name><surname>Rodr&#x00ED;guez-Santos</surname> <given-names>C.</given-names></name> <name><surname>Cervantes-Blanco</surname> <given-names>M.</given-names></name></person-group> (<year>2023</year>). <article-title>Electroencephalography in consumer behaviour and marketing: a science mapping approach</article-title>. <source>Human. Soc. Sci. Commun.</source> <volume>10</volume>, <fpage>1</fpage>&#x2013;<lpage>13</lpage>. doi: <pub-id pub-id-type="doi">10.1057/s41599-023-01991-6</pub-id></citation></ref>
<ref id="ref12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>David</surname> <given-names>O.</given-names></name> <name><surname>Garnero</surname> <given-names>L.</given-names></name> <name><surname>Cosmelli</surname> <given-names>D.</given-names></name> <name><surname>Varela</surname> <given-names>F. J.</given-names></name></person-group> (<year>2002</year>). <article-title>Estimation of neural dynamics from MEG/EEG cortical current density maps: application to the reconstruction of large-scale cortical synchrony</article-title>. <source>IEEE Trans. Biomed. Eng.</source> <volume>49</volume>, <fpage>975</fpage>&#x2013;<lpage>987</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TBME.2002.802013</pub-id></citation></ref>
<ref id="ref13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>Z.</given-names></name> <name><surname>Cai</surname> <given-names>Q.</given-names></name> <name><surname>Yang</surname> <given-names>Y. X.</given-names></name> <name><surname>Dang</surname> <given-names>W. D.</given-names></name> <name><surname>Zhang</surname> <given-names>S. S.</given-names></name></person-group> (<year>2016</year>). <article-title>Multiscale limited penetrable horizontal visibility graph for analyzing nonlinear time series</article-title>. <source>Sci. Rep.</source> <volume>6</volume>:<fpage>35622</fpage>. doi: <pub-id pub-id-type="doi">10.1038/srep35622</pub-id>, PMID: <pub-id pub-id-type="pmid">27759088</pub-id></citation></ref>
<ref id="ref14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gershgorin</surname> <given-names>S. A.</given-names></name></person-group> (<year>1931</year>). <article-title>&#x00DC;ber die Abgrenzung der Eigenwerte einer Matrix</article-title>. <source>Izvestiya Akademii Nauk SSSR</source> <volume>6</volume>, <fpage>749</fpage>&#x2013;<lpage>754</lpage>.</citation></ref>
<ref id="ref15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gu</surname> <given-names>Y.</given-names></name> <name><surname>Gagnon</surname> <given-names>J.</given-names></name> <name><surname>Kaminska</surname> <given-names>M.</given-names></name></person-group> (<year>2023</year>). <article-title>Sleep electroencephalography biomarkers of cognition in obstructive sleep apnea</article-title>. <source>J. Sleep Res.</source> <volume>32</volume>:<fpage>13831</fpage>. doi: <pub-id pub-id-type="doi">10.1111/jsr.13831</pub-id></citation></ref>
<ref id="ref16"><citation citation-type="other"><person-group person-group-type="author"><name><surname>Hao</surname> <given-names>C.</given-names></name> <name><surname>Chen</surname> <given-names>Z.</given-names></name> <name><surname>Zhao</surname> <given-names>Z.</given-names></name></person-group> (<year>2016</year>). &#x201C;<article-title>Analysis and prediction of epilepsy based on visibility graph</article-title>&#x201D; in <source>2016 3rd ICISCE. IEEE</source>, <fpage>1271</fpage>&#x2013;<lpage>1274</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ICISCE.2016.272</pub-id></citation></ref>
<ref id="ref17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>He</surname> <given-names>C.</given-names></name> <name><surname>Chen</surname> <given-names>Y. Y.</given-names></name> <name><surname>Phang</surname> <given-names>C. R.</given-names></name> <name><surname>Stevenson</surname> <given-names>C.</given-names></name> <name><surname>Chen</surname> <given-names>I. P.</given-names></name> <name><surname>Jung</surname> <given-names>T. P.</given-names></name></person-group> (<year>2023</year>). <article-title>Diversity and suitability of the state-of-the-art wearable and wireless EEG systems review</article-title>. <source>IEEE J. Biomed. Health Inform.</source> <volume>27</volume>, <fpage>3830</fpage>&#x2013;<lpage>3843</lpage>. doi: <pub-id pub-id-type="doi">10.1109/JBHI.2023.3239053</pub-id>, PMID: <pub-id pub-id-type="pmid">37022001</pub-id></citation></ref>
<ref id="ref18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Javaid</surname> <given-names>H.</given-names></name> <name><surname>Kumarnsit</surname> <given-names>E.</given-names></name> <name><surname>Chatpun</surname> <given-names>S.</given-names></name></person-group> (<year>2022</year>). <article-title>Age-related alterations in EEG network connectivity in healthy aging</article-title>. <source>Brain Sci.</source> <volume>12</volume>:<fpage>218</fpage>. doi: <pub-id pub-id-type="doi">10.3390/brainsci12020218</pub-id>, PMID: <pub-id pub-id-type="pmid">35203981</pub-id></citation></ref>
<ref id="ref19"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Kantz</surname> <given-names>H.</given-names></name> <name><surname>Schreiber</surname> <given-names>T.</given-names></name></person-group> (<year>2003</year>). <source>Nonlinear time series analysis</source>. <publisher-loc>Cambridge. Cambridge, UK</publisher-loc>: <publisher-name>Cambridge Univ. Press</publisher-name>.</citation></ref>
<ref id="ref20"><citation citation-type="book"><person-group person-group-type="author"><name><surname>K&#x00F6;rner</surname> <given-names>T.</given-names></name></person-group> (<year>1988</year>). <source>Fourier analysis</source>. <publisher-loc>Cambridge, UK</publisher-loc>: <publisher-name>Cambridge Univ. Press</publisher-name>.</citation></ref>
<ref id="ref21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krizhevsky</surname> <given-names>A.</given-names></name> <name><surname>Sutskever</surname> <given-names>I.</given-names></name> <name><surname>Hinton</surname> <given-names>G. E.</given-names></name></person-group> (<year>2012</year>). <article-title>Imagenet classification with deep convolutional neural networks</article-title>. <source>NeurIPS</source> <volume>25</volume>, <fpage>1</fpage>&#x2013;<lpage>9</lpage>.</citation></ref>
<ref id="ref22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lacasa</surname> <given-names>L.</given-names></name> <name><surname>Luque</surname> <given-names>B.</given-names></name> <name><surname>Ballesteros</surname> <given-names>F.</given-names></name> <name><surname>Luque</surname> <given-names>J.</given-names></name> <name><surname>Nu&#x00F1;o</surname> <given-names>J. C.</given-names></name></person-group> (<year>2008</year>). <article-title>From time series to complex networks: the visibility graph</article-title>. <source>Proc. Natl. Acad. Sci. USA</source> <volume>105</volume>, <fpage>4972</fpage>&#x2013;<lpage>4975</lpage>. doi: <pub-id pub-id-type="doi">10.1073/pnas.0709247105</pub-id>, PMID: <pub-id pub-id-type="pmid">18362361</pub-id></citation></ref>
<ref id="ref23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Latora</surname> <given-names>V.</given-names></name> <name><surname>Marchiori</surname> <given-names>M.</given-names></name></person-group> (<year>2001</year>). <article-title>Efficient behavior of small-world networks</article-title>. <source>Phys. Rev. Lett.</source> <volume>87</volume>:<fpage>198701</fpage>. doi: <pub-id pub-id-type="doi">10.1103/PhysRevLett.87.198701</pub-id></citation></ref>
<ref id="ref24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luque</surname> <given-names>B.</given-names></name> <name><surname>Lacasa</surname> <given-names>L.</given-names></name> <name><surname>Ballesteros</surname> <given-names>F.</given-names></name> <name><surname>Luque</surname> <given-names>J.</given-names></name></person-group> (<year>2009</year>). <article-title>Horizontal visibility graphs: exact results for random time series</article-title>. <source>Phys. Rev. E</source> <volume>80</volume>:<fpage>046103</fpage>. doi: <pub-id pub-id-type="doi">10.1103/PhysRevE.80.046103</pub-id>, PMID: <pub-id pub-id-type="pmid">19905386</pub-id></citation></ref>
<ref id="ref25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maher</surname> <given-names>C.</given-names></name> <name><surname>Yang</surname> <given-names>Y.</given-names></name> <name><surname>Truong</surname> <given-names>N. D.</given-names></name> <name><surname>Wang</surname> <given-names>C.</given-names></name> <name><surname>Nikpour</surname> <given-names>A.</given-names></name> <name><surname>Kavehei</surname> <given-names>O.</given-names></name></person-group> (<year>2023</year>). <article-title>Seizure detection with reduced electroencephalogram channels: research trends and outlook</article-title>. <source>R. Soc. Open Sci.</source> <volume>10</volume>:<fpage>230022</fpage>. doi: <pub-id pub-id-type="doi">10.1098/rsos.230022</pub-id>, PMID: <pub-id pub-id-type="pmid">37153360</pub-id></citation></ref>
<ref id="ref26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Minguillon</surname> <given-names>J.</given-names></name> <name><surname>Lopez-Gordo</surname> <given-names>M. A.</given-names></name> <name><surname>Pelayo</surname> <given-names>F.</given-names></name></person-group> (<year>2017</year>). <article-title>Trends in EEG-BCI for daily-life: requirements for artifact removal</article-title>. <source>Biomed. Signal Process. Control</source> <volume>31</volume>, <fpage>407</fpage>&#x2013;<lpage>418</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.bspc.2016.09.005</pub-id></citation></ref>
<ref id="ref27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Modir</surname> <given-names>A.</given-names></name> <name><surname>Shamekhi</surname> <given-names>S.</given-names></name> <name><surname>Ghaderyan</surname> <given-names>P.</given-names></name></person-group> (<year>2023</year>). <article-title>A systematic review and methodological analysis of EEG-based biomarkers of Alzheimer's disease</article-title>. <source>Measurement</source> <volume>220</volume>:<fpage>113274</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.measurement.2023.113274</pub-id></citation></ref>
<ref id="ref28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mohammadpoory</surname> <given-names>Z.</given-names></name> <name><surname>Nasrolahzadeh</surname> <given-names>M.</given-names></name> <name><surname>Amiri</surname> <given-names>S. A.</given-names></name></person-group> (<year>2023</year>). <article-title>Classification of healthy and epileptic seizure EEG signals based on different visibility graph algorithms and EEG time series</article-title>. <source>Multimedia Tools Appl.</source> <volume>83</volume>, <fpage>2703</fpage>&#x2013;<lpage>2724</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11042-023-15681-7</pub-id></citation></ref>
<ref id="ref29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ortega Bejarano</surname> <given-names>D. A.</given-names></name> <name><surname>Ibarguen-Mondragon</surname> <given-names>E.</given-names></name> <name><surname>Gomez-Hernandez</surname> <given-names>E. A.</given-names></name></person-group> (<year>2018</year>). <article-title>A stability test for non linear systems of ordinary differential equations based on the gershgorin circles</article-title>. <source>Contem. Eng. Sci.</source> <volume>11</volume>, <fpage>4541</fpage>&#x2013;<lpage>4548</lpage>. doi: <pub-id pub-id-type="doi">10.12988/ces.2018.89504</pub-id></citation></ref>
<ref id="ref30"><citation citation-type="other"><person-group person-group-type="author"><name><surname>Patel</surname> <given-names>S. A.</given-names></name></person-group> (<year>2023</year>). <article-title>Signal_GCFE</article-title>. Available at: <ext-link xlink:href="https://github.com/sahaj432/Signal_GCFE.git" ext-link-type="uri">https://github.com/sahaj432/Signal_GCFE.git</ext-link> (accessed 07 Sept. 2023).</citation></ref>
<ref id="ref31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Patel</surname> <given-names>S. A.</given-names></name> <name><surname>Yildirim</surname> <given-names>A.</given-names></name></person-group> (<year>2023</year>). <article-title>Non-stationary neural signal to image conversion framework for image-based deep learning algorithms</article-title>. <source>Front. Neuroinform.</source> <volume>17</volume>:<fpage>1081160</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fninf.2023.1081160</pub-id>, PMID: <pub-id pub-id-type="pmid">37035716</pub-id></citation></ref>
<ref id="ref32"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Percival</surname> <given-names>D.</given-names></name> <name><surname>Walden</surname> <given-names>A.</given-names></name></person-group> (<year>2000</year>). <source>Wavelet methods for time series analysis</source>. <publisher-loc>Cambridge, UK</publisher-loc>: <publisher-name>Cambridge Univ. Press</publisher-name>.</citation></ref>
<ref id="ref33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perez-Valero</surname> <given-names>E.</given-names></name> <name><surname>Lopez-Gordo</surname> <given-names>M. A.</given-names></name> <name><surname>Morillas</surname> <given-names>C.</given-names></name> <name><surname>Pelayo</surname> <given-names>F.</given-names></name> <name><surname>Vaquero-Blasco</surname> <given-names>M. A.</given-names></name></person-group> (<year>2021</year>). <article-title>A review of automated techniques for assisting the early detection of Alzheimer&#x2019;s disease with a focus on EEG</article-title>. <source>J. Alzheimers Dis.</source> <volume>80</volume>, <fpage>1363</fpage>&#x2013;<lpage>1376</lpage>. doi: <pub-id pub-id-type="doi">10.3233/JAD-201455</pub-id>, PMID: <pub-id pub-id-type="pmid">33682717</pub-id></citation></ref>
<ref id="ref34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rodriguez-Bermudez</surname> <given-names>G.</given-names></name> <name><surname>Garcia-Laencina</surname> <given-names>P. J.</given-names></name></person-group> (<year>2015</year>). <article-title>Analysis of EEG signals using nonlinear dynamics and chaos: a review</article-title>. <source>Appl. Math. Inform. Sci.</source> <volume>9</volume>:<fpage>2309</fpage>. doi: <pub-id pub-id-type="doi">10.31219/osf.io/4ehcs</pub-id></citation></ref>
<ref id="ref35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Saram&#x00E4;ki</surname> <given-names>J.</given-names></name> <name><surname>Kivel&#x00E4;</surname> <given-names>M.</given-names></name> <name><surname>Onnela</surname> <given-names>J. P.</given-names></name> <name><surname>Kaski</surname> <given-names>K.</given-names></name> <name><surname>Kertesz</surname> <given-names>J.</given-names></name></person-group> (<year>2007</year>). <article-title>Generalizations of the clustering coefficient to weighted complex networks</article-title>. <source>Phys. Rev. E</source> <volume>75</volume>:<fpage>027105</fpage>. doi: <pub-id pub-id-type="doi">10.1103/PhysRevE.75.027105</pub-id></citation></ref>
<ref id="ref36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stam</surname> <given-names>C. J.</given-names></name> <name><surname>van Straaten</surname> <given-names>E. C. W.</given-names></name></person-group> (<year>2012</year>). <article-title>The organization of physiological brain networks</article-title>. <source>Clin. Neurophysiol.</source> <volume>123</volume>, <fpage>1067</fpage>&#x2013;<lpage>1087</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.clinph.2012.01.011</pub-id></citation></ref>
<ref id="ref37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Supriya</surname> <given-names>S.</given-names></name> <name><surname>Siuly</surname> <given-names>S.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Cao</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name></person-group> (<year>2016</year>). <article-title>Weighted visibility graph with complex network features in the detection of epilepsy</article-title>. <source>IEEE Access</source> <volume>4</volume>, <fpage>6554</fpage>&#x2013;<lpage>6566</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2016.2612242</pub-id></citation></ref>
<ref id="ref38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Torres</surname> <given-names>C. B.</given-names></name> <name><surname>Barona</surname> <given-names>E. J. G.</given-names></name> <name><surname>Molina</surname> <given-names>M. G.</given-names></name> <name><surname>S&#x00E1;nchez</surname> <given-names>M. E. G. B.</given-names></name> <name><surname>Manso</surname> <given-names>J. M. M.</given-names></name></person-group> (<year>2023</year>). <article-title>A systematic review of EEG neurofeedback in fibromyalgia to treat psychological variables, chronic pain and general health</article-title>. <source>Eur. Arch. Psychiatry Clin. Neurosci.</source>, <fpage>1</fpage>&#x2013;<lpage>19</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s00406-023-01612-y</pub-id></citation></ref>
<ref id="ref39"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Varga</surname> <given-names>R. S.</given-names></name></person-group> (<year>2010</year>). <source>Ger&#x0161;gorin and his circles</source>. <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer Science &#x0026; Business Media</publisher-name>.</citation></ref>
<ref id="ref40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>F.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Cheung</surname> <given-names>G.</given-names></name> <name><surname>Yang</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>Graph sampling for matrix completion using recurrent Gershgorin disc shift</article-title>. <source>IEEE Trans. Signal Process.</source> <volume>68</volume>, <fpage>2814</fpage>&#x2013;<lpage>2829</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TSP.2020.2988784</pub-id></citation></ref>
<ref id="ref41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xie</surname> <given-names>L.</given-names></name> <name><surname>Huang</surname> <given-names>J.</given-names></name> <name><surname>Tan</surname> <given-names>E.</given-names></name> <name><surname>He</surname> <given-names>F.</given-names></name> <name><surname>Liu</surname> <given-names>Z.</given-names></name></person-group> (<year>2022</year>). <article-title>The stability criterion and stability analysis of three-phase grid-connected rectifier system based on Gerschgorin circle theorem</article-title>. <source>Electronics</source> <volume>11</volume>:<fpage>3270</fpage>. doi: <pub-id pub-id-type="doi">10.3390/electronics11203270</pub-id></citation></ref>
<ref id="ref42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Landsness</surname> <given-names>E. C.</given-names></name> <name><surname>Chen</surname> <given-names>W.</given-names></name> <name><surname>Miao</surname> <given-names>H.</given-names></name> <name><surname>Tang</surname> <given-names>M.</given-names></name> <name><surname>Brier</surname> <given-names>L. M.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Automated sleep state classification of wide-field calcium imaging data via multiplex visibility graphs and deep learning</article-title>. <source>J. Neurosci. Methods</source> <volume>366</volume>:<fpage>109421</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.jneumeth.2021.109421</pub-id></citation></ref>
<ref id="ref43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Small</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>Complex network from pseudoperiodic time series: topology versus dynamics</article-title>. <source>Phys. Rev. Lett.</source> <volume>96</volume>:<fpage>238701</fpage>. doi: <pub-id pub-id-type="doi">10.1103/PhysRevLett.96.238701</pub-id>, PMID: <pub-id pub-id-type="pmid">16803415</pub-id></citation></ref>
<ref id="ref44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zheng</surname> <given-names>M.</given-names></name> <name><surname>Domanskyi</surname> <given-names>S.</given-names></name> <name><surname>Piermarocchi</surname> <given-names>C.</given-names></name> <name><surname>Mias</surname> <given-names>G. I.</given-names></name></person-group> (<year>2021</year>). <article-title>Visibility graph based temporal community detection with applications in biological time series</article-title>. <source>Sci. Rep.</source> <volume>11</volume>:<fpage>5623</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s41598-021-84838-x</pub-id>, PMID: <pub-id pub-id-type="pmid">33707481</pub-id></citation></ref>
</ref-list>
</back>
</article>