<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="brief-report" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Astron. Space Sci.</journal-id>
<journal-title>Frontiers in Astronomy and Space Sciences</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Astron. Space Sci.</abbrev-journal-title>
<issn pub-type="epub">2296-987X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">1120389</article-id>
<article-id pub-id-type="doi">10.3389/fspas.2023.1120389</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Astronomy and Space Sciences</subject>
<subj-group>
<subject>Perspective</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The need for adoption of neural HPC (NeuHPC) in space sciences</article-title>
<alt-title alt-title-type="left-running-head">Karimabadi et al.</alt-title>
<alt-title alt-title-type="right-running-head">
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fspas.2023.1120389">10.3389/fspas.2023.1120389</ext-link>
</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Karimabadi</surname>
<given-names>Homa</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/123625/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Wilkes</surname>
<given-names>Jason</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Roberts</surname>
<given-names>D. Aaron</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/2131226/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Analytics Ventures</institution>, <addr-line>San Diego</addr-line>, <addr-line>CA</addr-line>, <country>United States</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>AlphaTrai, Inc.</institution>, <addr-line>San Diego</addr-line>, <addr-line>CA</addr-line>, <country>United States</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>NASA Goddard Space Flight Center</institution>, <addr-line>Greenbelt</addr-line>, <addr-line>MD</addr-line>, <country>United States</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/187391/overview">Joseph E Borovsky</ext-link>, Space Science Institute, United States</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/727550/overview">Enrico Camporeale</ext-link>, University of Colorado Boulder, United States</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Homa Karimabadi, <email>homakar@gmail.com</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Space Physics, a section of the journal Frontiers in Astronomy and Space Sciences</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>21</day>
<month>02</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>10</volume>
<elocation-id>1120389</elocation-id>
<history>
<date date-type="received">
<day>09</day>
<month>12</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>23</day>
<month>01</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2023 Karimabadi, Wilkes and Roberts.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Karimabadi, Wilkes and Roberts</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>A major challenge facing scientists who use conventional approaches to solving PDEs is the simulation of extreme multi-scale problems. While exascale computing will enable simulations of larger systems, the extreme multi-scale nature of many problems requires new techniques. Deep learning has disrupted several domains, such as computer vision, language (e.g., ChatGPT), and computational biology, leading to breakthrough advances. Similarly, the adaptation of these techniques to scientific computing has led to a new and rapidly advancing branch of High-Performance Computing (HPC), which we call neural HPC (NeuHPC). Proof-of-concept studies in domains such as computational fluid dynamics and material science have demonstrated advantages in both efficiency and accuracy over conventional solvers. However, NeuHPC has yet to be embraced in plasma simulations. This is partly due to a general lack of awareness of NeuHPC in the space physics community, and partly because most plasma physicists have no training in artificial intelligence and cannot easily adapt these new techniques to their problems. As we explain below, there is a solution to this. We consider NeuHPC a critical paradigm for knowledge discovery in space sciences and urgently advocate for its adoption by researchers and funding agencies alike. Here, we provide an overview of NeuHPC, describe specific ways in which it can overcome existing computational challenges, and propose a roadmap for future work.</p>
</abstract>
<kwd-group>
<kwd>AI</kwd>
<kwd>PDE</kwd>
<kwd>HPC</kwd>
<kwd>neural nets</kwd>
<kwd>symbolic regression</kwd>
<kwd>closure model</kwd>
<kwd>error prediction</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>Over the years, many techniques have been trumpeted as having great disruptive potential, only to be found eventually to have limited applicability. It is rare that a technology comes along that is truly disruptive and is adopted across wide areas of science and engineering. Modern artificial intelligence (AI) is one of those rare technologies for which the claims are not overblown. In what follows, we use the terms &#x201c;machine learning&#x201d; and &#x201c;artificial intelligence&#x201d; interchangeably.</p>
<p>One of the authors (HK) was an early advocate of the use of AI and computer vision in space sciences, with applications in event detection/classification (e.g., <xref ref-type="bibr" rid="B16">Karimabadi et al., 2009</xref>), knowledge discovery in simulations and in situ visualization (e.g., <xref ref-type="bibr" rid="B17">Karimabadi et al., 2011a</xref>; <xref ref-type="bibr" rid="B19">Karimabadi et al., 2011c</xref>; <xref ref-type="bibr" rid="B20">2012</xref>; <xref ref-type="bibr" rid="B21">2013a</xref>), and derivation of equations from data (<xref ref-type="bibr" rid="B15">Karimabadi et al., 2007</xref>). The impetus for this effort was the vision that, as our ability to generate data continued to grow exponentially, data-driven science would become an indispensable mode of scientific knowledge discovery. This vision has since come to pass, but the rate and scale at which it has happened have exceeded all expectations.</p>
<p>Despite the promising results and utility of those early works, including applications of simple neural nets to spacecraft data (e.g., <xref ref-type="bibr" rid="B37">Newell et al., 1991</xref>; <xref ref-type="bibr" rid="B3">Boberg et al., 2000</xref>), the techniques were not widely adopted. At the time, the field of AI was in a nascent stage in which artificial neural networks (ANNs) had been largely abandoned in favor of &#x201c;lighter weight&#x201d; techniques such as support vector machines. These algorithms had limited learning capacity and relied heavily on hand-engineered features, requiring a top-down agent to act as a &#x201c;God outside the machine&#x201d; that told the models which attributes of the world to focus on, rather than allowing the algorithms to learn, bottom-up from the data and the model&#x2019;s objective function, what is and is not relevant. Another factor limiting their utility was their lack of universality: one had to devise special algorithms for problems in computer vision, speech, and audio, among others.</p>
<p>Everything changed in 2012 when AlexNet, a GPU-implemented convolutional neural network (CNN), won ImageNet&#x2019;s image classification competition by a wide margin. This seemingly overnight success was built upon seven decades of slowly evolving research in deep learning (see the <xref ref-type="sec" rid="s9">Supplementary Material</xref> for a definition of deep learning). The field had to wait for the accessibility of large datasets and the development of GPUs, widely available and relatively inexpensive devices with a special kind of massively parallel computational power, before its potential could be realized.</p>
<p>Since AlexNet, advances in AI have fueled the adoption of neural algorithms across a myriad of industries and sciences. The first applications of AI in space sciences have been in the analysis of spacecraft data (e.g., <xref ref-type="bibr" rid="B6">Camporeale, 2019</xref>; <xref ref-type="bibr" rid="B4">Breuillard et al., 2020</xref>; <xref ref-type="bibr" rid="B28">Li et al., 2020</xref>; <xref ref-type="bibr" rid="B11">Hu et al., 2022</xref>), where off-the-shelf AI techniques can be readily applied. However, the application of AI in NeuHPC offers a greater opportunity, with the potential to qualitatively change the field. The remainder of this article focuses on NeuHPC.</p>
<p>Partial differential equations (PDEs) often exhibit extreme multi-scale behavior, which makes resolving all scales in a single simulation impossible. While exascale computing will enable simulations of larger systems (e.g., <xref ref-type="bibr" rid="B53">Xiao et al., 2021</xref>; <xref ref-type="bibr" rid="B13">Ji et al., 2022</xref>), the extreme multi-scale nature of many problems in space sciences requires new techniques. In the global magnetosphere, spatial and temporal scales are separated by a factor of 10<sup>7</sup>, putting the problem beyond conventional techniques even at exascale. Round-off error in time-stepped solvers is also severely limiting. Further, exascale simulations present other challenges, from knowledge discovery in massive datasets to efficient checkpointing and data management. We consider AI a core technology and its adoption critical for meaningful advancement in scientific computing. This belief is based on the unique features of neural nets and the rapid, promising advances in their use in scientific computation.</p>
<p>
<xref ref-type="table" rid="T1">Table 1</xref> summarizes key features of ANNs that make them especially suitable for overcoming current HPC challenges by enabling capabilities not possible with conventional approaches. While an in-depth discussion of each topic is beyond the scope of this paper, relevant references are provided for the interested reader. First, automated differentiation (see the <xref ref-type="sec" rid="s9">Supplementary Material</xref> for more details) enables accurate computation of derivatives of arbitrary order (spatial and temporal) to working precision. This mesh-free operation, which yields mesh-invariant solutions, is advantageous over numerical differentiation methods (e.g., finite differencing), which suffer from discretization error and whose cost and error grow with the order of the derivative. As an example, one can solve the heat equation &#x2202;u/&#x2202;t &#x3d; <italic>&#x394;</italic>u, where the function <italic>u</italic> is represented as a neural net and the spatial and temporal derivatives are calculated using the chain rule.</p>
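<p>To make the automated-differentiation point concrete, the following minimal sketch (our own illustration, not code from any of the cited solvers) implements forward-mode automatic differentiation with dual numbers in pure Python and compares it against a central finite-difference estimate. The autodiff derivative of sin(x&#xb2;) is exact to working precision, while finite differencing carries discretization error.</p>

```python
import math

class Dual:
    """Dual number a + b*eps with eps**2 == 0; the b part carries the derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        # product rule is applied automatically
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def dsin(x):
    # chain rule for sin: d/dx sin(u) = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def derivative(f, x):
    # seed the derivative slot with 1.0 and read it back out
    return f(Dual(x, 1.0)).dot

f = lambda x: dsin(x * x)            # f(x) = sin(x^2), exact f'(x) = 2x*cos(x^2)
ad = derivative(f, 1.0)              # autodiff: exact to working precision
fd = (math.sin(1.0001 ** 2) - math.sin(0.9999 ** 2)) / 0.0002  # finite difference
print(ad, fd)
```

<p>Here the autodiff result matches 2&#x2009;cos(1) to machine precision, whereas the finite-difference estimate is limited by its step size; the gap widens further for higher-order derivatives.</p>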
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Summary of key features of ANNs that enable new capabilities in HPC. References to some recent work that have gone beyond the proof of concept stage are also provided.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">&#x201c;Features&#x201d;</th>
<th align="left">Benefits</th>
<th align="left">Key references</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td rowspan="4" align="left">State-of-the-art in vision (object detection, tracking, classification, &#x2026;)</td>
<td align="left">-Knowledge discovery from large data</td>
<td rowspan="4" align="left">There are many well-known and open-sourced AI models (e.g., Faster R-CNN, Detectron2, YOLO, U-Net, ResNet, &#x2026;)</td>
</tr>
<tr>
<td align="left">-Computational steering</td>
</tr>
<tr>
<td align="left">-Intelligent checkpointing</td>
</tr>
<tr>
<td align="left">-Efficient data dumps</td>
</tr>
<tr>
<td rowspan="2" align="left">Automated differentiation</td>
<td align="left">-Mesh-free simulations (no grid error)</td>
<td align="left">Auto differentiation is available in AI platforms like Tensorflow <ext-link ext-link-type="uri" xlink:href="https://www.tensorflow.org/guide/autodiff">https://www.tensorflow.org/guide/autodiff</ext-link>
</td>
</tr>
<tr>
<td align="left">-Super resolution</td>
<td align="left">For its incorporation into solutions of PDEs, see PINN, PINO, FNO, DeepONet</td>
</tr>
<tr>
<td rowspan="6" align="left">Universal approximation theorem (functions)</td>
<td align="left">-Equation discovery from data</td>
<td align="left">
<xref ref-type="bibr" rid="B50">Udrescu and Tegmark (2020)</xref>
</td>
</tr>
<tr>
<td align="left">-Closure models</td>
<td align="left">Derived all 100 Feynman equations from data: <ext-link ext-link-type="uri" xlink:href="https://ai-feynman.readthedocs.io/en/latest/">https://ai-feynman.readthedocs.io/en/latest/</ext-link>; PySR, high-performance symbolic regression in Python: <ext-link ext-link-type="uri" xlink:href="https://astroautomata.com/PySR/">https://astroautomata.com/PySR/</ext-link>
</td>
</tr>
<tr>
<td align="left">-System/subsystem/reduced order model discovery</td>
<td align="left">
<xref ref-type="bibr" rid="B14">Kamienny et al. (2022)</xref> End-to-end Symbolic regression with several orders of magnitude faster inference as compared to state-of-the-art genetic programming</td>
</tr>
<tr>
<td align="left">-Accelerated simulations</td>
<td align="left">DeepXDE (PINN)&#x2014;AI-based solution of PDEs <ext-link ext-link-type="uri" xlink:href="https://deepxde.readthedocs.io/en/latest/">https://deepxde.readthedocs.io/en/latest/</ext-link>
</td>
</tr>
<tr>
<td align="left">-Error correction</td>
<td align="left">
<xref ref-type="bibr" rid="B24">Kochkov et al., 2021</xref>&#x2014;accelerated simulations with 40&#x2013;80x computational speedups. A model trained at R<sub>e</sub> &#x3d; 1,000 generalized to a higher Reynolds number of R<sub>e</sub> &#x3d; 4,000</td>
</tr>
<tr>
<td align="left">-Frame predictions/Predicting the evolution of spatiotemporal turbulent flow</td>
<td align="left">TF-Net&#x2014;<xref ref-type="bibr" rid="B52">Wang et al., 2020</xref> - Successful prediction of 60 frames ahead for turbulent flow</td>
</tr>
<tr>
<td rowspan="5" align="left">Universal approximation theorem (operators)</td>
<td rowspan="2" align="left">-Solution to a family rather than instance of PDEs</td>
<td align="left">FNO&#x2014;<xref ref-type="bibr" rid="B29">Li et al., 2021a</xref> achieved up to <italic>three orders of magnitude</italic> in speedup as compared to traditional PDE solvers</td>
</tr>
<tr>
<td align="left">
<ext-link ext-link-type="uri" xlink:href="https://github.com/zongyi-li/fourier_neural_operator">https://github.com/zongyi-li/fourier_neural_operator</ext-link>
</td>
</tr>
<tr>
<td align="left">-Zero-shot super resolution</td>
<td align="left">DeepONet - <xref ref-type="bibr" rid="B34">Lu L et al., 2019</xref>, <xref ref-type="bibr" rid="B33">Lu et al., 2021</xref> <ext-link ext-link-type="uri" xlink:href="https://github.com/lululxvi/deeponet">https://github.com/lululxvi/deeponet</ext-link> <ext-link ext-link-type="uri" xlink:href="https://deepxde.readthedocs.io/en/latest/">https://deepxde.readthedocs.io/en/latest/</ext-link>Fourcastnet&#x2014;<xref ref-type="bibr" rid="B40">Pathak et al. (2022)</xref>
</td>
</tr>
<tr>
<td rowspan="2" align="left">-Enables ensembling</td>
<td align="left">Using FNO, FourCastNet generates forecasts orders of magnitude faster than state-of-the-art weather forecasting models</td>
</tr>
<tr>
<td align="left">PINO&#x2014;<xref ref-type="bibr" rid="B30">Li et al., 2021b</xref> produces accurate results while retaining a <italic>400x</italic> speedup compared to the GPU-based pseudo-spectral solver</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Second, the universal approximation theorem (<xref ref-type="bibr" rid="B10">Hornik et al., 1989</xref>) implies that ANNs can accurately approximate any function. In contrast to fixed-shape approximators that have no internal parameters (e.g., polynomials), neural networks consist of parameterized functions, allowing them to take on a variety of different shapes.</p>
<p>Less known but as important is the universal approximation theorem for operators (<xref ref-type="bibr" rid="B57">Chen and Chen, 1995</xref>) which states that a neural net with a single hidden layer can accurately approximate any non-linear continuous operator (<xref ref-type="bibr" rid="B33">Lu et al., 2021</xref>). The operator can be explicit such as derivatives (e.g., Laplacian), integrals (e.g., Laplace transform) or implicit such as solution operators of a PDE. This offers a unique capability where the network can learn the solution to an entire <italic>family</italic> of PDEs rather than an <italic>instance</italic> of a PDE, as in the conventional approaches. Once the model is trained, inference to obtain solutions for different parameters of the PDE is very fast. This can lead to orders of magnitude speedup and enables efficient exploration of the solution space and ensemble modeling which may be prohibitively expensive otherwise.</p>
<p>These capabilities open the door to zero-shot super-resolution: the operator can be trained at a lower resolution and evaluated at a higher resolution, without seeing any higher-resolution data. To this end, <xref ref-type="bibr" rid="B29">Li et al. (2021a)</xref> developed the first network (FNO) with zero-shot learning that <italic>successfully</italic> learns the resolution-invariant solution operator for the family of Navier-Stokes equations in the turbulent regime. This ability to transfer the solution between meshes works well in both the spatial and temporal domains (<xref ref-type="bibr" rid="B58">Kovachki et al., 2021</xref>). We refer the reader to <xref ref-type="bibr" rid="B22">Kim et al. (2021)</xref> for a discussion of super-resolution reconstruction for paired <italic>versus</italic> unpaired data. Another useful feature of AI-based solvers is transfer learning. For example, <xref ref-type="bibr" rid="B30">Li et al. (2021b)</xref> transferred a model pre-trained on the Kolmogorov flow to different Reynolds numbers.</p>
<p>A wide variety of solutions have been proposed to leverage ANNs in computations across domains such as CFD, material science, and weather forecasting. A detailed review is beyond the scope of the present work. Our goal is simply to bring awareness to promising advances in NeuHPC and provide a starting point for further exploration. Although our focus is NeuHPC, techniques such as system identification can also be applied to spacecraft data either in isolation or in combination with simulation data.</p>
</sec>
<sec id="s2">
<title>2 Proof of concepts and beyond</title>
<sec id="s2-1">
<title>2.1 Quantitative data analysis</title>
<p>We demonstrate the utility of AI for analysis of simulation data by addressing the challenging problem of automated detection and scale measurement of individual current sheets formed in plasma turbulence. Previous works, limited to two snapshots of MHD simulations, were based on a phenomenological approach utilizing insights from MHD physics (e.g., <xref ref-type="bibr" rid="B51">Uritsky et al., 2010</xref>; <xref ref-type="bibr" rid="B55">Zhdankin et al., 2013</xref>). We time-boxed ourselves to 2&#xa0;days to see whether we could significantly reduce time-to-solution using existing AI techniques. We used the magnitude of current density (507 timeslices) from the simulations of <xref ref-type="bibr" rid="B59">Karimabadi et al. (2013b)</xref>. <xref ref-type="fig" rid="F1">Figure 1</xref> shows the results for one time slice, where the lengths of only a few current sheets are displayed. Visual comparison with the raw image of the current sheets shows generally good agreement and demonstrates the utility of AI. Details, including the code and videos of results over all 507 slices, are provided in the <xref ref-type="sec" rid="s9">Supplementary Material</xref>.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>
<bold>(A)</bold> Intensity plot of current density, <bold>(B)</bold> Automated detection and length measurements of current sheets.</p>
</caption>
<graphic xlink:href="fspas-10-1120389-g001.tif"/>
</fig>
</sec>
<sec id="s2-2">
<title>2.2 Derivation of equations and operators from data</title>
<p>Deriving closed-form, compact, and understandable analytical equations from data is at the core of scientific discovery. In the following, we provide an overview of recent ML techniques aimed at turning machine-learned models into scientific knowledge. Such knowledge discovery can come in different forms: a) derivation of an algebraic equation (e.g., the law of gravity), b) derivation of an ODE or PDE (e.g., the diffusion equation), and c) derivation of the unknown parameters of a known equation (the so-called inverse problem).</p>
<sec id="s2-2-1">
<title>2.2.1 Algebraic equations</title>
<p>Symbolic regression is an ML technique that searches the space of mathematical expressions to find the model that best fits the data. The goal is to strike a balance between model accuracy and model complexity. A common benchmark for comparing the efficacy of different models is the Symbolic Regression database (<ext-link ext-link-type="uri" xlink:href="https://space.mit.edu/home/tegmark/aifeynman.html">https://space.mit.edu/home/tegmark/aifeynman.html</ext-link>), which contains 120 symbolic regression mysteries and their answers. Most (100) of the equations are from the Feynman Lectures on Physics, and 20 more difficult equations are sourced from other physics books. See <xref ref-type="bibr" rid="B26">La Cava et al. (2021)</xref> for additional benchmarks.</p>
<p>Symbolic regression has commonly been carried out using genetic programming and evolutionary algorithms, and there are several open-source and commercially available libraries such as Eureqa (<xref ref-type="bibr" rid="B45">Schmidt and Lipson, 2009</xref>). Their main drawback is that, due to the combinatorial nature of the problem, genetic programming does not scale well to high-dimensional systems. In contrast, ANNs are highly efficient at learning in high-dimensional spaces, and this has led to a flurry of activity in their adaptation to address the combinatorial challenge of symbolic regression. The blackbox nature of neural nets seems at first to be at odds with the goals of symbolic regression; the various approaches differ in how they overcome this issue and come in two general varieties. In one approach, neural nets are used as an aid to reduce the search space of genetic programming techniques (e.g., <xref ref-type="bibr" rid="B8">Cranmer et al., 2020</xref>; <xref ref-type="bibr" rid="B41">Petersen et al., 2020</xref>; <xref ref-type="bibr" rid="B49">Udrescu et al., 2020</xref>; <xref ref-type="bibr" rid="B50">Udrescu and Tegmark, 2020</xref>). In AI-Feynman (<xref ref-type="bibr" rid="B50">Udrescu &#x26; Tegmark, 2020</xref>), the neural nets are used to find hidden simplicity, such as symmetry, in the data. Using this approach, they were able to derive all 100 Feynman equations, <italic>versus</italic> 71 using previous techniques.</p>
<p>The second class of solutions adapts the architecture of the neural net itself for symbolic regression. The two key modifications are giving the ANN access to a vocabulary of functions/primitives and imposing sparsity to reduce model complexity while maintaining high accuracy. <xref ref-type="bibr" rid="B36">Martius &#x26; Lampert (2016)</xref> proposed a simple feedforward ANN in which standard activation functions are replaced with symbolic building blocks corresponding to functions common in science and engineering; these activation functions are analogous to the primitive functions in symbolic regression. <xref ref-type="bibr" rid="B44">Sahoo et al. (2018)</xref> extended this work to include division. In the <xref ref-type="sec" rid="s9">Supplementary Material</xref>, we construct another type of ANN which, unlike standard ANNs, has a variety of synapse and cell-body types, and we show that it can derive the law of gravity from data. Another approach adapts language models/transformers to the symbolic regression problem. <xref ref-type="bibr" rid="B14">Kamienny et al. (2022)</xref> developed an end-to-end transformer-based model that uses symbolic tokens for the operators and variables and numeric tokens for the constants. It shows a significant jump in accuracy over previous ANN-based approaches, with inference several orders of magnitude faster than state-of-the-art genetic programming.</p>
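<p>As a toy illustration of the search problem at the heart of symbolic regression, the short pure-Python sketch below (our own construction, unrelated to the libraries cited above) exhaustively enumerates depth-two expression trees over a tiny primitive vocabulary and recovers the hidden law f(x, y) = xy + 1 from data. Real systems such as PySR replace this brute-force enumeration, which explodes combinatorially, with genetic or neural-guided search and also penalize expression complexity.</p>

```python
import itertools

# Primitive vocabulary (operators) and terminal symbols.
PRIMS = {"add": lambda a, b: a + b,
         "sub": lambda a, b: a - b,
         "mul": lambda a, b: a * b}
TERMS = ("x", "y", "1")

def term(t, x, y):
    return {"x": x, "y": y, "1": 1.0}[t]

# Data generated from the hidden law f(x, y) = x*y + 1.
data = [(x, y, x * y + 1.0) for x in (1.0, 2.0, 3.0) for y in (0.5, 1.5)]

# Enumerate all candidates of the fixed shape outer(inner(ta, tb), tc)
# and keep the one with the lowest squared error on the data. (A real
# system would also penalize complexity; here every candidate has the
# same size, so the error alone decides.)
best, best_err = None, float("inf")
for o1, o2, ta, tb, tc in itertools.product(PRIMS, PRIMS, TERMS, TERMS, TERMS):
    err = sum((PRIMS[o1](PRIMS[o2](term(ta, x, y), term(tb, x, y)),
                         term(tc, x, y)) - z) ** 2 for x, y, z in data)
    if best_err > err:
        best, best_err = (o1, o2, ta, tb, tc), err

print(best, best_err)  # recovers add(mul(x, y), 1), i.e., x*y + 1
```

<p>Even this miniature search space has 3&#xb2;&#xb7;3&#xb3; = 243 candidates; with realistic vocabularies and deeper trees the space grows combinatorially, which is exactly the scaling problem the neural approaches above are designed to tame.</p>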
</sec>
<sec id="s2-2-2">
<title>2.2.2 Unknown PDEs</title>
<p>In cases where the underlying PDEs are not known, scientists want i) accurate solvers that generalize well, ii) fast solvers, faster than traditional solvers in test cases where the PDE is known, and iii) accurate symbolic extraction. Studies whose prime focus is symbolic extraction follow approaches similar to those for algebraic equations (Section 2.2.1). However, there have been innovative breakthroughs in the development of solvers that address objectives i)&#x2013;ii), accomplished through approaches that learn PDE solution operators. These include DeepONet (<xref ref-type="bibr" rid="B34">Lu et al., 2019</xref>; <xref ref-type="bibr" rid="B33">Lu et al., 2021</xref>) and FNO (<xref ref-type="bibr" rid="B29">Li et al., 2021a</xref>, <xref ref-type="bibr" rid="B58">Kovachki et al., 2021</xref>), both open source. See the latter for additional references and a useful literature review. <xref ref-type="bibr" rid="B29">Li et al. (2021a)</xref> showed successful experiments on Burgers&#x2019; equation, Darcy flow, and the Navier-Stokes equations, achieving up to <italic>three orders of magnitude</italic> speedup compared to traditional PDE solvers. Another important proof point and real-world application of FNO came from its adaptation to weather forecasting (FourCastNet) by <xref ref-type="bibr" rid="B40">Pathak et al. (2022)</xref>. In a head-to-head comparison with a state-of-the-art forecasting system (IFS), FourCastNet had accuracy generally comparable to IFS, and higher accuracy for small-scale variables, including precipitation. In addition, FourCastNet can generate forecasts (less than 2&#xa0;s for a week-long forecast) orders of magnitude faster than IFS, enabling fast large-ensemble forecasts that are out of reach of traditional techniques.</p>
<p>While DeepONet and FNO were not focused on symbolic extraction, one can always add symbolic extraction to these models. The basic ideas for discovering PDEs from data in symbolic form are similar to those for algebraic equations and fall into three categories. One category (e.g., sparse identification of non-linear dynamics (SINDy)) constructs a candidate library of partial derivatives, which is then used by a sparse regression technique to obtain a parsimonious model (<xref ref-type="bibr" rid="B43">Rudy et al., 2017</xref>; <xref ref-type="bibr" rid="B7">Champion et al., 2019</xref>). In the case of PDEs, neural nets offer the added advantage of accurate differentiation. As a result, a second category of solutions combines neural nets with genetic algorithms, with the derivatives calculated by neural nets and genetic algorithms used for the search (<xref ref-type="bibr" rid="B54">Xu et al., 2020</xref>; <xref ref-type="bibr" rid="B9">Desai and Strachan, 2021</xref>). A third class is purely neural-net based and includes the use of symbolic networks (<xref ref-type="bibr" rid="B31">Long et al., 2019</xref>).</p>
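<p>The SINDy idea can be sketched in a few lines. The following pure-Python toy (our own, for an ODE rather than a PDE; real implementations such as PySINDy are far more capable) regresses measured time derivatives of a hidden logistic law, du/dt = u &#x2212; u<sup>2</sup>, against a library of candidate terms and then thresholds small coefficients to obtain a parsimonious model.</p>

```python
# Toy SINDy-style sparse identification: least-squares fit of du/dt
# against a candidate library, followed by hard thresholding.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# "Measurements": states u and time derivatives from the hidden
# logistic law du/dt = u - u^2.
us = [0.1 * k for k in range(1, 10)]
dudt = [u - u * u for u in us]

# Candidate library Theta = [1, u, u^2, u^3] evaluated at each state.
theta = [[1.0, u, u ** 2, u ** 3] for u in us]

# Least squares via the normal equations: Theta^T Theta xi = Theta^T dudt.
n = 4
gram = [[sum(row[i] * row[j] for row in theta) for j in range(n)] for i in range(n)]
rhs = [sum(row[i] * d for row, d in zip(theta, dudt)) for i in range(n)]
xi = solve(gram, rhs)

# Sparsify: zero out coefficients below a threshold.
xi = [c if abs(c) > 0.05 else 0.0 for c in xi]
print(xi)  # expect approximately [0, 1, -1, 0], i.e., du/dt = u - u^2
```

<p>The thresholding step is what yields a compact, interpretable model rather than a dense fit; in the PDE setting the library columns would be partial derivatives, computed accurately by a neural net as discussed above.</p>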
</sec>
</sec>
<sec id="s2-3">
<title>2.3 Solutions when the form of the PDE is known</title>
<p>Here, we discuss three classes of AI-based approaches for when the form of the PDE is known. As mentioned earlier, the so-called inverse problem is not discussed here (see <xref ref-type="bibr" rid="B5">Camporeale et al., 2022</xref> for an application in space physics).</p>
<sec id="s2-3-1">
<title>2.3.1 AI solvers</title>
<p>Conventional solvers (e.g., FDM) discretize the domain into a grid and advance the simulation using a time-stepped methodology or discrete-event-based time advance (e.g., <xref ref-type="bibr" rid="B38">Omelchenko and Karimabadi, 2022</xref>). The so-called Physics-Informed Neural Network (PINN)-type methods (<xref ref-type="bibr" rid="B42">Raissi et al., 2019</xref>; <xref ref-type="bibr" rid="B12">Jagtap and Karniadakis, 2020</xref>) overcome the discretization issues of conventional solvers by taking advantage of auto-differentiation to compute exact, mesh-free derivatives. They also offer several advantages over other deep learning approaches: PINNs require less training data, since the underlying equation is already known, and prior knowledge of physical/conservation laws can be incorporated into the network design, which in turn reduces the space of admissible solutions.</p>
<p>A notable study is that of <xref ref-type="bibr" rid="B30">Li et al. (2021b)</xref> who combined operator learning (FNO) with function optimization (PINN). This integrated technique (PINO) outperforms previous ML methods including both PINN and FNO, while retaining the significant speedup of FNO compared to instance-based solvers. In the challenging problem of long temporal transient flow of Navier-Stokes equation, where the solution builds up from near-zero velocity to a velocity where the system reaches ergodic state, PINO produces accurate results while retaining a <italic>400x</italic> speedup compared to the GPU-based pseudo-spectral solver.</p>
</sec>
<sec id="s2-3-2">
<title>2.3.2 Closure models</title>
<p>A common approach to dealing with extreme multi-scale solutions of PDEs is the use of subgrid closure models. Given the utility of neural networks for extracting equations from data, there has been significant work, especially in the CFD domain, on their use in the development of closure models (<xref ref-type="bibr" rid="B25">Kurz and Beck, 2022</xref> and references therein). Here we refer the reader to several review articles on this topic (e.g., <xref ref-type="bibr" rid="B48">Taghizadeh et al., 2020</xref>; <xref ref-type="bibr" rid="B47">Sofos et al., 2022</xref>).</p>
</sec>
<sec id="s2-3-3">
<title>2.3.3 Error correction</title>
<p>Another approach has been to use AI to correct the errors at each time step of an under-resolved simulation (<xref ref-type="bibr" rid="B24">Kochkov et al., 2021</xref> and references therein). This approach requires training a coarse-resolution solver against high-resolution ground-truth simulations. Promising results for the Navier-Stokes equations were obtained by <xref ref-type="bibr" rid="B24">Kochkov et al. (2021)</xref>: results were as accurate as baseline solvers with 8&#x2013;10x finer resolution in each spatial dimension, yielding 40&#x2013;80x computational speedups. The model was stable over long simulations and generalized surprisingly well to Reynolds numbers outside the range on which it was trained.</p>
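<p>A drastically simplified cartoon of this idea (our own; the learned CNN corrections of Kochkov et al. are far richer): a coarse forward-Euler solver for du/dt = &#x2212;u is given a scalar multiplicative correction fitted by least squares against a much finer reference solver, after which the corrected coarse rollout tracks the reference while the uncorrected one drifts badly.</p>

```python
# Toy learned error correction: fit a per-step correction for a
# coarse solver against "ground truth" from a much finer solver.

h = 0.5                                   # coarse (under-resolved) step

def euler_step(u):
    return u * (1.0 - h)                  # plain coarse forward-Euler step

def fine_truth(u):
    # stand-in for a high-resolution reference: 1000 tiny Euler substeps
    v = u
    for _ in range(1000):
        v = v * (1.0 - h / 1000.0)
    return v

# "Training": fit scalar c minimizing sum over the training set of
# (c * euler_step(u) - fine_truth(u))^2, via the closed-form solution.
train = [0.2, 0.5, 1.0, 2.0]
num = sum(euler_step(u) * fine_truth(u) for u in train)
den = sum(euler_step(u) ** 2 for u in train)
c = num / den

def corrected_step(u):
    return c * euler_step(u)              # coarse step plus learned correction

# Roll out 10 coarse steps and compare to the fine reference.
u_plain = u_corr = u_ref = 1.0
for _ in range(10):
    u_plain = euler_step(u_plain)
    u_corr = corrected_step(u_corr)
    u_ref = fine_truth(u_ref)
print(u_plain, u_corr, u_ref)  # corrected rollout matches the reference; plain does not
```

<p>In realistic settings the correction is a neural network acting on the full field rather than a scalar, but the training logic is the same: minimize the mismatch between corrected coarse steps and high-resolution ground truth.</p>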
</sec>
</sec>
</sec>
<sec id="s3">
<title>3 Discussion and proposed roadmap for NeuHPC in space physics</title>
<p>We advocate the following changes: a) make funding NeuHPC a priority; b) adapt funding to the pace of AI development, which means a short leash on grants and a strong focus on results measured against well-established benchmarks (see examples of benchmarks below); and c) promote interdisciplinary collaboration with AI experts in industry and academia, to compensate for the fact that most plasma physicists do not have deep expertise in AI.</p>
<p>Given AI&#x2019;s prowess in prediction, we suggest as a starting point proof-of-concept (POC) studies focused on video prediction and error correction:<list list-type="simple">
<list-item>
<p>&#x2022; Video prediction: Apply off-the-shelf spatio-temporal deep learning models for video prediction (e.g., U-net, ResNet) to simulations. This would establish a benchmark (<xref ref-type="bibr" rid="B52">Wang et al., 2020</xref>) against which follow-up studies using PDE-centric AI approaches such as PINO or FNO can be compared. We suggest 2D hybrid simulations (e.g., KHI), from which many training videos of quantities such as current density and mixing (see <xref ref-type="sec" rid="s9">Supplementary Material</xref>) can be generated.</p>
</list-item>
<list-item>
<p>&#x2022; Grid error correction: Assess the viability of error correction in a coarse-grid hybrid simulation against an equivalent high-resolution simulation. DES hybrid (<xref ref-type="bibr" rid="B38">Omelchenko and Karimabadi, 2022</xref>) is particularly well suited, since it remains stable even when the grid scale is significantly larger than the ion inertial length.</p>
</list-item>
<list-item>
<p>&#x2022; PIC noise error correction: Since the noise level decreases only as the inverse square root of the number of particles, an AI-based correction that enabled a simulation with a low number of particles per cell (e.g., 5) to reproduce the results of one with a much higher number (e.g., 500) would be a major breakthrough.</p>
</list-item>
</list>
</p>
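The scaling behind the PIC noise POC is straightforward to verify with a toy deposit: for a uniform plasma, per-cell macro-particle counts are approximately Poisson, so the relative density noise falls as the inverse square root of the particles per cell, and moving from 5 to 500 particles/cell buys roughly a 10x noise reduction. A minimal sketch (the grid size and particle counts are illustrative assumptions):

```python
import random, math

# Monte Carlo check of PIC shot-noise scaling: deposit a uniform plasma
# with ppc macro-particles per cell (nearest-grid-point deposit) and
# measure the relative rms fluctuation of the cell densities.

random.seed(0)

def density_noise(ppc, ncells=4000):
    counts = [0] * ncells
    for _ in range(ppc * ncells):
        counts[random.randrange(ncells)] += 1
    var = sum((c - ppc) ** 2 for c in counts) / ncells
    return math.sqrt(var) / ppc       # std / mean, expected ~ 1/sqrt(ppc)

noise_lo = density_noise(5)           # 5 particles/cell
noise_hi = density_noise(500)         # 500 particles/cell
ratio = noise_lo / noise_hi           # expected ~ sqrt(500/5) = 10
```

A corrector that removed this noise at 5 particles/cell would thus stand in for a hundredfold increase in particle count, which is the speedup the POC targets.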
<p>Other POCs of interest that target multi-scale problems include:<list list-type="simple">
<list-item>
<p>&#x2022; Closure models: Explore the derivation of closure models for the island coalescence problem (<xref ref-type="bibr" rid="B18">Karimabadi et al., 2011b</xref>).</p>
</list-item>
<list-item>
<p>&#x2022; PDE derivation: Explore the derivation of an equation describing the temporal evolution of the island coalescence problem.</p>
</list-item>
</list>
</p>
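The PDE-derivation POC maps naturally onto sparse-regression equation discovery in the spirit of PDE-FIND (Rudy et al., 2017): numerically differentiate the data, regress the derivative on a library of candidate terms, and prune small coefficients. A minimal dependency-free sketch on a logistic toy system; the system and the library are illustrative assumptions, not the island coalescence dynamics:

```python
import math

# Sparse-regression equation discovery on synthetic data generated by
# the logistic equation du/dt = u - u^2: fit du/dt (from central
# differences) against a library [1, u, u^2], then threshold and refit.

def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fac * M[c][k]
    xs = [0.0] * n
    for r in range(n - 1, -1, -1):
        xs[r] = (M[r][n] - sum(M[r][k] * xs[k]
                               for k in range(r + 1, n))) / M[r][r]
    return xs

def lstsq(X, y):
    n = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(n)]
         for i in range(n)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]
    return solve(A, b)

n, dt = 400, 6.0 / 400
u = [1.0 / (1.0 + math.exp(-(-3.0 + i * dt))) for i in range(n)]
dudt = [(u[i+1] - u[i-1]) / (2 * dt) for i in range(1, n - 1)]
lib = [[1.0, u[i], u[i] ** 2] for i in range(1, n - 1)]

c = lstsq(lib, dudt)                                 # dense fit
active = [j for j in range(3) if abs(c[j]) >= 0.1]   # threshold small terms
c_sparse = lstsq([[row[j] for j in active] for row in lib], dudt)
```

The regression recovers the active terms u and u^2 with coefficients near +1 and -1 and prunes the constant; for the island coalescence problem, the library would instead contain candidate spatial-derivative and nonlinear terms of the simulated fields.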
</sec>
</body>
<back>
<sec sec-type="data-availability" id="s4">
<title>Data availability statement</title>
<p>The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/<xref ref-type="sec" rid="s9">Supplementary Material</xref>.</p>
</sec>
<sec id="s5">
<title>Author contributions</title>
<p>HK conceived of the idea for the paper and wrote the first draft. JW wrote the codes and produced the results for the two POCs in the <xref ref-type="sec" rid="s9">Supplementary Material</xref>. All authors discussed and edited the final version of the manuscript.</p>
</sec>
<sec id="s6">
<title>Funding</title>
<p>Two of the authors (HK and JW) did not receive any external funding for their work. DAR is supported by NASA&#x0027;s Heliophysics Digital Resource Library.</p>
</sec>
<ack>
<p>The authors express their gratitude to Vadim Roytershteyn for obtaining the simulation data and making it available. They also acknowledge the valuable feedback from the reviewers that helped improve the manuscript. Additionally, the authors appreciate the constructive discussions with Hudson Cooper, Yuri Omelchenko, and William Daughton.</p>
</ack>
<sec sec-type="COI-statement" id="s7">
<title>Conflict of interest</title>
<p>HK was employed by Analytics Ventures, and JW was employed by AlphaTrai, Inc.</p>
<p>The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s8">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec id="s9">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fspas.2023.1120389/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fspas.2023.1120389/full&#x23;supplementary-material</ext-link>
</p>
<supplementary-material xlink:href="DataSheet1.pdf" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Boberg</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Peter</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Lundstedt</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2000</year>). <article-title>Real time Kp predictions from solar wind data using neural networks</article-title>. <source>Phys. Chem. Earth, Part C Sol. Terr. Planet. Sci.</source> <volume>25</volume> (<issue>4</issue>), <fpage>275</fpage>&#x2013;<lpage>280</lpage>. <pub-id pub-id-type="doi">10.1016/s1464-1917(00)00016-7</pub-id>
</citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Breuillard</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Dupuis</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Retino</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Le Contel</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Amaya</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Lapenta</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Automatic classification of plasma regions in near-earth space with supervised machine learning: Application to magnetospheric multi scale 2016&#x2013;2019 observations</article-title>. <source>Front. Astronomy Space Sci.</source> <volume>7</volume>, <fpage>55</fpage>. <pub-id pub-id-type="doi">10.3389/fspas.2020.00055</pub-id>
</citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Camporeale</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Wilkie</surname>
<given-names>G. J.</given-names>
</name>
<name>
<surname>Drozdov</surname>
<given-names>A. Y.</given-names>
</name>
<name>
<surname>Bortnik</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Data&#x2010;driven discovery of fokker&#x2010;planck equation for the earth&#x27;s radiation belts electrons using physics&#x2010;informed neural networks</article-title>. <source>J. Geophys. Res. Space Phys.</source> <volume>127</volume>, <fpage>e2022JA030377</fpage>. <pub-id pub-id-type="doi">10.1029/2022ja030377</pub-id>
</citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Camporeale</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>The challenge of machine learning in space weather nowcasting and forecasting</article-title>. <source>Space Weather.</source> <volume>17</volume>, <fpage>1166</fpage>&#x2013;<lpage>1207</lpage>. <pub-id pub-id-type="doi">10.1029/2018SW002061</pub-id>
</citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Champion</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Lusch</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Kutz</surname>
<given-names>J. N.</given-names>
</name>
<name>
<surname>Brunton</surname>
<given-names>S. L.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Data-driven discovery of coordinates and governing equations</article-title>. <source>Proc. Natl. Acad. Sci.</source> <volume>116</volume>, <fpage>22445</fpage>&#x2013;<lpage>22451</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1906995116</pub-id>
</citation>
</ref>
<ref id="B57">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>1995</year>). <article-title>Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems</article-title>. <source>IEEE Trans. Neural Networks</source> <volume>6</volume>, <fpage>911</fpage>&#x2013;<lpage>917</lpage>. <pub-id pub-id-type="doi">10.1109/72.392253</pub-id>
</citation>
</ref>
<ref id="B8">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Cranmer</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sanchez-Gonzalez</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Battaglia</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Cranmer</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Spergel</surname>
<given-names>D.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>Discovering symbolic models from deep learning with inductive biases</article-title>. <source>Adv. Neural Inf. Process Syst.</source> <volume>33</volume>, <fpage>17429</fpage>&#x2013;<lpage>17442</lpage>.</citation>
</ref>
<ref id="B9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Desai</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Strachan</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Parsimonious neural networks learn interpretable physical laws</article-title>. <source>Sci. Rep.</source> <volume>11</volume> (<issue>1</issue>), <fpage>12761</fpage>&#x2013;<lpage>12769</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-021-92278-w</pub-id>
</citation>
</ref>
<ref id="B10">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hornik</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Stinchcombe</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>White</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>1989</year>). <article-title>Multilayer feedforward networks are universal approximators</article-title>. <source>Neural Netw.</source> <volume>2</volume>, <fpage>359</fpage>&#x2013;<lpage>366</lpage>. <pub-id pub-id-type="doi">10.1016/0893-6080(89)90020-8</pub-id>
</citation>
</ref>
<ref id="B11">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Hu</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Camporeale</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Swiger</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2022</year>). <source>Multi-hour ahead dst index prediction using multi-fidelity boosted neural networks</source>. <comment>arXiv:2209.12571</comment>.</citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jagtap</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Karniadakis</surname>
<given-names>G. E.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Extended physics-informed neural networks (xpinns): A generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations</article-title>. <source>Commun. Comput. Phys.</source> <volume>28</volume> (<issue>5</issue>), <fpage>2002</fpage>&#x2013;<lpage>2041</lpage>. <pub-id pub-id-type="doi">10.4208/cicp.OA-2020-0164</pub-id>
</citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ji</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Daughton</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Jara-Almonte</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Le</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Stanier</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Yoo</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Magnetic reconnection in the era of exascale computing and multiscale experiments</article-title>. <source>Nat. Rev. Phys.</source> <volume>4</volume>, <fpage>263</fpage>&#x2013;<lpage>282</lpage>. <pub-id pub-id-type="doi">10.1038/s42254-021-00419-x</pub-id>
</citation>
</ref>
<ref id="B14">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kamienny</surname>
<given-names>P.-A.</given-names>
</name>
<name>
<surname>d&#x2019;Ascoli</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lample</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Charton</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2022</year>). <source>End-to-end symbolic regression with transformers</source>. <comment>arXiv preprint arXiv:2204.10532</comment>.</citation>
</ref>
<ref id="B15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Karimabadi</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Sipes</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>White</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Marinucci</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Dmitriev</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Chao</surname>
<given-names>J.</given-names>
</name>
<etal/>
</person-group> (<year>2007</year>). <article-title>Data mining in space physics: Minetool algorithm</article-title>. <source>J. Geophys. Res.</source> <volume>112</volume>, <fpage>A11215</fpage>. <pub-id pub-id-type="doi">10.1029/2006JA012136</pub-id>
</citation>
</ref>
<ref id="B16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Karimabadi</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Sipes</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Lavraud</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Roberts</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>A new multivariate time series data analysis technique: Automated detection of flux transfer events using Cluster data</article-title>. <source>J. Geophys. Res.</source> <volume>114</volume>, <fpage>A06216</fpage>. <pub-id pub-id-type="doi">10.1029/2009JA014202</pub-id>
</citation>
</ref>
<ref id="B17">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Karimabadi</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Vu</surname>
<given-names>H. X.</given-names>
</name>
<name>
<surname>Loring</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Omelchenko</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Sipes</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Roytershteyn</surname>
<given-names>V.</given-names>
</name>
<etal/>
</person-group> (<year>2011a</year>). &#x201c;<article-title>Petascale kinetic simulation of the magnetosphere</article-title>,&#x201d; in <conf-name>Proceeding of the TeraGrid Conference</conf-name>, <conf-loc>Salt Lake City, UT</conf-loc>, <conf-date>July 2011</conf-date>. <comment>Article No. 5</comment>. <pub-id pub-id-type="doi">10.1145/2016741.2016747</pub-id>
</citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Karimabadi</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Dorelli</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Roytershteyn</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Daughton</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Chac&#xf3;n</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2011b</year>). <article-title>Flux pileup in collisionless magnetic reconnection: Bursty interaction of large flux ropes</article-title>. <source>Phys. Rev. Lett.</source> <volume>107</volume>, <fpage>025002</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.107.025002</pub-id>
</citation>
</ref>
<ref id="B19">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Karimabadi</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Loring</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Vu</surname>
<given-names>H. X.</given-names>
</name>
<name>
<surname>Omelchenko</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Tatineni</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Majumdar</surname>
<given-names>A.</given-names>
</name>
<etal/>
</person-group> (<year>2011c</year>). <source>Petascale global kinetic simulations of the magnetosphere and visualization strategies for analysis of very large multi-variate data sets</source>. <publisher-loc>San Francisco</publisher-loc>: <publisher-name>Astronomical Society of the Pacific</publisher-name>, <fpage>281</fpage>&#x2013;<lpage>291</lpage>.</citation>
</ref>
<ref id="B20">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Karimabadi</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Yilmaz</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Sipes</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2012</year>). &#x201c;<article-title>Recent advances in analysis of large datasets</article-title>,&#x201d; in <source>Numerical modeling of space PlasmaFlows:ASTRONUM-2011</source>. <source>ASP conference series</source>. Editors <person-group person-group-type="editor">
<name>
<surname>Pogorelov</surname>
<given-names>N. V.</given-names>
</name>
<name>
<surname>Font</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Audit</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Zank</surname>
<given-names>G. P.</given-names>
</name>
</person-group> (<publisher-loc>San Francisco</publisher-loc>: <publisher-name>Astronomical Society of the Pacific</publisher-name>), <volume>459</volume>, <fpage>371</fpage>&#x2013;<lpage>377</lpage>.</citation>
</ref>
<ref id="B21">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Karimabadi</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>O&#x2019;Leary</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Loring</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Majumdar</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Tatineni</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Geveci</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2013a</year>). &#x201c;<article-title>
<italic>In-situ</italic> visualization for global hybrid simulations</article-title>,&#x201d; in <conf-name>Proceeding of the Conference on Extreme Science and Engineering Discovery Environment: Gateway to Discovery (XSEDE &#x2019;13)</conf-name>, <conf-date>July 2013</conf-date> (<publisher-loc>San Diego</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>).</citation>
</ref>
<ref id="B59">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Karimabadi</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Roytershteyn</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Wan</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Matthaeus</surname>
<given-names>W. H.</given-names>
</name>
<name>
<surname>Daughton</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>P.</given-names>
</name>
<etal/>
</person-group> (<year>2013b</year>). <article-title>Coherent structures, intermittent turbulence, and dissipation in high-temperature plasmas</article-title>. <source>Phys. Plasmas</source> <volume>20</volume> (<issue>1</issue>), <fpage>012303</fpage>.</citation>
</ref>
<ref id="B22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kim</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Won</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Unsupervised deep learning for super-resolution reconstruction of turbulence</article-title>. <source>J. Fluid Mech.</source> <volume>910</volume>, <fpage>A29</fpage>. <pub-id pub-id-type="doi">10.1017/jfm.2020.1028</pub-id>
</citation>
</ref>
<ref id="B24">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kochkov</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Alieva</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>P Brenner</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hoyer</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2021</year>). <source>Machine learning accelerated computational fluid dynamics</source>. <comment>arXiv preprint arXiv:2102.01010</comment>.</citation>
</ref>
<ref id="B58">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kovachki</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Azizzadenesheli</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Bhattacharya</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Stuart</surname>
<given-names>A.</given-names>
</name>
<etal/>
</person-group> (<year>2021</year>). <source>Neural operator: Learning maps between function spaces</source>. <comment>arXiv preprint arXiv:2108.08481</comment>.</citation>
</ref>
<ref id="B25">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kurz</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Beck</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>A machine learning framework for LES closure terms</article-title>. <source>Electron. Trans. Numer. Analysis</source> <volume>56</volume>, <fpage>117</fpage>&#x2013;<lpage>137</lpage>. <pub-id pub-id-type="doi">10.1553/etna_vol56s117</pub-id>
</citation>
</ref>
<ref id="B26">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>La Cava</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Orzechowski</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Burlacu</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>de Franca</surname>
<given-names>F. O.</given-names>
</name>
<name>
<surname>Virgolin</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Jin</surname>
<given-names>Y.</given-names>
</name>
<etal/>
</person-group> (<year>2021</year>). &#x201c;<article-title>Contemporary symbolic regression methods and their relative performance</article-title>,&#x201d; in <source>Advances in neural information processing systems &#x2014; datasets and benchmarks track</source>.</citation>
</ref>
<ref id="B28">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Predicting solar flares using a novel deep convolutional neural network</article-title>. <source>Astrophysical J.</source> <volume>891</volume>, <fpage>10</fpage>. <pub-id pub-id-type="doi">10.3847/1538-4357/ab6d04</pub-id>
</citation>
</ref>
<ref id="B29">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Kovachki</surname>
<given-names>N. B.</given-names>
</name>
<name>
<surname>Azizzadenesheli</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Bhattacharya</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Stuart</surname>
<given-names>A. M.</given-names>
</name>
<etal/>
</person-group> (<year>2021a</year>). &#x201c;<article-title>Fourier neural operator for parametric partial differential equations</article-title>,&#x201d; in <conf-name>Proceeding of the 9th International Conference on Learning Representations, ICLR 2021</conf-name>, <conf-loc>Vienna, Austria (Virtual Event)</conf-loc>, <conf-date>May 2021</conf-date>.</citation>
</ref>
<ref id="B30">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Kovachki</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Jin</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>B.</given-names>
</name>
<etal/>
</person-group> (<year>2021b</year>). <source>Physics-informed neural operator for learning partial differential equations</source>. <comment>arXiv preprint arXiv:2111.03794</comment>.</citation>
</ref>
<ref id="B31">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Long</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Dong</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>PDE-net 2.0: Learning PDEs from data with a numeric-symbolic hybrid deep network</article-title>. <source>J. Comput. Phys.</source> <volume>399</volume>, <fpage>108925</fpage>. <pub-id pub-id-type="doi">10.1016/j.jcp.2019.108925</pub-id>
</citation>
</ref>
<ref id="B33">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lu</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Jin</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Pang</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Karniadakis</surname>
<given-names>G. E.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators</article-title>. <source>Nat. Mach. Intell.</source> <volume>3</volume>, <fpage>218</fpage>&#x2013;<lpage>229</lpage>. <pub-id pub-id-type="doi">10.1038/s42256-021-00302-5</pub-id>
</citation>
</ref>
<ref id="B34">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Lu</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Meng</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Mao</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Karniadakis</surname>
<given-names>G. E.</given-names>
</name>
</person-group> (<year>2019</year>). <source>DeepXDE: A deep learning library for solving differential equations</source>. <comment>arXiv preprint arXiv:1907.04502</comment>.</citation>
</ref>
<ref id="B36">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Martius</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Lampert</surname>
<given-names>C. H.</given-names>
</name>
</person-group> (<year>2016</year>). <source>Extrapolation and learning equations</source>. <comment>[Online]. Available: <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/1610.02995">http://arxiv.org/abs/1610.02995</ext-link>
</comment>.</citation>
</ref>
<ref id="B37">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Newell</surname>
<given-names>P. T.</given-names>
</name>
<name>
<surname>Wing</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Meng</surname>
<given-names>C. I.</given-names>
</name>
<name>
<surname>Sigillito</surname>
<given-names>V.</given-names>
</name>
</person-group> (<year>1991</year>). <article-title>The auroral oval position, structure, and intensity of precipitation from 1984 onward: An automated on&#x2010;line data base</article-title>. <source>J. Geophys. Res. Space Phys.</source> <volume>96</volume>, <fpage>5877</fpage>&#x2013;<lpage>5882</lpage>. <pub-id pub-id-type="doi">10.1029/90ja02450</pub-id>
</citation>
</ref>
<ref id="B38">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Omelchenko</surname>
<given-names>Y. A.</given-names>
</name>
<name>
<surname>Karimabadi</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2022</year>). <source>Emaps: An intelligent agent-based technology for simulation of multiscale systems. Space and astrophysical plasma simulation</source>. Editor <person-group person-group-type="editor">
<name>
<surname>B&#xfc;chner</surname>
<given-names>J.</given-names>
</name>
</person-group> <comment>Ch. 13 (in print)</comment>. <pub-id pub-id-type="doi">10.1007/978-3-031-11870-8_13</pub-id>
</citation>
</ref>
<ref id="B40">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Pathak</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Subramanian</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Harrington</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Raja</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chattopadhyay</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Mardani</surname>
<given-names>M.</given-names>
</name>
<etal/>
</person-group> (<year>2022</year>). <source>Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators</source>. <comment>arXiv preprint arXiv:2202.11214</comment>.</citation>
</ref>
<ref id="B41">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Petersen</surname>
<given-names>B. K.</given-names>
</name>
<name>
<surname>Larma</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Mundhenk</surname>
<given-names>T. N.</given-names>
</name>
<name>
<surname>Santiago</surname>
<given-names>C. P.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>S. K.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>J. T.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients</article-title>. In <conf-name>Proceedings of the International Conference on Learning Representations</conf-name>, <conf-date>Sept 2020</conf-date>.</citation>
</ref>
<ref id="B42">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Raissi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Perdikaris</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Karniadakis</surname>
<given-names>G. E.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations</article-title>. <source>J. Comput. Phys.</source> <volume>378</volume>, <fpage>686</fpage>&#x2013;<lpage>707</lpage>. <pub-id pub-id-type="doi">10.1016/j.jcp.2018.10.045</pub-id>
</citation>
</ref>
<ref id="B43">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rudy</surname>
<given-names>S. H.</given-names>
</name>
<name>
<surname>Brunton</surname>
<given-names>S. L.</given-names>
</name>
<name>
<surname>Proctor</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Kutz</surname>
<given-names>J. N.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Data-driven discovery of partial differential equations</article-title>. <source>Sci. Adv.</source> <volume>3</volume>, <fpage>e1602614</fpage>. <pub-id pub-id-type="doi">10.1126/sciadv.1602614</pub-id>
</citation>
</ref>
<ref id="B44">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Sahoo</surname>
<given-names>S. S.</given-names>
</name>
<name>
<surname>Lampert</surname>
<given-names>C. H.</given-names>
</name>
<name>
<surname>Martius</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2018</year>). <source>Learning equations for extrapolation and control</source>. <comment>arXiv preprint arXiv:1806.07259</comment>.</citation>
</ref>
<ref id="B45">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schmidt</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lipson</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>Distilling free-form natural laws from experimental data</article-title>. <source>Science</source> <volume>324</volume> (<issue>5923</issue>), <fpage>81</fpage>&#x2013;<lpage>85</lpage>. <pub-id pub-id-type="doi">10.1126/science.1165893</pub-id>
</citation>
</ref>
<ref id="B47">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sofos</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Stavrogiannis</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Exarchou-Kouveli</surname>
<given-names>K. K.</given-names>
</name>
<name>
<surname>Akabua</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Charilas</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Karakasidis</surname>
<given-names>T. E.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Current trends in fluid research in the era of artificial intelligence: A review</article-title>. <source>Fluids</source> <volume>7</volume>, <fpage>116</fpage>. <pub-id pub-id-type="doi">10.3390/fluids7030116</pub-id>
</citation>
</ref>
<ref id="B48">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Taghizadeh</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Witherden</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Girimaji</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Turbulence closure modeling with data-driven techniques: Physical compatibility and consistency considerations</article-title>. <source>New J. Phys.</source> <volume>22</volume>, <fpage>093023</fpage>. <pub-id pub-id-type="doi">10.1088/1367-2630/abadb3</pub-id>
</citation>
</ref>
<ref id="B49">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Udrescu</surname>
<given-names>S-M.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Neto</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Tegmark</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2020</year>). <source>AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity</source>. <comment>arXiv preprint arXiv:2006.10782</comment>.</citation>
</ref>
<ref id="B50">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Udrescu</surname>
<given-names>S-M.</given-names>
</name>
<name>
<surname>Tegmark</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2020</year>). <source>AI Feynman: A physics-inspired method for symbolic regression</source>. <comment>arXiv preprint arXiv:1905.11481 [physics.comp-ph]</comment>.</citation>
</ref>
<ref id="B51">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Uritsky</surname>
<given-names>V. M.</given-names>
</name>
<name>
<surname>Pouquet</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Rosenberg</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Mininni</surname>
<given-names>P. D.</given-names>
</name>
<name>
<surname>Donovan</surname>
<given-names>E. F.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Structures in magnetohydrodynamic turbulence: Detection and scaling</article-title>. <source>Phys. Rev. E</source> <volume>82</volume>, <fpage>056326</fpage>. <pub-id pub-id-type="doi">10.1103/physreve.82.056326</pub-id>
</citation>
</ref>
<ref id="B52">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Kashinath</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Mustafa</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Albert</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>Towards physics-informed deep learning for turbulent flow prediction</article-title>,&#x201d; in <conf-name>Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD &#x2019;20)</conf-name>, <conf-loc>Virtual Event, CA, USA</conf-loc>, <conf-date>August 2020</conf-date> (<publisher-name>ACM</publisher-name>), <fpage>1457</fpage>&#x2013;<lpage>1466</lpage>. <comment>New York, NY, USA</comment>. <pub-id pub-id-type="doi">10.1145/3394486.3403198</pub-id>
</citation>
</ref>
<ref id="B53">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Xiao</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>An</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>C.</given-names>
</name>
<etal/>
</person-group> (<year>2021</year>). &#x201c;<article-title>Symplectic structure-preserving particle-in-cell whole-volume simulation of tokamak plasmas to 111.3 trillion particles and 25.7 billion grids</article-title>,&#x201d; in <conf-name>Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis</conf-name>, <conf-loc>St. Louis, MO, USA</conf-loc>, <conf-date>November 2021</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>13</lpage>.</citation>
</ref>
<ref id="B54">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xu</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>DLGA-PDE: Discovery of PDEs with incomplete candidate library via combination of deep learning and genetic algorithm</article-title>. <source>J. Comput. Phys.</source> <volume>418</volume>, <fpage>109584</fpage>. <pub-id pub-id-type="doi">10.1016/j.jcp.2020.109584</pub-id>
</citation>
</ref>
<ref id="B55">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhdankin</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Uzdensky</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Perez</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Boldyrev</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Statistical analysis of current sheets in three-dimensional magnetohydrodynamic turbulence</article-title>. <source>Astrophys. J.</source> <volume>771</volume>, <fpage>124</fpage>. <pub-id pub-id-type="doi">10.1088/0004-637x/771/2/124</pub-id>
</citation>
</ref>
</ref-list>
</back>
</article>