<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Archiving and Interchange DTD v2.3 20070202//EN" "archivearticle.dtd">
<article article-type="methods-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Mech. Eng.</journal-id>
<journal-title>Frontiers in Mechanical Engineering</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Mech. Eng.</abbrev-journal-title>
<issn pub-type="epub">2297-3079</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">1410190</article-id>
<article-id pub-id-type="doi">10.3389/fmech.2024.1410190</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Mechanical Engineering</subject>
<subj-group>
<subject>Methods</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Blending physics with data using an efficient Gaussian process regression with soft inequality and monotonicity constraints</article-title>
<alt-title alt-title-type="left-running-head">Kochan and Yang</alt-title>
<alt-title alt-title-type="right-running-head">
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmech.2024.1410190">10.3389/fmech.2024.1410190</ext-link>
</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Kochan</surname>
<given-names>Didem</given-names>
</name>
<uri xlink:href="https://loop.frontiersin.org/people/2703430/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/validation/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Yang</surname>
<given-names>Xiu</given-names>
</name>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/2738258/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/project-administration/"/>
<role content-type="https://credit.niso.org/contributor-roles/resources/"/>
<role content-type="https://credit.niso.org/contributor-roles/supervision/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
<role content-type="https://credit.niso.org/contributor-roles/funding-acquisition/"/>
</contrib>
</contrib-group>
<aff>
<institution>Department of Industrial and Systems Engineering</institution>, <institution>Lehigh University</institution>, <addr-line>Bethlehem</addr-line>, <addr-line>PA</addr-line>, <country>United States</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1491435/overview">Ke Li</ext-link>, Schlumberger, United States</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1987930/overview">Lu-Kai Song</ext-link>, Beihang University, China</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/2213810/overview">Xueyu Zhu</ext-link>, The University of Iowa, United States</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/2544911/overview">Fei Song</ext-link>, Schlumberger, United States</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Xiu Yang, <email>xiy518@lehigh.edu</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>23</day>
<month>01</month>
<year>2025</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>10</volume>
<elocation-id>1410190</elocation-id>
<history>
<date date-type="received">
<day>31</day>
<month>03</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>23</day>
<month>12</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2025 Kochan and Yang.</copyright-statement>
<copyright-year>2025</copyright-year>
<copyright-holder>Kochan and Yang</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>In this work, we propose a new Gaussian process (GP) regression framework that enforces the physical constraints in a probabilistic manner. Specifically, we focus on inequality and monotonicity constraints. This GP model is trained by the quantum-inspired Hamiltonian Monte Carlo (QHMC) algorithm, which is an efficient way to sample from a broad class of distributions by allowing a particle to have a random mass matrix with a probability distribution. Integrating the QHMC into the inequality and monotonicity constrained GP regression in the probabilistic sense, our approach enhances the accuracy and reduces the variance in the resulting GP model. Additionally, the probabilistic aspect of the method leads to reduced computational expenses and execution time. Further, we present an adaptive learning algorithm that guides the selection of constraint locations. The accuracy and efficiency of the method are demonstrated in estimating the hyperparameter of high-dimensional GP models under noisy conditions, reconstructing the sparsely observed state of a steady state heat transport problem, and learning a conservative tracer distribution from sparse tracer concentration measurements.</p>
</abstract>
<kwd-group>
<kwd>constrained optimization</kwd>
<kwd>Gaussian process regression</kwd>
<kwd>quantum-inspired Hamiltonian Monte Carlo</kwd>
<kwd>adaptive learning</kwd>
<kwd>soft constraints</kwd>
</kwd-group>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Fluid Mechanics</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>In many real-world applications, measuring complex systems or evaluating computational models can be time-consuming, costly or computationally intensive. Gaussian process (GP) regression is a Bayesian technique that addresses this problem by building a surrogate model. It is a supervised machine learning framework that has been widely used in regression and classification tasks. A GP can be interpreted as a suitable probability distribution on a set of functions, which can be conditioned on observations using Bayes&#x2019; rule (<xref ref-type="bibr" rid="B20">Lange-Hegermann, 2021</xref>). GP regression has found applications in various challenging practical problems, including multi-target regression problems <xref ref-type="bibr" rid="B29">Nabati et al. (2022)</xref>, biomedical applications <xref ref-type="bibr" rid="B8">D&#xfc;richen et al. (2014)</xref>, <xref ref-type="bibr" rid="B31">Pimentel et al. (2013)</xref>, robotics <xref ref-type="bibr" rid="B43">Williams and Rasmussen (2006)</xref> and mechanical engineering applications <xref ref-type="bibr" rid="B38">Song et al. (2021)</xref>, <xref ref-type="bibr" rid="B21">Li et al. (2023)</xref>. Recent research demonstrates that a GP regression model can make predictions that incorporate prior information (kernels) and generate uncertainty measures over those predictions. However, prior knowledge often includes physical laws, and using the standard GP regression framework may lead to an unbounded model in which some points can take infeasible values that violate physical laws (<xref ref-type="bibr" rid="B20">Lange-Hegermann, 2021</xref>). For example, non-negativity is a requirement for various physical properties such as temperature, density and viscosity (<xref ref-type="bibr" rid="B30">Pensoneault et al., 2020</xref>). 
Incorporating physical information in GP framework can regularize the behaviour of the model and provide more realistic uncertainties, since the approach concurrently evaluates problem data and physical models (<xref ref-type="bibr" rid="B40">Swiler et al., 2020</xref>; <xref ref-type="bibr" rid="B11">Ezati et al., 2024</xref>).</p>
<p>A significant amount of research has been conducted to incorporate physical information into the GP framework, resulting in various techniques and methodologies (<xref ref-type="bibr" rid="B40">Swiler et al., 2020</xref>). For example, a probit model for the likelihood of derivative information can be employed to enforce monotonicity constraints (<xref ref-type="bibr" rid="B36">Riihim&#xe4;ki and Vehtari, 2010</xref>). Although this approach can also be used to enforce convexity in one dimension, an additional requirement on the Hessian is needed in higher dimensions (<xref ref-type="bibr" rid="B7">Da Veiga and Marrel, 2012</xref>). In (<xref ref-type="bibr" rid="B24">L&#xf3;pez-Lopera et al., 2022</xref>), an additive GP approach is introduced to account for monotonicity constraints. Although the posterior sampling step can be challenging, the additive GP framework allows the constraints to be satisfied everywhere in the input space, and it is scalable to higher dimensions. The work presented in <xref ref-type="bibr" rid="B13">Gulian et al. (2022)</xref> uses spectral decomposition covariance kernels and differential equation constraints in a co-kriging setup to perform GP regression constrained by boundary value problems. With their inherent advantages, physics-informed GP models that incorporate physical constraints have applications in diverse areas, such as manufacturing <xref ref-type="bibr" rid="B32">Qiang et al. (2023)</xref>, forecasting in power grids <xref ref-type="bibr" rid="B28">Mao et al. (2020)</xref>, urban flooding models <xref ref-type="bibr" rid="B18">Kohanpur et al. (2023)</xref>, mimicking drivers&#x2019; behavior <xref ref-type="bibr" rid="B41">Wang et al. (2021)</xref>, monitoring intelligent tire systems <xref ref-type="bibr" rid="B4">Barbosa et al. (2021)</xref>, predicting fuel flow rate <xref ref-type="bibr" rid="B6">Chati and Balakrishnan (2017)</xref>, and designing wind turbines <xref ref-type="bibr" rid="B42">Wilkie and Galasso (2021)</xref>. Due to their flexibility, physics-informed GP models can be combined with several approaches to enhance the accuracy of model predictions. These works show that integrating physical knowledge into the prediction process yields accurate results.</p>
<p>Enforcing inequality constraints into a GP is typically challenging because the conditional process, subject to these constraints, does not retain the properties of a GP (<xref ref-type="bibr" rid="B26">Maatouk and Bay, 2017</xref>). One approach to handle this problem is data augmentation, in which the inequality constraints are enforced at various locations and approximate samples are drawn from the predictive distribution (<xref ref-type="bibr" rid="B1">Abrahamsen and Benth, 2001</xref>), or a block covariance kernel is used (<xref ref-type="bibr" rid="B33">Raissi et al., 2017</xref>). The implicitly constrained GP regression method proposed in (<xref ref-type="bibr" rid="B37">Salzmann and Urtasun, 2010</xref>) shows that the mean prediction of a GP implicitly satisfies linear constraints if the constraints are satisfied by the training data. A similar approach shows that when linear inequality constraints are imposed on a finite set of points in the domain, the resulting process is a compound Gaussian process with a truncated Gaussian mean (<xref ref-type="bibr" rid="B2">Agrell, 2019</xref>). Most of these approaches assume that the inequalities are satisfied on a finite set of input locations, and approximate the posterior distribution given those constraint points. The approach introduced in (<xref ref-type="bibr" rid="B7">Da Veiga and Marrel, 2012</xref>) is an example of these methods, where maximum likelihood estimation of the GP hyperparameters is investigated under the constraint assumptions. In practice, this should also limit the number of constraint points needed for an effective discrete-location approximation. In addition, the method is not efficient on high-dimensional datasets, as training the GP model takes a large amount of time.</p>
<p>To the best of our knowledge, the first Gaussian process method that satisfies inequality constraints over the entire input space was proposed by <xref ref-type="bibr" rid="B26">Maatouk and Bay (2017)</xref>. The GP approximation of the samples is performed in a finite-dimensional space of functions, and a rejection sampling method is used to approximate the posterior. The convergence properties of the method are investigated in (<xref ref-type="bibr" rid="B27">Maatouk et al., 2015</xref>). Although using rejection sampling to obtain the posterior helps convergence, it might be computationally expensive. Similar to the previous approaches in which a finite set of inputs satisfies the constraints, this method also suffers from the curse of dimensionality. Later, the truncated Gaussian approach (<xref ref-type="bibr" rid="B25">L&#xf3;pez-Lopera et al., 2018</xref>) extended the framework in (<xref ref-type="bibr" rid="B26">Maatouk and Bay, 2017</xref>) to general sets of linear inequalities. Building upon the approaches in (<xref ref-type="bibr" rid="B26">Maatouk and Bay, 2017</xref>; <xref ref-type="bibr" rid="B27">Maatouk et al., 2015</xref>), the work presented in (<xref ref-type="bibr" rid="B25">L&#xf3;pez-Lopera et al., 2018</xref>) introduces a finite-dimensional approach that incorporates inequalities for both data interpolation and covariance parameter estimation. In this work, the posterior distribution is expressed as a truncated multivariate normal distribution, and different Markov chain Monte Carlo (MCMC) methods as well as exact sampling methods are used to obtain it. Among the various MCMC sampling techniques, including Gibbs, Metropolis-Hastings (MH) and Hamiltonian Monte Carlo (HMC), the results indicate that HMC sampling is the most efficient. The truncated Gaussian approaches offer several advantages, including high accuracy and flexibility in satisfying multiple inequality conditions. However, although these methods address the limitations in (<xref ref-type="bibr" rid="B26">Maatouk and Bay, 2017</xref>), they can be time-consuming, particularly in applications with large datasets or high-dimensional spaces.</p>
<p>In this work, we use the QHMC algorithm to train the GP model and enforce the inequality and monotonicity constraints in a probabilistic manner. Our work addresses the computational limitations caused by high dimensions or large datasets. Unlike the truncated Gaussian methods in (<xref ref-type="bibr" rid="B25">L&#xf3;pez-Lopera et al., 2018</xref>) for inequality constraints, or the additive GP (<xref ref-type="bibr" rid="B24">L&#xf3;pez-Lopera et al., 2022</xref>) with monotonicity constraints, the proposed method maintains its efficiency in higher dimensions. Further, we adopt an adaptive learning algorithm that selects the constraint locations. The efficiency and accuracy of the QHMC algorithms are demonstrated on inequality- and monotonicity-constrained problems. The inequality-constrained examples include lower- and higher-dimensional synthetic problems, a conservative tracer distribution recovered from sparse tracer concentration measurements, and a three-dimensional heat transfer problem, while the monotonicity-constrained examples are lower- and higher-dimensional synthetic problems. Our contributions can be summarized in three key points: i) QHMC reduces the difference between the posterior mean and the ground truth, ii) utilizing QHMC in a probabilistic sense decreases variance and uncertainty, and iii) the proposed algorithm is a robust, efficient and flexible method applicable to a wide range of problems. We also implemented QHMC sampling within the truncated Gaussian approach to further enhance its accuracy and efficiency.</p>
</sec>
<sec id="s2">
<title>2 Gaussian process under inequality constraints</title>
<sec id="s2-1">
<title>2.1 Standard GP regression framework</title>
<p>Suppose we have a target function represented by values <inline-formula id="inf1">
<mml:math id="m1">
<mml:mrow>
<mml:mi mathvariant="bold">y</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>, where <inline-formula id="inf2">
<mml:math id="m2">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="double-struck">R</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> are observations at locations <inline-formula id="inf3">
<mml:math id="m3">
<mml:mrow>
<mml:mi mathvariant="bold">X</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">{</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">}</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>. Here, <inline-formula id="inf4">
<mml:math id="m4">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> represents <inline-formula id="inf5">
<mml:math id="m5">
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>-dimensional vectors in the domain <inline-formula id="inf6">
<mml:math id="m6">
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="double-struck">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. Using the framework provided in <xref ref-type="bibr" rid="B19">Kuss and Rasmussen (2003)</xref>, we approximate the target function by a GP, denoted as <inline-formula id="inf7">
<mml:math id="m7">
<mml:mrow>
<mml:mi>Y</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mo>.</mml:mo>
<mml:mo>,</mml:mo>
<mml:mo>.</mml:mo>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>:</mml:mo>
<mml:mi>D</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi mathvariant="normal">&#x3a9;</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mi mathvariant="normal">R</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. We can express <inline-formula id="inf8">
<mml:math id="m8">
<mml:mrow>
<mml:mi>Y</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> as<disp-formula id="e1">
<mml:math id="m9">
<mml:mrow>
<mml:mi>Y</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2254;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>P</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>&#x3bc;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mi>K</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(1)</label>
</disp-formula>where <inline-formula id="inf9">
<mml:math id="m10">
<mml:mrow>
<mml:mi>&#x3bc;</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mo>.</mml:mo>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the mean function and <inline-formula id="inf10">
<mml:math id="m11">
<mml:mrow>
<mml:mi>K</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the covariance function. The mean and covariance functions in <xref ref-type="disp-formula" rid="e1">Equation 1</xref> are defined as<disp-formula id="e2">
<mml:math id="m12">
<mml:mrow>
<mml:mi>&#x3bc;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="double-struck">E</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>Y</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
<mml:mtext>and</mml:mtext>
<mml:mspace width="1em"/>
<mml:mi>K</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="double-struck">E</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>Y</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3bc;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>Y</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3bc;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
<label>(2)</label>
</disp-formula>
</p>
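As an illustrative sketch (our own example, not code from the article), drawing sample paths from a zero-mean GP prior at a finite set of inputs amounts to sampling a multivariate normal whose covariance matrix is built from the kernel of Equation 2; the squared exponential kernel and the length-scale value below are assumptions for the demonstration.

```python
import numpy as np

def gp_prior_samples(kernel, xs, n_draws=3, rng=None):
    """Draw sample paths Y(x) ~ GP(0, K) at input locations `xs`
    by sampling the corresponding multivariate normal."""
    if rng is None:
        rng = np.random.default_rng()
    K = kernel(xs[:, None], xs[None, :])                  # covariance matrix K(x, x')
    L = np.linalg.cholesky(K + 1e-6 * np.eye(len(xs)))    # jitter for numerical stability
    return (L @ rng.normal(size=(len(xs), n_draws))).T   # each row is one sample path

# squared exponential covariance with an illustrative length-scale
rbf = lambda x, xp, ell=0.5: np.exp(-(x - xp) ** 2 / (2 * ell**2))
paths = gp_prior_samples(rbf, np.linspace(0, 1, 50), n_draws=3,
                         rng=np.random.default_rng(0))
```

Conditioning these prior samples on observations via Bayes' rule then gives the posterior GP used for regression.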
<p>Typically, the standard squared exponential covariance kernel can be used as a kernel function in <xref ref-type="disp-formula" rid="e2">Equation 2</xref>:<disp-formula id="e3">
<mml:math id="m13">
<mml:mrow>
<mml:mi>K</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mo stretchy="false">&#x2016;</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:msubsup>
<mml:mrow>
<mml:mo stretchy="false">&#x2016;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msup>
<mml:mrow>
<mml:mi>l</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b4;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(3)</label>
</disp-formula>where <inline-formula id="inf11">
<mml:math id="m14">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> is the signal variance, <inline-formula id="inf12">
<mml:math id="m15">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b4;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the Kronecker delta function and <inline-formula id="inf13">
<mml:math id="m16">
<mml:mrow>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the length-scale. We then assume that the observation includes an additive independent identically distributed (i.i.d.) Gaussian noise term <inline-formula id="inf14">
<mml:math id="m17">
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> with zero mean and variance <inline-formula id="inf15">
<mml:math id="m18">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>. We denote the hyperparameters in <xref ref-type="disp-formula" rid="e3">Equation 3</xref> by <inline-formula id="inf16">
<mml:math id="m19">
<mml:mrow>
<mml:mi mathvariant="italic">&#x3b8;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, which are estimated from the training data by minimizing the negative marginal log-likelihood <xref ref-type="bibr" rid="B19">Kuss and Rasmussen (2003)</xref>, <xref ref-type="bibr" rid="B39">Stein (1988)</xref>, <xref ref-type="bibr" rid="B46">Zhang (2004)</xref>:<disp-formula id="e4">
<mml:math id="m20">
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">Y</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">y</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="italic">&#x3bc;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mtext>T</mml:mtext>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mi>K</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">y</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="italic">&#x3bc;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>K</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(4)</label>
</disp-formula>
</p>
<p>The following section shows how the parameter updates are performed using the QHMC method.</p>
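As a minimal NumPy sketch (our own illustration, not the authors' implementation), the squared exponential kernel of Equation 3 and the negative marginal log-likelihood of Equation 4 (with a zero mean function) can be written as:

```python
import numpy as np

def sq_exp_kernel(X1, X2, sigma=1.0, ell=1.0, sigma_n=0.0):
    """Squared exponential kernel (Eq. 3); the noise term sigma_n^2 acts
    through the Kronecker delta, i.e. on the diagonal when X1 is X2."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    K = sigma**2 * np.exp(-d2 / (2.0 * ell**2))
    if X1 is X2:
        K = K + sigma_n**2 * np.eye(len(X1))
    return K

def neg_log_marginal_likelihood(theta, X, y):
    """Negative marginal log-likelihood (Eq. 4) with mu = 0,
    using a Cholesky factorization so log|K| = 2 * sum(log diag L)."""
    sigma, ell, sigma_n = theta
    N = len(y)
    K = sq_exp_kernel(X, X, sigma, ell, sigma_n)
    L = np.linalg.cholesky(K)                              # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))    # K^{-1} y
    return 0.5 * (y @ alpha) + np.log(np.diag(L)).sum() + 0.5 * N * np.log(2 * np.pi)
```

In a standard (unconstrained) setting, this objective would be handed to a gradient-based optimizer over theta; the article instead samples the hyperparameters with QHMC, as described next.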
</sec>
<sec id="s2-2">
<title>2.2 Quantum-inspired Hamiltonian Monte Carlo</title>
<p>QHMC is an enhanced version of the HMC algorithm that incorporates a random mass matrix for the particles, following a probability distribution. In conventional HMC, the position is represented by the original variables <inline-formula id="inf17">
<mml:math id="m21">
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>, while Gaussian momentum is represented by auxiliary variables <inline-formula id="inf18">
<mml:math id="m22">
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>. Utilizing the energy-time uncertainty relation of quantum mechanics, QHMC allows a particle to have a random mass matrix with a probability distribution. Consequently, in addition to the position and momentum variables, a mass variable <inline-formula id="inf19">
<mml:math id="m23">
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> is introduced within the QHMC framework. Having a third variable offers the advantage of exploring various landscapes in the state space. As a result, unlike standard HMC or conventional sampling methods such as MH and Gibbs, QHMC can perform well on discontinuous, non-smooth and spiky distributions <xref ref-type="bibr" rid="B5">Barbu and Zhu (2020)</xref>, <xref ref-type="bibr" rid="B23">Liu and Zhang (2019)</xref>. In particular, while the performance of HMC and MH sampling degrades when the distribution is ill-conditioned or multi-modal, QHMC does not suffer from these limitations. Moreover, QHMC maintains its performance with almost zero additional cost for resampling the mass variable. Due to its efficiency and adaptability, QHMC can easily be integrated with other techniques, or modified to enhance its performance based on specific objectives and applications. For example, stochastic versions of QHMC can yield accurate solutions with increased efficiency, and the approach is applicable to various scenarios involving missing data <xref ref-type="bibr" rid="B23">Liu and Zhang (2019)</xref>, <xref ref-type="bibr" rid="B17">Kochan et al. (2022)</xref>.</p>
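The description above can be sketched in a few lines of NumPy. This is a simplified, hedged illustration of the QHMC idea (a scalar mass resampled from a log-normal distribution at every iteration, wrapped around standard leapfrog integration and Metropolis acceptance); the mass distribution parameters and step sizes below are illustrative choices, not values from the article.

```python
import numpy as np

def qhmc_sample(grad_U, U, x0, n_samples=1000, n_leapfrog=20, step=0.1,
                mu_m=0.0, sigma_m=1.0, rng=None):
    """QHMC sketch: resample a random (scalar) mass m ~ lognormal(mu_m, sigma_m)
    each iteration, then run one HMC trajectory with that mass."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    samples = []
    for _ in range(n_samples):
        m = np.exp(rng.normal(mu_m, sigma_m))          # random mass: QHMC's key idea
        p = rng.normal(scale=np.sqrt(m), size=x.shape) # momentum ~ N(0, m)
        x_new, p_new = x.copy(), p.copy()
        # leapfrog integration of Hamilton's equations with mass m
        p_new -= 0.5 * step * grad_U(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += step * p_new / m
            p_new -= step * grad_U(x_new)
        x_new += step * p_new / m
        p_new -= 0.5 * step * grad_U(x_new)
        # Metropolis accept/reject on the total Hamiltonian H = U + p^T p / (2m)
        dH = (U(x_new) + p_new @ p_new / (2 * m)) - (U(x) + p @ p / (2 * m))
        if np.log(rng.uniform()) < -dH:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)
```

A small mass yields fast, far-reaching trajectories for flat regions, while a large mass slows the particle down in spiky regions, matching the oscillator intuition discussed below.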
<p>The quantum nature of QHMC can be understood by considering a one-dimensional harmonic oscillator example provided in <xref ref-type="bibr" rid="B23">Liu and Zhang (2019)</xref>. Let us consider a ball with a fixed mass <inline-formula id="inf20">
<mml:math id="m24">
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> attached to a spring at the origin. Assuming <inline-formula id="inf21">
<mml:math id="m25">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the displacement, the magnitude of the restoring force that pulls back the ball to the origin is <inline-formula id="inf22">
<mml:math id="m26">
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, and the ball oscillates around the origin with period <inline-formula id="inf23">
<mml:math id="m27">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math>
</inline-formula>. In contrast to standard HMC where the mass <inline-formula id="inf24">
<mml:math id="m28">
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is fixed at 1, QHMC incorporates a time-varying mass, allowing the ball to change its acceleration and explore various distribution landscapes. That is, QHMC can employ a short time period <inline-formula id="inf25">
<mml:math id="m29">
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, corresponding to a small mass <inline-formula id="inf26">
<mml:math id="m30">
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, to efficiently explore broad but flat regions. Conversely, in spiky regions, it can switch to a larger time period <inline-formula id="inf27">
<mml:math id="m31">
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, i.e., larger <inline-formula id="inf28">
<mml:math id="m32">
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, to ensure thorough exploration of all corners of the landscape <xref ref-type="bibr" rid="B23">Liu and Zhang (2019)</xref>.</p>
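The mass-period relation in the oscillator analogy above is easy to check numerically. The short sketch below (the helper name <monospace>period</monospace> is ours, not from the source) confirms that a small mass yields a short period and a large mass a long one:

```python
import math

def period(m, k=1.0):
    # Oscillation period T = 2*pi*sqrt(m/k) of a ball of mass m
    # attached to a spring with stiffness k.
    return 2.0 * math.pi * math.sqrt(m / k)

# Small mass: short period, fast sweeps over broad flat regions.
# Large mass: long period, slow and thorough exploration of spiky regions.
print(period(0.01), period(100.0))
```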
<p>The implementation of QHMC is straightforward: i) construct a stochastic process <inline-formula id="inf29">
<mml:math id="m33">
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> for the mass, and ii) at each time <inline-formula id="inf30">
<mml:math id="m34">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, sample <inline-formula id="inf31">
<mml:math id="m35">
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> from a distribution <inline-formula id="inf32">
<mml:math id="m36">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. Resampling the positive-definite mass matrix is the only step added to the standard HMC procedure. In practice, assuming that <inline-formula id="inf33">
<mml:math id="m37">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is independent of <inline-formula id="inf34">
<mml:math id="m38">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf35">
<mml:math id="m39">
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, a mass density function <inline-formula id="inf36">
<mml:math id="m40">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> with mean <inline-formula id="inf37">
<mml:math id="m41">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3bc;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and variance <inline-formula id="inf38">
<mml:math id="m42">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> can be realized, for example, by sampling log&#x2009;<italic>m</italic> from a normal distribution with mean <italic>&#x3bc;</italic><sub><italic>m</italic></sub> and variance <italic>&#x3c3;</italic><sub><italic>m</italic></sub><sup>2</sup> and setting the mass matrix to <italic>M</italic> = <italic>mI</italic>, where <inline-formula id="inf39">
<mml:math id="m43">
<mml:mrow>
<mml:mi>I</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the identity matrix. The QHMC framework simulates the following dynamical system:<disp-formula id="e5">
<mml:math id="m44">
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mtable class="matrix">
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mi>x</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mi>q</mml:mi>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mtable class="matrix">
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mi>M</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mi>q</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x2207;</mml:mi>
<mml:mi>U</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(5)</label>
</disp-formula>
</p>
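A single leapfrog proposal for the dynamics in Equation 5 can be sketched as follows. This is a minimal illustration, assuming a scalar log-normal mass as described in Liu and Zhang (2019); the function name, step size, and defaults are our own, and the Metropolis accept/reject step (shown later in Algorithm 1) is omitted here:

```python
import numpy as np

def qhmc_leapfrog(x, grad_U, eps=0.05, L=20, mu_m=0.0, sigma_m=1.0, rng=None):
    """One QHMC proposal for dx = M(t)^{-1} q dt, dq = -grad U(x) dt.

    The scalar mass m is resampled from a log-normal distribution, M(t) = m * I.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = np.exp(rng.normal(mu_m, sigma_m))          # random mass (log-normal)
    q = rng.normal(0.0, np.sqrt(m), size=x.shape)  # momentum q ~ N(0, m I)
    q = q - 0.5 * eps * grad_U(x)                  # half-step in momentum
    for _ in range(L - 1):
        x = x + eps * q / m                        # full step in position
        q = q - eps * grad_U(x)                    # full step in momentum
    x = x + eps * q / m
    q = q - 0.5 * eps * grad_U(x)                  # closing half-step
    return x, q, m

# Toy target: standard normal, U(x) = x^2 / 2, so grad U(x) = x.
x, q, m = qhmc_leapfrog(np.zeros(2), lambda x: x, rng=np.random.default_rng(0))
```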
<p>In this setting, the potential energy function of the QHMC system is <inline-formula id="inf40">
<mml:math id="m45">
<mml:mrow>
<mml:mi>U</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">[</mml:mo>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">Y</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo stretchy="false">]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, i.e., the negative marginal log-likelihood. <xref ref-type="statement" rid="Algorithm_1">Algorithm 1</xref> summarizes the steps of QHMC sampling; here, we treat the location variables <inline-formula id="inf41">
<mml:math id="m46">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">{</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">}</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> in the GP model as the position variables <inline-formula id="inf42">
<mml:math id="m47">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> in <xref ref-type="statement" rid="Algorithm_1">Algorithm 1</xref>. The method evolves the QHMC dynamics in <xref ref-type="disp-formula" rid="e5">Equation 5</xref> to update the locations <inline-formula id="inf43">
<mml:math id="m48">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. In this work, we implement the QHMC method for inequality-constrained GP regression in a probabilistic manner.</p>
</sec>
<sec id="s2-3">
<title>2.3 Proposed method</title>
<p>Instead of enforcing all constraints strictly, the approach introduced in <xref ref-type="bibr" rid="B30">Pensoneault et al. (2020)</xref> minimizes the negative marginal log-likelihood function in <xref ref-type="disp-formula" rid="e4">Equation 4</xref> while allowing constraint violations with a small probability. For example, for non-negativity constraints, the following requirement is imposed to the problem:<disp-formula id="e6">
<mml:math id="m49">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">Y</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3c;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2264;</mml:mo>
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
<mml:mtext>for&#x2009;all</mml:mtext>
<mml:mspace width="1em"/>
<mml:mi>x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">D</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(6)</label>
</disp-formula>where <inline-formula id="inf44">
<mml:math id="m50">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x226a;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
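For a Gaussian posterior, the probability in Equation 6 has a closed form, P[Y(x) &lt; 0] = &#x3d5;(&#x2212;y*(x)/s(x)), so the soft constraint can be checked directly. The sketch below uses only Python's standard library; the function name is our own:

```python
from statistics import NormalDist

def violates_soft_constraint(mean, std, eta=0.022):
    # For a Gaussian posterior N(mean, std^2), check whether
    # P[Y(x) < 0] exceeds the allowed violation probability eta.
    p_negative = NormalDist(mean, std).cdf(0.0)
    return p_negative > eta

print(violates_soft_constraint(2.5, 1.0))  # P[Y<0] ~ 0.6%: constraint holds
print(violates_soft_constraint(0.5, 1.0))  # P[Y<0] ~ 31%: constraint violated
```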
<p>In contrast to enforcing the constraint via truncated Gaussian assumption <xref ref-type="bibr" rid="B26">Maatouk and Bay (2017)</xref> or performing inference based on the Laplace approximation and expectation propagation <xref ref-type="bibr" rid="B16">Jensen et al. (2013)</xref>, the proposed method preserves the Gaussian posterior of the standard GP regression. The method uses a slight modification of the existing cost function. Given a model that follows a Gaussian distribution, the constraint in <xref ref-type="disp-formula" rid="e6">Equation 6</xref> can be re-expressed by the posterior mean and posterior standard deviation:<disp-formula id="e7">
<mml:math id="m51">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3b7;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>s</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2265;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
<mml:mtext>for&#x2009;all</mml:mtext>
<mml:mspace width="1em"/>
<mml:mi>x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">D</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(7)</label>
</disp-formula>where <inline-formula id="inf45">
<mml:math id="m52">
<mml:mrow>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> stands for the posterior mean, <inline-formula id="inf46">
<mml:math id="m53">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the standard deviation and <inline-formula id="inf47">
<mml:math id="m54">
<mml:mrow>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the cumulative distribution function of a Gaussian random variable. Following the work in <xref ref-type="bibr" rid="B30">Pensoneault et al. (2020)</xref>, in this study <inline-formula id="inf48">
<mml:math id="m55">
<mml:mrow>
<mml:mi>&#x3b7;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> was set to <inline-formula id="inf49">
<mml:math id="m56">
<mml:mrow>
<mml:mn>2.2</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> for demonstration purposes. As a result, <inline-formula id="inf50">
<mml:math id="m57">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>&#x3b7;</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, indicating that the value two standard deviations below the mean remains non-negative. Then, using <xref ref-type="disp-formula" rid="e7">Equation 7</xref>, the optimization problem is formulated as<disp-formula id="e8">
<mml:math id="m58">
<mml:mrow>
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:msub>
<mml:mrow>
<mml:mi>arg min</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mspace width=".1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x2212;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">Y</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="1em"/>
<mml:mtext>such&#x2009;that</mml:mtext>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>s</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2265;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(8)</label>
</disp-formula>In this form, the optimization problem contains the functional constraint in <xref ref-type="disp-formula" rid="e8">Equation 8</xref>, which can be prohibitive or impossible to satisfy at every point of the domain. Therefore, we adopt a strategy where <xref ref-type="disp-formula" rid="e8">Equation 8</xref> is enforced only on a selected set of <inline-formula id="inf51">
<mml:math id="m59">
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> constraint points denoted as <inline-formula id="inf52">
<mml:math id="m60">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>. The optimization problem can be reformulated as<disp-formula id="e9">
<mml:math id="m61">
<mml:mrow>
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:msub>
<mml:mrow>
<mml:mi>arg min</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x2212;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">Y</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="1em"/>
<mml:mtext>such&#x2009;that</mml:mtext>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>s</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2265;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mspace width="1em"/>
<mml:mtext>for&#x2009;all</mml:mtext>
<mml:mspace width="1em"/>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1,2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>m</mml:mi>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(9)</label>
</disp-formula>where the hyperparameters are estimated so as to enforce the bounds. Solving this optimization problem can be very challenging, and hence additional regularization terms are added in <xref ref-type="bibr" rid="B30">Pensoneault et al. (2020)</xref>. Rather than solving the optimization problem directly, this work adopts the soft-QHMC method, which enforces the inequality constraints with high probability (e.g., 95%) by selecting a specific set of <inline-formula id="inf53">
<mml:math id="m62">
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> constraint points in the domain. Non-negativity of the posterior GP is then enforced at these selected points. The negative log-likelihood in <xref ref-type="disp-formula" rid="e4">Equation 4</xref> is minimized using the QHMC algorithm. Leveraging Bayesian estimation <xref ref-type="bibr" rid="B12">Gelman et al. (2014)</xref>, we can express the posterior distribution in terms of the log-likelihood function and the prior probability distribution as follows:<disp-formula id="e10">
<mml:math id="m63">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">Y</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x221d;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold">Y</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">X</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">Y</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(10)</label>
</disp-formula>The QHMC training flow starts with the Bayesian learning shown in <xref ref-type="disp-formula" rid="e10">Equation 10</xref> and proceeds with an MCMC procedure that draws samples from the resulting posterior. A general sampling procedure at step <inline-formula id="inf54">
<mml:math id="m64">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is given as<disp-formula id="e11">
<mml:math id="m65">
<mml:mrow>
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:msup>
<mml:mrow>
<mml:mi>X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msup>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x223c;</mml:mo>
<mml:mi>&#x3c0;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>X</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>X</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:mi>Y</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right">
<mml:msup>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msup>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x223c;</mml:mo>
<mml:mi>&#x3c0;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>X</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:mi>Y</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(11)</label>
</disp-formula>
</p>
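The choice &#x3b7; = 2.2% behind Equation 7 can be verified numerically, and the resulting pointwise condition of Equation 9 reduces to a simple check at the constraint points. This is a sketch using Python's standard library; the function name and toy posterior are our own:

```python
from statistics import NormalDist

eta = 0.022
z = NormalDist().inv_cdf(eta)  # about -2.01, i.e. two standard deviations below the mean
print(round(z, 2))

def constraints_hold(y_star, s, x_c):
    # Equation 9 at the constraint points: posterior mean minus two posterior
    # standard deviations must stay non-negative at every x_c^(i).
    return all(y_star(x) - 2.0 * s(x) >= 0.0 for x in x_c)

# Toy posterior mean and standard deviation functions for illustration.
print(constraints_hold(lambda x: x + 1.0, lambda x: 0.2, [0.0, 0.5, 1.0]))
```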
<p>The workflow of soft inequality-constrained GP regression with the sampling procedure in <xref ref-type="disp-formula" rid="e11">Equation 11</xref> is summarized in <xref ref-type="statement" rid="Algorithm_2">Algorithm 2</xref>, where QHMC sampling (provided in <xref ref-type="statement" rid="Algorithm_1">Algorithm 1</xref>) is used as the GP training method. In this version of non-negativity-enforced GP regression, the constraint points are placed where the posterior variance is highest.</p>
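Placing the constraint points at the locations of highest posterior variance can be sketched as follows; <monospace>select_constraint_points</monospace> is our own name for this step, not from the source:

```python
import numpy as np

def select_constraint_points(candidates, posterior_var, m=5):
    # Pick the m candidate locations with the largest posterior variance
    # as the constraint points X_c.
    idx = np.argsort(posterior_var)[::-1][:m]
    return candidates[idx]

var = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.8])
pts = np.linspace(0.0, 1.0, 6)
print(select_constraint_points(pts, var, m=3))  # locations of the 3 largest variances
```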
<p>
<statement content-type="algorithm" id="Algorithm_1">
<label>Algorithm 1</label>
<p>QHMC Training for GP with Inequality Constraints.<list list-type="simple">
<list-item>
<p>
<bold>Input:</bold> Initial point <inline-formula id="inf55">
<mml:math id="m66">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, step size <inline-formula id="inf56">
<mml:math id="m67">
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, number of&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;&#x2003; simulation steps <inline-formula id="inf57">
<mml:math id="m68">
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, mass distribution&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;&#x2003; parameters <inline-formula id="inf58">
<mml:math id="m69">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3bc;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf59">
<mml:math id="m70">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</list-item>
<list-item>
<p>1:&#x2003;<bold>for</bold> <inline-formula id="inf60">
<mml:math id="m71">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1,2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> <bold>do</bold>
</p>
</list-item>
<list-item>
<p>2:&#x2003;&#x2003;&#x2003;<bold>Resample</bold> <inline-formula id="inf61">
<mml:math id="m72">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;<bold>Resample</bold> <inline-formula id="inf62">
<mml:math id="m73">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;<inline-formula id="inf63">
<mml:math id="m74">
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;<inline-formula id="inf64">
<mml:math id="m75">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2190;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mi>&#x2207;</mml:mi>
<mml:mi>U</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
<list-item>
<p>3:&#x2003;&#x2003;<bold>for</bold> <inline-formula id="inf65">
<mml:math id="m76">
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1,2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>L</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> <bold>do</bold>
</p>
</list-item>
<list-item>
<p>4:&#x2003;&#x2003;&#x2003;&#x2003;<inline-formula id="inf66">
<mml:math id="m77">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2190;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3f5;</mml:mi>
<mml:msubsup>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;<inline-formula id="inf67">
<mml:math id="m78">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2190;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mi>&#x2207;</mml:mi>
<mml:mi>U</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
<list-item>
<p>5:&#x2003;&#x2003;<bold>end for</bold>
</p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;<inline-formula id="inf68">
<mml:math id="m79">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2190;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3f5;</mml:mi>
<mml:msubsup>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;<inline-formula id="inf69">
<mml:math id="m80">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2190;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mi>&#x2207;</mml:mi>
<mml:mi>U</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;<inline-formula id="inf70">
<mml:math id="m81">
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;<bold>MH step:</bold> <inline-formula id="inf71">
<mml:math id="m82">
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> Uniform[0,1];</p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;<inline-formula id="inf72">
<mml:math id="m83">
<mml:mrow>
<mml:mi>&#x3c1;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>H</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>H</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>;</p>
</list-item>
<list-item>
<p>6:&#x2003;&#x2003;<bold>if</bold> <inline-formula id="inf73">
<mml:math id="m84">
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>min</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c1;</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> <bold>then</bold>
</p>
</list-item>
<list-item>
<p>7:&#x2003;&#x2003;&#x2003;<inline-formula id="inf74">
<mml:math id="m85">
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">&#x302;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
<list-item>
<p>8:&#x2003;&#x2003;<bold>else</bold>
</p>
</list-item>
<list-item>
<p>9:&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;<inline-formula id="inf75">
<mml:math id="m86">
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
<list-item>
<p>10:&#x2003;&#x2003;<bold>end if</bold>
</p>
</list-item>
<list-item>
<p>11:&#x2003;<bold>end for</bold>
</p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;&#x2003;<bold>Output:</bold> <inline-formula id="inf76">
<mml:math id="m87">
<mml:mrow>
<mml:mo stretchy="false">{</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
</mml:mrow>
<mml:mo stretchy="false">}</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
</list>
</p>
</statement>
</p>
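The QHMC loop above (resample the mass, resample the momentum, integrate the leapfrog updates of steps 4–5, then apply the MH step of steps 6–10) can be sketched as follows. This is a minimal illustration only: the function name `qhmc_sample`, the log-normal mass parameters `mu_m`/`sigma_m`, and the standard leapfrog ordering are our assumptions, not the authors' implementation.

```python
import numpy as np

def qhmc_sample(U, grad_U, x0, n_samples=5000, eps=0.1, n_leapfrog=20,
                mu_m=0.0, sigma_m=0.5, seed=0):
    """Sketch of QHMC: HMC whose mass M_t is resampled (here from a
    log-normal distribution) at every iteration."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    samples = []
    for _ in range(n_samples):
        m = np.exp(rng.normal(mu_m, sigma_m))           # resample mass M_t
        q = rng.normal(0.0, np.sqrt(m), size=x.shape)   # momentum q ~ N(0, M_t)
        x_new, q_new = x.copy(), q.copy()
        # leapfrog integration: x <- x + eps M^{-1} q, q <- q - eps grad U(x)
        q_new = q_new - 0.5 * eps * grad_U(x_new)
        for _ in range(n_leapfrog - 1):
            x_new = x_new + eps * q_new / m
            q_new = q_new - eps * grad_U(x_new)
        x_new = x_new + eps * q_new / m
        q_new = q_new - 0.5 * eps * grad_U(x_new)
        # MH step with H(x, q) = U(x) + q^T M^{-1} q / 2
        H_old = U(x) + 0.5 * (q @ q) / m
        H_new = U(x_new) + 0.5 * (q_new @ q_new) / m
        if rng.uniform() < min(1.0, np.exp(H_old - H_new)):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# usage: sample a standard normal target, U(x) = x^2 / 2
draws = qhmc_sample(lambda x: 0.5 * float(x @ x), lambda x: x, [0.0])
```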
<p>
<statement content-type="algorithm" id="Algorithm_2">
<label>Algorithm 2</label>
<p>Soft Inequality-constrained GP Regression.<list list-type="simple">
<list-item>
<p>1:&#x2003;Specify <inline-formula id="inf77">
<mml:math id="m88">
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> constraint points denoted by <inline-formula id="inf78">
<mml:math id="m89">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>, where the corresponding observation is <inline-formula id="inf79">
<mml:math id="m90">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</list-item>
<list-item>
<p>2:&#x2003;<bold>for</bold> <inline-formula id="inf80">
<mml:math id="m91">
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1,2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> <bold>do</bold>
</p>
</list-item>
<list-item>
<p>3:&#x2003;&#x2003;Compute the MSE <inline-formula id="inf81">
<mml:math id="m92">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> of the MLE prediction <inline-formula id="inf82">
<mml:math id="m93">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> for <inline-formula id="inf83">
<mml:math id="m94">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;&#x2003;Obtain observation <inline-formula id="inf84">
<mml:math id="m95">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> at <inline-formula id="inf85">
<mml:math id="m96">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;&#x2003;Locate <inline-formula id="inf86">
<mml:math id="m97">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> for the maximum of <inline-formula id="inf87">
<mml:math id="m98">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> for </p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;&#x2003;<inline-formula id="inf88">
<mml:math id="m99">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</list-item>
<list-item>
<p>4:&#x2003;<bold>end for</bold>
</p>
</list-item>
<list-item>
<p>&#x2003;&#x2003;&#x2003;Construct the MLE prediction of <inline-formula id="inf89">
<mml:math id="m100">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> using QHMC training.</p>
</list-item>
</list>
</p>
</statement>
</p>
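The point-selection loop of Algorithm 2 — place each new constraint point where the predictive variance <inline-formula><mml:math><mml:mrow><mml:msup><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> is largest — can be sketched with a plain GP posterior variance. This is a minimal sketch under our own assumptions (a 1-D squared-exponential kernel with hypothetical length scale `ell`, and a finite candidate grid standing in for the domain <inline-formula><mml:math><mml:mrow><mml:mi mathvariant="script">D</mml:mi></mml:mrow></mml:math></inline-formula>); it exploits the fact that the GP predictive variance depends only on input locations, not on observed values.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """1-D squared-exponential kernel matrix between point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def select_constraint_points(x_train, grid, m, ell=0.3, noise=1e-6):
    """Greedy sketch of Algorithm 2's loop: each of the m constraint
    points maximizes the predictive variance s^2 over the grid."""
    xc = []
    for _ in range(m):
        xs = np.concatenate([x_train, np.asarray(xc)])
        K = rbf(xs, xs, ell) + noise * np.eye(len(xs))
        k_star = rbf(grid, xs, ell)                  # cross-covariances k_*
        alpha = np.linalg.solve(K, k_star.T)         # K^{-1} k_*
        s2 = 1.0 - np.sum(k_star * alpha.T, axis=1)  # s^2 = k(x,x) - k_*^T K^{-1} k_*
        xc.append(grid[int(np.argmax(s2))])          # most uncertain location
    return np.array(xc)

# usage: with training inputs at 0.1 and 0.9, the first constraint
# point lands near the middle of the domain, where uncertainty peaks
xc = select_constraint_points(np.array([0.1, 0.9]), np.linspace(0.0, 1.0, 101), 2)
```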
<sec id="s2-3-1">
<title>2.3.1 Enforcing monotonicity constraints</title>
<p>Monotonicity constraints on a GP can be enforced through the likelihood of derivative observations. After the active constraints are selected, non-negativity is imposed on the corresponding partial derivatives, i.e.,<disp-formula id="e12">
<mml:math id="m101">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2265;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(12)</label>
</disp-formula>where <inline-formula id="inf90">
<mml:math id="m102">
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is a vector of <inline-formula id="inf91">
<mml:math id="m103">
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> latent values. In the soft-constrained GP method, we introduce the non-negativity information in <xref ref-type="disp-formula" rid="e12">Equation 12</xref> on a set of selected points, and apply the same procedure as in <xref ref-type="disp-formula" rid="e9">Equation 9</xref>.</p>
<p>Since the derivative of a GP is also a GP, with mean and covariance given by <xref ref-type="bibr" rid="B36">Riihim&#xe4;ki and Vehtari (2010)</xref>:<disp-formula id="e13">
<mml:math id="m104">
<mml:mrow>
<mml:mi>&#x3bc;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="double-struck">E</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>Y</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
<mml:mtext>and</mml:mtext>
<mml:mspace width="1em"/>
<mml:mi>K</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mi>K</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(13)</label>
</disp-formula>the new posterior distribution using the parameters in <xref ref-type="disp-formula" rid="e13">Equation 13</xref> is given as<disp-formula id="e14">
<mml:math id="m105">
<mml:mrow>
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">y</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x222b;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">y</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>f</mml:mi>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">y</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x222b;</mml:mo>
<mml:mo>&#x222b;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:mi>f</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold">y</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>f</mml:mi>
<mml:mi>d</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(14)</label>
</disp-formula>where <inline-formula id="inf92">
<mml:math id="m106">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf93">
<mml:math id="m107">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> denote the predictions at location <inline-formula id="inf94">
<mml:math id="m108">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. The general sampling procedure for the posterior distribution in <xref ref-type="disp-formula" rid="e14">Equation 14</xref> is the same as in <xref ref-type="disp-formula" rid="e11">Equation 11</xref>.</p>
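The cross-derivative covariance in Equation 13 has a closed form for common kernels. As a sketch, assuming a 1-D squared-exponential kernel (our choice for illustration; the paper's kernel may differ), the derivative-to-derivative covariance can be written down and checked against a finite-difference approximation:

```python
import numpy as np

def k_se(x, xp, ell=1.0):
    """1-D squared-exponential kernel k(x, x')."""
    return np.exp(-0.5 * ((x - xp) / ell) ** 2)

def dk_dx_dxp(x, xp, ell=1.0):
    """Closed-form d^2 k / (dx dx') for the SE kernel: the covariance
    between derivative observations appearing in Eq. 13."""
    r = (x - xp) / ell
    return (1.0 - r ** 2) / ell ** 2 * k_se(x, xp, ell)

# finite-difference check of the mixed second derivative
x, xp, h = 0.3, 0.8, 1e-5
fd = (k_se(x + h, xp + h) - k_se(x + h, xp - h)
      - k_se(x - h, xp + h) + k_se(x - h, xp - h)) / (4 * h * h)
```

The closed form and the central-difference stencil should agree to several decimal places, which is a quick sanity check when assembling the joint covariance over function values and derivative values.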
</sec>
</sec>
</sec>
<sec id="s3">
<title>3 Theoretical analysis of the method</title>
<p>In this section, we use Bayes&#x2019; theorem to show that QHMC produces a steady-state distribution that approximates the true posterior distribution. We then examine the convergence of the probabilistic approach on the optimization problem outlined in <xref ref-type="disp-formula" rid="e9">Equation 9</xref>.</p>
<sec id="s3-1">
<title>3.1 Convergence of QHMC training</title>
<p><xref ref-type="bibr" rid="B23">Liu and Zhang (2019)</xref> demonstrates that the QHMC framework converges to a steady-state distribution that matches the desired posterior distribution <inline-formula id="inf95">
<mml:math id="m109">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x221d;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>U</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> via Bayes&#x2019; rule. The joint probability density of <inline-formula id="inf96">
<mml:math id="m110">
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>q</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> can be computed via Bayes&#x2019; theorem:<disp-formula id="e15">
<mml:math id="m111">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>q</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>q</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(15)</label>
</disp-formula>The conditional distribution in <xref ref-type="disp-formula" rid="e15">Equation 15</xref> is approximated as follows:<disp-formula id="e16">
<mml:math id="m112">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>q</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x221d;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>U</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>K</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>U</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mi>exp</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(16)</label>
</disp-formula>Then, using <xref ref-type="disp-formula" rid="e16">Equation 16</xref>, <inline-formula id="inf97">
<mml:math id="m113">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> can be written as<disp-formula id="e17">
<mml:math id="m114">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mo>&#x222b;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mo>&#x222b;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi>d</mml:mi>
<mml:mi>q</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>q</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x221d;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>U</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(17)</label>
</disp-formula>
<xref ref-type="disp-formula" rid="e17">Equation 17</xref> shows that the marginal steady-state distribution approaches the true posterior distribution <xref ref-type="bibr" rid="B23">Liu and Zhang (2019)</xref>.</p>
</sec>
<sec id="s3-2">
<title>3.2 Convergence properties of probabilistic approach</title>
<p>In this section, we show that satisfying the constraints on a set of locations <inline-formula id="inf98">
<mml:math id="m115">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> in the domain <inline-formula id="inf99">
<mml:math id="m116">
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> preserves convergence. Recall the following inequality-constrained optimization problem:<disp-formula id="e18">
<mml:math id="m117">
<mml:mrow>
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:msub>
<mml:mrow>
<mml:mi>arg min</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x2212;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">Y</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="1em"/>
<mml:mtext>such&#x2009;that</mml:mtext>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>s</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2265;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mspace width="1em"/>
<mml:mtext>for&#x2009;all</mml:mtext>
<mml:mspace width="1em"/>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1,2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>m</mml:mi>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(18)</label>
</disp-formula>
</p>
<p>It remains to demonstrate that the result obtained using the selected set of input locations, as in <xref ref-type="disp-formula" rid="e18">Equation 18</xref>, converges to the output of the regression model. This convergence ensures that the probabilistic approach eventually yields a model satisfying the desired conditions.</p>
<p>Note that throughout the proof, it is assumed that <inline-formula id="inf100">
<mml:math id="m118">
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is finite. The proof can be constructed in either case, whether the domain is countable or uncountable.</p>
<p>
<list list-type="simple">
<list-item>
<p>(i) Assume that the domain <inline-formula id="inf101">
<mml:math id="m119">
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is a countable set containing <inline-formula id="inf102">
<mml:math id="m120">
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> elements. Then, select a subset <inline-formula id="inf103">
<mml:math id="m121">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2286;</mml:mo>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> with <inline-formula id="inf104">
<mml:math id="m122">
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> points, where <inline-formula id="inf105">
<mml:math id="m123">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. For each point <inline-formula id="inf106">
<mml:math id="m124">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, there exists a Gaussian probability distribution. The set of distributions associated with <inline-formula id="inf107">
<mml:math id="m125">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is denoted as <inline-formula id="inf108">
<mml:math id="m126">
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. For the constraint points <inline-formula id="inf109">
<mml:math id="m127">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, there are <inline-formula id="inf110">
<mml:math id="m128">
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> constraints and their corresponding probability distributions, which can be defined as <inline-formula id="inf111">
<mml:math id="m129">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. Additionally, we introduce a set <inline-formula id="inf112">
<mml:math id="m130">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> such that</p>
</list-item> </list>
<disp-formula id="e19">
<mml:math id="m131">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2254;</mml:mo>
<mml:mfenced open="{" close="}">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">Y</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3c;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(19)</label>
</disp-formula>which covers the locations where the non-negativity constraint is violated. For each fixed <inline-formula id="inf113">
<mml:math id="m132">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, define<disp-formula id="e20">
<mml:math id="m133">
<mml:mrow>
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mi>v</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x2254;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi>P</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">Y</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3c;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>&#x2261;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi>P</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
<mml:mtext>and</mml:mtext>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right">
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x2254;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mi>P</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">Y</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3c;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>&#x2261;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mi>P</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(20)</label>
</disp-formula>
</p>
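<p>To make the definitions in <xref ref-type="disp-formula" rid="e19">Equations 19</xref>, <xref ref-type="disp-formula" rid="e20">20</xref> concrete, the following sketch (an illustration, not part of the original proof) evaluates the infima for a small, hypothetical family of Gaussian distributions at a fixed location: each member assigns a probability to the violation event, and the infimum over a growing subfamily is compared against the infimum over the full family.</p>
<preformat>
```python
from math import erf, sqrt

def phi(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def violation_prob(mu, sigma):
    # Probability that Y falls below 0 for Y distributed N(mu, sigma^2),
    # i.e., the probability that the non-negativity constraint is violated.
    return phi(-mu / sigma)

# Hypothetical family of Gaussians at a fixed x, one per candidate theta.
family = [(0.5, 1.0), (1.0, 0.5), (2.0, 1.0), (0.2, 0.3)]

# v(x): infimum of the violation probability over the full family.
v = min(violation_prob(mu, s) for mu, s in family)

# v_m(x): the same infimum over a subfamily (the first m members).
for m in (1, 2, 4):
    v_m = min(violation_prob(mu, s) for mu, s in family[:m])
    print(m, round(v_m, 4), round(abs(v_m - v), 4))
```
</preformat>
<p>Once the subfamily exhausts the family, the two infima coincide; the remainder of the proof quantifies this convergence for general domains.</p>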
<p>
<list list-type="simple">
<list-item>
<p>(ii) Assume that the domain <inline-formula id="inf114">
<mml:math id="m134">
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is a bounded but uncountable set. In this case, a countable subset <inline-formula id="inf115">
<mml:math id="m135">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> with <inline-formula id="inf116">
<mml:math id="m136">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> can be constructed. The set of probability distributions is defined as in case (i). Since <inline-formula id="inf117">
<mml:math id="m137">
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is finite, the set <inline-formula id="inf118">
<mml:math id="m138">
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
<mml:mo>&#x222a;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">{</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is also finite. Consequently, the sets <inline-formula id="inf119">
<mml:math id="m139">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf120">
<mml:math id="m140">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> can be constructed as in the first case (<xref ref-type="disp-formula" rid="e19">Equations 19</xref>, <xref ref-type="disp-formula" rid="e20">20</xref>). The next steps establish the convergence of <inline-formula id="inf121">
<mml:math id="m141">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> to <inline-formula id="inf122">
<mml:math id="m142">
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> as <inline-formula id="inf123">
<mml:math id="m143">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> converges to <inline-formula id="inf124">
<mml:math id="m144">
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</list-item>
</list>
</p>
<p>First, let us define the distance metrics used throughout the proof. Following the definitions in <xref ref-type="bibr" rid="B14">Guo et al. (2015)</xref>, let<disp-formula id="e21">
<mml:math id="m145">
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2254;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mo stretchy="false">&#x2016;</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">&#x2016;</mml:mo>
</mml:mrow>
</mml:math>
<label>(21)</label>
</disp-formula>denote the distance from a point <inline-formula id="inf125">
<mml:math id="m146">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> to a set <inline-formula id="inf126">
<mml:math id="m147">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. Then, the distance between two compact sets <inline-formula id="inf127">
<mml:math id="m148">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf128">
<mml:math id="m149">
<mml:mrow>
<mml:mi>B</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> can be defined as<disp-formula id="e22">
<mml:math id="m150">
<mml:mrow>
<mml:mi mathvariant="double-struck">D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>B</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2254;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>sup</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi>d</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>B</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(22)</label>
</disp-formula>Then, using the definitions in <xref ref-type="disp-formula" rid="e21">Equations 21</xref>, <xref ref-type="disp-formula" rid="e22">22</xref>, the Hausdorff distance between <inline-formula id="inf129">
<mml:math id="m151">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf130">
<mml:math id="m152">
<mml:mrow>
<mml:mi>B</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is defined as <inline-formula id="inf131">
<mml:math id="m153">
<mml:mrow>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>B</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x2254;</mml:mo>
<mml:mi>max</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">{</mml:mo>
<mml:mrow>
<mml:mi mathvariant="double-struck">D</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>B</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="double-struck">D</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>B</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo stretchy="false">}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. Finally, we define a pseudo-metric <inline-formula id="inf132">
<mml:math id="m154">
<mml:mrow>
<mml:mi mathvariant="bold">d</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> to describe the distance between two probability distributions <inline-formula id="inf133">
<mml:math id="m155">
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf134">
<mml:math id="m156">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> as<disp-formula id="e23">
<mml:math id="m157">
<mml:mrow>
<mml:mi mathvariant="bold">d</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2254;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>sup</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(23)</label>
</disp-formula>where <inline-formula id="inf135">
<mml:math id="m158">
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the domain specified in <xref ref-type="sec" rid="s3-2">Section 3.2</xref>.</p>
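<p>As a quick numerical illustration (a sketch, not from the original article), the point-to-set distance in <xref ref-type="disp-formula" rid="e21">Equation 21</xref>, the one-sided set distance in <xref ref-type="disp-formula" rid="e22">Equation 22</xref>, and the resulting Hausdorff distance can be computed directly for small finite point sets; the sets below are hypothetical.</p>
<preformat>
```python
import numpy as np

def d_point_to_set(x, A):
    # Equation 21: d(x, A) is the infimum over x' in A of the norm of x - x'.
    return min(np.linalg.norm(x - a) for a in A)

def D_set(A, B):
    # Equation 22: D(A, B) is the supremum over x in A of d(x, B).
    return max(d_point_to_set(a, B) for a in A)

def hausdorff(A, B):
    # Hausdorff distance: the maximum of the two directed distances.
    return max(D_set(A, B), D_set(B, A))

# Two hypothetical finite point sets in the plane.
A = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0]), np.array([1.0, 1.0])]
print(hausdorff(A, B))  # 1.0 for these two sets
```
</preformat>
<p>Note that D(A, B) alone is not symmetric; taking the maximum of the two directed distances is what makes the Hausdorff distance a metric on compact sets.</p>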
<p>
<statement content-type="assumption" id="Assumption_1">
<label>Assumption 1</label>
<p>
<italic>Suppose that the probability distributions</italic> <inline-formula id="inf136">
<mml:math id="m159">
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> <italic>and</italic> <inline-formula id="inf137">
<mml:math id="m160">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> <italic>satisfy the following conditions:</italic>
<list list-type="simple">
<list-item>
<p>1. <italic>There exists a weakly compact set</italic> <inline-formula id="inf138">
<mml:math id="m161">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> <italic>such that</italic> <inline-formula id="inf139">
<mml:math id="m162">
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
<mml:mo>&#x2282;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> <italic>and</italic> <inline-formula id="inf140">
<mml:math id="m163">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2282;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</list-item>
<list-item>
<p>2. <inline-formula id="inf141">
<mml:math id="m164">
<mml:mrow>
<mml:munder>
<mml:mrow>
<mml:mi>lim</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi mathvariant="bold">d</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>, with probability 1.</italic>
</p>
</list-item>
<list-item>
<p>3. <inline-formula id="inf142">
<mml:math id="m165">
<mml:mrow>
<mml:munder>
<mml:mrow>
<mml:mi>lim</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi mathvariant="bold">d</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>, with probability 1.</italic>
</p>
</list-item>
</list>
</p>
<p>Now, we show that <xref ref-type="statement" rid="Theorem_1">Theorem 1</xref> holds under the assumptions in <xref ref-type="statement" rid="Assumption_1">Assumption 1</xref>. Recall that we have<disp-formula id="equ1">
<mml:math display="block" id="m166">
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mrow>
<mml:mtext>conv</mml:mtext>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext>conv</mml:mtext>
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mi>max</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close="}" open="{" separators="none">
<mml:mrow>
<mml:mfenced close="|" open="|" separators="none">
<mml:mrow>
<mml:munder>
<mml:mi>sup</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi mathvariant="script">P</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mrow>
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mi>x</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:munder>
<mml:mi>sup</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mrow>
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mi>x</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mspace width="2.5em"/>
<mml:mfenced close="|" open="|" separators="none">
<mml:mrow>
<mml:munder>
<mml:mi>inf</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi mathvariant="script">P</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mrow>
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mi>x</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:munder>
<mml:mi>inf</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mrow>
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mi>x</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>Based on the definition and properties of the Hausdorff distance in <xref ref-type="bibr" rid="B15">Hess (1999)</xref>, we also have <disp-formula id="e24">
<mml:math id="m167">
<mml:mrow>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mtext>conv</mml:mtext>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext>conv</mml:mtext>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2264;</mml:mo>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2264;</mml:mo>
<mml:mi>max</mml:mi>
<mml:mfenced open="{" close="}">
<mml:mrow>
<mml:mi mathvariant="double-struck">D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="double-struck">D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(24)</label>
</disp-formula>Consider the distance between the two sets:<disp-formula id="e25">
<mml:math id="m168">
<mml:mrow>
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mi mathvariant="double-struck">D</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>sup</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:munder>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mo stretchy="false">&#x2016;</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">&#x2016;</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>sup</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:munder>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mo stretchy="false">&#x2016;</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo stretchy="false">&#x2016;</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mo>&#x2264;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>sup</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:munder>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:munder>
<mml:mrow>
<mml:mi>sup</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mo stretchy="false">&#x2016;</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo stretchy="false">&#x2016;</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold">d</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(25)</label>
</disp-formula>where d is defined in <xref ref-type="disp-formula" rid="e23">Equation 23</xref>. Applying the same procedure as in <xref ref-type="disp-formula" rid="e24">Equations 24</xref>, <xref ref-type="disp-formula" rid="e25">25</xref> yields <inline-formula id="inf143">
<mml:math id="m169">
<mml:mrow>
<mml:mi mathvariant="double-struck">D</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x2264;</mml:mo>
<mml:mi mathvariant="bold">d</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. Hence,<disp-formula id="e26">
<mml:math id="m170">
<mml:mrow>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mtext>conv</mml:mtext>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext>conv</mml:mtext>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2264;</mml:mo>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2264;</mml:mo>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(26)</label>
</disp-formula>Consequently, from <xref ref-type="disp-formula" rid="e26">Equation 26</xref>, we obtain<disp-formula id="e27">
<mml:math id="m171">
<mml:mrow>
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>v</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo stretchy="false">&#x7c;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x2264;</mml:mo>
<mml:mfenced open="|" close="|">
<mml:mrow>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mi>P</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi>P</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mo>&#x2264;</mml:mo>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mtext>conv</mml:mtext>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext>conv</mml:mtext>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mo>&#x2264;</mml:mo>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(27)</label>
</disp-formula>
</p>
</statement>
</p>
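<p>The chain of inequalities in <xref ref-type="disp-formula" rid="e27">Equation 27</xref> can be checked numerically in a small, hypothetical setting (a sketch, not part of the original proof): a finite domain, a finite family of Gaussians indexed by the hyperparameters, the pseudo-metric of <xref ref-type="disp-formula" rid="e23">Equation 23</xref>, and the induced Hausdorff distance between the family and a subfamily.</p>
<preformat>
```python
from math import erf, sqrt

def cdf(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p(theta, x):
    # Hypothetical violation probability P(H(x)) at location x for a
    # Gaussian with mean a + b*x and unit variance.
    a, b = theta
    return cdf(-(a + b * x))

xs = [0.0, 0.5, 1.0]                                         # finite domain
thetas = [(0.5, 1.0), (1.0, -0.5), (0.2, 0.1), (2.0, 0.0)]   # full family

def pseudo_d(t1, t2):
    # Equation 23: supremum over x of the difference in violation probabilities.
    return max(abs(p(t1, x) - p(t2, x)) for x in xs)

def D(A, B):
    # Equation 22 with the pseudo-metric in place of the norm.
    return max(min(pseudo_d(a, b) for b in B) for a in A)

# v(x): infimum over the full family at each location.
v = {x: min(p(t, x) for t in thetas) for x in xs}

for m in (1, 2, 4):
    sub = thetas[:m]                                          # subfamily
    v_m = {x: min(p(t, x) for t in sub) for x in xs}
    gap = max(abs(v_m[x] - v[x]) for x in xs)
    bound = max(D(thetas, sub), D(sub, thetas))               # Hausdorff bound
    # Equation 27: the gap never exceeds the Hausdorff bound.
    assert gap == min(gap, bound + 1e-12)
    print(m, round(gap, 4), round(bound, 4))
```
</preformat>
<p>In every case the gap between the two infima stays within the Hausdorff bound, which is exactly the relationship Equation 27 asserts.</p>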
<p>
<statement content-type="theorem" id="Theorem_1">
<label>Theorem 1</label>
<p>
<inline-formula id="inf144">
<mml:math id="m172">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> <italic>converges to</italic> <inline-formula id="inf145">
<mml:math id="m173">
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> <italic>as</italic> <inline-formula id="inf146">
<mml:math id="m174">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> <italic>converges to</italic> <inline-formula id="inf147">
<mml:math id="m175">
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>, that is</italic>
<disp-formula id="equ2">
<mml:math id="m176">
<mml:mrow>
<mml:munder>
<mml:mrow>
<mml:mi>lim</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:munder>
<mml:munder>
<mml:mrow>
<mml:mi>sup</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>v</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>Proof. Let us assume that <inline-formula id="inf148">
<mml:math id="m177">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is fixed, and define<disp-formula id="e28">
<mml:math id="m178">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo>&#x2254;</mml:mo>
<mml:mfenced open="{" close="}">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>:</mml:mo>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mtext>cl</mml:mtext>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
<mml:mtext>and</mml:mtext>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2254;</mml:mo>
<mml:mfenced open="{" close="}">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>:</mml:mo>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mtext>cl</mml:mtext>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(28)</label>
</disp-formula>where cl represents the closure. Note that both <inline-formula id="inf149">
<mml:math id="m179">
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf150">
<mml:math id="m180">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are bounded subsets in <inline-formula id="inf151">
<mml:math id="m181">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="double-struck">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. Let us define <inline-formula id="inf152">
<mml:math id="m182">
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf153">
<mml:math id="m183">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> such that<disp-formula id="e29">
<mml:math id="m184">
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mo>&#x2254;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
<mml:mi>b</mml:mi>
<mml:mo>&#x2254;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>sup</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2254;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
<mml:msub>
<mml:mrow>
<mml:mi>b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2254;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>sup</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mi>v</mml:mi>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(29)</label>
</disp-formula>
</p>
<p>The Hausdorff distance between convex hulls (conv) of the sets <inline-formula id="inf154">
<mml:math id="m185">
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf155">
<mml:math id="m186">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is calculated as in <xref ref-type="bibr" rid="B15">Hess (1999)</xref>:<disp-formula id="e30">
<mml:math id="m187">
<mml:mrow>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mtext>conv</mml:mtext>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext>conv</mml:mtext>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>max</mml:mi>
<mml:mfenced open="{" close="}">
<mml:mrow>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">&#x7c;</mml:mo>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(30)</label>
</disp-formula>Since we know that<disp-formula id="e31">
<mml:math id="m188">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>sup</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mi>v</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>sup</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
<mml:mtext>and</mml:mtext>
<mml:mspace width="1em"/>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mi>v</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mi>inf</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(31)</label>
</disp-formula>combining the result in <xref ref-type="disp-formula" rid="e27">Equation 27</xref>, and the definitions in <xref ref-type="disp-formula" rid="e28">Equations 28</xref>&#x2013;<xref ref-type="disp-formula" rid="e31">31</xref>
<disp-formula id="e32">
<mml:math display="block" id="m189">
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mrow>
<mml:mtext>conv</mml:mtext>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext>conv</mml:mtext>
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mi>max</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close="}" open="{" separators="none">
<mml:mrow>
<mml:mfenced close="|" open="|" separators="none">
<mml:mrow>
<mml:munder>
<mml:mi>sup</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi mathvariant="script">P</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mrow>
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mi>x</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:munder>
<mml:mi>sup</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mrow>
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mi>x</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mspace width="2.5em"/>
<mml:mfenced close="|" open="|" separators="none">
<mml:mrow>
<mml:munder>
<mml:mi>inf</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi mathvariant="script">P</mml:mi>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mrow>
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mi>x</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:munder>
<mml:mi>inf</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mrow>
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mfenced close=")" open="(" separators="none">
<mml:mi>x</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(32)</label>
</disp-formula>Based on <xref ref-type="disp-formula" rid="e32">Equation 32</xref> and the definition and properties of the Hausdorff distance <xref ref-type="bibr" rid="B15">Hess (1999)</xref>, we have<disp-formula id="e33">
<mml:math id="m190">
<mml:mrow>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mtext>conv</mml:mtext>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext>conv</mml:mtext>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2264;</mml:mo>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(33)</label>
</disp-formula>The inequality in <xref ref-type="disp-formula" rid="e33">Equation 33</xref> yields <xref ref-type="bibr" rid="B14">Guo et al. (2015)</xref>:<disp-formula id="e34">
<mml:math id="m191">
<mml:mrow>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>v</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mo>&#x2264;</mml:mo>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2264;</mml:mo>
<mml:mi mathvariant="double-struck">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(34)</label>
</disp-formula>
</p>
<p>In this setting, <inline-formula id="inf156">
<mml:math id="m192">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> can be any point in <inline-formula id="inf157">
<mml:math id="m193">
<mml:mrow>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, and the right hand side of the inequality is independent of <inline-formula id="inf158">
<mml:math id="m194">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. The proof can be completed by taking the supremum of each side in <xref ref-type="disp-formula" rid="e34">Equation 34</xref> with respect to <inline-formula id="inf159">
<mml:math id="m195">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> <xref ref-type="bibr" rid="B14">Guo et al. (2015)</xref>.</p>
</statement>
</p>
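For bounded one-dimensional sets, the Hausdorff distance between convex hulls in Equation 30 reduces to a maximum of endpoint gaps, which makes the bound easy to check numerically. A minimal illustrative sketch (the helper name is hypothetical, not from the paper):

```python
def hausdorff_1d(V, Vm):
    """Hausdorff distance between the convex hulls of two bounded
    1-D point sets, following Eq. 30: max(|b_m - b|, |a - a_m|)."""
    a, b = min(V), max(V)        # infimum and supremum of V
    am, bm = min(Vm), max(Vm)    # infimum and supremum of V_m
    return max(abs(bm - b), abs(a - am))
```

As the sets' endpoints converge, this distance goes to zero, which is the mechanism behind the convergence claim of Theorem 1.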
</sec>
</sec>
<sec id="s4">
<title>4 Numerical examples</title>
<p>In this section, we evaluate the performance of the proposed algorithms on various examples including synthetic and real data. The evaluations consider the size and dimension of the datasets. Several versions of the QHMC algorithm are introduced and compared, differing in how constraint-point locations are selected and in whether the probabilistic (soft-constraint) approach is used.</p>
<p>Rather than randomly locating <inline-formula id="inf160">
<mml:math id="m196">
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> constraint points, the algorithm starts with an empty constraint set and determines the locations of the constraint points one by one adaptively. Throughout this process, various strategies are employed to add constraints. The specific approaches are outlined as follows:<list list-type="simple">
<list-item>
<p>1. Constraint-adaptive approach: While constructing the set of constraint points, this approach evaluates whether the constraint is satisfied at a given location. The function value at that point is calculated, and if the constraint is found to be violated, a constraint point is added to indicate this violation. This helps track areas where the constraint is not met and allows for adjustments to be made accordingly.</p>
</list-item>
<list-item>
<p>2. Variance-adaptive approach: This approach calculates the prediction variance in the test set. Constraint points are identified at the positions where the variance values are highest. As outlined in <xref ref-type="statement" rid="Algorithm_2">Algorithm 2</xref>, new constraints are located at the maxima of <inline-formula id="inf161">
<mml:math id="m197">
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. The goal here is basically to reduce the variance in predictions and increase the stability.</p>
</list-item>
<list-item>
<p>3. Combination of constraint and variance adaptation: In this approach, a threshold value (e.g., 0.20) is set for the variance, and the algorithm places constraint points at the locations where the highest prediction variance is observed. Once the variance drops to the threshold value, the algorithm switches to the first strategy, placing constraint points where violations occur.</p>
</list-item>
</list>
</p>
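The three strategies above can be sketched as a single selection routine. This is a hypothetical illustration only: the function name, the candidate-set interface, and the worst-violation tie-break are assumptions, not the authors' implementation.

```python
import numpy as np

def select_constraint_point(x_cand, mean, var, violated, strategy, var_threshold=0.20):
    """Pick the next constraint-point location from candidate locations x_cand.

    mean, var : GP posterior mean and variance at the candidates
    violated  : boolean mask of candidates where the constraint fails
    strategy  : 'constraint', 'variance', or 'both'
    """
    if strategy == 'both':
        # variance-driven until the variance falls to the threshold,
        # then switch to the constraint-driven strategy
        strategy = 'variance' if var.max() > var_threshold else 'constraint'
    if strategy == 'variance':
        return x_cand[np.argmax(var)]          # highest predictive variance
    # constraint-adaptive: among violating candidates, take the worst violation
    idx = np.where(violated)[0]
    return x_cand[idx[np.argmin(mean[idx])]] if idx.size else None
```

Each call returns one new constraint location, so the constraint set grows one point at a time as described above.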
<p>We represent the constraint-adaptive, hard-constrained approach as QHMCad and its soft-constrained counterpart as QHMCsoftad. Similarly, QHMCvar refers to the method focusing on variance, while QHMCsoftvar corresponds to its soft-constrained version. The combination of the two approaches with hard and soft constraints is denoted by QHMCboth and QHMCsoftboth, respectively. For the sake of comparison, truncated Gaussian algorithms using an HMC sampler (tnHMC) and a QHMC sampler (tnQHMC) are implemented for inequality-constrained examples, while the additive GP (additiveGP) algorithm is adapted for monotonicity-constrained examples.</p>
<p>For the synthetic examples, the time and accuracy performances of the algorithms are evaluated while simultaneously changing the dataset size and noise level in the data. Following <xref ref-type="bibr" rid="B30">Pensoneault et al. (2020)</xref>, as our metric, we calculate the relative <inline-formula id="inf162">
<mml:math id="m198">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>l</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> error between the posterior mean <inline-formula id="inf163">
<mml:math id="m199">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> and the true value of the target function <inline-formula id="inf164">
<mml:math id="m200">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> on a set of test points <inline-formula id="inf165">
<mml:math id="m201">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">X</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">{</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo stretchy="false">}</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>:<disp-formula id="e35">
<mml:math id="m202">
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munderover>
</mml:mstyle>
<mml:msup>
<mml:mrow>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>f</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:munderover>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:munderover>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munderover>
</mml:mstyle>
<mml:mi>f</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msqrt>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(35)</label>
</disp-formula>
</p>
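The error metric of Equation 35 is a relative l2 norm over the test set; a minimal NumPy transcription (illustrative only; the paper's experiments are run in MATLAB):

```python
import numpy as np

def relative_l2_error(y_pred, f_true):
    """Relative l2 error of Eq. 35: ||y* - f||_2 / ||f||_2 over the test points."""
    y_pred, f_true = np.asarray(y_pred), np.asarray(f_true)
    return np.sqrt(np.sum((y_pred - f_true) ** 2) / np.sum(f_true ** 2))
```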
<p>We solve the log-likelihood minimization problem in MATLAB, using the GPML package <xref ref-type="bibr" rid="B34">Rasmussen and Nickisch (2010)</xref>. For the constrained optimization, we use fmincon from the MATLAB Optimization Toolbox with its built-in interior-point algorithm. Additionally, to highlight the advantage of QHMC over HMC, the proposed approach is also implemented using the standard HMC procedure. The relative error (calculated as in <xref ref-type="disp-formula" rid="e35">Equation 35</xref>), posterior variance, and execution time of each version of the QHMC and HMC algorithms are presented.</p>
<sec id="s4-1">
<title>4.1 Inequality constraints</title>
<p>This section provides two synthetic examples and two real-life application examples to demonstrate the effectiveness of the QHMC algorithms on inequality constraints. The synthetic examples compare the performance of the QHMC approach with truncated Gaussian methods on a 2-dimensional and a 10-dimensional problem. For the 2-dimensional example, the primary focus is on enforcing non-negativity constraints within the GP model. In the case of the 10-dimensional example, we generalize our analysis to satisfy a different inequality constraint and evaluate the performances of the truncated Gaussian, QHMC, and soft-QHMC methods. The third example considers conservative transport in a steady-state velocity field in heterogeneous porous media. Despite being a two-dimensional problem, the non-homogeneous structure of the solute concentration introduces complexity and increases the level of difficulty. The last example is a 3-dimensional heat transfer problem in a hollow sphere.</p>
<sec id="s4-1-1">
<title>4.1.1 Example 1</title>
<p>Consider the following 2D function under non-negativity constraints:<disp-formula id="e36">
<mml:math id="m203">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>arctan</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mn>5</mml:mn>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>arctan</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(36)</label>
</disp-formula>where <inline-formula id="inf166">
<mml:math id="m204">
<mml:mrow>
<mml:mo stretchy="false">{</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">}</mml:mo>
</mml:mrow>
<mml:mo>&#x2208;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">[</mml:mo>
<mml:mrow>
<mml:mn>0,1</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:math>
</inline-formula>. In this example, the GP model is trained via QHMC over 20 randomly selected locations.</p>
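For reference, the training setup can be reproduced in a few lines. This is a hypothetical sketch (the random seed and NumPy usage are assumptions; the paper's experiments use MATLAB/GPML):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # 2D test function of Eq. 36; non-negative everywhere on [0, 1]^2
    return np.arctan(5 * x[..., 0]) + np.arctan(x[..., 1])

X_train = rng.random((20, 2))   # 20 randomly selected locations in [0, 1]^2
y_train = f(X_train)
assert np.all(y_train >= 0)     # the non-negativity constraint holds on the domain
```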
<p>In <xref ref-type="table" rid="T1">Table 1</xref>, the comparison between QHMC and HMC algorithms with a dataset size of 200 is presented. The relative error values indicate that QHMC yields approximately <inline-formula id="inf172">
<mml:math id="m210">
<mml:mrow>
<mml:mn>20</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> more accurate results than HMC, and it achieves this with a shorter processing time. Consequently, QHMC demonstrates both higher accuracy and efficiency compared to HMC.</p>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Comparison of QHMC and HMC on 2D, inequality.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Method</th>
<th align="center">Error</th>
<th align="center">Posterior Var</th>
<th align="center">Time</th>
<th align="left">Method</th>
<th align="center">Error</th>
<th align="center">Posterior Var</th>
<th align="center">Time</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">QHMC-ad</td>
<td align="center">0.10</td>
<td align="center">0.14</td>
<td align="center">46 s</td>
<td align="left">HMC-ad</td>
<td align="center">0.12</td>
<td align="center">0.17</td>
<td align="center">52 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-ad</td>
<td align="center">0.11</td>
<td align="center">0.16</td>
<td align="center">39 s</td>
<td align="left">HMC-soft-ad</td>
<td align="center">0.13</td>
<td align="center">0.19</td>
<td align="center">48 s</td>
</tr>
<tr>
<td align="left">QHMC-var</td>
<td align="center">0.11</td>
<td align="center">0.12</td>
<td align="center">40 s</td>
<td align="left">HMC-var</td>
<td align="center">0.13</td>
<td align="center">0.14</td>
<td align="center">46 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-var</td>
<td align="center">0.12</td>
<td align="center">0.15</td>
<td align="center">34 s</td>
<td align="left">HMC-soft-var</td>
<td align="center">0.15</td>
<td align="center">0.14</td>
<td align="center">42 s</td>
</tr>
<tr>
<td align="left">QHMC-both</td>
<td align="center">
<bold>0.08</bold>
</td>
<td align="center">0.13</td>
<td align="center">48 s</td>
<td align="left">HMC-both</td>
<td align="center">0.10</td>
<td align="center">0.14</td>
<td align="center">53 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-both</td>
<td align="center">0.09</td>
<td align="center">0.13</td>
<td align="center">39 s</td>
<td align="left">HMC-soft-both</td>
<td align="center">0.12</td>
<td align="center">0.15</td>
<td align="center">44 s</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>
<xref ref-type="fig" rid="F1">Figure 1</xref> presents the relative error values of the algorithms with respect to two parameters: the dataset size and the signal-to-noise ratio (SNR). In the noise-free case, the most accurate results are provided by the QHMCboth and tnQHMC algorithms, with around <inline-formula id="inf167">
<mml:math id="m205">
<mml:mrow>
<mml:mn>10</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> relative error. However, upon introducing noise to the data and increasing its magnitude, a distinct pattern emerges. The QHMC methods exhibit relative error values of approximately <inline-formula id="inf168">
<mml:math id="m206">
<mml:mrow>
<mml:mn>15</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> within the SNR range of <inline-formula id="inf169">
<mml:math id="m207">
<mml:mrow>
<mml:mn>15</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> to <inline-formula id="inf170">
<mml:math id="m208">
<mml:mrow>
<mml:mn>20</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. In contrast, the relative error of the truncated Gaussian methods increases to <inline-formula id="inf171">
<mml:math id="m209">
<mml:mrow>
<mml:mn>25</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> within the same noise range. This pattern demonstrates that QHMC methods can tolerate noise and maintain higher accuracy under these conditions.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Relative error of the algorithms with different SNR and data sizes for Example 1 (2D), inequality.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g001.tif"/>
</fig>
<p>Further, we compare the time performances of the algorithms in <xref ref-type="fig" rid="F2">Figure 2</xref>, which demonstrates that the QHMC methods, especially the probabilistic QHMC approaches, run much faster than the truncated Gaussian methods. In this simple 2D problem in <xref ref-type="disp-formula" rid="e36">Equation 36</xref>, the presence of noise does not significantly impact the running times of the QHMC algorithms. In contrast, the truncated Gaussian algorithms are slower in a noisy environment even when the dataset size is small. Additionally, it can be observed in <xref ref-type="fig" rid="F3">Figure 3</xref> that the QHMC algorithms, especially QHMCvar and QHMCboth, are the most robust, as their small relative error comes with a small posterior variance. In contrast, the posterior variances of the truncated Gaussian methods are higher than those of the QHMC methods even when there is no noise, and they grow along with the relative error (see <xref ref-type="fig" rid="F1">Figure 1</xref>) as the SNR levels increase. Taken together, these experiments indicate that the QHMC methods achieve higher accuracy within a shorter time frame; they are therefore more efficient and robust, as they effectively tolerate changes in the parameters. Finally, it is worth noting that a slight improvement over tnHMC is achieved by implementing tnQHMC, which not only yields higher accuracy but also saves some computational time.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>Execution times (in seconds) of the algorithms with different SNR and data sizes for Example 1 (2D), inequality.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g002.tif"/>
</fig>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Posterior variances of the algorithms with different SNR and data sizes for Example 1 (2D), inequality.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g003.tif"/>
</fig>
</sec>
<sec id="s4-1-2">
<title>4.1.2 Example 2</title>
<p>Consider the 10D Ackley function <xref ref-type="bibr" rid="B10">Eriksson and Poloczek (2021)</xref> defined as follows:<disp-formula id="e37">
<mml:math id="m211">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>b</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:munderover>
</mml:mstyle>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:munderover>
</mml:mstyle>
<mml:mi>cos</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>c</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(37)</label>
</disp-formula>where <inline-formula id="inf173">
<mml:math id="m212">
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf174">
<mml:math id="m213">
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>20</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf175">
<mml:math id="m214">
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf176">
<mml:math id="m215">
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. We study the performance of the algorithms on the domain <inline-formula id="inf177">
<mml:math id="m216">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">[</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>10,10</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> while constraining the function in <xref ref-type="disp-formula" rid="e37">Equation 37</xref> to be greater than 5. <xref ref-type="table" rid="T2">Table 2</xref> compares the QHMC and HMC algorithms with 200 data points. As in the previous example, the results indicate that QHMC yields approximately 20% more accurate results than HMC in a shorter amount of time.</p>
<table-wrap id="T2" position="float">
<label>TABLE 2</label>
<caption>
<p>Comparison of QHMC and HMC on 10D, inequality.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Method</th>
<th align="center">Error</th>
<th align="center">Posterior Var</th>
<th align="center">Time</th>
<th align="left">Method</th>
<th align="center">Error</th>
<th align="center">Posterior Var</th>
<th align="center">Time</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">QHMC-ad</td>
<td align="center">0.10</td>
<td align="center">0.13</td>
<td align="center">39 m 17 s</td>
<td align="left">HMC-ad</td>
<td align="center">0.12</td>
<td align="center">0.15</td>
<td align="center">43 m 33 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-ad</td>
<td align="center">0.11</td>
<td align="center">0.14</td>
<td align="center">36 m 21 s</td>
<td align="left">HMC-soft-ad</td>
<td align="center">0.13</td>
<td align="center">0.15</td>
<td align="center">41 m 10 s</td>
</tr>
<tr>
<td align="left">QHMC-var</td>
<td align="center">0.11</td>
<td align="center">0.11</td>
<td align="center">37 m 4 s</td>
<td align="left">HMC-var</td>
<td align="center">0.13</td>
<td align="center">0.12</td>
<td align="center">41 m 31 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-var</td>
<td align="center">0.12</td>
<td align="center">0.11</td>
<td align="center">34 m 23 s</td>
<td align="left">HMC-soft-var</td>
<td align="center">0.14</td>
<td align="center">0.12</td>
<td align="center">37 m 42 s</td>
</tr>
<tr>
<td align="left">QHMC-both</td>
<td align="center">
<bold>0.09</bold>
</td>
<td align="center">0.12</td>
<td align="center">40 m 8 s</td>
<td align="left">HMC-both</td>
<td align="center">0.10</td>
<td align="center">0.14</td>
<td align="center">44 m 23 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-both</td>
<td align="center">0.10</td>
<td align="center">0.12</td>
<td align="center">37 m 53 s</td>
<td align="left">HMC-soft-both</td>
<td align="center">0.12</td>
<td align="center">0.14</td>
<td align="center">42 m 5 s</td>
</tr>
</tbody>
</table>
</table-wrap>
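For reproducibility, the 10D Ackley objective of Equation 37 can be evaluated with a short NumPy sketch; the function name and the random sampling of the domain are illustrative choices, not the authors' code:

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    # Ackley function with the parameter values used in Example 2;
    # its global minimum is f(0) = 0.
    x = np.asarray(x, dtype=float)
    term1 = -a * np.exp(-b * np.sqrt(np.mean(x ** 2)))
    term2 = -np.exp(np.mean(np.cos(c * x)))
    return term1 + term2 + a + np.e

# Example 2 samples the domain [-10, 10]^10 and enforces f(x) >= 5.
rng = np.random.default_rng(0)
x = rng.uniform(-10.0, 10.0, size=10)
value = ackley(x)
```

The soft inequality constraint then penalizes posterior samples at locations where the constraint value falls below 5.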
<p>
<xref ref-type="fig" rid="F4">Figure 4</xref> illustrates that QHMCboth, QHMCsoftboth and the truncated Gaussian algorithms yield the lowest error when there is no noise in the data. However, as the noise level increases, the truncated Gaussian methods fall behind all QHMC approaches. Specifically, both the QHMCboth and QHMCsoftboth algorithms tolerate noise levels up to <inline-formula id="inf178">
<mml:math id="m217">
<mml:mrow>
<mml:mn>15</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> with an associated relative error of approximately <inline-formula id="inf179">
<mml:math id="m218">
<mml:mrow>
<mml:mn>15</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. The other QHMC variants, however, display greater noise tolerance when dealing with larger datasets. With fewer than 100 data points, the error rate reaches around <inline-formula id="inf180">
<mml:math id="m219">
<mml:mrow>
<mml:mn>25</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, but it decreases to <inline-formula id="inf181">
<mml:math id="m220">
<mml:mrow>
<mml:mn>15</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>20</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> when the number of data points exceeds 100.</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>Relative error of the algorithms with different SNR and data sizes for Example 2 (10D), inequality.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g004.tif"/>
</fig>
<p>
<xref ref-type="fig" rid="F5">Figure 5</xref> illustrates the time comparison of the algorithms, where QHMC methods provide around <inline-formula id="inf182">
<mml:math id="m221">
<mml:mrow>
<mml:mn>30</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>35</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> in time savings for datasets larger than 150 points. Combining this time advantage with the higher accuracy of QHMC indicates that both the soft- and hard-constrained QHMC algorithms outperform the truncated Gaussian methods across various criteria. The QHMC methods also offer the flexibility to choose an algorithm according to the priority of the experiment: if speed is the primary consideration, QHMCsoftvar is the fastest method while maintaining a good level of accuracy; if accuracy is the most important metric, QHMCboth is the wiser choice, as it still offers significant time savings compared to the other methods.</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>Execution times (in minutes) of the algorithms with different SNR and data sizes for Example 2 (10D), inequality.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g005.tif"/>
</fig>
<p>
<xref ref-type="fig" rid="F6">Figure 6</xref> shows that the posterior variances of the truncated Gaussian methods are significantly higher than those of the QHMC algorithms, especially when the noise levels exceed <inline-formula id="inf183">
<mml:math id="m222">
<mml:mrow>
<mml:mn>5</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. As expected, the QHMCvar and QHMCsoftvar algorithms offer the lowest variance, followed by QHMCboth and QHMCsoftboth. A clear pattern emerges in the figure: the QHMC approaches tolerate higher noise levels, especially when the dataset is large. Notably, the efficiency advantage of our method grows with the dimension. Comparing this 10D example to the 2D case, the execution times of the truncated Gaussian methods are strongly affected by the dimension, even in the absence of noise. Although their relative error can remain low without noise, they take about 1.5 times longer than the QHMC methods to reach that accuracy. Moreover, this observation holds only for noise-free data: as soon as noise is present, the accuracy of the truncated Gaussian methods deteriorates, whereas the QHMC methods withstand the noise and yield good results in a shorter time span. In all tables, bold values indicate the best performance in each metric.</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>Posterior variances of the algorithms with different SNR and data sizes for Example 2 (10D), inequality.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g006.tif"/>
</fig>
</sec>
<sec id="s4-1-3">
<title>4.1.3 Example 3: solute transport in heterogeneous porous media</title>
<p>Following the example in <xref ref-type="bibr" rid="B44">Yang et al. (2019)</xref>, we examine conservative transport within a constant velocity field in heterogeneous porous media. Let us denote the solute concentration by <inline-formula id="inf184">
<mml:math id="m223">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mtext>T</mml:mtext>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, and suppose that the measurements of <inline-formula id="inf185">
<mml:math id="m224">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> are available at various locations and times. Conservation laws can be used to describe the flow and transport processes. Specifically, the flow is described by the Darcy equation <xref ref-type="bibr" rid="B44">Yang et al. (2019)</xref>:<disp-formula id="e38">
<mml:math id="m225">
<mml:mrow>
<mml:mfenced open="{" close="">
<mml:mrow>
<mml:mtable class="cases">
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mi>&#x2207;</mml:mi>
<mml:mo>&#x22c5;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>K</mml:mi>
<mml:mi>&#x2207;</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="double-struck">D</mml:mi>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi mathvariant="bold">n</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi>y</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2009;or&#x2009;</mml:mtext>
<mml:mi>y</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mi>h</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>H</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi>x</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mi>h</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>H</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi>x</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
<label>(38)</label>
</disp-formula>where <inline-formula id="inf186">
<mml:math id="m226">
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the hydraulic head, <inline-formula id="inf187">
<mml:math id="m227">
<mml:mrow>
<mml:mi mathvariant="double-struck">D</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">[</mml:mo>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">]</mml:mo>
</mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">[</mml:mo>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the simulation domain with <inline-formula id="inf188">
<mml:math id="m228">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>256</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf189">
<mml:math id="m229">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>128</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf190">
<mml:math id="m230">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>H</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf191">
<mml:math id="m231">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>H</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are known boundary head values and <inline-formula id="inf192">
<mml:math id="m232">
<mml:mrow>
<mml:mi>K</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the unknown hydraulic conductivity field. The field is represented as a stochastic process, with the distribution of values described by a log-normal distribution. Specifically, it is expressed as <inline-formula id="inf193">
<mml:math id="m233">
<mml:mrow>
<mml:mi>K</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>Z</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, where <italic>Z</italic>(<bold>x</bold>, <italic>w</italic>) is a second-order stationary GP with a known exponential covariance function, <inline-formula id="inf194">
<mml:math id="m234">
<mml:mrow>
<mml:mtext>Cov</mml:mtext>
<mml:mrow>
<mml:mo stretchy="false">{</mml:mo>
<mml:mrow>
<mml:mi>Z</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>Z</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo stretchy="false">}</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>Z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">&#x7c;</mml:mo>
<mml:mo>/</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>l</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, with variance <inline-formula id="inf195">
<mml:math id="m235">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>Z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and correlation length <inline-formula id="inf196">
<mml:math id="m236">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>l</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. The solute transport is described by the advection-dispersion equation <xref ref-type="bibr" rid="B9">Emmanuel and Berkowitz (2005)</xref>, <xref ref-type="bibr" rid="B22">Lin and Tartakovsky (2009)</xref>, <xref ref-type="bibr" rid="B44">Yang et al. (2019)</xref>:<disp-formula id="e39">
<mml:math id="m237">
<mml:mrow>
<mml:mfenced open="{" close="">
<mml:mrow>
<mml:mtable class="cases">
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x2207;</mml:mi>
<mml:mo>&#x22c5;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi mathvariant="bold">C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>&#x2207;</mml:mi>
<mml:mo>&#x22c5;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
<mml:mo stretchy="false">&#x2016;</mml:mo>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mo stretchy="false">&#x2016;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>&#x2207;</mml:mi>
<mml:mi>C</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mtext>&#x2009;in&#x2009;</mml:mtext>
<mml:mi mathvariant="double-struck">D</mml:mi>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mi>C</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>Q</mml:mi>
<mml:mi>&#x3b4;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi>t</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi mathvariant="bold">n</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi>y</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2009;or&#x2009;</mml:mtext>
<mml:mi>y</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mtext>&#x2009;or&#x2009;</mml:mtext>
<mml:mi>x</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mi>C</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi>x</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
<label>(39)</label>
</disp-formula>
</p>
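To illustrate the prior on the conductivity field, the sketch below draws one realization of K(x) = exp(Z(x)) with the stated exponential covariance (variance 2, correlation length 5) on a coarse 1D slice of the domain; the grid resolution and random seed are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

# Exponential covariance of the log-conductivity Z, as given above.
sigma2_z, l_z = 2.0, 5.0

# Coarse 1D slice of [0, L1]; a full 2D field works the same way using
# pairwise Euclidean distances between grid points.
xs = np.linspace(0.0, 256.0, 128)
dist = np.abs(xs[:, None] - xs[None, :])
cov = sigma2_z * np.exp(-dist / l_z)

# One realization of Z ~ GP(0, cov); K = exp(Z) is log-normal and hence
# strictly positive, as a hydraulic conductivity field must be.
rng = np.random.default_rng(0)
z = rng.multivariate_normal(np.zeros(xs.size), cov)
K = np.exp(z)
```

Positivity of K is automatic under this log-normal parameterization; the nonnegativity constraint in this example instead targets the solute concentration C.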
<p>In this context, <inline-formula id="inf197">
<mml:math id="m238">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>;</mml:mo>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> represents the solute concentration defined over <inline-formula id="inf198">
<mml:math id="m239">
<mml:mrow>
<mml:mi mathvariant="double-struck">D</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">[</mml:mo>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">]</mml:mo>
</mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:mi mathvariant="normal">&#x3a9;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf199">
<mml:math id="m240">
<mml:mrow>
<mml:mi mathvariant="bold">v</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> denotes the fluid velocity given by <inline-formula id="inf200">
<mml:math id="m241">
<mml:mrow>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>;</mml:mo>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>K</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>;</mml:mo>
<mml:mi>&#x3c9;</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mi>&#x2207;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c9;</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>/</mml:mo>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> with <inline-formula id="inf201">
<mml:math id="m242">
<mml:mrow>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> being porosity; <inline-formula id="inf202">
<mml:math id="m243">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the diffusion coefficient, <inline-formula id="inf203">
<mml:math id="m244">
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> stands for the tortuosity, and <inline-formula id="inf204">
<mml:math id="m245">
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the dispersivity tensor, with diagonal components <inline-formula id="inf205">
<mml:math id="m246">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf206">
<mml:math id="m247">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. In this study, the transport parameters for <xref ref-type="disp-formula" rid="e38">Equations 38</xref>, <xref ref-type="disp-formula" rid="e39">39</xref> are defined as: <inline-formula id="inf207">
<mml:math id="m248">
<mml:mrow>
<mml:mi>&#x3d5;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.317</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c4;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2.5</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf208">
<mml:math id="m249">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.5</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. Lastly, the solute is instantaneously injected at <inline-formula id="inf209">
<mml:math id="m250">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>50,64</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> at <inline-formula id="inf210">
<mml:math id="m251">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> with the intensity <inline-formula id="inf211">
<mml:math id="m252">
<mml:mrow>
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> <xref ref-type="bibr" rid="B44">Yang et al. (2019)</xref>. In <xref ref-type="fig" rid="F7">Figure 7</xref>, the ground truth with observation and constraint locations is presented to provide insight into the structure of the solute concentration.</p>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption>
<p>Observation locations (black squares) and constraint locations (black stars).</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g007.tif"/>
</fig>
<p>
<xref ref-type="table" rid="T3">Table 3</xref> presents a comparison of all versions of QHMC and HMC methods, along with the truncated Gaussian algorithms. Similar to the results observed with synthetic examples, the QHMCboth, QHMCsoftboth, and tnQHMC algorithms demonstrate the most accurate predictions with a relative error of <inline-formula id="inf212">
<mml:math id="m253">
<mml:mrow>
<mml:mn>13</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>15</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. Notably, QHMCsoftboth emerges as the fastest among the methods while achieving high accuracy. For instance, the error obtained by QHMCsoftboth is 0.14, whereas tnQHMC&#x2019;s error is 0.15; moreover, QHMCsoftboth delivers a <inline-formula id="inf213">
<mml:math id="m254">
<mml:mrow>
<mml:mn>20</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> time efficiency gain with slightly better accuracy. In <xref ref-type="fig" rid="F8">Figure 8</xref>, a comprehensive comparison of the algorithms is presented. The relative error decreases noticeably as constraints are gradually added, following the adopted adaptive approach: initially the error is 0.5, and it gradually decreases to approximately 0.13. Furthermore, the QHMCboth and QHMCsoftboth methods consistently deliver the most accurate results at each step, whereas the QHMCsoftvar method is outperformed by the other approaches.</p>
<table-wrap id="T3" position="float">
<label>TABLE 3</label>
<caption>
<p>Comparison of QHMC and HMC on solute transport with nonnegativity.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Method</th>
<th align="center">Error</th>
<th align="center">Posterior Var</th>
<th align="center">Time</th>
<th align="left">Method</th>
<th align="center">Error</th>
<th align="center">Posterior Var</th>
<th align="center">Time</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">QHMC-ad</td>
<td align="center">0.18</td>
<td align="center">0.13</td>
<td align="center">83 s</td>
<td align="left">HMC-ad</td>
<td align="center">0.20</td>
<td align="center">0.14</td>
<td align="center">89 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-ad</td>
<td align="center">0.19</td>
<td align="center">0.13</td>
<td align="center">75 s</td>
<td align="left">HMC-soft-ad</td>
<td align="center">0.22</td>
<td align="center">0.15</td>
<td align="center">83 s</td>
</tr>
<tr>
<td align="left">QHMC-var</td>
<td align="center">0.20</td>
<td align="center">0.12</td>
<td align="center">80 s</td>
<td align="left">HMC-var</td>
<td align="center">0.23</td>
<td align="center">0.13</td>
<td align="center">91 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-var</td>
<td align="center">0.21</td>
<td align="center">0.13</td>
<td align="center">71 s</td>
<td align="left">HMC-soft-var</td>
<td align="center">0.24</td>
<td align="center">0.14</td>
<td align="center">79 s</td>
</tr>
<tr>
<td align="left">QHMC-both</td>
<td align="center">
<bold>0.13</bold>
</td>
<td align="center">0.12</td>
<td align="center">86 s</td>
<td align="left">HMC-both</td>
<td align="center">0.15</td>
<td align="center">0.14</td>
<td align="center">97 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-both</td>
<td align="center">0.14</td>
<td align="center">0.13</td>
<td align="center">74 s</td>
<td align="left">HMC-soft-both</td>
<td align="center">0.15</td>
<td align="center">0.15</td>
<td align="center">82 s</td>
</tr>
<tr>
<td align="left">tnQHMC</td>
<td align="center">0.15</td>
<td align="center">0.13</td>
<td align="center">96 s</td>
<td align="left">tnHMC</td>
<td align="center">0.16</td>
<td align="center">0.16</td>
<td align="center">103 s</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption>
<p>The change in relative error while adding constraints, solute transport.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g008.tif"/>
</fig>
</sec>
<sec id="s4-1-4">
<title>4.1.4 Example 4: heat transfer in a hollow sphere</title>
<p>This 3-dimensional example considers a heat transfer problem in a hollow sphere. Let <inline-formula id="inf214">
<mml:math id="m255">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>B</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> represent a ball centered at 0 with radius <inline-formula id="inf215">
<mml:math id="m256">
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. Defining the hollow sphere as <inline-formula id="inf216">
<mml:math id="m257">
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>B</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>B</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, the equations are given as in <xref ref-type="bibr" rid="B45">Yang et al. (2021)</xref>.<disp-formula id="e40">
<mml:math id="m258">
<mml:mrow>
<mml:mfenced open="{" close="">
<mml:mrow>
<mml:mtable class="cases">
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>u</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x2207;</mml:mi>
<mml:mo>&#x22c5;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3ba;</mml:mi>
<mml:mi>&#x2207;</mml:mi>
<mml:mi>u</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>D</mml:mi>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mi>&#x3ba;</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>u</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi mathvariant="bold">n</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3c0;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3c0;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mtext>if&#x2009;</mml:mtext>
<mml:mo stretchy="false">&#x2016;</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mo stretchy="false">&#x7c;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>4</mml:mn>
<mml:mtext>&#x2009;and&#x2009;</mml:mtext>
<mml:mi>&#x3d5;</mml:mi>
<mml:mo>&#x2265;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mi>&#x3ba;</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>u</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi mathvariant="bold">n</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mtext>if&#x2009;</mml:mtext>
<mml:mo stretchy="false">&#x2016;</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mo stretchy="false">&#x2016;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>4</mml:mn>
<mml:mtext>&#x2009;and&#x2009;</mml:mtext>
<mml:mi>&#x3d5;</mml:mi>
<mml:mo>&#x3c;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mi>u</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mtext>if&#x2009;</mml:mtext>
<mml:mo stretchy="false">&#x2016;</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mo stretchy="false">&#x2016;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
<label>(40)</label>
</disp-formula>
</p>
<p>In this context, <inline-formula id="inf217">
<mml:math id="m259">
<mml:mrow>
<mml:mi mathvariant="bold">n</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> denotes the normal vector pointing outward, while <inline-formula id="inf218">
<mml:math id="m260">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf219">
<mml:math id="m261">
<mml:mrow>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> represent the azimuthal and elevation angles, respectively, of points within the sphere. We determine the precise heat conductivity in <xref ref-type="disp-formula" rid="e40">Equation 40</xref> using <inline-formula id="inf220">
<mml:math id="m262">
<mml:mrow>
<mml:mi>&#x3ba;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1.0</mml:mn>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>0.05</mml:mn>
<mml:mi>u</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. Quadratic elements with 12,854 degrees of freedom are employed, and we set <inline-formula id="inf221">
<mml:math id="m263">
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> to solve the partial differential equation (PDE). Starting with six initial locations set to 0 on the surface, six new constraint locations are introduced based on the active-learning approach of the QHMC version. In <xref ref-type="fig" rid="F9">Figure 9</xref>, the decrease in relative error is evident as the constraints are added step by step. In addition, <xref ref-type="fig" rid="F10">Figure 10</xref> shows the ground truth and the GP result obtained by the QHMC-soft-both algorithm, whose prediction <inline-formula id="inf222">
<mml:math id="m264">
<mml:mrow>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> matches the reference model. The constraint locations of the result are shown in <xref ref-type="fig" rid="F11">Figure 11</xref>. Moreover, its posterior variance is small, based on the results shown in <xref ref-type="table" rid="T4">Table 4</xref>. The table also provides the error, posterior variance, and time performances of the QHMC and HMC algorithms, where QHMC shows advantages over HMC in all categories, even against the truncated Gaussian algorithm. Although all of the algorithms complete the GP regression in less than 1 min, when the truncated Gaussian method is compared with the QHMC-based algorithms, <inline-formula id="inf223">
<mml:math id="m265">
<mml:mrow>
<mml:mn>40</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>60</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> time-efficiency gain is achieved by the QHMC algorithms with comparable accuracy. In addition to the time and accuracy performances, the posterior variance values are smallest with the QHMC-var and QHMC-both approaches, followed by the tnQHMC and QHMC-ad approaches. HMC sampling produces larger posterior variances across all methods.</p>
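The adaptive constraint placement described above can be sketched in a few lines: fit a GP to the current data, then add the next constraint location where the posterior is most uncertain. This is an illustrative sketch only; the squared-exponential kernel, its hyperparameters, and the greedy maximum-variance rule are our assumptions, not the authors' exact implementation.

```python
import numpy as np

def rbf_kernel(A, B, ell=1.0, sf=1.0):
    # Squared-exponential kernel k(a, b) = sf^2 * exp(-|a - b|^2 / (2 ell^2)).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP posterior mean and variance at test points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = np.diag(rbf_kernel(Xs, Xs)) - (v**2).sum(axis=0)
    return mean, var

def next_constraint_location(X, y, candidates):
    # Greedy active-learning rule (an assumption): place the next
    # constraint where the posterior variance is largest.
    _, var = gp_posterior(X, y, candidates)
    return candidates[np.argmax(var)]
```

In this sketch, each round of active learning would append the returned location to the constraint set and re-run the constrained regression.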
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption>
<p>The change in relative error while adding constraints, heat equation.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g009.tif"/>
</fig>
<fig id="F10" position="float">
<label>FIGURE 10</label>
<caption>
<p>Comparison of the ground truth and QHMCsoftboth result. <bold>(A)</bold> Heat equation data, ground truth <italic>y(x)</italic>. <bold>(B)</bold> QHMCsoftboth prediction <italic>y&#x2a;(x)</italic>.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g010.tif"/>
</fig>
<fig id="F11" position="float">
<label>FIGURE 11</label>
<caption>
<p>Initial locations (squares) and adaptively added constraint locations (stars). <bold>(A)</bold> Initial locations. <bold>(B)</bold> Constraint locations added by QHMC.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g011.tif"/>
</fig>
</sec>
</sec>
<sec id="s4-2">
<title>4.2 Monotonicity constraints</title>
<p>This section provides two numerical examples to investigate the effectiveness of our algorithms with monotonicity constraints. As stated in <xref ref-type="sec" rid="s2-3-1">Section 2.3.1</xref>, the monotonicity constraints are enforced in the direction of the active variables. Similar to the comparisons in the previous section, we illustrate the advantages of QHMC over HMC, and then compare the performance of the QHMC algorithms with the additive GP approach introduced in <xref ref-type="bibr" rid="B24">L&#xf3;pez-Lopera et al. (2022)</xref> with respect to the same criteria.</p>
<sec id="s4-2-1">
<title>4.2.1 Example 1</title>
<p>Consider the following 5D function with monotonicity constraints <xref ref-type="bibr" rid="B24">L&#xf3;pez-Lopera et al. (2022)</xref>:<disp-formula id="e41">
<mml:math id="m266">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>arctan</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>5</mml:mn>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>arctan</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>2</mml:mn>
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>4</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>10</mml:mn>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(41)</label>
</disp-formula>
</p>
<p>In this example, we require the function in <xref ref-type="disp-formula" rid="e41">Equation 41</xref> to be non-decreasing on x &#x2208; [0, 1]&#x5e;5. <xref ref-type="table" rid="T5">Table 5</xref> shows the performances of the HMC and QHMC algorithms, where QHMC achieves higher accuracy with lower variance in a shorter amount of time. The comparison shows that each version of QHMC is more efficient than its HMC counterpart. In addition, <xref ref-type="fig" rid="F12">Figure 12</xref> shows the relative error values of the QHMC and additive GP algorithms with respect to changes in SNR and dataset size. Based on the results, QHMC-both and QHMC-soft-both clearly provide the most accurate results under every condition, and the difference is more pronounced in the higher-noise cases. Although QHMC-both and QHMC-soft-both provide the most accurate results, the other QHMC versions also generate more accurate results than the additive GP method. Moreover, <xref ref-type="fig" rid="F13">Figure 13</xref> shows that the soft-constrained QHMC approaches are faster than the hard-constrained QHMC, while the hard-constrained QHMC versions are still faster than the additive GP algorithm. Additionally, we can see in <xref ref-type="fig" rid="F14">Figure 14</xref> that QHMC, in both its hard- and soft-constrained versions, can reduce the posterior variance. Unlike the additive GP method, the QHMC approaches generate small variances even under high noise.</p>
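As a quick numerical sanity check of the monotonicity imposed here, the 5D function of Equation 41 can be evaluated and spot-checked by finite differences. The check below is an illustrative sketch, separate from the constrained GP regression itself.

```python
import numpy as np

def f(x):
    # 5D test function of Eq. 41; non-decreasing in each coordinate on [0, 1]^5.
    return (np.arctan(5.0 * x[..., 0]) + np.arctan(2.0 * x[..., 1]) + x[..., 2]
            + 2.0 * x[..., 3] ** 2
            + 2.0 / (1.0 + np.exp(-10.0 * (x[..., 4] - 0.5))))

def is_nondecreasing(func, X, eps=1e-3):
    # Finite-difference spot check: bumping any single coordinate upward
    # must not decrease the function value at any sample point.
    base = func(X)
    for i in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, i] += eps
        if np.any(func(Xp) < base - 1e-12):
            return False
    return True
```

All five partial derivatives are nonnegative on [0, 1]^5 (for example, &#x2202;f/&#x2202;x&#x2084; = 4x&#x2084; &#x2265; 0), so the check passes at any sample of points in the unit hypercube.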
<fig id="F12" position="float">
<label>FIGURE 12</label>
<caption>
<p>Relative error of the algorithms with different SNR and data sizes for Example 1 (5D), monotonicity.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g012.tif"/>
</fig>
<fig id="F13" position="float">
<label>FIGURE 13</label>
<caption>
<p>Execution times (in seconds) of the algorithms with different SNR and data sizes for Example 1 (5D), monotonicity.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g013.tif"/>
</fig>
<fig id="F14" position="float">
<label>FIGURE 14</label>
<caption>
<p>Posterior variances of the algorithms with different SNR and data sizes for Example 1 (5D), monotonicity.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g014.tif"/>
</fig>
</sec>
<sec id="s4-2-2">
<title>4.2.2 Example 2</title>
<p>We provide a 20-dimensional example to demonstrate the applicability and effectiveness of the QHMC algorithms in higher dimensions under a monotonicity constraint. We consider the target function used in <xref ref-type="bibr" rid="B24">L&#xf3;pez-Lopera et al. (2022)</xref>, <xref ref-type="bibr" rid="B3">Bachoc et al. (2022)</xref>.<disp-formula id="e42">
<mml:math id="m267">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:munderover>
</mml:mstyle>
<mml:mi>arctan</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mn>5</mml:mn>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(42)</label>
</disp-formula>with <inline-formula id="inf224">
<mml:math id="m268">
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>20</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. We enforce a monotonicity constraint on the 20D function in <xref ref-type="disp-formula" rid="e42">Equation 42</xref>.</p>
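For reference, the 20D target of Equation 42 is straightforward to evaluate in vectorized form, and the SNR sweep in the experiments can be mimicked by injecting noise scaled to the signal. The noise model below (noise standard deviation as a percentage of the signal standard deviation) is our assumption, not necessarily the paper's exact procedure.

```python
import numpy as np

def f20(X, d=20):
    # 20D target of Eq. 42: sum_i arctan(5 * [1 - i/(d+1)] * x_i).
    # Every coefficient 1 - i/(d+1) is positive for i = 1..d, so the
    # function is increasing in each coordinate.
    i = np.arange(1, d + 1)
    return np.arctan(5.0 * (1.0 - i / (d + 1)) * X).sum(axis=-1)

def add_noise_at_snr(y, snr_percent, rng):
    # Inject Gaussian noise whose standard deviation is a given
    # percentage of the signal's standard deviation (assumed model).
    sigma = (snr_percent / 100.0) * y.std()
    return y + rng.normal(0.0, sigma, size=y.shape)
```

A training set for the 20D experiment would then be drawn uniformly from [0, 1]^20 and passed through these two functions.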
<p>
<xref ref-type="table" rid="T6">Table 6</xref> illustrates the accuracy and time advantages of QHMC over HMC. For each paired version, using QHMC sampling accelerates the process while increasing the accuracy. The overall comparison shows that, among all versions with QHMC and HMC sampling, QHMC-both is the most accurate approach, while QHMC-soft-both is the fastest and ranks second in accuracy. <xref ref-type="fig" rid="F15">Figures 15</xref>, <xref ref-type="fig" rid="F16">16</xref> show the relative error and time performances of the QHMC-based algorithms, HMC-soft-both, and the additive GP algorithm, respectively. In this final and highest-dimensional example, the same phenomenon is observed as in the previous results: the soft-constrained versions demonstrate greater efficiency, while the hard-constrained QHMC approaches remain faster than additive GP across different conditions, including high noise levels. Based on <xref ref-type="fig" rid="F15">Figure 15</xref>, QHMC-both can tolerate noise levels up to <inline-formula id="inf225">
<mml:math id="m269">
<mml:mrow>
<mml:mn>10</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> with the smallest error, and it can still provide good accuracy (error is around 0.15) even when the SNR is higher than <inline-formula id="inf226">
<mml:math id="m270">
<mml:mrow>
<mml:mn>10</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. It is also worth mentioning that although the error values generated by HMC-soft-both and additive GP are quite close, HMC-soft-both runs faster than additive GP, especially when the dataset is larger and the noise level is higher. Furthermore, QHMC reduces the posterior variance, as shown in <xref ref-type="fig" rid="F17">Figure 17</xref>. The behavior of the algorithms follows the same trend as in the 5-dimensional example: the methods tolerate noise in the data, especially with larger datasets.</p>
<fig id="F15" position="float">
<label>FIGURE 15</label>
<caption>
<p>Relative error of the algorithms with different SNR and data sizes for Example 2 (20D), monotonicity.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g015.tif"/>
</fig>
<fig id="F16" position="float">
<label>FIGURE 16</label>
<caption>
<p>Execution times (in minutes) of the algorithms with different SNR and data sizes for Example 2 (20D), monotonicity.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g016.tif"/>
</fig>
<fig id="F17" position="float">
<label>FIGURE 17</label>
<caption>
<p>Posterior variances of the algorithms with different SNR and data sizes for Example 2 (20D), monotonicity.</p>
</caption>
<graphic xlink:href="fmech-10-1410190-g017.tif"/>
</fig>
</sec>
</sec>
<sec id="s4-3">
<title>4.3 Discussion</title>
<p>Within the scope of the proposed QHMC-based method, this work investigates the advantages and disadvantages of using a soft-constrained approach in physics-informed GP regression. Modified versions of the proposed algorithm are further compared, along with a recent method, to validate the approach. The significant findings and their possible explanations are summarized as follows:<list list-type="simple">
<list-item>
<p>1. Synthetic examples are designed to highlight the robustness and efficiency of the proposed method, considering two criteria: dataset size and SNR. The QHMC-based algorithms are evaluated in an environment with a range of <inline-formula id="inf227">
<mml:math id="m271">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>20</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> SNR, and the results provided in <xref ref-type="fig" rid="F1">Figures 1</xref>, <xref ref-type="fig" rid="F4">4</xref>, <xref ref-type="fig" rid="F12">12</xref>, <xref ref-type="fig" rid="F15">15</xref> show that both the soft- and hard-constrained versions of the proposed method tolerate noise in the data, especially when it is less than <inline-formula id="inf228">
<mml:math id="m272">
<mml:mrow>
<mml:mn>10</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. In addition, the methods are more tolerant when the dataset size increases. This part of the experiments demonstrated the robustness of the proposed method for each synthetic example.</p>
</list-item>
<list-item>
<p>2. Additionally, the numerical results for the synthetic examples include the execution times as the SNR and dataset size increase in each example. The goal is to underscore the efficiency of the proposed algorithm. <xref ref-type="fig" rid="F2">Figures 2</xref>, <xref ref-type="fig" rid="F5">5</xref>, <xref ref-type="fig" rid="F13">13</xref>, <xref ref-type="fig" rid="F16">16</xref> show the time advantages of the algorithms, especially of the soft-constrained versions.</p>
</list-item>
<list-item>
<p>3. The dimensions of the synthetic examples are selected to verify that the robustness and efficiency of the algorithms carry over to higher dimensions. For inequality-constrained scenarios, evaluations are performed on 2D and 10D problems, while for the monotonicity-constrained algorithms, evaluations are performed on 5D and 20D problems. The results verify that the proposed methods maintain their accuracy in higher-dimensional cases within a relatively short amount of time.</p>
</list-item>
<list-item>
<p>4. The real-life applications are chosen to verify that the proposed method generalizes to different types of problems. The solute concentration example is a 2D problem with a non-homogeneous structure, while the heat transfer problem is a 3D problem that requires solving a PDE. In contrast to the synthetic examples, in this set of experiments the dataset size is fixed and no Gaussian noise is injected into the data. We present a comprehensive comparison of all methods along with the truncated Gaussian algorithm. The step-by-step decrease in error is presented in <xref ref-type="fig" rid="F8">Figures 8</xref>, <xref ref-type="fig" rid="F9">9</xref>, verifying the success of all versions.</p>
</list-item>
<list-item>
<p>5. The proposed method combines the QHMC algorithm with a probabilistic approach to physics-informed GP regression. QHMC training provides accuracy due to its broad state-space exploration, while the probabilistic approach lowers the variance. In each case, we start with experiments conducted with a fixed dataset size and zero SNR to demonstrate the superiority of QHMC over HMC. The HMC versions of the proposed methods are implemented and compared to the corresponding QHMC algorithms in <xref ref-type="table" rid="T1">Tables 1</xref>, <xref ref-type="table" rid="T3">3</xref>&#x2013;<xref ref-type="table" rid="T6">6</xref>. The findings for every case confirm that QHMC enhances accuracy, robustness, and efficiency. After demonstrating the superiority of the QHMC method, a comprehensive evaluation of the QHMC-based methods is performed in different scenarios. To verify the efficiency of soft-constrained QHMC, we also implemented the hard-constrained versions by setting the violation probability to 0.005. The findings indicate that the soft-constrained approaches reduce computational expense while maintaining accuracy comparable to that of their hard-constrained counterparts. Relaxing the constraints in a probabilistic sense improves efficiency while decreasing the posterior variance.</p>
</list-item>
<list-item>
<p>6. We should also note that while the numerical results indicate that the current approach is a robust and efficient QHMC algorithm, the impact of the probability of constraint violation should be investigated further. The experiments were conducted with a relatively low probability of relaxing the constraints (around <inline-formula id="inf229">
<mml:math id="m273">
<mml:mrow>
<mml:mn>5</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>) and the accuracy was maintained under these conditions. However, allowing more violations may pose limitations. In addition, applying the proposed approach to different types of constrained optimization problems, including those involving equality constraints, can be more challenging. Addressing these challenges is both a limitation and a potential direction for future work on QHMC-based, physics-informed GP regression.</p>
</list-item>
<list-item>
<p>7. While our examples demonstrate the efficiency of the proposed approach in higher dimensions (up to 20), evaluating the performance of the algorithms in much higher dimensions, such as 100 or more, remains a subject for future work. Based on the current results, we expect QHMC to outperform HMC, owing to its sampling advantages. For example, in <xref ref-type="bibr" rid="B23">Liu and Zhang (2019)</xref> a stochastic version of the QHMC approach is efficiently applied to train a two-layer neural network for classifying the MNIST dataset, where the weight matrices have dimensions of <inline-formula id="inf230">
<mml:math id="m274">
<mml:mrow>
<mml:mn>784</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>200</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf231">
<mml:math id="m275">
<mml:mrow>
<mml:mn>200</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. Similarly, a stochastic QHMC approach in <xref ref-type="bibr" rid="B17">Kochan et al. (2022)</xref> is employed for image reconstruction on MNIST data, treating the dataset as a <inline-formula id="inf232">
<mml:math id="m276">
<mml:mrow>
<mml:mn>60,000</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>784</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> matrix. However, for constrained GP problems, the algorithm might need further improvements to effectively address the challenges posed by the curse of dimensionality in more demanding scenarios.</p>
</list-item>
</list>
</p>
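The probabilistic relaxation summarized in point 5 can be illustrated with a common probit-style soft constraint: each non-negativity constraint value f(x_c) contributes log &#x3a6;(f(x_c)/&#x3bd;) to the log-posterior, so small violations are permitted with low probability. This generic formulation and the scale parameter &#x3bd; are our assumptions for illustration; they are not necessarily the paper's exact likelihood.

```python
import math

def log_phi(z):
    # log of the standard normal CDF, computed via the complementary
    # error function: Phi(z) = 0.5 * erfc(-z / sqrt(2)).
    return math.log(0.5 * math.erfc(-z / math.sqrt(2.0)))

def soft_nonneg_logpost_term(f_vals, nu=0.1):
    # Soft non-negativity: each constraint value f(x_c) contributes
    # log Phi(f / nu); nu controls how sharply violations are penalized.
    return sum(log_phi(f / nu) for f in f_vals)
```

Large positive constraint values contribute almost nothing (log of a probability near 1), while negative values incur an increasingly severe penalty, which is what allows a small, controlled probability of constraint violation.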
<table-wrap id="T4" position="float">
<label>TABLE 4</label>
<caption>
<p>Comparison of QHMC and HMC on heat transfer with nonnegativity.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Method</th>
<th align="center">Error</th>
<th align="center">Posterior Var</th>
<th align="center">Time</th>
<th align="left">Method</th>
<th align="center">Error</th>
<th align="center">Posterior Var</th>
<th align="center">Time</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">QHMC-ad</td>
<td align="center">0.04</td>
<td align="center">0.04</td>
<td align="center">34 s</td>
<td align="left">HMC-ad</td>
<td align="center">0.06</td>
<td align="center">0.07</td>
<td align="center">40 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-ad</td>
<td align="center">0.05</td>
<td align="center">0.04</td>
<td align="center">30 s</td>
<td align="left">HMC-soft-ad</td>
<td align="center">0.07</td>
<td align="center">0.07</td>
<td align="center">32 s</td>
</tr>
<tr>
<td align="left">QHMC-var</td>
<td align="center">0.05</td>
<td align="center">0.02</td>
<td align="center">30 s</td>
<td align="left">HMC-var</td>
<td align="center">0.09</td>
<td align="center">0.05</td>
<td align="center">27 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-var</td>
<td align="center">0.06</td>
<td align="center">0.03</td>
<td align="center">26 s</td>
<td align="left">HMC-soft-var</td>
<td align="center">0.10</td>
<td align="center">0.05</td>
<td align="center">29 s</td>
</tr>
<tr>
<td align="left">QHMC-both</td>
<td align="center">
<bold>0.02</bold>
</td>
<td align="center">0.03</td>
<td align="center">32 s</td>
<td align="left">HMC-both</td>
<td align="center">0.04</td>
<td align="center">0.05</td>
<td align="center">37 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-both</td>
<td align="center">0.03</td>
<td align="center">0.03</td>
<td align="center">27 s</td>
<td align="left">HMC-soft-both</td>
<td align="center">0.05</td>
<td align="center">0.06</td>
<td align="center">35 s</td>
</tr>
<tr>
<td align="left">tnQHMC</td>
<td align="center">0.04</td>
<td align="center">0.05</td>
<td align="center">51 s</td>
<td align="left">tnHMC</td>
<td align="center">0.06</td>
<td align="center">0.07</td>
<td align="center">56 s</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T5" position="float">
<label>TABLE 5</label>
<caption>
<p>Comparison of QHMC and HMC on 5D, monotonicity.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Method</th>
<th align="center">Error</th>
<th align="center">Posterior Var</th>
<th align="center">Time</th>
<th align="left">Method</th>
<th align="center">Error</th>
<th align="center">Posterior Var</th>
<th align="center">Time</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">QHMC-ad</td>
<td align="center">0.11</td>
<td align="center">0.16</td>
<td align="center">2 m 23 s</td>
<td align="left">HMC-ad</td>
<td align="center">0.13</td>
<td align="center">0.17</td>
<td align="center">3 m 14 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-ad</td>
<td align="center">0.14</td>
<td align="center">0.18</td>
<td align="center">1 m 57 s</td>
<td align="left">HMC-soft-ad</td>
<td align="center">0.17</td>
<td align="center">0.20</td>
<td align="center">2 m 48 s</td>
</tr>
<tr>
<td align="left">QHMC-var</td>
<td align="center">0.12</td>
<td align="center">0.15</td>
<td align="center">2 m 13 s</td>
<td align="left">HMC-var</td>
<td align="center">0.15</td>
<td align="center">0.17</td>
<td align="center">2 m 58 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-var</td>
<td align="center">0.15</td>
<td align="center">0.17</td>
<td align="center">1 m 42 s</td>
<td align="left">HMC-soft-var</td>
<td align="center">0.18</td>
<td align="center">0.19</td>
<td align="center">2 m 16 s</td>
</tr>
<tr>
<td align="left">QHMC-both</td>
<td align="center">
<bold>0.10</bold>
</td>
<td align="center">0.13</td>
<td align="center">2 m 25 s</td>
<td align="left">HMC-both</td>
<td align="center">0.12</td>
<td align="center">0.15</td>
<td align="center">2 m 58 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-both</td>
<td align="center">0.12</td>
<td align="center">0.14</td>
<td align="center">
<bold>1</bold> <bold>m 55</bold> <bold>s</bold>
</td>
<td align="left">HMC-soft-both</td>
<td align="center">0.14</td>
<td align="center">0.15</td>
<td align="center">2 m 39 s</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T6" position="float">
<label>TABLE 6</label>
<caption>
<p>Comparison of QHMC and HMC on 20D, monotonicity.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Method</th>
<th align="center">Error</th>
<th align="center">Posterior Var</th>
<th align="center">Time</th>
<th align="left">Method</th>
<th align="center">Error</th>
<th align="center">Posterior Var</th>
<th align="center">Time</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">QHMC-ad</td>
<td align="center">0.13</td>
<td align="center">0.18</td>
<td align="center">33 m 1 s</td>
<td align="left">HMC-ad</td>
<td align="center">0.15</td>
<td align="center">0.21</td>
<td align="center">35 m 38 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-ad</td>
<td align="center">0.15</td>
<td align="center">0.19</td>
<td align="center">31 m 21 s</td>
<td align="left">HMC-soft-ad</td>
<td align="center">0.18</td>
<td align="center">0.22</td>
<td align="center">33 m 41 s</td>
</tr>
<tr>
<td align="left">QHMC-var</td>
<td align="center">0.14</td>
<td align="center">0.16</td>
<td align="center">32 m 53 s</td>
<td align="left">HMC-var</td>
<td align="center">0.17</td>
<td align="center">0.17</td>
<td align="center">34 m 21 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-var</td>
<td align="center">0.16</td>
<td align="center">0.17</td>
<td align="center">29 m 42 s</td>
<td align="left">HMC-soft-var</td>
<td align="center">0.19</td>
<td align="center">0.18</td>
<td align="center">31 m 17 s</td>
</tr>
<tr>
<td align="left">QHMC-both</td>
<td align="center">
<bold>0.11</bold>
</td>
<td align="center">0.14</td>
<td align="center">33 m 45 s</td>
<td align="left">HMC-both</td>
<td align="center">0.14</td>
<td align="center">0.16</td>
<td align="center">36 m 21 s</td>
</tr>
<tr>
<td align="left">QHMC-soft-both</td>
<td align="center">0.12</td>
<td align="center">0.15</td>
<td align="center">
<bold>29 m 48 s</bold>
</td>
<td align="left">HMC-soft-both</td>
<td align="center">0.15</td>
<td align="center">0.17</td>
<td align="center">33 m 11 s</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec sec-type="conclusion" id="s5">
<title>5 Conclusion</title>
<p>Leveraging the accuracy of QHMC training and the efficiency of the probabilistic approach, this work introduced a soft-constrained QHMC algorithm to enforce inequality and monotonicity constraints on Gaussian processes. The proposed algorithm reduces the difference between the ground truth and the posterior mean of the resulting GP model, while improving efficiency by attaining accurate results in a short amount of time. To further enhance the performance of QHMC across various scenarios, modified versions adopting adaptive learning were implemented. These versions provide flexibility in selecting the most suitable algorithm based on the specific priorities of a given problem.</p>
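As an illustration of the sampling scheme summarized above, the following is a minimal, hypothetical sketch (not the authors' implementation) of quantum-inspired HMC with a soft inequality constraint on a one-dimensional target: the particle mass is resampled from a log-normal distribution at every iteration, and violations of the constraint x &#x2265; 0 are discouraged through a quadratic penalty added to the potential energy. The penalty weight, step size, and mass-distribution parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
LAM = 100.0  # soft-constraint penalty weight (illustrative value)

def U(x):
    # Negative log-density of a standard normal target, plus a soft
    # quadratic penalty that discourages violating x >= 0.
    return 0.5 * x**2 + LAM * min(x, 0.0) ** 2

def grad_U(x):
    return x + 2.0 * LAM * min(x, 0.0)

def qhmc_step(x, eps=0.05, n_leap=20, mu=0.0, sigma=1.0):
    # QHMC twist: resample the particle mass from a log-normal
    # distribution each iteration instead of keeping it fixed.
    m = np.exp(rng.normal(mu, sigma))
    p = rng.normal(0.0, np.sqrt(m))
    h_old = U(x) + 0.5 * p**2 / m
    x_new, p_new = x, p
    # Standard leapfrog integration of Hamilton's equations.
    p_new -= 0.5 * eps * grad_U(x_new)
    for i in range(n_leap):
        x_new += eps * p_new / m
        if i < n_leap - 1:
            p_new -= eps * grad_U(x_new)
    p_new -= 0.5 * eps * grad_U(x_new)
    # Metropolis correction keeps the chain exact despite the penalty.
    h_new = U(x_new) + 0.5 * p_new**2 / m
    if rng.random() < np.exp(min(0.0, h_old - h_new)):
        return x_new
    return x

x, samples = 1.0, []
for it in range(2500):
    x = qhmc_step(x)
    if it >= 500:  # discard burn-in
        samples.append(x)
samples = np.array(samples)
```

Because the constraint is soft, small excursions below zero remain possible, but the quadratic penalty makes large violations exponentially unlikely while the sampler retains the efficiency of an unconstrained chain.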
<p>We established the convergence of QHMC by showing that its steady-state distribution approaches the true posterior density, and theoretically justified that the probabilistic approach preserves convergence. Finally, we implemented our methods to solve several types of optimization problems. Each experiment first outlined the benefits of QHMC sampling in comparison to HMC sampling. These advantages remained consistent across all cases, resulting in approximately a <inline-formula id="inf233">
<mml:math id="m277">
<mml:mrow>
<mml:mn>20</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> time-saving and <inline-formula id="inf234">
<mml:math id="m278">
<mml:mrow>
<mml:mn>15</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> higher accuracy. Having demonstrated the advantages of QHMC sampling, we then evaluated the performance of the algorithms across various scenarios. The examples cover higher-dimensional problems featuring both inequality and monotonicity constraints. Furthermore, the evaluations include real-world applications where injecting physical properties is essential, particularly in cases involving inequality constraints.</p>
<p>In the context of inequality-constrained Gaussian processes (GPs), we explored 2-dimensional and 10-dimensional synthetic problems, along with two real applications involving 2-dimensional and 3-dimensional data. For the synthetic examples, the relative error, posterior variance and execution time of the algorithms were compared while gradually increasing the noise level and dataset size. Overall, QHMC-based algorithms outperformed the truncated Gaussian methods. Although the truncated Gaussian methods provide high accuracy in the absence of noise and are then comparable to the QHMC approaches, their relative error and posterior variance increased as noise appeared and grew. Moreover, the advantages of soft-constrained QHMC became more evident in higher-dimensional cases, compared to truncated Gaussian and even hard-constrained QHMC. The time comparison underscores that the truncated Gaussian methods are significantly affected by the curse of dimensionality and large datasets, exhibiting slower performance under these conditions. In the real-world application scenarios featuring 2-dimensional and 3-dimensional data, the findings were consistent with those observed in the synthetic examples. Although the accuracy does not reach the highest levels observed in the synthetic examples and the 3-dimensional heat equation problem, the overall trend remains consistent. The lower accuracy in the latter problem can be attributed to the non-homogeneous structure of the solute concentration.</p>
<p>In the case of monotonicity-constrained GP, we addressed 5-dimensional and 20-dimensional examples, using the same configuration as for the inequality-constrained GP. A comprehensive comparison was conducted between all versions of the QHMC algorithms and the additive GP method. The results indicate that QHMC-based approaches hold a notable advantage, particularly in scenarios involving noise and large datasets. While additive GP is a strong method suited to high-dimensional cases, the QHMC algorithms ran faster and yielded lower variances.</p>
<p>In conclusion, this work has demonstrated that soft-constrained QHMC is a robust, efficient and flexible method applicable to higher-dimensional cases and large datasets. The numerical results show that soft-constrained QHMC is promising for generalization to various applications with different physical properties.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="s6">
<title>Data availability statement</title>
<p>The data analyzed in this study is subject to the following licenses/restrictions: Datasets are from one of the author&#x2019;s previous papers. Requests to access these datasets should be directed to DK, <email>dik318@lehigh.edu</email>.</p>
</sec>
<sec sec-type="author-contributions" id="s7">
<title>Author contributions</title>
<p>DK: Formal Analysis, Investigation, Methodology, Validation, Writing&#x2013;original draft, Writing&#x2013;review and editing. XY: Formal Analysis, Methodology, Project administration, Resources, Supervision, Writing&#x2013;review and editing, Funding acquisition.</p>
</sec>
<sec sec-type="funding-information" id="s8">
<title>Funding</title>
<p>The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This work was supported by NSF CAREER Award DMS-2143915.</p>
</sec>
<sec sec-type="COI-statement" id="s9">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Abrahamsen</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Benth</surname>
<given-names>F. E.</given-names>
</name>
</person-group> (<year>2001</year>). <article-title>Kriging with inequality constraints</article-title>. <source>Math. Geol.</source> <volume>33</volume>, <fpage>719</fpage>&#x2013;<lpage>744</lpage>. <pub-id pub-id-type="doi">10.1023/a:1011078716252</pub-id>
</citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Agrell</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Gaussian processes with linear operator inequality constraints</article-title>. <source>arXiv Prepr. arXiv:1901.03134</source>.</citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bachoc</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>L&#xf3;pez-Lopera</surname>
<given-names>A. F.</given-names>
</name>
<name>
<surname>Roustant</surname>
<given-names>O.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Sequential construction and dimension reduction of Gaussian processes under inequality constraints</article-title>. <source>SIAM J. Math. Data Sci.</source> <volume>4</volume>, <fpage>772</fpage>&#x2013;<lpage>800</lpage>. <pub-id pub-id-type="doi">10.1137/21m1407513</pub-id>
</citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barbosa</surname>
<given-names>B. H. G.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Askari</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Khajepour</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Lateral force prediction using Gaussian process regression for intelligent tire systems</article-title>. <source>IEEE Trans. Syst. Man, Cybern. Syst.</source> <volume>52</volume>, <fpage>5332</fpage>&#x2013;<lpage>5343</lpage>. <pub-id pub-id-type="doi">10.1109/tsmc.2021.3123310</pub-id>
</citation>
</ref>
<ref id="B5">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Barbu</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>S.-C.</given-names>
</name>
</person-group> (<year>2020</year>) <source>Monte Carlo methods</source>, <volume>35</volume>. <publisher-name>Springer</publisher-name>.</citation>
</ref>
<ref id="B6">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Chati</surname>
<given-names>Y. S.</given-names>
</name>
<name>
<surname>Balakrishnan</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2017</year>). &#x201c;<article-title>A Gaussian process regression approach to model aircraft engine fuel flow rate</article-title>,&#x201d; in <source>Proceedings of the 8th international conference on cyber-physical systems</source>, <fpage>131</fpage>&#x2013;<lpage>140</lpage>.</citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Da Veiga</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Marrel</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Gaussian process modeling with inequality constraints</article-title>. <source>Ann. la Fac. Sci. Toulouse Math&#xe9;matiques</source> <volume>21</volume>, <fpage>529</fpage>&#x2013;<lpage>555</lpage>. <pub-id pub-id-type="doi">10.5802/afst.1344</pub-id>
</citation>
</ref>
<ref id="B8">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>D&#xfc;richen</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Pimentel</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Clifton</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Schweikard</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Clifton</surname>
<given-names>D. A.</given-names>
</name>
</person-group> (<year>2014</year>). &#x201c;<article-title>Multi-task Gaussian process models for biomedical applications</article-title>,&#x201d; in <source>
<italic>IEEE-EMBS international Conference on Biomedical and health informatics (BHI)</italic> (IEEE)</source>, <fpage>492</fpage>&#x2013;<lpage>495</lpage>.</citation>
</ref>
<ref id="B9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Emmanuel</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Berkowitz</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2005</year>). <article-title>Mixing-induced precipitation and porosity evolution in porous media</article-title>. <source>Adv. water Resour.</source> <volume>28</volume>, <fpage>337</fpage>&#x2013;<lpage>344</lpage>. <pub-id pub-id-type="doi">10.1016/j.advwatres.2004.11.010</pub-id>
</citation>
</ref>
<ref id="B10">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Eriksson</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Poloczek</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Scalable constrained Bayesian optimization</article-title>, in <source>Proceedings of the 24th International Conference on Artificial Intelligence and Statistics</source> (<comment>PMLR 130</comment>), <fpage>730</fpage>&#x2013;<lpage>738</lpage>.</citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ezati</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Esmaeilbeigi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kamandi</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2024</year>). <article-title>Novel approaches for hyper-parameter tuning of physics-informed Gaussian processes: application to parametric pdes</article-title>. <source>Eng. Comput.</source> <volume>40</volume>, <fpage>3175</fpage>&#x2013;<lpage>3194</lpage>. <pub-id pub-id-type="doi">10.1007/s00366-024-01970-8</pub-id>
</citation>
</ref>
<ref id="B12">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Gelman</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Carlin</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Stern</surname>
<given-names>H. S.</given-names>
</name>
<name>
<surname>Dunson</surname>
<given-names>D. B.</given-names>
</name>
<name>
<surname>Vehtari</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Rubin</surname>
<given-names>D. B.</given-names>
</name>
</person-group> (<year>2014</year>). <source>Bayesian data analysis</source>. <publisher-loc>Philadelphia, Pennsylvania</publisher-loc>: <publisher-name>Taylor &#x26; Francis Group, Inc.</publisher-name>
</citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gulian</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Frankel</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Swiler</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Gaussian process regression constrained by boundary value problems</article-title>. <source>Comput. Methods Appl. Mech. Eng.</source> <volume>388</volume>, <fpage>114117</fpage>. <pub-id pub-id-type="doi">10.1016/j.cma.2021.114117</pub-id>
</citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guo</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Convergence analysis for mathematical programs with distributionally robust chance constraint</article-title>. <source>SIAM J. Optim.</source> <volume>27</volume>, <fpage>784</fpage>&#x2013;<lpage>816</lpage>. <pub-id pub-id-type="doi">10.1137/15m1036592</pub-id>
</citation>
</ref>
<ref id="B15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hess</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>1999</year>). <article-title>Conditional expectation and martingales of random sets</article-title>. <source>Pattern Recognit.</source> <volume>32</volume>, <fpage>1543</fpage>&#x2013;<lpage>1567</lpage>. <pub-id pub-id-type="doi">10.1016/s0031-3203(99)00020-5</pub-id>
</citation>
</ref>
<ref id="B16">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Jensen</surname>
<given-names>B. S.</given-names>
</name>
<name>
<surname>Nielsen</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Larsen</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2013</year>). &#x201c;<article-title>Bounded Gaussian process regression</article-title>,&#x201d; in <source>2013 IEEE international workshop on machine learning for signal processing (MLSP)</source> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B17">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kochan</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>X.</given-names>
</name>
</person-group> (<year>2022</year>). &#x201c;<article-title>A quantum-inspired Hamiltonian Monte Carlo method for missing data imputation</article-title>,&#x201d; in <source>
<italic>Mathematical and scientific machine learning</italic> (PMLR)</source>, <fpage>17</fpage>&#x2013;<lpage>32</lpage>.</citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kohanpur</surname>
<given-names>A. H.</given-names>
</name>
<name>
<surname>Saksena</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Dey</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Johnson</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Riasi</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Yeghiazarian</surname>
<given-names>L.</given-names>
</name>
<etal/>
</person-group> (<year>2023</year>). <article-title>Urban flood modeling: uncertainty quantification and physics-informed Gaussian processes regression forecasting</article-title>. <source>Water Resour. Res.</source> <volume>59</volume>, <fpage>e2022WR033939</fpage>. <pub-id pub-id-type="doi">10.1029/2022wr033939</pub-id>
</citation>
</ref>
<ref id="B19">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kuss</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Rasmussen</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2003</year>). <article-title>Gaussian processes in reinforcement learning</article-title>. <source>Adv. neural Inf. Process. Syst.</source> <volume>16</volume>.</citation>
</ref>
<ref id="B20">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Lange-Hegermann</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Linearly constrained Gaussian processes with boundary conditions</article-title>, in <source>Proceedings of The 24th International Conference on Artificial Intelligence and Statistics</source> (<publisher-name>PMLR</publisher-name> 130), <fpage>1090</fpage>&#x2013;<lpage>1098</lpage>.</citation>
</ref>
<ref id="B21">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>X.-Q.</given-names>
</name>
<name>
<surname>Song</surname>
<given-names>L.-K.</given-names>
</name>
<name>
<surname>Choy</surname>
<given-names>Y.-S.</given-names>
</name>
<name>
<surname>Bai</surname>
<given-names>G.-C.</given-names>
</name>
</person-group> (<year>2023</year>). <article-title>Fatigue reliability analysis of aeroengine blade-disc systems using physics-informed ensemble learning</article-title>. <source>Philosophical Trans. R. Soc. A</source> <volume>381</volume>, <fpage>20220384</fpage>. <pub-id pub-id-type="doi">10.1098/rsta.2022.0384</pub-id>
</citation>
</ref>
<ref id="B22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lin</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Tartakovsky</surname>
<given-names>A. M.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>An efficient, high-order probabilistic collocation method on sparse grids for three-dimensional flow and solute transport in randomly heterogeneous porous media</article-title>. <source>Adv. Water Resour.</source> <volume>32</volume>, <fpage>712</fpage>&#x2013;<lpage>722</lpage>. <pub-id pub-id-type="doi">10.1016/j.advwatres.2008.09.003</pub-id>
</citation>
</ref>
<ref id="B23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Z.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Quantum-inspired Hamiltonian Monte Carlo for Bayesian sampling</article-title>. <source>arXiv Prepr. arXiv:1912.01937</source>.</citation>
</ref>
<ref id="B24">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>L&#xf3;pez-Lopera</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bachoc</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Roustant</surname>
<given-names>O.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>High-dimensional additive Gaussian processes under monotonicity constraints</article-title>. <source>Adv. Neural Inf. Process. Syst.</source> <volume>35</volume>, <fpage>8041</fpage>&#x2013;<lpage>8053</lpage>.</citation>
</ref>
<ref id="B25">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>L&#xf3;pez-Lopera</surname>
<given-names>A. F.</given-names>
</name>
<name>
<surname>Bachoc</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Durrande</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Roustant</surname>
<given-names>O.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Finite-dimensional Gaussian approximation with linear inequality constraints</article-title>. <source>SIAM/ASA J. Uncertain. Quantification</source> <volume>6</volume>, <fpage>1224</fpage>&#x2013;<lpage>1255</lpage>. <pub-id pub-id-type="doi">10.1137/17m1153157</pub-id>
</citation>
</ref>
<ref id="B26">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maatouk</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Bay</surname>
<given-names>X.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Gaussian process emulators for computer experiments with inequality constraints</article-title>. <source>Math. Geosci.</source> <volume>49</volume>, <fpage>557</fpage>&#x2013;<lpage>582</lpage>. <pub-id pub-id-type="doi">10.1007/s11004-017-9673-2</pub-id>
</citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maatouk</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Roustant</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Richet</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Cross-validation estimations of hyper-parameters of Gaussian processes with inequality constraints</article-title>. <source>Procedia Environ. Sci.</source> <volume>27</volume>, <fpage>38</fpage>&#x2013;<lpage>44</lpage>. <pub-id pub-id-type="doi">10.1016/j.proenv.2015.07.105</pub-id>
</citation>
</ref>
<ref id="B28">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mao</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Jagtap</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Karniadakis</surname>
<given-names>G. E.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Physics-informed neural networks for high-speed flows</article-title>. <source>Comput. Methods Appl. Mech. Eng.</source> <volume>360</volume>, <fpage>112789</fpage>. <pub-id pub-id-type="doi">10.1016/j.cma.2019.112789</pub-id>
</citation>
</ref>
<ref id="B29">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nabati</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ghorashi</surname>
<given-names>S. A.</given-names>
</name>
<name>
<surname>Shahbazian</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Jgpr: a computationally efficient multi-target Gaussian process regression algorithm</article-title>. <source>Mach. Learn.</source> <volume>111</volume>, <fpage>1987</fpage>&#x2013;<lpage>2010</lpage>. <pub-id pub-id-type="doi">10.1007/s10994-022-06170-3</pub-id>
</citation>
</ref>
<ref id="B30">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pensoneault</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>X.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Nonnegativity-enforced Gaussian process regression</article-title>. <source>Theor. Appl. Mech. Lett.</source> <volume>10</volume>, <fpage>182</fpage>&#x2013;<lpage>187</lpage>. <pub-id pub-id-type="doi">10.1016/j.taml.2020.01.036</pub-id>
</citation>
</ref>
<ref id="B31">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Pimentel</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Clifton</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Clifton</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Tarassenko</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2013</year>). &#x201c;<article-title>Probabilistic estimation of respiratory rate using Gaussian processes</article-title>,&#x201d; in <source>2013 35th annual international conference of the IEEE engineering in medicine and biology society (EMBC)</source> (<publisher-name>IEEE</publisher-name>), <fpage>2902</fpage>&#x2013;<lpage>2905</lpage>.</citation>
</ref>
<ref id="B32">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Qiang</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Shi</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Ren</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Shi</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2023</year>). <article-title>Integrating physics-informed recurrent Gaussian process regression into instance transfer for predicting tool wear in milling process</article-title>. <source>J. Manuf. Syst.</source> <volume>68</volume>, <fpage>42</fpage>&#x2013;<lpage>55</lpage>. <pub-id pub-id-type="doi">10.1016/j.jmsy.2023.02.019</pub-id>
</citation>
</ref>
<ref id="B33">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Raissi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Perdikaris</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Karniadakis</surname>
<given-names>G. E.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Machine learning of linear differential equations using Gaussian processes</article-title>. <source>J. Comput. Phys.</source> <volume>348</volume>, <fpage>683</fpage>&#x2013;<lpage>693</lpage>. <pub-id pub-id-type="doi">10.1016/j.jcp.2017.07.050</pub-id>
</citation>
</ref>
<ref id="B34">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rasmussen</surname>
<given-names>C. E.</given-names>
</name>
<name>
<surname>Nickisch</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Gaussian processes for machine learning (gpml) toolbox</article-title>. <source>J. Mach. Learn. Res.</source> <volume>11</volume>, <fpage>3011</fpage>&#x2013;<lpage>3015</lpage>.</citation>
</ref>
<ref id="B36">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Riihim&#xe4;ki</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Vehtari</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2010</year>). &#x201c;<article-title>Gaussian processes with monotonicity information</article-title>,&#x201d; in <source>Proceedings of the thirteenth international conference on artificial intelligence and statistics</source> (<publisher-name>JMLR Workshop and Conference Proceedings</publisher-name>), <fpage>645</fpage>&#x2013;<lpage>652</lpage>.</citation>
</ref>
<ref id="B37">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Salzmann</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Urtasun</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Implicitly constrained Gaussian process regression for monocular non-rigid pose estimation</article-title>. <source>Adv. neural Inf. Process. Syst.</source> <volume>23</volume>.</citation>
</ref>
<ref id="B38">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Song</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Shi</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2021</year>). &#x201c;<article-title>A nonlinear finite element-based supervised machine learning approach for efficiently predicting collapse resistance of wireline tool housings subjected to combined loads</article-title>,&#x201d; in <conf-name>ASME International Mechanical Engineering Congress and Exposition</conf-name> (<publisher-name>American Society of Mechanical Engineers</publisher-name>), <volume>85680</volume>, <fpage>V012T12A057</fpage>. <pub-id pub-id-type="doi">10.1115/imece2021-72222</pub-id>
</citation>
</ref>
<ref id="B39">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stein</surname>
<given-names>M. L.</given-names>
</name>
</person-group> (<year>1988</year>). <article-title>Asymptotically efficient prediction of a random field with a misspecified covariance function</article-title>. <source>Ann. Statistics</source> <volume>16</volume>, <fpage>55</fpage>&#x2013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.1214/aos/1176350690</pub-id>
</citation>
</ref>
<ref id="B40">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Swiler</surname>
<given-names>L. P.</given-names>
</name>
<name>
<surname>Gulian</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Frankel</surname>
<given-names>A. L.</given-names>
</name>
<name>
<surname>Safta</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Jakeman</surname>
<given-names>J. D.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>A survey of constrained Gaussian process regression: approaches and implementation challenges</article-title>. <source>J. Mach. Learn. Model. Comput.</source> <volume>1</volume>, <fpage>119</fpage>&#x2013;<lpage>156</lpage>. <pub-id pub-id-type="doi">10.1615/jmachlearnmodelcomput.2020035155</pub-id>
</citation>
</ref>
<ref id="B41">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Han</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Tiwari</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Work</surname>
<given-names>D. B.</given-names>
</name>
</person-group> (<year>2021</year>). &#x201c;<article-title>Personalized adaptive cruise control via Gaussian process regression</article-title>,&#x201d; in <source>2021 IEEE international intelligent transportation systems conference (ITSC)</source> (<publisher-name>IEEE</publisher-name>), <fpage>1496</fpage>&#x2013;<lpage>1502</lpage>.</citation>
</ref>
<ref id="B42">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wilkie</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Galasso</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Gaussian process regression for fatigue reliability analysis of offshore wind turbines</article-title>. <source>Struct. Saf.</source> <volume>88</volume>, <fpage>102020</fpage>. <pub-id pub-id-type="doi">10.1016/j.strusafe.2020.102020</pub-id>
</citation>
</ref>
<ref id="B43">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Williams</surname>
<given-names>C. K.</given-names>
</name>
<name>
<surname>Rasmussen</surname>
<given-names>C. E.</given-names>
</name>
</person-group> (<year>2006</year>). <source>Gaussian Processes for Machine Learning</source> <volume>2</volume>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</citation>
</ref>
<ref id="B44">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Barajas-Solano</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Tartakovsky</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Tartakovsky</surname>
<given-names>A. M.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Physics-informed cokriging: a Gaussian-process-regression-based multifidelity method for data-model convergence</article-title>. <source>J. Comput. Phys.</source> <volume>395</volume>, <fpage>410</fpage>&#x2013;<lpage>431</lpage>. <pub-id pub-id-type="doi">10.1016/j.jcp.2019.06.041</pub-id>
</citation>
</ref>
<ref id="B45">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Tartakovsky</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Tartakovsky</surname>
<given-names>A. M.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Physics information aided kriging using stochastic simulation models</article-title>. <source>SIAM J. Sci. Comput.</source> <volume>43</volume>, <fpage>A3862</fpage>&#x2013;<lpage>A3891</lpage>. <pub-id pub-id-type="doi">10.1137/20m1331585</pub-id>
</citation>
</ref>
<ref id="B46">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2004</year>). <article-title>Inconsistent estimation and asymptotically equal interpolations in model-based geostatistics</article-title>. <source>J. Am. Stat. Assoc.</source> <volume>99</volume>, <fpage>250</fpage>&#x2013;<lpage>261</lpage>. <pub-id pub-id-type="doi">10.1198/016214504000000241</pub-id>
</citation>
</ref>
</ref-list>
<sec id="s11">
<title>Nomenclature</title>
<sec>
<title>Abbreviations</title>
<def-list>
<def-item>
<term id="G1-fmech.2024.1410190">
<bold>additiveGP</bold>
</term>
<def>
<p>Additive Gaussian Process method</p>
</def>
</def-item>
<def-item>
<term id="G2-fmech.2024.1410190">
<bold>GP</bold>
</term>
<def>
<p>Gaussian Process</p>
</def>
</def-item>
<def-item>
<term id="G3-fmech.2024.1410190">
<bold>HMC</bold>
</term>
<def>
<p>Hamiltonian Monte Carlo</p>
</def>
</def-item>
<def-item>
<term id="G4-fmech.2024.1410190">
<bold>HMCad</bold>
</term>
<def>
<p>Hard-constrained Hamiltonian Monte Carlo with adaptivity</p>
</def>
</def-item>
<def-item>
<term id="G5-fmech.2024.1410190">
<bold>HMCboth</bold>
</term>
<def>
<p>Hard-constrained Hamiltonian Monte Carlo with both adaptivity and variance</p>
</def>
</def-item>
<def-item>
<term id="G6-fmech.2024.1410190">
<bold>HMCsoftad</bold>
</term>
<def>
<p>Soft-constrained Hamiltonian Monte Carlo with adaptivity</p>
</def>
</def-item>
<def-item>
<term id="G7-fmech.2024.1410190">
<bold>HMCsoftboth</bold>
</term>
<def>
<p>Soft-constrained Hamiltonian Monte Carlo with both adaptivity and variance</p>
</def>
</def-item>
<def-item>
<term id="G8-fmech.2024.1410190">
<bold>HMCsoftvar</bold>
</term>
<def>
<p>Soft-constrained Hamiltonian Monte Carlo with variance</p>
</def>
</def-item>
<def-item>
<term id="G9-fmech.2024.1410190">
<bold>HMCvar</bold>
</term>
<def>
<p>Hard-constrained Hamiltonian Monte Carlo with variance</p>
</def>
</def-item>
<def-item>
<term id="G10-fmech.2024.1410190">
<bold>MCMC</bold>
</term>
<def>
<p>Markov chain Monte Carlo</p>
</def>
</def-item>
<def-item>
<term id="G11-fmech.2024.1410190">
<bold>MH</bold>
</term>
<def>
<p>Metropolis-Hastings</p>
</def>
</def-item>
<def-item>
<term id="G12-fmech.2024.1410190">
<bold>PDE</bold>
</term>
<def>
<p>Partial differential equation</p>
</def>
</def-item>
<def-item>
<term id="G13-fmech.2024.1410190">
<bold>QHMC</bold>
</term>
<def>
<p>Quantum-inspired Hamiltonian Monte Carlo</p>
</def>
</def-item>
<def-item>
<term id="G14-fmech.2024.1410190">
<bold>QHMCad</bold>
</term>
<def>
<p>Hard-constrained Quantum-inspired Hamiltonian Monte Carlo with adaptivity</p>
</def>
</def-item>
<def-item>
<term id="G15-fmech.2024.1410190">
<bold>QHMCboth</bold>
</term>
<def>
<p>Hard-constrained Quantum-inspired Hamiltonian Monte Carlo with both adaptivity and variance</p>
</def>
</def-item>
<def-item>
<term id="G16-fmech.2024.1410190">
<bold>QHMCsoftad</bold>
</term>
<def>
<p>Soft-constrained Quantum-inspired Hamiltonian Monte Carlo with adaptivity</p>
</def>
</def-item>
<def-item>
<term id="G17-fmech.2024.1410190">
<bold>QHMCsoftboth</bold>
</term>
<def>
<p>Soft-constrained Quantum-inspired Hamiltonian Monte Carlo with both adaptivity and variance</p>
</def>
</def-item>
<def-item>
<term id="G18-fmech.2024.1410190">
<bold>QHMCsoftvar</bold>
</term>
<def>
<p>Soft-constrained Quantum-inspired Hamiltonian Monte Carlo with variance</p>
</def>
</def-item>
<def-item>
<term id="G19-fmech.2024.1410190">
<bold>QHMCvar</bold>
</term>
<def>
<p>Hard-constrained Quantum-inspired Hamiltonian Monte Carlo with variance</p>
</def>
</def-item>
<def-item>
<term id="G20-fmech.2024.1410190">
<bold>SNR</bold>
</term>
<def>
<p>Signal-to-noise ratio</p>
</def>
</def-item>
<def-item>
<term id="G21-fmech.2024.1410190">
<bold>tnHMC</bold>
</term>
<def>
<p>Truncated Gaussian method with Hamiltonian Monte Carlo sampling</p>
</def>
</def-item>
<def-item>
<term id="G22-fmech.2024.1410190">
<bold>tnQHMC</bold>
</term>
<def>
<p>Truncated Gaussian method with Quantum-inspired Hamiltonian Monte Carlo sampling</p>
</def>
</def-item>
</def-list>
</sec>
<sec>
<title>Symbols</title>
<def-list>
<def-item>
<term id="G23-fmech.2024.1410190">
<inline-formula id="inf235">
<mml:math id="m279">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b4;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</term>
<def>
<p>Kronecker delta</p>
</def>
</def-item>
<def-item>
<term id="G24-fmech.2024.1410190">
<inline-formula id="inf236">
<mml:math id="m280">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</term>
<def>
<p>Signal variance</p>
</def>
</def-item>
<def-item>
<term id="G25-fmech.2024.1410190">
<inline-formula id="inf237">
<mml:math id="m281">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</term>
<def>
<p>Hyperparameters of the Gaussian process model</p>
</def>
</def-item>
<def-item>
<term id="G26-fmech.2024.1410190">
<inline-formula id="inf238">
<mml:math id="m282">
<mml:mrow>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</term>
<def>
<p>Length-scale</p>
</def>
</def-item>
</def-list>
</sec>
</sec>
</back>
</article>