<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Energy Res.</journal-id>
<journal-title>Frontiers in Energy Research</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Energy Res.</abbrev-journal-title>
<issn pub-type="epub">2296-598X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">1064464</article-id>
<article-id pub-id-type="doi">10.3389/fenrg.2022.1064464</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Energy Research</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Research on identification of server energy consumption characteristics <italic>via</italic> Dirichlet max-margin factor analysis similarity preservation model</article-title>
<alt-title alt-title-type="left-running-head">Chen et al.</alt-title>
<alt-title alt-title-type="right-running-head">
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fenrg.2022.1064464">10.3389/fenrg.2022.1064464</ext-link>
</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Chen</surname>
<given-names>Buhua</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/2041773/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Liu</surname>
<given-names>Hanjiang</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Shen</surname>
<given-names>Chengbin</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Shen</surname>
<given-names>Buyang</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Li</surname>
<given-names>Kunlun</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Laboratory of Cloud-Network Integration</institution>, <institution>Research Institute of China Telecom</institution>, <addr-line>Guangzhou</addr-line>, <country>China</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Laboratory of Cloud-Network Integration</institution>, <institution>Research Institute of China Telecom</institution>, <addr-line>Shanghai</addr-line>, <country>China</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Laboratory of Cloud-Network Integration</institution>, <institution>Research Institute of China Telecom</institution>, <addr-line>Beijing</addr-line>, <country>China</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1402895/overview">Youcai Liang</ext-link>, South China University of Technology, China</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/2082415/overview">Penghui Wang</ext-link>, Xidian University, China</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/2090505/overview">Chu Jun</ext-link>, Nanchang Hangkong University, China</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Buhua Chen, <email>chenbuh@chinatelecom.cn</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Process and Energy Systems Engineering, a section of the journal Frontiers in Energy Research</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>10</day>
<month>01</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>10</volume>
<elocation-id>1064464</elocation-id>
<history>
<date date-type="received">
<day>08</day>
<month>10</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>28</day>
<month>11</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2023 Chen, Liu, Shen, Shen and Li.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Chen, Liu, Shen, Shen and Li</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Growing server energy consumption is a significant environmental issue, and mitigating it is a key technological challenge. Application-level energy minimization strategies depend on accurate modeling of energy consumption during an application&#x2019;s execution. This paper presents a theoretical and experimental study of the dpMMSPFA model in the field of server energy consumption identification. For classification in hidden spaces, dpMMSPFA uses latent variable support vector machines (LVSVM) to learn discriminative subspaces under maximal marginal constraints. The factor analysis (FA) model, the similarity preservation (SP) item, the Dirichlet process mixture (DPM) model, and the maximal marginal classifier are jointly learned under a unified Bayesian architecture to advance the predictive power of classification. The parameters of the proposed model can be inferred by simple and efficient Gibbs sampling owing to the conditional conjugacy property. Empirical results on various datasets demonstrate that 1) max-margin joint learning can significantly improve prediction performance over models that implement feature extraction and classification separately, while retaining generative ability; 2) dpMMSPFA is superior to MMFA when employing the SP item and the Dirichlet process mixture as prior knowledge; 3) the dpMMSPFA classification model often achieves better results on benchmark and measured server energy consumption datasets; and 4) the recognition rate can reach as high as 95.79% with 10 components, far better than other models on measured server energy consumption datasets.</p>
</abstract>
<kwd-group>
<kwd>server energy consumption</kwd>
<kwd>dpMMSPFA model</kwd>
<kwd>factor analysis</kwd>
<kwd>similarity preserving item</kwd>
<kwd>Dirichlet process mixture</kwd>
<kwd>latent variable support vector machine</kwd>
<kwd>classification performance</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>The issue of energy consumption in data centers has grown significantly in recent years with the rapid expansion of ICT technology and infrastructure. Especially since the outbreak of COVID-19 in 2020, the demand for digital services for economic and social development has skyrocketed: more and more consumer and commercial activities have moved online, and the digital and information technology sectors have experienced tremendous growth. Omdia&#x2019;s statistics show that consumer data traffic from cellular networks and fixed broadband will grow at a compound annual growth rate of 29% between 2018 and 2024, rising from 1.3&#xa0;million PB in 2018 to 5.8&#xa0;million PB in 2024. This growth rate places great pressure on the current ICT infrastructure, which includes data centers, data center Internet, and Internet access networks (<xref ref-type="bibr" rid="B16">Moises, 2021</xref>).</p>
<p>To meet the new demand, operators, cloud vendors and Internet enterprises have upgraded and expanded their data centers. Processing business load requires a great deal of electric energy, and data centers therefore also generate a large amount of indirect carbon emissions. The share of global power consumed by data centers, including those in China, is expected to rise from 2% in 2020 to over 4% in 2025 (<xref ref-type="bibr" rid="B5">DC Cooling, 2021</xref>). High energy consumption has a serious impact on social security, climate warming, air quality and the reliability of the power grid. With the gradual depletion of traditional power sources and soaring prices, the cost of maintaining data center operations will exceed the cost of purchasing the system hardware. Therefore, optimizing the server energy consumption of a cloud operating system or a data center has become an essential problem in the current technical environment.</p>
<p>The ordinary way of measuring energy is to measure the electrical parameters of the server directly with electrical instruments to obtain its actual energy consumption (<xref ref-type="bibr" rid="B14">Konstantakos et al., 2008</xref>; <xref ref-type="bibr" rid="B25">Rotem et al., 2012</xref>). However, this physical measurement approach can only obtain the actual power; it cannot explain what causes the rise, drop or unexpected change of power. Since changes in server energy consumption are necessarily accompanied by changes in system resource usage, it is necessary to design an identification model for energy usage that can accurately classify the server energy consumption level, reflect the relationship between system resource utilization and server energy consumption, and analyze the influence of resource utilization on energy consumption.</p>
<p>In brief, this paper proposes the Dirichlet max-margin factor analysis similarity preservation model (dpMMSPFA) for feature identification of server energy consumption levels. To ensure that energy consumption analysis rests on high classification accuracy, this research aims to provide a comprehensive energy consumption feature recognition method that meets the need for high accuracy. It also aims to provide theoretical support and assist designers in controlling energy consumption when building servers.</p>
<sec id="s1-1">
<title>1.1 The main contributions of the paper are summarized as follows</title>
<p>
<list list-type="simple">
<list-item>
<p>1) A novel Dirichlet maximum marginal similarity preservation factor analysis model (dpMMSPFA) that integrates the FA model, the SP item, the Dirichlet process mixture model, and LVSVMs is designed in a unified Bayesian framework.</p>
</list-item>
<list-item>
<p>2) Extensive experiments on widely adopted UCI benchmarks are conducted to validate the proposed model and evaluate its generalization performance.</p>
</list-item>
<list-item>
<p>3) To build and train the model in this study, 17 features related to server energy consumption are chosen, and only 7.5% of all collected server energy consumption data (a small training size) is used for training. The proposed approach is tuned through experiments that vary the hyper-parameter values to achieve the best energy consumption feature recognition performance.</p>
</list-item>
<list-item>
<p>4) The proposed model is compared with five other models (including two-stage and joint models) on the recognition of server workloads (&#x201c;CPU intensive tasks&#x201d;, &#x201c;I/O intensive tasks&#x201d;, &#x201c;Load intensive tasks&#x201d; and &#x201c;Non-Loaded tasks&#x201d;) under different energy consumption characteristic dimensions.</p>
</list-item>
</list>
</p>
</sec>
<sec id="s1-2">
<title>1.2 The structure of the paper is organized as follows</title>
<p>
<list list-type="simple">
<list-item>
<p>1) <xref ref-type="sec" rid="s1">Section 1</xref>: This section explains the essential research significance of server energy consumption classification.</p>
</list-item>
<list-item>
<p>2) <xref ref-type="sec" rid="s2">Section 2</xref>: The existing linear and nonlinear models (machine learning, deep learning and reinforcement learning) for server energy consumption, and their defects, are introduced. The development status of nonparametric Bayesian models is also stated in this section.</p>
</list-item>
<list-item>
<p>3) <xref ref-type="sec" rid="s3">Section 3</xref>: Four key models under the Bayesian framework are described, namely the factor analysis (FA) model, the similarity preservation (SP) supervision item, the Dirichlet process mixture (DPM) and the latent variable support vector machine (LVSVM).</p>
</list-item>
<list-item>
<p>4) <xref ref-type="sec" rid="s4">Section 4</xref>: The mathematical construction of the model, centered on the Gibbs sampling inference method, is presented, and the nonparametric Bayesian classification model, the Dirichlet max-margin factor analysis similarity preservation model (dpMMSPFA), is proposed.</p>
</list-item>
<list-item>
<p>5) <xref ref-type="sec" rid="s5">Section 5</xref>: Experiments with the dpMMSPFA model on UCI benchmark data and measured server energy consumption data indicate that the model has notable identification and generalization performance.</p>
</list-item>
<list-item>
<p>6) <xref ref-type="sec" rid="s6">Section 6</xref>: This section summarizes the primary work of this paper and envisages future research on nonparametric Bayesian models and on energy consumption in intelligent hardware.</p>
</list-item>
</list>
</p>
</sec>
</sec>
<sec id="s2">
<title>2 Related work</title>
<sec id="s2-1">
<title>2.1 Current research on server energy consumption model</title>
<p>The general layout of the data center energy consumption modeling and forecasting framework is shown in <xref ref-type="fig" rid="F1">Figure 1</xref>. The components of a data center can generally be separated into three levels: hardware, software, and applications. The server energy consumption studied in this research is influenced not only by the hardware configuration, but also by the operating system and the types of applications running. Two types of server energy consumption models, linear and nonlinear, have been applied to server energy consumption estimation.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>The general layout of the data center energy consumption modeling and forecasting framework.</p>
</caption>
<graphic xlink:href="fenrg-10-1064464-g001.tif"/>
</fig>
<p>Preliminary studies of prediction models for server energy consumption were based on the linear model (<xref ref-type="bibr" rid="B22">P&#xf6;ss and Nambiar, 2010</xref>; <xref ref-type="bibr" rid="B6">Davis et al., 2012</xref>; <xref ref-type="bibr" rid="B7">Davis et al., 2014</xref>). <xref ref-type="bibr" rid="B29">Subramaniam and Feng, 2014</xref> used the SPECpower benchmark to test seven heterogeneous servers and evaluated the accuracy of a linear regression model based on CPU utilization (<xref ref-type="bibr" rid="B13">Kilper et al., 2011</xref>). The findings demonstrated that not all servers exhibit a linear relationship between power usage and server utilization characteristics. Therefore, researchers began utilizing nonlinear machine learning and deep learning models to develop server energy consumption models, to boost the precision of energy consumption forecasting and refine energy consumption control.</p>
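As a concrete illustration of the linear models discussed above (ours, not code from the cited studies), server power is often modeled as an affine function of CPU utilization and fitted by least squares; the function name and the synthetic data below are assumptions for illustration.

```python
import numpy as np

def fit_linear_power_model(util, power):
    """Fit power = a + b * util by ordinary least squares."""
    A = np.column_stack([np.ones_like(util), util])
    coef, *_ = np.linalg.lstsq(A, power, rcond=None)
    return coef  # [idle-power estimate, slope]

# Synthetic example: idle power 100 W, 250 W at full load, plus noise.
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 200)                 # CPU utilization samples
p = 100.0 + 150.0 * u + rng.normal(0, 2, 200)  # measured power (W)
a, b = fit_linear_power_model(u, p)            # a ~ 100, b ~ 150
```

The cited finding that not all servers follow such a line is exactly what motivates the nonlinear models discussed next.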
<p>Real-world data frequently have a multimodal distribution, making it impossible for a straightforward classification/regression algorithm to provide an acceptable criterion. Dimensionality reduction techniques are categorized according to whether they incorporate supervised content, and can essentially be divided into two types: unsupervised and supervised. Unsupervised methods extract characteristic information that maintains the statistical structure of the data, while supervised methods not only extract low-dimensional representations but also carry certain <italic>a priori</italic> information (e.g., category information). Among unsupervised dimensionality reduction methods, FA (<xref ref-type="bibr" rid="B4">Chen et al., 2010</xref>; <xref ref-type="bibr" rid="B28">Shi et al., 2011</xref>; <xref ref-type="bibr" rid="B8">Du et al., 2012</xref>), PCA (<xref ref-type="bibr" rid="B31">Tharwat, 2016</xref>), and other techniques hold a vital and far-reaching status, while among supervised methods, LDA strategies are widely valued by researchers, experts and scholars. <xref ref-type="bibr" rid="B38">Zhou et al. (2018)</xref> incorporated PCA dimension reduction prior to applying a typical machine learning energy consumption model. While this somewhat alleviates the risk of unstable and overfitted predictions, it still restricts the processing of energy consumption data in general.</p>
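A minimal sketch (an assumption, not the cited paper's code) of the two-stage idea just described: PCA dimension reduction followed by a simple regression-based energy model on the reduced features.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 17))            # 17 resource-usage features
y = 3.0 * X[:, 0] + rng.normal(0, 0.1, 100)  # synthetic energy target
Z = pca_reduce(X, 5)                      # reduced 5-D representation
# Second stage: a linear model fitted on the reduced features.
w, *_ = np.linalg.lstsq(np.column_stack([np.ones(100), Z]), y, rcond=None)
```

Because the two stages are trained separately, the PCA step can discard directions that matter for prediction, which is the restriction the text points out and which joint learning (as in dpMMSPFA) avoids.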
<p>An energy consumption model, FSDL, based on feature selection and deep learning was proposed (<xref ref-type="bibr" rid="B18">Liang et al., 2020</xref>). To maximize forecast accuracy, this model combines feature selection and deep learning techniques; it is, however, vulnerable to overfitting. <xref ref-type="bibr" rid="B19">Lin et al. (2020)</xref> developed three power consumption models based on BP neural networks, LSTM neural networks, and Elman neural networks, respectively, and compared their prediction accuracy under various task loads. When training cost and prediction accuracy are both taken into account, ENN-PM outperforms TW_BP_PM and ML_STM-PM. As is common knowledge, humans are adept at recognizing a novel object from a relatively small number of examples. In contrast, deep learning requires a large amount of data to train a proper model. Especially as the number of neural network layers increases, the model becomes more complex, and training it to convergence takes more time and computing power as the number of parameters to be optimized rises. Server energy consumption has also been forecasted <italic>via</italic> Q-learning, B-ANN, MLP and other reinforcement learning techniques (<xref ref-type="bibr" rid="B27">Shen et al., 2013</xref>; <xref ref-type="bibr" rid="B17">Li et al., 2010</xref>; <xref ref-type="bibr" rid="B11">Islam et al., 2012</xref>; <xref ref-type="bibr" rid="B20">Moreno and Xu, 2012</xref>; <xref ref-type="bibr" rid="B3">Caglar and Gokhale, 2014</xref>; <xref ref-type="bibr" rid="B30">Tesauro et al., 2017</xref>). However, reinforcement learning requires a significant accumulation of experience before training truly improves. Additionally, if rewards from the environment arrive in an untimely manner or the reward setting is unreasonable, training can easily fall into local optima rather than reaching the global optimum. This presents another challenge for reinforcement learning in the field of server energy consumption modeling.</p>
</sec>
<sec id="s2-2">
<title>2.2 The background for dpMMSPFA model</title>
<p>In light of the aforementioned restrictions, and the fact that the characteristics of server energy consumption datasets have not been studied in depth, we decided to develop a novel machine learning model.</p>
<p>Factor analysis (FA) is frequently used for data interrelationship evaluation, data dimensionality reduction, pattern classification, and characteristic description (<xref ref-type="bibr" rid="B4">Chen et al., 2010</xref>; <xref ref-type="bibr" rid="B28">Shi et al., 2011</xref>; <xref ref-type="bibr" rid="B8">Du et al., 2012</xref>). In FA models, implicit factors serve as low-dimensional representations of the observed data in the hidden space. Even though FA is an unsupervised dimensionality reduction technique, it can not only reduce dimensions but also represent how the subspace reconstructs the original space, with Bayesian inference used to implement the FA model. Since FA is unsupervised and lacks <italic>a priori</italic> information such as label content, it can only characterize low-dimensional observations (<xref ref-type="bibr" rid="B4">Chen et al., 2010</xref>; <xref ref-type="bibr" rid="B28">Shi et al., 2011</xref>; <xref ref-type="bibr" rid="B8">Du et al., 2012</xref>). How to introduce supervised content into the latent factors has recently interested many experts and academics. Attempts have been made to include discriminative supervised content as part of the input (<xref ref-type="bibr" rid="B15">Lacoste-Julien et al., 2008</xref>; <xref ref-type="bibr" rid="B12">Jiang et al., 2011</xref>; <xref ref-type="bibr" rid="B39">Zhu et al., 2012</xref>; <xref ref-type="bibr" rid="B41">Zhu et al., 2013</xref>), and a label-consistent supervised K-SVD approach was proposed to train discriminative dictionaries for sparse coding (<xref ref-type="bibr" rid="B12">Jiang et al., 2011</xref>). Label-consistent K-SVD associates label information with each dictionary item to carry out discriminative supervised sparse encoding during consistency learning (<xref ref-type="bibr" rid="B12">Jiang et al., 2011</xref>).</p>
<p>Thus, supervised content is vital for boosting the predictive ability of a classification model. Against this background, our model introduces label content of the original input data, referred to as the similarity preservation (SP) item. By introducing supervised content into FA, the proposed model not only keeps the best data description and feature extraction capabilities, but also maximizes the <italic>a priori</italic> predictive potential of the label content.</p>
<p>DP mixture (DPM) models have been introduced as nonparametric Bayesian clustering algorithms for ME models (<xref ref-type="bibr" rid="B24">Rasmussen and Ghahramani, 2001</xref>; <xref ref-type="bibr" rid="B26">Shahbaba and Neal, 2009</xref>; <xref ref-type="bibr" rid="B36">Zhang et al., 2014</xref>). As an illustration, <xref ref-type="bibr" rid="B26">Shahbaba and Neal, 2009</xref> created dpMNL, a nonlinear multinomial logit (MNL) model based on DP mixtures.</p>
<p>The classification model is the foundation of recognition accuracy. The most classical representatives are support vector machines and random forests (RF). <xref ref-type="bibr" rid="B23">Zoubin et al. (2015)</xref> transformed the random forest classification model into a <italic>&#x3b2;</italic>-Bayesian posterior framework, presenting a new idea for creating classifiers within the Bayesian framework. Although the <italic>&#x3b2;</italic>-Bayesian posterior is not an actual Bayesian theoretical framework, it is still promising. Support vector machines (SVMs), as traditional representatives of classifiers (<xref ref-type="bibr" rid="B33">Upadhyay et al., 2021</xref>), are capable of maximizing the margins between different classes of data. Data augmentation techniques characterize the latent variables of SVMs as LVSVMs (<xref ref-type="bibr" rid="B21">Polson and Scott, 2011</xref>), which enabled the successful inference of the subsequently proposed Gibbs max-margin topic model (<xref ref-type="bibr" rid="B40">Zhu, et al., 2014</xref>) and fast max-margin matrix factorization (<xref ref-type="bibr" rid="B35">Xu, et al., 2013</xref>).</p>
<p>In this study, we accordingly construct the Dirichlet maximum marginal similarity preservation factor analysis model (dpMMSPFA) with LVSVM, which jointly learns discriminative subspaces, supervised content, clustering and maximum marginal classifiers under a Bayesian architecture. The hidden representations are therefore faithful and suitable for supervised predictive recognition tasks. Gibbs MedLDA (<xref ref-type="bibr" rid="B39">Zhu et al., 2012</xref>; <xref ref-type="bibr" rid="B41">Zhu et al., 2013</xref>) was inferred <italic>via</italic> an efficient estimation algorithm, Gibbs sampling. Similarly, the conditional distributions of dpMMSPFA take a tractable form under the action of the augmented variables. Thus, all parameters of the dpMMSPFA model can be estimated with simple and effective Gibbs sampling.</p>
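Since the inference above rests on conditional conjugacy, a toy illustration may help. The sketch below is ours, not the paper's full dpMMSPFA sampler: a Gibbs sampler for a Normal model with a Normal prior on the mean and an Inverse-Gamma prior on the variance, where each full conditional has closed form, which is the same property that makes the proposed model amenable to Gibbs sampling. All names and hyper-parameter values are assumptions.

```python
import numpy as np

def gibbs_normal(x, iters=2000, mu0=0.0, tau2=100.0, a0=2.0, b0=2.0):
    """Gibbs sampling for x_i ~ N(mu, s2) with conjugate priors."""
    n, xbar = len(x), float(np.mean(x))
    rng = np.random.default_rng(42)
    mu, s2 = 0.0, 1.0
    samples = []
    for _ in range(iters):
        # mu | s2, x  ~  Normal  (conjugate update)
        var = 1.0 / (n / s2 + 1.0 / tau2)
        mean = var * (n * xbar / s2 + mu0 / tau2)
        mu = rng.normal(mean, np.sqrt(var))
        # s2 | mu, x  ~  Inverse-Gamma  (conjugate update)
        a = a0 + n / 2.0
        b = b0 + 0.5 * np.sum((x - mu) ** 2)
        s2 = 1.0 / rng.gamma(a, 1.0 / b)
        samples.append((mu, s2))
    return np.array(samples)

data = np.random.default_rng(7).normal(5.0, 2.0, 300)
draws = gibbs_normal(data)
post_mu = draws[500:, 0].mean()  # posterior mean of mu after burn-in
```

dpMMSPFA applies the same pattern at scale: each parameter block is drawn from its closed-form conditional in turn.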
</sec>
</sec>
<sec id="s3">
<title>3 Preliminaries for dpMMSPFA model</title>
<sec id="s3-1">
<title>3.1 Factor analysis (FA)</title>
<p>Factor analysis (FA) models can represent the precise inner structural links between high-dimensional observations and low-dimensional hidden variables. By projecting the high-dimensional observations into the low-dimensional space, the factor analysis model determines the potential low-dimensional effective features of the data. Suppose there are N column vectors <inline-formula id="inf1">
<mml:math id="m1">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x22ef;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>N</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and each of them is a <inline-formula id="inf2">
<mml:math id="m2">
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> -dimensional vector. Let <inline-formula id="inf3">
<mml:math id="m3">
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="[" close="]" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x22ef;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>N</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, and then the original input observations <inline-formula id="inf4">
<mml:math id="m4">
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is generated as a linear transformation of some lower <inline-formula id="inf5">
<mml:math id="m5">
<mml:mrow>
<mml:mi>K</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> -dimensional hidden variable <inline-formula id="inf6">
<mml:math id="m6">
<mml:mrow>
<mml:mi mathvariant="bold-italic">S</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> plus additive Gaussian noise <inline-formula id="inf7">
<mml:math id="m7">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. The transformation matrix <inline-formula id="inf8">
<mml:math id="m8">
<mml:mrow>
<mml:mi mathvariant="bold-italic">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the load matrix with each column <inline-formula id="inf9">
<mml:math id="m9">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x22ef;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>K</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. However, the number of factors <inline-formula id="inf10">
<mml:math id="m10">
<mml:mrow>
<mml:mi>K</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is not known in advance and must be specified beforehand. Therefore, the factor analysis generating expression can be given as<disp-formula id="e1">
<mml:math id="m11">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">D</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>K</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">S</mml:mi>
<mml:mrow>
<mml:mi>K</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(1)</label>
</disp-formula>
</p>
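The generative equation above can be sketched numerically; the dimensions P, K, N and the noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of the FA generative model X = D S + eps (Eq. 1).
rng = np.random.default_rng(3)
P, K, N = 17, 5, 200                   # observation dim, factors, samples
D = rng.normal(size=(P, K))            # loading matrix, columns d_k
S = rng.normal(size=(K, N))            # hidden low-dimensional factors
eps = rng.normal(0, 0.1, size=(P, N))  # additive Gaussian noise
X = D @ S + eps                        # generated observations

# Given D, a least-squares projection recovers the factors closely,
# illustrating how the hidden space describes the observations.
S_hat = np.linalg.lstsq(D, X, rcond=None)[0]
```

In the Bayesian treatment used in this paper, D, S, and the noise variance are of course inferred jointly rather than assumed known.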
<p>FA is employed as an unsupervised archetypal model, one that does not use implicit features with label content to describe the initial observations of the data (<xref ref-type="bibr" rid="B4">Chen et al., 2010</xref>; <xref ref-type="bibr" rid="B28">Shi et al., 2011</xref>; <xref ref-type="bibr" rid="B8">Du et al., 2012</xref>). In light of this, we provide an approach for supervised data representation that, while using FA to represent the original observations, successfully increases the classifier&#x2019;s predictive power through label content.</p>
</sec>
<sec id="s3-2">
<title>3.2 Similarity preservation (SP) item</title>
<p>Incorporating supervised content into model learning can significantly enhance the classifier&#x2019;s overall performance. We add the SP item to FA so that the extracted hidden variables can best characterize the original input data, which maximizes the <italic>a priori</italic> prediction ability of the label content. Commonly, similarity is denoted by a symmetric positive definite matrix. <xref ref-type="bibr" rid="B12">Jiang et al. (2011)</xref> successfully improved a classifier&#x2019;s recognition performance by conditioning it on label content. In a similar vein, the similarity preservation (SP) item in our proposed model provides a representation of label content.</p>
<p>As shown in Eq. <xref ref-type="disp-formula" rid="e2">2</xref>, we start from the cosine similarity matrix.<disp-formula id="e2">
<mml:math id="m12">
<mml:mrow>
<mml:msub>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>T</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mfenced open="&#x2016;" close="&#x2016;" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2a;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
<label>(2)</label>
</disp-formula>where <inline-formula id="inf11">
<mml:math id="m13">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf12">
<mml:math id="m14">
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> denote the labels of the column vectors <inline-formula id="inf13">
<mml:math id="m15">
<mml:mrow>
<mml:mi mathvariant="bold-italic">x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
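As a minimal sketch, the cosine similarity entry of Eq. 2 can be computed as follows; the vectors here are illustrative, not data from the paper.

```python
import numpy as np

def cosine_similarity(xi, xj):
    """l_ij = xi^T xj / (||xi|| * ||xj||), the entry defined in Eq. 2."""
    return float(xi @ xj / (np.linalg.norm(xi) * np.linalg.norm(xj)))

# Illustrative vectors (not taken from the paper's data)
xi = np.array([1.0, 0.0, 1.0])
xj = np.array([1.0, 1.0, 0.0])
print(round(cosine_similarity(xi, xj), 4))  # 0.5
```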
<p>The weight <inline-formula id="inf14">
<mml:math id="m16">
<mml:mrow>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> of the similarity preservation item is defined according to whether two samples belong to the same class.<disp-formula id="e3">
<mml:math id="m17">
<mml:mrow>
<mml:msub>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="{" close="" separators="|">
<mml:mrow>
<mml:mtable columnalign="center">
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2260;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(3)</label>
</disp-formula>
</p>
<p>An example of a sparse coding matrix <inline-formula id="inf15">
<mml:math id="m18">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">U</mml:mi>
<mml:mi>l</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is as follows.<disp-formula id="e4">
<mml:math id="m19">
<mml:mrow>
<mml:mfenced open="[" close="]" separators="|">
<mml:mrow>
<mml:mtable columnalign="center">
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>6</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>6</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>6</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>7</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>7</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>6</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>7</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>7</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
<label>(4)</label>
</disp-formula>
</p>
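Following the block structure shown in Eq. 4, the matrix can be assembled by keeping cosine similarities only for same-class pairs. The toy data below are assumptions for illustration.

```python
import numpy as np

def label_similarity_matrix(X, labels):
    """Assemble U_l (Eqs. 3-4): cosine similarity for same-class pairs, 0 elsewhere."""
    N = X.shape[1]                       # samples stored as column vectors
    norms = np.linalg.norm(X, axis=0)
    U = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if labels[i] == labels[j]:   # "same classification" keeps the entry
                U[i, j] = X[:, i] @ X[:, j] / (norms[i] * norms[j])
    return U

# Illustrative data: 4 samples in 2 classes (not from the paper)
X = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0]])
labels = [0, 0, 1, 1]
U = label_similarity_matrix(X, labels)
print(np.count_nonzero(U))  # 8: two 2x2 same-class blocks, zeros elsewhere
```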
<p>To find the best solution for the similarity preservation item, we employ the singular value decomposition (SVD) of the matrix <inline-formula id="inf16">
<mml:math id="m20">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">U</mml:mi>
<mml:mi>l</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. The generative expression for similarity preservation can therefore be stated as<disp-formula id="e5">
<mml:math id="m21">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">U</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>K</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">S</mml:mi>
<mml:mrow>
<mml:mi>K</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(5)</label>
</disp-formula>
</p>
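A sketch of the rank-K factorization in Eq. 5 via SVD, with a random symmetric matrix standing in for the similarity matrix.

```python
import numpy as np

def svd_factor(U, K):
    """Factor U into A S plus residual phi (Eq. 5): A is N x K, S is K x N."""
    u, s, vt = np.linalg.svd(U)
    A = u[:, :K]
    S = np.diag(s[:K]) @ vt[:K]
    return A, S, U - A @ S

# A random symmetric matrix stands in for U_l here
rng = np.random.default_rng(0)
U = rng.standard_normal((6, 6))
U = (U + U.T) / 2

A, S, phi = svd_factor(U, K=6)
print(np.linalg.norm(phi) < 1e-10)              # True: K = N reconstructs U exactly
A, S, phi = svd_factor(U, K=3)
print(np.linalg.norm(phi) < np.linalg.norm(U))  # True: rank 3 captures part of U
```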
</sec>
<sec id="s3-3">
<title>3.3 Dirichlet process mixture model (DPM)</title>
<p>A Dirichlet process <inline-formula id="inf17">
<mml:math id="m22">
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is a distribution over distributions (<xref ref-type="bibr" rid="B10">Ferguson, 1973</xref>). It is characterized by a base distribution <inline-formula id="inf18">
<mml:math id="m23">
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and a positive concentration parameter <inline-formula id="inf19">
<mml:math id="m24">
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. Suppose that we randomly draw a sample distribution <inline-formula id="inf20">
<mml:math id="m25">
<mml:mrow>
<mml:mi>G</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> from a DP. Subsequently, <inline-formula id="inf21">
<mml:math id="m26">
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> random variables <inline-formula id="inf22">
<mml:math id="m27">
<mml:mrow>
<mml:mfenced open="{" close="" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msubsup>
<mml:mo>}</mml:mo>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula> are independently sampled from <inline-formula id="inf23">
<mml:math id="m28">
<mml:mrow>
<mml:mi>G</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>.<disp-formula id="equ1">
<mml:math id="m29">
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mfenced open="{" close="}" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>D</mml:mi>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="e6">
<mml:math id="m30">
<mml:mrow>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
<label>(6)</label>
</disp-formula>
</p>
<p>DP mixtures (DPM) can be used in clustering problems where the number of clusters is unknown (<xref ref-type="bibr" rid="B24">Rasmussen and Ghahramani, 2001</xref>; <xref ref-type="bibr" rid="B26">Shahbaba and Neal, 2009</xref>; <xref ref-type="bibr" rid="B36">Zhang et al., 2014</xref>). The DPM model can be expressed as Eq. <xref ref-type="disp-formula" rid="e7">7</xref>, in which <inline-formula id="inf24">
<mml:math id="m31">
<mml:mrow>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> can parameterize the distribution of the input data <inline-formula id="inf25">
<mml:math id="m32">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>.<disp-formula id="equ2">
<mml:math id="m33">
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mfenced open="{" close="}" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>D</mml:mi>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="e7">
<mml:math id="m34">
<mml:mrow>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="|" close="|" separators="|">
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mo>;</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
<label>(7)</label>
</disp-formula>
</p>
<p>To describe the distribution of the stochastic variable <inline-formula id="inf26">
<mml:math id="m35">
<mml:mrow>
<mml:mi>G</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> in <inline-formula id="inf27">
<mml:math id="m36">
<mml:mrow>
<mml:mi mathvariant="normal">D</mml:mi>
<mml:mi mathvariant="normal">P</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, <xref ref-type="bibr" rid="B1">Antoniak, 1974</xref> constructed a pioneering representation called &#x201c;stick-breaking&#x201d;. In the stick-breaking description, there are two infinite collections of independent random variables: <inline-formula id="inf28">
<mml:math id="m37">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mfenced open="{" close="}" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>&#x221e;</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf29">
<mml:math id="m38">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mfenced open="{" close="}" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>&#x221e;</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>, in which <inline-formula id="inf30">
<mml:math id="m39">
<mml:mrow>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf31">
<mml:math id="m40">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi mathvariant="normal">B</mml:mi>
<mml:mi mathvariant="normal">e</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
<mml:mi mathvariant="normal">a</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. The stick-breaking description of <inline-formula id="inf32">
<mml:math id="m41">
<mml:mrow>
<mml:mi>G</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> can be expressed as<disp-formula id="equ3">
<mml:math id="m42">
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>&#x221e;</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:msub>
<mml:mi>&#x3b4;</mml:mi>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="e8">
<mml:math id="m43">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x220f;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
<mml:mo>&#x2208;</mml:mo>
<mml:mrow>
<mml:mfenced open="[" close="]" separators="|">
<mml:mrow>
<mml:mn>0,1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>&#x221e;</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
<label>(8)</label>
</disp-formula>
</p>
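The stick-breaking weights of Eq. 8 can be sampled directly. The sketch below uses a finite truncation, forcing the last stick-breaking variable to 1 so the weights sum to one.

```python
import numpy as np

def stick_breaking(alpha, C, rng):
    """Truncated stick-breaking weights pi_c(v) = v_c * prod_{j<c} (1 - v_j), Eq. 8."""
    v = rng.beta(1.0, alpha, size=C)  # v_c ~ Beta(1, alpha)
    v[-1] = 1.0                       # truncate at C so the weights sum to 1
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

rng = np.random.default_rng(0)
pi = stick_breaking(alpha=2.0, C=20, rng=rng)
print(round(float(pi.sum()), 10))  # 1.0
```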
<p>The stick-breaking representation makes clear that a draw <inline-formula id="inf33">
<mml:math id="m44">
<mml:mrow>
<mml:mi>G</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> from a DP is a discrete random measure, consisting of a countably infinite number of atoms sampled independently from <inline-formula id="inf34">
<mml:math id="m45">
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, namely the atoms <inline-formula id="inf35">
<mml:math id="m46">
<mml:mrow>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. At the same time, the stick-breaking hyper-parameter <inline-formula id="inf36">
<mml:math id="m47">
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> determines the average value of the stick-breaking variables <inline-formula id="inf37">
<mml:math id="m48">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c5;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and thereby adjusts the effective number of distinct parameter values.</p>
<p>Based on the DP&#x2019;s stick-breaking expression in Eq. <xref ref-type="disp-formula" rid="e8">8</xref>, the DPM can then be given as<disp-formula id="equ4">
<mml:math id="m49">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>B</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="equ5">
<mml:math id="m50">
<mml:mrow>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>&#x221e;</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="equ6">
<mml:math id="m51">
<mml:mrow>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3c0;</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>M</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c0;</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="e9">
<mml:math id="m52">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
<label>(9)</label>
</disp-formula>where <inline-formula id="inf38">
<mml:math id="m53">
<mml:mrow>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the index of the cluster of the observation data <inline-formula id="inf39">
<mml:math id="m54">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf40">
<mml:math id="m55">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c0;</mml:mi>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
<mml:mo>)</mml:mo>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
<mml:mo>)</mml:mo>
<mml:msubsup>
<mml:mo>)</mml:mo>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>&#x221e;</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> can be acquired from Eq. <xref ref-type="disp-formula" rid="e8">8</xref>, and <inline-formula id="inf41">
<mml:math id="m56">
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c0;</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is a multinomial distribution over the mixture proportions <inline-formula id="inf42">
<mml:math id="m57">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c0;</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
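The generative process of Eqs. 6&#x2013;9 can be simulated with a truncated stick-breaking DPM. In the sketch below, the Gaussian base distribution over cluster means and the unit observation variance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, C, N = 1.0, 15, 500   # concentration, truncation level, sample count

# Stick-breaking mixture weights pi(v), truncated at C components
v = rng.beta(1.0, alpha, size=C)
v[-1] = 1.0                                  # truncation: weights sum to 1
pi = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

# Atoms Theta_c ~ G_0; here G_0 = N(0, 10^2) over cluster means (an assumption)
theta = rng.normal(0.0, 10.0, size=C)

# z_n ~ Mult(pi(v)) picks a cluster; x_n ~ p(x | Theta_{z_n}) = N(theta_c, 1)
z = rng.choice(C, size=N, p=pi)
x = rng.normal(theta[z], 1.0)

print(x.shape, len(np.unique(z)))  # typically only a few of the C clusters receive data
```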
<p>To make full use of the advantages of the nonparametric Bayesian method, dpMMSPFA is developed in this study as a supervised clustering model based on the truncated stick-breaking DPM.</p>
</sec>
<sec id="s3-4">
<title>3.4 Latent variable support vector machine (LVSVM)</title>
<p>The SVM is a powerful machine learning tool that has been widely used in pattern recognition, mainly owing to its strong generalization ability. Consider a labelled training set with data vectors <inline-formula id="inf43">
<mml:math id="m58">
<mml:mrow>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="[" close="]" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x22ef;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>N</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x2208;</mml:mo>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mi>P</mml:mi>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>, and their labels <inline-formula id="inf44">
<mml:math id="m59">
<mml:mrow>
<mml:mi mathvariant="bold-italic">y</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x22ef;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>N</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mo>&#x2208;</mml:mo>
<mml:mrow>
<mml:mfenced open="{" close="}" separators="|">
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. The SVM optimization problem is then defined as<disp-formula id="e10">
<mml:math id="m60">
<mml:mrow>
<mml:munder>
<mml:mi mathvariant="italic">min</mml:mi>
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3be;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:msub>
<mml:mi>&#x3be;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>.</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>.</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msup>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mi>T</mml:mi>
</mml:msup>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x2265;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>&#x3be;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3be;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x2265;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
<label>(10)</label>
</disp-formula>where <inline-formula id="inf45">
<mml:math id="m61">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="[" close="]" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the augmented feature vector, <inline-formula id="inf46">
<mml:math id="m62">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the weighted coefficient and <inline-formula id="inf47">
<mml:math id="m63">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is a positive tuning parameter. The underlying discriminative objective is a linear hinge loss function, <inline-formula id="inf48">
<mml:math id="m64">
<mml:mrow>
<mml:mi mathvariant="italic">max</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msup>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mi>T</mml:mi>
</mml:msup>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, which makes traditional Bayesian analysis difficult.</p>
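To make Eq. 10 concrete, the following Python sketch evaluates the regularized hinge-loss objective on augmented feature vectors; NumPy is assumed, and `augment` and `hinge_objective` are hypothetical helper names introduced here for illustration, not from the paper.

```python
import numpy as np

def augment(X):
    """Append a constant 1 to each row: x~_n = [x_n; 1]."""
    return np.hstack([X, np.ones((X.shape[0], 1))])

def hinge_objective(eta, X_aug, y, C0=1.0):
    """Regularized hinge-loss objective of Eq. 10:
    0.5*||eta||^2 + C0 * sum_n max(0, 1 - y_n * eta^T x~_n),
    where the slack xi_n equals max(0, 1 - y_n * eta^T x~_n)."""
    margins = y * (X_aug @ eta)          # y_n * eta^T x~_n
    xi = np.maximum(0.0, 1.0 - margins)  # slack variables xi_n >= 0
    return 0.5 * eta @ eta + C0 * xi.sum()
```

At an optimum the slack variables are tight, so the objective reduces to the familiar soft-margin SVM loss.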
<p>In contrast to the conventional approach, <xref ref-type="bibr" rid="B21">Polson and Scott, 2011</xref> proposed a latent-variable representation of SVMs based on data augmentation. By introducing auxiliary variables, the augmented model exposes additional structure in the data samples, and the resulting conditional distributions become much easier to handle. Owing to its wide use with non-conjugate models, data augmentation has become a powerful technique for resolving non-conjugacy.</p>
<p>The pseudo-posterior distribution of an SVM can be expressed as a marginal distribution of a high-dimensional distribution with augmented variables. Thus, the complete pseudo-posterior distribution of the data can be written as<disp-formula id="equ7">
<mml:math id="m65">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3bb;</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mi mathvariant="bold-italic">y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="e11">
<mml:math id="m66">
<mml:mrow>
<mml:mo>&#x221d;</mml:mo>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x220f;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msubsup>
<mml:mi mathvariant="italic">exp</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msup>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mi>T</mml:mi>
</mml:msup>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mo>;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold-italic">I</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(11)</label>
</disp-formula>
</p>
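Under the augmentation of Eq. 11, each λ<sub>n</sub> has a closed-form conditional: for C<sub>0</sub> = 1, Polson and Scott show that 1/λ<sub>n</sub> given the rest is inverse-Gaussian with mean 1/|1 − y<sub>n</sub>η<sup>T</sup>x̃<sub>n</sub>| and shape 1. A minimal Gibbs-step sketch under that assumption (NumPy assumed; `sample_lambda` is a hypothetical name, not from the paper):

```python
import numpy as np

def sample_lambda(eta, X_aug, y, rng):
    """One Gibbs draw of the augmented variables lambda_n (C0 = 1 case).
    1/lambda_n | rest ~ InverseGaussian(mean=1/|1 - y_n eta^T x~_n|, shape=1),
    sampled via numpy's Wald (inverse-Gaussian) generator."""
    margins = 1.0 - y * (X_aug @ eta)
    mean = 1.0 / np.maximum(np.abs(margins), 1e-12)  # guard zero margins
    inv_lam = rng.wald(mean, 1.0)
    return 1.0 / inv_lam
```

Alternating this draw with a Gaussian update of η yields the usual data-augmented Gibbs sampler for the pseudo-posterior.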
</sec>
</sec>
<sec sec-type="methods" id="s4">
<title>4 Methodology</title>
<sec id="s4-1">
<title>4.1 dpMMSPFA model</title>
<p>As mentioned above, for each column vector <inline-formula id="inf49">
<mml:math id="m67">
<mml:mrow>
<mml:mi mathvariant="bold-italic">x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> of the FA model, it can be written as<disp-formula id="e12">
<mml:math id="m68">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">D</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>K</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mrow>
<mml:mi>K</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(12)</label>
</disp-formula>
</p>
<p>and for each column vector <inline-formula id="inf50">
<mml:math id="m69">
<mml:mrow>
<mml:mi mathvariant="bold-italic">u</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> of the SP term, it can be written as<disp-formula id="e13">
<mml:math id="m70">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">u</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>K</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mrow>
<mml:mi>K</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(13)</label>
</disp-formula>
</p>
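Eqs. 12–13 can be read generatively: the hidden vectors s<sub>n</sub> are mapped through the dictionaries <bold>D</bold> and <bold>A</bold> with additive noise. A small sketch of this forward model (NumPy assumed; `generate` and the noise scales are illustrative choices, not from the paper):

```python
import numpy as np

def generate(D, A, S, noise_x=0.1, noise_u=0.1, rng=None):
    """Forward model of Eqs. 12-13, stacked over N hidden vectors:
    X = D S + eps   (D is P x K, S is K x N, so X is P x N)
    U = A S + phi   (A is N x K, so U is N x N),
    with isotropic Gaussian noise of the given scales."""
    rng = rng or np.random.default_rng()
    X = D @ S + noise_x * rng.standard_normal((D.shape[0], S.shape[1]))
    U = A @ S + noise_u * rng.standard_normal((A.shape[0], S.shape[1]))
    return X, U
```

The shapes mirror the subscripts in the equations: each column of X is one observation x_n and each column of U is one similarity-preservation vector u_n.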
<p>In dpMMSPFA, the hidden variable <inline-formula id="inf51">
<mml:math id="m71">
<mml:mrow>
<mml:mfenced open="{" close="" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:msubsup>
<mml:mo>}</mml:mo>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula> is clustered, and each cluster of hidden variables is assumed to follow a Gaussian distribution:<disp-formula id="e14">
<mml:math id="m72">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mi mathvariant="script">W</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(14)</label>
</disp-formula>where <inline-formula id="inf52">
<mml:math id="m73">
<mml:mrow>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> represents the Gaussian distribution parameters of the <inline-formula id="inf53">
<mml:math id="m74">
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
<mml:mi mathvariant="normal">h</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> cluster, i.e., <inline-formula id="inf54">
<mml:math id="m75">
<mml:mrow>
<mml:mfenced open="{" close="}" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>. A joint normal-Wishart distribution is employed as the conjugate prior of <inline-formula id="inf55">
<mml:math id="m76">
<mml:mrow>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, which can be expressed as<disp-formula id="e15">
<mml:math id="m77">
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mi mathvariant="script">W</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">W</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3b2;</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3c5;</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(15)</label>
</disp-formula>
</p>
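A draw from the normal-Wishart prior of Eq. 15 can be sketched as follows, using one common parameterization in which the Wishart is placed on the precision and μ<sub>c</sub> | Σ<sub>c</sub> ~ N(μ<sub>0</sub>, Σ<sub>c</sub>/β<sub>0</sub>). For an integer ν<sub>0</sub> ≥ K, a Wishart sample is simply the scatter matrix of ν<sub>0</sub> Gaussian vectors with covariance W<sub>0</sub> (NumPy assumed; `sample_normal_wishart` is a hypothetical name):

```python
import numpy as np

def sample_normal_wishart(mu0, W0, beta0, nu0, rng):
    """Draw (mu_c, Sigma_c) from a normal-Wishart prior.
    Precision ~ Wishart(nu0, W0) via the scatter of nu0 draws
    from N(0, W0) (valid for integer nu0 >= dim); Sigma_c is its
    inverse, and mu_c | Sigma_c ~ N(mu0, Sigma_c / beta0)."""
    K = len(mu0)
    L = np.linalg.cholesky(W0)
    V = L @ rng.standard_normal((K, nu0))  # nu0 draws from N(0, W0)
    precision = V @ V.T                    # Wishart(nu0, W0) sample
    Sigma = np.linalg.inv(precision)
    mu = rng.multivariate_normal(mu0, Sigma / beta0)
    return mu, Sigma
```

Conjugacy means the posterior over each cluster's (μ<sub>c</sub>, Σ<sub>c</sub>) keeps this same normal-Wishart form with updated hyper-parameters.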
<p>On the basis of the DPM&#x2019;s stick-breaking representation, the FA model, the SP term and the LVSVM, the hierarchical dpMMSPFA model is described as<disp-formula id="equ8">
<mml:math id="m78">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold-italic">D</mml:mi>
<mml:mo>&#x223c;</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">D</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="equ9">
<mml:math id="m79">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">u</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mo>&#x223c;</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x391;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="equ10">
<mml:math id="m80">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="equ11">
<mml:math id="m81">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>B</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mi>&#x398;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="|" close="|" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:msub>
<mml:mi>G</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c0;</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>M</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c0;</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="e16">
<mml:math id="m82">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>{</mml:mo>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msub>
<mml:mo>}</mml:mo>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:mo>{</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">y</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>c</mml:mi>
<mml:msubsup>
<mml:mo>}</mml:mo>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>{</mml:mo>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msub>
<mml:mo>}</mml:mo>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>&#x221e;</mml:mi>
</mml:mrow>
</mml:math>
<label>(16)</label>
</disp-formula>
</p>
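The stick-breaking weights appearing in Eq. 16, with υ<sub>c</sub> | α ~ Beta(1, α) and π<sub>c</sub>(υ) = υ<sub>c</sub>∏<sub>j&lt;c</sub>(1 − υ<sub>j</sub>), can be sketched with a truncation at C components (NumPy assumed; `stick_breaking` is an illustrative name, not from the paper):

```python
import numpy as np

def stick_breaking(alpha, C, rng):
    """Truncated stick-breaking construction of DPM mixture weights:
    v_c ~ Beta(1, alpha); pi_c = v_c * prod_{j<c}(1 - v_j).
    Setting v_C = 1 closes the stick so the weights sum to one."""
    v = rng.beta(1.0, alpha, size=C)
    v[-1] = 1.0
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * remaining
```

Smaller α concentrates mass on the first few sticks, so the effective number of clusters adapts to the data rather than being fixed in advance.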
<p>We further specify the priors of the proposed dpMMSPFA model:<disp-formula id="equ12">
<mml:math id="m83">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>d</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="equ13">
<mml:math id="m84">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>d</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>a</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="equ14">
<mml:math id="m85">
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo>;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>d</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3b2;</mml:mi>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>&#x3b2;</mml:mi>
<mml:mi>K</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3b2;</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="equ15">
<mml:math id="m86">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
<mml:mo>;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>d</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3b3;</mml:mi>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>&#x3b3;</mml:mi>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3b3;</mml:mi>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="e17">
<mml:math id="m87">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
<mml:mo>;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>d</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c9;</mml:mi>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>e</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>f</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(17)</label>
</disp-formula>where the hyper-priors <inline-formula id="inf56">
<mml:math id="m88">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf57">
<mml:math id="m89">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>a</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf58">
<mml:math id="m90">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>G</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>e</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>f</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> are employed with <inline-formula id="inf59">
<mml:math id="m91">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf60">
<mml:math id="m92">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf61">
<mml:math id="m93">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf62">
<mml:math id="m94">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>a</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf63">
<mml:math id="m95">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>e</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>f</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula> as the corresponding hyper-parameters. Here, <inline-formula id="inf64">
<mml:math id="m96">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf65">
<mml:math id="m97">
<mml:mrow>
<mml:mfenced open="{" close="" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msub>
<mml:mo>}</mml:mo>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula> represent the weighted coefficient and the augmented variables of the LVSVM classifier related to the <inline-formula id="inf66">
<mml:math id="m98">
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mtext>th</mml:mtext>
</mml:mrow>
</mml:math>
</inline-formula> cluster. The complete pseudo-posterior of the parameters can be expressed as<disp-formula id="equ16">
<mml:math id="m99">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">D</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold-italic">S</mml:mi>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mfenced open="{" close="}" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>C</mml:mi>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mfenced open="{" close="}" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>K</mml:mi>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mfenced open="{" close="}" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>K</mml:mi>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mi>Z</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold-italic">v</mml:mi>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mfenced open="{" close="}" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>C</mml:mi>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:mi mathvariant="bold-italic">X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold-italic">y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="equ17">
<mml:math id="m100">
<mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x220f;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>C</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x220f;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:mi mathvariant="bold-italic">D</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold-italic">S</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">u</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold-italic">S</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="equ18">
<mml:math id="m101">
<mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x220f;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>C</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x220f;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>&#x3d5;</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c5;</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="e18">
<mml:math id="m102">
<mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x220f;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>K</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">d</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(18)</label>
</disp-formula>
</p>
<p>Here the posterior computation is implemented by a Markov chain Monte Carlo (MCMC) algorithm based on Gibbs sampling, where the posterior distribution is approximated by a sufficient number of samples. Then, the conditional distributions used in Gibbs sampling are shown as follows.</p>
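The burn-in-then-collect structure of the Gibbs sampler described above can be sketched on a toy model. The bivariate-normal target below is purely illustrative (it is not the dpMMSPFA posterior); what matters is the shape of the loop: draw each variable from its conditional, discard the burn-in iterations, and keep the rest as approximate posterior samples.

```python
import random

# Toy Gibbs sampler for a standard bivariate normal with correlation rho:
#   x | y ~ N(rho * y, 1 - rho^2),   y | x ~ N(rho * x, 1 - rho^2).
# This is only a structural sketch of the burn-in / collection scheme,
# not the dpMMSPFA conditionals themselves.
def gibbs_bivariate_normal(rho=0.8, burn_in=1000, n_collect=5000, seed=0):
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x, y = 0.0, 0.0              # arbitrary initial condition
    samples = []
    for it in range(burn_in + n_collect):
        x = rng.gauss(rho * y, sd)   # draw from p(x | y, -)
        y = rng.gauss(rho * x, sd)   # draw from p(y | x, -)
        if it >= burn_in:            # keep only post-burn-in draws
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal()
mean_x = sum(s[0] for s in samples) / len(samples)   # should be near 0
corr_est = sum(s[0] * s[1] for s in samples) / len(samples)  # near rho
```

With enough post-burn-in samples, the empirical moments approach those of the target distribution, which is exactly the approximation property the text relies on.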
<p>For <inline-formula id="inf67">
<mml:math id="m103">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>: the conditional distribution of <inline-formula id="inf68">
<mml:math id="m104">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi mathvariant="bold-italic">n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is<disp-formula id="equ19">
<mml:math id="m105">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x7c;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x221d;</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">D</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">u</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="equ20">
<mml:math id="m106">
<mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:mi mathvariant="italic">exp</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:mi>l</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mi>T</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="e19">
<mml:math id="m107">
<mml:mrow>
<mml:mo>&#x223c;</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3bc;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi mathvariant="bold-italic">n</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(19)</label>
</disp-formula>where the posterior mean is <inline-formula id="inf69">
<mml:math id="m108">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">D</mml:mi>
<mml:mi>k</mml:mi>
<mml:mi>T</mml:mi>
</mml:msubsup>
<mml:msubsup>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mi>k</mml:mi>
<mml:mi>T</mml:mi>
</mml:msubsup>
<mml:msubsup>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="bold-italic">u</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mfrac>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>&#x3b7;</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mfrac>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and the covariance matrix is <inline-formula id="inf70">
<mml:math id="m109">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="bold-italic">D</mml:mi>
<mml:mi>T</mml:mi>
</mml:msup>
<mml:msubsup>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mi mathvariant="bold-italic">D</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:msup>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mi>T</mml:mi>
</mml:msup>
<mml:msubsup>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mi mathvariant="bold-italic">A</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi>C</mml:mi>
<mml:mn>0</mml:mn>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:msub>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mi>T</mml:mi>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
<p>For <inline-formula id="inf71">
<mml:math id="m110">
<mml:mrow>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>: the conditional distribution of <inline-formula id="inf72">
<mml:math id="m111">
<mml:mrow>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is<disp-formula id="equ21">
<mml:math id="m112">
<mml:mrow>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>M</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x3ba;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3ba;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="[" close="]" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3ba;</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3ba;</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="equ22">
<mml:math id="m113">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3ba;</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mi>c</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold-italic">v</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="e20">
<mml:math id="m114">
<mml:mrow>
<mml:mo>&#x221d;</mml:mo>
<mml:msub>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi mathvariant="italic">exp</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mfenced open="(" close="" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
</mml:mrow>
<mml:mi>l</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>T</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(20)</label>
</disp-formula>
</p>
<p>A Markov chain can thus be constructed from the conditional distributions given above. Starting from the initial conditions, samples are drawn at each iteration from the conditional posterior distributions in expressions (19) and (20). During the training phase, the Markov chain is run until the burn-in phase is complete. <xref ref-type="fig" rid="F2">Figure 2</xref> shows the architecture of the integrated recognition system based on dpMMSPFA. The dpMMSPFA integrated recognition scheme comprises two stages: the MMSPFA model establishment phase, represented by the blue box in the diagram, and the dpMMSPFA model usage phase, represented by the red box. These two phases are described in more detail in the next section.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>The classification system frame for dpMMSPFA.</p>
</caption>
<graphic xlink:href="fenrg-10-1064464-g002.tif"/>
</fig>
</sec>
<sec id="s4-2">
<title>4.2 Model learning</title>
<p>The dpMMSPFA model establishment is a supervised process in which Gibbs sampling is used to infer the model parameters summarized in <xref ref-type="table" rid="T1">Table 1</xref>. During model establishment, the Markov chain is run until the burn-in phase is complete.</p>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>The training stage of dpMMSPFA model.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Sequence of steps</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">1</td>
<td align="left">Data preprocessing for training data</td>
</tr>
<tr>
<td align="left">2</td>
<td align="left">Initialize all parameters of each sub-model to their default values, set <inline-formula id="inf73">
<mml:math id="m115">
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, and then start Gibbs sampling of the sub-models</td>
</tr>
<tr>
<td align="left">3</td>
<td align="left">Burn-in: draw samples from the conditional posterior distributions, then set <inline-formula id="inf74">
<mml:math id="m116">
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">Return to step 3 if <inline-formula id="inf75">
<mml:math id="m117">
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3c;</mml:mo>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, otherwise continue</td>
</tr>
<tr>
<td align="left">5</td>
<td align="left">Collect all of the parameters <inline-formula id="inf76">
<mml:math id="m118">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> times, then end the training phase</td>
</tr>
</tbody>
</table>
</table-wrap>
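The training loop in Table 1 can be sketched as a small driver. The functions `init_params` and `gibbs_step` below are hypothetical placeholders for the model-specific initialization and conditional-posterior draws, not the actual dpMMSPFA updates; the sketch only shows the burn-in loop (steps 2-4) followed by the collection phase (step 5).

```python
# Structural sketch of the Table 1 training stage, assuming placeholder
# update functions (init_params, gibbs_step) in place of the real
# dpMMSPFA conditional draws.
def train(data, I0=200, T0=50):
    params = init_params(data)            # step 2: default values
    for i in range(I0):                   # steps 3-4: burn-in iterations
        params = gibbs_step(params, data)
    collected = []
    for t in range(T0):                   # step 5: collect T0 samples
        params = gibbs_step(params, data)
        collected.append(params)
    return collected

# Minimal stand-ins so the sketch runs end to end.
def init_params(data):
    return {"mu": 0.0}

def gibbs_step(params, data):
    # a real implementation would resample every latent variable here
    return {"mu": params["mu"] + 1.0}

posterior_samples = train(data=None, I0=10, T0=5)
```

The returned list holds the post-burn-in parameter draws that the prediction stage averages over.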
</sec>
<sec id="s4-3">
<title>4.3 Model prediction</title>
<p>While the dpMMSPFA model&#x2019;s training phase is supervised, its prediction process is unsupervised, as shown in <xref ref-type="table" rid="T2">Table 2</xref>. In these procedures, the data sample must be configured so that all parameters are sampled <inline-formula id="inf77">
<mml:math id="m119">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> times simultaneously.</p>
<table-wrap id="T2" position="float">
<label>TABLE 2</label>
<caption>
<p>The prediction stage of dpMMSPFA model.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Sequence of steps</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">1</td>
<td align="left">Data preprocessing for the test data. We sample the parameters of the test data <inline-formula id="inf78">
<mml:math id="m120">
<mml:mrow>
<mml:mi mathvariant="bold-italic">x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and estimate their category labels based on the model parameters obtained through Gibbs sampling collection</td>
</tr>
<tr>
<td align="left">2</td>
<td align="left">Take a sample of the test data&#x2019;s latent variable <inline-formula id="inf79">
<mml:math id="m121">
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> for <inline-formula id="inf80">
<mml:math id="m122">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> times</td>
</tr>
<tr>
<td align="left">3</td>
<td align="left">Take a sample of the test data&#x2019;s cluster index <inline-formula id="inf81">
<mml:math id="m123">
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> for <inline-formula id="inf82">
<mml:math id="m124">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> times</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">Calculate the likelihood values of the test data for <inline-formula id="inf83">
<mml:math id="m125">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> times</td>
</tr>
<tr>
<td align="left">5</td>
<td align="left">Predict the test data&#x2019;s category label <inline-formula id="inf84">
<mml:math id="m126">
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
</tbody>
</table>
</table-wrap>
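The final prediction step of Table 2 averages the classifier score over the collected Gibbs samples and takes its sign. A minimal sketch, assuming illustrative per-cluster weight vectors `etas` and placeholder latent-variable and cluster-index draws (the real values would come from the sampling steps above):

```python
# Sketch of the Table 2 prediction rule: average eta_z^T s over the
# T0 collected samples and take the sign. All inputs below are
# illustrative placeholders, not values from a fitted model.
def predict_label(etas, s_samples, z_samples):
    """etas: dict of per-cluster weight vectors; s_samples[t] is the
    latent vector drawn at collection step t; z_samples[t] is the
    cluster index drawn at the same step."""
    T0 = len(s_samples)
    total = 0.0
    for s, z in zip(s_samples, z_samples):
        eta = etas[z]
        total += sum(e * si for e, si in zip(eta, s))  # eta_z^T s
    avg = total / T0
    return 1 if avg >= 0 else -1

etas = {0: [0.5, -0.2], 1: [-0.1, 0.4]}
s_samples = [[1.0, 0.0], [0.8, 0.2], [1.2, -0.1]]
z_samples = [0, 0, 1]
y_hat = predict_label(etas, s_samples, z_samples)
```

Averaging before taking the sign lets every collected sample vote on the label, which smooths out the variability of any single draw.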
<p>The test data is pre-processed first, and the hidden variable <inline-formula id="inf85">
<mml:math id="m127">
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and the cluster index <inline-formula id="inf86">
<mml:math id="m128">
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> of <inline-formula id="inf87">
<mml:math id="m129">
<mml:mrow>
<mml:mi mathvariant="bold-italic">x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> are sampled in an unsupervised manner based on their corresponding posteriors, Eq. <xref ref-type="disp-formula" rid="e21">21</xref> and Eq. <xref ref-type="disp-formula" rid="e22">22</xref>, respectively.<disp-formula id="e21">
<mml:math id="m130">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mi mathvariant="bold-italic">D</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x221d;</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">D</mml:mi>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x39b;</mml:mi>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(21)</label>
</disp-formula>
<disp-formula id="equ23">
<mml:math id="m131">
<mml:mrow>
<mml:mi>z</mml:mi>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>M</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold">&#x3ba;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi mathvariant="bold-italic">&#x3ba;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="[" close="]" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3ba;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3ba;</mml:mi>
<mml:mi>C</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="e22">
<mml:math id="m132">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3ba;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>z</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold-italic">v</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x221d;</mml:mo>
<mml:msub>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3bc;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(22)</label>
</disp-formula>
</p>
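As a concrete illustration of the conditional in Eq. 22, the Gibbs step that reassigns a hidden feature to a cluster can be sketched as follows. This is a minimal sketch under simplifying assumptions (a one-dimensional hidden feature and isotropic Gaussian components); the function name is ours, not the paper's.

```python
import math
import random

def sample_cluster(s, pis, mus, variances):
    """Sample a cluster index z for hidden feature s (cf. Eq. 22):
    kappa_c is proportional to pi_c * N(s; mu_c, sigma_c^2), and z is
    drawn from Mult(kappa) after normalisation."""
    kappas = []
    for pi_c, mu_c, var_c in zip(pis, mus, variances):
        logp = (math.log(pi_c)
                - 0.5 * math.log(2 * math.pi * var_c)
                - (s - mu_c) ** 2 / (2 * var_c))
        kappas.append(math.exp(logp))
    total = sum(kappas)
    kappas = [k / total for k in kappas]   # normalised multinomial weights
    u, acc = random.random(), 0.0
    for c, k in enumerate(kappas):
        acc += k
        if u < acc:
            return c, kappas
    return len(kappas) - 1, kappas
```

In the full sampler this step runs once per observation per sweep, with the cluster means, covariances, and weights themselves resampled between sweeps.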
<p>All the samples collected by the Gibbs sampler are averaged to predict the label <inline-formula id="inf88">
<mml:math id="m133">
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> of <inline-formula id="inf89">
<mml:math id="m134">
<mml:mrow>
<mml:mi mathvariant="bold-italic">x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> as<disp-formula id="e23">
<mml:math id="m135">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mtext>&#x2009;</mml:mtext>
</mml:mrow>
</mml:mfrac>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mtext>&#x2009;</mml:mtext>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mi>T</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(23)</label>
</disp-formula>where <inline-formula id="inf90">
<mml:math id="m136">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">s</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> represents the <inline-formula id="inf91">
<mml:math id="m137">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
<mml:mi mathvariant="normal">h</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> sample of the hidden variable, drawn at the <inline-formula id="inf92">
<mml:math id="m138">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
<mml:mi mathvariant="normal">h</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> Gibbs iteration, <inline-formula id="inf93">
<mml:math id="m139">
<mml:mrow>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> represents the <inline-formula id="inf94">
<mml:math id="m140">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
<mml:mi mathvariant="normal">h</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> sample of the cluster index to which the test data belongs, and <inline-formula id="inf95">
<mml:math id="m141">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3b7;</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> represents the <inline-formula id="inf96">
<mml:math id="m142">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
<mml:mi mathvariant="normal">h</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> sampled SVM coefficients of cluster <inline-formula id="inf97">
<mml:math id="m143">
<mml:mrow>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
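The averaged prediction rule of Eq. 23 translates directly into code. Below is a minimal sketch (the function name and the container layout for the Gibbs draws are our own hypothetical choices):

```python
def predict_label(etas, zs, ss):
    """Eq. 23: y_hat = sign((1/T0) * sum_t eta_{z_t}^T s_t).
    etas[c] holds the sampled SVM coefficients of cluster c,
    zs[t] the cluster index and ss[t] the hidden-variable sample
    from the t-th of the T0 collected Gibbs samples."""
    T0 = len(zs)
    score = sum(
        sum(e * x for e, x in zip(etas[z_t], s_t))
        for z_t, s_t in zip(zs, ss)
    ) / T0
    return 1 if score >= 0 else -1
```

Averaging the per-sample scores before taking the sign approximates the posterior-mean decision rule rather than a single-draw prediction.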
</sec>
</sec>
<sec id="s5">
<title>5 Experiments</title>
<sec id="s5-1">
<title>5.1 Classifiers and parameters setting</title>
<p>This section compares the proposed dpMMSPFA model with the following classification techniques and uses experimental data to demonstrate its effectiveness and predictive power on several datasets: 1) PCA &#x2b; SVM (<xref ref-type="bibr" rid="B31">Tharwat, 2016</xref>); 2) Kmeans &#x2b; SVM (<xref ref-type="bibr" rid="B34">Wu and Peng, 2017</xref>); 3) LVSVM (SVM) (<xref ref-type="bibr" rid="B21">Polson, 2011</xref>); 4) dpMNL (<xref ref-type="bibr" rid="B26">Shahbaba and Neal, 2009</xref>); 5) MMFA; and 6) dpMMSPFA. In the experiments, the LVSVM classifier is used for PCA and Kmeans. Models (1)&#x2013;(2) are two-stage models, in which feature selection and classification are carried out separately. The joint approaches (5) and (6) also employ LVSVM, and the tuning parameter <inline-formula id="inf98">
<mml:math id="m144">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is picked from <inline-formula id="inf99">
<mml:math id="m145">
<mml:mrow>
<mml:mfenced open="{" close="}" separators="|">
<mml:mrow>
<mml:mn>0.001,0.01,0.1,1,10</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
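Selecting the tuning parameter from the candidate grid can be sketched generically: evaluate each value on held-out data and keep the best. The `evaluate` callback is a hypothetical stand-in for training LVSVM with a given value and returning validation accuracy; this is a sketch of the selection protocol, not the paper's code.

```python
def pick_C0(evaluate, grid=(0.001, 0.01, 0.1, 1, 10)):
    """Return the C0 in the grid that maximises validation accuracy,
    where evaluate(C0) trains the classifier and scores it on held-out data."""
    return max(grid, key=evaluate)
```

For example, `pick_C0(lambda c0: cross_val_accuracy(c0))` with any scoring function of your own.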
</sec>
<sec id="s5-2">
<title>5.2 UCI benchmark data</title>
<p>In this section, we perform experiments on benchmark datasets of varying size and difficulty, and for each we average the accuracy over ten random splits. The benchmark datasets can be found in the University of California, Irvine (UCI) Machine Learning Repository (<xref ref-type="bibr" rid="B9">Dua and Graff, 2022</xref>). <xref ref-type="table" rid="T3">Table 3</xref> summarizes the data information.</p>
<table-wrap id="T3" position="float">
<label>TABLE 3</label>
<caption>
<p>Characteristics of datasets used in experiments.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Index</th>
<th align="left">Dataset</th>
<th align="left">Class</th>
<th align="left">Feature</th>
<th align="left">Training</th>
<th align="left">Test</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">1</td>
<td align="left">Breast-cancer</td>
<td align="left">2</td>
<td align="left">9</td>
<td align="left">200</td>
<td align="left">77</td>
</tr>
<tr>
<td align="left">2</td>
<td align="left">German</td>
<td align="left">2</td>
<td align="left">20</td>
<td align="left">700</td>
<td align="left">300</td>
</tr>
<tr>
<td align="left">3</td>
<td align="left">Heart</td>
<td align="left">2</td>
<td align="left">13</td>
<td align="left">170</td>
<td align="left">100</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">Waveform</td>
<td align="left">2</td>
<td align="left">21</td>
<td align="left">400</td>
<td align="left">4600</td>
</tr>
<tr>
<td align="left">5</td>
<td align="left">Diabetes</td>
<td align="left">2</td>
<td align="left">8</td>
<td align="left">468</td>
<td align="left">300</td>
</tr>
<tr>
<td align="left">6</td>
<td align="left">Splice</td>
<td align="left">2</td>
<td align="left">60</td>
<td align="left">1000</td>
<td align="left">2175</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The average testing accuracy is displayed in <xref ref-type="fig" rid="F3">Figure 3</xref>, with the best result over different numbers of components shown explicitly for each approach. The low-dimensional features extracted by the unsupervised PCA may not be sufficient for the subsequent prediction task. The other two-stage model, Kmeans, which builds SVMs on an ensemble of clusters, performs well only on some datasets. As a result, these two-stage models are unable to produce adequate outcomes. The classification performance of MNL is not sufficiently robust, making dpMNL less effective than the unsupervised joint model MMFA. In contrast to dpMNL and LVSVM, dpMMSPFA employs FA, which extracts more beneficial features from the cluster, label, and classification content within a unified Bayesian framework. MMFA is a special case of dpMMSPFA with a single cluster, which does not fully utilize the label content. As shown in <xref ref-type="fig" rid="F3">Figure 3</xref>, the proposed dpMMSPFA model performs well in the experiments and achieves the highest accuracy, especially on multimodal datasets such as German and Heart. <xref ref-type="fig" rid="F4">Figure 4</xref> depicts the estimated posterior distributions of the number of clusters obtained by dpMMSPFA. We can observe that, given the distribution of the data, the number of clusters detected by dpMMSPFA is reasonable.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Test accuracies of different models <bold>(A)</bold> Breast-cancer; <bold>(B)</bold> German; <bold>(C)</bold> Heart; <bold>(D)</bold> Waveform; <bold>(E)</bold> Diabetes; <bold>(F)</bold> Splice.</p>
</caption>
<graphic xlink:href="fenrg-10-1064464-g003.tif"/>
</fig>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>Approximate posterior distribution on the number of clusters by dpMMSPFA <bold>(A)</bold> Breast-cancer; <bold>(B)</bold> German; <bold>(C)</bold> Heart; <bold>(D)</bold> Waveform; <bold>(E)</bold> Diabetes; <bold>(F)</bold> Splice.</p>
</caption>
<graphic xlink:href="fenrg-10-1064464-g004.tif"/>
</fig>
</sec>
<sec id="s5-3">
<title>5.3 Server energy consumption data</title>
<p>The energy consumption grade is more valuable than the specific energy consumption data: it informs both the purchase of enterprise server assets and the choice of energy-saving measures at different levels of energy consumption. The five categories of energy consumption in this study, ranging from low to high, are represented by the numbers [1, 2, 3, 4, 5] listed in <xref ref-type="table" rid="T4">Table 4</xref>.</p>
<table-wrap id="T4" position="float">
<label>TABLE 4</label>
<caption>
<p>Server energy consumption level and corresponding range.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Server energy consumption level</th>
<th align="left">Range of energy consumption (Watts)</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">Level 1</td>
<td align="left">&#x2265;0, &#x3c;190</td>
</tr>
<tr>
<td align="left">Level 2</td>
<td align="left">&#x2265;190, &#x3c;250</td>
</tr>
<tr>
<td align="left">Level 3</td>
<td align="left">&#x2265;250, &#x3c;300</td>
</tr>
<tr>
<td align="left">Level 4</td>
<td align="left">&#x2265;300, &#x3c;350</td>
</tr>
<tr>
<td align="left">Level 5</td>
<td align="left">&#x2265;350, &#x3c;400</td>
</tr>
</tbody>
</table>
</table-wrap>
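The level boundaries of Table 4 translate directly into a small lookup function. This is a sketch with our own function name; the thresholds are exactly those of the table.

```python
def energy_level(power_watts):
    """Map a measured power draw (in watts) to the five server energy
    consumption levels of Table 4: [0,190) -> 1, [190,250) -> 2,
    [250,300) -> 3, [300,350) -> 4, [350,400) -> 5."""
    if not 0 <= power_watts < 400:
        raise ValueError("power outside the measured 0-400 W range")
    thresholds = (190, 250, 300, 350)   # upper bounds of levels 1-4
    for level, upper in enumerate(thresholds, start=1):
        if power_watts < upper:
            return level
    return 5
```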
<p>In this section, we apply dpMMSPFA to server energy consumption level classification. The results in this subsection are chiefly based on measurements of an Inspur NF5280M5 server; <xref ref-type="table" rid="T5">Table 5</xref> displays the server configuration used in this study. Power consumption is measured with an energy consumption tool while &#x201c;CPU intensive tasks&#x201d;, &#x201c;I/O intensive tasks&#x201d;, &#x201c;Load intensive tasks&#x201d; and &#x201c;Non-loaded tasks&#x201d; are generated. Under Linux, the testing tool &#x201c;Stress&#x201d; imposes a configurable amount of load on the system; it is designed primarily for users who want to load-test systems and monitor how these devices operate (<xref ref-type="bibr" rid="B32">Ulianytskyi, 2022</xref>).</p>
<table-wrap id="T5" position="float">
<label>TABLE 5</label>
<caption>
<p>Configuration of Inspur NF5280M5 server.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Definition</th>
<th align="left">Configuration</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">CPU architecture</td>
<td align="left">Intel(R) Xeon(R) Gold 5118 CPU@ 2.30GHz- 2 &#xd7; 12 Core</td>
</tr>
<tr>
<td align="left">Memory size</td>
<td align="left">12 &#xd7; 32&#xa0;GB</td>
</tr>
<tr>
<td align="left">Disk size</td>
<td align="left">2&#xd7;1&#xa0;TB SSD &#x2b;1&#xd7; 6&#xa0;TB HDD</td>
</tr>
<tr>
<td align="left">Network interface card</td>
<td align="left">82599ES, 3 &#xd7; 2 &#xd7; 10-Gigabit SFP&#x2b; network connection</td>
</tr>
<tr>
<td align="left">Operating system</td>
<td align="left">CentOS</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Load characteristic parameters of the Inspur NF5280M5 while it executes tasks under various loads are summarized in <xref ref-type="table" rid="T6">Table 6</xref>.</p>
<table-wrap id="T6" position="float">
<label>TABLE 6</label>
<caption>
<p>Characteristic parameters.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">No</th>
<th align="left">Feature parameter</th>
<th align="left">Unit</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">1</td>
<td align="left">Total-cpu-usage: usr</td>
<td align="left">%</td>
<td align="left">Percentage of programs in user&#x2019;s space</td>
</tr>
<tr>
<td align="left">2</td>
<td align="left">Total-cpu-usage: sys</td>
<td align="left">%</td>
<td align="left">Percentage of programs in system&#x2019;s space</td>
</tr>
<tr>
<td align="left">3</td>
<td align="left">Total-cpu-usage: idl</td>
<td align="left">%</td>
<td align="left">Idle percentage</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">Total-cpu-usage: wai</td>
<td align="left">%</td>
<td align="left">Percentage of CPU consumed waiting for disk I/O</td>
</tr>
<tr>
<td align="left">5</td>
<td align="left">Total-cpu-usage: hiq</td>
<td align="left">Times/sec</td>
<td align="left">Number of hardware interrupts</td>
</tr>
<tr>
<td align="left">6</td>
<td align="left">Total-cpu-usage: siq</td>
<td align="left">Times/sec</td>
<td align="left">Number of software interrupts</td>
</tr>
<tr>
<td align="left">7</td>
<td align="left">Dsk/total: read</td>
<td align="left">Bytes/sec</td>
<td align="left">Disk reading bandwidth</td>
</tr>
<tr>
<td align="left">8</td>
<td align="left">Dsk/total: write</td>
<td align="left">Bytes/sec</td>
<td align="left">Disk writing bandwidth</td>
</tr>
<tr>
<td align="left">9</td>
<td align="left">Net/total: recv</td>
<td align="left">Bytes/sec</td>
<td align="left">Network packet receiving bandwidth</td>
</tr>
<tr>
<td align="left">10</td>
<td align="left">Net/total: send</td>
<td align="left">Bytes/sec</td>
<td align="left">Network packet sending bandwidth</td>
</tr>
<tr>
<td align="left">11</td>
<td align="left">IO/total: read</td>
<td align="left">Blocks/sec</td>
<td align="left">Number of reading disk blocks</td>
</tr>
<tr>
<td align="left">12</td>
<td align="left">IO/total: write</td>
<td align="left">Blocks/sec</td>
<td align="left">Number of writing disk blocks</td>
</tr>
<tr>
<td align="left">13</td>
<td align="left">System: int</td>
<td align="left">Times/sec</td>
<td align="left">Number of hardware interrupts</td>
</tr>
<tr>
<td align="left">14</td>
<td align="left">System: csw</td>
<td align="left">Times/sec</td>
<td align="left">Number of context switches</td>
</tr>
<tr>
<td align="left">15</td>
<td align="left">Load-avg: 1&#xa0;m</td>
<td align="left">/</td>
<td align="left">Average load of the system per minute</td>
</tr>
<tr>
<td align="left">16</td>
<td align="left">Load-avg: 5&#xa0;m</td>
<td align="left">/</td>
<td align="left">Average load of the system every 5&#xa0;min</td>
</tr>
<tr>
<td align="left">17</td>
<td align="left">Load-avg: 15&#xa0;m</td>
<td align="left">/</td>
<td align="left">Average load of the system every 15&#xa0;min</td>
</tr>
</tbody>
</table>
</table-wrap>
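The 17 load characteristics of Table 6 mix very different units (percentages, bytes/sec, counts), so before clustering they are put on a common scale by z-score standardisation, the preprocessing the paper applies to its samples. A minimal sketch, with our own function name:

```python
from statistics import mean, pstdev

def z_score(columns):
    """Standardise each feature column (one per parameter of Table 6)
    to zero mean and unit variance; constant columns map to zeros."""
    out = []
    for col in columns:
        mu, sd = mean(col), pstdev(col)
        out.append([(v - mu) / sd if sd > 0 else 0.0 for v in col])
    return out
```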
<p>A heatmap (thermodynamic chart) is a graphical representation of pattern similarity, with each element value of the matrix mapped to a different colour. The heatmap visualizing <inline-formula id="inf100">
<mml:math id="m146">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">U</mml:mi>
<mml:mi>l</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> introduced in <xref ref-type="sec" rid="s3">Section 3</xref>, built from 7.5% of the collected server energy consumption data (the training samples), is used as supervised information to characterize the similarity and distinction of the data, as shown in <xref ref-type="fig" rid="F5">Figure 5</xref>. The similarity between training samples changes gradually through darkish black, darkish blue, blue, lake blue, bluish green, green, yellowish green and mild yellow. On the diagonal of the matrix the colour is darkish black and the similarity is 1, whereas pairs from different categories are visualized in bluish green with a similarity of 0. We incorporate the category and similarity information represented by this heatmap into the proposed dpMMSPFA model to improve its overall discriminant performance.</p>
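The supervised similarity matrix visualized in Figure 5 can be sketched as follows: cosine similarity within a class, zero across classes. This is a minimal sketch with our own function name, assuming plain list-of-list feature vectors.

```python
import math

def similarity_matrix(samples, labels):
    """Cosine-similarity matrix of supervised training samples:
    1.0 on the diagonal, cosine similarity for same-class pairs,
    and 0.0 for pairs drawn from different categories."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0
    n = len(samples)
    return [[cos(samples[i], samples[j]) if labels[i] == labels[j] else 0.0
             for j in range(n)] for i in range(n)]
```

Plotting this matrix with any heatmap tool reproduces the block structure seen in Figure 5.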
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>Similarity heatmap of sparse coding matrix represented by cosine function.</p>
</caption>
<graphic xlink:href="fenrg-10-1064464-g005.tif"/>
</fig>
<p>Since the Gamma hyper-parameters <inline-formula id="inf101">
<mml:math id="m147">
<mml:mrow>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>a</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf102">
<mml:math id="m148">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>e</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>f</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> may indirectly influence the inference of the corresponding parameters, the transformation matrices <inline-formula id="inf103">
<mml:math id="m149">
<mml:mrow>
<mml:mi mathvariant="bold-italic">D</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf104">
<mml:math id="m150">
<mml:mrow>
<mml:mi mathvariant="bold-italic">A</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, and the noise matrix <inline-formula id="inf105">
<mml:math id="m151">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3b5;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf106">
<mml:math id="m152">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c6;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, we varied each of them over {10<sup>&#x2212;5</sup>, 10<sup>&#x2212;4</sup>, 10<sup>&#x2212;3</sup>, 10<sup>&#x2212;2</sup>, 10<sup>&#x2212;1</sup>, 10<sup>0</sup>, 10<sup>1</sup>, 10<sup>2</sup>} and evaluated the effect on the recognition rate. To achieve the highest recognition accuracy on the server energy consumption dataset, we fixed them to suitable values (<inline-formula id="inf107">
<mml:math id="m153">
<mml:mrow>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> &#x3d;100, <inline-formula id="inf108">
<mml:math id="m154">
<mml:mrow>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> &#x3d;10<sup>&#x2212;1</sup>, <inline-formula id="inf109">
<mml:math id="m155">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>a</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> &#x3d;10<sup>&#x2212;2</sup>, <inline-formula id="inf110">
<mml:math id="m156">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> &#x3d;10<sup>&#x2212;3</sup>, <inline-formula id="inf111">
<mml:math id="m157">
<mml:mrow>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> &#x3d;10<sup>&#x2212;1</sup>, <inline-formula id="inf112">
<mml:math id="m158">
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> &#x3d;10<sup>&#x2212;2</sup>, <inline-formula id="inf113">
<mml:math id="m159">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>e</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> &#x3d;10<sup>&#x2212;2</sup>, <inline-formula id="inf114">
<mml:math id="m160">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>f</mml:mi>
<mml:mo>&#x223c;</mml:mo>
</mml:mover>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> &#x3d;1) based on the results provided in <xref ref-type="fig" rid="F6">Figure 6</xref>. We also found that a high identification rate is consistently achieved when the dpMMSPFA model automatically learns 6 or 7 clusters, which demonstrates the usefulness and rationality of introducing the DPM model for clustering and grouping in the proposed dpMMSPFA.</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>The recognition effects on the hyper-parameters.</p>
</caption>
<graphic xlink:href="fenrg-10-1064464-g006.tif"/>
</fig>
<p>Here, we present the samples&#x2019; clustering results with a clustering number of 7 (the samples have been processed by <italic>z</italic>-score). <xref ref-type="fig" rid="F7">Figure 7</xref> depicts that the sample proportion of each cluster is fairly uniform, with no partition whose proportion is excessively high or low, showing that the automatically learned number of clusters is appropriate and avoids the flaws of conventional clustering, such as overfitting. These results help explain why 7 partitions yield the remarkable energy consumption feature recognition performance observed in the experiments of <xref ref-type="fig" rid="F6">Figure 6</xref>.</p>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption>
<p>Clustering results by dpMMSPFA.</p>
</caption>
<graphic xlink:href="fenrg-10-1064464-g007.tif"/>
</fig>
<p>To further confirm the clustering effectiveness of dpMMSPFA after introducing the DPM model, the commonly used Calinski-Harabasz (CH) and Silhouette Coefficient (SC) indexes (<xref ref-type="bibr" rid="B37">Zhang et al., 2021</xref>) were used to evaluate clustering quality, and dpMMSPFA was compared experimentally with the Kmeans (<xref ref-type="bibr" rid="B34">Wu and Peng, 2017</xref>) and FCM (<xref ref-type="bibr" rid="B2">Bezdek et al., 1984</xref>) clustering methods. Larger CH and SC values generally indicate a better clustering effect. In accordance with the experiment in <xref ref-type="fig" rid="F6">Figure 6</xref>, where the automatic clustering results are 6 and 7, we ran the dpMMSPFA clustering effectiveness experiment; for Kmeans and FCM, we set the number of clusters from 2 to 10 at an interval of 1 and collected cluster-validity statistics to ensure an appropriate number of clusters. According to the experimental findings in <xref ref-type="fig" rid="F8">Figure 8</xref>, when CH is used to assess clustering efficacy, the dpMMSPFA results show a larger ratio of inter-group to intra-group dispersion and lower volatility than FCM, so the effect is superior to that of Kmeans and FCM. Under the SC index, the value of dpMMSPFA is closer to 1 than that of Kmeans and FCM, which means a better effect; although the volatility of dpMMSPFA is higher than that of FCM, most SC values of dpMMSPFA still exhibit superior clustering outcomes. The clustering evaluation experiment also proves that the excellent recognition performance with 6 and 7 clusters in <xref ref-type="fig" rid="F6">Figure 6</xref> is inseparable from the effectiveness of dpMMSPFA&#x2019;s automatic clustering.</p>
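For reference, both validity indexes can be computed from scratch. The sketch below is our own implementation of the standard definitions, not the paper's code; it assumes points as lists of floats and integer cluster labels.

```python
def ch_index(points, labels):
    """Calinski-Harabasz index: between-cluster over within-cluster
    dispersion, scaled by (n - k) / (k - 1); larger is better."""
    n, clusters = len(points), sorted(set(labels))
    k, dim = len(clusters), len(points[0])
    grand = [sum(p[d] for p in points) / n for d in range(dim)]
    B = W = 0.0
    for c in clusters:
        members = [p for p, l in zip(points, labels) if l == c]
        cen = [sum(p[d] for p in members) / len(members) for d in range(dim)]
        B += len(members) * sum((cen[d] - grand[d]) ** 2 for d in range(dim))
        W += sum(sum((p[d] - cen[d]) ** 2 for d in range(dim)) for p in members)
    return (B / (k - 1)) / (W / (n - k))

def sc_index(points, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per sample, where
    a is the mean intra-cluster distance and b the mean distance to the
    nearest other cluster; values near 1 mean well-separated clusters."""
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
    scores = []
    for i, (p, l) in enumerate(zip(points, labels)):
        own = [dist(p, q) for j, (q, m) in enumerate(zip(points, labels))
               if m == l and j != i]
        a = sum(own) / len(own) if own else 0.0
        b = min(sum(dist(p, q) for q, m in zip(points, labels) if m == c)
                / labels.count(c)
                for c in set(labels) if c != l)
        scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(scores) / len(scores)
```

The same quantities are available as `calinski_harabasz_score` and `silhouette_score` in scikit-learn if that dependency is acceptable.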
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption>
<p>Boxplot of clustering effectiveness comparison.</p>
</caption>
<graphic xlink:href="fenrg-10-1064464-g008.tif"/>
</fig>
<p>
<xref ref-type="fig" rid="F9">Figure 9</xref> depicts the relationship between classification performance and the number of features for the aforementioned models. The experimental findings confirm the continued effectiveness of the proposed dpMMSPFA, which achieves the highest accuracy of 95.79% with 10 extracted components. Owing to the modest advantage of dimensionality reduction, PCA &#x2b; LVSVM outperforms the plain LVSVM model in classification, but as dimensionality increases it falls short of dpMMSPFA&#x2019;s classification performance. On the other hand, LVSVM, Kmeans &#x2b; LVSVM and dpMNL use the original data as input features, and since the server consumption data contains redundant material, they all classify less effectively than dpMMSPFA. Additionally, dpMMSPFA performs better than the two-stage separation models and shows a relatively robust ability for energy consumption classification. According to <xref ref-type="fig" rid="F9">Figure 9</xref>, MMFA has greater classification prediction performance than the other baselines but, most of the time, poorer performance than dpMMSPFA. Joint learning within a joint architecture can therefore significantly enhance classification performance, and it is crucial to include the SP elements and clustering in the joint architecture. The outcomes also demonstrate that using additional components enhances classification performance, which is consistent with the theoretical dpMMSPFA framework. However, introducing more and more redundant feature content degrades classification performance, which is why accuracy did not always rise as the number of features increased.</p>
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption>
<p>Test accuracy on energy consumption classification under different components.</p>
</caption>
<graphic xlink:href="fenrg-10-1064464-g009.tif"/>
</fig>
</sec>
</sec>
<sec id="s6">
<title>6 Conclusion and future work</title>
<p>For the classification of energy consumption, we developed the Dirichlet max-margin similarity preservation factor analysis (dpMMSPFA) model. The Bayesian statistical method and the maximum-margin criterion for classifiers are merged into a single framework via data augmentation. In the hidden space extracted by FA, dpMMSPFA concurrently learns the underlying structure of the observation data, the similarity preservation items, the clusters, and the classifiers. In summary, the usefulness and efficiency of the proposed dpMMSPFA model have been verified by experiments on measured server energy consumption datasets.</p>
<p>Besides the Dirichlet process, the Beta process is another crucial model in the nonparametric Bayesian field. Even when the classifier is given the original samples, without any dimension reduction techniques, it can still produce striking recognition results for highly complex, multimodal, high-dimensional data. In future research on energy consumption modeling, we plan to unify factor analysis (FA), the Beta process and the hidden variable support vector machine (LVSVM) under the framework of Bayesian theory, which has remarkable potential to yield excellent recognition performance. Among classifiers under the nonparametric Bayesian framework, apart from LVSVM, there is no nonparametric Bayesian classifier with robust generalization ability. In fact, among classification models the overall performance of random forest (RF) is almost equal to that of the SVM classifier, and RF can even produce higher classification performance on low-dimensional data. Transforming the random forest classification model into a genuine random forest model under the nonparametric Bayesian framework deserves further study and would be a huge leap for nonparametric Bayesian models.</p>
<p>Additionally, as hardware has improved, a popular research topic is modeling the energy consumption of offloading server workloads to smart network interface cards (smart NICs), data processing units (DPUs), graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and other hardware resources. In the future, we shall further explore the creation of an energy consumption model for intelligent hardware units.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="s7">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="s8">
<title>Author contributions</title>
<p>BC: Project design, Methodology, Software, Simulation, Writing-Original draft preparation, Content revision. HL: Data collection, Experiments discussion, Content revision. CS: Project discussion. BS: Discussion on experimental environment. KL: Supervision.</p>
</sec>
<sec id="s9">
<title>Funding</title>
<p>This work received the funding from the Science and Technology Project of Research Institute of China Telecom.</p>
</sec>
<sec sec-type="COI-statement" id="s10">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s11">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Antoniak</surname>
<given-names>C. E.</given-names>
</name>
</person-group> (<year>1974</year>). <article-title>Mixtures of dirichlet processes with applications to bayesian nonparametric problems</article-title>. <source>Ann. Stat.</source> <volume>2</volume>, <fpage>1152</fpage>&#x2013;<lpage>1174</lpage>. <pub-id pub-id-type="doi">10.1214/AOS/1176342871</pub-id>
</citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bezdek</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Ehrlich</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Full</surname>
<given-names>W. E.</given-names>
</name>
</person-group> (<year>1984</year>). <article-title>Fcm: The fuzzy c-means clustering algorithm</article-title>. <source>Comput. Geosciences</source> <volume>10</volume>, <fpage>191</fpage>&#x2013;<lpage>203</lpage>. <pub-id pub-id-type="doi">10.1016/0098-3004(84)90020-7</pub-id>
</citation>
</ref>
<ref id="B3">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Caglar</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Gokhale</surname>
<given-names>A. S.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>iOverbook: Intelligent resource-overbooking to support soft real-time applications in the cloud</article-title>. <conf-name>2014 IEEE 7th International Conference on Cloud Computing</conf-name>. <conf-loc>Anchorage, AK, USA</conf-loc>.</citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Silva</surname>
<given-names>J. G.</given-names>
</name>
<name>
<surname>Paisley</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Dunson</surname>
<given-names>D. B.</given-names>
</name>
<name>
<surname>Carin</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Compressive sensing on manifolds using a nonparametric mixture of factor analyzers: Algorithm and performance bounds</article-title>. <source>IEEE Trans. Signal Process.</source> <volume>58</volume>, <fpage>6140</fpage>&#x2013;<lpage>6155</lpage>. <pub-id pub-id-type="doi">10.1109/TSP.2010.2070796</pub-id>
</citation>
</ref>
<ref id="B5">
<citation citation-type="book">
<person-group person-group-type="author">
<collab>Data Center Cooling Working Group of China Refrigeration Society (DC Cooling)</collab>
</person-group> (<year>2021</year>). <source>Annual development research report on cooling technology of China data center</source>. <publisher-loc>Beijing, China</publisher-loc>: <publisher-name>China Architecture Press</publisher-name>.</citation>
</ref>
<ref id="B6">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Davis</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Rivoire</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Goldszmidt</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ardestani</surname>
<given-names>E. K.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Chaos: Composable highly accurate OS-based power models</article-title>. <conf-name>2012 IEEE International Symposium on Workload Characterization (IISWC)</conf-name>. <conf-loc>La Jolla, CA, USA</conf-loc>.</citation>
</ref>
<ref id="B7">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Davis</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Rivoire</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Goldszmidt</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2014</year>). &#x201c;<article-title>Star-Cap: Cluster power management using software-only models</article-title>,&#x201d; <conf-name>2014 43rd International Conference on Parallel Processing Workshops</conf-name> <conf-loc>Minneapolis, MN, USA</conf-loc>.</citation>
</ref>
<ref id="B8">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Du</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Pan</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bao</surname>
<given-names>Z.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Noise robust radar HRRP target recognition based on multitask factor analysis with small training data size</article-title>. <source>IEEE Trans. Signal Process.</source> <volume>60</volume>, <fpage>3546</fpage>&#x2013;<lpage>3559</lpage>. <pub-id pub-id-type="doi">10.1109/TSP.2012.2191965</pub-id>
</citation>
</ref>
<ref id="B9">
<citation citation-type="web">
<person-group person-group-type="author">
<name>
<surname>Dua</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Graff</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>UCI machine learning repository</article-title>. <comment>Available: <ext-link ext-link-type="uri" xlink:href="https://archive.ics.uci.edu/ml/index.php">https://archive.ics.uci.edu/ml/index.php</ext-link> (Accessed June 23, 2022)</comment>.</citation>
</ref>
<ref id="B10">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ferguson</surname>
<given-names>T. S.</given-names>
</name>
</person-group> (<year>1973</year>). <article-title>A Bayesian analysis of some nonparametric problems</article-title>. <source>Ann. Stat.</source> <volume>1</volume>, <fpage>209</fpage>&#x2013;<lpage>230</lpage>. <pub-id pub-id-type="doi">10.1214/AOS/1176342360</pub-id>
</citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Islam</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Keung</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Empirical prediction models for adaptive resource provisioning in the cloud</article-title>. <source>Future Gener. Comput. Syst.</source> <volume>28</volume>, <fpage>155</fpage>&#x2013;<lpage>162</lpage>. <pub-id pub-id-type="doi">10.1016/j.future.2011.05.027</pub-id>
</citation>
</ref>
<ref id="B12">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>Z. L.</given-names>
</name>
<name>
<surname>Davis</surname>
<given-names>L. S.</given-names>
</name>
</person-group> (<year>2011</year>). &#x201c;<article-title>Learning a discriminative dictionary for sparse coding via label consistent K-SVD</article-title>,&#x201d; <conf-name>IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</conf-name> (<conf-loc>Colorado Springs, CO, USA</conf-loc>).</citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kilper</surname>
<given-names>D. C.</given-names>
</name>
<name>
<surname>Atkinson</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Korotky</surname>
<given-names>S. K.</given-names>
</name>
<name>
<surname>Goyal</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Vetter</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Suvakovic</surname>
<given-names>D.</given-names>
</name>
<etal/>
</person-group> (<year>2011</year>). <article-title>Power trends in communication networks</article-title>. <source>IEEE J. Sel. Top. Quantum Electron.</source> <volume>17</volume>, <fpage>275</fpage>&#x2013;<lpage>284</lpage>. <pub-id pub-id-type="doi">10.1109/JSTQE.2010.2074187</pub-id>
</citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Konstantakos</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Chatzigeorgiou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Nikolaidis</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Laopoulos</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Energy consumption estimation in embedded systems</article-title>. <source>IEEE Trans. Instrum. Meas.</source> <volume>57</volume>, <fpage>797</fpage>&#x2013;<lpage>804</lpage>. <pub-id pub-id-type="doi">10.1109/TIM.2007.913724</pub-id>
</citation>
</ref>
<ref id="B15">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Lacoste-Julien</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Sha</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Jordan</surname>
<given-names>M. I.</given-names>
</name>
</person-group> (<year>2008</year>). &#x201c;<article-title>DiscLDA: Discriminative learning for dimensionality reduction and classification</article-title>,&#x201d; <conf-name>22nd Annual Conference on Neural Information Processing Systems (NIPS)</conf-name> (<conf-loc>Vancouver, British Columbia, Canada</conf-loc>).</citation>
</ref>
<ref id="B16">
<citation citation-type="web">
<person-group person-group-type="author">
<name>
<surname>Levy</surname>
<given-names>Moises</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Data centers: Energy consumption is all about workloads</article-title>. <comment>Available: <ext-link ext-link-type="uri" xlink:href="https://omdia.tech.informa.com/OM018224/Data-centers-Energy-consumption-is-all-about-workloads">https://omdia.tech.informa.com/OM018224/Data-centers-Energy-consumption-is-all-about-workloads</ext-link> (Accessed June 11, 2021)</comment>.</citation>
</ref>
<ref id="B17">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Shen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>An embedded software power model based on algorithm complexity using back-propagation neural networks</article-title>, <conf-name>2010 IEEE/ACM Int&#x27;l Conference on Green Computing and Communications &#x26; Int&#x27;l Conference on Cyber, Physical and Social Computing</conf-name>. <conf-loc>Hangzhou, China</conf-loc>.</citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Power consumption model based on feature selection and deep learning in cloud computing scenarios</article-title>. <source>IET Commun.</source> <volume>14</volume>, <fpage>1610</fpage>&#x2013;<lpage>1618</lpage>. <pub-id pub-id-type="doi">10.1049/iet-com.2019.0717</pub-id>
</citation>
</ref>
<ref id="B19">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lin</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>An artificial neural network approach to power consumption model construction for servers in cloud data centers</article-title>. <source>IEEE Trans. Sustain. Comput.</source> <volume>5</volume>, <fpage>329</fpage>&#x2013;<lpage>340</lpage>. <pub-id pub-id-type="doi">10.1109/TSUSC.2019.2910129</pub-id>
</citation>
</ref>
<ref id="B20">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Moreno</surname>
<given-names>I. S.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Neural network-based overallocation for improved energy-efficiency in real-time cloud environments</article-title>. <conf-name>2012 IEEE 15th International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing</conf-name>. <conf-loc>Shenzhen, China</conf-loc>.</citation>
</ref>
<ref id="B21">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Polson</surname>
<given-names>N. G.</given-names>
</name>
<name>
<surname>Scott</surname>
<given-names>S. L.</given-names>
</name>
</person-group> (<year>2011</year>). <article-title>Data augmentation for support vector machines</article-title>. <source>Bayesian Anal.</source> <volume>6</volume>, <fpage>1</fpage>&#x2013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1214/11-BA601</pub-id>
</citation>
</ref>
<ref id="B22">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>P&#xf6;ss</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Nambiar</surname>
<given-names>R. O.</given-names>
</name>
</person-group> (<year>2010</year>). &#x201c;<article-title>A power consumption analysis of decision support systems</article-title>,&#x201d; <conf-name>2010 First Joint WOSP/SIPEW International Conference on Performance Engineering</conf-name> (<conf-loc>San Jose, CA, USA</conf-loc>).</citation>
</ref>
<ref id="B23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Quadrianto</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Ghahramani</surname>
<given-names>Z.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>A very simple safe-bayesian random forest</article-title>. <source>IEEE Trans. Pattern Anal. Mach. Intell.</source> <volume>37</volume>, <fpage>1297</fpage>&#x2013;<lpage>1303</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2014.2362751</pub-id>
</citation>
</ref>
<ref id="B24">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Rasmussen</surname>
<given-names>C. E.</given-names>
</name>
<name>
<surname>Ghahramani</surname>
<given-names>Z.</given-names>
</name>
</person-group> (<year>2001</year>). &#x201c;<article-title>Infinite mixtures of Gaussian process experts</article-title>,&#x201d; in <conf-name>15th Annual Conference on Neural Information Processing Systems (NIPS)</conf-name> (<conf-loc>Vancouver, British Columbia, Canada</conf-loc>).</citation>
</ref>
<ref id="B25">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rotem</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Naveh</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ananthakrishnan</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Weissmann</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Rajwan</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Power-management architecture of the intel microarchitecture code-named sandy bridge</article-title>. <source>IEEE Micro</source> <volume>32</volume>, <fpage>20</fpage>&#x2013;<lpage>27</lpage>. <pub-id pub-id-type="doi">10.1109/MM.2012.12</pub-id>
</citation>
</ref>
<ref id="B26">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shahbaba</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Neal</surname>
<given-names>R. M.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>Nonlinear models using Dirichlet process mixtures</article-title>. <source>J. Mach. Learn. Res.</source> <volume>10</volume>, <fpage>1829</fpage>&#x2013;<lpage>1850</lpage>. <pub-id pub-id-type="doi">10.5555/1577069.1755846</pub-id>
</citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shen</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Qiu</surname>
<given-names>Q.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Achieving autonomous power management using reinforcement learning</article-title>. <source>ACM Trans. Des. Autom. Electron. Syst.</source> <volume>18</volume> (<issue>2</issue>), <fpage>1</fpage>&#x2013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1145/2442087.2442095</pub-id>
</citation>
</ref>
<ref id="B28">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shi</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Bao</surname>
<given-names>Z.</given-names>
</name>
</person-group> (<year>2011</year>). <article-title>Radar HRRP statistical recognition with local factor analysis by automatic Bayesian Ying-Yang harmony learning</article-title>. <source>IEEE Trans. Signal Process.</source> <volume>59</volume>, <fpage>610</fpage>&#x2013;<lpage>617</lpage>. <pub-id pub-id-type="doi">10.1109/TSP.2010.2088391</pub-id>
</citation>
</ref>
<ref id="B29">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Subramaniam</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>W.</given-names>
</name>
</person-group> (<year>2014</year>). &#x201c;<article-title>Enabling efficient power provisioning for enterprise applications</article-title>,&#x201d; <conf-name>2014 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing</conf-name> (<conf-loc>Chicago, IL, USA</conf-loc>).</citation>
</ref>
<ref id="B30">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Tesauro</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Das</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Chan</surname>
<given-names>H. Y.</given-names>
</name>
<name>
<surname>Kephart</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Levine</surname>
<given-names>D. W.</given-names>
</name>
<name>
<surname>Rawson</surname>
<given-names>F. L.</given-names>
</name>
<etal/>
</person-group> (<year>2007</year>). &#x201c;<article-title>Managing power consumption and performance of computing systems using reinforcement learning</article-title>,&#x201d; in <conf-name>21st Annual Conference on Neural Information Processing Systems (NIPS)</conf-name> (<conf-loc>Vancouver, British Columbia, Canada</conf-loc>).</citation>
</ref>
<ref id="B31">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tharwat</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Principal component analysis - a tutorial</article-title>. <source>Int. J. Appl. Pattern Recognit.</source> <volume>3</volume>, <fpage>197</fpage>&#x2013;<lpage>240</lpage>. <pub-id pub-id-type="doi">10.1504/IJAPR.2016.10000630</pub-id>
</citation>
</ref>
<ref id="B32">
<citation citation-type="web">
<person-group person-group-type="author">
<name>
<surname>Ulianytskyi</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Packages for Linux and Unix</article-title>. <comment>Available: <ext-link ext-link-type="uri" xlink:href="https://pkgs.org/download/stress">https://pkgs.org/download/stress</ext-link> (Accessed June 23, 2022)</comment>.</citation>
</ref>
<ref id="B33">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Upadhyay</surname>
<given-names>P. C.</given-names>
</name>
<name>
<surname>Karanam</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Lory</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>DeSouza</surname>
<given-names>G. N.</given-names>
</name>
</person-group> (<year>2021</year>). &#x201c;<article-title>Classifying cover crop residue from RGB images: A simple SVM versus a SVM ensemble</article-title>,&#x201d; <conf-name>2021 IEEE Symposium Series on Computational Intelligence (SSCI)</conf-name> (<conf-loc>Orlando, FL, USA</conf-loc>).</citation>
</ref>
<ref id="B34">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wu</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Peng</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>A data mining approach combining K-means clustering with bagging neural network for short-term wind power forecasting</article-title>. <source>IEEE Internet Things J.</source> <volume>4</volume>, <fpage>979</fpage>&#x2013;<lpage>986</lpage>. <pub-id pub-id-type="doi">10.1109/JIOT.2017.2677578</pub-id>
</citation>
</ref>
<ref id="B35">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Xu</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2013</year>). &#x201c;<article-title>Fast max-margin matrix factorization with data augmentation</article-title>,&#x201d; <conf-name>30th International Conference on Machine Learning (ICML-13)</conf-name> (<conf-loc>Atlanta, USA</conf-loc>).</citation>
</ref>
<ref id="B36">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2014</year>). &#x201c;<article-title>Max-margin infinite hidden Markov models</article-title>,&#x201d; <conf-name>31st International Conference on Machine Learning (ICML-14)</conf-name> (<conf-loc>Beijing, China</conf-loc>).</citation>
</ref>
<ref id="B37">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Hei</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Fei</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Jiao</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2021</year>). &#x201c;<article-title>Cross-domain text classification based on BERT model</article-title>,&#x201d; <conf-name>2021 International Conference on Database Systems for Advanced Applications</conf-name> (<conf-loc>Taipei, Taiwan</conf-loc>).</citation>
</ref>
<ref id="B38">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhou</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Abawajy</surname>
<given-names>J. H.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Chowdhury</surname>
<given-names>M. U.</given-names>
</name>
<name>
<surname>Alelaiwi</surname>
<given-names>A.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>Fine-grained energy consumption model of servers based on task characteristics in cloud data center</article-title>. <source>IEEE Access</source> <volume>6</volume>, <fpage>27080</fpage>&#x2013;<lpage>27090</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2017.2732458</pub-id>
</citation>
</ref>
<ref id="B39">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ahmed</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Xing</surname>
<given-names>E. P.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>MedLDA: Maximum margin supervised topic models</article-title>. <source>J. Mach. Learn. Res.</source> <volume>13</volume>, <fpage>2237</fpage>&#x2013;<lpage>2278</lpage>. <pub-id pub-id-type="doi">10.5555/2503308.2503315</pub-id>
</citation>
</ref>
<ref id="B40">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Perkins</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Gibbs max-margin topic models with data augmentation</article-title>. <source>J. Mach. Learn. Res.</source> <volume>15</volume>, <fpage>1073</fpage>&#x2013;<lpage>1110</lpage>. <pub-id pub-id-type="doi">10.5555/2627435.2638570</pub-id>
</citation>
</ref>
<ref id="B41">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Perkins</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2013</year>). &#x201c;<article-title>Gibbs max-margin topic models with fast sampling algorithms</article-title>,&#x201d; <conf-name>30th International Conference on Machine Learning (ICML-13)</conf-name> (<conf-loc>Atlanta, USA</conf-loc>).</citation>
</ref>
</ref-list>
</back>
</article>