<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Appl. Math. Stat.</journal-id>
<journal-title>Frontiers in Applied Mathematics and Statistics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Appl. Math. Stat.</abbrev-journal-title>
<issn pub-type="epub">2297-4687</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fams.2025.1594873</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Applied Mathematics and Statistics</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Plug-and-play low-rank tensor completion and reconstruction algorithms with improved applicability of tensor decompositions</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Mukai</surname> <given-names>Manabu</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Hontani</surname> <given-names>Hidekata</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Yokota</surname> <given-names>Tatsuya</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2891905/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Computer Science, Nagoya Institute of Technology</institution>, <addr-line>Aichi</addr-line>, <country>Japan</country></aff>
<aff id="aff2"><sup>2</sup><institution>RIKEN Center for Advanced Intelligence Project</institution>, <addr-line>Tokyo</addr-line>, <country>Japan</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Abiy Tasissa, Tufts University, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Lei Wang, Beihang University, China</p>
<p>Tomohiro Hashizume, University of Hamburg, Germany</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Tatsuya Yokota <email>t.yokota&#x00040;nitech.ac.jp</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>17</day>
<month>09</month>
<year>2025</year>
</pub-date>
<pub-date pub-type="collection">
<year>2025</year>
</pub-date>
<volume>11</volume>
<elocation-id>1594873</elocation-id>
<history>
<date date-type="received">
<day>17</day>
<month>03</month>
<year>2025</year>
</date>
<date date-type="accepted">
<day>18</day>
<month>08</month>
<year>2025</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2025 Mukai, Hontani and Yokota.</copyright-statement>
<copyright-year>2025</copyright-year>
<copyright-holder>Mukai, Hontani and Yokota</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>In this paper, we propose a new unified optimization algorithm for general tensor completion and reconstruction problems, formulated as an inverse problem for low-rank tensors under general linear observation models. The proposed algorithm supports at least three basic loss functions (&#x02113;<sub>2</sub> loss, &#x02113;<sub>1</sub> loss, and generalized KL divergence) and various TD models (CP, Tucker, TT, and TR decompositions, non-negative matrix/tensor factorizations, and other constrained TD models). We derive the optimization algorithm based on a hierarchical combination of the alternating direction method of multipliers (ADMM) and majorization-minimization (MM). We show that the proposed algorithm can handle a wide range of applications and can be easily extended to any established TD model in a plug-and-play manner.</p></abstract>
<kwd-group>
<kwd>tensor decompositions</kwd>
<kwd>tensor completion</kwd>
<kwd>tensor reconstruction</kwd>
<kwd>majorization-minimization (MM)</kwd>
<kwd>alternating direction method of multipliers (ADMM)</kwd>
<kwd>plug-and-play (PnP)</kwd>
<kwd>generalized KL divergence</kwd>
</kwd-group>
<counts>
<fig-count count="7"/>
<table-count count="3"/>
<equation-count count="63"/>
<ref-count count="109"/>
<page-count count="16"/>
<word-count count="12699"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Optimization</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1 Introduction</title>
<p>Tensor decompositions (TDs) are increasingly used in various application fields such as image recovery [<xref ref-type="bibr" rid="B1">1</xref>], blind source separation [<xref ref-type="bibr" rid="B2">2</xref>], traffic data analysis [<xref ref-type="bibr" rid="B3">3</xref>], wireless communications [<xref ref-type="bibr" rid="B4">4</xref>], and quantum state tomography [<xref ref-type="bibr" rid="B5">5</xref>&#x02013;<xref ref-type="bibr" rid="B7">7</xref>]. Tensor decompositions are mathematical models that directly exploit the low-rank structure of tensors [<xref ref-type="bibr" rid="B8">8</xref>&#x02013;<xref ref-type="bibr" rid="B10">10</xref>]. In addition, various other optional structures such as non-negativity [<xref ref-type="bibr" rid="B11">11</xref>&#x02013;<xref ref-type="bibr" rid="B13">13</xref>], sparsity [<xref ref-type="bibr" rid="B14">14</xref>&#x02013;<xref ref-type="bibr" rid="B17">17</xref>], smoothness [<xref ref-type="bibr" rid="B18">18</xref>&#x02013;<xref ref-type="bibr" rid="B21">21</xref>], and manifold constraints [<xref ref-type="bibr" rid="B22">22</xref>&#x02013;<xref ref-type="bibr" rid="B24">24</xref>] can be incorporated into the factor matrices or core tensors of tensor decompositions. Such flexible modeling makes tensor decompositions popular across a wide range of applications. In these applications, various data analysis tasks can be addressed by designing optimization problems based on the assumed low-rank and other additional data structures, and by optimizing the parameters to maximize consistency (likelihood) with the observed data.</p>
<p>The challenges of applying tensor decomposition to each data analysis task are twofold. First, it is necessary to appropriately design TD models and constrained optimization problems according to the objective of each data analysis task and the observation model of the measurement data, which requires deep domain-specific insight and experience. The second challenge is to derive and implement an efficient optimization algorithm for the specific tensor decomposition problem so designed, which requires advanced computer science skills in areas such as applied linear algebra, numerical computation, mathematical optimization, and programming. Overcoming these two challenges is not easy, as it is rare to find someone who is both an expert in a specific domain and well-versed in computer science. Building a project team that includes experts in both the target domain and computer science can be useful, but the costs remain high. The first challenge is closely related to research in the target domain and will be difficult to solve fundamentally. However, the second challenge can be solved by technological advances. In what follows, we focus on solving the second challenge.</p>
<p>In conventional research, a new tensor modeling method is proposed to solve a specific task, the corresponding optimization algorithm is derived, implemented, and evaluated experimentally, and the whole is published as a single paper. This research approach is common and robust, but it is very costly because a custom optimization algorithm must be derived and implemented for each subdivided problem. For example, the problem of recovering tensor data that includes missing values is known as the tensor completion problem [<xref ref-type="bibr" rid="B25">25</xref>&#x02013;<xref ref-type="bibr" rid="B29">29</xref>]. Now, if we consider a model in which the true tensor has low Tucker/CP rank, the core tensor or factor matrices are non-negative, and the observations contain Gaussian noise, the optimization problem can be formulated as a non-negative Tucker/CP decomposition with missing values based on &#x02113;<sub>2</sub> loss minimization. With some expertise, it is possible to derive and implement a specific optimization algorithm for this problem [<xref ref-type="bibr" rid="B30">30</xref>, <xref ref-type="bibr" rid="B31">31</xref>]. However, in some applications, one often wants to change the assumed noise distribution, try different TD models, or add constraints to the factor matrices [<xref ref-type="bibr" rid="B32">32</xref>&#x02013;<xref ref-type="bibr" rid="B37">37</xref>]. A specific optimization algorithm could be derived and implemented each time such a model change is made, but this would be very costly.</p>
<p>One approach to solving the above problems is to develop a universal optimization algorithm that can solve a variety of data analysis problems based on tensor decomposition. Having a single universal solver frees most users from having to derive and implement individual algorithms, allowing them to focus on designing the models. Such an environment would encourage users to iterate through trial-and-error modeling and accelerate applications of tensor decomposition. The technical challenge in developing a universal optimization algorithm is how to formulate the problem and how to design the structure of the algorithm. A universal optimization algorithm needs to efficiently connect various TD models to various reconstruction problems, and finding such a method is not trivial.</p>
<p>In this paper, as a major step toward this goal, we propose a unified algorithm for obtaining any tensor decomposition under various noisy linear observation models. The proposed algorithm supports three basic loss functions (&#x02113;<sub>2</sub> loss, &#x02113;<sub>1</sub> loss, and generalized KL divergence) and various low-rank TD models. Since our formulation assumes a general linear observation model, the proposed algorithm can address a variety of problems such as noise removal, tensor completion, deconvolution, super-resolution, compressed sensing, and medical imaging [<xref ref-type="bibr" rid="B1">1</xref>].</p>
<p>We derive the optimization algorithm based on a hierarchical combination of the alternating direction method of multipliers (ADMM) [<xref ref-type="bibr" rid="B38">38</xref>], the majorization-minimization (MM) algorithm [<xref ref-type="bibr" rid="B39">39</xref>, <xref ref-type="bibr" rid="B40">40</xref>], and least-squares (LS) based tensor decomposition. The most distinctive feature of our approach is that it uses LS-based tensor decompositions as plug-and-play modules (denoisers). LS-based algorithms have been well established for various types of TD such as canonical polyadic decomposition (CPD) [<xref ref-type="bibr" rid="B41">41</xref>&#x02013;<xref ref-type="bibr" rid="B43">43</xref>], Tucker decomposition (TKD) [<xref ref-type="bibr" rid="B44">44</xref>&#x02013;<xref ref-type="bibr" rid="B48">48</xref>], tensor-train decomposition (TTD) [<xref ref-type="bibr" rid="B49">49</xref>, <xref ref-type="bibr" rid="B50">50</xref>], tensor-ring decomposition (TRD) [<xref ref-type="bibr" rid="B51">51</xref>], and non-negative matrix/tensor factorizations (NMF/NTF) [<xref ref-type="bibr" rid="B11">11</xref>, <xref ref-type="bibr" rid="B52">52</xref>, <xref ref-type="bibr" rid="B53">53</xref>]. Next, we derive an MM algorithm for solving the tensor decomposition problem in a linear observation model based on the &#x02113;<sub>2</sub> loss (Gaussian noise). Since various TD algorithms can be adopted as modules, we can support various TD models at this point. Finally, we derive an ADMM to minimize the &#x02113;<sub>1</sub> loss and the generalized KL divergence for observations under Laplace and Poisson noise. As a result, the overall ADMM algorithm calls MM in its loop, and MM calls a TD algorithm for its update rule.</p>
<p>It should be noted that the structure of the proposed algorithm can incorporate different TD models in a plug-and-play (PnP) manner. This is inspired by work that applies arbitrary denoisers in a plug-and-play manner in image reconstruction, such as PnP-ADMM [<xref ref-type="bibr" rid="B54">54</xref>]. LS-based TD can be viewed as a denoiser that reconstructs a low-rank tensor from a noisy tensor. In our study, we only assume the existence of LS-based TD algorithms, and therefore the framework also covers <italic>undiscovered TD models</italic> that may be proposed in the future. In addition, the proposed framework can be easily extended to penalized matrix/tensor decompositions. Many sophisticated TD models can be applied as priors in linear inverse problems. For example, our method allows sophisticated decomposition models proposed for tensor completion to be easily extended to other tasks such as robust tensor decomposition, deconvolution, compressive sensing, and computed tomography.</p>
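<p>To make this structure concrete, the following is a minimal sketch (ours, not the full algorithm derived in this paper) of the innermost two layers for the special case of tensor completion under the &#x02113;<sub>2</sub> loss: an MM-style loop that alternately imputes the missing entries and calls a plug-and-play low-rank approximation. A truncated SVD of a matricized tensor serves as a stand-in denoiser here, and the names <monospace>svd_denoiser</monospace> and <monospace>mm_completion</monospace> are purely illustrative; any LS-based TD routine could be plugged in instead, and the ADMM layer for the &#x02113;<sub>1</sub> loss and the generalized KL divergence wraps around this loop.</p>
<preformat>import numpy as np

def svd_denoiser(V, rank):
    # Stand-in for an LS-based TD: best rank-r approximation of a
    # matrix (or matricized tensor) via truncated SVD.
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

def mm_completion(V_obs, mask, denoiser, n_iter=100):
    # MM-style loop for l2-loss completion: fill the missing entries
    # with the current estimate, then apply the low-rank denoiser.
    X = V_obs.copy()
    for _ in range(n_iter):
        X = denoiser(mask * V_obs + (1.0 - mask) * X)
    return X

# Usage: recover a rank-5 matrix observed through a random mask.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 40))
mask = (rng.random(M.shape) &gt; 0.4).astype(float)
X_hat = mm_completion(mask * M, mask, lambda V: svd_denoiser(V, 5))</preformat>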
<sec>
<title>1.1 Notations</title>
<p>A vector, a matrix, and a tensor are indicated by a bold lowercase letter, <bold>a</bold> &#x02208; &#x0211D;<sup><italic>I</italic></sup>, a bold uppercase letter, <bold>B</bold> &#x02208; &#x0211D;<sup><italic>I</italic> &#x000D7; <italic>J</italic></sup>, and a bold calligraphic letter, <inline-formula><mml:math id="M1"><mml:mstyle mathvariant="bold-script"><mml:mi>C</mml:mi></mml:mstyle></mml:math></inline-formula> &#x02208; &#x0211D;<sup><italic>J</italic><sub>1</sub> &#x000D7; <italic>J</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>J</italic><sub><italic>N</italic></sub></sup>, respectively. An <italic>N</italic>th-order tensor, <inline-formula><mml:math id="M2"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> &#x02208; &#x0211D;<sup><italic>I</italic><sub>1</sub> &#x000D7; <italic>I</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>I</italic><sub><italic>N</italic></sub></sup>, can be transformed into a vector, which is denoted by using the same character, <inline-formula><mml:math id="M3"><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle><mml:mo>=</mml:mo><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">vec</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02208;</mml:mo><mml:mi>&#x0211D;</mml:mi><mml:msubsup><mml:mrow><mml:mo>&#x0220F;</mml:mo></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. An (<italic>i</italic><sub>1</sub>, <italic>i</italic><sub>2</sub>, ..., <italic>i</italic><sub><italic>N</italic></sub>)-element of <inline-formula><mml:math id="M4"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> is denoted by <italic>x</italic><sub><italic>i</italic><sub>1</sub>, <italic>i</italic><sub>2</sub>, ..., <italic>i</italic><sub><italic>N</italic></sub></sub> or [<inline-formula><mml:math id="M5"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula>]<sub><italic>i</italic><sub>1</sub>, <italic>i</italic><sub>2</sub>, ..., <italic>i</italic><sub><italic>N</italic></sub></sub>. The operator &#x022A1; represents the Hadamard product, and &#x000B7;<sup>&#x02020;</sup> represents the Moore-Penrose pseudoinverse.
sign(<inline-formula><mml:math id="M6"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula>) &#x02208; &#x0211D;<sup><italic>I</italic><sub>1</sub> &#x000D7; <italic>I</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>I</italic><sub><italic>N</italic></sub></sup> and abs(<inline-formula><mml:math id="M7"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula>) &#x02208; &#x0211D;<sup><italic>I</italic><sub>1</sub> &#x000D7; <italic>I</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>I</italic><sub><italic>N</italic></sub></sup> are, respectively, operations that return the sign and absolute value for each entry of <inline-formula><mml:math id="M8"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> &#x02208; &#x0211D;<sup><italic>I</italic><sub>1</sub> &#x000D7; <italic>I</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>I</italic><sub><italic>N</italic></sub></sup>.</p>
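<p>As an illustrative aside (our own snippet, assuming the column-major convention for vectorization), this notation maps directly onto standard array operations:</p>
<preformat>import numpy as np

X = np.arange(24, dtype=float).reshape(2, 3, 4)  # a 3rd-order tensor
x = X.reshape(-1, order="F")   # vec(X), length 2*3*4 = 24
S = np.sign(X - 12.0)          # sign(.), applied entrywise
A = np.abs(X - 12.0)           # abs(.), applied entrywise
H = X * X                      # Hadamard product of X with itself</preformat>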
</sec>
</sec>
<sec id="s2">
<title>2 Review of existing tensor completion and reconstruction methods from a perspective of plug-and-play algorithms</title>
<sec>
<title>2.1 LS-based tensor decomposition</title>
<p>Tensor decomposition (TD) is a mathematical model to represent a tensor as a product of tensors/matrices. There are many TD models such as canonical polyadic decomposition (CPD) [<xref ref-type="bibr" rid="B41">41</xref>&#x02013;<xref ref-type="bibr" rid="B43">43</xref>, <xref ref-type="bibr" rid="B55">55</xref>], Tucker decomposition (TKD) [<xref ref-type="bibr" rid="B44">44</xref>&#x02013;<xref ref-type="bibr" rid="B46">46</xref>], tensor-train decomposition (TTD) [<xref ref-type="bibr" rid="B49">49</xref>, <xref ref-type="bibr" rid="B50">50</xref>], tensor-ring decomposition (TRD) [<xref ref-type="bibr" rid="B51">51</xref>], block-term decomposition [<xref ref-type="bibr" rid="B56">56</xref>&#x02013;<xref ref-type="bibr" rid="B58">58</xref>], coupled tensor decomposition [<xref ref-type="bibr" rid="B59">59</xref>&#x02013;<xref ref-type="bibr" rid="B61">61</xref>], hierarchical Tucker decomposition [<xref ref-type="bibr" rid="B62">62</xref>], tensor wheel decomposition [<xref ref-type="bibr" rid="B63">63</xref>], fully connected tensor network [<xref ref-type="bibr" rid="B64">64</xref>], t-SVD [<xref ref-type="bibr" rid="B65">65</xref>], Hankel tensor decomposition [<xref ref-type="bibr" rid="B66">66</xref>&#x02013;<xref ref-type="bibr" rid="B69">69</xref>], and convolutional tensor decomposition [<xref ref-type="bibr" rid="B21">21</xref>].</p>
<p><xref ref-type="fig" rid="F1">Figure 1</xref> shows typical models of tensor decompositions in graphical notation [<xref ref-type="bibr" rid="B10">10</xref>]. In graphical notation, each node represents a core tensor and the edges connecting the nodes represent tensor products. As can be seen in the figure, all TD models are expressed by tensor products of core tensors.</p>
<fig position="float" id="F1">
<label>Figure 1</label>
<caption><p>Tensor decomposition models in graphical notation.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fams-11-1594873-g0001.tif">
<alt-text>Illustration of six tensor decomposition diagrams with labels: canonical polyadic decomposition, Tucker decomposition, hierarchical Tucker decomposition, tensor train decomposition, tensor ring decomposition, and fully-connected tensor network decomposition. Each diagram features nodes and connecting lines, representing different structures.</alt-text>
</graphic>
</fig>
<p>In this paper, we will write any TD model as</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M9"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mo>=</mml:mo><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mi>&#x0211D;</mml:mi><mml:mrow><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x000D7;</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x000D7;</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mo>&#x000D7;</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Note that <xref ref-type="disp-formula" rid="E1">Equation 1</xref> is a somewhat abstract expression which only represents that the entire tensor <inline-formula><mml:math id="M10"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> is reconstructed by multiplying the <italic>L</italic> core tensors <inline-formula><mml:math id="M11"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>1</sub>, <inline-formula><mml:math id="M12"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>2</sub>, ..., <inline-formula><mml:math id="M13"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>L</italic>&#x02212;1</sub> and <inline-formula><mml:math id="M14"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>L</italic></sub>. The least squares (LS) based tensor decomposition problem of a given tensor <inline-formula><mml:math id="M15"><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:math></inline-formula> &#x02208; &#x0211D;<sup><italic>I</italic><sub>1</sub> &#x000D7; <italic>I</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>I</italic><sub><italic>N</italic></sub></sup> is given by</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M16"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:munder><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>This squared error is a non-convex function with respect to the full set of optimization parameters (<inline-formula><mml:math id="M17"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>1</sub>, <inline-formula><mml:math id="M18"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>2</sub>, ..., <inline-formula><mml:math id="M19"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>L</italic></sub>), but it is a convex quadratic function with respect to any single core tensor <inline-formula><mml:math id="M20"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>l</italic></sub>, <italic>l</italic> &#x02208; {1, 2, ..., <italic>L</italic>}. If we focus on optimizing a certain core tensor <inline-formula><mml:math id="M21"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>l</italic></sub> and temporarily treat all other core tensors as constants, which can be summarized as a single tensor denoted by <inline-formula><mml:math id="M22"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>&#x02212;<italic>l</italic></sub>, then the sub-optimization problem reduces to an LS-based matrix optimization problem:</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M23"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:munder><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:munder><mml:mo>&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">mat</mml:mtext></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">mat</mml:mtext></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mtext>&#x000A0;</mml:mtext><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">mat</mml:mtext></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where we used the abstract notation &#x02329;&#x02329;<inline-formula><mml:math id="M24"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>l</italic></sub>, <inline-formula><mml:math id="M25"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>&#x02212;<italic>l</italic></sub>&#x0232A;&#x0232A; &#x0003D; &#x02329;&#x02329;<inline-formula><mml:math id="M26"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>1</sub>, <inline-formula><mml:math id="M27"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>2</sub>, ..., <inline-formula><mml:math id="M28"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>L</italic></sub>&#x0232A;&#x0232A;, and mat<sub><italic>lm</italic></sub>, <italic>m</italic> &#x02208; {1, 2, 3}, denote the appropriate matricization operators for the <italic>l</italic>-th core tensor. The sub-optimization problem shown in <xref ref-type="disp-formula" rid="E3">Equation 3</xref> has a closed-form solution, and updating <inline-formula><mml:math id="M29"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>l</italic></sub> by</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M30"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02190;</mml:mo><mml:msubsup><mml:mrow><mml:mtext class="textrm" mathvariant="normal">mat</mml:mtext></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">mat</mml:mtext></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">mat</mml:mtext></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02020;</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>always reduces (non-increases) the squared error. In other words, putting &#x003B8; &#x0003D; (<inline-formula><mml:math id="M31"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>1</sub>, <inline-formula><mml:math id="M32"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>2</sub>, ..., <inline-formula><mml:math id="M33"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>L</italic></sub>) and <inline-formula><mml:math id="M34"><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula>, the following algorithm</p>
<disp-formula id="E5"><label>(5)</label><mml:math id="M35"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mtext>&#x02009;</mml:mtext><mml:mo>&#x02190;</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle><mml:mn>2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mn>...</mml:mn><mml:mo>,</mml:mo><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle><mml:mi>L</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E6"><label>(6)</label><mml:math id="M36"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle><mml:mi>l</mml:mi></mml:msub><mml:mo>&#x02190;</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:msubsup><mml:mtext>mat</mml:mtext><mml:mrow><mml:mi>l</mml:mi><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mtext>mat</mml:mtext></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo stretchy='false'>)</mml:mo><mml:msub><mml:mrow><mml:mtext>mat</mml:mtext></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x02020;</mml:mo></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E7"><label>(7)</label><mml:math id="M37"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x02190;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle><mml:mn>2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mn>...</mml:mn><mml:mo>,</mml:mo><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle><mml:mi>L</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>guarantees <italic>f</italic>(&#x003B8;<sub><italic>k</italic></sub>) &#x02265; <italic>f</italic>(&#x003B8;<sub><italic>k</italic>&#x0002B;1</sub>) for any <italic>l</italic> &#x02208; {1, 2, ..., <italic>L</italic>}. Therefore, the LS-based tensor decomposition problem in <xref ref-type="disp-formula" rid="E2">Equation 2</xref> can be solved by repeating the step shown in <xref ref-type="disp-formula" rid="E6">Equation 6</xref> for every <italic>l</italic> &#x02208; {1, 2, ..., <italic>L</italic>}; this is called the alternating least squares (ALS) algorithm [<xref ref-type="bibr" rid="B42">42</xref>, <xref ref-type="bibr" rid="B43">43</xref>, <xref ref-type="bibr" rid="B47">47</xref>, <xref ref-type="bibr" rid="B50">50</xref>, <xref ref-type="bibr" rid="B51">51</xref>]. ALS is a workhorse algorithm for TDs because it has no hyper-parameters (e.g., the step size in gradient descent) and the objective function is stably reduced (monotonically non-increasing).</p>
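<p>As a concrete instance of the updates in <xref ref-type="disp-formula" rid="E4">Equations 4</xref>&#x02013;<xref ref-type="disp-formula" rid="E7">7</xref>, the following is a minimal NumPy sketch of ALS for a 3rd-order CP decomposition. The code and the helper names <monospace>khatri_rao</monospace>, <monospace>unfold</monospace>, and <monospace>cp_als</monospace> are ours and purely illustrative (a production implementation would normalize the factors and monitor the fit); each factor update is the closed-form pseudoinverse solution of <xref ref-type="disp-formula" rid="E4">Equation 4</xref>, so the squared error is monotonically non-increasing.</p>
<preformat>import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product; shape (I*J, R).
    R = A.shape[1]
    return np.einsum("ir,jr->ijr", A, B).reshape(-1, R)

def unfold(T, mode):
    # Mode-n matricization; shape (I_mode, product of remaining dims).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(V, rank, n_iter=100, seed=0):
    # Alternating least squares for the CP model: each update is the
    # closed-form LS solution for one factor with the others fixed.
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in V.shape)
    for _ in range(n_iter):
        A = unfold(V, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(V, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(V, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C</preformat>
<p>The reconstructed tensor can then be formed with <monospace>np.einsum("ir,jr,kr->ijk", A, B, C)</monospace>.</p>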
<sec>
<title>2.1.1 Perspective on optimizing <inline-formula><mml:math id="M38"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> by ALS</title>
<p>Here, let us interpret the ALS algorithm as optimizing <inline-formula><mml:math id="M39"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> rather than (<inline-formula><mml:math id="M40"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>1</sub>, <inline-formula><mml:math id="M41"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>2</sub>, ..., <inline-formula><mml:math id="M42"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>L</italic></sub>); this does not change the process itself. The LS-based low-rank tensor optimization problem can be formulated as</p>
<disp-formula id="E8"><label>(8)</label><mml:math id="M43"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow></mml:munder><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x1D54A;<sub><italic>t</italic></sub> &#x02282; &#x0211D;<sup><italic>I</italic><sub>1</sub> &#x000D7; <italic>I</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>I</italic><sub><italic>N</italic></sub></sup> is the set of low-rank tensors that admit a tensor decomposition &#x02329;&#x02329;<inline-formula><mml:math id="M44"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>1</sub>, <inline-formula><mml:math id="M45"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>2</sub>, ..., <inline-formula><mml:math id="M46"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>L</italic></sub>&#x0232A;&#x0232A;,</p>
<disp-formula id="E9"><label>(9)</label><mml:math id="M47"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">{</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mtext>&#x000A0;</mml:mtext><mml:mo>|</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>D</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>D</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>D</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">}</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p><inline-formula><mml:math id="M48"><mml:mstyle mathvariant="bold-script"><mml:mi>D</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>l</italic></sub> is a domain of <inline-formula><mml:math id="M49"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>l</italic></sub> and <italic>i</italic><sub>&#x1D54A;<sub><italic>t</italic></sub></sub>(<inline-formula><mml:math id="M50"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula>) is an indicator function</p>
<disp-formula id="E10"><label>(10)</label><mml:math id="M51"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable style="text-align:axis;" equalrows="false" columnlines="none" equalcolumns="false" class="array"><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mo>&#x02208;</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>&#x0221E;</mml:mi></mml:mtd><mml:mtd><mml:mtext class="textrm" mathvariant="normal">otherwise</mml:mtext></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Since the objective functions in <xref ref-type="disp-formula" rid="E2">Equations 2</xref>, <xref ref-type="disp-formula" rid="E8">8</xref> are the same when <inline-formula><mml:math id="M52"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> &#x0003D; &#x02329;&#x02329;<inline-formula><mml:math id="M53"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>1</sub>, <inline-formula><mml:math id="M54"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>2</sub>, ..., <inline-formula><mml:math id="M55"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>L</italic></sub>&#x0232A;&#x0232A;, their minimizers also satisfy <inline-formula><mml:math id="M56"><mml:msup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:math></inline-formula> if <inline-formula><mml:math id="M57"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula><sup>&#x0002A;</sup> is unique. 
Note that the core tensors are generally non-unique (e.g., we have <inline-formula><mml:math id="M58"><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:mi>c</mml:mi><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:math></inline-formula> for any <italic>c</italic> &#x02260; 0).</p>
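<p>A tiny numerical check of this non-uniqueness (our own snippet, for the CP case): scaling one core tensor by <italic>c</italic> and another by <italic>c</italic><sup>&#x02212;1</sup> leaves the reconstructed tensor unchanged.</p>
<preformat>import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((s, 3)) for s in (4, 5, 6))
c = 2.0
X1 = np.einsum("ir,jr,kr->ijk", A, B, C)
X2 = np.einsum("ir,jr,kr->ijk", c * A, B / c, C)
assert np.allclose(X1, X2)  # same tensor, different core tensors</preformat>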
<p>Remark 1. If <inline-formula><mml:math id="M59"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula><sup>&#x0002A;</sup> is unique, then the solutions to the problems shown in <xref ref-type="disp-formula" rid="E2">Equations 2</xref>, <xref ref-type="disp-formula" rid="E8">8</xref> satisfy <inline-formula><mml:math id="M60"><mml:msup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:math></inline-formula>.</p>
<p>In practice, <inline-formula><mml:math id="M61"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula><sup>&#x0002A;</sup> may not be unique, but even then there will always be a pair of solutions such that <inline-formula><mml:math id="M62"><mml:msup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:math></inline-formula>. Hence, the problem in <xref ref-type="disp-formula" rid="E8">Equation 8</xref> can be solved by ALS. Updating the core tensors &#x003B8; by ALS simultaneously implies updating <inline-formula><mml:math id="M63"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula>. Since the sequence &#x003B8;<sub>0</sub> &#x02192; &#x003B8;<sub>1</sub> &#x02192; &#x022EF; &#x02192; &#x003B8;<sub>&#x0221E;</sub> induces a sequence <inline-formula><mml:math id="M64"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula><sub>0</sub> &#x02192; <inline-formula><mml:math id="M65"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula><sub>1</sub> &#x02192; &#x022EF; &#x02192; <inline-formula><mml:math id="M66"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula><sub>&#x0221E;</sub>, we focus here on the fact that ALS produces the sequence {<inline-formula><mml:math id="M67"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>k</italic></sub>}. In this paper, we denote the operation of updating <inline-formula><mml:math id="M68"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> by ALS as</p>
<disp-formula id="E11"><label>(11)</label><mml:math id="M69"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M70"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup><mml:mo>:</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02192;</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> can be considered as an interior-point-style update that maps a point in &#x1D54A;<sub><italic>t</italic></sub> to a point closer to <inline-formula><mml:math id="M71"><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:math></inline-formula>:</p>
<disp-formula id="E12"><label>(12)</label><mml:math id="M72"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02264;</mml:mo><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:msub><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>for any <inline-formula><mml:math id="M73"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> &#x02208; &#x1D54A;<sub><italic>t</italic></sub>. Although it is omitted in <xref ref-type="disp-formula" rid="E11">Equation 11</xref>, it is essential for implementation that the input and output of <inline-formula><mml:math id="M74"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup></mml:math></inline-formula> include not only <inline-formula><mml:math id="M75"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> but also &#x003B8;. We assume that the operation <inline-formula><mml:math id="M76"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup></mml:math></inline-formula> outputs the reconstructed tensor obtained after updating every core tensor at least once. An approximate solution to the problem shown in <xref ref-type="disp-formula" rid="E8">Equation 8</xref> can then be found by repeating the ALS update sufficiently many times as</p>
<disp-formula id="E13"><label>(13)</label><mml:math id="M77"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msup><mml:mo>&#x02248;</mml:mo><mml:msubsup><mml:mrow><mml:mtext class="textrm" mathvariant="normal">Proj</mml:mtext></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mrow></mml:msubsup><mml:mo>&#x000B0;</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x022EF;</mml:mo><mml:mo>&#x000B0;</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M78"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula><sub>0</sub> &#x02208; &#x1D54A;<sub><italic>t</italic></sub> is some initialization of <inline-formula><mml:math id="M79"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula>. Here, the entire ALS is represented as <inline-formula><mml:math id="M80"><mml:msubsup><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">Proj</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>. This is because <xref ref-type="disp-formula" rid="E8">Equation 8</xref> is a problem in finding the closest point in &#x1D54A;<sub><italic>t</italic></sub> from a point <inline-formula><mml:math id="M81"><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:math></inline-formula>, and it is just a projection of <inline-formula><mml:math id="M82"><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:math></inline-formula> onto &#x1D54A;<sub><italic>t</italic></sub>. Note that <inline-formula><mml:math id="M83"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula><sub>0</sub> cannot be ignored since almost all TD algorithms depend on its initialization.</p>
</sec>
</sec>
<sec>
<title>2.2 LS-based tensor completion and EM-ALS</title>
<p>Tensor completion is the task of estimating the missing values of an observed incomplete tensor by exploiting its low-rank structure. In the case of low-rank matrix completion, theories, algorithms, and applications are well studied [<xref ref-type="bibr" rid="B30">30</xref>, <xref ref-type="bibr" rid="B35">35</xref>, <xref ref-type="bibr" rid="B70">70</xref>&#x02013;<xref ref-type="bibr" rid="B73">73</xref>]. Unlike matrices, which have a unique definition of rank, tensors have various ranks (e.g., CP rank, Tucker rank, and TT rank), and the meaning of low rank differs between decomposition models [<xref ref-type="bibr" rid="B26">26</xref>, <xref ref-type="bibr" rid="B27">27</xref>, <xref ref-type="bibr" rid="B64">64</xref>, <xref ref-type="bibr" rid="B74">74</xref>&#x02013;<xref ref-type="bibr" rid="B77">77</xref>]. Since the appropriate TD model depends on the application, it is desirable to have an environment in which various TD models can be freely selected and tested.</p>
<p>Here, we introduce a basic formulation of LS-based tensor completion as follows:</p>
<disp-formula id="E14"><label>(14)</label><mml:math id="M84"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">minimize</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow></mml:munder><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle><mml:mo>&#x022A1;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>T</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M85"><mml:mstyle mathvariant="bold-script"><mml:mi>T</mml:mi></mml:mstyle></mml:math></inline-formula> &#x02208; &#x0211D;<sup><italic>I</italic><sub>1</sub> &#x000D7; <italic>I</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>I</italic><sub><italic>N</italic></sub></sup> is an observed incomplete tensor and <inline-formula><mml:math id="M86"><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:math></inline-formula> is a mask tensor of the same size as <inline-formula><mml:math id="M87"><mml:mstyle mathvariant="bold-script"><mml:mi>T</mml:mi></mml:mstyle></mml:math></inline-formula>. The entries of <inline-formula><mml:math id="M88"><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:math></inline-formula> are given by</p>
<disp-formula id="E15"><label>(15)</label><mml:math id="M89"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="script">W</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mn>1</mml:mn></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="script">T</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mtext class="textrm" mathvariant="normal">is observed</mml:mtext></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="script">T</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mtext class="textrm" mathvariant="normal">is missing</mml:mtext></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>To solve the problem in <xref ref-type="disp-formula" rid="E14">Equation 14</xref>, many algorithms have been studied [<xref ref-type="bibr" rid="B21">21</xref>, <xref ref-type="bibr" rid="B26">26</xref>, <xref ref-type="bibr" rid="B28">28</xref>, <xref ref-type="bibr" rid="B30">30</xref>, <xref ref-type="bibr" rid="B78">78</xref>&#x02013;<xref ref-type="bibr" rid="B80">80</xref>]. A gradient descent-based optimization algorithm [<xref ref-type="bibr" rid="B26">26</xref>] has been proposed for the CP model. For the Tucker model, which imposes orthogonality on the factor matrices, algorithms based on optimization on a manifold have been proposed [<xref ref-type="bibr" rid="B28">28</xref>, <xref ref-type="bibr" rid="B81">81</xref>]. However, these algorithms are designed for specific TD models, and it is difficult to generalize them to other TD models.</p>
<p>Expectation-maximization alternating least squares (EM-ALS) [<xref ref-type="bibr" rid="B82">82</xref>] is a versatile tensor completion algorithm that depends little on the choice of TD model. In fact, EM-ALS is incorporated into various tensor completion algorithms such as TMac [<xref ref-type="bibr" rid="B29">29</xref>], TMac-TT [<xref ref-type="bibr" rid="B76">76</xref>], MTRD [<xref ref-type="bibr" rid="B77">77</xref>], MDT-Tucker [<xref ref-type="bibr" rid="B66">66</xref>, <xref ref-type="bibr" rid="B67">67</xref>], and SPC [<xref ref-type="bibr" rid="B34">34</xref>]. The EM-ALS algorithm can be derived from majorization-minimization (MM) [<xref ref-type="bibr" rid="B40">40</xref>, <xref ref-type="bibr" rid="B83">83</xref>], which iteratively minimizes an auxiliary function <italic>g</italic>(<inline-formula><mml:math id="M90"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula>, <inline-formula><mml:math id="M91"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula>&#x02032;) that serves as an upper bound on the objective function <italic>f</italic>(<inline-formula><mml:math id="M92"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula>). More precisely, an auxiliary function satisfies the following conditions</p>
<disp-formula id="E16"><label>(16)</label><mml:math id="M93"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02265;</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mtext class="textrm" mathvariant="normal">&#x000A0;and&#x000A0;</mml:mtext><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>and the update rule of MM is given by</p>
<disp-formula id="E17"><label>(17)</label><mml:math id="M94"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow></mml:munder><mml:mtext>&#x000A0;</mml:mtext><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>This ensures that the objective function is monotonically decreasing as follows:</p>
<disp-formula id="E18"><label>(18)</label><mml:math id="M95"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02265;</mml:mo><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02265;</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Specifically, the objective function and its auxiliary functions are given by</p>
<disp-formula id="E19"><label>(19)</label><mml:math id="M96"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle><mml:mo>&#x022A1;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>T</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E20"><label>(20)</label><mml:math id="M97"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mo>&#x02225;</mml:mo><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover><mml:mo>&#x022A1;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M98"><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">1</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:math></inline-formula> is a tensor that flips the 0 and 1 of <inline-formula><mml:math id="M99"><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:math></inline-formula>. Looking at the additional terms in <xref ref-type="disp-formula" rid="E20">Equation 20</xref>, we can see that they clearly satisfy the condition shown in <xref ref-type="disp-formula" rid="E16">Equation 16</xref>. Furthermore, update rule can be transformed as</p>
<disp-formula id="E21"><label>(21)</label><mml:math id="M100"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow></mml:munder><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M101"><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle><mml:mo>&#x022A1;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>T</mml:mi></mml:mstyle><mml:mo>&#x0002B;</mml:mo><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover><mml:mo>&#x022A1;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. Since the structure of <xref ref-type="disp-formula" rid="E21">Equation 21</xref> is the same as <xref ref-type="disp-formula" rid="E8">Equation 8</xref>, it can be solved by ALS. Finally, EM-ALS can be given by the following algorithm:</p>
<disp-formula id="E22"><label>(22)</label><mml:math id="M102"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle><mml:mo>&#x022A1;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>T</mml:mi></mml:mstyle><mml:mo>&#x0002B;</mml:mo><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover><mml:mo>&#x022A1;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E23"><label>(23)</label><mml:math id="M103"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mtext class="textrm" mathvariant="normal">Proj</mml:mtext></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M104"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula><sub>0</sub> &#x02208; &#x1D54A;<sub><italic>t</italic></sub> is required for the initialization. The steps shown in <xref ref-type="disp-formula" rid="E22">Equations 22</xref>, <xref ref-type="disp-formula" rid="E23">23</xref> are called the E-step and the M-step, respectively. Since M-step is an iterative algorithm, EM-ALS becomes a double iterative algorithm and is inefficient. Since the auxiliary function can also be decreased by one step of ALS <inline-formula><mml:math id="M105"><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, the M-step can often be replaced with this. In EM-ALS, the operation that depends on the TD model is only the part in <xref ref-type="disp-formula" rid="E23">Equation 23</xref>. This means that any update rule of various LS-based TDs that decreases the objective function can be used as is.</p>
<p>Remark 2. Plug-and-play of TD algorithms is possible for tensor completion in EM-ALS.</p>
</sec>
<sec>
<title>2.3 Robust tensor decomposition and ADMM</title>
<p>Robust tensor decomposition (RTD) is the task of reconstructing a low-rank tensor from an observed tensor contaminated by outliers. Typically, the outlier components are assumed to be sparse (or to follow a Laplace distribution), and the problem is formulated as a tensor decomposition based on the &#x02113;<sub>1</sub> loss as follows:</p>
<disp-formula id="E24"><label>(24)</label><mml:math id="M106"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msup><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow></mml:munder><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>B</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:msub><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M107"><mml:mstyle mathvariant="bold-script"><mml:mi>B</mml:mi></mml:mstyle></mml:math></inline-formula> &#x02208; &#x0211D;<sup><italic>I</italic><sub>1</sub> &#x000D7; <italic>I</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>I</italic><sub><italic>N</italic></sub></sup> is an observed tensor that includes outlier components. Various TD models and algorithms for RTD have been proposed, such as CPD [<xref ref-type="bibr" rid="B84">84</xref>], TKD [<xref ref-type="bibr" rid="B85">85</xref>], TRD [<xref ref-type="bibr" rid="B86">86</xref>], t-SVD [<xref ref-type="bibr" rid="B87">87</xref>], and more sophisticated models for specific domains [<xref ref-type="bibr" rid="B37">37</xref>].</p>
<p>Zhang and Ding [<xref ref-type="bibr" rid="B85">85</xref>] proposed a Tucker-based RTD using the alternating direction method of multipliers (ADMM). Like EM-ALS, Zhang and Ding&#x00027;s ADMM formulation is versatile, and we review it briefly here. First, we introduce a new variable <inline-formula><mml:math id="M108"><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle></mml:math></inline-formula> &#x02208; &#x0211D;<sup><italic>I</italic><sub>1</sub> &#x000D7; <italic>I</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>I</italic><sub><italic>N</italic></sub></sup> and add the constraint <inline-formula><mml:math id="M109"><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle></mml:math></inline-formula> &#x0003D; <inline-formula><mml:math id="M110"><mml:mstyle mathvariant="bold-script"><mml:mi>B</mml:mi></mml:mstyle></mml:math></inline-formula>&#x02212;<inline-formula><mml:math id="M111"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> to the RTD problem shown in <xref ref-type="disp-formula" rid="E24">Equation 24</xref>. For this constrained optimization problem, the augmented Lagrangian is given by</p>
<disp-formula id="E25"><label>(25)</label><mml:math id="M112"><mml:mtable class="eqnarray" columnalign="right"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="script">L</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">RTD</mml:mtext></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold"><mml:mo>&#x0039B;</mml:mo></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle><mml:msub><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:mstyle mathvariant="bold"><mml:mo>&#x0039B;</mml:mo></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>B</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>B</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <bold>&#x0039B;</bold> &#x02208; &#x0211D;<sup><italic>I</italic><sub>1</sub> &#x000D7; <italic>I</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>I</italic><sub><italic>N</italic></sub></sup> is a Lagrange multiplier, and &#x003B2; &#x0003E; 0 is a hyperparameter. In each step of ADMM, <inline-formula><mml:math id="M113"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> is updated as a minimizer of <inline-formula><mml:math id="M114"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="script">L</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">RTD</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold"><mml:mo>&#x0039B;</mml:mo></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> with respect to <inline-formula><mml:math id="M115"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula>, <inline-formula><mml:math id="M116"><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle></mml:math></inline-formula> is updated as a minimizer of <inline-formula><mml:math id="M117"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="script">L</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">RTD</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold"><mml:mo>&#x0039B;</mml:mo></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> with respect to <inline-formula><mml:math id="M118"><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle></mml:math></inline-formula>, and <bold>&#x0039B;</bold> is updated by the method of multipliers. The ADMM algorithm is given by</p>
<disp-formula id="E26"><label>(26)</label><mml:math id="M119"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow></mml:munder><mml:mtext>&#x000A0;</mml:mtext><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>B</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mo>&#x0039B;</mml:mo></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mstyle><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E27"><label>(27)</label><mml:math id="M120"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle></mml:mrow></mml:munder><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle><mml:msub><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>B</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mo>&#x0039B;</mml:mo></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mstyle><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E28"><label>(28)</label><mml:math id="M121"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mo>&#x0039B;</mml:mo></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mo>&#x0039B;</mml:mo></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B2;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>B</mml:mi></mml:mstyle><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where the initializations <inline-formula><mml:math id="M122"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula><sub>0</sub>, <inline-formula><mml:math id="M123"><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle></mml:math></inline-formula><sub>0</sub>, and <bold>&#x0039B;</bold><sub>0</sub> are required. The sub-optimization problems shown in <xref ref-type="disp-formula" rid="E26">Equations 26</xref>, <xref ref-type="disp-formula" rid="E27">27</xref> can be solved by an LS-based TD algorithm and by soft-thresholding, respectively. In practice, <xref ref-type="disp-formula" rid="E26">Equations 26</xref>, <xref ref-type="disp-formula" rid="E27">27</xref> can be replaced by</p>
<disp-formula id="E29"><label>(29)</label><mml:math id="M124"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mi>k</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>B</mml:mi></mml:mstyle><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle><mml:mi>k</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>&#x0039B;</mml:mi></mml:mstyle><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mfrac><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>

<disp-formula id="E30"><label>(30)</label><mml:math id="M125"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mrow><mml:mi>k</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:msubsup><mml:mtext>Proj</mml:mtext><mml:mrow><mml:msub><mml:mo>&#x1D54A;</mml:mo><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mtext>ALS</mml:mtext></mml:mrow></mml:msubsup><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mi>k</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mi>k</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>


<disp-formula id="E31"><label>(31)</label><mml:math id="M126"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>E</mml:mi></mml:mstyle><mml:mrow><mml:mi>k</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mtext>soft</mml:mtext><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mi>&#x003B2;</mml:mi></mml:mfrac></mml:mrow></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>B</mml:mi></mml:mstyle><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mrow><mml:mi>k</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>&#x0039B;</mml:mi></mml:mstyle><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mfrac></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>


<p>where soft<sub>&#x003C1;</sub>(&#x000B7;) with &#x003C1; &#x0003E; 0 is a soft-thresholding operator:</p>
<disp-formula id="E32"><label>(32)</label><mml:math id="M127"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">soft</mml:mtext></mml:mrow><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>A</mml:mi></mml:mstyle></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:mtext class="textrm" mathvariant="normal">sign</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>A</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x022A1;</mml:mo><mml:mo class="qopname">max</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext class="textrm" mathvariant="normal">abs</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>A</mml:mi></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mi>&#x003C1;</mml:mi><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold'><mml:mn>0</mml:mn></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>In a similar way to EM-ALS, <inline-formula><mml:math id="M128"><mml:msubsup><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">Proj</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> can often be replaced with <inline-formula><mml:math id="M129"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> in <xref ref-type="disp-formula" rid="E30">Equation 30</xref>, and any update rule of an LS-based TD can be used as is.</p>
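<p>The following is a minimal Python sketch of the iterations in <xref ref-type="disp-formula" rid="E29">Equations 29</xref>&#x02013;<xref ref-type="disp-formula" rid="E31">31</xref> together with the multiplier update of <xref ref-type="disp-formula" rid="E28">Equation 28</xref>, again using a truncated-SVD stand-in for the model-specific projection; the zero initializations are an assumption for illustration.</p>
<preformat>
import numpy as np

def soft(A, rho):
    # Soft-thresholding operator of Equation 32.
    return np.sign(A) * np.maximum(np.abs(A) - rho, 0.0)

def lowrank_proj(V, rank):
    # Stand-in for Proj^ALS (same as in the EM-ALS sketch).
    U, s, Vt = np.linalg.svd(V.reshape(V.shape[0], -1), full_matrices=False)
    return ((U[:, :rank] * s[:rank]) @ Vt[:rank, :]).reshape(V.shape)

def rtd_admm(B, rank, beta=1.0, num_iters=100):
    X, E, Lam = np.zeros_like(B), np.zeros_like(B), np.zeros_like(B)
    for _ in range(num_iters):
        V = B - E + Lam / beta                     # Equation 29
        X = lowrank_proj(V, rank)                  # Equation 30
        E = soft(B - X + Lam / beta, 1.0 / beta)   # Equation 31
        Lam = Lam + beta * (B - X - E)             # Equation 28
    return X, E
</preformat>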
<p>Remark 3. Plug-and-play of TD algorithms is possible for robust tensor decomposition in ADMM.</p>
</sec>
<sec>
<title>2.4 Missing elements for versatile tensor reconstruction</title>
<p>In this paper, we consider solving a more versatile tensor reconstruction problem than tensor completion and RTD. First, we assume the following linear observation model</p>
<disp-formula id="E33"><label>(33)</label><mml:math id="M130"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>A</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>for <italic>j</italic> &#x02208; {1, 2, ..., <italic>J</italic>}, where each observation <italic>b</italic><sub><italic>j</italic></sub> is obtained as the inner product between a given tensor <inline-formula><mml:math id="M131"><mml:mstyle mathvariant="bold-script"><mml:mi>A</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>j</italic></sub> &#x02208; &#x0211D;<sup><italic>I</italic><sub>1</sub> &#x000D7; <italic>I</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>I</italic><sub><italic>N</italic></sub></sup> and an unknown low-rank tensor <inline-formula><mml:math id="M132"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> &#x02208; &#x0211D;<sup><italic>I</italic><sub>1</sub> &#x000D7; <italic>I</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>I</italic><sub><italic>N</italic></sub></sup>, with an additive noise component <italic>n</italic><sub><italic>j</italic></sub>. This observation model can be written in vector-matrix form as</p>
<disp-formula id="E34"><label>(34)</label><mml:math id="M133"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle><mml:mo>=</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>&#x0002B;</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>n</mml:mtext></mml:mstyle><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M134"><mml:mstyle class="text"><mml:mtext mathvariant="bold">b</mml:mtext></mml:mstyle><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> is an observed signal, <inline-formula><mml:math id="M135"><mml:mstyle class="text"><mml:mtext mathvariant="bold">n</mml:mtext></mml:mstyle><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> is noise component, <bold>x</bold> &#x0003D; vec(<inline-formula><mml:math id="M136"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula>) is a vector form of low-rank tensor, and</p>
<disp-formula id="E35"><label>(35)</label><mml:math id="M137"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mtable style="text-align:axis;" equalrows="false" columnlines="none none none none none none none none none" equalcolumns="false" class="array"><mml:mtr><mml:mtd><mml:mtext class="textrm" mathvariant="normal">vec</mml:mtext><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>A</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext class="textrm" mathvariant="normal">vec</mml:mtext><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>A</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x022EE;</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext class="textrm" mathvariant="normal">vec</mml:mtext><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>A</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mi>&#x0211D;</mml:mi><mml:mrow><mml:mi>J</mml:mi><mml:mo>&#x000D7;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x022EF;</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>is a design matrix. Introducing a low-rank TD constraint <bold>x</bold> &#x02208; &#x1D54A; instead of <inline-formula><mml:math id="M138"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula> &#x02208; &#x1D54A;<sub><italic>t</italic></sub> and a loss function based on the noise assumption on <bold>n</bold>, the optimization problem can be given as</p>
<disp-formula id="E36"><label>(36)</label><mml:math id="M139"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">minimize</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow></mml:munder><mml:mtext>&#x000A0;</mml:mtext><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x1D54A;: &#x0003D; {vec(&#x02329;&#x02329;<inline-formula><mml:math id="M140"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>1</sub>, <inline-formula><mml:math id="M141"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>2</sub>, ..., <inline-formula><mml:math id="M142"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>L</italic></sub>&#x0232A;&#x0232A;) | <inline-formula><mml:math id="M143"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>1</sub> &#x02208; <inline-formula><mml:math id="M144"><mml:mstyle mathvariant="bold-script"><mml:mi>D</mml:mi></mml:mstyle></mml:math></inline-formula><sub>1</sub>, <inline-formula><mml:math id="M145"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>2</sub> &#x02208; <inline-formula><mml:math id="M146"><mml:mstyle mathvariant="bold-script"><mml:mi>D</mml:mi></mml:mstyle></mml:math></inline-formula><sub>2</sub>, ..., <inline-formula><mml:math id="M147"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>L</italic></sub> &#x02208; <inline-formula><mml:math id="M148"><mml:mstyle mathvariant="bold-script"><mml:mi>D</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>L</italic></sub>} is the space of vectorized low-rank tensors, and <italic>D</italic>(&#x000B7;, &#x000B7;) stands for a loss function such as the &#x02113;<sub>2</sub> loss, the &#x02113;<sub>1</sub> loss, or the generalized Kullback&#x02013;Leibler (KL) divergence. This study aims to solve the problem shown in <xref ref-type="disp-formula" rid="E36">Equation 36</xref>. Note that <bold>x</bold> &#x02208; &#x0211D;<sup><italic>I</italic></sup> is an <italic>I</italic>-dimensional vector, and we consider that <bold>x</bold> represents a tensor <inline-formula><mml:math id="M149"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x000D7;</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x000D7;</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mo>&#x000D7;</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msup></mml:math></inline-formula> by <bold>x</bold> &#x0003D; vec(<inline-formula><mml:math id="M150"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula>), where <inline-formula><mml:math id="M151"><mml:mi>I</mml:mi><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mo>&#x0220F;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>.</p>
<p>The problem in <xref ref-type="disp-formula" rid="E36">Equation 36</xref> includes low-rank tensor completion and RTD. When the loss function is &#x02113;<sub>2</sub>, <bold>b</bold> &#x0003D; vec(<inline-formula><mml:math id="M152"><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:math></inline-formula>&#x022A1;<inline-formula><mml:math id="M153"><mml:mstyle mathvariant="bold-script"><mml:mi>T</mml:mi></mml:mstyle></mml:math></inline-formula>), and <bold>A</bold> &#x0003D; diag(vec(<inline-formula><mml:math id="M154"><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:math></inline-formula>)), then the problem in <xref ref-type="disp-formula" rid="E36">Equation 36</xref> reduces to the tensor completion problem in <xref ref-type="disp-formula" rid="E14">Equation 14</xref>. When the loss function is &#x02113;<sub>1</sub>, <bold>b</bold> &#x0003D; vec(<inline-formula><mml:math id="M155"><mml:mstyle mathvariant="bold-script"><mml:mi>B</mml:mi></mml:mstyle></mml:math></inline-formula>), and <bold>A</bold> &#x0003D; <bold>I</bold>, then the problem in <xref ref-type="disp-formula" rid="E36">Equation 36</xref> reduces to the RTD problem in <xref ref-type="disp-formula" rid="E24">Equation 24</xref>.</p>
<p><xref ref-type="fig" rid="F2">Figure 2</xref> shows the concept of the tensor reconstruction problem considered in this study. In our problem formulation in <xref ref-type="disp-formula" rid="E36">Equation 36</xref> includes various other patterns of tensor reconstruction. Tensor completion and RTD are just a few of them, and there are still many missing patterns. We aim to solve the problem in <xref ref-type="disp-formula" rid="E36">Equation 36</xref> for any design matrix <bold>A</bold> &#x02208; &#x0211D;<sup><italic>J</italic> &#x000D7; <italic>I</italic></sup>, and other loss functions such as the generalized Kullback-Leibler (KL) divergence. This generalization is important for various applications. For the design matrix <bold>A</bold>, the Toeplitz matrix is used in image deblurring, the downsampling matrix is used in the super-resolution task, the Radon transform matrix is used in computed tomography and the random projection matrix is used in compressed sensing [<xref ref-type="bibr" rid="B1">1</xref>]. For loss functions, &#x02113;<sub>2</sub> loss is used in Gaussian noise setting, &#x02113;<sub>1</sub> loss is used in Laplace noise (or sparse noise) setting, and generalized KL divergence is used in Poisson noise setting.</p>
<fig position="float" id="F2">
<label>Figure 2</label>
<caption><p>General form of tensor reconstruction problem. Depending on various constraints as low-rank tensors on <bold>x</bold>, various design matrices <bold>A</bold>, and various statistical properties of the noise component <bold>n</bold>, a wide variety of optimization problems can be considered. Low-rank tensor completion and robust tensor decomposition are just a few of them, and considering these problems will enable a wider range of applications.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fams-11-1594873-g0002.tif">
<alt-text>Equation illustrating (b = Ax&#x0002B;n) transforming to (x^). It includes factors: low-rank prior of x (e.g., CP, Tucker), observation model A (e.g., blur, downsampling), and noise prior n with loss functions (e.g., l2 loss).</alt-text>
</graphic>
</fig>
<p>Furthermore, we consider a penalized version as follows:</p>
<disp-formula id="E37"><label>(37)</label><mml:math id="M157"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">minimize</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow></mml:munder><mml:mtext>&#x000A0;</mml:mtext><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003B1; &#x02265; 0 and <italic>p</italic>(&#x003B8;) is a penalty function for the core tensors &#x003B8; &#x0003D; (<inline-formula><mml:math id="M158"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>1</sub>, <inline-formula><mml:math id="M159"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>2</sub>, ..., <inline-formula><mml:math id="M160"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>L</italic></sub>). This is a generalization of the problem in <xref ref-type="disp-formula" rid="E36">Equation 36</xref>, since <xref ref-type="disp-formula" rid="E37">Equation 37</xref> with &#x003B1; &#x0003D; 0 is equivalent to <xref ref-type="disp-formula" rid="E36">Equation 36</xref>. This plays an important role in introducing Tikhonov regularization, sparse regularization, low-rank regularization (e.g., the nuclear norm), and smoothing. In particular, Tikhonov regularization of the core tensors reduces the problem of non-uniqueness of scale and improves the convergence of the TD algorithm [<xref ref-type="bibr" rid="B88">88</xref>].</p>
</sec>
</sec>
<sec id="s3">
<title>3 Proposed method</title>
<sec>
<title>3.1 Sketch of optimization framework</title>
<p>The key idea for solving the diverse set of problems formulated in <xref ref-type="disp-formula" rid="E36">Equation 36</xref> is to employ the <italic>plug-and-play</italic> (PnP) approach of TD algorithms such as EM-ALS and ADMM. In this study, we first solve the case of the &#x02113;<sub>2</sub> loss using LS-based TD with the MM framework, which is a generalization of EM-ALS. Furthermore, we use it to solve the cases of the &#x02113;<sub>1</sub> loss and the KL divergence with the ADMM framework. Thus, we call the proposed algorithm <italic>ADMM-MM</italic>. By replacing the LS-based TD module in a PnP manner, various types of TD can be easily generalized and applied to a wide range of applications.</p>
</sec>
<sec>
<title>3.2 Optimization</title>
<p>In this section, we explain, one by one, how to optimize the problem in <xref ref-type="disp-formula" rid="E37">Equation 37</xref> for three loss functions: the &#x02113;<sub>2</sub> loss, the &#x02113;<sub>1</sub> loss, and the generalized KL divergence. Then, we explain how to combine these three cases into the ADMM-MM algorithm.</p>
<sec>
<title>3.2.1 Preliminary of LS-based TD</title>
<p>The key module in ADMM-MM is LS-based TD. In the vector formulation, with <bold>x</bold> instead of <inline-formula><mml:math id="M161"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula>, the ALS algorithm of LS-based TD for <bold>v</bold> &#x0003D; vec(<inline-formula><mml:math id="M162"><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:math></inline-formula>) is denoted as</p>
<disp-formula id="E38"><label>(38)</label><mml:math id="M163"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x02248;</mml:mo><mml:msubsup><mml:mrow><mml:mtext class="textrm" mathvariant="normal">Proj</mml:mtext></mml:mrow><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mrow></mml:msubsup><mml:mo>&#x025E6;</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mo>&#x025E6;</mml:mo><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M164"><mml:msubsup><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">Proj</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup></mml:math></inline-formula>, <inline-formula><mml:math id="M165"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">v</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup></mml:math></inline-formula> and <bold>x</bold><sub>0</sub> correspond to <inline-formula><mml:math id="M166"><mml:msubsup><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">Proj</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup></mml:math></inline-formula>, <inline-formula><mml:math id="M167"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup></mml:math></inline-formula> and <inline-formula><mml:math id="M168"><mml:mstyle mathvariant="bold-script"><mml:mi>X</mml:mi></mml:mstyle></mml:math></inline-formula><sub>0</sub>, respectively.</p>
<p>Note that <inline-formula><mml:math id="M169"><mml:msubsup><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">Proj</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup></mml:math></inline-formula> and <inline-formula><mml:math id="M170"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">v</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup></mml:math></inline-formula> do not necessarily have to be strictly based on ALS, and may be replaced by hierarchical ALS (HALS) for CPD [<xref ref-type="bibr" rid="B89">89</xref>] or the multiplicative update rule for non-negative matrix factorization (NMF) [<xref ref-type="bibr" rid="B11">11</xref>, <xref ref-type="bibr" rid="B53">53</xref>]. Furthermore, we aim to solve the penalized version of LS-based TD. In this study, we assume the <italic>existence of an iterative penalized LS-based TD algorithm</italic> that minimizes the squared error with a penalty, and we denote the more general LS-based TD as</p>
<disp-formula id="E39"><label>(39)</label><mml:math id="M171"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x02248;</mml:mo><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">Proj</mml:mtext></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mo>&#x025E6;</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mo>&#x025E6;</mml:mo><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>It should be noted that the above LS-based TD algorithm is expressed from the perspective of updating <bold>x</bold>(&#x003B8;); however, the core tensors &#x003B8; &#x0003D; (<inline-formula><mml:math id="M172"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>1</sub>, <inline-formula><mml:math id="M173"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub>2</sub>, ..., <inline-formula><mml:math id="M174"><mml:mstyle mathvariant="bold-script"><mml:mi>G</mml:mi></mml:mstyle></mml:math></inline-formula><sub><italic>L</italic></sub>) are also updated in practical implementations. In other words, in this algorithm, <bold>x</bold> and &#x003B8; are always considered a pair. For simplicity, we do not write it explicitly, but the input to <inline-formula><mml:math id="M175"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">v</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub></mml:math></inline-formula> requires not only <bold>x</bold><sub>0</sub> but also &#x003B8;<sub>0</sub>.</p>
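<p>For concreteness, the following Python/NumPy sketch shows one possible realization of such a penalized LS-based TD module for CPD: a few warm-started ALS sweeps with a Tikhonov penalty on the factor matrices, approximating <xref ref-type="disp-formula" rid="E39">Equation 39</xref>. The function names <monospace>unfold</monospace>, <monospace>khatri_rao</monospace>, and <monospace>proj_cp_als</monospace> are our own illustrative naming; any routine with the monotone decrease property could be substituted in its place. In this interface, the warm start <bold>x</bold><sub>0</sub> is carried implicitly by the input factors &#x003B8;<sub>0</sub>, in line with the remark above that <bold>x</bold> and &#x003B8; are always considered a pair.</p>
<preformat>
import numpy as np

def unfold(X, mode):
    # Mode-n unfolding; with order="F" the first remaining index varies fastest.
    return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1), order="F")

def fold(M, mode, shape):
    # Inverse of unfold.
    rest = [shape[m] for m in range(len(shape)) if m != mode]
    return np.moveaxis(np.reshape(M, [shape[mode]] + rest, order="F"), 0, mode)

def khatri_rao(mats):
    # Column-wise Khatri-Rao product; the last matrix's row index varies fastest.
    out = mats[0]
    for M in mats[1:]:
        out = (out[:, None, :] * M[None, :, :]).reshape(-1, out.shape[1])
    return out

def proj_cp_als(v, gamma, factors, shape, sweeps=3):
    # Warm-started ALS sweeps for the CP model, decreasing the penalized
    # squared error ||v - x||^2 + gamma * sum_n ||U_n||_F^2 of Equation 39.
    V = v.reshape(shape, order="F")
    N, R = len(shape), factors[0].shape[1]
    for _ in range(sweeps):
        for n in range(N):
            others = [factors[m] for m in range(N) if m != n]
            Z = khatri_rao(others[::-1])          # modes N, ..., 1 excluding n
            G = Z.T @ Z + gamma * np.eye(R)       # Tikhonov-damped normal matrix
            factors[n] = np.linalg.solve(G, (unfold(V, n) @ Z).T).T
    X = fold(factors[0] @ khatri_rao(factors[1:][::-1]).T, 0, shape)
    return X.ravel(order="F"), factors
</preformat>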
</sec>
<sec>
<title>3.2.2 MM for &#x02113;<sub>2</sub> loss</title>
<p>Here, we consider a case to minimize &#x02113;<sub>2</sub> loss in <xref ref-type="disp-formula" rid="E37">Equation 37</xref> as</p>
<disp-formula id="E40"><label>(40)</label><mml:math id="M176"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>To minimize <italic>f</italic>(<bold>x</bold>), we propose to employ the MM approach:</p>
<disp-formula id="E41"><label>(41)</label><mml:math id="M177"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin&#x000A0;</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow></mml:munder><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where auxiliary function <italic>g</italic>(<bold>x</bold>|<bold>x</bold><sub><italic>k</italic></sub>) is given by</p>
<disp-formula id="E42"><label>(42)</label><mml:math id="M178"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003BB;</mml:mi><mml:mstyle mathvariant='bold'><mml:mtext>I</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E43"><label>(43)</label><mml:math id="M179"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mo>=</mml:mo><mml:mi>&#x003BB;</mml:mi><mml:mstyle displaystyle="true"><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow></mml:mfrac><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:msubsup><mml:mrow><mml:mo stretchy="true">&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mstyle></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mtext class="textrm" mathvariant="normal">const</mml:mtext><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>From <xref ref-type="disp-formula" rid="E42">Equation 42</xref>, the auxiliary function satisfies the conditions of <xref ref-type="disp-formula" rid="E16">Equation 16</xref> when &#x003BB; is greater than the maximum eigenvalue of <bold>A</bold><sup>&#x022A4;</sup><bold>A</bold>. Then, the MM step in <xref ref-type="disp-formula" rid="E41">Equation 41</xref> can be reduced to</p>
<disp-formula id="E44"><label>(44)</label><mml:math id="M180"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow></mml:mfrac><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E45"><label>(45)</label><mml:math id="M181"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">Proj</mml:mtext></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mfrac><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Note that the penalty parameter for LS-based TD becomes <inline-formula><mml:math id="M182"><mml:mfrac><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow></mml:mfrac></mml:math></inline-formula>, and we set &#x003BB; to be the maximum eigenvalue of <bold>A</bold><sup>&#x022A4;</sup><bold>A</bold> in practice.</p>
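<p>The two steps in <xref ref-type="disp-formula" rid="E44">Equation 44</xref> and <xref ref-type="disp-formula" rid="E45">Equation 45</xref> translate directly into code. Below is a minimal sketch, assuming that <monospace>proj</monospace> is a callable of the form <monospace>proj(v, gamma, theta)</monospace> returning the pair (<bold>x</bold>, &#x003B8;), e.g., a closure around the hypothetical <monospace>proj_cp_als</monospace> above, and estimating &#x003BB; by power iteration rather than an exact eigendecomposition.</p>
<preformat>
import numpy as np

def max_eigval(A, iters=50):
    # Power iteration approximating the maximum eigenvalue of A^T A.
    x = np.random.default_rng(1).standard_normal(A.shape[1])
    for _ in range(iters):
        y = A.T @ (A @ x)
        x = y / np.linalg.norm(y)
    return x @ (A.T @ (A @ x))

def mm_l2(b, A, proj, x0, theta0, alpha=0.0, iters=100):
    # MM iteration for Equation 40: majorization step (Equation 44)
    # followed by the penalized LS-based TD step (Equation 45).
    lam = max_eigval(A)
    x, theta = x0, theta0
    for _ in range(iters):
        v = x - (A.T @ (A @ x) - A.T @ b) / lam   # Equation 44
        x, theta = proj(v, alpha / lam, theta)    # Equation 45
    return x, theta
</preformat>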
</sec>
<sec>
<title>3.2.3 ADMM for other loss functions</title>
<p>Next, we consider the problem in <xref ref-type="disp-formula" rid="E37">Equation 37</xref> for other loss functions and propose to employ ADMM following Zhang and Ding&#x00027;s formulation [<xref ref-type="bibr" rid="B85">85</xref>]. First, we introduce a new variable <bold>y</bold> &#x02208; &#x0211D;<sup><italic>J</italic></sup> and a linear constraint <bold>y</bold> &#x0003D; <bold>Ax</bold>; the augmented Lagrangian is then given by</p>
<disp-formula id="E46"><label>(46)</label><mml:math id="M183"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mrow><mml:mi mathvariant="script">L</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>y</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>z</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>y</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>z</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>y</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>y</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <bold>z</bold> &#x02208; &#x0211D;<sup><italic>J</italic></sup> is a Lagrange multiplier and &#x003B2; &#x0003E; 0 is a hyperparameter. In each step of ADMM, <bold>x</bold> is updated as a minimizer of <inline-formula><mml:math id="M184"><mml:mrow><mml:mi mathvariant="script">L</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">y</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">z</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> with respect to <bold>x</bold>, <bold>y</bold> is updated as a minimizer of <inline-formula><mml:math id="M185"><mml:mrow><mml:mi mathvariant="script">L</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">y</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">z</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> with respect to <bold>y</bold>, and <bold>z</bold> is updated by the method of multipliers. The ADMM algorithm is given by</p>
<disp-formula id="E47"><label>(47)</label><mml:math id="M186"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow></mml:munder><mml:mtext>&#x000A0;</mml:mtext><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:mo>&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>y</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>z</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac><mml:mo>-</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:msubsup><mml:mrow><mml:mo stretchy="true">&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>;</mml:mo></mml:mstyle></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E48"><label>(48)</label><mml:math id="M187"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>y</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>y</mml:mtext></mml:mstyle></mml:mrow></mml:munder><mml:mtext>&#x000A0;</mml:mtext><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>y</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>y</mml:mtext></mml:mstyle><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>z</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac><mml:mo>-</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mo stretchy="true">&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>;</mml:mo></mml:mstyle></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E49"><label>(49)</label><mml:math id="M188"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>z</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>z</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B2;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>y</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where initializations <bold>x</bold><sub>0</sub>, <bold>y</bold><sub>0</sub> and <bold>z</bold><sub>0</sub> are required.</p>
<sec>
<title>3.2.3.1 Update rule for <bold>x</bold></title>
<p>Here, we consider the practical process in <xref ref-type="disp-formula" rid="E47">Equation 47</xref>. Since <inline-formula><mml:math id="M189"><mml:msub><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">y</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">z</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac></mml:math></inline-formula> is constant and <italic>i</italic><sub>&#x1D54A;</sub>(<bold>x</bold>) is invariant to scaling, the structure of <xref ref-type="disp-formula" rid="E47">Equation 47</xref> is the same as that of <xref ref-type="disp-formula" rid="E40">Equation 40</xref>. By using the MM approach, the update rule in <xref ref-type="disp-formula" rid="E47">Equation 47</xref> can be replaced with</p>
<disp-formula id="E50"><label>(50)</label><mml:math id="M190"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow></mml:mfrac><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>y</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>z</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E51"><label>(51)</label><mml:math id="M191"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">Proj</mml:mtext></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BB;</mml:mi><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Note that the penalty parameter for LS-based TD becomes <inline-formula><mml:math id="M192"><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BB;</mml:mi><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac></mml:math></inline-formula>.</p>
</sec>
<sec>
<title>3.2.3.2 Update rule for <bold>y</bold></title>
<p>The update rule in <xref ref-type="disp-formula" rid="E48">Equation 48</xref> is derived according to the loss function <italic>D</italic>(<bold>b</bold>, <bold>y</bold>). Many loss functions take the form of a sum of entry-wise losses <inline-formula><mml:math id="M193"><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">b</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">y</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:munderover><mml:mi>d</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, in which case the subproblem for updating <bold>y</bold> is separable for each entry <italic>y</italic><sub><italic>j</italic></sub> as</p>
<disp-formula id="E52"><label>(52)</label><mml:math id="M194"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">argmin</mml:mtext></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:munder><mml:mtext>&#x000A0;</mml:mtext><mml:mi>d</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where we put <inline-formula><mml:math id="M195"><mml:mstyle class="text"><mml:mtext mathvariant="bold">d</mml:mtext></mml:mstyle><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">A</mml:mtext></mml:mstyle><mml:msub><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">z</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac></mml:math></inline-formula>. The solution in <xref ref-type="disp-formula" rid="E52">Equation 52</xref> is unique if <italic>d</italic>(<italic>b</italic><sub><italic>j</italic></sub>, <italic>y</italic><sub><italic>j</italic></sub>) is convex. Our algorithm can support any loss function for which the problem formulated in <xref ref-type="disp-formula" rid="E52">Equation 52</xref> has a closed-form solution. Combettes and Pesquet [<xref ref-type="bibr" rid="B90">90</xref>] and Parikh and Boyd [<xref ref-type="bibr" rid="B91">91</xref>] can be useful for obtaining solutions for several distance functions <italic>d</italic>(<italic>b</italic><sub><italic>j</italic></sub>, <italic>y</italic><sub><italic>j</italic></sub>). For example, when we consider the &#x02113;<sub>1</sub> loss <italic>d</italic>(<italic>b</italic><sub><italic>j</italic></sub>, <italic>y</italic><sub><italic>j</italic></sub>) &#x0003D; |<italic>b</italic><sub><italic>j</italic></sub> &#x02212; <italic>y</italic><sub><italic>j</italic></sub>|, the solution can be given by soft-thresholding as</p>
<disp-formula id="E53"><label>(53)</label><mml:math id="M196"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">soft</mml:mtext></mml:mrow><mml:mrow><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>When we consider generalized KL divergence for positive <italic>y</italic><sub><italic>j</italic></sub> &#x0003E; 0 as <inline-formula><mml:math id="M197"><mml:mi>d</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo class="qopname">log</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, the solution can be given by</p>
<disp-formula id="E54"><label>(54)</label><mml:math id="M198"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B2;</mml:mi><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:msqrt><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B2;</mml:mi><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:mn>4</mml:mn><mml:mi>&#x003B2;</mml:mi><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msqrt></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
</sec>
</sec>
</sec>
<sec>
<title>3.3 Proposed algorithm</title>
<p>Finally, the proposed ADMM-MM algorithm, which supports three typical loss functions, is summarized in <xref ref-type="table" rid="T4">Algorithm 1</xref>. Our ADMM-MM allows for the immediate application of sophisticated TD models to other tasks such as completion, robust reconstruction, and compressive sensing. By simply selecting the update rule in the 7th line, we can accommodate the three loss functions. The majorization step on the 9th line is important for accommodating a variety of design matrices <bold>A</bold>. Many penalized LS-based TD algorithms can be plugged into the ADMM-MM algorithm in a plug-and-play manner at the 10th line. The basic requirement of the plug-and-play module, Proj<sub>&#x1D54A;<sub>&#x003B3;</sub></sub>(<bold>v</bold>, <bold>x</bold>) or <inline-formula><mml:math id="M205"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">v</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x003B3;</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, is that it monotonically decreases the penalized squared error shown in <xref ref-type="disp-formula" rid="E39">Equation 39</xref>. Note that our algorithm can also incorporate tensor nuclear norm regularization [<xref ref-type="bibr" rid="B65">65</xref>], although it is not a direct tensor decomposition. That is, the closed-form proximal mapping of nuclear norm regularization can directly replace Proj<sub>&#x1D54A;<sub>&#x003B3;</sub></sub>(<bold>v</bold>, <bold>x</bold>).</p>
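<p>As a usage-oriented summary, the following Python sketch mirrors the structure of <xref ref-type="table" rid="T4">Algorithm 1</xref> (the corresponding line numbers are indicated in the comments). It assumes the hypothetical helpers <monospace>max_eigval</monospace>, <monospace>update_y_l1</monospace>, and <monospace>update_y_kl</monospace> from the sketches above, and a plug-and-play projection <monospace>proj(v, gamma, theta)</monospace> returning the pair (<bold>x</bold>, &#x003B8;). For the &#x02113;<sub>2</sub> loss, keeping <bold>z</bold> fixed at <bold>0</bold> makes the loop coincide with the pure MM iteration of <xref ref-type="disp-formula" rid="E44">Equation 44</xref> and <xref ref-type="disp-formula" rid="E45">Equation 45</xref>.</p>
<preformat>
import numpy as np

def admm_mm(b, A, proj, x0, theta0, loss="l2", alpha=0.0, beta=1.0, iters=100):
    # Sketch of Algorithm 1 for the three supported loss functions.
    lam = max_eigval(A)                                   # line 3
    gamma = alpha / lam if loss == "l2" else 2.0 * alpha / (beta * lam)  # line 4
    x, theta, z = x0, theta0, np.zeros(b.size)            # line 2
    for _ in range(iters):                                # line 5
        d = A @ x - z / beta                              # line 6
        if loss == "l2":                                  # line 7
            y = b
        elif loss == "l1":
            y = update_y_l1(b, d, beta)                   # Equation 53
        else:
            y = update_y_kl(b, d, beta)                   # Equation 54
        v = x - (A.T @ (A @ x - (y + z / beta))) / lam    # line 9 (Equation 50)
        x, theta = proj(v, gamma, theta)                  # line 10 (Equation 51)
        if loss != "l2":
            z = z + beta * (y - A @ x)                    # Equation 49
    return x, theta
</preformat>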
<table-wrap position="float" id="T4">
<label>Algorithm 1</label>
<caption><p>ADMM-MM algorithm.</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr><td align="left" valign="top">&#x000A0;&#x000A0;<monospace>1: <bold>input</bold>: <bold>b</bold>, <bold>A</bold>, &#x1D54A;, type of loss, &#x003B1;, &#x003B2; &#x0003E; 0</monospace></td></tr>
<tr><td align="left" valign="top">&#x000A0;&#x000A0;<monospace>2: Initialize <bold>x</bold> &#x02208; &#x1D54A; and <bold>z</bold> &#x0003D; <bold>0</bold>;</monospace></td></tr>
<tr><td align="left" valign="top">&#x000A0;&#x000A0;<monospace>3: &#x003BB;&#x02190; maximum eigenvalue of <bold>A</bold><sup>&#x022A4;</sup><bold>A</bold>;</monospace></td></tr>
<tr><td align="left" valign="top">&#x000A0;&#x000A0;<monospace>4: <inline-formula><mml:math id="M199"><mml:mi>&#x003B3;</mml:mi><mml:mo>&#x02190;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow></mml:mfrac></mml:mtd><mml:mtd><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x02113;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003B2;</mml:mi><mml:mi>&#x003BB;</mml:mi></mml:mrow></mml:mfrac></mml:mtd><mml:mtd><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x02113;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mtext class="textrm" mathvariant="normal">or KL</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:math></inline-formula></monospace></td></tr>
<tr><td align="left" valign="top">&#x000A0;&#x000A0;<monospace>5: <bold>repeat</bold></monospace></td></tr>
<tr><td align="left" valign="top">&#x000A0;&#x000A0;<monospace>6: <inline-formula><mml:math id="M200"><mml:mstyle class="text"><mml:mtext mathvariant="bold">d</mml:mtext></mml:mstyle><mml:mo>&#x02190;</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">A</mml:mtext></mml:mstyle><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac><mml:mstyle class="text"><mml:mtext mathvariant="bold">z</mml:mtext></mml:mstyle></mml:math></inline-formula>;</monospace></td></tr>
<tr><td align="left" valign="top">&#x000A0;&#x000A0;<monospace>7: <inline-formula><mml:math id="M201"><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02190;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x02113;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">soft</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x02113;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>&#x003B2;</mml:mi><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:msqrt><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B2;</mml:mi><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:mn>4</mml:mn><mml:mi>&#x003B2;</mml:mi><mml:msub><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msqrt></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac></mml:mtd><mml:mtd><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">KL</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:math></inline-formula></monospace></td></tr>
<tr><td align="left" valign="top">&#x000A0;&#x000A0;<monospace>8: <inline-formula><mml:math id="M202"><mml:mstyle class="text"><mml:mtext mathvariant="bold">z</mml:mtext></mml:mstyle><mml:mo>&#x02190;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mstyle class="text"><mml:mtext mathvariant="bold">0</mml:mtext></mml:mstyle></mml:mtd><mml:mtd><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x02113;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mstyle class="text"><mml:mtext mathvariant="bold">z</mml:mtext></mml:mstyle><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B2;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">y</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">A</mml:mtext></mml:mstyle><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x02113;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mtext class="textrm" mathvariant="normal">or KL</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:math></inline-formula></monospace></td></tr>
<tr><td align="left" valign="top">&#x000A0;&#x000A0;<monospace>9: <inline-formula><mml:math id="M203"><mml:mstyle class="text"><mml:mtext mathvariant="bold">v</mml:mtext></mml:mstyle><mml:mo>&#x02190;</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow></mml:mfrac><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">A</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup><mml:mstyle class="text"><mml:mtext mathvariant="bold">A</mml:mtext></mml:mstyle><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">A</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">y</mml:mtext></mml:mstyle><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac><mml:mstyle class="text"><mml:mtext mathvariant="bold">z</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>;</monospace></td></tr>
<tr><td align="left" valign="top"><monospace>10: <bold>x</bold>&#x02190;Proj<sub>&#x1D54A;<sub>&#x003B3;</sub></sub>(<bold>v</bold>, <bold>x</bold>) or <inline-formula><mml:math id="M204"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="script">U</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">v</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x003B3;</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>;</monospace> </td></tr>
<tr><td align="left" valign="top"><monospace>11: <bold>until</bold> convergence</monospace></td></tr>
<tr><td align="left" valign="top"><monospace>12: <bold>output</bold>: <bold>x</bold></monospace></td></tr>
</tbody>
</table>
</table-wrap>
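<p>To make the control flow of Algorithm 1 concrete, the following is a minimal numpy sketch under our own (hypothetical) naming; it is not the authors' implementation. The argument <monospace>proj</monospace> stands for an arbitrary plug-and-play LS-based TD module Proj<sub>&#x1D54A;<sub>&#x003B3;</sub></sub>(<bold>v</bold>, <bold>x</bold>), assumed only to monotonically decrease the penalized squared error of <xref ref-type="disp-formula" rid="E39">Equation 39</xref>:</p>
<preformat>
import numpy as np

def admm_mm(b, A, proj, loss="l2", alpha=1.0, beta=1.0, n_iter=100):
    """Sketch of Algorithm 1; proj(v, x, gamma) is any plug-and-play
    LS-based TD module that decreases the penalized squared error."""
    J, I = A.shape
    x = np.zeros(I)                                   # line 2 (any x in S)
    z = np.zeros(J)
    A_dense = A if isinstance(A, np.ndarray) else A.toarray()
    lam = np.linalg.norm(A_dense, 2) ** 2             # line 3: max eigenvalue of A^T A
    gamma = alpha / lam if loss == "l2" else 2.0 * alpha / (beta * lam)  # line 4
    for _ in range(n_iter):                           # line 5
        d = A @ x - z / beta                          # line 6
        if loss == "l2":                              # line 7: y-update per loss
            y = b.copy()
        elif loss == "l1":                            # soft-thresholding at 1/beta
            u = d - b
            y = b + np.sign(u) * np.maximum(np.abs(u) - 1.0 / beta, 0.0)
        else:                                         # generalized KL, Equation 54
            y = (beta * d - 1 + np.sqrt((beta * d - 1) ** 2 + 4 * beta * b)) / (2 * beta)
        z = np.zeros(J) if loss == "l2" else z + beta * (y - A @ x)  # line 8
        v = x - (A.T @ (A @ x) - A.T @ (y + z / beta)) / lam         # line 9
        x = proj(v, x, gamma)                         # line 10: plug-and-play TD
    return x                                          # line 12
</preformat>
<p>Densifying <monospace>A</monospace> here only serves to compute the spectral norm in a few lines; a sparse-aware eigen-solver would be used in practice.</p>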
<p>The proposed algorithm can be regarded as a unified and extended algorithm covering both EM-ALS and Zhang and Ding&#x00027;s ADMM. When <bold>A</bold> is the diagonal matrix formed from the vectorized mask tensor <inline-formula><mml:math id="M206"><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:math></inline-formula> (i.e., <bold>A</bold> &#x0003D; diag(vec(<inline-formula><mml:math id="M207"><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:math></inline-formula>))) and the &#x02113;<sub>2</sub> loss is assumed, the ADMM-MM algorithm reduces to EM-ALS. When <bold>A</bold> &#x0003D; <bold>I</bold> and the &#x02113;<sub>1</sub> loss is assumed, it reduces to Zhang and Ding&#x00027;s ADMM, as illustrated in the usage sketch below.</p>
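<p>As a concrete illustration of these two special cases, using the <monospace>admm_mm</monospace> sketch above (the mask <monospace>W</monospace>, measurement <monospace>b</monospace>, and CP-ALS routine <monospace>cp_als_proj</monospace> are assumed to be defined; all names are ours):</p>
<preformat>
from scipy import sparse

# Completion: A = diag(vec(W)) with the l2 loss reduces to EM-ALS
A_mask = sparse.diags(W.ravel())
x_hat = admm_mm(b, A_mask, cp_als_proj, loss="l2")

# Robust reconstruction: A = I with the l1 loss reduces to
# Zhang and Ding's ADMM
A_eye = sparse.eye(b.size, format="csr")
x_hat = admm_mm(b, A_eye, cp_als_proj, loss="l1")
</preformat>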
<p>From the perspective of an optimization algorithm that combines ADMM with MM, the proposed algorithm can be regarded as a special case of linearized ADMM [<xref ref-type="bibr" rid="B92">92</xref>, <xref ref-type="bibr" rid="B93">93</xref>]. However, the linearized ADMM work [<xref ref-type="bibr" rid="B92">92</xref>] considers only convex optimization, while the other work [<xref ref-type="bibr" rid="B93">93</xref>] considers only non-convex matrix norms that have efficient proximal mappings. The major difference between the proposed method and the above methods is that the update rule of the iterative algorithm is applied to a penalized TD instead of a proximal mapping. Our proposal to plug in various TDs is a new and meaningful attempt that greatly improves the applicability of TDs.</p>
<p>The computational complexity of a single iteration in the ADMM-MM algorithm is often dominated by the LS-based TD at the 10th line. The complexity of lines 6 through 9 is <inline-formula><mml:math id="M208"><mml:mrow><mml:mi mathvariant="script">O</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo>&#x003A9;</mml:mo></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, where &#x003A9; is the number of nonzero elements in the design matrix <bold>A</bold> &#x02208; &#x0211D;<sup><italic>J</italic> &#x000D7; <italic>I</italic></sup>. We often assume that the design matrix is sparse and <italic>J</italic> &#x02264; &#x003A9; &#x0226A; <italic>IJ</italic>. On the other hand, the complexity of LS-based TD for an <italic>N</italic>-th order tensor <inline-formula><mml:math id="M209"><mml:mstyle mathvariant="bold-script"><mml:mi>V</mml:mi></mml:mstyle></mml:math></inline-formula> &#x02208; &#x0211D;<sup><italic>I</italic><sub>1</sub> &#x000D7; <italic>I</italic><sub>2</sub> &#x000D7; &#x022EF; &#x000D7; <italic>I</italic><sub><italic>N</italic></sub></sup> depends on the model and algorithm. CPD/NNCPD can be solved by ALS and HALS [<xref ref-type="bibr" rid="B2">2</xref>, <xref ref-type="bibr" rid="B8">8</xref>]. The complexity of CP-ALS is <inline-formula><mml:math id="M210"><mml:mrow><mml:mi mathvariant="script">O</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>N</mml:mi><mml:mi>I</mml:mi><mml:msub><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">CP</mml:mtext></mml:mstyle></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mrow><mml:mi mathvariant="script">O</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>N</mml:mi><mml:msubsup><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">CP</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, while the complexity of HALS is <inline-formula><mml:math id="M211"><mml:mrow><mml:mi mathvariant="script">O</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>N</mml:mi><mml:mi>I</mml:mi><mml:msub><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">CP</mml:mtext></mml:mstyle></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, assuming that the CP rank is denoted as <italic>R</italic><sub>CP</sub> and <inline-formula><mml:math id="M212"><mml:mi>I</mml:mi><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mo>&#x0220F;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. ALS is computationally more expensive than HALS because it requires matrix inversion. TKD is usually solved by the higher-order orthogonal iteration (HOOI) algorithm [<xref ref-type="bibr" rid="B48">48</xref>], which solves <italic>N</italic> eigenvalue problems alternately. 
Since forming the symmetric matrix of size (<italic>I</italic><sub><italic>n</italic></sub>, <italic>I</italic><sub><italic>n</italic></sub>) by tensor-matrix multiplication is usually more expensive than solving its eigenvalue problem, the complexity of HOOI is <inline-formula><mml:math id="M213"><mml:mrow><mml:mi mathvariant="script">O</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>N</mml:mi><mml:mi>I</mml:mi><mml:msub><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">TK</mml:mtext></mml:mstyle></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, assuming that the Tucker rank is (<italic>R</italic><sub>TK</sub>, <italic>R</italic><sub>TK</sub>, ..., <italic>R</italic><sub>TK</sub>). TTD is usually solved by TT-ALS with QR orthogonalization [<xref ref-type="bibr" rid="B50">50</xref>]. The advantage of this algorithm is that QR orthogonalization allows each least-squares solution to be found without matrix inversion. The complexity of TT-ALS is <inline-formula><mml:math id="M214"><mml:mrow><mml:mi mathvariant="script">O</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>N</mml:mi><mml:mi>I</mml:mi><mml:msub><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">TT</mml:mtext></mml:mstyle></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, assuming that the TT-rank is (<italic>R</italic><sub>TT</sub>, <italic>R</italic><sub>TT</sub>, ..., <italic>R</italic><sub>TT</sub>). TRD is usually solved by TR-ALS [<xref ref-type="bibr" rid="B51">51</xref>]. The complexity of TR-ALS is <inline-formula><mml:math id="M215"><mml:mrow><mml:mi mathvariant="script">O</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>N</mml:mi><mml:mi>I</mml:mi><mml:msubsup><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">TR</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mrow><mml:mi mathvariant="script">O</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>N</mml:mi><mml:msubsup><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">TR</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>6</mml:mn></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, assuming that the TR-rank is (<italic>R</italic><sub>TR</sub>, <italic>R</italic><sub>TR</sub>, ..., <italic>R</italic><sub>TR</sub>). From a complexity perspective, TKD and TTD are highly efficient. TRD has a higher complexity with respect to the TR rank. CPD often requires a larger CP rank, making it more expensive than the other models.</p>
</sec>
<sec>
<title>3.4 Discussions on the convergence</title>
<p>Unfortunately, this study does not include new theoretical results on the convergence of the ADMM-MM algorithm when LS-based TD algorithms are used for the sub-optimization. Instead, this section introduces existing theoretical results on convergence related to TD, EM/MM, and ADMM, and discusses their relevance to this study.</p>
<sec>
<title>3.4.1 Case of &#x02113;<sub>2</sub> loss with MM</title>
<p>The theory of monotonicity and convergence of the EM and MM algorithms has been discussed [<xref ref-type="bibr" rid="B83">83</xref>, <xref ref-type="bibr" rid="B94">94</xref>], and global convergence has been shown when the minimizer of the auxiliary function is unique [<xref ref-type="bibr" rid="B95">95</xref>]. In this study, ALS and MM are combined for &#x02113;<sub>2</sub> loss minimization. From the point of view of MM alone, the results of [<xref ref-type="bibr" rid="B95">95</xref>] cannot be applied to our case, since our auxiliary function is non-convex and its minimizer is not unique in general. On the other hand, the convergence of tensor completion via non-negative tensor factorization [<xref ref-type="bibr" rid="B96">96</xref>] has been studied, which corresponds to a special case of our algorithm.</p>
<p>In Section 3.2.2, we explained the proposed MM algorithm, which first derives the auxiliary function and then minimizes it using ALS (or BCD). However, the same auxiliary function and algorithm can be derived in the opposite order: first applying ALS (or BCD) to the original function and then deriving an auxiliary function for each sub-optimization problem. From this perspective, the proposed algorithm for the &#x02113;<sub>2</sub> loss falls within the framework of block majorization minimization, also known as block successive upper bound minimization [<xref ref-type="bibr" rid="B97">97</xref>, <xref ref-type="bibr" rid="B98">98</xref>]. In short, by the convergence results of [<xref ref-type="bibr" rid="B97">97</xref>, <xref ref-type="bibr" rid="B98">98</xref>], the proposed algorithm has global convergence if the auxiliary function of each sub-optimization has a unique minimizer. Although the solution to the sub-optimization in ALS is generally not unique, this can sometimes be resolved by adding regularization. For example, when Tikhonov-regularized ALS is used for the sub-optimization (see the sketch below), global convergence of the proposed MM algorithm is guaranteed.</p>
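<p>For instance, one such Tikhonov-regularized factor update for CPD can be sketched as follows (a minimal numpy illustration under our own naming and unfolding convention, not the paper's implementation):</p>
<preformat>
import numpy as np

def tikhonov_als_update(V1, G2, G3, eps=1e-3):
    """One Tikhonov-regularized ALS update of the first CP factor.
    V1 is the mode-1 unfolding of the target tensor; eps makes the
    quadratic sub-problem strictly convex, so its minimizer is unique."""
    R = G2.shape[1]
    M = (G3[:, None, :] * G2[None, :, :]).reshape(-1, R)   # Khatri-Rao product
    # unique solution of  min_G || V1 - G M^T ||_F^2 + eps * || G ||_F^2
    return np.linalg.solve(M.T @ M + eps * np.eye(R), (V1 @ M).T).T
</preformat>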
</sec>
<sec>
<title>3.4.2 Case of other loss with ADMM</title>
<p>When <bold>A</bold> &#x0003D; <bold>I</bold>, the proposed algorithm reduces to the standard ADMM. Usually, the convergence of ADMM relies on the non-expansiveness of the projection onto a convex set or of the proximal mapping of a convex function [<xref ref-type="bibr" rid="B99">99</xref>], which is not applicable to our case since low-rank tensor approximation is characterized as a projection onto a non-convex set, Proj<sub>&#x1D54A;</sub>, or a non-convex sub-optimization, Proj<sub>&#x1D54A;<sub>&#x003B3;</sub></sub>. In related work [<xref ref-type="bibr" rid="B100">100</xref>], an ADMM algorithm and its convergence for matrix completion with NMF have been discussed; however, in that formulation, each factor matrix of the NMF is treated as a separate optimization variable in ADMM, which differs from our formulation. On the other hand, the non-convex ADMM [<xref ref-type="bibr" rid="B101">101</xref>] covers many non-convex functions and indicator functions of non-convex sets such as the Stiefel/Grassmannian manifolds, and it is close to our problem. According to that theory, the compactness of the set &#x1D54A;, the coercivity and smoothness of the objective function, and the continuity of the sub-optimization paths are required for convergence. It is not trivial whether these conditions hold in our case because Proj<sub>&#x1D54A;<sub>&#x003B3;</sub></sub> is a composite problem of tensor decomposition and penalization of the core tensors.</p>
<p>When <bold>A</bold> &#x02260; <bold>I</bold>, the proposed algorithm is characterized as ADMM with MM for its sub-optimization. This combination of ADMM and MM is often referred to as linearized ADMM [<xref ref-type="bibr" rid="B92">92</xref>]. Non-convex linearized ADMM [<xref ref-type="bibr" rid="B93">93</xref>] and its convergence have been discussed, and it has been applied to non-convex low-rank matrix reconstruction problems. However, it is not trivial whether the convergence results of [<xref ref-type="bibr" rid="B93">93</xref>] can be applied in our case. From the perspective of PnP-ADMM, convergence analyses based on a Lipschitz condition on the denoiser have been reported [<xref ref-type="bibr" rid="B102">102</xref>]. Thus, although there are various relevant results, whether they apply specifically to our case is an open problem, which may be solved by analyzing the properties of the penalized TD algorithm.</p>
</sec>
</sec>
</sec>
<sec id="s4">
<title>4 Related works</title>
<p>There are several studies on algorithms for TD using ADMM. AO-ADMM [<xref ref-type="bibr" rid="B73">73</xref>] is an algorithm for constrained matrix/tensor factorization that solves the sub-problems for updating factor matrices using ADMM. In AO-ADMM, alternating optimization (AO) is the main routine and ADMM is the subroutine; in contrast, the proposed ADMM-MM algorithm uses ADMM as the main routine. AO-ADMM supports several loss functions, but it does not support various design matrices. In addition, AO-PDS [<xref ref-type="bibr" rid="B103">103</xref>] has been proposed, using primal-dual splitting (PDS) instead of ADMM.</p>
<p>Robust Tucker decomposition (RTKD) with the &#x02113;<sub>1</sub> loss has been proposed [<xref ref-type="bibr" rid="B85">85</xref>], and its algorithm is based on ADMM. RTKD employs ADMM as the main routine with ALS as the subroutine, so it can be regarded as a special case of our proposed algorithm. However, RTKD supports neither other loss functions, general design matrices, nor additional constraints.</p>
<p>Penalized TD using ADMM has been actively studied, and many algorithms have been proposed [<xref ref-type="bibr" rid="B100">100</xref>, <xref ref-type="bibr" rid="B104">104</xref>&#x02013;<xref ref-type="bibr" rid="B107">107</xref>]. Each of these algorithms is a detailed formulation of ADMM tailored to a specific problem, in which the optimization variables are separated into many blocks and updated alternately. Such algorithms are not structured, making them difficult to generalize and extend.</p>
<p>In the context of generalized TD, generalized CPD [<xref ref-type="bibr" rid="B108">108</xref>] has been proposed. The purpose of that study is to make CP decomposition compatible with various loss functions, and a BCD-based gradient algorithm was proposed. However, other TD models and the perspective of design matrices are not discussed.</p>
<p>PnP-ADMM [<xref ref-type="bibr" rid="B54">54</xref>] is a framework for using black-box models (e.g., a trained deep denoiser) in place of the proximal mapping in ADMM. It is highly extensible, in that an arbitrary model can be combined with various design matrices. The structure of using LS-based TD in a plug-and-play manner in the proposed algorithm is basically the same as that of PnP-ADMM. If we regard LS-based TD as a denoiser, the proposed algorithm may be considered a type of PnP-ADMM. In this sense, the proposed algorithm and PnP-ADMM are very similar, but they differ significantly in that our objective function is not a black box.</p>
</sec>
<sec id="s5">
<title>5 Experiments</title>
<p>The purpose of the experiments is to evaluate the performance of the proposed algorithm in terms of optimization and to investigate its usefulness for versatile tensor reconstruction tasks.</p>
<sec>
<title>5.1 Optimization behaviors for various tensor reconstruction tasks</title>
<p>The tensor reconstruction tasks in this experiment are tensor denoising, completion, de-blurring, and super-resolution. We used an RGB image named &#x0201C;facade,&#x0201D; represented as a third-order tensor of size 256 &#x000D7; 256 &#x000D7; 3, as the ground truth for all four tasks.</p>
<p>For the tensor denoising task, we set <bold>A</bold> &#x0003D; <bold>I</bold>; Gaussian noise, salt-and-pepper noise, and Poisson noise were added in the individual settings. For the tensor completion task, we set <bold>A</bold> &#x0003D; diag(vec(<inline-formula><mml:math id="M216"><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:math></inline-formula>)) with a randomly generated mask tensor <inline-formula><mml:math id="M217"><mml:mstyle mathvariant="bold-script"><mml:mi>W</mml:mi></mml:mstyle></mml:math></inline-formula> &#x02208; {0, 1}<sup>256 &#x000D7; 256 &#x000D7; 3</sup>, where 90% of the entries are missing. For the de-blurring task, we used a motion blur window of size 21 &#x000D7; 21, and the corresponding block-Toeplitz matrix was used as <bold>A</bold>. For the super-resolution task, we used a Lanczos2 kernel for downsampling to 1/4 size, and the corresponding downsampling matrix was used as <bold>A</bold>.</p>
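<p>To convey the flavor of these measurement models, the three noise types can be generated as follows (a sketch only; the noise parameters and seed below are placeholders of our own choosing, not the paper's settings):</p>
<preformat>
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(256 * 256 * 3)              # stand-in for vec of the clean tensor

b_gauss = x + 0.1 * rng.standard_normal(x.size)        # Gaussian noise
b_sp = x.copy()                                        # salt-and-pepper noise
flip = rng.random(x.size) &#x0003C; 0.1
b_sp[flip] = rng.integers(0, 2, flip.sum()).astype(float)
b_pois = rng.poisson(255.0 * x) / 255.0                # Poisson (shot) noise
</preformat>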
<sec>
<title>5.1.1 Comparison with gradient methods</title>
<p>In this experiment, we compare the proposed algorithm with two gradient-based algorithms, projected gradient (PG) and block coordinate descent (BCD), using the low-rank CP decomposition model:</p>
<disp-formula id="E55"><label>(55)</label><mml:math id="M218"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>=</mml:mo><mml:mtext class="textrm" mathvariant="normal">vec</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x027E6;</mml:mi><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mi>&#x027E7;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02208;</mml:mo><mml:mo>&#x1D54A;</mml:mo><mml:mo>&#x02282;</mml:mo><mml:msup><mml:mi>&#x0211D;</mml:mi><mml:mrow><mml:mn>256</mml:mn><mml:mo>&#x000B7;</mml:mo><mml:mn>256</mml:mn><mml:mo>&#x000B7;</mml:mo><mml:mn>3</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E56"><label>(56)</label><mml:math id="M219"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mo>&#x1D54A;</mml:mo><mml:mo>:</mml:mo><mml:mtext>&#x0200B;&#x0200B;</mml:mtext><mml:mo>=</mml:mo><mml:mo>&#x0007B;</mml:mo><mml:mtext>vec</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mo>&#x0301A;</mml:mo><mml:mrow><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>G</mml:mi></mml:mstyle><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>G</mml:mi></mml:mstyle><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>2</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>G</mml:mi></mml:mstyle><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>3</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msup></mml:mrow><mml:mo>&#x0301B;</mml:mo></mml:mrow><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>G</mml:mi></mml:mstyle><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msup><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mo>&#x0211D;</mml:mo><mml:mrow><mml:mn>256</mml:mn><mml:mo>&#x000D7;</mml:mo><mml:mi>R</mml:mi></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>G</mml:mi></mml:mstyle><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>2</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msup><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mo>&#x0211D;</mml:mo><mml:mrow><mml:mn>256</mml:mn><mml:mo>&#x000D7;</mml:mo><mml:mi>R</mml:mi></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;</mml:mtext><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>G</mml:mi></mml:mstyle><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>3</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msup><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mo>&#x0211D;</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mo>&#x000D7;</mml:mo><mml:mi>R</mml:mi></mml:mrow></mml:msup><mml:mo>&#x0007D;</mml:mo><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>In PG, <bold>x</bold> is moved along the gradient-descent direction and then projected onto the set &#x1D54A;:</p>
<disp-formula id="E57"><label>(57)</label><mml:math id="M220"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mi>&#x003BC;</mml:mi><mml:msub><mml:mrow><mml:mo>&#x02207;</mml:mo></mml:mrow><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E58"><label>(58)</label><mml:math id="M221"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mtext class="textrm" mathvariant="normal">Proj</mml:mtext></mml:mrow><mml:mrow><mml:mo>&#x1D54A;</mml:mo></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">ALS</mml:mtext></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>v</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003BC; &#x0003E; 0 is a step size. Note that PG is identical to the proposed method when <inline-formula><mml:math id="M222"><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">b</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">&#x000A0;A</mml:mtext></mml:mstyle><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo>&#x02225;</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">b</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">A</mml:mtext></mml:mstyle><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> and <inline-formula><mml:math id="M223"><mml:mi>&#x003BC;</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow></mml:mfrac></mml:math></inline-formula>. The BCD-based gradient algorithm is given by</p>
<disp-formula id="E59"><label>(59)</label><mml:math id="M224"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>-</mml:mo><mml:mi>&#x003BC;</mml:mi><mml:msub><mml:mrow><mml:mo>&#x02207;</mml:mo></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:mtext class="textrm" mathvariant="normal">vec</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x027E6;</mml:mi><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mi>&#x027E7;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E60"><label>(60)</label><mml:math id="M225"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>-</mml:mo><mml:mi>&#x003BC;</mml:mi><mml:msub><mml:mrow><mml:mo>&#x02207;</mml:mo></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:mtext class="textrm" mathvariant="normal">vec</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x027E6;</mml:mi><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mi>&#x027E7;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E61"><label>(61)</label><mml:math id="M226"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>-</mml:mo><mml:mi>&#x003BC;</mml:mi><mml:msub><mml:mrow><mml:mo>&#x02207;</mml:mo></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>A</mml:mtext></mml:mstyle><mml:mtext class="textrm" mathvariant="normal">vec</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x027E6;</mml:mi><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mi>&#x027E7;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>In both PG and BCD, the step size &#x003BC; was manually tuned for the best performance. The proposed algorithm has an optimization parameter &#x003B2;, which was also manually tuned. We experimented with the four tasks of denoising, completion, de-blurring, and super-resolution under Gaussian, salt-and-pepper, and Poisson noise measurements. The &#x02113;<sub>2</sub> loss, &#x02113;<sub>1</sub> loss, and generalized KL divergence were used for the Gaussian, salt-and-pepper, and Poisson noise measurements, respectively.</p>
<p><xref ref-type="table" rid="T1">Table 1</xref> shows the achieved values of the objective function and its computational time [sec] for the three optimization methods in various settings. The best values are highlighted in bold. <xref ref-type="fig" rid="F3">Figure 3</xref> shows its optimization behaviors in the various tasks based on three loss functions. The proposed method stably and efficiently reduces the objective function in various settings in comparison to PG and BCD. Note that since the proposed method for &#x02113;<sub>2</sub> loss and PG are equivalent, they are not compared.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Comparison of objective functions and computational time [sec] at convergence for PG, BCD, and the proposed algorithms.</p></caption>
<table frame="box" rules="all">
<thead>
<tr>
<th valign="top" align="left" colspan="2"></th>
<th valign="top" align="center" colspan="2"><bold>PG</bold></th>
<th valign="top" align="center" colspan="2"><bold>BCD</bold></th>
<th valign="top" align="center" colspan="2"><bold>ADMM-MM (proposed)</bold></th>
</tr>
<tr>
<th valign="top" align="left" colspan="2"></th>
<th valign="top" align="center"><bold>Obj</bold>.</th>
<th valign="top" align="center"><bold>Time</bold></th>
<th valign="top" align="center"><bold>Obj</bold>.</th>
<th valign="top" align="center"><bold>Time</bold></th>
<th valign="top" align="center"><bold>Obj</bold>.</th>
<th valign="top" align="center"><bold>Time</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" rowspan="4">&#x02113;<sub>2</sub></td>
<td valign="top" align="left">Noise</td>
<td valign="top" align="center">-</td>
<td valign="top" align="center">-</td>
<td valign="top" align="center">1,623</td>
<td valign="top" align="center">410.3</td>
<td valign="top" align="center"><bold>1614</bold></td>
<td valign="top" align="center"><bold>8.924</bold></td>
</tr>
<tr>
<td valign="top" align="left">Missing</td>
<td valign="top" align="center">-</td>
<td valign="top" align="center">-</td>
<td valign="top" align="center">8,091</td>
<td valign="top" align="center">1,866</td>
<td valign="top" align="center"><bold>119.0</bold></td>
<td valign="top" align="center"><bold>7.792</bold></td>
</tr>
<tr>
<td valign="top" align="left">Blur</td>
<td valign="top" align="center">-</td>
<td valign="top" align="center">-</td>
<td valign="top" align="center">4,657</td>
<td valign="top" align="center">13,684</td>
<td valign="top" align="center"><bold>26.10</bold></td>
<td valign="top" align="center"><bold>298.5</bold></td>
</tr>
<tr>
<td valign="top" align="left">Down</td>
<td valign="top" align="center">-</td>
<td valign="top" align="center">-</td>
<td valign="top" align="center">80.68</td>
<td valign="top" align="center">1,165</td>
<td valign="top" align="center"><bold>2.365</bold></td>
<td valign="top" align="center"><bold>11.36</bold></td>
</tr> <tr>
<td valign="top" align="left" rowspan="4">&#x02113;<sub>1</sub></td>
<td valign="top" align="left">Noise</td>
<td valign="top" align="center">42.92</td>
<td valign="top" align="center">13.02</td>
<td valign="top" align="center">43.18</td>
<td valign="top" align="center">41.66</td>
<td valign="top" align="center"><bold>42.87</bold></td>
<td valign="top" align="center"><bold>2.16</bold></td>
</tr>
<tr>
<td valign="top" align="left">Missing</td>
<td valign="top" align="center">39.74</td>
<td valign="top" align="center"><bold>48.92</bold></td>
<td valign="top" align="center">38.65</td>
<td valign="top" align="center">89.79</td>
<td valign="top" align="center"><bold>35.09</bold></td>
<td valign="top" align="center">84.9</td>
</tr>
<tr>
<td valign="top" align="left">Blur</td>
<td valign="top" align="center">39.14</td>
<td valign="top" align="center"><bold>103.9</bold></td>
<td valign="top" align="center">39.57</td>
<td valign="top" align="center">171.7</td>
<td valign="top" align="center"><bold>38.93</bold></td>
<td valign="top" align="center">133.8</td>
</tr>
<tr>
<td valign="top" align="left">Down</td>
<td valign="top" align="center">1.040</td>
<td valign="top" align="center">108.9</td>
<td valign="top" align="center">1.759</td>
<td valign="top" align="center">42.18</td>
<td valign="top" align="center"><bold>1.004</bold></td>
<td valign="top" align="center"><bold>4.85</bold></td>
</tr> <tr>
<td valign="top" align="left" rowspan="4">KL</td>
<td valign="top" align="left">Noise</td>
<td valign="top" align="center">5.361</td>
<td valign="top" align="center">20.50</td>
<td valign="top" align="center">5.394</td>
<td valign="top" align="center">146</td>
<td valign="top" align="center"><bold>5.358</bold></td>
<td valign="top" align="center"><bold>4.65</bold></td>
</tr>
<tr>
<td valign="top" align="left">Missing</td>
<td valign="top" align="center">0.526</td>
<td valign="top" align="center">54.95</td>
<td valign="top" align="center">0.363</td>
<td valign="top" align="center">220.8</td>
<td valign="top" align="center"><bold>0.247</bold></td>
<td valign="top" align="center"><bold>8.532</bold></td>
</tr>
<tr>
<td valign="top" align="left">Blur</td>
<td valign="top" align="center">5.109</td>
<td valign="top" align="center"><bold>158.2</bold></td>
<td valign="top" align="center">5.155</td>
<td valign="top" align="center">576.4</td>
<td valign="top" align="center"><bold>4.905</bold></td>
<td valign="top" align="center">669.3</td>
</tr>
<tr>
<td valign="top" align="left">Down</td>
<td valign="top" align="center">0.189</td>
<td valign="top" align="center">98.59</td>
<td valign="top" align="center">0.304</td>
<td valign="top" align="center">32.16</td>
<td valign="top" align="center"><bold>0.135</bold></td>
<td valign="top" align="center"><bold>7.65</bold></td>
</tr></tbody>
</table>
<table-wrap-foot>
<p>The best objective values and computational times are highlighted in bold.</p>
</table-wrap-foot>
</table-wrap>
<fig position="float" id="F3">
<label>Figure 3</label>
<caption><p>Optimization behavior: comparison of PG, BCD, and the proposed algorithm for various loss functions in CP decomposition with various degraded images.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fams-11-1594873-g0003.tif">
<alt-text>Grid of twelve line graphs comparing algorithms for denoising, completion, deblurring, and super-resolution. Rows correspond to &#x02113;2, &#x02113;1, and KL objective functions across iterations. Algorithms compared are PG, BCD, and Proposed, with notable differences in convergence speed and efficiency.</alt-text>
</graphic>
</fig>
</sec>
<sec>
<title>5.1.2 Comparison with AO-ADMM</title>
<p>In this experiment, we compare the proposed method with AO-ADMM in standard non-negative CP decomposition (NNCPD) under three loss functions. The optimization problem is given by</p>
<disp-formula id="E62"><label>(62)</label><mml:math id="M227"><mml:mtable class="eqnarray" columnalign="right"><mml:mtr><mml:mtd><mml:munder><mml:mrow><mml:mtext class="textrm" mathvariant="normal">minimize</mml:mtext></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:munder><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mtext class="textrm" mathvariant="normal">vec</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x027E6;</mml:mi><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mi>&#x027E7;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mtext class="textrm" mathvariant="normal">&#x000A0;s.t.&#x000A0;</mml:mtext><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:msubsup><mml:mi>&#x0211D;</mml:mi><mml:mrow><mml:mo>&#x02265;</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mn>256</mml:mn><mml:mo>&#x000D7;</mml:mo><mml:mi>R</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:msubsup><mml:mi>&#x0211D;</mml:mi><mml:mrow><mml:mo>&#x02265;</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mn>256</mml:mn><mml:mo>&#x000D7;</mml:mo><mml:mi>R</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>G</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:msubsup><mml:mi>&#x0211D;</mml:mi><mml:mrow><mml:mo>&#x02265;</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mo>&#x000D7;</mml:mo><mml:mi>R</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x0211D;<sub>&#x02265;0</sub> is the set of non-negative real numbers. In AO-ADMM, each <bold>G</bold><sup>(<italic>l</italic>)</sup> is updated by sub-optimization using ADMM. We used the original implementation of AO-ADMM by Huang<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref> with some modifications to adapt it to the TD problem. In the proposed method, we plugged in the multiplicative update (MU) [<xref ref-type="bibr" rid="B11">11</xref>] and hierarchical alternating least squares (HALS) [<xref ref-type="bibr" rid="B89">89</xref>] as the LS-based NNCPD modules.</p>
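<p>For reference, the MU rule plugged in here takes the standard least-squares multiplicative form; the following is a minimal numpy sketch of one update of the first NNCPD factor (the function name and unfolding convention are ours):</p>
<preformat>
import numpy as np

def mu_update(V1, G1, G2, G3, tiny=1e-12):
    """One multiplicative update (MU) of the first NNCPD factor; it keeps
    G1 non-negative whenever G1 and the target unfolding V1 are non-negative."""
    R = G1.shape[1]
    M = (G3[:, None, :] * G2[None, :, :]).reshape(-1, R)   # Khatri-Rao product
    return G1 * (V1 @ M) / (G1 @ (M.T @ M) + tiny)
</preformat>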
<p><xref ref-type="fig" rid="F4">Figure 4</xref> and <xref ref-type="table" rid="T2">Table 2</xref> show the comparison of the convergence behaviors between the proposed method and the AO-ADMM. We experimented with weak and strong noise setting under three types of noise. The value displayed below each plot represents the signal-to-noise ratio (SNR) of noisy measurements. Although the MU and HALS used in plug-and-play are not pure least squares minimizers, we can see that they successively converged. In the early stages of optimization, the objective function decreased faster with AO-ADMM, but there were no significant differences in the convergence speed. When comparing the time to convergence, AO-ADMM was slightly faster to minimize &#x02113;<sub>2</sub> loss, while the proposed method was slightly faster to minimize &#x02113;<sub>1</sub> loss and generalized KL divergence.</p>
<fig position="float" id="F4">
<label>Figure 4</label>
<caption><p>Comparison of the convergence behavior of the AO-ADMM and the proposed method. Non-negative CP decomposition is performed under three loss functions. Observations are varied with two different noise levels.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fams-11-1594873-g0004.tif">
<alt-text>Six line graphs depict the performance of different algorithms under weak and strong noise conditions. Each row represents noise level: the top for weak noise and the bottom for strong noise. The columns represent different minimization techniques: &#x02113;2, &#x02113;1 and KL minimization. Three algorithms are compared: AO-ADMM (red), ADMM-MM(MU) (green), and ADMM-MM(HALS) (blue). The y-axis indicates the objective function value, and the x-axis shows time. Decibel levels are noted under each graph.</alt-text>
</graphic>
</fig>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Comparison of the time required for the objective function to converge in noise removal using AO-ADMM and ADMM-MM.</p></caption>
<table frame="box" rules="all">
<thead>
<tr>
<th/>
<th/>
<th valign="top" align="center"><bold>AO-ADMM</bold></th>
<th valign="top" align="center"><bold>ADMM-MM (MU)</bold></th>
<th valign="top" align="center"><bold>ADMM-MM (HALS)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" rowspan="2">&#x02113;<sub>2</sub> loss</td>
<td valign="top" align="left">Weak noise</td>
<td valign="top" align="center"><bold>30.15</bold></td>
<td valign="top" align="center">172.8</td>
<td valign="top" align="center">74.70</td>
</tr>
<tr>
<td valign="top" align="left">Strong noise</td>
<td valign="top" align="center"><bold>15.62</bold></td>
<td valign="top" align="center">36.75</td>
<td valign="top" align="center">27.40</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="2">&#x02113;<sub>1</sub> loss</td>
<td valign="top" align="left">Weak noise</td>
<td valign="top" align="center">153.1</td>
<td valign="top" align="center"><bold>50.20</bold></td>
<td valign="top" align="center">41.46</td>
</tr>
<tr>
<td valign="top" align="left">Strong noise</td>
<td valign="top" align="center">27.31</td>
<td valign="top" align="center">38.35</td>
<td valign="top" align="center"><bold>14.98</bold></td>
</tr>
<tr>
<td valign="top" align="left" rowspan="2">KL divergence</td>
<td valign="top" align="left">Weak noise</td>
<td valign="top" align="center">86.36</td>
<td valign="top" align="center">123.7</td>
<td valign="top" align="center"><bold>48.27</bold></td>
</tr>
<tr>
<td valign="top" align="left">Strong noise</td>
<td valign="top" align="center">31.67</td>
<td valign="top" align="center">38.45</td>
<td valign="top" align="center"><bold>23.76</bold></td>
</tr></tbody>
</table>
<table-wrap-foot>
<p>The best computational times are highlighted in bold.</p>
</table-wrap-foot>
</table-wrap>
<p>It should be noted that it is difficult to make a fair comparison of the convergence of the AO-ADMM and ADMM-MM algorithms. AO-ADMM has an AO main-iteration and an ADMM sub-iteration for the sub-problems, making it a double-loop algorithm. In this study, 10 sub-iterations were performed, but changing this number may change the convergence results. Moreover, the convergence behavior changes significantly depending on the parameter &#x003B2; in ADMM, and the appropriate &#x003B2; value differs depending on the task, the loss function, and the model.</p>
</sec>
<sec>
<title>5.1.3 Various tensor decomposition models</title>
<p>Here, we apply the proposed ADMM-MM algorithm to four types of TD models: CP, Tucker, TT, and TR decompositions. The standard ALS algorithm is used for CP and TR decompositions [<xref ref-type="bibr" rid="B42">42</xref>, <xref ref-type="bibr" rid="B43">43</xref>, <xref ref-type="bibr" rid="B51">51</xref>], and orthogonalized ALS is used for Tucker and TT decompositions [<xref ref-type="bibr" rid="B48">48</xref>, <xref ref-type="bibr" rid="B50">50</xref>].</p>
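<p>For reference, a minimal NumPy sketch of the standard ALS solver that is plug-and-played for a third-order CP decomposition is given below; the unfolding conventions and helper names are illustrative.</p>
<preformat preformat-type="code">
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product: rows indexed by (i, j) pairs
    I, R = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def cp_als(X, R, n_iter=50, seed=0):
    # Plain ALS for a third-order CP decomposition X = [[A, B, C]]
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    for _ in range(n_iter):
        # Each factor solves a linear LS problem given the other two
        A = X.reshape(I, J * K) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.moveaxis(X, 1, 0).reshape(J, I * K) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.moveaxis(X, 2, 0).reshape(K, I * J) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
</preformat>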
<p><xref ref-type="fig" rid="F5">Figure 5</xref> shows the selected results, and various tensor reconstruction settings for all TD models can be succinctly optimized by the proposed ADMM-MM algorithm.</p>
<fig position="float" id="F5">
<label>Figure 5</label>
<caption><p>Comparison of cost function convergence in various TD models across different design matrices and loss functions.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fams-11-1594873-g0005.tif">
<alt-text>Four line graphs comparing tensor decomposition methods, labeled CP, Tucker, TT, and TR, across iterations using different objective functions. The first graph shows noise with KL divergence; the second shows missing data with L1-loss; the third shows blur with L2-loss; the fourth shows downsampling with L1-loss. Each graph displays objective function values decreasing over iterations.</alt-text>
</graphic>
</fig>
</sec>
<sec>
<title>5.1.4 Sensitivity to the hyperparameter &#x003B2;</title>
<p>Here, we show how the convergence behavior of the ADMM-MM algorithm differs with respect to the value of the hyperparameter &#x003B2;. In this experiment, the tensor completion task was solved using the ALS algorithm of CP decomposition with Tikhonov regularization. Since &#x003B2; is a hyperparameter of the ADMM part, two settings were examined: minimizing the &#x02113;<sub>1</sub> loss and minimizing the KL divergence. <xref ref-type="fig" rid="F6">Figure 6</xref> shows the convergence behaviors of the loss function obtained with various values of &#x003B2;. Similar trends were obtained for both loss functions: when &#x003B2; is large, the optimization is stable but convergence is slow; making &#x003B2; smaller speeds up convergence, but if it is too small, the optimization becomes unstable.</p>
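<p>To make the role of &#x003B2; concrete, the following sketch shows one ADMM iteration for an &#x02113;<sub>1</sub> data term under a generic splitting <italic>z</italic> = <bold>Ax</bold> &#x02212; <bold>b</bold>, where &#x003B2; enters as the inverse of the soft-threshold. The variable names and update order are illustrative assumptions, not the exact implementation.</p>
<preformat preformat-type="code">
import numpy as np

def soft_threshold(v, tau):
    # Proximal mapping of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_l1_step(x, z, u, A, b, beta, ls_solver):
    # x-update: the plug-and-played LS-based TD algorithm solves
    # min_x ||A x - (b + z - u)||^2 under the low-rank model
    x = ls_solver(A, b + z - u)
    # z-update: soft-thresholding with threshold 1/beta; a smaller beta
    # gives larger steps (faster but potentially unstable convergence)
    z = soft_threshold(A @ x - b + u, 1.0 / beta)
    # dual update
    u = u + A @ x - b - z
    return x, z, u
</preformat>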
<fig position="float" id="F6">
<label>Figure 6</label>
<caption><p>Convergence behaviors with respect to the hyperparameter &#x003B2;. <bold>(a)</bold> &#x02113;<sub>1</sub> loss. <bold>(b)</bold> KL divergence.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fams-11-1594873-g0006.tif">
<alt-text>Graphs comparing loss functions over iterations for different values of beta. The left graph shows l1-loss, while the right graph displays KL divergence. Both graphs depict loss decreasing over time, with varying convergence rates based on beta values. The legend indicates lines for six beta values: 0.0001, 0.001, 0.01, 0.1, 1, and 10.</alt-text>
</graphic>
</fig>
</sec>
</sec>
<sec>
<title>5.2 Image processing applications</title>
<p>In this experiment, we demonstrate that the proposed algorithm connects various TD models with various image processing tasks.</p>
<sec>
<title>5.2.1 Color image reconstruction and computed tomography</title>
<p>Here, we show the results of color image reconstruction and computed tomography using the proposed ADMM-MM algorithm. An RGB image named &#x0201C;facade&#x0201D; is used for three image-inpainting tasks under three different types of noise (tasks 1, 2, and 3) and for an image deblurring task under sparse noise (task 4). An artificial low-rank tensor of size 128 &#x000D7; 128 &#x000D7; 3 is used for computed tomography under Poisson noise (task 5).</p>
<p>We apply various TD models to various image reconstruction tasks to demonstrate the potential of the proposed method. We used six TD models: CP decomposition (CPD), Tucker decomposition (TKD), TR decomposition (TRD) [<xref ref-type="bibr" rid="B51">51</xref>], tensor nuclear norm regularization (tSVD) [<xref ref-type="bibr" rid="B65">65</xref>], NNCP decomposition (NNCPD) [<xref ref-type="bibr" rid="B12">12</xref>], and smooth PARAFAC (SPC) [<xref ref-type="bibr" rid="B34">34</xref>]. tSVD performs tensor nuclear norm regularization using singular value thresholding. NNCPD is CP decomposition with non-negative constraints on the factor matrices, each of which is updated by a multiplicative update rule. SPC is CP decomposition with smoothness constraints on the factor matrices. Although SPC was originally proposed for LS-based tensor completion, our framework extends it to the &#x02113;<sub>1</sub> loss and the KL divergence with an arbitrary design matrix <bold>A</bold>.</p>
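<p>Because each loss enters the algorithm only through its proximal mapping, extending a model such as SPC to another loss amounts to swapping a single closed-form update. The following is a minimal sketch of the three proximal mappings under their standard definitions; the scaling conventions are assumptions for illustration.</p>
<preformat preformat-type="code">
import numpy as np

def prox_l2(v, b, beta):
    # argmin_z 0.5*||z - b||^2 + (beta/2)*||z - v||^2
    return (b + beta * v) / (1.0 + beta)

def prox_l1(v, b, beta):
    # argmin_z ||z - b||_1 + (beta/2)*||z - v||^2
    d = v - b
    return b + np.sign(d) * np.maximum(np.abs(d) - 1.0 / beta, 0.0)

def prox_kl(v, b, beta):
    # argmin_z sum(z - b*log z) + (beta/2)*||z - v||^2 over z > 0
    # (positive root of the quadratic beta*z^2 + (1 - beta*v)*z - b = 0)
    t = beta * v - 1.0
    return (t + np.sqrt(t * t + 4.0 * beta * b)) / (2.0 * beta)
</preformat>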
<p><xref ref-type="fig" rid="F7">Figure 7</xref> shows the results obtained by six TD models in image processing tasks: tensor completion under Gaussian noise (task 1), tensor completion under sparse noise (task 2), tensor completion under Poisson noise (task 3), de-blurring under sparse noise (task 4), and computed tomography under Poisson noise (task 5). We can see that the proposed method allows to plug-and-play many TD models for application to a variety of image processing tasks.</p>
<fig position="float" id="F7">
<label>Figure 7</label>
<caption><p>Reconstruction of various degraded images using the proposed method with different tensor decomposition approaches.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fams-11-1594873-g0007.tif">
<alt-text>A grid of images demonstrates different tasks related to image restoration or processing. Rows represent tasks labeled task one through task five. Columns are labeled original, observed, CPD, TKD, TRD, tSVD, NNCPD, and SPC, displaying variations in quality and clarity. Task one through task four depict building facades with varying degrees of noise and restoration. Task five features colored shapes and patterns, illustrating abstract transformations and reconstructions.</alt-text>
</graphic>
</fig>
</sec>
<sec>
<title>5.2.2 Application to light field image recovery</title>
<p>Here, we apply the proposed algorithm to the problem of light field image restoration as a case study. The light field image used is a fourth-order tensor named &#x0201C;vinyl&#x0201D; with size (128, 128, 3, 81).<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref> An algorithm for light field image restoration under the hybrid model of Tucker and TT decompositions, named fast tensor train nuclear norm (FTTNN),<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref> has been studied [<xref ref-type="bibr" rid="B109">109</xref>]. The task is to restore the original image from an image in which 20% of the pixels are randomly selected and overwritten with random values.</p>
<p>Our framework allows us to apply different kinds of TD models to this task and compare them with the existing algorithm. <xref ref-type="table" rid="T3">Table 3</xref> shows the results of the comparison with the PSNR, RSE, and SSIM metrics. We applied TKD, TTD, TRD, CPD, NNCPD, and SPC to this task by using the proposed ADMM-MM algorithm. In particular, we were able to demonstrate a high level of performance with the constrained CPD models (i.e., NNCPD and SPC) in light field image recovery.</p>
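<p>For reference, the PSNR and RSE values in <xref ref-type="table" rid="T3">Table 3</xref> can be computed as in the following sketch under their common definitions; the peak value is an assumption, and SSIM is omitted because it requires a windowed implementation.</p>
<preformat preformat-type="code">
import numpy as np

def psnr(x_true, x_rec, peak=1.0):
    # Peak signal-to-noise ratio in dB
    mse = np.mean((x_true - x_rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def rse(x_true, x_rec):
    # Relative squared error: ||x_rec - x_true||_F / ||x_true||_F
    return np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
</preformat>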
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Comparison of TD models with PSNR, RSE, and SSIM metrics in light field image recovery.</p></caption>
<table frame="box" rules="all">
<thead>
<tr>
<th valign="top" align="left"><bold>Method</bold></th>
<th valign="top" align="center"><bold>PSNR [dB]</bold></th>
<th valign="top" align="center"><bold>RSE</bold></th>
<th valign="top" align="center"><bold>SSIM</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">FTTNN [<xref ref-type="bibr" rid="B109">109</xref>]</td>
<td valign="top" align="center">42.05</td>
<td valign="top" align="center">0.0247</td>
<td valign="top" align="center">0.9923</td>
</tr>
<tr>
<td valign="top" align="left">TKD (by ADMM-MM)</td>
<td valign="top" align="center">42.74</td>
<td valign="top" align="center">0.0229</td>
<td valign="top" align="center">0.9946</td>
</tr>
<tr>
<td valign="top" align="left">TTD (by ADMM-MM)</td>
<td valign="top" align="center">39.99</td>
<td valign="top" align="center">0.0314</td>
<td valign="top" align="center">0.9925</td>
</tr>
<tr>
<td valign="top" align="left">TRD (by ADMM-MM)</td>
<td valign="top" align="center">45.66</td>
<td valign="top" align="center">0.0163</td>
<td valign="top" align="center">0.9968</td>
</tr>
<tr>
<td valign="top" align="left">CPD (by ADMM-MM)</td>
<td valign="top" align="center">45.76</td>
<td valign="top" align="center">0.0161</td>
<td valign="top" align="center">0.9948</td>
</tr>
<tr>
<td valign="top" align="left">NNCPD (by ADMM-MM)</td>
<td valign="top" align="center">47.14</td>
<td valign="top" align="center">0.0138</td>
<td valign="top" align="center">0.9978</td>
</tr>
<tr>
<td valign="top" align="left">SPC (by ADMM-MM)</td>
<td valign="top" align="center">48.98</td>
<td valign="top" align="center">0.0111</td>
<td valign="top" align="center">0.9973</td>
</tr></tbody>
</table>
</table-wrap>
</sec>
</sec>
</sec>
<sec sec-type="conclusions" id="s6">
<title>6 Conclusion</title>
<p>In this study, we proposed a versatile tensor reconstruction framework that plug-and-plays various LS-based TD algorithms and applies them to various applications. This framework is very practical because many TD models are initially studied on the basis of least squares. A newly proposed TD algorithm can be plug-and-played and operated with any design matrix and with at least three loss functions. In addition, any loss function that has a proximal mapping can be easily introduced.</p>
<p>In experiments, we demonstrated the effectiveness of the proposed method compared with existing gradient-based optimization algorithms and AO-ADMM. Although the convergence of the proposed algorithm has not been established theoretically, we experimentally confirmed that it successfully optimizes various problems, models, and hyperparameter settings. It was also demonstrated to be useful for a wide range of image processing applications.</p>
<p>In this study, we plug-and-played various TD algorithms and confirmed their effectiveness, but it would be meaningful to theoretically characterize the class of tensor decomposition algorithms that can be plug-and-played. Furthermore, some properties, such as convergence, remain unclear, so continued investigation is necessary. In addition, it would also be worthwhile to study ways to accelerate the convergence of the algorithm.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="s7">
<title>Data availability statement</title>
<p>Publicly available datasets were analyzed in this study. This data can be found here: <ext-link ext-link-type="uri" xlink:href="https://github.com/yokotatsuya/PnP-Tensor-Decomposition">https://github.com/yokotatsuya/PnP-Tensor-Decomposition</ext-link>.</p>
</sec>
<sec sec-type="author-contributions" id="s8">
<title>Author contributions</title>
<p>MM: Writing &#x02013; review &#x00026; editing, Writing &#x02013; original draft. HH: Writing &#x02013; review &#x00026; editing. TY: Writing &#x02013; original draft, Writing &#x02013; review &#x00026; editing.</p>
</sec>
<sec sec-type="funding-information" id="s9">
<title>Funding</title>
<p>The author(s) declare that financial support was received for the research and/or publication of this article. This work was partially supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant 23K28109.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="ai-statement" id="s10">
<title>Generative AI statement</title>
<p>The author(s) declare that Gen AI was used in the creation of this manuscript. We used generative AI to proofread the English in the manuscript.</p>
<p>Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.</p>
</sec>
<sec sec-type="disclaimer" id="s11">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<fn-group>
<fn id="fn0001"><p><sup>1</sup><ext-link ext-link-type="uri" xlink:href="https://www.cise.ufl.edu/&#x0007E;kejun/code.html">https://www.cise.ufl.edu/&#x0007E;kejun/code.html</ext-link></p></fn>
<fn id="fn0002"><p><sup>2</sup>Data is available online: <ext-link ext-link-type="uri" xlink:href="https://lightfield-analysis.uni-konstanz.de/">https://lightfield-analysis.uni-konstanz.de/</ext-link>.</p></fn>
<fn id="fn0003"><p><sup>3</sup>Code is available online: <ext-link ext-link-type="uri" xlink:href="https://github.com/ynqiu/fast-TTRPCA">https://github.com/ynqiu/fast-TTRPCA</ext-link>.</p></fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<label>1.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yokota</surname> <given-names>T</given-names></name> <name><surname>Caiafa</surname> <given-names>CF</given-names></name> <name><surname>Zhao</surname> <given-names>Q</given-names></name></person-group>. <article-title>Tensor methods for low-level vision</article-title>. In:<person-group person-group-type="editor"><name><surname>Liu</surname> <given-names>Y</given-names></name></person-group>, editor. <source>Tensors for Data Processing: Theory, Methods, and Applications</source>. Academic Press Inc Elsevier Science (<year>2021</year>). p. <fpage>371</fpage>&#x02013;<lpage>425</lpage>. <pub-id pub-id-type="doi">10.1016/B978-0-12-824447-0.00017-0</pub-id></citation>
</ref>
<ref id="B2">
<label>2.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cichocki</surname> <given-names>A</given-names></name> <name><surname>Zdunek</surname> <given-names>R</given-names></name> <name><surname>Phan</surname> <given-names>AH</given-names></name> <name><surname>Amari</surname> <given-names>SI</given-names></name></person-group>. <source>Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation</source>. New York: John Wiley &#x00026; Sons, Ltd. (<year>2009</year>). <pub-id pub-id-type="doi">10.1002/9780470747278</pub-id></citation>
</ref>
<ref id="B3">
<label>3.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>L</given-names></name> <name><surname>Li</surname> <given-names>Y</given-names></name> <name><surname>Li</surname> <given-names>Z</given-names></name></person-group>. <article-title>Efficient missing data imputing for traffic flow by considering temporal and spatial dependence</article-title>. <source>Transport Res Part C</source>. (<year>2013</year>) <volume>34</volume>:<fpage>108</fpage>&#x02013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1016/j.trc.2013.05.008</pub-id></citation>
</ref>
<ref id="B4">
<label>4.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>H</given-names></name> <name><surname>Ahmad</surname> <given-names>F</given-names></name> <name><surname>Vorobyov</surname> <given-names>S</given-names></name> <name><surname>Porikli</surname> <given-names>F</given-names></name></person-group>. <article-title>Tensor decompositions in wireless communications and MIMO radar</article-title>. <source>IEEE J Sel Top Signal Process</source>. (<year>2021</year>) <volume>15</volume>:<fpage>438</fpage>&#x02013;<lpage>53</lpage>. <pub-id pub-id-type="doi">10.1109/JSTSP.2021.3061937</pub-id></citation>
</ref>
<ref id="B5">
<label>5.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kalev</surname> <given-names>A</given-names></name> <name><surname>Kosut</surname> <given-names>RL</given-names></name> <name><surname>Deutsch</surname> <given-names>IH</given-names></name></person-group>. <article-title>Quantum tomography protocols with positivity are compressed sensing protocols</article-title>. <source>NPJ Quant Inf</source> . (<year>2015</year>) <volume>1</volume>:<fpage>1</fpage>&#x02013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1038/npjqi.2015.18</pub-id></citation>
</ref>
<ref id="B6">
<label>6.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kyrillidis</surname> <given-names>A</given-names></name> <name><surname>Kalev</surname> <given-names>A</given-names></name> <name><surname>Park</surname> <given-names>D</given-names></name> <name><surname>Bhojanapalli</surname> <given-names>S</given-names></name> <name><surname>Caramanis</surname> <given-names>C</given-names></name> <name><surname>Sanghavi</surname> <given-names>S</given-names></name></person-group>. <article-title>Provable compressed sensing quantum state tomography via non-convex methods</article-title>. <source>NPJ Quant Inf</source> . (<year>2018</year>) <volume>4</volume>:<fpage>36</fpage>. <pub-id pub-id-type="doi">10.1038/s41534-018-0080-4</pub-id></citation>
</ref>
<ref id="B7">
<label>7.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Qin</surname> <given-names>Z</given-names></name> <name><surname>Jameson</surname> <given-names>C</given-names></name> <name><surname>Gong</surname> <given-names>Z</given-names></name> <name><surname>Wakin</surname> <given-names>MB</given-names></name> <name><surname>Zhu</surname> <given-names>Z</given-names></name></person-group>. <article-title>Quantum state tomography for matrix product density operators</article-title>. <source>IEEE Trans Inf Theory</source>. (<year>2024</year>) <volume>70</volume>:<fpage>5030</fpage>&#x02013;<lpage>56</lpage>. <pub-id pub-id-type="doi">10.1109/TIT.2024.3360951</pub-id></citation>
</ref>
<ref id="B8">
<label>8.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kolda</surname> <given-names>TG</given-names></name> <name><surname>Bader</surname> <given-names>BW</given-names></name></person-group>. <article-title>Tensor decompositions and applications</article-title>. <source>SIAM Rev</source>. (<year>2009</year>) <volume>51</volume>:<fpage>455</fpage>&#x02013;<lpage>500</lpage>. <pub-id pub-id-type="doi">10.1137/07070111X</pub-id></citation>
</ref>
<ref id="B9">
<label>9.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cichocki</surname> <given-names>A</given-names></name> <name><surname>Mandic</surname> <given-names>D</given-names></name> <name><surname>De Lathauwer</surname> <given-names>L</given-names></name> <name><surname>Zhou</surname> <given-names>G</given-names></name> <name><surname>Zhao</surname> <given-names>Q</given-names></name> <name><surname>Caiafa</surname> <given-names>C</given-names></name> <etal/></person-group>. <article-title>Tensor decompositions for signal processing applications: from two-way to multiway component analysis</article-title>. <source>IEEE Signal Process Mag</source>. (<year>2015</year>) <volume>32</volume>:<fpage>145</fpage>&#x02013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.1109/MSP.2013.2297439</pub-id></citation>
</ref>
<ref id="B10">
<label>10.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yokota</surname> <given-names>T</given-names></name></person-group>. <article-title>Very basics of tensors with graphical notations: unfolding, calculations, and decompositions</article-title>. <source>arXiv preprint arXiv:241116094</source> (<year>2024</year>).</citation>
</ref>
<ref id="B11">
<label>11.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>D</given-names></name> <name><surname>Seung</surname> <given-names>HS</given-names></name></person-group>. <article-title>Algorithms for non-negative matrix factorization</article-title>. In: <source>Advances in Neural Information Processing Systems</source> (<year>2000</year>). p. <fpage>13</fpage>.</citation>
</ref>
<ref id="B12">
<label>12.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cichocki</surname> <given-names>A</given-names></name> <name><surname>Zdunek</surname> <given-names>R</given-names></name> <name><surname>Amari</surname> <given-names>Si</given-names></name></person-group>. <article-title>Nonnegative matrix and tensor factorization [lecture notes]</article-title>. <source>IEEE Signal Process Mag</source>. (<year>2007</year>) <volume>25</volume>:<fpage>142</fpage>&#x02013;<lpage>5</lpage>. <pub-id pub-id-type="doi">10.1109/MSP.2008.4408452</pub-id></citation>
</ref>
<ref id="B13">
<label>13.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gillis</surname> <given-names>N</given-names></name></person-group>. <source>Nonnegative Matrix Factorization</source>. <publisher-loc>London</publisher-loc>: <publisher-name>SIAM</publisher-name> (<year>2020</year>). <pub-id pub-id-type="doi">10.1137/1.9781611976410</pub-id></citation>
</ref>
<ref id="B14">
<label>14.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Papalexakis</surname> <given-names>EE</given-names></name> <name><surname>Faloutsos</surname> <given-names>C</given-names></name> <name><surname>Sidiropoulos</surname> <given-names>ND</given-names></name></person-group>. <article-title>Parcube: Sparse parallelizable tensor decompositions</article-title>. In: <source>Proceedings of European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases</source>. <publisher-loc>Springer</publisher-loc> (<year>2012</year>). p. <fpage>521</fpage>&#x02013;<lpage>536</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-642-33460-3_39</pub-id><pub-id pub-id-type="pmid">30908262</pub-id></citation></ref>
<ref id="B15">
<label>15.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Yokota</surname> <given-names>T</given-names></name> <name><surname>Cichocki</surname> <given-names>A</given-names></name></person-group>. <article-title>Multilinear tensor rank estimation via sparse Tucker decomposition</article-title>. In: <source>Proceedings of International Conference on Soft Computing and Intelligent Systems (SCIS) and International Symposium on Advanced Intelligent Systems (ISIS)</source>. <publisher-loc>IEEE</publisher-loc> (<year>2014</year>). p. <fpage>478</fpage>&#x02013;<lpage>483</lpage>. <pub-id pub-id-type="doi">10.1109/SCIS-ISIS.2014.7044685</pub-id></citation>
</ref>
<ref id="B16">
<label>16.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Caiafa</surname> <given-names>CF</given-names></name> <name><surname>Cichocki</surname> <given-names>A</given-names></name></person-group>. <article-title>Block sparse representations of tensors using Kronecker bases</article-title>. In: <source>Proceedings of ICASSP</source>. <publisher-loc>IEEE</publisher-loc> (<year>2012</year>). p. <fpage>2709</fpage>&#x02013;<lpage>2712</lpage>. <pub-id pub-id-type="doi">10.1109/ICASSP.2012.6288476</pub-id><pub-id pub-id-type="pmid">23020110</pub-id></citation></ref>
<ref id="B17">
<label>17.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Caiafa</surname> <given-names>CF</given-names></name> <name><surname>Cichocki</surname> <given-names>A</given-names></name></person-group>. <article-title>Computing sparse representations of multidimensional signals using Kronecker bases</article-title>. <source>Neural Comput</source>. (<year>2013</year>) <volume>25</volume>:<fpage>186</fpage>&#x02013;<lpage>220</lpage>. <pub-id pub-id-type="doi">10.1162/NECO_a_00385</pub-id><pub-id pub-id-type="pmid">23020110</pub-id></citation></ref>
<ref id="B18">
<label>18.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Essid</surname> <given-names>S</given-names></name> <name><surname>F&#x000E9;votte</surname> <given-names>C</given-names></name></person-group>. <article-title>Smooth nonnegative matrix factorization for unsupervised audiovisual document structuring</article-title>. <source>IEEE Trans Multim</source>. (<year>2012</year>) <volume>15</volume>:<fpage>415</fpage>&#x02013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1109/TMM.2012.2228474</pub-id></citation>
</ref>
<ref id="B19">
<label>19.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yokota</surname> <given-names>T</given-names></name> <name><surname>Zdunek</surname> <given-names>R</given-names></name> <name><surname>Cichocki</surname> <given-names>A</given-names></name> <name><surname>Yamashita</surname> <given-names>Y</given-names></name></person-group>. <article-title>Smooth nonnegative matrix and tensor factorizations for robust multi-way data analysis</article-title>. <source>Signal Proc</source>. (<year>2015</year>) <volume>113</volume>:<fpage>234</fpage>&#x02013;<lpage>49</lpage>. <pub-id pub-id-type="doi">10.1016/j.sigpro.2015.02.003</pub-id></citation>
</ref>
<ref id="B20">
<label>20.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yokota</surname> <given-names>T</given-names></name> <name><surname>Kawai</surname> <given-names>K</given-names></name> <name><surname>Sakata</surname> <given-names>M</given-names></name> <name><surname>Kimura</surname> <given-names>Y</given-names></name> <name><surname>Hontani</surname> <given-names>H</given-names></name></person-group>. <article-title>Dynamic PET image reconstruction using nonnegative matrix factorization incorporated with deep image prior</article-title>. In: <source>Proceedings of ICCV</source> (<year>2019</year>). p. <fpage>3126</fpage>&#x02013;<lpage>3135</lpage>. <pub-id pub-id-type="doi">10.1109/ICCV.2019.00322</pub-id></citation>
</ref>
<ref id="B21">
<label>21.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Takayama</surname> <given-names>H</given-names></name> <name><surname>Yokota</surname> <given-names>T</given-names></name></person-group>. <article-title>A new model for tensor completion: smooth convolutional tensor factorization</article-title>. <source>IEEE Access</source>. (<year>2023</year>) <volume>11</volume>:<fpage>67526</fpage>&#x02013;<lpage>39</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2023.3291744</pub-id></citation>
</ref>
<ref id="B22">
<label>22.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cai</surname> <given-names>D</given-names></name> <name><surname>He</surname> <given-names>X</given-names></name> <name><surname>Han</surname> <given-names>J</given-names></name> <name><surname>Huang</surname> <given-names>TS</given-names></name></person-group>. <article-title>Graph regularized nonnegative matrix factorization for data representation</article-title>. <source>IEEE Trans Pattern Anal Mach Intell</source>. (<year>2010</year>) <volume>33</volume>:<fpage>1548</fpage>&#x02013;<lpage>60</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2010.231</pub-id><pub-id pub-id-type="pmid">21173440</pub-id></citation></ref>
<ref id="B23">
<label>23.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yin</surname> <given-names>M</given-names></name> <name><surname>Gao</surname> <given-names>J</given-names></name> <name><surname>Lin</surname> <given-names>Z</given-names></name></person-group>. <article-title>Laplacian regularized low-rank representation and its applications</article-title>. <source>IEEE Trans Pattern Anal Mach Intell</source>. (<year>2015</year>) <volume>38</volume>:<fpage>504</fpage>&#x02013;<lpage>17</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2015.2462360</pub-id><pub-id pub-id-type="pmid">27046494</pub-id></citation></ref>
<ref id="B24">
<label>24.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>X</given-names></name> <name><surname>Ng</surname> <given-names>MK</given-names></name> <name><surname>Cong</surname> <given-names>G</given-names></name> <name><surname>Ye</surname> <given-names>Y</given-names></name> <name><surname>Wu</surname> <given-names>Q</given-names></name></person-group>. <article-title>MR-NTD manifold regularization nonnegative Tucker decomposition for tensor data dimension reduction and representation</article-title>. <source>IEEE Trans Neural Netw Learn Syst</source>. (<year>2016</year>) <volume>28</volume>:<fpage>1787</fpage>&#x02013;<lpage>800</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2016.2545400</pub-id><pub-id pub-id-type="pmid">28727548</pub-id></citation></ref>
<ref id="B25">
<label>25.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cai</surname> <given-names>JF</given-names></name> <name><surname>Candes</surname> <given-names>EJ</given-names></name> <name><surname>Shen</surname> <given-names>Z</given-names></name></person-group>. <article-title>A singular value thresholding algorithm for matrix completion</article-title>. <source>SIAM J Optimiz</source>. (<year>2010</year>) <volume>20</volume>:<fpage>1956</fpage>&#x02013;<lpage>82</lpage>. <pub-id pub-id-type="doi">10.1137/080738970</pub-id></citation>
</ref>
<ref id="B26">
<label>26.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Acar</surname> <given-names>E</given-names></name> <name><surname>Dunlavy</surname> <given-names>DM</given-names></name> <name><surname>Kolda</surname> <given-names>TG</given-names></name> <name><surname>M&#x000F8;rup</surname> <given-names>M</given-names></name></person-group>. <article-title>Scalable tensor factorizations for incomplete data</article-title>. <source>Chemometr Intell Lab Syst</source>. (<year>2011</year>) <volume>106</volume>:<fpage>41</fpage>&#x02013;<lpage>56</lpage>. <pub-id pub-id-type="doi">10.1016/j.chemolab.2010.08.004</pub-id><pub-id pub-id-type="pmid">31251750</pub-id></citation></ref>
<ref id="B27">
<label>27.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>J</given-names></name> <name><surname>Musialski</surname> <given-names>P</given-names></name> <name><surname>Wonka</surname> <given-names>P</given-names></name> <name><surname>Ye</surname> <given-names>J</given-names></name></person-group>. <article-title>Tensor completion for estimating missing values in visual data</article-title>. <source>IEEE Trans Pattern Anal Mach Intell</source>. (<year>2013</year>) <volume>35</volume>:<fpage>208</fpage>&#x02013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2012.39</pub-id><pub-id pub-id-type="pmid">22271823</pub-id></citation></ref>
<ref id="B28">
<label>28.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kressner</surname> <given-names>D</given-names></name> <name><surname>Steinlechner</surname> <given-names>M</given-names></name> <name><surname>Vandereycken</surname> <given-names>B</given-names></name></person-group>. <article-title>Low-rank tensor completion by Riemannian optimization</article-title>. <source>BIT Numer Mathem</source>. (<year>2014</year>) <volume>54</volume>:<fpage>447</fpage>&#x02013;<lpage>68</lpage>. <pub-id pub-id-type="doi">10.1007/s10543-013-0455-z</pub-id></citation>
</ref>
<ref id="B29">
<label>29.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>Y</given-names></name> <name><surname>Hao</surname> <given-names>R</given-names></name> <name><surname>Yin</surname> <given-names>W</given-names></name> <name><surname>Su</surname> <given-names>Z</given-names></name></person-group>. <article-title>Parallel matrix factorization for low-rank tensor completion</article-title>. <source>Inverse Problems Imag</source>. (<year>2015</year>) <volume>9</volume>:<fpage>601</fpage>&#x02013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.3934/ipi.2015.9.601</pub-id></citation>
</ref>
<ref id="B30">
<label>30.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>YD</given-names></name> <name><surname>Choi</surname> <given-names>S</given-names></name></person-group>. <article-title>Weighted nonnegative matrix factorization</article-title>. In: <source>Proceedings of ICASSP</source>. <publisher-loc>IEEE</publisher-loc> (<year>2009</year>). p. <fpage>1541</fpage>&#x02013;<lpage>1544</lpage>. <pub-id pub-id-type="doi">10.1109/ICASSP.2009.4959890</pub-id></citation>
</ref>
<ref id="B31">
<label>31.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Nielsen</surname> <given-names>SFV</given-names></name> <name><surname>M&#x000F8;rup</surname> <given-names>M</given-names></name></person-group>. <article-title>Non-negative tensor factorization with missing data for the modeling of gene expressions in the human brain</article-title>. In: <source>Proceedings of IEEE International Workshop on Machine Learning for Signal Processing</source>. <publisher-loc>IEEE</publisher-loc> (<year>2014</year>). p. <fpage>1</fpage>&#x02013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1109/MLSP.2014.6958919</pub-id></citation>
</ref>
<ref id="B32">
<label>32.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>M&#x000F8;rup</surname> <given-names>M</given-names></name> <name><surname>Hansen</surname> <given-names>LK</given-names></name> <name><surname>Arnfred</surname> <given-names>SM</given-names></name></person-group>. <article-title>Algorithms for sparse nonnegative Tucker decompositions</article-title>. <source>Neural Comput</source>. (<year>2008</year>) <volume>20</volume>:<fpage>2112</fpage>&#x02013;<lpage>2131</lpage>. <pub-id pub-id-type="doi">10.1162/neco.2008.11-06-407</pub-id><pub-id pub-id-type="pmid">18386984</pub-id></citation></ref>
<ref id="B33">
<label>33.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhou</surname> <given-names>G</given-names></name> <name><surname>Cichocki</surname> <given-names>A</given-names></name> <name><surname>Zhao</surname> <given-names>Q</given-names></name> <name><surname>Xie</surname> <given-names>S</given-names></name></person-group>. <article-title>Efficient nonnegative Tucker decompositions: Algorithms and uniqueness</article-title>. <source>IEEE Trans Image Proc</source>. (<year>2015</year>) <volume>24</volume>:<fpage>4990</fpage>&#x02013;<lpage>5003</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2015.2478396</pub-id><pub-id pub-id-type="pmid">26390455</pub-id></citation></ref>
<ref id="B34">
<label>34.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yokota</surname> <given-names>T</given-names></name> <name><surname>Zhao</surname> <given-names>Q</given-names></name> <name><surname>Cichocki</surname> <given-names>A</given-names></name></person-group>. <article-title>Smooth PARAFAC decomposition for tensor completion</article-title>. <source>IEEE Trans Signal Proc</source>. (<year>2016</year>) <volume>64</volume>:<fpage>5423</fpage>&#x02013;<lpage>36</lpage>. <pub-id pub-id-type="doi">10.1109/TSP.2016.2586759</pub-id><pub-id pub-id-type="pmid">34143740</pub-id></citation></ref>
<ref id="B35">
<label>35.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ghalamkari</surname> <given-names>K</given-names></name> <name><surname>Sugiyama</surname> <given-names>M</given-names></name></person-group>. <article-title>Fast rank-1 NMF for missing data with KL divergence</article-title>. In: <source>Proceedings of International Conference on Artificial Intelligence and Statistics</source>. <publisher-loc>PMLR</publisher-loc> (<year>2022</year>). p. <fpage>2927</fpage>&#x02013;<lpage>2940</lpage>.</citation>
</ref>
<ref id="B36">
<label>36.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Durand</surname> <given-names>A</given-names></name> <name><surname>Roueff</surname> <given-names>F</given-names></name> <name><surname>Jicquel</surname> <given-names>JM</given-names></name> <name><surname>Paul</surname> <given-names>N</given-names></name></person-group>. <article-title>New penalized criteria for smooth non-negative tensor factorization with missing entries</article-title>. <source>IEEE Trans Signal Proc</source>. (<year>2024</year>). <pub-id pub-id-type="doi">10.1109/TSP.2024.3392357</pub-id></citation>
</ref>
<ref id="B37">
<label>37.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lyu</surname> <given-names>C</given-names></name> <name><surname>Lu</surname> <given-names>QL</given-names></name> <name><surname>Wu</surname> <given-names>X</given-names></name> <name><surname>Antoniou</surname> <given-names>C</given-names></name></person-group>. <article-title>Tucker factorization-based tensor completion for robust traffic data imputation</article-title>. <source>Transp Res Part C</source>. (<year>2024</year>) <volume>160</volume>:<fpage>104502</fpage>. <pub-id pub-id-type="doi">10.1016/j.trc.2024.104502</pub-id></citation>
</ref>
<ref id="B38">
<label>38.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boyd</surname> <given-names>S</given-names></name> <name><surname>Parikh</surname> <given-names>N</given-names></name> <name><surname>Chu</surname> <given-names>E</given-names></name> <name><surname>Peleato</surname> <given-names>B</given-names></name> <name><surname>Eckstein</surname> <given-names>J</given-names></name></person-group>. <article-title>Distributed optimization and statistical learning via the alternating direction method of multipliers</article-title>. <source>Found Trends Mach Learn</source>. (<year>2011</year>) <volume>3</volume>:<fpage>1</fpage>&#x02013;<lpage>122</lpage>. <pub-id pub-id-type="doi">10.1561/2200000016</pub-id></citation>
</ref>
<ref id="B39">
<label>39.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hunter</surname> <given-names>DR</given-names></name> <name><surname>Lange</surname> <given-names>K</given-names></name></person-group>. <article-title>A tutorial on MM algorithms</article-title>. <source>Am Stat</source>. (<year>2004</year>) <volume>58</volume>:<fpage>30</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1198/0003130042836</pub-id><pub-id pub-id-type="pmid">12611515</pub-id></citation></ref>
<ref id="B40">
<label>40.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>Y</given-names></name> <name><surname>Babu</surname> <given-names>P</given-names></name> <name><surname>Palomar</surname> <given-names>DP</given-names></name></person-group>. <article-title>Majorization-minimization algorithms in signal processing, communications, and machine learning</article-title>. <source>IEEE Trans Signal Proc</source>. (<year>2016</year>) <volume>65</volume>:<fpage>794</fpage>&#x02013;<lpage>816</lpage>. <pub-id pub-id-type="doi">10.1109/TSP.2016.2601299</pub-id></citation>
</ref>
<ref id="B41">
<label>41.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hitchcock</surname> <given-names>FL</given-names></name></person-group>. <article-title>The expression of a tensor or a polyadic as a sum of products</article-title>. <source>J Mathem Phys</source>. (<year>1927</year>) <volume>6</volume>:<fpage>164</fpage>&#x02013;<lpage>89</lpage>. <pub-id pub-id-type="doi">10.1002/sapm192761164</pub-id></citation>
</ref>
<ref id="B42">
<label>42.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Carroll</surname> <given-names>JD</given-names></name> <name><surname>Chang</surname> <given-names>JJ</given-names></name></person-group>. <article-title>Analysis of individual differences in multidimensional scaling via an N-way generalization of &#x0201C;Eckart-Young&#x0201D; decomposition</article-title>. <source>Psychometrika</source>. (<year>1970</year>) <volume>35</volume>:<fpage>283</fpage>&#x02013;<lpage>319</lpage>. <pub-id pub-id-type="doi">10.1007/BF02310791</pub-id></citation>
</ref>
<ref id="B43">
<label>43.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Harshman</surname> <given-names>RA</given-names></name></person-group>. <article-title>Foundations of the PARAFAC procedure: models and conditions for an &#x0201C;explanatory&#x0201D; multimodal factor analysis</article-title>. <source>UCLA Work Paper Phonet</source>. (<year>1970</year>) <volume>16</volume>:<fpage>1</fpage>&#x02013;<lpage>84</lpage>.</citation>
</ref>
<ref id="B44">
<label>44.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tucker</surname> <given-names>LR</given-names></name></person-group>. <article-title>Implications of factor analysis of three-way matrices for measurement of change</article-title>. <source>Probl Measur Change</source>. (<year>1963</year>) <volume>12</volume>:<fpage>122</fpage>&#x02013;<lpage>137</lpage>.</citation>
</ref>
<ref id="B45">
<label>45.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tucker</surname> <given-names>LR</given-names></name></person-group>. <article-title>The extension of factor analysis to three-dimensional matrices</article-title>. <source>Contr Mathem Psychol</source>. (<year>1964</year>) <volume>51</volume>:<fpage>109</fpage>&#x02013;<lpage>127</lpage>.</citation>
</ref>
<ref id="B46">
<label>46.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tucker</surname> <given-names>LR</given-names></name></person-group>. <article-title>Some mathematical notes on three-mode factor analysis</article-title>. <source>Psychometrika</source>. (<year>1966</year>) <volume>31</volume>:<fpage>279</fpage>&#x02013;<lpage>311</lpage>. <pub-id pub-id-type="doi">10.1007/BF02289464</pub-id><pub-id pub-id-type="pmid">5221127</pub-id></citation></ref>
<ref id="B47">
<label>47.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kroonenberg</surname> <given-names>PM</given-names></name> <name><surname>De Leeuw</surname> <given-names>J</given-names></name></person-group>. <article-title>Principal component analysis of three-mode data by means of alternating least squares algorithms</article-title>. <source>Psychometrika</source>. (<year>1980</year>) <volume>45</volume>:<fpage>69</fpage>&#x02013;<lpage>97</lpage>. <pub-id pub-id-type="doi">10.1007/BF02293599</pub-id></citation>
</ref>
<ref id="B48">
<label>48.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Lathauwer</surname> <given-names>L</given-names></name> <name><surname>De Moor</surname> <given-names>B</given-names></name> <name><surname>Vandewalle</surname> <given-names>J</given-names></name></person-group>. <article-title>On the best rank-1 and rank-(<italic>r</italic>_1, <italic>r</italic>_2, <italic>r</italic>_<italic>n</italic>) approximation of higher-order tensors</article-title>. <source>SIAM J Matrix Analy Applic</source>. (<year>2000</year>) <volume>21</volume>:<fpage>1324</fpage>&#x02013;<lpage>42</lpage>. <pub-id pub-id-type="doi">10.1137/S0895479898346995</pub-id></citation>
</ref>
<ref id="B49">
<label>49.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oseledets</surname> <given-names>IV</given-names></name></person-group>. <article-title>Tensor-train decomposition</article-title>. <source>SIAM J Sci Comput</source>. (<year>2011</year>) <volume>33</volume>:<fpage>2295</fpage>&#x02013;<lpage>317</lpage>. <pub-id pub-id-type="doi">10.1137/090752286</pub-id></citation>
</ref>
<ref id="B50">
<label>50.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Holtz</surname> <given-names>S</given-names></name> <name><surname>Rohwedder</surname> <given-names>T</given-names></name> <name><surname>Schneider</surname> <given-names>R</given-names></name></person-group>. <article-title>The alternating linear scheme for tensor optimization in the tensor train format</article-title>. <source>SIAM J Sci Comput</source>. (<year>2012</year>) <volume>34</volume>:<fpage>A683</fpage>&#x02013;<lpage>713</lpage>. <pub-id pub-id-type="doi">10.1137/100818893</pub-id></citation>
</ref>
<ref id="B51">
<label>51.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>Q</given-names></name> <name><surname>Zhou</surname> <given-names>G</given-names></name> <name><surname>Xie</surname> <given-names>S</given-names></name> <name><surname>Zhang</surname> <given-names>L</given-names></name> <name><surname>Cichocki</surname> <given-names>A</given-names></name></person-group>. <article-title>Tensor ring decomposition</article-title>. <source>arXiv preprint arXiv:16060.5535</source> (<year>2016</year>).</citation>
</ref>
<ref id="B52">
<label>52.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>YD</given-names></name> <name><surname>Choi</surname> <given-names>S</given-names></name></person-group>. <article-title>Nonnegative Tucker decomposition</article-title>. In: <source>Proceedings of CVPR</source>. <publisher-loc>IEEE</publisher-loc> (<year>2007</year>). p. <fpage>1</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2007.383405</pub-id></citation>
</ref>
<ref id="B53">
<label>53.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gillis</surname> <given-names>N</given-names></name> <name><surname>Glineur</surname> <given-names>F</given-names></name></person-group>. <article-title>Accelerated multiplicative updates and hierarchical ALS algorithms for nonnegative matrix factorization</article-title>. <source>Neural Comput</source>. (<year>2012</year>) <volume>24</volume>:<fpage>1085</fpage>&#x02013;<lpage>105</lpage>. <pub-id pub-id-type="doi">10.1162/NECO_a_00256</pub-id><pub-id pub-id-type="pmid">22168561</pub-id></citation></ref>
<ref id="B54">
<label>54.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Venkatakrishnan</surname> <given-names>SV</given-names></name> <name><surname>Bouman</surname> <given-names>CA</given-names></name> <name><surname>Wohlberg</surname> <given-names>B</given-names></name></person-group>. <article-title>Plug-and-play priors for model based reconstruction</article-title>. In: <source>Proceedings of GlobalSIP</source>. <publisher-loc>IEEE</publisher-loc> (<year>2013</year>). p. <fpage>945</fpage>&#x02013;<lpage>948</lpage>. <pub-id pub-id-type="doi">10.1109/GlobalSIP.2013.6737048</pub-id></citation>
</ref>
<ref id="B55">
<label>55.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kiers</surname> <given-names>HA</given-names></name></person-group>. <article-title>Towards a standardized notation and terminology in multiway analysis</article-title>. <source>J Chemometrics</source>. (<year>2000</year>) <volume>14</volume>:<fpage>105</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1002/1099-128X(200005/06)14:3&#x0003C;105::AID-CEM582&#x0003E;3.0.CO;2-I</pub-id></citation>
</ref>
<ref id="B56">
<label>56.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Lathauwer</surname> <given-names>L</given-names></name></person-group>. <article-title>Decompositions of a higher-order tensor in block terms&#x02013;Part I: Lemmas for partitioned matrices</article-title>. <source>SIAM J Matrix Analy Applic</source>. (<year>2008</year>) <volume>30</volume>:<fpage>1022</fpage>&#x02013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1137/060661685</pub-id></citation>
</ref>
<ref id="B57">
<label>57.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Lathauwer</surname> <given-names>L</given-names></name></person-group>. <article-title>Decompositions of a higher-order tensor in block terms&#x02013;Part II: Definitions and uniqueness</article-title>. <source>SIAM J Matrix Analy Applic</source>. (<year>2008</year>) <volume>30</volume>:<fpage>1033</fpage>&#x02013;<lpage>66</lpage>. <pub-id pub-id-type="doi">10.1137/070690729</pub-id></citation>
</ref>
<ref id="B58">
<label>58.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Lathauwer</surname> <given-names>L</given-names></name> <name><surname>Nion</surname> <given-names>D</given-names></name></person-group>. <article-title>Decompositions of a higher-order tensor in block terms&#x02013;Part III: Alternating least squares algorithms</article-title>. <source>SIAM J Matrix Analy Applic</source>. (<year>2008</year>) <volume>30</volume>:<fpage>1067</fpage>&#x02013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1137/070690730</pub-id></citation>
</ref>
<ref id="B59">
<label>59.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Mechelen</surname> <given-names>I</given-names></name> <name><surname>Smilde</surname> <given-names>AK</given-names></name></person-group>. <article-title>A generic linked-mode decomposition model for data fusion</article-title>. <source>Chemometr Intell Lab Syst</source>. (<year>2010</year>) <volume>104</volume>:<fpage>83</fpage>&#x02013;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1016/j.chemolab.2010.04.012</pub-id><pub-id pub-id-type="pmid">24795156</pub-id></citation></ref>
<ref id="B60">
<label>60.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yokoya</surname> <given-names>N</given-names></name> <name><surname>Yairi</surname> <given-names>T</given-names></name> <name><surname>Iwasaki</surname> <given-names>A</given-names></name></person-group>. <article-title>Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion</article-title>. <source>IEEE Trans Geosci Rem Sens</source>. (<year>2011</year>) <volume>50</volume>:<fpage>528</fpage>&#x02013;<lpage>37</lpage>. <pub-id pub-id-type="doi">10.1109/TGRS.2011.2161320</pub-id></citation>
</ref>
<ref id="B61">
<label>61.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lahat</surname> <given-names>D</given-names></name> <name><surname>Adali</surname> <given-names>T</given-names></name> <name><surname>Jutten</surname> <given-names>C</given-names></name></person-group>. <article-title>Multimodal data fusion: an overview of methods, challenges, and prospects</article-title>. <source>Proc IEEE</source>. (<year>2015</year>) <volume>103</volume>:<fpage>1449</fpage>&#x02013;<lpage>77</lpage>. <pub-id pub-id-type="doi">10.1109/JPROC.2015.2460697</pub-id></citation>
</ref>
<ref id="B62">
<label>62.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grasedyck</surname> <given-names>L</given-names></name></person-group>. <article-title>Hierarchical singular value decomposition of tensors</article-title>. <source>SIAM J Matrix Analy Applic</source>. (<year>2010</year>) <volume>31</volume>:<fpage>2029</fpage>&#x02013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1137/090764189</pub-id></citation>
</ref>
<ref id="B63">
<label>63.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>ZC</given-names></name> <name><surname>Huang</surname> <given-names>TZ</given-names></name> <name><surname>Deng</surname> <given-names>LJ</given-names></name> <name><surname>Dou</surname> <given-names>HX</given-names></name> <name><surname>Meng</surname> <given-names>D</given-names></name></person-group>. <article-title>Tensor wheel decomposition and its tensor completion application</article-title>. In: <source>Advances in Neural Information Processing Systems</source> (<year>2022</year>). p. <fpage>27008</fpage>&#x02013;<lpage>27020</lpage>.</citation>
</ref>
<ref id="B64">
<label>64.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zheng</surname> <given-names>YB</given-names></name> <name><surname>Huang</surname> <given-names>TZ</given-names></name> <name><surname>Zhao</surname> <given-names>XL</given-names></name> <name><surname>Zhao</surname> <given-names>Q</given-names></name> <name><surname>Jiang</surname> <given-names>TX</given-names></name></person-group>. <article-title>Fully-connected tensor network decomposition and its application to higher-order tensor completion</article-title>. In: <source>Proceedings of the AAAI Conference on Artificial Intelligence</source> (<year>2021</year>). p. <fpage>11071</fpage>&#x02013;<lpage>11078</lpage>. <pub-id pub-id-type="doi">10.1609/aaai.v35i12.17321</pub-id></citation>
</ref>
<ref id="B65">
<label>65.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Z</given-names></name> <name><surname>Ely</surname> <given-names>G</given-names></name> <name><surname>Aeron</surname> <given-names>S</given-names></name> <name><surname>Hao</surname> <given-names>N</given-names></name> <name><surname>Kilmer</surname> <given-names>M</given-names></name></person-group>. <article-title>Novel methods for multilinear data completion and de-noising based on tensor-SVD</article-title>. In: <source>Proceedings of CVPR</source> (<year>2014</year>). p. <fpage>3842</fpage>&#x02013;<lpage>3849</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2014.485</pub-id></citation>
</ref>
<ref id="B66">
<label>66.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yokota</surname> <given-names>T</given-names></name> <name><surname>Erem</surname> <given-names>B</given-names></name> <name><surname>Guler</surname> <given-names>S</given-names></name> <name><surname>Warfield</surname> <given-names>SK</given-names></name> <name><surname>Hontani</surname> <given-names>H</given-names></name></person-group>. <article-title>Missing slice recovery for tensors using a low-rank model in embedded space</article-title>. In: <source>Proceedings of CVPR</source> (<year>2018</year>). p. <fpage>8251</fpage>&#x02013;<lpage>8259</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2018.00861</pub-id></citation>
</ref>
<ref id="B67">
<label>67.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yamamoto</surname> <given-names>R</given-names></name> <name><surname>Hontani</surname> <given-names>H</given-names></name> <name><surname>Imakura</surname> <given-names>A</given-names></name> <name><surname>Yokota</surname> <given-names>T</given-names></name></person-group>. <article-title>Fast algorithm for low-rank tensor completion in delay-embedded space</article-title>. In: <source>Proceedings of CVPR</source> (<year>2022</year>). p. <fpage>2048</fpage>&#x02013;<lpage>2056</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR52688.2022.00210</pub-id></citation>
</ref>
<ref id="B68">
<label>68.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sedighin</surname> <given-names>F</given-names></name> <name><surname>Cichocki</surname> <given-names>A</given-names></name> <name><surname>Yokota</surname> <given-names>T</given-names></name> <name><surname>Shi</surname> <given-names>Q</given-names></name></person-group>. <article-title>Matrix and tensor completion in multiway delay embedded space using tensor train, with application to signal reconstruction</article-title>. <source>IEEE Signal Process Lett</source>. (<year>2020</year>) <volume>27</volume>:<fpage>810</fpage>&#x02013;<lpage>4</lpage>. <pub-id pub-id-type="doi">10.1109/LSP.2020.2990313</pub-id></citation>
</ref>
<ref id="B69">
<label>69.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sedighin</surname> <given-names>F</given-names></name> <name><surname>Cichocki</surname> <given-names>A</given-names></name></person-group>. <article-title>Image completion in embedded space using multistage tensor ring decomposition</article-title>. <source>Front Artif Intell</source>. (<year>2021</year>) <volume>4</volume>:<fpage>687176</fpage>. <pub-id pub-id-type="doi">10.3389/frai.2021.687176</pub-id><pub-id pub-id-type="pmid">34485898</pub-id></citation></ref>
<ref id="B70">
<label>70.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Candes</surname> <given-names>EJ</given-names></name> <name><surname>Recht</surname> <given-names>B</given-names></name></person-group>. <article-title>Exact matrix completion via convex optimization</article-title>. <source>Found Comput Mathem</source>. (<year>2009</year>) <volume>9</volume>:<fpage>717</fpage>&#x02013;<lpage>72</lpage>. <pub-id pub-id-type="doi">10.1007/s10208-009-9045-5</pub-id></citation>
</ref>
<ref id="B71">
<label>71.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gillis</surname> <given-names>N</given-names></name> <name><surname>Glineur</surname> <given-names>F</given-names></name></person-group>. <article-title>Low-rank matrix approximation with weights or missing data is NP-hard</article-title>. <source>SIAM J Matrix Analy Applic</source>. (<year>2011</year>) <volume>32</volume>:<fpage>1149</fpage>&#x02013;<lpage>65</lpage>. <pub-id pub-id-type="doi">10.1137/110820361</pub-id></citation>
</ref>
<ref id="B72">
<label>72.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hamon</surname> <given-names>R</given-names></name> <name><surname>Emiya</surname> <given-names>V</given-names></name> <name><surname>F&#x000E9;votte</surname> <given-names>C</given-names></name></person-group>. <article-title>Convex nonnegative matrix factorization with missing data</article-title>. In: <source>Proceedings of IEEE International Workshop on Machine Learning for Signal Processing</source>. <publisher-loc>IEEE</publisher-loc> (<year>2016</year>). p. <fpage>1</fpage>&#x02013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1109/MLSP.2016.7738910</pub-id><pub-id pub-id-type="pmid">25523040</pub-id></citation></ref>
<ref id="B73">
<label>73.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>K</given-names></name> <name><surname>Sidiropoulos</surname> <given-names>ND</given-names></name> <name><surname>Liavas</surname> <given-names>AP</given-names></name></person-group>. <article-title>A flexible and efficient algorithmic framework for constrained matrix and tensor factorization</article-title>. <source>IEEE Trans Signal Proc</source>. (<year>2016</year>) <volume>64</volume>:<fpage>5052</fpage>&#x02013;<lpage>65</lpage>. <pub-id pub-id-type="doi">10.1109/TSP.2016.2576427</pub-id></citation>
</ref>
<ref id="B74">
<label>74.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gandy</surname> <given-names>S</given-names></name> <name><surname>Recht</surname> <given-names>B</given-names></name> <name><surname>Yamada</surname> <given-names>I</given-names></name></person-group>. <article-title>Tensor completion and low-n-rank tensor recovery via convex optimization</article-title>. <source>Inverse Probl</source>. (<year>2011</year>) <volume>27</volume>:<fpage>25010</fpage>. <pub-id pub-id-type="doi">10.1088/0266-5611/27/2/025010</pub-id></citation>
</ref>
<ref id="B75">
<label>75.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>YL</given-names></name> <name><surname>Hsu</surname> <given-names>CT</given-names></name> <name><surname>Liao</surname> <given-names>HYM</given-names></name></person-group>. <article-title>Simultaneous tensor decomposition and completion using factor priors</article-title>. <source>IEEE Trans Pattern Anal Mach Intell</source>. (<year>2013</year>) <volume>36</volume>:<fpage>577</fpage>&#x02013;<lpage>91</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2013.164</pub-id><pub-id pub-id-type="pmid">24457512</pub-id></citation></ref>
<ref id="B76">
<label>76.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bengua</surname> <given-names>JA</given-names></name> <name><surname>Phien</surname> <given-names>HN</given-names></name> <name><surname>Tuan</surname> <given-names>HD</given-names></name> <name><surname>Do</surname> <given-names>MN</given-names></name></person-group>. <article-title>Efficient tensor completion for color image and video recovery: low-rank tensor train</article-title>. <source>IEEE Trans Image Proc</source>. (<year>2017</year>) <volume>26</volume>:<fpage>2466</fpage>&#x02013;<lpage>79</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2017.2672439</pub-id><pub-id pub-id-type="pmid">28237929</pub-id></citation></ref>
<ref id="B77">
<label>77.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Yu</surname> <given-names>J</given-names></name> <name><surname>Zhou</surname> <given-names>G</given-names></name> <name><surname>Zhao</surname> <given-names>Q</given-names></name> <name><surname>Xie</surname> <given-names>K</given-names></name></person-group>. <article-title>An effective tensor completion method based on multi-linear tensor ring decomposition</article-title>. In: <source>Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)</source>. <publisher-loc>IEEE</publisher-loc> (<year>2018</year>). p. <fpage>1344</fpage>&#x02013;<lpage>1349</lpage>. <pub-id pub-id-type="doi">10.23919/APSIPA.2018.8659492</pub-id></citation>
</ref>
<ref id="B78">
<label>78.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Srebro</surname> <given-names>N</given-names></name> <name><surname>Jaakkola</surname> <given-names>T</given-names></name></person-group>. <article-title>Weighted low-rank approximations</article-title>. In: <source>Proceedings of ICML</source> (<year>2003</year>). p. <fpage>720</fpage>&#x02013;<lpage>727</lpage>.</citation>
</ref>
<ref id="B79">
<label>79.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tomasi</surname> <given-names>G</given-names></name> <name><surname>Bro</surname> <given-names>R</given-names></name></person-group>. <article-title>PARAFAC and missing values</article-title>. <source>Chemometr Intell Lab Syst</source>. (<year>2005</year>) <volume>75</volume>:<fpage>163</fpage>&#x02013;<lpage>80</lpage>. <pub-id pub-id-type="doi">10.1016/j.chemolab.2004.07.003</pub-id></citation>
</ref>
<ref id="B80">
<label>80.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Filipovic</surname> <given-names>M</given-names></name> <name><surname>Jukic</surname> <given-names>A</given-names></name></person-group>. <article-title>Tucker factorization with missing data with application to low-n-rank tensor completion</article-title>. <source>Multidimens Syst Signal Process</source>. (<year>2015</year>) <volume>26</volume>:<fpage>677</fpage>&#x02013;<lpage>92</lpage>. <pub-id pub-id-type="doi">10.1007/s11045-013-0269-9</pub-id><pub-id pub-id-type="pmid">24457512</pub-id></citation></ref>
<ref id="B81">
<label>81.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kasai</surname> <given-names>H</given-names></name> <name><surname>Mishra</surname> <given-names>B</given-names></name></person-group>. <article-title>Low-rank tensor completion: a Riemannian manifold preconditioning approach</article-title>. In: <source>Proceedings of ICML</source> (<year>2016</year>). p. <fpage>1012</fpage>&#x02013;<lpage>1021</lpage>.</citation>
</ref>
<ref id="B82">
<label>82.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dempster</surname> <given-names>AP</given-names></name> <name><surname>Laird</surname> <given-names>NM</given-names></name> <name><surname>Rubin</surname> <given-names>DB</given-names></name></person-group>. <article-title>Maximum likelihood from incomplete data via the EM algorithm</article-title>. <source>J R Stat Soc</source>. (<year>1977</year>) <volume>39</volume>:<fpage>1</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1111/j.2517-6161.1977.tb01600.x</pub-id></citation>
</ref>
<ref id="B83">
<label>83.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lange</surname> <given-names>K</given-names></name> <name><surname>Hunter</surname> <given-names>DR</given-names></name> <name><surname>Yang</surname> <given-names>I</given-names></name></person-group>. <article-title>Optimization transfer using surrogate objective functions</article-title>. <source>J Comput Graph Stat</source>. (<year>2000</year>) <volume>9</volume>:<fpage>1</fpage>&#x02013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1080/10618600.2000.10474858</pub-id></citation>
</ref>
<ref id="B84">
<label>84.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>Q</given-names></name> <name><surname>Zhou</surname> <given-names>G</given-names></name> <name><surname>Zhang</surname> <given-names>L</given-names></name> <name><surname>Cichocki</surname> <given-names>A</given-names></name> <name><surname>Amari</surname> <given-names>SI</given-names></name></person-group>. <article-title>Bayesian robust tensor factorization for incomplete multiway data</article-title>. <source>IEEE Trans Neural Netw Learn Syst</source>. (<year>2015</year>) <volume>27</volume>:<fpage>736</fpage>&#x02013;<lpage>48</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2015.2423694</pub-id><pub-id pub-id-type="pmid">26068876</pub-id></citation></ref>
<ref id="B85">
<label>85.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>M</given-names></name> <name><surname>Ding</surname> <given-names>C</given-names></name></person-group>. <article-title>Robust Tucker tensor decomposition for effective image representation</article-title>. In: <source>Proceedings of ICCV</source> (<year>2013</year>). p. <fpage>2448</fpage>&#x02013;<lpage>2455</lpage>. <pub-id pub-id-type="doi">10.1109/ICCV.2013.304</pub-id></citation>
</ref>
<ref id="B86">
<label>86.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>H</given-names></name> <name><surname>Liu</surname> <given-names>Y</given-names></name> <name><surname>Long</surname> <given-names>Z</given-names></name> <name><surname>Zhu</surname> <given-names>C</given-names></name></person-group>. <article-title>Robust low-rank tensor ring completion</article-title>. <source>IEEE Trans Comput Imag</source>. (<year>2020</year>) <volume>6</volume>:<fpage>1117</fpage>&#x02013;<lpage>26</lpage>. <pub-id pub-id-type="doi">10.1109/TCI.2020.3006718</pub-id><pub-id pub-id-type="pmid">34478384</pub-id></citation></ref>
<ref id="B87">
<label>87.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lu</surname> <given-names>C</given-names></name> <name><surname>Feng</surname> <given-names>J</given-names></name> <name><surname>Chen</surname> <given-names>Y</given-names></name> <name><surname>Liu</surname> <given-names>W</given-names></name> <name><surname>Lin</surname> <given-names>Z</given-names></name> <name><surname>Yan</surname> <given-names>S</given-names></name></person-group>. <article-title>Tensor robust principal component analysis with a new tensor nuclear norm</article-title>. <source>IEEE Trans Pattern Anal Mach Intell</source>. (<year>2019</year>) <volume>42</volume>:<fpage>925</fpage>&#x02013;<lpage>38</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2019.2891760</pub-id><pub-id pub-id-type="pmid">30629495</pub-id></citation></ref>
<ref id="B88">
<label>88.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Uschmajew</surname> <given-names>A</given-names></name></person-group>. <article-title>Local convergence of the alternating least squares algorithm for canonical tensor approximation</article-title>. <source>SIAM J Matrix Analy Applic</source>. (<year>2012</year>) <volume>33</volume>:<fpage>639</fpage>&#x02013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1137/110843587</pub-id></citation>
</ref>
<ref id="B89">
<label>89.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cichocki</surname> <given-names>A</given-names></name> <name><surname>Zdunek</surname> <given-names>R</given-names></name> <name><surname>Amari</surname> <given-names>Si</given-names></name></person-group>. <article-title>Hierarchical ALS algorithms for nonnegative matrix and 3D tensor factorization</article-title>. In: <source>Proceedings of International Conference on Independent Component Analysis and Signal Separation</source>. <publisher-loc>Springer</publisher-loc> (<year>2007</year>). p. <fpage>169</fpage>&#x02013;<lpage>176</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-540-74494-8_22</pub-id></citation>
</ref>
<ref id="B90">
<label>90.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Combettes</surname> <given-names>PL</given-names></name> <name><surname>Pesquet</surname> <given-names>JC</given-names></name></person-group>. <article-title>Proximal splitting methods in signal processing</article-title>. In: <source>Fixed-point Algorithms for Inverse Problems in Science and Engineering</source>. (<year>2011</year>). p. <fpage>185</fpage>&#x02013;<lpage>212</lpage>. <pub-id pub-id-type="doi">10.1007/978-1-4419-9569-8_10</pub-id></citation>
</ref>
<ref id="B91">
<label>91.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Parikh</surname> <given-names>N</given-names></name> <name><surname>Boyd</surname> <given-names>S</given-names></name></person-group>. <article-title>Proximal algorithms</article-title>. <source>Found Trends Optim</source>. (<year>2014</year>) <volume>1</volume>:<fpage>127</fpage>&#x02013;<lpage>239</lpage>. <pub-id pub-id-type="doi">10.1561/2400000003</pub-id></citation>
</ref>
<ref id="B92">
<label>92.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lin</surname> <given-names>Z</given-names></name> <name><surname>Liu</surname> <given-names>R</given-names></name> <name><surname>Su</surname> <given-names>Z</given-names></name></person-group>. <article-title>Linearized alternating direction method with adaptive penalty for low-rank representation</article-title>. In: <source>Advances in Neural Information Processing Systems</source> (<year>2011</year>).</citation>
</ref>
<ref id="B93">
<label>93.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hien</surname> <given-names>LTK</given-names></name> <name><surname>Phan</surname> <given-names>DN</given-names></name> <name><surname>Gillis</surname> <given-names>N</given-names></name></person-group>. <article-title>Inertial alternating direction method of multipliers for non-convex non-smooth optimization</article-title>. <source>Comput Optim Appl</source>. (<year>2022</year>) <volume>83</volume>:<fpage>247</fpage>&#x02013;<lpage>85</lpage>. <pub-id pub-id-type="doi">10.1007/s10589-022-00394-8</pub-id></citation>
</ref>
<ref id="B94">
<label>94.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>CJ</given-names></name></person-group>. <article-title>On the convergence properties of the EM algorithm</article-title>. <source>Ann Stat</source>. (<year>1983</year>) <volume>2</volume>:<fpage>95</fpage>&#x02013;<lpage>103</lpage>.</citation>
</ref>
<ref id="B95">
<label>95.</label>
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Vaida</surname> <given-names>F</given-names></name></person-group>. <article-title>Parameter convergence for EM and MM algorithms</article-title>. <source>Statistica Sinica</source>. (<year>2005</year>) <volume>15</volume>:<fpage>831</fpage>&#x02013;<lpage>840</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www3.stat.sinica.edu.tw/statistica/J15N3/j15n316/j15n316.html">https://www3.stat.sinica.edu.tw/statistica/J15N3/j15n316/j15n316.html</ext-link></citation>
</ref>
<ref id="B96">
<label>96.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>Y</given-names></name> <name><surname>Yin</surname> <given-names>W</given-names></name></person-group>. <article-title>A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion</article-title>. <source>SIAM J Imaging Sci</source>. (<year>2013</year>) <volume>6</volume>:<fpage>1758</fpage>&#x02013;<lpage>89</lpage>. <pub-id pub-id-type="doi">10.1137/120887795</pub-id></citation>
</ref>
<ref id="B97">
<label>97.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Razaviyayn</surname> <given-names>M</given-names></name> <name><surname>Hong</surname> <given-names>M</given-names></name> <name><surname>Luo</surname> <given-names>ZQ</given-names></name></person-group>. <article-title>A unified convergence analysis of block successive minimization methods for nonsmooth optimization</article-title>. <source>SIAM J Optim</source>. (<year>2013</year>) <volume>23</volume>:<fpage>1126</fpage>&#x02013;<lpage>53</lpage>. <pub-id pub-id-type="doi">10.1137/120891009</pub-id></citation>
</ref>
<ref id="B98">
<label>98.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hong</surname> <given-names>M</given-names></name> <name><surname>Razaviyayn</surname> <given-names>M</given-names></name> <name><surname>Luo</surname> <given-names>ZQ</given-names></name> <name><surname>Pang</surname> <given-names>JS</given-names></name></person-group>. <article-title>A unified algorithmic framework for block-structured optimization involving big data: with applications in machine learning and signal processing</article-title>. <source>IEEE Signal Process Mag</source>. (<year>2015</year>) <volume>33</volume>:<fpage>57</fpage>&#x02013;<lpage>77</lpage>. <pub-id pub-id-type="doi">10.1109/MSP.2015.2481563</pub-id></citation>
</ref>
<ref id="B99">
<label>99.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eckstein</surname> <given-names>J</given-names></name> <name><surname>Bertsekas</surname> <given-names>DP</given-names></name></person-group>. <article-title>On the Douglas&#x02013;Rachford splitting method and the proximal point algorithm for maximal monotone operators</article-title>. <source>Mathem Progr</source>. (<year>1992</year>) <volume>55</volume>:<fpage>293</fpage>&#x02013;<lpage>318</lpage>. <pub-id pub-id-type="doi">10.1007/BF01581204</pub-id><pub-id pub-id-type="pmid">31057305</pub-id></citation></ref>
<ref id="B100">
<label>100.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>Y</given-names></name> <name><surname>Yin</surname> <given-names>W</given-names></name> <name><surname>Wen</surname> <given-names>Z</given-names></name> <name><surname>Zhang</surname> <given-names>Y</given-names></name></person-group>. <article-title>An alternating direction algorithm for matrix completion with nonnegative factors</article-title>. <source>Front Mathem China</source>. (<year>2012</year>) <volume>7</volume>:<fpage>365</fpage>&#x02013;<lpage>84</lpage>. <pub-id pub-id-type="doi">10.1007/s11464-012-0194-5</pub-id><pub-id pub-id-type="pmid">32287029</pub-id></citation></ref>
<ref id="B101">
<label>101.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Y</given-names></name> <name><surname>Yin</surname> <given-names>W</given-names></name> <name><surname>Zeng</surname> <given-names>J</given-names></name></person-group>. <article-title>Global convergence of ADMM in nonconvex nonsmooth optimization</article-title>. <source>J Sci Comput</source>. (<year>2019</year>) <volume>78</volume>:<fpage>29</fpage>&#x02013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.1007/s10915-018-0757-z</pub-id></citation>
</ref>
<ref id="B102">
<label>102.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ryu</surname> <given-names>E</given-names></name> <name><surname>Liu</surname> <given-names>J</given-names></name> <name><surname>Wang</surname> <given-names>S</given-names></name> <name><surname>Chen</surname> <given-names>X</given-names></name> <name><surname>Wang</surname> <given-names>Z</given-names></name> <name><surname>Yin</surname> <given-names>W</given-names></name></person-group>. <article-title>Plug-and-play methods provably converge with properly trained denoisers</article-title>. In: <source>Proceedings of ICML</source>. <publisher-loc>PMLR</publisher-loc> (<year>2019</year>). p. <fpage>5546</fpage>&#x02013;<lpage>5557</lpage>.</citation>
</ref>
<ref id="B103">
<label>103.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ono</surname> <given-names>S</given-names></name> <name><surname>Kasai</surname> <given-names>T</given-names></name></person-group>. <article-title>Efficient constrained tensor factorization by alternating optimization with primal-dual splitting</article-title>. In: <source>Proceedings of ICASSP</source>. <publisher-loc>IEEE</publisher-loc> (<year>2018</year>). p. <fpage>3379</fpage>&#x02013;<lpage>3383</lpage>. <pub-id pub-id-type="doi">10.1109/ICASSP.2018.8461790</pub-id></citation>
</ref>
<ref id="B104">
<label>104.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>DL</given-names></name> <name><surname>Fevotte</surname> <given-names>C</given-names></name></person-group>. <article-title>Alternating direction method of multipliers for non-negative matrix factorization with the beta-divergence</article-title>. In: <source>Proceedings of ICASSP</source>. <publisher-loc>IEEE</publisher-loc> (<year>2014</year>). p. <fpage>6201</fpage>&#x02013;<lpage>6205</lpage>. <pub-id pub-id-type="doi">10.1109/ICASSP.2014.6854796</pub-id><pub-id pub-id-type="pmid">36149991</pub-id></citation></ref>
<ref id="B105">
<label>105.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hajinezhad</surname> <given-names>D</given-names></name> <name><surname>Chang</surname> <given-names>TH</given-names></name> <name><surname>Wang</surname> <given-names>X</given-names></name> <name><surname>Shi</surname> <given-names>Q</given-names></name> <name><surname>Hong</surname> <given-names>M</given-names></name></person-group>. <article-title>Nonnegative matrix factorization using ADMM: Algorithm and convergence analysis</article-title>. In: <source>Proceedings of ICASSP</source>. <publisher-loc>IEEE</publisher-loc> (<year>2016</year>). p. <fpage>4742</fpage>&#x02013;<lpage>4746</lpage>. <pub-id pub-id-type="doi">10.1109/ICASSP.2016.7472577</pub-id><pub-id pub-id-type="pmid">37631765</pub-id></citation></ref>
<ref id="B106">
<label>106.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xue</surname> <given-names>J</given-names></name> <name><surname>Zhao</surname> <given-names>Y</given-names></name> <name><surname>Liao</surname> <given-names>W</given-names></name> <name><surname>Chan</surname> <given-names>JCW</given-names></name> <name><surname>Kong</surname> <given-names>SG</given-names></name></person-group>. <article-title>Enhanced sparsity prior model for low-rank tensor completion</article-title>. <source>IEEE Trans Neural Netw Learn Syst</source>. (<year>2019</year>) <volume>31</volume>:<fpage>4567</fpage>&#x02013;<lpage>81</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2019.2956153</pub-id><pub-id pub-id-type="pmid">31880566</pub-id></citation></ref>
<ref id="B107">
<label>107.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xue</surname> <given-names>J</given-names></name> <name><surname>Zhao</surname> <given-names>Y</given-names></name> <name><surname>Huang</surname> <given-names>S</given-names></name> <name><surname>Liao</surname> <given-names>W</given-names></name> <name><surname>Chan</surname> <given-names>JCW</given-names></name> <name><surname>Kong</surname> <given-names>SG</given-names></name></person-group>. <article-title>Multilayer sparsity-based tensor decomposition for low-rank tensor completion</article-title>. <source>IEEE Trans Neural Netw Learn Syst</source>. (<year>2021</year>) <volume>33</volume>:<fpage>6916</fpage>&#x02013;<lpage>30</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2021.3083931</pub-id><pub-id pub-id-type="pmid">34143740</pub-id></citation></ref>
<ref id="B108">
<label>108.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hong</surname> <given-names>D</given-names></name> <name><surname>Kolda</surname> <given-names>TG</given-names></name> <name><surname>Duersch</surname> <given-names>JA</given-names></name></person-group>. <article-title>Generalized canonical polyadic tensor decomposition</article-title>. <source>SIAM Rev</source>. (<year>2020</year>) <volume>62</volume>:<fpage>133</fpage>&#x02013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.1137/18M1203626</pub-id></citation>
</ref>
<ref id="B109">
<label>109.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Qiu</surname> <given-names>Y</given-names></name> <name><surname>Zhou</surname> <given-names>G</given-names></name> <name><surname>Huang</surname> <given-names>Z</given-names></name> <name><surname>Zhao</surname> <given-names>Q</given-names></name> <name><surname>Xie</surname> <given-names>S</given-names></name></person-group>. <article-title>Efficient tensor robust PCA under hybrid model of Tucker and tensor train</article-title>. <source>IEEE Signal Process Lett</source>. (<year>2022</year>) <volume>29</volume>:<fpage>627</fpage>&#x02013;<lpage>31</lpage>. <pub-id pub-id-type="doi">10.1109/LSP.2022.3143721</pub-id></citation>
</ref>
</ref-list>
</back>
</article>