<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Artif. Intell.</journal-id>
<journal-title>Frontiers in Artificial Intelligence</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Artif. Intell.</abbrev-journal-title>
<issn pub-type="epub">2624-8212</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/frai.2021.787534</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Artificial Intelligence</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Artificial Neural Network Based Non-linear Transformation of High-Frequency Returns for Volatility Forecasting</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>M&#x000FC;cher</surname> <given-names>Christian</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1490788/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Chair of Statistics and Econometrics, University of Freiburg</institution>, <addr-line>Freiburg</addr-line>, <country>Germany</country></aff>
<aff id="aff2"><sup>2</sup><institution>Graduate School of Decision Sciences, University of Konstanz</institution>, <addr-line>Konstanz</addr-line>, <country>Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Massimiliano Caporin, University of Padua, Italy</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Jan-Alexander Posth, Zurich University of Applied Sciences, Switzerland; Arindam Chaudhuri, Samsung R &#x00026; D Institute, India</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Christian M&#x000FC;cher <email>christian.muecher&#x00040;vwl.uni-freiburg.de</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Artificial Intelligence in Finance, a section of the journal Frontiers in Artificial Intelligence</p></fn></author-notes>
<pub-date pub-type="epub">
<day>11</day>
<month>02</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>4</volume>
<elocation-id>787534</elocation-id>
<history>
<date date-type="received">
<day>30</day>
<month>09</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>27</day>
<month>12</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2022 M&#x000FC;cher.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>M&#x000FC;cher</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>This paper uses Long Short Term Memory Recurrent Neural Networks to extract information from the intraday high-frequency returns to forecast daily volatility. Applied to the IBM stock, we find significant improvements in the forecasting performance of models that use this extracted information compared to the forecasts of models that omit the extracted information and some of the most popular alternative models. Furthermore, we find that extracting the information through Long Short Term Memory Recurrent Neural Networks is superior to two Mixed Data Sampling alternatives.</p></abstract>
<kwd-group>
<kwd>neural networks</kwd>
<kwd>forecasting</kwd>
<kwd>high-frequency data</kwd>
<kwd>realized volatility</kwd>
<kwd>mixed data sampling</kwd>
<kwd>long short term memory</kwd>
</kwd-group>
<counts>
<fig-count count="5"/>
<table-count count="4"/>
<equation-count count="47"/>
<ref-count count="62"/>
<page-count count="18"/>
<word-count count="13008"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Volatility, the time-varying centered second moment of a financial asset&#x00027;s returns, is crucial for measuring, forecasting, and evaluating financial risk. The many approaches to modeling and forecasting volatility fall into two main groups: models based on (squared) daily returns and models based on the Realized Volatility (RV) estimator. In the first group, volatility is treated as a latent variable and estimated from the model. Famous examples in this regard are, on the one hand, (G)ARCH models (Engle, <xref ref-type="bibr" rid="B26">1982</xref>; Bollerslev, <xref ref-type="bibr" rid="B17">1986</xref>) and their various extensions that treat volatility as conditionally observable. On the other hand, Stochastic Volatility models (Taylor, <xref ref-type="bibr" rid="B60">1986</xref>; Ruiz, <xref ref-type="bibr" rid="B53">1994</xref>) treat conditional volatility as a random variable and rely on filtering techniques for estimation and forecasting. While most models in this group capture stylized properties of financial data such as volatility clustering, long memory, and asymmetric reactions of volatility to positive and negative shocks, they generally perform worse in forecasting volatility than the models of the second group (Andersen et al., <xref ref-type="bibr" rid="B6">2004</xref>; Sizova, <xref ref-type="bibr" rid="B58">2011</xref>).</p>
<p>The availability of high-frequency (HF) intraday returns and the introduction of RV as an estimator of the integrated volatility over a day (Andersen et al., <xref ref-type="bibr" rid="B3">2001a</xref>,<xref ref-type="bibr" rid="B4">b</xref>; Barndorff-Nielsen and Shephard, <xref ref-type="bibr" rid="B11">2002a</xref>,<xref ref-type="bibr" rid="B12">b</xref>) led to the second group of models. Since RV gives a consistent ex-post estimate of volatility, the main focus of models in the second group is forecasting. Andersen et al. (<xref ref-type="bibr" rid="B3">2001a</xref>,<xref ref-type="bibr" rid="B4">b</xref>) and Barndorff-Nielsen and Shephard (<xref ref-type="bibr" rid="B11">2002a</xref>) find that RV and the logarithm of RV exhibit long memory: their autocorrelation functions show a hyperbolic decay, meaning that past shocks influence RV over very long horizons. The authors therefore propose forecasting volatility via fractionally integrated autoregressive moving average (ARFIMA) models to account for the long memory. The most prominent alternative to ARFIMA models for forecasting volatility based on RV is the Heterogeneous Autoregressive Model (HAR) by Corsi (<xref ref-type="bibr" rid="B23">2009</xref>). The HAR model approximates the long memory in the data through RV&#x00027;s daily, weekly, and monthly averages, which enter a linear model as explanatory variables to predict volatility. Corsi (<xref ref-type="bibr" rid="B23">2009</xref>) finds that the HAR model outperforms ARFIMA models in forecasting volatility. The HAR model is popular because of its good performance and ease of implementation (it can be estimated by simple OLS regression). Many extensions of the HAR model exist in the literature, such as the HAR with jumps model of Andersen et al. (<xref ref-type="bibr" rid="B2">2007</xref>), the Semivariance HAR of Patton and Sheppard (<xref ref-type="bibr" rid="B49">2015</xref>), or the HARQ of Bollerslev et al. (<xref ref-type="bibr" rid="B18">2016</xref>). However, the standard HAR model, both for the level and the logarithm of RV, remains a challenging benchmark to beat in applications on real financial data.</p>
<p>Artificial Neural Networks (ANNs) have become increasingly popular over the last decade, and various fields apply them for classification, prediction, and modeling tasks. Cybenko (<xref ref-type="bibr" rid="B24">1989</xref>) and Hornik et al. (<xref ref-type="bibr" rid="B40">1989</xref>) show the capability of Feed Forward Neural Networks (FNNs), fully connected ANNs with one hidden layer, to approximate any continuous function on a compact set arbitrarily well. Furthermore, Sch&#x000E4;fer and Zimmermann (<xref ref-type="bibr" rid="B56">2006</xref>) show that Recurrent Neural Networks (RNNs) can approximate any open dynamic system arbitrarily well. The popularity of ANNs is due, on the one hand, to these theoretical results; on the other hand, ANNs have been among the winning algorithms in various classification and forecasting competitions in recent years. RNNs combine the ability of ANNs to capture complex non-linear dependencies in the data with the ability to capture temporal relationships. Long Short Term Memory (LSTM) RNNs (Hochreiter and Schmidhuber, <xref ref-type="bibr" rid="B39">1997</xref>) are a type of RNN specifically designed to capture long memory in data. Their capacity to capture non-linear, long-term dependencies in the data makes them ideal candidates for modeling volatility.</p>
<p>This paper aims to use LSTMs to non-linearly transform the HF returns of a financial asset, observed within a day, into a daily, scalar variable and to use this variable to forecast volatility. Non-linear transformations of the HF returns are not novel per se: the RV estimator (the sum of a day&#x00027;s squared HF returns) is itself one particular non-linear transformation. We investigate whether volatility forecasts constructed solely from the ANN-based transformation of the HF returns differ from forecasts obtained through the past RVs. While the ANN transformation is very flexible in its functional form, the resulting sequence might not capture the long persistence in volatility the way the RV estimator does. However, this flexibility might capture other information that is useful for predicting volatility and that the RV estimator does not take into account, such as the sign of the HF returns or patterns of HF returns occurring over a day. We thus combine the two approaches and investigate whether the resulting model exhibits superior forecasting performance compared to models that rely on each measure alone.</p>
<p>An alternative approach to transforming the HF returns is the Mixed Data Sampling (MIDAS) approach of Ghysels et al. (<xref ref-type="bibr" rid="B30">2004</xref>). In MIDAS, the transformation is a weighted sum of the HF returns, where the weights are obtained non-linearly, e.g., from an Almon or a Beta Lag Polynomial (Ghysels et al., <xref ref-type="bibr" rid="B30">2004</xref>); the construction of the transformed HF measure itself, however, remains linear. We introduce a novel type of MIDAS model that obtains those weights through an LSTM cell.</p>
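<p>To make the MIDAS transformation concrete, the following is a minimal sketch of the Beta Lag Polynomial variant. The function names and the shape parameters <italic>theta1</italic> and <italic>theta2</italic> are illustrative assumptions (in an actual MIDAS application, the shape parameters are estimated from the data), not the paper&#x00027;s implementation.</p>

```python
import numpy as np

def beta_lag_weights(m, theta1=1.0, theta2=5.0):
    """Normalized Beta lag polynomial weights for m intraday lags.

    theta1 and theta2 are illustrative shape parameters; in a real
    MIDAS application they are estimated jointly with the model.
    """
    # Rescale lag indices into the open interval (0, 1)
    x = np.arange(1, m + 1) / (m + 1)
    w = x ** (theta1 - 1) * (1 - x) ** (theta2 - 1)
    return w / w.sum()  # weights sum to one

def midas_transform(hf_returns, theta1=1.0, theta2=5.0):
    """Scalar transformed measure: weighted sum of a day's HF returns."""
    r = np.asarray(hf_returns, dtype=float)
    w = beta_lag_weights(len(r), theta1, theta2)
    return float(w @ r)
```

Because the weights are normalized, the transformation of a constant return series returns that constant; only the weighting scheme, not the aggregation itself, is non-linear.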
<p>Although, as mentioned earlier, the RV estimator is itself a transformation of the HF returns, throughout the paper we use the terms transformed HF returns and transformed measure to refer to the scalar variable obtained through either the ANN transformation or the MIDAS transformation.</p>
<p>We compare the forecasting performance of models that use either one of the transformed measures to forecast volatility with each other and with models that construct the forecasts relying solely on information from past RV, such as the HAR model. We can thus answer whether the transformation can extract at least the same information as the past RV. We further compare these models&#x00027; forecasts with those obtained from models that combine the RV information with the transformed measures, allowing us to investigate whether the transformed measures contain information supplementary to the RV. Lastly, we can compare the different transformation methods to determine whether the non-linearity introduced through ANNs performs differently from the MIDAS approaches.</p>
<p>The remainder of this paper is structured as follows: section 2 gives an overview of the literature in volatility forecasting with ANNs. Section 3 introduces the LSTM RNN, and section 4 explains the different transformations of the HF returns. It first describes the non-linear transformation through LSTMs and then shows the two MIDAS approaches. Section 5 elaborates on using the transformed HF returns to generate volatility forecasts. We further introduce the benchmark models to which we compare our proposed methodology. Finally, we present the results of our empirical application in section 6, and section 7 concludes.</p>
</sec>
<sec id="s2">
<title>2. Literature Review</title>
<p>ANNs are applied in many areas of finance. For example, White (<xref ref-type="bibr" rid="B62">1988</xref>), among others, uses ANNs to predict stock returns, while Gu et al. (<xref ref-type="bibr" rid="B34">2020</xref>) use ANNs for asset pricing and Sadhwani et al. (<xref ref-type="bibr" rid="B55">2021</xref>) apply ANNs to mortgage risk evaluation. ANNs are further applied to model and forecast financial risk. The literature in this field reflects the two main branches of volatility modeling and forecasting mentioned in section 1: models based on daily (squared) returns and models based on realized measures estimated from the HF returns. An early contribution to the literature on volatility modeling and forecasting through daily squared returns is Donaldson and Kamstra (<xref ref-type="bibr" rid="B25">1997</xref>). The authors introduce a semi-nonparametric non-linear GARCH model based on ANNs and show superior performance over other GARCH-type alternatives. Franke and Diagne (<xref ref-type="bibr" rid="B28">2006</xref>) show that ANNs yield non-parametric estimators of the conditional variance function of an asset when trained with daily returns as inputs and squared returns as targets. Their results have been applied by Giordano et al. (<xref ref-type="bibr" rid="B31">2012</xref>) and generalized to the Multi-Layer Perceptron (MLP), fully connected ANNs with multiple hidden layers, by Franke et al. (<xref ref-type="bibr" rid="B29">2019</xref>). Arneri&#x00107; et al. (<xref ref-type="bibr" rid="B7">2014</xref>) exploit the non-linear Autoregressive Moving Average (ARMA) structure of a Jordan-type RNN (Jordan, <xref ref-type="bibr" rid="B42">1997</xref>) and the ARMA representation of the GARCH model to introduce the Jordan GARCH(1,1) model. Their model shows superior performance in terms of out-of-sample root mean squared error (RMSE). 
Alternative approaches use the output of GARCH models, potentially combined with other explanatory variables, as inputs to an MLP (see e.g., Hajizadeh et al., <xref ref-type="bibr" rid="B35">2012</xref>; Kristjanpoller et al., <xref ref-type="bibr" rid="B44">2014</xref>).</p>
<p>The literature on forecasting volatility via ANNs through realized measures consists of two main fields. The first field uses ANNs to relax the linearity of the HAR model by feeding the lagged daily, weekly, and monthly averages of RV to MLPs. The evidence in this branch is mixed. Rosa et al. (<xref ref-type="bibr" rid="B52">2014</xref>) find improvements in the forecasting performance of the non-linear HAR model, while Vortelinos (<xref ref-type="bibr" rid="B61">2017</xref>) concludes that the ANN HAR model does not predict volatility better than the linear HAR, arguing that the MLP cannot capture the long-term dependencies in the RV. Barun&#x000ED;k and K&#x00159;ehl&#x000ED;k (<xref ref-type="bibr" rid="B14">2016</xref>) find mixed evidence for the ANN HAR model on the volatility of energy market prices: their ANN-based model produces more accurate forecasts than the linear model for some forecasting horizons and some markets. Arneri&#x00107; et al. (<xref ref-type="bibr" rid="B8">2018</xref>) find that an MLP fitted to the HAR inputs can outperform the linear benchmark, and that including jump measures in the analysis further improves the forecasting performance. Christensen et al. (<xref ref-type="bibr" rid="B22">2021</xref>) find superior forecasting performance of their MLP HAR model over the linear HAR and show that the model&#x00027;s performance improves when additional firm-specific and macroeconomic indicators are added. Li and Tang (<xref ref-type="bibr" rid="B45">2021</xref>) apply an MLP to a large set of variables, such as realized and MIDAS measures and option-implied variances, and find that the resulting model outperforms the linear benchmark. The performance improves further through an ensemble learning algorithm that combines the outputs of other linear and non-linear machine learning techniques, such as penalized regression and random forests, with the output from the ANN model.</p>
<p>The second field in the literature utilizes RNNs to capture, in addition to non-linearity, long-term dependencies in the data. Miura et al. (<xref ref-type="bibr" rid="B47">2019</xref>) examine the volatility of cryptocurrencies, finding that a ridge regression yields the best out-of-sample forecasting results, followed by LSTM RNNs. Ba&#x0015F;t&#x000FC;rk et al. (<xref ref-type="bibr" rid="B15">2021</xref>) apply LSTM RNNs to past RV measures and the negative part of past daily returns to jointly forecast the volatility and the Value at Risk (VaR) of a financial asset. The authors find superior forecasting performance of the LSTM network for the VaR forecasts. However, their approach does not produce improved volatility forecasts compared to the linear alternatives.</p>
<p>A recent contribution to both branches of this literature is Bucci (<xref ref-type="bibr" rid="B20">2020</xref>), who compares the forecasting performance of various ANN structures to standard benchmarks from the financial econometrics literature such as the HAR model and ARFIMA models. He further investigates how adding macroeconomic and financial indicators as exogenous explanatory variables improves the model&#x00027;s forecasting performance. The target variable in his analysis is the monthly log square root of the RV. He finds that the long memory type ANNs such as the LSTM network outperform the financial econometrics literature&#x00027;s classical models. Furthermore, these models outperform the ANNs that do not account for long memory in the data. This result holds for various forecasting horizons.</p>
<p>Finally, Rahimikia and Poon (<xref ref-type="bibr" rid="B50">2020a</xref>) and Rahimikia and Poon (<xref ref-type="bibr" rid="B51">2020b</xref>) propose a HAR model augmented by an ANN applied to HF limit order book information and news sentiment data. In both papers, the authors find a superior forecasting performance of their model compared to the HAR benchmark. Their approach of augmenting the HAR model with transformed HF data is similar to the idea of this paper: in parts of our application, we augment models for low-frequency (LF) measures, such as the HAR, with transformed HF information. The difference is that we consider the HF returns and not other auxiliary HF information. Further, we also consider models that use only the information from the transformed HF returns for the forecast.</p>
</sec>
<sec id="s3">
<title>3. Long Short Term Memory</title>
<p>LSTM RNNs are a specific type of RNN designed to overcome the main problem that classical RNNs face: their limited capacity to learn long-term relationships due to vanishing or exponentially increasing gradients (Hochreiter, <xref ref-type="bibr" rid="B38">1991</xref>; Bengio et al., <xref ref-type="bibr" rid="B16">1994</xref>). The cornerstone of LSTMs is the long memory cell, denoted by <italic>C</italic><sub>&#x003C4;</sub>. A candidate value for the cell, <inline-formula><mml:math id="M1"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, is a non-linear transformation (using the hyperbolic tangent activation function <italic>tanh</italic><xref ref-type="fn" rid="fn0001"><sup>1</sup></xref>) of a linear combination of the &#x003C4;-th period&#x00027;s input vector values <italic>v</italic><sub>&#x003C4;</sub> and the previous period&#x00027;s output value <italic>y</italic><sub>&#x003C4;&#x02212;1</sub> plus an intercept</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M2"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mi>h</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x00398;</mml:mo></mml:mrow><mml:mrow><mml:mi>C</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Next, the values of the forget gate <italic>f</italic><sub>&#x003C4;</sub> and the input gate <italic>i</italic><sub>&#x003C4;</sub> are computed. These are obtained by applying the sigmoid activation function &#x003C3;(&#x000B7;)<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref> to a linear combination of the input vector values <italic>v</italic><sub>&#x003C4;</sub> and the previous period&#x00027;s output value <italic>y</italic><sub>&#x003C4;&#x02212;1</sub> plus an intercept.</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M3"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x00398;</mml:mo></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E3"><label>(3)</label><mml:math id="M4"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x00398;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The memory cell value is computed by</p>
<disp-formula id="E4"><label>(5)</label><mml:math id="M5"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>i.e., it combines the previous period&#x00027;s cell value and the current period&#x00027;s candidate cell value. Since the sigmoid function returns values on the interval (0, 1), <italic>f</italic><sub>&#x003C4;</sub> denotes the share to be &#x0201C;forgotten&#x0201D; from the previous cell state and <italic>i</italic><sub>&#x003C4;</sub> the share of the proposal state to be added to the new cell state. The output of the LSTM cell <italic>y</italic><sub>&#x003C4;</sub> is generated by applying the <italic>tanh</italic> function, denoted &#x003C8;(&#x000B7;) below, to the memory cell values and multiplying the result by the value of the output gate <italic>o</italic><sub>&#x003C4;</sub></p>
<disp-formula id="E5"><label>(6)</label><mml:math id="M6"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>o</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mi>&#x003C8;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where the latter is obtained in the same manner as the values of <italic>f</italic><sub>&#x003C4;</sub> and <italic>i</italic><sub>&#x003C4;</sub></p>
<disp-formula id="E6"><label>(7)</label><mml:math id="M7"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>o</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x00398;</mml:mo></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The output gate gives the share of the activated cell values to return as the output of the memory cell. LSTM cells thus are dynamic systems wherein interacting layers drive the hidden state dynamics. This interaction enables the LSTM cell to account for a high degree of non-linearity and to capture long-term dependencies in the data.</p>
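<p>The cell update above can be summarized in a short numerical sketch. The code below follows the equations of this section; the dictionary-based parameter interface and the names (<italic>Theta_*</italic>, <italic>c_*</italic>) mirror the notation here but are an illustrative assumption, not the paper&#x00027;s implementation.</p>

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(v, y_prev, C_prev, params):
    """One LSTM cell update: candidate cell, gates, cell state, output.

    params maps "Theta_C", "Theta_f", "Theta_i", "Theta_o" to weight
    matrices applied to the stacked vector [y_prev, v], and "c_*" to
    the corresponding intercepts (names are illustrative).
    """
    z = np.concatenate([y_prev, v])                           # [y_{tau-1}, v_tau]
    C_tilde = np.tanh(params["Theta_C"] @ z + params["c_C"])  # candidate cell value
    f = sigmoid(params["Theta_f"] @ z + params["c_f"])        # forget gate in (0, 1)
    i = sigmoid(params["Theta_i"] @ z + params["c_i"])        # input gate in (0, 1)
    o = sigmoid(params["Theta_o"] @ z + params["c_o"])        # output gate in (0, 1)
    C = f * C_prev + i * C_tilde                              # new cell state
    y = o * np.tanh(C)                                        # cell output
    return y, C
```

Because the output is a gated <italic>tanh</italic> of the cell state, every component of <italic>y</italic> lies strictly inside (&#x02212;1, 1), while the cell state <italic>C</italic> itself is unbounded and can accumulate information over long horizons.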
</sec>
<sec id="s4">
<title>4. Transformation of the High-Frequency Returns</title>
<p>Denote by <italic>r</italic><sub><italic>t,j</italic></sub> the <italic>t</italic>-th day&#x00027;s <italic>j</italic>-th log-return. We have <italic>j</italic> &#x0003D; 1, &#x02026;, <italic>M</italic> equidistantly sampled returns within day <italic>t</italic>; the length of the sampling interval between two intraday returns determines the number of intraday observations. For returns sampled every 5 minutes within a normal trading day at the New York Stock Exchange, we obtain <italic>M</italic> &#x0003D; 78 intraday high-frequency returns. We denote the vector of the intraday returns on day <italic>t</italic> by <italic>r</italic><sub><italic>t</italic>,1 : <italic>M</italic></sub>. We aim to apply a transformation to <italic>r</italic><sub><italic>t</italic>,1 : <italic>M</italic></sub> that returns a scalar value <inline-formula><mml:math id="M8"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mrow><mml:mi>H</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, which we will refer to as the transformed measure. The transformation depends on the parameter vector &#x003B8;<sup><italic>HF</italic></sup>. In the following, we present three different methods to obtain the transformed measure.</p>
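<p>As a reference point for the transformations below, the RV estimator discussed in section 1 (the sum of a day&#x00027;s squared HF returns) and the count of 5-minute returns in a trading day can be sketched as follows; this is a minimal illustration, not the paper&#x00027;s code.</p>

```python
import numpy as np

def realized_volatility(intraday_returns):
    """RV estimator: the sum of a day's squared HF log-returns."""
    r = np.asarray(intraday_returns, dtype=float)
    return float(np.sum(r ** 2))

# A normal NYSE trading day (9:30-16:00, i.e., 6.5 hours) sampled
# every 5 minutes yields M = 6.5 * 60 / 5 = 78 intraday returns.
M = int(6.5 * 60 / 5)
```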
<sec>
<title>4.1. Non-linear Transformation</title>
<p>The LSTM architecture described earlier can be used for a non-linear transformation of the HF returns. In its simplest form, we use the sequence of the <italic>M</italic> intraday returns as input to the LSTM cell and the output of the cell at time <italic>M</italic>, <italic>y</italic><sub><italic>t,M</italic></sub>, as the transformed value. We thus iterate through the LSTM equations over the <italic>j</italic> &#x0003D; 1, &#x02026;, <italic>M</italic> intraday returns of day <italic>t</italic></p>
<disp-formula id="E7"><label>(8)</label><mml:math id="M9"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mi>h</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x00398;</mml:mo></mml:mrow><mml:mrow><mml:mi>C</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E8"><label>(9)</label><mml:math id="M10"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x00398;</mml:mo></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E9"><label>(10)</label><mml:math id="M11"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x00398;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E10"><label>(11)</label><mml:math id="M12"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E11"><label>(12)</label><mml:math id="M13"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>o</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x00398;</mml:mo></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E12"><label>(13)</label><mml:math id="M14"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>o</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mi>&#x003C8;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>and set <inline-formula><mml:math id="M15"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mrow><mml:mi>H</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>M</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, where &#x003B8;<sup><italic>HF</italic></sup> contains the LSTM cell weights and intercepts. Through the interaction of the three gates and the non-linear activation of the proposal state and the cell state, the LSTM cell allows for a high degree of non-linearity while also capturing long memory in the data. Both the cell input (<italic>r</italic><sub><italic>t,j</italic></sub>) and output (<italic>y</italic><sub><italic>t,j</italic></sub>) at within-day lag <italic>j</italic> are scalars. The parameter vector &#x003B8;<sup><italic>HF</italic></sup> of the model using one LSTM cell thus contains 12 parameters: four 2 &#x000D7; 1 weight vectors and four intercepts. To increase the degree of non-linearity, we further use a network that consists of one hidden layer of LSTM cells and feed the outputs of these cells into another LSTM cell that returns a scalar value.</p>
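<p>The recursion in Equations (8)&#x02013;(13) can be sketched for the scalar single-cell case as follows. The parameter names and the choice &#x003C8; = tanh are our assumptions; the 12 entries of &#x003B8;<sup><italic>HF</italic></sup> appear as four 2 &#x000D7; 1 weight vectors plus four intercepts:</p>

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_transform(returns, theta):
    """Iterate Equations (8)-(13) over one day's intraday returns.

    theta holds the 12 parameters of the single-cell model: four 2x1
    weight vectors W_C, W_f, W_i, W_o and four intercepts c_C, c_f, c_i, c_o.
    """
    y, C = 0.0, 0.0  # initial output y_{t,0} and cell state C_{t,0}
    for r in returns:  # j = 1, ..., M
        def z(W, c):   # affine map of the stacked input [y_{t,j-1}, r_{t,j}]
            return W[0] * y + W[1] * r + c
        C_prop = math.tanh(z(theta["W_C"], theta["c_C"]))  # proposal state, Eq. (8)
        f = sigmoid(z(theta["W_f"], theta["c_f"]))         # forget gate, Eq. (9)
        i = sigmoid(z(theta["W_i"], theta["c_i"]))         # input gate, Eq. (10)
        C = f * C + i * C_prop                             # cell state, Eq. (11)
        o = sigmoid(z(theta["W_o"], theta["c_o"]))         # output gate, Eq. (12)
        y = o * math.tanh(C)                               # output, Eq. (13), psi = tanh
    return y  # transformed measure for the day
```

With the output gate bounded in (0, 1) and tanh bounded in (&#x02212;1, 1), the transformed measure always lies in (&#x02212;1, 1).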
</sec>
<sec>
<title>4.2. MIDAS Transformations</title>
<p>Denote by <inline-formula><mml:math id="M16"><mml:msub><mml:mrow><mml:mi>&#x003C9;</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mrow><mml:mi>H</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> the weight associated with the <italic>j</italic>-th intraday return on day <italic>t</italic>. The weights are determined by the elements of &#x003B8;<sup><italic>HF</italic></sup>. While the weights may be obtained in a non-linear manner, the resulting transformed measure</p>
<disp-formula id="E13"><label>(14)</label><mml:math id="M17"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mrow><mml:mi>H</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mrow><mml:mi>H</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>is a weighted sum and thus linear.</p>
<p>In a Beta Lag MIDAS model (labeled Beta MIDAS hereafter), the weight associated with the <italic>j</italic>-th lag is obtained by</p>
<disp-formula id="E14"><label>(15)</label><mml:math id="M18"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x003C9;</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C6;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C6;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>B</mml:mi><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C6;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C6;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mi>B</mml:mi><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C6;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C6;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M19"><mml:mi>B</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C6;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C6;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> is the probability density function (pdf) of the Beta distribution. In this case, the parameter vector <inline-formula><mml:math id="M20"><mml:msup><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mrow><mml:mi>H</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C6;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C6;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> contains the Beta distribution parameters. Although they depend on only two parameters, the weights obtained from the normalized Beta pdf are capable of capturing complex non-linear functional forms.</p>
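<p>The Beta-lag weighting of Equation (15) can be sketched as follows (function names are ours):</p>

```python
import math

def beta_pdf(x, phi1, phi2):
    # Density of the Beta(phi1, phi2) distribution at x in (0, 1].
    norm = math.gamma(phi1) * math.gamma(phi2) / math.gamma(phi1 + phi2)
    return x ** (phi1 - 1.0) * (1.0 - x) ** (phi2 - 1.0) / norm

def beta_midas_weights(M, phi1, phi2):
    # Equation (15): evaluate the Beta pdf at j/M for j = 1, ..., M
    # and normalize so that the weights sum to one.
    vals = [beta_pdf(j / M, phi1, phi2) for j in range(1, M + 1)]
    total = sum(vals)
    return [v / total for v in vals]

w = beta_midas_weights(78, 1.0, 5.0)  # monotonically decaying weights
```

With &#x003C6;<sub>1</sub> = 1 and &#x003C6;<sub>2</sub> &gt; 1 the weights decay monotonically over the day, while &#x003C6;<sub>1</sub> = &#x003C6;<sub>2</sub> = 1 recovers equal weights 1/<italic>M</italic>.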
<p>An alternative way to obtain the weights associated with the <italic>j</italic>-th observation is to use the lag values as inputs to an LSTM cell. In this case, the input to the LSTM cell is <italic>r</italic><sub><italic>j</italic></sub> &#x0003D; <italic>j</italic> with <italic>j</italic> &#x0003D; 1, 2, &#x02026;, <italic>M</italic>. The corresponding output (<italic>y</italic><sub><italic>j</italic></sub>) lies in the interval (&#x02212;1, 1). Note that in this case, <italic>r</italic><sub><italic>j</italic></sub> and <italic>y</italic><sub><italic>j</italic></sub> do not depend on <italic>t</italic> since they only vary within the day but not across days. To transform the output at within-day lag <italic>j</italic> (<italic>y</italic><sub><italic>j</italic></sub>) into a weight, we apply the <italic>exponential</italic> function and normalize the values, i.e.,</p>
<disp-formula id="E15"><label>(16)</label><mml:math id="M21"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>yielding weights that lie in the interval (0, 1) and sum to one. This transformation is similar to the Beta MIDAS, where the Beta pdf values associated with <italic>j</italic>/<italic>M</italic> are normalized such that they sum to one. As above, the parameter vector &#x003B8;<sup><italic>HF</italic></sup> of the LSTM MIDAS model contains 12 parameters.</p>
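<p>The exponential normalization in Equation (16) is the familiar softmax; a minimal sketch:</p>

```python
import math

def softmax_weights(y_outputs):
    # Equation (16): map LSTM outputs y_1, ..., y_M in (-1, 1) to
    # weights in (0, 1) that sum to one.
    exps = [math.exp(y) for y in y_outputs]
    total = sum(exps)
    return [e / total for e in exps]

w = softmax_weights([0.5, 0.0, -0.5])  # larger outputs receive larger weights
```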
</sec>
</sec>
<sec id="s5">
<title>5. Volatility Forecasting</title>
<p>This paper aims to forecast the daily volatility of a financial asset. Consider the price process of a financial asset <italic>P</italic><sub><italic>t</italic></sub>, determined by the stochastic differential equation</p>
<disp-formula id="E16"><label>(17)</label><mml:math id="M22"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mi>d</mml:mi><mml:mo class="qopname">ln</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mi>d</mml:mi><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mi>d</mml:mi><mml:msub><mml:mrow><mml:mi>W</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003BC;<sub><italic>t</italic></sub> and &#x003C3;<sub><italic>t</italic></sub> denote the drift and the instantaneous or spot volatility process, respectively, and <italic>W</italic><sub><italic>t</italic></sub> is a standard Brownian motion. The integrated variance from day <italic>t</italic> &#x02212; 1 to <italic>t</italic> is then defined as</p>
<disp-formula id="E17"><label>(18)</label><mml:math id="M23"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mi>I</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x0222B;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msubsup><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mi>d</mml:mi><mml:mi>s</mml:mi><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The integrated variance yields a direct measure of the discrete-time return volatility (Andersen et al., <xref ref-type="bibr" rid="B6">2004</xref>), but the series is latent, and we cannot observe it directly. However, we can estimate the integrated variance ex-post through the RV estimator defined as</p>
<disp-formula id="E18"><label>(19)</label><mml:math id="M24"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msubsup><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>i.e., the sum of the <italic>M</italic> squared intraday HF returns. Our goal is to assess how the information obtained from applying the different transformations of the HF returns explained earlier helps predict one-step-ahead volatility. To answer this, we consider different scenarios.</p>
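<p>For illustration, the RV estimator of Equation (19) amounts to a one-line computation:</p>

```python
def realized_variance(intraday_returns):
    # Equation (19): RV_t is the sum of the M squared intraday HF returns.
    return sum(r * r for r in intraday_returns)

rv = realized_variance([0.001, -0.002, 0.0015])
```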
<p>First, we vary the input variables used to predict volatility, considering three different settings. We start by combining the transformed measure <inline-formula><mml:math id="M25"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> (for readability, we omit the dependence of the transformed measure on &#x003B8;<sup><italic>HF</italic></sup> from here on) with past information on the RV. Next, we assess how this combination fares compared to using each stream of information on its own, i.e., using only the information obtained from the transformation and using only the past information on RV. Finally, when using the information on the transformed measure, we again differentiate between two settings: In the first, we use only the most recent (the previous day&#x00027;s) value of the transformed measure. In the second, we account for dynamics in the transformed measure and use the values of multiple past days. The contributions of Andersen et al. (<xref ref-type="bibr" rid="B3">2001a</xref>) and Andersen et al. (<xref ref-type="bibr" rid="B4">2001b</xref>) as well as models like the HAR and the work by, e.g., Audrino and Knaus (<xref ref-type="bibr" rid="B9">2016</xref>), show that it is necessary to account for the long memory in the volatility. In the setting where we use multiple past values of the transformed measure, we therefore apply an LSTM cell to the sequence of transformed measures. This means that for &#x003C4; &#x0003D; 1, &#x02026;, <italic>t</italic>, we iterate over</p>
<disp-formula id="E19"><label>(20)</label><mml:math id="M26"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mi>h</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x00398;</mml:mo></mml:mrow><mml:mrow><mml:mi>C</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>C</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E20"><label>(21)</label><mml:math id="M27"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x00398;</mml:mo></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E21"><label>(22)</label><mml:math id="M28"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x00398;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E22"><label>(23)</label><mml:math id="M29"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E23"><label>(24)</label><mml:math id="M30"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>o</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mo>&#x00398;</mml:mo></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E24"><label>(25)</label><mml:math id="M31"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>o</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub><mml:mi>&#x003C8;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>and set <inline-formula><mml:math id="M32"><mml:msub><mml:mrow><mml:mi>&#x01EF9;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, where <inline-formula><mml:math id="M33"><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> contains the corresponding LSTM cell weights and intercepts. Using an LSTM cell circumvents the problem of lag order selection through either information criteria or shrinkage methods. The LSTM cell takes into account the whole sequence of <inline-formula><mml:math id="M34"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>:</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> by storing the necessary information in the memory cell. <xref ref-type="fig" rid="F1">Figure 1</xref> depicts the underlying idea.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Method depicted. The HF returns are transformed in the vertical direction, meaning that for each day 1, &#x02026;, <italic>t</italic> the same type of transformation is applied to the HF returns of that day. The result is a sequence of &#x003C4; = 1, &#x02026;, <italic>t</italic> transformed measures <inline-formula><mml:math id="M35"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> to which an LSTM cell is horizontally applied.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="frai-04-787534-g0001.tif"/>
</fig>
<p>We then linearly combine the output <italic>&#x01EF9;</italic><sub><italic>t</italic></sub> (for readability, we omit the dependence of <italic>&#x01EF9;</italic><sub><italic>t</italic></sub> on <inline-formula><mml:math id="M36"><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> from here on) with the other LF measures under consideration. This results in the following case-dependent transformed HF information input variable</p>
<disp-formula id="E25"><label>(26)</label><mml:math id="M37"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mi>&#x003BD;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>H</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:mtext class="textrm" mathvariant="normal">only recent HF information</mml:mtext></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x01EF9;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:mtext class="textrm" mathvariant="normal">all HF information</mml:mtext><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>We consider two settings, in which we linearly combine the information from the past RV with that of the transformed HF returns and add an intercept. First, in resemblance to the classical HAR model, we use past daily, weekly, and monthly averages of the natural logarithm of RV (referred to as log RV hereafter). Denote the logarithm of RV on day <italic>t</italic> by ln <italic>RV</italic><sub><italic>t</italic></sub>, i.e.,</p>
<disp-formula id="E26"><label>(27)</label><mml:math id="M38"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo class="qopname">ln</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Weekly and monthly averages of log RV are then defined by</p>
<disp-formula id="E27"><label>(28)</label><mml:math id="M39"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>5</mml:mn></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>5</mml:mn></mml:mrow></mml:munderover></mml:mstyle><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>and</p>
<disp-formula id="E28"><label>(29)</label><mml:math id="M40"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>22</mml:mn></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>22</mml:mn></mml:mrow></mml:munderover></mml:mstyle><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
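A minimal numpy sketch of the trailing averages in Equations (28) and (29), applied to a synthetic log-RV series (the function name and data are illustrative, not part of the paper's code):

```python
import numpy as np

def trailing_mean(log_rv, window):
    """Trailing average over the last `window` days including day t,
    i.e. (1/window) * sum_{i=1}^{window} ln RV_{t-(i-1)}
    (Equations 28 and 29 for window = 5 and window = 22)."""
    log_rv = np.asarray(log_rv, dtype=float)
    out = np.full(log_rv.shape, np.nan)  # undefined until `window` days exist
    for t in range(window - 1, len(log_rv)):
        out[t] = log_rv[t - window + 1 : t + 1].mean()
    return out

# Hypothetical daily log-RV series for illustration
rng = np.random.default_rng(0)
log_rv = rng.normal(loc=-9.0, scale=0.5, size=60)
weekly = trailing_mean(log_rv, 5)    # Equation (28)
monthly = trailing_mean(log_rv, 22)  # Equation (29)
```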
<p>Second, we use the output of an LSTM cell applied to the sequence of the past log RVs.</p>
<p>This results in the case-dependent low-frequency information input variable <inline-formula><mml:math id="M41"><mml:msubsup><mml:mrow><mml:mi>&#x003BD;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> and</p>
<disp-formula id="E29"><label>(30)</label><mml:math id="M42"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mi>&#x003BD;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>L</mml:mi><mml:mi>S</mml:mi><mml:mi>T</mml:mi><mml:mi>M</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mtext>&#x000A0;</mml:mtext><mml:mo>:</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable><mml:mo>.</mml:mo></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The HAR model is the most commonly used benchmark in volatility forecasting. However, its implicit lag order selection (it is a restricted AR(22) model) is not necessarily validated in real data applications (Audrino and Knaus, <xref ref-type="bibr" rid="B9">2016</xref>). As mentioned earlier, we circumvent the issue of lag order selection by applying an LSTM cell to the LF inputs, as the LSTM cell can capture the long-term dynamics. Alternatively, one could fit an autoregressive model of order <italic>p</italic> to the RV, add the lags of the transformed measure as additional explanatory variables, and perform lag order selection via information criteria or shrinkage methods. We leave these two alternatives for future research.</p>
<p>We take the exponential of these linear combinations to guarantee the positivity of the generated forecast. The output of the model is thus generated by</p>
<disp-formula id="E30"><label>(31)</label><mml:math id="M43"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mi>c</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mi>&#x003BD;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mi>H</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msup><mml:msubsup><mml:mrow><mml:mi>&#x003BD;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>H</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003B8; is a vector collecting all parameters. &#x003B2;<sup><italic>LF</italic></sup> contains either the parameters associated with the daily, weekly, and monthly averages of log RV or the parameters associated with the output of the LSTM cell applied to the sequence of log RV. &#x003B2;<sup><italic>HF</italic></sup> is the parameter of the scalar measure obtained from the transformation of the HF returns, and <italic>c</italic> is an intercept. The model that only uses the transformed HF returns for the forecast corresponds to restricting &#x003B2;<sup><italic>LF</italic></sup> &#x0003D; 0. This comparison allows for a very detailed analysis of the source of potential gains in the forecasting performance:</p>
<list list-type="order">
<list-item><p>We can assess whether there are significant differences between the forecasting performance of the models that use only the transformed measure as input and that of the models combining it with the LF variables. It is thus possible to inspect whether or not the sequence of transformed HF returns captures the information included in the past RV.</p></list-item>
<list-item><p>We can investigate whether it is necessary to consider the entire information in the transformed measure or whether the most recent information suffices.</p></list-item>
<list-item><p>We can compare the different transformation methods, assessing the differences between the linear MIDAS type transformations and the non-linear transformations.</p></list-item>
<list-item><p>We can examine whether using the classical HAR inputs with a fixed lag order of 22 is enough or whether using an LSTM cell on the past RV values, which is less restrictive in terms of the lag order selection, is fruitful.</p></list-item>
</list>
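The forecast construction in Equation (31) can be sketched in a few lines; the parameter values below are hypothetical and purely illustrative:

```python
import numpy as np

def combined_forecast(c, beta_lf, nu_lf, beta_hf, nu_hf):
    """Equation (31): exponential of a linear combination of the LF
    input vector and the scalar transformed HF measure, so the
    volatility forecast is positive by construction."""
    return np.exp(c + np.dot(nu_lf, beta_lf) + beta_hf * nu_hf)

# Hypothetical values; nu_lf stacks ln RV_t and its weekly and
# monthly averages (the HAR-type LF input of Equation 30)
nu_lf = np.array([-9.2, -9.0, -8.8])
y = combined_forecast(c=0.1, beta_lf=np.array([0.4, 0.3, 0.2]),
                      nu_lf=nu_lf, beta_hf=0.05, nu_hf=-9.1)

# Restricting beta_lf = 0 yields the model that uses only the
# transformed HF returns
y_hf_only = combined_forecast(c=0.1, beta_lf=np.zeros(3),
                              nu_lf=nu_lf, beta_hf=0.05, nu_hf=-9.1)
```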
<sec>
<title>5.1. Benchmark Models</title>
<p>We apply a variety of benchmark models: four models of the HAR family and an ARFIMA(p,d,q) model. Our proposed methodology ensures the positivity of the volatility predictions by construction (see Equation 31). However, when fitting the benchmark models to the level of RV, the forecasts are not guaranteed to be positive. We thus implement each benchmark model once for the level of RV and once for the log of RV to allow for a fair comparison. In the latter case, the forecasts are bias-corrected (Granger and Newbold, <xref ref-type="bibr" rid="B33">1976</xref>), i.e.,</p>
<disp-formula id="E31"><label>(32)</label><mml:math id="M44"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="false"><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="false"><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo class="qopname">^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:msubsup><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003B5;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M45"><mml:msubsup><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003B5;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula> is the forecast error variance estimated from the residuals.</p>
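The bias correction in Equation (32) is a one-liner; the input values below are hypothetical:

```python
import numpy as np

def bias_corrected_rv_forecast(log_rv_forecast, resid_var):
    """Equation (32): lognormal bias correction applied when a model is
    fitted to ln RV but forecasts are evaluated on the level of RV
    (Granger and Newbold, 1976)."""
    return np.exp(log_rv_forecast + 0.5 * resid_var)

# Hypothetical one-step-ahead log-RV forecast and estimated
# forecast error variance from the model residuals
rv_hat = bias_corrected_rv_forecast(log_rv_forecast=-9.0, resid_var=0.5)
```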
<p>Following the suggestion of Andersen et al. (<xref ref-type="bibr" rid="B5">2003</xref>), we start by fitting an ARFIMA model</p>
<disp-formula id="E32"><label>(33)</label><mml:math id="M46"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mo>&#x003A6;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>L</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msup><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x00398;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B5;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mtext class="textrm" mathvariant="normal">&#x02003;with&#x02003;</mml:mtext><mml:mn>0</mml:mn><mml:mo>&#x0003C;</mml:mo><mml:mo>|</mml:mo><mml:mi>d</mml:mi><mml:mo>|</mml:mo><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>5</mml:mn></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>for <italic>x</italic><sub><italic>t</italic></sub> &#x0003D; <italic>RV</italic><sub><italic>t</italic></sub> and <italic>x</italic><sub><italic>t</italic></sub> &#x0003D; ln <italic>RV</italic><sub><italic>t</italic></sub>, where &#x003B5;<sub><italic>t</italic></sub> is a Gaussian white noise with zero mean and variance &#x003C3;<sup>2</sup>. &#x003A6;(<italic>L</italic>) and &#x00398;(<italic>L</italic>) are lag polynomials of degrees <italic>p</italic> and <italic>q</italic>, respectively, whose roots lie outside the unit circle.</p>
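The fractional difference operator (1 &#x02212; <italic>L</italic>)<sup><italic>d</italic></sup> in Equation (33) expands into an infinite lag polynomial; a minimal sketch of its coefficients via the standard binomial recursion (illustrative only, not the paper's estimation code):

```python
import numpy as np

def frac_diff_weights(d, n_lags):
    """Coefficients pi_k of (1 - L)^d = sum_k pi_k L^k,
    via the recursion pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n_lags + 1)
    w[0] = 1.0
    for k in range(1, n_lags + 1):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# For d = 0.4: pi_1 = -d = -0.4, pi_2 = -d(1 - d)/2 = -0.12, ...
w = frac_diff_weights(d=0.4, n_lags=5)
```

Applying these weights to a series and then fitting a short ARMA(p,q) on the filtered data is one common two-step way to handle the long-memory component.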
<p>Next, we implement benchmark models from the HAR family, starting with the classical HAR model (Corsi, <xref ref-type="bibr" rid="B23">2009</xref>) in levels</p>
<disp-formula id="E33"><label>(34)</label><mml:math id="M47"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B5;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>and in logs</p>
<disp-formula id="E34"><label>(35)</label><mml:math id="M48"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B5;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
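The HAR regressions in Equations (34) and (35) are linear in the parameters and can be estimated by OLS; a minimal sketch for the log specification on a synthetic series (illustrative only):

```python
import numpy as np

def fit_har(log_rv):
    """OLS estimates of the log-HAR model (Equation 35): regress
    ln RV_{t+1} on a constant, ln RV_t, and its weekly and monthly
    trailing means (Equations 28 and 29)."""
    rows, targets = [], []
    for t in range(21, len(log_rv) - 1):
        rows.append([1.0,
                     log_rv[t],                        # daily
                     log_rv[t - 4 : t + 1].mean(),     # weekly
                     log_rv[t - 21 : t + 1].mean()])   # monthly
        targets.append(log_rv[t + 1])
    X, y = np.array(rows), np.array(targets)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # (beta_0, beta_1, beta_2, beta_3)

# Hypothetical log-RV series for illustration
rng = np.random.default_rng(1)
log_rv = rng.normal(loc=-9.0, scale=0.5, size=500)
beta = fit_har(log_rv)
```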
<p>For all HAR family models, the error term &#x003B5;<sub><italic>t</italic></sub> is assumed to be a white noise process with &#x1D53C;[&#x003B5;<sub><italic>t</italic></sub>] &#x0003D; 0 and <inline-formula><mml:math id="M49"><mml:mi>&#x1D54D;</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B5;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003B5;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula>. Following Andersen et al. (<xref ref-type="bibr" rid="B2">2007</xref>), we include the CHAR model as the second benchmark. The CHAR model is based on the jump robust Bi-Power Variation (BPV) measure of Barndorff-Nielsen and Shephard (<xref ref-type="bibr" rid="B13">2004</xref>), defined as</p>
<disp-formula id="E35"><label>(36)</label><mml:math id="M50"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mi>B</mml:mi><mml:mi>P</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover></mml:mstyle><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M51"><mml:msub><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msqrt><mml:mrow><mml:mn>2</mml:mn><mml:mo>/</mml:mo><mml:mi>&#x003C0;</mml:mi></mml:mrow></mml:msqrt></mml:math></inline-formula> is the expectation of the absolute value of a standard normal random variable. The CHAR model then replaces the daily, weekly, and monthly averages of RV on the right-hand side of the HAR model with the corresponding averages of BPV, i.e., for levels</p>
<disp-formula id="E36"><label>(37)</label><mml:math id="M52"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mi>B</mml:mi><mml:mi>P</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>B</mml:mi><mml:mi>P</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>B</mml:mi><mml:mi>P</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B5;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>and for logs</p>
<disp-formula id="E37"><label>(38)</label><mml:math id="M53"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>B</mml:mi><mml:mi>P</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:mi>B</mml:mi><mml:mi>P</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:mi>B</mml:mi><mml:mi>P</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B5;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
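The BPV estimator entering the CHAR regressions can be computed directly from the intraday returns; a minimal sketch following Equation (36) as displayed (illustrative, with hypothetical returns):

```python
import numpy as np

def bipower_variation(intraday_returns):
    """Bi-power variation as in Equation (36):
    BPV_t = (1/mu_1) * sum_{j=1}^{M-1} |r_{t,j}| |r_{t,j+1}|,
    with mu_1 = sqrt(2/pi) the mean absolute value of a standard normal."""
    r = np.asarray(intraday_returns, dtype=float)
    mu1 = np.sqrt(2.0 / np.pi)
    return np.abs(r[:-1] * r[1:]).sum() / mu1

# Hypothetical 5-min returns for one trading day (M = 79)
rng = np.random.default_rng(2)
r = rng.normal(scale=0.001, size=79)
bpv = bipower_variation(r)
```

The product of adjacent absolute returns makes the estimator robust to jumps: a single jump enters only through two cross-products rather than through its squared size.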
<p>An alternative model that accounts for jumps is the HAR with jumps (HAR-J) model (Andersen et al., <xref ref-type="bibr" rid="B2">2007</xref>). The HAR-J model adds the jump measure <italic>J</italic><sub><italic>t</italic></sub> &#x0003D; max(<italic>RV</italic><sub><italic>t</italic></sub> &#x02212; <italic>BPV</italic><sub><italic>t</italic></sub>, 0) or, when modeling log RV, ln (1 &#x0002B; <italic>J</italic><sub><italic>t</italic></sub>), as an additional explanatory variable to the HAR model. However, in our application, the HAR-J model in levels produces negative volatility predictions in two cases. For the log case, the average losses of the HAR-J model are very similar to those of the HAR model, and the differences are not statistically significant. We thus omit the results for the HAR-J model; they are available from the author upon request. The next benchmark model is the Semivariance-HAR (SHAR) model by Patton and Sheppard (<xref ref-type="bibr" rid="B49">2015</xref>), which builds on the semi-variation measure of Barndorff-Nielsen et al. (<xref ref-type="bibr" rid="B10">2010</xref>), differentiating between the variation associated with positive and with negative intraday returns. The estimators are defined as</p>
<disp-formula id="E38"><label>(39)</label><mml:math id="M54"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mi>R</mml:mi><mml:msubsup><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x0002B;</mml:mo></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msubsup><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>&#x1D540;</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0003E;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>and</p>
<disp-formula id="E39"><label>(40)</label><mml:math id="M55"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mi>R</mml:mi><mml:msubsup><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msubsup><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>&#x1D540;</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x1D540; is the indicator function and <inline-formula><mml:math id="M56"><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>R</mml:mi><mml:msubsup><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x0002B;</mml:mo></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:mi>R</mml:mi><mml:msubsup><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo></mml:mrow></mml:msubsup></mml:math></inline-formula>. The SHAR model uses this decomposition of <italic>RV</italic><sub><italic>t</italic></sub> such that for levels</p>
<disp-formula id="E40"><label>(41)</label><mml:math id="M57"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mo>&#x0002B;</mml:mo></mml:mrow></mml:msubsup><mml:mi>R</mml:mi><mml:msubsup><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x0002B;</mml:mo></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mo>-</mml:mo></mml:mrow></mml:msubsup><mml:mi>R</mml:mi><mml:msubsup><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B5;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>and for logs</p>
<disp-formula id="E42"><label>(42)</label><mml:math id="M59"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mo>&#x0002B;</mml:mo></mml:mrow></mml:msubsup><mml:mo class="qopname">ln</mml:mo><mml:msubsup><mml:mrow><mml:mi>R</mml:mi><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x0002B;</mml:mo></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mo>-</mml:mo></mml:mrow></mml:msubsup><mml:mo class="qopname">ln</mml:mo><mml:msubsup><mml:mrow><mml:mi>R</mml:mi><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msubsup></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x02003;&#x02003;&#x02003;&#x02003;</mml:mtext><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo 
class="qopname">ln</mml:mo><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B5;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
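The realized semivariances in Equations (39) and (40) split the squared intraday returns by sign; a minimal sketch with hypothetical returns (not the paper's code):

```python
import numpy as np

def realized_semivariances(intraday_returns):
    """Upside and downside realized semivariances (Equations 39 and 40).
    By construction their sum recovers RV_t."""
    r = np.asarray(intraday_returns, dtype=float)
    rs_plus = (np.maximum(r, 0.0) ** 2).sum()   # positive returns only
    rs_minus = (np.minimum(r, 0.0) ** 2).sum()  # negative returns only
    return rs_plus, rs_minus

# Hypothetical intraday returns
r = np.array([0.010, -0.020, 0.005, -0.001])
rs_plus, rs_minus = realized_semivariances(r)
```

The SHAR model then replaces the daily RV regressor with these two components, allowing upside and downside variation to carry separate coefficients.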
<p>The last benchmark from the HAR family is the HARQ model of Bollerslev et al. (<xref ref-type="bibr" rid="B18">2016</xref>). The HARQ model uses the Realized Quarticity (RQ) estimator of Barndorff-Nielsen and Shephard (<xref ref-type="bibr" rid="B11">2002a</xref>) to correct for measurement error in the RV estimator. The HARQ model for levels is</p>
<disp-formula id="E43"><label>(43)</label><mml:math id="M60"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mi>Q</mml:mi></mml:mrow></mml:msub><mml:mi>R</mml:mi><mml:msubsup><mml:mrow><mml:mi>Q</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B5;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>and for logs</p>
<disp-formula id="E45"><label>(44)</label><mml:math id="M62"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mi>Q</mml:mi></mml:mrow></mml:msub><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>Q</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msubsup></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x02003;&#x02003;&#x02003;&#x02003;</mml:mtext><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo 
class="qopname">ln</mml:mo><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B5;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
</sec>
</sec>
<sec id="s6">
<title>6. Application</title>
<p>We use the 5-min log-returns (<italic>M</italic> &#x0003D; 78 intraday observations per trading day) of IBM from January 2, 2001 to December 28, 2018 (<italic>T</italic> &#x0003D; 4,482 days). We use the first 80% of the data (until May 27, 2015) as the in-sample data and the last 20% as the out-of-sample data. To obtain forecasts from each model introduced earlier, the QLIKE loss (Patton, <xref ref-type="bibr" rid="B48">2011</xref>) between the forecast <italic>y</italic><sub><italic>t</italic></sub>(&#x003B8;) and the next period&#x00027;s RV, <italic>RV</italic><sub><italic>t</italic>&#x0002B;1</sub>, is minimized, i.e., the objective is to find</p>
<disp-formula id="E46"><label>(45)</label><mml:math id="M63"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mover accent="true"><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mo class="qopname">argmin</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow></mml:msub><mml:mo class="qopname">QLIKE</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where the QLIKE loss function is defined as</p>
<disp-formula id="E47"><label>(46)</label><mml:math id="M64"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mo class="qopname">QLIKE</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>-</mml:mo><mml:mo class="qopname">ln</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
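<p>As an illustration, the QLIKE loss above can be sketched in a few lines of <italic>Python</italic>; the function name <monospace>qlike</monospace> and the scalar form are our own conventions, not part of the original implementation.</p>

```python
from math import log

def qlike(rv_next, forecast):
    """QLIKE loss of Patton (2011): RV/y - ln(RV/y) - 1.

    Non-negative and zero only for a perfect forecast; unlike the MSE,
    it exploits the positivity of realized variance.
    """
    ratio = rv_next / forecast
    return ratio - log(ratio) - 1.0
```

<p>A perfect forecast attains the minimum loss of zero, while any other forecast yields a strictly positive loss.</p>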
<p>The QLIKE is a better choice than the mean squared error when forecasting volatility since it accounts for the positivity of the variable of interest. We implement all models (except the benchmark models) in <italic>Python</italic> using <italic>Keras</italic> (Chollet, <xref ref-type="bibr" rid="B21">2015</xref>) with the <italic>TensorFlow</italic> (Abadi et al., <xref ref-type="bibr" rid="B1">2015</xref>) backend. This workflow comes with a comprehensive set of functions that allows for custom types of neural networks. We implement, e.g., the Beta MIDAS model as a special case of an MLP that takes a 2 &#x000D7; 1 vector of ones as input and has a diagonal weight matrix coinciding with the parameters &#x003C6;<sub>1</sub> and &#x003C6;<sub>2</sub> of the Beta pdf. The layer then returns an <italic>M</italic> &#x000D7; 1 vector of weights associated with the standardized Beta pdf as described earlier. We estimate the parameters of all models under consideration (except the benchmark models) by Stochastic Gradient Descent (SGD). Since SGD introduces an implicit regularization of the parameters (Soudry et al., <xref ref-type="bibr" rid="B59">2018</xref>), this methodology should allow for a fair comparison of the forecasting results of the different models.</p>
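<p>The weighting scheme behind the Beta MIDAS layer can be illustrated with a short <italic>Python</italic> sketch. The function name <monospace>beta_midas_weights</monospace> and the exact grid convention are our assumptions; variants in the MIDAS literature differ in detail.</p>

```python
def beta_midas_weights(phi1, phi2, M):
    """M standardized lag weights from a Beta(phi1, phi2) pdf kernel.

    The density is evaluated on an interior grid of (0, 1) and the
    weights are normalized to sum to one (the grid choice is an
    assumption; conventions differ across the MIDAS literature).
    """
    grid = [(k + 1) / (M + 1) for k in range(M)]
    kernel = [x ** (phi1 - 1) * (1 - x) ** (phi2 - 1) for x in grid]
    total = sum(kernel)
    return [w / total for w in kernel]

# With phi1 = 1 and phi2 > 1 the weights decline monotonically across
# the M intraday observations.
weights = beta_midas_weights(1.0, 5.0, 78)
```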
<p>We train by Adaptive Moments SGD (ADAM, Kingma and Ba, <xref ref-type="bibr" rid="B43">2014</xref>) with a batch size (the length of a randomly selected sample used for one SGD parameter update) of 128. <italic>Keras</italic> computes the gradient of RNNs by Truncated Back Propagation Through Time (Rumelhart et al., <xref ref-type="bibr" rid="B54">1986</xref>); truncated in the sense that the computation of the gradient considers only a limited number of past lags. The truncation horizon is referred to as the <italic>lookback</italic>; it does not change the fact that the RNN considers the whole sequence of inputs when producing the forecast after training. We set the <italic>lookback</italic> equal to 128. We standardize the input data and divide the target data (the one-step-ahead RV) by its standard deviation. We do not demean the target data to ensure positivity. We store the standard deviations to re-scale the resulting predictions in each forecasting step.</p>
<p>We use an expanding window scheme for forecasting: We start by training for 1,000 epochs (one epoch means the algorithm went through the whole sample once) on the first 80% of the data and use the trained network and the newly available information to make a one-step-ahead prediction. Then, the model is re-trained for another 100 epochs for each one-step-ahead prediction with the previous iteration&#x00027;s parameter values as starting values, resulting in 897 out-of-sample forecasts. To make training more feasible, we employ early stopping criteria. These interrupt the training before the target number of epochs is reached if the training error has not improved over a specified number of past epochs. The term <italic>patience</italic> refers to this specified number of epochs. We set the minimum absolute change of the training loss to be considered an improvement to 10<sup>&#x02212;6</sup>, the initial training step <italic>patience</italic> to 500, and the <italic>patience</italic> in the re-training steps to 50. The code runs on an <italic>NVIDIA Tesla V100</italic> GPU on the bwHPC Cluster.</p>
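<p>The expanding window scheme can be sketched as follows; the <monospace>fit</monospace> and <monospace>predict</monospace> callables are hypothetical placeholders for the actual <italic>Keras</italic> training and prediction routines, not part of the original code.</p>

```python
def expanding_window_forecasts(data, split, fit, predict,
                               init_epochs=1000, retrain_epochs=100):
    """Expanding-window one-step-ahead forecasting.

    `fit(train, epochs, warm_start)` returns trained parameters and
    `predict(params, train)` returns the one-step-ahead forecast; both
    stand in for the actual training and prediction routines.
    """
    # initial long training run on the first `split` observations
    params = fit(data[:split], init_epochs, warm_start=None)
    forecasts = []
    for t in range(split, len(data)):
        forecasts.append(predict(params, data[:t]))
        # short re-training with previous parameters as starting values
        params = fit(data[:t + 1], retrain_epochs, warm_start=params)
    return forecasts
```

<p>Warm-starting each re-training step keeps the per-step cost low while letting the parameters adapt to the newly available observation.</p>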
<p>We estimate the HAR family benchmark models by OLS and the ARFIMA models using <italic>R</italic>&#x00027;s <italic>fracdiff</italic> (Maechler, <xref ref-type="bibr" rid="B46">2020</xref>) and <italic>forecast</italic> (Hyndman and Khandakar, <xref ref-type="bibr" rid="B41">2008</xref>) packages. On the in-sample data, an ARFIMA(5,d,2) model provides the best fit for the level of RV and an ARFIMA(0,d,1) for the logarithm of RV.</p>
<sec>
<title>6.1. Forecast Evaluation</title>
<p>We compare the forecasting performance of our presented model with varying inputs and the benchmark models for both levels and logs using different loss measures. First, we compare the average QLIKE loss of the different models. Next, we report the square root of the average squared error loss <inline-formula><mml:math id="M65"><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mover accent="false"><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> (the RMSE). We further compute Value at Risk (VaR) and Expected Shortfall (ES) forecasts based on the volatility forecasts. The VaR is the <italic>p</italic>-th quantile of the return distribution and the ES is the expected value of the return, given that the return is smaller than the VaR. We compute the daily log returns <italic>r</italic><sub><italic>t</italic></sub> as the sum of the intraday returns of day <italic>t</italic>, which is equivalent to the log return based on the difference between the log closing and opening prices. 
<xref ref-type="table" rid="T1">Table 1</xref> reports descriptive statistics of the daily returns (<italic>r</italic><sub><italic>t</italic></sub>), the RV estimated from 5-min log-returns (<italic>RV</italic><sub><italic>t</italic></sub>), and the standardized returns <inline-formula><mml:math id="M66"><mml:msub><mml:mrow><mml:mi>z</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>/</mml:mo><mml:msqrt><mml:mrow><mml:mi>R</mml:mi><mml:msub><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msqrt></mml:math></inline-formula> for the whole sample, the in- and the out-of-sample period.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Descriptive statistics.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th/>
<th valign="top" align="center"><bold>Min</bold></th>
<th valign="top" align="center"><bold>Max</bold></th>
<th valign="top" align="center"><bold>Mean</bold></th>
<th valign="top" align="center"><bold>Median</bold></th>
<th valign="top" align="center"><bold>Std</bold></th>
<th valign="top" align="center"><bold>Skewness</bold></th>
<th valign="top" align="center"><bold>Kurtosis</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" colspan="8"><bold>Whole-sample period</bold></td>
</tr>
<tr>
<td valign="top" align="left"><italic>r</italic><sub><italic>t</italic></sub></td>
<td valign="top" align="center">&#x02212;11.1695</td>
<td valign="top" align="center">11.6993</td>
<td valign="top" align="center">0.0053</td>
<td valign="top" align="center">0.0120</td>
<td valign="top" align="center">1.4997</td>
<td valign="top" align="center">0.1136</td>
<td valign="top" align="center">10.5653</td>
</tr>
<tr>
<td valign="top" align="left"><italic>RV</italic><sub><italic>t</italic></sub></td>
<td valign="top" align="center">0.219</td>
<td valign="top" align="center">130.5922</td>
<td valign="top" align="center">2.3877</td>
<td valign="top" align="center">1.0188</td>
<td valign="top" align="center">5.7425</td>
<td valign="top" align="center">10.0287</td>
<td valign="top" align="center">153.5745</td>
</tr>
<tr>
<td valign="top" align="left"><italic>z</italic><sub><italic>t</italic></sub></td>
<td valign="top" align="center">&#x02212;2.9571</td>
<td valign="top" align="center">3.2444</td>
<td valign="top" align="center">0.0308</td>
<td valign="top" align="center">0.0142</td>
<td valign="top" align="center">0.9554</td>
<td valign="top" align="center">0.0833</td>
<td valign="top" align="center">2.6061</td>
</tr>
<tr>
<td valign="top" align="left" colspan="8"><bold>In-sample period</bold></td>
</tr>
<tr>
<td valign="top" align="left"><italic>r</italic><sub><italic>t</italic></sub></td>
<td valign="top" align="center">&#x02212;11.1695</td>
<td valign="top" align="center">11.6993</td>
<td valign="top" align="center">0.0236</td>
<td valign="top" align="center">0.0148</td>
<td valign="top" align="center">1.5545</td>
<td valign="top" align="center">0.2314</td>
<td valign="top" align="center">10.1870</td>
</tr>
<tr>
<td valign="top" align="left"><italic>RV</italic><sub><italic>t</italic></sub></td>
<td valign="top" align="center">0.1325</td>
<td valign="top" align="center">130.5922</td>
<td valign="top" align="center">2.6083</td>
<td valign="top" align="center">1.1267</td>
<td valign="top" align="center">6.0041</td>
<td valign="top" align="center">9.9336</td>
<td valign="top" align="center">151.2284</td>
</tr>
<tr>
<td valign="top" align="left"><italic>z</italic><sub><italic>t</italic></sub></td>
<td valign="top" align="center">&#x02212;2.9571</td>
<td valign="top" align="center">3.2444</td>
<td valign="top" align="center">0.0408</td>
<td valign="top" align="center">0.0147</td>
<td valign="top" align="center">0.9514</td>
<td valign="top" align="center">0.0910</td>
<td valign="top" align="center">2.6362</td>
</tr>
<tr>
<td valign="top" align="left" colspan="8"><bold>Out-of-sample period</bold></td>
</tr>
<tr>
<td valign="top" align="left"><italic>r</italic><sub><italic>t</italic></sub></td>
<td valign="top" align="center">&#x02212;7.9331</td>
<td valign="top" align="center">8.5542</td>
<td valign="top" align="center">&#x02212;0.0679</td>
<td valign="top" align="center">0.0000</td>
<td valign="top" align="center">1.2548</td>
<td valign="top" align="center">&#x02212;0.8824</td>
<td valign="top" align="center">11.5016</td>
</tr>
<tr>
<td valign="top" align="left"><italic>RV</italic><sub><italic>t</italic></sub></td>
<td valign="top" align="center">0.1219</td>
<td valign="top" align="center">63.4346</td>
<td valign="top" align="center">1.5063</td>
<td valign="top" align="center">0.6800</td>
<td valign="top" align="center">4.4410</td>
<td valign="top" align="center">9.7437</td>
<td valign="top" align="center">113.8447</td>
</tr>
<tr>
<td valign="top" align="left"><italic>z</italic><sub><italic>t</italic></sub></td>
<td valign="top" align="center">&#x02212;2.5287</td>
<td valign="top" align="center">2.7506</td>
<td valign="top" align="center">&#x02212;0.0092</td>
<td valign="top" align="center">0.0000</td>
<td valign="top" align="center">0.9703</td>
<td valign="top" align="center">0.0594</td>
<td valign="top" align="center">2.4857</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>For the descriptive statistics, r<sub>t</sub> is scaled by 10<sup>2</sup> and RV<sub>t</sub> by 10<sup>4</sup></italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>After standardizing the daily returns, their skewness and kurtosis are close to those of a standard normal distribution. For the out-of-sample period, we cannot reject the null hypothesis of a Kolmogorov-Smirnov test that the standardized returns are standard normally distributed (<italic>p</italic>-value = 0.646). For the whole sample (<italic>p</italic>-value = 0.018) and the in-sample period (<italic>p</italic>-value = 0.011), we reject this hypothesis at the 5% level. These results are reasonable since the out-of-sample period does not contain the financial crisis. We thus use the normal distribution to compute forecasts of VaR and ES. We also compute forecasts of VaR and ES using the standardized Student-t distribution, where, similar to Brownlees and Gallo (<xref ref-type="bibr" rid="B19">2010</xref>), for each iteration in the expanding window, we estimate the degrees of freedom based on the information available up to time <italic>t</italic>. All estimated degrees of freedom are larger than 100, indicating no need to account for fat tails. Furthermore, the results of the statistical analysis and the ranking of the models do not change compared to the case of the normal distribution. We thus do not report the Student-t distribution results here. They are available from the authors on request.</p>
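<p>Under conditional normality with a zero conditional mean (an assumption of this sketch), the VaR and ES forecasts follow from the volatility forecast in closed form; the function name is our own:</p>

```python
from math import sqrt, exp, pi
from statistics import NormalDist

def var_es_normal(rv_forecast, p):
    """VaR and ES of the return distribution implied by an RV forecast.

    Assumes conditionally normal returns with mean zero and variance
    equal to the RV forecast; for small p both quantities are negative
    (losses in the left tail).
    """
    sigma = sqrt(rv_forecast)
    z = NormalDist().inv_cdf(p)               # p-th standard normal quantile
    pdf_z = exp(-0.5 * z * z) / sqrt(2 * pi)  # standard normal density at z
    var = sigma * z                           # VaR: p-th return quantile
    es = -sigma * pdf_z / p                   # ES: E[r | r <= VaR]
    return var, es
```

<p>For <italic>p</italic> &#x0003D; 1%, the ES forecast lies below the VaR forecast, as required by their definitions.</p>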
<p>To evaluate the performance of the models in forecasting VaR and ES, we use the <italic>asymmetric piece-wise linear</italic> loss function of Gneiting (<xref ref-type="bibr" rid="B32">2011</xref>) for the VaR and the <italic>zero-homogeneous</italic> loss function of Fissler and Ziegel (<xref ref-type="bibr" rid="B27">2016</xref>) for the VaR and the ES jointly.<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref> Using the short notation <italic>r</italic> &#x0003D; <italic>r</italic><sub><italic>t</italic></sub>, <inline-formula><mml:math id="M67"><mml:mover accent="false"><mml:mrow><mml:mi>V</mml:mi><mml:mi>a</mml:mi><mml:mi>R</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mover accent="false"><mml:mrow><mml:mi>V</mml:mi><mml:mi>a</mml:mi><mml:mi>R</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="M68"><mml:mover accent="false"><mml:mrow><mml:mi>E</mml:mi><mml:mi>S</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mover accent="false"><mml:mrow><mml:mi>E</mml:mi><mml:mi>S</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, these loss functions are</p>
<disp-formula id="E48"><label>(47)</label><mml:math id="M69"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>V</mml:mi><mml:mi>a</mml:mi><mml:mi>R</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mover accent="false"><mml:mrow><mml:mi>V</mml:mi><mml:mi>a</mml:mi><mml:mi>R</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>-</mml:mo><mml:mover accent="false"><mml:mrow><mml:mi>V</mml:mi><mml:mi>a</mml:mi><mml:mi>R</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D540;</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">{</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>&#x02264;</mml:mo><mml:mover accent="false"><mml:mrow><mml:mi>V</mml:mi><mml:mi>a</mml:mi><mml:mi>R</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">}</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>and</p>
<disp-formula id="E49"><label>(48)</label><mml:math id="M70"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>V</mml:mi><mml:mi>a</mml:mi><mml:mi>R</mml:mi><mml:mi>E</mml:mi><mml:mi>S</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover accent="false"><mml:mrow><mml:mi>V</mml:mi><mml:mi>a</mml:mi><mml:mi>R</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mover accent="false"><mml:mrow><mml:mi>E</mml:mi><mml:mi>S</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover accent="false"><mml:mrow><mml:mi>V</mml:mi><mml:mi>a</mml:mi><mml:mi>R</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mo>-</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mi>&#x1D540;</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">{</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>&#x02264;</mml:mo><mml:mover accent="false"><mml:mrow><mml:mi>V</mml:mi><mml:mi>a</mml:mi><mml:mi>R</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">}</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mover accent="false"><mml:mrow><mml:mi>E</mml:mi><mml:mi>S</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow></mml:mfrac><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mover accent="false"><mml:mrow><mml:mi>V</mml:mi><mml:mi>a</mml:mi><mml:mi>R</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mover accent="false"><mml:mrow><mml:mi>E</mml:mi><mml:mi>S</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow></mml:mfrac><mml:mo>&#x0002B;</mml:mo><mml:mo 
class="qopname">ln</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mover accent="false"><mml:mrow><mml:mi>E</mml:mi><mml:mi>S</mml:mi></mml:mrow><mml:mo class="qopname">^</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
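<p>Both scoring functions translate directly into code; this is a minimal sketch with function names of our own choosing:</p>

```python
from math import log

def loss_var(var_hat, r, p):
    """Asymmetric piece-wise linear (pinball) loss for a VaR forecast,
    as in Equation (47)."""
    hit = 1.0 if r <= var_hat else 0.0
    return (r - var_hat) * (p - hit)

def loss_var_es(var_hat, es_hat, r, p):
    """Zero-homogeneous joint loss for VaR and ES, as in Equation (48).

    Requires a strictly negative ES forecast so that log(-es_hat) is
    well-defined.
    """
    hit = 1.0 if r <= var_hat else 0.0
    return (-(var_hat - r) * hit / (p * es_hat)
            + var_hat / es_hat + log(-es_hat) - 1.0)
```

<p>Averaging these losses over the out-of-sample period yields the VaR and joint VaR-ES columns reported below; a lower average loss indicates better tail-risk forecasts.</p>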
</sec>
<sec>
<title>6.2. Results</title>
<p><xref ref-type="table" rid="T2">Table 2</xref> reports the out-of-sample losses introduced above for the different models. It further shows which models are in the Model Confidence Set (MCS) of Hansen et al. (<xref ref-type="bibr" rid="B37">2011</xref>) at the 10% level. We use the <italic>arch</italic> library of Sheppard et al. (<xref ref-type="bibr" rid="B57">2021</xref>) to compute the MCS <italic>p</italic>-values. In addition, we report the results of Binomial tests,<xref ref-type="fn" rid="fn0004"><sup>4</sup></xref> where we test each model against the other models, in <xref ref-type="supplementary-material" rid="SM1">Supplementary Figures 1&#x02013;6</xref>. The table consists of two main blocks, each of which again consists of multiple sub-blocks, as indicated by the horizontal lines. The first main block contains the results for the ANN models, where the first two rows show the results for the models that do not use the transformed measure as an additional input (indicated by the superscript O), i.e., the models corresponding to the restriction &#x003B2;<sup><italic>HF</italic></sup> &#x0003D; 0. The first model is a non-linear HAR estimated by SGD; it is non-linear since we use the exponential of a linear combination of daily, weekly, and monthly averages of log RV on the right-hand side. The second row shows the results of modeling the long memory in RV not via the restricted AR(22) structure of the HAR model but via an LSTM cell applied to the log RV.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Out of sample losses.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th/>
<th valign="top" align="left"><bold>Model</bold></th>
<th valign="top" align="center"><bold>QLIKE</bold></th>
<th valign="top" align="center"><bold>RMSE</bold></th>
<th valign="top" align="center"><bold>VaR<sub><bold>1<italic>%</italic></bold></sub></bold></th>
<th valign="top" align="center"><bold>VaR ES<sub><bold>1<italic>%</italic></bold></sub></bold></th>
<th valign="top" align="center"><bold>VaR<sub><bold>2.5<italic>%</italic></bold></sub></bold></th>
<th valign="top" align="center"><bold>VaR ES<sub><bold>2.5<italic>%</italic></bold></sub></bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="middle" align="left" style="border-bottom: thin solid #000000;" rowspan="32"><bold>ANN models</bold></td>
<td valign="top" align="left">HAR<sup>O</sup></td>
<td valign="top" align="center">0.6012<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4401</td>
<td valign="top" align="center">0.6598<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2635<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0444<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0502<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">LSTM<sup>O</sup></td>
<td valign="top" align="center">0.5683<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4356<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.6687<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.1732<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0497<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0029<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
</tr>
 <tr>
<td valign="top" align="left">HAR<sup>M&#x02212;B</sup>-F</td>
<td valign="top" align="center">0.5995<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4407<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.6580<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2793<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0374<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0671<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">HAR<sup>M&#x02212;L</sup>-F</td>
<td valign="top" align="center">0.5655<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4380<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6397<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.3329<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0167<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.1032<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">HAR<sup>LSTM1</sup>-F</td>
<td valign="top" align="center">0.5990<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4347<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6446<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2779<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0278<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0626<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">HAR<sup>LSTM8</sup>-F</td>
<td valign="top" align="center">0.5976<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4400</td>
<td valign="top" align="center">0.6576<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2711<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0420<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0545<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">HAR<sup>LSTM64</sup>-F</td>
<td valign="top" align="center">0.5992<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4400</td>
<td valign="top" align="center">0.6579<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2731<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0420<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0559<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">LSTM<sup>M&#x02212;B</sup>-F</td>
<td valign="top" align="center">0.5813<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4361<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.6616<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2074<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0462<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0209<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">LSTM<sup>M&#x02212;L</sup>-F</td>
<td valign="top" align="center">0.5458<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4354<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.6617<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2360<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0378<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0464<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">LSTM<sup>LSTM1</sup>-F</td>
<td valign="top" align="center">0.5779<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4406<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.6668<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.1577<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0513<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.9970<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">LSTM<sup>LSTM8</sup>-F</td>
<td valign="top" align="center">0.5701<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4356<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.6688<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.1669<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0508<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0011<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">LSTM<sup>LSTM64</sup>-F</td>
<td valign="top" align="center">0.5693<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4356<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.6626<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2115<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0479<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0200<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">O<sup>M&#x02212;B</sup>-F</td>
<td valign="top" align="center">0.7923<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.5205<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6666<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.5151<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.1599<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0053<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">O<sup>M&#x02212;L</sup>-F</td>
<td valign="top" align="center">0.7035<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4499<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.7301<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.1101<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.1650<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.8801<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">O<sup>LSTM1</sup>-F</td>
<td valign="top" align="center">0.7468<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4514<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6772<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.4570<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.1580<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.9845<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">O<sup>LSTM8</sup>-F</td>
<td valign="top" align="center">0.7536<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4517<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6695<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.4596<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.1496<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.9898<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">O<sup>LSTM64</sup>-F</td>
<td valign="top" align="center">0.7458<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4516<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6525<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.5659<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.1448<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0239<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
 <tr>
<td valign="top" align="left">HAR<sup>M&#x02212;B</sup></td>
<td valign="top" align="center">0.6108</td>
<td valign="top" align="center">0.4403</td>
<td valign="top" align="center">0.6613<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2538<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0485<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0407<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">HAR<sup>M&#x02212;L</sup></td>
<td valign="top" align="center">0.5764<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4406<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.6620<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2263<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0456<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0356<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">HAR<sup>LSTM1</sup></td>
<td valign="top" align="center">0.5453<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4339<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6224<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.4520<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center"><bold>0.9901</bold><xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center"><bold>&#x02212;3.1643</bold><xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">HAR<sup>LSTM8</sup></td>
<td valign="top" align="center">0.6029</td>
<td valign="top" align="center">0.4400</td>
<td valign="top" align="center">0.6576<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2768<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0401<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0606<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">HAR<sup>LSTM64</sup></td>
<td valign="top" align="center">0.5989<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4400</td>
<td valign="top" align="center">0.6594<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2576<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0448<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0472<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">LSTM<sup>M&#x02212;B</sup></td>
<td valign="top" align="center">0.5764<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4356<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6627<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2073<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0508<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0143<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">LSTM<sup>M&#x02212;L</sup></td>
<td valign="top" align="center">0.5567<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4350<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.6648<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.1684<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0413<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0117<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">LSTM<sup>LSTM1</sup></td>
<td valign="top" align="center"><bold>0.5371</bold><xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center"><bold>0.4316</bold><xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center"><bold>0.6193</bold><xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.4187<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.9993<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.1350<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">LSTM<sup>LSTM8</sup></td>
<td valign="top" align="center">0.5755<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.4352<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">0.6631<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.1964<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">1.0445<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0176<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">LSTM<sup>LSTM64</sup></td>
<td valign="top" align="center">0.5712<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4361<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6656<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.2058<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0504<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0169<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">O<sup>M&#x02212;B</sup></td>
<td valign="top" align="center">0.7463<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4525<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6494<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.5692<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.1381<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0315<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">O<sup>M&#x02212;L</sup></td>
<td valign="top" align="center">0.7480<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4529<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6515<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.5576<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.1389<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0276<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">O<sup>LSTM1</sup></td>
<td valign="top" align="center">0.6701<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4466<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6333<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center"><bold>&#x02212;2</bold>.<bold>5970</bold><xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0973<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0812<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">O<sup>LSTM8</sup></td>
<td valign="top" align="center">0.7413<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4517<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6519<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.5700<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.1396<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0323<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">O<sup>LSTM64</sup></td>
<td valign="top" align="center">0.7457<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4516<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6525<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.5681<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.1456<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0240<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="middle" align="left" rowspan="10"><bold>Benchmark models</bold></td>
<td valign="top" align="left">ARFIMA</td>
<td valign="top" align="center">0.6947<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4538<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6906<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.1262<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0833<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.9677<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">HAR</td>
<td valign="top" align="center">0.6758<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4532<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6975<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.1380<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0925<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.9620<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">CHAR</td>
<td valign="top" align="center">0.5665<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4384<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6462<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.3974<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0455<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0830<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">SHAR</td>
<td valign="top" align="center">0.6785<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4548<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6976<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.1348<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0928<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.9613<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">HARQ</td>
<td valign="top" align="center">0.6632<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4525<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6827<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.1610<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0736<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.9881<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
 <tr>
<td valign="top" align="left">ARFIMA-ln</td>
<td valign="top" align="center">0.6656<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4394<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.7095<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;1.9656<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0812<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.9242<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">HAR-ln</td>
<td valign="top" align="center">0.6751<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4396<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.7020<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.0047<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0703<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.9496<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">CHAR-ln</td>
<td valign="top" align="center">0.6141<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4373<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6773<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.1527<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0503<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;3.0169<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">SHAR-ln</td>
<td valign="top" align="center">0.6549<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4389<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6922<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.0547<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0615<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.9743<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">HARQ-ln</td>
<td valign="top" align="center">0.6413<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.4403<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.6930<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.0982<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">1.0660<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.9873<xref ref-type="table-fn" rid="TN2"><sup>&#x02020;</sup></xref><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The RMSE and VaR losses are scaled by 10<sup>3</sup>. Boldface numbers indicate the lowest out-of-sample loss</italic>.</p>
<fn id="TN1"><label>&#x0002A;</label><p><italic>Denotes models for which the H<sub>0</sub> of equal forecasting performance in a Binomial test with the HAR<sup>O</sup> model as benchmark is rejected at the 5% level, and</italic></p></fn>
<fn id="TN2"><label>&#x02020;</label><p><italic>denotes models that are in the Model Confidence set at the 10% level</italic>.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>Next follow the models that use the information from the transformed measure. The superscript indicates the type of transformation: O indicates no transformation, M-B and M-L indicate the Beta and the LSTM MIDAS transformation, respectively, and LSTM plus a number indicates the non-linear, LSTM-based transformation, where the number gives the number of LSTM cells in the hidden layer.</p>
<p>The model&#x00027;s name indicates the type of low-frequency information: HAR refers to the daily, weekly, and monthly averages, and LSTM refers to an LSTM cell applied to the sequence of log RV. They reflect choosing <inline-formula><mml:math id="M71"><mml:msubsup><mml:mrow><mml:mi>&#x003BD;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mo class="qopname">ln</mml:mo><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> and <inline-formula><mml:math id="M72"><mml:msubsup><mml:mrow><mml:mi>&#x003BD;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mi>L</mml:mi><mml:mi>S</mml:mi><mml:mi>T</mml:mi><mml:mi>M</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo 
class="qopname">ln</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>:</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003B8;</mml:mi></mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, respectively. The name O refers to only using the information from the transformed measure, i.e., it corresponds to the restriction &#x003B2;<sup><italic>LF</italic></sup> &#x0003D; 0. Here we again have two blocks. The first reports models that apply an LSTM cell to the sequence of the transformed measure. These models thus take into account the full information in the sequence of the transformed measure, indicated by -F in the model name. The second block refers to models that only use the most recent value of the transformed measure. The two blocks thus correspond to the choices <inline-formula><mml:math id="M74"><mml:msubsup><mml:mrow><mml:mi>&#x003BD;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>H</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>&#x01EF9;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="M73"><mml:msubsup><mml:mrow><mml:mi>&#x003BD;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>H</mml:mi><mml:mi>F</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, respectively.
The last block shows the results of the benchmark models, where we differentiate between them being applied to the level of RV and to the logarithm of RV (indicated by -ln in the model name). For the latter case, we transform the forecasts using the bias correction mentioned earlier.</p>
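<p>To illustrate the general idea behind such a bias correction: a naive exponentiation of log forecasts is downward biased. A common choice, assuming Gaussian forecast errors in logs, is the log-normal correction E[RV] = exp(m + s&#x000B2;/2); the sketch below is illustrative only, and estimating s&#x000B2; from in-sample residuals is an assumption, not necessarily the paper's exact procedure.</p>

```python
import numpy as np

def delog_forecast(log_forecasts, log_residuals):
    """Transform forecasts of ln(RV) back to the level of RV.

    A naive exp() is downward biased: if ln(RV_t) = m_t + e_t with
    e_t ~ N(0, s^2), then E[RV_t] = exp(m_t + s^2 / 2).  We estimate
    s^2 from in-sample residuals of the log model (an illustrative
    choice, not necessarily the paper's exact correction).
    """
    s2 = np.var(log_residuals, ddof=1)
    return np.exp(np.asarray(log_forecasts) + 0.5 * s2)
```

<p>With zero residual variance the correction reduces to a plain exp(); positive residual variance scales every forecast up by the same factor exp(s&#x000B2;/2).</p>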
<p>We first consider whether using only the transformed measure for forecasting volatility is fruitful. <xref ref-type="table" rid="T2">Table 2</xref> clearly shows that the models that only rely on the transformed measure (labeled O plus the superscript corresponding to the transformation used) are the worst-performing models within their respective blocks in terms of the QLIKE and the squared error loss. These models perform comparably or worse than the alternatives for the VaR loss and the joint loss of VaR and ES. The only exception is when jointly evaluating forecasts of VaR and ES at <italic>p</italic> = 1%. In this case, these models are the best-performing ones, and among them, the model that uses the non-linear transformation via one LSTM cell performs best. The differences in forecasting performance between the models that use only the transformed measure and those that also use the information on past RV (HAR and LSTM plus superscript) are significant in terms of a binomial test for equal forecasting performance at the 1% level, as <xref ref-type="fig" rid="F2">Figure 2</xref> shows. <xref ref-type="fig" rid="F2">Figure 2A</xref> displays the test decision when comparing the models that combine the HAR inputs with <italic>&#x01EF9;</italic><sub><italic>t</italic></sub> against the model that only uses <italic>&#x01EF9;</italic><sub><italic>t</italic></sub> (O plus superscript). The x-axis labels specify the type of transformation used to obtain the transformed measure. <xref ref-type="fig" rid="F2">Figure 2B</xref> shows the results for only using the most recent information in the transformed measure, i.e., combining the HAR inputs with <inline-formula><mml:math id="M76"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> vs. solely using <inline-formula><mml:math id="M77"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. <xref ref-type="fig" rid="F2">Figures 2C,D</xref> show the results for the case where the HAR inputs are replaced by the output of an LSTM cell applied to the sequence of log RV. The <italic>p</italic>-values are smaller than 0.01 in every case. For the QLIKE and the squared error loss, we can conclude that none of the transformations can extract enough information from the HF returns to replace the information on past RV for forecasting volatility. When forecasting the VaR and ES, these models yield results comparable to those of the other models. They outperform the alternative models only for the joint evaluation of the VaR and the ES at the 1% level.</p>
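<p>The binomial test used throughout compares two loss series by counting how often one model's daily loss is strictly smaller; under the null of equal performance this count is Binomial(n, 1/2). A minimal two-sided sketch (dropping tied days is one convention; the paper's exact handling of ties is not shown here):</p>

```python
from math import comb

def sign_test(loss_a, loss_b):
    """Two-sided binomial (sign) test of equal forecasting performance.

    Counts the days on which model A's loss is below model B's; under
    the null of equal performance this count is Binomial(n, 0.5).
    Tied days are dropped (an assumption about the convention used).
    """
    wins = sum(a < b for a, b in zip(loss_a, loss_b))
    n = sum(a != b for a, b in zip(loss_a, loss_b))
    # two-sided p-value for a symmetric Binomial(n, 1/2):
    # double the tail probability of the less likely outcome
    k = min(wins, n - wins)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)
```

<p>For example, if model A has the smaller loss on all of 4 non-tied days, the two-sided p-value is 2 &#x000B7; (1/2)<sup>4</sup> = 0.125.</p>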
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Results for a Binomial test of equal forecasting performance between the models that use only the transformed measure and their counterparts that use it in combination. <bold>(A)</bold> HAR-F vs. O-F. <bold>(B)</bold> HAR vs. O. <bold>(C)</bold> LSTM-F vs. O-F. <bold>(D)</bold> LSTM vs. O.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="frai-04-787534-g0002.tif"/>
</fig>
<p>Next, we address whether the -F models (the models that use the output of an LSTM cell applied to the sequence of the transformed measure) yield any differences in forecasting performance compared to the models that only use the most recent information from the transformation. <xref ref-type="fig" rid="F3">Figure 3</xref> shows the test results for differences between a model that only uses the most recent information in the transformed measure and its -F counterpart. At the 5% level, regardless of the LF input they are combined with, we see no significant differences in the forecasting performance of the MIDAS Beta, the LSTM8, and the LSTM64 transformation models compared to their -F counterparts. However, these differences are significant for the LSTM MIDAS and the LSTM1 transformation. When we only use the transformed measure, the LSTM MIDAS transformation model with full information produces lower average QLIKE and squared error losses. In contrast, the model that only uses the most recent information produces lower average losses when jointly evaluating VaR and ES. For the LSTM1 transformation, the model that only uses the most recent information yields the lower average loss for all loss functions. Combining the transformed measure with other LF information yields the following pattern: For the LSTM MIDAS transformation, where the differences are significant, the -F model gives the lower average losses. For the LSTM1 transformation, using only the most recent information yields lower average losses.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Results for a Binomial test of equal forecasting performance between the full information (<italic>&#x01EF9;</italic><sub><italic>t</italic></sub>) and the recent information (<inline-formula><mml:math id="M75"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>) models.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="frai-04-787534-g0003.tif"/>
</fig>
<p>Whether there are significant differences in the forecasting performance between the models that use <italic>&#x01EF9;</italic><sub><italic>t</italic></sub> and the models that use <inline-formula><mml:math id="M78"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is thus case-dependent. There are no significant differences for most models and transformations. For the LSTM MIDAS, it depends on whether it is used alone or in combination. In the former case, the -F models produce lower losses when the differences are significant. In the latter case, using the -F models yields the lower QLIKE and squared error losses. However, using <inline-formula><mml:math id="M79"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> produces lower VaR and ES losses. For the non-linear transformation through one LSTM cell, applying the transformation only to the most recent HF returns yields lower average losses. It seems that, in this case, the more distant information in the HF returns is accounted for by the RV. However, the most recent HF returns contain information that the lagged RV does not yet capture.</p>
<p>When we use the LSTM MIDAS transformation, it is necessary to use the sequential information in the transformed measure. An alternative explanation for this could be that the -F model introduces additional non-linearity into the transformed measure by applying an LSTM cell to its sequence. While <inline-formula><mml:math id="M80"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> in the LSTM MIDAS case is constructed linearly as a weighted sum, <italic>&#x01EF9;</italic><sub><italic>t</italic></sub> is a non-linear transformation of the sequence of that linear measure. So the better forecasting performance of the model that uses <italic>&#x01EF9;</italic><sub><italic>t</italic></sub> for the LSTM MIDAS transformation could be due to that non-linearity. However, the transformation of the HF returns through one LSTM cell is already the output of a non-linear function. Since only the most recent transformed measure is informative for this transformation, it appears that there are no gains from introducing more non-linearity through an LSTM cell on the sequence of transformed measures. Comparing these two against each other, we see that models that use only the most recent non-linear transformed measure produce lower losses than the models that use the LSTM cell applied to the LSTM MIDAS transformation. These differences are significant at the 1% level for all losses (see <xref ref-type="supplementary-material" rid="SM1">Supplementary Figures 1&#x02013;6</xref>). Thus, the non-linearity within the transformed measure seems to produce more helpful information for forecasting volatility than introducing non-linearity to the transformed measure obtained from the linear method.</p>
<p>Next, we address whether combining the transformed measure with the other LF inputs yields significant gains in forecasting compared to only using the LF inputs. <xref ref-type="fig" rid="F4">Figure 4</xref> displays the test results. <xref ref-type="fig" rid="F4">Figures 4A,B</xref> show the test results for combining the HAR model inputs with the transformed measures against the model that does not use the transformation. The x-axis labels again indicate the type of transformation. <xref ref-type="fig" rid="F4">Figure 4A</xref> shows the results for the -F models and <xref ref-type="fig" rid="F4">Figure 4B</xref> for the models that only use the most recent information. The lower part of the figure, <xref ref-type="fig" rid="F4">Figures 4C,D</xref>, displays the corresponding results when replacing the HAR inputs with the output of an LSTM cell applied to the sequence of log RV. Combining any of the LF inputs with the LSTM1 transformation yields forecasts that differ significantly from those of the models that omit the transformed measure, in every case and for all losses. In the case of the HAR model inputs, the combined model yields lower losses in both cases. For the LSTM input, the -F model performs worse, whereas the model that only uses the most recent HF information yields lower losses. For the other transformations, the results are case-dependent. When combined with the HAR inputs, the LSTM MIDAS model yields significantly different QLIKE and RMSE losses in the full information case. The HAR<sup>M&#x02212;L</sup>-F model yields the lower QLIKE and RMSE losses in these two cases. The differences are not statistically significant at the 5% level in the remaining cases. The non-linear transformation with 64 LSTM cells yields statistically different results for all losses but the QLIKE and the squared error loss in the full information case. Its losses are lower than those of the comparison model in these cases. 
When using only the recent information, the non-linear transformation with 64 LSTM cells does not deliver significantly different results. However, in the full information case, the Beta MIDAS transformation for the VaR and ES at <italic>p</italic> &#x0003D; 1% and <italic>p</italic> &#x0003D; 2.5% yields losses significantly different from the comparison model&#x00027;s (at the 5% level). In these cases, the Beta MIDAS transformation model yields slightly better results.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Results for a Binomial test of equal forecasting performance between the models that combine the transformed measure and their counterparts that do not use the transformed measure. <bold>(A)</bold> HAR-F vs. HAR<sup>O</sup>. <bold>(B)</bold> HAR vs. HAR<sup>O</sup>. <bold>(C)</bold> LSTM-F vs. LSTM<sup>O</sup>. <bold>(D)</bold> LSTM vs. LSTM<sup>O</sup>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="frai-04-787534-g0004.tif"/>
</fig>
<p>Next, we consider whether there are differences in the forecasting performance of the models depending on whether we use the HAR inputs or the output of an LSTM cell applied to past log RV. <xref ref-type="table" rid="T2">Table 2</xref> marks a rejection of the H<sub>0</sub> of the Binomial test for equal forecasting performance with respect to the HAR<sup>O</sup> model at the 5% level with an asterisk. From the table, we see that for the LSTM<sup>O</sup> model, we cannot reject the H<sub>0</sub> for any of the loss measures. Thus, there are no significant differences between the ANN model that only uses the HAR model inputs and the ANN model that uses an LSTM cell on past log RVs. One difference is that the LSTM<sup>O</sup> model is in the MCS at the 10% level for all losses, whereas the HAR<sup>O</sup> model is not in the 10% MCS for the squared error loss. For the remaining models that use the transformed measure, the test results are displayed in <xref ref-type="fig" rid="F5">Figure 5</xref>.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Results for a Binomial test of equal forecasting performance between the models that combine the transformed measure with the HAR inputs and their counterparts that use the LSTM cell applied to log RV. <bold>(A)</bold> HAR-F vs. LSTM-F. <bold>(B)</bold> HAR vs. LSTM.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="frai-04-787534-g0005.tif"/>
</fig>
<p>According to the figure, we cannot reject the H<sub>0</sub> at the 5% level for the QLIKE and the squared error loss. For the VaR and ES losses, we reject the H<sub>0</sub> at the 5% level for the Beta MIDAS, the LSTM8, and the LSTM64 transformations when using <italic>&#x01EF9;</italic><sub><italic>t</italic></sub>. In this case, the models that use the HAR inputs perform better than those using the LSTM input. When using <inline-formula><mml:math id="M81"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, we reject for the Beta and LSTM MIDAS models and the LSTM64 models considering the VaR and ES losses. In all except one case, the models that use the HAR inputs yield the lower out-of-sample loss. The only exception is the LSTM MIDAS model for the VaR<sub>2.5%</sub>, where the LSTM inputs perform marginally better. Overall, it appears that the HAR model inputs can approximate the long memory in the data to an extent comparable to that of an LSTM cell. However, we want to stress that we did not search for an optimal LSTM network architecture for this task. The purpose of the LSTM cell in this application is simply to circumvent the implicit lag order selection of the HAR model. A network of LSTM cells applied to the sequence of log RV as in Bucci (<xref ref-type="bibr" rid="B20">2020</xref>) might yield more consistent improvements in forecasting performance than the HAR model inputs. It is interesting to see that the daily, weekly, and monthly averages used in the HAR model are not only comparable to ARFIMA models in the extent to which they account for long memory (Corsi, <xref ref-type="bibr" rid="B23">2009</xref>), but also to an LSTM cell.</p>
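<p>The HAR inputs referred to above are the daily log RV together with its backward-looking weekly and monthly averages. A minimal construction, using the standard HAR convention of 5-day and 22-day windows:</p>

```python
import numpy as np

def har_inputs(log_rv):
    """Build the HAR regressors from a series of daily log RV:
    the daily value and its 5-day (weekly) and 22-day (monthly)
    backward-looking averages, following the usual HAR convention.
    Returns one row (daily, weekly, monthly) per usable day t >= 21.
    """
    x = np.asarray(log_rv, dtype=float)
    rows = []
    for t in range(21, len(x)):
        daily = x[t]
        weekly = x[t - 4:t + 1].mean()    # last 5 days incl. day t
        monthly = x[t - 21:t + 1].mean()  # last 22 days incl. day t
        rows.append((daily, weekly, monthly))
    return np.array(rows)
```

<p>Each row can then serve as the low-frequency input vector for forecasting day t+1, which is how the HAR-type models above consume past log RV.</p>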
<p>Among the benchmark models, the CHAR model performs best. It produces the lowest out-of-sample loss among the benchmark models both for the level and for the log of RV. Furthermore, for all losses except the RMSE, it produces lower losses in levels than in logs. It is the only benchmark model in the 10% MCS for all losses, and it produces lower QLIKE and RMSE losses than the HAR<sup>O</sup> model, i.e., the HAR model estimated by SGD. These differences are significant at the 5% level. For the other losses except the VaR<sub>2.5%</sub>, it also yields lower losses than the HAR<sup>O</sup>. This is in line with Rahimikia and Poon (<xref ref-type="bibr" rid="B50">2020a</xref>), who also find that the CHAR performs best among the HAR family models. Apart from the CHAR model, the remaining benchmark models cannot outperform any of the ANN models except those that only use the transformed measure.</p>
<p>We come to a short intermediate conclusion:</p>
<list list-type="order">
<list-item><p>We found that using only the transformed measure to forecast RV results in higher out-of-sample forecast losses than combining the transformed measure with information on past log RV. This holds especially true for the QLIKE and the RMSE loss. The only exception is the loss from jointly evaluating the VaR and ES at <italic>p</italic> = 1%.</p></list-item>
<list-item><p>We found that when the transformed measure is constructed by linear means, it is crucial to consider the sequential information in the transformed measure. However, this might be due to non-linearity induced through the LSTM cell that we apply to the sequence of the transformed measure. When the transformed measure is constructed non-linearly, by contrast, it is sufficient to use only the most recent information, which in most cases yields better forecasting performance.</p></list-item>
<list-item><p>The non-linear transformation through one LSTM cell seems superior to the other transformations throughout the statistical analysis. The models performing best are those that use this transformation. Further, we have the most statistical evidence for differences in the forecasting performance for these models. We will further investigate this in the following.</p></list-item>
<list-item><p>For the QLIKE and the RMSE loss, there are no statistical differences in the performance of the models that use the HAR inputs and the models that use an LSTM cell applied to log RV. The daily, weekly, and monthly averages of log RV appear to be sufficient to account for the long memory in the data. Especially when combined with the LSTM1 transformed measure, this also holds for all other losses.</p></list-item>
</list>
<p>This short wrap-up leads to two hypotheses. First, the non-linear transformation through one LSTM cell is superior to all other transformations. Second, the models that combine the transformed measure from such a non-linear transformation with the information on past log RV perform better than all other models. These two models are the two best-ranked models for each loss measure, except the joint evaluation of VaR<sub>1%</sub> and ES<sub>1%</sub>. We cannot reject that these two models perform equally well for any of the losses (see <xref ref-type="supplementary-material" rid="SM1">Supplementary Figures 1&#x02013;6</xref>).</p>
<p>Investigating these hypotheses results in non-pairwise comparisons of the models. Further, the hypotheses are uni-directional, i.e., we are interested in whether these models perform better than the competitors. Thus, we cannot use a Binomial test for equal forecasting performance and instead use the test for superior predictive ability (SPA test) of Hansen (<xref ref-type="bibr" rid="B36">2005</xref>). We use the <italic>arch</italic> library of Sheppard et al. (<xref ref-type="bibr" rid="B57">2021</xref>) to perform the SPA test. When computing the <italic>p</italic>-values, we use a block bootstrap with the number of bootstrap resamplings set to 1,000 and the block length set to 5. The results are not sensitive to the choice of these two values: we also computed the <italic>p</italic>-values with the resamplings set to 3,000, 5,000, 7,000, and 9,000 and block lengths of 10, 15, 20, &#x02026;, 95, 100, and the results changed little. The SPA test examines whether the expected loss difference between a candidate model and a set of alternative models is smaller than or equal to zero. A rejection of the null hypothesis thus means that some model among the alternatives performs significantly better than the candidate model.</p>
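<p>To convey the idea behind the SPA null, the following is a stylized NumPy sketch: bootstrap the maximum recentred mean loss differential with a moving block bootstrap. It deliberately omits the studentization and null-recentring refinements of Hansen's actual procedure, so for real applications the <italic>arch</italic> library's implementation used in the paper should be preferred.</p>

```python
import numpy as np

def spa_sketch(candidate, alternatives, reps=1000, block=5, seed=0):
    """Stylized sketch of the idea behind Hansen's SPA test.

    d[i, t] = loss(candidate)_t - loss(alternative i)_t; the null is
    that every E[d_i] <= 0 (no alternative beats the candidate).  We
    bootstrap the maximum recentred mean differential with a moving
    block bootstrap.  This omits the studentization and null
    recentring of the actual test -- an illustration only.
    """
    rng = np.random.default_rng(seed)
    d = np.asarray(candidate)[None, :] - np.asarray(alternatives)
    k, n = d.shape
    stat = d.mean(axis=1).max()                  # observed max mean differential
    boot = np.empty(reps)
    for r in range(reps):
        starts = rng.integers(0, n - block + 1, size=n // block + 1)
        idx = np.concatenate([np.arange(s, s + block) for s in starts])[:n]
        db = d[:, idx]
        boot[r] = (db.mean(axis=1) - d.mean(axis=1)).max()
    return np.mean(boot >= stat)                 # bootstrap p-value
```

<p>A small p-value indicates that at least one alternative model has a significantly lower expected loss than the candidate.</p>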
<p>We start by reporting the <italic>p</italic>-values of a sequence of SPA tests where we use the LSTM1 transformation models as candidates against the models that use the other transformations. The <italic>p</italic>-values displayed in <xref ref-type="table" rid="T3">Table 3</xref> show that, at the 5% level, we cannot reject the H<sub>0</sub> of the SPA test in any case. Thus, at the 5% level, the non-linear transformation by one LSTM cell gives forecasting losses smaller than or equal to those of all alternative transformations used. This holds for any loss function. At the 10% level, we reject the H<sub>0</sub> for the models that use the full information on the transformed measure (upper part of the table) and the joint loss of VaR and ES at 2.5%. Thus, for this loss, at least one transformation works better. Overall, however, this evidence supports the first hypothesis that the non-linear transformation through one LSTM cell performs best.</p>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p><italic>p</italic>-values of SPA tests for the LSTM1 against the alternative transformations.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th/>
<th valign="top" align="center"><bold>QLIKE</bold></th>
<th valign="top" align="center"><bold>RMSE</bold></th>
<th valign="top" align="center"><bold>VaR<sub><bold>1<italic>%</italic></bold></sub></bold></th>
<th valign="top" align="center"><bold>VaR ES<sub><bold>1<italic>%</italic></bold></sub></bold></th>
<th valign="top" align="center"><bold>VaR<sub><bold>2.5<italic>%</italic></bold></sub></bold></th>
<th valign="top" align="center"><bold>VaR ES<sub><bold>2.5<italic>%</italic></bold></sub></bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">HAR<sup>LSTM1</sup>-F</td>
<td valign="top" align="center">0.182</td>
<td valign="top" align="center">0.870</td>
<td valign="top" align="center">0.542</td>
<td valign="top" align="center">0.385</td>
<td valign="top" align="center">0.365</td>
<td valign="top" align="center">0.232</td>
</tr>
<tr>
<td valign="top" align="left">LSTM<sup>LSTM1</sup>-F</td>
<td valign="top" align="center">0.124</td>
<td valign="top" align="center">0.135</td>
<td valign="top" align="center">0.456</td>
<td valign="top" align="center">0.156</td>
<td valign="top" align="center">0.162</td>
<td valign="top" align="center">0.086</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">O<sup>LSTM1</sup>-F</td>
<td valign="top" align="center">0.233</td>
<td valign="top" align="center">0.724</td>
<td valign="top" align="center">0.405</td>
<td valign="top" align="center">0.431</td>
<td valign="top" align="center">0.603</td>
<td valign="top" align="center">0.517</td>
</tr>
<tr>
<td valign="top" align="left">HAR<sup>LSTM1</sup></td>
<td valign="top" align="center">0.823</td>
<td valign="top" align="center">0.937</td>
<td valign="top" align="center">0.580</td>
<td valign="top" align="center">0.567</td>
<td valign="top" align="center">0.543</td>
<td valign="top" align="center">0.546</td>
</tr>
<tr>
<td valign="top" align="left">LSTM<sup>LSTM1</sup></td>
<td valign="top" align="center">0.707</td>
<td valign="top" align="center">0.967</td>
<td valign="top" align="center">0.607</td>
<td valign="top" align="center">0.578</td>
<td valign="top" align="center">0.548</td>
<td valign="top" align="center">0.591</td>
</tr>
<tr>
<td valign="top" align="left">O<sup>LSTM1</sup></td>
<td valign="top" align="center">0.543</td>
<td valign="top" align="center">0.593</td>
<td valign="top" align="center">0.906</td>
<td valign="top" align="center">0.789</td>
<td valign="top" align="center">0.601</td>
<td valign="top" align="center">0.975</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>To assess the second hypothesis, we use all models excluding the HAR<sup>LSTM1</sup> and LSTM<sup>LSTM1</sup> as the set of alternatives. We then apply the SPA test with each of these two models as the candidate. <xref ref-type="table" rid="T4">Table 4</xref> displays the <italic>p</italic>-values of those tests. Again, the null hypothesis that no alternative model performs better than either of the two models under consideration cannot be rejected for any loss function. Among the considered models, including the benchmarks for logs and levels, no model performs significantly better than the HAR<sup>LSTM1</sup> and the LSTM<sup>LSTM1</sup>.</p>
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p><italic>p</italic>-values of SPA tests for HAR<sup>LSTM1</sup> and LSTM<sup>LSTM1</sup> against the remaining models.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th/>
<th valign="top" align="center"><bold>QLIKE</bold></th>
<th valign="top" align="center"><bold>RMSE</bold></th>
<th valign="top" align="center"><bold>VaR<sub><bold>1<italic>%</italic></bold></sub></bold></th>
<th valign="top" align="center"><bold>VaR ES<sub><bold>1<italic>%</italic></bold></sub></bold></th>
<th valign="top" align="center"><bold>VaR<sub><bold>2.5<italic>%</italic></bold></sub></bold></th>
<th valign="top" align="center"><bold>VaR ES<sub><bold>2.5<italic>%</italic></bold></sub></bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">HAR<sup>LSTM1</sup></td>
<td valign="top" align="center">0.726</td>
<td valign="top" align="center">0.952</td>
<td valign="top" align="center">0.975</td>
<td valign="top" align="center">0.527</td>
<td valign="top" align="center">0.955</td>
<td valign="top" align="center">0.996</td>
</tr>
<tr>
<td valign="top" align="left">LSTM<sup>LSTM1</sup></td>
<td valign="top" align="center">0.794</td>
<td valign="top" align="center">0.993</td>
<td valign="top" align="center">0.970</td>
<td valign="top" align="center">0.408</td>
<td valign="top" align="center">0.928</td>
<td valign="top" align="center">0.973</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec sec-type="conclusions" id="s7">
<title>7. Conclusion</title>
<p>This paper forecasts daily volatility utilizing information extracted from the intraday high-frequency (HF) returns through Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs). These structures are flexible in the degree of non-linearity they allow for and capture long persistence in the data. Our method extracts a non-linear, scalar transformation of the HF returns (referred to as the transformed HF measure). We use this measure to make one-step-ahead predictions of the daily volatility. We vary the degree of non-linearity by testing different numbers of LSTM cells in the RNN and find no merit in using more than one LSTM cell for the non-linear transformation. For comparison, we implement two Mixed Data Sampling (MIDAS) approaches to construct the transformation of the HF returns. The MIDAS models obtain weights associated with the HF returns and build the transformation as a weighted sum. The first MIDAS model generates the weight associated with each lag of an intraday return through an LSTM cell (LSTM MIDAS). The second is an Artificial Neural Network (ANN) implementation of the Beta Lag Polynomial MIDAS (Beta MIDAS) (Ghysels et al., <xref ref-type="bibr" rid="B30">2004</xref>).</p>
<p>To account for dynamics and long memory in the volatility series, we apply an LSTM cell to the sequence of transformed measures. However, we also compare settings in which we only use the most recent value of the transformed measure, since the information in the HF returns might only be &#x0201C;new&#x0201D; for a short time; further in the past, it is probably already incorporated in the RV estimator. We compare the forecasting performance of models based solely on the transformed HF measure to that of models that only use the information from past Realized Volatility (RV): namely, the HAR model and a model that applies an LSTM cell to the sequence of past RVs. The HAR model is one of the most popular models for approximating long memory in the volatility series, while LSTM RNNs can account for complex non-linear dependencies in the data and capture long-term dependencies. This comparison assesses whether the proposed transformation can extract the same or more information from the HF returns as the RV estimator. Finally, we combine the information from the transformed measure with the information from the RV for the forecast. We can thus investigate whether our proposed transformations extract information from the HF returns that is supplementary to the RV information when forecasting volatility.</p>
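<p>The forward recursion of a single LSTM cell, which underlies both the transformation and the sequence models described above, can be sketched in plain numpy. The gate ordering, shapes, and random placeholder weights below are illustrative assumptions; the paper fits these parameters with TensorFlow/Keras.</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_forward(x_seq, W, U, b, n_hidden):
    """Run one LSTM cell over a sequence and return the final hidden state.

    W: (4*n_hidden, n_input), U: (4*n_hidden, n_hidden), b: (4*n_hidden,).
    Gates are stacked in the order [input, forget, cell candidate, output].
    """
    h = np.zeros(n_hidden)
    c = np.zeros(n_hidden)
    for x in x_seq:
        z = W @ np.atleast_1d(x) + U @ h + b
        i = sigmoid(z[:n_hidden])               # input gate
        f = sigmoid(z[n_hidden:2 * n_hidden])   # forget gate
        g = np.tanh(z[2 * n_hidden:3 * n_hidden])  # candidate cell state
        o = sigmoid(z[3 * n_hidden:])           # output gate
        c = f * c + i * g                       # cell state: carries long memory
        h = o * np.tanh(c)                      # hidden state: fed to the next step
    return h
```

<p>The forget gate lets the cell state retain information over many steps, which is why the LSTM can capture the long persistence in the volatility series that plain recurrent cells struggle with.</p>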
<p>In an expanding window forecasting exercise on IBM stock data, we compare the out-of-sample volatility forecasting performance of the models. We further compute Value at Risk (VaR) and Expected Shortfall (ES) forecasts based on the volatility forecasts and perform a thorough statistical analysis to identify the source of the improved forecasting performance. Our results on the data set under consideration are four-fold:</p>
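<p>Given a volatility forecast, VaR and ES follow directly once a conditional return distribution is assumed. The stdlib-only sketch below uses conditional normality, a common simplification that need not match the paper&#x00027;s exact distributional choice, and reports both quantities as positive losses.</p>

```python
from statistics import NormalDist

def var_es_from_vol(sigma: float, alpha: float, mu: float = 0.0):
    """One-day VaR and ES (as positive losses) from a volatility forecast,
    assuming conditionally normal returns."""
    std_norm = NormalDist()
    z = std_norm.inv_cdf(alpha)                    # lower-tail quantile, e.g. -2.326 at 1%
    var = -(mu + sigma * z)                        # Value at Risk
    es = -(mu - sigma * std_norm.pdf(z) / alpha)   # ES: mean loss beyond the VaR
    return var, es
```

<p>For sigma = 1 and alpha = 1%, this yields a VaR of about 2.326 standard deviations and an ES of about 2.665, the familiar normal-tail values; in the forecasting exercise sigma would be replaced by each model&#x00027;s one-step-ahead volatility forecast.</p>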
<p>First, they show that making volatility forecasts based solely on the transformed HF measure is not fruitful. None of the transformations produces a measure that accounts for the long persistence in the volatility, independent of whether we account for dynamics in the transformed measure or only take its most recent value for the forecast. Interestingly, when jointly evaluating the VaR<sub>1%</sub> and ES<sub>1%</sub> forecasts based on the volatility forecasts, these models perform better than the alternatives; for the 2.5% VaR and ES, however, their performance is again worse than or comparable to the alternatives. When forecasting volatility, the transformations we propose are thus unable to extract the same information from the high-frequency returns as the RV estimator. Since the RV estimator is, ex post, a consistent estimator of a day&#x00027;s volatility, it is crucial to take this information into account for the forecasting task. More complex non-linear ANN structures might be able to extract the same amount of information from the HF returns, but in our view it is more fruitful to ease the forecasting task by using the RV information directly.</p>
<p>Second, for most cases there is no difference between using the sequence of the transformed measure and only its most recent value. Significant differences appear only for the LSTM MIDAS transformation and the non-linear transformation based on one LSTM cell. The LSTM MIDAS transformation excels when we account for dynamics in the transformed measure; in contrast, the non-linear transformation excels when only using the most recent information. Though puzzling at first, this finding is quite intuitive. The LSTM MIDAS builds the transformed measure as a weighted sum, so the transformation is linear. This linearity is insufficient to extract additional information from the HF returns; therefore, the model that only uses the most recent transformed measure performs, in this case, no differently than the model that does not use the information at all. When we instead account for dynamics in the sequence of transformed measures by applying an LSTM cell to it, we circumvent the trouble of lag order selection and introduce non-linearity into the transformed measure, which likely explains these models&#x00027; better performance. When we use an LSTM cell to transform the HF returns non-linearly, there are no additional gains from accounting for dynamics in the measure; in some cases, accounting for dynamics even leads to worse forecasting performance. We thus conclude that the transformation must be non-linear to extract additional information from the HF returns, while allowing for dynamics in the non-linearly obtained transformed measure adds no further gains. This coincides with our previous finding that the additional information in the HF returns gets picked up by the RV estimator further in the past; in the short run, though, this information is helpful for the prediction of volatility.</p>
<p>Third, we add to the literature by providing further evidence of the improved forecasting performance of ANN models compared to the linear HAR model benchmark. Our models that do not include the transformed measure, i.e., that only use either the HAR model inputs or apply an LSTM cell to the sequence of RV, perform significantly differently from the classical HAR model, whether estimated in logs or in levels. The simple non-linearity we induce by modeling the exponential of the linear combination of past daily, weekly, and monthly averages of the logarithm of RV is already sufficient to outperform the classical linear HAR for both logs and levels. Our results thus add to the evidence provided by, e.g., Rosa et al. (<xref ref-type="bibr" rid="B52">2014</xref>) and Arneri&#x00107; et al. (<xref ref-type="bibr" rid="B8">2018</xref>). We also apply an LSTM cell to the sequence of the logarithm of RV as an alternative to the HAR inputs. The LSTM cell allows for a high degree of non-linearity and captures long memory in the data. In most cases, we find no significant differences between the LSTM and the HAR input models when predicting volatility. Our findings thus indicate that, for the simple structures we use, the HAR inputs capture the long persistence in the volatility series as well as the LSTM cell on the data set under consideration. To some extent, this contradicts the findings of Bucci (<xref ref-type="bibr" rid="B20">2020</xref>), who finds that gated recurrent ANNs such as LSTM RNNs outperform ANNs that do not account for long memory in the data. However, that author forecasts the logarithm of the square root of monthly RV and not, as in our case, the level of daily RV. When constructing VaR and ES forecasts based on the volatility forecasts, we find significant differences in the performance of the HAR and the LSTM input models, with the HAR input models showing the better performance for these quantities.</p>
<p>Fourth, the statistical analysis of the forecasting results points toward two hypotheses. First, the non-linear transformation through one LSTM cell is superior to all alternative transformations we suggest, especially when only the most recent value of the transformed measure is used. Through a sequence of tests for superior predictive ability (SPA tests), we find that the non-linear transformation through one LSTM cell outperforms the alternatives. When only considering the most recent HF information, this result holds under conservative choices of the significance level; in the setting where we account for dynamics in the transformed measure, it only holds for less conservative choices (5%). The non-linear transformation through one LSTM cell thus outperforms both the MIDAS alternatives and the alternatives that allow for higher degrees of non-linearity through a network of LSTM cells. This is very convenient, since it circumvents the challenging task of finding the optimal network architecture for the transformation. Second, combining this transformed measure with the information on past RV yields forecasting performance superior to all other models under consideration. Another sequence of SPA tests shows that the models that augment the information from the log RV with the most recent transformation from one LSTM cell significantly outperform all alternative models, including the benchmarks. When augmented by the most recent transformation from one LSTM cell, there are no significant differences between the model that uses an LSTM on past log RV and the model that uses the HAR inputs. So in this case as well, the HAR model&#x00027;s lagged daily, weekly, and monthly averages approximate the long persistence in the volatility as well as the LSTM cell.</p>
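<p>The flavor of such an SPA comparison can be conveyed with a simplified reality-check bootstrap over loss differentials. This sketch uses iid resampling rather than the studentized stationary bootstrap of Hansen (2005) that proper SPA testing requires, so it is an illustrative approximation only, and the function name is ours.</p>

```python
import numpy as np

def reality_check_pvalue(bench_loss, model_losses, n_boot=2000, seed=0):
    """Simplified bootstrap test of superior predictive ability.

    bench_loss: (T,) forecast losses of the benchmark model.
    model_losses: (K, T) losses of K competing models.
    H0: no competitor improves on the benchmark in expectation.
    """
    rng = np.random.default_rng(seed)
    d = bench_loss[None, :] - model_losses        # (K, T) loss differentials
    T = d.shape[1]
    stat = np.sqrt(T) * d.mean(axis=1).max()      # best relative performance
    # bootstrap the recentred maximum to approximate its null distribution
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, T, size=T)          # iid resampling (simplification)
        boot[b] = np.sqrt(T) * (d[:, idx].mean(axis=1) - d.mean(axis=1)).max()
    return float(np.mean(boot >= stat))
```

<p>A small p-value rejects the null that the benchmark is not outperformed by any competitor; resampling all models&#x00027; differentials with the same time indices preserves their cross-sectional dependence.</p>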
<p>Our analysis thus points toward a new type of HAR model that augments the classical HAR with a non-linear transformation of the HF returns within a day. These results are in line with the findings of Rahimikia and Poon (<xref ref-type="bibr" rid="B50">2020a</xref>), who also find that their proposed HAR model augmented by HF limited order book and news sentiment data shows superior forecasting performance. However, the information we utilize for the augmentation does not stem from an auxiliary source such as news feeds but from the same information used to construct the RV estimator. Our resulting models can outperform some of the most popular benchmark models in the literature, such as ARFIMA models, the HAR, the CHAR, and the HARQ model. A natural extension of the presented work would be to use Bi-Power Variation and Realized Quarticity measures as additional inputs for the forecasting task. One could then assess whether, in this case, there are also gains in forecasting performance from augmenting this model with the non-linear transformation of the HF returns through one LSTM cell.</p>
</sec>
<sec sec-type="data-availability" id="s8">
<title>Data Availability Statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="s9">
<title>Author Contributions</title>
<p>CM contributed to the conceptualization of the idea, implemented the code, and wrote the manuscript.</p>

</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x00027;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ack><p>We want to thank Gerhard Fechteler, Eric Ghysels, Lyudmila Grigoryeva, Roxana Halbleib, Ekaterina Kazak, Ingmar Nolte, Winfried Pohlmeier, the members of the Chair of Econometrics at the Department of Economics at the University of Konstanz, Germany, and two anonymous referees, for helpful comments. All remaining errors are ours. We acknowledge support by the state of Baden-W&#x000FC;rttemberg through bwHPC. The author acknowledges financial support from the German federal state of Baden-W&#x000FC;rttemberg through a Landesgraduiertenstipendium.</p></ack>
<sec sec-type="supplementary-material" id="s11">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/frai.2021.787534/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/frai.2021.787534/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Presentation_1.pdf" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Abadi</surname> <given-names>M.</given-names></name> <name><surname>Agarwal</surname> <given-names>A.</given-names></name> <name><surname>Barham</surname> <given-names>P.</given-names></name> <name><surname>Brevdo</surname> <given-names>E.</given-names></name> <name><surname>Chen</surname> <given-names>Z.</given-names></name> <name><surname>Citro</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2015</year>). <source>TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems</source>. Available online at: tensorflow.org.</citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Andersen</surname> <given-names>T. G.</given-names></name> <name><surname>Bollerslev</surname> <given-names>T.</given-names></name> <name><surname>Diebold</surname> <given-names>F. X.</given-names></name></person-group> (<year>2007</year>). <article-title>Roughing it up: including jump components in the measurement, modeling, and forecasting of return volatility</article-title>. <source>Rev. Econ. Stat</source>. <volume>89</volume>, <fpage>701</fpage>&#x02013;<lpage>720</lpage>. <pub-id pub-id-type="doi">10.1162/rest.89.4.701</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Andersen</surname> <given-names>T. G.</given-names></name> <name><surname>Bollerslev</surname> <given-names>T.</given-names></name> <name><surname>Diebold</surname> <given-names>F. X.</given-names></name> <name><surname>Ebens</surname> <given-names>H.</given-names></name></person-group> (<year>2001a</year>). <article-title>The distribution of realized stock return volatility</article-title>. <source>J. Finan. Econ</source>. <volume>61</volume>, <fpage>43</fpage>&#x02013;<lpage>76</lpage>. <pub-id pub-id-type="doi">10.1016/S0304-405X(01)00055-1</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Andersen</surname> <given-names>T. G.</given-names></name> <name><surname>Bollerslev</surname> <given-names>T.</given-names></name> <name><surname>Diebold</surname> <given-names>F. X.</given-names></name> <name><surname>Labys</surname> <given-names>P.</given-names></name></person-group> (<year>2001b</year>). <article-title>The distribution of realized exchange rate volatility</article-title>. <source>J. Am. Stat. Assoc</source>. <volume>96</volume>, <fpage>42</fpage>&#x02013;<lpage>55</lpage>. <pub-id pub-id-type="doi">10.1198/016214501750332965</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Andersen</surname> <given-names>T. G.</given-names></name> <name><surname>Bollerslev</surname> <given-names>T.</given-names></name> <name><surname>Diebold</surname> <given-names>F. X.</given-names></name> <name><surname>Labys</surname> <given-names>P.</given-names></name></person-group> (<year>2003</year>). <article-title>Modeling and forecasting realized volatility</article-title>. <source>Econometrica</source> <volume>71</volume>, <fpage>579</fpage>&#x02013;<lpage>625</lpage>. <pub-id pub-id-type="doi">10.1111/1468-0262.00418</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Andersen</surname> <given-names>T. G.</given-names></name> <name><surname>Bollerslev</surname> <given-names>T.</given-names></name> <name><surname>Meddahi</surname> <given-names>N.</given-names></name></person-group> (<year>2004</year>). <article-title>Analytical evaluation of volatility forecasts</article-title>. <source>Int. Econ. Rev</source>. <volume>45</volume>, <fpage>1079</fpage>&#x02013;<lpage>1110</lpage>. <pub-id pub-id-type="doi">10.1111/j.0020-6598.2004.00298.x</pub-id><pub-id pub-id-type="pmid">26057584</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arneri&#x00107;</surname> <given-names>J.</given-names></name> <name><surname>Poklepovi&#x00107;</surname> <given-names>T.</given-names></name> <name><surname>Aljinovi&#x00107;</surname> <given-names>Z.</given-names></name></person-group> (<year>2014</year>). <article-title>Garch based artificial neural networks in forecasting conditional variance of stock returns</article-title>. <source>Croat. Oper. Res. Rev</source>. <fpage>329</fpage>&#x02013;<lpage>343</lpage>. <pub-id pub-id-type="doi">10.17535/crorr.2014.0017</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arneri&#x00107;</surname> <given-names>J.</given-names></name> <name><surname>Poklepovi&#x00107;</surname> <given-names>T.</given-names></name> <name><surname>Teai</surname> <given-names>J. W.</given-names></name></person-group> (<year>2018</year>). <article-title>Neural network approach in forecasting realized variance using high-frequency data</article-title>. <source>Bus. Syst. Res</source>. <volume>9</volume>, <fpage>18</fpage>&#x02013;<lpage>34</lpage>. <pub-id pub-id-type="doi">10.2478/bsrj-2018-0016</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Audrino</surname> <given-names>F.</given-names></name> <name><surname>Knaus</surname> <given-names>S. D.</given-names></name></person-group> (<year>2016</year>). <article-title>Lassoing the HAR model: a model selection perspective on realized volatility dynamics</article-title>. <source>Econ. Rev</source>. <volume>35</volume>, <fpage>1485</fpage>&#x02013;<lpage>1521</lpage>. <pub-id pub-id-type="doi">10.1080/07474938.2015.1092801</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Barndorff-Nielsen</surname> <given-names>O. E.</given-names></name> <name><surname>Kinnebrock</surname> <given-names>S.</given-names></name> <name><surname>Shephard</surname> <given-names>N.</given-names></name></person-group> (<year>2010</year>). <article-title>Measuring downside risk - realized semivariance</article-title>, in <source>Volatility and Time Series Econometrics: Essays in Honor of Robert F. Engle</source>, eds <person-group person-group-type="editor"><name><surname>Bollerslev</surname> <given-names>T.</given-names></name> <name><surname>Russel</surname> <given-names>J.</given-names></name> <name><surname>Watson</surname> <given-names>M.</given-names></name></person-group> (<publisher-loc>London, UK</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>), <fpage>117</fpage>&#x02013;<lpage>136</lpage>. <pub-id pub-id-type="doi">10.1093/acprof:oso/9780199549498.003.0007</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barndorff-Nielsen</surname> <given-names>O. E.</given-names></name> <name><surname>Shephard</surname> <given-names>N.</given-names></name></person-group> (<year>2002a</year>). <article-title>Econometric analysis of realized volatility and its use in estimating stochastic volatility models</article-title>. <source>J. R. Stat. Soc. Ser. B</source> <volume>64</volume>, <fpage>253</fpage>&#x02013;<lpage>280</lpage>. <pub-id pub-id-type="doi">10.1111/1467-9868.00336</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barndorff-Nielsen</surname> <given-names>O. E.</given-names></name> <name><surname>Shephard</surname> <given-names>N.</given-names></name></person-group> (<year>2002b</year>). <article-title>Estimating quadratic variation using realized variance</article-title>. <source>J. Appl. Econ</source>. <volume>17</volume>, <fpage>457</fpage>&#x02013;<lpage>477</lpage>. <pub-id pub-id-type="doi">10.1002/jae.691</pub-id><pub-id pub-id-type="pmid">25855820</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barndorff-Nielsen</surname> <given-names>O. E.</given-names></name> <name><surname>Shephard</surname> <given-names>N.</given-names></name></person-group> (<year>2004</year>). <article-title>Power and bipower variation with stochastic volatility and jumps</article-title>. <source>J. Financ. Econ</source>. <volume>2</volume>, <fpage>1</fpage>&#x02013;<lpage>37</lpage>. <pub-id pub-id-type="doi">10.1093/jjfinec/nbh001</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barun&#x000ED;k</surname> <given-names>J.</given-names></name> <name><surname>K&#x00159;ehl&#x000ED;k</surname> <given-names>T.</given-names></name></person-group> (<year>2016</year>). <article-title>Combining high frequency data with non-linear models for forecasting energy market volatility</article-title>. <source>Expert Syst. Appl</source>. <volume>55</volume>, <fpage>222</fpage>&#x02013;<lpage>242</lpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2016.02.008</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ba&#x0015F;t&#x000FC;rk</surname> <given-names>N.</given-names></name> <name><surname>Schotman</surname> <given-names>P. C.</given-names></name> <name><surname>Schyns</surname> <given-names>H.</given-names></name></person-group> (<year>2021</year>). <source>A Neural Network With Shared Dynamics for Multi-Step Prediction of Value-At-Risk and Volatility</source>. <pub-id pub-id-type="doi">10.2139/ssrn.3871096</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bengio</surname> <given-names>Y.</given-names></name> <name><surname>Simard</surname> <given-names>P.</given-names></name> <name><surname>Frasconi</surname> <given-names>P.</given-names></name></person-group> (<year>1994</year>). <article-title>Learning long-term dependencies with gradient descent is difficult</article-title>. <source>IEEE Trans. Neural Netw</source>. <volume>5</volume>, <fpage>157</fpage>&#x02013;<lpage>166</lpage>. <pub-id pub-id-type="doi">10.1109/72.279181</pub-id><pub-id pub-id-type="pmid">18267787</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bollerslev</surname> <given-names>T.</given-names></name></person-group> (<year>1986</year>). <article-title>Generalized autoregressive conditional heteroscedasticity</article-title>. <source>J. Econ</source>. <volume>31</volume>, <fpage>307</fpage>&#x02013;<lpage>327</lpage>. <pub-id pub-id-type="doi">10.1016/0304-4076(86)90063-1</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bollerslev</surname> <given-names>T.</given-names></name> <name><surname>Patton</surname> <given-names>A. J.</given-names></name> <name><surname>Quaedvlieg</surname> <given-names>R.</given-names></name></person-group> (<year>2016</year>). <article-title>Exploiting the errors: a simple approach for improved volatility forecasting</article-title>. <source>J. Econ</source>. <volume>192</volume>, <fpage>1</fpage>&#x02013;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1016/j.jeconom.2015.10.007</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brownlees</surname> <given-names>C. T.</given-names></name> <name><surname>Gallo</surname> <given-names>G. M.</given-names></name></person-group> (<year>2010</year>). <article-title>Comparison of volatility measures: a risk management perspective</article-title>. <source>J. Financ. Econ</source>. <volume>8</volume>, <fpage>29</fpage>&#x02013;<lpage>56</lpage>. <pub-id pub-id-type="doi">10.1093/jjfinec/nbp009</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bucci</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Realized volatility forecasting with neural networks</article-title>. <source>J. Financ. Econ</source>. <volume>18</volume>, <fpage>502</fpage>&#x02013;<lpage>531</lpage>. <pub-id pub-id-type="doi">10.1093/jjfinec/nbaa008</pub-id><pub-id pub-id-type="pmid">24732236</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Chollet</surname> <given-names>F.</given-names></name></person-group> (<year>2015</year>). <source>Keras</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://keras.io/">https://keras.io/</ext-link></citation>
</ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Christensen</surname> <given-names>K.</given-names></name> <name><surname>Siggaard</surname> <given-names>M.</given-names></name> <name><surname>Veliyev</surname> <given-names>B.</given-names></name></person-group> (<year>2021</year>). <source>A Machine Learning Approach to Volatility Forecasting</source>. <publisher-name>CREATES Research Paper 2021-03</publisher-name>, <fpage>3</fpage>.</citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Corsi</surname> <given-names>F.</given-names></name></person-group> (<year>2009</year>). <article-title>A simple approximate long-memory model of realized volatility</article-title>. <source>J. Financ. Econ</source>. <volume>7</volume>, <fpage>174</fpage>&#x02013;<lpage>196</lpage>. <pub-id pub-id-type="doi">10.1093/jjfinec/nbp001</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cybenko</surname> <given-names>G.</given-names></name></person-group> (<year>1989</year>). <article-title>Approximation by superpositions of a sigmoidal function</article-title>. <source>Math. Control Signals Syst</source>. <volume>2</volume>, <fpage>303</fpage>&#x02013;<lpage>314</lpage>. <pub-id pub-id-type="doi">10.1007/BF02551274</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Donaldson</surname> <given-names>R.</given-names></name> <name><surname>Kamstra</surname> <given-names>M.</given-names></name></person-group> (<year>1997</year>). <article-title>An artificial neural network-garch model for international stock return volatility</article-title>. <source>J. Empir. Finance</source> <volume>4</volume>, <fpage>17</fpage>&#x02013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1016/S0927-5398(96)00011-4</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Engle</surname> <given-names>R.</given-names></name></person-group> (<year>1982</year>). <article-title>Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation</article-title>. <source>Econometrica</source> <volume>50</volume>, <fpage>987</fpage>&#x02013;<lpage>1007</lpage>. <pub-id pub-id-type="doi">10.2307/1912773</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fissler</surname> <given-names>T.</given-names></name> <name><surname>Ziegel</surname> <given-names>J. F.</given-names></name></person-group> (<year>2016</year>). <article-title>Higher order elicitability and osband&#x00027;s principle</article-title>. <source>Ann. Stat</source>. <volume>44</volume>, <fpage>1680</fpage>&#x02013;<lpage>1707</lpage>. <pub-id pub-id-type="doi">10.1214/16-AOS1439</pub-id><pub-id pub-id-type="pmid">18320210</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Franke</surname> <given-names>J.</given-names></name> <name><surname>Diagne</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>Estimating market risk with neural networks</article-title>. <source>Stat. Decis</source>. <volume>24</volume>, <fpage>233</fpage>&#x02013;<lpage>253</lpage>. <pub-id pub-id-type="doi">10.1524/stnd.2006.24.2.233</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Franke</surname> <given-names>J.</given-names></name> <name><surname>Hardle</surname> <given-names>W. K.</given-names></name> <name><surname>Hafner</surname> <given-names>C. M.</given-names></name></person-group> (<year>2019</year>). <article-title>Neural networks and deep learning</article-title>, in <source>Statistics of Financial Markets: An Introduction</source> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>459</fpage>&#x02013;<lpage>495</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-13751-9_19</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ghysels</surname> <given-names>E.</given-names></name> <name><surname>Santa-Clara</surname> <given-names>P.</given-names></name> <name><surname>Valkanov</surname> <given-names>R.</given-names></name></person-group> (<year>2004</year>). <source>The Midas Touch: Mixed Data Sampling Regression Models</source>.</citation>
</ref>
<ref id="B31">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Giordano</surname> <given-names>F.</given-names></name> <name><surname>La Rocca</surname> <given-names>M.</given-names></name> <name><surname>Perna</surname> <given-names>C.</given-names></name></person-group> (<year>2012</year>). <article-title>Nonparametric estimation of volatility functions: Some experimental evidences</article-title>, in <source>Mathematical and Statistical Methods for Actuarial Sciences and Finance</source>, eds <person-group person-group-type="editor"><name><surname>Perna</surname> <given-names>C.</given-names></name> <name><surname>Sibillo</surname> <given-names>M.</given-names></name></person-group> (<publisher-loc>Milano</publisher-loc>: <publisher-name>Springer Milan</publisher-name>), <fpage>229</fpage>&#x02013;<lpage>236</lpage>. <pub-id pub-id-type="doi">10.1007/978-88-470-2342-0_27</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gneiting</surname> <given-names>T.</given-names></name></person-group> (<year>2011</year>). <article-title>Making and evaluating point forecasts</article-title>. <source>J. Am. Stat. Assoc</source>. <volume>106</volume>, <fpage>746</fpage>&#x02013;<lpage>762</lpage>. <pub-id pub-id-type="doi">10.1198/jasa.2011.r10138</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Granger</surname> <given-names>C. W.</given-names></name> <name><surname>Newbold</surname> <given-names>P.</given-names></name></person-group> (<year>1976</year>). <article-title>Forecasting transformed series</article-title>. <source>J. R. Stat. Soc. Ser. B</source> <volume>38</volume>, <fpage>189</fpage>&#x02013;<lpage>203</lpage>. <pub-id pub-id-type="doi">10.1111/j.2517-6161.1976.tb01585.x</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gu</surname> <given-names>S.</given-names></name> <name><surname>Kelly</surname> <given-names>B.</given-names></name> <name><surname>Xiu</surname> <given-names>D.</given-names></name></person-group> (<year>2020</year>). <article-title>Empirical asset pricing via machine learning</article-title>. <source>Rev. Financ. Stud</source>. <volume>33</volume>, <fpage>2223</fpage>&#x02013;<lpage>2273</lpage>. <pub-id pub-id-type="doi">10.1093/rfs/hhaa009</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hajizadeh</surname> <given-names>E.</given-names></name> <name><surname>Seifi</surname> <given-names>A.</given-names></name> <name><surname>Zarandi</surname> <given-names>M. F.</given-names></name> <name><surname>Turksen</surname> <given-names>I.</given-names></name></person-group> (<year>2012</year>). <article-title>A hybrid modeling approach for forecasting the volatility of S&#x00026;P 500 index return</article-title>. <source>Expert Syst. Appl</source>. <volume>39</volume>, <fpage>431</fpage>&#x02013;<lpage>436</lpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2011.07.033</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hansen</surname> <given-names>P. R.</given-names></name></person-group> (<year>2005</year>). <article-title>A test for superior predictive ability</article-title>. <source>J. Bus. Econ. Stat</source>. <volume>23</volume>, <fpage>365</fpage>&#x02013;<lpage>380</lpage>. <pub-id pub-id-type="doi">10.1198/073500105000000063</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hansen</surname> <given-names>P. R.</given-names></name> <name><surname>Lunde</surname> <given-names>A.</given-names></name> <name><surname>Nason</surname> <given-names>J. M.</given-names></name></person-group> (<year>2011</year>). <article-title>The model confidence set</article-title>. <source>Econometrica</source> <volume>79</volume>, <fpage>453</fpage>&#x02013;<lpage>497</lpage>. <pub-id pub-id-type="doi">10.3982/ECTA5771</pub-id></citation></ref>
<ref id="B38">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hochreiter</surname> <given-names>S.</given-names></name></person-group> (<year>1991</year>). <source>Untersuchungen zu Dynamischen Neuronalen Netzen</source>. Diploma thesis, Technische Universit&#x000E4;t M&#x000FC;nchen, 91(1).</citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hochreiter</surname> <given-names>S.</given-names></name> <name><surname>Schmidhuber</surname> <given-names>J.</given-names></name></person-group> (<year>1997</year>). <article-title>Long short-term memory</article-title>. <source>Neural Comput</source>. <volume>9</volume>, <fpage>1735</fpage>&#x02013;<lpage>1780</lpage>. <pub-id pub-id-type="doi">10.1162/neco.1997.9.8.1735</pub-id><pub-id pub-id-type="pmid">9377276</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hornik</surname> <given-names>K.</given-names></name> <name><surname>Stinchcombe</surname> <given-names>M.</given-names></name> <name><surname>White</surname> <given-names>H.</given-names></name></person-group> (<year>1989</year>). <article-title>Multilayer feedforward networks are universal approximators</article-title>. <source>Neural Netw</source>. <volume>2</volume>, <fpage>551</fpage>&#x02013;<lpage>560</lpage>. <pub-id pub-id-type="doi">10.1016/0893-6080(89)90020-8</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hyndman</surname> <given-names>R. J.</given-names></name> <name><surname>Khandakar</surname> <given-names>Y.</given-names></name></person-group> (<year>2008</year>). <article-title>Automatic time series forecasting: the forecast package for R</article-title>. <source>J. Stat. Softw</source>. <volume>27</volume>, <fpage>1</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.18637/jss.v027.i03</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jordan</surname> <given-names>M. I.</given-names></name></person-group> (<year>1997</year>). <article-title>Chapter 25: Serial order: A parallel distributed processing approach</article-title>, in <source>Neural-Network Models of Cognition, Vol. 121 of Advances in Psychology</source>, eds <person-group person-group-type="editor"><name><surname>Donahoe</surname> <given-names>J. W.</given-names></name> <name><surname>Dorsel</surname> <given-names>V. P.</given-names></name></person-group> (<publisher-loc>Amsterdam</publisher-loc>: <publisher-name>North-Holland</publisher-name>), <fpage>471</fpage>&#x02013;<lpage>495</lpage>. <pub-id pub-id-type="doi">10.1016/S0166-4115(97)80111-2</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kingma</surname> <given-names>D. P.</given-names></name> <name><surname>Ba</surname> <given-names>J.</given-names></name></person-group> (<year>2014</year>). <article-title>Adam: a method for stochastic optimization</article-title>. <source>arXiv preprint arXiv:1412.6980</source>.</citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kristjanpoller</surname> <given-names>W.</given-names></name> <name><surname>Fadic</surname> <given-names>A.</given-names></name> <name><surname>Minutolo</surname> <given-names>M. C.</given-names></name></person-group> (<year>2014</year>). <article-title>Volatility forecast using hybrid neural network models</article-title>. <source>Expert Syst. Appl</source>. <volume>41</volume>, <fpage>2437</fpage>&#x02013;<lpage>2442</lpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2013.09.043</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>S. Z.</given-names></name> <name><surname>Tang</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <source>Forecasting Realized Volatility: An Automatic System Using Many Features and Many Machine Learning Algorithms</source>. Available Online at: <ext-link ext-link-type="uri" xlink:href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3776915">https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3776915</ext-link></citation>
</ref>
<ref id="B46">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Maechler</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <source>fracdiff: Fractionally Differenced ARIMA aka ARFIMA(p,d,q) Models. R Package Version 1.5-1</source>.</citation>
</ref>
<ref id="B47">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Miura</surname> <given-names>R.</given-names></name> <name><surname>Pichl</surname> <given-names>L.</given-names></name> <name><surname>Kaizoji</surname> <given-names>T.</given-names></name></person-group> (<year>2019</year>). <article-title>Artificial neural networks for realized volatility prediction in cryptocurrency time series</article-title>, in <source>Advances in Neural Networks ISNN 2019</source>, eds <person-group person-group-type="editor"><name><surname>Lu</surname> <given-names>H.</given-names></name> <name><surname>Tang</surname> <given-names>H.</given-names></name> <name><surname>Wang</surname> <given-names>Z.</given-names></name></person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>165</fpage>&#x02013;<lpage>172</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-22796-8_18</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Patton</surname> <given-names>A. J.</given-names></name></person-group> (<year>2011</year>). <article-title>Volatility forecast comparison using imperfect volatility proxies</article-title>. <source>J. Econ</source>. <volume>160</volume>, <fpage>246</fpage>&#x02013;<lpage>256</lpage>. <pub-id pub-id-type="doi">10.1016/j.jeconom.2010.03.034</pub-id></citation>
</ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Patton</surname> <given-names>A. J.</given-names></name> <name><surname>Sheppard</surname> <given-names>K.</given-names></name></person-group> (<year>2015</year>). <article-title>Good volatility, bad volatility: signed jumps and the persistence of volatility</article-title>. <source>Rev. Econ. Stat</source>. <volume>97</volume>, <fpage>683</fpage>&#x02013;<lpage>697</lpage>. <pub-id pub-id-type="doi">10.1162/REST_a_00503</pub-id></citation>
</ref>
<ref id="B50">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Rahimikia</surname> <given-names>E.</given-names></name> <name><surname>Poon</surname> <given-names>S.-H.</given-names></name></person-group> (<year>2020a</year>). <source>Big data approach to realised volatility forecasting using HAR model augmented with limit order book and news</source>. Available Online at: <ext-link ext-link-type="uri" xlink:href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3684040">https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3684040</ext-link></citation>
</ref>
<ref id="B51">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Rahimikia</surname> <given-names>E.</given-names></name> <name><surname>Poon</surname> <given-names>S.-H.</given-names></name></person-group> (<year>2020b</year>). <source>Machine learning for realised volatility forecasting</source>. Available Online at: <ext-link ext-link-type="uri" xlink:href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3707796">https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3707796</ext-link></citation>
</ref>
<ref id="B52">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rosa</surname> <given-names>R.</given-names></name> <name><surname>Maciel</surname> <given-names>L.</given-names></name> <name><surname>Gomide</surname> <given-names>F.</given-names></name> <name><surname>Ballini</surname> <given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>Evolving hybrid neural fuzzy network for realized volatility forecasting with jumps</article-title>, in <source>2014 IEEE Conference on Computational Intelligence for Financial Engineering &#x00026; Economics (CIFEr)</source> (<publisher-loc>London, UK</publisher-loc>), <fpage>481</fpage>&#x02013;<lpage>488</lpage>. <pub-id pub-id-type="doi">10.1109/CIFEr.2014.6924112</pub-id></citation></ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ruiz</surname> <given-names>E.</given-names></name></person-group> (<year>1994</year>). <article-title>Quasi-maximum likelihood estimation of stochastic volatility models</article-title>. <source>J. Econ</source>. <volume>63</volume>, <fpage>289</fpage>&#x02013;<lpage>306</lpage>. <pub-id pub-id-type="doi">10.1016/0304-4076(93)01569-8</pub-id></citation>
</ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rumelhart</surname> <given-names>D. E.</given-names></name> <name><surname>Hinton</surname> <given-names>G. E.</given-names></name> <name><surname>Williams</surname> <given-names>R. J.</given-names></name></person-group> (<year>1986</year>). <article-title>Learning representations by back-propagating errors</article-title>. <source>Nature</source> <volume>323</volume>, <fpage>533</fpage>&#x02013;<lpage>536</lpage>. <pub-id pub-id-type="doi">10.1038/323533a0</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sadhwani</surname> <given-names>A.</given-names></name> <name><surname>Giesecke</surname> <given-names>K.</given-names></name> <name><surname>Sirignano</surname> <given-names>J.</given-names></name></person-group> (<year>2021</year>). <article-title>Deep learning for mortgage risk</article-title>. <source>J. Financ. Econom</source>. <volume>19</volume>, <fpage>313</fpage>&#x02013;<lpage>368</lpage>. <pub-id pub-id-type="doi">10.1093/jjfinec/nbaa025</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sch&#x000E4;fer</surname> <given-names>A. M.</given-names></name> <name><surname>Zimmermann</surname> <given-names>H. G.</given-names></name></person-group> (<year>2006</year>). <article-title>Recurrent neural networks are universal approximators</article-title>, in <source>Artificial Neural Networks &#x02013; ICANN 2006</source>, eds <person-group person-group-type="editor"><name><surname>Kollias</surname> <given-names>D. S.</given-names></name> <name><surname>Stafylopatis</surname> <given-names>A.</given-names></name> <name><surname>Duch</surname> <given-names>W.</given-names></name> <name><surname>Oja</surname> <given-names>E.</given-names></name></person-group> (<publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>632</fpage>&#x02013;<lpage>640</lpage>.</citation></ref>
<ref id="B57">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sheppard</surname> <given-names>K.</given-names></name> <name><surname>Khrapov</surname> <given-names>S.</given-names></name> <name><surname>Lipt&#x000E1;k</surname> <given-names>G.</given-names></name> <name><surname>Capellini</surname> <given-names>R.</given-names></name> <name><surname>Fortin</surname> <given-names>A.</given-names></name> <name><surname>Judell</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2021</year>). <source>Bashtage/Arch: Release 5.1.0</source>.</citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sizova</surname> <given-names>N.</given-names></name></person-group> (<year>2011</year>). <article-title>Integrated variance forecasting: model based vs. reduced form</article-title>. <source>J. Econ</source>. <volume>162</volume>, <fpage>294</fpage>&#x02013;<lpage>311</lpage>. <pub-id pub-id-type="doi">10.1016/j.jeconom.2011.02.004</pub-id></citation>
</ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Soudry</surname> <given-names>D.</given-names></name> <name><surname>Hoffer</surname> <given-names>E.</given-names></name> <name><surname>Nacson</surname> <given-names>M. S.</given-names></name> <name><surname>Gunasekar</surname> <given-names>S.</given-names></name> <name><surname>Srebro</surname> <given-names>N.</given-names></name></person-group> (<year>2018</year>). <article-title>The implicit bias of gradient descent on separable data</article-title>. <source>J. Mach. Learn. Res</source>. <volume>19</volume>, <fpage>2822</fpage>&#x02013;<lpage>2878</lpage>. Available Online at: <ext-link ext-link-type="uri" xlink:href="http://jmlr.org/papers/v19/18-188.html">http://jmlr.org/papers/v19/18-188.html</ext-link></citation>
</ref>
<ref id="B60">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Taylor</surname> <given-names>S. J.</given-names></name></person-group> (<year>1986</year>). <source>Modelling Financial Time Series</source>. <publisher-loc>Chichester</publisher-loc>: <publisher-name>Wiley</publisher-name>.</citation>
</ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vortelinos</surname> <given-names>D. I.</given-names></name></person-group> (<year>2017</year>). <article-title>Forecasting realized volatility: HAR against principal components combining, neural networks and GARCH</article-title>. <source>Res. Int. Bus. Finance</source>. <volume>39</volume>, <fpage>824</fpage>&#x02013;<lpage>839</lpage>. <pub-id pub-id-type="doi">10.1016/j.ribaf.2015.01.004</pub-id></citation>
</ref>
<ref id="B62">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>White</surname> <given-names>H.</given-names></name></person-group> (<year>1988</year>). <article-title>Economic prediction using neural networks: the case of IBM daily stock returns</article-title>, in <source>ICNN, Vol. 2</source> (<publisher-loc>San Diego, CA</publisher-loc>), <fpage>451</fpage>&#x02013;<lpage>458</lpage>. <pub-id pub-id-type="doi">10.1109/ICNN.1988.23959</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn id="fn0001"><p><sup>1</sup>The hyperbolic tangent function applied to value <italic>x</italic> is <italic>tanh</italic>(<italic>x</italic>) &#x0003D; [exp (<italic>x</italic>) &#x02212; exp (&#x02212;<italic>x</italic>)]/[exp (<italic>x</italic>) &#x0002B; exp (&#x02212;<italic>x</italic>)]. It is a sigmoid function rescaled to the interval (&#x02212;1, 1).</p></fn>
<fn id="fn0002"><p><sup>2</sup>The sigmoid function applied to value <italic>x</italic> is defined as &#x003C3;(<italic>x</italic>) &#x0003D; 1/[1 &#x0002B; exp (&#x02212;<italic>x</italic>)].</p></fn>
<fn id="fn0003"><p><sup>3</sup>There exists no strictly consistent loss function for the ES alone (Gneiting, <xref ref-type="bibr" rid="B32">2011</xref>).</p></fn>
<fn id="fn0004"><p><sup>4</sup>The binomial test assesses whether positive and negative sign changes in the loss differential of two models are equally likely. It is also known as the Diebold-Mariano sign test.</p></fn>
</fn-group>
</back>
</article>
