<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Built Environ.</journal-id>
<journal-title>Frontiers in Built Environment</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Built Environ.</abbrev-journal-title>
<issn pub-type="epub">2297-3362</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fbuil.2021.679488</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Built Environment</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>A Monte Carlo Simulation Approach in Non-linear Structural Dynamics Using Convolutional Neural Networks</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Bamer</surname> <given-names>Franz</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/897170/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Thaler</surname> <given-names>Denny</given-names></name>
<xref ref-type="corresp" rid="c002"><sup>&#x0002A;</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1210877/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Stoffel</surname> <given-names>Marcus</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/1307006/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Markert</surname> <given-names>Bernd</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/896926/overview"/>
</contrib>
</contrib-group>
<aff><institution>Institute of General Mechanics, RWTH Aachen University</institution>, <addr-line>Aachen</addr-line>, <country>Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Xinzheng Lu, Tsinghua University, China</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Kaiqi Lin, Fuzhou University, China; Kaiming Bi, Curtin University, Australia</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Franz Bamer <email>bamer&#x00040;iam.rwth-aachen.de</email></corresp>
<corresp id="c002">Denny Thaler <email>thaler&#x00040;iam.rwth-aachen.de</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Earthquake Engineering, a section of the journal Frontiers in Built Environment</p></fn>
<fn fn-type="other" id="fn002"><p>&#x02020;These authors have contributed equally to this work and share first authorship</p></fn></author-notes>
<pub-date pub-type="epub">
<day>03</day>
<month>05</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>7</volume>
<elocation-id>679488</elocation-id>
<history>
<date date-type="received">
<day>11</day>
<month>03</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>30</day>
<month>03</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2021 Bamer, Thaler, Stoffel and Markert.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Bamer, Thaler, Stoffel and Markert</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license> </permissions>
<abstract><p>The evaluation of the structural response statistics constitutes one of the principal tasks in engineering. However, in the tail region near structural failure, engineering structures behave in a highly non-linear manner, making an analytic or closed form of the response statistics difficult or even impossible. Evaluating a series of computer experiments, the Monte Carlo method has proven a useful tool to provide an unbiased estimate of the response statistics. Naturally, we want structural failure to happen very rarely. Unfortunately, this means that a disproportionately high number of Monte Carlo samples must be evaluated to estimate small probabilities with high confidence. Thus, in this paper, we present a new Monte Carlo simulation method enhanced by a convolutional neural network. The sample set used for this Monte Carlo approach is provided by artificially generating site-dependent ground motion time histories using a non-linear Kanai-Tajimi filter. In contrast to several state-of-the-art studies, the convolutional neural network learns to extract the relevant input features and the structural response behavior autonomously from the entire time histories instead of learning from a set of hand-chosen intensity inputs. Training the neural network on a chosen input sample set yields a meta-model that is then used to predict the response of the total Monte Carlo sample set. This paper presents two convolutional neural network-enhanced strategies that allow for a practical design approach for ground motion excited structures. The first strategy enables an accurate response prediction around the mean of the distribution. It is, therefore, useful regarding structural serviceability. The second strategy enables an accurate prediction around the tail of the distribution. It is, therefore, beneficial for the prediction of the probability of failure.</p></abstract>
<kwd-group>
<kwd>Monte Carlo method</kwd>
<kwd>non-linear structural mechanics</kwd>
<kwd>elastoplastic structure</kwd>
<kwd>convolutional neural networks</kwd>
<kwd>machine learning</kwd>
<kwd>earthquake engineering</kwd>
<kwd>probability of failure</kwd>
</kwd-group>
<counts>
<fig-count count="7"/>
<table-count count="0"/>
<equation-count count="16"/>
<ref-count count="27"/>
<page-count count="10"/>
<word-count count="5702"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Deciding whether a structure subjected to ground motion is or is not safe is a delicate task in engineering. A major issue is the high level of uncertainty involved when considering a ground excitation time history relevant to structural design. This fact forces us to apply probabilistic methodologies to design structures and infrastructures (Der Kiureghian, <xref ref-type="bibr" rid="B7">1996</xref>; Bucher, <xref ref-type="bibr" rid="B6">2009</xref>).</p>
<p>Failure usually occurs in the non-linear range of structural behavior, making an analytic or closed form of the response statistics difficult or even impossible. In this context, the equivalent linearization method was proposed to approximate the response statistics, which is generally quite accurate around the mean of the distribution (Roberts and Spanos, <xref ref-type="bibr" rid="B19">1990</xref>). To also provide accurate response statistics for infrequent events, the tail equivalent linearization method was proposed (Fujimura and Der Kiureghian, <xref ref-type="bibr" rid="B8">2007</xref>).</p>
<p>The general Monte Carlo method is a helpful tool for providing the response statistics of non-linear systems. The strategy is based on a series of computer experiments and provides an unbiased estimate of the probability of failure of a non-linear system (Hurtado and Barbat, <xref ref-type="bibr" rid="B10">1998</xref>).</p>
<p>However, considering that each computer experiment includes a time history analysis of a complex high-dimensional finite element model, the Monte Carlo method turns out to be computationally expensive. As engineers, we have to ensure that structural failure happens extremely rarely, as it is generally accompanied by severe societal and economic loss. Thus, a disproportionately high number of computer experiments must be performed to estimate the occurrence of low-probability events with high confidence. This makes the application of the crude Monte Carlo method infeasible for non-linear high-dimensional problems. A promising strategy to overcome this issue is to use model order reduction methods (Bamer and Bucher, <xref ref-type="bibr" rid="B2">2012</xref>; Bamer and Markert, <xref ref-type="bibr" rid="B3">2017</xref>; Bamer et al., <xref ref-type="bibr" rid="B4">2018</xref>). This strategy decreases the computational burden of each sample computation significantly, and it is particularly efficient if only one reduced-order basis is applied during the whole Monte Carlo simulation run (Bamer et al., <xref ref-type="bibr" rid="B1">2017</xref>).</p>
<p>In the context of computational speed-up and meta-modeling, the application of neural networks has proven to be a promising strategy (Stoffel et al., <xref ref-type="bibr" rid="B20">2018</xref>, <xref ref-type="bibr" rid="B21">2019</xref>, <xref ref-type="bibr" rid="B22">2020a</xref>,<xref ref-type="bibr" rid="B23">b</xref>) for problems in non-linear structural mechanics. In particular, the incorporation of recurrent neural network architectures for elastoplastic problems leads to representations of the non-linear structural behavior that provide accurate and efficient solution functions (Koeppe et al., <xref ref-type="bibr" rid="B12">2019</xref>, <xref ref-type="bibr" rid="B13">2020</xref>).</p>
<p>With special emphasis on problems in earthquake engineering, Sun et al. (<xref ref-type="bibr" rid="B24">2021</xref>) have summarized the incorporation of machine learning approaches into response, damage, and failure prediction. Convolutional neural networks and deep learning have been used to predict the whole response time history of a structure subjected to a transient excitation (Zhang et al., <xref ref-type="bibr" rid="B27">2020</xref>) and for wavelet transformation-based response prediction (Lu et al., <xref ref-type="bibr" rid="B16">2020</xref>, <xref ref-type="bibr" rid="B17">2021</xref>; Liao et al., <xref ref-type="bibr" rid="B15">2021</xref>). Thaler et al. (<xref ref-type="bibr" rid="B25">2021a</xref>,<xref ref-type="bibr" rid="B26">b</xref>) proposed a machine learning-enhanced Monte Carlo method using a simple feed-forward neural network. In doing so, they used a selected set of intensity measures extracted from the time history data as input parameters to predict the response of the structure. However, when the whole time history is represented by only a few hand-chosen scalar values, much information about the excitation function can be lost.</p>
<p>In this paper, we present an extended Monte Carlo approach using convolutional neural networks so that it is not necessary to extract hand-designed input features, e.g., intensity measures, for the neural network. Furthermore, in order to assure high reliability of the neural network predictions in the tail of the distribution, an extended training strategy is applied.</p>
<p>The structure of this paper is goal-oriented and straightforward. In section 2, we first present an overview of convolutional neural networks, followed by the presentation of the new Monte Carlo simulation strategy. Subsequently, in section 3, we demonstrate the new strategy using an illustrative numerical example. The efficiency, as well as advantages and disadvantages, are discussed. Finally, in section 4, the conclusion is drawn.</p></sec>
<sec sec-type="methods" id="s2">
<title>2. Methods</title>
<sec>
<title>2.1. Feedforward and Convolutional Neural Network Architectures in a Nutshell</title>
<p>Machine learning strategies have been applied to a wide range of tasks in engineering. This section briefly introduces the basic theory of the two machine learning methods applied in this paper, i.e., feedforward and convolutional neural networks.</p>
<p>The feedforward neural network consists of at least three layers: an input, a hidden, and an output layer. Every layer contains a set of neurons. The input layer receives the initial data, which is then passed forward, layer by layer, until the output layer is reached. During one forward step, every neuron of a layer that is not the output layer is connected to every neuron of the next layer. Each neuron-to-neuron link has a corresponding weight value, so the set of all connections can be represented by a weight matrix. Thus, the output of the previous layer <bold>y</bold><sub><italic>i</italic>&#x02212;1</sub> is the input of the current layer <bold>x</bold><sub><italic>i</italic></sub>. The input is multiplied by the weight matrix <bold>W</bold><sub><italic>i</italic></sub>, and a bias vector <bold>b</bold><sub><italic>i</italic></sub> is added. The sum is passed through an activation function <italic>f</italic><sub><italic>act</italic></sub>, which yields the output <bold>y</bold><sub><italic>i</italic></sub> of the layer:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>y</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>W</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>This operation is repeated layer by layer until the output layer is reached, whose output constitutes the prediction of the neural network. In our paper, we apply supervised learning, where the prediction is compared to a known target output for regression tasks. The resulting error is used to optimize the weight matrices and bias vectors of all layers using back-propagation.</p>
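As a minimal illustration of the forward pass in Equation (1), the layer operation can be sketched in NumPy; the layer sizes, random weights, and activation functions below are hypothetical and serve only to show the data flow:

```python
import numpy as np

def dense_layer(x, W, b, f_act):
    # One fully connected layer: y_i = f_act(W_i x_i + b_i), cf. Equation (1)
    return f_act(W @ x + b)

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)        # a commonly used activation function

# Hypothetical three-layer network: 4 inputs, 8 hidden neurons, 1 output
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((1, 8)), np.zeros(1)

x = rng.standard_normal(4)                 # input layer receives the data
h = dense_layer(x, W1, b1, relu)           # hidden layer
y = dense_layer(h, W2, b2, lambda z: z)    # linear output layer: the prediction
```

In training, the weights W1, W2 and biases b1, b2 would be the quantities optimized by back-propagation.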
<p>The convolutional neural network is a specific type of architecture specialized for input data in which neighboring data points are correlated. Typical applications of convolutional neural networks are, therefore, image data. However, as presented in this paper, convolutional neural networks can also effectively be applied to extract the main features of earthquake time histories. Using convolutional layers improves the machine learning system through sparse interactions and parameter sharing. This reduces the total number of trainable parameters and, therefore, the computational effort. Convolutional neural networks are composed of two main operations: convolution and pooling.</p>
<p>The convolution is the basic mathematical operation of this neural network type. For one-dimensional discrete data, such as earthquake time histories, the convolution operation (&#x0002A;) of two vectors or one-dimensional tensors, <bold>g</bold> and <bold>h</bold>, is written as:</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>g</mml:mtext></mml:mstyle><mml:mo>*</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>h</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munder></mml:mstyle><mml:msub><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>-</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>h</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The convolutional operation is usually followed by a non-linear activation function <italic>f</italic><sub><italic>act</italic></sub>. The output <bold>y</bold><sub><italic>i</italic></sub> of the convolution operation, the so-called feature map, is written as:</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M3"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>y</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>*</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>w</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>b</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>In this equation, <bold>w</bold><sub><italic>i</italic></sub> denotes a weight vector with the spatial size <italic>k</italic> of the kernel, which slides over the one-dimensional input data array <bold>x</bold><sub><italic>i</italic></sub>.</p>
<p>In the next step, pooling improves the statistical efficiency by reducing the spatial size of the data. As shown in <xref ref-type="fig" rid="F1">Figure 1</xref>, pooling summarizes the input space. In this illustrative example, the pooling procedure summarizes three values into one output value. The decrease in the total number of values, i.e., the compression of information, depends on the stride size, which is the step increment by which the kernel moves over the data array. For the example in <xref ref-type="fig" rid="F1">Figure 1</xref>, the stride size is two. Two different types of pooling procedures are considered in this paper: max-pooling and average-pooling. As the nomenclature suggests, max-pooling extracts the maximum of all values within the moving kernel, and average-pooling extracts their average. Even though convolution and pooling are individual operations, their combination is often referred to as a convolutional layer, in which the operations are carried out in successive stages.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Exemplary spatial reduction of the top data array using pooling functions (pool size = 3, stride size = 2); output data reduced by maximum pooling (bottom left); output data reduced by average pooling (bottom right).</p></caption>
<graphic xlink:href="fbuil-07-679488-g0001.tif"/>
</fig>
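The convolution of Equations (2) and (3) and the pooling operations of Figure 1 (pool size 3, stride 2) can be sketched as follows; the input array and kernel weights are hypothetical:

```python
import numpy as np

def conv1d(g, h):
    # Discrete convolution (g * h)_i = sum_n g_{i-n} h_n, cf. Equation (2)
    return np.convolve(g, h, mode="valid")

def pool1d(x, pool_size=3, stride=2, op=np.max):
    # Slide a kernel of length pool_size over x with the given stride
    return np.array([op(x[i:i + pool_size])
                     for i in range(0, len(x) - pool_size + 1, stride)])

x = np.array([1.0, 4.0, 2.0, 0.0, 3.0, 5.0, 1.0])   # hypothetical input array
w = np.array([0.25, 0.5, 0.25])                      # hypothetical kernel weights
feature_map = np.maximum(conv1d(x, w), 0.0)          # Equation (3): ReLU, zero bias

print(pool1d(x, op=np.max))    # max-pooling:     [4. 3. 5.]
print(pool1d(x, op=np.mean))   # average-pooling
```

During training, the kernel weights w (and the bias) would be optimized just like the weights of a fully connected layer.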
<p>Similar to the weight optimization procedure of fully connected layers, the values of the convolutional kernels are optimized during training, which allows for the extraction of features of the input data. Different features can be extracted from the initial data by using multiple kernels within the convolutional layer. A convolutional neural network usually consists of several convolutional layers, followed by feedforward layers. The choice of the architecture depends on the complexity of the data and the task to solve (Kruse et al., <xref ref-type="bibr" rid="B14">2015</xref>; Goodfellow et al., <xref ref-type="bibr" rid="B9">2016</xref>).</p></sec>
<sec>
<title>2.2. Machine Learning Enhanced Monte Carlo Simulation</title>
<p>The probability of structural failure <italic>P</italic><sub><italic>f</italic></sub> can be expressed in terms of a multiple integral:</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M4"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munder><mml:mrow><mml:mo>&#x0222B;</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>&#x0222B;</mml:mo></mml:mrow><mml:mrow><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:munder></mml:mstyle><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>X</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mtext class="textrm" mathvariant="normal">d</mml:mtext><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mtext class="textrm" mathvariant="normal">d</mml:mtext><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>g</italic>(<italic>x</italic>) is the limit state function that divides the entire domain into the safe region <italic>g</italic>(<italic>x</italic>) &#x02265; 0 and the failure region <italic>g</italic>(<italic>x</italic>) &#x0003C; 0. Depending on the complexity of the limit state function, the solution of this integral is generally not straightforward. However, one approach to solve this integral is provided by the Monte Carlo simulation. One performs a series of computer experiments by artificially generating a set of random inputs (Hurtado and Barbat, <xref ref-type="bibr" rid="B10">1998</xref>). The integral describing the probability of failure in Equation (4) is rewritten as:</p>
<disp-formula id="E5"><label>(5)</label><mml:math id="M6"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:msub><mml:mi>p</mml:mi><mml:mi>f</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>&#x0221E;</mml:mi></mml:mrow><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mrow><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>&#x0221E;</mml:mi></mml:mrow><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:mstyle></mml:mrow></mml:mrow></mml:mstyle><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>&#x0221E;</mml:mi></mml:mrow><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>g</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:mstyle><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>X</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo 
stretchy='false'>)</mml:mo><mml:mtext>&#x000A0;d</mml:mtext><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mtext>d</mml:mtext><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mtext>&#x000A0;</mml:mtext><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>with <italic>I</italic><sub><italic>g</italic></sub>(<italic>x</italic><sub>1</sub>, . . . , <italic>x</italic><sub><italic>n</italic></sub>) being an indicator function defining a safe or failure state:</p>
<disp-formula id="E6"><label>(6)</label><mml:math id="M7"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mtext>&#x000A0;</mml:mtext><mml:mtext class="textrm" mathvariant="normal">if&#x000A0;</mml:mtext><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mo>&#x02264;</mml:mo></mml:mtd><mml:mtd><mml:mn>0</mml:mn><mml:mtext>&#x000A0;</mml:mtext><mml:mo>(</mml:mo><mml:mtext class="textrm" mathvariant="normal">structural&#x000A0;failure</mml:mtext><mml:mo>)</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E7"><label>(7)</label><mml:math id="M8"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mtext>&#x000A0;</mml:mtext><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mtext>&#x000A0;</mml:mtext><mml:mtext class="textrm" mathvariant="normal">if&#x000A0;</mml:mtext><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mo>&#x0003E;</mml:mo></mml:mtd><mml:mtd><mml:mn>0</mml:mn><mml:mtext>&#x000A0;</mml:mtext><mml:mo>(</mml:mo><mml:mtext class="textrm" mathvariant="normal">safe</mml:mtext><mml:mo>)</mml:mo><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Thus, the probability of failure is provided by a consistent unbiased estimate in terms of an expected value:</p>
<disp-formula id="E8"><label>(8)</label><mml:math id="M9"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mtext class="textrm" mathvariant="normal">.</mml:mtext></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The standard deviation of the estimation of the probability of failure is evaluated as:</p>
<disp-formula id="E9"><label>(9)</label><mml:math id="M10"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:mfrac><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:msubsup><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:mfrac><mml:mo>&#x02248;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:mfrac><mml:mo>&#x021D2;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msqrt><mml:mrow><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:msqrt><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>which means that the variance of the estimated probability of failure increases with a decreasing number of samples <italic>m</italic>. In engineering, we obviously want the probability of failure to remain low. Thus, the reliability of the estimate of a small probability of failure is considerably low when using the Monte Carlo method, and <italic>m</italic> must be chosen disproportionately high. In other words, extensive relevant earthquake record data must be collected in order to reliably design structures. As a rule of thumb, 10<sup>5</sup> excitation records should be considered for structural design with a desired failure probability of 10<sup>&#x02212;4</sup>. This is obviously an impossible task, as the number of recorded relevant ground excitations around the respective building site is inherently limited. Therefore, we suggest artificially generating a sufficiently high number of random ground excitations using the metadata from one relevant record. In doing so, site-dependent structural design is possible. In order to realize artificial site-dependency, we consider the incorporation of a non-linear Kanai-Tajimi filter (Bamer and Markert, <xref ref-type="bibr" rid="B3">2017</xref>). To extract the important information from a ground excitation record, a time window of size <italic>t</italic><sub><italic>w</italic></sub> is defined that moves over the excitation time history, as shown in <xref ref-type="fig" rid="F2">Figure 2</xref>. For the numerical example in this paper, a severe earthquake recorded in Kobe Takarazuka in 1995 (Kobe, <xref ref-type="bibr" rid="B11">2016</xref>) is chosen as the benchmark time history.</p>
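The estimator of Equation (8) and its standard deviation, Equation (9), can be sketched for a hypothetical scalar limit state g(x) = &#x003B2; &#x02212; x with a standard normal response x; the threshold &#x003B2; and sample number m below are illustrative only and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

beta = 3.0                    # hypothetical safety threshold
m = 10**6                     # number of Monte Carlo samples
x = rng.standard_normal(m)    # artificially generated random inputs

I_g = (beta - x <= 0.0)       # indicator function, Equations (6)-(7)
p_f = I_g.mean()              # unbiased estimate of p_f, Equation (8)
sigma_pf = np.sqrt(p_f / m)   # standard deviation of the estimate, Equation (9)
```

For &#x003B2; = 3 the exact failure probability is &#x003A6;(&#x02212;3) &#x02248; 1.35 &#x000B7; 10<sup>&#x02212;3</sup>; since &#x003C3; scales with 1/&#x0221A;m, halving the estimation error requires four times as many samples, which illustrates why small failure probabilities demand disproportionately many computer experiments.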
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>(A)</bold> Moving time window to extract relevant metadata from acceleration records; <bold>(B)</bold> extracted intensity and <bold>(C)</bold> extracted lowest frequency content.</p></caption>
<graphic xlink:href="fbuil-07-679488-g0002.tif"/>
</fig>
<p>In this paper, we extract two time-dependent identification parameters, which are the intensity <italic>&#x000EA;</italic>(<italic>t</italic>) and the number of zero crossings <inline-formula><mml:math id="M11"><mml:mover accent="true"><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> of the time history, as shown in <xref ref-type="fig" rid="F2">Figure 2</xref>. The first parameter is evaluated by the integration of the squared ground acceleration <italic>&#x01E8D;</italic><sub><italic>r</italic></sub> of the benchmark record from <inline-formula><mml:math id="M12"><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac></mml:math></inline-formula> to <inline-formula><mml:math id="M13"><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac></mml:math></inline-formula>:</p>
<disp-formula id="E10"><label>(10)</label><mml:math id="M14"><mml:mrow><mml:mover accent='true'><mml:mi>e</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>t</mml:mi><mml:mi>w</mml:mi></mml:msub></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>t</mml:mi><mml:mi>w</mml:mi></mml:msub></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:msubsup><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>x</mml:mi><mml:mo>&#x000A8;</mml:mo></mml:mover><mml:mi>r</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mrow></mml:mstyle><mml:mtext>d</mml:mtext><mml:mi>t</mml:mi><mml:mtext>&#x02009;</mml:mtext><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>The second parameter leads to the time history of the lowest ground frequency of the benchmark record. Equivalently to the procedure in Equation (10), the ground frequency <inline-formula><mml:math id="M15"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>&#x003C9;</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> is evaluated by considering the number of zero-crossings <inline-formula><mml:math id="M16"><mml:mover accent="true"><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> in the time period <inline-formula><mml:math id="M17"><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:math></inline-formula>:</p>
<disp-formula id="E11"><label>(11)</label><mml:math id="M18"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>&#x003C9;</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Polynomial functions approximate the extracted data, which results in the representations of the intensity <italic>e</italic>(<italic>t</italic>) and the frequency content <italic>&#x003C9;</italic>(<italic>t</italic>), highlighted in red in the respective plot of <xref ref-type="fig" rid="F2">Figure 2</xref>.</p>
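The moving-window extraction of Equations (10) and (11) and the subsequent polynomial smoothing can be sketched as follows. The window size, sampling step, polynomial degree, and the counting of zero crossings as upward crossings are assumptions of this illustration, not values fixed by the text:

```python
import numpy as np

def extract_metadata(x_r, dt, t_w):
    """Moving-window extraction of the intensity e_hat(t), Eq. (10), and the
    lowest ground frequency omega_hat_g(t), Eq. (11), from a record x_r.
    Zero crossings are counted as upward crossings, so that a harmonic of
    angular frequency w yields omega_hat_g close to w."""
    half = int(round(t_w / (2.0 * dt)))            # half window in samples
    n = len(x_r)
    e_hat, w_hat = np.empty(n), np.empty(n)
    for i in range(n):
        seg = x_r[max(0, i - half):min(n, i + half)]
        e_hat[i] = np.sum(seg**2) * dt             # integral of squared accel.
        n_up = np.count_nonzero((seg[:-1] < 0.0) & (seg[1:] >= 0.0))
        w_hat[i] = n_up / t_w * 2.0 * np.pi        # Eq. (11)
    return e_hat, w_hat

# Smooth polynomial representations e(t) and omega_g(t); the degree is assumed
dt, t_w = 0.01, 2.0
t = np.arange(0.0, 10.0, dt)
x_r = np.sin(2 * np.pi * 5 * t)                    # stand-in for a real record
e_hat, w_hat = extract_metadata(x_r, dt, t_w)
e_t = np.polyval(np.polyfit(t, e_hat, 6), t)
w_g = np.polyval(np.polyfit(t, w_hat, 6), t)
```

For the synthetic 5 Hz test record, the extracted frequency in the interior of the signal is close to 2&#x003C0; &#x000B7; 5 rad/s, as expected.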
<p>The non-linear Kanai-Tajimi filter is used to model the movement of the ground. In this context, the filter is represented by a non-linear single degree of freedom system that is subjected to a stationary Gaussian random white noise <italic>w</italic>(<italic>t</italic>):</p>
<disp-formula id="E12"><label>(12)</label><mml:math id="M19"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x01E8D;</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mn>2</mml:mn><mml:msub><mml:mrow><mml:mi>&#x003B6;</mml:mi></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>&#x003C9;</mml:mi></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mi>&#x01E8B;</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003C9;</mml:mi></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>w</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The damping parameter &#x003B6;<sub><italic>g</italic></sub> and the time-dependent frequency <italic>&#x003C9;</italic><sub><italic>g</italic></sub>(<italic>t</italic>) are both related to site-dependent ground properties. The response of the filter <italic>x</italic><sub><italic>f</italic></sub> is evaluated using numeric integration; the filtered ground acceleration is then written as:</p>
<disp-formula id="E13"><label>(13)</label><mml:math id="M20"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mo>&#x000A8;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mn>2</mml:mn><mml:msub><mml:mrow><mml:mi>&#x003B6;</mml:mi></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>&#x003C9;</mml:mi></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mi>&#x01E8B;</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003C9;</mml:mi></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>In order to consider the extracted intensity, the obtained filter acceleration <inline-formula><mml:math id="M21"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mo>&#x000A8;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is finally multiplied by the intensity polynomial <italic>e</italic>(<italic>t</italic>):</p>
<disp-formula id="E14"><label>(14)</label><mml:math id="M26"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x01E8D;</mml:mi></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mo>&#x000A8;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub><mml:mtext>&#x000A0;</mml:mtext><mml:mi>e</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mtext class="textrm" mathvariant="normal">.</mml:mtext></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
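Equations (12)&#x02013;(14) can be combined into a short generation routine. The following is a sketch only: the semi-implicit Euler integrator and the ground damping value &#x003B6;<sub><italic>g</italic></sub> = 0.3 are assumptions, as the text does not fix the integration scheme or the site-dependent damping here:

```python
import numpy as np

def kanai_tajimi_sample(e_t, w_g, dt, zeta_g=0.3, rng=None):
    """One artificial ground acceleration sample:
    Eq. (12): filter SDOF driven by Gaussian white noise w(t),
    Eq. (13): filtered ground acceleration from the filter state,
    Eq. (14): modulation by the intensity polynomial e(t).
    zeta_g and the integration scheme are assumptions of this sketch."""
    rng = rng or np.random.default_rng()
    n = len(e_t)
    w = rng.standard_normal(n) / np.sqrt(dt)    # white-noise realization
    x_f, v_f = 0.0, 0.0                         # filter displacement/velocity
    acc = np.empty(n)
    for i in range(n):
        a_f = w[i] - 2.0 * zeta_g * w_g[i] * v_f - w_g[i]**2 * x_f   # Eq. (12)
        v_f += a_f * dt                         # semi-implicit Euler step
        x_f += v_f * dt
        acc[i] = -2.0 * zeta_g * w_g[i] * v_f - w_g[i]**2 * x_f      # Eq. (13)
    return acc * e_t                            # Eq. (14)

# Demo with an assumed constant ground frequency and an illustrative envelope
dt = 0.01
t = np.arange(0.0, 20.0, dt)
e_t = np.exp(-0.5 * ((t - 10.0) / 4.0)**2)
w_g = np.full_like(t, 2 * np.pi * 1.5)
acc = kanai_tajimi_sample(e_t, w_g, dt, rng=np.random.default_rng(0))
```

In the paper, <italic>e</italic>(<italic>t</italic>) and &#x003C9;<sub><italic>g</italic></sub>(<italic>t</italic>) are the polynomial fits extracted from the benchmark record; the constant frequency and Gaussian envelope above are placeholders.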
<p>The result of this procedure, using our benchmark ground acceleration, is depicted in <xref ref-type="fig" rid="F3">Figure 3</xref>. The benchmark excitation, recorded in Kobe, is depicted in <xref ref-type="fig" rid="F3">Figure 3A</xref>, and one respective sample excitation is presented in <xref ref-type="fig" rid="F3">Figure 3B</xref>. One can observe the similarity regarding both frequency and intensity time histories, while the level of randomness required for the Monte Carlo simulation is preserved.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Ground acceleration time histories; <bold>(A)</bold> real acceleration record measured in Kobe (<xref ref-type="bibr" rid="B11">2016</xref>) as the benchmark earthquake; <bold>(B)</bold> artificially generated earthquake that inherits the properties from the benchmark earthquake.</p></caption>
<graphic xlink:href="fbuil-07-679488-g0003.tif"/>
</fig>
<p>Theoretically, the procedure is straightforward, as shown in <xref ref-type="fig" rid="F4">Figure 4</xref>. One creates a set of ground excitation time histories <inline-formula><mml:math id="M23"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow></mml:math></inline-formula>, based on which the response set <inline-formula><mml:math id="M24"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">R</mml:mi></mml:mrow></mml:math></inline-formula> is evaluated. Considering the limit state function <italic>g</italic>(<bold>x</bold>), every response function then leads to its corresponding indicator function value <inline-formula><mml:math id="M25"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">I</mml:mi></mml:mrow></mml:math></inline-formula>:</p>
<disp-formula id="E15"><label>(15)</label><mml:math id="M27"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mi mathvariant="-tex-caligraphic">S</mml:mi><mml:mo>=</mml:mo><mml:mtext>&#x0003C;</mml:mtext><mml:msubsup><mml:mover accent='true'><mml:mi>x</mml:mi><mml:mo>&#x000A8;</mml:mo></mml:mover><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mover accent='true'><mml:mi>x</mml:mi><mml:mo>&#x000A8;</mml:mo></mml:mover><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>m</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup><mml:mo>&#x0003E;</mml:mo><mml:mtext>&#x02192;&#x000A0;</mml:mtext><mml:mi mathvariant="-tex-caligraphic">R</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>x</mml:mi></mml:mstyle><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msup><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mo>,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>x</mml:mi></mml:mstyle><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>m</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msup><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0003E;</mml:mo><mml:mtext>&#x02192;&#x000A0;</mml:mtext><mml:mi mathvariant="-tex-caligraphic">L</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>g</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mstyle mathvariant='bold' 
mathsize='normal'><mml:mi>x</mml:mi></mml:mstyle><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msup><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>g</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>x</mml:mi></mml:mstyle><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>m</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msup><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0003E;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The evaluation of the response set <inline-formula><mml:math id="M28"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">R</mml:mi></mml:mrow></mml:math></inline-formula> is realized by numeric integration. In this paper, the Newmark method is used to evaluate the response functions sample by sample (Bamer et al., <xref ref-type="bibr" rid="B5">2021</xref>). However, due to relation (9), the number of samples <italic>m</italic> must be chosen disproportionately high to obtain reliable response statistics for small probabilities of structural failure. Therefore, a realization of the crude Monte Carlo method becomes infeasible if the system is high-dimensional and complex. In order to reliably estimate small probabilities of structural failure, we propose a strategy where a neural network is trained to learn the response behavior using a smaller training sample subset <inline-formula><mml:math id="M29"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02282;</mml:mo><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow></mml:math></inline-formula>, as shown in the bottom plot of <xref ref-type="fig" rid="F4">Figure 4</xref>. Subsequently, the neural network can predict the full response and indicator sample sets, <inline-formula><mml:math id="M30"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">R</mml:mi></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="M31"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">I</mml:mi></mml:mrow></mml:math></inline-formula>. One significantly benefits from the superior computational efficiency of the neural network meta-model, as its evaluation practically constitutes a sequence of successive multiplications. 
This strategy has one downside: the occurrence of extreme events in the sample subset <inline-formula><mml:math id="M32"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is unlikely if the number of training samples is considerably smaller than the total number of samples. The neural network will then fail to accurately predict precisely those extreme events that are most relevant for a reliable estimation of structural failure. One measure to avoid this problem is to significantly increase the number of training samples, which, however, would significantly decrease the numerical efficiency of the proposed strategy. Therefore, we pursue a different approach: we extend the training sample subset <inline-formula><mml:math id="M33"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> by introducing an additional random parameter <italic>&#x003B1;</italic> that scales the ground acceleration &#x01E8D;<sub><italic>r</italic></sub> in the intensity extraction of Equation (10). The extension is written as:</p>
<disp-formula id="E16"><label>(16)</label><mml:math id="M34"><mml:mrow><mml:mover accent='true'><mml:mi>e</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>t</mml:mi><mml:mi>w</mml:mi></mml:msub></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>t</mml:mi><mml:mi>w</mml:mi></mml:msub></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:msubsup><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>&#x003B1;</mml:mi><mml:msub><mml:mover accent='true'><mml:mi>x</mml:mi><mml:mo>&#x000A8;</mml:mo></mml:mover><mml:mi>r</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mrow></mml:mstyle><mml:mtext>d</mml:mtext><mml:mi>t</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>As shown later in the results, a uniform distribution for <italic>&#x003B1;</italic> between 0.8 and 1.5 is a good choice for the problems in this paper. This extension allows us to train the neural network with a significantly higher portion of extreme events in the tail end of the distribution. The neural network is then able to accurately predict such relevant extreme events. Instead of using a few hand-designed intensity parameters (Morfidis and Kostinakis, <xref ref-type="bibr" rid="B18">2018</xref>), we use the whole excitation history as input for the neural network, as shown on the left side of <xref ref-type="fig" rid="F5">Figure 5</xref>. As a result, any loss of information that could influence the structural response is avoided. In this paper, the output quantity of interest is chosen as the peak story drift ratio (PSDR) of the ground floor. However, if necessary, one can adapt the output quantity of interest. A convolutional neural network architecture is chosen for the enhanced Monte Carlo simulation method. The convolutional layers allow for automatic extraction of the relevant time-dependent patterns of the excitation input and break the time history information down into flattened characterization parameters (see <xref ref-type="fig" rid="F5">Figure 5</xref>). The subsequent fully connected layers then lead to one output quantity of interest, the PSDR.</p>
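The effect of the scaling parameter <italic>&#x003B1;</italic> on the training subset can be sketched directly: since Equation (16) is quadratic in the scaled acceleration, drawing one uniform <italic>&#x003B1;</italic> per training sample simply rescales the extracted intensity by <italic>&#x003B1;</italic><sup>2</sup>. The subset size and random seed below are illustrative assumptions:

```python
import numpy as np

def extended_intensities(e_hat, m_train, alpha_lo=0.8, alpha_hi=1.5, rng=None):
    """Intensity envelopes for the extended training subset: each training
    sample scales the record by a uniform alpha, and Eq. (16) then yields
    alpha**2 times the originally extracted intensity e_hat(t)."""
    rng = rng or np.random.default_rng()
    alphas = rng.uniform(alpha_lo, alpha_hi, size=m_train)
    return alphas[:, None]**2 * e_hat[None, :], alphas

# Illustrative use on an arbitrary extracted intensity history
e_hat = np.linspace(0.1, 1.0, 50)
envs, alphas = extended_intensities(e_hat, 200, rng=np.random.default_rng(3))
```

Samples drawn with <italic>&#x003B1;</italic> near 1.5 carry more than twice the benchmark intensity, which is what populates the tail of the training distribution.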
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>(Top)</bold> Crude Monte Carlo simulation workflow; <bold>(Bottom)</bold> Machine-learning-enhanced Monte Carlo simulation workflow.</p></caption>
<graphic xlink:href="fbuil-07-679488-g0004.tif"/>
</fig>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Convolutional neural network architecture for the enhanced Monte Carlo simulation method.</p></caption>
<graphic xlink:href="fbuil-07-679488-g0005.tif"/>
</fig></sec></sec>
<sec id="s3">
<title>3. Numerical Results</title>
<p>The numerical demonstration of the new strategies is presented on a three-story two-bay frame structure subjected to the set of ground motions, as depicted in <xref ref-type="fig" rid="F6">Figure 6</xref>. One ground motion sample is presented in <xref ref-type="fig" rid="F6">Figure 6A</xref> and the structure is presented in <xref ref-type="fig" rid="F6">Figure 6B</xref>. For the columns, beam elements with a square hollow cross section (0.3 &#x000D7; 0.3 m, thickness 0.03 m) are chosen. For the beams, beam elements with a rectangular cross section (0.3 &#x000D7; 0.4 m, thickness 0.03 m) are chosen. Every column and beam is discretized by four fiber beam elements, leading to a total number of 188 degrees of freedom (Bamer and Bucher, <xref ref-type="bibr" rid="B2">2012</xref>; Bamer and Markert, <xref ref-type="bibr" rid="B3">2017</xref>; Bamer et al., <xref ref-type="bibr" rid="B1">2017</xref>). An elastoplastic material law with kinematic hardening is considered using the following material parameters: initial Young&#x00027;s modulus 2.1 &#x000D7; 10<sup>11</sup> N m<sup>&#x02212;2</sup>, hardening stiffness 2.1 &#x000D7; 10<sup>10</sup> N m<sup>&#x02212;2</sup>, yield stress 2.4 &#x000D7; 10<sup>8</sup> N m<sup>&#x02212;2</sup>, and density 7,850 kg m<sup>&#x02212;3</sup>. Additionally, point masses of 5,000 kg are added to every finite element node belonging to the beams of the frame structure in order to roughly account for the structural setting. We used our Python- and C&#x0002B;&#x0002B;-based in-house tool to perform the numerical calculations.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Structural response of a frame structure subjected to ground excitation; <bold>(A)</bold> time history of one artificially generated earthquake; <bold>(B)</bold> illustrative frame structure used for the numerical demonstration and visualization of the story drifts (SD); <bold>(C)</bold> displacement time history of the left corner of every story of the frame structure.</p></caption>
<graphic xlink:href="fbuil-07-679488-g0006.tif"/>
</fig>
<p>Using the Newmark algorithm for numeric integration, the response of the structure to the generated ground excitation (<xref ref-type="fig" rid="F6">Figure 6A</xref>) is evaluated. The absolute displacement of the left corner of all three stories is depicted in <xref ref-type="fig" rid="F6">Figure 6C</xref>, based on which the PSDR is extracted. One can see that the structure experiences plastic deformation due to this sample excitation, as a residual plastic drift remains after the whole integration time period. We confirmed the level of plasticity by observing the material hystereses at the Gauss integration point level. Plasticity mainly occurs around the frame corners of the first floor. For this example, the major displacement always occurs on the first floor, which leads to the decision to take the PSDR of the first floor as the design quantity of interest.</p>
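For reference, the Newmark scheme used for the response evaluation can be sketched for a linear single degree of freedom system. The paper applies the scheme to the non-linear fiber-element frame; the average-acceleration parameters &#x003B2; = 1/4, &#x003B3; = 1/2 and zero initial conditions are the usual assumptions of this linear sketch:

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, beta=0.25, gamma=0.5):
    """Newmark integration of a linear SDOF system m*u'' + c*u' + k*u = f(t).
    The average-acceleration parameters beta=1/4, gamma=1/2 are
    unconditionally stable; zero initial conditions are assumed."""
    n = len(f)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (f[0] - c * v[0] - k * u[0]) / m
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(n - 1):
        # effective load from the current state
        p_eff = (f[i + 1]
                 + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                        + (1 / (2 * beta) - 1) * a[i])
                 + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                        + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = p_eff / k_eff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a

# Demo: suddenly applied static load on an undamped oscillator with T = 1 s;
# the exact response is u(t) = 1 - cos(2*pi*t), oscillating between 0 and 2
dt = 1.0e-3
f = np.full(1001, 4 * np.pi**2)
u, v, a = newmark_sdof(m=1.0, c=0.0, k=4 * np.pi**2, f=f, dt=dt)
```

For the non-linear frame, the effective stiffness is reassembled from the tangent stiffness in each step, but the predictor-corrector structure is the same.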
<p>We performed a rigorous hyperparameter search to find a neural network setup that leads to predictions of a high level of accuracy. In doing so, we found that the optimum architecture depends on the number of samples. If the neural network architecture is complex enough to extract all relevant features, a larger number of samples leads to higher prediction accuracy. Concerning the efficiency of the proposed approach, we decided to keep the number of samples small. Therefore, the convolutional neural network architecture can also be kept simple, extracting only the most relevant features. The chosen neural network receives the sample excitation time history as input. Subsequently, three convolutional layers are implemented, followed by two fully connected layers, which lead to the PSDR prediction. Each convolutional layer consists of eight filters. For the first convolutional layer, a kernel size of 32 and a stride of 3 have been found useful. For the second convolutional layer, the kernel size reduces to 16 with a stride of 2. The last convolutional layer has a kernel size of 8 and a stride of 1. After each convolutional stage, max-pooling summarizes the outputs using a pooling size of 2 and a stride of the same size. After passing the filters of the last convolutional layer, the outputs are averaged before the values are flattened and forwarded to the fully connected layers. The two fully connected layers consist of ten neurons each. The rectified linear activation function is applied in the convolutional stages and the fully connected layers. 
This architecture showed the best accuracy when the number of samples within the sample subset <inline-formula><mml:math id="M35"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">T</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> is 5 &#x000D7; 10<sup>3</sup>. Using the standard training strategy, the mean absolute error on a validation set of 10<sup>3</sup> samples decreases to 1.9 &#x000D7; 10<sup>&#x02212;3</sup> after 200 epochs of training. Using the extended set for the training procedure, the mean absolute error after 200 training epochs is slightly higher, with a value of 2.1 &#x000D7; 10<sup>&#x02212;3</sup>. During the hyperparameter search, the number of convolutional layers was varied between one and six, using up to 32 filters with kernel sizes between 3 and 64. Furthermore, the pooling sizes within the convolutional layers were varied, and both previously presented pooling functions were applied. A more extensive hyperparameter search, combined with an increased number of training samples, would most likely result in better prediction accuracy. However, this convolutional neural network architecture predicts the PSDR accurately enough for this study. We used the Python-based library TensorFlow to implement the neural network meta-model.</p>
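Under the stated hyperparameters, the architecture can be sketched in TensorFlow/Keras as follows. Details the text leaves open, such as the input length, the exact placement of the averaging step, and the linear output activation, are assumptions of this sketch:

```python
import tensorflow as tf

def build_cnn(n_steps):
    """CNN surrogate as described in the text: three Conv1D stages with
    eight filters each (kernel/stride 32/3, 16/2, 8/1), max-pooling with
    pool size 2 after each stage, averaged filter outputs, and two dense
    layers of ten neurons leading to the PSDR. Unstated details (input
    length, output activation) are assumptions."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_steps, 1)),   # excitation time history
        tf.keras.layers.Conv1D(8, 32, strides=3, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),             # stride defaults to pool size
        tf.keras.layers.Conv1D(8, 16, strides=2, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(8, 8, strides=1, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),    # average, then flatten
        tf.keras.layers.Dense(10, activation="relu"),
        tf.keras.layers.Dense(10, activation="relu"),
        tf.keras.layers.Dense(1),                    # PSDR prediction
    ])

model = build_cnn(4000)   # 4000 time steps is an assumed record length
model.compile(optimizer="adam", loss="mae")
```

Training on the subset then reduces to a standard `model.fit` call on the excitation histories and their Newmark-computed PSDR values.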
<p>The Monte Carlo simulation results are depicted in <xref ref-type="fig" rid="F7">Figure 7</xref>. Within the subplots of this figure, the crude Monte Carlo simulation using 10<sup>5</sup> samples is depicted by the gray dotted lines, the enhanced Monte Carlo simulation is depicted by the red lines, and the green lines depict the extended enhanced Monte Carlo simulation. We performed the numerical example on a computer with an Intel Xeon E5-2643 processor running at 3.3 GHz. The operating system is Linux Ubuntu 16.04, with 60 GB of working memory available, which is by no means required for the simulation. For the example in this paper, the crude Monte Carlo simulation requires a calculation time of approximately one week, while the prediction procedure of the two neural network-enhanced strategies requires only a few seconds. However, to provide a fair speed-up measure, one must also consider the evaluation of the training sample set, the hyperparameter fitting, and the training procedure. Considering these factors, a fair speed-up measure for the new strategies is the number of full samples divided by the number of training samples: 10<sup>5</sup> samples for the full set over 5 &#x000D7; 10<sup>3</sup> samples for the training set, which results in a speed-up factor of 20.</p>
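The exceedance probabilities plotted in the tail of Figure 7 reduce to empirical fractions of the simulated or predicted PSDR samples. The following sketch uses purely illustrative numbers, not the paper's data:

```python
import numpy as np

def exceedance_probability(psdr, threshold):
    """Empirical complementary CDF at `threshold` (cf. Figure 7C): the
    fraction of Monte Carlo samples whose PSDR exceeds the threshold."""
    psdr = np.asarray(psdr)
    return np.count_nonzero(psdr > threshold) / psdr.size

# Illustrative lognormal stand-in for the PSDR distribution
rng = np.random.default_rng(1)
samples = rng.lognormal(mean=-4.0, sigma=0.35, size=100_000)
p_exceed = exceedance_probability(samples, 3.5e-2)
```

With 10<sup>5</sup> samples, probabilities down to roughly 10<sup>&#x02212;4</sup> can be resolved this way, which is exactly the regime Figure 7C examines.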
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><bold>(A)</bold> Probability density function (PDF), <bold>(B)</bold> cumulative distribution function (CDF), and <bold>(C)</bold> logarithmic plot of the complementary cumulative distribution function applying the crude Monte Carlo method (black, dotted line), the neural network-enhanced Monte Carlo method using the standard training strategy (red line), and the neural network-enhanced Monte Carlo method using the extended training strategy (green line).</p></caption>
<graphic xlink:href="fbuil-07-679488-g0007.tif"/>
</fig>
<p><xref ref-type="fig" rid="F7">Figure 7A</xref> presents the probability density function (PDF) in terms of a histogram over the displacement intervals. At first sight, one can see that the enhanced Monte Carlo simulation method using the extended scheme is not as accurate as the enhanced Monte Carlo method using the standard training scheme. Also, regarding the prediction of the cumulative distribution function (CDF), shown in <xref ref-type="fig" rid="F7">Figure 7B</xref>, the enhanced Monte Carlo simulation scheme using the standard training algorithm seems to be more accurate than the enhanced Monte Carlo method using the extended training scheme. This is not surprising, as the neural network is better trained around the mean using the standard sample subset <inline-formula><mml:math id="M36"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> than using the extended subset <inline-formula><mml:math id="M37"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">S</mml:mi></mml:mrow></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. However, accuracy around the mean is less relevant in structural engineering design, as, for obvious reasons, we want structural failure to happen very rarely. Therefore, we present the tail end of the complementary cumulative distribution function (CCDF) in <xref ref-type="fig" rid="F7">Figure 7C</xref>, which shows the probability of exceeding a certain PSDR in a logarithmic plot. Here, the advantage of the enhanced Monte Carlo simulation using an extended sample subset becomes clear, as it outperforms the standard enhanced Monte Carlo simulation in the crucial region around the tail end of the distribution. 
Evaluating the probability of exceedance for a PSDR above 3.5 &#x000D7; 10<sup>&#x02212;2</sup> m results in a reliable approximation using the extended neural network-enhanced strategy, while the neural network-enhanced method using the standard training becomes gradually more inaccurate and finally fails entirely to predict the probability of a PSDR exceeding a value of 4.2 &#x000D7; 10<sup>&#x02212;2</sup> m.</p></sec>
<sec sec-type="conclusions" id="s4">
<title>4. Conclusion</title>
<p>In this paper, we proposed a new Monte Carlo simulation method enhanced by a convolutional neural network. Using a non-linear Kanai-Tajimi filter, site-dependent ground conditions were taken into account. Two convolutional neural network-enhanced strategies were proposed. The first strategy uses a standard training procedure by considering samples from the full excitation sample set. The second strategy uses a sample set with increased variance regarding intensity to train the neural network more intensively around the tail end of the distribution. In the presented numerical example, the PSDR is the output quantity. However, the proposed strategy can be adapted to predict other output features. The investigation of more sophisticated damage prediction using multiple output quantities is of high interest for future research.</p>
<p>We have shown that both strategies reveal outstanding efficiency compared to the crude Monte Carlo simulation. Structural design that incorporates non-linear behavior has hardly ever been realized using the Monte Carlo method, as the required number of simulations could not be evaluated in a feasible amount of time. The new strategies enable Monte Carlo simulations within a feasible amount of time and can therefore be incorporated into a practical design environment. The accuracy of the convolutional neural network depends on the number of training samples. Accordingly, one always faces a trade-off between the accuracy of the predictions and the computing time for evaluating the training samples.</p>
<p>The two types of convolutional neural network enhanced Monte Carlo simulation strategies have different advantages and disadvantages. The first type, applying the standard training strategy, shows high accuracy around the mean of the distribution. One can conclude that it will be more useful for estimating the response with regard to structural serviceability. The second type, applying the extended training strategy, proves to be somewhat less accurate around the mean of the distribution than the first type. However, it shows outstanding accuracy around the tail end of the distribution, where the first method fails. It is, therefore, highly appropriate for practical engineering procedures that involve the design of structures for the low target probabilities of failure required, in particular, for engineering structures in an urban built environment.</p></sec>
<sec sec-type="data-availability-statement" id="s5">
<title>Data Availability Statement</title>
<p>The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.</p></sec>
<sec id="s6">
<title>Author Contributions</title>
<p>FB and DT developed the new method, performed the numerical simulations, and wrote the manuscript. MS and BM reviewed the manuscript and supervised the research project. All authors contributed to the article and approved the submitted version.</p></sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
</body>
<back>
</back>
</article> 