# Uncertainty Modelling in Metamodels for Fire Risk Analysis

*Keywords:* metamodel; surrogate; uncertainty; risk; fire; evacuation

Federal Institute for Materials Research and Testing, Unter den Eichen 87/88, 12205 Berlin, Germany

Institute for Advanced Simulation, Forschungszentrum Jülich, Wilhelm-Johnen-Straße, 52425 Jülich, Germany

Computational Civil Engineering, University of Wuppertal, 42285 Wuppertal, Germany

Division of Structural Engineering, Lund University, 221 00 Lund, Sweden

Author to whom correspondence should be addressed.

Academic Editor: Tom Brijs

Received: 1 April 2021 / Revised: 3 June 2021 / Accepted: 16 June 2021 / Published: 23 June 2021

In risk-related research of fire safety engineering, metamodels are often applied to approximate the results of complex fire and evacuation simulations. This approximation may cause epistemic uncertainties, and the inherent uncertainties of evacuation simulations may lead to aleatory uncertainties. However, neither the epistemic ‘metamodel uncertainty’ nor the aleatory ‘inherent uncertainty’ has been included in the results of the metamodels for fire safety engineering. For this reason, this paper presents a metamodel that includes metamodel uncertainty and inherent uncertainty in the results of a risk analysis. This metamodel is based on moving least squares, and the metamodel uncertainty is derived from its prediction interval. The inherent uncertainty is modelled with an original approach, directly using all replications of evacuation scenarios without the assumption of a specific probability distribution. This generic metamodel was applied to a case study risk analysis of a road tunnel and showed high accuracy. It was found that metamodel uncertainty and inherent uncertainty have clear effects on the results of the risk analysis, which makes their consideration important.

In fire safety engineering, risks for occupants are of high concern and continuously investigated in risk analyses. In a risk analysis, risks are quantified with the frequencies and the consequences of many scenarios with random parameter settings, subjected to uncertainty [1] (pp. 1, 5). Risks can be expressed as the individual risk, namely the ‘measure of fire risk limited to consequences experienced by an individual and based on the individual’s pattern of life’, and the societal risk as a ‘measure of fire risk combining consequences experienced by every affected individual’, often represented with a risk curve [2] (p. 3f).

The risk-related research in fire safety engineering comprises diverse methodologies for the analysis of consequences in many scenarios. In the methodology proposed by Albrecht [3] with reference to Albrecht and Hosser [4], life safety in a community assembly building was quantified with the probability of safe evacuation. De Sanctis et al. [5] expressed the Life Quality Index-based consequences of small fires in single-family houses based on statistical data, and the consequences of large fires were considered with a probabilistic decision-analytical approach. The methodology published by De Sanctis and Fontana [6] was applied to the risk- and Life Quality Index-based optimisation of the widths of doors in a retail building. Van Weyenberge et al. [7] analysed the risks for humans in assembly compartments with reference to Van Weyenberge et al. [8]. Di Nardo et al. [9] used system dynamics to include time-dependent variables for the qualitative and quantitative analysis of risks caused by LPG cylinders in houses. Coping with more complex structures, Schröder [10] evaluated the safe evacuation of underground metro stations in many different scenarios. Anderson and Ezekoye [11] carried out an analysis of the community-averaged extent of damage caused by fires in residential buildings of the United States, and Yamamoto et al. [12] investigated the fire safety of road tunnel users. In particular, the risks of road tunnel users have been widely researched, e.g., by Schubert et al. [13], and this research culminated in several European methodologies for risk analysis, such as for Germany [14] and Austria [15].

Whereas De Sanctis et al. [5] and Schubert et al. [13] applied probabilistic and empirical models to compute the consequences of fire and evacuation scenarios, the other methodologies combined a fire model and an evacuation model. The fire models are mostly computational fluid dynamics models [3,7,10,12,14,15] and the evacuation models are most often one-dimensional models [3,6,7,14,15], except for Yamamoto et al. [12], who used a cellular automaton and Schröder [10], who employed a microscopic evacuation model. Thus, in several methodologies, complex models were used, causing high computational costs to evaluate the consequences for occupants in evacuation scenarios under the effects of smoke spread from fire scenarios.

Because of the high computational costs of complex models, several authors apply metamodels to determine consequences, for example Albrecht and Hosser [4], De Sanctis and Fontana [6], Van Weyenberge et al. [7] and ILF Consulting Engineers [15], together with a zone model. A metamodel comprises three integral parts, summarised in Queipo et al. [16] (p. 3): the experimental design, the database and the response surface model (RSM). The experimental design specifies the parameters of discrete scenarios to be computed with the complex model. The result of interest of these simulations is most often a measure of the consequences in the scenarios. The database comprises these results for all data points of the experimental design. From these results of the database, the RSM approximates the result for any random scenario represented by a point on the domain of the variables. Thus, the RSM simplifies the complex model and is, therefore, quick in the determination of results but causes ‘metamodel uncertainties’ [17] (p. 9). Since the ‘inaccuracy of the metamodels can be interpreted as the metamodel uncertainty where the true response is unknown except at the sample points’ [18] (p. 1) and since adding additional data points could reduce this ‘inaccuracy’, the metamodel uncertainty can, in our case, mostly be characterised as an ‘epistemic uncertainty’, also acknowledging minor ‘aleatory uncertainties’ [19]. Summing up, the metamodel has low computational costs and can, for this reason, be helpful with regard to the global objective of the risk analysis, namely determining the consequences of many random scenarios on the entire domain of the variables.

A scenario is typically specified with ‘control variables’ [20] (p. 15), briefly named variables, such as the maximum heat release rate (HRR) or the number of occupants. Next to these variables, ‘environmental variables’ [20] (p. 15) cause an ‘intrinsic’ randomness [21,22,23] in the fire and the evacuation scenario, for example in the gas turbulence or the individual characteristics of the occupants. Whereas the environmental variables are, in common practice, of minor concern in the fire scenarios, they have a large effect in the evacuation scenarios. For this reason, they are considered in the evacuation models of several methodologies [4,6,7,15]. Thus, the stochastic result of the evacuation scenario is subjected to an uncertainty, named the inherent uncertainty. Obviously, the inherent uncertainty can be reduced by a detailed modelling of, for example, the individual characteristics and for this reason it is also ‘epistemic’ [19]. However, since this precise description is uncommon in evacuation modelling, the inherent uncertainty is treated as mainly an ‘aleatory uncertainty’ with the ‘intrinsic randomness of a phenomenon’ [19]. Hence, replications of one scenario lead to an observed random sample (ORS) of the results, which represents the true but unknown inherent uncertainty of the evacuation model. A general approach in evacuation modelling exemplified by Ronchi et al. [22] is to run several replications of a specific evacuation scenario and then evaluate the ORS characterised by the two discrete measures, mean and deviation.

Besides fire safety engineering, several publications, such as Marrel et al. [24] and Moutoussamy et al. [25], address metamodels for the stochastic simulation results of complex models. Marrel et al. [24] describe a joint metamodel for the mean and the dispersion of stochastic model results without replications. This metamodel is based on a Gaussian process model with additional nugget effects to not directly interpolate to the data points. The nugget effect is different for each data point, which allows to consider spatially different dispersions. The dispersion is modelled with a multidimensional differential exponential function. Moutoussamy et al. [25] present a metamodel to directly determine the probability density functions of the results of the complex model at any arbitrary point. Their method relies on replications at the data points and does not require the assumption of a specific distribution type. They first discuss the classical kernel regression, where all data points are considered with a weight depending on the distance to the arbitrary point. Next, they propose a metamodel based on functional decomposition, which is similar to kernel regression but the results are derived from a reduced database. The problem that the model of the probability density function also produces negative values is coped with adapted methods, such as the alternate quadratic minimisation.

Although several methodologies in fire safety engineering using metamodels analyse the metamodel uncertainty, e.g., Albrecht [3] and Van Weyenberge et al. [8], or consider environmental variables, e.g., Albrecht [3], De Sanctis and Fontana [6], and ILF Consulting Engineers [15], neither the metamodel uncertainty nor the inherent uncertainty has been transferred to the results of the metamodel. Van Weyenberge et al. [7] at least discuss the integration of the inherent uncertainty. However, the authors of the present publication think that it is important to take the metamodel uncertainty and the inherent uncertainty into account in the final result of the metamodel to represent the result of the complex model at an arbitrary point.

For this reason, a metamodel for fire safety engineering is presented, which includes both uncertainties, and it is used in an exemplary case study for a fire risk analysis of a road tunnel. This metamodel is based on the results of a computational fluid dynamics model and a microscopic evacuation model. It considers temporal aspects within the scenarios and therewith has a different focus than the approach of Di Nardo et al. [9], who model the evolution of risks. However, the metamodel can also be used within their approach. Despite the available approaches for stochastic results, such as those of Marrel et al. [24] or Moutoussamy et al. [25], the RSM is based on the deterministic results of the complex model, namely the mean of each ORS, and also produces deterministic results. One reason for this deterministic RSM is to make it possible to separate the deterministic result of the RSM from the inherent uncertainty at any arbitrary point in order to comply with the general approach for evacuation scenarios [22], that is, characterising the ORS by its mean and deviation. Regarding the inherent uncertainty in the results of the complex model, the authors propose an original approach called the sampled uncertainty approach. This approach is suitable for the requirements of microscopic evacuation models, namely, a limited number of replications and different unspecific frequency distributions in the ORSs. In conclusion, our metamodel, which includes the metamodel uncertainty and the inherent uncertainty, is different from the other metamodels in fire safety engineering outlined above and, for this reason, can contribute to the scientific basis.

Basically, the metamodel consists of the three parts of RSM, metamodel uncertainty and inherent uncertainty. The symbols used to describe these three parts are shown in Table 1. Firstly, the RSM is based on the projection array-based design method of Loeppky et al. [26] for the experimental design and on the moving least squares method by Lancaster and Salkauskas [27], both further detailed in Section 2.1 and Section 2.2.1. The experimental design establishes the database of data points simulated with the complex model. Secondly, the metamodel uncertainty is the mainly epistemic uncertainty of the RSM and is determined with the prediction interval method by Kim and Choi [18] outlined in Section 2.2.2. Thirdly, the original sampled uncertainty approach is used to reproduce the ORS as described in Section 2.3.

To sum up, the result of the metamodel $\widehat{y}$ at a point $\tilde{\mathit{x}}$ (or $\widehat{\mathit{Y}}$ for multiple points) in Equation (1) combines the result of the RSM $\overline{y}$, the metamodel uncertainty $\delta \widehat{y}$ and the relative inherent uncertainty $\widehat{\epsilon}$.

$$\widehat{y}=\left(\overline{y}+\delta \widehat{y}\right)\cdot \widehat{\epsilon}$$

It therewith should reproduce the result of the complex model. The result of the metamodel only considering the metamodel uncertainty and not the inherent uncertainty is denoted with ${\widehat{y}}^{m}$, and vice versa, it is denoted with ${\widehat{y}}^{i}$.
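As a minimal sketch, the composition of Equation (1) combines the deterministic RSM result with one random draw for each uncertainty; all numeric values and the two draw functions below are hypothetical placeholders for the methods of Sections 2.2.2 and 2.3.

```python
import numpy as np

rng = np.random.default_rng(0)

def metamodel_result(y_bar, draw_delta_y, draw_epsilon):
    """One realisation of y_hat = (y_bar + delta_y) * epsilon (Equation (1))."""
    return (y_bar + draw_delta_y()) * draw_epsilon()

# Hypothetical inputs for illustration only:
y_hat = metamodel_result(
    y_bar=0.25,                                         # deterministic RSM result
    draw_delta_y=lambda: 0.01 * rng.standard_t(df=20),  # metamodel uncertainty draw
    draw_epsilon=lambda: rng.choice([0.9, 1.0, 1.1]),   # inherent uncertainty draw
)
```

Setting `draw_epsilon` to `lambda: 1.0` corresponds to $\widehat{y}^{m}$, and setting `draw_delta_y` to `lambda: 0.0` corresponds to $\widehat{y}^{i}$.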

The metamodel is used for a risk analysis in a case study described in Section 3.1. The risk analysis requires the results of a high number of different points. These points are drawn in a Monte-Carlo simulation and their results are determined with the metamodel. The metamodel, therefore, uses the database earlier simulated with the complex model.

Due to the global objective of the risk analysis, the results have to be computed on the entire domain of the variables. According to Santner et al. [20] (p. 124), ‘computer experiments’ often share the same global objective; hence, their ‘space-filling’ experimental design should ‘spread the [data] points at which we observe the response evenly throughout the region’. Latin hypercube designs [28] meet this requirement and are, therefore, commonly used in computer experiments [20] (p. 125), for example by Van Weyenberge et al. [7].

The projection array-based design method by Loeppky et al. [26] extends the Latin hypercube design in order to further improve its space-filling properties. In detail, the projection array-based design is based on the substructure consisting of substrata from Latin hypercube designs as well as on an additional structure of projection arrays formed by strata, which are, for example, rectangles in a two-dimensional case. Each projection array in a projection array-based design may contain at most one data point, and each substratum of a variable contains exactly one data point, following Latin hypercube designs. Loeppky et al. [26] further present a sequential refinement for the projection array-based design, in other words, subsequently adding new data points to an existing experimental design.

The projection array-based design method is employed here because of its space-filling properties and its sequential refinement. During its setup, data points are added randomly to the available strata and projection arrays. To improve the space-filling properties, each projection array-based design is chosen from a large set of different designs with regard to a maximin and minimax optimisation.
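The selection among candidate designs can be sketched as follows, using plain Latin hypercube candidates and only the maximin criterion; the projection-array structure and the minimax optimisation of the actual method are omitted for brevity.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def latin_hypercube(n_points, n_dims):
    """One random Latin hypercube sample on [0, 1]^n_dims:
    each substratum of each variable contains exactly one data point."""
    cols = [(rng.permutation(n_points) + rng.random(n_points)) / n_points
            for _ in range(n_dims)]
    return np.column_stack(cols)

def min_pairwise_distance(design):
    """Distance of the closest pair of data points in a design."""
    return min(np.linalg.norm(a - b) for a, b in combinations(design, 2))

# Maximin selection: keep the candidate whose closest pair of points is
# farthest apart, i.e., the most space-filling design of the set.
candidates = [latin_hypercube(10, 2) for _ in range(200)]
best = max(candidates, key=min_pairwise_distance)
```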

The methodologies of Anderson and Ezekoye [11] and Bundesanstalt für Straßenwesen (BASt) [14] use event trees for the risk analysis and, therefore, directly use discrete scenarios simulated with the complex model for the single events. This approach corresponds to a ‘nearest neighbour interpolation (NNI)’, which virtually adopts the result for an arbitrary point directly from the data point of a discrete scenario with the smallest Euclidean distance. Several computer codes are readily available to realise the NNI method.
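A minimal sketch of the NNI method with a hypothetical database of three data points:

```python
import numpy as np

def nni_predict(x_query, X, Y):
    """Nearest neighbour interpolation: adopt the result of the data point
    with the smallest Euclidean distance to the query point."""
    distances = np.linalg.norm(X - x_query, axis=1)
    return Y[np.argmin(distances)]

# Hypothetical database: three data points with their results.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
Y = np.array([0.1, 0.5, 0.9])
y_query = nni_predict(np.array([0.9, 0.1]), X, Y)  # closest data point: [1, 0]
```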

The methodologies of Albrecht [3] and Van Weyenberge et al. [7] employ the moving least squares method (MLS) [27] for their RSMs. MLS conducts a local weighted least squares regression of a linear or quadratic polynomial at a point $\tilde{\mathit{x}}$. It extends the global least squares regression of the model in Equation (2) by weighting the data points in the estimation of the regression coefficients [29] (p. 18ff).

$${\overline{\mathit{Y}}}^{c}=\mathit{X}\mathit{\beta}+\mathit{\delta}\mathit{Y}$$

Here, $\mathit{\delta}\mathit{Y}$ are the approximation errors, $\mathit{\beta}$ are the regression coefficients and ${\overline{\mathit{Y}}}^{c}={\left[{\overline{y}}_{1},\dots ,{\overline{y}}_{{N}_{dps}}\right]}^{T}$ are the deterministic results, i.e., the means of the ORSs, of the complex model at the data points of the experimental design $\mathit{X}={\left[{\mathit{x}}_{1},\dots ,{\mathit{x}}_{{N}_{dps}}\right]}^{T}$ with ${N}_{dps}$ data points ${\mathit{x}}_{i}=\left[{x}_{i,1},{x}_{i,2},\dots \right]$.

The local weighting matrix $\mathit{W}\equiv \mathit{W}\left(\tilde{\mathit{x}}\right)$ is a diagonal matrix which weights the data points depending on their Euclidean distance to the point $\tilde{\mathit{x}}$ with a weighting function. The least squares estimators $\mathit{b}$ of the regression coefficients can be calculated with Equation (3).

$$\left({\mathit{X}}^{T}\mathit{W}\mathit{X}\right)\cdot \mathit{b}={\mathit{X}}^{T}\mathit{W}{\overline{\mathit{Y}}}^{c}$$

Consequently, the local least squares estimators are only valid for one point, and Equation (4) leads to a local result $\overline{y}\equiv \overline{y}\left(\tilde{\mathit{x}}\right)$.

$$\overline{y}=\tilde{\mathit{x}}\cdot \mathit{b}$$

Three weighting functions are adopted from Kim and Choi [18] (Equation (4a)) as well as Most and Bucher [30] (Equations (12) and (16)). The weighting function and its weighting parameter are calibrated with a straightforward algorithm to fit the results of the data points. This algorithm reduces the prediction variance determined at arbitrary points on the entire domain, similar to Kim and Choi [18] (p. 4), who use the prediction variance for ‘design optimisation’.

The results $\overline{y}$ of Equation (4) for every point are deterministic if the probabilistic properties of the regression coefficients are neglected. The regression causes residuals, namely the difference between the result of a data point and the approximated result of the RSM.
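The local regression of Equations (2)–(4) can be sketched compactly. A Gaussian weighting function is used here as a stand-in for the calibrated weighting functions of Kim and Choi [18] and Most and Bucher [30]; the bandwidth `theta` is a hypothetical value.

```python
import numpy as np

def mls_predict(x_query, X, y, theta=0.5):
    """Moving least squares with a linear polynomial basis (Equations (2)-(4)).
    The Gaussian weighting function is a placeholder for the calibrated
    functions used in the paper."""
    # Linear basis [1, x1, x2, ...] for the data points and the query point.
    P = np.hstack([np.ones((len(X), 1)), X])
    p_query = np.concatenate([[1.0], x_query])
    # Diagonal weighting matrix W from the Euclidean distances to x_query.
    d = np.linalg.norm(X - x_query, axis=1)
    W = np.diag(np.exp(-(d / theta) ** 2))
    # Local weighted normal equations (X^T W X) b = X^T W Y^c (Equation (3)).
    b = np.linalg.solve(P.T @ W @ P, P.T @ W @ y)
    return p_query @ b  # local result (Equation (4))
```

Because the weights depend on `x_query`, the estimators `b` are recomputed for every point, which is what makes the regression ‘moving’.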

Kim and Choi [18] introduce a method to calculate the metamodel uncertainty of MLS, in the following called the prediction interval method. In detail, the metamodel uncertainty $\delta \widehat{y}$ is the difference between the result of the RSM and the unknown result of the complex model, $\delta \widehat{y}=\overline{y}-{\overline{y}}^{c}$. It is normally distributed with a mean of zero and the variance $var\left(\delta \widehat{y}\right)$.

The prediction interval $\Delta \widehat{y}$ is defined with Equation (5).

$$\Delta \widehat{y}=\left|{t}_{\alpha /2,{N}_{dps}-{N}_{terms}}\cdot \sqrt{{s}^{2}}\right|$$

It depends on the Student distribution with the statistic ${t}_{\alpha /2,{N}_{dps}-{N}_{terms}}$ for the two-sided confidence level $\alpha $ and the degree of freedom ${N}_{dps}-{N}_{terms}$, where ${N}_{terms}$ is the number of terms in the regression model. Further, the prediction variance ${s}^{2}\equiv {\left(s\left(\overline{y}-{\overline{y}}^{c}\right)\right)}^{2}$ is given in Equation (6) [18] (Equation (21)) for the variance of the metamodel uncertainty $var\left(\delta \widehat{y}\right)$.

$${s}^{2}={\sigma}^{2}\cdot \left(1+{\left(\tilde{\mathit{x}}\right)}^{T}\cdot {\left({\mathit{X}}^{T}\mathit{W}\mathit{X}\right)}^{-1}\cdot {\mathit{X}}^{T}\mathit{W}\mathit{W}\mathit{X}\cdot {\left({\mathit{X}}^{T}\mathit{W}\mathit{X}\right)}^{-1}\cdot \tilde{\mathit{x}}\right)$$

The prediction variance depends on the variance estimator ${\sigma}^{2}$ in Equation (7), also known as the leave-one-out cross-validation error, where ${\overline{y}}_{-i}$ denotes the result of the RSM at the data point ${\mathit{x}}_{i}$ with a database excluding this specific data point.

$${\sigma}^{2}=\frac{1}{{N}_{dps}-{N}_{terms}}\sum _{i=1}^{{N}_{dps}}{\left({\overline{y}}_{-i}-{\overline{y}}_{i}^{c}\right)}^{2}$$
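The variance estimator of Equation (7) can be sketched generically for any RSM predictor; the `predict` callable below is a hypothetical interface.

```python
import numpy as np

def loo_variance(X, y, predict, n_terms):
    """Leave-one-out cross-validation error of Equation (7): each data point
    is predicted from a database that excludes exactly that point."""
    n = len(X)
    sq_errors = []
    for i in range(n):
        mask = np.arange(n) != i
        y_loo = predict(X[i], X[mask], y[mask])  # RSM result without point i
        sq_errors.append((y_loo - y[i]) ** 2)
    return sum(sq_errors) / (n - n_terms)
```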

The metamodel uncertainty is derived from the prediction interval method with Equation (8), where $\tilde{t}$ is a random number subjected to the Student distribution.

$$\delta \widehat{y}=\sqrt{{s}^{2}}\cdot {\tilde{t}}_{{N}_{dps}-{N}_{terms}}$$
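A sketch of this draw: a Student-t random number with $N_{dps}-N_{terms}$ degrees of freedom is scaled by the prediction standard deviation $\sqrt{s^{2}}$, consistent with the prediction interval in Equation (5). The numeric inputs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def draw_metamodel_uncertainty(s2, n_dps, n_terms, size=1):
    """Draw the metamodel uncertainty: sqrt of the prediction variance times
    a Student-t random number with n_dps - n_terms degrees of freedom."""
    t_tilde = rng.standard_t(df=n_dps - n_terms, size=size)
    return np.sqrt(s2) * t_tilde

# Hypothetical values: prediction variance 0.01, 40 data points, 4 terms.
delta_y = draw_metamodel_uncertainty(s2=0.01, n_dps=40, n_terms=4, size=100_000)
```

With many draws, the sample is centred on zero, and its spread reflects both the prediction variance and the heavier tails of the Student distribution.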

Salemi et al. [31] present a metamodel using MLS with a database that comprises a high number of data points with many replications. The variance at a point is quantified with the equally weighted average variance of the ORSs at its neighbours, meaning the spatially close data points. In their approach to quantify the aleatory uncertainty, a clear distinction is made between the deterministic results of the ORSs, i.e., their means, for the RSM and the variances of the ORSs. In this respect, their approach differs from other approaches, such as that of Moutoussamy et al. [25], but it suits the general approach for evacuation scenarios exemplified by Ronchi et al. [22].

However, evacuation scenarios are often analysed with only a limited number of data points ${N}_{dps}$ and replications ${N}_{rep}$. For this reason, the databases common for evacuation scenarios differ clearly from the database used by Salemi et al. [31]. Furthermore, the ORSs of evacuation scenarios often have a variety of different, unspecific frequency distributions unequal to normal or lognormal distributions. For these reasons, the approach of Salemi et al. [31] or Gaussian processes [24] are less suitable for the databases of microscopic evacuation models. Hence, the authors introduce an original approach, called the ‘sampled uncertainty approach’, to determine the inherent uncertainty.

The sampled uncertainty approach comprises three principal steps to derive the inherent uncertainty at a point $\tilde{\mathit{x}}$. To begin, each ORS ${\mathit{Y}}_{i}^{c}=\left\{{y}_{i,1}^{c},\dots ,{y}_{i,{N}_{rep}}^{c}\right\}$ in the database is divided by its mean to get the relative ORS ${\mathit{Y}}_{i}^{c*}\equiv {\mathit{Y}}^{c*}\left({\mathit{x}}_{i}\right)=\left\{\frac{{y}_{i,1}^{c}}{{\overline{y}}_{i}^{c}},\dots ,\frac{{y}_{i,{N}_{rep}}^{c}}{{\overline{y}}_{i}^{c}}\right\}$. Next, the relative ORSs of all ${N}_{nb}$ neighbours of the point $\tilde{\mathit{x}}$ are merged in the combined relative sample ${\mathit{Y}}_{{N}_{nb}}^{c*}\left(\tilde{\mathit{x}}\right)=\left\{{\mathit{Y}}_{1}^{c*},\dots ,{\mathit{Y}}_{{N}_{nb}}^{c*}\right\}$. This combined relative sample is specific for each point. It contains ${N}_{rep}\cdot {N}_{nb}$ replications and has the local discrete distribution $\mathcal{D}\left({\mathit{Y}}_{{N}_{nb}}^{c*}\left(\tilde{\mathit{x}}\right)\right)$ in Equation (9).

$$\mathcal{D}\left({\mathit{Y}}_{{N}_{nb}}^{c*}\left(\tilde{\mathit{x}}\right)\right)=\mathcal{D}\left({\omega}_{1}\cdot \mathcal{U}\left({\mathit{Y}}_{1}^{c*}\right),\dots ,{\omega}_{{N}_{nb}}\cdot \mathcal{U}\left({\mathit{Y}}_{{N}_{nb}}^{c*}\right)\right)$$

Here, $\mathcal{U}\left({\mathit{Y}}_{i}^{c*}\right)$ is the uniform distribution of the ORS ${\mathit{Y}}_{i}^{c*}$ in which each replication is subjected to the probability $p=\frac{1}{{N}_{rep}}$. Additionally, each of these uniform distributions is weighted with a combination factor $\omega $ that sums up to ${\sum}_{i=1}^{{N}_{nb}}{\omega}_{i}=1$ over all ${N}_{nb}$ neighbours and is $\omega =0$ for the other data points. The number of neighbours, therefore, defines the region around a point where the ORSs are considered. At last, the relative inherent uncertainty $\widehat{\epsilon}\equiv \widehat{\epsilon}\left(\tilde{\mathit{x}}\right)$ is directly drawn in Equation (10) from the combined relative sample.

$$\widehat{\epsilon}\sim \mathcal{D}\left({\mathit{Y}}_{{N}_{nb}}^{c*}\left(\tilde{\mathit{x}}\right)\right)$$

The combined relative sample should correspond to the true ORS of the complex model at a specific point $\tilde{\mathit{x}}$. Notably, this ORS is unknown since the results were not simulated. Obviously, the required combination factors are unknown; hence, three basic modes for the combination are discussed. Firstly, the combined relative sample can contain only the closest ORS with $\omega =1$ and ${N}_{nb}=1$. This mode leads to a discontinuous transition in the centre between two data points, which is not reasonable. Secondly, ${N}_{nb}$ ORSs can be weighted with equal combination factors $\omega =\frac{1}{{N}_{nb}}$, as in the approach of Salemi et al. [31]. However, this uniform combination does not represent the true ORS if the point $\tilde{\mathit{x}}\equiv {\mathit{x}}_{i}$ is equal to a data point. Thirdly, in the linear combination, ${N}_{nb}$ data points are linearly weighted with the weights $\omega \left({\mathit{x}}_{i}\right)=1-\frac{\left({N}_{nb}-1\right)\cdot d\left(\tilde{\mathit{x}},{\mathit{x}}_{i}\right)}{{\sum}_{j=1}^{{N}_{nb}}d\left(\tilde{\mathit{x}},{\mathit{x}}_{j}\right)}$ depending on their Euclidean distance $d$ to the point $\tilde{\mathit{x}}$. Since the combined relative sample should represent the true ORS directly at a data point, it further yields $\omega \left(\tilde{\mathit{x}}\equiv {\mathit{x}}_{i}\right)=1$. For this reason, the initial parameter ${N}_{nb}$ has to be adapted for each point $\tilde{\mathit{x}}$ as a consequence of ${\sum}_{i=1}^{{N}_{nb}}{\omega}_{i}=1$; e.g., a point $\tilde{\mathit{x}}$ equal to a data point leads to the adapted number of neighbours ${N}_{nb}=1$.
In conclusion, the linear combination represents the true ORS at a point $\tilde{\mathit{x}}$ with regard to the following: firstly, no discontinuous transitions in the results for the inherent uncertainty; secondly, the unbiased combination of ORSs in the case of equal Euclidean distances between two neighbouring data points; and thirdly, the direct adoption of an ORS at a data point. In this respect, its results are the most realistic among the three basic modes, and it relies only on the little information available in the ORSs.
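The linear combination mode can be sketched as a two-stage draw: first a neighbour is picked according to its combination factor, then a replication of its relative ORS is picked uniformly. The clipping of negative factors is an added safeguard for strongly unequal distances, not part of the described method.

```python
import numpy as np

rng = np.random.default_rng(3)

def draw_inherent_uncertainty(x_query, X, ors_rel, n_nb=3):
    """Sampled uncertainty approach, linear combination mode; ors_rel[i] is
    the relative ORS of data point i (replications divided by their mean)."""
    d = np.linalg.norm(X - x_query, axis=1)
    nearest = np.argsort(d)[:n_nb]
    if np.isclose(d[nearest[0]], 0.0):
        # At a data point, the combined sample is that point's ORS alone.
        return rng.choice(ors_rel[nearest[0]])
    d_nb = d[nearest]
    # Combination factors omega_i = 1 - (n_nb - 1) * d_i / sum_j d_j.
    omega = 1.0 - (n_nb - 1) * d_nb / d_nb.sum()
    omega = np.clip(omega, 0.0, None)  # safeguard against negative factors
    omega = omega / omega.sum()
    i = rng.choice(nearest, p=omega)   # pick a neighbour by its factor
    return rng.choice(ors_rel[i])      # then a replication uniformly
```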

A relative ORS may lead to unrealistic results of the inherent uncertainty if its mean results are close to zero. Hence, a limit ${\overline{y}}_{lim}^{c}={10}^{-4}$ for the mean results of ORSs is defined with regard to the evacuation scenarios but can be different in the case of other applications. This limit prevents unrealistically high results in the metamodel because each relative ORS with ${\overline{y}}_{i}^{c}<{\overline{y}}_{lim}^{c}$ is set to ${\mathit{Y}}_{i}^{c*}=\left\{1,\dots ,1\right\}$, and arbitrary points linked to these ORSs always result in the relative inherent uncertainty $\widehat{\epsilon}=1$.

In conclusion, the sampled uncertainty approach is suitable for a limited number of data points and replications and, therefore, meets the requirements of microscopic evacuation models. It derives the inherent uncertainty from the neighbours and separates the inherent uncertainty from the deterministic results of the RSM, and therewith it is similar to the approach of Salemi et al. [31]. However, there are also clear differences because the ORSs are directly used without the quantification of additional parameters for the variance or the fitting of a specific distribution type to the ORSs. Moreover, the sampled uncertainty approach flexibly adapts to the variety of different frequency distributions of the ORS.

The metamodel is applied to a risk analysis for the road tunnel depicted in Figure 1 with the variables provided in Table 2. The tunnel geometry is very common in Germany and the ventilation corresponds to German legislation; for example, the forced longitudinal ventilation is directed downhill in order to confine the smoke for the period of the evacuation. This case study is focused on the evacuation area with the one emergency exit depicted in the figure. This evacuation area is most quickly exposed to smoke; hence, including further evacuation areas with more emergency exits would have little effect on the outcome. More detailed background to the risk analysis was presented by Berchtold et al. [32,33]. The frequency of the fire in the scenario derives from the average daily traffic volume, the ratio of heavy goods vehicles and the tunnel length. Furthermore, the fire scenario itself depends on the variables of the maximum heat release rate $HR{R}_{max}$ and the time to maximum HRR ${t}_{HRR}$. Since the evacuation scenario adopts the smoke spread of the fire scenario, it also depends on these variables but additionally on the maximum pre-evacuation time ${t}_{pre}$ among all tunnel users and on the number of tunnel users ${N}_{tu}$. Moreover, the evacuation scenarios are distinguished between scenarios with a tunnel alarm (TA) and with the failure of the tunnel alarm (FA), defined with a Boolean variable. In the latter case, the tunnel users are alarmed individually by smoke. Considering this Boolean variable, two metamodels with different databases for TA and FA are used in this case study.

The databases for both metamodels are set up with the experimental design depicted in Figure 2, using the projection array-based design method described in Section 2.1. The scenarios are simulated with the fire model Fire Dynamics Simulator (FDS) [35], partly on the supercomputer JURECA [36], and the microscopic evacuation model FDS+Evac [37]. The experimental design is set up in three subsequent refinement steps, which are focused on the highest epistemic uncertainties at the outer region of the domain. The different RSMs in each refinement step as well as their results in the Monte-Carlo simulations are denoted with ${\overline{\mathit{Y}}}_{0}$, ${\overline{\mathit{Y}}}_{1}$, ${\overline{\mathit{Y}}}_{2}$, ${\overline{\mathit{Y}}}_{3}$, respectively, both for TA and FA. The number of fatalities ${N}_{fat}$ is determined within the simulations of FDS and FDS+Evac for each scenario, using the default incapacitation model of FDS+Evac, the fractional effective dose concept. Then, the fraction of fatalities is calculated by dividing the number of fatalities by the number of tunnel users in the scenario. This result is of interest for the metamodel and is assumed to be accurate in the present publication.

The metamodels adopt the results of both databases and determine the consequences of ${10}^{6}$ random scenarios in a Monte-Carlo simulation. Table 2 shows the probability distributions of the variables used to define the random scenarios. Due to the global objective of risk analysis, the metamodel is validated on the entire domain of the variables. For this reason, all variables are assigned uniform distributions to get an even spread of the random scenarios for the evaluation in Section 3.2. The risk analysis discussed in Section 3.3 is then based on more realistic models for the maximum HRR and the number of tunnel users. There, the results are expressed with the individual risk and the societal risk curve.
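The Monte-Carlo sampling for the evaluation step can be sketched with uniform distributions; the variable ranges and the alarm failure rate below are assumptions for illustration, not the values of Table 2.

```python
import numpy as np

rng = np.random.default_rng(4)

N_mc = 10_000  # reduced from the 10^6 scenarios of the study for this sketch

# Hypothetical uniform ranges for the four continuous variables:
scenarios = np.column_stack([
    rng.uniform(5.0, 200.0, N_mc),   # HRR_max in MW
    rng.uniform(60.0, 600.0, N_mc),  # t_HRR in s
    rng.uniform(30.0, 300.0, N_mc),  # t_pre in s
    rng.uniform(1.0, 100.0, N_mc),   # N_tu
])

# Boolean TA/FA variable; the 5% failure rate is an assumption.
tunnel_alarm = rng.random(N_mc) < 0.95

# Each random scenario would then be evaluated with the TA or the FA
# metamodel, depending on the Boolean variable.
```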

A validation is defined as the identification of ‘model form errors [uncertainties of the model] by comparison with physical observations’ [17] (p. 9) or the ‘process of determining the degree to which a calculation method is an accurate representation of the real world …’ [38] (p. 3). However, the validation of the metamodel is somewhat different from a common validation in fire safety engineering because the ‘physical observation’ or ‘real world’ are not experiments but the results of the complex model. For this reason, the metamodel is compared to the database, which is assumed to contain accurate results.

The validation of the RSM, the metamodel uncertainty and the inherent uncertainty are presented consecutively. It should be noted that MLS models cannot be expressed in an analytical equation since they are a set of local regressions in Equations (3) and (4) at multiple points.

The validation or ‘model adequacy checking’ [29] (p. 43ff) of the RSM is directed at the reproducibility of the response surface over its entire domain. Therefore, the convergence of the generalisation error and of the RSM is assessed. Firstly, the generalisation error, in other fields called the prediction error sum of squares [29] (p. 46), is the root of Equation (7) with ${N}_{dps}$ in the fraction. It converges from the second refinement step ${\overline{\mathit{Y}}}_{2}$ with values of about 0.03 (FA) and 0.02 (TA), which reflects the ‘inability’ [17] (p. 9) of the RSM to ‘accurately’ reproduce the results of the complex model. The authors acknowledge this inability and include the metamodel uncertainty in the results of the metamodel. Secondly, the evaluation of the RSM with a global objective is based on the results of Monte-Carlo simulations. Each Monte-Carlo simulation leads to a specific sample of results at arbitrary points, combining both TA and FA; hence, each sample of a RSM has a specific frequency distribution. The convergence between two RSMs is therefore shown by comparing their frequency distributions in a quantile plot in Figure 3. This figure shows the results of the Monte-Carlo simulations with the RSMs ${\overline{\mathit{Y}}}_{0}$, ${\overline{\mathit{Y}}}_{1}$, ${\overline{\mathit{Y}}}_{2}$, ${\overline{\mathit{Y}}}_{3}$ of all refinement steps. As a result, the RSMs ${\overline{\mathit{Y}}}_{2}$ and ${\overline{\mathit{Y}}}_{3}$ converge in accordance with the generalisation error. To sum up, subsequent refinements of the experimental design ${\mathit{X}}_{2}$ caused only small effects on the results of the RSM, and for this reason, the sequential refinement was stopped.
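The generalisation error described above can be estimated in the leave-one-out fashion underlying the prediction error sum of squares. The sketch below uses a simple inverse-distance local model as a stand-in for the MLS model; the function names and data are illustrative, not from the study.

```python
# Leave-one-out estimate of the generalisation error: the root of the
# prediction error sum of squares divided by N_dps. A simple inverse-
# distance weighted predictor stands in for the MLS model here.
import numpy as np

def loo_generalisation_error(X, y, predict):
    """predict(X_train, y_train, x_query) -> scalar prediction."""
    n = len(y)
    residuals = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i           # leave data point i out
        residuals[i] = y[i] - predict(X[mask], y[mask], X[i])
    return np.sqrt(np.mean(residuals**2))

def idw_predict(X_tr, y_tr, x_q, eps=1e-9):
    # Inverse-distance weighting as a minimal local surrogate
    w = 1.0 / (np.linalg.norm(X_tr - x_q, axis=1) + eps)
    return np.dot(w, y_tr) / w.sum()

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 2))        # illustrative experimental design
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(40)
err = loo_generalisation_error(X, y, idw_predict)
```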

Next, the differences between the RSMs derived from the NNI method ${\overline{\mathit{Y}}}^{NNI}$ and from MLS ${\overline{\mathit{Y}}}^{MLS}$, both using the database ${\overline{\mathit{Y}}}_{2}^{c}$, are discussed with regard to the generalisation error, local effects on the RSMs and global effects on the results of the Monte-Carlo simulations. The generalisation errors of MLS, with values of 0.03 (for TA and FA), are clearly lower than the generalisation errors of NNI, with 0.06 for TA and FA. Looking at the local effects, MLS and NNI can both reproduce the large horizontal response surface adjacent to high gradients, as illustrated in Figure 4. However, NNI causes discontinuities that are not expected in the true response surface. The global effects of these discontinuities can be seen in the results of the Monte-Carlo simulations with the frequency distributions shown in Figure 5. NNI causes apparent differences from MLS in the upper quantiles of the results, meaning that more points lead to high results. This difference originates from the elevated results in the local region of points with $HR{R}_{max}=200$ MW and ${t}_{HRR}=600$ s in Figure 4. Evidently, the choice of the response surface method can have clear effects on the results of a Monte-Carlo simulation.
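The contrast between MLS and NNI can be reproduced in one dimension: a local weighted linear fit (a minimal MLS, here with an assumed Gaussian weight function) varies smoothly, whereas nearest-neighbour interpolation is piecewise constant and jumps between data points.

```python
# Minimal 1-D contrast between moving least squares (local weighted linear
# fit, Gaussian weights assumed) and nearest-neighbour interpolation (NNI).
import numpy as np

def mls_1d(x_data, y_data, x_query, h=0.15):
    out = np.empty_like(x_query)
    for k, xq in enumerate(x_query):
        w = np.exp(-((x_data - xq) / h) ** 2)        # Gaussian weights
        A = np.vstack([np.ones_like(x_data), x_data - xq]).T
        # Weighted least squares: solve (A^T W A) beta = A^T W y
        beta = np.linalg.solve(A.T * w @ A, A.T @ (w * y_data))
        out[k] = beta[0]                             # local fit value at xq
    return out

def nni_1d(x_data, y_data, x_query):
    idx = np.abs(x_query[:, None] - x_data[None, :]).argmin(axis=1)
    return y_data[idx]

x = np.linspace(0, 1, 11)                 # data points
y = np.tanh(10 * (x - 0.5))               # steep gradient next to flat regions
xq = np.linspace(0, 1, 201)               # arbitrary query points
y_mls, y_nni = mls_1d(x, y, xq), nni_1d(x, y, xq)
# NNI jumps between neighbouring data points; MLS varies smoothly
assert np.max(np.abs(np.diff(y_nni))) > np.max(np.abs(np.diff(y_mls)))
```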

To sum up, MLS led to the convergence of the generalisation error and of the RSM after the second refinement step and showed advantages over NNI with regard to the generalisation error and the representation of the complex response surface on the entire domain. In conclusion, the RSM using MLS and the database ${\overline{\mathit{Y}}}_{2}^{c}$ constitutes the deterministic results of the complex model for the global objective.

For the validation of the metamodel, the predictive capability of the prediction interval is evaluated. Therefore, the complete sample validation [18] (p. 5) is used, in which the results of the RSM ${\overline{\mathit{Y}}}_{2}$ are compared to a validation set. The validation set consists of a high number of different points evenly spread over the entire domain. These points are produced with a RSM ${\overline{\mathit{Y}}}_{{\mathit{X}}_{val}}$. The experimental design ${\mathit{X}}_{val}$ of this RSM is a batch design of the PAD method with the same number of data points as the experimental design ${\mathit{X}}_{2}$, and it also contains data points at the outer vertices. However, it does not focus on a particular region, as the experimental design ${\mathit{X}}_{2}$ does, and therefore, it is based on different structures and substructures. Hence, the validation set of the batch design is considered to be independent of the RSM ${\overline{\mathit{Y}}}_{2}$.

For the validation, the confidence level $\alpha $ of the prediction interval of the RSM ${\overline{\mathit{Y}}}_{2}$ and an empirical confidence level $\widehat{\alpha}$ are juxtaposed with each other. The empirical confidence level is the probability p in Equation (11) that the validation set ${\overline{\mathit{Y}}}_{{\mathit{X}}_{val}}$ lies within the prediction interval $\Delta {\widehat{\mathit{Y}}}_{2}\left(\alpha \right)$ of the RSM ${\overline{\mathit{Y}}}_{2}$.

$$\widehat{\alpha}=p\left({\overline{\mathit{Y}}}_{2}-\Delta {\widehat{\mathit{Y}}}_{2}\left(\alpha \right)<{\overline{\mathit{Y}}}_{{\mathit{X}}_{val}}<{\overline{\mathit{Y}}}_{2}+\Delta {\widehat{\mathit{Y}}}_{2}\left(\alpha \right)\right)$$

The inaccuracy of the RSM is covered by the prediction interval if the empirical confidence level is similar to the confidence level $\alpha $. If the empirical confidence level is higher, the prediction interval is larger than the observed inaccuracy of the RSM, in other words, conservative.
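Equation (11) can be evaluated as a simple coverage count over the validation set; the numbers below are synthetic, not the study's validation data.

```python
# Empirical confidence level alpha_hat as in Equation (11): the fraction of
# validation results falling inside the prediction interval of the RSM.
# All numbers are synthetic placeholders.
import numpy as np

def empirical_confidence(y_rsm, delta, y_val):
    """Fraction of validation points inside [y_rsm - delta, y_rsm + delta]."""
    inside = (y_val > y_rsm - delta) & (y_val < y_rsm + delta)
    return inside.mean()

rng = np.random.default_rng(2)
y_rsm = rng.uniform(0, 1, 500)                    # RSM predictions
y_val = y_rsm + 0.02 * rng.standard_normal(500)   # synthetic validation set
delta = np.full(500, 0.05)                        # interval half-width
alpha_hat = empirical_confidence(y_rsm, delta, y_val)
# If alpha_hat exceeds the prescribed alpha, the interval is conservative
```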

As a result, the empirical confidence levels are clearly elevated in comparison to the prescribed confidence levels as shown in Table 3. Accordingly, the prediction interval is too conservative.

One reason for the conservative predictive capabilities is that the prediction interval is independent of the local results of the response surface. This characteristic leads to a drawback in a region with a plain response surface close to zero, as illustrated in Figure 4. In this region, there are two reasons why the metamodel uncertainty should be small. Firstly, the residuals are presumed to be small because the results of all data points in this region are close to zero. Secondly, the results of the RSM are expected to be close to zero because of the results of its neighbours. Nevertheless, the prediction variance in this region is elevated despite the results being known. However, adding additional data points with the expected result of zero in this region can reduce the empirical confidence levels and thereby improve the predictive capabilities.

The sampled uncertainty approach for the inherent uncertainty aims to reproduce the true inherent uncertainty of the complex model at any point. Looking at an ORS of one specific data point, the sampled uncertainty approach directly samples from this ORS and thus, produces a bootstrap sample, which represents the ORS in the case of many realisations. For this reason, the sampled uncertainty approach always represents the ORS directly at the data points.

Next, the results of the sampled uncertainty approach are compared to the ORSs at validation points ${\mathit{y}}_{val}^{c*}$. The sampled uncertainty approach uses the database ${\mathit{Y}}_{2}^{c}$, and the validation points are derived from the batch design described in Section 3.2.2. In total, 60 and 55 validation points are considered for TA and FA, respectively, excluding the outer vertices and validation points whose mean results are smaller than the limit ${\overline{y}}_{lim}^{c}$.

Different combination modes are discussed in Section 3.2.2. Hence, the linear combination mode is compared to the observed relative samples of the closest data point ${\mathit{y}}_{clo}^{c*}\in {\mathit{Y}}_{2}^{c}$ as well as to the uniform combination with ${N}_{nb}=20$ neighbours. For this comparison, the sampled uncertainty approach with the linear and the uniform combination modes produces the frequency distributions ${\widehat{\epsilon}}_{lin}$ and ${\widehat{\epsilon}}_{uni}$, drawn from the combined relative sample at each validation point. The differences between the frequency distributions are quantified with the Wasserstein metric.
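A hedged sketch of the sampled uncertainty approach with a linear combination mode: the relative ORSs of the neighbouring data points are pooled with weights decreasing in distance (the exact weighting of the paper may differ), and a discretised 1-D Wasserstein metric compares two samples.

```python
# Sketch of the sampled uncertainty approach: relative ORSs (replication
# results divided by their scenario mean) of neighbouring data points are
# pooled with distance-based weights; a quantile-based 1-D Wasserstein
# metric compares samples. Weighting and data are illustrative assumptions.
import numpy as np

def wasserstein_1d(a, b, n_grid=1000):
    """First Wasserstein distance between two empirical 1-D samples,
    approximated on a quantile grid."""
    q = np.linspace(0, 1, n_grid)
    return np.mean(np.abs(np.quantile(a, q) - np.quantile(b, q)))

def linear_combination(relative_orss, distances, rng, n_draws=2000):
    """Bootstrap draw: pick a neighbour with a weight decreasing linearly
    in its distance, then resample one value from its relative ORS."""
    w = np.maximum(distances.max() - distances, 1e-12)
    p = w / w.sum()
    idx = rng.choice(len(relative_orss), size=n_draws, p=p)
    return np.array([rng.choice(relative_orss[i]) for i in idx])

rng = np.random.default_rng(3)
# Relative ORSs of three neighbouring data points (20 replications each)
orss = [1 + 0.1 * rng.standard_normal(20) for _ in range(3)]
d = np.array([0.1, 0.4, 0.8])          # distances to the arbitrary point
sample = linear_combination(orss, d, rng)
w_d = wasserstein_1d(sample, orss[0])  # distance to the closest point's ORS
```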

Table 4 shows the medians of the Wasserstein metric among all validation points for the different modes. Accordingly, the linear combination leads mostly to the smallest median values and moreover, it improves the measures in 70 and 80 percent of the validation points as exemplified in Figure 6a for an improvement and in Figure 6b for a worsening. Hence, the linear combination best represents the ORSs of the validation points.

As another result, the Wasserstein metric and the root mean squared error seem to correlate with the distance between the validation points and their closest data points. Hence, further refinement steps could reduce the differences between the frequency distributions. However, a Monte-Carlo simulation of arbitrary points from the RSM ${\overline{\mathit{Y}}}_{2}$ with the linear combination mode leads to similar results as when the closest neighbour combination mode is used. For this reason, further refinement of the database will not improve the results of the risk analysis. Consequently, the sampled uncertainty approach with the linear combination mode sufficiently reproduces the true inherent uncertainty of the complex model at a point.

The case study in Section 3.1 is used to exemplify the effects of the metamodel uncertainty and of the inherent uncertainty on the results of a risk analysis for road tunnels. During the validation, the variables maximum heat release rate $HR{R}_{max}$ and number of tunnel users ${N}_{tu}$ were assigned uniform distributions to achieve an equal spread of points on the entire domain. Now, for the risk analysis, these variables are based on the more realistic probability distributions in Table 2. For this reason, the random scenarios consider smaller maximum values, both for the maximum HRR and for the number of tunnel users, which leads to smaller consequences in the random scenarios compared to the results presented in Section 3.2.

The following discussion of the effects is based on the metamodels summarised in Table 5 and on the results in Figure 7, which illustrates the effects on the consequences; the effects on the risk measures, namely the individual risk ${\mathcal{R}}_{ind}$ and the societal risk curve, are shown in Table 5 and Figure 8.

The consequences of the random scenarios determined with the metamodel have two particular characteristics relevant to the calculation of the risk measures. First, the consequence of each random scenario is related to the number of fatalities and is therefore bound by the lower limit of zero. Second, each consequence is multiplied, i.e., weighted, with the scenario’s frequency, according to the definition of risk. It follows that random scenarios with small consequences have stronger weights in the risk measures, whereas random scenarios with high consequences are likely to have reduced weights because of their rare occurrence. The following discussion of the effects therefore has to be seen with respect to the lower limit as well as the weighting; it is generalised at the end of this section.
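The weighting described above can be made concrete with the two risk measures; the formulas follow the common definitions of the individual risk and the F-N (societal risk) curve, and all numbers are illustrative, not from the case study.

```python
# Weighting of scenario consequences with their frequencies, as used for the
# risk measures (schematic: consequence = number of fatalities, frequency f
# per scenario per year; values are illustrative only).
import numpy as np

def individual_risk(n_fat, freq, n_exposed):
    """Expected fatalities per year divided by the exposed population."""
    return np.sum(n_fat * freq) / n_exposed

def societal_risk_curve(n_fat, freq):
    """F-N curve: cumulative frequency F(N) of scenarios with >= N fatalities."""
    order = np.argsort(n_fat)[::-1]     # sort scenarios by consequence, desc.
    n_sorted = n_fat[order]
    f_cum = np.cumsum(freq[order])      # F(N) accumulates from high N down
    return n_sorted, f_cum

n_fat = np.array([0.0, 1.0, 5.0, 20.0])
freq = np.array([1e-2, 1e-3, 1e-4, 1e-5])   # occurrences per year
r_ind = individual_risk(n_fat, freq, n_exposed=100)
N, F = societal_risk_curve(n_fat, freq)
```

Note how the rare high-consequence scenario (20 fatalities at $10^{-5}$/year) contributes less to `r_ind` than the frequent small one, mirroring the weighting argument above.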

The effects of the metamodel uncertainty on the consequences and on the risk measures are two-fold. Firstly, random scenarios with small consequences ${\overline{y}}_{i}\approx 0$ in the RSM lead to high metamodel uncertainties as a result of the drawback of the prediction interval method. The additive integration in Equation (1), together with the lower limit, leads to clearly elevated consequences in the metamodel ${\widehat{y}}_{i}^{m}>0$. Secondly, the metamodel uncertainty has small effects on random scenarios with high consequences in the RSM, leading to similar frequency distributions between the metamodel ${\widehat{\mathit{Y}}}^{m}$ and the RSM $\overline{\mathit{Y}}$ for the upper quantiles in Figure 7. Looking at the risk measures, the effects of the metamodel uncertainty on the consequences are amplified by the weighting with the frequency of the random scenarios. Firstly, the metamodel uncertainty leads to a clear rise in the individual risk as well as in the lower part of the societal risk curve. This effect originates from the random scenarios with small consequences and is further amplified by the drawback of the prediction interval method discussed in Section 3.2.2. Nonetheless, the effect on the individual risk and on the societal risk curve in Figure 9 remains considerable even if the drawback is reduced by adding additional data points with the expected result. Secondly, the metamodel uncertainty has little effect on the upper part of the societal risk curve, which is governed by random scenarios with high consequences.

The inherent uncertainty in the metamodel ${\widehat{\mathit{Y}}}^{i}$ causes a larger dispersion in the consequences of the random scenarios in comparison to the RSM $\overline{\mathit{Y}}$, as depicted in Figure 7. The effects have to be discussed with regard to the multiplicative integration of the relative inherent uncertainty in Equation (1). More precisely, the inherent uncertainty has only slight effects at random scenarios with small consequences ${\overline{y}}_{i}\approx 0$ in the RSM; hence, it has little influence on the individual risk and on the left part of the societal risk curve. In contrast, the relative inherent uncertainty has clear effects at random scenarios with high consequences in the RSM. This results in large effects on the maximum consequences among all random scenarios and thus also on the right part of the societal risk curve. However, the weighting of the random scenarios with their frequencies, especially the small frequencies of scenarios with high consequences, reduces this effect of the inherent uncertainty on both risk measures.
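How the two uncertainties enter the result can be sketched under the assumption, taken from the description of Equation (1) above, that the metamodel uncertainty is additive, the relative inherent uncertainty is multiplicative, and the consequence is truncated at zero; all distributions below are illustrative, not the study's.

```python
# Hedged sketch of how the uncertainties enter the metamodel result:
# additive metamodel uncertainty, multiplicative relative inherent
# uncertainty, and a lower limit of zero (all distributions illustrative).
import numpy as np

rng = np.random.default_rng(4)
n = 10**5
y_bar = np.full(n, 0.05)                  # deterministic RSM result (small)
delta = 0.03 * rng.standard_normal(n)     # additive metamodel uncertainty
eps = rng.choice([0.5, 1.0, 1.5], n)      # sampled relative inherent unc.

y_hat = np.maximum(y_bar * eps + delta, 0.0)
# For small y_bar, truncating the zero-mean additive uncertainty at zero
# introduces a positive bias: the mean of y_hat exceeds y_bar, which mirrors
# the elevated consequences discussed for scenarios with y_bar ~ 0.
```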

The effects discussed above are influenced by the lower limit of the consequences and the weighting of the random scenarios. However, the metamodel might also be used for purposes other than risk analysis for life safety, where the results of the metamodel are neither bound to a lower limit nor weighted. First, if the results of the metamodel are not bound to a lower limit, the metamodel uncertainty has no effect on a measure such as the individual risk because the mean of this normally distributed uncertainty is zero. The effect of the inherent uncertainty on such a measure depends on the skewness of the ORSs; e.g., positive skewness leads to an increase and might additionally be augmented in the case of a lower limit. The societal risk curve, or a similar measure, is shifted to the right by both uncertainties, independent of the lower limit. Second, looking at the results of the metamodel without the weighting of the consequences, the strong effect of the metamodel uncertainty on a measure such as the individual risk would be reduced because the random scenarios with small consequences but high frequencies, where the metamodel uncertainty is increased, no longer receive a stronger weight. Conversely, the effect of the inherent uncertainty, which is higher at scenarios with high consequences and small frequencies, would clearly increase without the weighting. Summing up, the metamodel uncertainty and the inherent uncertainty are expected to have different but clear effects on measures such as the individual risk or the risk curve without the lower limit of the consequences or the weighting of the scenarios.

In this publication, a metamodel on the basis of complex simulation models was developed, validated and applied to a risk analysis of a road tunnel. The metamodel consists of three parts: the response surface model based on the projection array-based design method and moving least squares, the metamodel uncertainty and the inherent uncertainty of the complex model. Its validation showed good agreement with the results of the complex models. In particular, the moving least squares model shows a high accuracy on the entire complex response surfaces, which is confirmed by the comparison with the nearest neighbour interpolation. Accordingly, the use of moving least squares instead of the nearest neighbour interpolation can improve the accuracy of a risk analysis. The metamodel uncertainty and the inherent uncertainty have clear effects on the results of the risk analysis and are especially important where the database is small or where the complex model has large aleatory uncertainties.

The original sampled uncertainty approach uses all simulated scenario replications and describes the aleatory uncertainty without the assumption of parameters or specific types of probability distributions. For this reason, it is particularly suitable for a low number of replications and for varying frequency distributions among the results of the different scenarios.

The methods of the generic metamodel are suitable for the evaluation of a wide parameter domain of complex response surfaces. The separation of the deterministic response surface model and the aleatory uncertainty of the complex model makes the metamodel applicable to deterministic and stochastic complex models as well as to experiments in engineering. Thus, it is useful for expensive simulations or experiments where the results are required on a wide domain of parameters.

The methodology and the results presented in this publication are subject to limitations. The results of the risk analysis are limited to the specific scenario described in the case study. Furthermore, response surface methods other than moving least squares, such as first- and second-order regression methods, may be more efficient when the focus is on a small range of parameters, as it often is in optimisation problems. Moreover, the accuracy of the metamodel uncertainty may be limited due to the use of the prediction interval method. Improving this issue by adding additional points or by applying other response surface methods, such as the Gaussian process model, still constitutes an open point for future research.

Conceptualisation, F.B., S.T. and C.K.; data curation, F.B. and L.A.; formal analysis, F.B.; funding acquisition, L.A., S.T. and C.K.; investigation, F.B.; methodology, F.B., S.T. and C.K.; project administration, S.T. and C.K.; resources, L.A., S.T. and C.K.; software, F.B. and L.A.; supervision, L.A. and S.T.; validation, F.B., L.A. and S.T.; visualisation, F.B.; writing—original draft, F.B.; writing—review and editing, L.A., S.T. and C.K. All authors have read and agreed to the published version of the manuscript.

The authors gratefully acknowledge the computing time granted (project jjsc27) by the JARA-HPC Vergabegremium and VSR commission on the supercomputer JURECA [36] at Forschungszentrum Jülich. This research was funded by the German Ministry for Education and Research (BMBF), contract No. 13N13266 (project ORPHEUS). BMBF did not influence this research and publication in any aspects.

This study involved neither humans nor animals.


The data presented in this study are available on request from the corresponding author.

The authors declare no conflict of interest.

The following abbreviations are used in this manuscript:

Abbreviation | Description |
---|---|
FA | failure of tunnel alarm |
HRR | heat release rate |
MLS | moving least squares |
NNI | nearest neighbour interpolation |
ORS | observed random sample |
RSM | response surface model |
TA | tunnel alarm |

- International Organization for Standardization. Risk Management—Principles and Guidelines; ISO 31000:2009(E): ICS Notation 03.100.01; Beuth Verlag GmbH: Berlin, Germany, 2009.
- International Organization for Standardization. ISO 16732-1: Fire Safety Engineering—Fire Risk Assessment—Part 1: General. 2012. Available online: https://www.iso.org/standard/54789.html (accessed on 15 August 2019).
- Albrecht, C. Quantifying life safety Part I: Scenario-based quantification. Fire Saf. J. **2014**, 64, 87–94.
- Albrecht, C.; Hosser, D. A Response Surface Methodology for Probabilistic Life Safety Analysis using Advanced Fire Engineering Tools. Fire Saf. Sci. **2011**, 10, 1059–1072.
- De Sanctis, G.; Fischer, K.; Kohler, J.; Faber, M.H.; Fontana, M. Combining engineering and data-driven approaches: Development of a generic fire risk model facilitating calibration. Fire Saf. J. **2014**, 70, 23–33.
- De Sanctis, G.; Fontana, M. Risk-based optimisation of fire safety egress provisions based on the LQI acceptance criterion. Reliab. Eng. Syst. Saf. **2016**, 152, 339–350.
- Van Weyenberge, B.; Criel, P.; Deckers, X.; Caspeele, R.; Merci, B. Response surface modelling in quantitative risk analysis for life safety in case of fire. Fire Saf. J. **2017**, 91, 1007–1015.
- Van Weyenberge, B.; Deckers, X.; Caspeele, R.; Merci, B. Development of a Risk Assessment Method for Life Safety in Case of Fire in Rail Tunnels. Fire Technol. **2016**, 52, 1465–1479.
- Di Nardo, M.; Gallo, M.; Murino, T.; Sontillo, L.C. System Dynamics Simulation for Fire and Explosion Risk Analysis in Home Environment. Int. Rev. Model. Simul. **2017**, 10, 43–54.
- Schröder, B. Multivariate Methods for Life Safety Analysis in Case of Fire; Schriften des Forschungszentrums Jülich IAS Series; Forschungszentrum, Zentralbibliothek: Jülich, Germany, 2016; Volume 34.
- Anderson, A.; Ezekoye, O.A. Quantifying Generalized Residential Fire Risk Using Ensemble Fire Models with Survey and Physical Data. Fire Technol. **2018**, 43, 127.
- Yamamoto, K.; Sawaguchi, Y.; Nishiki, S. Simulation of Tunnel Fire for Evacuation Safety Assessment. Safety **2018**, 4, 1–12.
- Schubert, M.; Høj, N.P.; Köhler, J.; Faber, M.H. Development of a Best Practice Methodology for Risk Assessment in Road Tunnels: Research Project ASTRA 2009/001. 2011. Available online: https://trimis.ec.europa.eu/sites/default/files/project/documents/20150625_094802_21792_priloha_radek_1071_meteorology_risk_tunnel.pdf (accessed on 15 August 2019).
- Bundesanstalt für Straßenwesen (BASt). Bewertung der Sicherheit von Strassentunneln; Wirtschaftsverlag NW, Verlag für Neue Wissenschaft GmbH: Bremerhaven, Germany, 2009.
- ILF Consulting Engineers. Erweiterung und Vertiefung des österr. Tunnelmodells—TuRisMo 2: Arbeitsbericht zum Arbeitsausschuss Tunnel-Sicherheit. Available online: https://www.tunnelriskmodel.at/wp-content/uploads/2015/10/ILF_2015_Erweiterung_und_Vertiefung_des_oesterr_Tunnelmodells_REPORT.pdf (accessed on 15 August 2019).
- Queipo, N.V.; Haftka, R.T.; Shyy, W.; Goel, T.; Vaidyanathan, R.; Kevin Tucker, P. Surrogate-based analysis and optimization. Prog. Aerosp. Sci. **2005**, 41, 1–28.
- Nannapaneni, S.; Mahadevan, S. Reliability analysis under epistemic uncertainty. Reliab. Eng. Syst. Saf. **2016**, 155, 9–20.
- Kim, C.; Choi, K.K. Reliability-Based Design Optimization Using Response Surface Method With Prediction Interval Estimation. J. Mech. Des. **2008**, 130, 121401:1–121401:12.
- Kiureghian, A.D.; Ditlevsen, O. Aleatory or epistemic? Does it matter? Struct. Saf. **2009**, 31, 105–112.
- Santner, T.J.; Williams, B.J.; Notz, W.I. The Design and Analysis of Computer Experiments; Springer: New York, NY, USA, 2003.
- Caliendo, C.; Ciambelli, P.; Guglielmo, M.L.D.; Meo, M.G.; Russo, P. Simulation of People Evacuation in the Event of a Road Tunnel Fire. Procedia Soc. Behav. Sci. **2012**, 53, 178–188.
- Ronchi, E.; Reneke, P.A.; Peacock, R.D. A Method for the Analysis of Behavioural Uncertainty in Evacuation Modelling. Fire Technol. **2014**, 50, 1545–1571.
- Lovreglio, R.; Ronchi, E.; Borri, D. The validation of evacuation simulation models through the analysis of behavioural uncertainty. Reliab. Eng. Syst. Saf. **2014**, 131, 166–174.
- Marrel, A.; Ioss, B.; Da Veiga, S.; Ribatet, M. Global sensitivity analysis of stochastic computer models with joint metamodels. Comput. Stat. **2012**, 22, 833–847.
- Moutoussamy, V.; Nany, S.; Pauwels, B. Emulators for stochastic simulation codes. ESAIM Proc. Surv. **2015**, 48, 116–155.
- Loeppky, J.L.; Moore, L.M.; Williams, B.J. Projection array based designs for computer experiments. J. Stat. Plan. Inference **2012**, 142, 1493–1505.
- Lancaster, P.; Salkauskas, K. Surfaces Generated by Moving Least Squares Methods. Math. Comput. **1981**, 37, 141.
- McKay, M.D.; Beckman, R.J.; Conover, W.J. A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code. Technometrics **1979**, 21, 239–245.
- Myers, R.H.; Montgomery, D.C. Response Surface Methodology: Process and Product Optimization Using Designed Experiments, 2nd ed.; John Wiley & Sons: New York, NY, USA, 2002.
- Most, T.; Bucher, C. New concepts for moving least squares: An interpolating non-singular weighting function and weighted nodal least squares. Eng. Anal. Bound. Elem. **2008**, 32, 461–470.
- Salemi, P.; Nelson, B.L.; Staum, J. Moving Least Squares Regression for High-Dimensional Stochastic Simulation Metamodeling. ACM Trans. Model. Comput. Simul. **2016**, 26, 1–25.
- Berchtold, F.; Thöns, S.; Knaust, C.; Rogge, A. Risk analysis in road tunnels—Most important risk indicators. In Seventh International Symposium on Tunnel Safety and Security (ISTSS); SP Technical Research Institute of Sweden: Boras, Sweden, 2016; pp. 637–648.
- Berchtold, F.; Knaust, C.; Arnold, L.; Thöns, S.; Rogge, A. Risk analysis for road tunnels—A metamodel to efficiently integrate complex fire scenarios. In Proceedings of the Eighth International Symposium on Tunnel Safety and Security (ISTSS): RISE Report 2018, Boras, Sweden, 14–16 March 2018; pp. 349–360.
- Centre d’Études des Tunnels. Guide to Road Tunnel Safety Documentation: Booklet 4: Specific Hazard Investigations. 2003. Available online: http://www.cetu.developpement-durable.gouv.fr/IMG/pdf/Fascicule-4-english_cle059211.pdf (accessed on 15 August 2019).
- National Institute of Standards and Technology. Fire Dynamics Simulator (Version 6.3.1): User’s Guide: NIST Special Publication 1019. 2016. Available online: https://pages.nist.gov/fds-smv/manuals.html (accessed on 15 August 2019).
- Krause, D.; Thörnig, P. JURECA: Modular supercomputer at Jülich Supercomputing Centre. J. Large-Scale Res. Facil. JLSRF **2018**, 4, 132.
- Korhonen, T.; Hostikka, S. Fire Dynamics Simulator with Evacuation: FDS+Evac 2.2.1: Technical Reference and User’s Guide. 2009. Available online: https://www.vttresearch.com/sites/default/files/pdf/workingpapers/2009/W119.pdf (accessed on 15 August 2019).
- International Organization for Standardization. Fire Safety Engineering—Procedures and Requirements for Verification and Validation of Fire Methods—Part 1: General; ISO 16730-1: ICS Notation 13.220.01; ISO: Geneva, Switzerland, 2014.

Symbol | Description |
---|---|
${N}_{dps}$ | number of data points |
$\mathit{x}$ | data point |
$\tilde{\mathit{x}}$ | arbitrary point in the domain |
$\mathit{X}$ | experimental design |
$\overline{y}$, $\overline{\mathit{Y}}$ | deterministic result of the RSM |
$\widehat{y}$, $\widehat{\mathit{Y}}$ | result of the metamodel considering the metamodel uncertainty and the inherent uncertainty |
${\widehat{y}}^{i}$, ${\widehat{\mathit{Y}}}^{i}$ | result of the metamodel considering the inherent uncertainty |
${\widehat{y}}^{m}$, ${\widehat{\mathit{Y}}}^{m}$ | result of the metamodel considering the metamodel uncertainty |
${\overline{y}}^{c}$, ${\overline{\mathit{Y}}}^{c}$ | deterministic result of the complex model at one data point (database $\mathit{X}$) |
${\mathit{Y}}^{c}$ | ORS, vector of results of all replications of a data point |
${\mathit{Y}}^{c*}$ | relative ORS, divided by the mean result $\overline{y}$ of the ORS |
$\delta \widehat{y}$ | metamodel uncertainty |
$\widehat{\epsilon}$ | relative inherent uncertainty |
$\Delta \widehat{y}$, $\Delta \widehat{\mathit{Y}}$ | prediction interval |

Variable | Notation | Model |
---|---|---|
maximum HRR/MW | $HR{R}_{max}$ | $\mathcal{D}\left(\left\{5,30,50,100\right\}\right)=\left\{0.9,0.09,0.009,0.001\right\}$ [14] |
time to maximum HRR/s | ${t}_{HRR}$ | $\mathcal{U}\left(600,1200\right)$ |
maximum pre-evacuation time/s | ${t}_{pre}$ | $\mathcal{U}\left(100,300\right)$ |
number of tunnel users | ${N}_{tu}$ | analytical model [34] |
average daily traffic volume/day | | $\mathcal{U}\left(5000,\mathrm{40,000}\right)$ |
ratio of heavy good vehicles | | $\mathcal{U}\left(0.05,0.45\right)$ |
tunnel length/km | | $\mathcal{U}\left(1,3\right)$ |
maximum HRR (uniform)/MW | $HR{R}_{max}$ | $\mathcal{U}\left(25,200\right)$ |
number of tunnel users (uniform) | ${N}_{tu}$ | $\mathcal{U}\left(30,180\right)$ |

$\mathit{\alpha}$ | $\widehat{\mathit{\alpha}}\left(\mathit{TA},\mathit{FA}\right)$ |
---|---|
0.75 | 0.91/0.96 |
0.90 | 0.97/0.99 |
0.95 | 0.98/1.00 |

 | ${\widehat{\mathit{\epsilon}}}_{\mathit{lin}}$ | ${\mathit{y}}_{\mathit{clo}}^{\mathit{c}*}$ | ${\widehat{\mathit{\epsilon}}}_{\mathit{uni}}$ |
---|---|---|---|
${w}_{d}\left(TA\right)$ | 0.10 | 0.10 | 0.15 |
${w}_{d}\left(FA\right)$ | 0.032 | 0.041 | 0.069 |

Metamodel | $\overline{\mathit{Y}}$ | ${\widehat{\mathit{Y}}}^{\mathit{m}}$ | ${\widehat{\mathit{Y}}}^{\mathit{i}}$ | $\widehat{\mathit{Y}}$ |
---|---|---|---|---|
metamodel uncertainty | no | yes | no | yes |
inherent uncertainty | no | no | yes | yes |
$\frac{{\mathcal{R}}_{ind}}{{\mathcal{R}}_{ind}\left(\overline{\mathit{Y}}\right)}$ | 1 | $8.3$ | $1.0$ | $8.3$ |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).