Begell House
International Journal for Uncertainty Quantification
2152-5080
Volume 5, Issue 4, 2015
PROBABILISTIC QUANTIFICATION OF HAZARDS: A METHODOLOGY USING SMALL ENSEMBLES OF PHYSICS-BASED SIMULATIONS AND STATISTICAL SURROGATES
This paper presents a novel approach to assessing the hazard threat to a locale due to a large volcanic avalanche. The methodology combines: (i) mathematical modeling of volcanic mass flows; (ii) field data of avalanche frequency, volume, and runout; (iii) large-scale numerical simulations of flow events; (iv) use of statistical methods to minimize computational costs, and to capture unlikely events; (v) calculation of the probability of a catastrophic flow event over the next T years at a location of interest; and (vi) innovative computational methodology to implement these methods. This unified presentation collects elements that have been separately developed, and incorporates new contributions to the process. The field data and numerical simulations used here are subject to uncertainty from many sources, uncertainties that must be properly accounted for in assessing the hazard. The methodology presented here will be demonstrated with data from the Soufriere Hills Volcano on the island of Montserrat, where there is a relatively complete record of volcanic mass flows from the past 15 years. This methodology can be transferred to other volcanic sites with similar characteristics and where sparse historical data have prevented such high-quality analysis. More generally, the core of this methodology is widely applicable and can be used for other hazard scenarios, such as floods or ash plumes.
M. J.
Bayarri
Departament d'Estadística i Investigació Operativa, Universitat de València, 46100 Burjassot, València, Spain
J. O.
Berger
Department of Statistical Science, Duke University, Durham, North Carolina 27708-0251, USA; Department of Statistics, King Abdulaziz University, Jeddah, Saudi Arabia
E. S.
Calder
Earth and Planetary Science, The University of Edinburgh, Edinburgh, UK
Abani K.
Patra
Department of Mechanical and Aerospace Engineering, University at Buffalo-SUNY, Buffalo, New York 14260, USA
E. Bruce
Pitman
Department of Mathematics, University at Buffalo, Buffalo, New York 14260, USA
E. T.
Spiller
Department of Mathematics, Statistics, and Computer Science, Marquette University, Milwaukee, Wisconsin 53201, USA
Robert L.
Wolpert
Department of Statistical Science, Duke University, Durham, North Carolina 27708-0251, USA
297-325
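The probability calculation in step (v) can be sketched with a toy Monte Carlo. Everything below is an illustrative assumption, not a value from the paper: a Poisson rate of large flows, lognormal flow volumes, and a simple threshold-plus-noise function standing in for the statistical surrogate that decides whether an emulated flow reaches the locale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs (illustrative values only):
lam = 2.0        # mean number of large flows per year (Poisson rate)
T = 10.0         # horizon in years
n_mc = 20_000    # Monte Carlo replicates

def surrogate_reaches_site(volume):
    # Stand-in for a statistical surrogate of the flow simulator:
    # True when the emulated runout reaches the location of interest.
    # Here: a noisy threshold on log10-volume (purely illustrative).
    return np.log10(volume) + rng.normal(0.0, 0.3, size=volume.shape) > 7.0

hits = 0
for _ in range(n_mc):
    n_events = rng.poisson(lam * T)            # number of flows in T years
    if n_events == 0:
        continue
    volumes = rng.lognormal(mean=13.0, sigma=2.0, size=n_events)  # m^3
    if surrogate_reaches_site(volumes).any():  # any flow reaching the site
        hits += 1

p_catastrophe = hits / n_mc
print(f"P(at least one catastrophic flow in {T:.0f} years) ~ {p_catastrophe:.3f}")
```

In the paper's methodology the surrogate is fit to a small ensemble of expensive simulator runs and the event-frequency and volume distributions come from the field record; the sketch only shows how those pieces combine into an exceedance probability.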
UNCERTAINTY QUANTIFICATION OF THE GEM CHALLENGE MAGNETIC RECONNECTION PROBLEM USING THE MULTILEVEL MONTE CARLO METHOD
Plasma modelers often change the ion-to-electron mass ratio and the ratio of the speed of light to the Alfvén speed to decrease computational cost. Changing these parameters may affect the simulation outcome and introduce uncertainty into the results. This work aims to quantify the uncertainty that varying the ion-to-electron mass ratio, the speed of light to Alfvén speed ratio, and the initial magnetic flux perturbation induces in the reconnected flux, in order to provide a confidence limit. In this study, the multilevel Monte Carlo (MMC) method is employed to estimate the mean and variance, and the results are compared with the standard Monte Carlo (MC) and probabilistic collocation (PC) methods. The plasma model used here is the two-fluid plasma model, in which ions and electrons are treated as two separate fluids. Numerical simulations demonstrate the effectiveness of the MMC method when applied to quasi-neutral ion cyclotron waves and the Geospace Environment Modeling (GEM) magnetic reconnection challenge problems. The mean reconnected flux with error bars provides a reconnection flux variation envelope, which can help numerical modelers evaluate whether the reconnected flux of a given plasma model lies inside the envelope. The MMC estimates of the mean and variance are comparable to those of the MC method but are obtained at a much lower computational cost.
Eder M.
Sousa
Aerospace and Energetics Research Program, University of Washington, Seattle, Washington 98195, USA
Guang
Lin
Computational Science & Mathematics Division, Pacific Northwest National Laboratory, Richland, Washington 99352; Department of Mathematics, School of Mechanical Engineering, Purdue University, West Lafayette, Indiana, USA
Uri
Shumlak
Aerospace and Energetics Research Program, University of Washington, Seattle, Washington 98195, USA
327-339
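The multilevel idea, spending most samples on cheap coarse runs and correcting with a few expensive fine runs, can be illustrated with a toy one-parameter "solver". The sin/cos response and its step-size bias below are hypothetical stand-ins, not the two-fluid plasma model.

```python
import numpy as np

rng = np.random.default_rng(1)

def solver(z, level):
    # Hypothetical level-l "simulation": the exact response sin(z)
    # plus an O(h) discretization bias with step size h = 2**-level.
    h = 2.0 ** -level
    return np.sin(z) + h * np.cos(z)

def mlmc_estimate(samples_per_level):
    total = 0.0
    for level, n in enumerate(samples_per_level):
        z = rng.normal(size=n)            # random inputs for this level
        if level == 0:
            correction = solver(z, 0)     # coarsest level: plain Monte Carlo
        else:
            # Same samples through the fine and coarse solvers, so the
            # correction has small variance and needs few samples.
            correction = solver(z, level) - solver(z, level - 1)
        total += correction.mean()        # telescoping sum over levels
    return total

# Geometrically decreasing sample counts across five levels: the
# expensive fine levels get the fewest samples.
est = mlmc_estimate([8000, 4000, 2000, 1000, 500])
print(est)   # estimates E[sin(Z) + 2**-4 cos(Z)], Z ~ N(0, 1)
```

The telescoping structure is the same one the abstract applies to the reconnected flux: the finest-level expectation is recovered at a fraction of the cost of running plain MC at the finest resolution.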
SURROGATE PREPOSTERIOR ANALYSES FOR PREDICTING AND ENHANCING IDENTIFIABILITY IN MODEL CALIBRATION
In physics-based engineering modeling and uncertainty quantification, distinguishing the effects of the two main sources of uncertainty, calibration parameter uncertainty and model discrepancy, is challenging. Previous research has shown that identifiability, quantified by the posterior covariance of the calibration parameters, can sometimes be improved by experimentally measuring multiple responses of the system that share a mutual dependence on a common set of calibration parameters. In this paper, we address how to select the most appropriate subset of responses to measure experimentally so as to best enhance identifiability. We use a preposterior analysis approach that, prior to conducting physical experiments but after conducting computer simulations, predicts the degree of identifiability that will result from measuring different subsets of responses experimentally. It predicts identifiability via the preposterior covariance from a modular Bayesian Monte Carlo analysis of a multi-response spatial random process (SRP) model. Furthermore, to handle the computational burden of the preposterior analysis, we propose a surrogate preposterior analysis based on the Fisher information of the calibration parameters. The proposed methods are applied to a simply supported beam example to select the two out of six responses that best improve identifiability. The estimated preposterior covariance is compared with the actual posterior covariance to demonstrate the effectiveness of the methods.
Zhen
Jiang
Department of Mechanical Engineering, Northwestern University, 2145 Sheridan Road, Evanston, Illinois 60208, USA
Daniel W.
Apley
Department of Industrial Engineering and Management Sciences, Northwestern University, 2145 Sheridan Road, Evanston, Illinois 60208, USA
Wei
Chen
Department of Mechanical Engineering, Northwestern University, 2145 Sheridan Road, Evanston, Illinois 60208, USA
341-359
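The subset-selection step can be sketched with a linear-Gaussian stand-in, which is much simpler than the paper's SRP model: each candidate response depends on the calibration parameters through a sensitivity row, the Fisher information of a subset is the noise-scaled sum of the corresponding outer products, and the predicted posterior covariance is the inverse of prior precision plus information. All sensitivities, noise levels, and the determinant criterion below are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

# Hypothetical sensitivities of 6 candidate responses to 2 calibration
# parameters, with Gaussian measurement noise of variance sig2.
J = np.array([[1.0, 0.1],
              [0.9, 0.2],
              [0.1, 1.0],
              [0.2, 0.8],
              [0.5, 0.5],
              [0.4, 0.6]])
sig2 = 0.05
prior_cov = np.eye(2)

def preposterior_cov(subset):
    # Fisher information of the subset, then predicted posterior
    # covariance = (prior precision + information)^-1.
    info = sum(np.outer(J[i], J[i]) for i in subset) / sig2
    return np.linalg.inv(np.linalg.inv(prior_cov) + info)

# Pick the 2-response subset minimizing a scalar identifiability
# criterion (here: determinant of the predicted posterior covariance).
best = min(combinations(range(6), 2),
           key=lambda s: np.linalg.det(preposterior_cov(s)))
print("best pair of responses:", best)
```

With these made-up sensitivities the winner is the pair whose rows are both large and nearly orthogonal, which is exactly the intuition behind using Fisher information as a cheap surrogate for the full preposterior covariance.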
A NEW GIBBS SAMPLING BASED BAYESIAN MODEL UPDATING APPROACH USING MODAL DATA FROM MULTIPLE SETUPS
This paper presents a new Gibbs sampling based approach for Bayesian model updating of a linear dynamic system based on modal data (natural frequencies and partial mode shapes of some of the dominant modes) obtained from a structure using multiple setups. Modal data from multiple setups pose a problem as mode shapes identified from multiple setups are normalized individually and the scaling factors to form the overall mode shape are not known a priori. For comprehensive quantification of the uncertainties, the proposed approach allows for an efficient update of the probability distribution of the model parameters, overall mode shapes, scaling factors, and prediction error variances. The proposed approach does not require solving the eigenvalue problem of any structural model or matching of model and experimental modes, and is robust to the dimension of the problem. The effectiveness and efficiency of the proposed method are illustrated by simulated numerical examples.
Sahil
Bansal
Indian Institute of Technology Roorkee
361-374
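The Gibbs scheme itself, cycling through blocks of unknowns and drawing each from its full conditional given the rest, can be shown on a deliberately tiny conjugate model: unknown mean and noise precision of Gaussian data. This is a generic illustration of the sampling pattern, not the paper's model; the proposed sampler cycles analogously over structural model parameters, overall mode shapes, scaling factors, and prediction-error variances.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data with true mean 3.0 and noise s.d. 0.5.
y = rng.normal(3.0, 0.5, size=200)
n = y.size

mu, tau = 0.0, 1.0          # initial values for the two blocks
mu_samples = []
for it in range(2000):
    # Block 1: mu | tau, y ~ Normal (flat prior on mu)
    mu = rng.normal(y.mean(), 1.0 / np.sqrt(n * tau))
    # Block 2: tau | mu, y ~ Gamma (prior flat on log tau, shape n/2)
    tau = rng.gamma(shape=n / 2.0, scale=2.0 / np.sum((y - mu) ** 2))
    if it >= 500:            # discard burn-in
        mu_samples.append(mu)

print("posterior mean of mu ~", np.mean(mu_samples))
```

Each draw conditions on the current values of the other block, so no joint posterior ever needs to be evaluated or maximized; the modal-data sampler exploits the same property to avoid solving any structural eigenvalue problem.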
LOW-COST MULTI-DIMENSIONAL GAUSSIAN PROCESS WITH APPLICATION TO UNCERTAINTY QUANTIFICATION
Computer codes simulating physical systems often have responses that consist of a set of distinct outputs that evolve in space and time and depend on many uncertain input parameters. The high-dimensional nature of these computer codes makes Gaussian process (GP)-based emulation infeasible, even for a small number of simulation runs. In this paper we develop a covariance function for the GP that explicitly treats the covariance among distinct output variables, input variables, the spatial domain, and the temporal domain, and that allows for Bayesian inference at low computational cost. We base our analysis on a modified version of the linear model of coregionalization (LMC). The proper use of the conditional representation of the multivariate output, together with a separable model for the different domains, leads to a Kronecker product representation of the covariance matrix. Moreover, we introduce a nugget to the model, which leads to better statistical properties (regarding predictive accuracy) of the multivariate GP without adding to the overall computational complexity. Finally, the prior specification of the LMC parameters allows for an efficient Markov chain Monte Carlo (MCMC) algorithm. Our approach is demonstrated on the Kraichnan-Orszag problem and on flow through randomly heterogeneous porous media.
Bledar A.
Konomi
Department of Mathematical Sciences, University of Cincinnati, Cincinnati, Ohio 45221, USA
Guang
Lin
Computational Science & Mathematics Division, Pacific Northwest National Laboratory, Richland, Washington 99352; Department of Mathematics, School of Mechanical Engineering, Purdue University, West Lafayette, Indiana, USA
375-392