Begell House Inc.
International Journal for Uncertainty Quantification (IJUQ)
ISSN 2152-5080, Volume 9, Issue 4, 2019
A MULTILEVEL APPROACH FOR SEQUENTIAL INFERENCE ON PARTIALLY OBSERVED DETERMINISTIC SYSTEMS
321-330
Ajay
Jasra
Department of Statistics & Applied Probability, National University of Singapore, Singapore
Kody J.H.
Law
School of Mathematics, University of Manchester, Manchester, M139PL, UK
Yi
Xu
Department of Statistics & Applied Probability, National University of Singapore, Singapore
In this article we consider sequential inference on partially observed deterministic systems. Examples include inference on the expected position of a dynamical system with random initial position, or Bayesian static parameter inference for unobserved partial differential equations (PDEs), both associated with sequentially observed real data. Such statistical models are found in a wide variety of real applications, including weather prediction. In many practical scenarios one must discretize the system, but even under such discretization it is not possible to compute the expected value (integral) required for inference. Such quantities are then approximated by Monte Carlo methods, and the cost to achieve a given level of error in this context can be substantially reduced by using multilevel Monte Carlo (MLMC). MLMC relies upon exact sampling of the model of interest, which is not always possible. We devise a sequential Monte Carlo (SMC) method, which does not require exact sampling, to leverage the MLMC approach. We prove that for some models with n data points, to achieve a mean square error (MSE) in estimation of O(ε²) (for some 0 < ε < 1), our MLSMC method has a cost of O(n²ε⁻²), versus O(n²ε⁻³) for an SMC method that simply approximates the most precise discretization. This is illustrated on two numerical examples.
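The multilevel mechanism the abstract refers to can be sketched with the plain MLMC telescoping estimator E[P_L] = E[P_0] + Σ_l E[P_l - P_{l-1}], where fine and coarse discretizations are coupled through shared randomness. The sketch below is a minimal illustration, not the authors' MLSMC sampler; the test ODE dx/dt = -x, the level schedule, and the sample allocation are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_endpoint(x0, n_steps, T=1.0):
    """Euler discretization of the deterministic ODE dx/dt = -x
    with random initial condition x0; returns x(T)."""
    h = T / n_steps
    x = x0
    for _ in range(n_steps):
        x = x + h * (-x)
    return x

def mlmc_estimate(levels, samples_per_level):
    """Telescoping MLMC estimator of E[x(T)] over x0 ~ N(1, 0.1^2):
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with level l using
    2^l Euler steps."""
    est = 0.0
    for l, N in zip(levels, samples_per_level):
        x0 = rng.normal(1.0, 0.1, size=N)
        fine = euler_endpoint(x0, 2 ** l)
        if l == levels[0]:
            est += fine.mean()
        else:
            # Same x0 on both levels: the coupling that shrinks the
            # variance of the correction terms
            coarse = euler_endpoint(x0, 2 ** (l - 1))
            est += (fine - coarse).mean()
    return est

# Exact answer is E[x0] * exp(-1) ≈ 0.3679; most samples sit on cheap
# coarse levels, few on the expensive fine ones
approx = mlmc_estimate(levels=[2, 3, 4, 5, 6],
                       samples_per_level=[4000, 2000, 1000, 500, 250])
print(approx)
```

The decreasing sample allocation across levels is the source of the cost savings: the coupled correction terms have small variance, so the finest (most expensive) discretization needs only a few samples.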
PIG PROCESS: JOINT MODELING OF POINT AND INTEGRAL RESPONSES IN COMPUTER EXPERIMENTS
331-349
Heng
Su
Wells Fargo Bank, Charlotte, NC 28202, USA
Rui
Tuo
Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX 77843, USA
C. F. Jeff
Wu
The H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
Motivated by work on building energy simulation, this paper develops a new class of models called point-integral Gaussian (PIG) processes. The covariance structures of these models are obtained, and their parameter estimation and prediction are derived. In the case of axis-parallel rectangular regions, closed-form expressions for the covariance functions are obtained. Two simulated examples are used to demonstrate the use of the PIG process models and show their superior performance over models fitted without the integral information.
SURROGATE MODELING OF STOCHASTIC FUNCTIONS – APPLICATION TO COMPUTATIONAL ELECTROMAGNETIC DOSIMETRY
351-363
Soumaya
Azzi
LTCI, Télécom ParisTech, Chair C2M, 46 Rue Barrault, 75013 Paris, France
Yuanyuan
Huang
LTCI, Télécom ParisTech, Chair C2M, 46 Rue Barrault, 75013 Paris, France
Bruno
Sudret
ETH Zurich, Institute of Structural Engineering, Chair of Risk, Safety and Uncertainty Quantification, Stefano-Franscini-Platz 5, CH-8093 Zurich, Switzerland
Joe
Wiart
LTCI, Télécom ParisTech, Chair C2M, 46 Rue Barrault, 75013 Paris, France
This paper is dedicated to the surrogate modeling of a particular type of computational model called stochastic simulators, which inherently contain some source of randomness. In this case the output of the simulator at a given point is a probability density function. In this paper, the stochastic simulator is represented as a stochastic process and the surrogate model is built using the Karhunen–Loève expansion. In the first approach, the covariance of the stochastic process is surrogated using polynomial chaos expansions; in the second, the eigenvectors are interpolated. The performance of the method is illustrated on a toy example and then on an electromagnetic dosimetry example, and metrics are provided to measure the accuracy of the surrogate.
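The Karhunen–Loève construction underlying both approaches can be sketched as follows: estimate the covariance of the stochastic process from repeated simulator runs, eigendecompose it, and truncate. This is a minimal stand-in, not the dosimetry model; the toy simulator and the 99% truncation threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "stochastic simulator": each call returns one random realization
# over a discretized input domain (a hypothetical stand-in for the
# electromagnetic dosimetry code)
t = np.linspace(0.0, 1.0, 50)
def simulator():
    a, b = rng.normal(1.0, 0.2), rng.normal(0.0, 0.5)
    return a * np.sin(2 * np.pi * t) + b

# Empirical mean and covariance from repeated runs
Y = np.array([simulator() for _ in range(500)])      # (runs, grid)
mu = Y.mean(axis=0)
C = np.cov(Y, rowvar=False)

# Karhunen-Loève: eigendecomposition of the covariance, truncated to the
# modes carrying 99% of the variance
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]                   # descending order
m = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99) + 1
print(m)

# Surrogate: new realizations are drawn from the truncated expansion
#   Y(t, ω) ≈ mu(t) + sum_k sqrt(lam_k) ξ_k(ω) φ_k(t)
xi = rng.standard_normal(m)
sample = mu + phi[:, :m] @ (np.sqrt(lam[:m]) * xi)
```

Because the toy process has exactly two independent random sources, the truncation recovers two dominant modes; in the paper the retained modes (or the covariance itself) are then surrogated over the simulator's inputs.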
EMBEDDED MODEL ERROR REPRESENTATION FOR BAYESIAN MODEL CALIBRATION
365-394
Khachik
Sargsyan
Sandia National Laboratories, Livermore, CA, USA
Xun
Huan
Sandia National Laboratories, 7011 East Ave, MS 9051, Livermore, CA 94550, USA; Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
Habib N.
Najm
Sandia National Laboratories, P.O. Box 969, MS 9051, Livermore, CA 94551, USA
Model error estimation remains one of the key challenges in uncertainty quantification and predictive science. For computational models of complex physical systems, model error, also known as structural error or model inadequacy, is often the largest contributor to the overall predictive uncertainty. This work builds on a recently developed framework of embedded, internal model correction in order to represent and quantify structural errors, together with model parameters, within a Bayesian inference context. We focus specifically on a polynomial chaos representation with additive modification of existing model parameters, enabling a nonintrusive procedure for efficient approximate likelihood construction, model error estimation, and disambiguation of the contributions of model and data errors to predictive uncertainty. The framework is demonstrated on several synthetic examples, as well as on a chemical ignition problem.
WASSERSTEIN METRIC-DRIVEN BAYESIAN INVERSION WITH APPLICATIONS TO SIGNAL PROCESSING
395-414
Mohammad
Motamed
Department of Mathematics and Statistics, University of New Mexico, Albuquerque, New Mexico
Daniel
Appelö
Department of Applied Mathematics, University of Colorado Boulder, Boulder, Colorado
We present a Bayesian framework based on a new exponential likelihood function driven by the quadratic Wasserstein metric. Compared to conventional Bayesian models based on Gaussian likelihood functions driven by the least-squares norm (L2 norm), the new framework features several advantages. First, the new framework does not rely on the likelihood of the measurement noise and hence can treat complicated noise structures such as combined additive and multiplicative noise. Second, unlike the normal likelihood function, the Wasserstein-based exponential likelihood function does not usually generate multiple local extrema. As a result, the new framework features better convergence to correct posteriors when a Markov chain Monte Carlo sampling algorithm is employed. Third, in the particular case of signal processing problems, although a normal likelihood function measures only the amplitude differences between the observed and simulated signals, the new likelihood function can capture both amplitude and phase differences. We apply the new framework to a class of signal processing problems, that is, the inverse uncertainty quantification of waveforms, and demonstrate its advantages compared to Bayesian models with normal likelihood functions.