Begell House Inc.
International Journal for Uncertainty Quantification
IJUQ
2152-5080
5
1
2015
MINIMAL SPARSE SAMPLING FOR FOURIER-POLYNOMIAL CHAOS IN ACOUSTIC SCATTERING
1-20
Roger M.
Oba
Acoustics Division, Naval Research Laboratory, Washington DC 20375, USA
Single-frequency acoustic scattering from an uncertain surface (with sinusoidal components) admits an efficient Fourier-polynomial chaos (FPC) expansion of the acoustic field. The expansion coefficients are computed non-intrusively, i.e., by functional sampling from existing acoustic models. The structure of the acoustic decomposition permits sparse selection of FPC orders within the framework of the Smolyak construction. The main result shows a minimal, sparse sampling required to exactly reconstruct FPC expansions of Smolyak form. To this end, this paper defines two concepts: exactly discretizable orthonormal function systems (EDO), and nested systems created by decimation or "fledging". An EDO generalizes the Nyquist-Shannon sampling conditions (exact recovery of "band-limited" functions given sufficient sampling) to multidimensional FPC expansions; EDO criteria replace the concept of polynomially exact quadrature. Fledging parallels the idea of sub-sampling for sub-bands, from a higher to a lower level. The FPC Smolyak construction is an EDO fledged from a full-grid EDO. An EDO results exactly when the sampled FPC expansion can be inverted to find its coefficients. EDO fledging requires that the lower level (1) has grid points and expansion orders nested in the higher level, and (2) derives its map from samples to coefficients from the higher-level map. The theory begins with a single-dimension fledged EDO, since a tensor product of fledged EDOs yields a fledged tensor EDO. A sequence of nested EDO levels fledges recursively from the largest EDO. The Smolyak construction uses telescoping sums of tensor products up to a maximum level to develop nested EDO systems for sparse grids and orders. The Smolyak construction transform is exactly the inverse of the weighted evaluation map, and that inverse has a condition number that expresses the numerical limitations of the Smolyak construction.
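A one-dimensional sketch of the Nyquist-Shannon-style exact recovery that an EDO generalizes (illustrative only, not the paper's construction; the basis size and random coefficients are arbitrary): a Fourier expansion with orders -K..K sampled at 2K + 1 equispaced points gives an evaluation map that is exactly invertible, with condition number one.

```python
import numpy as np

# Toy 1-D "exactly discretizable" system: Fourier modes k = -K..K sampled at
# N = 2K + 1 equispaced points. The evaluation map V (coefficients -> samples)
# is a scaled unitary matrix, so inverting it recovers coefficients exactly.
K = 3
N = 2 * K + 1
x = 2 * np.pi * np.arange(N) / N           # equispaced sample grid
k = np.arange(-K, K + 1)                   # retained Fourier orders
V = np.exp(1j * np.outer(x, k))            # evaluation map

rng = np.random.default_rng(0)
c_true = rng.standard_normal(N) + 1j * rng.standard_normal(N)
samples = V @ c_true                       # "measure" the field at the grid
c_rec = np.linalg.solve(V, samples)        # invert the evaluation map
cond = np.linalg.cond(V)                   # conditioning of the inverse map
```

Because V / sqrt(N) is unitary, the recovery is exact up to round-off and the condition number is one; the abstract's point is that the Smolyak sparse construction preserves this exact invertibility while its condition number quantifies the numerical cost.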
A MIXED UNCERTAINTY QUANTIFICATION APPROACH USING EVIDENCE THEORY AND STOCHASTIC EXPANSIONS
21-48
Harsheel
Shah
Aerospace Simulations Laboratory, Missouri University of Science & Technology, Rolla, Missouri 65409, USA
Serhat
Hosder
Aerospace Simulations Laboratory, Department of Mechanical and Aerospace Engineering, 290B Toomey Hall, Missouri University of Science and Technology, Rolla, Missouri 65409-0500, USA
Tyler
Winter
M4 Engineering Inc., Long Beach, California, 90807, USA
Uncertainty quantification (UQ) is the process of quantitatively characterizing and propagating input uncertainties to a response measure of interest in experimental and computational models. The input uncertainties in computational models can be either aleatory, i.e., irreducible inherent variations, or epistemic, i.e., reducible variability that arises from lack of knowledge. Previously, it has been shown that the Dempster-Shafer theory of evidence (DSTE) can be applied to model epistemic uncertainty when uncertainty information comes from multiple sources. The objective of this paper is to model and propagate mixed (aleatory and epistemic) uncertainty using DSTE. Specifically, the aleatory variables are modeled as Dempster-Shafer structures by discretizing them into sets of intervals according to their respective probability distributions. To avoid the excessive computational cost associated with large-scale applications, a stochastic response surface based on point-collocation non-intrusive polynomial chaos has been implemented as the surrogate model for the response. A convergence study on the minimum number of subintervals required for an accurate representation of aleatory uncertainty is presented. The mixed UQ approach is demonstrated on a numerical example and on a high-fidelity computational fluid dynamics study of transonic flow over the RAE 2822 airfoil.
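A minimal sketch (stdlib only; the truncation range and number of intervals are arbitrary choices, not the paper's) of turning an aleatory normal variable into a Dempster-Shafer structure: each focal interval carries the probability mass of its sub-interval, and belief/plausibility bracket the probability of an event.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical discretization: a standard-normal input truncated to [-3, 3]
# and cut into 6 focal intervals, masses renormalized to sum to one.
edges = [-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
total = norm_cdf(edges[-1]) - norm_cdf(edges[0])
focal = [((a, b), (norm_cdf(b) - norm_cdf(a)) / total)
         for a, b in zip(edges[:-1], edges[1:])]

def belief(lo, hi):
    """Mass of focal intervals entirely inside [lo, hi]."""
    return sum(m for (a, b), m in focal if lo <= a and b <= hi)

def plausibility(lo, hi):
    """Mass of focal intervals that intersect (lo, hi)."""
    return sum(m for (a, b), m in focal if b > lo and a < hi)

bel = belief(-1.0, 1.0)
pl = plausibility(-1.5, 1.5)
```

In the mixed-UQ setting described above, the response over each focal interval would be bounded via the polynomial chaos surrogate rather than evaluated directly, producing belief/plausibility bounds on the output.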
A GRADIENT-BASED SAMPLING APPROACH FOR DIMENSION REDUCTION OF PARTIAL DIFFERENTIAL EQUATIONS WITH STOCHASTIC COEFFICIENTS
49-72
Miroslav
Stoyanov
Applied Mathematics Group, Computer Science and Mathematics Division, Oak Ridge National Laboratory, 1 Bethel Valley Road, P.O. Box 2008, Oak Ridge, Tennessee 37831-6164, USA
Clayton G.
Webster
Department of Computational and Applied Mathematics, Oak Ridge National Laboratory, One Bethel Valley Road, P.O. Box 2008, MS-6164, Oak Ridge, Tennessee 37831-6164, USA
We develop a projection-based dimension reduction approach for partial differential equations with high-dimensional stochastic coefficients. This technique uses samples of the gradient of the quantity of interest (QoI) to partition the uncertainty domain into "active" and "passive" subspaces. The passive subspace is characterized by near-constant behavior of the quantity of interest, while the active subspace contains the most important dynamics of the stochastic system. We also present a procedure for projecting the model onto the low-dimensional active subspace, which enables the resulting approximation to be solved using conventional techniques. Unlike the classical Karhunen-Loève expansion, this approach has the advantage that it is applicable to fully nonlinear problems and does not require any assumptions on the correlation between the random inputs. This work also provides a rigorous convergence analysis of the quantity of interest and demonstrates at least linear convergence with respect to the number of samples; it also shows that the convergence rate is independent of the number of input random variables. Thus, applied to a reducible problem, our approach can approximate the statistics of the QoI to within a desired error tolerance at a cost that is orders of magnitude lower than that of standard Monte Carlo sampling. Finally, several numerical examples demonstrate the feasibility of our approach and are used to illustrate the theoretical results. In particular, we validate our convergence estimates through the application of this approach to a reactor criticality problem with a large number of random cross-section parameters.
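A toy illustration of gradient-based subspace detection (a sketch under simplifying assumptions, not the paper's algorithm: the QoI, dimension, and sample count are invented): when f(x) = sin(w · x) varies only along the direction w, the gradient outer-product matrix estimated from samples is rank one and its dominant eigenvector recovers the one-dimensional active subspace.

```python
import numpy as np

# Estimate C = E[grad f grad f^T] from gradient samples; for a ridge function
# f(x) = sin(w . x), every gradient is a multiple of w, so C is rank one and
# its top eigenvector spans the active subspace.
rng = np.random.default_rng(1)
d = 10                                     # nominal input dimension
w = rng.standard_normal(d)
w /= np.linalg.norm(w)                     # true active direction

xs = rng.standard_normal((200, d))         # samples of the uncertain inputs
grads = np.cos(xs @ w)[:, None] * w        # grad f(x) = cos(w . x) * w
C = grads.T @ grads / len(xs)              # Monte Carlo estimate of E[g g^T]

eigvals, eigvecs = np.linalg.eigh(C)       # ascending eigenvalues
active_dir = eigvecs[:, -1]                # dominant eigenvector
alignment = abs(active_dir @ w)            # ~1 if the direction is recovered
```

The near-zero trailing eigenvalues identify the passive subspace, on which the model can be frozen at nominal values before solving in the reduced coordinates.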
BAYESIAN INFERENCE FOR INVERSE PROBLEMS OCCURRING IN UNCERTAINTY ANALYSIS
73-98
Shuai
Fu
University Paris-Sud 11, France; EDF, R&D, France
Gilles
Celeux
Inria Saclay-Île-de-France, France
Nicolas
Bousquet
EDF, R&D, France
Mathieu
Couplet
EDF, R&D, France
The inverse problem considered here is the estimation of the distribution of an unobserved random variable X, linked through a time-consuming physical model H to some noisy observed data Y. Bayesian inference is used to account for prior expert knowledge on X in a small-sample-size setting. A Metropolis-Hastings-within-Gibbs algorithm computes the posterior distribution of the parameters of the distribution of X through a data augmentation process. Since running H is quite expensive, this inference is carried out with a kriging emulator interpolating H from a numerical design of experiments (DOE). This approach involves several sources of error of different natures, and in this article we take care to measure and reduce their possible impact. In particular, we propose to use the so-called DAC criterion to assess, in the same exercise, the relevance of both the DOE and the prior distribution. After describing the calculation of this criterion for the emulator at hand, its behavior is illustrated on numerical experiments.
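A minimal kriging-style emulator of the kind described above (zero-mean Gaussian-process interpolation with a squared-exponential kernel and a small nugget); the model H, the design, and the kernel lengthscale are all placeholders standing in for the expensive physical model and its DOE.

```python
import numpy as np

def H(x):
    return np.sin(3.0 * x)                 # stand-in for the costly model

def kern(a, b, ell=0.5):
    """Squared-exponential covariance between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

xd = np.linspace(0.0, 2.0, 9)              # numerical design of experiments
yd = H(xd)                                 # expensive evaluations at the DOE
Kdd = kern(xd, xd) + 1e-8 * np.eye(len(xd))  # nugget for numerical stability
alpha = np.linalg.solve(Kdd, yd)           # kriging weights

def emulate(x):
    """Posterior-mean prediction interpolating H from the design."""
    return kern(x, xd) @ alpha

xt = np.linspace(0.0, 2.0, 41)
err = np.max(np.abs(emulate(xt) - H(xt)))  # emulator error on a fine grid
```

Within the MCMC described in the abstract, calls to H inside the Metropolis-Hastings-within-Gibbs steps would be replaced by `emulate`, making the emulator error one of the error sources that the DAC criterion is meant to monitor.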