Begell House Inc.
International Journal for Uncertainty Quantification (IJUQ), ISSN 2152-5080
Volume 4, Issue 3, 2014
UNCERTAINTY QUANTIFICATION IN DYNAMIC SIMULATIONS OF LARGE-SCALE POWER SYSTEM MODELS USING THE HIGH-ORDER PROBABILISTIC COLLOCATION METHOD ON SPARSE GRIDS
185-204
Guang Lin
Computational Science & Mathematics Division, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Department of Mathematics and School of Mechanical Engineering, Purdue University, West Lafayette, Indiana, USA
Marcelo Elizondo
Pacific Northwest National Laboratory, 902 Battelle Boulevard, Richland, Washington 99352, USA
Shuai Lu
Pacific Northwest National Laboratory, 902 Battelle Boulevard, Richland, Washington 99352, USA
Xiaoliang Wan
Department of Mathematics, Louisiana State University, Baton Rouge, Louisiana 70803, USA
This paper employs a probabilistic collocation method (PCM) to quantify uncertainties in dynamic simulations of power systems. The approach was tested on a single-machine infinite-bus system and on the Western Electricity Coordinating Council (WECC) system in western North America, which comprises more than 15,000 buses. In contrast to the classic Monte Carlo (MC) method, the PCM applies the Smolyak algorithm to reduce the number of simulations that must be performed, so the computational cost is greatly reduced. A comparison with the MC method was made on both the single-machine system and the WECC system. The simulation results show that with the PCM only a small number of sparse grid points need to be sampled, even for systems with a relatively large number of uncertain parameters.
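The Smolyak construction behind such a PCM can be sketched in a few lines. The snippet below is a minimal illustration of the sparse-grid idea only, not the paper's implementation: the "model" is a toy polynomial standing in for a power system simulation, the inputs are assumed uniform on [0, 1]^d, and the 1-D Gauss-Legendre rules are hardcoded up to three points.

```python
# Minimal Smolyak sparse-grid quadrature sketch (uniform inputs on [0,1]^d).
# Illustrative stand-in for a PCM; the integrand below is a toy function,
# not a power system simulation.
import itertools
from math import comb, sqrt

# Hardcoded 1-D Gauss-Legendre rules on [0, 1], n = 1, 2, 3 points.
RULES = {
    1: ([0.5], [1.0]),
    2: ([0.5 - 0.5 / sqrt(3.0), 0.5 + 0.5 / sqrt(3.0)], [0.5, 0.5]),
    3: ([0.5 - 0.5 * sqrt(0.6), 0.5, 0.5 + 0.5 * sqrt(0.6)],
        [5.0 / 18.0, 8.0 / 18.0, 5.0 / 18.0]),
}

def smolyak_rule(d, q):
    """Smolyak sparse rule A(q, d): combine tensor products of small 1-D
    rules whose level sums |i| lie in [q - d + 1, q], with the classic
    combination-technique coefficients."""
    nodes, weights = [], []
    for i in itertools.product(range(1, q - d + 2), repeat=d):
        s = sum(i)
        if s < q - d + 1 or s > q:
            continue
        coeff = (-1.0) ** (q - s) * comb(d - 1, q - s)
        rules_1d = [RULES[n] for n in i]
        for idx in itertools.product(*(range(len(r[0])) for r in rules_1d)):
            pt = tuple(rules_1d[j][0][k] for j, k in enumerate(idx))
            wt = coeff
            for j, k in enumerate(idx):
                wt *= rules_1d[j][1][k]
            nodes.append(pt)
            weights.append(wt)
    return nodes, weights

# A level-2 sparse rule in 2 uncertain parameters needs only a handful of
# model runs, yet is exact for low-order polynomial responses.
nodes, weights = smolyak_rule(d=2, q=3)
mean_xy = sum(w * x * y for (x, y), w in zip(nodes, weights))  # E[xy] = 0.25
```

Each sparse-grid node corresponds to one deterministic simulation, so the savings over an MC ensemble come entirely from the small number of nodes.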
IMPROVEMENTS TO GRADIENT-ENHANCED KRIGING USING A BAYESIAN INTERPRETATION
205-223
Jouke H.S. de Baar
TU Delft, Kluyverweg 1 (10.18), 2629 HS Delft, The Netherlands
Richard P. Dwight
Aerodynamics Group, Faculty of Aerospace, TU Delft, P.O. Box 5058, 2600 GB Delft, The Netherlands
Hester Bijl
TU Delft, Kluyverweg 1 (10.18), 2629 HS Delft, The Netherlands
Cokriging is a flexible tool for constructing surrogate models of the outputs of computer models. It can readily incorporate gradient information, in which form it is called gradient-enhanced Kriging (GEK), and it promises accurate surrogate models in more than 10 dimensions with a moderate number of sample locations for sufficiently smooth responses. However, GEK suffers from several problems: poor robustness and ill-conditioning of the surface. Furthermore, it is unclear how to account for errors in gradients, which are typically larger than errors in values. In this work we derive GEK using Bayes' theorem, which gives a useful interpretation of the method and allows construction of a gradient-error contribution. The Bayesian interpretation suggests the "observation error" as a proxy for errors in the output of the computer model. From this starting point we derive analytic estimates of the robustness of the method, which can easily be used to compute upper bounds on the correlation range and lower bounds on the observation error. We thus see that, by including the observation error, the treatment of errors and robustness go hand in hand. The resulting GEK method is applied to uncertainty quantification for two test problems.
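A one-dimensional sketch of a GEK predictor with an explicit observation-error term is given below. The squared-exponential kernel, the sin test function, and all parameter values are illustrative assumptions, not the paper's setup; the point is only that values and gradients enter one joint Gaussian covariance, and that the observation-error terms on the diagonal regularize the system.

```python
# 1-D gradient-enhanced Kriging (GEK) sketch with observation-error terms.
# Kernel, data, and hyperparameters are illustrative, not the paper's setup.
import numpy as np

theta = 1.0                    # correlation range (assumed)
sig2_y, sig2_g = 1e-8, 1e-6    # observation errors on values and gradients

def k(a, b):
    """Squared-exponential prior covariance."""
    return np.exp(-((a - b) ** 2) / (2.0 * theta ** 2))

x = np.array([0.0, 1.5, 3.0])  # sample locations
y = np.sin(x)                  # observed values
g = np.cos(x)                  # observed gradients

# Joint covariance of [values; gradients] under the kernel prior.
R = x[:, None] - x[None, :]
K = k(x[:, None], x[None, :])
Kyg = (R / theta**2) * K                       # Cov(y_i, g_j) = dk/dx_j
Kgy = (-R / theta**2) * K                      # Cov(g_i, y_j) = dk/dx_i
Kgg = (1.0 / theta**2 - R**2 / theta**4) * K   # Cov(g_i, g_j)
C = np.vstack([np.hstack([K, Kyg]), np.hstack([Kgy, Kgg])])
C += np.diag(np.r_[np.full(3, sig2_y), np.full(3, sig2_g)])  # observation error

alpha = np.linalg.solve(C, np.r_[y, g])        # zero prior mean assumed

def predict(xs):
    """Posterior mean at xs from value and gradient cross-covariances."""
    ky = k(xs, x)
    kg = ((xs - x) / theta**2) * k(xs, x)      # Cov(y(xs), g_j)
    return np.r_[ky, kg] @ alpha

err_at_sample = abs(predict(0.0) - np.sin(0.0))   # reproduces the data
err_between = abs(predict(0.75) - np.sin(0.75))   # interpolates between samples
```

With `sig2_y` and `sig2_g` set to zero the joint matrix can become nearly singular as samples cluster; the diagonal error terms are exactly the regularizing "observation error" the abstract refers to.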
A MULTI-FIDELITY STOCHASTIC COLLOCATION METHOD FOR PARABOLIC PARTIAL DIFFERENTIAL EQUATIONS WITH RANDOM INPUT DATA
225-242
Maziar Raissi
Department of Mathematical Sciences, George Mason University, 4400 University Drive, MS: 3F2, Planetary Hall, Fairfax, Virginia 22030, USA
Padmanabhan Seshaiyer
Department of Mathematical Sciences, George Mason University, 4400 University Drive, MS: 3F2, Planetary Hall, Fairfax, Virginia 22030, USA
Over the last few years there have been dramatic advances in the area of uncertainty quantification. In particular, we have seen a surge of interest in developing efficient, scalable, stable, and convergent computational methods for solving differential equations with random inputs. Stochastic collocation (SC) methods, which inherit both the ease of implementation of sampling methods like Monte Carlo and, to a great extent, the robustness of nonsampling ones like stochastic Galerkin, have proved extremely useful in dealing with differential equations driven by random inputs. In this work we propose a novel enhancement to stochastic collocation methods using deterministic model reduction techniques. Linear parabolic partial differential equations with random forcing terms are analyzed. The input data are assumed to be represented by a finite number of random variables. A rigorous convergence analysis, supported by numerical results, shows that the proposed technique is not only reliable and robust but also efficient.
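The collocation principle (decoupled deterministic solves combined by quadrature) can be sketched on a 1-D heat equation with a random forcing amplitude. The equation, the backward-Euler discretization, and the single uniform random variable are illustrative assumptions; the paper's model-reduction enhancement is not reproduced here.

```python
# Stochastic collocation sketch: 1-D heat equation u_t = u_xx + a(xi) f(x)
# with random amplitude a(xi) = (1 + xi)^2, xi ~ U(-1, 1). Each collocation
# node is an independent deterministic solve; statistics come from quadrature.
# Illustrative only -- the paper's model-reduction step is omitted.
import numpy as np

n, dt, T = 50, 1e-3, 0.1
xgrid = np.linspace(0.0, 1.0, n + 2)[1:-1]           # interior nodes
h = xgrid[1] - xgrid[0]
L = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
     + np.diag(np.full(n - 1, 1.0), 1)) / h**2       # Dirichlet Laplacian
f = np.sin(np.pi * xgrid)                            # deterministic forcing shape
A = np.eye(n) - dt * L                               # backward Euler operator

def solve(amplitude):
    """Deterministic backward-Euler solve for one forcing realization."""
    u = np.zeros(n)
    for _ in range(int(round(T / dt))):
        u = np.linalg.solve(A, u + dt * amplitude * f)
    return u

# 3-point Gauss-Legendre collocation in xi (exact for the quadratic a(xi)).
xi, w = np.polynomial.legendre.leggauss(3)
w = w / 2.0                                          # weights for U(-1, 1)
mean_u = sum(wi * solve((1.0 + xii) ** 2) for wi, xii in zip(w, xi))

# By linearity in the amplitude, E[u] = E[(1 + xi)^2] u_base = (4/3) u_base,
# which the 3-node collocation estimate reproduces to machine precision.
u_base = solve(1.0)
```

Each of the three solves is an unmodified deterministic time-stepping run, which is exactly the "ease of implementation" property the abstract attributes to SC; a model-reduction step would replace `solve` with a cheaper reduced-order surrogate at most of the nodes.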
QUANTIFICATION OF UNCERTAINTY FROM HIGH-DIMENSIONAL SCATTERED DATA VIA POLYNOMIAL APPROXIMATION
243-271
Lionel Mathelin
LIMSI-CNRS, BP 133, 91403 Orsay, France; Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139, USA
This paper discusses a methodology for determining a functional representation of a random process from a collection of scattered pointwise samples. The present work focuses specifically on random quantities lying in a high-dimensional stochastic space in the context of a limited amount of information. The proposed approach involves a procedure for selecting an approximation basis and evaluating the associated coefficients. The selection of the approximation basis relies on the a priori choice of the high-dimensional model representation format, combined with a modified least angle regression technique. The resulting basis then provides the structure for the actual approximation basis, possibly using different functions, more parsimonious and nonlinear in its coefficients. To evaluate the coefficients, both an alternating least squares and an alternating weighted total least squares method are employed. Examples are provided for the approximation of a random variable in a high-dimensional space as well as the estimation of a random field. Stochastic dimensions up to 100 are considered, with as little information as about three samples per dimension, and the robustness of the approximation is demonstrated with respect to noise in the data set. The computational cost of the solution method is shown to scale only linearly with the cardinality of the a priori basis and to exhibit an (Nq)^s dependence, 2 ≤ s ≤ 3, on the number Nq of samples in the data set. The numerical experiments provided illustrate the ability of the present approach to derive an accurate approximation from scarce scattered data, even in the presence of noise.
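The basis-selection step can be illustrated with a simpler greedy stand-in: orthogonal matching pursuit over a multivariate Legendre candidate dictionary. This is not the paper's modified least angle regression, and the dimensions, degrees, and synthetic sparse target below are all assumptions made for the illustration.

```python
# Sparse polynomial approximation from scattered samples: greedy term
# selection (orthogonal matching pursuit) over a Legendre candidate basis.
# A simple stand-in for the paper's modified least angle regression; the
# target function and problem sizes below are synthetic.
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, deg, n_samp = 5, 2, 60
X = rng.uniform(-1.0, 1.0, size=(n_samp, d))         # scattered samples

# 1-D Legendre polynomials P0, P1, P2.
P = [lambda t: np.ones_like(t), lambda t: t, lambda t: 0.5 * (3 * t**2 - 1)]

# Candidate dictionary: all multivariate Legendre terms of total degree <= 2.
alphas = [a for a in itertools.product(range(deg + 1), repeat=d)
          if sum(a) <= deg]
Phi = np.column_stack([np.prod([P[aj](X[:, j]) for j, aj in enumerate(a)],
                               axis=0) for a in alphas])

# Synthetic sparse target: y = 2 P1(x1) + 0.5 P2(x3)  (no noise).
y = 2.0 * X[:, 0] + 0.5 * (0.5 * (3 * X[:, 2]**2 - 1))

active, r = [], y.copy()
for _ in range(5):                                   # greedy selection loop
    j = int(np.argmax(np.abs(Phi.T @ r)))            # most correlated term
    if j not in active:
        active.append(j)
    coef, *_ = np.linalg.lstsq(Phi[:, active], y, rcond=None)
    r = y - Phi[:, active] @ coef                    # project out active terms
    if np.linalg.norm(r) < 1e-10:
        break
```

The greedy pass identifies the two active terms out of the 21 candidates from 60 samples; a least angle regression would instead grow all correlated coefficients simultaneously, but the role of the step in the overall method (picking a parsimonious basis before fitting its coefficients) is the same.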