Begell House Inc.
International Journal for Uncertainty Quantification
IJUQ
2152-5080
2
1
2012
PREFACE
ii
10.1615/Int.J.UncertaintyQuantification.v2.i1.10
Nicholas
Zabaras
Department of Mechanical and Aerospace Engineering, Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA; University of Warwick, Coventry CV4 7AL, United Kingdom
USA/SOUTH AMERICA SYMPOSIUM ON STOCHASTIC MODELING AND UNCERTAINTY QUANTIFICATION, LEBLON BEACH, RIO DE JANEIRO, BRAZIL, AUGUST 1-5, 2011
ON THE ROBUSTNESS OF STRUCTURAL RISK OPTIMIZATION WITH RESPECT TO EPISTEMIC UNCERTAINTIES
1-20
10.1615/Int.J.UncertaintyQuantification.v2.i1.20
Andre T.
Beck
Structural Engineering Department, EESC, University of São Paulo, Av. Trabalhador São-carlense, 400, 13566-590 São Carlos, SP, Brazil
W. J. S.
Gomes
Structural Engineering Department, EESC, University of São Paulo, Av. Trabalhador São-carlense, 400, 13566-590 São Carlos, SP, Brazil
F. A. V.
Bazan
Structural Engineering Department, EESC, University of São Paulo, Av. Trabalhador São-carlense, 400, 13566-590 São Carlos, SP, Brazil
risk analysis
representation of uncertainty
robust optimization
structural reliability
fuzzy variables
epistemic uncertainty
In the context of structural design, risk optimization allows one to find a proper point of balance between the concurrent
goals of economy and safety. Risk optimization involves the minimization of total expected costs, which include expected
costs of failure. Expected costs of failure are evaluated from nominal failure probabilities, which reflect the analyst's degree of belief in the structure's performance. Such failure probabilities are said to be nominal because they are evaluated
from imperfect and/or incomplete mechanical, mathematical and probabilistic models. Hence, model uncertainty and
other types of epistemic uncertainties are likely to compromise the results of risk optimization. In this paper, the concept
of robustness is employed in order to find risk optimization solutions which are less sensitive to epistemic uncertainties.
The investigation is based on a simple but illustrative problem, which is built from an elementary but fundamental
structural (load-resistance) reliability problem. Intrinsic or aleatoric uncertainties, which can be quantified probabilistically
and modeled as random variables or stochastic processes, are incorporated in the underlying structural reliability
problem. Epistemic uncertainties that can only be quantified possibilistically are modeled as fuzzy variables, based on
subjective judgment. These include uncertainties in random load and resistance variables, in the nominal (calculated)
failure probabilities and in the nominal costs of failure. The risk optimization problem is made robust with respect to the
whole fuzzy portfolio of epistemic uncertainties. An application example, involving optimization of partial safety factors
for the codified design of steel beams under bending, is also presented. In general, results obtained herein show that
the robust formulation leads to optimal structural configurations which are more conservative and present higher nominal
costs, but which are less sensitive to epistemic uncertainties, in comparison to the non-robust optimum structures. This
is especially true for larger levels of intrinsic uncertainties (in the underlying reliability problem) and for greater costs
of failure. The essential result of robust optimization is also shown to be insensitive to reasonable variations of expert
confidence: the robust solution is more conservative and more expensive, but also less sensitive to epistemic uncertainties.
The more pessimistic the expert, the more conservative the robust solution obtained, in comparison to the nominal,
non-robust solution.
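The elementary load-resistance reliability problem underlying the study can be sketched in a few lines of Monte Carlo; the normal distributions and their parameters below are illustrative assumptions, not the paper's values:

```python
import math
import random

random.seed(0)
n = 200_000

# Fundamental load-resistance problem: failure occurs when resistance R
# falls below load S. Distributions here are assumed for illustration.
failures = sum(1 for _ in range(n)
               if random.gauss(10.0, 1.0) < random.gauss(6.0, 1.5))
pf = failures / n  # nominal failure probability estimate

# Closed form for comparison: R - S ~ Normal(4, sqrt(1 + 1.5**2)),
# so pf = Phi(-beta) with reliability index beta = 4 / sqrt(3.25).
beta = 4.0 / math.sqrt(1.0 + 1.5**2)
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))
```

An expected cost of failure would then be pf times a failure cost, the quantity the risk optimization trades off against construction cost.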
DISTANCES AND DIAMETERS IN CONCENTRATION INEQUALITIES: FROM GEOMETRY TO OPTIMAL ASSIGNMENT OF SAMPLING RESOURCES
21-38
10.1615/Int.J.UncertaintyQuantification.v2.i1.30
Tim
Sullivan
Freie Universität Berlin
Houman
Owhadi
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
concentration of measure
large deviations
normal distance
optimal sampling
Talagrand distance
uncertainty quantification
This note reviews, compares and contrasts three notions of "distance" or "size"
that arise often in concentration-of-measure inequalities. We review Talagrand's convex distance and
McDiarmid's diameter, and consider in particular the normal distance on a topological vector space
𝒳, which corresponds to the method of Chernoff bounds, and is in some
sense "natural" with respect to the duality structure on 𝒳.
We show that, notably, with respect to this distance, concentration inequalities on the tails of linear,
convex, quasiconvex and measurable functions on 𝒳 are mutually equivalent.
We calculate the normal distances that correspond to families of Gaussian and of bounded random variables in
ℝ^N, and to functions of N empirical means. As an application, we consider the
problem of estimating the confidence that one can have in a quantity of interest that depends upon many
empirical—as opposed to exact—means and show how the normal distance leads to a formula for
the optimal assignment of sampling resources.
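The Chernoff-bound connection mentioned above can be illustrated with the classical Hoeffding inequality for an empirical mean of bounded variables; the sample size, trial count, and threshold below are arbitrary, and this standard textbook bound is only a stand-in for the note's normal-distance machinery:

```python
import math
import random

random.seed(1)
N = 50            # samples per empirical mean
trials = 20_000   # number of empirical means simulated
t = 0.15          # tail threshold

# X_i uniform on [0, 1] with mean 1/2. Hoeffding (a Chernoff-type bound
# for bounded variables): P(mean - 1/2 >= t) <= exp(-2 * N * t**2).
exceed = sum(
    1 for _ in range(trials)
    if sum(random.random() for _ in range(N)) / N - 0.5 >= t
) / trials

bound = math.exp(-2.0 * N * t * t)  # about 0.105 for these values
```

The empirical tail frequency sits well below the bound, as concentration inequalities guarantee; sharper distances tighten the gap.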
A NON-PARAMETRIC METHOD FOR INCURRED BUT NOT REPORTED CLAIM RESERVE ESTIMATION
39-51
10.1615/Int.J.UncertaintyQuantification.v2.i1.40
Helio
Lopes
Departamento de Matemática, Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, Brazil
Jocelia
Barcellos
Departamento de Matemática, Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, Brazil
Jessica
Kubrusly
Instituto de Matemática e Estatística, Universidade Federal Fluminense, Niterói, Rio de Janeiro, Brazil
Cristiano
Fernandes
Departamento de Engenharia Elétrica, Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, Brazil
risk analysis
statistical learning
kernel methods
Monte Carlo
The number and cost of claims that will arise from each policy of an insurance company's portfolio are unknown. In fact, there is a high degree of uncertainty about what the ultimate cost of claims will be, not only during the period of inception but also after contract termination, since there might be future, not yet reported, losses associated with past claims. Therefore, in practice, insurance companies protect themselves against this ultimate cost by creating an additional reserve known as the incurred but not reported (IBNR) reserve. This work introduces new non-parametric models for IBNR estimation based on kernel methods, namely, support vector regression and Gaussian process regression. These are used to learn certain types of nonlinear structure present in claims data using the residuals produced by a benchmark IBNR estimation model, Mack's chain ladder. The proposed models are then compared to Mack's model using real data examples. Our results show that the three new proposed models are competitive with Mack's benchmark model: they may produce the closest predictions of IBNR and also more accurate estimates, given that the variance of the reserve estimate, obtained through the bootstrap technique, is usually smaller than that given by Mack's model.
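The chain-ladder benchmark rests on volume-weighted development factors applied to a run-off triangle; a minimal sketch, in which the `triangle` values and the helper `dev_factor` are invented for illustration and not taken from the paper's data:

```python
# Chain-ladder reserve estimation on a tiny run-off triangle.
# Rows: accident years; columns: development years; None = not yet observed.
triangle = [
    [100.0, 150.0, 165.0],
    [110.0, 168.0, None],
    [120.0, None,  None],
]

def dev_factor(tri, j):
    """Volume-weighted development factor from year j to j + 1."""
    num = sum(row[j + 1] for row in tri if row[j + 1] is not None)
    den = sum(row[j] for row in tri if row[j + 1] is not None)
    return num / den

factors = [dev_factor(triangle, j) for j in range(2)]

# Project each open accident year to ultimate; the reserve is the sum of
# projected ultimates minus the latest observed cumulative claims.
reserve = 0.0
for row in triangle:
    last = max(j for j, v in enumerate(row) if v is not None)
    ultimate = row[last]
    for j in range(last, 2):
        ultimate *= factors[j]
    reserve += ultimate - row[last]
```

The kernel-based models in the paper act on the residuals this deterministic projection leaves behind, and bootstrap resampling of the triangle yields the variance of the reserve estimate.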
UNCERTAINTY QUANTIFICATION IN COMPUTATIONAL PREDICTIVE MODELS FOR FLUID DYNAMICS USING A WORKFLOW MANAGEMENT ENGINE
53-71
10.1615/Int.J.UncertaintyQuantification.v2.i1.50
Gabriel
Guerra
Mechanical Engineering Department, Federal University of Rio de Janeiro, Brazil
Fernando A.
Rochinha
COPPE, Universidade Federal do Rio de Janeiro, Brazil
Renato
Elias
High Performance Computing Center, Federal University of Rio de Janeiro, Brazil
Daniel
de Oliveira
Systems Engineering and Computer Science Department, Federal University of Rio de Janeiro, Brazil
Eduardo
Ogasawara
Systems Engineering and Computer Science Department, Federal University of Rio de Janeiro, Brazil
Jonas Furtado
Dias
Systems Engineering and Computer Science Department, Federal University of Rio de Janeiro, Brazil
Marta
Mattoso
Systems Engineering and Computer Science Department, Federal University of Rio de Janeiro, Brazil
Alvaro L. G. A.
Coutinho
High Performance Computing Center, Federal University of Rio de Janeiro, Brazil
sparse grid stochastic collocation method
scientific workflows
provenance
computational fluid dynamics
parallelization
adaptive sparse grid
Computational simulation of complex engineered systems requires intensive computation and a significant amount of data management. Today, this management is often carried out on a case-by-case basis and requires great effort to track. This is due to the complexity of controlling a large amount of data flowing along a chain of simulations. Moreover, there is often a need to explore parameter variability for the same set of data. On a case-by-case basis, there is no record of the data involved in the simulation, making the process prone to errors. In addition, if the user wants to analyze the behavior of a simulation sample, he or she must wait until the end of the whole simulation. In this context, techniques and methodologies of scientific workflows can improve the management of simulations. Parameter variability can be placed in the general context of uncertainty quantification (UQ), which provides a rational perspective for analysts and decision makers. The objective of this work is to use scientific workflows to provide a systematic approach to: (i) modeling UQ numerical experiments as scientific workflows, (ii) offering query tools to evaluate UQ processes at runtime, (iii) managing the UQ analysis, and (iv) managing UQ in parallel executions. When scientific workflow engines are used, data can be collected in a transparent manner, allowing execution steering and the post-assessment of results, and providing the information needed to re-execute the experiment, thereby ensuring reproducibility, an essential characteristic of a scientific or engineering computational experiment.
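The stochastic collocation step that such workflows orchestrate amounts to running the deterministic model at quadrature nodes and weighting the outputs. A one-dimensional, non-adaptive sketch, far simpler than the paper's adaptive sparse grids; the three-point Gauss-Hermite rule is standard, and the polynomial `model` is an assumed stand-in for an expensive simulation:

```python
import math

# Three-point Gauss-Hermite rule (probabilists' form) for a standard
# normal random input: nodes -sqrt(3), 0, sqrt(3); weights 1/6, 2/3, 1/6.
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def model(x):
    # Stand-in for a deterministic simulation output; illustrative only.
    return x * x + 2.0 * x + 3.0

# Collocation: evaluate the model at each node, weight the outputs.
mean_estimate = sum(w * model(x) for w, x in zip(weights, nodes))
# Exact mean of X**2 + 2*X + 3 for X ~ N(0, 1) is 1 + 0 + 3 = 4,
# and the three-point rule reproduces it exactly for this polynomial.
```

Each node evaluation is an independent solver run, which is why the workflow engine can parallelize them and track their provenance individually.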
ON THE ROLE OF DATA MINING TECHNIQUES IN UNCERTAINTY QUANTIFICATION
73-94
10.1615/Int.J.UncertaintyQuantification.v2.i1.60
Chandrika
Kamath
Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, California, 94551, USA
classification
machine learning
principal component analysis
high-dimensional methods
data mining
uncertainty quantification
Techniques from scientific data mining are increasingly being used to analyze and understand data from scientific observations, simulations, and experiments. These methods provide scientists the opportunity to automate the tedious manual processing of the data, control complex systems, and gain insights into the phenomenon being modeled or observed. This process of data-driven scientific inference borrows ideas and solutions from a range of fields including machine learning, image and video processing, statistics, high-performance computing, and pattern recognition. The tasks involved in these analyses include the extraction of structures from the data, the identification of representative features for these structures, dimension reduction, and building predictive and descriptive models. At first glance, data mining and data-driven analysis may appear unrelated to stochastic modeling and uncertainty quantification. But, as we show in this paper, there are commonalities in the problems addressed and techniques used, providing the two communities the opportunity to benefit from the expertise and experiences of each other.
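Dimension reduction via principal component analysis, one of the tasks listed above, can be sketched in two dimensions; the data points and variable names below are invented for illustration:

```python
import math

# Minimal 2D PCA: find the dominant direction of correlated points.
# Data lie near the line y = 2x with small perturbations (illustrative).
pts = [(x, 2.0 * x + e) for x, e in
       zip([-2.0, -1.0, 0.0, 1.0, 2.0], [0.1, -0.1, 0.0, 0.1, -0.1])]

n = len(pts)
mx = sum(p[0] for p in pts) / n
my = sum(p[1] for p in pts) / n

# Entries of the 2x2 covariance matrix.
cxx = sum((p[0] - mx) ** 2 for p in pts) / n
cyy = sum((p[1] - my) ** 2 for p in pts) / n
cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n

# Eigenvalues of [[cxx, cxy], [cxy, cyy]] via the quadratic formula.
tr, det = cxx + cyy, cxx * cyy - cxy * cxy
disc = math.sqrt(tr * tr / 4.0 - det)
lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc

# Leading eigenvector (unnormalized) is (cxy, lam1 - cxx), so its slope
# recovers the underlying direction; lam1 / (lam1 + lam2) is the
# fraction of variance the first principal component explains.
slope = (lam1 - cxx) / cxy
explained = lam1 / (lam1 + lam2)
```

The same covariance-eigendecomposition idea scales to the high-dimensional simulation outputs both the data-mining and UQ communities handle, where it serves as a shared dimension-reduction workhorse.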