Begell House Inc.
International Journal for Uncertainty Quantification (IJUQ)
ISSN 2152-5080
Volume 3, Issue 2, 2013

PREFACE: WORKING WITH UNCERTAINTY WORKSHOP: REPRESENTATION, QUANTIFICATION, PROPAGATION, VISUALIZATION, AND COMMUNICATION OF UNCERTAINTY, PROVIDENCE, RHODE ISLAND, OCTOBER 2011
vii-viii
Chris R. Johnson, Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah 84112, USA
Alex Pang, Computer Science Department, Jack Baskin School of Engineering, 1156 High Street, University of California, Santa Cruz, California 95060, USA
APPROXIMATE LEVEL-CROSSING PROBABILITIES FOR INTERACTIVE VISUALIZATION OF UNCERTAIN ISOCONTOURS
101-117
Kai Poethkow, Zuse Institute Berlin, Takustrasse 7, 14195 Berlin, Germany
Christoph Petz, Zuse Institute Berlin, Takustrasse 7, 14195 Berlin, Germany
Hans-Christian Hege, Zuse Institute Berlin, Takustrasse 7, 14195 Berlin, Germany
A major method for quantitative visualization of a scalar field is the depiction of its isocontours. If the scalar field is afflicted with uncertainties, uncertain counterparts to isocontours have to be extracted and depicted. We consider the case where the input data are modeled as a discretized Gaussian field with spatial correlations. For this situation we want to compute level-crossing probabilities associated with grid cells. To avoid the high computational cost of Monte Carlo integration and the direction dependence of raycasting methods, we formulate two approximations for these probabilities that can be utilized during rendering by looking up precomputed univariate and bivariate distribution functions. The first method, called maximum edge-crossing probability, considers only one pairwise correlation at a time. The second method, called the linked-pairs method, considers joint and conditional probabilities between vertices along paths of a spanning tree over the n vertices of a grid cell; with each possible tree an n-dimensional approximate distribution is associated, and the choice of distribution is guided by minimizing its Bhattacharyya distance to the original distribution. We perform a quantitative and qualitative evaluation of the approximation errors on synthetic data and demonstrate the utility of both approximations on climate simulation data.
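For a single grid edge, the crossing probability can indeed be written entirely in terms of univariate and bivariate Gaussian distribution functions, which is what makes precomputed lookup tables feasible. A minimal illustrative sketch in Python with SciPy (not the authors' implementation; names and tolerances are our own):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def edge_crossing_probability(mu, cov, theta):
    """Probability that an isocontour at level `theta` crosses the edge
    between two grid vertices whose values Y1, Y2 are jointly Gaussian.

    A crossing occurs when the endpoint values lie on opposite sides:
        P = P(Y1 < theta, Y2 > theta) + P(Y1 > theta, Y2 < theta)
          = F1(theta) + F2(theta) - 2 * F12(theta, theta),
    where F1, F2 are the marginal CDFs and F12 is the joint CDF.
    """
    s = np.sqrt(np.diag(cov))
    f1 = norm.cdf(theta, loc=mu[0], scale=s[0])
    f2 = norm.cdf(theta, loc=mu[1], scale=s[1])
    f12 = multivariate_normal(mean=mu, cov=cov).cdf([theta, theta])
    return f1 + f2 - 2.0 * f12
```

For two independent standard normal vertices and theta = 0, this gives 1/2, while strong positive correlation drives the crossing probability toward zero, matching the intuition that correlated neighbors rarely straddle the level.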
ADAPTIVE SAMPLING WITH TOPOLOGICAL SCORES
119-141
Dan Maljovec, Scientific Computing and Imaging Institute, University of Utah, 72 South Central Campus Drive, Salt Lake City, Utah 84112, USA
Bei Wang, Scientific Computing and Imaging Institute, University of Utah, 72 South Central Campus Drive, Salt Lake City, Utah 84112, USA
Ana Kupresanin, Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, California 94550-9234, USA
Gardar Johannesson, Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, California 94550-9234, USA
Valerio Pascucci, Scientific Computing and Imaging Institute, University of Utah, 72 South Central Campus Drive, Salt Lake City, Utah 84112, USA
Peer-Timo Bremer, Scientific Computing and Imaging Institute, University of Utah, 72 South Central Campus Drive, Salt Lake City, Utah 84112, USA; Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, California 94550-9234, USA
Understanding and describing expensive black-box functions such as physical simulations is a common problem in many application areas. One example is the recent interest in uncertainty quantification, with the goal of discovering the relationship between a potentially large number of input parameters and the output of a simulation. Typically, the simulation of interest is expensive to evaluate, and thus the sampling of the parameter space is necessarily small. As a result, choosing a "good" set of samples at which to evaluate is crucial to glean as much information as possible from the fewest samples. While space-filling sampling designs such as Latin hypercubes provide a good initial cover of the entire domain, more detailed studies typically rely on adaptive sampling: given an initial set of samples, these techniques construct a surrogate model and use it to evaluate a scoring function that aims to predict the expected gain from evaluating a potential new sample. There exist a large number of different surrogate models as well as different scoring functions, each with its own advantages and disadvantages. In this paper we present an extensive comparative study of adaptive sampling using four popular regression models combined with six traditional scoring functions, compared against a space-filling design. Furthermore, for a single high-dimensional output function, we introduce a new class of scoring functions based on global topological rather than local geometric information. The new scoring functions are competitive in terms of root-mean-squared prediction error but are expected to better recover the global topological structure. Our experiments suggest that the most common point of failure of adaptive sampling schemes is an ill-suited regression model. Nevertheless, even given well-fitted surrogate models, many scoring functions fail to outperform a space-filling design.
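The surrogate-plus-score loop described above can be illustrated with a toy one-dimensional sketch: a Gaussian-process surrogate scores candidate locations by predictive variance (an ALM-style score, one of many options; the kernel, length scale, noise level, and candidate grid below are illustrative assumptions, not choices from the paper):

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel matrix between 1D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_var(x_train, x_cand, noise=1e-6):
    """Predictive variance of a zero-mean GP surrogate at candidate points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k = rbf(x_cand, x_train)
    return 1.0 - np.einsum('ij,ij->i', k @ np.linalg.inv(K), k)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 5)              # small initial design
candidates = np.linspace(0, 1, 201)   # pool of potential new samples
for _ in range(10):
    score = gp_posterior_var(x, candidates)   # ALM-style scoring function
    x = np.append(x, candidates[np.argmax(score)])  # greedily add best sample
```

Because already-sampled locations have near-zero posterior variance, the greedy argmax naturally spreads new samples into unexplored regions; richer scores (expected improvement, topological scores as in the paper) simply replace the scoring line.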
VISUALIZING UNCERTAINTY IN PREDICTED HURRICANE TRACKS
143-156
Jonathan Cox, Clemson University, 100 McAdams Hall, Clemson, SC 29634, USA
Donald House, Clemson University, 100 McAdams Hall, Clemson, SC 29634, USA
Michael Lindell, Texas A&M University, 3137 TAMU, College Station, TX 77843, USA
The error cone is a display produced by the National Hurricane Center to present its predictions of the path of a hurricane. While the error cone is one of the primary tools used by officials and the general public to make emergency response decisions, the uncertainty underlying this display can easily be misunderstood. This paper explores the design of an alternative display that provides a continually updated set of possible hurricane tracks whose ensemble distribution closely matches the underlying statistics of a hurricane prediction. We explain the underlying algorithm and data structures and demonstrate how our displays compare with the error cone. Finally, we review the design and results of a user study that we conducted as a preliminary test of the efficacy of our approach in communicating prediction uncertainty.
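One simple way to generate an ensemble of smooth tracks whose spread at each lead time matches prescribed error statistics is a variance-normalized random walk around the forecast center line. The sketch below is a hypothetical illustration of that idea, not the authors' algorithm; the center line and growth of the error standard deviation are made-up inputs:

```python
import numpy as np

def sample_tracks(center, sigma, n_tracks, rng):
    """Sample an ensemble of hurricane tracks around a forecast center line.

    center : (T, 2) forecast positions at each lead time
    sigma  : (T,)   target offset standard deviation at each lead time
    Each track is the center line plus a correlated Gaussian offset,
    built as a random walk rescaled to unit variance per time step,
    so tracks stay smooth while the ensemble spread follows sigma.
    """
    T = len(sigma)
    steps = rng.standard_normal((n_tracks, T, 2))
    walk = np.cumsum(steps, axis=1)
    walk /= np.sqrt(np.arange(1, T + 1))[None, :, None]  # std 1 at every t
    return center[None] + sigma[None, :, None] * walk

rng = np.random.default_rng(1)
center = np.stack([np.linspace(0, 10, 24), np.zeros(24)], axis=1)
sigma = np.linspace(0.1, 3.0, 24)   # error spread grows with lead time
tracks = sample_tracks(center, sigma, 5000, rng)
```

Drawing a modest, continually refreshed subset of such tracks conveys both the spread and the track-to-track coherence that a static cone cannot.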
UNCERTAINTY CLASSIFICATION AND VISUALIZATION OF MOLECULAR INTERFACES
157-169
Aaron Knoll, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Illinois 60439, USA
Maria K. Y. Chan, Center for Nanoscale Materials, Argonne National Laboratory, Argonne, Illinois 60439, USA
Kah Chun Lau, Materials Science Division, Argonne National Laboratory, Argonne, Illinois 60439, USA
Bin Liu, Center for Nanoscale Materials, Argonne National Laboratory, Argonne, Illinois 60439, USA
Jeffrey Greeley, Center for Nanoscale Materials, Argonne National Laboratory, Argonne, Illinois 60439, USA
Larry Curtiss, Materials Science Division, Argonne National Laboratory, Argonne, Illinois 60439, USA
Mark Hereld, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Illinois 60439, USA
Michael E. Papka, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Illinois 60439, USA
Molecular surfaces at atomic and subatomic scales are inherently ill-defined. In many computational chemistry problems, boundaries are better represented as volumetric regions than as discrete surfaces. The molecular structure of a system at equilibrium is given by the self-consistent field, commonly interpreted as a scalar field of electron density. While experimental measurements such as chemical-bond and van der Waals radii do not spatially define the interface, they can serve as useful indicators of chemical and inert interactions, respectively. Rather than using these radial values to determine surface geometry directly, we use them to map an uncertainty interval in the electron density distribution, which then guides the classification of volume data. The result is a new strategy for representing, analyzing, and rendering molecular boundaries that is agnostic to the type of interaction.
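Mapping an uncertainty interval in the density onto the volume can be sketched as a three-way voxel classification. In the illustration below the thresholds `rho_lo` and `rho_hi` stand in for the densities associated with the van der Waals and bond radii; both the thresholds and the synthetic Gaussian "density" are our own assumptions:

```python
import numpy as np

def classify_interface(density, rho_lo, rho_hi):
    """Three-way classification of an electron-density volume:
    0 = exterior (density < rho_lo),
    1 = uncertain interface band (rho_lo <= density <= rho_hi),
    2 = interior (density > rho_hi).
    """
    labels = np.zeros(density.shape, dtype=np.uint8)
    labels[density >= rho_lo] = 1
    labels[density > rho_hi] = 2
    return labels

# Synthetic radially decaying "density" on a small grid.
ax = np.linspace(-2, 2, 33)
xx, yy, zz = np.meshgrid(ax, ax, ax, indexing="ij")
density = np.exp(-(xx**2 + yy**2 + zz**2))
labels = classify_interface(density, rho_lo=0.1, rho_hi=0.5)
```

A renderer can then treat label 1 as a fuzzy boundary region (e.g., with a dedicated transfer function) rather than extracting a single sharp isosurface.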
CORRELATION VISUALIZATION FOR STRUCTURAL UNCERTAINTY ANALYSIS
171-186
Tobias Pfaffelmoser, Technische Universität München, Computer Graphics and Visualization Group, Informatik 15, Boltzmannstrasse 3, 85748 Garching, Germany
Rüdiger Westermann, Technische Universität München, Computer Graphics and Visualization Group, Informatik 15, Boltzmannstrasse 3, 85748 Garching, Germany
In uncertain scalar fields, where the values at every point can be assumed to be realizations of a random variable, standard deviations indicate the strength of possible variations of these values from their mean values, independently of the values at any other point in the domain. To infer the possible variations at different points relative to each other, and thus to predict the possible structural occurrences, i.e., the structural variability, of particular features in the data, the correlation between the values at these points has to be considered. The purpose of this paper is to shed light on the use of correlation as an indicator of the structural variability of isosurfaces in uncertain three-dimensional scalar fields. In a number of examples, we first demonstrate some general conclusions one can draw from the correlations in uncertain data regarding its structural variability. We further explain why an adequate correlation visualization is crucial for a comprehensive uncertainty analysis. We then focus on the visualization of local, and usually anisotropic, correlation structures in the vicinity of uncertain isosurfaces. To this end, we propose a model that can represent anisotropic correlation structures on isosurfaces and allows one to visually distinguish the local correlations between points on the surface and along the surface's normal directions. A glyph-based approach is used to simultaneously visualize these dependencies. The practical relevance of our work is demonstrated in artificial and real-world examples using standard random distributions and ensemble simulations.
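The basic quantity behind such an analysis, the correlation between the values at two grid points, can be estimated directly from an ensemble of field realizations. A minimal sketch (illustrative, not the authors' code; the synthetic ensemble couples point 1 strongly to point 0 and leaves point 2 independent):

```python
import numpy as np

def pointwise_correlation(ensemble, i, j):
    """Pearson correlation between the scalar values at grid points i and j,
    estimated from an ensemble of field realizations.

    ensemble : (n_members, n_points) array, one realization per row
    """
    return np.corrcoef(ensemble[:, i], ensemble[:, j])[0, 1]

# Tiny synthetic ensemble with a known correlation structure.
rng = np.random.default_rng(42)
base = rng.standard_normal(2000)
ensemble = np.stack([base,                                   # point 0
                     base + 0.1 * rng.standard_normal(2000), # point 1: coupled
                     rng.standard_normal(2000)],             # point 2: independent
                    axis=1)
```

Evaluating such correlations between surface points and along surface normals, and encoding the resulting anisotropy in glyphs, is the kind of local structure the paper's model is designed to convey.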