Research Highlights

EQUINOX: A hybrid sparse-grid approach for nonlinear filtering (Spring 2015)

Achievement:

Significance and Impact:

(Left) The problem of interest: bearing-only tracking; (Right) comparison of marginal distributions of the dynamical state process obtained by particle filters (PFs) and by our approach. PFs need 160,000 samples to achieve accuracy similar to that of our approach with about 2,300 sparse-grid points.

Research Details:

Sponsor/Facility: Work was performed at ORNL and sponsored by ASCR.

PI and affiliation: Clayton Webster - Oak Ridge National Laboratory

Team: F. Bao, Y. Cao, C. Webster, and G. Zhang

Publications: F. Bao, Y. Cao, C. Webster, and G. Zhang, A hybrid sparse-grid approach for nonlinear filtering problems on adaptive-domain of the Zakai equation approximation. SIAM J. on Uncertainty Quantification, 2, 784 - 804, 2014.

Overview: A hybrid finite difference algorithm for the Zakai equation is constructed to solve nonlinear filtering problems. The algorithm combines the splitting-up finite difference scheme and the hierarchical sparse-grid method to solve moderately high-dimensional nonlinear filtering problems. Because the solutions arising in most nonlinear filtering applications are bell-shaped, we introduce a logarithmic approximation to reduce the errors of the hierarchical sparse-grid interpolation. Space-adaptive methods are also introduced to make the algorithm more efficient. Numerical experiments are carried out to demonstrate the performance and efficiency of our algorithm.
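
For orientation, a minimal sketch of the setting in standard notation (the notation below is assumed for illustration, not quoted from the paper): for a signal/observation pair

    dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t, \qquad dY_t = h(X_t)\,dt + dV_t,

the unnormalized conditional density u(x,t) of the state given the observations satisfies the Zakai equation

    du(x,t) = \mathcal{L}^* u(x,t)\,dt + u(x,t)\,h(x)^\top dY_t,

where \mathcal{L}^* is the adjoint of the generator of X_t. The splitting-up scheme advances the prediction part (governed by \mathcal{L}^*) and the observation update separately within each time step, and the logarithmic approximation can be read as interpolating w = \ln u rather than u itself on the hierarchical sparse grid, since \ln u is far smoother (nearly quadratic) for bell-shaped densities.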


EQUINOX: A Multilevel MC method for reservoir modeling (Spring 2015)

Achievement:

Significance and Impact:

(Left) 3-D reservoir model with 10^6 cells. The permeability field (k) is highly heterogeneous, making uncertainty quantification extremely high-dimensional and computationally expensive.

(Right) To achieve the same root mean square error (RMSE), our MLMC method requires less computational time than standard Monte Carlo (MC); for the same computational time, MLMC achieves higher accuracy (smaller RMSE).

Research Details:

Sponsor/Facility: Work was performed at ORNL and sponsored by ASCR.

PI and affiliation: Clayton Webster - Oak Ridge National Laboratory

Team: C. Barbier, D. Lu, C. Webster and G. Zhang

Publications: C. Barbier, D. Lu, C. Webster and G. Zhang, A multilevel Monte Carlo approach for application to uncertainty quantification in oil reservoir simulations. To appear: Water Resources Research, 2015.

Overview: This study describes a multilevel Monte Carlo (MLMC) method for UQ in reservoir simulation. MLMC is a variance-reduction technique for standard MC. It improves computational efficiency by conducting simulations on a geometric sequence of grids, with many simulations on coarse grids and fewer on fine grids. In this study, we applied the MLMC method to a highly heterogeneous reservoir model modified from the tenth SPE project. The results indicate that MLMC can achieve the same accuracy as standard MC with a significantly reduced computational cost, e.g., about 82-97% and 65-97% computational savings in estimating expectations and approximating distribution functions, respectively. The MLMC method is model independent and can be applied in environmental modeling and many other fields.
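
As a minimal illustration of the telescoping estimator underlying MLMC (the toy quantity of interest and sample counts below are placeholders, not the reservoir model of the study):

    import numpy as np

    def mlmc_estimate(sample_level, n_samples):
        """Multilevel Monte Carlo estimate of E[Q] via the telescoping sum
        E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}].

        sample_level(level, n) must return n paired samples (Q_l, Q_{l-1})
        computed from the same random inputs, with Q_{-1} taken as 0."""
        estimate = 0.0
        for level, n in enumerate(n_samples):
            q_fine, q_coarse = sample_level(level, n)
            estimate += np.mean(q_fine - q_coarse)   # correction term for this level
        return estimate

    # Toy "simulation": the discretization bias decays like 2^(-level).
    rng = np.random.default_rng(0)
    def sample_level(level, n):
        theta = rng.standard_normal(n)                       # shared random input
        q = lambda lev: np.sin(theta) + 2.0 ** (-lev - 1)    # level-dependent output
        return q(level), (q(level - 1) if level > 0 else np.zeros(n))

    # Many cheap coarse-grid samples, few expensive fine-grid ones.
    print(mlmc_estimate(sample_level, n_samples=[4000, 1000, 250]))

The computational savings quoted above come from exactly this allocation: most samples are taken on the cheap coarse levels, while only a handful of fine-level solves are needed to correct the bias.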


EQUINOX: A Multilevel stochastic collocation (MLSC) method (Spring 2015)

Achievement:

Significance and Impact:

For N = 20 dimensions we compare Monte Carlo (MC), MLMC, SC, and MLSC: (Left) total cost versus error; (Right) number of MLSC samples per level (predicted by the theory vs. actual computations).

Research Details:

Sponsor/Facility: Work was performed at ORNL and sponsored by ASCR.

PI and affiliation: Clayton Webster - Oak Ridge National Laboratory

Team: M. Gunzburger, P. Jantsch, A. Teckentrup and C. Webster

Publications: M. Gunzburger, P. Jantsch, A. Teckentrup and C. Webster, A multilevel stochastic collocation method for PDEs with random input data. To appear: SIAM J. on Uncertainty Quantification, 2015.

Overview: In this work, we propose and analyze a multilevel version of the stochastic collocation method that, as is the case for multilevel Monte Carlo (MLMC) methods, uses hierarchies of spatial approximations to reduce the overall computational complexity. In addition, our proposed approach utilizes, for approximation in stochastic space, a sequence of multi-dimensional interpolants of increasing fidelity which can then be used for approximating statistics of the solution as well as building high-order surrogates featuring faster convergence rates. This work also provides a rigorous convergence and computational cost analysis of the new multilevel stochastic collocation method, demonstrating its advantages with regard to standard single-level stochastic collocation approximations as well as MLMC methods. Numerical results illustrate the theory and the effectiveness of the proposed multilevel method.
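
Schematically, and with notation assumed here for illustration, the multilevel collocation approximation pairs spatial levels k = 0, ..., K with interpolation operators of decreasing fidelity:

    u_{h_K}(\cdot, y) \;\approx\; \sum_{k=0}^{K} \mathcal{I}_{M_{K-k}}\big[\, u_{h_k} - u_{h_{k-1}} \,\big](\cdot, y), \qquad u_{h_{-1}} := 0,

where u_{h_k} is the finite element solution on spatial level k and \mathcal{I}_M is a sparse-grid interpolant with M collocation points. Since the detail u_{h_k} - u_{h_{k-1}} shrinks as k grows, the expensive fine spatial levels only need low-fidelity interpolants, which is the source of the cost reduction relative to single-level collocation.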


EQUINOX: Accelerating SC methods for extreme scale computing (Spring 2015)

Achievement:

Significance and Impact:

For nonlinear elliptic PDEs with random input data, we plot the percentage reduction in cumulative conjugate gradient iterations at each level of the collocation approximation in N = 3, 5, 7, 9, 11, and 13 dimensions, with correlation lengths 1/64 (left) and 1/2 (right).

Research Details:

Sponsor/Facility: Work was performed at ORNL and sponsored by ASCR.

PI and affiliation: Clayton Webster - Oak Ridge National Laboratory

Team: D. Galindo, P. Jantsch, C. Webster, and G. Zhang

Publications: D. Galindo, P. Jantsch, C. Webster, and G. Zhang, Accelerating stochastic collocation methods for partial differential equations with random input data, Submitted: SIAM J. on Uncertainty Quantification, 2015.

Overview: This work proposes and analyzes a generalized acceleration technique for decreasing the computational complexity of using stochastic collocation (SC) methods to solve partial differential equations (PDEs) with random input data. The SC approaches considered in this effort consist of a standard Galerkin finite element approximation in the physical space, and a sequential multi-dimensional Lagrange interpolation in the random parametric domain, formulated by collocating on a set of points so that the resulting approximation is defined in a hierarchical sequence of polynomial spaces of increasing fidelity. As opposed to multilevel methods that reduce the overall computational burden by taking advantage of a hierarchical spatial approximation, our approach exploits the construction of the SC interpolant to accelerate the underlying ensemble of deterministic solutions.
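
A minimal sketch of one way such acceleration can work, assuming (based on the iteration counts reported above) that it amounts to warm-starting the conjugate gradient solver at each new collocation point with a prediction interpolated from solutions already computed on the coarser interpolation level; the toy parameterized system and all names below are illustrative:

    import numpy as np
    from scipy.sparse.linalg import cg

    def solve_at_point(A, b, x0=None):
        """Solve A x = b with conjugate gradients and count the iterations."""
        iterations = []
        x, _ = cg(A, b, x0=x0, callback=lambda xk: iterations.append(1))
        return x, len(iterations)

    # Toy parameterized system A(y) x = b (1-D Laplacian scaled by the parameter y).
    n = 200
    K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    A = lambda y: (1.0 + 0.5 * y) * K

    y_prev = np.array([-1.0, 0.0, 1.0])                  # points solved on the coarser level
    x_prev = [solve_at_point(A(y), b)[0] for y in y_prev]

    def predict(y):
        """Lagrange interpolation of the previously computed solutions at a new point y."""
        weights = [np.prod([(y - yj) / (yi - yj) for yj in y_prev if yj != yi]) for yi in y_prev]
        return sum(w * x for w, x in zip(weights, x_prev))

    y_new = 0.3
    _, iters_zero = solve_at_point(A(y_new), b)                        # cold start
    _, iters_warm = solve_at_point(A(y_new), b, x0=predict(y_new))     # interpolated start
    print("CG iterations, zero vs. interpolated initial guess:", iters_zero, iters_warm)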


EQUINOX: Bayesian inference for LES in turbulent flow (Spring 2015)

Achievement:

Significance and Impact:

(Left) Adaptive computational grid for Large Eddy Simulation (LES); (Right) the corresponding filter radius function.

We are the first to employ DNS data to:

Research Details:

Sponsor/Facility: Work was performed at ORNL and sponsored by ASCR.

PI and affiliation: Clayton Webster - Oak Ridge National Laboratory

Team: H. Tran, C. Webster and G. Zhang

Publications:

H. Tran, C. Webster and G. Zhang, Bayesian inference for Smagorinsky models in simulating flow around a cylinder at sub-critical Reynolds number. Submitted: Lecture Notes on CSE, 2015.

Overview:

In this effort, we developed an adaptive hierarchical sparse-grid (AHSG) surrogate modeling approach to Bayesian inference of large eddy simulation (LES) models. Through a numerical demonstration on the Smagorinsky turbulence model of two-dimensional flow around a cylinder at sub-critical Reynolds number, the approach is shown to significantly reduce the number of costly LES executions without losing much accuracy in the posterior probability estimation. First, an AHSG surrogate for the output of the forward model is constructed using a relatively small number of model executions. Using this surrogate, the likelihood function can be rapidly evaluated at any point in the parameter space without simulating the computationally expensive LES model. Here, the model parameters are calibrated against synthetic data related to the mean flow velocity and Reynolds stresses at different locations in the flow wake. We demonstrate the efficiency of our approach and discuss the influence of the user-selected LES parameters on the quality of the output data.
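
A compact sketch of the two-step workflow (a one-parameter toy model and a simple polynomial fit stand in for the LES model and the AHSG surrogate; all names, data, and noise levels below are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)

    def expensive_model(c_s):
        """Stand-in for a costly LES run returning two output statistics."""
        return np.array([np.sin(3.0 * c_s), c_s ** 2])

    # Step 1: build a cheap surrogate from a small number of forward-model runs.
    nodes = np.linspace(0.05, 0.30, 9)
    runs = np.array([expensive_model(c) for c in nodes])
    polys = [np.poly1d(np.polyfit(nodes, runs[:, j], 6)) for j in range(runs.shape[1])]
    surrogate = lambda c: np.array([p(c) for p in polys])

    # Step 2: evaluate the likelihood through the surrogate inside a
    # random-walk Metropolis sampler (no further expensive model runs needed).
    sigma = 0.01
    data = expensive_model(0.17) + sigma * rng.standard_normal(2)   # synthetic observations
    log_like = lambda c: -0.5 * np.sum((surrogate(c) - data) ** 2) / sigma ** 2

    chain, c = [], 0.15
    for _ in range(5000):
        proposal = c + 0.02 * rng.standard_normal()
        if 0.05 <= proposal <= 0.30 and np.log(rng.random()) < log_like(proposal) - log_like(c):
            c = proposal
        chain.append(c)
    print("posterior mean of the calibrated parameter:", np.mean(chain[1000:]))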


EQUINOX: Best M-term sparse polynomial methods for high-D PDEs (Spring 2015)

Achievement:

Significance and Impact:

A comparison of our theoretical error estimate with Monte Carlo, as well as with all existing methods that make use of the Stechkin approach, for computing the solution of a parameterized 8-dimensional PDE.

Research Details:

Sponsor/Facility: Work was performed at ORNL and sponsored by ASCR.

PI and affiliation: Clayton Webster - Oak Ridge National Laboratory

Team: H. Tran, C. G. Webster, and G. Zhang

Publications:

H. Tran, C. G. Webster, G. Zhang, Analysis of quasi-optimal polynomial approximations for parameterized PDEs with deterministic and stochastic coefficients, Submitted: Foundations of Comp. Math., 2015.

Overview:

In this work, we present a generalized methodology for analyzing the convergence of quasi-optimal Taylor and Legendre approximations, applicable to a wide class of parameterized elliptic PDEs with both deterministic and stochastic inputs. Such methods construct an index set that corresponds to the "best M-terms" based on sharp estimates of the polynomial coefficients. Several types of isotropic and anisotropic (weighted) multi-index sets are explored, and rigorous proofs reveal sharp asymptotic error estimates in which we achieve sub-exponential convergence rates with respect to the total number of degrees of freedom. Finally, computational evidence complements the theory and shows the advantage of our generalized methodology compared to previously developed estimates.
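
Schematically, and with notation assumed here for illustration: writing the parametric solution as an expansion

    u(x, y) \;=\; \sum_{\nu \in \mathcal{F}} c_\nu(x)\, \Psi_\nu(y)

in Taylor or Legendre polynomials, a quasi-optimal ("best M-term") method retains the index set \Lambda_M of the M indices with the largest sharp coefficient bounds B(\nu) \ge \| c_\nu \|_V, and the analysis yields error estimates of the sub-exponential form

    \Big\| u - \sum_{\nu \in \Lambda_M} c_\nu \Psi_\nu \Big\| \;\le\; C \exp\!\big( -\kappa\, M^{1/N} \big)

in N parametric dimensions, in contrast to the algebraic rates of Monte Carlo or Stechkin-type bounds.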


EQUINOX: Hyper-spherical methods for high-D discontinuity detection (Spring 2015)

Achievement:

Significance and Impact:

Our HS-HASG is the most efficient existing approach for detecting discontinuous regions in high-dimensional random domains.

Research Details:

Sponsor/Facility: Work was performed at ORNL and sponsored by ASCR.

PI and affiliation: Clayton Webster - Oak Ridge National Laboratory

Team: M. Gunzburger, C. Webster and G. Zhang

Publications:

M. Gunzburger, C. Webster and G. Zhang, A hyper-spherical sparse grid approach for high-dimensional discontinuity detection. To appear: SIAM J. on Numerical Analysis, 2015.

Overview:

This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided, as are several numerical examples that illustrate the effectiveness of the approach.
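
A low-dimensional sketch of the idea (a 2-D example in which a dense angular grid and plain bisection stand in for the N-dimensional adaptive hierarchical sparse grid and its hierarchical acceleration; the discontinuous indicator below is illustrative):

    import numpy as np

    def indicator(x):
        """Discontinuous quantity of interest: jumps across the circle of radius 0.5."""
        return 1.0 if x[0] ** 2 + x[1] ** 2 < 0.5 ** 2 else 0.0

    def radial_jump(theta, r_max=1.0, tol=1e-8):
        """Locate the jump along the ray of angle theta by a 1-D bisection on the indicator."""
        lo, hi = 0.0, r_max
        f_lo = indicator([0.0, 0.0])
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            x = [mid * np.cos(theta), mid * np.sin(theta)]
            if indicator(x) == f_lo:
                lo = mid      # still on the inner side: the jump is farther out
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # The discontinuity hyper-surface becomes a smooth function r = g(theta)
    # of the angular variables, which is cheap to approximate.
    thetas = np.linspace(0.0, 2.0 * np.pi, 33)
    g = np.array([radial_jump(t) for t in thetas])
    print("max deviation from the true radius 0.5:", np.abs(g - 0.5).max())

The key point mirrored here is that the transformed object g is smooth even though the original quantity of interest is discontinuous, so a sparse-grid approximation of g converges quickly.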


EQUINOX: Regularity analysis of stochastic Navier-Stokes (Spring 2015)

Achievement:

Our theoretical results reveal that the solution is analytic with respect to the random input data. However, we also show that the region of analyticity shrinks as time increases, making it difficult for techniques such as stochastic collocation or polynomial chaos to converge.

Significance and Impact:

Evolution of the mean lift (top) and mean drag coefficient (bottom)

Our analytic results give great insight into the behavior of the flow with respect to the stochastic inputs, thus enabling the construction of an appropriate stochastic approximation technique.

Research Details:

Sponsor/Facility: Work was performed at ORNL and sponsored by ASCR.

PI and affiliation: Clayton Webster - Oak Ridge National Laboratory

Team: H. Tran, C. Trenchea and C. Webster

Publications:

H. Tran, C. Trenchea and C. Webster, A convergence analysis of stochastic collocation for Navier-Stokes with random coefficients. Submitted: Mathematics of Computation, 2015.

Overview:

The stochastic collocation method has proved to be efficient and has been widely applied to solve various partial differential equations with random input data, including the Navier-Stokes equations (NSE). Up to now, however, rigorous convergence analyses have been limited to linear elliptic and parabolic equations; the method's performance for the Navier-Stokes equations was demonstrated mostly by numerical experiments. In this paper, we provide an error analysis of the stochastic collocation method for a semi-implicit backward Euler discretization of the NSE and prove the exponential decay of the interpolation error in the probability space. Our analysis indicates that, due to the nonlinearity, the accuracy may be reduced significantly as the final time T increases and the NSE solves accumulate. Subsequently, the theoretical results are illustrated by a numerical test of time-dependent fluid flow around a bluff body.
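
In schematic form (constants, norms, and arguments are assumed here only to illustrate the statement), the result says that the collocation interpolant \mathcal{I}_m built on m points per parametric direction satisfies

    \| u(T) - \mathcal{I}_m u(T) \| \;\le\; C(T)\, e^{-r(T)\, m},

i.e., the error still decays exponentially in m, but the rate r(T) is tied to the radius of the region of analyticity, which shrinks as the final time T grows, so the bound deteriorates for long time horizons.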


REVIEWS: Stochastic FEMs for PDEs with random data (Spring 2015)

Achievement: A generalized stochastic finite element framework is established in which methods such as Monte Carlo, stochastic Galerkin, and stochastic collocation are described and compared from both a theoretical and a computational perspective.

Significance and Impact:

Publication covers

Research Details:

Sponsor/Facility: Work was performed at ORNL and sponsored by ASCR.

PI and affiliation: Clayton Webster - Oak Ridge National Laboratory

Team: Max Gunzburger, Clayton Webster, and Guannan Zhang

Publications:

M. Gunzburger and C. Webster, Uncertainty Quantification for partial differential equations with stochastic coefficients. To appear: The Mathematical Intelligencer, 2015.

M. Gunzburger, C. Webster and G. Zhang, Stochastic finite element methods for partial differential equations with random input data. Acta Numerica, 521-650, 2015.

Overview:

The quantification of probabilistic uncertainties in the outputs of physical, biological, and social systems governed by partial differential equations with random inputs requires, in practice, the discretization of those equations. Stochastic finite element methods refer to an extensive class of algorithms for the approximate solution of partial differential equations having random input data, for which spatial discretization is effected by a finite element method. Fully discrete approximations require further discretization with respect to solution dependences on the random variables. For this purpose several approaches have been developed, including intrusive approaches such as stochastic Galerkin methods, for which the physical and probabilistic degrees of freedom are coupled, and non-intrusive approaches such as stochastic sampling and interpolatory-type stochastic collocation methods, for which the physical and probabilistic degrees of freedom are uncoupled. All these method classes are surveyed in this article, including some novel recent developments.
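
In schematic terms (the weak form and notation below are assumed for illustration), for a parameterized problem a(u, v; y) = \ell(v; y) with random parameters y:

    Stochastic Galerkin (intrusive): find u_{h,p} \in V_h \otimes \mathcal{P}_p such that
        \mathbb{E}\big[ a(u_{h,p}, v; y) \big] = \mathbb{E}\big[ \ell(v; y) \big] \quad \forall\, v \in V_h \otimes \mathcal{P}_p,
    a single large system coupling the physical and probabilistic degrees of freedom;

    Stochastic collocation (non-intrusive): for each collocation point y_k, solve the deterministic problem
        a(u_h^{(k)}, v; y_k) = \ell(v; y_k) \quad \forall\, v \in V_h,
    and recover the parametric dependence from the decoupled solves via interpolation, u_{h,p}(y) = \sum_k u_h^{(k)} L_k(y).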


EQUINOX: Architecture-aware scalable algorithms for embedded parallel ensemble simulation

Achievement: The performance and scalability of traditional sampling-based uncertainty quantification methods for PDE-based simulation on emerging computer architectures are a serious concern. To address this, we initiated the development of an embedded sample propagation method that propagates small groups of samples, which we call ensembles, simultaneously through the PDE simulation.

Significance and Impact:

Speedup in multigrid-preconditioned linear solve time with increasing number of compute nodes.

Research Details:

Sponsor/Facility: Work was performed at SNL and sponsored by ASCR.

PI and affiliation: Eric Phipps - Sandia National Laboratories

Team: Marta D'Elia, Eric Phipps, J. Hu, H. C. Edwards

Publications:

E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, Embedded ensemble propagation for improving performance, portability and scalability of uncertainty quantification on emerging computational architectures. Submitted: SIAM Journal on Scientific Computing, 2015.

E. Phipps and A. Salinger, Embedded uncertainty quantification methods via Stokhos. Handbook of Uncertainty Quantification, 2016.

M. D'Elia, H. C. Edwards, J. Hu, and E. Phipps, Grouping strategies for embedded stochastic collocation methods applied to anisotropic diffusion problems. Submitted: SIAM/ASA Journal on Uncertainty Quantification, 2016.

Overview:

Quantifying simulation uncertainties is a critical component of rigorous predictive simulation. A key part of this is the forward propagation of uncertainties in simulation input data to output quantities of interest. Typical approaches involve repeated sampling of the simulation over the uncertain input data, and can require numerous samples when accurately propagating uncertainties from large numbers of sources. Often simulation processes from sample to sample are similar and much of the data generated from each sample evaluation could be reused. We explore a new method for implementing sampling methods that simultaneously propagates groups of samples together in an embedded fashion, which we call embedded ensemble propagation. We show how this approach takes advantage of properties of modern computer architectures to improve performance by enabling reuse between samples, reducing memory bandwidth requirements, improving memory access patterns, improving opportunities for fine-grained parallelization, and reducing communication costs. We describe a software technique for implementing embedded ensemble propagation based on the use of C++ templates and describe its integration with various scientific computing libraries within Trilinos. We demonstrate improved performance, portability and scalability for the approach applied to the simulation of partial differential equations on a variety of CPU, GPU, and accelerator architectures, including up to 131,072 cores on a Cray XK7 (Titan).
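
A conceptual sketch of the ensemble idea (the actual implementation described above uses C++ templates and an ensemble scalar type within Trilinos; here a trailing NumPy axis of size E plays the role of the ensemble, and the 1-D diffusion problem with a Jacobi iteration is only illustrative):

    import numpy as np

    E = 8                                    # number of samples propagated together
    n = 63                                   # interior grid points
    rng = np.random.default_rng(0)
    kappa = 1.0 + 0.3 * rng.random(E)        # one random diffusion coefficient per sample

    # Solve -kappa * u'' = 1 on (0, 1) with zero boundary values for all E
    # samples at once: u has shape (n, E), so every array operation below is
    # amortized over the whole ensemble.
    h = 1.0 / (n + 1)
    f = np.ones((n, E))
    u = np.zeros((n, E))
    for _ in range(20000):                   # plain Jacobi iteration, enough for a demo
        u_new = np.empty_like(u)
        u_new[1:-1] = 0.5 * (u[2:] + u[:-2] + h ** 2 * f[1:-1] / kappa)
        u_new[0] = 0.5 * (u[1] + h ** 2 * f[0] / kappa)
        u_new[-1] = 0.5 * (u[-2] + h ** 2 * f[-1] / kappa)
        u = u_new

    # Compare with the exact value u(1/2) = 1/(8*kappa) at the midpoint grid node.
    print(np.abs(u[n // 2] - 1.0 / (8.0 * kappa)).max())

Because the grid, stencil, and iteration structure are identical for every ensemble member, the per-sample work reduces to wider arithmetic on contiguous data, which is the kind of reuse, improved memory access, and fine-grained parallelism the approach exploits.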


EQUINOX: Design of experiments with imperfect computer models and noisy data

Achievement: Many computer models contain unknown parameters, which need to be estimated using physical observations. One important topic in the calibration of computer models is to simultaneously tackle uncertainties from both data and imperfect models. In this area, we focused on novel design and analysis of computer experiments and statistical methods for calibrating computer models using real observations.

Significance and Impact:

Research Details:

Sponsor/Facility: Work was performed at GT and sponsored by ASCR.

PI and affiliation: Jeff Wu - Georgia Tech

Team: J. Wu, R. Tuo, and R. Joseph

Publications:

V. R. Joseph, E. Gul, and S. Ba, Maximum projection designs for computer experiments. Biometrika, 102 (2015), pp. 371-380.

R. Tuo and C. F. J. Wu, A theoretical framework for calibration in computer models: parameterization, estimation and convergence properties. SIAM/ASA Journal on Uncertainty Quantification, 4 (2016), pp. 767-795.

R. Tuo and C. F. J. Wu, Efficient calibration for imperfect computer models. Annals of Statistics, 43 (2015), pp. 2331-2352.

Overview:

Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are not available in physical experiments. Kennedy and O'Hagan [M.C. Kennedy and A. O'Hagan, J. R. Stat. Soc. Ser. B Stat. Methodol., 63 (2001), pp. 425-464] suggested an approach to estimating them by using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original Kennedy-O'Hagan (KO) method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. A numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
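
A toy sketch of L2 calibration (the true process, the imperfect simulator, and the polynomial smoother standing in for a nonparametric estimator are illustrative choices made here, not taken from the paper):

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(2)
    x_obs = np.linspace(0.0, 1.0, 30)
    true_process = lambda x: np.exp(x) * np.sin(2.0 * x)
    y_obs = true_process(x_obs) + 0.05 * rng.standard_normal(x_obs.size)   # physical data

    computer_model = lambda x, theta: np.exp(x) * np.sin(theta * x)        # imperfect simulator

    # Step 1: estimate the true process from the noisy physical observations
    # (a polynomial fit stands in for a nonparametric smoother).
    yhat = np.poly1d(np.polyfit(x_obs, y_obs, 6))

    # Step 2: choose the calibration parameter minimizing the discretized L2
    # distance between the estimated process and the computer-model output.
    x_grid = np.linspace(0.0, 1.0, 400)
    l2_dist = lambda theta: np.mean((yhat(x_grid) - computer_model(x_grid, theta)) ** 2)
    result = minimize_scalar(l2_dist, bounds=(0.5, 4.0), method="bounded")
    print("L2-calibrated theta:", result.x)   # close to 2 for this synthetic example

The point of contrast with the KO approach is that the parameter is defined directly as the minimizer of an L2 distance to the (estimated) true process, which is what makes the consistency statement possible.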


EQUINOX: Software development

Equinox Partners: