Abstracts and Talk Materials
Uncertainty Quantification in Materials Modeling
December 16 - 17, 2013


Chandler Becker (National Institute of Standards and Technology)
http://www.nist.gov/mml/msed/thermodynamics_kinetics/chandler_becker.cfm

Uncertainties Under a Deadline: Some Factors in the Engineering Use of Molecular Simulation

Keywords of the presentation: molecular simulation, interatomic potentials, force fields, uncertainties

Knowledge of uncertainties is a key requirement for the acceptance of molecular simulation as part of an engineering toolkit. One of the main sources of uncertainty in a molecular simulation is the force field (interatomic potential) used to model the material. We will discuss our efforts to provide researchers with useful information to help them make informed selections of these models in a timely manner. We will also address feedback gleaned from discussions with industrial researchers about how they use (or want to use) molecular simulations and how some of their requirements differ from those of an academic environment. We will particularly discuss the need to have molecular simulation compete on the timescales required by industry and what that means for interatomic potentials and uncertainties. Joint work with Zachary T. Trautt and Francesca Tavazza.

Michael J Demkowicz (Massachusetts Institute of Technology)
http://dmse.mit.edu/faculty/profile/demkowicz

Accurate Predictions Using Imperfect Models: An Application in Materials Research

Keywords of the presentation: Bayesian inference, reduced order models, noise models, Cahn-Hilliard equation, phase field method

Model order reduction is a pervasive and necessary activity in materials research. However, reduced order models are never perfect, and their “imperfection” is almost never accounted for. In this talk, I will give an example of how the quality of an inference in materials research may be improved by accounting for the imperfection of a reduced order model. The example considers inference of the properties of a substrate based on the behavior of a phase-separating thin film deposited on the substrate.
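
As an editorial illustration of this idea (not code from the talk), the sketch below runs a toy Bayesian inference with an imperfect reduced-order model: the data come from a fuller model, and the reduced model's error is folded into the noise variance of the likelihood. The models, parameter values, and the treatment of model error as independent Gaussian noise are all assumptions made for this sketch; the talk's Cahn-Hilliard / thin-film problem is far richer.

    import numpy as np

    # Toy inference with an imperfect reduced-order model (ROM).  The "full" model and
    # the ROM below are made-up stand-ins, not the phase-field problem of the talk.
    rng = np.random.default_rng(5)

    def full_model(k, t):            # stand-in for the accurate but expensive model
        return np.exp(-k * t) + 0.08 * np.sin(3.0 * t)

    def rom(k, t):                   # stand-in ROM: misses the oscillatory physics
        return np.exp(-k * t)

    t = np.linspace(0.0, 2.0, 25)
    k_true, sigma_meas = 1.3, 0.02
    data = full_model(k_true, t) + rng.normal(0.0, sigma_meas, t.size)

    k_grid = np.linspace(0.8, 1.8, 501)
    for sigma_model in (0.0, 0.08):  # without / with a ROM-error contribution to the noise
        sigma2 = sigma_meas**2 + sigma_model**2
        loglik = np.array([-0.5 * np.sum((data - rom(k, t))**2) / sigma2 for k in k_grid])
        post = np.exp(loglik - loglik.max())
        post /= post.sum()
        mean = np.sum(k_grid * post)
        std = np.sqrt(np.sum((k_grid - mean)**2 * post))
        print(f"ROM-error std = {sigma_model}:  k = {mean:.3f} +/- {2*std:.3f}")

Ignoring the reduced-order-model error (first line of output) yields an overconfident posterior around a biased value of k; inflating the noise to account for it widens the posterior so that it is more likely to cover the true value.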

Kurt Lejaeghere (University of Ghent (UG))
http://molmod.ugent.be/members/kurt-lejaeghere

DFT-based Thermal Properties: Three Levels of Error Management

Keywords of the presentation: DFT, semi-empirical relations, errors

It is often computationally expensive to predict finite-temperature properties of a crystal from density-functional theory (DFT). The temperature-dependent thermal expansion coefficient α, for example, is calculated from the phonon spectrum, and the melting temperature Tm can only be obtained from ab initio molecular dynamics. Alternatively, semi-empirical relations already provide good estimates at a significantly lower computational cost. These relations link complex quantities, such as α and Tm, to much simpler DFT predictors, such as the cohesive energy or the bulk modulus. The difference between these semi-empirical estimates and experiment is governed by three sources of error: the numerical accuracy of the DFT implementation [1,2], the limitations of the exchange-correlation functional [1], and the approximations involved in the semi-empirical relation itself [3]. We quantify each of these errors and find that they contribute in that order: least from the implementation dependence and most from the semi-empirical relation. Despite these deviations, some semi-empirical relations do outperform more fundamental methods. An estimate of the Grüneisen parameter based on the pressure derivative of the bulk modulus, for example, yields better predictions of the room-temperature thermal expansion coefficient than quasiharmonic phonon theory [3].

[1] K. Lejaeghere, V. Van Speybroeck, G. Van Oost, and S. Cottenier, 'Error Estimates for Solid-State Density-Functional Theory Predictions: An Overview by Means of the Ground-State Elemental Crystals', Crit. Rev. Solid State 39, 1-24 (2014). [open access at DOI: 10.1080/10408436.2013.772503]
[2] https://molmod.ugent.be/DeltaCodesDFT
[3] K. Lejaeghere, J. Jaeken, V. Van Speybroeck, and S. Cottenier, 'Ab-initio-based thermal property predictions at a low cost: an error analysis', submitted to Phys. Rev. B.
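
As an aside (not taken from the abstract or Ref. [3]), one well-known relation of this type is Slater's estimate of the Grüneisen parameter from the pressure derivative of the bulk modulus, combined with the Grüneisen relation for the volumetric thermal expansion coefficient. The sketch below evaluates it with rough, illustrative numbers for copper; the specific semi-empirical relations and error bars of Ref. [3] may differ.

    import numpy as np

    # Slater estimate of the Grüneisen parameter from the pressure derivative of the
    # bulk modulus, followed by the Grüneisen relation for thermal expansion.
    # Input numbers are rough illustrative values for Cu, not DFT results from the talk.
    B = 140e9          # bulk modulus, Pa
    B_prime = 5.5      # dB/dP, dimensionless
    V_m = 7.1e-6       # molar volume, m^3/mol
    C_v = 3 * 8.314    # Dulong-Petit molar heat capacity, J/(mol K)

    gamma = 0.5 * B_prime - 1.0 / 6.0      # Slater form (other forms change the constant)
    alpha_V = gamma * C_v / (B * V_m)      # Grüneisen relation: volumetric expansion, 1/K
    print(f"gamma ~ {gamma:.2f}, linear alpha ~ {alpha_V / 3:.2e} 1/K")

With these inputs the linear expansion coefficient comes out around 2e-5 K^-1, the right order of magnitude for copper, which is the kind of low-cost estimate the abstract compares against quasiharmonic phonon theory.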

Guang Lin (Pacific Northwest National Laboratory)
http://www.pnl.gov/science/staff/staff_info.asp?staff_num=7095

Bayesian Approaches for Spatial-Stochastic Basis Selection: Applications to Fuel Cell Predictive Modeling

Keywords of the presentation: Bayesian model selection; Compressive sensing

In this talk, two fully Bayesian methods (a Bayesian uncertainty method and a Bayesian mixture procedure) will be introduced that can evaluate generalized polynomial chaos (gPC) expansions in both the stochastic and spatial domains when the number of available basis functions is significantly larger than the size of the training data set. The methods rely on modeling the PC coefficients suitably and performing basis selection and coefficient evaluation simultaneously via a fully Bayesian stochastic procedure we have developed, called the mixed shrinkage prior (MSP). MSP assigns a prior probability on the gPC structure and conjugate priors on the expansion coefficients that can be thought of as mixtures of Ridge-LASSO shrinkage priors, in augmented form. The method offers a number of advantages over existing compressive sensing methods in the gPC literature: it recovers possible sparse structures in both the stochastic and spatial domains, and the resulting expansion can be reused directly to obtain results economically at future spatial input values. At the same time, it inherits all the advantages of Bayesian model uncertainty methods, e.g., it accounts for uncertainty about basis significance and provides interval estimation through posterior distributions. A unique highlight of the MSP procedure is that it can address heterogeneous sparsity in the spatial domain for different random dimensions. Furthermore, it yields a compromise between Ridge and LASSO regressions, and hence combines weak (l2-norm) and strong (l1-norm) shrinkage in an adaptive, data-based manner. We demonstrate the good performance of the method on elliptic stochastic partial differential equations and on proton exchange membrane fuel cell predictive modeling.
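
To make the gPC setting concrete, here is a minimal editorial sketch of Bayesian coefficient evaluation with a shrinkage prior. It uses a plain conjugate Gaussian (ridge-type) prior in one random dimension rather than the mixed shrinkage prior of the talk, and the target function, basis size, and noise level are invented for illustration.

    import numpy as np
    from numpy.polynomial.legendre import legval

    # Toy model u(xi) = exp(0.7 xi), xi ~ Uniform(-1, 1), expanded in a Legendre gPC basis
    # with more candidate basis functions than training points.
    rng = np.random.default_rng(0)
    xi_train = rng.uniform(-1.0, 1.0, 12)
    u_train = np.exp(0.7 * xi_train)

    P = 18                                   # candidate basis size > training-set size
    def design(xi):
        return np.column_stack([legval(xi, np.eye(P)[k]) for k in range(P)])
    Phi = design(xi_train)

    # Conjugate Gaussian shrinkage prior c ~ N(0, tau^2 I) with noise variance sigma^2:
    # the coefficient posterior is Gaussian and available in closed form.
    tau2, sigma2 = 1.0, 1e-4
    cov_post = np.linalg.inv(Phi.T @ Phi / sigma2 + np.eye(P) / tau2)
    mean_post = cov_post @ Phi.T @ u_train / sigma2

    xi_test = np.linspace(-1.0, 1.0, 5)
    print("surrogate:", design(xi_test) @ mean_post)
    print("exact:    ", np.exp(0.7 * xi_test))

The MSP procedure of the talk goes further by placing a prior on which basis functions enter at all and by mixing l1- and l2-type shrinkage, but the conjugate Gaussian case already shows how coefficients and their uncertainties come out of a single Bayesian update.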

Paul Nathan Patrone (University of Minnesota, Twin Cities)
http://www.ima.umn.edu/~ppatrone/

Bayesian Calibration of Molecular Dynamics Simulations for Composite Materials Properties

In recent years, the composites community has increasingly used molecular dynamics to simulate and explore material properties such as glass-transition temperature and yield strain. In virtually all such simulations, a key challenge is to select one or more input structures that represent the real polymer matrix at the nanoscale. Often an appropriate choice of inputs is not known a priori, which can lead to a large uncertainty in the simulated composite properties. In this talk, I discuss ongoing research whose goal is to determine, via Bayesian inference, an ensemble of inputs that represents a class of commercially important amine-cured epoxies. We construct an analytical approximation (i.e. a surrogate or emulator) of the simulations, treating the input structure energy and size as adjustable calibration parameters. By training the emulator with experimental results, we determine a posterior distribution over the calibration parameters, i.e., the probability that a given set of parameters corresponds to the real systems.
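
A minimal sketch of emulator-based Bayesian calibration in this spirit is given below. The "simulation runs", the single calibration parameter, the quadratic emulator, and the experimental values are all hypothetical placeholders, not data from the epoxy study.

    import numpy as np

    # Hypothetical MD "runs": glass-transition temperature Tg (K) versus one calibration
    # parameter theta describing the input structure.  Values are invented for illustration.
    theta_runs = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
    tg_runs    = np.array([395.0, 404.0, 410.0, 413.0, 415.0])

    # Cheap polynomial emulator standing in for a Gaussian-process surrogate of the simulator.
    emulator = np.polynomial.Polynomial.fit(theta_runs, tg_runs, deg=2)

    # Bayesian calibration: flat prior on theta, Gaussian likelihood against an experimental
    # Tg with a combined experiment + emulator standard deviation.
    tg_exp, sigma = 408.0, 3.0
    theta = np.linspace(0.0, 1.0, 401)
    log_post = -0.5 * ((emulator(theta) - tg_exp) / sigma) ** 2
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    print("posterior mean of theta:", np.sum(theta * post))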

Rekha Rao (Sandia National Laboratories)

Foam Property Prediction from Process Modeling

Keywords of the presentation: foam, finite element, level set, property prediction, manufacturing

We are developing computational models to elucidate the injection, expansion, and dynamic filling process for polyurethane foam such as PMDI. The polyurethane is a chemically blown foam, in which carbon dioxide is produced via the reaction of water, the blowing agent, with isocyanate. In a competing reaction, the isocyanate reacts with polyol to produce the polymer. A new kinetic model, implemented in a computational framework, decouples these two reactions into separate extent-of-reaction equations. The model predicts the polymerization reaction via condensation chemistry and the foam expansion kinetics through a Michaelis-Menten approach. Both reactions are exothermic and temperature dependent. The conservation equations, including the equations of motion, an energy balance, and two rate equations for the polymerization and foaming reactions, are solved via a stabilized finite element method. The rheology is determined experimentally and is assumed to follow a generalized-Newtonian law that depends on the degree of cure and temperature but is not viscoelastic.
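
As a rough editorial illustration of such a decoupled kinetic description (not the actual PMDI model or its calibrated parameters), the sketch below integrates a zero-dimensional pair of extent-of-reaction equations, one Arrhenius condensation reaction and one Michaelis-Menten-style foaming reaction, coupled through an adiabatic energy balance.

    import numpy as np
    from scipy.integrate import solve_ivp

    R = 8.314

    def rates(t, y):
        # y = [extent of polymerization, extent of foaming, temperature (K)]
        # All rate constants, activation energies, and adiabatic temperature rises are
        # made up for illustration; they are not the calibrated PMDI values.
        ap, af, T = y
        k_p = 5.0e4 * np.exp(-40e3 / (R * T))            # Arrhenius condensation rate, 1/s
        dap = k_p * (1.0 - ap) ** 2                      # second order in unreacted groups
        vmax = 1.0e5 * np.exp(-42e3 / (R * T))           # Michaelis-Menten foaming rate
        daf = vmax * (1.0 - af) / (0.3 + (1.0 - af))
        dT = 120.0 * dap + 60.0 * daf                    # exothermic heating (K per unit extent)
        return [dap, daf, dT]

    sol = solve_ivp(rates, (0.0, 300.0), [0.0, 0.0, 300.0], max_step=1.0)
    ap, af, T = sol.y[:, -1]
    rho = 1200.0 / (1.0 + 30.0 * af)                     # density drops as gas is generated
    print(f"extents: {ap:.2f}, {af:.2f};  T = {T:.0f} K;  density ~ {rho:.0f} kg/m^3")

In the full model these rate equations are advected with the flow, the heat release enters the energy balance, and the local gas extent sets the density field used by the level set front tracking described below.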

The conservation equations are combined with a level set free-surface algorithm to determine the location of the foam front as it expands over time. The model predicts the velocity, temperature, viscosity, free surface location, and extent of polymerization of the foam. In addition, it predicts the local density and density gradients based on the Michaelis-Menten kinetics of foam expansion. Results from the model are compared to experimental flow visualization data and post-test CT data for the density. Recent work seeks to couple the fluid-thermal simulations with a nonlinear viscoelastic model for the structural response, giving us the means to predict dimensional changes during manufacture and aging. Property predictions from the fluid-thermal model are used to include density and cure gradients in the structural response, resulting in inhomogeneous material properties. This work will also be discussed.

*Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.

Peter A. Schultz (Sandia National Laboratories)
http://www.cs.sandia.gov/~paschul/

The Journey from Atoms to Assessed Engineering with Quantitative Confidence: Purpose, Principles, and Some Practice

Keywords of the presentation: Verification and validation, uncertainty quantification, atomistic simulations, multiscale

How does one go from good science to good engineering, and conversely, how does one reach into sub-continuum scale physics to add greater fidelity to an engineering scale analysis meant to inform high-consequence decisions? For scientific investigations at the atomistic scale, the numerical analyses that feed into quantitative assessments of uncertainties at the engineering scale can seem inaccessibly remote. A seemingly numerically impenetrable series of upscalings in a multiscale sequence intervenes, usually without a predetermined, well-defined roadmap of causal relationships. What's the point of sub-continuum uncertainties? For every aspect contributing to an engineering assessment, a measure of quantitative confidence is required and valuable. This talk discusses verification and validation, and uncertainty quantification, in the context of physical simulations at lower length and time scales that are the foundation of a nascent multiscale sequence in an uncharted landscape. I will illustrate strategies that focus on the central principle of obtaining and assessing quantitative confidence in results, identifying meaningful sources of uncertainty.

Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s NNSA under contract DE-AC04-94AL85000.

Michael R. Shirts (University of Virginia)
http://faculty.virginia.edu/shirtsgroup

Statistical Mechanics, Statistical Error, Validation and Parameterization in Atomistic Simulations of Soft Materials

Keywords of the presentation: statistical mechanics, atomistic simulation, validation, statistical error

When performing simulations of soft materials such as proteins and polymers, physical properties of interest always have associated statistical uncertainty. This inability to calculate properties exactly and to understand the uncertainties precisely is one of the most important factors limiting our understanding of how predictive such simulations actually are, as well as limiting our ability to improve the parameterization of force fields for these materials.

I will discuss several topics related to improving the reliability and consistency of such simulations, placing them in the context of community needs for molecular simulation to be truly useful, reliable, and predictive in molecular design and research. For example, I will discuss uncertainty quantification and minimization as well as validation of thermodynamic ensembles. I will discuss our efforts to benchmark free energy calculation methods and systems, both for small molecules and for protein-ligand binding energies, including error estimates and sensitivity to simulation parameters. Finally, I will discuss ongoing efforts to improve parameterization of biomolecular force fields using experimental data, including Bayesian approaches.
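
As one concrete example of the statistical-error issue (an editorial sketch, not the speaker's code), block averaging is a standard way to estimate the uncertainty of a time-correlated simulation observable; the autoregressive series below stands in for an MD time series.

    import numpy as np

    # Block averaging for a correlated time series.  An AR(1) process stands in for an
    # MD observable such as the potential energy along a trajectory.
    rng = np.random.default_rng(1)
    n, phi = 100_000, 0.95
    x = np.empty(n)
    x[0] = 0.0
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()

    naive_sem = x.std(ddof=1) / np.sqrt(n)               # ignores correlation: too small

    block = 1_000                                        # block length >> correlation time
    means = x[: n // block * block].reshape(-1, block).mean(axis=1)
    block_sem = means.std(ddof=1) / np.sqrt(len(means))

    print(f"naive SEM: {naive_sem:.4f}   block-averaged SEM: {block_sem:.4f}")

For this series the naive standard error is several times too small, which is exactly the kind of over-optimism about statistical uncertainty that undermines both validation and force-field parameterization.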

Ralph Smith (North Carolina State University)
http://www4.ncsu.edu/~rsmith/

Prediction Interval Construction for Smart Material Systems in the Presence of Model Discrepancy

Keywords of the presentation: Prediction Intervals, model discrepancies, Bayesian model calibration

In this presentation, we will discuss issues pertaining to the construction of prediction intervals in the presence of model biases or discrepancies. We will illustrate this in the context of models for smart material systems but the issues are relevant for a range of physical and biological models. In many cases, model discrepancies are neglected during Bayesian model calibration. However, this can yield nonphysical parameter values for applications in which the effects of unmodeled dynamics are significant. It can also produce prediction intervals that are inaccurate in the sense that they do not include the correct percentage of future experimental or numerical model responses. This problem is especially pronounced when making extrapolatory predictions such as predicting in time. In this presentation, we will discuss techniques to quantify model discrepancy terms in a manner that yields correct prediction intervals. We will also indicate open questions and future research directions.
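
To make the effect concrete, the least-squares sketch below (a stand-in for the Bayesian calibration discussed in the talk; the model, discrepancy, and data are invented) calibrates a linear model to data generated with a quadratic discrepancy, with and without an explicit discrepancy term, and compares residual-based 95% prediction half-widths at an extrapolation point.

    import numpy as np

    # Truth: y = 2.0 x + 0.4 x^2 + noise.   Model: f(x; theta) = theta x.
    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 1.0, 30)
    y = 2.0 * x + 0.4 * x**2 + rng.normal(0.0, 0.05, x.size)

    # Calibration that ignores the discrepancy: theta absorbs the quadratic term and is biased.
    theta_hat = (x @ y) / (x @ x)

    # Calibration with an explicit discrepancy delta(x) = d x^2 estimated jointly.
    A = np.column_stack([x, x**2])
    theta_d, d_hat = np.linalg.lstsq(A, y, rcond=None)[0]

    x_star = 1.5                                            # extrapolation point
    truth = 2.0 * x_star + 0.4 * x_star**2
    for label, pred, resid in [
        ("no discrepancy  ", theta_hat * x_star, y - theta_hat * x),
        ("with discrepancy", theta_d * x_star + d_hat * x_star**2, y - A @ np.array([theta_d, d_hat])),
    ]:
        half = 1.96 * resid.std(ddof=1)
        print(f"{label}: predict {pred:.2f} +/- {half:.2f}   (truth {truth:.2f})")

The discrepancy-free calibration both shifts the prediction and reports an interval that misses the true extrapolated value, which is precisely the coverage failure the talk addresses with statistically rigorous discrepancy models.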

Elaine Spiller (Marquette University)

Probabilistic Hazard Mapping and Uncertainty Quantification Based on Granular Flow Simulations

PDE models of granular flows are invaluable tools for developing probabilistic hazard maps for volcanic landslides, but they are far from perfect. Epistemic uncertainty -- uncertainty due to a lack of model refinement -- arises through assumptions made in physical models, numerical approximation, and imperfect statistical models. In the context of geophysical hazard mapping, we propose a surrogate-based methodology that efficiently assesses the impact of various uncertainties, enabling a quick yet methodical comparison of the effects of uncertainty and error on computer model output.
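
A schematic version of this workflow (purely illustrative; the "flow simulator", input distributions, and threshold below are invented, and the real methodology uses statistical emulators of PDE-based flow simulations) looks like the following: fit a cheap surrogate to a modest design of simulator runs, then Monte Carlo the uncertain inputs through the surrogate to estimate an exceedance probability at a map location.

    import numpy as np

    rng = np.random.default_rng(3)

    def flow_depth(volume, friction):
        # Hypothetical stand-in for one expensive granular-flow simulation:
        # returns flow depth (m) at a fixed map location.
        return np.maximum(0.0, 0.8 * np.log1p(volume / 1e5) - 0.12 * friction)

    # Small design of "simulator" runs used to fit a simple regression surrogate.
    V = rng.uniform(1e5, 5e6, 40)                   # flow volume, m^3
    phi = rng.uniform(8.0, 20.0, 40)                # basal friction angle, degrees
    X = np.column_stack([np.ones(40), np.log(V), phi])
    coef = np.linalg.lstsq(X, flow_depth(V, phi), rcond=None)[0]

    # Monte Carlo over the uncertain inputs using only the surrogate.
    Vs = np.exp(rng.normal(np.log(8e5), 0.8, 200_000))
    phis = rng.uniform(8.0, 20.0, 200_000)
    depth = np.column_stack([np.ones(Vs.size), np.log(Vs), phis]) @ coef
    print("P(depth > 1 m) at this location:", (depth > 1.0).mean())

Repeating the last step at every map location yields a probabilistic hazard map, and because the surrogate is cheap it can be rebuilt or resampled under alternative assumptions to expose the epistemic uncertainties described above.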

Yan Wang (Georgia Institute of Technology)
http://www.me.gatech.edu/faculty/wang-y

Quantifying Model Form Uncertainty in Molecular Dynamics Simulation

Keywords of the presentation: model form uncertainty, polynomial chaos, hidden Markov model, Bayes’ rule, model validation

Molecular dynamics (MD) simulation has been widely used in atomistic modeling of material structure and behavior. As with all modeling and simulation methods, results from MD are susceptible to a variety of uncertainties. A major source of model form and parameter uncertainties in MD is the interatomic potential function. In this study, the effect of parameter uncertainty from interatomic potential functions on MD is investigated and quantified using polynomial chaos expansion, which allows for the efficient calculation of output probability distributions without extensive simulation evaluations. A generalized hidden Markov model based on a generalized interval Bayes’ rule is used for cross-scale model validation, in which the systematic error in experimental measurement data is incorporated to improve the robustness of the assessment.
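
As an illustration of the polynomial chaos step only (the MD observable below is a cheap made-up stand-in, and the parameter values are not from a fitted potential), a non-intrusive Hermite expansion in an uncertain Lennard-Jones well depth can be built by quadrature:

    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite_e import hermegauss, hermeval

    # Uncertain Lennard-Jones well depth: eps = eps_mean + eps_std * xi, xi ~ N(0, 1).
    eps_mean, eps_std = 0.0103, 0.0005           # eV; illustrative numbers only

    def md_observable(eps):
        # Hypothetical stand-in for an MD-computed quantity of interest.
        return 1.2 * np.sqrt(eps / eps_mean) + 0.05 * (eps / eps_mean) ** 2

    order = 4
    nodes, weights = hermegauss(order + 1)       # Gauss-Hermite(e) quadrature for N(0, 1)
    weights = weights / weights.sum()            # normalize to a probability measure
    samples = md_observable(eps_mean + eps_std * nodes)

    # PCE coefficients c_k = E[u(xi) He_k(xi)] / k!   (since E[He_k^2] = k!)
    coeffs = [np.sum(weights * samples * hermeval(nodes, np.eye(order + 1)[k])) / factorial(k)
              for k in range(order + 1)]

    mean = coeffs[0]
    var = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
    print(f"observable mean = {mean:.4f}, std = {np.sqrt(var):.4f}")

The same coefficients give the full output distribution by sampling the expansion, which is how parameter uncertainty in the potential can be propagated to MD predictions without a large number of simulation runs.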

Dongbin Xiu (University of Utah)
http://www.sci.utah.edu/~dxiu/

Managing Computational Complexity: UQ with Simulation Models of Different Fidelities

Keywords of the presentation: Multiple fidelity, UQ algorithm

UQ computations can be highly time consuming. When the underlying deterministic systems are very large, high-fidelity UQ simulations can be prohibitively expensive, if not impossible. However, in many practical problems there often exists a set of models, each with a different fidelity or accuracy for the problem. Typically, high-fidelity models are highly accurate but expensive to run; low-fidelity models, though less accurate, capture important features of the problem and are cheap to simulate. It is therefore desirable to take advantage of all of these models and construct a practical UQ algorithm that conducts reliable and accurate UQ analysis at a reasonable simulation cost.

In this talk, we present a newly developed UQ algorithm that accomplishes this goal. It utilizes both the speed of the low-fidelity models and the accuracy of the high-fidelity models, and is able to provide accurate UQ results. Furthermore, the algorithm is rigorous, as its numerical error bound has been established. We will present both the mathematical framework and implementation details, and then illustrate its efficacy via a set of examples.
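
A simple control-variate flavor of multifidelity estimation (named plainly as such; it is not necessarily the specific algorithm of the talk, and both "models" below are invented stand-ins) shows the basic economics: many cheap low-fidelity evaluations correct a mean estimate built from only a few expensive high-fidelity runs.

    import numpy as np

    rng = np.random.default_rng(4)

    def high_fidelity(z):                        # expensive model (stand-in)
        return np.exp(0.9 * z) + 0.05 * np.sin(5.0 * z)

    def low_fidelity(z):                         # cheap, correlated, but biased model (stand-in)
        return 1.0 + 0.9 * z + 0.4 * z ** 2

    z_hi = rng.uniform(-1.0, 1.0, 20)            # few affordable high-fidelity runs
    z_lo = rng.uniform(-1.0, 1.0, 20_000)        # many cheap low-fidelity runs

    y_hi, y_lo_paired = high_fidelity(z_hi), low_fidelity(z_hi)
    alpha = np.cov(y_hi, y_lo_paired)[0, 1] / np.var(y_lo_paired, ddof=1)

    estimate = y_hi.mean() + alpha * (low_fidelity(z_lo).mean() - y_lo_paired.mean())
    print("high-fidelity-only mean estimate:", y_hi.mean())
    print("multifidelity mean estimate     :", estimate)

The correction term uses the low-fidelity model where it is cheap and the paired high-/low-fidelity runs to learn how the two are related; the algorithm presented in the talk provides a rigorous construction of this kind of combination together with an established error bound.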

Nicholas J. Zabaras (Cornell University)
http://mpdc.mae.cornell.edu/

Towards Predictive Modeling in Heterogeneous Media

Keywords of the presentation: Heterogeneous media, Stochastic Modeling, Curse of Dimensionality, Coarse Graining, Probabilistic Graphical Models, Polycrystals

Predictive modeling of physical processes in heterogeneous media requires innovations in mathematical and computational thinking. While multiscale approaches have been successful in modeling the effects of fine scales on macroscopic response, a significant grand challenge remains in understanding the effects of topological uncertainties in the characterization of properties. We will briefly address major limitations of physical modeling in heterogeneous media, including data-driven models of stochastic inputs, the curse of stochastic dimensionality, stochastic coarse graining, and the development of inexpensive surrogate stochastic models. A number of examples will be discussed, including an information-theoretic approach to coarse graining in materials science and the deformation of random polycrystalline materials.
