Abstracts and Talk Materials
Large-scale Inverse Problems and Quantification of Uncertainty
June 6 - 10, 2011

Moritz Allmaras (Texas A & M University)
Yulia Hristova (University of Michigan)

Poster- Detecting small low emission radiating sources

In order to prevent smuggling of highly enriched nuclear material through border controls, new advanced detection schemes need to be developed. Typical issues faced in this context are sources with very low emission against a dominating natural background radiation. Sources are expected to be small and shielded and hence cannot be detected from measurements of radiation levels alone. We propose a detection method that relies on the geometric singularity of small sources to distinguish them from the more uniform background. The validity of our approach can be justified using properties of related techniques from medical imaging. Results of numerical simulations are presented for collimated and Compton-type measurements in 2D and 3D.

Mark Berliner (The Ohio State University)

Hierarchical Bayesian Modeling: Why and How
June 10, 2011

Keywords of the presentation: borehole data; ensemble modeling; glacial dynamics

After a brief review of the hierarchical Bayesian viewpoint, I will present examples of interest in the geosciences. The first is a paleoclimate setting. The problem is to use observed temperatures at various depths and the heat equation to infer surface temperature history. The second combines an elementary physical model with observational data in modeling the flow of the Northeast Ice-Stream in Greenland. The next portion of the talk presents ideas and examples for incorporating output from large-scale computer models (e.g., climate system models) into hierarchical Bayesian models.

Corey Bryant (The University of Texas at Austin)
Rebecca Elizabeth Morrison (The University of Texas at Austin)

Poster- Model Cross-Validation: An example from a shock-tube experiment

The decision to incorporate cross-validation into one's validation scheme raises immediate questions, not the least of which is: how should one partition the data into calibration and validation sets? We answer this question systematically; indeed, we present an algorithm to find the optimal partition of the data subject to some constraints. While doing this, we address two critical issues: 1) that the model be evaluated with respect to its predictions of the quantity of interest and its ability to reproduce the data, and 2) that the model be highly challenged by the validation set, assuming it is properly informed by the calibration set. This method also relies on the interaction between the experimentalist and/or modeler, who understand the physical system and the limitations of the model; the decision-maker, who understands and can quantify the cost of model failure; and us, the computational scientists, who strive to determine if the model satisfies both the modeler's and decision-maker's requirements. We also note that our framework is quite general, and may be applied to a wide range of problems. Here, we illustrate it through a specific example involving a data reduction model for an ICCD camera from a shock-tube experiment.
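The combinatorial core of the partitioning question can be illustrated with a toy search: among all ways of holding out a few experiments for validation, pick the split whose validation set is hardest to predict from the calibration set. This is only a sketch under an invented distance-based "challenge" criterion; the `inputs` array and `challenge` function are hypothetical stand-ins, not the authors' algorithm, whose criteria involve the quantity of interest and the cost of model failure.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(10)
inputs = rng.uniform(0.0, 1.0, size=10)    # hypothetical experimental conditions

def challenge(val_idx):
    """How far the validation points sit from the nearest calibration point."""
    cal = np.delete(inputs, list(val_idx))
    return min(abs(inputs[i] - c) for i in val_idx for c in cal)

# Exhaustive search over all 3-of-10 hold-out partitions:
best = max(combinations(range(10), 3), key=challenge)
```

For realistic numbers of experiments the search space stays small enough to enumerate, which is why an optimal (rather than heuristic) partition is feasible.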

Tan Bui-Thanh (The University of Texas at Austin)

Poster - Scalable parallel algorithms for uncertainty quantification in high dimensional inverse problems

Quantifying uncertainties in large-scale forward and inverse PDE simulations has emerged as the central challenge facing the field of computational science and engineering. In particular, when the forward simulations require supercomputers, and the uncertain parameter dimension is large, conventional uncertainty quantification methods fail dramatically. Here we address uncertainty quantification in large-scale inverse problems. We adopt the Bayesian inference framework: given observational data and their uncertainty, the governing forward problem and its uncertainty, and a prior probability distribution describing uncertainty in the parameters, find the posterior probability distribution over the parameters. The posterior probability density function (pdf) is a surface in high dimensions, and the standard approach is to sample it via a Markov-chain Monte Carlo (MCMC) method and then compute statistics of the samples. However, the use of conventional MCMC methods becomes intractable for high dimensional parameter spaces and expensive-to-solve forward PDEs.

Under the Gaussian hypothesis, the mean and covariance of the posterior distribution can be estimated from an appropriately weighted regularized nonlinear least squares optimization problem. The solution of this optimization problem approximates the mean, and the inverse of the Hessian of the least squares function (at this point) approximates the covariance matrix. Unfortunately, straightforward computation of the nominally dense Hessian is prohibitive, requiring as many forward PDE-like solves as there are uncertain parameters. However, the data are typically informative about a low dimensional subspace of the parameter space. We exploit this fact to construct a low rank approximation of the Hessian and its inverse using matrix-free Lanczos iterations, which typically requires a dimension-independent number of forward PDE solves. The UQ problem thus reduces to solving a fixed number of forward and adjoint PDE problems that resemble the original forward problem. The entire process is thus scalable with respect to forward problem dimension, uncertain parameter dimension, observational data dimension, and number of processor cores. We apply this method to the Bayesian solution of an inverse problem in 3D global seismic wave propagation with tens of thousands of parameters, for which we observe two orders of magnitude speedups.
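The low-rank strategy can be sketched in a few lines: wrap Hessian-vector products (each a forward-plus-adjoint solve in the real setting) in a matrix-free operator, extract the dominant eigenpairs with a Lanczos-type solver, and apply the Sherman-Morrison-Woodbury identity for the posterior covariance. Everything below is a synthetic stand-in (random `U`, prescribed spectrum `d`, identity prior covariance), not the seismic application itself.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(0)
n = 500                                   # parameter dimension
# Hypothetical data-misfit Hessian with a rapidly decaying spectrum,
# mimicking data that inform only a low-dimensional subspace:
U, _ = np.linalg.qr(rng.standard_normal((n, 20)))
d = 10.0 ** np.arange(2, -18, -1.0)       # 20 eigenvalues, few dominate

def hessian_matvec(v):
    # In the real setting this is one forward plus one adjoint PDE solve.
    return U @ (d * (U.T @ v))

H_misfit = LinearOperator((n, n), matvec=hessian_matvec, dtype=float)

# Low-rank approximation from a handful of matrix-free Lanczos iterations:
r = 8
lam, V = eigsh(H_misfit, k=r)             # dominant eigenpairs

# Posterior covariance via Sherman-Morrison-Woodbury, assuming an identity
# prior covariance:  (I + V D V^T)^{-1} = I - V [D/(I+D)] V^T
def posterior_cov_matvec(w):
    return w - V @ ((lam / (1.0 + lam)) * (V.T @ w))
```

Because only eight eigenpairs are needed, the cost is a handful of Hessian applications regardless of the parameter dimension, which is the scalability argument made above.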

Julianne Chung (Virginia Polytechnic Institute and State University)

Poster- Designing Optimal Spectral Filters for Inverse Problems

Spectral filtering suppresses the amplification of errors when computing solutions to ill-posed inverse problems; however, selecting good regularization parameters is often expensive. In many applications, data is available from calibration experiments. In this poster, we describe how to use this data to pre-compute optimal spectral filters. We formulate the problem in an empirical Bayesian risk minimization framework and use efficient methods from stochastic and numerical optimization to compute optimal filters. Our formulation of the optimal filter problem is general enough to use a variety of error metrics, not just the mean square error. Numerical examples from image deconvolution illustrate that our proposed filters perform consistently better than well-established filtering methods.
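For the mean-square-error case the idea can be sketched directly: given an SVD of the forward operator and spectral second moments estimated from calibration data, the empirically optimal filter factors take a Wiener-like closed form. The operator, training distribution, and noise level below are invented for illustration; the poster's framework also covers other error metrics, for which the filter is found by numerical optimization rather than in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
# Invented ill-posed forward operator A = U diag(s) V^T with decaying
# singular values, plus a known noise level:
s = 1.0 / (1.0 + np.arange(n)) ** 2
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(s) @ V.T
sigma = 1e-3

# Calibration data: training signals whose spectral content decays.
n_train = 200
X = V @ (rng.standard_normal((n, n_train)) / (1.0 + np.arange(n))[:, None])

# Empirical-risk-optimal filter for the MSE metric: componentwise
# Wiener factors built from the training second moments.
m2 = np.mean((V.T @ X) ** 2, axis=1)
phi = s ** 2 * m2 / (s ** 2 * m2 + sigma ** 2)

def reconstruct(b):
    """Spectrally filtered reconstruction from data b."""
    return V @ (phi * (U.T @ b) / s)
```

The filter is computed once from the calibration data and then applied to each new datum at the cost of two matrix-vector products.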

Tiangang Cui (University of Auckland)

Poster- Adaptive Error Modelling in MCMC Sampling for Large Scale Inverse Problems

We present a new adaptive delayed-acceptance Metropolis-Hastings (ADAMH) algorithm that adapts to the error in a reduced order model to enable efficient sampling from the posterior distribution arising in complex inverse problems. This use of adaptivity differs from existing algorithms that tune random walk proposals, though ADAMH also implements that. We build on the conditions given by Roberts and Rosenthal (2007) to give practical constructions that are provably convergent. The components are the delayed-acceptance MH of Christen and Fox (2005), the enhanced error model of Kaipio and Somersalo (2007), and adaptive MCMC (Haario et al., 2001; Roberts and Rosenthal, 2007).
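A minimal sketch of the delayed-acceptance mechanism (not the full ADAMH algorithm, which additionally adapts the error model and the proposal): a cheap surrogate screens each proposal, and only survivors pay for an exact posterior evaluation, with a second accept/reject factor that restores the exact target. The one-dimensional densities are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post(x):
    """'Expensive' target: stands in for a posterior needing a full model run."""
    return -0.5 * x ** 2

def log_approx(x):
    """Cheap reduced-order surrogate, deliberately biased (wider than truth)."""
    return -0.5 * (x / 1.2) ** 2

def da_mh(n_steps, step=1.0):
    x, lp, la = 0.0, log_post(0.0), log_approx(0.0)
    chain, expensive_evals = [], 0
    for _ in range(n_steps):
        y = x + step * rng.standard_normal()
        lay = log_approx(y)
        # Stage 1: screen the proposal with the surrogate only.
        if np.log(rng.random()) < lay - la:
            expensive_evals += 1
            lpy = log_post(y)
            # Stage 2: correct with the exact target; this factor preserves
            # the true posterior as the invariant distribution.
            if np.log(rng.random()) < (lpy - lp) - (lay - la):
                x, lp, la = y, lpy, lay
        chain.append(x)
    return np.array(chain), expensive_evals

chain, n_expensive = da_mh(20000)
```

Proposals rejected at stage 1 never touch the expensive model, which is where the computational saving comes from.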

We applied ADAMH to calibrate large scale numerical models of geothermal fields. It shows good computational and statistical efficiencies on measured data. We expect that ADAMH will allow significant improvement in computational efficiency when implementing sample-based inference in other large scale inverse problems.

Louis J. Durlofsky (Stanford University)

Data Assimilation and Efficient Forward Modeling for Subsurface Flow
June 8, 2011

Keywords of the presentation: subsurface flow; reservoir simulation; history matching; reduced-order modeling; kernel principal component analysis; trajectory piecewise linearization

In this talk I will present computational procedures applicable for the real-time model-based management and optimization of subsurface flow operations such as oil production and geological carbon storage. Specifically, the use of kernel principal component analysis (KPCA) for representing geostatistical models in data assimilation procedures and the use of reduced-order models for efficient flow simulations will be described. KPCA-based representations will be shown to better capture multipoint spatial statistics, which gives them an advantage over standard Karhunen-Loeve procedures for representing complex geological systems. The use of KPCA within a gradient-based data assimilation (history matching) procedure will be illustrated. Next, a reduced-order modeling technique applicable for forward simulations will be described. This approach, called trajectory piecewise linearization (TPWL), entails linearization around previously simulated states and projection into a low-dimensional subspace using proper orthogonal decomposition. The method requires training runs that are performed using a full-order model, though subsequent simulations are very fast. The performance of the TPWL approach and its use in optimization will be demonstrated for realistic field problems.
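The projection step underlying the reduced-order model can be sketched for a linear full-order model: collect snapshots from a training run, take their dominant left singular vectors as a POD basis, and project the dynamics onto that basis. TPWL would additionally linearize a nonlinear simulator around stored training states; the dynamics below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# Stand-in full-order model: stable linear dynamics x_{k+1} = A x_k with a
# 10-dimensional dominant eigenspace.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.r_[np.linspace(0.95, 0.5, 10), np.full(n - 10, 1e-6)]
A = Q @ np.diag(eigs) @ Q.T

# Training run: collect snapshots of the full-order trajectory.
x = rng.standard_normal(n)
snaps = []
for _ in range(100):
    x = A @ x
    snaps.append(x)
S = np.array(snaps).T                     # n x 100 snapshot matrix

# POD: dominant left singular vectors of the snapshot matrix.
Phi, sv, _ = np.linalg.svd(S, full_matrices=False)
Phi_r = Phi[:, :10]                       # reduced basis

# Galerkin projection of the dynamics onto the reduced subspace.
A_r = Phi_r.T @ A @ Phi_r
```

After the offline training run, each reduced time step costs a 10x10 instead of a 200x200 matrix product, which is the "training is expensive, subsequent simulations are very fast" trade-off described above.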

Richard Dwight (Technische Universiteit te Delft)

Poster- Bayesian Inference for Data Assimilation using Least-Squares Finite Element Methods

It has recently been observed that Least-Squares Finite Element methods (LS-FEMs) can be used to assimilate experimental data into approximations of PDEs in a natural way. The approach was shown to be effective without regularization terms, and can handle substantial noise in the experimental data without filtering. Of great practical importance is that it is not significantly more expensive than a single physical simulation. However, the method as presented so far in the literature is not set in the context of an inverse-problem framework, so that, for example, the meaning of the final result is unclear. In this paper it is shown that the method can be interpreted as finding a maximum a posteriori (MAP) estimator in a Bayesian approach to data assimilation, with normally distributed observational noise and a Bayesian prior based on an appropriate norm of the governing equations. In this setting the method may be seen to have several desirable properties: most importantly, discretization and modelling error in the simulation code does not affect the solution in the limit of complete experimental information, so these errors do not have to be modelled statistically. The Bayesian interpretation also better justifies the choice of the method, and some useful generalizations become apparent. The technique is applied to incompressible Navier-Stokes flow in a pipe with added velocity data, where its effectiveness, robustness to noise, and application to inverse problems are demonstrated.

Virginie Ehrlacher (École des Ponts ParisTech)

Poster - Convergence of a greedy algorithm for high-dimensional convex nonlinear problems

In this work, we present a greedy algorithm based on a tensor product decomposition, whose aim is to compute the global minimum of a strongly convex energy functional. We prove the convergence of our method provided that the gradient of the energy is Lipschitz on bounded sets. This is a generalization of the result which was proved by Le Bris, Lelievre and Maday (2009) in the case of a linear high dimensional Poisson problem. The main interest of this method is that it can be used for high dimensional nonlinear convex problems. We illustrate this algorithm on a prototypical example for uncertainty propagation on the obstacle problem.
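For the quadratic (Poisson-type) special case mentioned above, the greedy algorithm is easy to sketch: at each outer step, fit a single rank-one term r⊗s to the current residual by alternating minimization of the energy, then add it to the iterate. The discretization and source term below are illustrative choices, not the paper's test problem.

```python
import numpy as np

def lap1d(n):
    # 1-D Dirichlet Laplacian: SPD, so the energy is strongly convex.
    return ((n + 1) ** 2) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

n1, n2 = 20, 20
A1, A2 = lap1d(n1), lap1d(n2)
t1 = np.linspace(0.0, 1.0, n1 + 2)[1:-1]
t2 = np.linspace(0.0, 1.0, n2 + 2)[1:-1]
# Smooth illustrative source; the energy is E(U) = 0.5<U, A1 U + U A2> - <F, U>.
F = np.outer(t1 * (1 - t1), np.exp(-t2)) + np.outer(np.sin(np.pi * t1), t2)

rng = np.random.default_rng(4)
U = np.zeros((n1, n2))                     # current low-rank iterate
for _ in range(15):                        # greedy loop: one rank-1 term each
    G = F - (A1 @ U + U @ A2)              # residual source for the correction
    if np.linalg.norm(G) < 1e-10 * np.linalg.norm(F):
        break
    r, s = rng.standard_normal(n1), rng.standard_normal(n2)
    for _ in range(25):                    # alternating minimization in (r, s)
        r = np.linalg.solve((s @ s) * A1 + (s @ A2 @ s) * np.eye(n1), G @ s)
        s = np.linalg.solve((r @ r) * A2 + (r @ A1 @ r) * np.eye(n2), G.T @ r)
    U += np.outer(r, s)
```

Each inner solve involves only a one-dimensional operator, which is what makes the approach attractive when the tensor product has many factors (high stochastic dimension).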

Colin Fox (University of Otago)

The best we can do with MCMC, and how to do better.
June 6, 2011

Keywords of the presentation: MCMC, sample-based inference, inverse problems

Sample-based inference is a great way to summarize inverse and predictive distributions arising in large-scale applications. The best current technology for drawing samples is the family of MCMC algorithms, with the latest algorithms enabling comprehensive solution of substantial geophysical problems. However, for the largest-scale applications the geometric convergence of MCMC needs to be improved upon. One source of ideas is the algorithms from computational optimization. Developing the computational science of sampling algorithms is essential, for which a suite of test problems, using low-level, mid-level, and high-level representations, could be useful in focusing efforts in the community.

Roger G. Ghanem (University of Southern California)

Hierarchical Bayesian Models for Uncertainty Quantification and Model Validation
June 9, 2011

Keywords of the presentation: Polynomial Chaos, Bayes Theorem, Stochastic Inverse Problems.

Recent developments with polynomial chaos expansions with random coefficients facilitate the accounting for subscale features, not captured in standard probabilistic models. These representations provide a geometric characterization of random variables and processes, which is quite distinct from the characterizations (in terms of probability density functions) typically adapted to Bayesian analysis. Given the importance of Bayes theorem within probability theory, it is important to synthesize the connection between these two representations. In this talk, we will describe a hierarchical Bayesian framework that introduces polynomial chaos expansions with random parameters as a consequence of Bayesian data assimilation. We will provide insight into the behavior and use of these expansions and exemplify them through a multiscale application from thermal science. Specifically, information collected from fine scale simulations is used to construct stochastic reduced order models. These coarse models are indexed in terms of specimen-to-specimen variability and also in terms of variability in their subscale features. The ability of these doubly-stochastic expansions to improve the predictive value of model-based simulations is highlighted.

Omar Ghattas (The University of Texas at Austin)
Karen E. Willcox (Massachusetts Institute of Technology)

Workshop Introduction
June 6, 2011

This lecture provides an introduction to the IMA Workshop on Large-scale Inverse Problems and Quantification of Uncertainty. We present context and motivation for the workshop topic along with a discussion of open research challenges. We will discuss workshop goals and provide a brief overview of the workshop schedule.

Nathan Gibson (Oregon State University)

Poster- Solution Method for ODEs with Random Forcing

We consider numerical methods for finding approximate solutions to ODEs whose parameters are distributed according to some probability law. In particular, we focus on those with forcing functions that have random frequencies. We apply a generalized Polynomial Chaos approach to solving such equations and introduce a method for determining the system of decoupled, deterministic equations for the gPC coefficients which avoids direct numerical integration by taking advantage of properties of orthogonal polynomials.
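As background for the gPC machinery, the standard projection onto probabilists' Hermite polynomials can be sketched as follows. Note the contrast with the poster: here the coefficients are computed by Gauss-Hermite quadrature, whereas the poster's method is precisely about avoiding such direct integration via orthogonality. For u(ξ) = exp(ξ) with ξ standard normal the coefficients are known in closed form (e^{1/2}/k!), which makes a convenient check.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def gpc_coeffs(u, order, quad_pts=60):
    """Project u(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials He_k."""
    x, w = He.hermegauss(quad_pts)         # Gauss-Hermite_e nodes and weights
    w = w / np.sqrt(2.0 * np.pi)           # normalize to the N(0,1) measure
    fact = np.cumprod(np.r_[1.0, np.arange(1.0, order + 1)])  # k! = <He_k, He_k>
    c = np.empty(order + 1)
    for k in range(order + 1):
        Hk = He.hermeval(x, np.eye(order + 1)[k])   # evaluate He_k at nodes
        c[k] = np.sum(w * u(x) * Hk) / fact[k]      # <u, He_k> / <He_k, He_k>
    return c

c = gpc_coeffs(np.exp, order=8)
```

Once the coefficients are known, moments follow immediately: the mean is c[0] and the variance is a weighted sum of the squared higher coefficients.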

Albert B. Gilg (Technical University of Munich )
Utz Wever (Siemens AG)

Poster- Robust Design for Industrial Applications

Industrial product and process designs often exploit physical limits to improve performance. In this regime uncertainty originating from fluctuations during fabrication and small disturbances in system operations severely impacts product performance and quality. Design robustness becomes a key issue in optimizing industrial designs. We present examples of challenges and solution approaches implemented in our robust design tool RoDeO.

Eldad Haber (University of British Columbia)

Design of simultaneous source
June 8, 2011

In recent years a new data collection approach has been proposed for geophysical exploration. Rather than recording data for each source separately, sources are shot simultaneously and the combined data is recorded. The question we answer in this talk is: what should be the pattern of shots in order to optimally recover the earth's parameters? To answer the question we use experimental design methodology and show how to efficiently solve the resulting optimization problem.

David Higdon (Los Alamos National Laboratory)

Bayesian approaches for combining computational model output and physical observations
June 8, 2011

Keywords of the presentation: inverse problem, gaussian process, emulation, Bayesian statistics

A Bayesian formulation adapted from Kennedy and O'Hagan (2001) and Higdon et al. (2008) is used to give parameter constraints from physical observations and a limited number of simulations. The framework is based on the idea of replacing the simulator by an emulator which can then be used to facilitate computations required for the analysis. In this talk I'll describe the details of this approach and apply it to an example that uses large scale structure of the universe to inform about a subset of the parameters controlling a cosmological model. I'll also explain basics of using Gaussian process models and compare them to an approach that uses the ensemble Kalman filter.
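The emulator idea can be sketched with a bare-bones Gaussian process surrogate: fit a GP to a handful of simulator runs and use its posterior mean in place of the expensive code. The kernel, hyperparameters, and stand-in "simulator" below are invented; the actual framework also infers hyperparameters, carries predictive uncertainty, and includes a model-discrepancy term.

```python
import numpy as np

# Squared-exponential kernel with a fixed, hand-picked lengthscale.
def kern(a, b, ell=0.2):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * ell ** 2))

simulator = lambda x: np.sin(2.0 * np.pi * x)   # stand-in "expensive" model
X_design = np.linspace(0.0, 1.0, 10)            # the limited simulator runs
y_design = simulator(X_design)

nug = 1e-8                                      # jitter for numerical stability
alpha = np.linalg.solve(kern(X_design, X_design) + nug * np.eye(10), y_design)

def emulate(x):
    """GP posterior mean: a cheap surrogate for the simulator."""
    return kern(x, X_design) @ alpha
```

All downstream computations (MCMC over the physical parameters, in the talk's setting) then query `emulate` instead of the simulator.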

Jan Dirk Jansen (Technische Universiteit te Delft)

System-theoretical aspects of oil and gas reservoir history matching
June 7, 2011

Keywords of the presentation: history matching, ill-posedness, oil and gas, reservoir, upscaling, control

'History matching' of reservoir models by adapting model parameters such that the model output matches historic production data is known to be a very ill-posed problem. I will discuss the limited observability and controllability of reservoir states (pressures, fluid saturations) and limited identifiability of reservoir parameters (permeabilities, porosities, etc.). I'll present results from our group in Delft including a method to use the remaining freedom in the parameter space after history matching to obtain upper and lower bounds for the prediction of oil recovery from the updated reservoir model.

Bangti Jin (Texas A & M University)

Poster - Sparsity reconstruction in electrical impedance tomography

Electrical impedance tomography is a diffusive imaging modality for determining the conductivity distribution of an object from boundary measurements. We here propose a novel reconstruction algorithm based on Tikhonov regularization with sparsity constraints. The well-posedness of the formulation and convergence rate results are established. Numerical experiments for simulated and real data are presented to illustrate the effectiveness of the approach.

Hector Klie (ConocoPhillips)

Poster- A Multiscale Learning Approach for History Matching

The present work describes a machine learning approach for performing history matching. It consists of a hybrid multiscale search methodology based on SVD and the wavelet transform to incrementally reduce the parameter space dimensionality. The parameter space is globally explored and sampled by the simultaneous perturbation stochastic approximation (SPSA) algorithm at different resolution scales. At a sufficient degree of coarsening, the parameters are estimated with the aid of an artificial neural network. The neural network also serves as a convenient device to evaluate the sensitivity of the objective function with respect to variations of each individual model parameter in the vicinity of a promising optimal solution. Preliminary results shed light on future research avenues for optimizing the use of additional sources of information, such as seismic or timely sensor data, in history matching procedures.
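The SPSA component is simple enough to sketch: each iteration estimates the gradient from just two objective evaluations, whatever the parameter dimension, which is what makes global exploration of a large parameter space affordable. The gain sequences and the toy objective below are illustrative defaults, not the work's settings.

```python
import numpy as np

rng = np.random.default_rng(11)

def spsa_minimize(f, x0, n_iter=2000, a=0.1, c=0.1):
    """SPSA: two function evaluations per iteration, independent of dimension."""
    x = np.array(x0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602                # standard gain decay exponents
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=x.size)   # Rademacher directions
        # Simultaneous-perturbation gradient estimate:
        ghat = (f(x + ck * delta) - f(x - ck * delta)) / (2.0 * ck) / delta
        x -= ak * ghat
    return x

# Toy quadratic objective standing in for the history-matching misfit:
x_opt = spsa_minimize(lambda x: np.sum((x - 3.0) ** 2), np.zeros(5))
```

A finite-difference gradient would need 2d evaluations per iteration in d dimensions; SPSA always needs two, at the price of noisier steps.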

This work has been developed in collaboration with Adolfo Rodriguez (Subsurface Technology, ConocoPhillips) and Mary F. Wheeler (Center for Subsurface Modeling, University of Texas at Austin)

Pierre FJ Lermusiaux (Harvard University)

Ocean Uncertainty Prediction and non-Gaussian Data Assimilation with Stochastic PDEs: Bye-Bye Monte-Carlo?
June 7, 2011

Keywords of the presentation: Stochastic PDEs, DO equations, Data Assimilation, Ocean Modeling

Uncertainty predictions and data assimilation for ocean and fluid flows are discussed within the context of Dynamically Orthogonal (DO) field equations and their adaptive error subspaces. These stochastic partial differential equations provide prior probabilities for novel nonlinear data assimilation methods which are derived and illustrated. The use of these nonlinear data assimilation methods and DO equations for targeted observations, i.e. for predicting the optimal sampling plans, is discussed. Numerical aspects are summarized, including new consistent schemes and test cases for the discretization of DO equations. Examples are provided using time-dependent ocean and fluid flows in two spatial dimensions.

Co-authors from our MSEAS group at MIT: Thomas Sondergaard, Themis Sapsis, Matt Ueckermann and Tapovan Lolla


Quan Long (King Abdullah University of Science & Technology)

Poster- Information Gain in Model Validation for Porous Media

In this work, we use the relative entropy of the posterior probability density function (PPDF) to measure the information gain in the Bayesian model validation procedure. The entropies related to different groups of validation data are compared, and we subsequently choose the validation data with the most information gain (Principle of Maximum Entropy) to predict a quantity of interest in the more complicated prediction case. The proposed procedure is independent of any model related assumption, therefore enabling objective decision making on the rejection/adoption of calibrated models. This work can be regarded as an extension of the Bayesian model validation method proposed by [Babuška et al. (2008)]. We illustrate the methodology on a numerical example dealing with the validation of models for porous media. Specifically, the effective permeability of a 2D porous medium is calibrated and validated. We use here synthetic data obtained by computer simulations of the Navier-Stokes equations.

Bani K. Mallick (Texas A & M University)

Bayesian Uncertainty Quantification for Subsurface Inversion using Multiscale Hierarchical Model
June 9, 2011

Keywords of the presentation: Uncertainty quantification, Hierarchical model, spatial field, MCMC

We present a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a random field (spatial or temporal). The Bayesian approach contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources, and provides a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. The Karhunen-Loève expansion is used for dimension reduction of the random field. Furthermore, we use a hierarchical Bayes model to inject multiscale data into the modeling framework. In this Bayesian framework, we have shown that this inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in the total variation norm. Computational challenges in this construction arise from the need for repeated evaluations of the forward model (e.g. in the context of MCMC) and are compounded by the high dimensionality of the posterior. We develop a two-stage reversible-jump MCMC method that has the ability to screen out bad proposals in the first, inexpensive stage. Numerical results are presented by analyzing simulated as well as real data.
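The Karhunen-Loève dimension-reduction step can be sketched discretely: eigendecompose the covariance matrix of the field on a grid and keep only the leading modes, so that the sampler explores a handful of KL coordinates instead of the full field. The covariance kernel, lengthscale, and grid below are illustrative, not the talk's subsurface model.

```python
import numpy as np

n = 200
t = np.linspace(0.0, 1.0, n)
# Illustrative squared-exponential covariance for the spatial field:
C = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2.0 * 0.2 ** 2))

# Discrete Karhunen-Loeve expansion: eigendecomposition of the covariance.
lam, Phi = np.linalg.eigh(C)
lam, Phi = lam[::-1], Phi[:, ::-1]        # sort modes by decreasing variance
r = 10                                     # retained KL modes

def sample_field(xi):
    """Field realization from r standard-normal KL coordinates xi."""
    return Phi[:, :r] @ (np.sqrt(np.maximum(lam[:r], 0.0)) * xi)
```

MCMC then proposes moves in the r-dimensional coefficient vector `xi`, a far easier target than the n-dimensional discretized field.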

Youssef Marzouk (Massachusetts Institute of Technology)

A map-based approach to Bayesian inference in inverse problems
June 8, 2011

Keywords of the presentation: Bayesian inference; optimal transport; inverse problems; optimization

Bayesian inference provides a natural framework for quantifying uncertainty in PDE-constrained inverse problems, for fusing heterogeneous sources of information, and for conditioning successive predictions on data. In this setting, simulating from the posterior via Markov chain Monte Carlo (MCMC) constitutes a fundamental computational bottleneck. We present a new technique that entirely avoids Markov chain-based simulation, by constructing a map under which the posterior becomes the pushforward measure of the prior. Existence and uniqueness of a suitable map is established by casting our algorithm in the context of optimal transport theory. The proposed maps are analytically and efficiently computed using various optimization methods.
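In one dimension the map-based idea is transparent: the monotone map T = F_post^{-1} ∘ F_prior pushes the prior forward onto the posterior exactly, so posterior samples are obtained by evaluating T at prior samples, with no Markov chain. The Gaussian prior/posterior pair below is a toy stand-in for the PDE-constrained setting, where the map must instead be constructed by optimization.

```python
import numpy as np
from scipy import stats

# Toy stand-in: prior N(0,1), "posterior" N(2, 0.5^2). In one dimension
# T = F_post^{-1} o F_prior is exactly the monotone (optimal-transport) map.
prior = stats.norm(0.0, 1.0)
post = stats.norm(2.0, 0.5)

def T(x):
    return post.ppf(prior.cdf(x))

rng = np.random.default_rng(6)
prior_samples = prior.rvs(size=50000, random_state=rng)
post_samples = T(prior_samples)            # posterior samples, no MCMC
```

For this Gaussian pair the map is affine, T(x) = 2 + 0.5x, which makes the construction easy to verify by hand.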

Jodi Mead (Boise State University)

Efficient estimates of prior information and uncertainty with chi-square tests
June 10, 2011

Keywords of the presentation: covariance matrices, least squares, statistical test

Many practical inverse problems are ill-posed, involve large amounts of data and have high dimensional parameter spaces. It is necessary to include uncertainty both to regularize the problem and account for errors in the data and model. However, when processes are modeled as random, a complete treatment of uncertainty requires specification of prior probability distributions for data or parameters. In this work statistical information in the form of uncertainty in parameters and state variables is assumed and propagated, however, the underlying probability distributions do not need to be specified or calculated. This results in an efficient approach to large-scale, ill-posed inverse problems.

Even though prior probability distributions are not necessarily specified, we are required to specify prior knowledge in the form of second moments or variances. We estimate these by applying chi-square tests to calculate the second moment of the error in a model, an initial parameter estimate, or data. Efficient Newton-type algorithms have been developed to calculate regularization parameters and estimate the standard deviation of data error. More recently, we have used chi-square tests to calculate diagonal error covariance matrices, and these can be used to obtain non-smooth least squares solutions. Finally, we have developed the chi-squared method for nonlinear problems and will show some recent results. Applications of the chi-square method include soil moisture estimation, Lagrangian flow, and threat detection.
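A sketch of the chi-square principle for a linear Tikhonov problem: at its minimizer, the correctly weighted functional follows a chi-square distribution with (number of data) degrees of freedom, so the regularization parameter can be chosen by root-finding so that the functional equals its expected value. All problem data below are synthetic, and simple bisection stands in for the talk's efficient Newton-type algorithms.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(7)
m, n = 100, 80                             # data size and parameter dimension
A = rng.standard_normal((m, n))
sigma = 0.1                                # data-error standard deviation
x_true = 0.3 * rng.standard_normal(n)      # parameters with sd 0.3
b = A @ x_true + sigma * rng.standard_normal(m)

def J(lam):
    # Value of the weighted Tikhonov functional at its minimizer; with the
    # correct weighting this statistic is chi-square with m degrees of freedom.
    x = np.linalg.solve(A.T @ A / sigma ** 2 + lam * np.eye(n),
                        A.T @ b / sigma ** 2)
    res = b - A @ x
    return res @ res / sigma ** 2 + lam * (x @ x)

# Chi-square principle: pick lambda so that J equals its expected value m.
lam_hat = brentq(lambda lam: J(lam) - m, 1e-6, 1e6)
```

Because J is monotone in lambda, the root is unique and the whole procedure costs a modest number of regularized solves, with no sampling of prior distributions.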

Dimitrios Mitsotakis (Victoria University of Wellington)

Poster-A hybrid numerical method for the numerical solution of the Benjamin equation

Because the Benjamin equation has a spatial structure somewhat like that of the Korteweg–de Vries equation, explicit schemes have unacceptable stability limitations. We instead implement a highly accurate, unconditionally stable scheme that features a hybrid Galerkin FEM/pseudospectral method with periodic splines to approximate the spatial structure and a two-stage Gauss–Legendre implicit Runge-Kutta method for the temporal discretization. We present several numerical experiments shedding light on some properties of the solitary wave solutions of the specific equation.
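The temporal scheme named above, the two-stage Gauss-Legendre implicit Runge-Kutta method (order 4, A-stable), can be sketched on a scalar test problem. The actual solver couples it with the Galerkin/pseudospectral spatial discretization and would solve the stage equations with a tailored iteration; a generic nonlinear solver is used here only for illustration.

```python
import numpy as np
from scipy.optimize import fsolve

# Butcher tableau of the two-stage Gauss-Legendre IRK scheme (order 4).
s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6.0],
              [0.25 + s3 / 6.0, 0.25]])
b = np.array([0.5, 0.5])

def f(t, y):
    return -y                              # autonomous scalar test problem y' = -y

def irk_step(y, t, h):
    # Implicit stage equations K_i = f(t, y + h * sum_j A[i, j] K_j).
    def stage_eqs(K):
        return K - np.array([f(t, y + h * (A[i] @ K)) for i in range(2)])
    K = fsolve(stage_eqs, np.full(2, f(t, y)))
    return y + h * (b @ K)

y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = irk_step(y, t, h)
    t += h
```

The A-stability of this scheme is what removes the step-size restriction that makes explicit schemes unacceptable for the stiff spatial operator.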

Dianne P. O'Leary (University of Maryland)

Confidence in Image Reconstruction
June 6, 2011

Keywords of the presentation: image deblurring, statistical confidence intervals, machine learning, bias constraints

Forming the image from a CAT scan and taking the blur out of vacation pictures are problems that are ill-posed. By definition, small changes in the data to an ill-posed problem make arbitrarily large changes in the solution. How can we hope to solve such problems when data are noisy and computer arithmetic is inexact?

In this talk we discuss the use of calibration data, side conditions, and bias constraints to improve the quality of solutions and our confidence in the results.

Some of this work is joint with Julianne Chung, Matthias Chung, James Nagy, and Bert Rust.


Dean S. Oliver (University of Bergen)

Ensemble-based methods: filters, smoothers and iteration
June 7, 2011

Keywords of the presentation: ensemble Kalman filter, minimization, Monte Carlo

For many large-scale nonlinear inverse problems, Monte Carlo methods provide the only practical method of quantifying uncertainty. Ensemble-based methods such as the ensemble Kalman filter and ensemble smoothers have found increasing application in data assimilation systems for weather prediction, oceanography, and subsurface flow. In this talk, I will describe the methods in general, their connection with Gauss-Newton minimization methods and the approach to sampling. The methodology will be illustrated with several fairly large-scale examples from subsurface flow.
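The analysis step shared by these ensemble methods can be sketched in a few lines (the perturbed-observation variant, with no localization or inflation, which production systems would add):

```python
import numpy as np

def enkf_analysis(X, H, y, R, seed=8):
    """Perturbed-observation EnKF analysis step.

    X: n x N state ensemble, H: p x n observation operator,
    y: length-p observation, R: p x p observation-error covariance."""
    rng = np.random.default_rng(seed)
    N = X.shape[1]
    Ap = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = Ap @ Ap.T / (N - 1)                         # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # Each member assimilates a perturbed copy of the observation:
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)                      # analysis ensemble
```

The connection to Gauss-Newton mentioned above comes from reading the update as one minimization step for each member's regularized least-squares objective, with derivatives replaced by ensemble statistics.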

Henning Omre (Norwegian University of Science and Technology (NTNU))

Spatial categorical inversion: Seismic inversion into lithology/fluid classes
June 7, 2011

Keywords of the presentation: Spatial statistics, Bayesian inversion, categorical variables, deconvolution, reservoir modelling

Modeling of discrete variables in a three-dimensional reference space is a challenging problem. Constraints on the model expressed as invalid local combinations and as indirect measurements of spatial averages add even more complexity.

Evaluation of offshore petroleum reservoirs covering many square kilometers and buried at several kilometers depth contains problems of this type. Focus is on identification of hydrocarbon (gas or oil) pockets in the subsurface - these appear as rare events. The reservoir is classified into lithology (rock) classes - shale and sandstone - and the latter contains fluids - either gas, oil or brine (salt water). It is known that these classes are vertically thin with large horizontal continuity. The reservoir is considered to be in equilibrium - hence fixed vertical sequences of fluids - gas/oil/brine - occur due to gravitational sorting. Seismic surveys covering the reservoir are made, and through processing of the data, angle-dependent amplitudes of reflections are available. Moreover, a few wells are drilled through the reservoir and exact observations of the reservoir properties are collected along the well trace.

The inversion is phrased in a hierarchical Bayesian inversion framework. The prior model, capturing the geometry and ordering of the classes, is of Markov random field type. A particular parametrization coined the Profile Markov random field is defined. The likelihood model linking lithology/fluids and seismic data captures major characteristics of rock physics models and the wave equation. Several parameters in this likelihood model are considered to be stochastic, and they are inferred from seismic data and observations along the well trace. The posterior model is explored by an extremely efficient MCMC algorithm.

The methodology is defined and demonstrated on observations from a real North Sea reservoir.

Co-author: Kjartan Rimstad, Department of Mathematical Sciences, NTNU, Trondheim, Norway

Abani Patra (University at Buffalo (SUNY))

Poster- Uncertainty Quantification in Geophysical Mass Flows and Hazard Map Construction

We outline here some procedures for uncertainty quantification in hazardous geophysical mass flows like debris avalanches using computer models and statistical surrogates. Novel methodologies used include techniques to propagate uncertainty in topographic representations and methodologies to improve concurrency in the map construction.

Rosemary Renaut (Arizona State University)

NSF SEES Presentation
June 9, 2011

Keywords of the presentation: Sustainability Science Engineering, National Science Foundation

The NSF has a new focus on issues relating to Sustainability sciences. I will provide a short overview of existing solicitations and plans for the future. The main intent of this short presentation is to increase awareness in our community of these upcoming opportunities. Mainly I will direct you to numerous publicly available links concerning these plans for funding Science, Engineering and Education activities for attaining a Sustainable Future.

Rosemary Renaut (Arizona State University)

An approach for robust segmentation of images from arbitrary Fourier data using l1 minimization techniques
June 9, 2011

Keywords of the presentation: l1 regularization, regularization parameter, MRI, segmentation

I will review approaches for detecting edges from Fourier data. Application to cases where the data are noisy, blurred, or partially missing requires a regularization term and an accompanying regularization parameter. Our analysis focuses on validation through robustness with respect to correctly classifying edge data. Note that in this method, segmentation is achieved without reconstruction of the underlying image.
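The l1-regularized method itself is beyond an abstract, but the underlying principle — that edge locations and jump heights can be read off directly from Fourier coefficients, without reconstructing the image — can be sketched in 1-D with a classical concentration-factor sum. This is a minimal illustration, not the speaker's algorithm.

```python
import numpy as np

N, M = 64, 512
x = 2 * np.pi * np.arange(M) / M
f = np.where(x < np.pi, 0.0, 1.0)        # jump of +1 at pi (and -1 at 0)

# Fourier coefficients fhat_k = (1/2pi) * int f(x) e^{-ikx} dx, via
# Riemann sums on the grid
k = np.arange(-N, N + 1)
fhat = np.array([np.mean(f * np.exp(-1j * kk * x)) for kk in k])

# Conjugate sum with the polynomial concentration factor sigma(s) = pi*s:
#   T_N[f](x) = i * sum_k sign(k) * sigma(|k|/N) * fhat_k * e^{ikx}
# approximates the jump function [f](x), vanishing away from edges.
sigma = np.pi * np.abs(k) / N
T = np.real(np.exp(1j * np.outer(x, k)) @ (1j * np.sign(k) * sigma * fhat))

edge_value = T[M // 2]                   # close to the jump height at x = pi
```

With noisy or partially missing coefficients this plain sum degrades, which is where the regularization term and its parameter, discussed in the talk, come in.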

Juan Mario Restrepo (University of Arizona)

Climate Variability: Goals and Challenges
June 9, 2011

Keywords of the presentation: climate, variability, uncertainty quantification

A fundamental challenge in climate science is to make sense of very limited and poorly constrained data. Even though many data gathering campaigns are taking place or are being planned, the very high dimensional state space of the system makes the prospects of climate variability analysis from data alone very tenuous, especially in the near term. The use of models and data, via data assimilation, is one of the strategies pursued to improve climate predictions and retrodictions. I will review some of the challenges with this process and cover some of our group's efforts to meet them. I will also enumerate a prioritized list of problems which, if addressed with careful mathematical treatment, will have a significant impact on our understanding of climate variability.


Paul Shearer (University of Michigan)

Poster- High-Accuracy Blind Deconvolution of Solar Images

Extreme ultraviolet (EUV) solar images, taken by spaceborne telescopes, are critical sources of information about the solar corona. Unfortunately all EUV images are contaminated by blur caused by mirror scattering and diffraction. We seek to accurately determine, with uncertainty quantification, the true distribution of solar EUV emissions from these blurry observations. This is a blind deconvolution problem in which the point spread function (PSF) is complex, very long-range, and very incompletely understood. Fortunately, images of partial solar eclipses (transits) provide a wealth of indirect information about the telescope PSF, as blur from the Sun spills over into the dark transit object. We know that deconvolution with the true PSF should remove all apparent emissions of the transit object.

We propose a MAP-based multiframe blind deconvolution method which exploits transits to determine the PSF and the true EUV emission maps. Our method innovates in the PSF model, which enforces approximate monotonicity of the PSF, and in the algorithm solving the MAP optimization problem, which is inspired by a recent accelerated Arrow-Hurwicz method of Chambolle and Pock. When applied to the EUV blind deconvolution problem, the algorithm estimates PSFs which remove blur from the transit objects with unprecedented accuracy.
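The consistency check underlying the approach — deconvolution with the correct PSF should leave the transit object dark — can be illustrated with a toy simulation. This is only a sketch under simplifying assumptions (known PSF, no noise, Wiener filtering), not the authors' blind MAP method.

```python
import numpy as np

n = 128
y, x = np.mgrid[:n, :n]

# Hypothetical scene: a bright disk (the Sun) with a dark transit disk
sun = ((x - 64) ** 2 + (y - 64) ** 2 < 40 ** 2).astype(float)
transit = (x - 80) ** 2 + (y - 64) ** 2 < 10 ** 2
truth = sun.copy()
truth[transit] = 0.0

# Long-range PSF: a narrow core plus broad, faint scattering wings
r2 = (x - 64.0) ** 2 + (y - 64.0) ** 2
psf = np.exp(-r2 / (2 * 1.5 ** 2)) + 1e-3 * np.exp(-r2 / (2 * 25.0 ** 2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))       # transfer function

blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * H))

# Deconvolve with the *true* PSF via a lightly regularized Wiener filter
eps = 1e-9
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(H)
                                / (np.abs(H) ** 2 + eps)))

inner = (x - 80) ** 2 + (y - 64) ** 2 < 5 ** 2   # transit interior
spill = blurred[inner].mean()            # light scattered into the transit
resid = np.abs(restored[inner]).mean()   # much darker after deconvolution
```

In the blind setting H is unknown, and the residual emission over the transit is part of what the MAP estimation drives down while fitting the PSF.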

Christine A. Shoemaker (Cornell University)

Surrogate Response Surfaces in Global Optimization and Uncertainty Quantification of Computationally Expensive Simulations with PDE and Environmental Inverse Applications
June 10, 2011

Solving an inverse problem for a nonlinear simulation model with a nonlinear objective is usually a global optimization problem. This talk will present an overview of the development of algorithms that employ response surfaces as surrogates for an expensive simulation model, significantly reducing the computational effort required for continuous global optimization and for uncertainty analysis of simulation models that need substantial CPU time for each simulation.

I will show that for many nonlinear simulation models the resulting optimization problem is multimodal and hence requires a global optimization method. To reduce the number of simulations required, we utilize information from all previous simulations done during an optimization search by building a multivariate (radial basis function) response surface that interpolates these earlier simulations. I will discuss the alternative approaches of direct global optimization search versus using a multistart method in combination with a local optimization method. I will also describe an uncertainty analysis method, SOARS, that uses derivative-free optimization to help construct a response surface of the likelihood function, to which Markov chain Monte Carlo is then applied. This approach has been shown to reduce CPU requirements to less than 1/65 of what conventional MCMC uncertainty analysis requires. I will present examples of the application of these methods to significant environmental problems described by computationally intensive simulation models used worldwide. One model (TOUGH2) involves partial differential equation models of fluid flow for carbon sequestration; the second is SWAT, used to describe potential pollution of NYC's drinking water. In both cases, the model uses site-specific data.
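The surrogate-then-MCMC idea can be sketched in a few lines: evaluate the expensive likelihood at a handful of design points, interpolate with a radial basis function, and run MCMC on the cheap interpolant. Everything below (the Gaussian log-likelihood, the design, phi(r) = r) is a hypothetical stand-in, not the SOARS algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive simulation: the log-likelihood of one parameter
# (in practice each call would require a costly model run)
def loglik(theta):
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

# Step 1: spend the simulation budget on a small set of design points
design = np.linspace(-2.0, 6.0, 15)
values = np.array([loglik(t) for t in design])

# Step 2: fit a radial basis function interpolant (phi(r) = r) through them
weights = np.linalg.solve(np.abs(design[:, None] - design[None, :]), values)

def surrogate(theta):
    # cheap approximation of loglik, exact at the design points
    return np.sum(weights * np.abs(theta - design))

# Step 3: run Metropolis MCMC on the surrogate; no further model runs needed
theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + rng.normal(scale=0.5)
    if np.log(rng.uniform()) < surrogate(prop) - surrogate(theta):
        theta = prop
    chain.append(theta)

posterior_mean = np.mean(chain[5000:])
```

The savings come from step 3 never touching the expensive model; the real method also chooses the design points adaptively via derivative-free optimization, which this sketch omits.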

This work has been a collaboration with others, including R. Regis and Y. Wang (optimization), N. Bliznyuk and D. Ruppert (uncertainty), and A. Espinet and J. Woodbury (environmental applications).

Nicoleta Eugenia Tarfulea (Purdue University, Calumet)

Poster- Modeling and Analysis of HIV Evolution and Therapy

We present a mathematical model to investigate, theoretically and numerically, the effect of immune effectors, such as cytotoxic T lymphocytes (CTLs), in modeling HIV pathogenesis during primary infection. Additionally, by introducing drug therapy, we assess the effect of treatments consisting of a combination of several antiretroviral drugs. Even in the presence of drug therapy, however, ongoing viral replication can lead to the emergence of drug-resistant virus variants. Thus, by including two viral strains, wild-type and drug-resistant, we show that the inclusion of the CTL compartment produces a higher rebound for an individual's healthy helper T-cell compartment than drug therapy alone does. We characterize successful drugs or drug combination scenarios for both strains of virus.
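Models of this kind typically build on the target-cell-limited core of within-host HIV dynamics, which the poster's model extends with CTL compartments, drug therapy, and a second strain. A minimal sketch of that core, with hypothetical parameters, shows the primary-infection behavior: the virus grows when the basic reproduction number exceeds one and the healthy T-cell compartment is depleted.

```python
import numpy as np

# Minimal target-cell-limited model (all parameter values hypothetical):
#   T' = lam - d*T - beta*T*V   (healthy CD4+ helper T cells)
#   I' = beta*T*V - delta*I     (productively infected cells)
#   V' = p*I - c*V              (free virus)
lam, d, beta, delta, p, c = 10.0, 0.01, 5e-5, 0.7, 100.0, 3.0

T, I, V = lam / d, 0.0, 1e-3        # infection seeded at the healthy state
Tmin, Vmax = T, V
dt = 0.01
for _ in range(6000):               # 60 days of forward Euler
    dT = lam - d * T - beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    Tmin, Vmax = min(Tmin, T), max(Vmax, V)

R0 = beta * (lam / d) * p / (delta * c)   # basic reproduction number
```

The extensions described in the abstract add a CTL equation that kills infected cells, drug efficacy factors on beta and p, and a parallel infected/virus pair for the resistant strain.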

Nicolae Tarfulea (Purdue University, Calumet)

Poster- Observability for Initial Value Problems with Sparse Initial Data

In recent years many authors have developed a series of ideas and techniques on the reconstruction of a finite signal from many fewer observations than traditionally believed necessary. This work addresses the recovery of the initial state of a high-dimensional dynamic variable from a restricted set of measurements. More precisely, we consider the problem of recovering the sparse initial data for a large system of ODEs based on limited observations at a later time. Under certain conditions, we prove that the sparse initial data is uniquely determined and provide a way to reconstruct it.
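The setting can be illustrated with a toy example: a sparse initial state of a linear ODE system is recovered from observations of a subset of components at a later time by l1-regularized least squares. All dimensions, the random dynamics, and the ISTA solver are hypothetical stand-ins; the poster's uniqueness conditions and reconstruction procedure are its own.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 60, 30, 3                    # state dim, observations, sparsity
A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))   # toy dynamics x' = A x

# Propagator x(T) = e^{AT} x(0) for T = 1, via a truncated Taylor series
P, term = np.eye(n), np.eye(n)
for j in range(1, 30):
    term = term @ A / j
    P = P + term

rows = rng.choice(n, size=m, replace=False)   # components observed at time T
Phi = P[rows]

support = rng.choice(n, size=k, replace=False)
x0 = np.zeros(n)
x0[support] = [1.5, -2.0, 2.5]                # sparse initial data

yobs = Phi @ x0                               # the limited observations

# Recover x0 by l1-regularized least squares, solved with plain ISTA
L = np.linalg.norm(Phi, 2) ** 2               # Lipschitz constant of gradient
lam = 1e-3
xr = np.zeros(n)
for _ in range(5000):
    xr = xr - Phi.T @ (Phi @ xr - yobs) / L                 # gradient step
    xr = np.sign(xr) * np.maximum(np.abs(xr) - lam / L, 0)  # soft threshold

found = set(np.argsort(-np.abs(xr))[:k])      # estimated support
```

Here m = 30 observations suffice to pin down a 3-sparse initial state among n = 60 unknowns, even though the linear system is underdetermined — the compressed-sensing phenomenon the abstract refers to.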
