Abstracts and Talk Materials
Mathematical Modeling in Industry IX - A Workshop for Graduate Students
August 1 - 10, 2005

D. Gregory Arnold (US Air Force Research Laboratory)

Team 1: Sparse Aperture Imaging

Standard 2-D SAR image formation techniques require data samples collected on a regular rectangular grid (with respect to the viewpoint). A target that happens to move through the imaged scene will be displaced and smeared across the final image. Imaging a moving target is equivalent to collecting data that is not on a regular grid (although, more generally, this generates data in a cube rather than in a square). Surprisingly little is understood about how to form an image in this more general case. The topic encompasses a myriad of research areas. One starting point would be to use standard scattering center models to predict the scattering at new locations. This approach requires the ability to extract viewpoint-stable features from the measured data; filtering, tracking, and dynamic programming are key elements of this approach. Alternatively, since the final product is a 3-D image, it may be possible to produce the end result without requiring explicit feature extraction. Data will be available to test newly developed approaches (see Figures 1 and 2).
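To illustrate what grid-free image formation can look like (a toy sketch, not the team's prescribed method), the adjoint of the Fourier sampling operator can form an image directly from samples at arbitrary k-space locations, with no rectangular grid assumed. All names, the scatterer position, and the parameter values below are invented for this example:

```python
import numpy as np

def adjoint_imaging(samples, kx, ky, grid):
    """Form an image from Fourier-domain samples at arbitrary (kx, ky)
    locations by direct adjoint summation -- slow, but grid-free."""
    xs, ys = grid
    img = np.zeros((len(ys), len(xs)), dtype=complex)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            img[i, j] = np.sum(samples * np.exp(2j * np.pi * (kx * x + ky * y)))
    return np.abs(img)

# Toy example: one point scatterer at (0.2, -0.1), measured at an
# irregular set of k-space points.
rng = np.random.default_rng(0)
kx = rng.uniform(-8, 8, 500)
ky = rng.uniform(-8, 8, 500)
samples = np.exp(-2j * np.pi * (kx * 0.2 + ky * (-0.1)))
xs = np.linspace(-0.5, 0.5, 41)
ys = np.linspace(-0.5, 0.5, 41)
img = adjoint_imaging(samples, kx, ky, (xs, ys))
# The brightest pixel should land near the scatterer location.
i, j = np.unravel_index(np.argmax(img), img.shape)
```

For regular-grid data this summation reduces to an inverse FFT; the point of the sketch is that nothing in it requires the samples to lie on a grid.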


Nicholas Bennett (Schlumberger-Doll Research)

Team 2: Uncertainty Quantification in Geophysical Inverse Problems

Solving an inverse problem means determining the parameters of a model given a set of measurements. In many practical inverse problems, accounting for the uncertainty of the solution is very important to aid decision-making. A standard approach begins by choosing a model parametrization and then using a Bayesian approach to make inferences on the model parameters from the measurement data. However, this quantified uncertainty is a function of the model parametrization, and for many inverse problems there are many parametrizations that account for the data equally well. A well-known approach to accounting for model uncertainty is Bayesian model averaging, in which many model parametrizations are considered. Wavelet model updates provide an efficient means of sifting through a family of model parametrizations given by decimated wavelet bases. By a decimated wavelet basis we mean a subset of the model's coordinates in a wavelet basis.
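To make "decimated wavelet basis" concrete, here is a toy sketch (a single-level orthonormal Haar transform with invented model values): the model is re-expressed in a wavelet basis and only a subset of its coordinates is retained.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def inverse_haar_step(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# A decimated parametrization: keep all approximation coefficients
# but only the single largest detail coefficient.
model = np.array([1.0, 1.1, 0.9, 1.0, 4.0, 4.2, 3.9, 4.1])
a, d = haar_step(model)
keep = np.argsort(np.abs(d))[-1:]
d_dec = np.zeros_like(d)
d_dec[keep] = d[keep]
coarse_model = inverse_haar_step(a, d_dec)
```

Different choices of which coefficients to keep give different parametrizations that may fit the data comparably well, which is exactly the family that Bayesian model averaging sifts through.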

When working with measurement data sets that are particularly noisy or large in storage size, it is natural to denoise or compress the data using a decimated wavelet representation as well. Kalman filters can be used to update the solution (including its uncertainty) when denoising, or when locally changing the resolution of the measurement data by changing the decimation.
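A minimal sketch of the kind of Kalman-style measurement update mentioned above, for a linear-Gaussian toy problem (the forward operator, noise level, and data values are invented for illustration):

```python
import numpy as np

def kalman_update(m, P, G, d, R):
    """Condition a Gaussian model (mean m, covariance P) on new
    measurements d = G m + noise, with noise covariance R."""
    S = G @ P @ G.T + R                    # innovation covariance
    K = np.linalg.solve(S, G @ P).T        # Kalman gain (S symmetric)
    m_new = m + K @ (d - G @ m)
    P_new = (np.eye(len(m)) - K @ G) @ P
    return m_new, P_new

# Two-parameter toy model: adding one measurement shrinks the
# posterior covariance, i.e. the quantified uncertainty.
m0 = np.zeros(2)
P0 = np.eye(2)
G = np.array([[1.0, 0.5]])
m1, P1 = kalman_update(m0, P0, G, np.array([1.0]), np.array([[0.1]]))
```

Each additional (or re-decimated) block of measurement data can be folded in by another call to the same update, which is what makes this attractive for locally changing the data resolution.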

We shall explore how to compute likely representations of both model and measurements in the context of solving a few model geophysical inverse problems.

Petri Fast (Lawrence Livermore National Laboratory)

Team 3: Contact Algorithms for Dry Granular Flow with Elastic Grains

Consider the dynamics of interacting elastic disks in the plane. This is an experimentally realizable 2-D model of dry granular flow in which the stresses can be visualized using the photoelastic effect. As the elastic disks move in a vacuum, they interact through collisions with each other and with the surrounding geometry. Because of the finite propagation speed of deformations inside each grain, even simple experiments involving just a few interacting grains can be difficult to capture computationally.

The goal of this project is to improve our ability to simulate dense granular flow in complex geometry.

Some project ideas: (A) Pursue full numerical simulations of the interacting elastic disks driven by contact mechanics. (B) Develop simplified discrete element models for tracking the center of mass of each disk using classical mechanics, including heuristics for the delay arising from the finite propagation speed of elastic deformations in each disk. (C) Compare with experimental data.
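As a toy sketch of idea (B), a soft-sphere discrete element step can track only each disk's center of mass, with a linear contact spring standing in for the full elastic problem (the stiffness, mass, time step, and initial conditions below are invented illustrative values, and the delay heuristics are omitted):

```python
import numpy as np

def dem_step(pos, vel, radii, dt, k=1e4, m=1.0):
    """One semi-implicit Euler step of a soft-sphere discrete element
    model: overlapping disks repel with a linear spring force."""
    n = len(radii)
    force = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[j] - pos[i]
            dist = np.linalg.norm(r)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0:                 # disks in contact
                normal = r / dist
                force[i] -= k * overlap * normal
                force[j] += k * overlap * normal
    vel = vel + dt * force / m
    pos = pos + dt * vel
    return pos, vel

# Head-on collision of two unit disks: momentum is conserved and the
# disks rebound.
pos = np.array([[0.0, 0.0], [2.5, 0.0]])
vel = np.array([[1.0, 0.0], [-1.0, 0.0]])
radii = np.array([1.0, 1.0])
for _ in range(2000):
    pos, vel = dem_step(pos, vel, radii, dt=1e-3)
```

A full simulation (idea A) would resolve the elastic field inside each grain instead of collapsing the contact into a single spring; comparing the two against experiment is the point of idea (C).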

Edward Keyes (Semiconductor Insights)

Team 4: Integrated Circuit Layout Reconstruction

Problem Statement: Integrated circuits are designed using a polygon representation of the wiring and transistor layers known as "layout". Layout must conform to strict design rules. Typically the design rules describe the minimum width of any polygon, the minimum spacing between adjacent polygons, directionality (normally only vertical or horizontal runs are allowed), and corner angles (normally only 90 degrees). Multiple wiring layers are common, with modern ICs having as many as ten wiring levels. Different wiring layers are connected through a contact or "via" level. The via levels must also obey their own set of design rules on size and spacing.
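To illustrate what a design-rule check can look like, here is a toy minimum-spacing check on axis-aligned rectangles (the rectangle data and min_space value are invented; production DRC engines handle far more rule types and arbitrary polygons):

```python
def spacing_violations(rects, min_space):
    """Flag pairs of axis-aligned rectangles (x0, y0, x1, y1) whose
    separation is below min_space -- a toy minimum-spacing rule check.
    Overlapping rectangles (separation 0) are flagged as well."""
    def gap(a_lo, a_hi, b_lo, b_hi):
        return max(b_lo - a_hi, a_lo - b_hi, 0.0)

    bad = []
    for i in range(len(rects)):
        for j in range(i + 1, len(rects)):
            ax0, ay0, ax1, ay1 = rects[i]
            bx0, by0, bx1, by1 = rects[j]
            dx = gap(ax0, ax1, bx0, bx1)
            dy = gap(ay0, ay1, by0, by1)
            if (dx ** 2 + dy ** 2) ** 0.5 < min_space:
                bad.append((i, j))
    return bad

# Two wires 0.5 apart violate a 1.0 spacing rule; the third is clear.
rects = [(0, 0, 10, 2), (10.5, 0, 20, 2), (0, 8, 20, 10)]
violations = spacing_violations(rects, min_space=1.0)
```

A reconstruction algorithm could use such checks as constraints: any simplification of the extracted polygons must leave the rule checks satisfied.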

The process of analyzing the design of an integrated circuit involves imaging the various layers and then using image processing to reconstruct the layout of the different layers. Built-in biases, both in the manufacturing process and in the analysis process, prevent the reconstructed layout from being a faithful representation of the original layout. For example, sharp corners become rounded, line widths may either expand or contract, and straight lines become roughened with many false vertices. Typical extracted layout is shown in Figure 1.

Figure 1: Unsimplified Layout

Our problem is: given the raw polygon data from image processing, how can we recreate the original layout as faithfully as possible?

There are a number of potential errors that must be avoided in any reconstruction scheme, including: creation of shorts between adjacent polygons, creation of a break or "open" in an existing polygon, creation of self-intersecting polygons, and creation of "via opens" (a break in the connection to an upper or lower layer).

The ideal algorithm should also significantly compact the layout data by eliminating spurious vertices.

The layout reconstruction problem has significant overlap with the general problem of line simplification, which has been extensively studied for Geographic Information Systems (GIS). One of the first and most effective methods for line simplification is the algorithm proposed by Douglas and Peucker [1] in 1973. A straightforward implementation of the algorithm runs in time O(k²) for a chain of size k; Hershberger and Snoeyink [2] show how to improve this to O(k log k). The algorithm is very efficient, but it is a general line-simplification solution, not optimized for the highly specific characteristics of IC layout.
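For reference, a straightforward recursive implementation of the Douglas-Peucker algorithm [1] (this is the simple O(k²) version; the noisy test polyline is an invented example of the "roughened line" artifact described above):

```python
import numpy as np

def douglas_peucker(points, eps):
    """Douglas-Peucker line simplification: keep the point farthest
    from the chord between the endpoints, and recurse on both halves
    if that distance exceeds the tolerance eps."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points.tolist()
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.hypot(*chord)
    rel = points[1:-1] - start
    if norm == 0:                           # degenerate closed chord
        dists = np.hypot(rel[:, 0], rel[:, 1])
    else:                                   # perpendicular distances
        dists = np.abs(chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / norm
    idx = int(np.argmax(dists)) + 1
    if dists[idx - 1] > eps:
        left = douglas_peucker(points[: idx + 1], eps)
        right = douglas_peucker(points[idx:], eps)
        return left[:-1] + right            # drop the duplicated split point
    return [start.tolist(), end.tolist()]

# A jittered horizontal line with many false vertices collapses to
# its two endpoints once the jitter is below the tolerance.
line = [(x, 0.01 * (-1) ** x) for x in range(20)]
simplified = douglas_peucker(line, eps=0.05)
```

Note that nothing in this routine knows about design rules; run on real extracted layout, it could happily create the shorts, opens, or off-axis edges listed above, which is exactly why a layout-specific variant is needed.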

Unique features of our problem are the strict design rules governing permissible polygon shapes as well as the constraints imposed by the via points.


[1] DOUGLAS, D. H., AND PEUCKER, T. K. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. The Canadian Cartographer 10 (1973), 112-122.

[2] HERSHBERGER, J., AND SNOEYINK, J. Speeding up the Douglas-Peucker line simplification algorithm. Proc. 5th Internat. Sympos. Spatial Data Handling (1992), pp. 134-143.

David S. Ross (Kaiser Permanente)

Team 5: Models of Human Physiology, and Randomized Clinical Trials

Randomized clinical trials are the central feature of modern medical methodology. In an RCT, two or more populations of patients who are statistically identical on the relevant dimensions are given different medical treatments. The results are then recorded and used to establish medical practice. The Archimedes Project of Kaiser Permanente has undertaken the development of a mathematical model of human physiology that will complement RCTs. The purpose of the model is to integrate data from various RCTs, then to predict health outcomes for populations, and perhaps treatments, not specifically considered in any particular RCT. What sort of model(s) of human physiology are appropriate for this purpose?



Chai Wah Wu (IBM)

Team 6: Algorithms for Digital Halftoning

If you have ever looked closely at an image in a magazine or a newspaper, you will have seen that the image, which appears to contain many shades and colors, is actually composed of dots of ink of only a handful of colors. Digital halftoning is the art of representing a full-color picture with only a few colors, and it is necessary in all digital printers. Halftoning is possible because of the low-pass frequency characteristic of the human visual system. Several types of algorithms exist in modern digital printers, usually with a tradeoff between speed and quality. This tradeoff is important since digital printers operate at speeds ranging from a few pages a minute (an inkjet printer for home use) to thousands of pages a minute (a production-class digital printer). This project studies some questions regarding the stability and efficiency of the algorithms used to perform digital halftoning. One of the goals of this project is to design an algorithm that has superior output quality yet operates at speeds close to those of faster algorithms. The types of mathematics used include linear algebra, dynamical systems theory, convex analysis, and mathematical programming.
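As one concrete example of a halftoning algorithm (Floyd-Steinberg error diffusion, a standard method, though not necessarily the one studied in this project): each pixel is thresholded to black or white, and the quantization error is pushed onto not-yet-processed neighbors so that local averages are preserved.

```python
import numpy as np

def error_diffusion(gray):
    """Floyd-Steinberg error diffusion on a grayscale image in [0, 1]:
    threshold each pixel, then distribute the quantization error to
    the right, lower-left, lower, and lower-right neighbors."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# A constant 25%-gray patch halftones to roughly 25% white dots.
patch = np.full((32, 32), 0.25)
halftoned = error_diffusion(patch)
```

The row-by-row error feedback is what makes such schemes interesting dynamically: the per-pixel work is trivial, but the sequential dependence limits speed and raises the stability questions mentioned above.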

