Abstracts
2003-2004 IMA/MCIM Industrial Problems Seminar

All talks are at 1:25 pm in 570 Vincent Hall unless otherwise noted.

September 12, 2003, 1:25pm, 570 Vincent Hall
Robert Crone (Seagate Technology, Minneapolis, MN, Robert.M.Crone@seagate.com)

Applied Mathematics and Disk Drive Design
Slides:   html    pdf    ps

Talks(A/V)    Talks(Audio)

Applied mathematics is used throughout the design process, from deriving governing equations to analyzing manufacturing data. This presentation will give an overview of the steps involved in software development, focusing attention on the applied mathematics algorithms commonly used today. Potential problems with the current formulation will also be discussed. The presentation will conclude by discussing some additional applications of applied mathematics (e.g., level sets or active contours) within the design process.

September 26, 2003, 1:25pm, 570 Vincent Hall
Dipak Chowdhury (Corning Incorporated, New York, ChowdhurDQ@Corning.com)

Simulation of Extreme Events in Optical Communication Systems
Slides:   pdf

The performance of an optical communication system is typically quantified by its bit-error rate (BER) and system outage (unavailability). The actual BER of a system depends on noise generated by the optical amplifiers and on other random system parameters (e.g., dispersion, polarization). Extreme (or rare) events in the fluctuation of these random parameters and processes cause most of the errors, so quantifying system performance by estimating BER means estimating performance when these extreme events occur. It is very difficult to reproduce a realistic environment in the laboratory, and experimental estimation of outage probability would take an unrealistic length of time; as a result, numerical simulation is the tool of choice. Plain Monte Carlo simulations are generally not very effective at simulating rare events, and techniques such as biased Monte Carlo simulation and other variance reduction methods evolved to mitigate this difficulty. However, in order to bias the system toward a rare (or extreme) event that impacts it adversely, one needs to understand how to bias a system variable so as to degrade performance. Biasing becomes even more difficult when multiple random variables are involved.

The difficulty of simulating such rare events in optical system simulation is illustrated with an example involving polarization mode dispersion (PMD). The goal is to review the state of the art in rare-event simulation for PMD and then to ask the question "how can this state of the art be extended to a more comprehensive set of system variables?"
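
As a concrete illustration of the biased Monte Carlo idea the abstract alludes to, here is a minimal importance-sampling sketch for a generic Gaussian rare event. It is not Corning's PMD machinery, just the reweighting principle on a toy problem.

```python
import numpy as np

rng = np.random.default_rng(0)

def rare_event_probability(threshold=6.0, n=100_000, shift=6.0):
    """Estimate P(X > threshold) for X ~ N(0,1) by importance sampling.

    Samples are drawn from the shifted density N(shift, 1) so the rare
    region is hit often; each sample is reweighted by the likelihood
    ratio phi(x) / phi_shift(x) to keep the estimator unbiased.
    """
    x = rng.normal(shift, 1.0, n)          # biased samples
    log_w = -shift * x + 0.5 * shift**2    # log of N(0,1)/N(shift,1) ratio
    return np.mean((x > threshold) * np.exp(log_w))

# ~1e-9: plain Monte Carlo would need on the order of 1e11 samples.
print(rare_event_probability())
```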

October 10, 2003, 1:25pm, 570 Vincent Hall
Indraneel Das (United Technologies Research Center, Hartford, CT, DasI@UTRCCT.res.utc.com)

Challenges in Industrial Optimization … Technical or Otherwise

Numerous industrial problems have been approached using optimization, and many corporations have benefited from the resulting solutions. Why, then, has optimization not grown into a core competency in the corporate world? The process of producing value for industry using optimization involves

(a) identifying/getting the problem
(b) analyzing the technical aspects and "solving" the problem
(c) explaining why "the answer" is the answer, and what is good about it
(d) setting oneself up for the next problem.

The speaker will discuss the challenges in each of these steps. In particular, he will point out the underlying technical challenges in optimization problems ranging from petroleum and telecommunications to manufacturing and energy solutions, all drawn from his diverse industrial background, and emphasize relevant areas of research.

October 17, 2003, 1:25pm, 570 Vincent Hall
Dharmashankar Subramanian (Honeywell Labs, Minneapolis, MN, Dharmashankar.Subramanian@honeywell.com)

Mathematical Programming and Multiaircraft Conflict Resolution
Slides:   pdf
Talks(A/V)    Talks(Audio)

Free flight is an emerging paradigm in Air Traffic Management (ATM). Conflict detection and resolution is at the heart of any free flight concept, and is the focus of this presentation. We address the problem of optimal cooperative three-dimensional (3D) conflict resolution involving multiple aircraft using rigorous numerical trajectory optimization methods. The conflict problem is posed as an optimal control problem of finding trajectories that minimize a certain objective function while maintaining safe separation between each pair of aircraft. We assume the origin and destination of each aircraft are known, and we consider aircraft models with simplified kinematics as well as detailed nonlinear point-mass dynamics. The protection zone around each aircraft is modeled as a cylinder, and we propose a novel formulation of this cylindrical protection zone using continuous variables. The optimal control problem is converted to a finite-dimensional nonlinear program (NLP) using collocation on finite elements, which we solve with an interior-point algorithm incorporating a novel line search method. Lastly, we discuss some open problems of research interest in this context.
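
To make the discretize-then-optimize step concrete, here is a toy two-aircraft 2D version: circular rather than cylindrical protection zones, straight-line endpoints, and SciPy's SLSQP in place of the interior-point solver of the talk. All numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Two aircraft fly between fixed endpoints, each path discretized at K
# nodes; the 3D cylindrical zone becomes a circle of radius d_min.
K, d_min = 20, 1.0
t = np.linspace(0.0, 1.0, K)
a0 = np.c_[10 * t, np.full(K, 5.0)]   # aircraft A: west to east
b0 = np.c_[np.full(K, 5.0), 10 * t]   # aircraft B: south to north (paths cross)

def unpack(z):
    return z[:2 * K].reshape(K, 2), z[2 * K:].reshape(K, 2)

def cost(z):
    # Sum of squared segment lengths: a smooth surrogate for fuel/time.
    a, b = unpack(z)
    return np.sum(np.diff(a, axis=0) ** 2) + np.sum(np.diff(b, axis=0) ** 2)

def separation(z):            # must be >= 0 at every node
    a, b = unpack(z)
    return np.sum((a - b) ** 2, axis=1) - d_min ** 2

def endpoints(z):             # must be == 0: origins/destinations fixed
    a, b = unpack(z)
    return np.r_[a[0] - a0[0], a[-1] - a0[-1], b[0] - b0[0], b[-1] - b0[-1]]

res = minimize(cost, np.r_[a0.ravel(), b0.ravel()], method="SLSQP",
               constraints=[{"type": "ineq", "fun": separation},
                            {"type": "eq", "fun": endpoints}])
print(res.success, res.fun)   # deconflicted paths bow around each other
```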

October 31, 2003, 1:25pm, 570 Vincent Hall
Nikos Paragios (Siemens Corporate Research, Nikos.Paragios@scr.siemens.com, http://home.comcast.net/~aparagio)

Variational Methods, Implicit Representation & Visual Grouping

Segmentation and extraction of structures of interest is a core component of imaging and vision, with a variety of applications. One can think of this task as equivalent to separating a bounded domain (image/volume) into regions with consistent properties. Such properties can be defined over an arbitrary space: intensity, texture, motion, stereo, and so on.

An elegant tool for performing such grouping is the propagation of curves aimed at separating regions with consistent characteristics. Such propagation is either derived from the minimization of an objective function or defined according to the application objectives (geometric flows). Implicit representations and level set methods are an emerging technique for performing this task. In this talk, we will describe variants of this technique for visual grouping. First, a connection between curve/surface propagation and level set methods will be established. Then we will propose a general formulation for grouping using implicit representations, one that can account for various information cues: boundary information, regional information (intensity, motion, stereo, texture), prior knowledge (the shape of the structure of interest), and user interaction will be the major components of our approach.
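
A minimal sketch of the level-set propagation being described, assuming a constant interface speed in place of the data-driven speeds used for grouping:

```python
import numpy as np

# The interface is the zero set of phi; evolving phi by a speed times
# |grad phi| moves the interface along its normal. With a constant
# positive speed and a signed-distance phi (negative inside), the
# enclosed region shrinks; grouping methods replace the constant by a
# speed built from intensity, texture, motion, or stereo cues.
n, F, dt = 128, 1.0, 0.5
y, x = np.mgrid[0:n, 0:n]
phi = np.sqrt((x - n / 2) ** 2 + (y - n / 2) ** 2) - 40.0  # circle of radius 40

for _ in range(20):
    gy, gx = np.gradient(phi)
    phi += dt * F * np.sqrt(gx ** 2 + gy ** 2)  # proper upwinding omitted

print((phi < 0).sum(), "pixels enclosed by the zero level set")  # circle shrank
```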

December 5, 2003, 1:25pm, 570 Vincent Hall
Thomas R. Hoffend Jr., Ph.D. (Optical Markets and Technologies, 3M Display and Graphics Research Laboratory, trhoffendjr@mmm.com)

Characterization, Modeling, and Donor Media Design for a Laser-Induced Thermal Imaging Process

A laser-induced thermal imaging (LITI) process has been developed by 3M as a patterning method for fabricating color filters for liquid crystal displays, organic electro-luminescent devices (OLEDs), and other articles requiring precise patterning of materials with micron-scale accuracy over areas that may exceed a square meter.

For the LITI process, the material to be patterned is coated on a layered donor sheet, which typically consists of a backing film, a primer layer, a light-to-heat conversion layer (LTHC), and a protective interlayer between the LTHC and the transfer material layer. The primer layer aids adhesion of the LTHC to the backing film, the LTHC absorbs imaging laser light and converts it to heat, and the interlayer serves several purposes, which may include promoting structural integrity of the LTHC during the imaging process and protecting the transfer material from particle absorbers dispersed in the LTHC. During the imaging process, the donor sheet coated with transfer material is pressed against the receiving surface and one or two laser beams are scanned progressively across the donor sheet. In one mode of transfer, the LTHC absorbs the laser light and converts it to heat, the heat diffuses to the transfer layer, and the transfer layer softens and sticks to the receptor surface in the scanned areas. The donor sheet is then peeled away from the receptor, and the adhered regions of transfer material pull off the donor and remain as precisely patterned regions of material on the receptor surface.

The purpose of this talk is to introduce the LITI process and give an overview of several aspects of process characterization, modeling, and design techniques that have been investigated. The talk is given from an engineering point of view, in the hope of stimulating discussion that suggests interesting problems for applied mathematics research. After motivating LITI process modeling, we will discuss modeling of the imager energy deposition, computation of the average fluence pattern, and imager equalization. We will next discuss heat flow calculations and prediction of the probability of severe overheating defects given computed thermal profiles and measured defect rates. Following this, we will present methods that we have explored for predicting the width of patterned lines, an image-based metrology technique for robust measurement of patterned line width and edge roughness, and results for predicted versus measured line widths for a series of designed experiments. We will conclude with a discussion of the optimal design of graded and stratified LTHC layers.
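
The heat flow step lends itself to a small illustration. The sketch below is a generic one-dimensional explicit finite-difference diffusion through a layered stack; the grid, diffusivities, and LTHC source term are placeholders, not 3M process values.

```python
import numpy as np

# 1D heat equation dT/dt = alpha * d2T/dz2 + source, solved explicitly
# through a donor-like stack with a more diffusive, laser-heated band.
nz, dz, dt, steps = 200, 20e-9, 1e-10, 500      # 4 um stack, 50 ns simulated
alpha = np.full(nz, 1e-7)                        # thermal diffusivity (m^2/s)
alpha[80:120] = 3e-7                             # stand-in LTHC band
T = np.full(nz, 300.0)                           # initial temperature (K)
source = np.zeros(nz)
source[80:120] = 5e9                             # absorbed laser heating (K/s)

r = alpha * dt / dz ** 2
assert r.max() < 0.5, "explicit-scheme stability limit"
for _ in range(steps):
    T[1:-1] += r[1:-1] * (T[2:] - 2 * T[1:-1] + T[:-2]) + dt * source[1:-1]

print(f"peak temperature after {steps * dt * 1e9:.0f} ns: {T.max():.0f} K")
```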

December 12, 2003, 1:25pm, 570 Vincent Hall
Scott Shald (Coherent Technologies, Inc., Littleton, CO, Scott.Shald@ctilidar.com)

Modeling of a Natural Gas Pipeline Leak Sensor

Coherent Technologies (CTI) is building a sensor for airborne remote sensing of natural gas pipeline leaks. The sensor uses Differential Absorption Lidar (DIAL) techniques to measure the concentration of natural gas in the air. As the aircraft flies over a pipeline, multiple DIAL measurements are made to produce an image of the natural gas content in the pipeline corridor. Areas with high concentrations suggest the presence of a leak in the pipeline.

The decision to build this sensor was based upon extensive modeling of its performance. The modeling determined requirements for the sensor, such as laser power, aperture size, operating wavelengths, and scan patterns. The issues examined and the trade-offs explored will be presented.
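
The underlying DIAL relation is simple enough to state in a few lines. The sketch below uses the textbook two-wavelength formula with invented numbers, not CTI's sensor parameters.

```python
import numpy as np

def dial_column_density(p_on, p_off, dsigma):
    """Path-integrated gas concentration from a two-wavelength DIAL return.

    p_on, p_off : received powers at the absorbed ("on") and reference
                  ("off") wavelengths, from the same ground return
    dsigma      : differential absorption cross-section (m^2/molecule)

    The on-line beam is attenuated by exp(-2 * dsigma * CL) relative to
    the off-line beam (factor 2 for the round trip), so
    CL = ln(p_off / p_on) / (2 * dsigma).
    """
    return np.log(p_off / p_on) / (2.0 * dsigma)

# Illustrative only: a 5% deficit in the on-line return with
# dsigma = 1e-24 m^2 implies ~2.6e22 molecules/m^2 along the path.
print(f"{dial_column_density(0.95, 1.00, 1e-24):.2e} molecules/m^2")
```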

January 23, 2004, 1:25pm, 570 Vincent Hall
Edward Keyes (Orisar / Semiconductor Insights edward@semiconductor.com http://www.semiconductor.com)

Open Algorithmic Problems in the Analysis of Integrated Circuits

Joint work with Vyacheslav Zavadsky.

In this talk, we will discuss several graph-related problems that arise during the detailed reverse synthesis of integrated circuits (ICs). In a reverse synthesis process, the electrical design schematics for an IC are reconstructed from its physical implementation (the "layout"). The process involves generation of a global circuit netlist from the physical layout, followed by organization of the global netlist into recognizable circuits (amplifiers, buffers, adders, etc.).

Our first open problem is a general solution for localizing mis-connections between two signals in the netlist. Mis-connections occur due to errors in the reconstructed IC layout that are then incorporated into the reconstructed global netlist. In specific cases, false connections can be located as an s-t cut in a network. A more general open problem, akin to global minimum cut, is presented.
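
A toy version of the s-t cut formulation follows; the graph and edge weights are invented, and networkx supplies the max-flow/min-cut computation.

```python
import networkx as nx

# If signals s and t are falsely merged, the edges whose removal
# disconnects them (a minimum cut, weighted here by a made-up per-edge
# confidence) are the candidate mis-connections to inspect.
G = nx.Graph()
G.add_edge("s", "a", capacity=3.0)
G.add_edge("a", "b", capacity=1.0)   # suspicious low-confidence edge
G.add_edge("b", "t", capacity=3.0)
G.add_edge("s", "c", capacity=2.0)
G.add_edge("c", "t", capacity=2.0)

cut_value, (side_s, side_t) = nx.minimum_cut(G, "s", "t")
suspects = [(u, v) for u, v in G.edges if (u in side_s) != (v in side_s)]
print(cut_value, suspects)   # cut edges are where to look for layout errors
```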

The second open problem relates to the organization of the global netlist into individual circuits. We would like a method to locate a given circuit pattern within a large global netlist. This is essentially a generalized subgraph isomorphism problem on a netlist graph. We will survey existing methods that work well (typically in linear time) for conventional netlists, and then consider the special problems posed by reconstructed netlists. Of special interest is the application of implicit breadth-first relabeling techniques to limit the number of branches in a brute-force isomorphism approach. In particular, we will present an open problem from the error-correcting codes domain which we believe is essential to obtaining an efficient algorithm for the isomorphism problem.
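
As a baseline for the pattern-location problem, here is a plain VF2 subgraph-isomorphism query on a tiny labeled netlist, using networkx's matcher; the BFS-relabeling pruning from the talk is not implemented.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Netlists modeled as graphs whose nodes carry a 'kind' label
# (device type or net). Both graphs below are invented examples.
big = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (3, 5)])
nx.set_node_attributes(
    big, {1: "nmos", 2: "net", 3: "pmos", 4: "net", 5: "net"}, "kind")

pattern = nx.Graph([("p", "n1"), ("n1", "m"), ("m", "n2"), ("n2", "p")])
nx.set_node_attributes(
    pattern, {"p": "pmos", "n1": "net", "m": "nmos", "n2": "net"}, "kind")

gm = isomorphism.GraphMatcher(
    big, pattern,
    node_match=isomorphism.categorical_node_match("kind", None))
for mapping in gm.subgraph_isomorphisms_iter():
    print(mapping)   # each mapping embeds the pattern in the netlist
```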

January 30, 2004, 1:25pm, 570 Vincent Hall
Dan Wack (KLA-Tencor Dan.Wack@kla-tencor.com http://www.kla-tencor.com)

Application of Inverse Electromagnetic Scattering to Critical Dimension Measurement and Control in Semiconductor Production

Smaller device dimensions and tighter process control windows have created a need for metrology tools that measure more than just one-dimensional critical dimension (CD) features. The need to easily detect, identify, and measure changes in feature profiles is becoming critical to controlling current and future semiconductor lithography and etch processes. Measuring changes in sidewall angle and resist height, as well as detecting subtle phenomena such as line-rounding, t-topping, and resist footing, is now as important as the traditional CD line-width measurement. This additional profile information can be used to enhance process-control mechanisms and can also be used to evaluate and characterize the performance of a stepper/track module. Traditional CD metrology techniques give no indication of a measured feature's sidewall angle or height.

Spectroscopic CD (SCD) is an optical metrology technique that can address these needs. SCD is based on parallel acquisition of zero-order diffraction data by spectroscopic ellipsometry (SE), a widely used optical technique for measuring film thickness and film properties, over the 200-900 nm spectral range. This talk presents the SCD measurement technique, which is an inverse electromagnetic wave scattering method for estimating the parameters describing the shape of a grating unit cell. SCD results are compared to results from a CD SEM and a cross-section SEM. Repeatability, long-term stability, and matching data from a gate-level nominal process are also presented. These repeatability and stability tests verify that SCD meets the roadmap requirements for current and future semiconductor processes.

I will describe the mathematical framework for both the "rigorous" forward solve and the optimization techniques of the inverse method, as well as the computer resources required to achieve acceptable turn-around times on the production fab floor. Directions for research to accelerate the computational throughput will be assessed.
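
To illustrate the inverse-method loop in miniature: a least-squares fit of profile parameters against a measured spectrum. An invented smooth surrogate stands in for the rigorous forward solve (a real system would use something like RCWA), so only the optimization structure is meaningful.

```python
import numpy as np
from scipy.optimize import least_squares

wl = np.linspace(200e-9, 900e-9, 120)   # wavelength grid (m)

def forward(params, wl):
    cd, height, swa = params            # linewidth, height, sidewall angle
    # Invented smooth surrogate, NOT a physical diffraction model.
    return np.cos(4 * np.pi * height / wl) * np.exp(-cd / wl) * np.sin(swa)

true = np.array([90e-9, 250e-9, 1.48])
measured = forward(true, wl) + 0.005 * np.random.default_rng(1).normal(size=wl.size)

# Fit the profile parameters so the forward model matches the spectrum.
fit = least_squares(lambda p: forward(p, wl) - measured,
                    x0=[120e-9, 200e-9, 1.40],
                    bounds=([20e-9, 50e-9, 1.2], [300e-9, 500e-9, 1.57]))
print(fit.x)   # recovered (cd, height, sidewall angle)
```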

February 20, 2004, 1:25pm, 570 Vincent Hall
William H. Frey (General Motors Research and Development Center william.h.frey@gm.com http://www.gm.com)

Modeling Buckled Developable Surfaces for Binder Design in Sheet Metal Forming
Slides:   frey1.pdf    frey2.pdf

In the first stage of sheet metal stamping, a binder ring, an annular surface surrounding the die cavity, clamps down on the flat blank, bending it to a developable "binder wrap" surface which may be smooth or buckled. Buckles generally appear in the binder wrap when the binder ring does not lie on a smooth developable surface that spans the die cavity. However, sometimes buckles can improve the formability of the stamped part, so the ability to design buckled developable surfaces becomes desirable. Designing buckled developable surfaces requires geometric modeling of creases and other singularities in the interior of a flat sheet. In this talk, I will review the properties of such surfaces, describe a method of approximating buckled binder wrap surfaces by developable three-dimensional triangulations and discuss the insights gained from specific examples.
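
Developability has a small computable handle: at an interior vertex of a triangulation, the surface can be flattened without stretching exactly when the angle deficit vanishes. The sketch below checks this for a toy vertex star; the geometry is invented.

```python
import numpy as np

def angle_deficit(apex, ring):
    """2*pi minus the sum of triangle angles at an interior vertex.

    A triangulated surface is discretely developable where the deficit
    is zero: the vertex star unrolls into the plane, the property a
    binder-wrap approximation must preserve away from creases.
    """
    total = 0.0
    for i in range(len(ring)):
        u = ring[i] - apex
        v = ring[(i + 1) % len(ring)] - apex
        total += np.arccos(np.dot(u, v) /
                           (np.linalg.norm(u) * np.linalg.norm(v)))
    return 2 * np.pi - total

ring = [np.array([np.cos(t), np.sin(t), 0.0])
        for t in np.linspace(0, 2 * np.pi, 6, endpoint=False)]
print(angle_deficit(np.zeros(3), ring))              # ~0: developable
print(angle_deficit(np.array([0, 0, 0.5]), ring))    # > 0: a cone point
```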

February 27, 2004, 1:25pm, 570 Vincent Hall
Dariusz Madej (Symbol Technologies dmadej@symbol.com http://www.symbol.com)

Matching Wavelets to Signals: Speckle Noise Filtering in a Laser Bar Code Scanner
Slides:   pdf
Talks(A/V)    Talks(Audio)

Speckle noise, which arises in coherent illumination of a diffusing target, is a principal factor limiting the performance of a laser bar code scanner and its miniaturization. Speckle noise cannot be eliminated using traditional signal processing methods such as Fourier-domain filtering, averaging of several signals, or denoising by wavelet shrinkage.

The author will present a denoising method using nonlinear filtering in a "quasi" wavelet space. First, the signal is transformed by means of an integral transformation resembling the Continuous Wavelet Transform (CWT). However, the family of wavelets at different scales is generated not by dilation, as in the CWT, but in another, systematic way. The proposed transformation, which we call the QCWT (Quasi-CWT), appears to be invertible, and each wavelet fulfills the wavelet admissibility condition. The advantage of this approach is that the "wavelets" closely match elements of bar code signals, which allows for better performance. Local maxima of the transformed signal are used in the denoising process. Additionally, statistical properties inferred from the signal are used to reduce the required computation.

The proposed method allows bar codes to be decoded at a signal-to-noise ratio 10 dB lower, and it is compared with other advanced methods used in laser scanners.
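
Since the QCWT itself is not specified in the abstract, the sketch below only illustrates the general matched-kernel idea: correlate the signal with a bank of kernels at several widths and keep strong local maxima of the response. The kernels and thresholds are generic stand-ins, not Symbol's wavelets.

```python
import numpy as np

def kernel(width):
    # Derivative-of-Gaussian edge detector, unit-normalized.
    t = np.arange(-3 * width, 3 * width + 1) / width
    g = -t * np.exp(-t ** 2 / 2)
    return g / np.linalg.norm(g)

rng = np.random.default_rng(2)
signal = np.repeat([0.0, 1, 0, 1, 1, 0], 80)          # toy bar pattern
noisy = signal + 0.8 * rng.normal(size=signal.size)   # heavy speckle-like noise

# Responses at several widths; take the best response over scales.
resp = np.array([np.convolve(noisy, kernel(w), mode="same") for w in (2, 4, 8)])
best = np.abs(resp).max(axis=0)
edges = [i for i in range(1, len(best) - 1)
         if best[i] >= best[i - 1] and best[i] > best[i + 1] and best[i] > 2.0]
print(edges)   # estimated edge locations (true edges: 80, 160, 240, 400)
```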

March 5, 2004, 1:25pm, 570 Vincent Hall
Curt Flory (Agilent Technologies curt_flory@agilent.com http://www.agilent.com)

Intuitive Understanding of Grating-Coupled Radiation Using Green's Function Methods
Talks(A/V)    Talks(Audio)    Paper: pdf

Radiation scattered from diffraction gratings on the surface of waveguides is analyzed using the Volume Current Method. The framework allows separation of the effects of the grating array's global periodicity from the effects of the specific shape of the individual grating elements. A straightforward analogy between the influence of the grating element shape and the behavior of phased-antenna array systems allows a clear and intuitive understanding of the effects of blazed gratings on the directionality of grating-coupled radiation.
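
The phased-array analogy can be reproduced in a few lines: the far field of a periodic array factors into an element pattern times an array factor, whose peaks are the diffraction orders. Parameters below are generic, not from the talk.

```python
import numpy as np

# Array factor of n_elems identical point radiators with spacing
# `period` (in units of the wavelength). A blazed (tilted) element
# shape would multiply this by an element factor favoring one order.
wavelength, period, n_elems = 1.0, 1.5, 40
k = 2 * np.pi / wavelength
theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)

phase = k * period * np.sin(theta)
af = np.abs(np.sum(np.exp(1j * np.outer(np.arange(n_elems), phase)), axis=0))

# Peaks sit where sin(theta) = m * wavelength / period, m = 0, +-1.
orders = theta[af > 0.9 * af.max()]
print(np.degrees(orders))   # clusters near 0 and +-41.8 degrees
```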

March 26, 2004, 1:25pm, 570 Vincent Hall
Ilya Kolmanovsky (Ford Research and Advanced Engineering, Ford Motor Company, Dearborn, MI) ikolmano@ford.com

Parameter Governors for Constrained Nonlinear Systems

Pointwise-in-time state and control constraints represent some of the key challenges in many automotive powertrain control problems. Although for specific applications engineers are usually successful in treating constraints on a case-by-case basis, systematic control system design techniques that deal with constraints are of significant interest and hold promise to greatly reduce development time and effort.

In particular, Model Predictive Control (MPC) provides a flexible and powerful framework for enforcing constraints while optimizing system performance. MPC is based on on-line dynamic optimization of the control input, subject to constraints, over a receding horizon. By augmenting an MPC controller with on-line parameter estimation and accounting upfront for uncertainties and unmeasured disturbances in its design, robust constraint enforcement can be guaranteed. At the same time, implementing a general MPC controller on memory- and chronometrics-limited automotive microcontrollers can be intricate. Suboptimal schemes that apply on-line optimization only to selected parameters in the nominal control laws can reduce the computational requirements while still dealing effectively with pointwise-in-time constraints. These reduced-complexity embedded optimization (EO) algorithms are referred to as parameter governors.

The talk will start by reviewing some of the powertrain control applications in which dealing with constraints is an important priority. The parameter governors and their theoretical properties will be described next and illustrated with several examples. The results will be specialized to three classes of parameter governors: reference governors, feed-forward governors, and gain governors. Other applications of parameter-governing ideas, such as to on-line parameter estimation, will be touched upon.
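
A minimal reference-governor sketch, one member of the parameter-governor family described above, on an invented first-order plant; it optimizes a single scalar instead of MPC's full input sequence.

```python
import numpy as np

# Stable discrete-time plant x+ = A x + B v, output y = C x, with the
# pointwise-in-time constraint y <= y_max. Toy values, not a powertrain.
A, B, C = 0.9, 0.1, 1.0
y_max, horizon = 1.0, 50

def predict_peak(x0, v):
    """Largest predicted output over the horizon for constant input v."""
    x, peak = x0, -np.inf
    for _ in range(horizon):
        x = A * x + B * v
        peak = max(peak, C * x)
    return peak

def governed_reference(x0, r):
    # Largest beta in [0, 1] keeping the prediction feasible: a scalar
    # line search in place of a full MPC optimization.
    for beta in np.linspace(1.0, 0.0, 101):
        if predict_peak(x0, beta * r) <= y_max:
            return beta * r
    return 0.0

print(governed_reference(x0=0.0, r=2.5))   # reference scaled back to ~1.0
```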

April 2, 2004, 1:25pm, EE 3/180 (note change of location)
Richard A. Derrig, Ph.D. (President, OPAL Consulting LLC; Visiting Scholar, Wharton School, University of Pennsylvania)

Mathematical Models for Insurance Fraud Detection
Paper:  Fraud Classification Using Principal Component Analysis of RIDITs (pdf)
Talks(A/V)    Talks(Audio)

A discussion of some joint research with folks at the University of Texas on fraud detection via binary classification of (insurance claim) characteristic vectors in n-space. This work fits into the "data mining" slot known as "unsupervised" learning; that is, there are no known assignments to the two classes (fraud / no fraud), but rather known or assumed responses (vector components) that are monotone in a latent variable (fraud / no fraud). The origins of the technique are in educational testing (marketing), where the feature vectors are scored answers to questions and the latent variable is pass/fail (buy / no buy). Comparisons with other common modelling results for fraud and an application to structural changes in databases will be covered. No prior knowledge of insurance will be assumed or needed.
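
A rough sketch of the PRIDIT construction named in the cited paper, run on random stand-in data: RIDIT-score each ordinal indicator, then rank claims by the first principal component of the scored matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.integers(1, 6, size=(200, 8))   # 200 claims, 8 ordinal red flags (fake data)

def ridit(column):
    # RIDIT score of each response: P(lower response) - P(higher
    # response), so scores lie in [-1, 1] and are monotone in rank.
    scores = np.empty(column.shape, dtype=float)
    for k in np.unique(column):
        scores[column == k] = np.mean(column < k) - np.mean(column > k)
    return scores

R = np.column_stack([ridit(X[:, j]) for j in range(X.shape[1])])
R -= R.mean(axis=0)
_, _, vt = np.linalg.svd(R, full_matrices=False)
suspicion = R @ vt[0]        # first principal component = latent fraud score
print(suspicion[:5])
```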

April 9, 2004, 1:25pm, 570 Vincent Hall
William Morokoff (Director, New Product Research, Moody’s KMV, William.Morokoff@MKMV.com http://www.moodyskmv.com)

Modeling and Computational Challenges in Measuring Portfolio Credit Risk
Slides:   pdf

Credit risk refers to the risk of losses on financial contracts due to a counterparty’s failure to pay its obligations. Such defaults are often associated with bankruptcies; notable recent examples include Enron and K-Mart. In an effort to diversify away this credit risk, banks create portfolios of credit exposures (loans, etc.) to thousands of counterparties. Measuring the amount of risk retained and the probability of large losses on the portfolio (Value at Risk) are challenging modeling problems based on estimating probabilities of rare events such as multiple defaults. Models that capture the complex nature of the default correlations across counterparties generally require Monte Carlo simulation to evaluate.

This talk will focus on several modeling problems associated with credit portfolios and related derivatives such as basket default swaps and collateralized debt obligations (CDOs). These include capturing correlation risk and modeling credit deltas (e.g. sensitivity to default risk or exposure size). The computational challenges associated with the Monte Carlo methods will also be addressed, and an application of importance sampling for the credit portfolio problem will be presented.
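
A compact sketch of importance sampling in a one-factor Gaussian-copula loss model (a standard textbook setup, not necessarily Moody's KMV's model): shift the common factor toward bad states and reweight by the likelihood ratio. All parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n_names, p, rho, shift = 100, 0.01, 0.3, -2.5   # default prob, correlation, IS shift
c = norm.ppf(p)                                  # default threshold
n_paths = 20_000

z = rng.normal(shift, 1.0, n_paths)              # shifted systematic factor
log_w = -shift * z + 0.5 * shift ** 2            # log N(0,1)/N(shift,1) ratio
eps = rng.normal(size=(n_paths, n_names))
defaults = (np.sqrt(rho) * z[:, None] + np.sqrt(1 - rho) * eps < c).sum(axis=1)

loss_level = 20                                  # "large loss" event
prob = np.mean((defaults > loss_level) * np.exp(log_w))
print(f"P(more than {loss_level} of {n_names} names default) ~ {prob:.2e}")
```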

April 30, 2004, 1:25pm, 570 Vincent Hall
Apo Sezginer (Invarium Inc., San Jose, CA) apo.sezginer@invarium.com

How to Fit 100 Million Transistors on a Thumbnail
Slides:   pdf

This will be an introductory talk on sub-wavelength optical projection lithography. "Sub-wavelength" refers to the fact that the features in the CPU and memory of your PC are smaller than the wavelength of the light by which they are printed. The field and wave nature of light is impossible to ignore when imaging at the sub-wavelength scale; therefore, designing chips today involves electromagnetic wave modeling of optical projection. The talk will give a flavor of the fundamental limits of resolution, resolution enhancement techniques, and the approximations and numerical algorithms used to model projection lithography, along with their shortcomings.
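
The sub-wavelength regime can be checked with the classical Rayleigh scaling CD = k1 * lambda / NA; the numbers below are generic scanner values, not from the talk.

```python
# With 193 nm light, NA = 0.75, and an aggressive k1 = 0.4, the minimum
# printable feature is roughly half the wavelength.
wavelength_nm, na, k1 = 193.0, 0.75, 0.4
cd_nm = k1 * wavelength_nm / na
print(f"printable feature ~{cd_nm:.0f} nm with {wavelength_nm:.0f} nm light")
```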
