Grégoire Allaire (Centre de Mathématiques Appliquées, École Polytechnique) gregoire.allaire@polytechnique.fr http://www.cmap.polytechnique.fr/~allaire/
Shape Optimization Using Sensitivity Analysis and a Level-Set Method (workshop) Slides: pdf Figures and Movies: http://www.cmap.polytechnique.fr/~optopo/index_en.html
This is a joint work with F. Jouve and A.-M. Toader. The typical problem of structural optimization is to find the "best" structure which is, at the same time, of minimal weight and of maximum strength or which performs a desired deformation. In this context we propose a new numerical method of shape optimization based on a combination of the classical shape derivative and of the level-set method for front propagation. We implement this method in two and three space dimensions for a model of linear or non-linear elasticity. We consider various objective functions and constraints on the perimeter. The shape derivative is computed by an adjoint method. The cost of our numerical algorithm is moderate since the shape is captured on a fixed Eulerian mesh. Although this method is not specifically designed for topology optimization, it can easily handle topology changes. However, the resulting optimal shape is strongly dependent on the initial guess. We make some comparisons with the homogenization method for topology optimization.
Uri M. Ascher (Department of Computer Science, University of British Columbia) ascher@cs.ubc.ca
Computational Methods for Distributed Parameter Estimation in dD (workshop) Slides: http://www.cs.ubc.ca/~ascher/ima03.pdf
Inverse problems involving recovery of distributed parameter functions arise in many applications. Many practical instances require data inversion where the forward problem can be written as a system of elliptic or parabolic partial differential equations. Realistic instances of such problems in 3D can be very computationally intensive and require care in the selection and development of appropriate algorithms.
The necessary conditions for the inverse optimization problem yield a large, nonlinear, tightly coupled set of PDEs. We devise multigrid methods as well as preconditioned Krylov solvers for the rapid solution of such systems.
The likely presence of discontinuities in the solution leads us to consider regularization functionals that do not penalize such model discontinuities. The utilization of Huber's norm from robust statistics will be discussed. This is joint work with E. Haber.
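The edge-preserving effect of the Huber penalty mentioned above can be sketched in a few lines (a generic illustration of the idea, not the authors' implementation): the penalty is quadratic for small model jumps but only linear for large ones, so a sharp discontinuity is penalized far less than under a quadratic (Tikhonov-type) functional.

```python
import numpy as np

def huber(t, delta):
    """Huber penalty: quadratic for |t| <= delta, linear beyond.
    Large jumps in the model are penalized linearly, so discontinuities
    are not smoothed away as strongly as with an L2 penalty."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

# Compare penalties on the gradient of a piecewise-constant model:
m = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # one unit jump
jumps = np.diff(m)
l2_cost = 0.5 * np.sum(jumps**2)
huber_cost = np.sum(huber(jumps, delta=0.1))
print(l2_cost, huber_cost)  # 0.5 vs 0.095: the jump costs far less under Huber
```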
Roscoe A. Bartlett (Sandia National Laboratories) rabartl@sandia.gov
An Overview of MOOCHO (previously known as rSQP++) and the Use of Reduction/Transformation Vector Operators in Linear Algebra Interfaces for Advanced Numerical Algorithms and Computing Environments (minisymposium) Slides: pdf ps
Here we describe the architecture and linear algebra interfaces of MOOCHO (Multifunctional Object-Oriented arCHitecture for Optimization). MOOCHO is designed for large-scale invasive simultaneous analysis and design (SAND) using primarily SQP-related methods. Originally developed under the name rSQP++ at CMU, the framework and interfaces have been modified to allow completely transparent use of fully parallel distributed-memory linear algebra kernels and applications (i.e., PDEs). A MOOCHO algorithm developer can write optimization code that will run in a variety of distributed-memory environments without ever seeing any message-passing calls or needing to know anything about parallel programming. All of this is possible due to the design of a flexible interface for vector reduction/transformation operators (RTOp). The design approach for RTOp is based on the "visitor" pattern, also known as function forwarding. The impact of the RTOp design extends well beyond SAND optimization: it also effectively addresses the special needs of many other types of abstract numerical algorithms, such as iterative linear equation solvers, nonlinear equation solvers, explicit and implicit ODE and DAE solvers, sensitivity analysis (i.e., bifurcation) methods, and uncertainty quantification methods.
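The visitor-style forwarding behind RTOp can be sketched in a few lines of Python (the names and interfaces here are hypothetical, not MOOCHO's actual API): the algorithm defines a reduction operator object, and the vector implementation applies it chunk by chunk and combines the partial results, so algorithm code never touches the data layout or any message-passing call.

```python
# Sketch of an RTOp-style reduction/transformation operator. The two
# chunks below stand in for two processors' local pieces of a
# distributed vector; in a real setting, combine() would be an
# MPI all-reduce.

class DotProductOp:
    """Reduction operator computing <x, y> over vector chunks."""
    def apply(self, x_chunk, y_chunk):
        # partial reduction over one local chunk
        return sum(a * b for a, b in zip(x_chunk, y_chunk))

    def combine(self, partials):
        # combine the partial results from all chunks
        return sum(partials)

def vector_apply(op, x_chunks, y_chunks):
    """'Visitor' entry point: the vector forwards the operator to its data."""
    return op.combine(op.apply(xc, yc) for xc, yc in zip(x_chunks, y_chunks))

x = [[1.0, 2.0], [3.0]]
y = [[4.0, 5.0], [6.0]]
print(vector_apply(DotProductOp(), x, y))  # 1*4 + 2*5 + 3*6 = 32
```

The point of the design is that `DotProductOp` (or any user-defined reduction) is written once and runs unchanged over serial, out-of-core, or distributed vectors.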
Martin P. Bendsoe (Department of Mathematics, Technical University of Denmark) M.P.Bendsoe@mat.dtu.dk
Topology Design of Continuum Structures - Basic Concept and New Applications (workshop)
This talk will discuss computational models that allow for the optimization of the lay-out of structures, MEMS, and materials. After a brief introduction to the fundamental modelling paradigms of topology design, the emphasis will be on the optimization of the properties of composite materials and on the design of band-gap structures for acoustic wave propagation.
Recent reference: M.P. Bendsoe & O. Sigmund, Topology Optimization - Theory, Methods and Applications. Springer Verlag, Berlin-Heidelberg, 2003 (just published).
Steven J. Benson (Argonne National Laboratory) benson@mcs.anl.gov
TAO: An Optimization Toolkit for Large-Scale Applications (minisymposium) Slides: pdf
The Toolkit for Advanced Optimization (TAO) offers a suite of optimization solvers for applications of unconstrained and bound constrained minimization. These solvers can be used on a large set of serial and parallel architectures. TAO also allows applications to customize the data structures and linear solvers used in its optimization solvers. This flexibility offers the potential to achieve high performance on large-scale applications.
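As a flavor of the bound-constrained problem class TAO targets, here is a minimal projected-gradient sketch (an illustration of the problem setting only, not TAO's actual solvers, which include far more sophisticated Newton and trust-region methods):

```python
import numpy as np

def projected_gradient(grad, x0, lower, upper, step=0.1, iters=500):
    """Minimal projected-gradient sketch for bound-constrained
    minimization: take a gradient step, then project back onto
    the box [lower, upper]."""
    x = np.clip(np.asarray(x0, float), lower, upper)
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lower, upper)
    return x

# Minimize f(x) = (x0 - 3)^2 + (x1 + 1)^2 subject to 0 <= x <= 2;
# the unconstrained minimizer (3, -1) is infeasible, so both bounds activate.
grad = lambda x: 2 * (x - np.array([3.0, -1.0]))
x_star = projected_gradient(grad, [1.0, 1.0], lower=0.0, upper=2.0)
print(x_star)  # [2. 0.]
```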
Lorenz T. Biegler (Chemical Engineering Department, Carnegie Mellon University, Pittsburgh, PA 15213) biegler@cmu.edu
Optimization Methods for DAE Systems (workshop) Slides: pdf
This talk discusses recent advances with simultaneous methods that solve optimization problems formulated with DAE models. Included are parameter estimation problems, problems in engineering design and optimal control problems. Optimization algorithms and implementation approaches will be reviewed and a number of applications drawn from chemical process engineering will be used to highlight these approaches.
George Biros (Courant Institute, New York University) biros@cs.nyu.edu
Boundary Integral Formulations for Shape Optimization of Elliptic PDEs (workshop) Slides: pdf
Boundary integral formulations have been extensively used for the analysis and numerical solution of elliptic partial differential equations. However, with the exception of inverse scattering problems, there has been limited work in applying boundary integral equations to the solution of optimal control and optimal design problems. There are several reasons for that: prohibitive complexity of efficient implementations for non-Laplacian kernels, difficulty with distributed force terms, and restriction to problems with piecewise constant coefficients. In this talk I will discuss a new method for the forward problem, the Embedded Boundary Integral method, and then I will present an algorithm for shape optimization of Stokes flows with distributed forces. The new formulation circumvents the need for unstructured meshes while accurately resolving the interface.
Paul T. Boggs (Sandia National Laboratories, California) ptboggs@ca.sandia.gov
A Source Inversion Problem Requiring Rapid Response (workshop)
The problem to be discussed is to find the source of a toxic release in a large building, e.g., an airport. In such cases, time is of the essence and is clearly more important than accuracy, since even a relatively crude approximation of the source will allow timely response by emergency managers to contain the release and to take appropriate crowd control actions. Air flow in a building is controlled by the heating, ventilation, and air conditioning (HVAC) system. It is a complicated problem to model and solve such a system, but the fact is that this only needs to be done once, i.e., we can assume a steady state flow into which a toxin is released. The dispersion of the toxin is governed by an advection-diffusion equation which depends on the air flow and the problem is to determine the most likely source (or sources) given concentration readings at various points in the building. We can assume a small array of PC-class computers, but cannot assume a massively parallel system to do the computations. Many questions need to be addressed, including the appropriate grid size, which has to be small enough to capture sufficient details of the flow, but large enough to allow rapid solution, the model for the release of the toxin, the appropriate boundary conditions, the need for constraints, and the need to begin the computing with poor initial data and to add new data as received. We report on the progress to date in the development of the model and the use and benefits of Sundance, an innovative software system for creating and using discrete differential operators.
Paul T. Boggs (Sandia National Laboratories, California) ptboggs@ca.sandia.gov
The Implementation of an Object-Oriented SQP Method for Solving PDE-Constrained Problems (minisymposium)
We briefly describe our version of the SQP algorithm for solving general large-scale nonlinear programming problems, commenting in particular on the types of constraints that may arise. In particular, we consider PDE constraints along with related inequality constraints that may be needed in a comprehensive model. The advantages of a general abstraction for the linear algebra components that arise will be shown. A particular such system, the Trilinos Solver Framework (TSF), will be discussed. Some examples, including the ease with which new linear system solving strategies can be constructed, will be given. A brief demonstration will also be shown.
Jeff Borggaard (Department of Mathematics, Interdisciplinary Center for Applied Mathematics, Virginia Tech) jborggaard@vt.edu
Continuous Sensitivity Equations and Numerical Noise (workshop) Slides: pdf
For optimal design problems, approximating differential equation constraints can lead to numerical noise in the objective function to be minimized. This noise is introduced through "efficiency strategies" (e.g., mesh adaptation, adaptive time step selection) or "stabilization strategies" (e.g., flux limiters, turbulence models). In many problems, this noise has adverse effects on gradient-based optimization algorithms.
This talk will address the role continuous sensitivity equations can play in blunting these adverse effects. Since differentiation is performed before approximation, the effects of noise on the gradient calculation can be controlled in many cases. We investigate this using a variety of numerical examples where constraints range from ordinary differential equations to flow optimization problems with constraints given by Navier-Stokes and Euler equations. Some discussion of the convergence of optimization algorithms using continuous derivatives will be presented.
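The differentiate-before-discretize idea can be illustrated on a scalar ODE (a toy example of the continuous sensitivity equation approach, not drawn from the talk): for the state equation dx/dt = -p*x, differentiating with respect to p before discretizing gives the sensitivity equation ds/dt = -x - p*s, which is integrated alongside the state.

```python
import math

def state_and_sensitivity(p, x0=1.0, T=1.0, n=10000):
    """Forward Euler for the state dx/dt = -p*x together with its
    continuous sensitivity equation ds/dt = -x - p*s, s(0) = 0.
    Because the derivative is taken before discretization, the computed
    gradient varies smoothly with p even though the ODE solve is only
    approximate."""
    dt = T / n
    x, s = x0, 0.0
    for _ in range(n):
        # simultaneous Euler update (both right-hand sides use old x, s)
        x, s = x + dt * (-p * x), s + dt * (-x - p * s)
    return x, s

x, s = state_and_sensitivity(p=2.0)
exact_x = math.exp(-2.0)       # x(1) = x0 * e^{-p}
exact_s = -math.exp(-2.0)      # dx/dp at T = 1 is -T * x0 * e^{-pT}
print(x - exact_x, s - exact_s)  # both are O(dt) discretization errors
```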
Tony F. Chan (Department of Mathematics, UCLA) (poster)
A Level Set Method for Electrical Impedance Tomography
Joint work with Eric T. Chung (Department of Mathematics, UCLA) and Xue-Cheng Tai (Department of Mathematics, University of Bergen).
Electrical impedance tomography is a widely investigated problem with many applications in the physical and biological sciences. It is well known that the inverse problem is nonlinear and highly ill-posed. Many numerical techniques with different advantages have been proposed to solve the problem. In this work, we propose a numerical scheme for the problem with piecewise constant coefficients based on the level set method. Numerical tests show that our method is able to recover a sharp interface and can tolerate a higher level of noise in the data.
Guy Chavent (Universite Paris-Dauphine and Inria-Rocquencourt, France) guy.chavent@inria.fr
Curvature Steps and Geodesic Moves for Non Linear Least Squares Descent Algorithms (workshop/poster session) Slides: html
Descent algorithms for the minimization of ||F(x)||^2 proceed from one estimate x_k to the next by moving away (line search) from x_k in a direction y_k specific to the algorithm, until the first stationary point of ||F(x)|| is attained.
1) Curvature step: define
    p, the image under F of the search half-line in parameter space,
    nu, the arc-length along this path,
    r(nu) = ||p(nu)||, the norm of the residual,
    nu_bar, the first nu at which r(nu) reaches a stationary point,
    r_bar, the corresponding value of the residual.
The path p is constrained to stay on the "output set," whose shape is not known. So we shall suppose only that a lower bound R on the radius of curvature along the path is available, and prove that, among all paths p(nu) which leave p0 in the descent direction v0 with curvature bounded by 1/R, there is one (and generally only one) "worst path" p_M(nu), which gives the largest value r_bar_M of the residual at the first stationary point and the smallest corresponding value nu_bar_M.
The "curvature step" alpha_k defined by
    alpha_k * ||F'(x_k).y_k|| = nu_bar_M
ensures that
- one does not pass the first stationary point,
- the residual drop is larger than r0 - r_bar_M,
and it is the largest step that ensures these two properties.
2) Geodesic move: the straighter the path, the larger the guaranteed decrease of the residual! Hence we replace the line search by a search along a curve whose image p by F has the smallest curvature, that is, p is a geodesic of the output set D. This is put to work in the Gauss-Newton algorithm with a second-order approximate geodesic g_approx. The added cost is only that of the second directional derivative. As a byproduct of the determination of g_approx, one obtains the information required to perform a curvature step at no additional cost, and one can move to the next point.
A numerical comparison of a Gauss-Newton algorithm with geodesic move and curvature step against a classical Gauss-Newton algorithm with an Armijo line search will be presented on test examples exhibiting increasingly strong singularities.
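For reference, the classical baseline in that comparison can be sketched as follows (a generic Gauss-Newton with Armijo backtracking on a toy residual, not the authors' code):

```python
import numpy as np

def gauss_newton_armijo(F, J, x0, iters=50, c=1e-4):
    """Classical Gauss-Newton with Armijo backtracking line search for
    min ||F(x)||^2: solve the linearized least-squares subproblem for a
    step, then halve the step length until sufficient decrease holds."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r, A = F(x), J(x)
        step, _, _, _ = np.linalg.lstsq(A, -r, rcond=None)
        phi0 = r @ r
        slope = 2 * r @ (A @ step)   # directional derivative of ||F||^2
        t = 1.0
        while F(x + t * step) @ F(x + t * step) > phi0 + c * t * slope:
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * step
    return x

# Toy residuals F(x) = [x0 + x1 - 3, x0*x1 - 2]; the roots are (1, 2) and (2, 1).
F = lambda x: np.array([x[0] + x[1] - 3.0, x[0] * x[1] - 2.0])
J = lambda x: np.array([[1.0, 1.0], [x[1], x[0]]])
sol = gauss_newton_armijo(F, J, [0.5, 3.0])
print(sol, np.linalg.norm(F(sol)))
```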
Michael S. Eldred (Principal Member of Technical Staff, Optimization and Uncertainty Estimation Department, Sandia National Laboratories) mseldre@sandia.gov http://endo.sandia.gov/~mseldre/
DAKOTA: Virtual Prototyping with Large-Scale Engineering Simulations (minisymposium) Slides: html pdf ppt
DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) is a general purpose software toolkit for performing systems analysis and design on high performance computers. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods, uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods, parameter estimation with nonlinear least squares methods, and sensitivity/primary effects analysis with design of experiments and parameter study capabilities. These capabilities may be used on their own or as components within advanced strategies for surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the software components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment as well as a platform for rapid prototyping of advanced solution methodologies.
Joerg M. Gablonsky (The Boeing Company) Joerg.M.Gablonsky@boeing.com
Effective Parallel Optimization of Expensive Functions (poster session)
Joint work with Paul Frank.
Optimization of simulation based functions is an important topic in an industrial environment. In many cases these legacy codes cannot provide derivatives, run only on certain computing platforms, and do not take advantage of parallel computing. Design Explorer, a Boeing software package, uses methods from statistics, optimization, and computer science to efficiently optimize these functions. A new addition is a parallel framework that allows us to use a variety of different computing platforms and many computers at the same time.
Edward Michael Gertz (University of Wisconsin, Madison) gertz@mcs.anl.gov
An Object Oriented Architecture for Nonlinear Constrained Optimization (minisymposium) Slides: pdf
IOTR is a nonlinear programming code being jointly developed by Josh Griffin and Philip Gill at the University of California, San Diego, and by Stephen Wright and E. Michael Gertz at the University of Wisconsin, Madison. We discuss the mathematical basis for this code, but focus on its object-oriented design. We discuss the code's class structure and the motivation for its design, and show how this structure may be useful for simulation-based models.
Philip E. Gill (Department of Mathematics, University of California, San Diego) peg@optimal.ucsd.edu http://scicomp.ucsd.edu/~peg/
SQP Methods for Large-Scale Optimization (workshop)
We consider some algorithmic issues associated with the development of general-purpose software for large-scale nonlinearly constrained optimization. Our large-scale constrained optimization package SNOPT implements an SQP method that utilizes a positive-definite limited-memory quasi-Newton approximation to the Lagrangian Hessian. The efficiency of any large-scale SQP method is critically dependent on the method used to solve the QP subproblem. Although the method of SNOPT is designed to use any "black-box" QP solver, the base version employs the QP code SQOPT, which is based on maintaining an implicit null-space basis and the dense triangular factor of an approximate reduced Hessian. The need to compute and store this factor limits SNOPT to problems with many thousands of constraints and variables but a moderate number of degrees of freedom. We consider a primal-dual interior-point QP solver that provides a viable SQP algorithm for optimization problems of arbitrary dimension.
We conclude with some comments on the use of exact second derivatives in SQP methods.
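The null-space construction underlying reduced-Hessian QP codes like SQOPT can be sketched for an equality-constrained QP (a dense toy version for illustration; SQOPT itself maintains sparse factors and treats inequalities via an active-set strategy):

```python
import numpy as np

def nullspace_qp(H, c, A, b):
    """Solve min 0.5 x'Hx + c'x  subject to  Ax = b by the null-space
    method: write x = x_p + Z y with A x_p = b and AZ = 0, then solve a
    small system in the reduced Hessian Z'HZ."""
    # particular solution of Ax = b
    x_p, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    # orthonormal basis Z for the null space of A, via the SVD
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-12))
    Z = Vt[rank:].T
    # reduced Hessian and reduced gradient
    Hr = Z.T @ H @ Z
    gr = Z.T @ (c + H @ x_p)
    y = np.linalg.solve(Hr, -gr)
    return x_p + Z @ y

# min 0.5*(x0^2 + x1^2 + x2^2) s.t. x0 + x1 + x2 = 3  ->  x = (1, 1, 1)
H = np.eye(3); c = np.zeros(3)
A = np.array([[1.0, 1.0, 1.0]]); b = np.array([3.0])
x_qp = nullspace_qp(H, c, A, b)
print(x_qp)  # [1. 1. 1.]
```

The dense triangular factor mentioned in the abstract corresponds to factoring the reduced Hessian `Hr`, whose dimension is the number of degrees of freedom, not the full variable count.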
Mark S. Gockenbach (Michigan Technological University) msgocken@mtu.edu
The Standard Vector Library (minisymposium) Slides: pdf ps
The Hilbert Class Library (HCL) is a library of C++ classes that was created by the speaker and W. W. Symes to address the software needs of simulation-driven optimization. The advantages of HCL derive mainly from its abstract nature; because the classes are defined (mostly) by the mathematical properties of vectors, operators, etc., optimization algorithms coded in the HCL framework can be applied to problems of arbitrary complexity.
The Standard Vector Library (SVL) is an attempt to build on the achievements of HCL while addressing some of its shortcomings. The primary issue in the design of SVL is the definition of a suitable vector class. The challenge is to define "vector" mathematically and yet allow for the efficient application of a variety of array operations that do not form a part of the definition of vector. This talk will emphasize our proposed definition of the vector concept and show how it allows optimization algorithms to transparently manipulate vector data structures of various types: simple in-core arrays, disk files, and distributed arrays.
Susana Gómez (IIMAS, National University of Mexico) susanag@servidor.unam.mx
Global and Local Optimisation for Oil Reservoir Modelling (workshop) Slides: html pdf ppt
History matching (parameter estimation) plays an important role in reservoir characterisation and monitoring. Due to the ill-posedness of this inverse problem, the least-squares optimisation has many local optima with a good match, which produce alternative scenarios of production. These alternative solutions also provide a way to deal with the uncertainty of the modelling process.
Our aim is the development of global optimisation methods that obtain many locally optimal solutions with a good match to the data in a stable way, and in reasonable computer time, to produce efficient decision-making tools. As the performance of the Tunnelling global method used in this work depends on the local search methods used, we present here the results obtained when using a Truncated Gauss-Newton method (TRON) and a Limited Memory Quasi-Newton method (LBFGSB). The speed and robustness of the methods will be shown when tested on synthetic and real field cases.
To get stable solutions of this ill-posed inverse problem in the presence of noisy data, the regularisation effect of stopping the Conjugate Gradient iterations using the L-curve, within the Truncated Gauss-Newton Method, and the use of preconditioners will be discussed.
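The regularizing effect of truncating the conjugate-gradient iteration can be demonstrated on a synthetic ill-posed system (a generic CGLS sketch, not the TRON or Tunnelling codes):

```python
import numpy as np

def cgls(A, b, iters):
    """Conjugate gradients on the normal equations A^T A x = A^T b,
    started from zero and stopped after `iters` steps. Early stopping
    acts as regularization: few iterations give a small-norm, stable
    solution, while many iterations start fitting the noise."""
    x = np.zeros(A.shape[1])
    r = b.copy()
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Synthetic ill-posed problem: singular values decaying to 1e-6, noisy data.
rng = np.random.default_rng(0)
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -6, n)) @ V.T
b = A @ np.ones(n) + 1e-4 * rng.standard_normal(n)
early, late = cgls(A, b, 3), cgls(A, b, 15)
print(np.linalg.norm(early), np.linalg.norm(late))  # iterate norm grows with iterations
```

The L-curve criterion of the abstract picks the iteration count at the corner of the (residual norm, solution norm) trade-off that this run traces out.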
Nicholas I.M. Gould (Computational Science & Engineering Department, Rutherford Appleton Laboratory, England) n.gould@rl.ac.uk
SQP, SLP and Interior-point methods for large-scale nonlinear programming (workshop) Slides: pdf ps
In this talk, we will discuss methods that we and others have suggested as being appropriate for large-scale nonlinear programming. Until recently, we had believed that all that was holding back general purpose large-scale Sequential Quadratic Programming (SQP) methods was the lack of suitable quadratic programming (QP) algorithms. Consequently, we invested considerable effort in designing suitable QP methods. To our horror, we now believe that there are still outstanding issues that militate against SQP. In particular, the cost of solving a sequence of QPs is extravagant in comparison with what can be achieved by other methods, and the presence of local QP minimizers can cause havoc when designing descent-based methods.
Our attention has thus recently turned to cheaper alternatives to SQP. We are currently investigating Sequential Linear Programming (SLP) methods with equality-constrained QP acceleration, and a variety of robust interior-point methods. In this talk we plan to report on the current status of these projects.
Michael Hintermüller (SFB - Optimierung und Kontrolle, Institute of Mathematics, University of Graz, Austria) michael.hintermueller@kfunigraz.ac.at
First and Second Order Shape Sensitivity and Level Set Methods in Optimal Control of PDEs and Image Segmentation (workshop) Slides: pdf
State constrained optimal control problems are difficult to resolve numerically in a primal-dual framework. This is due to the measure properties of the Lagrange multiplier associated with the state constraints. In the first part of this talk a new paradigm for solving this problem class is introduced. The approach is based on a free boundary problem reformulation of the first order optimality system and an associated minimization problem with the boundary (interface) between the active and inactive sets as the optimization variable. The shape gradient of the objective functional, which vanishes at the optimal solution, serves as the velocity field in a level set based scheme.
In the second part of this presentation we focus on image segmentation problems. The aim in image segmentation is to find boundary curves (contours) of regions of approximately homogeneous gray values. The first approach is related to the active contour technique and the minimization of an edge detector based functional. Numerically this minimization process is realized by applying a shape Newton technique. This technique can be interpreted as a preconditioned gradient method yielding smooth contours during the iteration and allowing larger time step sizes in the level set equation. In our numerical tests it is superior to shape gradient methods. Finally, an analogous shape Newton technique is applied to minimizing the Mumford-Shah functional, another paradigm for image segmentation. Here an additional difficulty is related to the fact that the shape Hessian is too expensive to be evaluated. Rather, we have at hand only the application of the shape Hessian to the shape gradient. This fact suggests designing a (preconditioned) conjugate gradient scheme for solving the Newton equation.
Paul Hovland (Argonne National Laboratory) hovland@mcs.anl.gov
Automatic Differentiation and its Role in Simulation-based Optimization (workshop) Slides: html pdf ppt
We provide a brief introduction to automatic differentiation tools and theory and describe the role of current and next generation automatic differentiation tools in simulation-based optimization. We focus on the computational costs of automatic differentiation and the trend toward tools that are more robust, easier to use, and more powerful. We conclude with a short description of our research agenda, with an emphasis on the motivational role played by current and future applications.
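The core idea behind forward-mode automatic differentiation can be shown with a toy dual-number class (an illustration of the principle only, far simpler than the source-transformation and operator-overloading tools the talk surveys):

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers:
    each value carries its derivative, propagated exactly by the
    chain rule through overloaded arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(2.0, 1.0)   # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)  # 17.0 14.0
```

Unlike finite differences, the derivative comes out exact to machine precision, with no truncation-error/step-size trade-off.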
David E. Keyes (Mathematics & Statistics, Old Dominion University, BAL 500 Norfolk, VA 23529-0077 & Institute for Scientific Computing Research, Lawrence Livermore National Lab, Box 808, L-419 Livermore, CA 94551-9989) dekeyes@llnl.gov
Parameter Estimation in Empirical Models of Wildland Firespread (workshop) Slides: pdf
We describe work in progress in modeling wildland firespread and tuning the resulting models to historic or idealized firespread data. Modeling wildland firespread has acquired a high priority in recent years, as ecosystems in which burning has been actively suppressed for a century erupt in catastrophic fires. Modelers would like to predict fire damage, control fires in real time with limited resources safely deployed for optimal effectiveness, and formulate long-term strategies for prescribed burns to reduce the incidence of catastrophic fires.
Modeling efforts are on-going at many levels of fidelity, from simple models amenable to low-memory, low-flop-intensity implementation (e.g., on the laptops of firechiefs in the field), to rich models approaching the goal of first principles, for which a single simulation taxes the most powerful computers in the national labs for weeks. Our current efforts are at the simple end of the fidelity spectrum, with a single "forward" problem consisting of the advance of a closed curve representing an idealized firefront over a two-dimensional domain characterized by topography, areal fuel density, windfield, and other factors. Due to the economic and safety importance of firespread modeling, there is an extensive literature of empirical models with similar limited input requirements. Surprisingly, in view of the importance claimed for these models, attempts to parameterize them with real data seem to be in their infancy, including our own very recent forays.
We employ a level set method, for which the primary modeling requirement is a rule for the instantaneous advance of an infinitesimal element of fire perimeter arc, normal to itself. In this talk, we provide a brief technical background of the firespread problem and its forward solution using level sets (in contrast with other approaches), we describe our own contributions to the modeling based on early studies, and focus the discussion on the parameter estimation problem.
This is joint work with Vivien Mallet of the École des Ponts et Chaussées (Paris) and Francis Fendell of TRW (Redondo Beach). It is part of a larger project ("Caliente"; see http://www-2.cs.cmu.edu/~caliente/) on PDE-constrained optimization.
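The level-set forward problem described above can be sketched with a first-order upwind scheme for a front expanding at constant normal speed (a toy version only: the real firespread model makes the speed depend on fuel, wind, and slope):

```python
import numpy as np

def propagate_front(phi, speed, dx, dt, steps):
    """First-order Godunov upwind update of the level-set equation
    phi_t + F |grad phi| = 0 for outward normal speed F > 0. The
    firefront is the zero level set of phi."""
    for _ in range(steps):
        bx = (phi - np.roll(phi, 1, axis=0)) / dx    # backward differences
        fx = (np.roll(phi, -1, axis=0) - phi) / dx   # forward differences
        by = (phi - np.roll(phi, 1, axis=1)) / dx
        fy = (np.roll(phi, -1, axis=1) - phi) / dx
        grad = np.sqrt(np.maximum(bx, 0)**2 + np.minimum(fx, 0)**2
                       + np.maximum(by, 0)**2 + np.minimum(fy, 0)**2)
        phi = phi - dt * speed * grad
    return phi

# Signed distance to a circular "firefront" of radius 0.2:
n, dx = 101, 0.02
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi0 = np.sqrt(X**2 + Y**2) - 0.2
phi = propagate_front(phi0, speed=1.0, dx=dx, dt=0.005, steps=60)
# After t = 0.3 at unit speed, the zero level set sits near radius 0.5.
print(phi[75, 50])  # phi at the point (0.5, 0): close to zero
```

In the parameter estimation problem, the unknowns would be the coefficients entering `speed`, fit so that the computed fronts match observed fire perimeters.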
Tamara G. Kolda (Sandia National Labs, Livermore, CA) tgkolda@sandia.gov http://csmr.ca.sandia.gov/~tgkolda/
Asynchronous and Fault-Tolerant Pattern Search Method for Science and Engineering Optimization Applications (workshop)
Pattern search is a derivative-free optimization method that is widely applicable to science and engineering optimization. We have extended the parallel version of pattern search to be both asynchronous and fault-tolerant, resulting in the Asynchronous Parallel Pattern Search (APPS) algorithm. This method has proven to be extremely effective on several engineering problems at Sandia National Labs and elsewhere. We will discuss issues of the parallelization, the convergence analysis, and future challenges.
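The serial backbone of pattern search can be sketched in a few lines (a synchronous coordinate search for illustration; APPS's contribution is the asynchronous, fault-tolerant parallel management of exactly this kind of polling):

```python
def pattern_search(f, x0, step=1.0, tol=1e-6):
    """Synchronous coordinate pattern search: poll the 2n coordinate
    directions around the current point, move to any improving trial
    point, and contract the step when no poll point improves."""
    x = list(x0)
    n = len(x)
    while step > tol:
        improved = False
        for i in range(n):
            for d in (+step, -step):
                trial = x.copy()
                trial[i] += d
                if f(trial) < f(x):
                    x, improved = trial, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5   # no improvement anywhere: refine the pattern
    return x

f = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2
sol = pattern_search(f, [0.0, 0.0])
print(sol)  # near (1, -2), found without any derivative information
```

In APPS, the `2n` poll evaluations are farmed out to workers and acted on as they return, so a slow or failed worker never stalls the search.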
Tamara G. Kolda (Sandia National Labs, Livermore, CA) tgkolda@sandia.gov http://csmr.ca.sandia.gov/~tgkolda/
Design & Development of NOX, A C++ Object-Oriented Nonlinear Equation Solver (minisymposium)
Our goal was to design and develop a truly flexible nonlinear equation solver software package that could meet a number of objectives. First, the solver should be independent of the particular choice of linear algebra environment and application interface, yet still be able to exploit application-dependent enhancements such as preconditioning. Second, the solver should support both standard and arbitrary convergence criteria. Third, the solver should allow users to define their own solvers and solver components. Finally, the framework should support all possible solution methods, both present and future. Exploiting the flexibility of object-oriented design, I will describe how we were able to largely achieve these goals in NOX, our C++ object-oriented nonlinear equation solver. I will also briefly mention some of the tools that the NOX development team has found useful, such as CVS, Bugzilla, Bonsai, and Mailman.
Kevin Long (Computational Science and Mathematics Research Department, Sandia National Laboratories) krlong@gibbon.ca.sandia.gov
Sundance: A High-Level Tool for PDE-Constrained Simulation and Optimization (minisymposium)
In this talk we describe Sundance, a software system for rapid development of parallel PDE simulation and optimization problems. Sundance was originally designed with PDE-constrained optimization in mind, but the features that make it suitable for use in PDE-constrained optimization turn out to also make it useful as a general-purpose research tool in PDE simulation.
Modern algorithms for PDE-constrained optimization require the application of operators and the solution of systems of equations that are different from those used in a single solution of the PDE, and require a closer degree of interaction between the PDE solver and optimizer than is usually available. Even with modern PDE frameworks such as Sandia's SIERRA and Nevada, it involves considerable development effort to obtain the quantities required for PDE-constrained optimization, and consequently the best algorithms for PDE-constrained optimization have been little used outside of academic research projects. Indeed, even research into these algorithms is hindered because of the considerable startup costs involved in writing a non-traditional PDE-simulator.
To facilitate research into, and application of, PDE-constrained optimization we have developed a general-purpose PDE solver system that has been designed from the ground up with large-scale, parallel, PDE-constrained optimization in mind. This system, called Sundance, accepts a system of coupled PDEs and boundary conditions written in symbolic form that is close to the notation in which a scientist or engineer would normally write them with pencil and paper. Each function or variation appearing in this symbolic description is annotated with a specification of the finite-element basis with which that object will be discretized. This information, along with a mesh, is then used by Sundance to assemble the implied discretized operators.
The symbolic interface to Sundance makes it quite simple to develop a high-performance code to create the non-traditional operators seen in PDE-constrained optimization. Furthermore, this symbolic interface is essentially a rapid-development tool that can be used for exploration of new solver algorithms, physical models, or finite-element formulations that are not available in standard codes.
Ivan B. Oliveira (Department of Mechanical Engineering, Massachusetts Institute of Technology) oliveira@MIT.EDU
Reliable Real-Time Optimization of Nonconvex Systems Described by Parametrized Partial Differential Equations: Application to a Three-Dimensional Thermal Fin Problem (poster session)
Optimal design of engineering systems described by partial differential equations often requires many computationally demanding evaluations. Computational difficulty is further aggravated by the typically nonconvex relationship between input variables (controls) and output functionals (costs and constraints) exhibited by these systems. We present here a technique for the rapid and reliable optimization of systems characterized by linear-functional outputs of partial differential equations with affine input parameter dependence. The critical ingredients of the method are: (i) reduced-basis techniques for significant reduction in state-space dimensionality; (ii) a posteriori error bounds for rigorous error and feasibility control; (iii) "off-line/on-line" computational decompositions for rapid evaluation of outputs of interest, and respective sensitivities, in the limit of many queries; and (iv) a trust-region Sequential Quadratic Programming interpretation of primal-dual interior point methods for efficient solution of possibly nonconvex optimization problems. To illustrate our approach, we consider the geometry and heat-transfer coefficient (power) optimization of a three-dimensional thermal fin: given (any) volume and fan power objective-function weights, root temperature "not to exceed" limits, and fin thermal conductivities, the subsequent optimization, with guaranteed feasibility, requires roughly 1 second on a Pentium-4 machine. Our method enables not only interactive optimization at conception/manufacturing, but also real-time adaptive optimal design in operation.
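Ingredients (i) and (iii) can be sketched for a parametrized linear system with affine dependence (a generic reduced-basis illustration, not the authors' code; the a posteriori error bounds of ingredient (ii) are not shown):

```python
import numpy as np

def reduced_basis_solve(A0, A1, b, mus_train, mu):
    """Reduced-basis sketch for A(mu) x = b with affine parameter
    dependence A(mu) = A0 + mu*A1. Offline: full solves ("snapshots")
    at training parameters form the basis, and the affine pieces are
    projected once. Online: only a small dense system is solved for
    each new mu, independent of the full dimension."""
    snaps = np.column_stack([np.linalg.solve(A0 + m * A1, b) for m in mus_train])
    Q, _ = np.linalg.qr(snaps)                 # orthonormal reduced basis
    A0r, A1r, br = Q.T @ A0 @ Q, Q.T @ A1 @ Q, Q.T @ b
    y = np.linalg.solve(A0r + mu * A1r, br)    # cheap online solve
    return Q @ y

# 1-D Laplacian stiffness plus a mu-weighted identity:
n = 100
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_rb = reduced_basis_solve(K, np.eye(n), b, [0.1, 0.3, 0.5, 0.7, 0.9], mu=0.35)
x_full = np.linalg.solve(K + 0.35 * np.eye(n), b)
rel_err = np.linalg.norm(x_rb - x_full) / np.linalg.norm(x_full)
print(rel_err)  # a handful of snapshots already gives an accurate surrogate
```

The "off-line/on-line" split is the point: the 100-dimensional solves happen once, and each new query costs only a 5-by-5 solve.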
Ulf Torbjörn Ringertz (Department of Aeronautics, Kungliga Tekniska Högskolan, SE-100 44, Stockholm, Sweden) rzu@kth.se
Aircraft Trajectory Optimization (workshop) Slides: pdf ps
Although trajectory optimization is a well-established research topic, many difficult problems still need to be addressed in order to fully integrate this type of capability into aircraft control systems. The talk will give background on aircraft trajectory optimization, describing the modeling used and suitable optimization techniques. It will then focus on the developments in modeling, optimization methods, and implementation that are required in order to use trajectory optimization as an on-line flight-planning tool.
Ekkehard W. Sachs (Virginia Tech and University of Trier) sachs@origin2.icam.vt.edu
Managing POD Models in PDE-Constrained Optimization (workshop) Slides: pdf
Reduced order models play an important role in the design, simulation and optimization of systems which involve partial differential equations. Proper Orthogonal Decomposition (POD) has proven to be a useful tool in various applications of fluid flow. We use modern optimization techniques to design efficient algorithms which utilize both technologies. Issues like the use of various POD models and extensions of trust-region techniques will be presented.
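The core of POD can be sketched with the "method of snapshots": the dominant POD mode is the leading eigenvector of the snapshot correlation matrix. The sketch below uses a hand-rolled power iteration on toy data in place of the SVD used in practice; the snapshot values are invented for illustration:

```python
# POD via the method of snapshots (toy, pure Python): form the small
# correlation matrix in snapshot space, find its leading eigenvector by
# power iteration, and lift it back to a state-space mode.

def matvec(M, v):
    return [sum(Mij * vj for Mij, vj in zip(row, v)) for row in M]

def dominant_pod_mode(snapshots, iters=200):
    """snapshots: list of state vectors u(t_k); returns the first POD mode."""
    n = len(snapshots[0])
    # Correlation matrix C = U^T U, size = number of snapshots (small).
    C = [[sum(a[i] * b[i] for i in range(n)) for b in snapshots]
         for a in snapshots]
    # Power iteration for the leading eigenvector of C.
    w = [1.0] * len(snapshots)
    for _ in range(iters):
        w = matvec(C, w)
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    # Lift back: the POD mode is a linear combination of the snapshots.
    mode = [sum(w[k] * snapshots[k][i] for k in range(len(snapshots)))
            for i in range(n)]
    norm = sum(x * x for x in mode) ** 0.5
    return [x / norm for x in mode]

# Toy snapshots: scaled copies of one underlying profile, so the first
# POD mode should recover that profile (normalized).
base = [1.0, 2.0, 3.0, 2.0, 1.0]
snaps = [[c * b for b in base] for c in (0.9, 1.0, 1.1)]
mode = dominant_pod_mode(snaps)
```

In a reduced-order model the governing equations are then Galerkin-projected onto the span of the leading modes; the trust-region management of when to rebuild that basis is the subject of the talk.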
Shlomo Ta'asan (Department of Mathematical Sciences, Carnegie Mellon University) shlomo@sattva.math.cmu.edu http://www.math.cmu.edu/people/fac/taasan.html http://www.math.cmu.edu/~shlomo
Differential Optimization Problems: Analysis of the Hessian (workshop) Slides: pdf
In this talk we discuss the use of elementary pseudo-differential operator theory to analyze Hessians of optimization problems governed by partial differential equations. The role of this analysis is to produce Hessian approximations that accelerate gradient-based methods. Quasi-Newton methods such as BFGS cannot cope efficiently with a very large design space: their efficiency is very good for small design dimensions, but it deteriorates as the dimension of the design space increases. The approach presented here complements the classical quasi-Newton methods. It uses the asymptotic behavior of the symbol of the Hessian to construct an accurate Hessian approximation in the high-frequency range of the design-space representation. The resulting approximate Hessians are differential or pseudo-differential operators.
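The idea can be illustrated on a toy model where the Hessian is exactly a Fourier multiplier. The symbol h(k) = 1 + |k| below is hypothetical; in the talk the symbol comes from pseudo-differential analysis of the governing PDE, and the division by h(k) is only asymptotically valid at high frequencies:

```python
# Toy symbol-based Hessian approximation: if the Hessian acts as
# multiplication by a symbol h(k) on Fourier modes, then dividing the
# gradient's Fourier coefficients by h(k) is an (approximate) Newton step.

import cmath

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * j / n)
                 for k in range(n)) / n).real for j in range(n)]

def symbol(k, n):
    # hypothetical symbol h(k) = 1 + |k|, with aliasing (k and n-k match)
    return 1.0 + min(k, n - k)

def apply_hessian(g):
    n = len(g)
    return idft([symbol(k, n) * c for k, c in enumerate(dft(g))])

def precondition(g):
    """Approximate Newton step: divide each Fourier mode by the symbol."""
    n = len(g)
    return idft([c / symbol(k, n) for k, c in enumerate(dft(g))])

# For this model Hessian the preconditioner inverts it exactly, so
# applying the Hessian to the step recovers the gradient.
g = [0.0, 1.0, -2.0, 0.5, 0.0, 1.5, -1.0, 0.0]
step = precondition(g)
recovered = apply_hessian(step)
```

For a real PDE-constrained problem the Hessian is only diagonal in Fourier space to leading order, so this preconditioner is accurate for high frequencies and is combined with quasi-Newton corrections for the low ones.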
Peter M. van den Berg (Laboratory of Electromagnetic Research, Delft University of Technology, The Netherlands) P.M.vandenBerg@ITS.TUDelft.NL
Source Type of Optimization for Inverse Scattering (workshop) Paper: pdf ps
Joint work with Aria Abubakar.
We will review a number of algorithms to solve the nonlinear inverse scattering problem, where the discrepancy between the measured data and predicted data is minimized. These algorithms are all based on a source type of integral representation of the scattered field, where the integral operator acts on a contrast source, being the product of the interior field and the contrast of the scattering object. Special attention is paid to the development of the contrast-source-inversion method, where the contrast sources are updated iteratively, and where, in each iteration, an explicit minimizer for the contrast is obtained. We discuss the efficient implementation of extra constraints, such as the total variation of the contrast, leading to the concept of a multiplicative constraint. We present actual reconstructions from both synthetic and experimental scattered field data, with applications to medical imaging and sub-surface sensing, both in 2D and 3D.
Andreas Waechter (Department of Mathematical Sciences, IBM T.J. Watson Research Center) andreasw@andrew.cmu.edu
Application of an Interior Point Method for Large-Scale Nonlinear Programming to Circuit Tuning (workshop) Slides: pdf
Circuit tuning is an important task in the design of integrated circuits. The goal is to optimally choose sizes of transistors in order to minimize the time required for the slowest signal to pass through the circuit. This application is solved on a routine basis in the development of new custom chips within IBM, and there is a strong motivation to solve larger instances more quickly.
We will demonstrate how the arising nonlinear optimization problem can be solved using a primal-dual interior point algorithm for large-scale nonlinear programming. The computation of the constraints in the problem formulation involves the repeated simulation of subcircuits (Channel Connected Components). Therefore, the values and gradients of the constraint functions are noisy, and their evaluation is computationally expensive. We will show how the optimization method was tailored to take these problem characteristics into account. Numerical results on a set of benchmark tuning problems will be presented and compared with the previous approach.
Andrea Walther (Institut fuer Wissenschaftliches Rechnen, Technische Universitaet Dresden) awalther@math.tu-dresden.de
Adjoint Based Constrained Optimization (workshop) Slides: pdf ps
In this talk we consider a new approach to constrained optimization that is based on direct and adjoint vector-function evaluations. Neither the constraint Jacobian nor the Lagrangian Hessian is evaluated explicitly. These derivative matrices may either be approximated by secant updating or avoided entirely by a Minres-type inner iteration on the KKT system. The two approaches can be combined by using limited-memory approximations of the Jacobian and Hessian to precondition the inner iteration for computing an approximate Newton step. Per outer iteration, the effort in terms of function and derivative evaluations as well as linear-algebra calculations is significantly reduced compared to SQP and other classical NLP algorithms. Preliminary results on a set of equality-constrained problems from the CUTE test collection show that the total number of outer iterations is only moderately increased. Work on the handling of inequality constraints is in progress.
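The matrix-free inner KKT solve can be mimicked in a few lines. The sketch below solves a toy equality-constrained quadratic program using plain CG on the squared KKT system, a simple stand-in for MINRES (both need only operator-vector products, never the assembled Jacobian or Hessian). The problem data are invented:

```python
# Matrix-free Newton step from the KKT system [[H, A^T], [A, 0]] z = rhs.

def hess_vec(v):        # Lagrangian Hessian action (here: diag(2, 4))
    return [2.0 * v[0], 4.0 * v[1]]

def jac_vec(v):         # constraint Jacobian action, A = [1, 1]
    return [v[0] + v[1]]

def jacT_vec(w):        # transposed Jacobian action
    return [w[0], w[0]]

def kkt_apply(z):
    """Action of the symmetric KKT matrix on z = (x, lam)."""
    x, lam = z[:2], z[2:]
    hx, atl = hess_vec(x), jacT_vec(lam)
    return [hx[0] + atl[0], hx[1] + atl[1]] + jac_vec(x)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def solve_kkt(rhs, iters=50, tol=1e-12):
    """CG on K^2 z = K rhs (K symmetric), using only kkt_apply."""
    z = [0.0] * len(rhs)
    r = kkt_apply(rhs)              # residual of the squared system at z = 0
    p = list(r)
    rr = dot(r, r)
    for _ in range(iters):
        if rr < tol:
            break
        Kp = kkt_apply(kkt_apply(p))
        alpha = rr / dot(p, Kp)
        z = [zi + alpha * pi for zi, pi in zip(z, p)]
        r = [ri - alpha * kpi for ri, kpi in zip(r, Kp)]
        rr_new = dot(r, r)
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return z

# minimize 0.5*(2*x1^2 + 4*x2^2) - x1 - 2*x2  subject to  x1 + x2 = 1
step = solve_kkt([1.0, 2.0, 1.0])
print(step)   # approximately [0.5, 0.5, 0.0] = (x1, x2, multiplier)
```

The limited-memory preconditioning discussed in the talk would enter by replacing the raw `kkt_apply` products with preconditioned ones.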
Margaret H. Wright (Computer Science Department, Courant Institute of Mathematical Sciences, New York University) mhw@cs.nyu.edu
Numerical Pitfalls in Interior-point Methods (workshop)
Structural ill-conditioning of the Newton equations was the source of great anxiety thirty years ago, and may well have been the principal reason for the abandonment then of interior-point methods. Although some concerns about ill-conditioning have been shown in recent years to be (mostly) groundless, other nuances remain that can cause misunderstandings and computational difficulties. This talk will describe potentially problematic anomalies and how to treat them.
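A tiny numerical illustration of the structured ill-conditioning at issue (a toy diagonal example, not from the talk): for the logarithmic barrier f(x) - mu*sum(log x_i), variables active at the solution approach the boundary roughly like x_i ~ mu, so the barrier Hessian mu*diag(1/x_i^2) mixes O(1/mu) and O(mu) eigenvalues and its condition number grows like 1/mu^2 as the barrier parameter is driven to zero:

```python
# Condition number of the (diagonal) log-barrier Hessian mu * diag(1/x_i^2)
# as mu -> 0, with one variable active (x ~ mu) and one inactive (x ~ 1).

def barrier_hessian_condition(mu, x):
    eigs = [mu / xi ** 2 for xi in x]
    return max(eigs) / min(eigs)

for mu in (1e-1, 1e-3, 1e-5):
    x = (mu, 1.0)   # active variable shrinks with mu; inactive stays O(1)
    print(mu, barrier_hessian_condition(mu, x))
```

The point of the talk is that this blow-up is structured: the large and small eigenvalues separate cleanly, so the Newton step can still be computed accurately despite the alarming condition number.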
Dexuan Xie (Department of Mathematical Sciences, University of Wisconsin, Milwaukee, WI 53201-0413) dxie@uwm.edu
Optimization Algorithms for Visualizing Structure-Activity Relationships of Chemical Databases (poster session)
This poster presents our recent work on the solution of two challenging optimization problems (the similarity and diversity sampling problems) that arise in computer-aided drug design. The similarity problem involves finding a drug in a large database that is similar to another drug with known bio-active properties. The diversity problem involves finding a diverse subset of "representative" compounds so that researchers can scan only a subset of the huge database each time a specific pharmacological agent is sought. Mathematically, the diversity sampling problem is a combinatorial optimization problem, which is extremely difficult to solve on a computer due to its non-polynomial complexity.
As a first step of our approach to the solution of the similarity and diversity sampling problems, we propose an unconstrained nonlinear optimization model problem and efficient numerical algorithms for its solution. A corresponding program package, including a user-friendly graphical interface, has also been developed. Numerical experiments on real drug databases demonstrate that the optimization algorithms are very effective and have high accuracy in visualizing structure-activity relationships of chemical databases. Hence, the program package can be a powerful tool in sampling similar and diverse chemical compounds.
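The combinatorial diversity problem above is commonly attacked with heuristics. One standard one (not necessarily the method of this work) is greedy farthest-point sampling: repeatedly pick the compound farthest from everything already selected. The descriptor vectors below are hypothetical:

```python
# Greedy max-min (farthest-point) heuristic for diversity sampling:
# each step adds the point whose minimum distance to the chosen set
# is largest, spreading the sample across descriptor space.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def diverse_subset(points, k):
    chosen = [0]  # start from an arbitrary compound
    while len(chosen) < k:
        best, best_d = None, -1.0
        for i in range(len(points)):
            if i in chosen:
                continue
            d = min(dist(points[i], points[j]) for j in chosen)
            if d > best_d:
                best, best_d = i, d
        chosen.append(best)
    return chosen

# Hypothetical 2D descriptors: two tight clusters plus one outlier.
descriptors = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (0.0, 9.0)]
print(diverse_subset(descriptors, 3))
```

Each greedy step is O(n * |chosen|), so the whole selection is polynomial, in contrast to the exponential cost of the exact combinatorial problem.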
Man-Chung Yeung (Department of Mathematical Sciences, University of Wyoming) MYeung@uwyo.edu
Transpose-free Multiple Lanczos and its Application in Padé Approximation pdf (poster session)
David P. Young (The Boeing Company) david.p.young@pss.Boeing.com
Nonlinear Elimination in Aerodynamic Analysis and Design Optimization (workshop) Slides: html pdf ppt
A common element in the compressible CFD analysis problem and the design optimization problem is that the nonlinear aspects of the problem are the most challenging. A commonly used solution method for both problems is Newton's method. Often in discussions of these methods, the emphasis has been on methods for computing the required derivatives rather than on the perhaps equally important issue of the globalization strategy. In fact, the globalization strategy can be more important in cases where there are bifurcations or complex interactions with grid refinement. In this talk, we will discuss some globalization strategies based on nonlinear elimination and their application to increase the reliability of CFD computations. In this algorithmic context, we will describe some applications of optimization in aerodynamic design and consider the algorithmic implications of the engineering requirements.
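A minimal illustration of the nonlinear-elimination idea (a toy algebraic system, not an aerodynamic one): the strongly nonlinear equation is solved exactly in an inner loop, so the outer Newton iteration only sees a better-behaved reduced problem.

```python
# Nonlinear elimination on a 2-equation system: eliminate u from
# f1(u, v) = u^3 + u - v = 0 by an inner Newton solve, then run an
# outer Newton iteration on the reduced residual g(v) = f2(u(v), v).

def inner_solve_u(v, u0=0.0, tol=1e-12):
    """Solve f1(u, v) = u**3 + u - v = 0 for u by Newton's method."""
    u = u0
    for _ in range(100):
        f = u ** 3 + u - v
        if abs(f) < tol:
            break
        u -= f / (3 * u ** 2 + 1)   # f1 is monotone in u, so this is safe
    return u

def g(v):
    """Reduced outer residual f2(u(v), v) = u(v) + v - 2."""
    return inner_solve_u(v) + v - 2.0

def outer_newton(v=0.0, tol=1e-10):
    for _ in range(50):
        r = g(v)
        if abs(r) < tol:
            break
        h = 1e-7
        dg = (g(v + h) - r) / h     # finite-difference reduced Jacobian
        v -= r / dg
    return v

v = outer_newton()
u = inner_solve_u(v)
print(u, v)   # satisfies both equations
```

The payoff in practice is robustness: stiff local nonlinearities (here caricatured by the cubic) are resolved inside the elimination, so the outer globalized Newton iteration is less likely to stall or diverge.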