November 1 - 5, 2010
For the model linear parabolic equation we propose a nonadaptive wavelet finite element space-time sparse discretization. The problem is reduced to a finite, overdetermined linear system of equations. We prove stability, i.e., that the finite section normal equations are well-conditioned if appropriate Riesz bases are employed, and that the Galerkin solution converges quasi-optimally, in the natural solution space, to the solution of the original equation. Numerical examples confirm the theory.
This work is part of a PhD thesis under the supervision of Prof. Christoph Schwab, supported by Swiss National Science Foundation grant No. PDFMP2-127034/1.
Our goal is to accurately simulate transport of a miscible component in a bulk fluid over long times, i.e., locally conservatively and with little numerical diffusion. Characteristic methods have the potential to do this, since they have no CFL time-step constraints. The volume corrected characteristic mixed method was developed to conserve mass of both the component and the bulk fluid. We have proved that it has the important properties of monotonicity and stability, and therefore exhibits neither overshoots nor undershoots. Moreover, the method converges optimally with or without physical diffusion. We show its performance through example simulations of pure curl flow and a nuclear waste repository.
The finite element exterior calculus, FEEC, has provided a
viewpoint from which to understand and develop stable finite
element methods for a variety of problems. It has enabled us to
unify, clarify, and refine many of the classical mixed finite
element methods, and has enabled the development of previously
elusive stable mixed finite elements for elasticity. Just as
an abstract Hilbert space framework helps clarify the theory of
finite elements for model elliptic problems, abstract Hilbert
complexes provide a natural framework for FEEC. In this talk
we will survey the basic theory of Hilbert complexes and their
discretization, and discuss their applications to finite element
methods. In particular, we will emphasize the role of two key
properties, the subcomplex property and the bounded cochain
projection property, in ensuring stability of discretizations
by transferring to the discrete level the structures that ensure
well-posedness of the PDE problem at the continuous level.
We discuss the performance of three numerical methods for the fully nonlinear Monge-Ampère equation. The first two are pseudo-time
continuation methods while the third is a pure pseudo-time marching algorithm. The pseudo-time continuation methods are shown to converge for smooth data on a uniformly convex domain. We give numerical evidence that
they perform well for the non-degenerate Monge-Ampère equation.
The pseudo-time marching method applies in principle to any nonlinear equation. Numerical results with this approach for the degenerate Monge-Ampère equation are given as well as for the Pucci and Gauss-curvature equations.
Keywords of the presentation: Isogeometric Analysis, NURBS, T-Splines, Shells, Turbulence, Fluid-Structure Interaction, Wind Turbines
Isogeometric Analysis [1] is a recently developed discretization technique based on the basis functions of computer-aided design and computer graphics. Although the main motivation behind the development of Isogeometric Analysis was to establish a tighter link between geometry modeling and computational analysis procedures, the new technology demonstrated better per-degree-of-freedom performance than standard finite elements on a broad range of problems in computational mechanics. This better "efficiency" of isogeometric analysis was attributed to the more accurate analysis geometry definition and higher-order smoothness of the underlying basis functions. In this presentation, I will give an overview of the early developments in isogeometric analysis of fluids and structures. I will also give a summary of approximation results for the function spaces employed in isogeometric analysis. In the main body of the presentation I will show our recent work on isogeometric shell structures, turbulence modeling and fluid-structure interaction (FSI). I will conclude by presenting our recent isogeometric FSI simulations of a wind turbine rotor operating under realistic wind conditions and at full spatial scale in 3D [2,3].
References
[1] J.A. Cottrell, T.J.R. Hughes, and Y. Bazilevs, “Isogeometric Analysis. Toward Integration of CAD and FEA”, Wiley 2009.
[2] Y. Bazilevs, M.-C. Hsu, I. Akkerman, S. Wright, K. Takizawa, B. Henicke, T. Spielman, and T.E. Tezduyar, “3D Simulation of Wind Turbine Rotors at Full Scale. Part I: Geometry Modeling and Aerodynamics”, International Journal for Numerical Methods in Fluids, (2010). Published online.
[3] Y. Bazilevs, M.-C. Hsu, J. Kiendl, R. Wuechner and K.-U. Bletzinger, “3D Simulation of Wind Turbine Rotors at Full Scale. Part II: Fluid-Structure Interaction”, International Journal for Numerical Methods in Fluids, (2010). Accepted for publication.
The contact between two membranes can be described by a system of variational inequalities, where the unknowns are the displacements of the membranes and the action of one membrane on the other. We first propose a discretization of this system, where the displacements are approximated by standard finite elements and the action by a local postprocessing which admits an equivalent mixed reformulation. We perform the a posteriori analysis of this discretization and prove optimal error estimates. Numerical experiments confirm the efficiency of the error indicators.
Discretization converts infinite dimensional mathematical models into finite dimensional algebraic equations that can be solved on a computer. This process is accompanied by unavoidable information losses which can degrade the predictiveness of the discrete equations.
Compatible and regularized discretizations control these losses directly by using suitable field representations and/or by modifications of the variational forms. Such methods excel in controlling "structural" information losses responsible for the stability and well-posedness of the discrete equations.
However, direct approaches become increasingly complex and restrictive for multi-physics problems comprising fundamentally different mathematical models, and when used to control losses of "qualitative" properties such as maximum principles, positivity, monotonicity and local bounds preservation.
In this talk we show how optimization ideas can be used to control externally, and with greater flexibility, information losses which are difficult (or impractical) to manage directly in the discretization process. This allows us to improve the predictiveness of computational models, increase the robustness and accuracy of solvers, and enable efficient reuse of code. Two examples will be presented: an optimization-based framework for multi-physics coupling, and an optimization-based algorithm for constrained interpolation (remap). In the first case, our approach allows us to synthesize a robust and efficient solver for a coupled multiphysics problem from simpler solvers for its constituent components. To illustrate the scope of the approach we derive such a solver for nearly hyperbolic PDEs from standard, off-the-shelf algebraic multigrid solvers, which by themselves cannot solve the original equations. The second example demonstrates how optimization ideas enable the design of high-order, conservative, monotone, bounds-preserving remap and transport schemes which are linearity preserving on arbitrary unstructured grids, including grids with polyhedral and polygonal cells.
This is joint work with D. Ridzal, G. Scovazzi (SNL), and M. Shashkov (LANL).
Sandia National Laboratories is a
multi-program laboratory operated by Sandia Corporation, a
wholly owned subsidiary of Lockheed Martin Company, for the
U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Maxwell's eigenvalue problem can be seen as a particular case of the
Hodge-Laplace eigenvalue problem in the framework of exterior calculus.
In this context we present two mixed formulations that are equivalent
to the problem under consideration and their numerical approximation.
It turns out that the natural conditions for the good approximation of
the eigensolutions of the mixed formulations are equivalent to a
well-known discrete compactness property that was first used by
Kikuchi for the analysis of edge finite elements.
The result can be applied to the convergence analysis of the p-version
of edge finite elements for the approximation of Maxwell's eigenvalue
problem.
We are witnessing increasing interest in cooperative
dynamical systems proposed in the recent literature as possible
models for
opinion dynamics in social and economic networks.
Mathematically,
they consist of a large number, N, of 'agents' evolving
according to
quite simple dynamical systems coupled according to some
'locality'
constraint. Each agent i maintains a time function
x_{i}(t)
representing the 'opinion' or 'belief' it has about something.
As time elapses, agent i interacts with neighbor agents
and modifies its opinion by averaging it with
those of its neighbors. A critical issue is the way
'locality'
is modelled and interaction takes place. In Krause's model
each agent can see the opinion of all the others but
averages with only those which are within a threshold R from
its current opinion.
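A minimal sketch of one synchronous Krause-type update for scalar opinions, assuming discrete time (the threshold R and the averaging rule are the only ingredients taken from the description above; the stopping tolerance is an illustrative choice):

```python
def krause_step(x, R):
    """One synchronous update: each agent averages the opinions
    of all agents within distance R of its own (itself included)."""
    return [
        sum(xj for xj in x if abs(xj - xi) <= R) /
        sum(1 for xj in x if abs(xj - xi) <= R)
        for xi in x
    ]

def simulate(x, R, steps=100, tol=1e-12):
    """Iterate until the opinions stop moving (a fixed point)."""
    for _ in range(steps):
        x_new = krause_step(x, R)
        if max(abs(a - b) for a, b in zip(x, x_new)) < tol:
            break
        x = x_new
    return x
```

For example, simulate([0.0, 0.1, 0.2, 0.9, 1.0], 0.2) produces two clusters, at 0.1 and 0.95, which are more than R apart, consistent with the structure of the limit measure discussed below.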
The main interest in these models is for quite large N.
Mathematically, this means that one takes the limit for N →
+ ∞.
We adopt an Eulerian approach, moving focus from opinions of
various
agents to distributions of opinions. This leads to a sort of
master
equation which is a PDE in the space of probability measures; it
can be
analyzed by the techniques of Transportation Theory, which
extends
in a very powerful way the Theory of Conservation Laws.
Our Eulerian approach gives rise to a natural numerical
algorithm based on the `push forward' of measures, which
allows one to perform numerical simulations with complexity
independent of the number of agents, and in a genuinely
multi-dimensional manner.
We prove the existence of a limit measure as t → ∞,
which
for the exact dynamics is purely atomic with atoms at least a
distance R apart,
whereas for the numerical dynamics it is 'almost purely atomic'
(in
a precise sense). Several representative examples will be
discussed.
This is a joint work with Fabio Fagnani and Paolo Tilli.
We report our recent efforts in developing the adaptive perfectly matched layer (PML) method for solving time-harmonic electromagnetic and acoustic scattering problems. The PML parameters, such as the thickness of the layer and the absorbing medium property, are determined through sharp a posteriori error estimates. Combined with the adaptive finite element method, the adaptive PML method provides a complete numerical strategy to solve the scattering problem in the framework of FEM which automatically produces a coarse mesh size away from the fixed domain and thus makes the total computational costs insensitive to the thickness of the PML layer. Numerical experiments are
included to illustrate the competitive behavior of the proposed adaptive method.
The reduced basis method (RBM) is indispensable in scenarios where a large number of solutions to a parametrized partial differential equation are desired. These include simulation-based design, parameter optimization, optimal control, multi-model/scale simulation, etc. Thanks to the recognition that the parameter-induced solution manifolds can be well approximated by finite-dimensional spaces, RBM can reliably improve efficiency by several orders of magnitude. This poster presents RBM for various electromagnetic problems including radar cross section computation of an object whose scattered field is highly sensitive to the geometry. We also propose a new reduced basis element method (RBEM) that simulates electromagnetic wave propagation in a pipe of varying shape. This is joint work with Jan Hesthaven and Yvon Maday.
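The offline sampling at the heart of RBM can be sketched generically as a greedy loop over snapshots. In this toy sketch the exact projection error stands in for the inexpensive a posteriori estimator used in practice, and the snapshot vectors are purely illustrative, not solutions of any PDE:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj_error(v, basis):
    """Euclidean distance from v to span(basis); basis is orthonormal.
    Returns the error and the residual vector."""
    r = list(v)
    for b in basis:
        c = dot(v, b)
        r = [ri - c * bi for ri, bi in zip(r, b)]
    return dot(r, r) ** 0.5, r

def greedy_reduced_basis(snapshots, tol=1e-10):
    """Offline greedy: repeatedly add the snapshot that is worst
    approximated by the current reduced space, until all snapshots
    are captured to tolerance."""
    basis = []
    while True:
        errs = [proj_error(s, basis) for s in snapshots]
        worst = max(range(len(snapshots)), key=lambda i: errs[i][0])
        e, r = errs[worst]
        if e <= tol:
            return basis
        basis.append([ri / e for ri in r])  # orthonormalize new direction
```

Run on four vectors spanning a plane in R^3, the loop stops after two basis vectors, which is the low-dimensionality effect that makes RBM efficient.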
I will present how our
newly developed spectral method solvers can be applied to highly nonlinear
and high-order evolution equations such as strongly anisotropic
Cahn-Hilliard equations from materials science. In addition, we consider
how to design schemes that are energy stable and easy to solve (avoid
solving nonlinear equations implicitly). We use the Legendre-Galerkin
method to simulate the anisotropic Cahn-Hilliard equation with the
Willmore regularization. Excellent agreement between numerical simulations
and theoretical results is observed.
We develop a high-order positivity-preserving discontinuous Galerkin
(DG) scheme for linear Vlasov-Boltzmann transport equations (BTE)
under the action of quadratically confined electrostatic potentials.
The solutions of the BTEs are positive probability distribution
functions. It is very challenging to have a mass-conservative,
high-order accurate scheme that preserves positivity of the
numerical solutions in high dimensions. Our work extends the
maximum-principle-satisfying scheme for scalar conservation laws
to include the linear Boltzmann collision
term. The DG schemes we developed conserve mass and preserve the
positivity of the solution without sacrificing accuracy. A
discussion of the standard semi-discrete DG schemes for the BTE is
included as a foundation for the stability and error estimates for
this new scheme. Numerical results of the relaxation models are
provided to validate the method.
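The positivity-preserving mechanism in maximum-principle-satisfying schemes of this kind is commonly realized by a Zhang-Shu-type scaling limiter: each local DG polynomial is linearly compressed toward its cell average, which is itself nonnegative by mass conservation. A minimal 1D sketch (the nodal values are illustrative and equal quadrature weights are assumed; this is not the scheme of the talk):

```python
def scale_to_positive(node_vals, cell_avg, eps=1e-13):
    """Zhang-Shu-type limiter: scale nodal values of a DG polynomial
    toward the (nonnegative) cell average so the minimum becomes >= 0,
    without changing the cell average (mass is conserved)."""
    m = min(node_vals)
    if m >= 0.0:
        return list(node_vals)  # already positive, leave untouched
    # theta in [0, 1]: the largest scaling that lifts the minimum to zero
    theta = min(1.0, cell_avg / max(cell_avg - m, eps))
    return [cell_avg + theta * (v - cell_avg) for v in node_vals]
```

Because the transformation is an affine contraction about the cell average, it does not degrade the formal order of accuracy away from the positivity bound, which is the key point exploited in these schemes.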
In this talk, we discuss a new class of discontinuous Galerkin methods called "hybridizable". Their distinctive feature is that the only globally-coupled degrees of freedom are those of the numerical trace of the scalar variable. This renders them efficiently implementable. Moreover, they are more precise than all other discontinuous Galerkin methods as they share with mixed methods their superconvergence properties in the scalar variable and their optimal order of convergence for the vector variable. We are going to show how to devise these methods and comment on their implementation and convergence properties.
In this work we investigate the numerical solution for two-dimensional
Maxwell's equations on graded meshes. The approach is based on the
Hodge decomposition for divergence-free vector fields. An approximate
solution for Maxwell's equations is obtained by solving standard second
order elliptic boundary value problems. We illustrate this new approach by a
P1 finite element method.
Joint work with Chunyan Huang, Christoph Schwab and Gerrit
Welper.
The success of adaptive (wavelet) methods for operator equations relies on well-posedness of
suitable variational formulations and on the
availability of Riesz bases (or frames) for the corresponding energy space provided that
the corresponding representation of the operator is in a certain sense quasi sparse.
When dealing with transport dominated problems
such favorable conditions are no longer met for the commonly used variational principles. Moreover, solutions typically exhibit strongly anisotropic features
such as layers or shocks. Focusing for simplicity on the simplest model of linear transport
we present alternative variational formulations that are, in particular, stable in L_{2} so that corresponding discrete solutions are best approximants in L_{2}. Moreover, this provides a theoretical platform for ultimately employing directional representation systems like shearlets, which are known to form L_{2}-frames and offer much more economical sparse representations of anisotropic structures than classical wavelet systems.
This is a central objective in an ongoing collaboration with G. Kutyniok's group within the Priority Research Programme (SPP) No. 1324 of the German Research Foundation.
In principle, the approach can be understood as a Petrov-Galerkin formulation in the infinite dimensional setting.
We address several theoretical and (uncommon) numerical tasks arising in this context and indicate first steps towards rigorously founded adaptive solution concepts. These results are illustrated by preliminary numerical experiments first in a finite element setting.
Joint work with Jay Gopalakrishnan, U. Florida.
Adaptive finite elements vary element size h and/or
polynomial order p to deliver approximation properties superior
to standard discretization methods. The best approximation
error
may converge even exponentially fast to zero as a function of
problem size (CPU time, memory). The adaptive methods are thus
a natural
candidate for singularly perturbed problems like
convection-dominated
diffusion, compressible gas dynamics, nearly incompressible
materials, elastic deformation of structures with thin-walled
components, etc. Depending upon the problem, the diffusion
constant,
Poisson ratio, or beam (plate, shell) thickness defines the
small
parameter.
This is the good news. The bad news is that only a small
number
of variational formulations are stable for adaptive meshes.
By stability we mean a situation where the discretization
error can be bounded by the best approximation error times
a constant that is independent of the mesh. To this class
belong
classical elliptic problems (linear and non-linear),
and a large class of wave propagation problems whose
discretization
is based on hp spaces reproducing the classical exact
grad-curl-div
sequence. Examples include acoustics, Maxwell equations,
elastodynamics,
poroelasticity and various coupled and multiphysics problems.
For singularly perturbed problems, the method should also be
robust, i.e. the stability constant should be independent
of the perturbation parameter. This is also the dream for
wave propagation problems in the frequency domain where the
(inverse of) frequency can be identified as the perturbation
parameter. In this context, robustness implies a method
whose stability properties do not deteriorate with the
frequency
(method free of pollution (phase) error).
We will present a new paradigm for constructing
discretization
schemes for virtually arbitrary systems of linear PDE's that
remain stable for arbitrary hp meshes, thus dramatically
extending
the applicability of hp approximations. The DPG methods build
on two fundamental ideas:
- a Petrov-Galerkin method with optimal test functions for
which
continuous stability automatically implies discrete
stability,
- a discontinuous Petrov-Galerkin formulation based on the
so-called ultra-weak variational hybrid formulation.
We will use linear acoustics and convection-dominated
diffusion
as model problems to present the main concepts and then review
a number of other applications for which we have collected some
numerical experience including:
- 1D and 2D convection-dominated diffusion (boundary layers)
- 1D Burgers and compressible Navier-Stokes equations (shocks)
- Timoshenko beam and axisymmetric shells (locking, boundary layers)
- 2D linear elasticity (mixed formulation, singularities)
- 1D and 2D wave propagation (pollution error control)
- 2D convection and 2D compressible Euler equations (contact discontinuities and shocks)
The presented methodology incorporates the following features:
The problem of interest is formulated as a system of first
order PDE's in the distributional (weak) form, i.e. all
derivatives
are moved to test functions. We use the DG setting, i.e. the
integration by parts is done over individual elements.
As a consequence, the unknowns include not only field
variables within
elements but also fluxes on interelement boundaries. We do not
use the concept of a numerical flux but, instead, treat the
fluxes as
independent, additional unknowns (a hybrid method).
For each trial function corresponding to either field or
flux
variable, we determine a corresponding optimal test function
by solving an auxiliary local problem on one element.
The use of optimal test functions guarantees attaining the
supremum
in the famous inf-sup condition from Babuska-Brezzi theory.
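In symbols (a generic sketch of the construction, with U the trial space, V the test space with inner product (.,.)_V, and b the bilinear form of the variational problem; the concrete norms vary by application):

```latex
% Optimal test function for a trial function e \in U:
%   T e \in V solves the local problem
(T e,\, \delta v)_V = b(e,\, \delta v) \qquad \forall\, \delta v \in V .
% With the optimal test space V_h^{\mathrm{opt}} = T(U_h), the discrete
% inf-sup constant is bounded below by the continuous one:
\inf_{0 \neq e_h \in U_h}\ \sup_{0 \neq v_h \in V_h^{\mathrm{opt}}}
  \frac{b(e_h, v_h)}{\|e_h\|_U\,\|v_h\|_V}
  \;\geq\;
  \inf_{0 \neq e \in U}\ \sup_{0 \neq v \in V}
  \frac{b(e, v)}{\|e\|_U\,\|v\|_V} .
```

In practice T is computed approximately, element by element, on an enriched local discretization of V, which is the auxiliary local problem mentioned above.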
The resulting stiffness matrix is always Hermitian and
positive-definite. In fact, the method can be interpreted as a
least-squares method
applied to a preconditioned version of the problem.
By selecting the right norms for test functions, we can obtain
stability properties uniform not only with respect to
discretization
parameters but also with respect to the perturbation parameter
(diffusion constant, Reynolds number, beam or shell thickness,
wave number).
In other words, the resulting discretization is robust.
For a detailed presentation on the subject, see [1-8].
[1] L. Demkowicz and J. Gopalakrishnan, "A Class of Discontinuous Petrov-Galerkin Methods. Part I: The Transport Equation," Comput. Methods Appl. Mech. Engrg., in print; see also ICES Report 2009-12.
[2] L. Demkowicz and J. Gopalakrishnan, "A Class of Discontinuous Petrov-Galerkin Methods. Part II: Optimal Test Functions," Numer. Meth. Part. Diff. Eq., accepted; ICES Report 2009-16.
[3] L. Demkowicz, J. Gopalakrishnan and A. Niemi, "A Class of Discontinuous Petrov-Galerkin Methods. Part III: Adaptivity," ICES Report 2010-1, submitted to Appl. Numer. Math.
[4] A. Niemi, J. Bramwell and L. Demkowicz, "Discontinuous Petrov-Galerkin Method with Optimal Test Functions for Thin-Body Problems in Solid Mechanics," ICES Report 2010-13, submitted to CMAME.
[5] J. Zitelli, I. Muga, L. Demkowicz, J. Gopalakrishnan, D. Pardo and V. Calo, "A Class of Discontinuous Petrov-Galerkin Methods. Part IV: Wave Propagation Problems," ICES Report 2010-17, submitted to J. Comp. Phys.
[6] J. Bramwell, L. Demkowicz and W. Qiu, "Solution of Dual-Mixed Elasticity Equations Using AFW Element and DPG. A Comparison," ICES Report 2010-23.
[7] J. Chan, L. Demkowicz, R. Moser and N. Roberts, "A Class of Discontinuous Petrov-Galerkin Methods. Part V: Solution of 1D Burgers and Navier-Stokes Equations," ICES Report 2010-25.
[8] L. Demkowicz and J. Gopalakrishnan, "A Class of Discontinuous Petrov-Galerkin Methods. Part VI: Convergence Analysis for the Poisson Problem," ICES Report, in preparation.
Joint work with J. Bramwell and W. Qiu.
The presentation is devoted to a numerical comparison and illustration
of the two methods using a couple of 2D numerical examples. We
compare stability properties of both methods and their
efficiency.
Joint work with N.V. Roberts, D. Ridzal, P. Bochev, K.J.
Peterson, and Ch. M. Siefert.
The DPG method of Demkowicz and Gopalakrishnan guarantees the
optimality of the solution in what they call the energy norm.
An important choice that must be made in the application of the
method is the definition
of the inner product on the test space. In the presentation we
apply the DPG method to the Stokes problem in two dimensions,
analyzing it to determine appropriate inner products, and
perform a series of
numerical experiments.
Joint work with J. Chan.
We present an application of the DPG method to convection,
linear
systems of hyperbolic equations and the compressible Euler
equations.
Joint work with J. Zitelli, I. Muga, J. Gopalakrishnan,
D. Pardo, and V. M. Calo.
The phase error, or the pollution effect in the finite element
solution of wave propagation problems, is a well known phenomenon
that must be confronted when solving problems in the high-frequency
range. This paper presents a new method with no phase error.
A 1D proof and both 1D and 2D numerical experiments are presented.
We will give a brief overview of multiscale modeling for wave equation
problems and then focus on two techniques. One is an energy conserving
DG method for time domain and the other is a new sweeping
preconditioner for frequency domain simulation. The latter results
in a computational procedure that essentially scales linearly even in
the high frequency regime.
In this talk I shall discuss some recent progress in developing interior
penalty discontinuous Galerkin (IPDG) methods and local discontinuous
Galerkin (LDG) methods for the high frequency scalar wave equation.
The focus of the talk is to present some non-standard (h- and hp-)
IPDG and LDG methods which are proved to be absolutely stable
(with respect to the wave number and the mesh size) and optimally
convergent (with respect to the mesh size). The proposed IPDG
and LDG methods are shown to be superior to standard finite element
and finite difference methods, which are known to be stable only under
some stringent mesh constraints. In particular, it is observed that
these non-standard IPDG and LDG methods are capable of correctly tracking
the phases of the highly oscillatory waves even when the mesh violates
the "rule-of-thumb" condition. Numerical experiments will be presented
to show the efficiency of the non-standard IPDG and LDG methods. If time
permits, latest generalizations of these DG methods to the high frequency
Maxwell equations will also be discussed. This is a joint work with Haijun Wu
of Nanjing University (China) and Yulong Xing of the University of Tennessee
and Oak Ridge National Laboratory.
Eigenvalue problems for partial differential equations (PDEs) arise in a large number of current technological applications, e.g., in the computation of the acoustic field inside vehicles (such as cars, trains or airplanes). Another current key application is the noise compensation in highly efficient motors and turbines. For the analysis of standard adaptive finite element methods an exact solution of the discretized algebraic eigenvalue problem is required, and the error and complexity of the algebraic eigenvalue problems are ignored. In the context of eigenvalue problems these costs often dominate the overall costs and because of that, the error estimates for the solution of the algebraic eigenvalue problem with an iterative method have to be included in the adaptation process. The goal of our work is to derive adaptive methods of optimal complexity for the solution of PDE-eigenvalue problems including problems with parameter variations in the context of homotopy methods. In order to obtain low (or even optimal) complexity methods, we derive and analyse methods that adapt with respect to the computational grid, the accuracy of the iterative solver for the algebraic eigenvalue problem, and also with respect to the parameter variation. Such adaptive methods require the investigation of a priori and a posteriori error estimates in all three directions of adaptation. As a model problem we study eigenvalue problems that arise in convection-diffusion problems. We developed robust a posteriori error estimators for the discretization as well as for the iterative solver errors, first for self-adjoint second order eigenvalue problems (undamped problem, diffusion problem), and then bring in the non-selfadjoint part (damping, convection) via a homotopy, where the step-size control for the homotopy is included in the adaptation process.
Cochains are the natural discrete analogues of the continuous
differential forms. The exterior derivative is replaced in the discrete
setting by the coboundary operator. In this way the vector operations
grad, curl and div are encoded. Since application of the coboundary
twice yields the zero operator, the vector identities div curl = 0 and
curl grad = 0 are identically satisfied on arbitrarily shaped grids,
since the coboundary operator acts on cochains in a purely topological
sense.
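The purely topological nature of the coboundary can be illustrated on the smallest possible example, a single square cell with 4 nodes, 4 edges and 1 face (a toy sketch, not the spectral construction of the poster):

```python
# Discrete exterior derivative as coboundary (incidence) matrices on a
# single square cell: illustrates the purely topological identity
# d o d = 0, the discrete analogue of curl grad = 0.

# d0 maps 0-cochains (node values) to 1-cochains (edge values):
# row e = (value at head of e) - (value at tail of e)
d0 = [[-1, 1, 0, 0],   # edge 0: node 0 -> node 1
      [0, -1, 1, 0],   # edge 1: node 1 -> node 2
      [0, 0, -1, 1],   # edge 2: node 2 -> node 3
      [1, 0, 0, -1]]   # edge 3: node 3 -> node 0

# d1 maps 1-cochains to 2-cochains: signed sum of edge values
# around the face, all four edges traversed counterclockwise
d1 = [[1, 1, 1, 1]]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

print(matmul(d1, d0))  # [[0, 0, 0, 0]]: the curl of a gradient vanishes
```

Because only incidence and orientation enter, the same zero product holds on arbitrarily deformed cells, which is why the vector identities are satisfied exactly on arbitrarily shaped grids.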
For the application of cochains in numerical methods cochain
interpolation is required which needs to satisfy two criteria:
1. When the interpolated k-cochain is integrated over a k-chain, the
cochain should be retrieved.
2. The interpolated k-cochain should be close to the corresponding
continuous k-form in some norm.
In this poster cochain interpolations will be presented which satisfy
criterion 1 and which display exponential convergence under polynomial
enrichment for sufficiently smooth k-forms.
Several examples of the use of these interpolating functions will be
presented, such as:
1. Discrete conservation laws naturally reduce to finite volume
discretizations.
2. The condition number of the resulting system matrix grows much slower
with polynomial enrichment than conventional spectral methods.
3. Low order finite volume methods are extremely good preconditioners.
4. The resonant cavity eigenvalue problem in a square box is resolved
with exponential accuracy on orthogonal and highly deformed grids,
whereas conventional spectral methods fail to do so.
We develop a C0 interior penalty method for a biharmonic problem with
essential and natural boundary conditions of Cahn-Hilliard type. Both a priori and a
posteriori error estimates are derived. C0 interior penalty methods are much simpler than C1
finite element methods. Compared to mixed finite element methods, the stability of C0 interior penalty methods
can be established in a straightforward manner, and the symmetric positive definiteness of the continuous problems is preserved by C0 interior penalty methods. Furthermore, since the underlying finite element spaces are standard spaces for second order problems, multigrid solvers for the Laplace operator can be used as natural preconditioners for C0 interior penalty methods.
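For orientation, a sketch of the prototypical C0 interior penalty bilinear form for the biharmonic operator, here written for clamped boundary conditions following the usual construction (the Cahn-Hilliard-type boundary conditions of this work modify the boundary terms). With {{.}} and [[.]] denoting averages and jumps across the edges e of the mesh, and sigma > 0 a penalty parameter:

```latex
a_h(u,v) = \sum_{T \in \mathcal{T}_h} \int_T D^2 u : D^2 v
 + \sum_{e \in \mathcal{E}_h} \int_e
     \left\{\!\!\left\{ \frac{\partial^2 u}{\partial n^2} \right\}\!\!\right\}
     \left[\!\!\left[ \frac{\partial v}{\partial n} \right]\!\!\right]
 + \sum_{e \in \mathcal{E}_h} \int_e
     \left\{\!\!\left\{ \frac{\partial^2 v}{\partial n^2} \right\}\!\!\right\}
     \left[\!\!\left[ \frac{\partial u}{\partial n} \right]\!\!\right]
 + \sigma \sum_{e \in \mathcal{E}_h} \frac{1}{h_e} \int_e
     \left[\!\!\left[ \frac{\partial u}{\partial n} \right]\!\!\right]
     \left[\!\!\left[ \frac{\partial v}{\partial n} \right]\!\!\right].
```

For sigma large enough the form is coercive on the C0 finite element space, which is the straightforward stability referred to above; note that only first normal derivatives are penalized, so standard second-order spaces suffice.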
We introduce a new adaptive data analysis method to analyze
multiscale nonlinear and non-stationary data. The purpose of this
work is to find the sparsest representation of a multiscale signal
using a basis that is adapted to the data instead of being prescribed
a priori. Using a variational approach based on nonlinear L1 optimization,
our method defines trends and the instantaneous frequency of a multiscale signal. One advantage of performing such a decomposition is to preserve some intrinsic physical property of the signal without
introducing artificial scales or harmonics. For multiscale data that have a nonlinear sparse representation, we prove that
our nonlinear optimization method converges to the exact signal with
the sparse representation. Moreover, we will show that our method is
insensitive to noise and robust enough to apply to realistic physical
data. For general data that do not have a sparse representation,
our method will give an approximate decomposition and the accuracy
is controlled by the scale separation property of the original signal.
Joint work with Max D. Gunzburger, School of Computational
Science, Florida State University.
Optimization problems constrained by partial differential equations
(PDEs) are particularly challenging from a computational point of
view: the first order necessary conditions for optimality lead to a
coupled system of PDEs. Specifically, for the solution of control
problems constrained by a parabolic PDE, one needs to solve a system
of PDEs coupled globally in time and space. For these,
conventional time-stepping
methods quickly reach their limitations due to the enormous demand for
storage. For such a coupled PDE system, adaptive methods which aim at
distributing the available degrees of freedom in an
a posteriori fashion to capture singularities in the data or domain,
with respect to both space and time, appear to be most promising.
Here I propose an adaptive method based on wavelets.
It builds on a recent paper by Schwab and Stevenson where a single
linear parabolic evolution problem is formulated in a weak space-time
form and where an adaptive wavelet method is designed for which
convergence and optimal convergence rates (when compared to best
N-term wavelet approximation) can be shown. Our approach extends this
paradigm to control problems constrained by evolutionary PDEs
for which we can prove convergence and
optimal rates for each of the involved unknowns (state, costate, and
control).
This poster presents different topics concerning the modelling and
numerical solution of complex systems from my work group, all centering
around multiscale methods for partial differential equations.
Applications are from theoretical physics, geodesy, electrical
engineering, and finance. Depending on the concrete application, we
employ wavelet, adaptive wavelet or monotone multigrid methods.
In this talk, I will present our recent work in developing high order
central discontinuous Galerkin (DG) methods for Hamilton-Jacobi (H-J)
equations and ideal MHD equations. Originally introduced for hyperbolic
conservation laws, central DG methods combine ideas in central schemes
and DG methods. They avoid the use of exact or approximate Riemann
solvers, while evolving two copies of approximating solutions on
overlapping meshes.
To devise Galerkin-type methods for H-J equations, the main difficulty is that these equations are in general not in divergence form. By
recognizing a weighted-residual or stabilization-based formulation of
central DG methods when applied to hyperbolic conservation laws, we
propose a central DG method for H-J equations. Though the stability and
the error estimate are established only for linear cases, the accuracy
and reliability of the method in approximating the viscosity solutions
are demonstrated through general numerical examples. This work is jointly done with Sergey Yakovlev.
In the second part of the talk, we introduce a family of central DG
methods for ideal MHD equations which provide an exactly divergence-free magnetic field. The ideal MHD system consists of a set of nonlinear
hyperbolic equations with a divergence-free constraint on the magnetic
field. Though this constraint holds for the exact solution for all time
provided it holds initially, neglecting it
numerically may lead to nonphysical features of the approximate solutions
or to numerical instability. This work is jointly done with Liwei Xu and
Sergey Yakovlev.
We develop a new hierarchical reconstruction (HR) method for limiting solutions of the discontinuous Galerkin and finite volume methods up to fourth order
without local characteristic decomposition for solving hyperbolic nonlinear conservation laws on triangular meshes. The new HR utilizes a set of point values when
evaluating polynomials and remainders on neighboring cells, extending the technique introduced in Hu, Li and Tang. The point-wise HR simplifies the implementation
of the previous HR method which requires integration over neighboring cells and makes HR easier to extend to arbitrary meshes. We prove that the new point-wise
HR method keeps the order of accuracy of the approximation polynomials. Numerical computations for scalar equations and systems of nonlinear hyperbolic equations are performed on
two-dimensional triangular meshes. We demonstrate that the new hierarchical reconstruction generates essentially non-oscillatory solutions for schemes up to fourth order
on triangular meshes.
We present an adaptation of a Multiscale Finite Element Method
(MsFEM by T. Hou et al.) to a simplified context of pollution
dissemination in urban area in a real time marching simulation code.
To avoid the use of a complex unstructured mesh that perfectly fits
every building of the urban area, a penalization technique is used. The
physical model becomes a diffusion+penalization equation with highly
heterogeneous and discontinuous coefficients. MsFEM is adapted by
developing a new basis function oversampling technique. This is tested
on a genuine urban area. We also present new variants of MsFEM
inspired by nonconforming finite elements à la Crouzeix-Raviart.
Keywords of the presentation: nonlinear eigenvalue problems, two grid method
The approximation of nonlinear eigenvalue problems is a key ingredient in quantum chemistry. These approximations are computationally demanding and saturate the resources of many HPC centers. Being nonlinear, the approximation methods are iterative, and one way to reduce the cost is to use different grids, as has been proposed in fluid mechanics for various nonlinear problems such as the Navier-Stokes equations. We explain the basics of the approximation, present the numerical analysis, and show numerical results that illustrate the good behavior of the two-grid scheme. This work has been done in collaboration with Eric Cancès and Rachida Chakir.
We consider the numerical solution of a linear, two-dimensional singularly
perturbed reaction-diffusion problem posed on the unit square with homogeneous
Dirichlet boundary conditions. In [1], it is shown that a two-scale sparse
grid finite element method applied to this problem achieves the same order of
accuracy as a standard Galerkin finite element method, while reducing
the number of degrees of freedom from O(N^{2}) to O(N^{3/2}).
In this presentation, we discuss implementation aspects of the algorithm,
particularly regarding the computational cost. We also compare the method
with the related "combination" technique.
[1] F. Liu, N. Madden, M. Stynes & A. Zhou, A two-scale sparse grid method for
a singularly perturbed reaction-diffusion problem in two dimensions, IMA J.
Numer. Anal. 29 (2009), 986-1007.
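As a back-of-the-envelope illustration (our own sketch, not code from [1]), the following counts degrees of freedom for the full tensor product grid versus a two-scale space combining a fine mesh in one direction with a coarse mesh of size N^{1/2} in the other; the exact form of the two-scale space is our assumption based on the stated O(N^{3/2}) complexity:

```python
import math

def full_grid_dofs(N):
    # Standard Galerkin FEM on an N x N tensor product grid: O(N^2) unknowns.
    return N * N

def two_scale_dofs(N):
    # Two-scale space: (fine x coarse) + (coarse x fine), with the
    # (coarse x coarse) overlap counted only once.  The coarse grid has
    # about sqrt(N) points per direction, giving O(N^{3/2}) unknowns.
    n = math.isqrt(N)  # coarse mesh parameter; assumes N is a perfect square
    return 2 * N * n - n * n

for N in (16, 64, 256, 1024):
    print(N, full_grid_dofs(N), two_scale_dofs(N))
```

For N = 256 this already reduces 65536 unknowns to below 8000 while, per [1], retaining the same order of accuracy.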
We consider boundary value problems for the Helmholtz equation at large wavenumbers k. In order to understand how the wavenumber k affects the convergence properties of discretizations of such problems, we develop a regularity theory for the Helmholtz equation that is explicit in k. At the heart of our analysis is the decomposition of solutions into two components: the first component is an analytic but highly oscillatory function, and the second one has finite regularity but features wavenumber-independent bounds. This understanding of the solution structure opens the door to the analysis of discretizations of the Helmholtz equation that are explicit in their dependence on the wavenumber k. As a first example, we show for a conforming high order finite element method that quasi-optimality is guaranteed if (a) the approximation order p is selected as p = O(log k) and (b) the mesh size h is such that kh/p is small.
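The scale resolution conditions (a) and (b) can be turned into a simple selection rule; the constants c_p and c_h below are hypothetical placeholders, since the theory prescribes only the scalings:

```python
import math

def helmholtz_discretization(k, c_p=1.0, c_h=0.5):
    # Hypothetical constants c_p, c_h; the theory requires p ~ log k
    # and kh/p bounded by a small constant.
    p = max(1, math.ceil(c_p * math.log(k)))  # polynomial degree O(log k)
    h = c_h * p / k                           # mesh size keeping kh/p = c_h
    return p, h

for k in (10.0, 100.0, 1000.0):
    p, h = helmholtz_discretization(k)
    print(k, p, h, k * h / p)  # kh/p stays at c_h for every k
```

Note that the number of unknowns per wavelength then grows only logarithmically in k, which is the practical payoff of the k-explicit analysis.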
As a second example, we consider combined field boundary integral equations arising in acoustic scattering. Also for this example, the same scale resolution conditions as in the high order finite element case suffice to ensure quasi-optimality of the Galerkin discretization. The work presented is joint work with Stefan Sauter (Zurich) and Maike Löhndorf (Vienna).
This presentation is devoted to plane wave methods for approximating the time-harmonic wave equation, paying particular attention to the Ultra Weak Variational Formulation (UWVF). This method is essentially an upwind Discontinuous Galerkin (DG) method in which the approximating basis functions are special traces of solutions of the underlying wave equation. In the classical UWVF, due to Cessenat and Després, sums of plane wave solutions are used element by element to approximate the global solution. For these basis functions, convergence analysis and considerable computational experience show that, under mesh refinement, the method exhibits a high order of convergence depending on the number of plane waves used on each element. Convergence can also be achieved by increasing the number of basis functions on a fixed mesh (or by a combination of the two strategies). However, ill-conditioning arising from the plane wave basis can ultimately destroy convergence. This is particularly a problem near a reentrant corner, where we expect to need to refine the mesh.
The presentation will start with a summary of the UWVF and some typical analytical and numerical results for the Helmholtz equation. An alternative to plane waves is to use polynomial basis functions on small elements. Using mixed finite element methods, we can view the UWVF as a hybridization strategy, and I shall also present theoretical and numerical results for this approach.
Although neither the Bessel function nor the plane wave UWVF is free of dispersion error (pollution error), they can provide a method that uses large elements and a small number of degrees of freedom per wavelength to approximate the solution. The method has been extended to Maxwell's equations and elasticity. Perhaps the main open problems are how to improve on the bi-conjugate gradient method that is currently used to solve the linear system, and how to adaptively refine the approximation scheme.
Conformal conservation laws are defined and derived for a class of multi-symplectic equations with added dissipation. In particular, the conservation laws of symplecticity, energy and momentum are considered, along with others that arise from linear symmetries. Numerical methods that preserved these conformal conservation laws are presented. The nonlinear Schrödinger equation and semi-linear wave equation with added dissipation are used as examples to demonstrate the results.
The Monge-Ampère equation is a fully nonlinear second order PDE that arises in various application areas such as differential geometry, meteorology, reflector design, economics, and optimal transport. Despite its prevalence in many applications, numerical methods for the Monge-Ampère equation are still in their infancy. In this work, I will discuss a new approach to construct and analyze several finite element methods for the Monge-Ampère equation. As a first step, I will show that a key feature in developing convergent discretizations is to construct schemes with stable linearizations. I will then describe a methodology for constructing finite elements that inherit this trait and provide two examples: C^0 finite element methods and discontinuous Galerkin methods. Finally, I will present some promising application areas for this methodology, including mesh generation and computing a manifold with prescribed Gauss curvature.
We extend hybridizable discontinuous Galerkin (HDG) methods to CFD applications. The HDG methods inherit the geometric flexibility and high-order accuracy of discontinuous Galerkin methods, and offer a significant reduction in the computational cost. In order to capture shocks, we employ an artificial viscosity model based on an extension of existing artificial viscosity methods. In order to integrate the Spalart-Allmaras turbulence model using high-order methods, some modification of the model is necessary. Mesh adaptation based on shock indicator is used to improve shock profiles. Several test cases are presented to illustrate the proposed approach.
We present a recent development of hybridizable discontinuous Galerkin (HDG) methods for continuum mechanics. The HDG methods inherit the geometric flexibility, high-order accuracy, and multiphysics capability of discontinuous Galerkin (DG) methods. They also possess several unique features which distinguish them from other DG methods: (1) the global unknowns are the numerical traces of the field variables; (2) all the approximate variables converge with the optimal order k+1 for diffusion-dominated problems; (3) in such cases, local postprocessing can be developed to increase the convergence rate to k+2 for the approximation of the field variables; (4) they can deal with non-compatible boundary conditions; (5) they result in a compact matrix system; and (6) they are somewhat easier to implement and provide a single code for solving multiphysics problems.
An effective technique which employs only the underlying surface discretization to calculate domain integrals
appearing in boundary element methods has been developed. The proposed approach first converts a domain
integral with continuous or weakly-singular integrand into a boundary integral. The resulting surface integral
is then computed via standard quadrature rules commonly used for boundary elements. This transformation of a
domain integral into a boundary counterpart is accomplished through a systematic generalization of the
fundamental theorem of calculus to higher dimensions. In addition, it is established that the higher-dimensional
version of the first fundamental theorem of calculus corresponds to the classical Poincaré lemma.
This research was supported by the Office of Advanced Scientific Computing Research,
U.S. Department of Energy, under contract DE-AC05-00OR22725 with UT-Battelle, LLC.
Several finite element methods used in the numerical discretization of wave problems in the frequency domain are based on incorporating a priori knowledge about the differential equation into the local approximation spaces by using Trefftz-type basis functions, namely functions which belong to the kernel of the considered differential operator. For the Helmholtz equation, for instance, examples of Trefftz basis functions are plane waves, Fourier-Bessel functions and Hankel functions, and there are in the literature several methods based on them: the Plane Wave/Bessel Partition of Unity Method by Babuška and Melenk, the Ultra Weak Variational Formulation by Cessenat and Després, the Plane Wave/Bessel Least Squares Method by Monk and Wang, the Discontinuous Enrichment Method by Farhat and co-workers, and the Method of Fundamental Solutions by Stojek, to give some examples. These methods differ from one another not only in the type of Trefftz basis functions used in the approximating spaces, but also in the way of imposing continuity at the interelement boundaries: partition of unity, least squares, Lagrange multipliers or discontinuous Galerkin techniques. In this talk, the construction of Trefftz-discontinuous Galerkin methods for both the Helmholtz and the time-harmonic Maxwell problems will be reviewed and their abstract error analysis will be presented. It will also be shown how to derive best approximation error estimates for Trefftz functions, needed to complete the convergence analysis, by using Vekua's theory. Some explicit estimates in the case of plane waves will be given. These results have been obtained in collaboration with Ralf Hiptmair and Andrea Moiola from ETH Zürich.
We apply a very high order Strang split semi-Lagrangian WENO algorithm to kinetic equations. The spatial accuracy of the current Strang split finite difference WENO algorithm is very high (as high as ninth order); however, the temporal error is dominated by the dimensional splitting, which is only second order accurate. It is therefore very important to overcome this splitting error in order to have a consistently high order numerical algorithm. We are currently working on using the IDC framework to overcome the `at best' second order Strang splitting error. Specifically, the dimensional splitting error is overcome by iteratively correcting the numerical solution via the error function, which is obtained by approximately solving the error equation. We will show numerically that if one embeds a first order dimensional splitting algorithm into the IDC framework, there will be a first order increase in the order of accuracy with each correction loop of the IDC algorithm. Applications to the Vlasov-Poisson system will be presented.
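As a minimal illustration of the splitting error barrier (our own toy, not the WENO/Vlasov solver), the following measures the global error of Strang splitting for u' = (A + B)u with non-commuting 2x2 matrices and confirms the second-order rate that the IDC corrections are designed to lift:

```python
# Strang splitting order check for u' = (A + B) u, A and B non-commuting.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(M, terms=30):
    # Taylor-series matrix exponential; accurate for the small norms here.
    E = [[1.0, 0.0], [0.0, 1.0]]
    T = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        T = [[T[i][j] / n for j in range(2)] for i in range(2)]
        T = matmul(T, M)  # T is now M^n / n!
        E = [[E[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return E

def scale(M, s):
    return [[M[i][j] * s for j in range(2)] for i in range(2)]

A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0, 0.0], [1.0, 0.0]]
AB = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def strang_error(dt, t_end=1.0):
    steps = round(t_end / dt)
    # One Strang step: exp(A dt/2) exp(B dt) exp(A dt/2).
    S = matmul(matmul(expm(scale(A, dt / 2)), expm(scale(B, dt))),
               expm(scale(A, dt / 2)))
    U = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(steps):
        U = matmul(S, U)
    exact = expm(scale(AB, t_end))
    return max(abs(U[i][j] - exact[i][j]) for i in range(2) for j in range(2))

e1, e2 = strang_error(0.1), strang_error(0.05)
print("error ratio when halving dt:", e1 / e2)  # near 4: second order
```

The ratio of roughly 4 under halving of dt is the second-order ceiling that remains no matter how high the spatial WENO order is pushed.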
Model order reduction techniques provide an efficient and reliable way
of solving partial differential equations in the many-query or
real-time context, such as (shape) optimization, characterization,
parameter estimation and control.
The reduced basis (RB) approximation is used for a rapid and reliable
solution of parametrized partial differential equations (PDEs). The
reduced basis method finds the solution of parametrized problems as a
projection onto previously precomputed solutions for selected
instances of the parameters. It consists of rapidly convergent
Galerkin approximations on a space spanned by "snapshots" on a
parametrically induced solution manifold; rigorous and sharp a
posteriori error estimators for the outputs/quantities of interest;
efficient selection of quasi-optimal samples in general parameter
domains; and Offline-Online computational procedures for rapid
calculation in the many-query and real-time contexts.
The error estimators play an important role in efficient and
effective sampling procedures: the inexpensive error bounds allow us
to explore much larger subsets of the parameter domain in search of
the most representative or best "snapshots", and to determine when
we have just enough basis functions.
Extensions of the RB method have been combined with domain
decomposition techniques: this approach, called the reduced basis
element method (RBEM), is suitable for the approximation of the
solution of problems described by partial differential equations
within domains which are decomposable into smaller, similar blocks
that are properly coupled. The goal is to speed up the computational
time with rapid and efficient numerical strategies to deal with
complex and realistic configurations, where topology features are
recurrent. The construction of the map from the reference shapes to
each corresponding block of the computational domain is done by
generalized transfinite maps. The empirical interpolation procedure
has been applied to the geometrical non-affine transformation terms
to recast the problem in an affine setting.
Domain decomposition techniques are important both to enable the use
of parallel architectures, speeding up the computational time compared
to a global approach, and to increase the geometric complexity by
dealing with independent smaller tasks on each subdomain. The
approximate solution is recovered as a projection onto local
previously computed solutions and then glued across subdomains by
imposed coupling conditions that guarantee, for example, the
continuity of stresses and velocities in viscous flows.
The Offline/Online decoupling of the reduced basis procedure and the
computational decomposition of the method considerably reduce the
problem complexity and the simulation times.
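The offline/online split can be sketched on a toy parametrized problem; everything below (the 1D equation -u'' + mu*u = 1, the two snapshot parameters, the grid size) is our own illustrative choice, not the RBEM/RBHM machinery itself:

```python
# Toy reduced basis offline/online sketch with affine dependence
# A(mu) = A0 + mu*M for -u'' + mu*u = 1 on (0,1), u(0) = u(1) = 0.

def solve(A, b):
    # Dense Gaussian elimination with partial pivoting.
    m = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for i in range(m):
        p = max(range(i, m), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, m):
            fac = A[r][i] / A[i][i]
            for col in range(i, m):
                A[r][col] -= fac * A[i][col]
            b[r] -= fac * b[i]
    x = [0.0] * m
    for i in range(m - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][col] * x[col] for col in range(i + 1, m))) / A[i][i]
    return x

n = 20                          # "truth" finite difference grid size
h = 1.0 / (n + 1)
A0 = [[(2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0) / h ** 2
       for j in range(n)] for i in range(n)]
M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
f = [1.0] * n

def truth(mu):                  # expensive high-fidelity solve
    return solve([[A0[i][j] + mu * M[i][j] for j in range(n)]
                  for i in range(n)], f)

# Offline: snapshots at selected parameters; project the affine operators.
basis = [truth(1.0), truth(100.0)]
def project(K):
    return [[sum(v[i] * K[i][j] * w[j] for i in range(n) for j in range(n))
             for w in basis] for v in basis]
A0r, Mr = project(A0), project(M)
fr = [sum(v[i] * f[i] for i in range(n)) for v in basis]

def rb_solve(mu):               # Online: assemble and solve a 2x2 system.
    Ar = [[A0r[a][b] + mu * Mr[a][b] for b in range(2)] for a in range(2)]
    c = solve(Ar, fr)
    return [c[0] * basis[0][i] + c[1] * basis[1][i] for i in range(n)]

u_rb, u_ex = rb_solve(30.0), truth(30.0)
err = max(abs(a - b) for a, b in zip(u_rb, u_ex)) / max(map(abs, u_ex))
print("relative RB error at mu=30:", err)
```

The online stage touches only 2x2 objects, which is the source of the speedup; at a snapshot parameter the RB solution reproduces the truth solution to roundoff.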
We propose here a variant of the RBEM, called the reduced basis hybrid
method (RBHM), in which we focus on different coupling conditions to
guarantee the continuity of velocity and pressure. Each basis function
in each reference subdomain is computed considering a zero-stress
condition at the interfaces; the continuity of the (non-zero) stresses
at the interfaces is recovered by a coarse finite element solution on
the global domain, while the continuity of velocities is guaranteed by
Lagrange multipliers.
This computational procedure considerably reduces the problem
complexity and the computational times, which are dominated online by
the coarse finite element solution, while all the RB offline
calculations may be carried out by a parallel computing approach.
Applications and results are shown on several combinations of
geometries representing cardiovascular networks made up of stenoses,
bifurcations, etc.
The concept of Isogeometric Analysis (IGA) was first applied to the approximation of the Maxwell equations in [A. Buffa, G. Sangalli, R. Vázquez, Isogeometric analysis in electromagnetics: B-splines approximation, CMAME, doi:10.1016/j.cma.2009.12.002]. The method is based on the construction of suitable B-spline spaces such that they fit into a De Rham diagram. Its main advantages are that the geometry is described exactly with few elements, and that the computed solutions are smoother than those provided by finite elements. We present here the theoretical background to the approximation of vector fields in IGA. The key point of our analysis is the definition of suitable projectors that render the diagram commutative. The theory is then applied to the numerical approximation of the Maxwell source and eigenvalue problems, and numerical results showing the good behavior of the scheme are also presented.
Joint work with R. Hiptmair, Konstantin Grella, and Eivind Fonn of SAM, ETH.
We report on an ongoing project on sparse tensor finite element discretizations for high-dimensional linear transport problems.
After reviewing several well-posed variational formulations and the regularity of weak solutions of these problems, we discuss their stable discretizations, with a focus on hierarchic, multilevel-type discretizations. Particular examples include (multi)wavelet and shearlet discretizations. We discuss sparse tensor discretizations for least squares formulations of first order transport equations on high dimensional parameter spaces. The formulation is due to Manteuffel et al. (SINUM 2000).
We present preliminary numerical results for both sparse tensor spectral and wavelet discretizations.
Results are reported from ongoing work at the Seminar for Applied Mathematics at ETH Zurich, which is supported by the Swiss National Science Foundation (SNF), and from joint work with the groups of W. Dahmen and G. Kutyniok within the Priority Research Programme (SPP) No. 1324 of the German Research Foundation.
http://www.dfg-spp1324.de
We are concerned with the stability and approximation properties of enriched meshfree methods for the discretization of PDEs on arbitrary domains. In particular we focus on the particle-partition of unity method (PPUM), yet the presented results hold for any partition of unity based enrichment scheme. The goal of our enrichment scheme is to recover the optimal convergence rate of the uniform h-version independent of the regularity of the solution. Hence, we employ enrichment not only for modeling purposes but rather to improve the approximation properties of the numerical scheme. To this end we enrich our PPUM function space hierarchically in an enrichment zone near the singularities of the solution. This initial enrichment, however, can lead to severe ill-conditioning and can compromise the stability of the discretization. To overcome the ill-conditioning of the enriched shape functions we present an appropriate local preconditioner which yields a stable basis independent of the employed initial enrichment. The construction of this preconditioner is of linear complexity with respect to the number of discretization points. The treatment of general domains with mesh-based methods such as the finite element method is rather involved due to the necessary mesh generation. In collocation-type meshfree methods this complex pre-processing step is completely avoided by construction. However, in Galerkin-type meshfree discretization schemes we must compute domain and boundary integrals and thus must be concerned with the meshfree treatment of arbitrary domains. Here, we present a cut-cell-type scheme for the partition of unity method and ensure stability by enforcing the flat-top condition in a simple post-processing step.
Many scientific, engineering and financial applications require
solving high-dimensional PDEs. However, traditional tensor product
based algorithms suffer from the so-called "curse of dimensionality".
We shall construct a new sparse spectral method for
high-dimensional problems, and present, in particular,
rigorous error estimates as well as efficient numerical algorithms for
elliptic equations.
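The gain of a sparse index set over the full tensor product set can be illustrated by a small counting experiment; the hyperbolic cross threshold below is one common choice of sparse spectral index set and is our assumption, not necessarily the exact one used in the talk:

```python
from itertools import product

# Counting experiment: the full tensor product index set {k : max k_i < N}
# has N^d members, while the hyperbolic cross {k : prod(k_i + 1) <= N}
# grows only like N (log N)^{d-1}.

def hyperbolic_cross_size(N, d):
    count = 0
    for k in product(range(N), repeat=d):
        p = 1
        for ki in k:
            p *= ki + 1
        count += (p <= N)
    return count

for d in (2, 3, 4):
    print(d, 16 ** d, hyperbolic_cross_size(16, d))
```

Even for modest N and d the sparse set is orders of magnitude smaller, which is what makes spectral discretization of high-dimensional elliptic problems tractable.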
We shall also propose a new weighted weak formulation for
the Fokker-Planck equation of FENE dumbbell model, and prove its
well-posedness in weighted Sobolev spaces. Based on the new
formulation, we are able to design simple,
efficient, and unconditionally stable semi-implicit Fourier-Jacobi
schemes for the Fokker-Planck equation of FENE dumbbell model.
It is hoped that the combination of the two new approaches
will make it possible to directly simulate the five- or six-dimensional
Navier-Stokes Fokker-Planck system.
We construct uniformly high order accurate discontinuous Galerkin
(DG) and weighted essentially non-oscillatory (WENO) finite volume
(FV) schemes satisfying a strict maximum principle for scalar
conservation laws and passive convection in incompressible flows, and
positivity preserving for density and pressure for compressible Euler
equations. A general framework (for arbitrary order of accuracy) is
established to construct a limiter for the DG or FV method with first order
Euler forward time discretization solving one dimensional scalar
conservation laws. Strong stability preserving (SSP) high order time
discretizations will keep the maximum principle and make the scheme
uniformly high order in space and time. One remarkable property of
this approach is that it is straightforward to extend the method to
two and higher dimensions. The same limiter can be shown to preserve
the maximum principle for the DG or FV scheme solving two-dimensional
incompressible Euler equations in the vorticity stream-function
formulation, or any passive convection equation with an incompressible
velocity field. A suitable generalization results in a high order DG
or FV scheme satisfying the positivity-preserving property for density
and pressure for the compressible Euler equations. Numerical tests
demonstrating the good performance of the scheme will be reported.
This is joint work with Xiangxiong Zhang.
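The scaling limiter at the heart of this framework admits a compact single-cell sketch; the bounds, point values, and cell data below are illustrative:

```python
# Sketch of the scaling limiter for one cell: given point values of the
# DG/FV polynomial at the quadrature points, rescale them about the cell
# average so that all values fall within the global bounds [m, M].

def limit_cell(point_values, cell_avg, m, M):
    theta = 1.0
    for u in point_values:
        if u > cell_avg:
            theta = min(theta, (M - cell_avg) / (u - cell_avg))
        elif u < cell_avg:
            theta = min(theta, (m - cell_avg) / (u - cell_avg))
    # Conservative modification: the cell average (hence conservation) is
    # kept exactly, and accuracy is not degraded for smooth solutions.
    return [cell_avg + theta * (u - cell_avg) for u in point_values]

vals = [1.2, 0.4, -0.1]          # point values overshooting the bounds [0, 1]
avg = sum(vals) / 3.0            # cell average, assumed to lie in [0, 1]
limited = limit_cell(vals, avg, 0.0, 1.0)
print(limited, sum(limited) / 3.0)  # values in [0, 1], average unchanged
```

Because the limiter acts cell by cell on point values only, the extension to two and higher dimensions noted in the abstract is immediate.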
In this talk, we give an overview of adaptive wavelet methods for solving operator equations. In particular, we will focus on the following topics: the application of these methods to time evolution problems such as parabolic problems and the instationary Stokes system; the advantage of using tensor product wavelets and the role of anisotropic regularity; the construction of piecewise tensor product wavelet bases on general domains; and the application of the adaptive scheme to singularly perturbed problems.
Singularity formation in relativistic flow is an open theoretical
problem in relativistic hydrodynamics. These singularities can be
either shock formation, violation of the subluminal conditions or
concentration of the mass. We numerically investigate singularity
formation in solutions of the relativistic Euler equations in
(2+1)-dimensional spacetime with smooth initial data. A hybrid method
is used to solve the radially symmetric case. The hybrid method takes
the Glimm scheme for an accurate treatment of non-linear waves and a
central-upwind scheme in other regions where the fluid flow is
sufficiently smooth. The numerical results indicate that shock
formation occurs in a certain parametric regime. This is joint work
with Pierre Gremaud.
The new concept of numerical smoothness is applied to the RKDG (Runge-Kutta/Discontinuous Galerkin) methods for scalar nonlinear conservation laws. The main result is an a posteriori error estimate for the RKDG methods of arbitrary order in space and time, with optimal convergence rate. Currently, the case of smooth solutions is the focus. However, the error analysis framework is prepared to deal with discontinuous solutions in the future.
We present novel discretization techniques based on boundary integral equation formulations for the solution of three-dimensional acoustic and electromagnetic scattering problems in domains with corners and edges. Our method is based on three main components: (1) the use of regularization/preconditioning techniques to design well-conditioned boundary integral equations in domains with geometric singularities; (2) the use of ansatz formulations that explicitly account for the singular and possibly unbounded behavior of the quantities that enter the integral formulations; and (3) the use of a novel Nyström discretization technique based on non-overlapping integration patches and Chebyshev discretization together with Clenshaw-Curtis-type integrations. We will illustrate the excellent performance of our solvers for a variety of challenging 3D configurations that include closed/open domains with corners and edges. Joint work with O. Bruno (Caltech) and A. Anand (IIT Kanpur).
High-order flux reconstruction (FR) schemes are efficient,
simple to implement, and allow various high-order methods, such as the
nodal discontinuous Galerkin (DG) method and any spectral difference
method, to be cast within a single unifying framework. Recently, we have
identified a new class of 1D linearly stable FR schemes. Identification
of such schemes offers significant insight into why certain FR schemes
are stable, whereas others are not. Also, from a practical standpoint,
the resulting linearly stable formulation provides a simple prescription
for implementing an infinite range of intuitive and apparently robust
high-order methods. We are currently extending the 1D formulation to
multiple dimensions (including to simplex elements). We are also
developing CPU/GPU enabled unstructured high-order inviscid and viscous
compressible flow solvers based on the range of linearly stable FR
schemes. Details of both the mathematical theory and the practical
implementation will be presented in the poster.
Locomotion at the micro-scale is important in biology and in industrial
applications such as targeted drug delivery and micro-fluidics. We
present results on the optimal shape of a rigid body locomoting in 3-D
Stokes flow. The actuation consists of applying a fixed moment and
constraining the body to only move along the moment axis; this models the
effect of an external magnetic torque on an object made of magnetically
susceptible material. The shape of the object is parametrized by a 3-D
centerline with a given cross-sectional shape. No a priori assumption is
made on the centerline. We show there exists a minimizer to the infinite
dimensional optimization problem in a suitable infinite class of
admissible shapes. We develop a variational (constrained) descent method
which is well-posed for the continuous and discrete versions of the
problem. Sensitivities of the cost and constraints are computed
variationally via shape differential calculus. Computations are
accomplished by a boundary integral method to solve the Stokes equations,
and a finite element method to obtain descent directions for the
optimization algorithm. We show examples of locomotor shapes with and
without different fixed payload/cargo shapes.
We studied the well-balancedness properties of the high order finite
difference WENO schemes and
high order low dissipative filter schemes based on a five-species
one-temperature reacting flow model.
Both 1D and 2D results are shown to demonstrate the advantages of
using well-balanced schemes for non-equilibrium flows.
This poster will describe recent progress in adapting discontinuous
Galerkin methods to obtain high efficiency on modern graphics
processing units. A new low-storage version of the methods allows all
elements of unstructured meshes to be curvilinear without incurring the
usual expensive memory overhead of the traditional scheme. Some
performance tests reveal that a modest workstation can generate
teraflop performance. Simulation results from time-domain electromagnetics
and also compressible flows will demonstrate the promise of this new
formulation.
We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance.
The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain.
Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of the linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations.
Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types of a posteriori error estimators, such as residual-based and equilibrated ones, can be designed based on the interpretation of the dual variable as a Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time integration schemes.
However, the differential-algebraic character of the system can result in strong oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of possible applications and show the performance of the space discretization scheme, the non-linear solver, the adaptive refinement process and the time integration.
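The primal-dual active set strategy described above can be sketched on a toy problem. The code below applies it to a discrete 1D obstacle problem with the complementarity system A u - lam = f, u >= psi, lam >= 0, lam (u - psi) = 0, using the NCP-function active-set prediction lam + c (psi - u) > 0. The Laplacian stiffness matrix, load f, obstacle psi and scaling c are illustrative choices, not data from the talk.

```python
import numpy as np

# Illustrative data: 1D finite difference Laplacian, downward load,
# constant obstacle from below (not taken from the talk).
n = 99
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = -10.0 * np.ones(n)            # constant downward load
psi = -0.05 * np.ones(n)          # obstacle from below
c = 1.0                           # NCP scaling parameter

u = np.zeros(n)
lam = np.zeros(n)
for it in range(50):
    # Active set predicted by the NCP function lam + c*(psi - u).
    active = lam + c * (psi - u) > 0
    # One semi-smooth Newton step: enforce u = psi on the active set,
    # lam = 0 on the inactive set, and A u - lam = f everywhere.
    M = A.copy()
    rhs = f.copy()
    M[active, :] = 0.0
    M[active, active] = 1.0
    rhs[active] = psi[active]
    u_new = np.linalg.solve(M, rhs)
    lam_new = A @ u_new - f
    lam_new[~active] = 0.0
    # The iteration terminates when the active set no longer changes.
    converged = np.array_equal(active, lam_new + c * (psi - u_new) > 0)
    u, lam = u_new, lam_new
    if converged:
        break
```

At termination the unchanged active set implies exact discrete complementarity: u = psi and lam > 0 on the contact set, u >= psi and lam = 0 elsewhere.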
We present central discontinuous Galerkin methods for solving the ideal
magnetohydrodynamic (MHD) equations.
The methods are based on the original central discontinuous Galerkin methods
designed for hyperbolic conservation laws on overlapping meshes,
and they use a different discretization for the magnetic induction equations.
The resulting schemes carry many features of standard central discontinuous Galerkin
methods, such as high-order accuracy and freedom from exact or approximate Riemann solvers.
More importantly, the numerical magnetic field is exactly divergence-free.
This property, which is desirable for reliable simulations of the MHD equations,
is achieved by first approximating the normal component of the magnetic field through
discretizing the induction equations on the mesh skeleton, namely the element interfaces,
and then performing an element-by-element divergence-free reconstruction
with matching accuracy. Numerical examples are presented to demonstrate
the high order accuracy and the robustness of the schemes. This is a joint work with
Fengyan Li and Sergey Yakovlev at Rensselaer Polytechnic Institute.
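The idea behind an element-by-element divergence-free reconstruction can be illustrated in its lowest-order form on a single Cartesian cell. The sketch below is only a cartoon of the high-order scheme described in the abstract: given face-averaged normal components of B that satisfy the discrete divergence constraint on the cell, the piecewise-linear reconstruction is divergence-free at every interior point, not just in the mean. All face values are illustrative.

```python
# One Cartesian cell [0,h] x [0,h] with face-averaged normal components.
h = 0.5
Bx_left, By_bottom, By_top = 1.0, -0.3, 0.7
# Choose the remaining face value so the face data satisfy the discrete
# divergence constraint: (Bx_right - Bx_left)/h + (By_top - By_bottom)/h = 0.
Bx_right = Bx_left - (By_top - By_bottom)

def B(x, y):
    """Lowest-order reconstruction inside the cell: Bx linear in x, By in y."""
    bx = Bx_left + (Bx_right - Bx_left) * x / h
    by = By_bottom + (By_top - By_bottom) * y / h
    return bx, by

def div_B(x, y):
    """Pointwise divergence of the reconstruction: constant, and zero."""
    return (Bx_right - Bx_left) / h + (By_top - By_bottom) / h
```

Because Bx depends only on x and By only on y, the pointwise divergence equals the discrete face balance, which was built to vanish.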
We present an accurate and efficient numerical model for the simulation
of fully nonlinear (non-breaking), three-dimensional surface water waves
on infinite or finite depth.
As an extension of the work of Craig and Sulem (1993), the numerical method
is based on the reduction of the problem to a lower-dimensional Hamiltonian system
involving surface quantities alone.
This is accomplished by introducing the Dirichlet--Neumann operator
which is described in terms of its Taylor series expansion in homogeneous
powers of the surface elevation.
Each term in this Taylor series can be computed efficiently using
the fast Fourier transform.
An important contribution of this work is the development and implementation
of a symplectic implicit scheme for the time integration of the Hamiltonian
equations of motion, as well as detailed numerical tests on the convergence
of the Dirichlet--Neumann operator.
The performance of the model is illustrated by simulating the long-time
evolution of two-dimensional steadily progressing waves, as well as
the development of three-dimensional (short-crested) nonlinear waves,
both in deep and shallow water. This is a joint work with Philippe Guyenne
at the University of Delaware.
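The FFT-based evaluation of the Taylor terms of the Dirichlet--Neumann operator can be sketched for the first two standard deep-water terms in one dimension, G0 = |D| and G1(eta) = D eta D - G0 eta G0 with D = -i d/dx; each term is a composition of Fourier multipliers and pointwise products. The grid size and test data below are illustrative.

```python
import numpy as np

N, L = 256, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

def mult(symbol, f):
    """Apply a Fourier multiplier via FFT; the result may be complex."""
    return np.fft.ifft(symbol * np.fft.fft(f))

def G0(psi):
    """Zeroth-order deep-water Dirichlet--Neumann operator, G0 = |D|."""
    return mult(np.abs(k), psi)

def D(psi):
    """The operator D = -i d/dx, a multiplier with symbol k."""
    return mult(k, psi)

def G1(eta, psi):
    """First-order term of the Taylor expansion: D eta D - G0 eta G0."""
    return D(eta * D(psi)) - G0(eta * G0(psi))

# Flat-surface check: for psi = cos(m x) the exact deep-water operator
# gives m * cos(m x), which G0 alone reproduces.
m = 3
psi = np.cos(m * x)
G0psi = np.real(G0(psi))
```

A quick consistency check: for a constant surface elevation the two parts of G1 cancel (k^2 = |k|^2), matching the fact that a vertical shift does not change the deep-water operator.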
Joint work with Mary Wheeler (University of Texas) and Ivan
Yotov (University of Pittsburgh).
We develop a new mixed finite element method for elliptic
problems on general quadrilateral and hexahedral grids that
reduces to a cell-centered finite difference scheme. A special
non-symmetric quadrature rule is employed that yields a
positive definite cell-centered system for the scalar by
eliminating local fluxes. The method is shown to be accurate on
highly distorted rough quadrilateral and hexahedral grids,
including hexahedra with non-planar faces. Theoretical and
numerical results indicate first-order convergence for the
scalar and face fluxes. We also develop a multiscale mortar
method that utilizes the multipoint flux mixed finite element method
as the fine-scale discretization. Continuity of flux between
coarse elements is imposed via a mortar finite element space on
a coarse grid scale. With an appropriate choice of polynomial
degree of the mortar space, we derive optimal order convergence
on the fine scale for both the multiscale pressure and
velocity, as well as the coarse scale mortar pressure. We
present applications to multiphase flow in porous media.
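As a toy illustration of how eliminating the fluxes from a mixed formulation yields a positive definite cell-centered system for the scalar alone, consider a 1D analogue with harmonic-mean face transmissibilities (the non-symmetric quadrature construction on quadrilateral and hexahedral grids from the talk is not reproduced here; all problem data are illustrative).

```python
import numpy as np

# Cell-centered scheme for -(K p')' = f on (0,1) with p = 0 at both ends.
n = 64
h = 1.0 / n
xc = (np.arange(n) + 0.5) * h          # cell centers
K = 1.0 + xc                           # variable coefficient
f = np.ones(n)                         # source term

# Face transmissibilities: harmonic averages of neighbouring K values,
# half-cell distances at the two Dirichlet boundary faces.
T = np.zeros(n + 1)
T[1:n] = 2.0 * K[:-1] * K[1:] / (K[:-1] + K[1:]) / h
T[0] = 2.0 * K[0] / h
T[n] = 2.0 * K[-1] / h

# Flux elimination leaves a tridiagonal SPD system for the pressures.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = T[i] + T[i + 1]
    if i > 0:
        A[i, i - 1] = -T[i]
    if i < n - 1:
        A[i, i + 1] = -T[i + 1]
p = np.linalg.solve(A, f * h)

# Face fluxes u = -K p' recovered from pressure differences.
u = np.empty(n + 1)
u[0] = -T[0] * p[0]                    # p = 0 on the left boundary
u[1:n] = -T[1:n] * (p[1:] - p[:-1])
u[n] = T[n] * p[-1]                    # p = 0 on the right boundary
```

By construction the recovered fluxes are locally conservative: the flux difference across each cell equals the integrated source.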