March 11 - 15, 2013
Keywords of the presentation: Data Assimilation, Ensemble Prediction, Parameter Estimation, Climate System Prediction
Data assimilation for a climate system model is the process of combining model forecasts with observations to produce improved estimates of the model state. Ensemble filter data assimilation algorithms attempt to provide a discrete sample of model state estimates that are consistent with observations and model constraints. A practical introduction to ensemble filters is presented emphasizing empirically motivated aspects of the algorithms that are in need of theoretical understanding from the mathematical community. Results of applying ensemble filters to the atmosphere and ocean component models of a climate modeling system provide examples of current capabilities and challenges. Dealing with systematic model biases is one of the most serious challenges and can be addressed by improving the models, by extending the data assimilation system, and by including stochastic variability in the models. The possibility of using data assimilation to improve models by estimating the values of uncertain model parameters is also discussed.
Keywords of the presentation: Stochastic parameterization of transient eddy forcing
Dynamical underpinnings of the effects of transient eddy forcing on large-scale circulation are explored by considering the effects of spatially localized oscillatory forcing on wind-driven double gyres. The outcome is an understanding of the key eddy forcing parameters and locations.
A common assumption of Kalman filtering methods for nonlinear dynamics is that the covariance structures of the system noise and observation noise are readily available. If these covariances are not known, or are changing, filter performance will be compromised. Early in the development of Kalman filters, Mehra enabled adaptivity by showing how to estimate these needed covariances, but only in the case of linear dynamics with full observation. We propose an adaptive filter based on the unscented version of the ensemble Kalman filter (EnKF) which estimates the covariances in real time even for nonlinear dynamics and observations. We test the adaptive filter on a 40-dimensional Lorenz96 model and show the dramatic improvements in state estimation that are possible. We also discuss the extent to which such an adaptive filter can compensate for model error.
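The adaptive covariance estimation described above is not reproduced here, but the non-adaptive baseline it extends can be sketched. Below is a minimal stochastic (perturbed-observation) EnKF building block for the 40-dimensional Lorenz-96 model; the forcing F = 8, the step size, and the twin-experiment settings are illustrative choices, not values from the talk.

```python
import numpy as np

def lorenz96(x, F=8.0):
    # Lorenz-96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.05, F=8.0):
    # One fourth-order Runge-Kutta step for a single state vector x.
    k1 = lorenz96(x, F)
    k2 = lorenz96(x + 0.5 * dt * k1, F)
    k3 = lorenz96(x + 0.5 * dt * k2, F)
    k4 = lorenz96(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def enkf_analysis(ens, y, H, R, rng):
    # Stochastic EnKF update: each member is nudged toward a perturbed
    # copy of the observation vector y using the sample Kalman gain.
    m = ens.shape[0]
    anom = ens - ens.mean(axis=0)
    Pf = anom.T @ anom / (m - 1)                      # forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
    yp = rng.multivariate_normal(y, R, size=m)        # perturbed observations
    return ens + (yp - ens @ H.T) @ K.T
```

In a twin experiment one alternates rk4_step forecasts of each member with enkf_analysis updates against noisy observations of a truth run; the adaptive filter of the talk would additionally re-estimate R and the system-noise covariance from the innovation statistics.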
Keywords of the presentation: uncertainty quantification, model error, information theory, statistical predictions, turbulent systems, fluctuation-dissipation theorem
Incomplete knowledge of the true dynamics and its partial observations pose a notoriously difficult problem in many scientific applications which require predictions of high-dimensional dynamical systems with physical instabilities and energy fluxes across a wide range of scales. The issue of 'model error' is particularly important in prediction of turbulent geophysical systems with a multitude
of active spatio-temporal scales and rough energy spectra near the resolution cut-off of the numerical models. I will give an overview of a newly emerging stochastic-statistical framework which allows for information-theoretic quantification of uncertainty and reduction of model error effects in imperfect statistical predictions of complex, nonlinear and non-Gaussian multi-scale dynamics. Two main themes of the talk are (i) existence and mitigation of 'information' barriers to imperfect model improvement and (ii) the utility of fluctuation-dissipation type arguments in forced, dissipative turbulent systems for assessing the skill of imperfect predictions of 'climate change' scenarios based on the information obtained from the unperturbed climate/equilibrium.
Keywords of the presentation: optimal response, tropical heating, AMOC
Having a linear operator that can accurately estimate how a system will react to a weak external stimulus makes it possible to address many problems of interest to the atmosphere/ocean science community. Such problems include questions of optimal forcing and response and systematic investigations of response as a function of forcing properties. An approach that has been shown to produce such operators is the fluctuation-dissipation theorem (FDT). Here we demonstrate its usefulness by applying it to two topics. The first topic concerns the response of the atmosphere to tropical heat sources. We use FDT to investigate how the remote response to such sources depends on their time-dependent properties, in particular on the speed at which such sources may move along the equator, as for example during Madden-Julian Oscillation events. The second topic concerns the response of the ocean to surface fluxes. In this case we search for the most efficient way to excite long-lasting responses in the SST field and the Atlantic Overturning Circulation if the ocean is forced by anomalous sources of heat, momentum or salt.
Based on the spectral theory of chaotic and dissipative dynamical systems, we argue that the time variability of recurrent large-scale patterns typically simulated by geophysical fluid models plays a potentially key role in the parameter dependence of long-term statistics of such models. The cornerstone of our approach consists of interpreting this variability in terms of Ruelle-Pollicott (RP) resonances which encode crucial information about the nonlinear dynamics of the model. A new approach for estimating RP resonances as filtered through observables of the system will also be presented. This approach relies on appropriate representations of the dynamics by Markov operators which are adapted to a given observable. It will be shown on an El Niño-Southern Oscillation (ENSO) model of intermediate complexity that the model statistics are most sensitive for the smallest spectral gaps of the associated Markov
operator; such small gaps turn out to correspond to regimes where peaks in the power spectrum are the most energetic, while correlations decay more slowly. Theoretical arguments will be provided to discuss the possible generalizations of this work to more realistic climate simulations obtained with general circulation models (GCMs).
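One ingredient of this programme, estimating a Markov operator adapted to a scalar observable and inspecting its spectral gap, can be sketched as follows; the equal-probability binning and the bin count are simplistic placeholder choices, not the construction used for the ENSO model.

```python
import numpy as np

def transition_matrix(series, n_bins):
    # Coarse-grain a scalar observable into equal-probability bins and
    # count one-step transitions; rows are normalised into probabilities.
    edges = np.quantile(series, np.linspace(0, 1, n_bins + 1))
    idx = np.digitize(series, edges[1:-1])   # bin index in 0..n_bins-1
    T = np.zeros((n_bins, n_bins))
    for a, b in zip(idx[:-1], idx[1:]):
        T[a, b] += 1.0
    row = T.sum(axis=1, keepdims=True)
    return np.where(row > 0, T / row, 1.0 / n_bins)

def spectral_gap(T):
    # Distance between the unit circle and the subdominant eigenvalue modulus.
    mags = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return 1.0 - mags[1]
```

As a sanity check, a strongly autocorrelated AR(1) observable yields a much smaller spectral gap (slower correlation decay) than a white-noise observable.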
Keywords of the presentation: stochastic parameterization, statistical inference, atmospheric convection
I will present recent work on the construction of stochastic parameterizations by statistical inference from high-resolution model datasets. Large Eddy Simulation (LES) models are able to resolve atmospheric convection, but are too expensive to run on large domains. However, data from LES models on limited domains can be used to estimate stochastic processes that mimic the LES convective response to the large-scale atmospheric state. These stochastic processes are conditional on the large-scale state. They are formulated as discrete processes (conditional Markov chains), allowing for easy estimation and computation. The discrete states correspond to different convective states (turbulent flux states, or cloud types). I will discuss application of this approach to LES datasets for shallow and deep convection.
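As a sketch of the estimation step, assuming the convective states and the conditioning large-scale variable have already been discretised, the conditional transition probabilities can be obtained by simple counting; the state labels, bin counts, and uniform fallback rule below are placeholders, not details of the actual LES processing.

```python
import numpy as np

def estimate_cmc(states, cond, n_states, n_cond):
    # Count transitions s_t -> s_{t+1}, stratified by the discretised
    # large-scale state c_t, then normalise each row into probabilities.
    counts = np.zeros((n_cond, n_states, n_states))
    for s0, s1, c in zip(states[:-1], states[1:], cond[:-1]):
        counts[c, s0, s1] += 1.0
    row = counts.sum(axis=2, keepdims=True)
    # Unvisited (condition, state) pairs fall back to a uniform row.
    return np.where(row > 0, counts / row, 1.0 / n_states)
```

Each slice P[c] is then a Markov transition matrix used to evolve the convective state of a grid column whenever the resolved model reports large-scale condition c.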
Keywords of the presentation: eddy parameterization, general circulation
A simple model of the general circulation based on idealized eddy heat and momentum fluxes is presented. The eddy heat flux is parameterized based on a stochastically excited baroclinic model. The eddy momentum flux is parameterized based on a stochastic model whereby surface eddy heat fluxes randomly excite Rossby waves which in turn propagate vertically and horizontally. The model also includes parameterized radiative transfer processes. These parameterizations are then coupled with the momentum and energy equations to solve for the complete general circulation. The resulting model is shown to produce a realistic Hadley cell, heat and momentum fluxes, and zonal jets. Despite the crudity of the parameterizations, the model is argued to be the simplest possible model of the dry general circulation based on physically plausible eddy dynamics.
Keywords of the presentation: Ocean Circulation, Wind Stress Noise, Stochastic Behavior
Results will be presented of a study on the interaction of noise and nonlinear dynamics
in a quasi-geostrophic model of the wind-driven ocean circulation. The recently developed
framework of dynamically orthogonal field theory is used to determine the statistics of
the flows which arise through successive bifurcations of the system as the ratio of forcing
to friction is increased. Focus will be on the understanding of the role of the spatial and
temporal coherence of the noise in the wind-stress forcing. For example, when the wind-stress noise
is additive and temporally white, the statistics of the stochastic ocean flow do not depend
on the spatial structure and amplitude of the noise. This implies that a spatially
inhomogeneous noise forcing in the wind stress field only has an effect on the dynamics
of the flow when the noise is temporally colored. The latter kind of stochastic forcing
may cause more complex or more coherent dynamics depending on its spatial correlation
properties.
We investigate the consequences of power laws and self-similarity in mesocyclones and tornadoes as presented in (Cai 2005; Wurman and Gill 2001; Wurman and Alexander 2005). We give a model for tornado genesis and maintenance using the 3-dimensional vortex gas theory of (Chorin 1994). High-energy vortices with negative temperature in the sense of (Onsager 1949) play an important role in the model. We speculate that the formation of these high-energy vortices is related to the helicity they inherit as they form or tilt into the vertical. We also exploit the notion of self-similarity to justify power laws for weak and strong tornadoes given in (Cai 2005; Wurman and Gill 2001; Wurman and Alexander 2005). Using a nested-grid simulation with ARPS, we find results consistent with scaling in the studies above.
Joint work with Pavel Belik, Kurt Scholz and Misha Shvartsman
Keywords of the presentation: Mixing, Geostrophic eddies, Southern Ocean, Stochastic model
Geostrophic eddies control the meridional mixing of heat, carbon, and other climatically important tracers in the Southern Ocean. We will report the first direct estimates of eddy mixing across the Southern Ocean from the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean. The estimates are based on the dispersion of an anthropogenic tracer and floats over a period of two years. We find that mixing is weak in the upper 2 km and enhanced at depth. A simple stochastic model will be introduced to interpret these variations and explain their importance for climate models.
Keywords of the presentation: Spatiotemporal analysis, high-dimensional data
Nonlinear Laplacian spectral analysis (NLSA) is a method for spatiotemporal analysis of high-dimensional data, which represents spatial and temporal patterns through singular value decomposition of a family of maps acting on scalar functions on the nonlinear data manifold. Through the use of orthogonal basis functions (determined by means of graph Laplace-Beltrami eigenfunction algorithms) and time-lagged embedding, NLSA captures intermittency, rare events, and other nonlinear dynamical features which are not accessible through classical linear approaches such as singular spectrum analysis. We present applications of NLSA to detection of decadal and intermittent variability in the North Pacific sector of comprehensive climate models, and convectively-coupled modes of tropical atmospheric dynamics in infrared brightness temperature satellite data.
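A heavily stripped-down sketch of the first two stages of NLSA (time-lagged embedding, then eigenfunctions of a kernel-based Markov operator on the embedded data) is shown below; the Gaussian kernel, its bandwidth eps, and the plain row normalisation are simplistic placeholders, and the final stage of the full algorithm, projecting the lagged data onto these eigenfunctions and taking an SVD, is omitted.

```python
import numpy as np

def lagged_embedding(x, q):
    # Map a scalar time series into q-dimensional time-lagged vectors.
    return np.stack([x[i:i + q] for i in range(len(x) - q + 1)])

def diffusion_eigenfunctions(Y, eps, l):
    # Gaussian kernel on the embedded data, normalised into a
    # row-stochastic Markov matrix whose leading eigenvectors serve
    # as data-driven basis functions on the data manifold.
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / eps)
    P = W / W.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)[:l]
    return vals.real[order], vecs[:, order].real
```

The leading eigenvalue is always 1 (with a constant eigenfunction); the subsequent eigenfunctions provide the orthogonal basis onto which the lagged data are projected in the full method.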
Keywords of the presentation: Sea ice, climate change, composite materials, statistical physics
The precipitous decline of the summer Arctic sea ice pack is probably the most
visible, large scale change on Earth's surface in recent years. In fact, the
observed losses have significantly outpaced the projections of most global
climate models. As a material, sea ice is a porous composite of pure ice with
brine inclusions. Moreover, sea ice displays composite structure on many length
scales, from millimeters to kilometers. We will discuss how mathematical models
of composite materials and statistical physics are being used to analyze the
effective properties of sea ice on various scales. The results give insight
into key sea ice processes such as melt pond evolution, snow-ice formation, and
nutrient replenishment for algal communities living in the brine microstructure.
These processes must be better understood to improve projections of the fate of
sea ice, and the response of polar ecosystems. Video from an Antarctic
expedition
in 2012 where we measured sea ice properties will be shown after the talk.
_____________________________________________________________________
Video Information
Produced by Ken Golden, Co-Produced by Gordon Jones
The video chronicles the SIPEX II expedition off the
coast of East Antarctica
aboard the Australian icebreaker Aurora Australis during the fall of
2012. The science,
adventure, and wildlife of an Antarctic expedition are highlighted
using footage taken by
Ken Golden, with contributions from others on the voyage. The film
sequences in the
video are not only visually stunning, but help to illustrate how
mathematics is being used
to advance our understanding of climate change in the polar regions.
Keywords of the presentation: stochastic parametrisation; multiplicative noise; homogenisation; diffusion limits of deterministic systems
Whereas diffusion limits of stochastic multi-scale systems have a long and successful history, the case of constructing stochastic parametrizations of chaotic deterministic systems has been much less studied. We present rigorous results of convergence of a chaotic slow-fast system to a stochastic differential equation with multiplicative noise. Furthermore, we present rigorous results for chaotic slow-fast maps, occurring as numerical discretizations of continuous-time systems.
This raises the issue of how to interpret certain stochastic integrals; surprisingly, the resulting integrals of the stochastic limit system are generically neither of Stratonovich nor of Ito type in the case of maps. It is shown that the limit system of a numerical discretisation differs from that of the associated continuous-time system. This has important consequences when interpreting the statistics of long-time simulations of multi-scale systems: they may be very different from those of the original continuous-time system which we set out to study.
Keywords of the presentation: fluctuation-dissipation theorem, atmospheric general circulation model, response operator, sensitivity
In this study we discuss one possibility for estimating the response of statistical characteristics of atmospheric and coupled atmospheric-oceanic general circulation models to small external forcings.
The method is based on applying the fluctuation-dissipation theorem (FDT), which states that for systems with a stationary quasi-Gaussian PDF the response of the system's statistical characteristics to weak external forcing can be expressed in terms of covariances and lag-covariances of fluctuations of the unperturbed system. The major advantage of this approach is that one may approximate the system response operator (relating changes of the statistical characteristic of interest to changes in the system's external forcing) using modeling (or observational) data only.
In this study we use FDT to construct approximate response operators for atmospheric general circulation models CAM3 of NCAR and INM-A5421 of INM RAS as well as coupled atmospheric-oceanic general circulation models CCSM4 (NCAR) and INMCM (INM). We analyze the accuracy of the approach and compare system sensitivities (i.e. the structures of external forcing producing maximum possible response of the system and corresponding response values).
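In the quasi-Gaussian setting this construction reduces to integrating lag-covariances of the unperturbed system, L = ∫ C(τ) C(0)^{-1} dτ over lags 0..τ_max, so that the mean response to a weak constant forcing δf is approximately L δf. A sketch of the estimator follows; the scalar Ornstein-Uhlenbeck process used as a check is an illustrative stand-in for GCM output, chosen because its exact response operator is known.

```python
import numpy as np

def fdt_operator(X, max_lag, dt):
    # X: (T, n) array of model fluctuations sampled every dt.
    # Quasi-Gaussian FDT: L = int_0^{tau_max} C(tau) C(0)^{-1} dtau,
    # with C(tau) = <x(t + tau) x(t)^T>, integrated by the trapezoid rule.
    X = X - X.mean(axis=0)
    T, n = X.shape
    C0_inv = np.linalg.inv(X.T @ X / T)
    L = np.zeros((n, n))
    for k in range(max_lag + 1):
        Ck = X[k:].T @ X[:T - k] / (T - k)   # lag-k covariance estimate
        w = 0.5 if k in (0, max_lag) else 1.0
        L += w * dt * Ck @ C0_inv
    return L
```

For the linear test process dx = -x dt + dW the exact operator is -A^{-1} = 1, which the estimator should recover to within sampling error.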
Keywords of the presentation: parameterization, multiscale methods, turbulence
Superparameterization is a multiscale numerical method that models the impacts of unresolved subgridscale dynamics on resolved large scales by embedding high-resolution, periodic simulations of fine-scale phenomena into the computational grid of a large-scale model. The conventional framework for superparameterization is computationally demanding, and fails to capture certain small-scale instabilities like baroclinic instability. We provide a new stochastic framework that captures an increased range of small-scale instabilities while significantly decreasing the computational cost. The framework defines both a nonlinear deterministic closure for subgridscale terms and a stochastic closure whose mean equals the deterministic closure. The framework is tested in a one-dimensional model of wave turbulence (deterministic closure) and in a two-layer model of quasigeostrophic turbulence (stochastic closure).
I will discuss a new class of physics-constrained multi-level quadratic regression models for predicting invariant marginal statistics. These paradigm models incorporate memory effects in time and the nonlinear noise from energy-conserving nonlinear interactions. I will discuss the mathematical conditions for ergodic solutions and present some preliminary results on their statistical predictive skill.
We describe a novel Markov chain Monte Carlo approach that does not require a likelihood evaluation. Rather, we use unbiased estimates of the likelihood and a Bernoulli factory to decide whether proposed states are accepted or not. We illustrate this approach using an oceanographic data inversion example. The variates required to estimate the likelihood function are obtained via a Feynman-Kac representation. This lifts the restriction of selecting a regular grid for the physical model and eliminates the need for data pre-processing. We implement our approach using the parallel GPU computing environment.
Geophysical fluid systems exhibit a wide range of spatial and temporal scales. Observing networks for geophysical fluid systems, consisting of in-situ and remote-sensing platforms, sample at multiple resolutions. To assimilate spatially high-resolution and sparse observations together, a data assimilation scheme must fulfill two specific objectives. First, the large-scale flow components must be effectively constrained using the sparse observations. Second, small-scale features that are resolved by the high-resolution observations must be utilized to the maximum degree possible. In this talk, we present a practical, multi-scale approach to data assimilation and demonstrate its advantage over conventional data assimilation.
Stochastic analysis has already been shown to be a powerful tool for the derivation of realizable linear and nonlinear SGS stress models, and for the development of unified turbulence models that can be used continuously as LES and RANS, or FDF and PDF methods. Here, it is shown that stochastic analysis is also very helpful for the development of realizable linear and nonlinear dynamic SGS stress models. We verify the derived model in simulations of the turbulent Ekman layer.
Organized convection in the tropics hinders long-term weather and climate predictions on the global scale, and in midlatitudes in particular.
Improvement in the representation of organized convection and cloud processes in coarse-resolution climate models is one of the major challenges in contemporary climate modelling.
The multicloud model of Khouider and Majda (2006, 2008) is based on the self-similar structure of tropical convective systems as the building block.
It carries the three main cloud types that characterize organized tropical convection. As such it is very successful in capturing the physical and dynamical features of synoptic-scale convectively coupled waves and the associated planetary-scale intra-seasonal oscillation known as the Madden-Julian oscillation. In order to represent the sub-grid variability due to unresolved interactions between those clouds and the environment, a stochastic multi-cloud model was proposed by Khouider, Biello, and Majda (2010). This model is further refined and compared against its deterministic counterpart by Frenkel, Majda, and Khouider (2012, 2013). This poster summarizes the new results from this assessment and comparison.
Keywords of the presentation: Tropical convection, Climate models, stochastic parameterization, convectively coupled waves, MJO
Despite recent advances in supercomputing, current general circulation models (GCMs) poorly represent the variability associated with organized tropical convection. The reason for this failure is believed to be the inadequate treatment of organized convection by the underlying cumulus parameterizations. Most of these parameterizations are based on the quasi-equilibrium theory, which assumes that convection has an instantaneous or rapid response to large-scale instability, and thus fail to capture the intermittent and sluggish nature of deep convection that is believed to be key to its tremendous capability to organize itself into mesoscale to planetary-scale convective systems, including synoptic-scale convectively coupled waves and the Madden-Julian oscillation (MJO). Khouider and Majda (2006) recently introduced a new conceptual model parameterization for organized convection which is based on the interactions between three cloud types, congestus, deep, and stratiform, that, according to recent satellite and in situ observations, are the dominant cloud features in organized tropical convective systems (TCS) of all scales. TCSs are characterized by leading congestus cloud decks followed by deep convection and trailing stratiform anvils. As such, they present a self-similar vertical structure characterized by a backward tilt in moisture, zonal winds, temperature, and heating.
The multicloud model is very successful in reproducing the main features of observed convectively coupled waves and the MJO including their phase speeds and their zonal and vertical structures, in the context of both a simple model with a crude vertical resolution reduced to the first two baroclinic modes and a GCM. The key to this success includes the use of two main heating profiles, a half sine profile corresponding to deep convection that heats the whole troposphere and a full sine profile with a dipole corresponding to heating below and cooling above during the congestus phase and heating above and cooling below during the stratiform phase. Two versions of the multicloud model are proposed. A deterministic version that facilitates linear wave analysis and a stochastic version that considerably enhances the variability and captures the observed chaotic nature of organized convection. In the deterministic version, the three cloud types interact with each other through simple adjustment equations and a moisture switch function that depends on the dryness of the middle troposphere; when the troposphere is dry congestus clouds are favored and when the troposphere is moist deep convection is promoted. The stochastic model on the other hand is based on a lattice of sites that are 1 to 10 km apart that either are occupied by a cloud of a certain type or are clear sky. The lattice sites switch from one state to another according to whether the large-scale environment is favorable to one cloud type or the other, dependent mainly on CAPE and the tropospheric dryness.
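The lattice dynamics described above can be caricatured in a few lines of code; the transition rates below, which depend on CAPE and mid-tropospheric dryness, are invented for illustration and are not the calibrated rates of the multicloud model.

```python
import numpy as np

# Site states of the toy lattice: clear sky or one of three cloud types.
CLEAR, CONGESTUS, DEEP, STRATIFORM = 0, 1, 2, 3

def step_lattice(sites, cape, dryness, dt, rng):
    """Advance every lattice site by one Euler step of a continuous-time
    Markov chain. cape and dryness are normalised to [0, 1]; each allowed
    transition fires independently with probability 1 - exp(-rate * dt),
    which is accurate to O(dt^2). All rates are illustrative placeholders."""
    rates = {
        (CLEAR, CONGESTUS): cape * dryness,      # dry troposphere favours congestus
        (CLEAR, DEEP): cape * (1.0 - dryness),   # moist troposphere favours deep
        (CONGESTUS, DEEP): 1.0 - dryness,        # congestus deepens as air moistens
        (DEEP, STRATIFORM): 0.5,                 # deep decays into stratiform anvil
        (CONGESTUS, CLEAR): 0.2,
        (DEEP, CLEAR): 0.1,
        (STRATIFORM, CLEAR): 0.5,
    }
    new = sites.copy()
    for (src, dst), rate in rates.items():
        fire = (sites == src) & (rng.random(sites.shape) < 1.0 - np.exp(-rate * dt))
        new[fire] = dst
    return new

def area_fractions(sites):
    # Cloud area fractions that would force the large-scale heating.
    return np.bincount(sites, minlength=4) / sites.size
```

The area fractions returned by the last function are what a host model would use to set the congestus, deep, and stratiform heating rates at each coarse grid column.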
In this talk, both the deterministic and the stochastic models will be presented and will be compared against each other. Some emphasis shall be put on the chaotic evolution of convective precipitation and gravity waves in the stochastic model.
Joint work with Andrew Majda and Yevgeniy Frenkel.
Keywords of the presentation: Bayesian inference, adaptive sampling, adjoint methods, PC surrogates
This talk discusses the inference of physical parameters using model surrogates.
Attention is focused on the use of adaptive sampling schemes to build
suitable representations of the dependence of the model response on uncertain input
data. A Bayesian inference formalism is then applied to update the uncertain inputs
based on available measurements or observations. To perform the update, we consider
two alternative approaches, based on the application of Markov Chain Monte Carlo
methods or of adjoint-based optimization techniques. We outline the implementation of
these techniques to infer dependence of the drag coefficient on wind-speed based on
AXBT temperature data obtained during Typhoon Fanapi.
Keywords of the presentation: uncertainty estimates of climate patterns, low-dimensional parametric representation of the NAO, state- space modeling of the NAO
On all timescales, from weekly to monthly to intraseasonal to interannual to decadal, the climate
system is dominated by large-scale spatial patterns or “modes” of atmospheric and oceanic variability that control regional climate. These patterns are most often defined in terms of Empirical Orthogonal Function (EOF) analysis (equivalent to principal component (PC) analysis). The resulting climate patterns or climate modes organize coherent variations in climate over large regions, and have proven useful in creating indices that document the strength of the relevant pattern, e.g., in the atmosphere, and can be used to correlate the pattern with related phenomena in other components of the climate system, such as patterns of sea surface temperature and sea ice concentration. Limitations of current EOF analysis include the absence of associated measures of uncertainty or variability, and the implicit assumption of stationarity underlying the use of the first EOF calculated from long time-series.
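For reference, the conventional EOF/PC decomposition that the proposed parametric model improves upon is just a singular value decomposition of the centred data matrix; the synthetic field in the check below is a placeholder, not NAO data.

```python
import numpy as np

def eof_analysis(field, n_modes):
    # field: (time, space) data matrix. Centre in time, then SVD:
    # rows of Vt are the spatial EOF patterns, columns of U * s are the
    # principal component time series.
    anom = field - field.mean(axis=0)
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    variance_fraction = s ** 2 / np.sum(s ** 2)
    return Vt[:n_modes], (U * s)[:, :n_modes], variance_fraction[:n_modes]
```

Note that this decomposition, as the abstract points out, comes with no uncertainty estimates and assumes stationarity; those are exactly the limitations the state-space NAO model addresses.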
The focus of this presentation is on the wintertime pattern of the North Atlantic Oscillation (NAO), motivated in part by the desire to relate the NAO to fluctuations in sea ice concentration. I will show some results from observations then go on to describe ongoing work with colleagues in the Statistics Department at UC Irvine. A low-dimensional parametric representation of the NAO has been developed. It can be applied over shorter time scales than conventional EOF analysis and allows easy estimates of uncertainty in parameter estimates and therefore in the NAO. This model has been extended to accommodate time varying parameters. The parameters have physical interpretations such as locations of the centers of action of the NAO.
(Joint work with Hal Stern, Xu Tian, Yi-Hui Wang and Yaming Yu)
We describe the application of the implicit particle filter to a shallow water model of nearshore circulation. This is a model with approximately 30,000 state variables. We assimilate gridded observations of the two horizontal velocity components at each of 16 locations. Results are shown for two distinct flow regimes, one characterized by regular shear waves, and the other a more complex aperiodic flow that we show is qualitatively different. The system runs efficiently on a single workstation.
The Bayesian Hierarchical Model (BHM) methodology has been applied to the
generation of ensemble ocean surface vector winds in a sequence of increasingly
sophisticated models. This history is briefly reviewed to establish the
approach to BHM development in atmosphere-ocean contexts. Recently,
ensemble surface winds and wind stresses have been obtained from BHMs, given data
stage inputs from satellites and weather-center analyses. Process model distributions
are based on leading
order terms from a Rayleigh Friction Equation balance and from formulae for
bulk transfers at the air-sea interface. The forcing ensembles exploit precise
observations and precise specifications of error to infer error in ocean forecasts
based on two different kinds of data assimilation (DA) systems; i.e., a sequential DA system in
the Mediterranean Sea and a variational DA system in the California Current System.
Plans for developments in the next level of sophistication in atmosphere-ocean BHMs
will introduce process model breakthroughs to be discussed in the talk that
follows (Wikle et al.).
We study a deterministic chaotic slow-fast system which exhibits noise-induced tipping between metastable regimes. We investigate whether stochastic forecast models can be beneficial in ensemble data assimilation, in particular in the realistic setting when observations are only available for slow variables. The main result is that under certain conditions stochastic forecast models with model error can improve the analysis skill when used in place of the perfect full deterministic model. The stochastic climate model is far superior at detecting transitions between regimes. Stochastic climate models are capable of producing superior skill in an ensemble setting due to finite ensemble size effects; ensembles obtained from the perfect deterministic forecast model lack sufficient spread even for moderate ensemble sizes. This is corroborated with numerical simulations.
Keywords of the presentation: parameter sensitivity, climate modeling, atmospheric convection, precipitation, long-tailed distributions
The representation of subgrid scale processes introduces many poorly constrained parameters into mathematical representations of the climate system, contributing substantial uncertainty in model projections of climate change, especially for variables such as precipitation. This talk will outline a few pragmatic problems where connection to the stochastic community may be productive. Examples in climate models suggest that the parameter dependence for leading climate statistics can be approximated as smooth, although with nonlinearity strong enough to be important within the feasible range for certain parameters. This apparent smoothness is conjectured to be due to the internal variability acting like a noise bath for the observables of interest. To the extent this holds, it permits practical techniques for sensitivity quantification, for instance borrowed from the optimization literature, and suggests that part of the difficulty in tuning climate models may be viewed as the challenge of a multi-objective optimization problem of high dimension. But what might be the limitations to such smooth approximations? This is illustrated in an intermediate complexity model of El Niño, which in certain flow regimes exhibits rough parameter dependence in the following sense: statistics of important observables exhibit rapid changes within parameter intervals small compared to what could be constrained by any achievable observing system. This motivates an approach based on estimating Ruelle-Pollicott resonances filtered through a subset of climate observables in a related talk by M. Chekroun. In a Markov representation — in which non-observed variables act as noise — the spectral gap (given by the separation of eigenvalues of the transition matrix from the unit circle) is related to the sensitivity of the system.
Finally, the practical example of improving constraints on deep convective parameterization is associated with a number of statistics for the onset of deep convection, including longer-than-Gaussian tails in water vapor distributions, and related behavior in temperature and other tracers, that will be outlined for their connection with talks by S. Stechmann and P. Sura.
(Joint work with M. Chekroun, S. Stechmann, A. Bracco, H. Luo, J. McWilliams, D. Kondrashov, M. Ghil, S. Sahany, R. Neale, K. Hales.)
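The spectral-gap diagnostic mentioned above can be sketched numerically. The following is a minimal illustration, not the authors' setup: a scalar observable is binned into a finite-state Markov chain, the transition matrix is estimated by counting, and the gap is the distance of the subdominant eigenvalues from the unit circle. The AR(1) surrogates and the bin count are illustrative choices.

```python
import numpy as np

def transition_matrix(series, n_bins):
    """Estimate a Markov transition matrix from a scalar time series by
    binning the state space into quantile bins and counting transitions."""
    edges = np.quantile(series, np.linspace(0.0, 1.0, n_bins + 1))
    states = np.clip(np.searchsorted(edges, series, side="right") - 1, 0, n_bins - 1)
    counts = np.zeros((n_bins, n_bins))
    for i, j in zip(states[:-1], states[1:]):
        counts[i, j] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    # Row-normalize; empty rows (none here) would get a uniform row.
    return np.where(rows > 0, counts / np.where(rows == 0, 1.0, rows), 1.0 / n_bins)

def spectral_gap(P):
    """Separation of the subdominant eigenvalues from the unit circle."""
    magnitudes = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 - magnitudes[1]

rng = np.random.default_rng(0)

def ar1(phi, n=20000):
    """AR(1) surrogate observable with memory parameter phi."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

gap_fast = spectral_gap(transition_matrix(ar1(0.2), 10))   # rapidly mixing
gap_slow = spectral_gap(transition_matrix(ar1(0.95), 10))  # slowly mixing
print(gap_fast, gap_slow)
```

As expected, the slowly mixing surrogate has the smaller spectral gap, consistent with the link between a small gap and high parameter sensitivity.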
Keywords of the presentation: trends, time series analysis, non-parametric, empirical mode decomposition, intrinsic time-scale decomposition
How does one determine whether the high summer temperatures in Moscow of a few years ago were an extreme climatic fluctuation or the result of a systematic global warming trend? How does one analyze the causes of this summer's high temperatures in the US if climate variability is poorly constrained? It is only under exceptional circumstances that one can determine whether a climate signal belongs to a particular statistical distribution. In fact, climate signals are rarely "statistical"; there is usually no way to obtain enough field data to produce a trend or tendency based upon data alone. There are other challenges to obtaining a climate trend: inherent multi-scale manifestations, nonlinearities, and our incomplete knowledge of climate variability. We propose a trend or tendency methodology that makes no parametric or statistical assumptions and is capable of dealing with multi-scale time series. The methodology uses intrinsic time-scale decomposition to find an adaptive decomposition of the original signal; the components of the signal are then used in the construction of the tendency. Properties of the decomposition, as well as of the tendency, will be discussed.
Using real and synthetic data, the tendency will be compared to other, more traditional notions of a trend.
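The intrinsic time-scale decomposition itself is more involved than can be shown here; as a simple stand-in, the sketch below peels off progressively slower components with successive moving averages and takes the final residual as a nonparametric tendency. The window choices and the synthetic signal are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def moving_average(x, w):
    """Centered moving average with reflected ends (a simple smoother)."""
    pad = np.r_[x[w - 1:0:-1], x, x[-2:-w - 1:-1]]
    kernel = np.ones(2 * w - 1) / (2 * w - 1)
    return np.convolve(pad, kernel, mode="valid")

def tendency(x, windows=(5, 25, 125)):
    """Peel off fast components scale by scale; the residual slow part
    plays the role of the tendency."""
    slow = x.copy()
    components = []
    for w in windows:
        smooth = moving_average(slow, w)
        components.append(slow - smooth)  # fast component at this scale
        slow = smooth
    return components, slow

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 600)
# Synthetic multi-scale signal: slow background + oscillation + noise.
signal = t**2 + 0.3 * np.sin(40 * t) + 0.1 * rng.standard_normal(600)
components, trend = tendency(signal)
corr = np.corrcoef(trend, t**2)[0, 1]
print(corr)
```

For this synthetic example the recovered tendency tracks the smooth background closely, while a straight-line fit would miss its curvature.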
A quantitative empirical connection is exhibited between a modified random-walk Markov model and the dynamics of observed, satellite-altimeter-tracked nonlinear ocean eddies. (Joint work with M. Schlax and D. Chelton, Oregon State Univ.)
Keywords of the presentation: Turbulent closure, Nonlinear energy transfers, Lorenz 96
We develop a novel second-order closure methodology for uncertainty quantification in damped forced nonlinear systems with high-dimensional phase space that possess a high-dimensional chaotic attractor. We focus on turbulent systems with quadratic nonlinearities where the finite size of the attractor is caused exclusively by the synergistic activity of persistent, linearly unstable directions and a nonlinear energy transfer mechanism. We first illustrate how existing UQ schemes that rely on the Gaussian assumption will fail to perform reliable UQ in the presence of unstable dynamics. To overcome these difficulties, a modified quasilinear Gaussian (MQG) closure is developed in two stages. First, we exploit exact statistical relations between second-order correlations and third-order moments in statistical equilibrium in order to decompose the energy flux at equilibrium into precise additional damping and enhanced noise on suitable modes, while preserving statistical symmetries; in the second stage, we develop a nonlinear MQG dynamical closure which has this statistical equilibrium behavior as a stable fixed point of the dynamics. Our analysis, UQ schemes, and conclusions are illustrated through a specific toy model, the forty-mode Lorenz 96 system, which, despite its simple formulation, presents strongly turbulent behavior with a large number of unstable dynamical components in a variety of chaotic regimes.
(Joint work with A. Majda)
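The forty-mode Lorenz 96 test bed is easy to reproduce. The sketch below integrates the standard system with RK4 and accumulates the climatological energy per mode; the forcing F = 8, the time step, and the run lengths are conventional illustrative choices, not parameters taken from the talk.

```python
import numpy as np

def l96_rhs(x, F=8.0):
    """Lorenz 96 tendency: quadratic advection, linear damping, forcing."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt):
    """One classical Runge-Kutta step for the Lorenz 96 system."""
    k1 = l96_rhs(x)
    k2 = l96_rhs(x + 0.5 * dt * k1)
    k3 = l96_rhs(x + 0.5 * dt * k2)
    k4 = l96_rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(2)
x = 8.0 + 0.01 * rng.standard_normal(40)   # forty modes, perturbed rest state
dt = 0.01
for _ in range(2000):                      # discard the transient
    x = rk4_step(x, dt)
n_stats, energy = 20000, 0.0
for _ in range(n_stats):                   # accumulate climatological statistics
    x = rk4_step(x, dt)
    energy += 0.5 * np.dot(x, x) / n_stats
energy_per_mode = energy / 40
print(energy_per_mode)
```

Second-moment statistics such as this energy are exactly the quantities a second-order closure like MQG is designed to predict without resolving the trajectory.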
Keywords of the presentation: stochastic jump process, meteorology, atmospheric science, tropical convection, extreme precipitation events
In the tropics, storms and convection occur intermittently and have a major impact on weather and climate, yet they are subgrid-scale processes for General Circulation Models (GCMs). This talk presents two prototype stochastic models for representing these effects in GCMs. The first model is aimed at precipitation statistics that resemble critical phenomena from statistical physics, including power-law distributions and long-range correlations. The second model is aimed at convective momentum transport due to convective systems, which is intermittent and sometimes extracts energy from the larger-scale flow and sometimes transfers energy to the larger-scale flow. Both models employ stochastic jump processes to represent the intermittent on-off nature of deep convection in the tropics.
This is joint work with David Neelin and with Andy Majda.
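The simplest member of this family of jump processes is a two-state on-off chain for deep convection. The sketch below is an illustrative caricature, not either of the talk's models; the switching rates are invented for demonstration.

```python
import numpy as np

def simulate_onoff(r_on, r_off, dt, n_steps, rng):
    """Two-state Markov jump process: when convection is off, it switches
    on with rate r_on; when on, it switches off with rate r_off."""
    state = np.zeros(n_steps, dtype=int)
    for t in range(1, n_steps):
        if state[t - 1] == 0:
            state[t] = 1 if rng.random() < r_on * dt else 0
        else:
            state[t] = 0 if rng.random() < r_off * dt else 1
    return state

rng = np.random.default_rng(3)
s = simulate_onoff(r_on=0.1, r_off=0.4, dt=0.1, n_steps=200000, rng=rng)
# Stationary occupancy of the convecting state is r_on / (r_on + r_off).
print(s.mean())
```

The sample occupancy converges to the analytic value 0.1 / 0.5 = 0.2, a basic check before richer ingredients (multiple cloud types, state-dependent rates) are added.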
In this research, we consider a mathematical model of permafrost lake growth and then, using this model, propose a simple phenomenological equation that allows us to evaluate the impact of the Siberian permafrost on climate. Mathematically, permafrost thawing can be described by the classical Stefan approach. We use a modified approach based on phase-transition theory, which takes into account that the thawing layer has a small but non-zero width. The transition from the frozen state to the thawing state is a microscopic process, while lakes are large macroscopic objects. Thus we can assume that locally a lake boundary is a sphere with a large radius of curvature. Moreover, the growth is a slow process. Under such assumptions, the thawing front velocity can be investigated; in particular, asymptotic approaches based on so-called mean curvature motion are possible. As a result, we obtain a deterministic equation that serves as an extremely simplified model of lake growth. We can therefore propose a simple method to compute methane emission into the atmosphere using the natural assumption that the horizontal dimensions of the lakes are much larger than the lake depth. We note that the permafrost lake model that we developed for the methane-emission positive feedback loop problem is a conceptual climate model.
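A mean-curvature-motion caricature of the above can be written in one ordinary differential equation: the planar thawing speed is reduced by a curvature term inversely proportional to the lake radius. The equation, its coefficients, and the area-proportional methane flux below are all illustrative assumptions, not the authors' derived model.

```python
import numpy as np

def lake_radius(v0, gamma, r0, dt, n_steps):
    """Integrate dR/dt = v0 - gamma / R: planar thawing speed v0 reduced
    by a curvature term, so only lakes above a critical radius grow."""
    r = np.empty(n_steps)
    r[0] = r0
    for t in range(1, n_steps):
        r[t] = r[t - 1] + dt * (v0 - gamma / r[t - 1])
    return r

# Hypothetical nondimensional parameters; critical radius is gamma / v0 = 5.
r = lake_radius(v0=1.0, gamma=5.0, r0=10.0, dt=0.01, n_steps=5000)
area = np.pi * r**2   # methane flux taken proportional to lake surface area
print(r[0], r[-1])
```

Starting above the critical radius, the lake grows monotonically and approaches a constant front speed, which is the qualitative behavior the asymptotics describe.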
It is known that observed sea surface temperature (SST) variability satisfies a certain skewness-kurtosis constraint, which is a characterization of its strongly non-Gaussian behavior. We present a stochastic model for SST variability with seasonally varying mixed layer depth and forcing. The model incorporates both additive and multiplicative noise. The strongly non-Gaussian skewness-kurtosis relationship found in daily SST data is recovered through stochastic and asymptotic analysis, and simple computation. This generalizes an earlier result of P. Sura et al. in which the seasonal impact was neglected.
(Joint work with Robert West)
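A linear model with correlated additive and multiplicative (CAM) noise is the standard setting in which the skewness-kurtosis constraint K >= (3/2) S^2 (excess kurtosis K, skewness S) arises. The sketch below simulates such a model and checks the inequality on the sample moments; the coefficients are illustrative, and the model omits the seasonal cycle that the talk adds.

```python
import numpy as np

rng = np.random.default_rng(4)

def cam_sst(lam, E, g, dt, n_steps):
    """Euler-Maruyama for dT = -lam*T dt + (E + g*T) dW: the
    state-dependent noise amplitude makes T non-Gaussian."""
    T = np.zeros(n_steps)
    for t in range(1, n_steps):
        dW = np.sqrt(dt) * rng.standard_normal()
        T[t] = T[t - 1] - lam * T[t - 1] * dt + (E + g * T[t - 1]) * dW
    return T

T = cam_sst(lam=1.0, E=0.5, g=0.3, dt=0.01, n_steps=500000)
m = T - T.mean()
S = np.mean(m**3) / np.std(T) ** 3           # skewness
K = np.mean(m**4) / np.std(T) ** 4 - 3.0     # excess kurtosis
print(S, K)
```

The sample moments are clearly non-Gaussian and sit on the allowed side of the parabola K = (3/2) S^2, up to sampling error.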
Extreme climate events may be defined as atmospheric or oceanic phenomena that occupy the tails of a data set's probability density function (PDF), where the magnitude of the event is large, but the probability of occurrence is rare. Though these types of events are statistically sparse, it is necessary to understand the distribution of events in the tails, as quantifying the likelihood of climate extremes is an important step in predicting overall climate variability. It has been known for some time that the PDFs of atmospheric phenomena are decidedly non-Gaussian, though the shape of the PDF has not been specified explicitly. More recently, it has been shown from observations that many atmospheric variables follow a power-law distribution in the tails. This agrees with stochastic theory, which asserts that power-law distributions should exist in the tails. However, a statistically rigorous study of the resulting power-law distributions has not yet been performed. To show the relationship systematically, we examine the PDF tails of dynamically significant atmospheric variables (such as geopotential height and relative vorticity) for evidence of power-law behavior. This is achieved by using statistical algorithms that test PDFs for the bounds and magnitude of power-law distributions and by estimating the statistical significance of the power-law tails. Local and spatial examples of power-law distributions in the atmosphere are presented using time series of atmospheric data.
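The core of such algorithms is a maximum-likelihood fit of the tail exponent above a threshold. The sketch below shows the standard Hill/MLE estimator on synthetic Pareto data with a known exponent; it is a simplified illustration, without the threshold-selection and significance steps the abstract refers to.

```python
import numpy as np

def powerlaw_alpha(data, xmin):
    """Maximum-likelihood (Hill-type) estimate of the exponent alpha for
    a PDF ~ x**(-alpha) above the threshold xmin."""
    tail = data[data >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

rng = np.random.default_rng(5)
alpha_true = 2.5
u = rng.random(50000)
# Inverse-CDF sampling from a Pareto law with PDF exponent alpha_true.
x = (1.0 - u) ** (-1.0 / (alpha_true - 1.0))
alpha_hat = powerlaw_alpha(x, xmin=1.0)
print(alpha_hat)
```

On real atmospheric data the additional steps matter: the threshold xmin must itself be estimated, and a goodness-of-fit test decides whether the power-law tail is statistically defensible.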
Keywords of the presentation: Weather and climate extremes, non-Gaussian statistics, power-laws
This lecture discusses the theoretical framework, observational evidence, and related developments in stochastic modeling of weather and climate extremes.
Keywords of the presentation: global atmospheric models, ensemble prediction system
We describe a novel approach to evaluate the performance of global atmospheric ensemble prediction systems. This approach is based on the observation that beyond a 2-3-day forecast time (i) the errors in the extratropics in a numerical weather prediction are dominated by synoptic-scale structures, (ii) the spatial error patterns in a 1000-km-radius local neighborhood of location l can be described by a local linear space, S(l), and (iii) ensemble prediction systems usually provide a good representation of S(l). The performance of an ensemble prediction system that satisfies (iii) can be assessed by investigating the efficiency of the ensemble in representing the errors in S(l). This approach is applied to the operational ensembles included in the THORPEX Interactive Grand Global Ensemble (TIGGE) data set. We show that the different ensemble systems, which use a variety of techniques to simulate the effects of initial condition uncertainties and the random component of forecast errors due to model deficiency, satisfy different optimality conditions when evaluated by our approach.
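The key quantity in (iii) can be illustrated with a toy computation: what fraction of a forecast error lies in the linear space spanned by the ensemble perturbations? Everything below is synthetic (dimensions, ensemble, errors), intended only to show the projection mechanics, not the TIGGE evaluation.

```python
import numpy as np

def explained_fraction(error, ensemble):
    """Fraction of the error variance captured by the linear space
    spanned by the ensemble perturbations (columns minus their mean)."""
    perts = ensemble - ensemble.mean(axis=1, keepdims=True)
    Q, _ = np.linalg.qr(perts)           # orthonormal basis of S(l)
    proj = Q @ (Q.T @ error)
    return np.dot(proj, proj) / np.dot(error, error)

rng = np.random.default_rng(6)
dim, n_ens = 50, 10
ensemble = rng.standard_normal((dim, n_ens))
perts = ensemble - ensemble.mean(axis=1, keepdims=True)
# An error lying in the ensemble subspace is fully explained ...
e_in = perts @ rng.standard_normal(n_ens)
# ... while a random error in 50 dimensions is mostly missed.
e_out = rng.standard_normal(dim)
frac_in = explained_fraction(e_in, ensemble)
frac_out = explained_fraction(e_out, ensemble)
print(frac_in, frac_out)
```

A well-constructed ensemble should push realistic forecast errors toward the first case; the fraction left unexplained is a direct performance measure.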
We briefly discuss the relevance of some stochastic models in climate, and present a survey of the current mathematical theory of the stochastic primitive equations in the context of stochastic partial differential equations. Based on joint work with Arnaud Debussche, Nathan Glatt-Holtz, and Mohamed Ziane.
Estimation of continuous-time stochastic dynamics is a common task for researchers in many fields. In many cases, the goal is to obtain estimates of parameters in assumed dynamical models for the data. This poster will present such a method developed by D. Crommelin and E. Vanden-Eijnden that minimizes a spectral distance between a model and a time series. We apply it to simulated data from the Ornstein-Uhlenbeck process and discuss ways of improving the estimates of the associated parameters. Finally, the method is applied to ERA-40 reanalysis vector wind data and a sea surface wind model to obtain global parameter field estimates.
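For the Ornstein-Uhlenbeck test case there is a simple textbook benchmark against which any such method can be checked: the sampled OU process is exactly AR(1), so the continuous-time parameters can be recovered by inverting the exact discretization. This sketch shows that benchmark, not the Crommelin-Vanden-Eijnden spectral method itself; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate the OU process dX = -theta*X dt + sigma dW at interval dt,
# using its exact discretization X_{n+1} = a*X_n + s*xi_n.
theta, sigma, dt, n = 2.0, 0.8, 0.05, 200000
a = np.exp(-theta * dt)
s = sigma * np.sqrt((1.0 - a**2) / (2.0 * theta))
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + s * rng.standard_normal()

# Estimation: least-squares AR(1) coefficient, then invert the relations.
a_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
theta_hat = -np.log(a_hat) / dt
resid_var = np.var(x[1:] - a_hat * x[:-1])
sigma_hat = np.sqrt(2.0 * theta_hat * resid_var / (1.0 - a_hat**2))
print(theta_hat, sigma_hat)
```

Spectral-distance methods aim at the same target for processes where no exact discretization is available, which is what makes the OU case a useful validation.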
It is often desirable to derive an effective stochastic model for a physical process from observational and/or numerical data. Various techniques exist for estimating the drift and diffusion of stochastic differential equations from discrete datasets. In this talk we discuss the question of sub-sampling the data when it is desirable to approximate statistical features of a smooth trajectory by a stochastic differential equation. In this case, estimation of stochastic differential equations would yield incorrect results if the dataset is too dense in time. The dataset therefore has to be sub-sampled (i.e., rarefied) to ensure the estimators' consistency. A favorable sub-sampling regime is identified from the asymptotic consistency of the estimators. Nevertheless, we show that the estimators are biased for any finite sub-sampling time-step, and we construct new bias-corrected estimators.
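The dense-sampling failure is easy to demonstrate on a standard multiscale example: a smooth position driven by a fast Ornstein-Uhlenbeck velocity, which looks diffusive only on time scales well beyond the velocity correlation time. The quadratic-variation estimator of the diffusion coefficient then collapses to zero at dense sampling and recovers the effective diffusivity only after sub-sampling. The parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Fast OU velocity y with variance sig2 and correlation time tau; the
# position x(t) = integral of y is smooth, but diffusive for t >> tau
# with effective diffusion coefficient 2 * sig2 * tau.
tau, sig2, dt, n = 0.1, 1.0, 0.001, 1_000_000
a = np.exp(-dt / tau)
s = np.sqrt(sig2 * (1.0 - a**2))
y = np.zeros(n)
for t in range(1, n):
    y[t] = a * y[t - 1] + s * rng.standard_normal()
x = np.cumsum(y) * dt

def qv_estimate(x, dt, stride):
    """Quadratic-variation estimator of the diffusion coefficient when
    the path is sampled every stride*dt."""
    xs = x[::stride]
    return np.sum(np.diff(xs) ** 2) / (len(xs) - 1) / (stride * dt)

dense = qv_estimate(x, dt, stride=1)      # delta << tau: badly biased low
coarse = qv_estimate(x, dt, stride=1000)  # delta >> tau: near 2*sig2*tau
print(dense, coarse, 2 * sig2 * tau)
```

The dense estimate is essentially zero because the trajectory is differentiable at small scales; the coarse estimate approaches the effective value, though a finite-step bias of order tau/delta remains, which is the bias the talk's corrected estimators remove.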
The Trojan Y-Chromosome (TYC) strategy, an autocidal genetic biocontrol method, has been proposed to eliminate invasive alien species. In this work, we develop a stochastic model to study the viability of the TYC eradication and control strategy for an invasive species. The dynamics of this stochastic model is governed by a Markov jump process. We rigorously prove that there is a positive probability that the extinction of wild-type females takes place within a finite time. Moreover, in the case where sex-reversed trojan females are introduced at a constant size, we formulate a stochastic differential equation (SDE) model as an approximation to the proposed Markov jump process model. Using the SDE model, we investigate the probability distribution and expectation of the eradication time of wild-type females by solving the Kolmogorov equations associated with these statistics. The results illustrate how the probability distribution and expectation of the eradication time are shaped by the initial conditions and the model parameters. In particular, the results indicate that (1) the extinction of wild-type females is expected only in the presence of supermales; (2) increasing the constant size of the sex-reversed trojan females introduced into the population leads to a decrease in the expected extinction time, as well as an increase in the probability for the extinction to take place within a given investigation time.
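The qualitative dependence of the eradication time on the strength of suppression can be seen in a drastically reduced caricature: a single birth-death jump chain for the wild-type female count, simulated by tau-leaping. This is not the authors' TYC model or their Kolmogorov-equation computation; all rates are hypothetical.

```python
import numpy as np

def extinction_time(f0, birth, death, dt, rng, t_max=500.0):
    """Tau-leaping simulation of a birth-death chain for wild-type
    females: births at rate birth*f, deaths at rate death*f. Returns
    the first time the count hits zero (capped at t_max)."""
    f, t = f0, 0.0
    while f > 0 and t < t_max:
        f += rng.poisson(birth * f * dt) - rng.poisson(death * f * dt)
        f = max(f, 0)
        t += dt
    return t

rng = np.random.default_rng(9)
# Trojan introduction is caricatured as an elevated effective death rate.
times_weak = [extinction_time(50, 0.8, 1.0, 0.01, rng) for _ in range(200)]
times_strong = [extinction_time(50, 0.8, 1.5, 0.01, rng) for _ in range(200)]
print(np.mean(times_weak), np.mean(times_strong))
```

Stronger suppression shortens the mean eradication time, consistent with conclusion (2); the full model resolves males, supermales, and trojan females explicitly.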
As environmental data sets increase in spatial and temporal extent with the advent of new remote sensing platforms and long-term monitoring networks, there is increasing interest in forecasting processes to utilize this information. Such forecasts require realistic initial conditions over complete spatial domains. Typically, data sources are incomplete in space, and the processes include complicated dynamical interactions across physical and, in many ecological applications, biological variables. This suggests that data assimilation, whereby observations are fused with mechanistic models, is the most appropriate means of generating complete initial conditions. Often, the mechanistic models used for these procedures are very expensive computationally. We demonstrate a rank-reduced approach for data assimilation whereby the mechanistic model is based on a statistical emulator that can accommodate potentially realistic quadratic nonlinearity. Critically, the rank-reduction and emulator construction are linked and, by utilizing a hierarchical Bayesian framework, uncertainty associated with the dynamical emulator can be accounted for, providing a so-called "weak-constraint" data assimilation procedure. This approach is demonstrated on a high-dimensional multivariate coupled biogeochemical ocean process in the Coastal Gulf of Alaska.
Keywords of the presentation: mesoscale eddies, probability distributions, parametrization
The ocean contains a vigorous mesoscale eddy field with spatial scales of approximately 10 to 100 km, evolving over time scales from weeks to months. These eddies are important in establishing the ocean's circulation and tracer properties. Grid spacings of roughly 10 km and smaller are necessary to properly simulate the eddy field; ocean climate models are therefore unlikely to routinely resolve geostrophic eddies, and their effect needs to be parametrized. The goal of our study is to construct a stochastic parametrization of mesoscale eddies in ocean models using statistics derived from the output of a high-resolution model. A quasi-geostrophic (QG) model in a double-gyre configuration is run at a resolution of 7.5 km (eddy-resolving). The output of the high-resolution model is coarse-grained and used to calculate probability distribution functions for the eddy source term conditioned on the model state. A stochastic parametrization is then created using the evaluated conditional probability distribution functions and implemented in a coarse-resolution version of the QG model. The dynamics of the mean flow, its variability, and eddy-mean flow interaction are examined and compared with deterministic closures of geostrophic eddies.
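The conditional-PDF construction can be sketched in a few lines: bin the coarse model state, collect the coarse-grained eddy source term falling in each bin from the high-resolution output, and sample from the matching bin at run time. The scalar state, the synthetic "high-resolution" data, and the bin count below are illustrative assumptions standing in for the QG fields.

```python
import numpy as np

class ConditionalEddyForcing:
    """Sample an eddy source term from PDFs conditioned on the coarse
    model state, with histograms built from high-resolution output."""

    def __init__(self, state_data, forcing_data, n_bins=20):
        self.edges = np.quantile(state_data, np.linspace(0.0, 1.0, n_bins + 1))
        self.samples = [forcing_data[(state_data >= lo) & (state_data < hi)]
                        for lo, hi in zip(self.edges[:-1], self.edges[1:])]

    def sample(self, state, rng):
        """Draw one forcing value from the bin containing the state."""
        k = np.clip(np.searchsorted(self.edges, state) - 1,
                    0, len(self.samples) - 1)
        pool = self.samples[k]
        return rng.choice(pool) if len(pool) else 0.0

rng = np.random.default_rng(10)
# Synthetic "high-resolution" diagnostics: forcing depends on the state.
state = rng.standard_normal(100000)
forcing = -0.5 * state + 0.2 * rng.standard_normal(100000)
param = ConditionalEddyForcing(state, forcing)
draws = [param.sample(1.5, rng) for _ in range(2000)]
print(np.mean(draws))
```

Because the draws come from the conditional histogram, the parametrization reproduces both the state-dependent mean forcing and its scatter, which is what distinguishes it from a deterministic closure.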