September 23-26, 2013
Keywords of the presentation: Inverse problems, optimization, atmospheric compensation, surface reflectance, spectral matching
DigitalGlobe operates a constellation of earth imaging satellites with the highest commercially available spatial resolution in the industry. WorldView-2, launched in 2009, is the first high-resolution 8-band multispectral commercial satellite, which is capable of collecting up to 1 million square kilometers of 8-band imagery per day. It provides 46 cm panchromatic and 1.85 m multispectral resolution for very detailed imagery. Altogether, these nine bands can be used to determine the water depth in shallow coastal and inland waters, as well as characterize the corresponding benthic habitats at very high resolution. This presentation will introduce this shallow water bathymetry application and describe two inverse problems used in our process – atmospheric compensation and depth retrieval.
This is joint work with Grzegorz Miecznik and Fabio Pacifici.
Keywords of the presentation: large data, image processing, graph based methods, total variation
Geometric methods have revolutionized the field of image processing
and image analysis. I will review some of these classical methods including image snakes,
total variation minimization, image segmentation methods based on curve minimization, diffuse interface
methods, and state of the art fast computational algorithms
that use ideas from compressive sensing.
Recently some of these concepts have shown promise for problems in high dimensional data
analysis and machine learning on graphs. I will briefly review the methods from imaging and
then focus on the new methods for high dimensional data and network data.
Keywords of the presentation: machine learning, kernel methods, SVM, deep learning, Gaussianization
This talk is divided into two parts: In the first part, I will review the impact that kernel methods have had in remote sensing image and data processing during the last decade, while the second part anticipates future developments grounded in deep learning machines.
Part 1: Kernel methods constitute a simple way of translating linear algorithms into nonlinear ones. I will review the main aspects of kernel methods and their advantages for RS data processing, and will pay attention to our recent kernel developments for: 1) classification and change detection problems that consider the class-specific features; 2) noise-resistant nonlinear feature extraction methods; 3) regression and dependence estimation in Bayesian nonparametrics; and 4) multidimensional image quality assessment with kernels. The introduced methods extend previous standard algorithms to deal with non-stationary environments, structured domains, and assumptions about the nature of the noise. Examples in image processing will guide this overview.
Part 2: Deep learning is a new field of machine learning that alleviates many problems of kernel machines, and has provided outstanding results in pattern recognition, speech recognition, bioinformatics and natural language processing. I will review their main features and introduce some recent developments in my group to perform 1) multidimensional image density estimation, saliency, multi-information estimation and anomaly detection via data multi-layered Gaussianization, and 2) multi- and hyperspectral image classification with semisupervised large scale neural networks.
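As background for Part 1, the kernel trick that turns a linear method nonlinear can be sketched in a few lines of Python. This is a generic kernel ridge regression example with a synthetic target, not one of the specific methods developed in the talk:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances, then the Gaussian (RBF) kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, gamma=1.0, lam=1e-4):
    # Dual-form ridge regression: alpha = (K + lam*I)^(-1) y.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_test, gamma=1.0):
    return rbf_kernel(X_test, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 1))
y = np.sin(3 * X[:, 0])                  # nonlinear target function
alpha = kernel_ridge_fit(X, y, gamma=5.0, lam=1e-4)
pred = kernel_ridge_predict(X, alpha, X, gamma=5.0)
print(float(np.max(np.abs(pred - y))))   # small training error
```

The linear algebra never touches an explicit nonlinear feature map; all nonlinearity enters through the kernel matrix, which is the property the talk's methods exploit.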
Keywords of the presentation: radar imaging, SAR
Radar imaging is a technology that has been developed, very successfully,
within the engineering community during the last 50 years. Radar systems
on satellites now make beautiful images of regions of our earth and of other
planets such as Venus. One of the key components of this impressive
technology is mathematics, and many of the open problems are mathematical ones.
This lecture will explain, from first principles, some of the basics of radar and the mathematics involved in producing high-resolution radar images.
We explore sparse support vector machines (SSVMs) for the hyperspectral imagery band selection problem. An SSVM has a 1-norm regularizer in the objective function which suppresses many components of w, the normal vector to the separating hyperplane between two classes of data, and thus indicates spectral bands that are effective at separating the data. We propose a band selection framework in which we use the effectiveness and sparsity of SSVMs, combined with a bootstrap aggregation approach, to reduce variability in the components of w and identify redundant bands. We can eliminate more bands by reapplying SSVMs to the reduced data and cutting off bands based on comparing magnitude ratios throughout the list of ranked bands. We propose to extend binary band selection to the multiclass case by using one-against-one (OAO) SSVMs and different methods of ranking bands across the multiple classes to find a superset of relevant bands. At the last step of the method, spatial smoothing by a majority filter is used to improve the classification results. We illustrate the performance of the method on the AVIRIS Indian Pines data set, with high accuracy rates achieved on different subsets of selected bands.
Keywords of the presentation: manifold learning, classification, unmixing
Interest in manifold learning for representing the topology of large, high dimensional nonlinear data sets in lower, but still meaningful dimensions for visualization and analysis has grown rapidly over the past decade, including analysis of hyperspectral remote sensing data. The high spectral resolution and the typically continuous bands of hyperspectral image (HSI) data enable discrimination between spectrally similar targets of interest, provide capability to estimate within-pixel abundances of constituents, and allow direct exploitation of absorption features in predictive models. Although hyperspectral data are typically modeled assuming that the data originate from linear stochastic processes, nonlinearities are often exhibited in the data due to the effects of multipath scattering, variations in sun-canopy-sensor geometry, nonhomogeneous composition of pixels, and attenuating properties of media. Because of the dense spectral sampling of HSI data, the associated spectral information in many adjacent bands is highly correlated, resulting in much lower intrinsic dimensions spanned by the data. Increased availability of HSI and greater access to advanced computing have motivated development of specialized methods for exploitation of nonlinear characteristics of these data.
Theoretical contributions and applications of manifold learning have progressed in tandem, with new results providing capability for data analysis, and applications highlighting limitations in existing methods. The machine learning community has demonstrated the potential of manifold based approaches for nonlinear dimensionality reduction and modeling of nonlinear structure. For HSI, the enormous size of the data sets and spatial clustering of classes on the image grid provide both challenges and opportunities to extend manifold learning methods. The potential value of manifold learning for HSI analysis has been demonstrated for remote sensing applications including feature extraction, segmentation, classification, anomaly detection, and spectral unmixing with recent approaches exploiting inter-band correlation and local spatial homogeneity. Challenges encountered in analyzing data sets have inspired recent advances in manifold learning methods. This presentation provides an overview of manifold learning methods recently developed for classification and unmixing of hyperspectral image data, including intelligent selection of landmarks to represent the manifold, spatial-spectral methods, and approaches that jointly exploit local and global geometry.
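As a generic illustration of manifold-based nonlinear dimensionality reduction (not the landmark or spatial-spectral methods of the talk), Isomap recovers a 2-D parameterization of 3-D points lying on a nonlinear manifold:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3-D points lying on a 2-D nonlinear manifold (the swiss roll).
X, t = make_swiss_roll(n_samples=600, random_state=0)

# Isomap: k-NN graph -> geodesic distances -> classical MDS.
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(emb.shape)
```

For HSI the ambient dimension is the number of bands rather than 3, and the intrinsic dimension is governed by the physics of the scene; the algorithmic idea is the same.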
Keywords of the presentation: data fusion, Laplacian eigenmaps, data graph, hyperspectral, LIDAR
The problem of data integration and fusion is a longstanding one in the remote sensing community. It deals with finding effective and efficient ways to integrate information from heterogeneous sensing modalities. In this talk we shall present a completely deterministic approach which exploits fused representations of certain well-known data-dependent operators, such as the graph Laplacian and graph Schroedinger operators. It is through the eigendecomposition of these operators that we introduce the notion of fusion/integration of heterogeneous data, such as hyperspectral imagery (HSI) and LiDAR, or spatial information. We verify the results of our methods by applying them to HSI classification.
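A minimal sketch of the operator-fusion idea in Python, with random stand-ins for HSI and LiDAR features and simple addition of the two graph Laplacians as a (hypothetical) fusion rule; the actual talk covers richer constructions such as Schroedinger operators:

```python
import numpy as np

def graph_laplacian(X, sigma=1.0):
    # Gaussian affinity and unnormalized Laplacian L = D - W.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(1)) - W

rng = np.random.default_rng(0)
spectral = rng.normal(size=(30, 5))   # stand-in for HSI pixel spectra
height = rng.normal(size=(30, 1))     # stand-in for LiDAR returns

# Fuse by summing the Laplacians of the two modalities, then embed
# each pixel with the smallest nontrivial eigenvectors.
L = graph_laplacian(spectral) + graph_laplacian(height)
vals, vecs = np.linalg.eigh(L)
embedding = vecs[:, 1:4]     # skip the constant eigenvector
print(embedding.shape)
```

The fused embedding coordinates can then feed any standard classifier for the HSI classification experiments mentioned above.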
Keywords of the presentation: hyperspectral images, unmixing, material identification, imaging spectroscopy, intimate mixing, Hapke models
Hyperspectral images provide the capability of identifying materials at a sub-pixel level by unmixing the spectra measured at each pixel. Many unmixing algorithms use a model of the mixing process. Unmixing is then approached as an inverse problem: find the spectra that were mixed according to the model to produce a given measurement, or set of measurements. Over the past decade, the prevalent mixing model investigated is the linear mixing model. Many unmixing techniques based on this model have been proposed. However, nonlinear spectral mixing effects can be a crucial component in many real-world scenarios, such as planetary remote sensing, mineral mapping, vegetation canopies or urban scenes. Several nonlinear mixing models were proposed decades ago, mostly in the applied remote sensing literature. They were applied mainly to small data sets, often in laboratory settings. It is only recently there has been a surge of interest in nonlinear unmixing in the signal processing community. Many in the latter community are not aware of much of the early work on nonlinear unmixing. This talk provides an historical overview of some nonlinear mixing models and associated unmixing algorithms. The main models covered are bilinear, intimate mixing, radiosity, and piecewise-convex models.
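The distinction between the linear mixing model and a simple bilinear nonlinear model can be made concrete with toy spectra (all numbers hypothetical, and gamma a hypothetical interaction coefficient):

```python
import numpy as np

# Two toy endmember spectra over 5 bands.
e1 = np.array([0.1, 0.3, 0.5, 0.4, 0.2])
e2 = np.array([0.6, 0.5, 0.2, 0.1, 0.1])
a1, a2 = 0.7, 0.3   # abundances, summing to one

# Linear mixing: a convex combination of the endmembers.
linear = a1 * e1 + a2 * e2

# A simple bilinear model adds a cross term scaled by gamma,
# standing in for light that interacts with both materials.
gamma = 0.5
bilinear = linear + gamma * a1 * a2 * (e1 * e2)
print(bilinear - linear)   # the nonlinear contribution per band
```

Inverting the bilinear forward model (and the intimate-mixing and radiosity models covered in the talk) is what makes nonlinear unmixing substantially harder than linear unmixing.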
Keywords of the presentation: spatial interpolation, filtering, data fusion, classification, Landsat, hyperspectral imagery.
Geostatistics provides a set of statistical tools for the analysis of data distributed in space and time. Since its development in the mining industry, geostatistics has emerged as the primary tool for spatial data analysis in various fields, ranging from earth and atmospheric sciences, to agriculture, soil science, environmental studies, and more recently exposure assessment. This presentation provides a broad overview of the range of applications of geostatistics to remotely sensed data, including the description of spatial patterns through semivariograms, the estimation of missing pixels by kriging, and the filtering or spatial decomposition of spectral values by kriging analysis. This will be followed by the description of recent developments for: 1) the automatic detection of anomalies in images using local indicators of spatial autocorrelation, 2) merging of data measured on different spatial supports using area-to-point kriging, and 3) the classification of time series of Landsat imagery.
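The semivariogram mentioned above can be estimated empirically in a few lines of Python; this is a generic 1-D sketch on a synthetic transect, not any of the talk's datasets:

```python
import numpy as np

def semivariogram(z, max_lag):
    # gamma(h) = half the mean squared difference at lag h.
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
# Smooth spatial signal plus measurement noise along a transect.
x = np.linspace(0, 10, 200)
z = np.sin(x) + 0.1 * rng.normal(size=x.size)

gamma = semivariogram(z, max_lag=30)
print(gamma[:3])   # small at short lags, growing with distance
```

The nugget, sill, and range read off such a curve are the parameters that kriging and kriging analysis then exploit.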
Demixing problems in many areas such as hyperspectral imaging and differential optical absorption spectroscopy (DOAS) often require finding sparse nonnegative linear combinations of dictionary elements that match observed data. We show how aspects of these problems, such as misalignment of DOAS references and uncertainty in hyperspectral endmembers, can be modeled by expanding the dictionary with grouped elements and imposing a structured sparsity assumption that the combinations within each group should be sparse or even 1-sparse. If the dictionary is highly coherent, it is difficult to obtain good solutions using convex or greedy methods, such as non-negative least squares (NNLS) or orthogonal matching pursuit. We use penalties related to the Hoyer measure, which is the ratio of the l1 and l2 norms, as sparsity penalties to be added to the objective in NNLS-type models. For solving the resulting nonconvex models, we propose a scaled gradient projection algorithm that requires solving a sequence of strongly convex quadratic programs. We discuss its close connections to convex splitting methods and difference of convex programming. We also present promising numerical results for example DOAS analysis and hyperspectral demixing problems.
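The Hoyer-related penalty is easy to state: the ratio of the l1 and l2 norms is smallest for sparse vectors. A quick Python check on toy vectors (the talk's algorithms add this ratio, suitably scaled, to an NNLS-type objective):

```python
import numpy as np

def hoyer_ratio(x):
    # l1/l2 ratio: equals 1 for a 1-sparse vector and
    # sqrt(n) for a maximally flat n-vector.
    return np.linalg.norm(x, 1) / np.linalg.norm(x, 2)

sparse = np.array([0.0, 0.0, 3.0, 0.0])
flat = np.array([1.0, 1.0, 1.0, 1.0])
print(hoyer_ratio(sparse), hoyer_ratio(flat))   # 1.0 vs 2.0
```

Because the ratio is nonconvex, minimizing an objective containing it requires the scaled gradient projection machinery described above rather than off-the-shelf convex solvers.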
Keywords of the presentation: Turbulence, Kernel Regression, Noise, Denoising, Deconvolution
A new approach is proposed, capable of restoring a single high-quality image from a given image sequence distorted by atmospheric turbulence. This approach reduces the space and time-varying deblurring problem to a shift invariant one. It first registers each frame to suppress geometric deformation through non-rigid registration. Next, a temporal regression (fusion) process is carried out to produce an image from the registered frames, which can be viewed as being convolved with a space invariant diffraction limited blur. Finally, a blind deconvolution algorithm is implemented to deblur the fused image, generating a high quality output. Experiments using real data illustrate that this approach can effectively alleviate blur and distortions, recover details of the scene, and significantly improve visual quality.
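The register-fuse-deblur pipeline can be caricatured in a few lines of Python. This toy sketch uses known synthetic global shifts in place of non-rigid turbulence deformation, a temporal median as the fusion step, and omits the final blind deconvolution entirely:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0   # toy scene: a bright square

# Simulate "turbulence" as a random global shift plus noise per frame.
frames, shifts = [], []
for _ in range(20):
    dy, dx = rng.integers(-3, 4, size=2)
    frame = np.roll(np.roll(truth, dy, 0), dx, 1)
    frames.append(frame + 0.05 * rng.normal(size=truth.shape))
    shifts.append((dy, dx))

# "Registration": undo each shift (a real pipeline estimates the
# warp non-rigidly), then fuse the stack with a temporal median.
registered = [np.roll(np.roll(f, -dy, 0), -dx, 1)
              for f, (dy, dx) in zip(frames, shifts)]
fused = np.median(registered, axis=0)

err_single = np.abs(frames[0] - truth).mean()
err_fused = np.abs(fused - truth).mean()
print(err_fused < err_single)
```

Even this crude fusion beats any single frame; the full method additionally models the residual blur as space-invariant and removes it by blind deconvolution.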
Keywords of the presentation: non-uniform sampling, sparse image representations, compression
In recent years, interest has grown in the study of sparse solutions to underdetermined systems of linear equations because of their many potential applications. In particular, these types of solutions can be used to describe images in a compact form, provided one is willing to accept an imperfect representation. We shall develop this approach in the context of sampling theory, and for problems in image compression. We use various error estimation criteria - PSNR, SSIM, and MSSIM - to conduct a presentation that is phenomenological and computational, as opposed to theoretical. This machinery leads naturally to a compressed sensing problem that can be seen as a non-uniform sampling reconstruction problem with promising applications.
Joint work with John J. Benedetto (University of Maryland, College Park)
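Of the error criteria mentioned, PSNR is the simplest to state; a minimal Python implementation on hypothetical test images (SSIM and MSSIM require the windowed statistics of the full metric definitions):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    # Peak signal-to-noise ratio in dB: higher means closer match.
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.uniform(size=(64, 64))
noisy = np.clip(img + 0.01 * rng.normal(size=img.shape), 0, 1)
print(psnr(img, noisy))
```

Such metrics quantify the "imperfect representation" trade-off: a sparser description of the image is accepted as long as the reconstruction error stays within a target PSNR or SSIM budget.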
Keywords of the presentation: hyperspectral, unmixing, augmented Lagrangian, Bregman
We have developed, with Russell Warren, a method for HS image data that requires neither pure pixels nor any prior information about the image other than an estimate of the number of different materials present. We use nonlinear optimization and estimate the spectral and spatial structure through a relatively simple noise-tolerant algorithm. This removes the need for separate end member and spatial abundance steps. The algorithm is illustrated on real data.
Spatially distributed information on soil microtopography is required for a better understanding
of several processes in hydrology, geomorphology and soil sciences. The objectives of this study were:
i) to compare several statistical, geostatistical and fractal indices used to describe soil surface roughness and
ii) to show how these indices can quantify the space and time evolution of the highly heterogeneous soil
structure following water erosion and runoff. The microtopography of the soil surface was digitized by
an instantaneous profile laser scanner before and after application of simulated rainfall at a 2 mm resolution.
The roughness indices calculated included random roughness (statistical), sill and range of the semivariogram
(geostatistical), fractal dimension and fractal length (fractal) and various parameters gathered from generalized
dimension and singularity spectra (multifractal). These indices were interpreted in the context of aggregate
breakdown and soil surface crusting. The performance of the various mathematical tools employed to describe
either the vertical or the spatial components of the soil surface roughness was shown to be different.
The suitability of these indices for inclusion as spatial information in hydrology and erosion models is
also an important aspect to be considered.
Joint work with: Eva Vidal Vázquez and Ildegardis Bertol
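One of the statistical indices above, random roughness, is commonly computed as the standard deviation of surface heights after detrending; a sketch in Python on a synthetic surface (a tilted plane plus random relief, all values hypothetical):

```python
import numpy as np

def random_roughness(heights):
    # Random roughness: standard deviation of surface heights
    # after removing the best-fit plane (detrending).
    ny, nx = heights.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(nx * ny)])
    coef, *_ = np.linalg.lstsq(A, heights.ravel(), rcond=None)
    residual = heights.ravel() - A @ coef
    return residual.std()

rng = np.random.default_rng(0)
# The tilt should not count toward roughness, only the relief.
yy, xx = np.mgrid[0:50, 0:50]
surface = 0.2 * xx + 0.1 * yy + rng.normal(scale=2.0, size=(50, 50))
print(random_roughness(surface))
```

The geostatistical and multifractal indices in the study capture spatial correlation structure that this single vertical statistic ignores, which is why the paper compares all of them.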
Keywords of the presentation: remote sensing, hyperspectral and LiDAR imagery, data representation and fusion, hyperspectral image joint deblurring and unmixing, alternating direction methods
The science of optical remote sensing includes aerial, satellite, and
spacecraft observations of scenes or targets. Diverse sensing modalities
are useful, and a variety of on-board sensors are employed for data collection. The optical sensing modalities we are concerned with are hyperspectral imaging (HSI) and Light Detection and Ranging (LiDAR). HSI sensing collects information across a wide range of the electromagnetic spectrum, while LiDAR collects RGB data and measures distance by illuminating a target area with lasers and analyzing the reflected light. Fusion and the resulting interrogation of multi-modal (HSI and LiDAR) images is an extremely important, but challenging, aspect of geospatial imaging.
In this talk we briefly discuss some recent mathematical techniques for HSI and LiDAR data fusion, geometric feature and pattern representation, for dimensionality reduction, classification, target detection and identification. Illustrations of the algorithms are provided on both simulated and real data. We then concentrate on recent work involving wavelength dependent hyperspectral PSF estimation and associated joint deblurring and sparse unmixing using Alternating Direction Method of Multipliers (ADMM) optimization for convex inverse problems.
Joint work with Dejan Nikic and Jason Wu (Boeing); and Paul Pauca, Todd Torgersen and Peter Zhang (Wake Forest), Sebastian Berisha and James Nagy (Emory) and others to be listed in the presentation.
Keywords of the presentation: thermal infrared, spatial resolution, image enhancement
Thermal infrared (TIR) imaging is most commonly used to determine scene temperatures for industrial, military, medical, and natural hazard applications. In the past several decades, the advent of relatively inexpensive handheld thermal cameras and the launch of multispectral TIR sensors have resulted in an abundance of data for use in numerous planetary science, geology, and volcanology problems. Multispectral TIR data not only provide better temperature accuracy but also the ability to extract emissivity, a fundamental property used to identify the composition of natural and man-made materials and gases. Emissivity is sensitive to material coatings, particle size, temperature, and other factors, making its analysis more complex but also providing the ability to measure a suite of surface properties. However, TIR images typically have lower spatial and/or spectral resolution than visible/near infrared camera or satellite data, which makes them harder to interpret. Improving spatial resolution by way of spectral diversity in a scene is therefore an important analysis tool. Super-resolution is one such process: it fuses the original TIR data with an additional source at the desired (improved) resolution. We present a technique for super-resolution imaging that has been tested successfully using multispectral and multiresolution data from the Earth-orbiting Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument (resolved from 90 to 15 m/pixel) and the Mars-orbiting Thermal Emission Imaging System (THEMIS) instrument (resolved from 100 to 36 m/pixel). This approach not only enhances the spatial resolution, but also results in radiometrically-accurate TIR data that can then be interrogated for spectral diversity. We hope to use the super-resolution approach to improve the data from a new multispectral adaptation of our ground-based TIR camera in the near future.
Parsimony, including sparsity and low rank, has been shown to successfully model
data in numerous machine learning and signal processing tasks. Traditionally, such
modeling approaches rely on an iterative algorithm that minimizes an objective
function with parsimony-promoting terms. The inherently sequential structure and
data-dependent complexity and latency of iterative optimization constitute a major
limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the
difficulty of their inclusion in discriminative learning scenarios. In this work, we
propose to move the emphasis from the model to the pursuit algorithm, and develop a
process-centric view of parsimonious modeling, in which a learned deterministic
fixed-complexity pursuit process is used in lieu of iterative optimization. We show a
principled way to construct learnable pursuit process architectures for structured
sparse and robust low rank models, derived from the iteration of proximal descent
algorithms. These architectures learn to approximate the exact parsimonious
representation at a fraction of the complexity of the standard optimization methods.
We also show that appropriate training regimes allow parsimonious models
to be naturally extended to discriminative settings. State-of-the-art results are
demonstrated on several challenging problems in image and audio processing with
several orders of magnitude speedup compared to the exact optimization algorithms.
Joint work with P. Sprechmann and A. Bronstein.
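The proximal-descent iterations that the learned architectures unroll can be illustrated with plain ISTA in Python. This is a generic sketch of the underlying fixed-depth iteration on a synthetic sparse recovery problem, not the authors' learned pursuit networks, whose per-layer parameters are trained rather than fixed:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(A, b, lam, n_layers=500):
    # A fixed-depth unrolling of ISTA: each "layer" is one
    # proximal-gradient step x <- S_{lam/L}(x - (1/L) A^T (A x - b)).
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 60))
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true

x_hat = unrolled_ista(A, b, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.3))
```

Truncating this loop to a few layers and learning the matrices and thresholds per layer is what yields the fixed-complexity, trainable pursuit processes described above.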
Keywords of the presentation: sea-level rise, geodesy, ice mass balance, temporal gravity field
The complicated dynamic processes of the Earth system manifest
themselves in the form of mass transports, for example resulting from
interactions and feedbacks between the solid Earth and its fluid
layers, including ocean, atmosphere, cryosphere and hydrosphere. These
mass transport signals, which span a wide range of temporal and spatial
scales and may originate from natural and anthropogenic climate-change
processes, are observable via contemporary space geodetic sensors.
These signals are critical for addressing outstanding contemporary
scientific questions, including the causes of
global sea-level rise, quantifying ice-sheet and glacier mass balance,
studying basin-scale hydrology and global water cycle, better
understanding of coseismic deformation and geodynamic processes such
as the glacial isostatic adjustment. Space geodesy during the onset of
the 21st Century is evolving into a transformative cross-disciplinary
Earth science discipline. In particular, advances in the measurement
of gravity with modern free-fall methods have reached accuracies of
10^-9 g (~1 microGal, or 10 nm/s^2), allowing accurate measurements of
height changes at ~3 mm relative to the Earth’s center of mass, and
mass transports within the Earth interior or its geophysical fluids,
enabling a possible global quantification of climate-change signals.
The fundamental mathematical tools to efficiently and accurately
exploit and process the big satellite geodetic and remote sensing data
sets require addressing fundamental mathematical techniques including
boundary value problem, potential theory, adjustment, precision orbit
determination, radar physics, and scalable distributed computational
algorithms. This presentation summarizes results from the use of data
from space gravimetry satellite mission GRACE (Gravity Recovery And
Climate Experiment) and other geodetic sensors including satellite
altimetry and synthetic aperture radar interferometry (InSAR), to
study contemporary scientific problems such as observing and
quantifying the causes of 20th-century and present-day sea-level rise, Earth's
ice and water reservoir mass fluxes, and applications such as timely
monitoring of natural hazards such as floods, quantifying the world's
water resources, and addressing coastal vulnerability due to sea-level rise.
Keywords of the presentation: multiscaling, fractal methods, multifractal analysis, image analysis
Soil structure is the arrangement of soil particles into secondary units called aggregates or peds. These manifest the cumulative effect of local pedogenic processes and influence soil behaviour - especially as it pertains to aeration and hydrophysical properties. One of the most direct methods of probing and characterizing soil structure is the analysis of the spatial arrangement of pore and solid spaces on images of sections of resin-impregnated soil or from non-disruptive CT scanning. In CT imaging, a revolving x-ray tube surrounds the soil sample and a detector unit produces grey-level 2-D images of slices which, after computer integration, yield a 3-D image. A threshold method is then applied to extract a binary image.
Over recent years, a number of authors have applied fractal techniques to the characterization of pore-scale images of soil. The objective is to extract fractal dimensions, which characterize multiscale and self-similar geometric void-phase structures within the two-dimensional or three-dimensional images. If such a hierarchical structure exists, then we expect the soil matrix in the image to look the same when viewed at different resolutions.
More recently interest has turned to multifractal analysis of images of porous media. A multifractal or more precisely a geometrical multifractal is a non-uniform fractal, which unlike a uniform fractal exhibits local density fluctuations. Its characterization requires not a single dimension but a sequence of generalized fractal dimensions. A multifractal analysis to extract these dimensions from a soil image may have utility for more complex distributions of solid or void in space, if there is marked variation in local density or porosity.
Our purpose is to briefly review what has been done in this field with fractal and multifractal analysis and discuss the restrictions linked with the characterisation of soil binary images. Finally we will apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using a relative entropy index. These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.
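The simplest fractal index mentioned, the box-counting dimension, can be estimated from a binary image as follows. This is a generic sketch, with a filled square serving as a sanity check because its dimension is known to be 2:

```python
import numpy as np

def box_count_dimension(img, sizes=(1, 2, 4, 8, 16)):
    # Count occupied boxes at several scales and fit a line to
    # log(count) vs log(1/size); the slope estimates the dimension.
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(counts), 1)
    return slope

# A filled square is 2-D: box counts scale as (1/size)^2.
img = np.ones((64, 64), dtype=bool)
print(box_count_dimension(img))   # ~2.0
```

Multifractal analysis generalizes this by weighting each box by its occupied mass raised to a range of moments, yielding the full spectrum of generalized dimensions rather than a single slope.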
The measured spectral signature in the field of view of a remote sensor rarely comes from a single material. Unmixing refers to the extraction of the number, the spectral signature, and the abundance of the materials or endmembers in the field of view of the sensor. Unmixing plays an important role in hyperspectral image processing in a wide range of applications. Most unmixing techniques are pixel-based procedures that do not take advantage of spatial information provided by the hyperspectral image. Here we present an approach for unmixing of hyperspectral imagery based on a spatial multiscale representation which allows the identification of locally spectrally uniform areas that are used as candidate endmembers. Extracted signatures are clustered into endmember classes that account for the spectral variability of endmembers. Abundances are estimated using constrained least square methods. Experimental results using AVIRIS imagery are presented to demonstrate the proposed approach.
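The constrained least squares abundance step can be sketched with SciPy's NNLS, enforcing the sum-to-one constraint softly via the standard heavily weighted extra row. The endmember matrix here is a toy example, not the talk's multiscale pipeline:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember matrix: 4 bands x 3 materials.
E = np.array([[0.1, 0.6, 0.3],
              [0.3, 0.5, 0.2],
              [0.5, 0.2, 0.4],
              [0.4, 0.1, 0.6]])
a_true = np.array([0.5, 0.3, 0.2])
pixel = E @ a_true   # noiseless mixed pixel

# Fully constrained least squares: nonnegativity from NNLS, and
# sum-to-one enforced softly through a heavily weighted row.
delta = 100.0
E_aug = np.vstack([E, delta * np.ones(3)])
p_aug = np.append(pixel, delta)
a_hat, _ = nnls(E_aug, p_aug)
print(np.round(a_hat, 3))
```

In the full method this inversion is run per pixel against endmember classes extracted from the multiscale representation, so the abundances inherit the endmembers' spectral variability.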
Keywords of the presentation: data thinning, subspace tracking, manifold learning, multiscale analysis
In this talk, I will describe a novel approach to change-point
detection when the observed high-dimensional data may have missing
elements. The performance of classical methods for change-point
detection typically scales poorly with the dimensionality of the data,
so that a large number of observations are collected after the true
change-point before it can be reliably detected. Furthermore, missing
components in the observed data handicap conventional approaches. The
proposed method addresses these challenges by modeling the dynamic
distribution underlying the data as lying close to a time-varying
low-dimensional submanifold embedded within the ambient observation
space. Specifically, streaming data is used to track a submanifold
approximation, measure deviations from this approximation, and
calculate a series of statistics of the deviations for detecting when
the underlying manifold has changed in a sharp or unexpected manner.
The proposed approach leverages several recent results in the field of
high-dimensional data analysis, including subspace tracking with
missing data, multiscale analysis techniques for point clouds, online
optimization, and change-point detection performance analysis.
Simulations and experiments highlight the robustness and efficacy of
the proposed approach in detecting an abrupt change in an otherwise
slowly varying low-dimensional manifold.
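A static caricature of the residual statistic, without the subspace tracking, missing-data handling, or multiscale machinery of the actual method, can be sketched in Python with synthetic data whose underlying subspace changes abruptly:

```python
import numpy as np

rng = np.random.default_rng(0)
# High-dimensional data near a low-dimensional subspace that
# switches to a different subspace at t = 300.
U1, _ = np.linalg.qr(rng.normal(size=(20, 3)))
U2, _ = np.linalg.qr(rng.normal(size=(20, 3)))
stream = [(U1 if t < 300 else U2) @ rng.normal(size=3)
          + 0.01 * rng.normal(size=20) for t in range(600)]

# Fit a subspace to an initial window, then monitor the norm of
# each sample's residual off that subspace as a change statistic.
W = np.array(stream[:100]).T
U, _, _ = np.linalg.svd(W, full_matrices=False)
basis = U[:, :3]
resid = [np.linalg.norm(x - basis @ (basis.T @ x)) for x in stream]
print(np.mean(resid[:300]), np.mean(resid[300:]))
```

The residuals jump at the change-point; the proposed method replaces the fixed basis with a tracked, time-varying submanifold approximation and handles missing coordinates, which this sketch does not.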
Rebecca Willett is an associate professor in the Electrical and
Computer Engineering Department at the University of
Wisconsin-Madison. She completed her PhD in Electrical and Computer
Engineering at Rice University in 2005 and was an assistant then
associate professor of Electrical and Computer Engineering at Duke
University from 2005 to 2013. Prof. Willett received the National
Science Foundation CAREER Award in 2007, is a member of the DARPA
Computer Science Study Group, and received an Air Force Office of
Scientific Research Young Investigator Program award in 2010. Prof.
Willett has also held visiting researcher positions at the Institute
for Pure and Applied Mathematics at UCLA in 2004, the University of
Wisconsin-Madison 2003-2005, the French National Institute for
Research in Computer Science and Control (INRIA) in 2003, and the
Applied Science Research and Development Laboratory at GE Healthcare
in 2002. Her research interests include network and imaging science
with applications in medical imaging, neural coding, astronomy, and
Keywords of the presentation: natural disaster, multi-sensor remote sensing, multi-resolution data, supervised classification, hierarchical Markov random field, statistical model
In this talk, we will describe a novel classification approach for multi-resolution, multi-sensor (optical and synthetic aperture radar (SAR)) and/or multi-band images. This challenging image processing problem is of great importance for various remote sensing monitoring applications and has been scarcely addressed so far. To deal with this classification problem, we propose a two-step explicit statistical model. We first design a model for the multi-variate joint class-conditional statistics of the co-registered input images at each resolution. We then plug the estimated joint probability density functions into a hierarchical Markovian model based on a quad-tree structure, where each tree-scale corresponds to the different input image resolutions and to the corresponding multi-scale decimated wavelet transforms, thus preventing a strong re-sampling of the initial images. To obtain the classification map, we resort to an estimator of the marginal posterior mode. We integrate a prior update in this model in order to improve the robustness of the proposed classifier against noise and speckle. The resulting classification performance is illustrated on several remote sensing multi-resolution datasets including very high resolution and multi-sensor images acquired by COSMO-SkyMed and GeoEye-1 satellites.
We present simulation-based studies of the use of compressively
sensed spectral-polarimetric spatial image data from a solar-illuminated
reflecting surface to recover its material signature,
three-dimensional (3D) shape, pose, and degree of surface
roughness. The spatial variations of the polarimetric Bidirectional
Reflectance Distribution Function (pBRDF) around glint points
contain unique information about the shape and roughness of the
reflecting surface that is revealed most dramatically in
polarization-difference maps from which the spatially generalized
diffuse-scattering contributions to brightness are largely absent.
Here we employ a specific compressed-sensing protocol, the so-called
Coded-Aperture Snapshot Spectral-Polarimetric Imager
(CASSPI) advanced recently by Tsai and Brady, to simulate noisy
measurements from which these surface attributes are recovered
robustly in a sequential manner.
Joint work with Sudhakar Prasad and Robert J. Plemmons.