Poster session and reception

Tuesday, October 23, 2018 - 5:00pm - 6:30pm
Lind 400
  • Towards rock trait distribution maps from aerial imagery
    Jnaneshwar Das (Arizona State University)
  • Superpixel-based Hyperspectral Unmixing with Regional Segmentation
    Miguel Velez-Reyes (The University of Texas at El Paso)
  • GeoVReality: A high-efficiency computational interactive virtual reality visualization platform for geophysical research
    David Yuen (Columbia University)
  • Experimental evidence of climate-driven knickpoints
    Arvind Singh (University of Central Florida)
    Here we present findings from a unique experimental landscape in which we document steady-state (SS) self-organization under constant uplift and rainfall intensity and transient-state (TS) reorganization under increased rainfall intensity. We reveal an emergent, hierarchical, and scale-dependent erosional signature of SS in which hillslopes and part of the fluvial regime exhibit a higher likelihood of erosion than the rest of the landscape. We summarize this signature in a curve of the probability of above-median erosion versus drainage area (the E50-area curve) and elucidate its connection to the dynamic nature of landscapes at SS. Furthermore, we show how the E50-area curve captures geomorphic regime transitions during the reorganization phase under increased rainfall intensity. Finally, we document the changes in the longitudinal river profiles with increasing precipitation intensity, revealing the formation of knickpoints at certain confluences where large discontinuities in the ratio Qs/Qw are observed.
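    The E50-area statistic described above can be sketched as a simple binned probability; the synthetic arrays below are illustrative stand-ins, not the experimental data:

```python
import numpy as np

def e50_area_curve(area, erosion, n_bins=10):
    """Probability of above-median erosion as a function of drainage area
    (the E50-area curve), from per-pixel area and erosion arrays."""
    median_e = np.median(erosion)
    edges = np.logspace(np.log10(area.min()), np.log10(area.max()), n_bins + 1)
    edges[-1] *= 1.0 + 1e-9            # make the last bin inclusive
    centers, p50 = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (area >= lo) & (area < hi)
        if mask.any():
            centers.append(np.sqrt(lo * hi))           # geometric bin center
            p50.append(np.mean(erosion[mask] > median_e))
    return np.array(centers), np.array(p50)

# Synthetic stand-in: small-area (hillslope) pixels erode above the median
# more often than large-area (channel) pixels.
rng = np.random.default_rng(0)
area = rng.lognormal(mean=3.0, sigma=1.5, size=5000)
erosion = rng.normal(loc=-np.log(area), scale=1.0)
centers, p50 = e50_area_curve(area, erosion)
```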
  • Bathymetry Imaging with Indirect Observations
    Hojat Ghorbanidehno (Stanford University)
    Bathymetry, i.e., depth imaging in a river, is of crucial importance for shipping operations and flood management. With advancements in sensor technology and computational resources, various types of indirect measurements can be used to estimate high-resolution riverbed topography. Here, the compressed state Kalman filter (CSKF) and the principal component geostatistical approach (PCGA), two fast and scalable inverse modeling methods powered by a low-rank representation of the covariance matrix structure, are presented and applied to nearshore and riverine bathymetry problems. The results are compared with those of ensemble-based methods.
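    The low-rank covariance representation that powers CSKF/PCGA-style methods can be illustrated with a truncated eigendecomposition; the Gaussian kernel, grid, and rank below are illustrative assumptions:

```python
import numpy as np

def gaussian_cov(coords, lengthscale=10.0):
    """Dense Gaussian covariance kernel over grid coordinates."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def low_rank_factors(Q, rank):
    """Keep the `rank` leading eigenpairs so that Q ~ (U * s) @ U.T,
    the compressed structure CSKF/PCGA exploit."""
    s, U = np.linalg.eigh(Q)             # ascending eigenvalues
    idx = np.argsort(s)[::-1][:rank]     # largest `rank` modes
    return U[:, idx], s[idx]

coords = np.stack(np.meshgrid(np.arange(20.0), np.arange(20.0)), -1).reshape(-1, 2)
Q = gaussian_cov(coords)
U, s = low_rank_factors(Q, rank=30)
rel_err = np.linalg.norm((U * s) @ U.T - Q) / np.linalg.norm(Q)
```

For a smooth kernel the spectrum decays quickly, so a rank far below the grid size reproduces the covariance almost exactly; that is what makes the filter updates cheap.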
  • High Resolution Aerial Imagery Segmentation
    Panfeng Li (Los Alamos National Laboratory)
  • VelocityGAN: Data-Driven Full-Waveform Inversion Using Conditional Adversarial Networks
    Zhongping Zhang (Los Alamos National Laboratory)
    Waveform inversion is a typical non-linear and ill-posed inverse problem. Existing physics-driven computational methods for waveform inversion suffer from cycle skipping and local minima, and are, moreover, computationally expensive. Here we develop a real-time data-driven technique, VelocityGAN, to accurately reconstruct subsurface velocities. VelocityGAN is an end-to-end framework that generates high-quality velocity images directly from raw seismic waveform data. A series of numerical experiments on synthetic seismic reflection data evaluate the effectiveness and efficiency of VelocityGAN. We compare it not only with existing physics-driven approaches but also with several deep learning frameworks chosen as data-driven baselines. The results show that VelocityGAN outperforms the physics-driven waveform inversion methods and achieves state-of-the-art performance among the data-driven baselines.
  • Efficient Seismic Event Detection using Robust Principal Component Analysis (RPCA)
    Ningyu Sha (Michigan State University)
    There are many methods for detecting seismic events based on the properties of signal and noise. Signals received at different receivers are correlated and lie in a low-dimensional space, so dimension-reduction techniques can be used to model them; Robust Principal Component Analysis (RPCA) is one such technique. Here we combine infimal convolution with the fast iterative soft-thresholding algorithm (FISTA) to obtain a fast algorithm. Numerical experiments on synthetic and real data show better performance than existing algorithms for RPCA.
  • Machine learning-based surface wave tomography of Long Beach, CA, USA
    Michael Bianco (University of California, San Diego)
    We use a machine learning-based tomography method to obtain high-resolution subsurface geophysical structure in Long Beach, CA, from seismic noise recorded on a large-N array. This locally sparse travel time tomography (LST) method exploits the dense sampling obtained from large arrays by learning a dictionary of local, small-scale geophysical features directly from the data. These local features are represented as small rectangular groups of pixels, called patches, from the overall phase-speed image. This local model is combined with the overall phase-speed map, called the global model, via an averaging procedure; the global model constrains larger-scale features using least squares regularization. Using data recorded by the Long Beach array in 2011, we perform high-resolution surface wave tomography of the Long Beach region in the 1 Hz Rayleigh wave band. Among the geophysical features visible in the phase-speed map is a prominent high-speed anomaly corresponding to the important Silverado aquifer, which has not been isolated in previous surface wave tomography studies. This anomaly is likely caused by the higher density of the Silverado relative to other geological units. Our results show promise for the use of LST in resolving high-resolution geophysical structure in travel time tomography studies.
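    The averaging step that merges overlapping local patch estimates into a global map can be sketched as follows (the image and patch geometry below are illustrative, not the Long Beach data):

```python
import numpy as np

def average_overlapping_patches(patches, positions, image_shape, patch_size):
    """Reassemble an image from overlapping patch estimates by averaging,
    the kind of merge LST uses to combine local patch models into the
    global phase-speed map."""
    out = np.zeros(image_shape)
    weight = np.zeros(image_shape)
    p = patch_size
    for patch, (i, j) in zip(patches, positions):
        out[i:i + p, j:j + p] += patch      # accumulate each local estimate
        weight[i:i + p, j:j + p] += 1.0     # count overlaps per pixel
    return out / np.maximum(weight, 1.0)    # average where covered

img = np.arange(36, dtype=float).reshape(6, 6)
positions = [(i, j) for i in range(0, 4, 2) for j in range(0, 4, 2)]
patches = [img[i:i + 3, j:j + 3] for i, j in positions]
recon = average_overlapping_patches(patches, positions, img.shape, 3)
```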
  • Beach and Dune Morphology Changes induced by Hurricane Harvey using Machine Learning approaches and Repeat TLS Surveys in Freeport, TX
    Xin (Sarah) Zhou (University of Houston)
    Catastrophic events such as Hurricane Harvey in 2017 dramatically change beach and dune morphology within their reach. Compared to traditional survey methods and the rapidly developing airborne LiDAR strategy, modern mapping techniques such as Terrestrial Laser Scanning (TLS) integrated with a GPS unit measure landscapes with high efficiency and prominent spatial and temporal resolution. We developed a novel workflow using open-source tools, including the Generic Mapping Tools (GMT), the Geographical Resources Analysis Support System (GRASS), and the R language, to analyze impacts on the beach and dune area after Hurricane Harvey. Two sets of point clouds acquired with a RIEGL VZ-2000 before Harvey (late May and mid-June) and after (early September) were compared. Lines marking the dune ridge, the shore (0.3 m contour line), and the dune toe were extracted from bare-earth Digital Elevation Models (DEMs). Shell scripts batch-process the DEMs to extract multiple subsets, compute the volume of each, and select representative cross-shore profiles to evaluate the dynamic erosion patterns. R code collects, cleans, and transforms both the line features (observations) and the volume changes (labels). Overfitting is conspicuous in the linear regression because of the very high resolution of the DEMs (0.5 m). Weather calendar maps covering the three months were drawn to show the contribution of the hurricane relative to ordinary summer days.

    The primary results indicate that the beach and dune area experienced considerable topographic change: the shoreline moved inland 3.4 m on average, and the volume changes in the delta area (about 2 km long and 100 m wide) were more severe, dropping from 46,127 m3 to 20,399 m3, a 55.8% loss of the original volume. Wind speed and stream level records from the nearby USGS station (GIWW) indicate that the extreme weather caused by Harvey strongly impacted beach and dune morphology.
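    The core volume-change computation from repeat DEMs reduces to differencing co-registered grids; the grid, cell size, and values below are illustrative:

```python
import numpy as np

def volume_change(dem_before, dem_after, cell_size=0.5):
    """Net and erosion-only volume change (m^3) between two co-registered
    DEMs with square cells of side cell_size (m)."""
    dz = dem_after - dem_before
    cell_area = cell_size ** 2
    net = dz.sum() * cell_area               # net change (fill minus cut)
    eroded = dz[dz < 0].sum() * cell_area    # erosion-only volume (negative)
    return net, eroded

before = np.full((10, 10), 2.0)
after = before.copy()
after[:5] -= 1.0   # hypothetical 1 m of lowering over half the grid
net, eroded = volume_change(before, after)
```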
  • Automated Parameter Selection for Regularized Inverse Problems
    Toby Sanders (Arizona State University)
    Regularization of ill-conditioned inverse problems can significantly improve the solution. There is usually an important constant parameter involved in this regularization, which balances the data-fidelity term against the smoothness (regularization) term. Its value is often chosen seemingly arbitrarily, based on the experience of the individual implementing the software. This work [1] introduces new criteria, based on the Bayesian point of view, for automated selection of the right parameter in Tikhonov regularization problems. A corresponding numerical method and fixed-point iteration are proposed and shown to converge to the right parameter in relatively few iterations (e.g., 5 to 10). Numerical cost is considered, and a significant acceleration (e.g., 100-fold) of the algorithm is developed for certain problems. Finally, the method is generalized to compressed sensing formulations (L1 regularization), and some convergence results are provided.

    [1] T. Sanders, R. B. Platte, R. D. Skeel, Maximum Evidence Algorithms for Parameter Selection in Regularized Inverse Problems, submitted to SIAM Journal on Imaging Sciences.
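    A classical evidence-based fixed point in the same Bayesian spirit (MacKay's update, shown here as a generic sketch rather than the algorithm of [1]) alternates between solving the Tikhonov problem and re-estimating the prior and noise precisions:

```python
import numpy as np

def evidence_lambda(A, b, lam0=1.0, n_iter=10):
    """MacKay-style evidence fixed point for the Tikhonov parameter
    lambda = alpha / beta; typically converges in a handful of iterations."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    lam = lam0
    for _ in range(n_iter):
        x = np.linalg.solve(AtA + lam * np.eye(n), Atb)
        # gamma: effective number of parameters determined by the data
        gamma = np.trace(AtA @ np.linalg.inv(AtA + lam * np.eye(n)))
        alpha = gamma / (x @ x)                        # prior precision
        beta = (m - gamma) / np.sum((A @ x - b) ** 2)  # noise precision
        lam = alpha / beta
    return lam, x

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.1 * rng.standard_normal(100)
lam, x_hat = evidence_lambda(A, b)
```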
  • Diffusive Optical Tomography in the Bayesian Framework
    Kit Newton (University of Wisconsin, Madison)
    Optical imaging, mostly used in medical settings, is a technique for reconstructing the optical properties of tissue from measurements of the incoming and outgoing light intensity. Mathematically, light propagation is modeled by the radiative transfer equation (RTE), and optical tomography amounts to reconstructing the scattering and absorption coefficients in the RTE from boundary measurements. We study this problem in the Bayesian framework, paying special attention to the strong-scattering regime. Asymptotically, in this regime, the RTE is equivalent to the diffusion equation (DE), whose inverse problem is severely ill-posed. We study the deterioration of stability as the equation changes regimes and prove the convergence of the inverse RTE to the inverse DE in both the nonlinear and linear settings.
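    For reference, the diffusion limit invoked above takes the following standard form under parabolic scaling (textbook notation, not necessarily the poster's: ε is the Knudsen number, σ_s and σ_a the scattering and absorption coefficients, and ⟨·⟩ the angular average):

```latex
\varepsilon\,\partial_t f + v\cdot\nabla_x f
  = \frac{\sigma_s}{\varepsilon}\bigl(\langle f\rangle - f\bigr) - \varepsilon\,\sigma_a f
\quad\xrightarrow{\;\varepsilon\to 0\;}\quad
\partial_t \rho = \nabla_x\cdot\Bigl(\frac{1}{3\sigma_s}\nabla_x\rho\Bigr) - \sigma_a\,\rho,
\qquad \rho = \langle f\rangle .
```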
  • Advanced deep learning techniques for bathymetry estimation
    Hojat Ghorbanidehno (Stanford University)
    The high cost and complex logistics of ship-based bathymetric surveys, i.e., depth imaging, have encouraged the use of various types of indirect measurements for high-resolution waterbody bathymetry. However, estimating bathymetry from indirect measurements is usually an ill-posed inverse problem and can be computationally challenging, as most inversion techniques require iterative calls to complex forward models to compute Jacobians, followed by the inversion of relatively large matrices.

    In this work, we use a fully connected deep neural network to estimate bathymetry from indirect measurements. We show that such networks can perform the inversion much faster than traditional inversion methods. The improvement and performance of these methodologies are illustrated by applying them to a riverine synthetic depth-imaging test case.
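    A minimal sketch of the idea: train a small fully connected network to invert a hypothetical scalar forward model (the model, architecture, and hyperparameters below are all illustrative assumptions, not the poster's setup):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scalar "forward model": a measurement as a smooth,
# invertible function of depth (purely illustrative).
depth = rng.uniform(0.0, 5.0, size=(256, 1))
meas = np.sqrt(depth) + 0.2 * np.sin(depth)

# One-hidden-layer fully connected net trained to map measurement -> depth,
# i.e., to learn the inverse map directly from (measurement, depth) pairs.
W1 = 0.5 * rng.standard_normal((1, 32)); b1 = np.zeros(32)
W2 = 0.5 * rng.standard_normal((32, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(meas @ W1 + b1)
    err = h @ W2 + b2 - depth           # prediction error
    gW2 = h.T @ err / len(depth); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    gW1 = meas.T @ dh / len(depth); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

rmse = np.sqrt(np.mean((np.tanh(meas @ W1 + b1) @ W2 + b2 - depth) ** 2))
```

Once trained, inversion is a single forward pass, which is where the speed advantage over iterative, Jacobian-based inversion comes from.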
  • Impact of Variable-Density Flow on the Value-of-Information of Pressure and Concentration Data for Saline Aquifer Characterization
    Seonkyoo Yoon (University of Minnesota, Twin Cities)
    We demonstrate that understanding the underpinning physics controlling the value-of-information of observation data can contribute to efficient designs of data collection for saline aquifer characterization. We systematically investigated how variable-density flows impact the value-of-information of pressure and concentration data for parameter estimation. We find that (1) the advantage of jointly inverting pressure and concentration data decreases as the coupling between flow and transport increases, and (2) the coupling effects decrease as the heterogeneity of the permeability field increases, so joint inversion of pressure and concentration data is advocated over a single data type for highly heterogeneous systems.
  • Diffusion Processes for Hyperspectral Data: Clustering and Active Learning
    James Murphy (Tufts University)
    The problem of unsupervised learning and segmentation of hyperspectral images is a significant challenge in remote sensing. The high dimensionality of hyperspectral data, presence of substantial noise, and overlap of classes all contribute to the difficulty of automatically clustering and segmenting hyperspectral images. We propose an unsupervised learning technique called spectral-spatial diffusion learning (DLSS) that combines a geometric estimation of class modes with a diffusion-inspired labeling that incorporates both spectral and spatial information. The mode estimation incorporates the geometry of the hyperspectral data by using diffusion distance to promote learning a unique mode from each class. These class modes are then used to label all points by a joint spectral-spatial nonlinear diffusion process. A related variation of DLSS is also presented, which enables active learning by requesting labels for a very small number of well-chosen pixels, dramatically boosting overall clustering results.
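    Diffusion distances of the kind DLSS builds on can be sketched via a diffusion-map embedding (the toy "spectral" data, eps, t, and k below are illustrative choices):

```python
import numpy as np

def diffusion_coordinates(X, eps=10.0, t=2, k=4):
    """Diffusion-map embedding: Euclidean distance between rows of the
    returned array approximates diffusion distance at time t."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / eps)                     # Gaussian affinity graph
    P = W / W.sum(axis=1, keepdims=True)      # Markov transition matrix
    evals, evecs = np.linalg.eig(P)
    evals, evecs = evals.real, evecs.real
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    return evecs[:, 1:k + 1] * evals[1:k + 1] ** t  # drop the constant mode

# Two well-separated "spectral classes" in 5 bands.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 0.2, (30, 5)), rng.normal(1.5, 0.2, (30, 5))])
Y = diffusion_coordinates(X)
```

In these coordinates, points of one class collapse together while the two classes stay apart, which is why class modes separate cleanly under diffusion distance.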
  • Application of Machine Learning Techniques in CO2 Storage and Enhanced Oil Recovery
    Bailian Chen (Los Alamos National Laboratory)
    Recently, machine learning techniques have been widely used in CO2 storage and enhanced oil recovery projects, especially for developing computationally fast empirical, surrogate, or proxy models in subsurface modeling. In this study, we present two applications of machine learning to the development of proxy or predictive empirical models addressing issues of interest to the US DOE. In the first example, a machine learning and uncertainty quantification based approach was proposed for geologic CO2 sequestration monitoring design, and its efficiency and accuracy were demonstrated [1]. In the second example, empirical models for predicting CO2 storage and oil recovery potential in Residual Oil Zones (ROZs) were developed using different machine learning techniques [2]. The constructed models were applied to evaluate the CO2 storage capacity and oil recovery potential of five real ROZ fields in the Permian Basin.

    [1] Chen, B., D.R. Harp, Y. Lin, E.H. Keating and R.J. Pawar: “Geologic CO2 Sequestration Monitoring Design: A Machine Learning and Uncertainty Quantification Based Approach,” Applied Energy, 225, 332-345, 2018.

    [2] Chen, B. and R.J. Pawar: “Capacity Assessment of CO2 storage and Enhanced Oil Recovery in Residual Oil Zones,” SPE paper #: 191604, 2018.
  • Space-Time Analysis of Optical Responses of Liquid Crystals
    Alexander Smith (University of Wisconsin, Madison)
    We present a spatial and temporal analysis of liquid crystal sensor responses to different chemical stimuli for classification. This work focuses on developing highly accurate classification while minimizing the number of required features. Previous work by Cao and co-workers developed an accurate linear classifier with features extracted from AlexNet and basic image analysis techniques such as the Histogram of Oriented Gradients [1]. We extend this work by analyzing the effectiveness of features extracted from the VGG16 network, which is more compact than AlexNet and uses smaller convolutional filters [2]. Our findings demonstrate that features extracted from the first and second convolutional blocks of VGG16 allow perfect linear classification on the given dataset while reducing the number of required features one-hundredfold. The number of features is further reduced to ten via recursive feature elimination, with minimal loss in classification accuracy. Analysis of the reduced feature set provides a window into the physical reasoning behind the accurate linear classification.

    [1] Y. Cao, H. Yu, N. L. Abbott, V. M. Zavala. Liquid Crystal Response Classification Using Imaging and Machine Learning. (Submitted 2017)

    [2] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
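    Recursive feature elimination of the kind used above can be sketched with a bare-bones least-squares variant (a stand-in for the authors' pipeline; real workflows typically cross-validate each elimination step):

```python
import numpy as np

def recursive_feature_elimination(X, y, n_keep):
    """Repeatedly drop the feature whose least-squares coefficient has the
    smallest magnitude until n_keep features remain."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        active.pop(int(np.argmin(np.abs(w))))   # discard weakest feature
    return active

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 20))              # 20 candidate features
y = 3.0 * X[:, 2] + 2.0 * X[:, 7] + 0.1 * rng.standard_normal(200)
kept = recursive_feature_elimination(X, y, n_keep=2)
```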
  • Probabilistic Mapping of August 2018 Flood of Kerala, India, Using Space-borne Synthetic Aperture Radar
    Sonam Sherpa (Arizona State University)
    Heavy precipitation during August 2018 in Kerala, India caused flash flooding that inundated thousands of homes, displaced one million people, and caused losses of about US$3 billion. Here we apply a Bayesian framework to Synthetic Aperture Radar (SAR) images acquired by Sentinel-1 (C-band) during and after the flood events. The extent of the flooded area is mapped by examining the statistical properties of the radar backscatter intensity. Such probabilistic maps are an essential component of any effort toward flood data assimilation.
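    A per-pixel Bayesian classification of this kind can be sketched with Gaussian class likelihoods on backscatter; every parameter below is an illustrative assumption, not a fitted value from the study:

```python
import numpy as np

def flood_posterior(sigma0_db, mu_f=-18.0, sd_f=2.0, mu_d=-8.0, sd_d=3.0,
                    prior_f=0.3):
    """Posterior probability of flooding from SAR backscatter (dB), assuming
    Gaussian likelihoods for flooded (smooth water, low backscatter) and
    dry pixels; illustrative class parameters."""
    def gauss(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))
    pf = gauss(sigma0_db, mu_f, sd_f) * prior_f          # flooded likelihood
    pd = gauss(sigma0_db, mu_d, sd_d) * (1.0 - prior_f)  # dry likelihood
    return pf / (pf + pd)                                # Bayes' rule

p = flood_posterior(np.array([-20.0, -13.0, -6.0]))  # dark -> bright pixels
```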
  • Feature Detection from LiDAR Point Clouds and Thermal Imagery Using Distance Dependent Chinese Restaurant Process Clustering
    Avipsa Roy (Arizona State University)
    Recent advances in geographic data collection have rapidly changed how geospatial information is visualized. Integrating light detection and ranging (LiDAR) into mobile mapping systems has opened a new horizon for determining signatures from geotagged images. LiDAR point clouds combined with simple meshes can help discover features that inform the state of facility operations in and around building structures by examining the elevation, temperature, and intensity associated with each block of the facility. Our study area is the open research nuclear reactor, the High Flux Isotope Reactor (HFIR), at Oak Ridge National Laboratory in Oak Ridge, Tennessee. The broader goal of the research encompasses an NA-22 effort called MINOS (Multi-Informatics for Nuclear Operation Scenarios), meant to use data-analytic tools to characterize activities within a nuclear facility by studying a variety of sensors placed around it. Our objective is to perform feature detection and identify informative signatures in and around the HFIR facility from the multiple spatial modalities (point clouds, thermal imagery) captured by a VLP-16 LiDAR unit. We use a Bayesian nonparametric clustering technique, the distance-dependent Chinese Restaurant Process, to perform image segmentation on features derived jointly from the thermal images and LiDAR point clouds.
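    Drawing a partition from the distance-dependent CRP prior is straightforward and conveys the core idea; the exponential decay kernel and all parameters below are illustrative choices:

```python
import numpy as np

def ddcrp_partition(coords, alpha=0.5, decay=0.5, rng=None):
    """Sample one partition from the distance-dependent CRP prior: each
    point links to another with probability decaying in distance (or to
    itself with weight alpha); clusters are the connected components of
    the link graph."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(coords)
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    links = np.empty(n, dtype=int)
    for i in range(n):
        w = np.exp(-d[i] / decay)   # exponential decay kernel (a choice)
        w[i] = alpha                # self-link weight starts a new table
        links[i] = rng.choice(n, p=w / w.sum())
    parent = list(range(n))         # union-find over customer links
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in enumerate(links):
        parent[find(i)] = find(j)
    return np.array([find(i) for i in range(n)])

rng = np.random.default_rng(6)
coords = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels = ddcrp_partition(coords, rng=rng)
```

Because link probabilities decay with distance, spatially distant groups essentially never share a cluster, which is what makes the prior well suited to segmenting spatial data.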
  • Novelty Detection for Multispectral Images with Application to Planetary Exploration
    Hannah Kerner (Arizona State University)
    In this work, we present a system based on convolutional autoencoders for detecting novel features in multispectral images. We introduce SAMMIE: Selections based on Autoencoder Modeling of Multispectral Image Expectations. Previous work using autoencoders for novel image detection relied on the scalar reconstruction error for classifying new inputs as novel or typical. We show that some novelty detection problems require richer outputs than scalar reconstruction error and that a spatial-spectral error map can enable both accurate classification of novelty in multispectral images and human-comprehensible explanations of the detection. We apply our methodology to the detection of novel geologic features in multispectral images of the Martian surface collected by the Mastcam imaging system on the Mars Science Laboratory Curiosity rover.
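    The distinction between a scalar reconstruction error and a spatial-spectral error map can be sketched directly (the cube size and the stand-in reconstruction below are illustrative; a real system would use the autoencoder's output):

```python
import numpy as np

def error_maps(cube, recon):
    """Scalar reconstruction error vs. spatial-spectral error maps for a
    (H, W, bands) multispectral cube."""
    err = (cube - recon) ** 2
    scalar_error = err.mean()                  # one number: is the image novel?
    spatial_map = err.mean(axis=2)             # where is it novel?
    spectral_profile = err.mean(axis=(0, 1))   # in which bands?
    return scalar_error, spatial_map, spectral_profile

# Typical cube plus a small novel patch with an unusual spectrum.
cube = np.full((32, 32, 6), 0.5)
recon = np.full((32, 32, 6), 0.5)              # stand-in for autoencoder output
cube[10:14, 20:24, 3] = 2.0                    # novel feature in band 3
scalar_e, spatial_map, spectral_profile = error_maps(cube, recon)
```

The scalar error barely moves, while the spatial and spectral maps localize the novelty, which is the basis for the human-comprehensible explanations described above.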