January 9-12, 2006

Second Chances
Monday, January 9; Tuesday, January 10; Wednesday, January 11

Akram Aldroubi (Vanderbilt University)

Processing of Diffusion-Tensor Magnetic Resonance Images

Diffusion-Tensor Magnetic Resonance Imaging (DT-MRI) is a relatively recent imaging modality. DT-MRI images are measurements of the diffusion tensor of water in each voxel of an imaging volume. They can be viewed as noisy, discrete, voxel-averaged samples of a continuous function from 3D space into the positive definite symmetric matrices. These DT-MRI images can be used to probe the structural and architectural features of fibrous tissue such as white matter and the heart ventricles. We will present an overview of the problems, methods and applications associated with DT-MRI data and their processing.
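
As a worked illustration of the data model just described (the notation is ours, not the speaker's), a DT-MRI dataset samples a map

    D : \Omega \subset \mathbb{R}^3 \to \mathrm{SPD}(3),

assigning to each voxel a symmetric positive definite 3x3 matrix. The eigendecomposition D(x) = \sum_{i=1}^{3} \lambda_i e_i e_i^T with \lambda_1 \ge \lambda_2 \ge \lambda_3 > 0 yields the principal diffusion direction e_1(x), which fiber-tracking methods follow through the volume.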

Gaik Ambartsoumian (Texas A & M University) , Peter Kuchment (Texas A & M University)

Image Reconstruction in Thermoacoustic Tomography

Thermoacoustic tomography (TCT or TAT) is a new and promising method of medical imaging. It is a hybrid imaging technique, in which the input and output signals are of different physical natures.

In TAT, a microwave or radiofrequency electromagnetic pulse is sent through the biological object, triggering an acoustic wave that is measured in the exterior of the object. The resulting data are then used to recover the absorption function.

The poster addresses several problems of image reconstruction in thermoacoustic tomography. The presented results include injectivity properties of the related spherical Radon transform, its range description, reconstruction and incomplete data problems.
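
For reference, the spherical Radon transform in question maps a function to its integrals over spheres centered at the detector locations; in common notation (ours, added for clarity),

    Rf(p, r) = \int_{|x - p| = r} f(x) dS(x),

where p ranges over the detector set and r > 0 over sphere radii. Injectivity, range description, and inversion all depend on the geometry of the detector set.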

Jung-Ha An (University of Minnesota Twin Cities)

Image Segmentation using a Modified Mumford-Shah Model

The purpose of this paper is to achieve image segmentation using a modified Mumford-Shah model. A variational, region-intensity-based segmentation model is proposed, and the boundary of the given image is extracted with the modified Mumford-Shah functional. The proposed model is tested against synthetic data and simulated normal, noisy human-brain MRI images. The experimental results provide preliminary evidence of the effectiveness of the presented model.
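
For context, the classical Mumford-Shah functional that such models modify seeks a piecewise-smooth approximation u of an image g and an edge set C by minimizing

    E(u, C) = \int_\Omega (u - g)^2 dx + \mu \int_{\Omega \setminus C} |\nabla u|^2 dx + \nu length(C);

modified versions typically adapt the fidelity or regularity terms to the data at hand (our summary; the poster's specific modification is not detailed above).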

Mark A. Anastasio (Illinois Institute of Technology)

Diffraction tomography using intensity measurements

Diffraction tomography (DT) is a well-established imaging method for determination of the complex-valued refractive index distribution of a weakly scattering object. The success of DT imaging in optical applications, however, has been limited because it requires explicit knowledge of the phase of the measured wavefields. To circumvent the phase-retrieval problem, a theory of intensity DT (I-DT) has been proposed that replaces explicit phase measurements on a single detector plane by intensity measurements on two or more different parallel planes. In this work, we propose novel I-DT reconstruction theories that are applicable to non-conventional scanning geometries. Such advancements can improve the effectiveness of existing imaging systems and, perhaps more importantly, prompt and facilitate the development of systems for novel applications. Numerical simulations are conducted to demonstrate and validate the proposed tomographic reconstruction algorithms.
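
One standard route from two-plane intensity data to phase information (a general remark; the abstract does not specify the authors' formulation) is the transport-of-intensity equation

    k \partial I / \partial z = - \nabla_\perp \cdot ( I \nabla_\perp \phi ),    k = 2\pi / \lambda,

in which the axial derivative of the intensity I, estimated by differencing the measurements on the two parallel planes, determines the transverse phase \phi up to boundary conditions.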

Heath Barnett (Louisiana State University) , Les G. Butler (Louisiana State University) http://chemistry.lsu.edu/butler/ , Kyungmin Ham (Louisiana State University)

3D Image Acquisition and Image Analysis Algorithms

Given a near-perfect X-ray source, such as a synchrotron, what are the reasonable options for image reconstruction? Back-projection reconstruction has dominated, with lambda tomography not receiving the attention it deserves. Given the computational power at the beamline, is it reasonable to perform both reconstructions so as to discern object domains and interfaces? Second, all imaging methods reach a similar bottleneck: image analysis. Here, analysis means counting domains, identifying structure, and following paths. If only the “yellow book” (Numerical Recipes: The Art of Scientific Computing by Press, Flannery, Teukolsky, and Vetterling) had another couple of chapters on algorithms for image analysis. Today, we start writing those chapters.

We present some sample data sets from our work at the LSU synchrotron, the Advanced Photon Source, and the National Synchrotron Light Source. Also, we discuss potential future issues for neutron tomography at the Spallation Neutron Source.

Oliver Brunke (Universität Bremen)

Synchrotron micro computed tomography as a tool for the quantitative characterization of the structural changes during the ageing of metallic foams

Metallic foams are a rather new class of porous and lightweight materials offering a unique combination of mechanical, thermal and acoustical properties. Their high stiffness-to-weight ratio, acoustic damping properties and thermal resistance suggest applications in the automobile industry, for instance as crash energy absorbers or acoustic dampers, and in the aerospace industry for lightweight parts in rockets and aircraft.

We will demonstrate methods which we use for the analysis of 3D datasets of aluminum foams obtained at the synchrotron µCT facility at HASYLAB/DESY. It will be shown that, by means of standard 3D image processing techniques, it is possible to study and quantify how different processing parameters, such as foaming temperature and time, influence the structure formation and development of metallic foams.

Les G. Butler (Louisiana State University) http://chemistry.lsu.edu/butler/ , Eric Todd Quinto (Tufts University) http://www.tufts.edu/~equinto

Summary session: What was accomplished? What's next?

no abstract

Boris Aharon Efros (Ben Gurion University of the Negev)

Multiframe Dim Target Detection Using 3D Multiscale Geometric Analysis

Joint work with Dr. Ofer Levi, Industrial Engineering Department and Prof. Stanley Rotman Electrical Engineering Department, Ben-Gurion University of the Negev.

We present new multiscale geometric tools for both analysis and synthesis of 3-D data which may be scattered or observed in voxel arrays, which are typically very noisy, and which may contain one-dimensional structures such as line segments and filaments. These tools rely mainly on the 3-D Beamlet transform (developed by Donoho et al.), which offers the collection of line integrals along a strategic multiscale set of line segments, the Beamlet set, running through the image at different orientations, positions and lengths. 3D Beamlet methods can be applied in a wide variety of fields that involve 3D imaging; in this work we focus on applying Beamlet methods to the problem of multi-frame detection of dim targets and develop specialized tools for this application. We use tools from graph theory and apply them to the special graph generated by the Beamlet set.

Adel Faridani (Oregon State University)

Tomography and Sampling Theory

Computed tomography entails the reconstruction of a function from measurements of its line integrals. In this talk we explore the question: How many and which line integrals should be measured in order to achieve a desired resolution in the reconstructed image? Answering this question may help to reduce the amount of measurements and thereby the radiation dose, or to obtain a better image from the data one already has. Our exploration leads us to a mathematically and practically fruitful interaction of Shannon sampling theory and tomography. For example, sampling theory helps to identify efficient data acquisition schemes, provides a qualitative understanding of certain artifacts in tomographic images, and facilitates the error analysis of some reconstruction algorithms. On the other hand, applications in tomography have stimulated new research in sampling theory, e.g., on nonuniform sampling theorems and estimates for the aliasing error. Our dual aim is an exposition of the main principles involved as well as the presentation of some new insights.
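
As a concrete instance of this interaction (standard results from the sampling theory of tomography, stated here for orientation rather than as claims of the talk): if f is supported in the unit disk and essentially band-limited with bandwidth b, the standard parallel-beam scheme requires roughly p >= b equally spaced view angles on [0, \pi) together with detector spacing d <= \pi / b, while interlaced sampling lattices attain comparable resolution from roughly half as many line integrals.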

Alex Gittens (University of Houston) http://tangentspace.net/cz

Frame Isotropic Multiresolution Analysis for Micro CT Scans of Coronary Arteries

Joint with Bernhard G. Bodmann, Donald J. Kouri, and Manos Papadakis.

Recent studies have shown that as much as 85% of heart attacks are caused by the rupture of lesions comprising fatty deposits capped by a thin layer of fibrous tissue, so-called vulnerable plaques. An imaging modality for the reliable and early detection of vulnerable plaques is therefore of significant clinical relevance. As a move in that direction, we develop a texture-based algorithm for labeling tissues in high-resolution CT volume scans based upon variations in the local statistics of the wavelet coefficients. We use a fast wavelet transform associated with isotropic, three-dimensional wavelets; as a result, the algorithm is able to process large volume sets in their entirety, as opposed to two-dimensional cross-sections, and retains an orientation-independent sensitivity to features at all levels. The algorithm has been applied to the classification of tissues in scans of coronary artery specimens taken using a General Electric RS-9 Micro CT scanner with a linear resolution of 27 micrometers. In the current revision, it has shown promise for reliably distinguishing fibromuscular, lipid, and calcified tissues.
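
A minimal sketch of the general approach, per-voxel local statistics of 3-D wavelet coefficients as texture features (hedged: PyWavelets' separable db2 transform stands in for the authors' isotropic wavelet system, and the classifier is left abstract):

import numpy as np
import pywt                                   # PyWavelets
from scipy.ndimage import uniform_filter

volume = np.random.rand(64, 64, 64)           # stand-in for a micro-CT block

# One level of a 3-D discrete wavelet transform; dwtn returns the eight
# subbands keyed 'aaa', 'aad', ..., 'ddd'.
subbands = pywt.dwtn(volume, 'db2')

def local_stats(band, size=5):
    # Local mean of |coefficients| and local variance in a size^3 window.
    mean = uniform_filter(np.abs(band), size)
    var = uniform_filter(band ** 2, size) - uniform_filter(band, size) ** 2
    return mean, var

features = [s for band in subbands.values() for s in local_stats(band)]
# 'features' can then feed any per-voxel tissue classifier (k-means, SVM, ...).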

Natalia Grinberg (Universität Fridericiana (TH) Karlsruhe)

Factorization method in inverse obstacle scattering

Many inverse problems from acoustics, elasticity or electromagnetism can be reduced to the inverse scattering problem for the Helmholtz equation. We consider scattering by inclusions or obstacles in a homogeneous background medium. The factorization method establishes an explicit relation between the spectral properties of the far-field operator (or its derivatives) and the shape of the scatterer. This relation allows one to reconstruct the unknown scattering object pointwise. The factorization method works well for any type of boundary condition and is dimension-independent.

Gabor T. Herman (City University of New York (CUNY)) http://www.cs.gc.cuny.edu/~gherman/

Discrete tomography

Breakout groups 1/11/2006

Gabor T. Herman (City University of New York (CUNY)) http://www.cs.gc.cuny.edu/~gherman/

Recovery of the internal grain structure of polycrystals from X-ray diffraction data using discrete tomography

Many materials (such as metals) are polycrystals: they consist of crystalline grains at various orientations. The interaction of these grains with X-rays can be detected as diffraction spots. Discrete tomography can be used to recover the internal orientation arrangement of the grains from such diffraction measurements.

Michael Hofer (Vienna University of Technology)

3D Shape Recognition and Reconstruction with Line Element Geometry

This poster presents a new method for the recognition and reconstruction of simple geometric shapes from 3D data. Line element geometry, which generalizes both line geometry and the Laguerre geometry of oriented planes, enables us to recognize a wide class of surfaces (spiral surfaces, cones, helical surfaces, rotational surfaces, cylinders, etc.) by fitting linear subspaces in an appropriate seven-dimensional image space. In combination with standard techniques such as PCA and RANSAC, line element geometry is employed to effectively perform the segmentation of complex objects according to surface type.

George Kamberov (Stevens Institute of Technology)

Segmentation and Geometry of 3D Scenes from Unorganized Point Clouds

We present a framework to segment 3D point clouds into 0D, 1D and 2D connected components (isolated points, curves, and surfaces) and then to assign robust estimates of the Gauss and mean curvatures and the principal curvature directions at each surface point. The framework is point-based: it does not use surface reconstruction, works on noisy data, and requires no human in the loop to deal with non-uniformly sampled clouds and boundary points. The topology and geometry recovery are parallelizable with low overhead.
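
A hedged illustration of one common point-based technique (explicitly not the authors' method): local PCA over the k nearest neighbours yields a normal estimate and a "surface variation" curvature proxy at each point.

import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(1000, 3)              # stand-in unorganized cloud
tree = cKDTree(points)

def normal_and_variation(p, k=16):
    # Fit a local tangent plane by PCA of the k nearest neighbours of p.
    _, idx = tree.query(p, k=k)
    nbrs = points[idx] - points[idx].mean(axis=0)
    evals, evecs = np.linalg.eigh(nbrs.T @ nbrs)   # ascending eigenvalues
    normal = evecs[:, 0]                           # least-variance direction
    variation = evals[0] / evals.sum()             # 0 for a perfect plane
    return normal, variation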

Alexander Katsevich (University of Central Florida)

Improved cone beam local tomography

A new local tomography function g is proposed. It is shown that g still contains non-local artifacts, but their level is an order of magnitude smaller than those of the previously known local tomography function. We also investigate local tomography reconstruction in the dynamic case, i.e. when the object f being scanned is undergoing some changes during the scan. Properties of g are studied, the notion of visible singularities is suitably generalized, and a relationship between the wave fronts of f and g is established. It is shown that the changes in f do not cause any smearing of the singularities of g. Results of numerical experiments are presented.
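
For orientation, the classical local tomography function (a standard definition; the talk's improved function g differs from it) is, in 2D,

    \Lambda f = \sqrt{-\Delta} f = (1 / (4\pi)) R^# ( -\partial_s^2 ) R f,

where R is the Radon transform and R^# is backprojection. Because the filter is a differential operator rather than a Hilbert transform, reconstructing \Lambda f at a point requires only rays passing near that point, which is what makes the method local.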

Richard Ketcham (University of Texas)

Surface detection

Breakout groups 1/10/2006

Richard Ketcham (University of Texas)

Measuring features in volumetric data sets using Blob3D

Blob3D is a software project begun at the University of Texas at Austin in 1999 for facilitating measurements of discrete features of interest in volumetric data sets. It is designed in particular to deal with cases where features are touching or impinging, and to allow up to tens of thousands of features to be processed in a reasonable amount of time. Processing is broken into three stages: segmentation of a phase of interest, separation of touching objects, and extraction of measurements from the interpreted volume. For each stage a variety of three-dimensional algorithms have been created that account for vagaries of CT data, and program design is intended to enable relatively straightforward addition of new methods as they are developed. Separation is the most time-intensive step, as it utilizes manual and semi-automated methods that rely heavily on the user. This approach is most appropriate in many instances where the natural variation and complexity of the features require expert interpretation, but further automation is a future goal. Although designed in particular for geological applications using X-ray CT data, Blob3D is sufficiently general that it can be applied to other data types and in other fields.
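
To fix ideas, a generic open-source analogue of the three stages (hedged: this scipy/scikit-image pipeline is illustrative and is not Blob3D's own algorithms):

import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed
from skimage.measure import regionprops

volume = np.random.rand(64, 64, 64)           # stand-in CT volume

# Stage 1: segment the phase of interest (here, a simple threshold).
mask = volume > 0.8

# Stage 2: separate touching objects with a distance-transform watershed.
distance = ndimage.distance_transform_edt(mask)
markers, _ = ndimage.label(distance > 0.7 * distance.max())
labels = watershed(-distance, markers, mask=mask)

# Stage 3: extract per-feature measurements from the interpreted volume.
for region in regionprops(labels):
    print(region.label, region.area, region.centroid)   # voxels, centroid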

Richard Ketcham (University of Texas)

Surface detection

Breakout groups 1/12/2006

Seongjai Kim (Mississippi State University)

Zoomable PDEs

The presentation introduces edge-forming schemes for image zooming by arbitrary magnification factors. Image zooming via conventional interpolation methods often produces the so-called checkerboard effect, in particular, when the magnification factor is large. In order to remove the artifact and to form reliable edges, a nonlinear convex-concave model and its numerical algorithm are suggested along with anisotropic edge-forming numerical schemes. The algorithm is analyzed for stability and choices of parameters. It has been numerically verified that the resulting algorithm can form clear edges in 2 to 3 iterations of a linearized ADI method. Various results are given to show effectiveness and reliability of the algorithm, for both gray-scale and color images.

This is a joint work with Dr. Youngjoon Cha.

Carl E. Krill III (Universität Ulm) http://www.uni-ulm.de/matwis/

Unraveling the Mysteries of Grain Growth by X-Ray Tomography: Segmentation of the 3-D Microstructure of Polycrystalline Al-Sn

During the phenomenon of grain growth, larger grains tend to grow at the expense of their smaller neighbors, resulting in a steady increase in the average grain size. Because the growth of any given grain is affected by that of its neighbors, the behavior of the ensemble of grains is a strong function of nearest-neighbor size correlations. Quantitative information concerning these correlations can be extracted only from a truly three-dimensional characterization of the sample microstructure. We have used x-ray microtomography to measure the size correlations in a polycrystalline specimen of Al alloyed with 2 at.% Sn. The tin atoms segregate to the grain boundaries, where they impart a strong contrast in x-ray attenuation that can be reconstructed tomographically; however, the nonuniform nature of the segregation process presents a formidable challenge to the automated segmentation of the reconstructions. By employing an iterative region-growing algorithm followed by a novel grain-boundary-network optimization routine (based on a phase-field simulation of grain growth), we were able to measure the size, topology and local connectivity of nearly 5000 contiguous Al grains, from which the nearest-neighbor size correlations could be computed. The resulting information was incorporated into a non-mean-field theory for grain growth, the accuracy of which was evaluated by comparing its predictions to the observed microstructure of the Al-Sn samples.

Peter Kuchment (Texas A & M University)

On mathematics of thermoacoustic imaging

Joint with Gaik Ambartsoumian.

In thermoacoustic tomography (TAT, sometimes called TCT), one triggers an ultrasound signal in the medium by irradiating it with a short EM pulse. Mathematically speaking, under ideal conditions, the imaging problem boils down to inversion of a spherical Radon transform. The talk will survey known results and open problems in this area.

Ofer Levi (Ben Gurion University of the Negev)

Real Time Multi-Scale Geometric Segmentation of 3D Images

no abstract

Chunming Li (Vanderbilt University) http://vuiis.vanderbilt.edu/~licm/

Active Contours with Local Binary Fitting Energy

We propose a novel active contour model for image segmentation. The proposed model is based on the assumption that the image is locally binary. Our method is able to segment images with non-homogeneous regions, which is difficult for existing region-based active contour models. Experimental results demonstrate the effectiveness of our method, and a comparative study shows its advantage over previous methods.
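
For contrast, the classical piecewise-constant region-based model (Chan-Vese) fits one constant per region by minimizing

    E(c_1, c_2, C) = \mu length(C) + \lambda_1 \int_{inside(C)} |I - c_1|^2 dx + \lambda_2 \int_{outside(C)} |I - c_2|^2 dx,

which fails when regions are inhomogeneous; a locally binary assumption plausibly replaces the global constants c_1, c_2 with values fitted near each contour point (our reading, since the abstract does not state the energy explicitly).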

Hyeona Lim (Mississippi State University)

Method of Background Subtraction for Medical Image Segmentation

Medical images can involve high levels of noise and unclear edges, and therefore their segmentation is often challenging. In this presentation, we consider the method of background subtraction (MBS) in order to minimize difficulties arising in the application of segmentation methods to medical imagery. When an appropriate background is subtracted from the given image, the residue can be considered as a perturbation of a binary image, for which most segmentation methods can detect desired edges effectively. New strategies are presented for the computation of the background and tested along with active contour models. Various numerical examples are presented to show the effectiveness of the MBS for segmentation of medical imagery. The method can be extended to an efficient surface detection of 3-D medical images.
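
A minimal sketch of the idea; the background estimator used here (a heavy Gaussian blur) is our assumption, since the abstract does not specify how the background is computed.

import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(256, 256)              # stand-in medical image
background = gaussian_filter(image, sigma=30) # illustrative background choice
residue = image - background                  # near-binary perturbation
mask = residue > 0                            # any segmentation model applies here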

Jundong Liu (Ohio University) http://ace.cs.ohiou.edu/~liu

A Unified Registration and Segmentation Framework for Medical Images

In this paper, we present a unified framework for joint segmentation and registration. The registration component of the method relies on two forces for aligning the input images: one from the image similarity measure, and the other from an image homogeneity constraint. The former, based on local correlation, aims to find the detailed intensity correspondence for the input images. The latter, generated from the evolving segmentation contours, provides extra guidance, steering the alignment process towards a more meaningful, stable and noise-tolerant procedure. We present several 2D/3D examples on synthetic and real data.

Thomas H. Mareci (University of Florida) http://faraday.mbi.ufl.edu/~thmareci/

Imaging Translational Water Diffusion with Magnetic Resonance for Fiber Mapping in the Central Nervous System

Work in collaboration with Evren Ozarslan of the National Institutes of Health and Baba Vemuri of the University of Florida.

Magnetic resonance can be used to measure the rate and direction of molecular translational diffusion. Combining this diffusion measurement with magnetic resonance imaging methods allows the visualization of 3D motion of molecules in structured environments, like biological tissue. In its simplest form, the 3D measure of diffusion can be modeled as a real, symmetric rank-2 tensor of diffusion rate and direction at each image voxel. At a minimum, this model requires seven unique measurements of diffusion to fit the model (Basser, et al., J Magn Reson 1994;B:247–254). The resulting rank-2 tensor can be used to visualize diffusion as an ellipsoid at each voxel, and fiber connections can be inferred by following the path defined by the long axis (principal eigenvector) of the ellipsoid passing through each voxel.
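
The count of seven follows from the standard diffusion-tensor signal model (our gloss on the Basser et al. reference): each diffusion-weighted acquisition with gradient direction g_k and diffusion weighting b obeys

    S_k = S_0 exp( -b g_k^T D g_k ),

so the six independent entries of the symmetric 3x3 tensor D plus the unweighted signal S_0 make seven unknowns, requiring at least six gradient directions plus one b = 0 measurement.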

However, the rank-2 model of diffusion fails to accurately represent diffusion in complex structured environments, like nervous tissue with many crossing fibers. This limitation can be overcome by extending the angular resolution of diffusion measurements (Tuch, et al., Proceedings of the 7th Annual Meeting of Inter Soc Magn Reson Med, Philadelphia, 1999. p 321.) and by modeling the diffusion with higher-rank tensors (Ozarslan et al., Magn. Reson. Med. 2003;50:955-965 & Magn. Reson. Med. 2005;53:866-876). At each voxel in this more complete model, the 3D diffusion is represented by an "orientation distribution function" (ODF) indicating the probability of diffusion rate and direction. The diffusion ODF can be used to infer fiber connectivity, but the issue of probable path selection remains a challenge. Moreover, the chosen procedure for path selection will influence the level of resolution required for the measurements. In this presentation, methods of diffusion measurement and examples of diffusion-weighted magnetic resonance images from brain and spinal cord will be presented to illustrate the potential and challenges of path selection leading to fiber mapping in the central nervous system.

Frank Natterer (Westfälische Wilhelms-Universität Münster) http://wwwmath.uni-muenster.de/math/u/natterer/

Ultrasound tomography

no abstract

Ozan Oktem (Sidec Technologies)

Electron tomography. A short overview of methods and challenges

As early as 1968 it was recognized that the transmission electron microscope could be used in a tomographic setting as a tool for structure determination of macromolecules. However, its usage in mainstream structural biology has been limited, mostly because of the incomplete data problems that lead to severe ill-posedness of the inverse problem. Despite these problems its importance is beginning to increase, especially in drug discovery.

In order to understand the difficulties of electron tomography one needs to properly formulate the forward problem that models the measured intensity in the microscope. The electron-specimen interaction is modelled as a diffraction tomography problem, and the picture is completed by adding a description of the optical system of the transmission electron microscope. For weakly scattering specimens one can further simplify the forward model by employing the first-order Born approximation, which enables us to explicitly express the forward operator in terms of the propagation operator from diffraction tomography acting on the specimen, convolved with a point spread function derived from the optics in the microscope. We next turn to the algorithmic and mathematical difficulties that one faces in dealing with the resulting inverse problem, especially the incomplete data problems that lead to severe ill-posedness. Even though we briefly mention single particle methods, our focus will be on electron tomography of general weakly scattering specimens, and we mention some of the progress that has been made in the field. Finally, if time permits, we provide some examples of reconstructions from electron tomography and demonstrate some of the biological interpretations that one can make.
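
For reference, the first-order Born approximation mentioned above linearizes the scattering (a textbook statement, not specific to this talk): with incident field u_0, outgoing Green's function G of the Helmholtz operator, and scattering potential V encoding the specimen,

    u(x) \approx u_0(x) + \int G(x, y) V(y) u_0(y) dy,

so the measured field becomes an explicitly linear function of V, onto which the microscope optics act as a convolution with a point spread function.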

Sarah K. Patch (University of Wisconsin) http://www.phys.uwm.edu/people/faculty/patchs.html

Thermoacoustic Tomography - Reconstruction of Data Measured under Clinical Constraints

Thermoacoustic tomography (TCT) is a hybrid imaging technique that has been proposed as an alternative to x-ray mammography. Ideally, electromagnetic (EM) energy is deposited into the breast tissue uniformly in space, but impulsively in time. This heats the tissue, causing thermal expansion. Cancerous masses absorb more energy than healthy tissue, creating a pressure wave, which is detected by standard ultrasound transducers placed on the surface of a hemisphere surrounding the breast. Assuming constant sound speed and zero attenuation, the data represent integrals of the tissue's EM absorptivity over spheres centered about the receivers (ultrasound transducers).

The inversion problem for TCT is therefore to recover the EM absorptivity from integrals over spheres centered on a hemisphere. We present an inversion formula for the complete data case, where integrals are measured for centers on the entire sphere. We discuss differences between ideal and clinically measurable TCT data and options for accurately reconstructing the latter.

Henning Friis Poulsen (Riso National Laboratory)

Grain maps and grain dynamics — a reconstruction challenge

Crystalline materials such as most metals, ceramics, rocks, drugs, and bones are composed of a 3D space-filling network of small crystallites, the grains. The geometry of this network governs a range of physical properties such as hardness and lifetime before failure. Our group has pursued an experimental method, 3DXRD, which for the first time enables structural characterisation of the grains in 3D. Furthermore, changes in grain morphology can be followed during typical industrial processes such as annealing or deformation.

3DXRD is based on reconstruction principles. In comparison to conventional tomography the use of higher dimensional spaces is required. The projections are subject to group symmetry and their number is inherently limited. The grains exhibit a number of geometric properties which can be utilised. Furthermore the problem at hand can be reformulated in terms of both vector-type and scalar-type reconstructions. In conjunction these effects make 3DXRD reconstruction mathematically challenging.

The 3DXRD method will be presented with a few applications. The algorithms developed so far, for simplified cases, will be summarised with a focus on continuous reconstruction methods.

Eric Todd Quinto (Tufts University) http://www.tufts.edu/~equinto

An introduction to the mathematics of tomography

The speaker will provide an overview of the Radon transform, showing its relationship with X-ray tomography and other tomographic problems. He will also describe the filtered back projection inversion formula and contrast it with Lambda tomography. Finally, he plans to give an elementary introduction to microlocal analysis and its implications for limited data tomography. Sample reconstructions (tomography pictures) will be provided to illustrate the ideas.
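
A standard statement of the two formulas being contrasted (textbook versions, added for reference): in 2D, filtered back projection inverts the Radon transform R via

    f = (1 / (4\pi)) R^# H \partial_s R f,

with H the Hilbert transform in the detector variable s, a nonlocal filter; Lambda tomography instead reconstructs \Lambda f = \sqrt{-\Delta} f by replacing H \partial_s with -\partial_s^2, so that each point is recovered from nearby line integrals only.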

Gregory J. Randall (University of the Republic)

An Active Regions approach for the segmentation of 3D biological tissue

Joint work with Juan Cardelino and Marcelo Bertalmio.

Some of the most successful algorithms for the automated segmentation of images use an Active Regions approach, where a curve is evolved so as to maximize the disparity of its interior and exterior. But these techniques require the manual selection of several parameters, which makes working with long image sequences, or with very dissimilar sets of sequences, impractical. Unfortunately this is precisely the case with 3D biological image sequences. In this work we improve on previous Active Regions algorithms in two respects: by introducing a way to compute and update the optimum weights for the different channels involved (color, texture, etc.), and by detecting whether the moving curve has lost any object, so as to launch a re-initialization step. Our method is shown to outperform previous approaches. Several examples of biological image sequences, quite long and different among themselves, are presented.

Walter Richardson Jr. (University of Texas)

Sobolev gradients and negative norms for image decomposition

The use of Sobolev gradients and negative norms has proven to be a very useful preconditioning strategy for a variety of problems from mechanics and CFD, including transonic flow, minimal surfaces, and Ginzburg-Landau. We summarize results of applying this methodology in a variational approach for the image decomposition f = u + v + w.
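
For reference, the basic device (standard in the Sobolev-gradient literature; our summary, not the talk's specific construction): if \nabla_{L^2} E denotes the ordinary gradient of an energy E, the Sobolev (H^1) gradient is the preconditioned direction

    \nabla_{H^1} E = (I - \Delta)^{-1} \nabla_{L^2} E,

so each descent step solves a smoothing problem that damps the high-frequency components responsible for the stiffness of plain gradient flows.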

Erik L. Ritman (Mayo Clinic) http://mayoresearch.mayo.edu/mayo/research/staff/ritman_el.cfm

Dual-Modality Micro-CT with Poly-Capillary X-ray Optics

Conventional attenuation-based x-ray micro-CT is limited in terms of the image contrast it can convey for differentiating different tissue components, spaces and functions. Multi-modality imaging (e.g., radionuclide emission and/or x-ray scatter) can expand the information that can be obtained about those tissue aspects, but a challenge is accurate co-registration of the multiple images needed for the CT image data to be used to enhance the other modality's specificity. Poly-capillary optics consist of bundles of hollow glass capillaries (nominally 25µm in lumen diameter) which can "bend" x-rays or gamma rays by virtue of reflection of the photons within those capillaries. This approach serves both to exclude unwanted radiation (i.e., collimates the radiation) and to allow passage of radiation along accurately described paths - either parallel or focused. As both x-rays from an external x-ray source and from gamma ray emitters and x-ray scatterers within an object can be imaged with this approach, the images from these three modalities are perfectly co-registered. This allows use of the x-ray image to provide for attenuation correction of the internally generated radiation, as well as restricting that emission to specific anatomic structures and spaces by virtue of a priori physiological knowledge.

Justin Romberg (California Institute of Technology)

Image acquisition from a highly incomplete set of measurements

Many imaging techniques acquire information about an underlying image by making indirect linear measurements. For example, in computed tomography we observe line integrals through the image, while in MRI we observe samples of the image's Fourier transform. To acquire an N-pixel image, we will in general need to make at least N measurements.

What happens if the number of measurements is (much) less than N (that is, the measurements are incomplete)? We will present theoretical results showing that if the image is sparse, it can be reconstructed from a very limited set of measurements essentially as well as from a full set by solving a certain convex optimization program. By "sparse", we mean that the image can be closely approximated using a small number of elements from a known orthobasis (a wavelet system, for example).
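
A representative instance of such a convex program (our notation; the talk's exact formulation may differ): given measurements y = A f_0 of an image f_0 that is sparse in an orthobasis \Psi, solve

    min_f || \Psi^T f ||_1   subject to   A f = y,

an l1 minimization whose solution, under suitable conditions on A and the sparsity of f_0, matches the quality of a reconstruction from complete data.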

Although the reconstruction procedure is nonlinear, it is exceptionally stable in the presence of noise, both in theory and in practice.

We will conclude with several practical examples of how the theory can be applied to "real-world" imaging problems.

Partha S. Routh (Boise State University) http://cgiss.boisestate.edu/~routh

Total variation imaging followed by spectral decomposition using continuous wavelet transform

In general, geophysical images provide two kinds of information: (a) structural images of discontinuities that define various lithology units and (b) the physical property distribution within these units. Depending on the resolution of the geophysical survey, large-scale changes can usually be detected that are often correlated with the stratigraphic architecture of the subsurface. Knowledge of these architectural elements provides information about the subsurface. Total variation (TV) regularization is one possibility for preserving discontinuities in the images. Another goal in interpreting these images is to obtain features that carry varying scale information. Wavelets have the attractive quality of being able to resolve scale information in a signal or data set. Moreover, heterogeneity produces non-stationary signals that can be effectively analyzed using wavelets due to their localization property. In this work we will present a new methodology for computing a time-frequency map for non-stationary signals using the continuous wavelet transform (CWT) that is more advantageous than the conventional method of producing a time-frequency map using the Short Time Fourier Transform (STFT). This map is generated from the time-scale map by taking the Fourier transform of the inverse CWT, which produces a time-frequency map. We refer to such a map as the time-frequency CWT (TFCWT). Imaging using a total variation regularization operator followed by spectral decomposition using the TFCWT can be used as an effective interpretive tool.
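
A minimal sketch of a CWT-based time-frequency map; the authors' TFCWT construction (Fourier transform of the inverse CWT) is more involved, so this shows only the basic scale-to-frequency step, and all parameter choices here are illustrative.

import numpy as np
import pywt

fs = 500.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 30 * t * (1 + t)) # a non-stationary chirp

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=1 / fs)
# coeffs is a (scale x time) array; freqs converts each scale to its center
# frequency, so |coeffs|**2 plotted against (t, freqs) is a time-frequency map.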

Guillermo R. Sapiro (University of Minnesota Twin Cities) http://www.ece.umn.edu/users/guille/

3D segmentation in tomography

In this talk I will describe recent results in the segmentation of relevant structures in electron tomography. We have developed novel techniques based on PDEs to work with this extremely hard data. I will describe the problem and the proposed solution, both at a tutorial level for a general audience.

This is joint work with A. Bartesaghi and S. Subramaniam from NCI at NIH.

Eric S. Weber (Iowa State University) http://www.public.iastate.edu/~esweber/

Orthogonal Wavelet Frames for Color Image Compression

We present an algorithm for constructing orthogonal wavelet frames from MRAs in L2(R), as well as the associated filter banks. This construction gives rise to a vector-valued wavelet transform (VDWT) for vector-valued data, such as images. We present numerical results of image data compression using the VDWT.

Martin Welk (Universität des Saarlandes)

Structure-Preserving Filtering of DT-MRI Data: PDE and Discrete Approaches

Joint work with: Joachim Weickert, Christian Feddern, Bernhard Burgeth, Christoph Schnoerr, Florian Becker

Curvature-driven PDE filters like mean curvature motion (MCM) and median filters are well studied as structure-preserving filters for grey-value images. They are related via a remarkable approximation result by Guichard and Morel.

We show generalisations of both types of filters to multivariate, specifically matrix-valued images. We discuss properties and algorithmic aspects, and demonstrate their usefulness for the filtering of diffusion-tensor data.

Art W. Wetzel (Pittsburgh Supercomputing Center) http://www.psc.edu/~awetzel/

A Networked Framework for Visualization and Analysis of Volumetric Datasets

Work in collaboration with the PSC-VB team and the Duke Center for In Vivo Microscopy (CIVM) with support from the National Library of Medicine.

Volumetric datasets (CT, MRI, EM, etc.) on the gigabyte scale are relatively common in the basic and clinical Life Sciences, and datasets on the terabyte scale will become increasingly common in the near future. At these scales, visualization and analysis using typical users' desktop systems is difficult. We have been developing a client-server system, the PSC Volume Browser (PSC-VB), that links the graphics power of users' PCs with remote high-performance servers and supercomputing resources to enable the routine sharing, visualization, and analysis of large volumetric and time series datasets. PSC-VB provides the framework for efficient data transfer and data manipulation using both client- and server-side processing. The system is designed to link extensible toolsets for data analysis, including the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) and user-provided processing modules.

We are currently using PSC-VB for analysis of mouse cardiac function using time series micro-CT volumes and of mouse embryo development using micro-MRI volumes captured at the Duke CIVM. This talk will provide an overview of the PSC-VB system and its specific application to the CIVM time series data analysis, as well as preliminary efforts to build very large data volumes from serial-section electron microscopy images. Although our current applications involve biological data, the general framework is applicable to other data modalities and has been used to view, for example, earthquake ground motions and electromagnetic fields.

Graham A. Winbow (ExxonMobil)

Common reflection angle imaging

Common reflection angle migration (CRAM) is a computationally efficient ray-based seismic imaging technology developed at ExxonMobil which, as its name implies, enables us to form images of the subsurface in which all reflection events are imaged at the same reflection angle. It is most useful in complex imaging situations, such as beneath salt masses where signal/noise is a key issue and CRAM often enables us to separate signal and noise by comparing and contrasting different common reflection angle volumes. Our poster shows a recent example of how this works in practice.

Alex Zamyatin (BIR, Inc.)

Sinogram decomposition for fan beam transform and its applications

Recently several research groups independently proposed a sinogram decomposition approach for different problems in medical imaging. A sinogram is the set of projections of the reconstructed object. The main idea is to treat a sinogram as a family of sinogram curves (s-curves), each obtained by tracing a single object point in the sinogram. Many operators can be defined on the space of s-curves: backprojection (sum), minimum/maximum, etc. Therefore, the sinogram decomposition approach can be used in many applications: reconstruction from noisy data, sinogram completion for truncation correction and field-of-view extension, and artifact correction. In this poster we derive equations of s-curves in fan-beam geometry, native to medical CT scanners, parameterize the family of s-curves through a given sinogram pixel, and consider some of the applications, suggesting ways to estimate missing data using this approach.
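
For illustration (our derivation in standard notation, which may differ from the poster's parameterization): for a source moving on a circle of radius R, at source angle \beta, an object point with polar coordinates (r, \varphi) is seen at fan angle

    \gamma(\beta) = arctan[ r sin(\varphi - \beta) / ( R - r cos(\varphi - \beta) ) ],

which is the fan-beam s-curve traced by that point; as R \to \infty this reduces, with s = R sin\gamma and \theta = \beta + \pi/2, to the familiar parallel-beam sinusoid s = r cos(\theta - \varphi).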
