

Abstracts and Talk Materials

Keywords: Geometric and Clifford Algebras, Image and Signal Processing, Segmentation, Diffusion, Differential Geometry for Image Processing.

Abstract: Since the seminal work of Hestenes, Clifford algebras (also called geometric algebras) have proven to be a powerful tool for geometric modeling in theoretical physics. In recent years, the so-called "geometric calculus" has found many applications in computer science, especially in vision and robotics (Lasenby, Bayro-Corrochano, Dorst, ...). In image processing, Clifford algebras were already used implicitly by Sangwine through the coding of color with quaternions.

The aim of this talk is first to give basic notions about these algebras and the related spinor groups. I will then detail two applications: a metric approach for edge detection in nD images and the construction of a Clifford Hodge operator that generalizes the Beltrami operator of Sochen et al. The latter can be used for diffusion in color images, but also for diffusion of vector and orthonormal frame fields. The geometric framework of these applications is that of fiber and Clifford bundles. Very roughly speaking, the basic idea is to take advantage of embedding an acquisition space in a higher-dimensional algebra containing elements of different kinds and related to a specified metric. If time permits, I will also mention in a few words applications to signal processing (the Clifford Fourier transform and the color monogenic signal).
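As a small illustration of the quaternionic color coding due to Sangwine mentioned above (an illustration only, not the Clifford-bundle constructions of the talk): an RGB pixel becomes the pure quaternion (0, r, g, b), and conjugation by a unit quaternion rotates it, e.g. a hue rotation about the grey axis (1,1,1)/sqrt(3). A NumPy sketch:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate_color(rgb, axis, angle):
    """Code an RGB pixel as the pure quaternion (0, r, g, b) and rotate it
    about `axis` by `angle` via conjugation q v q^{-1}."""
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    q = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])   # unit-quaternion inverse
    v = np.concatenate([[0.0], np.asarray(rgb, float)])
    return qmul(qmul(q, v), qc)[1:]
```

A rotation by 120 degrees about the grey axis cyclically permutes the color channels, which is a quick sanity check of the coding.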

Two basic references:

Math.: Chevalley, C.: The Algebraic Theory of Spinors and Clifford Algebras, new edn. Springer (1995)

Computer Science: Sommer, G.: Geometric Computing with Clifford Algebras. Theoretical Foundations and Applications in Computer Vision and Robotics. Springer, Berlin (2001)


Keywords: geometry, image processing, diffuse interface, sparse representations, pan sharpening

Abstract: I will present a survey of recent results on geometry-based image processing. The topics will include wavelet-based diffuse interface methods, pan sharpening and hyperspectral sharpening, and sparse image representation.


October 5, 2009

Keywords: Image segmentation, Mumford-Shah model, graph clustering, relaxation method, total variation, operator splitting technique, normalized cut, Cheeger cut.

Abstract: In this talk, I will introduce two algorithms for image segmentation and graph clustering. One of the most influential image segmentation models is the Mumford-Shah (MS) model. Several algorithms, such as the level set method, have been introduced to compute a minimizing solution to the MS problem, but none of them can compute a global solution. We introduce a convex formulation for the multiphase piecewise constant MS problem (which is equivalent to the NP-hard Potts problem in the discrete literature) and compute exact global minimizing solutions. We believe our method is the first in the literature that can make this claim. The second model focuses on graph clustering, which aims at grouping similar high-dimensional data such as images. The main problem of graph clustering is to minimize a cut of a graph. Popular cuts are the normalized cut of Shi and Malik and the Cheeger cut, both of which are NP-hard to minimize. We introduce a continuous relaxation of the Cheeger cut problem and show that the relaxation is actually equivalent to the original problem, which is not the case with the Shi-Malik relaxation. We also give an algorithm that is experimentally efficient on some clustering benchmarks: it can cluster 10,000 high-dimensional points in a few seconds.

This is joint work with T.F. Chan, E. Brown (UCLA) and A. Szlam (NYU).
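For contrast with the Cheeger relaxation discussed above, the classical Shi-Malik spectral relaxation of the normalized cut can be sketched in a few lines: build a Gaussian affinity graph and threshold the second eigenvector (the Fiedler vector) of the normalized Laplacian. Parameters here are illustrative; this is not the algorithm of the talk.

```python
import numpy as np

def normalized_cut_relaxation(X, sigma=1.0):
    """Shi-Malik spectral relaxation of the normalized cut:
    threshold the second eigenvector of the normalized Laplacian."""
    # Gaussian affinity matrix on the points (rows of X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    D = W.sum(1)
    # symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    Dm = 1.0 / np.sqrt(D)
    L = np.eye(len(X)) - Dm[:, None] * W * Dm[None, :]
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                # second-smallest eigenvector
    return (fiedler > np.median(fiedler)).astype(int)
```

Thresholding the relaxed solution is exactly the step where the Shi-Malik relaxation loses equivalence with the combinatorial cut, which is the gap the Cheeger-cut result above closes.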


Non-parametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this non-parametric method naturally infers an appropriate dictionary size. The Dirichlet process and a probit stick-breaking process are also considered to exploit structure within an image. The proposed method can learn a sparse dictionary in situ; training images may be exploited if available, but they are not required. Further, the noise variance need not be known, and can be nonstationary. Another virtue of the proposed method is that sequential inference can be readily employed, thereby allowing scaling to large images. Several example results are presented, using both Gibbs and variational Bayesian inference, with comparisons to other state-of-the-art approaches.

Keywords: Parallel Programming, optimization, l1

Abstract: We consider the recovery of signals via compressive sensing, where the signal itself or its gradient is assumed to be sparse. This amounts to solving an l1 or a Total Variation minimization problem. We propose minimization algorithms specifically designed to take advantage of shared-memory, vectorized, parallel and manycore microprocessors such as the Cell processor, new-generation Graphics Processing Units (GPUs) and standard vectorized multicore processors (e.g. standard quad-core CPUs).
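The structure such solvers exploit shows up already in plain iterative soft-thresholding (ISTA) for the l1 problem: each iteration is two matrix multiplies plus an elementwise shrink, all of which vectorize and parallelize well. A NumPy sketch (illustrative only, not the authors' algorithms):

```python
import numpy as np

def ista(A, b, lam, n_iter=3000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Every step is dense linear algebra plus an elementwise nonlinearity,
    which maps naturally onto GPUs and vector units."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - b) / L      # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrink
    return x
```

For the Total Variation case, the same shrink acts on a discrete gradient field instead of on the coefficients, but the compute pattern is the same.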


Keywords: conformal, metric, shape, curves, harmonic, multiscale

Abstract: We will present several families of metrics on plane curves, each of which is based on some multi-scale representation and is equivalent to a Sobolev-type norm. These metrics arise when trying to characterize local regularity of functions and curves. The underlying techniques are borrowed from harmonic and complex analysis. We will present theoretical and practical results; in particular we will show experimental results on the MPEG-7 Shape 1b test dataset.
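A toy example of a Sobolev-type comparison of plane curves (illustrative only, not the metrics of the talk): sample two curves as complex sequences and weight the Fourier coefficients of their difference by (1 + |k|^2)^s, so that high-frequency (less regular) discrepancies cost more.

```python
import numpy as np

def sobolev_distance(c1, c2, s=1.0):
    """H^s-type distance between two plane curves sampled uniformly as
    complex sequences: weight the Fourier modes of the difference
    by (1 + |k|^2)^s."""
    d = np.fft.fft(c1 - c2) / len(c1)
    k = np.fft.fftfreq(len(c1), d=1.0 / len(c1))   # integer frequencies
    return np.sqrt(np.sum((1 + k**2) ** s * np.abs(d) ** 2))
```

For s > 0 a small-amplitude oscillation in a high mode is farther from the base curve than the same amplitude in a low mode, which is the regularity-sensitivity mentioned above.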


Keywords: object detection, invariant features, hierarchical search

Abstract: This talk is about research in scene interpretation. Most algorithms for detecting and describing instances from object categories consist of looping over a partition of a "pose space" with dedicated binary classifiers. This strategy is inefficient for a complex pose: fragmenting the training data severely reduces accuracy, and the computational cost is prohibitive due to visiting a massive pose partition. To overcome data fragmentation, I will discuss a novel framework centered on pose-indexed features, which allows for efficient, one-shot learning of pose-specific classifiers. Such features are designed so that the probability distribution of the response is invariant if an object is actually present. I will illustrate these ideas by detecting and localizing cats in highly cluttered greyscale scenes. This is joint work with Francois Fleuret.


Keywords: Conformal, Ricci flow, Hodge, Teichmuller, Riemann surface

Abstract: Conformal mappings are angle-preserving mappings. All closed surfaces can be conformally mapped to one of three canonical spaces: the sphere, the plane or the hyperbolic disk. All surfaces with boundaries can be mapped to the canonical spaces with circular holes. The computational algorithms for finding such mappings will be explained.

Two surfaces are conformally equivalent if they can be mapped to each other by a conformal mapping. All conformal equivalence classes form a finite dimensional Riemannian manifold, the so-called Teichmuller space. Teichmuller space is a natural shape space model. The algorithm for computing Teichmuller coordinates for each surface will be introduced.

The computational algorithms are based on Ricci flow, which refers to the process of deforming a Riemannian metric proportionally to curvature, so that the curvature evolves according to a heat diffusion. Discrete Ricci flow will be explained in detail.

The broad applications of conformal geometry in computer graphics, computer vision, medical imaging, and networking will be briefly introduced as well.


The Gromov-Hausdorff distance (d_{GH}) is a definition of the discrepancy between metric spaces. Until recently, it was applied mainly in the theoretical exploration of metric spaces in metric geometry, as well as in theoretical computer science, specifically in the context of metric embedding of graphs. A couple of years ago it was introduced into the field of shape analysis by Memoli and Sapiro. In this talk we will explore the relation between the Gromov-Hausdorff distance and multi-dimensional scaling (MDS), a classical approach for embedding a given metric space into one in which distances can be analytically computed. The obvious example of such a target embedding space in MDS is the Euclidean one. Alternatively, we can use the Generalized MDS (GMDS) as a building block for numerically approximating d_{GH}. This generalization deals with target spaces in which distances can be numerically approximated rather than evaluated analytically.
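The classical MDS step with a Euclidean target can be sketched directly from the distance matrix by double-centering followed by an eigendecomposition (GMDS replaces the analytic Euclidean distances with numerically approximated ones, which this sketch does not attempt):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: given a pairwise-distance matrix D, embed the points
    in R^k so that Euclidean distances reproduce D as well as possible."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]         # top-k eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

When D actually comes from points in R^k, the embedding reproduces the distances exactly (up to a rigid motion); for non-Euclidean inputs the negative eigenvalues measure the embedding error.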

The exposition of ideas in metric geometry and numerical optimization will be motivated by practical examples such as 3D face recognition, texture mapping in computer graphics, and defining and numerically exploring intrinsic symmetries. We will start from the actual acquisition process, present a 3D color video camera we developed, and demonstrate the potential of our computational tools in various applications.


In this talk, we study the problem of extracting local scales of oscillatory patterns in images and on plane curves. In the first case, given a multi-scale representation {u(t)} of an image f, we are interested in automatically picking out a few choices of t_i(x), which we call local scales, that best represent the multi-scale structure of f at x. We will characterize local scales coming from the Gaussian kernel. In the second case, we propose an approach to extracting local scales on curves in order to segment objects with irregular boundaries. Theory and experimental results will be presented, with applications to image decomposition/denoising.
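A classical Lindeberg-style scale-selection sketch in the spirit of Gaussian-kernel local scales (a hypothetical stand-in, not the authors' definition): at each point, pick the scale t maximizing the normalized second-derivative response t * |d^2/dx^2 (f * g_t)| of a 1-D signal.

```python
import numpy as np

def local_scale_1d(f, x, variances):
    """For each point of a 1-D signal f sampled at x, select the Gaussian
    scale (variance t) maximizing the normalized Laplacian response
    t * |(f * g_t)''|, a classical automatic scale-selection rule."""
    dx = x[1] - x[0]
    responses = []
    for t in variances:
        g = np.exp(-x**2 / (2 * t))
        g /= g.sum()
        ft = np.convolve(f, g, mode="same")              # smooth at scale t
        lap = np.gradient(np.gradient(ft, dx), dx)       # second derivative
        responses.append(t * np.abs(lap))
    R = np.array(responses)                              # (scales, points)
    return variances[np.argmax(R, axis=0)]               # local scale per point
```

For a Gaussian blob of standard deviation sigma0, the continuous theory selects the variance t = 2*sigma0^2 at the blob center, which gives a quick check of the rule.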

Keywords: unsupervised learning, object recognition, sparse coding, convolutional networks

Abstract: Image processing and recognition have traditionally relied on hard-wired features and trainable classifiers. The next challenge for computer vision, machine learning, and image processing is to devise methods that can automatically learn feature extractors and high-level image representations from labeled and unlabeled data. The set of methods collectively known as "Deep Learning" is an attempt to learn hierarchies of features with multiple levels of abstraction and suitable invariances. I will describe several deep learning methods, some of which involve new forms of sparse coding. Specific model architectures for image recognition, based on stacks of non-linear filter banks and trained with these methods, will be described. A number of applications to object detection, object recognition, and vision-based navigation for mobile robots will be shown.
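One layer of such a stack of non-linear filter banks can be sketched as convolve, rectify, pool. Here the filters are random for illustration, whereas the point of the talk is precisely to learn them:

```python
import numpy as np

def filter_bank_layer(img, filters, pool=2):
    """One convolve-rectify-pool layer: valid 2-D correlation with each
    filter, halfwave rectification, then non-overlapping max pooling."""
    maps = []
    H, W = img.shape
    for f in filters:
        h, w = f.shape
        out = np.zeros((H - h + 1, W - w + 1))
        for i in range(out.shape[0]):                    # valid correlation
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i+h, j:j+w] * f)
        out = np.maximum(out, 0.0)                       # rectification
        oh, ow = out.shape[0] // pool, out.shape[1] // pool
        out = out[:oh*pool, :ow*pool].reshape(oh, pool, ow, pool).max((1, 3))
        maps.append(out)
    return np.stack(maps)
```

Stacking several such layers, with the filters learned by sparse coding rather than drawn at random, is the architecture family the abstract describes.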


October 6, 2009

Principal component analysis is a fundamental operation in computational data analysis, with myriad applications ranging from web search, to bioinformatics, to dynamical system identification, to computer vision and image analysis. However, its performance and applicability in real scenarios are limited by a lack of robustness to outlying or corrupted observations. In this work, we consider the idealized “robust principal component analysis” problem of recovering a low-rank matrix A from corrupted observations D = A + E. Here, the error entries E can be arbitrarily large (modeling grossly corrupted observations common in visual and bioinformatic data), but are assumed to be sparse. We prove that most matrices A can be efficiently and exactly recovered from most error sign-and-support patterns, by solving a simple convex program. Our result holds even when the rank of A grows nearly proportionally (up to a logarithmic factor) to the dimensionality of the observation space and the number of errors in E grows in proportion to the total number of entries in the matrix. A by-product of our analysis is the first proportional growth result for the related but somewhat easier problem of completing a low-rank matrix from a small fraction of its entries. We propose a provably convergent algorithm based on proximal gradient and iterative thresholding that, for large matrices, is significantly faster and more scalable than general-purpose solvers. We provide simulations and real-data examples corroborating the theoretical results. The simulation results have actually revealed even more striking phenomena and remarkable pictures that merit future investigation.

This is joint work with my students John Wright, Arvind Ganesh, and Shankar Rao.
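The thresholding-based solver family referred to above can be sketched as an inexact augmented-Lagrangian iteration for min ||A||_* + lam*||E||_1 subject to D = A + E; the parameter schedule below is a common heuristic, not necessarily the authors' exact algorithm:

```python
import numpy as np

def rpca(D, lam=None, n_iter=50):
    """Robust PCA sketch: split D into low-rank A plus sparse E by
    alternating singular-value thresholding (nuclear-norm prox) and
    elementwise soft-thresholding (l1 prox) with a dual update."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # standard weighting
    norm2 = np.linalg.norm(D, 2)
    Y = D / max(norm2, np.abs(D).max() / lam)   # dual variable
    mu = 1.25 / norm2                           # penalty parameter
    E = np.zeros_like(D)
    for _ in range(n_iter):
        # nuclear-norm prox: shrink the singular values
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # l1 prox: shrink the entries of the residual
        R = D - A + Y / mu
        E = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (D - A - E)                # dual ascent
        mu *= 1.5                               # increase the penalty
    return A, E
```

On a synthetic low-rank-plus-sparse matrix the iteration recovers the low-rank part to high accuracy, matching the exact-recovery regime described in the abstract.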

Brief Biography: Yi Ma is an associate professor at the Electrical & Computer Engineering Department of the University of Illinois at Urbana-Champaign. He is currently on leave as research manager of the Visual Computing group at Microsoft Research Asia in Beijing. His research interests include computer vision, image processing, and systems theory. Yi Ma received two Bachelor's degrees, in Automation and Applied Mathematics, from Tsinghua University (Beijing, China) in 1995, a Master of Science degree in EECS in 1997, a Master of Arts degree in Mathematics in 2000, and a PhD degree in EECS in 2000, all from the University of California at Berkeley. Yi Ma received the David Marr Best Paper Prize at the International Conference on Computer Vision 1999 and the Longuet-Higgins Best Paper Prize at the European Conference on Computer Vision 2004. He also received the CAREER Award from the National Science Foundation in 2004 and the Young Investigator Award from the Office of Naval Research in 2005. He has given several plenary talks at international conferences. He is an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence. He is a senior member of IEEE and a member of ACM, SIAM, and ASEE.


October 5, 2009

Keywords: Gromov-Hausdorff distance, Gromov-Wasserstein distance, shape analysis, metric geometry

Abstract: The Gromov-Hausdorff distance provides a powerful tool for formalizing the problem of shape matching. Two problems with it are that (1) in practice it leads to combinatorial optimization problems which are NP hard and (2) despite its theoretical attractiveness and naturality, it has been difficult to use for studying and establishing links to the many other shape matching methods available in the literature. The Gromov-Wasserstein distance, a variant of the Gromov-Hausdorff distance that is based on mass transportation ideas, directly leads to continuous optimization problems with quadratic objective functions and linear constraints. Furthermore, it has been proved that the methods based on shape distributions, shape context/integral invariants, and eccentricity can all be related to this Gromov-Wasserstein distance via explicit lower bounds. In this talk we review the construction of the GW distance, its properties and lower bounds. In particular, we describe recent work done on (1) relating it to persistent topology based shape signatures, and (2) on defining a certain spectral notion of the GW distance. This spectral notion of the GW distance permits proving that several invariants constructed from the spectrum of the Laplace-Beltrami operator on manifolds are stable in a quantitative way.
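One simple bound in the spirit of the explicit lower bounds mentioned above compares the two spaces' distributions of pairwise distances with a 1-D quantile (Wasserstein-type) distance. A sketch, with an arbitrary quantile grid (illustrative only; the talk's bounds are stated precisely in the underlying papers):

```python
import numpy as np

def distance_distribution_bound(DX, DY, n_quantiles=200):
    """Compare two metric spaces, given their pairwise-distance matrices,
    through the 1-D Wasserstein-type distance between their distributions
    of pairwise distances (a cheap invariant-based comparison)."""
    a = np.sort(DX[np.triu_indices_from(DX, 1)])   # upper-triangle distances
    b = np.sort(DY[np.triu_indices_from(DY, 1)])
    q = np.linspace(0.0, 1.0, n_quantiles)
    return np.mean(np.abs(np.quantile(a, q) - np.quantile(b, q)))
```

Isometric spaces give a zero bound while a global rescaling is detected immediately; being only a lower-bound-style invariant, it can of course vanish on non-isometric spaces too.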


Keywords: Shape spaces, landmark points, sectional curvature

Abstract: In the past few years there has been a growing interest, in diverse scientific communities, in endowing "shape spaces" with Riemannian metrics, so as to be able to measure similarities between shapes and perform statistical analysis on data sets (e.g. for object recognition, target detection and tracking, classification, and automated medical diagnostics). The geometry of such spaces has started to emerge only very recently; in this talk we will explore the sectional curvature of the Riemannian manifold of landmark points (one of the simplest such spaces, in that it is finite-dimensional) and discuss its effects on applications.


Keywords: Online image processing, unsupervised algorithms, fast algorithms, color balance, screened Poisson equation, denoising, image comparison, Retinex theory, affine invariant SIFT

Abstract: This is a new concept of publication for image processing. Putting image processing and image analysis algorithms on line allows every researcher to test the algorithms directly on his or her own images. Sample images are also proposed on each algorithm's site. This project is under construction, but several algorithms are already available at

http://mw.cmla.ens-cachan.fr/megawave/algo/.

Each on line algorithm is described in a web site, which gives the main bibliographic links and comments on many experimental results. Each algorithm is also thoroughly described, and its code can be downloaded. Image processing on line is only possible with algorithms which have been mathematically analyzed and rationalized to the point where they no longer depend on technical parameters. An on line publication is different from -- but can be complementary to -- a journal publication. The on line algorithms must be elaborated to the point where they are fully autonomous, or depend on at most one user parameter (typically the scale). I'll briefly describe the on line algorithms: a microtexture synthesis algorithm, a cartoon + texture decomposition, a fully autonomous line segment detector, a PDE implementation of the Retinex theory of color perception, and a fully affine invariant image comparison algorithm, ASIFT.


October 6, 2009

After spending about five minutes showing recent results on video segmentation (joint work with Adobe), I will describe recent work in my group in the area of dictionary learning and sparse coding. In particular, I will present new models derived from information theory, new models that go beyond standard sparse coding applications into unsupervised clustering, and new models related to compressed sensing.

Keywords: Feature Selection, Canonical Elements, Object Recognition, and Reconstruction

Abstract: The problem of representing a large dataset consisting of complex patterns in a smaller, more compact form has been tackled through the synthesis of new data points to represent clusters of the original data points (feature transformation). In contrast, the focus of this research is the development of generic methods for selecting canonical subsets of datasets that are highly representative of the original complex patterns. The development of the canonical subset method was motivated by the fact that in many cases feature transformation may not be practical, relevant, or even possible. Our objective is to expose the underlying structure of the data and have the global topology drive the subset-selection process. The contributions of this work are the formulation of the subset selection problem as an optimization problem, an analysis of the complexity of the problem, the development of approximation algorithms to compute canonical subsets, and a demonstration of the utility of the algorithms in several problem domains.
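As a hypothetical illustration of subset selection without feature transformation (not the authors' algorithm), greedy farthest-point sampling picks k representatives that cover the data, a classical 2-approximation for the k-center objective:

```python
import numpy as np

def farthest_point_subset(X, k):
    """Greedy k-center (farthest-point) sampling: repeatedly add the point
    farthest from the current subset, so the k representatives cover the
    dataset within twice the optimal covering radius."""
    idx = [0]                                          # arbitrary seed point
    d = np.linalg.norm(X - X[0], axis=1)               # distance to subset
    for _ in range(k - 1):
        j = int(np.argmax(d))                          # farthest point
        idx.append(j)
        d = np.minimum(d, np.linalg.norm(X - X[j], axis=1))
    return idx
```

On clustered data the selected points land one per cluster, which is the "representative of the global topology" behavior the abstract asks of a canonical subset.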


We propose a method based on sparse representation to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has a sparse representation with respect to a dictionary formed by all other data points. In general, finding such a sparse representation is NP-hard. Our key contribution is to show that, under mild assumptions, the sparse representation can be obtained 'exactly' by using l1 optimization. The segmentation of the data is obtained by applying spectral clustering to a similarity matrix built from this sparse representation. Our method can be extended to handle noise, outliers, as well as missing data by exploiting sparsity. Experiments on the Hopkins155 motion segmentation database and other motion sequences with outliers and missing data show that our approach significantly outperforms state-of-the-art methods.
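A compact sketch of the pipeline just described, for two clusters (illustrative parameters; the l1 step here is a plain soft-thresholding iteration rather than an exact solver):

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_subspace_clustering(X, lam=0.05, n_iter=500):
    """Express each point (column of X) as a sparse combination of the
    *other* points via l1-penalized least squares, then spectrally
    bipartition the resulting affinity graph."""
    X = X / np.linalg.norm(X, axis=0)            # normalize the points
    n = X.shape[1]
    C = np.zeros((n, n))
    for j in range(n):
        A = np.delete(X, j, axis=1)              # dictionary of other points
        L = np.linalg.norm(A, 2) ** 2
        c = np.zeros(n - 1)
        for _ in range(n_iter):                  # soft-thresholding iteration
            c = soft(c - A.T @ (A @ c - X[:, j]) / L, lam / L)
        C[np.arange(n) != j, j] = c
    W = np.abs(C) + np.abs(C).T                  # symmetric affinity
    Lap = np.diag(W.sum(1)) - W                  # graph Laplacian
    vals, vecs = np.linalg.eigh(Lap)
    f = vecs[:, :2]
    f = f[:, np.argmax(f.std(axis=0))]           # most informative eigenvector
    return (f > np.median(f)).astype(int)
```

Because points on one subspace are representable by each other but not by points on an independent subspace, the affinity matrix is (near) block-diagonal, which is what the spectral step exploits.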

Keywords: Shape evolution; Shape spaces; Active contours; Riemannian metrics on diffeomorphisms

Abstract: We present a geometric flow approach to the segmentation of two- or three-dimensional shapes by minimizing a cost function similar to those used with geometric active contours or with the Chan-Vese approach. Our goal, well adapted to many shape segmentation problems, including those arising from medical images, is to ensure that the evolving contour or surface remains smooth and diffeomorphic to its initial position over time.

This is formulated as a variational problem in a group of diffeomorphisms equipped with a right-invariant Riemannian metric. A resulting gradient descent algorithm is used, with an interesting variant that constrains the velocity in shape space to belong to a finite-dimensional space determined by time-dependent control points.

Experimental results with 2D and 3D shapes will be presented.
