Technische Universität München Robotics and Embedded Systems

Real-time Computer Vision Topics

Organizer: Giorgio Panin, Ph.D.
Type: Hauptseminar (advanced seminar)
Semester: WS 2005/2006
ECTS: 4.0
Time & Place: Thu 15:30 - 17:00, MI 03.07.023
Certificate: successful participation in the seminar




Contour tracking with Particle Filters – The Condensation Algorithm

The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimodal, cannot represent simultaneous alternative hypotheses. The CONDENSATION algorithm uses “factored sampling”, previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set. CONDENSATION uses learned dynamical models, together with visual observations, to propagate the random set over time. The result is highly robust tracking of agile motion. Notwithstanding the use of stochastic methods, the algorithm runs in real-time.
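To give a feel for the select-predict-measure cycle described above, here is a minimal Python sketch of one Condensation-style filtering step. It is an illustrative assumption throughout: the state is a single 1-D position under a random-walk dynamical model with a direct noisy observation, rather than the curve parameters and image measurements of the actual algorithm, and all noise levels are made up for the demo.

```python
import numpy as np

def condensation_step(particles, weights, observe, z, rng,
                      dyn_std=0.3, obs_std=1.0):
    """One select-predict-measure cycle of a Condensation-style particle filter."""
    # Select: resample particles according to their current weights
    # (this is the "factored sampling" step).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    # Predict: propagate each particle through the dynamical model
    # (here a simple random walk stands in for a learned model).
    particles = particles + rng.normal(0.0, dyn_std, size=len(particles))
    # Measure: reweight by the likelihood of the new observation z.
    w = np.exp(-0.5 * ((observe(particles) - z) / obs_std) ** 2)
    weights = w / w.sum()
    return particles, weights

# Demo: track a fixed hidden position from noisy observations.
rng = np.random.default_rng(0)
true_x = 3.0
particles = rng.uniform(-10.0, 10.0, size=500)     # diffuse prior
weights = np.full(500, 1.0 / 500)
for _ in range(30):
    z = true_x + rng.normal(0.0, 1.0)              # noisy observation
    particles, weights = condensation_step(particles, weights,
                                           lambda p: p, z, rng)

estimate = np.sum(weights * particles)             # posterior mean
```

Because the posterior is carried as a weighted particle set rather than a single Gaussian, the same loop can represent several competing hypotheses at once, which is exactly what the abstract argues a Kalman filter cannot do.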

Contour tracking based on local image statistics – The CCD Algorithm

The task of fitting parametric curve models to boundaries of perceptually meaningful image regions is a key problem in computer vision with numerous applications, such as image segmentation, pose estimation, 3-D reconstruction, and object tracking. The Contracting Curve Density (CCD) algorithm and the CCD tracker are solutions to this problem. The CCD algorithm solves the curve-fitting problem for a single image, whereas the CCD tracker solves it for a sequence of images. The CCD algorithm extends the state of the art in two important ways. First, it applies a novel likelihood function for the assessment of a fit between the curve model and the image data. This likelihood function can cope with highly inhomogeneous image regions because it is formulated in terms of local image statistics that are learned on the fly from the vicinity of the expected curve. Second, the CCD algorithm employs blurred curve models as an efficient means for iteratively optimizing the posterior density over possible model parameters. Blurred curve models enable the algorithm to trade off two conflicting objectives, namely a large area of convergence and high accuracy.

The CCD tracker is a fast variant of the CCD algorithm. It achieves a low runtime, even for high-resolution images, by focusing on a small set of carefully selected pixels. In each iteration step, the tracker takes into account only those pixels that are likely to further reduce the uncertainty of the curve. Moreover, the CCD tracker exploits statistical dependencies between successive images, which also improves its robustness. This can be achieved without substantially increasing the runtime.
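The core idea of "local image statistics learned on the fly from the vicinity of the expected curve" can be illustrated with a toy example. The sketch below is not the CCD algorithm itself (which uses fuzzy pixel assignments, Gaussian statistics, and MAP optimization over the curve parameters); it only shows, under deliberately simple assumptions, how statistics gathered in narrow bands on either side of a candidate curve separate well exactly when the curve sits on the true region boundary. The image, the circle model, and all sizes are invented for the demo.

```python
import numpy as np

# Synthetic image: a bright disk of radius 20 on a dark, noisy background,
# centred at (32, 32) in a 64x64 image.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(xx - 32, yy - 32)
image = np.where(r < 20, 0.8, 0.2) + rng.normal(0.0, 0.05, (64, 64))

def side_statistics(image, radius, band=3.0):
    """Learn local statistics (here just the means) in narrow bands on
    either side of a candidate circle around the fixed centre (32, 32)."""
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    r = np.hypot(xx - 32, yy - 32)
    inside = image[(r < radius) & (r > radius - band)]
    outside = image[(r >= radius) & (r < radius + band)]
    return inside.mean(), outside.mean()

def fit_score(image, radius):
    """Separation of the two local distributions: large only when the
    candidate curve coincides with the true region boundary."""
    m_in, m_out = side_statistics(image, radius)
    return abs(m_in - m_out)
```

At the true radius the two bands straddle the disk edge and their means differ strongly; for a too-small or too-large radius both bands fall in the same region and the score collapses, which is the signal an iterative curve fit can contract towards.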

Mutual Information for 3D-2D Model-Image alignment and optimization techniques

An information-theoretic approach has been developed for finding the pose of an object in an image. The technique does not require information about the surface properties of the object, besides its shape, and is robust with respect to variations of illumination. In its derivation, few assumptions are made about the nature of the imaging process. As a result, the algorithms are quite general and can foreseeably be used in a wide variety of imaging situations. Experiments are presented that demonstrate the approach in registering magnetic resonance images, aligning a complex 3D object model to real scenes including clutter and occlusion, tracking a human head in a video sequence, and aligning a view-based 2D object model to real images. The method is based on a formulation of the mutual information between the model and the image. As applied in this work, the technique is intensity-based, rather than feature-based. It works well in domains where edge or gradient-magnitude based methods have difficulty, yet it is more robust than traditional correlation. Additionally, it has an efficient implementation that is based on stochastic approximation.
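The quantity being maximized can be sketched with a simple histogram estimator of mutual information between two intensity images. Note this is a simplification: the approach described above estimates entropies with Parzen windows and optimizes by stochastic approximation, whereas the sketch below just bins intensities, and the test image is synthetic. It does show the key property exploited for alignment: mutual information peaks when the two images are registered and drops when they are not.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram estimate (in nats) of the mutual information
    between two equally sized intensity images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Demo: an image is highly informative about itself, but nearly
# independent of a misaligned (shifted) copy when intensities are random.
img = np.random.default_rng(2).random((64, 64))
mi_aligned = mutual_information(img, img)
mi_shifted = mutual_information(img, np.roll(img, 1, axis=0))
```

Pose estimation in this framework amounts to searching over transformation parameters for the one that maximizes `mi_aligned`-like scores, which is why no model of the surface reflectance is needed: any consistent intensity relationship, not just equality, raises the mutual information.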

Feature detection and tracking with advanced methodologies – The SIFT Algorithm

SIFT (Scale Invariant Feature Transform) is a method for extracting distinctive invariant features from images, which can be used to perform reliable matching between different images of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, addition of noise, change in 3D viewpoint, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This can be a starting point for an approach that uses these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through a least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
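The nearest-neighbor matching stage can be sketched in a few lines. The snippet below covers only that stage, with Lowe's distance-ratio test for rejecting ambiguous matches; the descriptors are synthetic 128-dimensional unit vectors standing in for real SIFT descriptors, the ratio threshold of 0.8 is the commonly cited value, and the brute-force linear search replaces the fast approximate nearest-neighbor structure (best-bin-first) used in practice.

```python
import numpy as np

def match_descriptors(query, database, ratio=0.8):
    """Nearest-neighbor descriptor matching with a distance-ratio test.

    Returns (query_index, database_index) pairs whose nearest neighbor
    is clearly closer than the second-nearest candidate."""
    matches = []
    for i, d in enumerate(query):
        dists = np.linalg.norm(database - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:   # reject ambiguous matches
            matches.append((i, int(nearest)))
    return matches

# Demo with synthetic descriptors: 50 random 128-dimensional unit vectors,
# plus one query that is a slightly perturbed copy of database entry 3.
rng = np.random.default_rng(0)
database = rng.normal(size=(50, 128))
database /= np.linalg.norm(database, axis=1, keepdims=True)
query = database[3] + rng.normal(0.0, 0.01, size=128)
query /= np.linalg.norm(query)
matches = match_descriptors(query[None, :], database)
```

The ratio test is what makes single-feature matches reliable against a large database: a genuinely distinctive feature has one neighbor far closer than all others, while background clutter tends to have several near-equidistant neighbors and is discarded.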