- 01/2014 - 06/2014: Mitsubishi Electric Research Labs
- Research areas: 3D face detection/tracking
- 12/2012 - 12/2013: Technische Universität München, Germany
- Guest researcher
- Research areas: rigid/deformable 3D shape registration, 3D object recognition and pose estimation
- 03/2008 - 11/2012: Technische Universität München, Germany
- Ph.D. in Computer Science (summa cum laude)
- Thesis: Stochastic and Deterministic Methods for 3D Shape Registration (supervisor: Prof. Dr.-Ing. Darius Burschka)
- 2005 - 2008: Leibniz Universität Hannover, Germany
- 2001 - 2005: Brandenburg University of Applied Sciences
- Dipl.-Inf. (FH) in Computer Science
- Thesis: Object Recognition on the RCUBE Platform (supervisor: Prof. Dr. sc. techn. Harald Loose)
- Born 1983 in Sofia, Bulgaria
- 2008: Best graduate at the Faculty of Electrical Engineering and Computer Science of the Leibniz Universität Hannover (Preis des Präsidenten; announcement in German).
- 2005: Prize of the Association of German Engineers (Verein Deutscher Ingenieure - VDI) for outstanding study achievements at the Brandenburg University of Applied Sciences (announcement in German, page 84).
- 2005: Scholarship from the German Academic Exchange Service (DAAD) for outstanding study achievements at the Brandenburg University of Applied Sciences.
- Rigid/deformable 3D shape registration
- 3D object recognition
- Numerical optimization
- Mesh processing
- Reviewer for
- Elsevier Computer Vision and Image Understanding (CVIU)
- International Conference on Robotics and Automation (ICRA)
Predicting Human Intention in Visual Observations of Hand/Object Interactions

The main contribution of this work is a probabilistic method for predicting human manipulation intention from image sequences of human-object interaction. Predicting intention amounts to inferring the imminent manipulation task once the human hand is observed to have stably grasped the object. Inference is performed by means of a probabilistic graphical model that encodes object grasping tasks over the 3D state of the observed scene. The 3D state is extracted from RGB-D image sequences by a novel vision-based, markerless hand-object 3D tracking framework. To deal with the high-dimensional state space and the mixed data types (discrete and continuous) involved in grasping tasks, we introduce a generative vector quantization method using mixture models and self-organizing maps. This yields a compact model for encoding grasping actions that is capable of handling uncertain and partial sensory data. Experiments showed that the model, trained on simulated data, provides a potent basis for accurate goal inference from partial and noisy observations of actual real-world demonstrations. We also show a grasp selection process, guided by the inferred human intention, to illustrate the use of the system for goal-directed grasp imitation. This work was presented at the IEEE International Conference on Robotics and Automation (ICRA 2013).
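The self-organizing-map side of the quantization step can be illustrated with a minimal sketch. The grid size, learning-rate schedule and Gaussian neighborhood below are illustrative choices, not the parameters used in the work:

```python
import numpy as np

def train_som(data, grid_w=4, grid_h=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny self-organizing map that quantizes continuous
    state vectors into grid_w * grid_h discrete codebook nodes."""
    rng = np.random.default_rng(seed)
    n_nodes = grid_w * grid_h
    weights = rng.normal(size=(n_nodes, data.shape[1]))
    # 2D grid coordinate of each node, used by the neighborhood kernel
    coords = np.array([(i % grid_w, i // grid_w) for i in range(n_nodes)], float)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                 # decaying learning rate
        sigma = sigma0 * (1.0 - epoch / epochs) + 1e-3    # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian neighborhood
            weights += lr * h[:, None] * (x - weights)
    return weights

def quantize(x, weights):
    """Map a continuous observation to its discrete codebook index."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```

After training, `quantize` turns a continuous scene-state vector into one of 16 discrete symbols, which is the kind of compact, mixed-type representation the graphical model operates on.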
Rigid 3D Geometry Matching for Grasping of Known Objects in Cluttered Scenes

In this work, we present an efficient 3D object recognition and pose estimation approach for grasping procedures in cluttered and occluded environments. In contrast to common appearance-based approaches, we rely solely on 3D geometry information. Our method is based on a robust geometric descriptor, a hashing technique and an efficient, localized RANSAC-like sampling strategy. We assume that each object is represented by a model consisting of a set of points with corresponding surface normals. Our method simultaneously recognizes multiple model instances and estimates their poses in the scene. A variety of tests shows that the proposed method performs well on noisy, cluttered and unsegmented range scans in which only small parts of the objects are visible. The main procedure of the algorithm has linear time complexity, resulting in a high recognition speed which allows a direct integration of the method into a continuous manipulation task. The experimental validation with a 7-degree-of-freedom Cartesian impedance controlled robot shows how the method can be used for grasping objects from a complex random stack. This application demonstrates how the integration of computer vision and soft robotics leads to a robotic system capable of acting in unstructured and occluded environments. Researchers at the following labs are using our 3D geometry matching software:
- The Robotics Group at Columbia University in New York, USA.
- The Media Computing Group at RWTH Aachen, Germany.
Deformable 3D Shape Registration Based on Local Similarity Transforms

We propose a new method for deformable 3D shape registration. The algorithm computes shape transitions based on local similarity transforms, which makes it possible to model not only as-rigid-as-possible deformations but also local and global scale. We formulate an ordinary differential equation (ODE) which describes the transition of a source shape towards a target shape. We assume that both shapes are roughly pre-aligned (e.g., frames of a motion sequence). The ODE consists of two terms. The first one causes the deformation by pulling the source shape points towards corresponding points on the target shape. Initial correspondences are estimated by closest-point search and then refined by an efficient smoothing scheme. The second term regularizes the deformation by drawing the points towards locally defined rest positions. These are given by the optimal similarity transform which matches the initial (undeformed) neighborhood of a source point to its current (deformed) neighborhood. The proposed ODE allows for very efficient explicit numerical integration. This avoids the repeated solution of large linear systems that is usually required when solving the registration problem within general-purpose non-linear optimization frameworks. We experimentally validate the proposed method on a variety of real data and perform a comparison with several state-of-the-art approaches.
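The two-term structure of the ODE can be sketched as a single explicit Euler step. The neighbor-centroid rest position used below is a crude stand-in for the locally optimal similarity transforms of the actual method, and the brute-force closest-point search stands in for an accelerated one:

```python
import numpy as np

def euler_step(src, tgt, nbr, alpha=1.0, beta=1.0, dt=0.05):
    """One explicit Euler step of a simplified deformation ODE.

    src: (n, 3) current source points; tgt: (m, 3) target points;
    nbr: (n, k) neighbor indices into src defining local neighborhoods."""
    # Data term: pull each point toward its closest target point
    # (brute-force search here; a spatial data structure would be used in practice).
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
    corr = tgt[np.argmin(d, axis=1)]
    data_term = corr - src
    # Regularization term: pull each point toward a rest position.  The
    # paper derives rest positions from locally optimal similarity
    # transforms; the neighbor centroid below is only a rough substitute.
    rest = src[nbr].mean(axis=1)
    reg_term = rest - src
    return src + dt * (alpha * data_term + beta * reg_term)
```

Iterating this step deforms the source toward the target without ever solving a linear system, which is the efficiency argument made above.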
An Efficient RANSAC for 3D Object Recognition in Noisy and Occluded Scenes

In this paper, we present an efficient algorithm for 3D object recognition in the presence of clutter and occlusions in noisy, sparse and unsegmented range data. The method uses a robust geometric descriptor, a hashing technique and an efficient RANSAC-like sampling strategy. We assume that each object is represented by a model consisting of a set of points with corresponding surface normals. Our method recognizes multiple model instances and estimates their positions and orientations in the scene. The algorithm scales well with the number of models and its main procedure runs in linear time in the number of scene points. Moreover, the approach is conceptually simple and easy to implement. Tests on a variety of real data sets show that the proposed method performs well on noisy and cluttered scenes in which only small parts of the objects are visible.
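A toy version of the descriptor-hashing-plus-sampling pipeline can be sketched as follows. The discretized point-pair feature, the pseudo-point Kabsch fit and the support threshold are illustrative simplifications; the actual descriptor and localized sampling scheme are more elaborate:

```python
import itertools
import numpy as np

def pair_feature(p1, n1, p2, n2, step=0.1):
    """Discretized point-pair feature: point distance and the angles of
    each normal with the connecting line and with each other (a simple
    stand-in for the paper's geometric descriptor)."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    u = d / (dist + 1e-12)
    f = (dist,
         np.arccos(np.clip(n1 @ u, -1.0, 1.0)),
         np.arccos(np.clip(n2 @ u, -1.0, 1.0)),
         np.arccos(np.clip(n1 @ n2, -1.0, 1.0)))
    return tuple(int(v / step) for v in f)

def build_table(mp, mn, step=0.1):
    """Hash every ordered model point pair by its feature."""
    table = {}
    for i, j in itertools.permutations(range(len(mp)), 2):
        table.setdefault(pair_feature(mp[i], mn[i], mp[j], mn[j], step),
                         []).append((i, j))
    return table

def rigid_from_pairs(mp, mn, sp, sn):
    """Rigid transform mapping an oriented model pair onto a scene pair,
    via the Kabsch algorithm on points plus normal-tip pseudo-points."""
    A = np.vstack([mp, mp + mn])
    B = np.vstack([sp, sp + sn])
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    return R, cb - R @ ca

def recognize(mp, mn, sp, sn, table, iters=200, tol=0.05, step=0.1, seed=0):
    """RANSAC-like loop: sample a scene pair, look up matching model
    pairs, and keep the pose supported by the most model points."""
    rng = np.random.default_rng(seed)
    best_score, best_pose = 0, None
    for _ in range(iters):
        i, j = rng.choice(len(sp), size=2, replace=False)
        for a, b in table.get(pair_feature(sp[i], sn[i], sp[j], sn[j], step), []):
            R, t = rigid_from_pairs(mp[[a, b]], mn[[a, b]], sp[[i, j]], sn[[i, j]])
            proj = mp @ R.T + t
            d = np.linalg.norm(proj[:, None] - sp[None], axis=2).min(axis=1)
            score = int((d < tol).sum())  # model points explained by the scene
            if score > best_score:
                best_score, best_pose = score, (R, t)
    return best_score, best_pose
```

Because a single sampled scene pair, via the hash table, directly proposes full poses, far fewer samples are needed than in a naive RANSAC over unconstrained correspondence triples.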
Stochastic Optimization for Rigid Point Set Registration

In this work, we propose a new algorithm for pairwise rigid point set registration with unknown point correspondences. The main properties of our method are noise robustness, outlier resistance and globally optimal alignment. The problem of registering two point clouds is converted into the minimization of a nonlinear cost function. We propose a new cost function based on an inverse distance kernel that significantly reduces the impact of noise and outliers. In order to achieve a globally optimal registration without the need for an initial alignment, we develop a new stochastic approach for global minimization. It is an adaptive sampling method which uses a generalized BSP tree and allows for minimizing nonlinear scalar fields over complex-shaped search spaces such as the space of rotations. We introduce a new technique for a hierarchical decomposition of the rotation space into disjoint, equally sized parts called spherical boxes. Furthermore, a procedure for uniform point sampling from spherical boxes is presented. Tests on a variety of point sets show that the proposed registration method performs very well on noisy, outlier-corrupted and incomplete data. For comparison, we report how two state-of-the-art registration algorithms perform on the same data sets. Researchers at the following labs are using our software:
- Computational Learning and Motor Control Lab at USC, Los Angeles, USA.
- Computer Vision and Active Perception Lab at KTH in Stockholm, Sweden.
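The outlier-damping effect of an inverse distance kernel can be seen in a minimal sketch of such a cost function. The particular kernel below is an illustrative choice; the exact form used in the paper may differ:

```python
import numpy as np

def inverse_distance_cost(src, tgt, sigma=0.1):
    """Registration cost built from an inverse distance kernel.  Each
    source point contributes -1 / (1 + d^2 / sigma^2), where d is the
    distance to its closest target point.  The contribution is bounded
    in [-1, 0], so far-away outliers cannot dominate the total cost,
    unlike with a sum-of-squared-distances objective."""
    d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(axis=2).min(axis=1)
    return float(-np.sum(1.0 / (1.0 + d2 / sigma ** 2)))
```

A perfect alignment attains the minimum value of minus the number of source points, and the cost increases smoothly as the alignment degrades, which is the kind of landscape the stochastic global minimizer explores over the rotation space.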
Visualizing Morphogenesis and Growth by Temporal Interpolation of Surface-Based 3D Atlases

Image-based 3D atlases have proven to be very useful in biological and medical research. They serve as spatial reference systems that enable researchers to integrate experimental data in a spatially coherent way and thus to relate diverse data from different experiments. Typically, such atlases consist of tissue-separating surfaces. The next step is 4D atlases that provide insight into temporal development and spatiotemporal relationships. Such atlases are based on time series of 3D images and related 3D models. We present work on temporal interpolation between such 3D atlases. Due to the morphogenesis of tissues during biological development, the topology of the non-manifold surfaces may vary between subsequent time steps. For animation, a smooth morphing between non-manifold surfaces with different topologies is therefore needed.
- Open source Kinect 3D viewer (metric 3D reconstruction for Kinect).
- Hauptseminar (advanced seminar) 3D Object Recognition and Registration, winter semester 2009/2010
- Chavdar Papazov, Sami Haddadin, Sven Parusel, Kai Krieger, and Darius Burschka. Rigid 3D Geometry Matching for Grasping of Known Objects in Cluttered Scenes. International Journal of Robotics Research, 31, April 2012.
- Chavdar Papazov and Darius Burschka. Stochastic Global Optimization for Robust Point Set Registration. Computer Vision and Image Understanding, 115, December 2011.
- Chavdar Papazov and Darius Burschka. Deformable 3D Shape Registration Based on Local Similarity Transforms. Computer Graphics Forum, 30, 2011 (special issue SGP'11).
- Dan Song, Nikolaos Kyriazis, Iason Oikonomidis, Chavdar Papazov, Antonis Argyros, Darius Burschka, and Danica Kragic. Predicting Human Intention in Visual Observations of Hand/Object Interactions. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'13), May 2013.
- Jonathan Bohren, Chavdar Papazov, Darius Burschka, Kai Krieger, Sven Parusel, Sami Haddadin, William Shepherdson, Gregory Hager, and Louis Whitcomb. A Pilot Study in Vision-Based Augmented Telemanipulation for Remote Assembly Over High-Latency Networks. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'13), 2013.
- Chavdar Papazov and Darius Burschka. An Efficient RANSAC for 3D Object Recognition in Noisy and Occluded Scenes. In Proceedings of the 10th Asian Conference on Computer Vision (ACCV'10), November 2010 (oral presentation; acceptance rate: 5%).
- Chavdar Papazov and Darius Burschka. Stochastic Optimization for Rigid Point Set Registration. In Proceedings of the 5th International Symposium on Visual Computing (ISVC'09), December 2009 (oral presentation).
- Chavdar Papazov, Vincent J. Dercksen, Hans Lamecker, and Hans-Christian Hege. Visualizing Morphogenesis and Growth by Temporal Interpolation of Surface-Based 3D Atlases. In Proceedings of the 2008 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2008.