Vision and Mobile Robotics Laboratory | Publications


Publications of year 2006
Journal articles or book chapters
  1. Burcu Akinci, Frank Boukamp, Chris Gordon, Daniel Huber, Catherine Lyons, and Kuhn Park. A Formalism for Utilization of Sensor Systems and Integrated Project Models for Active Construction Quality Control. Automation in Construction, 15(2):124--138, February 2006. (url) (pdf)
    Annotation: "Defects experienced during construction are costly and preventable. However, inspection programs employed today cannot adequately detect and manage defects that occur on construction sites, as they are based on measurements at specific locations and times, and are not integrated into complete electronic models. Emerging sensing technologies and project modeling capabilities motivate the development of a formalism that can be used for active quality control on construction sites. In this paper, we outline a process of acquiring and updating detailed design information, identifying inspection goals, inspection planning, as-built data acquisition and analysis, and defect detection and management. We discuss the validation of this formalism based on four case studies." .

    @article{Akinci_2006_5375,
    author = "Burcu Akinci and Frank Boukamp and Chris Gordon and Daniel Huber and Catherine Lyons and Kuhn Park",
    title = "A Formalism for Utilization of Sensor Systems and Integrated Project Models for Active Construction Quality Control",
    journal = "Automation in Construction",
    month = "February",
    year = "2006",
    volume = "15",
    number = "2",
    pages = "124--138",
    url = "http://www.ri.cmu.edu/pubs/pub_5375.html",
    pdf = "http://www.ri.cmu.edu/pub_files/pub4/akinci_burcu_2006_1/akinci_burcu_2006_1.pdf",
    keywords = "",
    annote = "Defects experienced during construction are costly and preventable. However, inspection programs employed today cannot adequately detect and manage defects that occur on construction sites, as they are based on measurements at specific locations and times, and are not integrated into complete electronic models. Emerging sensing technologies and project modeling capabilities motivate the development of a formalism that can be used for active quality control on construction sites. In this paper, we outline a process of acquiring and updating detailed design information, identifying inspection goals, inspection planning, as-built data acquisition and analysis, and defect detection and management. We discuss the validation of this formalism based on four case studies." 
    }

  2. Sanjiv Kumar and Martial Hebert. Discriminative Random Fields. International Journal of Computer Vision, 68(2):179--202, 2006. (pdf)
    Annotation: "In this research we address the problem of classification and labeling of regions given a single static natural image. Natural images exhibit strong spatial dependencies, and modeling these dependencies in a principled manner is crucial to achieve good classification accuracy. In this work, we present Discriminative Random Fields (DRFs) to model spatial interactions in images in a discriminative framework based on the concept of Conditional Random Fields proposed by Lafferty et al (Lafferty et al., 2001). The DRFs classify image regions by incorporating neighborhood spatial interactions in the labels as well as the observed data. The DRF framework offers several advantages over the conventional Markov Random Field (MRF) framework. First, the DRFs allow to relax the strong assumption of conditional independence of the observed data generally used in the MRF framework for tractability. This assumption is too restrictive for a large number of applications in computer vision. Second, the DRFs derive their classification power by exploiting the probabilistic discriminative models instead of the generative models used for modeling observations in the MRF framework. Third, the interaction in labels in DRFs is based on the idea of pairwise discrimination of the observed data making it data-adaptive instead of being fixed a priori as in MRFs. Finally, all the parameters in the DRF model are estimated simultaneously from the training data unlike the MRF framework where the likelihood parameters are usually learned separately from the field parameters. We present preliminary experiments with man-made structure detection and binary image restoration tasks, and compare the DRF results with the MRF results." .

    @article{Sanjiv_2006_5468,
    author = "Sanjiv Kumar and Martial Hebert",
    title = "Discriminative Random Fields",
    journal = "International Journal of Computer Vision",
    year = "2006",
    volume = "68",
    number = "2",
    pages = "179--202",
    url = "",
    pdf = "http://www.ri.cmu.edu/pub_files/pub4/sanjiv_kumar_2006_1/sanjiv_kumar_2006_1.pdf",
    keywords = "",
    annote = "In this research we address the problem of classification and labeling of regions given a single static natural image. Natural images exhibit strong spatial dependencies, and modeling these dependencies in a principled manner is crucial to achieve good classification accuracy. In this work, we present Discriminative Random Fields (DRFs) to model spatial interactions in images in a discriminative framework based on the concept of Conditional Random Fields proposed by Lafferty et al (Lafferty et al., 2001). The DRFs classify image regions by incorporating neighborhood spatial interactions in the labels as well as the observed data. The DRF framework offers several advantages over the conventional Markov Random Field (MRF) framework. First, the DRFs allow to relax the strong assumption of conditional independence of the observed data generally used in the MRF framework for tractability. This assumption is too restrictive for a large number of applications in computer vision. Second, the DRFs derive their classification power by exploiting the probabilistic discriminative models instead of the generative models used for modeling observations in the MRF framework. Third, the interaction in labels in DRFs is based on the idea of pairwise discrimination of the observed data making it data-adaptive instead of being fixed a priori as in MRFs. Finally, all the parameters in the DRF model are estimated simultaneously from the training data unlike the MRF framework where the likelihood parameters are usually learned separately from the field parameters. We present preliminary experiments with man-made structure detection and binary image restoration tasks, and compare the DRF results with the MRF results." 
    }
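
    A toy version of the scoring the abstract describes, with a logistic association potential per site and a data-dependent pairwise potential per edge, can be sketched as follows. This is an illustrative simplification only: the parameters w and v are assumed given by hand here, whereas the paper estimates them jointly from training data.

    ```python
    import numpy as np

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    def drf_log_score(y, feats, edge_feats, edges, w, v):
        """Un-normalized log-conditional of a toy DRF over labels y in {-1,+1}:
        a logistic association potential per site plus a data-dependent
        pairwise (interaction) potential per edge."""
        assoc = sum(np.log(sigmoid(y[i] * (w @ feats[i]))) for i in range(len(y)))
        inter = sum(y[i] * y[j] * (v @ edge_feats[e]) for e, (i, j) in enumerate(edges))
        return assoc + inter
    ```

    Because the pairwise term depends on edge features computed from the observed data, the smoothing it induces adapts to the image instead of being fixed a priori, which is the key contrast with a standard MRF prior.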

Conference articles
  1. James H. Hays, Marius Leordeanu, Alexei A. Efros, and Yanxi Liu. Discovering Texture Regularity via Higher-Order Matching. In 9th European Conference on Computer Vision, May 2006. (url) (pdf)

    @inproceedings{Hays_2006_5295,
    author = "James H. Hays and Marius Leordeanu and Alexei A. Efros and Yanxi Liu",
    title = "Discovering Texture Regularity via Higher-Order Matching",
    booktitle = "9th European Conference on Computer Vision",
    month = "May",
    year = "2006",
    url = "http://www.ri.cmu.edu/pubs/pub_5295.html",
    pdf = "http://www.ri.cmu.edu/pubs/pub_5295.html",
    keywords = "",
    annote = "" 
    }

  2. Derek Hoiem, Alexei A. Efros, and Martial Hebert. Putting Objects in Perspective. In Proc. IEEE Computer Vision and Pattern Recognition (CVPR), June 2006. (url) (pdf)
    Annotation: "Image understanding requires not only individually estimating elements of the visual world but also capturing the interplay among them. In this paper, we provide a framework for placing local object detection in the context of the overall 3D scene by modeling the interdependence of objects, surface orientations, and camera viewpoint. Most object detection methods consider all scales and locations in the image as equally likely. We show that with probabilistic estimates of 3D geometry, both in terms of surfaces and world coordinates, we can put objects into perspective and model the scale and location variance in the image. Our approach reflects the cyclical nature of the problem by allowing probabilistic object hypotheses to refine geometry and vice-versa. Our framework allows painless substitution of almost any object detector and is easily extended to include other aspects of image understanding. Our results confirm the benefits of our integrated approach. " .

    @inproceedings{Hoiem_2006_5467,
    author = "Derek Hoiem and Alexei A. Efros and Martial Hebert",
    title = "Putting Objects in Perspective",
    booktitle = "Proc. IEEE Computer Vision and Pattern Recognition (CVPR)",
    month = "June",
    year = "2006",
    url = "http://www.ri.cmu.edu/pubs/pub_5467.html",
    pdf = "http://www.ri.cmu.edu/pub_files/pub4/hoiem_derek_2006_1/hoiem_derek_2006_1.pdf",
    keywords = "",
    annote = "Image understanding requires not only individually estimating elements of the visual world but also capturing the interplay among them. In this paper, we provide a framework for placing local object detection in the context of the overall 3D scene by modeling the interdependence of objects, surface orientations, and camera viewpoint. Most object detection methods consider all scales and locations in the image as equally likely. We show that with probabilistic estimates of 3D geometry, both in terms of surfaces and world coordinates, we can put objects into perspective and model the scale and location variance in the image. Our approach reflects the cyclical nature of the problem by allowing probabilistic object hypotheses to refine geometry and vice-versa. Our framework allows painless substitution of almost any object detector and is easily extended to include other aspects of image understanding. Our results confirm the benefits of our integrated approach. " 
    }
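
    The perspective relation that couples object scale, camera viewpoint, and the horizon can be illustrated with a small helper. This is a textbook simplification assuming a level camera and a ground-plane object, not the paper's probabilistic model; the function name and parameters are hypothetical.

    ```python
    def image_height(object_height, camera_height, v_bottom, v_horizon):
        """Expected pixel height of an object resting on the ground plane,
        given the image row of the horizon and the row of the object's
        ground-contact point (rows grow downward, object below the horizon):
            h_img = H_obj * (v_bottom - v_horizon) / H_cam
        """
        return object_height * (v_bottom - v_horizon) / camera_height
    ```

    For example, a 1.8 m person whose feet appear 80 rows below the horizon, seen from a 1.6 m camera, should span about 90 pixels; deviations from such predictions are exactly what lets object hypotheses and geometry estimates refine one another.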

  3. Marius Leordeanu and Martial Hebert. Efficient MAP approximation for dense energy functions. In International Conference on Machine Learning 2006, May 2006. (url) (pdf)
    Annotation: "We present an efficient method for maximizing energy functions with first and second order potentials, suitable for MAP labeling estimation problems that arise in undirected graphical models. Our approach is to relax the integer constraints on the solution in two steps. First we efficiently obtain the relaxed global optimum following a procedure similar to the iterative power method for finding the largest eigenvector of a matrix. Next, we map the relaxed optimum on a simplex and show that the new energy obtained has a certain optimal bound. Starting from this energy we follow an efficient coordinate ascent procedure that is guaranteed to increase the energy at every step and converge to a solution that obeys the initial integral constraints. We also present a sufficient condition for ascent procedures that guarantees the increase in energy at every step. " .

    @inproceedings{Leordeanu_2006_5437,
    author = "Marius Leordeanu and Martial Hebert",
    title = "Efficient MAP approximation for dense energy functions",
    booktitle = "International Conference on Machine Learning 2006",
    month = "May",
    year = "2006",
    url = "http://www.ri.cmu.edu/pubs/pub_5437.html",
    pdf = "http://www.ri.cmu.edu/pub_files/pub4/leordeanu_marius_2006_1/leordeanu_marius_2006_1.pdf",
    keywords = "",
    annote = "We present an efficient method for maximizing energy functions with first and second order potentials, suitable for MAP labeling estimation problems that arise in undirected graphical models. Our approach is to relax the integer constraints on the solution in two steps. First we efficiently obtain the relaxed global optimum following a procedure similar to the iterative power method for finding the largest eigenvector of a matrix. Next, we map the relaxed optimum on a simplex and show that the new energy obtained has a certain optimal bound. Starting from this energy we follow an efficient coordinate ascent procedure that is guaranteed to increase the energy at every step and converge to a solution that obeys the initial integral constraints. We also present a sufficient condition for ascent procedures that guarantees the increase in energy at every step. " 
    }
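
    The first step the abstract describes, finding the relaxed global optimum as the leading eigenvector, can be sketched with plain power iteration. The matrix W below is a stand-in for the assembled first- and second-order potentials, and the subsequent simplex mapping and coordinate-ascent discretization from the paper are omitted.

    ```python
    import numpy as np

    def relaxed_map(W, iters=100):
        """Maximize x^T W x over the unit sphere by power iteration.

        W is a symmetric nonnegative matrix collecting the potentials over
        candidate assignments; its leading eigenvector is the relaxed
        optimum. Illustrative sketch only."""
        x = np.full(W.shape[0], 1.0 / np.sqrt(W.shape[0]))  # uniform start
        for _ in range(iters):
            x = W @ x
            x /= np.linalg.norm(x)  # project back onto the unit sphere
        return x
    ```

    For a nonnegative, connected W the leading eigenvector is itself nonnegative (Perron-Frobenius), which is what makes the later mapping onto the simplex well behaved.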

  4. Caroline Pantofaru, Gyuri Dorkó, Cordelia Schmid, and Martial Hebert. Combining Regions and Patches for Object Class Localization. In The Beyond Patches Workshop in conjunction with the IEEE conference on Computer Vision and Pattern Recognition, June 2006. (url) (pdf)
    Annotation: "We introduce a method for object class detection and localization which combines regions generated by image segmentation with local patches. Region-based descriptors can model and match regular textures reliably, but fail on parts of the object which are textureless. They also cannot repeatably identify interest points on their boundaries. By incorporating information from patch-based descriptors near the regions into a new feature, the Region-based Context Feature (RCF), we can address these issues. We apply Region-based Context Features in a semi-supervised learning framework for object detection and localization. This framework produces object-background segmentation masks of deformable objects. Numerical results are presented for pixel-level performance. " .

    @inproceedings{Pantofaru_2006_5432,
    author = "Caroline Pantofaru and Gyuri Dorkó and Cordelia Schmid and Martial Hebert",
    title = "Combining Regions and Patches for Object Class Localization",
    booktitle = "The Beyond Patches Workshop in conjunction with the IEEE conference on Computer Vision and Pattern Recognition",
    month = "June",
    year = "2006",
    url = "http://www.ri.cmu.edu/pubs/pub_5432.html",
    pdf = "http://www.ri.cmu.edu/pub_files/pub4/pantofaru_caroline_2006_1/pantofaru_caroline_2006_1.pdf",
    keywords = "",
    annote = "We introduce a method for object class detection and localization which combines regions generated by image segmentation with local patches. Region-based descriptors can model and match regular textures reliably, but fail on parts of the object which are textureless. They also cannot repeatably identify interest points on their boundaries. By incorporating information from patch-based descriptors near the regions into a new feature, the Region-based Context Feature (RCF), we can address these issues. We apply Region-based Context Features in a semi-supervised learning framework for object detection and localization. This framework produces object-background segmentation masks of deformable objects. Numerical results are presented for pixel-level performance. " 
    }

  5. Andrew Stein and Martial Hebert. Local Detection of Occlusion Boundaries in Video. In British Machine Vision Conference, September 2006. (url) (pdf)
    Annotation: "Occlusion boundaries are notoriously difficult for many patch-based computer vision algorithms, but they also provide potentially useful information about scene structure and shape. Using short video clips, we present a novel method for scoring the degree to which edges exhibit occlusion. We first utilize a spatio-temporal edge detector which estimates edge strength, orientation, and normal motion. By then extracting patches from either side of each detected (possibly moving) edgelet, we can estimate and compare motion to determine if occlusion is present. This completely local, bottom-up approach is intended to provide powerful low-level information for use by higher-level reasoning methods." .

    @inproceedings{Stein_2006_5472,
    author = "Andrew Stein and Martial Hebert",
    title = "Local Detection of Occlusion Boundaries in Video",
    booktitle = "British Machine Vision Conference",
    month = "September",
    year = "2006",
    pdf = "http://www.ri.cmu.edu/pub_files/pub4/stein_andrew_2006_3/stein_andrew_2006_3.pdf",
    url = "http://www.ri.cmu.edu/pubs/pub_5472.html",
    keywords = "",
    annote = "Occlusion boundaries are notoriously difficult for many patch-based computer vision algorithms, but they also provide potentially useful information about scene structure and shape. Using short video clips, we present a novel method for scoring the degree to which edges exhibit occlusion. We first utilize a spatio-temporal edge detector which estimates edge strength, orientation, and normal motion. By then extracting patches from either side of each detected (possibly moving) edgelet, we can estimate and compare motion to determine if occlusion is present. This completely local, bottom-up approach is intended to provide powerful low-level information for use by higher-level reasoning methods." 
    }

  6. Andrew Stein and Martial Hebert. Using Spatio-Temporal Patches for Simultaneous Estimation of Edge Strength, Orientation, and Motion. In Beyond Patches Workshop at IEEE Conference on Computer Vision and Pattern Recognition, June 2006. (url) (pdf)
    Annotation: "We describe an extension to ordinary patch-based edge detection in images using spatio-temporal volumetric patches from video. The inclusion of temporal information enables us to estimate motion normal to edges in addition to edge strength and spatial orientation. The method can handle complex edges in clutter by comparing distributions of data on either half of an extracted patch, rather than modeling the intensity profile of the edge. An efficient approach is provided for building the necessary histograms which samples candidate edge orientations and motions. Results are compared to classical spatio-temporal filtering techniques. " .

    @inproceedings{Stein_2006_5404,
    author = "Andrew Stein and Martial Hebert",
    title = "Using Spatio-Temporal Patches for Simultaneous Estimation of Edge Strength, Orientation, and Motion",
    booktitle = "Beyond Patches Workshop at IEEE Conference on Computer Vision and Pattern Recognition",
    month = "June",
    year = "2006",
    url = "http://www.ri.cmu.edu/pubs/pub_5404.html",
    pdf = "http://www.ri.cmu.edu/pub_files/pub4/stein_andrew_2006_2/stein_andrew_2006_2.pdf",
    keywords = "",
    annote = "We describe an extension to ordinary patch-based edge detection in images using spatio-temporal volumetric patches from video. The inclusion of temporal information enables us to estimate motion normal to edges in addition to edge strength and spatial orientation. The method can handle complex edges in clutter by comparing distributions of data on either half of an extracted patch, rather than modeling the intensity profile of the edge. An efficient approach is provided for building the necessary histograms which samples candidate edge orientations and motions. Results are compared to classical spatio-temporal filtering techniques. " 
    }
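
    The half-distribution comparison the abstract describes can be illustrated in a purely spatial, single-orientation toy form: score a vertical edge hypothesis through the center of a patch by the chi-square distance between the intensity histograms of its two halves. The paper's actual detector works on spatio-temporal volumes over many candidate orientations and motions.

    ```python
    import numpy as np

    def half_patch_edge_strength(patch, bins=8):
        """Chi-square distance, in [0, 1], between intensity histograms of
        the left and right halves of a 2-D patch with values in [0, 1].
        High values indicate a strong vertical edge at the patch center."""
        h, w = patch.shape
        hist = lambda half: np.histogram(half, bins=bins, range=(0.0, 1.0))[0].astype(float)
        hl, hr = hist(patch[:, : w // 2]), hist(patch[:, w // 2 :])
        hl /= hl.sum()
        hr /= hr.sum()
        denom = hl + hr
        mask = denom > 0
        return 0.5 * np.sum((hl[mask] - hr[mask]) ** 2 / denom[mask])
    ```

    Comparing distributions rather than fitting an intensity profile is what lets this style of detector cope with complex, cluttered edges.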

  7. Andrew Stein, Andres Huertas, and Larry Matthies. Attenuating Stereo Pixel-Locking via Affine Window Adaptation. In IEEE International Conference on Robotics and Automation, May 2006. (url) (pdf)
    Annotation: "For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as pixel-locking, which produces artificially peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it has the ability to correct much larger initial disparity errors than previous approaches and is more general as it applies not only to the ground plane. We demonstrate the method on synthetic imagery as well as real stereo data from an autonomous outdoor vehicle." .

    @inproceedings{Stein_2006_5378,
    author = "Andrew Stein and Andres Huertas and Larry Matthies",
    title = "Attenuating Stereo Pixel-Locking via Affine Window Adaptation",
    booktitle = "IEEE International Conference on Robotics and Automation",
    month = "May",
    year = "2006",
    url = "http://www.ri.cmu.edu/pubs/pub_5378.html",
    pdf = "http://www.ri.cmu.edu/pub_files/pub4/stein_andrew_2006_1/stein_andrew_2006_1.pdf",
    keywords = "",
    annote = "For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as pixel-locking, which produces artificially peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it has the ability to correct much larger initial disparity errors than previous approaches and is more general as it applies not only to the ground plane. We demonstrate the method on synthetic imagery as well as real stereo data from an autonomous outdoor vehicle." 
    }
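
    For context, the standard parabola refinement the abstract identifies as the source of pixel-locking looks like the sketch below. This is the baseline being improved on, not the paper's affine-window method, and the cost-array indexing is a hypothetical simplification.

    ```python
    def subpixel_disparity(cost, d):
        """Refine an integer disparity d by fitting a parabola to the
        matching costs at d-1, d, d+1 and returning the parabola's vertex."""
        c_minus, c0, c_plus = cost[d - 1], cost[d], cost[d + 1]
        denom = c_minus - 2.0 * c0 + c_plus
        if denom == 0.0:
            return float(d)  # degenerate (flat) cost: keep the integer estimate
        return d + 0.5 * (c_minus - c_plus) / denom
    ```

    Because the same three-point fit is applied everywhere, the recovered sub-pixel offsets cluster near integer values, which is the artificially peaked histogram the abstract refers to.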

  8. Ranjith Unnikrishnan and Martial Hebert. Extracting Scale and Illuminant Invariant Regions Through Color. In 17th British Machine Vision Conference, September 2006. (url) (pdf)
    Annotation: "Despite the fact that color is a powerful cue in object recognition, the extraction of scale-invariant interest regions from color images frequently begins with a conversion of the image to grayscale. The isolation of interest points is then completely determined by luminance, and the use of color is deferred to the stage of descriptor formation. This seemingly innocuous conversion to grayscale is known to suppress saliency and can lead to representative regions being undetected by procedures based only on luminance. Furthermore, grayscaled images of the same scene under even slightly different illuminants can appear sufficiently different as to affect the repeatability of detections across images. We propose a method that combines information from the color channels to drive the detection of scale-invariant keypoints. By factoring out the local effect of the illuminant using an expressive linear model, we demonstrate robustness to a change in the illuminant without having to estimate its properties from the image. Results are shown on challenging images from two commonly used color constancy datasets. " .

    @inproceedings{Unnikrishnan_2006_5474,
    author = "Ranjith Unnikrishnan and Martial Hebert",
    title = "Extracting Scale and Illuminant Invariant Regions Through Color",
    booktitle = "17th British Machine Vision Conference",
    month = "September",
    year = "2006",
    url = "http://www.ri.cmu.edu/pubs/pub_5474.html",
    pdf = "http://www.ri.cmu.edu/pub_files/pub4/unnikrishnan_ranjith_2006_3/unnikrishnan_ranjith_2006_3.pdf",
    keywords = "",
    annote = "Despite the fact that color is a powerful cue in object recognition, the extraction of scale-invariant interest regions from color images frequently begins with a conversion of the image to grayscale. The isolation of interest points is then completely determined by luminance, and the use of color is deferred to the stage of descriptor formation. This seemingly innocuous conversion to grayscale is known to suppress saliency and can lead to representative regions being undetected by procedures based only on luminance. Furthermore, grayscaled images of the same scene under even slightly different illuminants can appear sufficiently different as to affect the repeatability of detections across images. We propose a method that combines information from the color channels to drive the detection of scale-invariant keypoints. By factoring out the local effect of the illuminant using an expressive linear model, we demonstrate robustness to a change in the illuminant without having to estimate its properties from the image. Results are shown on challenging images from two commonly used color constancy datasets. " 
    }

  9. Ranjith Unnikrishnan, Jean-Francois Lalonde, Nicolas Vandapel, and Martial Hebert. Scale Selection for the Analysis of Point-Sampled Curves. In Third International Symposium on 3D Processing, Visualization and Transmission (3DPVT 2006), June 2006. (url) (pdf)
    Annotation: "An important task in the analysis and reconstruction of curvilinear structures from unorganized 3-D point samples is the estimation of tangent information at each data point. Its main challenges are in (1) the selection of an appropriate scale of analysis to accommodate noise, density variation and sparsity in the data, and in (2) the formulation of a model and associated objective function that correctly expresses their effects. We pose this problem as one of estimating the neighborhood size for which the principal eigenvector of the data scatter matrix is best aligned with the true tangent of the curve, in a probabilistic sense. We analyze the perturbation on the direction of the eigenvector due to finite samples and noise using the expected statistics of the scatter matrix estimators, and employ a simple iterative procedure to choose the optimal neighborhood size. Experiments on synthetic and real data validate the behavior predicted by the model, and show competitive performance and improved stability over leading polynomial-fitting alternatives that require a preset scale. " .

    @inproceedings{Unnikrishnan_2006_5435,
    author = "Ranjith Unnikrishnan and Jean-Francois Lalonde and Nicolas Vandapel and Martial Hebert",
    title = "Scale Selection for the Analysis of Point-Sampled Curves",
    booktitle = "Third International Symposium on 3D Processing, Visualization and Transmission (3DPVT 2006)",
    month = "June",
    year = "2006",
    url = "http://www.ri.cmu.edu/pubs/pub_5435.html",
    pdf = "http://www.ri.cmu.edu/pub_files/pub4/unnikrishnan_ranjith_2006_1/unnikrishnan_ranjith_2006_1.pdf",
    keywords = "",
    annote = "An important task in the analysis and reconstruction of curvilinear structures from unorganized 3-D point samples is the estimation of tangent information at each data point. Its main challenges are in (1) the selection of an appropriate scale of analysis to accommodate noise, density variation and sparsity in the data, and in (2) the formulation of a model and associated objective function that correctly expresses their effects. We pose this problem as one of estimating the neighborhood size for which the principal eigenvector of the data scatter matrix is best aligned with the true tangent of the curve, in a probabilistic sense. We analyze the perturbation on the direction of the eigenvector due to finite samples and noise using the expected statistics of the scatter matrix estimators, and employ a simple iterative procedure to choose the optimal neighborhood size. Experiments on synthetic and real data validate the behavior predicted by the model, and show competitive performance and improved stability over leading polynomial-fitting alternatives that require a preset scale. " 
    }
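
    The fixed-scale baseline that the paper improves on can be sketched as follows: estimate the tangent at a point as the principal eigenvector of the scatter matrix of its k nearest neighbors. The paper's contribution is choosing the neighborhood size automatically; here k is a hand-picked constant.

    ```python
    import numpy as np

    def tangent_at(points, i, k):
        """Tangent estimate at points[i]: principal eigenvector of the
        scatter matrix of its k nearest neighbors (points is an (n, 3)
        array of samples along a curve). Fixed-scale sketch only."""
        dists = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        centered = nbrs - nbrs.mean(axis=0)
        scatter = centered.T @ centered
        _, vecs = np.linalg.eigh(scatter)  # eigenvalues in ascending order
        return vecs[:, -1]                 # direction of largest variance
    ```

    With noise and uneven sampling, the right k varies along the curve, which is precisely why the paper selects the neighborhood size per point instead of presetting it.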

Internal reports
  1. Ranjith Unnikrishnan, Jean-Francois Lalonde, Nicolas Vandapel, and Martial Hebert. Scale Selection for the Analysis of Point Sampled Curves: Extended Report. Technical report CMU-RI-TR-06-25, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, June 2006. (url) (pdf)
    Annotation: "An important task in the analysis and reconstruction of curvilinear structures from unorganized 3-D point samples is the estimation of tangent information at each data point. Its main challenges are in (1) the selection of an appropriate scale of analysis to accommodate noise, density variation and sparsity in the data, and in (2) the formulation of a model and associated objective function that correctly expresses their effects. We pose this problem as one of estimating the neighborhood size for which the principal eigenvector of the data scatter matrix is best aligned with the true tangent of the curve, in a probabilistic sense. We analyze the perturbation on the direction of the eigenvector due to finite samples and noise using the expected statistics of the scatter matrix estimators, and employ a simple iterative procedure to choose the optimal neighborhood size. Experiments on synthetic and real data validate the behavior predicted by the model, and show competitive performance and improved stability over leading polynomial-fitting alternatives that require a preset scale. " .

    @techreport{Unnikrishnan_2006_5450,
    author = "Ranjith Unnikrishnan and Jean-Francois Lalonde and Nicolas Vandapel and Martial Hebert",
    title = "Scale Selection for the Analysis of Point Sampled Curves: Extended Report",
    institution = "Robotics Institute, Carnegie Mellon University",
    month = "June",
    year = "2006",
    number = "CMU-RI-TR-06-25",
    address = "Pittsburgh, PA",
    url = "http://www.ri.cmu.edu/pubs/pub_5450.html",
    pdf = "http://www.ri.cmu.edu/pub_files/pub4/unnikrishnan_ranjith_2006_2/unnikrishnan_ranjith_2006_2.pdf",
    keywords = "",
    annote = "An important task in the analysis and reconstruction of curvilinear structures from unorganized 3-D point samples is the estimation of tangent information at each data point. Its main challenges are in (1) the selection of an appropriate scale of analysis to accommodate noise, density variation and sparsity in the data, and in (2) the formulation of a model and associated objective function that correctly expresses their effects. We pose this problem as one of estimating the neighborhood size for which the principal eigenvector of the data scatter matrix is best aligned with the true tangent of the curve, in a probabilistic sense. We analyze the perturbation on the direction of the eigenvector due to finite samples and noise using the expected statistics of the scatter matrix estimators, and employ a simple iterative procedure to choose the optimal neighborhood size. Experiments on synthetic and real data validate the behavior predicted by the model, and show competitive performance and improved stability over leading polynomial-fitting alternatives that require a preset scale. " 
    }



The VMR Lab is part of the Vision and Autonomous Systems Center within the Robotics Institute in the School of Computer Science, Carnegie Mellon University.
This page was generated by a modified version of bibtex2html written by Grégoire Malandain