Andy Zeng will present his generals exam Tuesday, May 16, 2017 in CS 302 at 11am.
The members of his committee are: Tom Funkhouser (adviser), Szymon Rusinkiewicz, and Elad Hazan. Everyone is invited to attend his talk, and those faculty wishing to remain for the oral exam following are welcome to do so.

Title: Self-Supervised Local 3D Geometric Descriptors and RGB-D Object Segmentation

Abstract: In this talk, I will present two projects in which we proposed methods for labeling massive amounts of data automatically. These methods enabled us to train strong data-driven models for RGB-D object segmentation and for local 3D geometric descriptors.

In the first project, we present a vision system that recognizes objects and their poses in noisy and cluttered environments. Our approach first segments and labels multiple RGB-D views of a scene with a fully convolutional neural network, and then fits pre-scanned 3D object models to the resulting segmentation to obtain the 6D object poses. Training a deep neural network for segmentation, however, typically requires a large amount of manually labeled training data. To alleviate this difficulty, we present an automatic method that generates a large labeled segmentation dataset without tedious manual annotation and that scales easily to more object categories. Our system is reliable across a variety of scenarios, including warehouse automation, and took 3rd place in the worldwide Amazon Picking Challenge 2016.

In the second project, we present 3DMatch, a data-driven local 3D volumetric patch descriptor for matching local geometric features in the noisy, low-resolution, and incomplete depth images produced by commodity range sensors. To amass training data for our model, we present an automatic method that leverages the millions of correspondence labels found in existing RGB-D reconstructions.
Experiments show that our descriptor not only matches local geometry in new scenes for reconstruction, but also generalizes to different tasks and spatial scales (e.g., instance-level object model alignment for the Amazon Picking Challenge, and mesh surface correspondence). Results show that 3DMatch consistently outperforms other state-of-the-art approaches by a significant margin.

Textbook: Computer Vision: Algorithms and Applications

[FPFH] Fast Point Feature Histograms (FPFH) for 3D Registration
[SIFT] Object Recognition from Local Scale-Invariant Features
[SpinImage] Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes
[KinectFusion] KinectFusion: Real-Time Dense Surface Mapping and Tracking
[PoseEstimation] Learning 6D Object Pose Estimation Using 3D Object Coordinates
[MatchNet] MatchNet: Unifying Feature and Metric Learning for Patch-Based Matching
[Egomotion] Learning to See by Moving
[FCN] Fully Convolutional Networks for Semantic Segmentation
[Survey3DDescriptors] Performance Evaluation of 3D Local Feature Descriptors
[Grasping] Supersizing Self-Supervision: Learning to Grasp from 50K Tries and 700 Robot Hours
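The second stage of the pose pipeline described in the abstract (fitting a pre-scanned model to segmented scene points to recover a 6D pose) can be sketched with a least-squares rigid alignment. This is an illustrative assumption, not the talk's actual implementation: the function name and the choice of the Kabsch algorithm are hypothetical, and real systems would combine descriptor matching with robust estimation (e.g. RANSAC) rather than assume known correspondences.

```python
# Hypothetical sketch: recover a rigid 6D pose (R, t) mapping model points
# onto segmented scene points, assuming correspondences are already known.
# Uses the Kabsch algorithm; names here are illustrative, not from the talk.
import numpy as np

def fit_rigid_transform(model_pts, scene_pts):
    """Least-squares rigid transform (Kabsch) mapping model_pts to scene_pts."""
    mu_m = model_pts.mean(axis=0)
    mu_s = scene_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (model_pts - mu_m).T @ (scene_pts - mu_s)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_s - R @ mu_m
    return R, t

# Toy check: transform a synthetic point cloud, then recover the pose.
rng = np.random.default_rng(0)
model = rng.standard_normal((50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
scene = model @ R_true.T + t_true

R_est, t_est = fit_rigid_transform(model, scene)
print(np.allclose(R_est, R_true, atol=1e-6))  # True
print(np.allclose(t_est, t_true, atol=1e-6))  # True
```

In practice the correspondences themselves would come from matching learned descriptors (as 3DMatch does) between the model and the segmented scene region, with outlier rejection before the final fit.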