Data & Code


A Unifying Variational Framework for Gaussian Process Motion Planning

We introduce a framework for robot motion planning based on variational Gaussian processes, which unifies and generalizes various probabilistic-inference-based motion-planning algorithms and connects them with optimization-based planners. Our framework provides a principled and flexible way to incorporate equality-based, inequality-based, and soft motion-planning constraints during end-to-end training, is straightforward to implement, and provides both interval-based and Monte-Carlo-based uncertainty estimates.
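
To make the variational view concrete, below is a minimal sketch, not the paper's algorithm: a 2D trajectory is treated as the mean of a variational Gaussian over waypoints, regularized by a GP smoothness prior and shaped by soft equality (endpoint) and inequality (obstacle-clearance) penalties. All names, costs, and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal illustrative sketch (not the paper's method): optimize only the
# variational mean of a Gaussian over 2D waypoints under a GP prior plus
# soft constraint penalties. All parameters below are assumptions.

T = 30                                            # number of waypoints
times = np.linspace(0.0, 1.0, T)
start, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])
obstacle, radius = np.array([0.5, 0.5]), 0.2      # a single circular obstacle

def se_kernel(t, lengthscale=0.15):
    """Squared-exponential kernel; its inverse penalizes non-smooth paths."""
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

K_inv = np.linalg.inv(se_kernel(times) + 1e-6 * np.eye(T))

def objective(flat):
    path = flat.reshape(T, 2)
    smooth = np.trace(path.T @ K_inv @ path)                   # GP prior term
    dists = np.linalg.norm(path - obstacle, axis=1)
    clearance = np.sum(np.maximum(0.0, radius - dists) ** 2)   # soft inequality
    endpoints = np.sum((path[0] - start) ** 2) + np.sum((path[-1] - goal) ** 2)  # soft equality
    return smooth + 500.0 * clearance + 1000.0 * endpoints

x0 = np.linspace(start, goal, T).ravel()          # straight-line initialization
plan = minimize(objective, x0, method="L-BFGS-B").x.reshape(T, 2)
```

A full variational treatment would also optimize the posterior covariance; that covariance is what yields the interval-based and Monte-Carlo-based uncertainty estimates mentioned above.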


Gaussian Process Implicit Surfaces for Surface Reconstruction

We present a method based on Gaussian process regression to build implicit surfaces (GPIS) for 3D surface reconstruction from raw point cloud data, achieving better accuracy than the standard GPIS formulation. Our approach encodes local and global shape information from the data to capture the underlying shape. The proposed pipeline works on dense, sparse, and noisy raw point clouds and can be parallelized to improve computational efficiency. We evaluate our approach on synthetic and real point cloud datasets, including data from robot visual and tactile sensors.
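
For illustration, the sketch below shows the standard GPIS construction (the baseline our method improves on, not the improved formulation itself): a GP regresses an implicit function whose zero level set approximates the surface, with off-surface targets placed along estimated normals. The data and hyperparameters are synthetic assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Standard GPIS baseline on a synthetic 2D "scan" of a unit circle:
# surface points get target 0; points offset along the (here known)
# normals get signed off-surface targets. Everything is illustrative.
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2.0 * np.pi, 80)
surface = np.c_[np.cos(angles), np.sin(angles)]
surface += 0.01 * rng.standard_normal(surface.shape)   # sensor noise

normals = surface / np.linalg.norm(surface, axis=1, keepdims=True)
eps = 0.1
X = np.vstack([surface, surface + eps * normals, surface - eps * normals])
y = np.concatenate([np.zeros(80), eps * np.ones(80), -eps * np.ones(80)])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4).fit(X, y)

# Query the implicit function: values near 0 indicate the surface, and the
# predictive standard deviation gives a per-point reconstruction uncertainty.
query = np.array([[0.0, 1.02], [0.0, 0.5]])
f_mean, f_std = gp.predict(query, return_std=True)
```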


Visual and Tactile Point Cloud Data from Real Robots for Shape Modeling

Robotic applications often require perception of object shape from sensory data that can be noisy and incomplete. To facilitate the analysis of new methods and the comparison of different approaches to shape modeling (e.g., surface estimation), completion, and exploration, we provide real sensory data acquired by exploring various objects of different complexities. The dataset includes visual and tactile readings in the form of 3D point clouds, obtained using two different robot setups equipped with visual and tactile sensors. During data collection, the robots touch the experimental objects in a predefined manner at various exploration configurations and gather visual and tactile points in the same coordinate frame, based on calibration between the robots and the cameras used. The goal of this exhaustive exploration procedure is to sense parts of the objects that are not visible to the cameras but can be sensed by the tactile sensors at touched areas.
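
A hypothetical loading sketch follows; the file names and directory layout are illustrative assumptions, not the dataset's actual structure. It relies on the property described above that visual and tactile clouds share one coordinate frame, so fusing them is a simple concatenation.

```python
import open3d as o3d

# Illustrative sketch: paths are assumptions, not the dataset's real layout.
visual = o3d.io.read_point_cloud("object_01/visual.pcd")     # camera point cloud
tactile = o3d.io.read_point_cloud("object_01/tactile.pcd")   # touch contact points

# Both clouds are expressed in the same coordinate frame, so fusion is
# plain concatenation; color the modalities differently for inspection.
visual.paint_uniform_color([0.6, 0.6, 0.6])
tactile.paint_uniform_color([1.0, 0.0, 0.0])
merged = visual + tactile
o3d.visualization.draw_geometries([merged])
```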


A Low-Cost Pipeline and Database for Reproducible Manipulation Research

We present a novel approach and database that combine the inexpensive generation of 3D object models from monocular or RGB-D camera images with 3D printing and an object tracking algorithm. The approach does not require expensive and controlled 3D scanning setups, and aims to enable anyone with a camera to scan, print, and track complex objects for manipulation research. We present CapriDB, an extensible database resulting from this approach, which initially contains 40 textured and 3D-printable mesh models together with tracking features, to facilitate the adoption of the proposed approach.
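
A hypothetical usage sketch, assuming a CapriDB entry ships as a textured mesh file (the path and file name are illustrative): load the mesh, check that it is printable, and export it for the 3D printer.

```python
import trimesh

# Illustrative sketch: the path below is an assumption, not CapriDB's layout.
mesh = trimesh.load("capridb/object_001/textured_mesh.obj", force="mesh")

# A watertight mesh is a prerequisite for reliable 3D printing.
print("watertight:", mesh.is_watertight)
if mesh.is_watertight:
    print("volume:", mesh.volume)

mesh.export("object_001_print.stl")    # STL is a common printer input format
```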