
Supervised Depth Completion of RGB-D Measurements from Reconstruction Loss


The core of the depth completion pipeline is the differentiable SLAM module, which takes RGB-D data as input and outputs a camera trajectory and a point cloud map estimate. However, the input sensory data may contain noise and missing depth values. Therefore, a Depth Completion module is introduced: it is applied to the raw RGB-D frames before the depth measurements are propagated through the SLAM module. For more information, please refer to the thesis.
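As a rough, hypothetical sketch of the mapping step (not the actual gradslam implementation): given per-frame point clouds and camera-to-world poses, a global map can be built by transforming each frame's points into the world frame and aggregating them. The function name `fuse_frames` is illustrative only.

```python
import numpy as np

def fuse_frames(frame_clouds, poses):
    """Fuse per-frame point clouds (each N_i x 3) into one global map.

    poses: list of 4x4 camera-to-world transforms, one per frame.
    """
    world_points = []
    for pts, T in zip(frame_clouds, poses):
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # N x 4 homogeneous
        world_points.append((homo @ T.T)[:, :3])             # transform to world frame
    return np.vstack(world_points)

# Two single-point "clouds"; the second camera is shifted 1 m along x.
clouds = [np.array([[0.0, 0.0, 1.0]]), np.array([[0.0, 0.0, 1.0]])]
T0 = np.eye(4)
T1 = np.eye(4)
T1[0, 3] = 1.0
fused = fuse_frames(clouds, [T0, T1])
```

A real pipeline would additionally weight and deduplicate points (as gradslam's PointFusion does); this sketch only shows the geometric aggregation.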


Installation

The installation instructions are available at docs/install.md.

Depth Completion

The KITTI Depth Completion dataset is used to train the models. We train the model introduced in J. Uhrig et al., Sparsity Invariant CNNs.
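The key idea of Sparsity Invariant CNNs is a normalized convolution that averages only over valid input pixels and propagates a validity mask. Below is a minimal NumPy sketch of that operation, using an unweighted box kernel instead of the paper's learned weights; the name `sparse_conv2d` is chosen here for illustration, not taken from the paper's code.

```python
import numpy as np

def sparse_conv2d(depth, valid, k=3, eps=1e-8):
    """Normalized ("sparsity invariant") convolution sketch.

    Averages only over valid pixels inside each k x k window, so the
    output is (approximately) independent of the input sparsity level.
    Returns the densified depth and a max-pooled validity mask.
    """
    H, W = depth.shape
    r = k // 2
    d = np.pad(depth * valid, r)          # invalid pixels contribute zero
    m = np.pad(valid.astype(float), r)
    out = np.zeros((H, W))
    new_mask = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            win_d = d[i:i + k, j:j + k].sum()
            win_m = m[i:i + k, j:j + k].sum()
            out[i, j] = win_d / (win_m + eps)   # average over valid pixels only
            new_mask[i, j] = float(win_m > 0)   # max-pooled validity
    return out, new_mask

# Constant depth observed through a checkerboard sparsity pattern:
depth = np.full((5, 5), 2.0)
valid = (np.indices((5, 5)).sum(axis=0) % 2 == 0)
dense, mask = sparse_conv2d(depth, valid)
```

Because the normalization divides by the number of valid pixels, the reconstructed depth stays close to 2.0 everywhere despite half the input being missing, which is the invariance the paper's title refers to.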

During the training process, two supervisory signals are used (tested separately):

  • Mean Squared Error (MSE) loss, computed between predicted and ground-truth depth images,
  • Chamfer loss, computed between predicted and ground-truth point clouds.
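The two supervisory signals can be sketched as follows. Function names are illustrative, and the actual training code may differ in details (e.g. whether the Chamfer distance is squared, or how batches and validity masks are handled).

```python
import numpy as np

def mse_loss(pred, gt, valid):
    """MSE over pixels that have ground-truth depth (KITTI GT is sparse)."""
    diff = (pred - gt)[valid]
    return float((diff ** 2).mean())

def chamfer_loss(p, q):
    """Symmetric Chamfer distance between two point sets (N x 3, M x 3).

    O(N*M) brute force for clarity; real pipelines use KD-trees
    or batched GPU implementations.
    """
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # N x M pairwise
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

# Tiny usage example: identical sets give zero loss,
# a 1 m offset contributes 1.0 per direction.
p = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
q = p + np.array([0.0, 0.0, 1.0])
```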

The pretrained model weights are available via the following link.

Running the training pipeline:

cd ./scripts
python main.py

Differentiable SLAM and SubT simulator


Download RGB-D images

Place them in the folder:

./data/

The data is organized in the same format as the ICL-NUIM dataset.

Explore the depth image data from the simulator (requires Open3D): ./notebooks/explore_data.ipynb

Mapping with GradSLAM

Prerequisite: install ROS

Construct a map from RGBD images input:

roslaunch depth_completion gradslam_bag.launch odom:=gt

You may also want to visualize a ground-truth mesh of the world by passing the additional argument pub_gt_mesh:=true. Note that this option requires PyTorch3D to be installed.

Mapping evaluation

Prerequisite: install PyTorch3D

The ground-truth map from the simulator can be represented as a mesh file.

Download meshes of some cave worlds and place them in the ./data/meshes/ folder.

Compare a map to a mesh: ./notebooks/compare_gt_map_mesh_to_point_cloud.ipynb

It compares a point cloud to a mesh using the following metrics:

  • the closest distance from a point to a mesh edge (averaged over all points in the point cloud),
  • the closest distance from a point to a mesh face (averaged over all points in the point cloud).
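PyTorch3D exposes such metrics (e.g. point_mesh_edge_distance and point_mesh_face_distance in pytorch3d.loss). As a hypothetical, dependency-free illustration of what the point-to-edge term computes, here is a NumPy sketch; the function names are chosen here and are not from the notebook:

```python
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to the segment (a, b), i.e. one mesh edge."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)  # clamp to segment
    return np.linalg.norm(p - (a + t * ab))

def mean_point_to_edges(points, edges):
    """Average over the cloud of the closest distance to any mesh edge."""
    return float(np.mean([min(point_to_segment(p, a, b) for a, b in edges)
                          for p in points]))

# One edge from the origin to (1, 0, 0); a point 2 m above its midpoint.
a = np.zeros(3)
b = np.array([1.0, 0.0, 0.0])
edges = [(a, b)]
pts = np.array([[0.5, 2.0, 0.0]])
```

The point-to-face metric is analogous, with the projection clamped to a triangle instead of a segment; PyTorch3D implements both on the GPU.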

Record the data

Note that this section requires installation of the DARPA SubT simulator and the exploration pipeline.

Alternatively, you may use prerecorded ROS *.bag files and convert them to the ICL-NUIM dataset format.

In order to record a bag file, launch the simulator and simply run:

./scripts/record_bag.sh

You can download prerecorded data from here. Once you have a recorded bag file, convert it to the ICL-NUIM format:

roslaunch depth_completion bag2icl.launch bag:=<full/path/to/bag/file.bag>

GradSLAM and KITTI Depth

Instructions on how to run differentiable SLAM on sequences from the KITTI Depth Completion dataset. We use camera poses from the KITTI Raw dataset (GPS + IMU) and depth measurements from KITTI Depth.


Once you have downloaded the data, move it (or create symbolic links) to the following locations:

depth_completion/data/
├── KITTI
│   ├── depth -> ~/data/datasets/KITTI/depth/
│   └── raw -> ~/data/datasets/KITTI/raw/
└── meshes -> ~/data/meshes/
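Assuming your datasets live under ~/data as shown in the tree above, the symbolic links can be created like this (adjust the source paths to your setup):

```shell
# Create the data directory and link the datasets into it.
mkdir -p depth_completion/data/KITTI
ln -s ~/data/datasets/KITTI/depth depth_completion/data/KITTI/depth
ln -s ~/data/datasets/KITTI/raw   depth_completion/data/KITTI/raw
ln -s ~/data/meshes               depth_completion/data/meshes
```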

Run GradSLAM on a KITTI Depth sequence with the following configuration:

  • the odometry provider in GradSLAM is set to ground-truth poses from the dataset,
  • the depth completion model constructs local maps from sparse clouds and provides them to the SLAM module,
  • sparse clouds from KITTI Depth are used as input to the depth completion model,
  • the pipeline runs on a GPU.

roslaunch depth_completion gradslam_kitti.launch odom:=gt depth_completion:=1 depth_type:=sparse device:='cuda:0'

More details about the argument usage are provided in the corresponding launch file.

Citation

Feel free to cite the package if you find it useful for your research.

@software{Stanek_Supervised_Depth_Completion_2022,
author = {Staněk, Jáchym and Agishev, Ruslan and Petříček, Tomáš and Zimmermann, Karel},
month = {5},
title = {{Supervised Depth Completion of RGB-D Measurements from Reconstruction Loss}},
url = {https://github.com/RuslanAgishev/depth_completion},
version = {0.0.1},
year = {2022}
}
