PCAM: Product of Cross-Attention Matrices for Rigid Registration of Point Clouds

Anh-Quan Cao^1,2, Gilles Puy^1, Alexandre Boulch^1, Renaud Marlet^1,3
^1 valeo.ai, France    ^2 Inria, France    ^3 ENPC, France

If you find this code or work useful, please cite our paper:

@inproceedings{cao21pcam,
  title={{PCAM}: {P}roduct of {C}ross-{A}ttention {M}atrices for {R}igid {R}egistration of {P}oint {C}louds},
  author={Cao, Anh-Quan and Puy, Gilles and Boulch, Alexandre and Marlet, Renaud},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2021},
}

Preparation

Installation

  1. This code was implemented with Python 3.7, PyTorch 1.6.0 and CUDA 10.2. Please install PyTorch:
pip install torch==1.6.0 torchvision==0.7.0
  2. A part of the code (voxelisation) uses MinkowskiEngine 0.4.3. Please install it on your system:
sudo apt-get update
sudo apt install libgl1-mesa-glx
sudo apt install libopenblas-dev g++-7
export CXX=g++-7 
pip install -U MinkowskiEngine==0.4.3 --install-option="--blas=openblas" -v
  3. Clone this repository and install the additional dependencies:
$ git clone https://github.com/valeoai/PCAM.git
$ cd PCAM/
$ pip install -r requirements.txt
  4. Install lightconvpoint [5], which is an early version of FKAConv:
$ pip install -e ./lcp
  5. Finally, install pcam:
$ pip install -e ./

You can edit pcam's code on the fly and import pcam's functions and classes in other projects as well.
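
As a quick sanity check of the editable installs (a minimal sketch; it only assumes that the packages installed above are importable under the names pcam, lightconvpoint and MinkowskiEngine), you can run:

$ python -c "import pcam, lightconvpoint, MinkowskiEngine; print('installation OK')"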

Datasets

3DMatch and KITTI

Follow the instructions on the DGR github repository to download both datasets.

Place 3DMatch in the folder /path/to/pcam/data/3dmatch/, which should have the structure described here.

Place KITTI in the folder /path/to/pcam/data/kitti/, which should have the structure described here.

You can create soft links with the command ln -s if the datasets are stored somewhere else on your system.
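
For instance, assuming the datasets actually live under /storage/datasets (a hypothetical location used only for illustration), the expected layout can be preserved with:

$ ln -s /storage/datasets/3dmatch /path/to/pcam/data/3dmatch
$ ln -s /storage/datasets/kitti /path/to/pcam/data/kitti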

For these datasets, we use the same dataloaders as in DGR [1-3], up to a few modifications for code compatibility.

ModelNet40

Download the dataset here and unzip it in the folder /path/to/pcam/data/modelnet/, which should have the structure described here.

Again, you can create soft links with the command ln -s if the datasets are stored somewhere else on your system.

For this dataset, we use the same dataloader as in PRNet [4], up to a few modifications for code compatibility.

Pretrained models

Download PCAM pretrained models here and unzip the file in the folder /path/to/pcam/trained_models/, which should have the structure described here.

Testing PCAM

As we randomly subsample the point clouds in PCAM, the scores vary slightly from one run to another. In our paper, we ran 3 independent evaluations on the complete test set and averaged the scores.
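
To reproduce this protocol, you can simply launch the evaluation several times and average the reported metrics yourself (a sketch; it assumes eval.py prints its scores to stdout and leaves the averaging to you):

$ cd /path/to/pcam/scripts/
$ for i in 1 2 3; do python eval.py with ../configs/3dmatch/soft.yaml; done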

3DMatch

We provide two different pre-trained models for 3DMatch: one for PCAM-sparse and one for PCAM-soft, both trained using 4096 input points.

To test the PCAM-soft model, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/soft.yaml

To test the PCAM-sparse model, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/sparse.yaml

Optional

As in DGR [1], the results can be improved using different levels of post-processing.

  1. Keeping only the pairs of points with the highest confidence scores (the threshold was optimised on the validation set of 3DMatch).
$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/soft_filter.yaml
$ python eval.py with ../configs/3dmatch/sparse_filter.yaml
  2. Using, in addition, the refinement by optimisation proposed by DGR [1].
$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/soft_refinement.yaml
$ python eval.py with ../configs/3dmatch/sparse_refinement.yaml
  3. Using, as well, the safeguard proposed by DGR [1].
$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/3dmatch/soft_safeguard.yaml
$ python eval.py with ../configs/3dmatch/sparse_safeguard.yaml

Note: For a fair comparison, we fixed the safeguard condition so that it is applied to the same proportion of scans as in DGR [1].

KITTI

We provide two different pre-trained models for KITTI: one for PCAM-sparse and one for PCAM-soft, both trained using 2048 input points.

To test the PCAM-soft model, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/kitti/soft.yaml

To test the PCAM-sparse model, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/kitti/sparse.yaml

Optional

As in DGR [1], the results can be improved by refining them with ICP.

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/kitti/soft_icp.yaml
$ python eval.py with ../configs/kitti/sparse_icp.yaml 

ModelNet40

There are three variants of this dataset. Please refer to [4] for how these variants are constructed.

Unseen objects

To test the PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/modelnet/soft.yaml
$ python eval.py with ../configs/modelnet/sparse.yaml

Unseen categories

To test the PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/modelnet/soft_unseen.yaml
$ python eval.py with ../configs/modelnet/sparse_unseen.yaml

Unseen objects with noise

To test the PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python eval.py with ../configs/modelnet/soft_noise.yaml
$ python eval.py with ../configs/modelnet/sparse_noise.yaml

Training

The models are saved in the folder /path/to/pcam/trained_models/new_training/{DATASET}/{CONFIG}, where {DATASET} is the name of the dataset and {CONFIG} describes the PCAM architecture and the losses used for training.

3DMatch

To train a PCAM-soft model, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/3dmatch/soft.yaml

You can then test this new model by typing:

$ python eval.py with ../configs/3dmatch/soft.yaml PREFIX='new_training'

To train a PCAM-sparse model, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/3dmatch/sparse.yaml

Training took about 12 days on an Nvidia Tesla V100S-32GB.

You can then test this new model by typing:

$ python eval.py with ../configs/3dmatch/sparse.yaml PREFIX='new_training'

KITTI

To train PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/kitti/soft.yaml
$ python train.py with ../configs/kitti/sparse.yaml

Training took about 1 day on an Nvidia GeForce RTX 2080 Ti.

You can then test these new models by typing:

$ python eval.py with ../configs/kitti/soft.yaml PREFIX='new_training'
$ python eval.py with ../configs/kitti/sparse.yaml PREFIX='new_training'

ModelNet

Training PCAM on ModelNet took about 10 hours on an Nvidia GeForce RTX 2080.

Unseen objects

To train PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/modelnet/soft.yaml NB_EPOCHS=10
$ python train.py with ../configs/modelnet/sparse.yaml NB_EPOCHS=10

You can then test these new models by typing:

$ python eval.py with ../configs/modelnet/soft.yaml PREFIX='new_training'
$ python eval.py with ../configs/modelnet/sparse.yaml PREFIX='new_training'

Unseen categories

To train PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/modelnet/soft_unseen.yaml NB_EPOCHS=10
$ python train.py with ../configs/modelnet/sparse_unseen.yaml NB_EPOCHS=10

You can then test these new models by typing:

$ python eval.py with ../configs/modelnet/soft_unseen.yaml PREFIX='new_training'
$ python eval.py with ../configs/modelnet/sparse_unseen.yaml PREFIX='new_training'

Unseen objects with noise

To train PCAM models, type:

$ cd /path/to/pcam/scripts/
$ python train.py with ../configs/modelnet/soft_noise.yaml NB_EPOCHS=10
$ python train.py with ../configs/modelnet/sparse_noise.yaml NB_EPOCHS=10

You can then test these new models by typing:

$ python eval.py with ../configs/modelnet/soft_noise.yaml PREFIX='new_training'
$ python eval.py with ../configs/modelnet/sparse_noise.yaml PREFIX='new_training'

References

[1] Christopher Choy, Wei Dong, Vladlen Koltun. Deep Global Registration. CVPR, 2020.

[2] Christopher Choy, Jaesik Park, Vladlen Koltun. Fully Convolutional Geometric Features. ICCV, 2019.

[3] Christopher Choy, JunYoung Gwak, Silvio Savarese. 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR, 2019.

[4] Yue Wang and Justin M. Solomon. PRNet: Self-Supervised Learning for Partial-to-Partial Registration. NeurIPS, 2019.

[5] Alexandre Boulch, Gilles Puy, Renaud Marlet. FKAConv: Feature-Kernel Alignment for Point Cloud Convolution. ACCV, 2020.

License

PCAM is released under the Apache 2.0 license.
