K Closest Points and Maximum Clique Pruning for Efficient and Effective 3D Laser Scan Matching (To appear in RA-L 2022)

Overview

The official implementation of KCP: k Closest Points and Maximum Clique Pruning for Efficient and Effective 3D Laser Scan Matching, accepted for publication in IEEE Robotics and Automation Letters (RA-L).

KCP is an efficient and effective local point cloud registration approach targeting real-world 3D LiDAR scan matching. A simple (and admittedly naive) way to understand it: ICP iteratively considers the single closest point of each source point, whereas KCP considers the k closest points of each source point from the beginning and rejects outlier correspondences mainly through maximum clique pruning. KCP is written in C++, and a Python binding (pykcp) is also provided.
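
To make the idea concrete, the following is a minimal conceptual sketch in Python using generic libraries (numpy, scipy, networkx). It is not the actual KCP implementation: the real pipeline selects keypoints by multi-scale curvature and estimates the transformation with TEASER++, while this sketch only illustrates the initial k-closest-points correspondences and the maximum clique pruning.

# Conceptual sketch of the KCP idea -- NOT the actual implementation.
# Assumes src and dst are (N, 3) and (M, 3) numpy arrays of keypoints.
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def kcp_sketch(src, dst, k=2, noise_bound=0.06):
    # 1. Initial correspondences: the k closest target points of each source point.
    _, idx = cKDTree(dst).query(src, k=k)
    corrs = [(i, j) for i in range(len(src)) for j in idx[i]]

    # 2. Compatibility graph: two correspondences are mutually consistent if
    #    the source-side and target-side distances agree within the noise
    #    bound (a TEASER++-style pairwise invariant).
    g = nx.Graph()
    g.add_nodes_from(range(len(corrs)))
    for a in range(len(corrs)):
        for b in range(a + 1, len(corrs)):
            (i, j), (p, q) = corrs[a], corrs[b]
            if abs(np.linalg.norm(src[i] - src[p])
                   - np.linalg.norm(dst[j] - dst[q])) <= 2 * noise_bound:
                g.add_edge(a, b)

    # 3. Maximum clique pruning: keep the largest mutually consistent subset.
    clique = max(nx.find_cliques(g), key=len)
    return [corrs[a] for a in clique]

The surviving correspondences are then fed to a robust solver (TEASER++ in KCP) to estimate the rigid transformation.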

For more, please refer to our paper:

  • Yu-Kai Lin, Wen-Chieh Lin, Chieh-Chih Wang, KCP: k-Closest Points and Maximum Clique Pruning for Efficient and Effective 3D Laser Scan Matching. To appear in IEEE Robotics and Automation Letters (RA-L), 2022. (pdf) (code) (video)

If you use this project in your research, please cite:

@article{lin2022kcp,
  title={{KCP: k-Closest Points and Maximum Clique Pruning for Efficient and Effective 3D Laser Scan Matching}},
  author={Lin, Yu-Kai and Lin, Wen-Chieh and Wang, Chieh-Chih},
  journal={IEEE Robotics and Automation Letters},
  volume={#},
  number={#},
  pages={#--#},
  year={2022},
}

and if you find this project helpful or interesting, please Star the repository. Thank you!

Table of Contents

  • 📦 Resources
  • ⚙️ Installation
  • 🌱 Examples
  • 📝 Some Remarks
  • 🎁 Acknowledgement

📦 Resources

⚙️ Installation

The project was originally developed on Ubuntu 18.04, and the following instructions assume that you are using Ubuntu 18.04 as well. It has not been tested on other Ubuntu versions or other Linux distributions, but you can give it a try 👍

Also, please feel free to open an issue if you encounter any problems with the following instructions.

Step 1. Preparing the Dependencies

You have to prepare the following packages or libraries used in KCP:

  1. A C++ compiler supporting C++14 and OpenMP (e.g., GCC 7.5).
  2. CMake ≥ 3.11
  3. Git
  4. Eigen3 ≥ 3.3
  5. nanoflann
  6. TEASER++ (commit d79d0c67)

GCC, CMake, Git, and Eigen3

sudo apt update
sudo apt install -y g++ build-essential libeigen3-dev git

sudo apt install -y software-properties-common lsb-release
wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | sudo tee /etc/apt/trusted.gpg.d/kitware.gpg >/dev/null
# Without the Kitware repository, Ubuntu 18.04 only provides CMake 3.10 (< 3.11).
sudo apt-add-repository "deb https://apt.kitware.com/ubuntu/ $(lsb_release -cs) main"
sudo apt update
sudo apt install -y cmake

nanoflann

cd ~
git clone https://github.com/jlblancoc/nanoflann
cd nanoflann
mkdir build && cd build
cmake .. -DNANOFLANN_BUILD_EXAMPLES=OFF -DNANOFLANN_BUILD_TESTS=OFF
make
sudo make install

TEASER++

cd ~
git clone https://github.com/MIT-SPARK/TEASER-plusplus
cd TEASER-plusplus
git checkout d79d0c67
mkdir build && cd build
cmake .. -DBUILD_TESTS=OFF -DBUILD_PYTHON_BINDINGS=OFF -DBUILD_DOC=OFF
make
sudo make install

Step 2. Preparing Dependencies of Python Binding (Optional)

The Python binding of KCP (pykcp) uses pybind11 to achieve interoperability between C++ and Python. KCP automatically downloads and compiles pybind11 during the compilation stage. However, you need to prepare a runnable Python environment with the header files for the Python C API (python3-dev):

sudo apt install -y python3 python3-dev

Step 3. Building KCP

Execute the following commands to build KCP:

Without Python Binding

git clone https://github.com/StephLin/KCP
cd KCP
mkdir build && cd build
cmake ..
make

With Python Binding

git clone https://github.com/StephLin/KCP
cd KCP
mkdir build && cd build
cmake .. -DKCP_BUILD_PYTHON_BINDING=ON -DPYTHON_EXECUTABLE=$(which python3)
make

Step 4. Installing KCP to the System (Optional)

This will make the KCP library available on the system, so that any C++ (CMake) project can find the package via find_package(KCP), as in the sketch after the command below. Think twice before you enter the following command!

# Under /path/to/KCP/build
sudo make install
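
For instance, a downstream project's CMakeLists.txt could then look like the following minimal sketch. Note that the imported target name KCP::kcp is an assumption for illustration; check the CMake package files installed by KCP for the exact exported target.

# Minimal downstream CMakeLists.txt sketch.
cmake_minimum_required(VERSION 3.11)
project(kcp_consumer CXX)

find_package(KCP REQUIRED)

add_executable(register main.cpp)
# KCP::kcp is a hypothetical target name; consult the installed
# KCP CMake config for the actual one.
target_link_libraries(register PRIVATE KCP::kcp)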

🌱 Examples

We provide two examples (one for C++ and the other for Python 3). These examples use nuScenes LiDAR data to perform registration. Please check the examples for more information.

📝 Some Remarks

Tuning Parameters

The major parameters are

  • kcp::KCP::Params::k and
  • kcp::KCP::Params::teaser::noise_bound,

where k is the number of nearest points of each source point selected to form the initial correspondences, and noise_bound is the criterion for deciding whether a correspondence is correct. In our paper, we suggest k=2 and setting noise_bound to the 3-sigma of the point measurement noise (we use noise_bound=0.06 meters, i.e. σ = 0.02 m, for nuScenes data); these are the default values in the library.

To pass different parameters to the KCP solver, refer to the following snippets:

C++

#include <kcp/solver.hpp>

auto params = kcp::KCP::Params();

params.k                  = 2;
params.teaser.noise_bound = 0.06;

auto solver = kcp::KCP(params);

Python

import pykcp

params = pykcp.KCPParams()
params.k = 2
params.teaser.noise_bound = 0.06

solver = pykcp.KCP(params)

Controlling Computational Cost

Unlike the correspondence-free registration of TEASER++, KCP considers only the k closest point correspondences, which reduces the major computational cost of the maximum clique algorithm, and the paper demonstrates its effectiveness in real-world scenarios without any complicated or learning-based feature descriptor. However, you may still run into computation time or memory issues if too many correspondences are fed to the solver.

We suggest keeping the number of keypoints around 500 for k=2; this way the computation time will be much closer to the one reported in the paper. One option is sketched below.
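
As a minimal sketch (assuming your keypoints are stored in an (N, 3) NumPy array), one simple way to cap the keypoint count before calling the solver is random subsampling; the bundled examples select keypoints by multi-scale curvature instead, so treat this purely as an illustration:

import numpy as np

def cap_keypoints(points, max_points=500, seed=0):
    # Randomly subsample an (N, 3) keypoint array down to max_points.
    if len(points) <= max_points:
        return points
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(points), size=max_points, replace=False)
    return points[keep]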

Toward Global Registration Approaches

KCP can promisingly be extended to a global registration approach if a fast and reliable sparse feature point representation method is employed.

In that setting, KCP would play a role similar to that of RANSAC, a fast registration approach commonly used in learning-based pipelines, but KCP's results are deterministic, and KCP also has stronger theoretical support.

🎁 Acknowledgement

This project refers to the computation of the smoothness term defined in LOAM (as implemented in Tixiao Shan's excellent project LIO-SAM, which is licensed under BSD-3). We modified the definition of the smoothness term, and in this project it is called the multi-scale curvature.
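
For reference, the smoothness term in LOAM for a point x_i with neighborhood S on the same scan line is c = ||Σ_{j∈S, j≠i} (x_i − x_j)|| / (|S| · ||x_i||). The sketch below computes this single-scale version in Python; it is not KCP's exact multi-scale curvature, which modifies this definition.

import numpy as np

def loam_smoothness(scan, i, half_window=5):
    # LOAM-style smoothness of point i over +/- half_window neighbors on the
    # same (ordered) scan line; scan is an (N, 3) array of points.
    assert half_window <= i < len(scan) - half_window
    neighbors = np.r_[scan[i - half_window:i], scan[i + 1:i + 1 + half_window]]
    diff = (scan[i] - neighbors).sum(axis=0)
    return np.linalg.norm(diff) / (len(neighbors) * np.linalg.norm(scan[i]))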
