Geometry-Aware Learning of Maps for Camera Localization (CVPR 2018)

License: CC BY-NC-SA 4.0 | Python 2.7

This is the PyTorch implementation of our CVPR 2018 paper:

"Geometry-Aware Learning of Maps for Camera Localization" - CVPR 2018 (Spotlight). Samarth Brahmbhatt, Jinwei Gu, Kihwan Kim, James Hays, and Jan Kautz

A four-minute video summary (click below for the video)

[video thumbnail: mapnet]

Citation

If you find this code useful for your research, please cite our paper

@inproceedings{mapnet2018,
  title={Geometry-Aware Learning of Maps for Camera Localization},
  author={Samarth Brahmbhatt and Jinwei Gu and Kihwan Kim and James Hays and Jan Kautz},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}

Documentation

Setup

MapNet uses a Conda environment that makes it easy to install all dependencies.

  1. Install miniconda with Python 2.7.

  2. Create the mapnet Conda environment: conda env create -f environment.yml.

  3. Activate the environment: conda activate mapnet_release.

  4. Note that our code has been tested with PyTorch v0.4.1 (the environment.yml file should take care of installing the appropriate version).

Data

We support the 7Scenes and Oxford RobotCar datasets right now. You can also write your own PyTorch dataloader for other datasets and put it in the dataset_loaders directory. Refer to this README file for more details.
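
As a rough illustration, a minimal custom loader might look like the sketch below. The directory layout, the per-frame pose file format, and the CustomScene name are hypothetical; see the existing loaders in dataset_loaders for the conventions the training code actually expects.

# Minimal sketch of a custom dataset loader. The (image, pose) return format,
# the per-frame .pose.txt files, and the directory layout are assumptions for
# illustration only.
import os
import numpy as np
import torch
from torch.utils.data import Dataset
from PIL import Image

class CustomScene(Dataset):
    """Returns (image, pose) pairs; pose is a 1-D float tensor."""
    def __init__(self, root, train=True, transform=None):
        split = 'train' if train else 'test'
        self.root = os.path.join(root, split)
        self.transform = transform
        # assumed layout: one .png per frame plus a matching .pose.txt file
        self.frames = sorted(f[:-4] for f in os.listdir(self.root)
                             if f.endswith('.png'))

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        name = self.frames[idx]
        img = Image.open(os.path.join(self.root, name + '.png')).convert('RGB')
        pose = np.loadtxt(os.path.join(self.root, name + '.pose.txt'))
        if self.transform is not None:
            img = self.transform(img)
        return img, torch.from_numpy(pose).float()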

The datasets live in the data/deepslam_data directory. We provide skeletons with symlinks to get you started. Let us call your 7Scenes download directory 7SCENES_DIR and your main RobotCar download directory (in which you untar all the downloads from the website) ROBOTCAR_DIR. You will need to make the following symlinks:

cd data/deepslam_data && ln -s 7SCENES_DIR 7Scenes && ln -s ROBOTCAR_DIR RobotCar_download


Special instructions for RobotCar (only needed if you use the RobotCar dataset):

  1. Download this fork of the dataset SDK, and run cd scripts && ./make_robotcar_symlinks.sh after editing the ROBOTCAR_SDK_ROOT variable in it appropriately.

  2. For each sequence, you need to download the stereo_centre, vo and gps tar files from the dataset website (more details in this comment).

  3. The directory for each 'scene' (e.g. full) has .txt files defining the train/test split. While training MapNet++, you must put the sequences for self-supervised learning (dataset T in the paper) in the test_split.txt file. The dataloader for the MapNet++ models will use both images and ground-truth pose from sequences in train_split.txt and only images from the sequences in test_split.txt.

  4. To make training faster, we pre-processed the images using scripts/process_robotcar_images.py. This script undistorts the images using the camera models provided by the dataset, and scales them such that the shortest side is 256 pixels (the rescaling step is sketched below).
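
The undistortion itself relies on the camera models shipped with the RobotCar SDK, so it is not reproduced here; the rescaling step alone is roughly the following sketch (not the exact code in scripts/process_robotcar_images.py):

# Sketch of the rescaling step only: scale an image so that its shortest
# side becomes 256 pixels. Undistortion (which needs the RobotCar camera
# models) is omitted.
from PIL import Image

def resize_shortest_side(img, target=256):
    w, h = img.size
    scale = float(target) / min(w, h)
    return img.resize((int(round(w * scale)), int(round(h * scale))),
                      Image.BILINEAR)

img = Image.open('some_robotcar_frame.png')   # hypothetical input image
resize_shortest_side(img).save('some_robotcar_frame_256.png')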


Running the code

Demo/Inference

The trained models for all experiments presented in the paper can be downloaded here. The inference script is scripts/eval.py. Here are some examples, assuming the models are downloaded in scripts/logs. Please go to the scripts folder to run the commands.

7Scenes

  • MapNet++ with pose-graph optimization (i.e., MapNet+PGO) on heads:
$ python eval.py --dataset 7Scenes --scene heads --model mapnet++ \
--weights logs/7Scenes_heads_mapnet++_mapnet++_7Scenes/epoch_005.pth.tar \
--config_file configs/pgo_inference_7Scenes.ini --val --pose_graph
Median error in translation = 0.12 m
Median error in rotation    = 8.46 degrees

[figure: 7Scenes heads MapNet+PGO results]

  • To evaluate on the train split, remove the --val flag

  • To save the results to disk without showing them on screen (useful for scripts), add the --output_dir ../results/ flag

  • See this README file for more information on hyper-parameters and which config files to use.

  • MapNet++ on heads:

$ python eval.py --dataset 7Scenes --scene heads --model mapnet++ \
--weights logs/7Scenes_heads_mapnet++_mapnet++_7Scenes/epoch_005.pth.tar \
--config_file configs/mapnet.ini --val
Median error in translation = 0.13 m
Median error in rotation    = 11.13 degrees
  • MapNet on heads:
$ python eval.py --dataset 7Scenes --scene heads --model mapnet \
--weights logs/7Scenes_heads_mapnet_mapnet_learn_beta_learn_gamma/epoch_250.pth.tar \
--config_file configs/mapnet.ini --val
Median error in translation = 0.18 m
Median error in rotation    = 13.33 degrees
  • PoseNet (CVPR 2017) on heads:
$ python eval.py --dataset 7Scenes --scene heads --model posenet \
--weights logs/7Scenes_heads_posenet_posenet_learn_beta_logq/epoch_300.pth.tar \
--config_file configs/posenet.ini --val
Median error in translation = 0.19 m
Median error in rotation    = 12.15 degrees

RobotCar

  • MapNet++ with pose-graph optimization on loop:
$ python eval.py --dataset RobotCar --scene loop --model mapnet++ \
--weights logs/RobotCar_loop_mapnet++_mapnet++_RobotCar_learn_beta_learn_gamma_2seq/epoch_005.pth.tar \
--config_file configs/pgo_inference_RobotCar.ini --val --pose_graph
Mean error in translation = 6.74 m
Mean error in rotation    = 2.23 degrees

[figure: RobotCar loop MapNet+PGO results]

  • MapNet++ on loop:
$ python eval.py --dataset RobotCar --scene loop --model mapnet++ \
--weights logs/RobotCar_loop_mapnet++_mapnet++_RobotCar_learn_beta_learn_gamma_2seq/epoch_005.pth.tar \
--config_file configs/mapnet.ini --val
Mean error in translation = 6.95 m
Mean error in rotation    = 2.38 degrees
  • MapNet on loop:
$ python eval.py --dataset RobotCar --scene loop --model mapnet \
--weights logs/RobotCar_loop_mapnet_mapnet_learn_beta_learn_gamma/epoch_300.pth.tar \
--config_file configs/mapnet.ini --val
Mean error in translation = 9.84 m
Mean error in rotation    = 3.96 degrees

Train

The executable script is scripts/train.py. Please go to the scripts folder to run these commands. For example:

  • PoseNet on chess from 7Scenes: python train.py --dataset 7Scenes --scene chess --config_file configs/posenet.ini --model posenet --device 0 --learn_beta --learn_gamma

[figure: training progress (train.png)]

  • MapNet on chess from 7Scenes: python train.py --dataset 7Scenes --scene chess --config_file configs/mapnet.ini --model mapnet --device 0 --learn_beta --learn_gamma

  • MapNet++ is finetuned on top of a trained MapNet model: python train.py --dataset 7Scenes --checkpoint <trained_mapnet_model.pth.tar> --scene chess --config_file configs/mapnet++_7Scenes.ini --model mapnet++ --device 0 --learn_beta --learn_gamma

For example, we can train a MapNet++ model on heads from a pretrained MapNet model:

$ python train.py --dataset 7Scenes \
--checkpoint logs/7Scenes_heads_mapnet_mapnet_learn_beta_learn_gamma/epoch_250.pth.tar \
--scene heads --config_file configs/mapnet++_7Scenes.ini --model mapnet++ \
--device 0 --learn_beta --learn_gamma

For MapNet++ training, you will need visual odometry (VO) data (or other sensory inputs such as noisy GPS measurements). For 7Scenes, we provide the preprocessed VO computed with the DSO method. For RobotCar, we use the provided stereo_vo. If you plan to use your own VO data (especially from a monocular camera) for MapNet++ training, you will need to first align the VO to the world coordinate frame (for rotation and scale). Please refer to the "Align VO" section below for more detailed instructions.

The meanings of various command-line parameters are documented in scripts/train.py. The values of various hyperparameters are defined in a separate .ini file. We provide some examples in the scripts/configs directory, along with a README file explaining some hyper-parameters.

If you have visdom = yes in the config file, you will need to start a Visdom server for logging the training progress:

python -m visdom.server -env_path=scripts/logs/


Network Attention Visualization

Computes the network attention visualizations and saves them as a video.

  • For the MapNet model trained on chess in 7Scenes:
$ python plot_activations.py --dataset 7Scenes --scene chess \
--weights <filename.pth.tar> --device 1 --val --config_file configs/mapnet.ini \
--output_dir ../results/

Check here for an example video of computed network attention of PoseNet vs. MapNet++.


Other Tools

Align VO to the ground truth poses

This has to be done before using VO in MapNet++ training. The executable script is scripts/align_vo_poses.py; a rough sketch of the underlying similarity alignment is shown after the list below.

  • For the first sequence from chess in 7Scenes: python align_vo_poses.py --dataset 7Scenes --scene chess --seq 1 --vo_lib dso. Note that alignment for 7Scenes needs to be done separately for each sequence, and so the --seq flag is needed

  • For all of 7Scenes you can also use the script align_vo_poses_7scenes.sh. The script stores the information at the proper location in data
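
Conceptually, the alignment fits a similarity transform (scale, rotation, translation) that maps the VO trajectory onto the ground-truth trajectory. The snippet below is a generic Umeyama-style sketch of that idea, not the actual align_vo_poses.py code (which also handles the rotational part of the poses and the dataset-specific file formats).

# Generic sketch of fitting a similarity transform (scale s, rotation R,
# translation t) that maps VO positions onto ground-truth positions.
import numpy as np

def umeyama_alignment(vo_xyz, gt_xyz):
    """vo_xyz, gt_xyz: (N, 3) arrays of corresponding camera positions."""
    mu_vo, mu_gt = vo_xyz.mean(axis=0), gt_xyz.mean(axis=0)
    vo_c, gt_c = vo_xyz - mu_vo, gt_xyz - mu_gt
    U, S, Vt = np.linalg.svd(gt_c.T.dot(vo_c) / len(vo_xyz))
    D = np.eye(3)
    if np.linalg.det(U.dot(Vt)) < 0:   # avoid reflections
        D[2, 2] = -1
    R = U.dot(D).dot(Vt)
    s = np.trace(np.diag(S).dot(D)) / vo_c.var(axis=0).sum()
    t = mu_gt - s * R.dot(mu_vo)
    return s, R, t

# aligned VO positions are then s * R.dot(p) + t for each VO position p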

Mean and stdev pixel statistics across a dataset

This must be calculated before any training. Use scripts/dataset_mean.py, which also saves the information at the proper location. We provide pre-computed values for RobotCar and 7Scenes.
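
Conceptually, this is just a per-channel mean and standard deviation over all training pixels; a minimal sketch (with a hypothetical example path, not the actual dataset_mean.py code) is:

# Minimal sketch: per-channel mean and stdev of pixel values over a set of
# training images. The glob pattern is a hypothetical example; dataset_mean.py
# does the real computation and writes the result to the expected location.
import glob
import numpy as np
from PIL import Image

sums, sq_sums, n_pixels = np.zeros(3), np.zeros(3), 0
for path in glob.glob('data/deepslam_data/7Scenes/heads/seq-01/*.color.png'):
    img = np.asarray(Image.open(path), dtype=np.float64) / 255.0  # H x W x 3
    flat = img.reshape(-1, 3)
    sums += flat.sum(axis=0)
    sq_sums += (flat ** 2).sum(axis=0)
    n_pixels += flat.shape[0]

mean = sums / n_pixels
std = np.sqrt(sq_sums / n_pixels - mean ** 2)
print(mean, std)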

Calculate pose translation statistics

Calculates the mean and stdev of the camera translations and saves them automatically to the appropriate files:

python calc_pose_stats.py --dataset 7Scenes --scene redkitchen

This information is needed to normalize the pose regression targets, so this script must be run before any training. We provide pre-computed values for RobotCar and 7Scenes.
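
The statistics are then used for the usual zero-mean, unit-variance rescaling of the translation part of the regression targets; roughly (a sketch with a hypothetical input file, not the actual calc_pose_stats.py code):

# Sketch of how translation statistics normalize the pose regression targets.
# 'train_translations.txt' is a hypothetical (N, 3) file of training translations.
import numpy as np

poses_t = np.loadtxt('train_translations.txt')
mean_t, std_t = poses_t.mean(axis=0), poses_t.std(axis=0)

def normalize_translation(t):
    return (t - mean_t) / std_t

def denormalize_translation(t_norm):
    return t_norm * std_t + mean_t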

Plot the ground truth and VO poses for debugging

python plot_vo_poses.py --dataset 7Scenes --scene heads --vo_lib dso --val

To save the output instead of displaying it on screen, add the --output_dir ../results/ flag

Process RobotCar GPS

The scripts/process_robotcar_gps.py script must be run before using GPS for MapNet++ training. It converts the csv file into a format usable for training.
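
For reference, the raw RobotCar GPS csv contains (among other fields) northing/easting columns that give a 2-D position per timestamp; a rough sketch of reading them might look like the following. The column names are assumptions based on the public RobotCar format, and scripts/process_robotcar_gps.py remains the authoritative converter.

# Rough sketch: read 2-D positions (easting, northing) per timestamp from a
# RobotCar gps.csv. Column names are assumptions based on the public dataset
# format; process_robotcar_gps.py is the authoritative converter.
import csv
import numpy as np

timestamps, positions = [], []
with open('gps/gps.csv') as f:
    for row in csv.DictReader(f):
        timestamps.append(int(row['timestamp']))
        positions.append((float(row['easting']), float(row['northing'])))

timestamps = np.asarray(timestamps)
positions = np.asarray(positions)   # (N, 2), to be matched to image timestamps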

Demosaic and undistort RobotCar images

It is advisable to do this beforehand to speed up training. The scripts/process_robotcar_images.py script will do that and save the output images to a centre_processed directory inside the stereo directory. After the script finishes, you must rename this directory to centre so that the dataloader uses these undistorted and demosaiced images.

FAQ

A collection of issues and resolution comments that might be useful can be found in the repository's issue tracker.

License

Copyright (C) 2018 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
