For visualizing the Dair-V2X-I dataset

Overview

3D Detection & Tracking Viewer

This project is a modified version of hailanyi/3D-Detection-Tracking-Viewer; the original code is available at https://github.com/hailanyi/3D-Detection-Tracking-Viewer

This project was developed for viewing 3D object detection results on the Dair-V2X-I dataset.

It supports rendering 3D bounding boxes in the point cloud and projecting boxes onto images.

Features

  • Captioning box IDs (info) in the 3D scene
  • Projecting 3D boxes or points onto the 2D image (see the sketch below)
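
To illustrate the projection feature, here is a minimal sketch of mapping lidar points onto an image, assuming the 3×4 Tr_velo_to_cam matrix and 3×3 P2 intrinsic matrix described in the Convert tools section (the function name and array shapes are illustrative, not the repository's actual API):

import numpy as np

def project_lidar_to_image(points, Tr_velo_to_cam, P2):
    # points: (N, 3) lidar coordinates; Tr_velo_to_cam: 3x4; P2: 3x3 intrinsics
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous (N, 4)
    cam = pts_h @ Tr_velo_to_cam.T       # transform into camera coordinates (N, 3)
    cam = cam[cam[:, 2] > 0]             # drop points behind the camera
    uv = cam @ P2.T                      # apply the intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]        # perspective divide -> pixel coordinates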

Design pattern

The code consists of two parts: conversion tools and a viewer for 3D detection results.

Change log

  • (2022.02.01) Adapted to the Dair-V2X-I dataset

Prepare data

  • Dair-V2X-I detection dataset
  • Convert the Dair-V2X-I dataset to KITTI format using the conversion tool

Requirements (Updated 2021.11.2)

python==3.7.11
numpy==1.21.4
vedo==2022.0.1
vtk==8.1.2
opencv-python==4.1.1.26
matplotlib==3.4.3
open3d==0.14.1

It is recommended to use Anaconda to create the visualization environment

conda create -n dair_vis python=3.7

To activate this environment, use

conda activate dair_vis

Install the requirements

pip install -r requirements.txt

To deactivate an active environment, use

conda deactivate

Convert tools

  • Prepare a dataset with the following structure:
  • "kitti_format" must be an empty folder that will store the conversion result
  • "source_format" stores the source Dair-V2X-I dataset
# For Dair-V2X-I Dataset  
dair_v2x_i
├── kitti_format
├── source_format
│   ├── single-infrastructure-side
│   │   ├── calib
│   │   │   ├── camera_intrinsic
│   │   │   └── virtuallidar_to_camera
│   │   └── label
│   │       ├── camera
│   │       └── virtuallidar
│   ├── single-infrastructure-side-example
│   │   ├── calib
│   │   │   ├── camera_intrinsic
│   │   │   └── virtuallidar_to_camera
│   │   ├── image
│   │   ├── label
│   │   │   ├── camera
│   │   │   └── virtuallidar
│   │   └── velodyne
│   ├── single-infrastructure-side-image
│   └── single-infrastructure-side-velodyne

  • If you have the same folder structure, you only need to change "root_path" in config/config.yaml to your local path
  • Run the Jupyter notebook server and open "convert.ipynb"
  • The code is deliberately simple, so there are no input parameters for advanced customization; comment out or copy the relevant cells to run the following steps separately (a sketch of the folder-copy step follows this list):
    - Convert calib files to KITTI format
    - Convert camera-based label files to KITTI format
    - Convert lidar-based label files to KITTI format
    - Convert image folders to KITTI format
    - Convert velodyne folders to KITTI format
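
For the image and velodyne folders the conversion is essentially a copy into the KITTI layout. A minimal sketch of that step, assuming .jpg images and that file names are kept unchanged (the paths follow the structure above):

import shutil
from pathlib import Path

# Copy the source images into the KITTI-style image_2 folder
src = Path("dair_v2x_i/source_format/single-infrastructure-side-image")
dst = Path("dair_v2x_i/kitti_format/image_2")
dst.mkdir(parents=True, exist_ok=True)
for img in sorted(src.glob("*.jpg")):
    shutil.copy(img, dst / img.name)  # only the layout changes; names stay the same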

After the conversion you will get the following result:

dair_v2x_i
├── kitti_format
│   ├── calib
│   ├── image_2
│   ├── label_2
│   ├── label_velodyne
│   └── velodyne
 
  • label_2 is based on the camera labels, but the size information (w, h, l) is replaced with the values from the lidar labels, which looks better in the camera view.
  • label_velodyne is based on the lidar (velodyne) labels.
  • P2 holds the camera intrinsic matrix as a 3×3 matrix, unlike KITTI. It is converted from the "cam_K" field of the JSON calib files.
  • Tr_velo_to_cam holds the lidar-to-camera transformation as a 3×4 matrix (a conversion sketch follows this list).
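
A minimal sketch of how such a calib file could be written. "cam_K" is the field name confirmed above; "rotation" and "translation" are assumed field names for the virtuallidar_to_camera JSON, so check them against your copy of the dataset:

import json
import numpy as np

def write_kitti_calib(cam_intrinsic_json, lidar_to_cam_json, out_txt):
    # "cam_K" is a flattened 3x3 intrinsic matrix (per this README);
    # "rotation"/"translation" are assumed field names
    with open(cam_intrinsic_json) as f:
        P2 = np.array(json.load(f)["cam_K"], dtype=np.float64).reshape(3, 3)
    with open(lidar_to_cam_json) as f:
        l2c = json.load(f)
    R = np.array(l2c["rotation"], dtype=np.float64).reshape(3, 3)
    t = np.array(l2c["translation"], dtype=np.float64).reshape(3, 1)
    Tr = np.hstack([R, t])  # 3x4 lidar-to-camera transform

    with open(out_txt, "w") as f:
        f.write("P2: " + " ".join(map(str, P2.flatten())) + "\n")
        f.write("Tr_velo_to_cam: " + " ".join(map(str, Tr.flatten())) + "\n")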

Usage

1. Set the path to the dataset folder used for input to the visualizer

If you have completed the conversion, the path should already be set correctly. Otherwise, set "root_path" in config/config.yaml to the correct path.

2. Choose camera-based or lidar-based labels for visualization

Set the "label_select" parameter in config.yaml to "cam" or "vel" to load labels from label_2 or label_velodyne, respectively. An illustrative config is shown below.
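
An illustrative config/config.yaml containing the two keys mentioned in this README (the values are placeholders, and your file may contain additional keys):

# root_path is assumed to point at the folder containing kitti_format and source_format
root_path: /path/to/dair_v2x_i
# "cam" -> labels from label_2, "vel" -> labels from label_velodyne
label_select: cam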

3. Run and terminate

  • You can start the program with the following command
python dair_3D_detection_viewer.py
  • Pressing space in the lidar window will display the next frame
  • Terminating the program is more complicated: you cannot terminate it while a static frame is displayed. Press space repeatedly so that the frames play continuously, and once the system is clearly overloaded and the program stops responding, press Ctrl-C in the terminal window to terminate it. Try a few more times and you will eventually get the hang of it.

Notes on the Dair-V2X-I dataset

  • In the calib files of this dataset, "cam_K" is the camera's true intrinsic matrix, not a projection matrix "P", although the two are very close in value and structure.
  • The dataset contains images from multiple cameras with different focal lengths and perspectives, so the intrinsic matrix changes from image file to image file. Make sure the calib file you use corresponds to the image file (e.g. do not use the 000000.txt parameters for all image files).
  • The file indices in this dataset are non-contiguous (e.g. 000023 is missing), so do not generate file names by counting from 000000 to len(dataset); enumerate the actual files instead (see the sketch after this list).
  • The dataset provides optimized labels for both lidar and camera. In testing, projecting the lidar labels onto the camera produced errors (the projection matrix is correct; the labels themselves have issues), and using the camera labels in the lidar view has the analogous drawback. It is therefore recommended to use the lidar labels for the lidar view and the fused labels for the camera view.
  • The labels contain some additional object classes; for example, you may see some "trafficcone" entries.
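
A minimal sketch of enumerating the frame IDs that actually exist instead of assuming a contiguous range (the directory path is illustrative):

import os

# Collect the frame ids present on disk instead of assuming 000000..N-1
velodyne_dir = "dair_v2x_i/kitti_format/velodyne"
frame_ids = sorted(f[:-4] for f in os.listdir(velodyne_dir) if f.endswith(".bin"))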