E2EC: An End-to-End Contour-based Method for High-Quality High-Speed Instance Segmentation

Tao Zhang, Shiqing Wei, Shunping Ji
CVPR 2022

Any questions or discussions are welcome!

Installation

Please see INSTALL.md.

Performance

We re-tested the speed on a single RTX3090.

Dataset          AP    Image size      FPS
SBD val          59.2  512×512         59.60
COCO test-dev    33.8  original size   35.25
KINS val         34.0  768×2496        12.39
Cityscapes val   34.0  1216×2432       8.58

Accuracy and inference speed of the contours at different stages on the SBD val set. The speed was also re-tested on a single RTX3090.

stage init coarse final final-dml
AP 51.4 55.9 58.8 59.2
FPS 101.73 91.35 67.48 59.6

Accuracy and inference speed of the contours at different stages on the COCO val set.

stage init coarse final final-dml
AP 27.8 31.6 33.5 33.6
FPS 80.97 72.81 42.55 35.25

Testing

Testing on COCO

  1. Download the pretrained model here or from Baiduyun (password: e2ec).

  2. Prepare the COCO dataset according to INSTALL.md.

  3. Test:

    # testing segmentation accuracy on coco val set
    python test.py coco --checkpoint /path/to/model_coco.pth --with_nms True
    # testing detection accuracy on coco val set
    python test.py coco --checkpoint /path/to/model_coco.pth --with_nms True --eval bbox
    # testing the speed
    python test.py coco --checkpoint /path/to/model_coco.pth --with_nms True --type speed
    # testing the contours of a specified stage (init/coarse/final/final-dml)
    python test.py coco --checkpoint /path/to/model_coco.pth --with_nms True --stage coarse
    # testing on the COCO test-dev set; run and then submit data/result/results.json
    python test.py coco --checkpoint /path/to/model_coco.pth --with_nms True --dataset coco_test
    

Testing on SBD

  1. Download the pretrained model here or from Baiduyun (password: e2ec).

  2. Prepare the SBD dataset according to INSTALL.md.

  3. Test:

    # testing segmentation accuracy on SBD
    python test.py sbd --checkpoint /path/to/model_sbd.pth
    # testing detection accuracy on SBD
    python test.py sbd --checkpoint /path/to/model_sbd.pth --eval bbox
    # testing the speed
    python test.py sbd --checkpoint /path/to/model_sbd.pth --type speed
    # testing the contours of a specified stage (init/coarse/final/final-dml)
    python test.py sbd --checkpoint /path/to/model_sbd.pth --stage coarse
    

Testing on KINS

  1. Download the pretrained model here or from Baiduyun (password: e2ec).

  2. Prepare the KINS dataset according to INSTALL.md.

  3. Test:

    You may run into some trouble, such as object of type <class 'numpy.float64'> cannot be safely interpreted as an integer. If so, please modify /path/to/site-packages/pycocotools/cocoeval.py: replace np.round((0.95 - .5) / .05) in lines 506 and 507 with int(np.round((0.95 - .5) / .05)).
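
    A minimal runnable sketch of why this edit is needed (recent NumPy versions reject a numpy.float64 where np.linspace expects an integer sample count):

    import numpy as np

    # np.round() returns numpy.float64; recent NumPy raises
    # "object of type <class 'numpy.float64'> cannot be safely interpreted as an integer"
    # if that value is passed to np.linspace as the sample count.
    n_fixed = int(np.round((0.95 - .5) / .05))        # wrap in int(), as in the cocoeval.py edit
    iou_thrs = np.linspace(.5, 0.95, n_fixed + 1, endpoint=True)
    print(iou_thrs)                                   # [0.5 0.55 0.6 ... 0.95]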

    # testing segmentation accuracy on KINS
    python test.py kitti --checkpoint /path/to/model_kitti.pth
    # testing detection accuracy on KINS
    python test.py kitti --checkpoint /path/to/model_kitti.pth --eval bbox
    # testing the speed
    python test.py kitti --checkpoint /path/to/model_kitti.pth --type speed
    # testing the contours of a specified stage (init/coarse/final/final-dml)
    python test.py kitti --checkpoint /path/to/model_kitti.pth --stage coarse
    

Testing on Cityscapes

  1. Download the pretrained model here or from Baiduyun (password: e2ec).

  2. Prepare the Cityscapes dataset according to INSTALL.md.

  3. Test:

    We will soon release the code for E2EC with multi-component detection. Currently, only testing E2EC's performance on the Cityscapes dataset is supported.

    # testing segmentation accuracy on Cityscapes with coco evaluator
    python test.py cityscapesCoco --checkpoint /path/to/model_cityscapes.pth
    # with cityscapes official evaluator
    python test.py cityscapes --checkpoint /path/to/model_cityscapes.pth
    # testing the detection accuracy
    python test.py cityscapesCoco \
    --checkpoint /path/to/model_cityscapes.pth --eval bbox
    # testing the speed
    python test.py cityscapesCoco \
    --checkpoint /path/to/model_cityscapes.pth --type speed
    # testing the contours of a specified stage (init/coarse/final/final-dml)
    python test.py cityscapesCoco \
    --checkpoint /path/to/model_cityscapes.pth --stage coarse
    # testing on the test set; run and then submit the result file
    python test.py cityscapes --checkpoint /path/to/model_cityscapes.pth \
    --dataset cityscapes_test
    

Evaluate boundary AP

  1. Install the Boundary IoU API according to boundary iou.

  2. Test segmentation accuracy with the COCO evaluator.

  3. Use the offline evaluation pipeline:

    python /path/to/boundary_iou_api/tools/coco_instance_evaluation.py \
        --gt-json-file /path/to/annotation_file.json \
        --dt-json-file data/result/result.json \
        --iou-type boundary
    

Visualization

  1. Download the pretrained model.

  2. Visualize:

    # run inference and visualize images with the coco pretrained model
    python visualize.py coco /path/to/images \
    --checkpoint /path/to/model_coco.pth --with_nms True
    # you can use other pretrained models, such as the cityscapes one
    python visualize.py cityscapesCoco /path/to/images \
    --checkpoint /path/to/model_cityscapes.pth
    # if you want to save the visualization, specify --output_dir
    python visualize.py coco /path/to/images \
    --checkpoint /path/to/model_coco.pth --with_nms True \
    --output_dir /path/to/output_dir
    # visualize the results at a specified stage
    python visualize.py coco /path/to/images \
    --checkpoint /path/to/model_coco.pth --with_nms True --stage coarse
    # you can reset the score threshold; the default is 0.3
    python visualize.py coco /path/to/images \
    --checkpoint /path/to/model_coco.pth --with_nms True --ct_score 0.1
    # if you want to filter some of the jaggedness caused by dml,
    # use post-processing
    python visualize.py coco /path/to/images \
    --checkpoint /path/to/model_coco.pth --with_nms True \
    --with_post_process True
    

Training

We have only released the code for single-GPU training; multi-GPU training with DDP will be released soon.

Training on SBD

python train_net.py sbd --bs $batch_size
# if you do not want to use dynamic matching loss (it significantly improves
# contour detail but introduces jaggedness), please set --dml to False
python train_net.py sbd --bs $batch_size --dml False

Training on KINS

python train_net.py kitti --bs $batch_size

Training on Cityscapes

python train_net.py cityscapesCoco --bs $batch_size

Training on COCO

In fact, it is possible to achieve the same accuracy with fewer training epochs.

# first train with Adam
python train_net.py coco --bs $batch_size
# then fine-tune with SGD
python train_net.py coco_finetune --bs $batch_size \
--type finetune --checkpoint data/model/139.pth

Training on the other dataset

If the annotations are in COCO style:

  1. Add dataset information to dataset/info.py.

  2. Modify configs/coco.py: reset train.dataset, model.heads['ct_hm'], and test.dataset. You may also need to change train.epochs, train.optimizer['milestones'], and so on; see the sketch after this list.

  3. Train the network.

    python train_net.py coco --bs $batch_size
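
A hypothetical sketch of the step-2 config edits above; the actual layout of configs/coco.py may differ, and every concrete value here is a placeholder:

    # illustrative only: mirrors the attribute names mentioned in step 2,
    # not the real structure of configs/coco.py
    class train:
        dataset = 'my_dataset_train'              # name registered in dataset/info.py
        epochs = 150                              # adjust to your dataset
        optimizer = {'milestones': [80, 120]}     # learning-rate decay schedule

    class test:
        dataset = 'my_dataset_val'

    class model:
        heads = {'ct_hm': 10}                     # number of classes in your dataset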
    

If the annotations are not in COCO style:

  1. Prepare dataset/train/your_dataset.py and dataset/test/your_dataset.py by referring to dataset/train/base.py and dataset/test/base.py.

  2. Prepare evaluator/your_dataset/snake.py by referring to evaluator/coco/snake.py.

  3. Prepare configs/your_dataset.py by referring to configs/base.py.

  4. Train the network.

    python train_net.py your_dataset --bs $batch_size
    

Citation

If you find this project helpful for your research, please consider citing it using the BibTeX entry below:

@article{zhang2022e2ec,
  title={E2EC: An End-to-End Contour-based Method for High-Quality High-Speed Instance Segmentation},
  author={Zhang, Tao and Wei, Shiqing and Ji, Shunping},
  journal={arXiv preprint arXiv:2203.04074},
  year={2022}
}

Acknowledgement

The code is largely based on Deep Snake. Thanks for their wonderful work.
