Influence Selection for Active Learning (ISAL)

This project hosts the code for implementing the ISAL algorithm for object detection and image classification, as presented in our paper:

Influence Selection for Active Learning;
Zhuoming Liu, Hao Ding, Huaping Zhong, Weijia Li, Jifeng Dai, Conghui He;
In: Proc. Int. Conf. Computer Vision (ICCV), 2021.
arXiv preprint arXiv:2108.09331

The full paper is available at: https://arxiv.org/abs/2108.09331.

Our implementation for object detection is based on MMDetection.

Highlights

  • Task agnostic: We evaluate ISAL on both object detection and image classification. Compared with previous methods, ISAL decreases the annotation cost by at least 12%, 12%, 3%, 13% and 16% on CIFAR10, SVHN, CIFAR100, VOC2012 and COCO, respectively.

  • Model agnostic: We evaluate ISAL with different models in object detection. On the COCO dataset, with the one-stage anchor-free detector FCOS, ISAL decreases the annotation cost by at least 16%. With the two-stage anchor-based detector Faster R-CNN, ISAL decreases the annotation cost by at least 10%.

ISAL only needs the model gradients, which can be obtained from any neural network regardless of the task and the complexity of the model structure, so our proposed ISAL is task-agnostic and model-agnostic.
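To make this concrete, the following is a minimal sketch of gradient-based influence scoring (not the released implementation; the function name, the pseudo-labeling, and the plain inner product with a reference gradient are illustrative simplifications):

import torch

def influence_scores(model, loss_fn, unlabeled_loader, reference_grads):
    # Hypothetical sketch: score each unlabeled sample by the inner product
    # between its loss gradient and a reference gradient (e.g. one computed
    # on a validation set). Larger scores suggest a larger expected effect.
    params = [p for p in model.parameters() if p.requires_grad]
    scores = []
    for image, pseudo_label in unlabeled_loader:   # pseudo-labels from the current model
        loss = loss_fn(model(image), pseudo_label)
        grads = torch.autograd.grad(loss, params)  # per-sample gradient
        scores.append(sum((g * r).sum() for g, r in zip(grads, reference_grads)).item())
    return scores

The samples with the largest scores would then be selected for annotation in the next active learning step.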

Required hardware

We use 4 NVIDIA V100 GPUs for object detection and 1 NVIDIA TITAN Xp GPU for image classification.

Installation

Our ISAL implementation for object detection is based on mmdetection v2.4.0 with mmcv v1.1.1. They require PyTorch 1.5, CUDA 10.1 and cuDNN 7. We provide a Dockerfile (./detection/Dockerfile) to prepare the environment. Once the environment is prepared, please copy all the files under the folder ./detection into the /mmdetection directory inside the Docker container.

Our ISAL implementation for image classification is based on pycls v0.1. It requires PyTorch 1.6, CUDA 10.1 and cuDNN 7.
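
As a quick sanity check that the prepared environment matches these requirements, a short snippet like the one below can be run inside it (a hedged sketch; the expected values in the comments simply restate the requirements above):

import torch

# Print the versions that the installation instructions above expect.
print("PyTorch:", torch.__version__)             # 1.5 for detection, 1.6 for classification
print("CUDA:", torch.version.cuda)               # expected: 10.1
print("cuDNN:", torch.backends.cudnn.version())  # expected: 7xxx
print("GPUs available:", torch.cuda.device_count())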

Training

The following command runs the ISAL algorithm with the FCOS detector on the COCO dataset; the active learning algorithm will iterate for 20 steps on 4 GPUs:

bash dist_run_isal.sh /workdir /datadir \
    /mmdetection/configs/mining_experiments/fcos/fcos_r50_caffe_fpn_1x_coco_influence_function.py \
    --mining-method=influence --seed=42 --deterministic \
    --noised-score-thresh=0.1

Note that:

  1. If you want to use fewer GPUs, please change GPUS in the shell script. In addition, you may need to change samples_per_gpu in the config file to keep the total batch size equal to 8 (see the config sketch after this list).
  2. The models and all inference results will be saved into /workdir.
  3. The data should be placed in /datadir.
  4. If you want to run our code on VOC or on your own dataset, we suggest converting the data into COCO format.
  5. If you want to change the number of active learning iteration steps, please change TRAIN_STEP in the shell script. If you want to change the number of images selected by step_0 or by the following steps, please change INIT_IMG_NUM or IMG_NUM in the shell script, respectively.
  6. The shell script will delete all the trained models after all the active learning steps. If you want to keep the models, please change DELETE_MODEL in the shell script.
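
For example, when training on 2 GPUs instead of 4, the relevant part of the config could look like the sketch below (field names follow the usual MMDetection v2.x convention; the surrounding values in the provided config may differ):

# Excerpt from the detector config (illustrative values only).
# With 2 GPUs, samples_per_gpu=4 keeps the total batch size at 2 x 4 = 8.
data = dict(
    samples_per_gpu=4,
    workers_per_gpu=2,
)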

The following command runs the ISAL algorithm with ResNet-18 on the CIFAR10 dataset; the active learning algorithm will iterate for 10 steps on 1 GPU:

bash run_isal.sh /workdir /datadir \
    pycls/configs/archive/cifar/resnet/R-18_nds_1gpu_cifar10.yaml \
    --mining-method=influence --random-seed=0

Note that:

  1. The models and all inference results will be saved into /workdir.
  2. The data should be placed in /datadir.
  3. If you want to train on SVHN or on your own dataset, we suggest converting the data into CIFAR10 format.
  4. The STEP in the shell script indicates that in each active learning step the algorithm will add (1/STEP)% of the whole dataset to the labeled dataset. The TRAIN_STEP indicates the total number of active learning steps.

Citations

Please consider citing our paper in your publications if this project helps your research. The BibTeX reference is as follows.

@inproceedings{liu2021influence,
  title={Influence selection for active learning},
  author={Liu, Zhuoming and Ding, Hao and Zhong, Huaping and Li, Weijia and Dai, Jifeng and He, Conghui},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={9274--9283},
  year={2021}
}

Acknowledgments

We thank Zheng Zhu for implementing the classification pipeline. We thank Bin Wang and Xizhou Zhu for discussion and helping with the experiments. We thank Yuan Tian and Jiamin He for discussing the mathematical derivation.

License

For academic use only. For commercial use, please contact the authors.
