Implementation of "Distribution Alignment: A Unified Framework for Long-tail Visual Recognition"(CVPR 2021)

Overview

Implementation of "Distribution Alignment: A Unified Framework for Long-tail Visual Recognition"(CVPR 2021)

[Paper][Code]

We implement the classification, object detection, and instance segmentation tasks based on our cvpods. Users should install cvpods first and then run the experiments in this repo.

Changelog

  • 4.23.2021 Update DisAlign on LVIS v0.5 (Mask R-CNN + Res50)
  • 4.12.2021 Update the README

0. How to Use

  • Step-1: Install the latest cvpods.
  • Step-2: cd cvpods
  • Step-3: Prepare dataset for different tasks.
  • Step-4: git clone https://github.com/Megvii-BaseDetection/DisAlign playground_disalign
  • Step-5: Enter one folder and run pods_train --num-gpus 8
  • Step-6: Use pods_test --num-gpus 8 to evaluate the last checkpoint

1. Image Classification

We support the following three datasets:

  • ImageNet-LT Dataset
  • iNaturalist-2018 Dataset
  • Place-LT Dataset

We refer the user to CLS_README for more details.
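For reference, the snippet below is a minimal PyTorch sketch of a DisAlign-style calibrated classifier for the two-stage classification recipe: a per-class logit scale/bias plus an instance-wise confidence score σ(x) (a linear layer followed by a sigmoid), trained in stage 2 on top of a frozen backbone and stage-1 classifier. The class name DisAlignLinearSketch, the constructor arguments, and the exact blending formula are illustrative assumptions, not the repo's exact API; see the net.py files under the classification configs for the reference implementation.

import torch
import torch.nn as nn


class DisAlignLinearSketch(nn.Module):
    """Illustrative sketch of a DisAlign-style calibrated classifier (not the repo's exact API)."""

    def __init__(self, in_features: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(in_features, num_classes)      # stage-1 linear classifier
        self.logit_scale = nn.Parameter(torch.ones(num_classes))   # per-class scale
        self.logit_bias = nn.Parameter(torch.zeros(num_classes))   # per-class bias
        self.confidence_layer = nn.Linear(in_features, 1)          # sigma(x) head

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        logits = self.classifier(feat)                             # raw class scores z
        sigma = torch.sigmoid(self.confidence_layer(feat))         # (N, 1) instance-wise confidence
        aligned = self.logit_scale * logits + self.logit_bias      # per-class affine calibration
        return sigma * aligned + (1.0 - sigma) * logits            # blend calibrated and original logits


def freeze_for_stage2(model: DisAlignLinearSketch) -> None:
    # Stage 2: only logit_scale, logit_bias, and the confidence layer receive gradients;
    # the stage-1 classifier weights (and the backbone, not shown here) stay frozen.
    for name, param in model.named_parameters():
        param.requires_grad = name.split(".")[0] in {"logit_scale", "logit_bias", "confidence_layer"}

In stage 2 only these three calibration components are optimized, which matches the freezing behaviour discussed in the comments below.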

2. Object Detection/Instance Segmentation

We support the two versions of the LVIS dataset:

  • LVIS v0.5
  • LVIS v1.0

Highlight

  1. To speed up the evaluation on the LVIS dataset, we provide a C++-optimized evaluation API by modifying the coco_eval (C++) in cvpods.
     • The C++ version of the lvis_eval API saves ~30% of the time when calculating the mAP.
  2. We provide support for the AP_fixed and AP_pool metrics proposed in large-vocab-devil.
  3. We will support more recent works on long-tail detection (e.g., EQLv2, CenterNet2, etc.) in this project in the future.

We refer the user to DET_README for more details.

3. Semantic Segmentation

We adopt mmsegmentation as the codebase for running all DisAlign semantic segmentation experiments. Currently, users should use DisAlign_Seg for these experiments. We will add support for them in cvpods in the future.
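For the dense prediction setting, below is a hedged sketch of how the same calibration can be applied to segmentation logits; it assumes a PyTorch-style module, and the class name DisAlignSegHeadSketch and its signature are made up for illustration. As noted in the comments below, logit_scale and logit_bias only need shape (1, num_classes, 1, 1): they are broadcast over the spatial dimensions and do not have to match the input resolution such as 512x512.

import torch
import torch.nn as nn


class DisAlignSegHeadSketch(nn.Module):
    """Illustrative sketch of DisAlign-style calibration on dense segmentation logits."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.logit_scale = nn.Parameter(torch.ones(1, num_classes, 1, 1))   # per-class scale
        self.logit_bias = nn.Parameter(torch.zeros(1, num_classes, 1, 1))   # per-class bias
        self.confidence_layer = nn.Conv2d(in_channels, 1, kernel_size=1)    # per-pixel sigma(x)

    def forward(self, feat: torch.Tensor, seg_logits: torch.Tensor) -> torch.Tensor:
        # feat: (N, in_channels, H, W) decoder features; seg_logits: (N, num_classes, H, W)
        sigma = torch.sigmoid(self.confidence_layer(feat))              # (N, 1, H, W)
        aligned = self.logit_scale * seg_logits + self.logit_bias       # broadcast over H and W
        return sigma * aligned + (1.0 - sigma) * seg_logits

Because each class only needs a single scale and bias value, broadcasting a (1, num_classes, 1, 1) tensor over every pixel is sufficient.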

Acknowledgement

Thanks to the following projects:

Citing DisAlign

If you use DisAlign in your research or wish to refer to the baseline results published in this repo, please use the following BibTeX entry.

@inproceedings{zhang2021disalign,
  title={Distribution Alignment: A Unified Framework for Long-tail Visual Recognition.},
  author={Zhang, Songyang and Li, Zeming and Yan, Shipeng and He, Xuming and Sun, Jian},
  booktitle={CVPR},
  year={2021}
}

License

This repo is released under the Apache 2.0 license. Please see the LICENSE file for more information.

Comments
  • scale in cosine classifier

    scale in cosine classifier

    Hi, thanks for your great work! I notice you use the cosine classifier in many experiments and it gives a better baseline. The formula is as follows:

    [image: cosine classifier formula]

    I am wondering what the value of s is?

    opened by L1aoXingyu 5
  •  Is it correct to freeze the weight and bias of the DisAlign Linear Layer as well?

    Is it correct to freeze the weight and bias of the DisAlign Linear Layer as well?

    Hello. Thank you for your project! I'm testing your code on my custom dataset. My task is classification. I have a question about your code implementation.

    https://github.com/Megvii-BaseDetection/DisAlign/blob/a2fc3500a108cb83e3942293a5675c97ab3a2c6e/classification/imagenetlt/resnext50/resx50.scratch.imagenet_lt.224size.90e.disalign.10e/net.py#L56-L62

    From my understanding, in stage 2 the linear layer used in stage 1 is removed and a DisAlign linear layer is added, and all parts are frozen except for logit_scale, logit_bias, and confidence_layer. At this time, the weight and bias of DisAlignLinear (self.weight, self.bias) are also frozen. Is my understanding correct?

    If so, are the weight and bias of DisAlignLinearLayer fixed after the initialization? (The weight and bias of the linear layer in stage 1 are not copied either)

    If my understanding is correct, why is the weight of DisAlignLinear also frozen?

    I will wait for your reply. thanks!

    opened by jeongHwarr 4
  • Where is the DisAlignLinear module?

    Where is the DisAlignLinear module?

    Hello. Thank you for your impressive project!

    I want to apply DisAlign to classification. However, an error occurs in the import part: https://github.com/Megvii-BaseDetection/DisAlign/blob/a2fc3500a108cb83e3942293a5675c97ab3a2c6e/classification/imagenetlt/resnext50/resx50.scratch.imagenet_lt.224size.90e.disalign.10e/net.py#L7 I couldn't find DisAlignLinear in cvpods.layers, and it doesn't exist at https://github.com/Megvii-BaseDetection/cvpods/tree/master/cvpods/layers either. How can I solve this problem?

    Thank you!

    opened by jeongHwarr 4
  • Can someone kindly share their codes of Classification task on ImageNet_LT?

    Can someone kindly share their codes of Classification task on ImageNet_LT?

    I tried to train the proposed method on ImageNet_LT, but I can only get an average testing accuracy of about 49%, which is far from the rate described in the paper (52.9). Some details of my implementation are as follows: (1) The feature extractor is ResNeXt-50 and the head classifier is a linear classifier. The testing accuracy in Stage-One is 43.9%, which is OK.

    (2) The testing accuracy when adopting the cRT method in Stage-Two is 49.6%, which is identical to the one reported in other papers. (3) When fine-tuning the model in Stage-Two, both the feature extractor and the head classifier are frozen, and a DisAlignLinear model (which is implemented in cvpods) is retrained. The testing accuracy can only reach 48.8%, which is far from the one reported in your paper.

    opened by smallcube 4
  • The code for semantic segmentation is missing

    The code for semantic segmentation is missing

    Hi, thank you for the nice work, but the code for semantic segmentation is missing and the URL for it in the README could not be opened. Could you please fix this issue?

    opened by curiosity654 3
  • About the reference Distribution p_r in Eq. (10)

    About the reference Distribution p_r in Eq. (10)

    Hi, thank you for providing your code. I was wondering about Equation (10) in your paper (the definition of p_r), which seems not to be a distribution. Since every x_i can only have one label, the reference distribution p_r(y|x_i) will look like (0, 0, 0, ..., w_c, 0, 0, ..., 0), and the sum of this distribution is w_c, not 1.

    Could you help me understand this equation? Thanks in advance.

    opened by Kevinz-code 3
  • import error

    import error

    Hi, thanks for the great work. Maybe I missed it, but it seems that the code for this project has been incorporated into cvpods. I couldn't launch any experiments due to ImportErrors like "from cvpods.layers import DisAlignLinear" failing with "ImportError: cannot import name 'DisAlignLinear' from 'cvpods.layers'". Also, I didn't find the corresponding functions in cvpods.

    Any help will be appreciated. Thanks.

    opened by YUE-FAN 2
  • about the confidence score σ(x)

    about the confidence score σ(x)

    In the paper, σ(x) is implemented as a linear layer followed by a non-linear activation function (e.g., the sigmoid function) for every input x. How should I understand the input x? Is it the matrix of the raw image, the extracted features, or even the cls_score? Thank you!

    opened by lzed2399 2
  • exp_reweight = exp_reweight / np.sum(exp_reweight) * num_foreground

    exp_reweight = exp_reweight / np.sum(exp_reweight) * num_foreground

    Dear author, I have some questions about the code and paper:

    1. In exp_reweight = exp_reweight / np.sum(exp_reweight) * num_foreground, why is "exp_reweight" multiplied by the coefficient "num_foreground"? It is not mentioned in the paper.
    2. Is the "K" in the empirical class frequencies r = [r_1, ..., r_K] on the training set in the paper the same as the number of classes C of the training set?
    opened by Liu-wanbing 2
  • The DisAlign_Seg page can't open

    The DisAlign_Seg page can't open

    opened by Kittywyk 1
  • Do you use validation dataset?

    Do you use validation dataset?

    https://github.com/Megvii-BaseDetection/DisAlign/blob/main/classification/imagenetlt/resnext50/resx50.scratch.imagenet_lt.224size.90e.disalign.10e/config.py#L31

    It seems that you only use the test dataset? What is the reason for that?

    opened by qianlanwyd 1
  • How can I test and augtest the trained semseg DisAlign model?

    How can I test and augtest the trained semseg DisAlign model?

    opened by jh151170 0
  • the code question in semantic_seg

    the code question in semantic_seg

    Hi, I have a question about the logit_scale and logit_bias in semantic_seg. The shape of these parameters is (1, num_classes, 1, 1); why is it not (1, num_classes, 512, 512), which would match the input image size for semantic segmentation?

    opened by Ianresearch 8
  • Value of the learned scale and bias vector?

    Value of the learned scale and bias vector?

    Hi, did you check how the values of the learned scale and bias vectors change throughout the training process? I find that their values change in the first few iterations and remain stable for the rest of the training on my own classification dataset. I wonder what the learned vectors look like in your paper? Thanks!

    opened by Jacobew 1
Owner
BaseDetection Team of Megvii