A Broad Study on the Transferability of Visual Representations with Contrastive Learning

Overview

This repository contains code for the paper: A Broad Study on the Transferability of Visual Representations with Contrastive Learning

Prerequisites

  • PyTorch 1.7
  • pytorch-lightning 1.1.5

Install the required dependencies with:

pip install -r environments/requirements.txt

How to Run

Download Datasets

The data should be located in the ~/datasets/cdfsl folder. To download all the datasets, run:

bash data_loader/download.sh 
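
After the script finishes, the datasets should appear under ~/datasets/cdfsl; a quick, illustrative check:

# Subfolder names depend on what data_loader/download.sh creates.
ls ~/datasets/cdfsl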

Training

python main.py --system ${system}  --dataset ${train_dataset} --gpus -1 --model resnet50 

where ${system} is one of base_finetune (CE), moco (SelfSupCon), moco_mit (SupCon), base_plus_moco (CE+SelfSupCon), or supervised_mean2 (SupCon+SelfSupCon).

To learn more about the CLI arguments, see configs.py.

You can also run training via bash scripts/run_linear_bn.sh -m train.
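
For example, a hypothetical invocation that trains the SelfSupCon (moco) system with ResNet-50 (the dataset name below is a placeholder; use an identifier accepted by configs.py):

# Illustrative run on all available GPUs; replace miniImageNet with a valid dataset identifier.
python main.py --system moco --dataset miniImageNet --gpus -1 --model resnet50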

Evaluation

Linear evaluation

python main.py --system linear_eval \
  --train_aug true --val_aug false \
  --dataset ${val_data}_train --val_dataset ${val_data}_test \
  --ckpt ${ckpt} --load_base --batch_size ${bs} \
  --lr ${lr} --optim_wd ${wd}  --linear_bn --linear_bn_affine false \
  --scheduler step  --step_lr_milestones ${_milestones}

You can also use the evaluation script: bash scripts/run_linear_bn.sh -m tune to tune hyper-parameters, then bash scripts/run_linear_bn.sh -m test to run linear evaluation with the optimal hyper-parameters.
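
A minimal sketch of how the shell variables above might be filled in (all values are illustrative placeholders, not the tuned hyper-parameters from the paper):

# Placeholder settings; tune batch size, lr, and wd per dataset (see scripts/run_linear_bn.sh).
val_data=cifar100                      # placeholder transfer dataset name
ckpt=checkpoints/moco_imagenet.ckpt    # placeholder path to a pretrained checkpoint
bs=256
lr=0.1
wd=1e-4
_milestones="60 80"

python main.py --system linear_eval \
  --train_aug true --val_aug false \
  --dataset ${val_data}_train --val_dataset ${val_data}_test \
  --ckpt ${ckpt} --load_base --batch_size ${bs} \
  --lr ${lr} --optim_wd ${wd}  --linear_bn --linear_bn_affine false \
  --scheduler step  --step_lr_milestones ${_milestones}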

Few-shot

python main.py --system few_shot \
    --val_dataset ${val_data} \
    --load_base --test --model ${model} \
    --ckpt ${ckpt} --num_workers 4

You can also run the evaluation script with bash scripts/run_fewshot.sh.
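
For example, with placeholder values for the variables above (the dataset name and checkpoint path are illustrative):

# Illustrative few-shot evaluation; substitute a real checkpoint and dataset identifier.
val_data=eurosat
model=resnet50
ckpt=checkpoints/moco_imagenet.ckpt

python main.py --system few_shot \
    --val_dataset ${val_data} \
    --load_base --test --model ${model} \
    --ckpt ${ckpt} --num_workers 4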

Full-network finetuning

python main.py --system linear_transfer \
    --dataset ${val_data}_train --val_dataset ${val_data}_test \
    --ckpt ${ckpt} --load_base \
    --batch_size ${bs} --lr ${lr} --optim_wd ${wd} \
    --scheduler step  --step_lr_milestones ${_milestones} \
    --linear_bn --linear_bn_affine false \
    --max_epochs ${max_epochs}

You can also use the evaluation script: bash scripts/run_transfer_bn.sh -m tune to tune hyper-parameters, then bash scripts/run_transfer_bn.sh -m test to run full-network finetuning with the optimal hyper-parameters.
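
A sketch with placeholder values for the variables above (use the tuned hyper-parameters in practice):

# Illustrative full-network finetuning run; all values are placeholders.
val_data=cifar100
ckpt=checkpoints/moco_imagenet.ckpt
bs=64
lr=0.01
wd=1e-4
_milestones="30 40"
max_epochs=50

python main.py --system linear_transfer \
    --dataset ${val_data}_train --val_dataset ${val_data}_test \
    --ckpt ${ckpt} --load_base \
    --batch_size ${bs} --lr ${lr} --optim_wd ${wd} \
    --scheduler step  --step_lr_milestones ${_milestones} \
    --linear_bn --linear_bn_affine false \
    --max_epochs ${max_epochs}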

Pretrained models

  • ImageNet pretrained models can be found here

  • mini-ImageNet pretrained models can be found here

You can also convert our pretrained checkpoints into torchvision ResNet-style checkpoints with python utils/convert_to_torchvision_resnet.py -i [input ckpt] -o [output path].
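
For example (the input and output paths below are placeholders):

# Convert a downloaded checkpoint into a torchvision-style ResNet state_dict.
python utils/convert_to_torchvision_resnet.py \
    -i checkpoints/moco_imagenet.ckpt \
    -o checkpoints/moco_imagenet_torchvision.pth

The resulting file should then load into a standard torchvision ResNet-50 as a plain state_dict.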

Citation

If you find this repo useful for your research, please consider citing the paper:

@misc{islam2021broad,
      title={A Broad Study on the Transferability of Visual Representations with Contrastive Learning}, 
      author={Ashraful Islam and Chun-Fu Chen and Rameswar Panda and Leonid Karlinsky and Richard Radke and Rogerio Feris},
      year={2021},
      eprint={2103.13517},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgement
