Generative Models as a Data Source for Multiview Representation Learning


GenRep

Project Page | Paper

Generative Models as a Data Source for Multiview Representation Learning
Ali Jahanian, Xavier Puig, Yonglong Tian, Phillip Isola

Prerequisites

  • Linux
  • Python 3
  • CPU or NVIDIA GPU + CUDA CuDNN

Table of Contents:

  1. Setup
  2. Visualizations - plotting image panels, videos, and distributions
  3. Training - pipeline for training your encoder
  4. Testing - pipeline for testing/transfer learning your encoder
  5. Notebooks - some jupyter notebooks; a good place to start for generating your own datasets
  6. Colab Demo - a colab notebook to demo how the contrastive encoder training works

Setup

  • Clone this repo:
git clone https://github.com/ali-design/GenRep
  • Install dependencies:
    • we provide a Conda environment.yml file listing the dependencies. You can create a Conda environment with the dependencies using:
conda env create -f environment.yml
  • Download resources:
    • we provide a script for downloading associated resources. Fetch these by running:
bash resources/download_resources.sh
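
Once the environment is created, activate it before running any of the commands below. A minimal sketch, assuming the environment is named genrep_env as in the Notebooks section:

# activate the environment and check that PyTorch can see a GPU
conda activate genrep_env
python -c "import torch; print(torch.cuda.is_available())"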

Visualizations

Plotting contrasting images:

  • Run simclr_views_paper_figure.ipynb and supcon_views_paper_figure.ipynb to get the anchors and their contrastive pairs shown in the paper.

  • To generate more images, run biggan_generate_samples_paper_figure.py.
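
The notebooks can be opened with jupyter; a minimal sketch, assuming you launch from the folder that contains them (the exact location inside the repo may differ):

# open one of the figure notebooks in the browser
jupyter notebook simclr_views_paper_figure.ipynb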


Training encoders

  • The current implementation covers these variants:
    • Contrastive (SimCLR and SupCon)
    • Inverters
    • Classifiers
  • Some examples of commands for training contrastive encoders:
# train a SimCLR encoder on an unconditional IGM dataset (e.g. a dataset generated by a Gaussian walk, called my_gauss, in a GAN model)
CUDA_VISIBLE_DEVICES=0,1 python main_unified.py --method SimCLR --cosine \
	--dataset path_to_your_dataset --walk_method my_gauss \
	--cache_folder your_ckpts_path >> log_train_simclr.txt &

# train a SupCon encoder on a conditional IGM dataset (e.g. a dataset generated by steering walks, called my_steer, in a GAN model)
CUDA_VISIBLE_DEVICES=0,1 python main_unified.py --method SupCon --cosine \
	--dataset path_to_your_dataset --walk_method my_steer \
	--cache_folder your_ckpts_path >> log_train_supcon.txt &
  • If you want to find out more about the training configurations, you can find the yml file of each pretrained model in models_pretrained.
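
To see the exact configuration used for a pretrained model, you can inspect its yml file directly; a minimal sketch, assuming the configs sit under models_pretrained (the exact layout and filenames may differ):

# list the yml configs shipped with the pretrained models, then view one
find models_pretrained -name "*.yml"
cat models_pretrained/path_to_a_config.yml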

Testing encoders

  • You can currently test (i.e., transfer-learn) your encoder on:
    • ImageNet linear classification
    • PASCAL classification
    • PASCAL detection

ImageNet linear classification

Below is the command to train a linear classifier on top of the learned features:

# test your unconditional or conditional IGM trained model (i.e. the encoder you trained in the previous section) on ImageNet
CUDA_VISIBLE_DEVICES=0,1 python main_linear.py --learning_rate 0.3 \ 
	--ckpt path_to_your_encoder --data_folder path_to_imagenet \
	>> log_test_your_model_name.txt &
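
The command runs in the background and appends its output to the log file, so you can follow progress with tail; for example:

# follow the linear-evaluation log as it trains
tail -f log_test_your_model_name.txt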

Pascal VOC2007 classification

To test classification on Pascal VOC, you extract features from a pretrained model and train an SVM on top of those features. You can do that by running the following commands:

cd transfer_classification
./run_svm_voc.sh 0 path_to_your_encoder name_experiment path_to_pascal_voc
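
For concreteness, a hypothetical invocation with placeholder values (the checkpoint filename and VOC path below are assumptions, not files shipped with the repo):

# example invocation with illustrative placeholders
./run_svm_voc.sh 0 /path/to/your_encoder/ckpt_epoch_100.pth simclr_gauss_voc /path/to/VOCdevkit/VOC2007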

The code is based on FAIR Self-Supervision Benchmark

Pascal VOC2007 detection

To test transfer learning on detection, do the following:

  1. Enter the transfer_detection folder.
  2. Install detectron2, replacing the detectron2 folder.
  3. Convert the checkpoint path_to_your_encoder to detectron2 format:
python convert_ckpt.py path_to_your_encoder output_ckpt.pth
  4. Add symlinks to the PascalVOC07 and PascalVOC12 datasets inside the datasets folder (see the sketch after this list).
  5. Train the detection model:
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train_net.py \
      --num-gpus 8 \
      --config-file config/pascal_voc_R_50_C4_transfer.yaml \
      MODEL.WEIGHTS ckpts/${name}.pth \
      OUTPUT_DIR outputs/${name}
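
For step 4, a minimal sketch of the dataset symlinks, assuming detectron2's default layout of VOC2007 and VOC2012 under a datasets folder (the source paths are placeholders for wherever your VOC data lives):

# link PASCAL VOC into the datasets folder used by detectron2
mkdir -p datasets
ln -s /path/to/VOCdevkit/VOC2007 datasets/VOC2007
ln -s /path/to/VOCdevkit/VOC2012 datasets/VOC2012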

Notebooks

source activate genrep_env
python -m ipykernel install --user --name genrep_env
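
These two commands register the genrep_env environment as a jupyter kernel; you can then confirm the registration, launch jupyter, and select genrep_env as the kernel when opening any of the provided notebooks:

# confirm the kernel is registered, then launch jupyter
jupyter kernelspec list
jupyter notebook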

Colab

Acknowledgements

We thank the authors of these repositories:

Citation

If you use this code for your research, please cite our paper:

@article{jahanian2021generative, 
	title={Generative Models as a Data Source for Multiview Representation Learning}, 
	author={Jahanian, Ali and Puig, Xavier and Tian, Yonglong and Isola, Phillip}, 
	journal={arXiv preprint arXiv:2106.05258}, 
	year={2021} 
}