Generative Models as a Data Source for Multiview Representation Learning


GenRep

Project Page | Paper

Ali Jahanian, Xavier Puig, Yonglong Tian, Phillip Isola

Prerequisites

  • Linux
  • Python 3
  • CPU, or NVIDIA GPU + CUDA and cuDNN

Table of Contents:

  1. Setup
  2. Visualizations - plotting image panels, videos, and distributions
  3. Training - pipeline for training your encoder
  4. Testing - pipeline for testing/transfer learning your encoder
  5. Notebooks - Jupyter notebooks; a good place to start for generating your own datasets
  6. Colab Demo - a Colab notebook demonstrating how the contrastive encoder training works

Setup

  • Clone this repo:
git clone https://github.com/ali-design/GenRep
  • Install dependencies:
    • We provide a Conda environment.yml file listing the dependencies. Create the Conda environment by running:
conda env create -f environment.yml
  • Download resources:
    • We provide a script for downloading the associated resources. Fetch them by running:
bash resources/download_resources.sh
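
Once the environment is created, a quick sanity check (a minimal sketch; it assumes the environment, named genrep_env as in the Notebooks section below, includes PyTorch) confirms that the GPU is visible:

# quick check, run inside the genrep_env environment
import torch
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())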

Visualizations

Plotting contrastive image pairs:

  • Run simclr_views_paper_figure.ipynb and supcon_views_paper_figure.ipynb to reproduce the anchors and their contrastive pairs shown in the paper.

  • To generate more images, run biggan_generate_samples_paper_figure.py.
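
For a sense of what the generation step involves, the sketch below samples a few BigGAN images with the pytorch-pretrained-biggan package; this is only an illustration under that assumption, and the options of biggan_generate_samples_paper_figure.py may differ:

import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample, save_as_images)

# sample a few truncated latents for one ImageNet class and decode them with BigGAN
model = BigGAN.from_pretrained('biggan-deep-256')
truncation = 0.4
class_vec = torch.from_numpy(one_hot_from_names(['golden retriever'], batch_size=4))
noise_vec = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=4))
with torch.no_grad():
    images = model(noise_vec, class_vec, truncation)   # [4, 3, 256, 256], values in [-1, 1]
save_as_images(images, file_name='sample')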


Training encoders

  • The current implementation covers these variants:
    • Contrastive (SimCLR and SupCon)
    • Inverters
    • Classifiers
  • Some examples of commands for training contrastive encoders:
# train SimCLR on an unconditional IGM dataset (e.g., a dataset generated by a Gaussian walk, called my_gauss, in a GAN model)
CUDA_VISIBLE_DEVICES=0,1 python main_unified.py --method SimCLR --cosine \
	--dataset path_to_your_dataset --walk_method my_gauss \
	--cache_folder your_ckpts_path >> log_train_simclr.txt &

# train SupCon on a conditional IGM dataset (e.g., a dataset generated by steering walks, called my_steer, in a GAN model)
CUDA_VISIBLE_DEVICES=0,1 python main_unified.py --method SupCon --cosine \
	--dataset path_to_your_dataset --walk_method my_steer \
	--cache_folder your_ckpts_path >> log_train_supcon.txt &
  • To find out more about the training configurations, see the yml file of each pretrained model in models_pretrained.
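
For intuition, the SimCLR variant optimizes an NT-Xent contrastive loss over pairs of views of the same anchor (here, two samples drawn from the generator's latent walk). The snippet below is a minimal, self-contained sketch of that loss, not the implementation in main_unified.py:

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """Minimal SimCLR (NT-Xent) loss for two batches of view embeddings of shape [N, D]."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                          # [2N, D]
    sim = z @ z.t() / temperature                           # scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))                       # exclude self-pairs
    n = z1.size(0)
    # the positive for row i of z1 is row i of z2, and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# e.g. z1, z2 = encoder(view1), encoder(view2); loss = nt_xent(z1, z2)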

Testing encoders

  • You can currently test (i.e. transfer-learn) your encoder on:
    • ImageNet linear classification
    • PASCAL classification
    • PASCAL detection

ImageNet linear classification

Below is the command to train a linear classifier on top of the features learned by your encoder:

# test your unconditional or conditional IGM-trained model (i.e., the encoder you trained in the previous section) on ImageNet
CUDA_VISIBLE_DEVICES=0,1 python main_linear.py --learning_rate 0.3 \
	--ckpt path_to_your_encoder --data_folder path_to_imagenet \
	>> log_test_your_model_name.txt &
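
Conceptually, this evaluation keeps the encoder fixed and trains only a linear layer on its features. The snippet below is a minimal sketch of one training step under that assumption; main_linear.py handles the full pipeline:

import torch
import torch.nn.functional as F

def linear_probe_step(encoder, classifier, images, labels, optimizer):
    """One step of linear evaluation: frozen encoder, trainable linear classifier."""
    encoder.eval()
    with torch.no_grad():
        feats = encoder(images)                 # [N, feat_dim], no gradients flow to the encoder
    loss = F.cross_entropy(classifier(feats), labels)
    optimizer.zero_grad()
    loss.backward()                             # updates only the classifier parameters
    optimizer.step()
    return loss.item()

# e.g. classifier = torch.nn.Linear(2048, 1000), with the optimizer built over classifier.parameters()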

Pascal VOC2007 classification

To test classification on Pascal VOC, extract features from a pretrained model and train an SVM on top of those features. You can do that by running:

cd transfer_classification
./run_svm_voc.sh 0 path_to_your_encoder name_experiment path_to_pascal_voc

The code is based on the FAIR Self-Supervision Benchmark.
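
At its core, this transfer step fits a linear SVM per VOC class on frozen features. The sketch below uses placeholder arrays for illustration only; the script above handles feature extraction and the full per-class evaluation:

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import average_precision_score

# placeholder arrays standing in for frozen encoder features and one class's binary labels
rng = np.random.default_rng(0)
feats_train, labels_train = rng.normal(size=(100, 2048)), rng.integers(0, 2, 100)
feats_test, labels_test = rng.normal(size=(20, 2048)), rng.integers(0, 2, 20)

clf = LinearSVC(C=1.0).fit(feats_train, labels_train)   # one binary SVM per VOC class
scores = clf.decision_function(feats_test)
print("AP:", average_precision_score(labels_test, scores))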

Pascal VOC2007 detection

To test transfer to detection, do the following:

  1. Enter transfer_detection.
  2. Install detectron2, replacing the detectron2 folder.
  3. Convert the checkpoint path_to_your_encoder to detectron2 format:
python convert_ckpt.py path_to_your_encoder output_ckpt.pth
  4. Add symlinks to the PascalVOC07 and PascalVOC12 datasets inside the datasets folder.
  5. Train the detection model:
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train_net.py \
      --num-gpus 8 \
      --config-file config/pascal_voc_R_50_C4_transfer.yaml \
      MODEL.WEIGHTS ckpts/${name}.pth \
      OUTPUT_DIR outputs/${name}
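
For reference, the conversion step typically amounts to loading the contrastive checkpoint and re-saving the encoder weights under names the detection code expects. The sketch below only illustrates that idea; the exact key mapping in convert_ckpt.py may differ:

import sys
import torch

def convert(in_path, out_path):
    """Rough sketch: strip training-time prefixes so the encoder loads as a detection backbone."""
    ckpt = torch.load(in_path, map_location="cpu")
    state = ckpt.get("model", ckpt)                        # handle either checkpoint layout
    cleaned = {k.replace("module.", "").replace("encoder.", ""): v
               for k, v in state.items()}
    torch.save({"model": cleaned}, out_path)

if __name__ == "__main__":
    convert(sys.argv[1], sys.argv[2])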

Notebooks

To run the notebooks, register the Conda environment as a Jupyter kernel:

source activate genrep_env
python -m ipykernel install --user --name genrep_env

Colab

Acknowledgements

We thank the authors of these repositories:

Citation

If you use this code for your research, please cite our paper:

@article{jahanian2021generative, 
	title={Generative Models as a Data Source for Multiview Representation Learning}, 
	author={Jahanian, Ali and Puig, Xavier and Tian, Yonglong and Isola, Phillip}, 
	journal={arXiv preprint arXiv:2106.05258}, 
	year={2021} 
}