Implementation supporting the ICCV 2017 paper "GANs for Biological Image Synthesis"


GANs for Biological Image Synthesis

This code implements the ICCV 2017 paper "GANs for Biological Image Synthesis". The paper and its supplementary material are available on arXiv.

This code contains the following pieces:

  • implementation of DCGAN, WGAN, WGAN-GP
  • implementation of green-on-red separable DCGAN, multi-channel DCGAN, star-shaped DCGAN (see our ICCV 2017 paper for details)
  • implementation of the evaluation techniques: classifier two-sample test (C2ST, sketched below) and reconstruction of the test set
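
A classifier two-sample test trains a classifier to distinguish two sets of images and uses its held-out accuracy as a distance: accuracy close to 50% means the two samples are hard to tell apart. Below is a minimal sketch of the idea, written against a recent PyTorch with a linear probe as a placeholder classifier; it is not this repository's evaluation code, which uses its own network and protocol.

# Minimal sketch of a classifier two-sample test (C2ST).
# The linear classifier, 50/50 split and training schedule are illustrative placeholders.
import torch
import torch.nn as nn

def c2st_accuracy(real, fake, epochs=200, lr=1e-3):
    # real, fake: float tensors of shape (N, C, H, W)
    x = torch.cat([real, fake], dim=0)
    x = x.view(x.size(0), -1)                      # flatten images to vectors
    y = torch.cat([torch.ones(real.size(0)), torch.zeros(fake.size(0))])
    perm = torch.randperm(x.size(0))               # shuffle before splitting
    x, y = x[perm], y[perm]
    n_train = x.size(0) // 2
    clf = nn.Linear(x.size(1), 1)                  # linear probe as a stand-in classifier
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(clf(x[:n_train]).squeeze(1), y[:n_train])
        loss.backward()
        opt.step()
    with torch.no_grad():
        pred = (clf(x[n_train:]).squeeze(1) > 0).float()
    # accuracy near 0.5 means the two samples are statistically hard to distinguish
    return (pred == y[n_train:]).float().mean().item()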

The code is released under the Apache v2 license, which allows you to use it in any way you want. For the license on the LIN dataset, please contact the authors of Dodgson et al. (2017).

As a teaser, we show our final results right away: animated interpolations that mimic the cell growth cycle (lin_movie1.gif, lin_movie2.gif, lin_movie3.gif).
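
Such animations are typically produced by decoding a path between points in the generator's latent space. The sketch below illustrates the idea; the generator handle netG, the latent size 100 and the (N, nz, 1, 1) input shape are assumptions, not the exact API of this repository.

# Sketch only: netG, latent size and input shape are placeholders.
import torch

def interpolate(netG, steps=16, nz=100):
    z0, z1 = torch.randn(1, nz, 1, 1), torch.randn(1, nz, 1, 1)
    frames = []
    with torch.no_grad():
        for t in torch.linspace(0, 1, steps):
            z = (1 - t) * z0 + t * z1          # linear interpolation in latent space
            frames.append(netG(z))             # one generated image per frame
    return frames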

Citation

If you are using this software please cite the following paper in any resulting publication:

Anton Osokin, Anatole Chessel, Rafael E. Carazo Salas and Federico Vaggi, GANs for Biological Image Synthesis, in proceedings of the International Conference on Computer Vision (ICCV), 2017.

@InProceedings{osokin2017biogans,
author = {Anton Osokin and Anatole Chessel and Rafael E. Carazo Salas and Federico Vaggi},
title = {{GANs} for Biological Image Synthesis},
booktitle = {Proceedings of the International Conference on Computer Vision (ICCV)},
year = {2017} }

If you are using the LIN dataset, please also cite this paper:

James Dodgson, Anatole Chessel, Federico Vaggi, Marco Giordan, Miki Yamamoto, Kunio Arai, Marisa Madrid, Marco Geymonat, Juan Francisco Abenza, Jose Cansado, Masamitsu Sato, Attila Csikasz-Nagy and Rafael E. Carazo Salas, Reconstructing regulatory pathways by systematically mapping protein localization interdependency networks, bioRxiv:11674, 2017

@article{Dodgson2017,
author = {Dodgson, James and Chessel, Anatole and Vaggi, Federico and Giordan, Marco and Yamamoto, Miki and Arai, Kunio and Madrid, Marisa and Geymonat, Marco and Abenza, Juan Francisco and Cansado, Jose and Sato, Masamitsu and Csikasz-Nagy, Attila and {Carazo Salas}, Rafael E},
title = {Reconstructing regulatory pathways by systematically mapping protein localization interdependency networks},
year = {2017},
journal = {bioRxiv:11674} }

Authors

Anton Osokin, Anatole Chessel, Rafael E. Carazo Salas and Federico Vaggi

Requirements

This software was written for Python v3.6.1 and PyTorch v0.2.0 with torchvision v0.1.8 (which comes with PyTorch); earlier PyTorch versions won't work, and later versions might have backward-compatibility issues but should work. Many other Python packages are required, but the standard Anaconda installation should be sufficient. The code was tested on Ubuntu 16.04 but should run on other systems as well.
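
For reference, one possible way to set up such an environment with conda is sketched below; the channel and exact package availability are assumptions, so check the archived install instructions on pytorch.org for the command matching your CUDA setup.

conda create -n biogans python=3.6 anaconda
source activate biogans
# PyTorch 0.2.0 was distributed via the soumith conda channel at the time;
# availability may vary, see pytorch.org for archived install commands.
conda install pytorch=0.2.0 torchvision -c soumith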

Usage

This code release is aimed at reproducing the results of our ICCV 2017 paper. The experiments of the paper consist of 4 main parts:

  • training and evaluating the models on the dataset with the 6 classes merged together
  • computing C2ST (classifier two-sample test) distances between real images of different classes
  • training and evaluating the models that support conditioning on the class labels
  • reconstructing images of the test set

By classes, we mean the proteins imaged in the green channel. The 6 selected proteins are Alp14, Arp3, Cki2, Mkh1, Sid2 and Tea1.

Note that rerunning all the experiments would require significant computational resources. We recommend using a GPU cluster if you want to do that.

Preparations

Get the code

git clone https://github.com/aosokin/biogans.git

Mark the root folder for the code

cd biogans
export ROOT_BIOGANS=`pwd`

Download and unpack the dataset (438MB)

wget -P data http://www.di.ens.fr/sierra/research/biogans/LIN_Normalized_WT_size-48-80.zip
unzip data/LIN_Normalized_WT_size-48-80.zip -d data

If you are interested, a version with twice larger images is available here (1.3GB).

Models for 6 classes merged together

Prepare the dataset and splits for evaluation

cd $ROOT_BIOGANS/experiments/models_6class_joint
./make_dataset_size-48-80_6class.sh
python make_splits_size-48-80_6class.py

If you just want to play with the trained models, we've released the ones at iteration 500k. You can download them with these lines:

wget -P $ROOT_BIOGANS/models/size-48-80_6class_wgangp-adam http://www.di.ens.fr/sierra/research/biogans/models/size-48-80_6class_wgangp-adam/netG_iter_500000.pth
wget -P $ROOT_BIOGANS/models/size-48-80_6class_wgangp-sep-adam http://www.di.ens.fr/sierra/research/biogans/models/size-48-80_6class_wgangp-sep-adam/netG_iter_500000.pth
wget -P $ROOT_BIOGANS/models/size-48-80_6class_gan-adam http://www.di.ens.fr/sierra/research/biogans/models/size-48-80_6class_gan-adam/netG_iter_500000.pth
wget -P $ROOT_BIOGANS/models/size-48-80_6class_gan-sep-adam http://www.di.ens.fr/sierra/research/biogans/models/size-48-80_6class_gan-sep-adam/netG_iter_500000.pth
wget -P $ROOT_BIOGANS/models/size-48-80_6class_wgan-rmsprop http://www.di.ens.fr/sierra/research/biogans/models/size-48-80_6class_wgan-rmsprop/netG_iter_500000.pth
wget -P $ROOT_BIOGANS/models/size-48-80_6class_wgan-sep-rmsprop http://www.di.ens.fr/sierra/research/biogans/models/size-48-80_6class_wgan-sep-rmsprop/netG_iter_500000.pth
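
The .pth files contain generator weights. A rough sketch of loading one and sampling images is shown below; the generator class, its import path, the latent size and the assumption that the file stores a state dict are placeholders, so adapt them to the definitions in this repository's code.

# Rough sketch only: class name, import path and latent size are placeholders.
import torch
from models import Generator  # hypothetical import; use the generator defined in this repo

netG = Generator()            # must match the architecture used for training
state = torch.load('models/size-48-80_6class_wgangp-adam/netG_iter_500000.pth',
                   map_location='cpu')
netG.load_state_dict(state)   # assumes the file stores a state dict
netG.eval()
with torch.no_grad():
    z = torch.randn(64, 100, 1, 1)   # DCGAN-style latent input (assumed shape)
    samples = netG(z)                # expected: a batch of 48x80 two-channel cell images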

If you want to train the models yourself (this might take a while), we used these scripts to get the models reported in our paper:

./train_size-48-80_6class_wgangp-adam.sh
./train_size-48-80_6class_wgangp-sep-adam.sh
./train_size-48-80_6class_gan-adam.sh
./train_size-48-80_6class_gan-sep-adam.sh
./train_size-48-80_6class_wgan-rmsprop.sh
./train_size-48-80_6class_wgan-sep-rmsprop.sh

To perform the full C2ST evaluation presented in Figure 8, generate the job scripts

python make_eval_jobs_size-48-80_6class_fake_vs_real.py
python make_eval_jobs_size-48-80_6class-together_real_vs_real.py

and run all the scripts in jobs_eval_6class_fake_vs_real and jobs_eval_6class-together_real_vs_real. If you are interested in something specific, please pick just the jobs that you want. After all the jobs have finished, you can redo our figures with analyze_eval_6class_fake_vs_real.ipynb and make_figures_3and4.ipynb.

C2ST for real vs. real images

Prepare the dataset and splits for evaluation

cd $ROOT_BIOGANS/experiments/real_vs_real
./make_dataset_size-48-80_8class.sh
python make_splits_size-48-80_8class.py
./make_splits_size-48-80_8class_real_vs_real.sh

Prepare all the jobs for evaluation

python make_eval_jobs_size-48-80_8class_real_vs_real.py

and run all the scripts in jobs_eval_8class_real_vs_real. After this is done, you can reproduce Table 1 with analyze_eval_8class_real_vs_real.ipynb.

Models with conditioning on the class labels

Prepare the dataset and splits for evaluation

cd $ROOT_BIOGANS/experiments/models_6class_conditional
./make_dataset_size-48-80_6class_conditional.sh
./make_splits_size-48-80_6class_conditional.sh

If you just want to play with the trained models, we've released some of them at iteration 50k. You can download them with these lines:

wget -P $ROOT_BIOGANS/models/size-48-80_6class_wgangp-star-shaped-adam http://www.di.ens.fr/sierra/research/biogans/models/size-48-80_6class_wgangp-star-shaped-adam/netG_iter_50000.pth
wget -P $ROOT_BIOGANS/models/size-48-80_6class_wgangp-independent-sep-adam http://www.di.ens.fr/sierra/research/biogans/models/size-48-80_6class_wgangp-independent-sep-adam/netG_iter_50000.pth

To train all the models from scratch, please run these scripts:

./train_size-48-80_6class_wgangp-independent-adam.sh
./train_size-48-80_6class_wgangp-independent-sep-adam.sh
./train_size-48-80_6class_wgangp-multichannel-adam.sh
./train_size-48-80_6class_wgangp-multichannel-sep-adam.sh
./train_size-48-80_6class_wgangp-star-shaped-adam.sh

To train the multi-channel models, you additionally need to create the cache of nearest neighbors:

python $ROOT_BIOGANS/code/nearest_neighbors.py
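
Conceptually, such a cache stores, for each image, the indices of its closest neighbors under some image distance. The generic sketch below illustrates this (written against a recent PyTorch); the actual features, distance and on-disk format are defined in code/nearest_neighbors.py.

# Generic sketch of a k-nearest-neighbor cache over a set of images.
import torch

def knn_cache(images, k=5):
    # images: float tensor of shape (N, C, H, W)
    x = images.view(images.size(0), -1)        # flatten each image to a vector
    d = torch.cdist(x, x)                      # pairwise Euclidean distances, (N, N)
    d.fill_diagonal_(float('inf'))             # exclude each image from its own neighbors
    return d.topk(k, largest=False).indices    # (N, k) nearest-neighbor indices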

Prepare evaluation scripts with

python make_eval_jobs_size-48-80_6class_conditional.py

and run all the scripts in jobs_eval_6class_conditional_fake_vs_real. After all of this is done, you can use analyze_eval_6class_star-shaped_fake_vs_real.ipynb and make_teaser.ipynb to reproduce Table 2 and Figure 1. The animated visualizations and Figure 7 are produced with cell_cycle_interpolation.ipynb.

Reconstructing the test set

Prepare the dataset and splits for evaluation

cd $ROOT_BIOGANS/experiments/models_6class_conditional
./make_dataset_size-48-80_6class_conditional.sh

If you just want to play with the trained models, we've released some of them at iteration 50k. You can download them with these lines:

wget -P $ROOT_BIOGANS/models/size-48-80_6class_gan-star-shaped-adam http://www.di.ens.fr/sierra/research/biogans/models/size-48-80_6class_gan-star-shaped-adam/netG_iter_50000.pth
wget -P $ROOT_BIOGANS/models/size-48-80_6class_wgangp-star-shaped-adam http://www.di.ens.fr/sierra/research/biogans/models/size-48-80_6class_wgangp-star-shaped-adam/netG_iter_50000.pth
wget -P $ROOT_BIOGANS/models/size-48-80_6class_gan-independent-sep-adam http://www.di.ens.fr/sierra/research/biogans/models/size-48-80_6class_gan-independent-sep-adam/netG_iter_50000.pth
wget -P $ROOT_BIOGANS/models/size-48-80_6class_wgangp-independent-sep-adam http://www.di.ens.fr/sierra/research/biogans/models/size-48-80_6class_wgangp-independent-sep-adam/netG_iter_50000.pth

To train all the models from scratch, please run these scripts:

./train_size-48-80_6class_wgangp-star-shaped-adam.sh
./train_size-48-80_6class_wgangp-independent-sep-adam.sh
./train_size-48-80_6class_wgangp-independent-adam.sh
./train_size-48-80_6class_gan-star-shaped-adam.sh
./train_size-48-80_6class_gan-independent-sep-adam.sh
./train_size-48-80_6class_gan-independent-adam.sh

To run all the reconstruction experiments, please use these scripts:

./reconstruction_size-48-80_6class_wgangp-star-shaped-adam.sh
./reconstruction_size-48-80_6class_wgangp-independent-sep-adam.sh
./reconstruction_size-48-80_6class_wgangp-independent-adam.sh
./reconstruction_size-48-80_6class_gan-star-shaped-adam.sh
./reconstruction_size-48-80_6class_gan-independent-sep-adam.sh
./reconstruction_size-48-80_6class_gan-independent-adam.sh
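
Reconstruction here amounts to searching the generator's latent space for a code whose output is as close as possible to each test image; the resulting reconstruction error is then the quality measure. The sketch below is a simplified illustration of the idea: the optimizer, loss and step count are illustrative, and the scripts above implement the actual protocol (including the conditional generators).

# Simplified sketch of test-set reconstruction by latent-space optimization.
import torch

def reconstruct(netG, target, nz=100, steps=500, lr=0.01):
    # target: one test image of shape (1, C, H, W); netG: a trained generator
    z = torch.randn(1, nz, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((netG(z) - target) ** 2).mean()   # pixel-wise reconstruction error
        loss.backward()
        opt.step()
    return z.detach(), loss.item()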

After all of these are done, you can reproduce Table 3 and Figures 6 and 10 with analyze_reconstruction_errors.ipynb.
