Unsupervised Image to Image Translation with Generative Adversarial Networks

Overview

Paper: Unsupervised Image to Image Translation with Generative Adversarial Networks

Requirements

  • TensorFlow 1.0.0
  • TensorLayer 1.3.11
  • CUDA 8
  • Ubuntu

Dataset

  • Before training the network, please prepare the data:
  • CelebA download
  • Cropped SVHN download
  • MNIST download, and put it into data/mnist_png

Usage

Step 1: Learning shared feature

python3 train.py --train_step="ac_gan" --retrain=1

Step 2: Learning image encoder

python3 train.py --train_step="imageEncoder" --retrain=1

Step 3: Translation

python3 translate_image.py
  • Samples from all steps will be saved to data/samples/

Network

Want to use different datasets?

  • In train.py and translate_image.py, change the dataset name in flags.DEFINE_string("dataset", "celebA", "The name of dataset [celebA, obama_hillary]")
  • Write your own data loader in data_loader.py (a minimal sketch is shown below)
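
A hedged sketch of such a loader, assuming a flat folder of PNG/JPG images and the default 64 x 64 input size; the function name load_custom_dataset, the data/obama_hillary path, and the return format are illustrative assumptions, not the actual interface of data_loader.py:

    # data_loader.py -- hypothetical loader for a new dataset (names and paths are assumptions)
    import glob
    import os

    import numpy as np
    from PIL import Image

    def load_custom_dataset(data_dir="data/obama_hillary", image_size=64):
        """Load every PNG/JPG under data_dir, resize, and scale pixels to [-1, 1]."""
        paths = sorted(glob.glob(os.path.join(data_dir, "*.png")) +
                       glob.glob(os.path.join(data_dir, "*.jpg")))
        images = []
        for p in paths:
            img = Image.open(p).convert("RGB").resize((image_size, image_size))
            images.append(np.asarray(img, dtype=np.float32) / 127.5 - 1.0)
        return np.stack(images)  # shape: (N, image_size, image_size, 3)

Whatever loader you write, keep its output consistent with what train.py feeds the generator: batches of image_size x image_size x 3 arrays, typically scaled to [-1, 1] for a tanh output layer in DCGAN-style models.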
Comments
  • Where can I get the “obama_hillary” dataset?

    I’m adapting your code and am now trying to swap faces between images.

    Is “obama_hillary” a custom dataset or a public one? Please let me know where I can get it.

    Thanks.

    opened by dreamegg 0
  • What is the version of tensorflow?

    Hi donghao, I am running this project but I get a lot of errors right at the beginning of training, e.g.:

    Traceback (most recent call last):
      File "train.py", line 362, in <module>
        tf.app.run()
      File "/home/zzw/Program/anaconda2/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
        _sys.exit(main(_sys.argv[:1] + flags_passthrough))
      File "train.py", line 355, in main
        train_ac_gan()
      File "train.py", line 98, in train_ac_gan
        g_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(d_logits_fake, tf.ones_like(d_logits_fake)))
      File "/home/zzw/Program/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/nn_impl.py", line 149, in sigmoid_cross_entropy_with_logits
        labels, logits)
      File "/home/zzw/Program/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 1512, in _ensure_xent_args
        "named arguments (labels=..., logits=..., ...)" % name)
    ValueError: Only call sigmoid_cross_entropy_with_logits with named arguments (labels=..., logits=..., ...)

    I guess these errors come from differences between my setup and yours, so could you please tell me which version of TensorFlow you used?
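
    For reference, the ValueError above comes from the TensorFlow 1.x API change that made sigmoid_cross_entropy_with_logits accept only named arguments. A minimal sketch of the adjustment, assuming the call at train.py line 98 quoted in the traceback:

    # Before (pre-1.0 positional style, as quoted in the traceback):
    # g_loss_fake = tf.reduce_mean(
    #     tf.nn.sigmoid_cross_entropy_with_logits(d_logits_fake, tf.ones_like(d_logits_fake)))

    # After (TensorFlow >= 1.0 requires keyword arguments):
    g_loss_fake = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))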

    opened by zzw1123 3
  • Is the output image size of 256 x 256 an option – or is just 64 x 64 px possible?

    Hey, it's me again – browsing through your other repos I found this gem, seems fun! A few months ago I tested another gender-swap network written in TF, but the output resolution was hardcoded and I couldn't figure out how to change it (with my limited knowledge of TF). Your version again seems a lot easier to read – due to the usage of the TensorLayer library?

    I'm using the celebA dataset and have left all the tf.flags at their defaults. So the default image size is 64 x 64 px, but I've seen that you've also written quite a few lines in train.py and model.py for a 256 x 256 px option.

    if FLAGS.image_size == 64:
        generator = model.generator
        discriminator = model.discriminator
        imageEncoder = model.imageEncoder
    # elif FLAGS.image_size == 256:
    #     generator = model.generator_256
    #     discriminator = model.discriminator_256
    #     imageEncoder = model.imageEncoder_256
    else:
        raise Exception("image_size should be 64 or 256")
    
    ################## 256x256x3
    def generator_256(inputs, is_train=True, reuse=False):
    (...)
    def discriminator_256(inputs, is_train=True, reuse=False):
    (...)
    

    Since the second branch (elif FLAGS.image_size == 256:) is commented out, the default 64 x 64 px generator and encoder are never replaced, so setting flags.DEFINE_integer("image_size", ...) in train.py to 256 doesn't really change the size – is this correct?

    I've tried to uncomment the code and enable the elif line but then ran into this error: ValueError: Shapes (64, 64, 64, 256) and (64, 32, 32, 256) are not compatible

    You've added generator_256, discriminator_256 and imageEncoder_256 to model.py, so I'm wondering whether you just experimented with this image size and then discarded the option (leaving only the 64 x 64 image_size path), or whether I'm missing something here...

    There is also a commented-out flag for output_size – but this variable doesn't show up anywhere else, so I guess it's from a previous version of your code: # flags.DEFINE_integer("output_size", 64, "The size of the output images to produce [64]")

    And this one is also non-functional: # flags.DEFINE_integer("train_size", np.inf, "The size of train images [np.inf]")

    I just wondered if it's possible to crank up the training and output resolution to 256 x 256 px (and maybe finish the training process this year – when I get my 1080 Ti 😎).

    Will try to finish the 64 x 64 px training first and save the model .npz files for later, but it would be interesting to know whether the mentioned portions of your code are still functional.

    Thanks!

    opened by subzerofun 1