Only a Matter of Style: Age Transformation Using a Style-Based Regression Model

Overview

The task of age transformation illustrates the change of an individual's appearance over time. Accurately modeling this complex transformation over an input facial image is extremely challenging as it requires making convincing, possibly large changes to facial features and head shape, while still preserving the input identity. In this work, we present an image-to-image translation method that learns to directly encode real facial images into the latent space of a pre-trained unconditional GAN (e.g., StyleGAN) subject to a given aging shift. We employ a pre-trained age regression network to explicitly guide the encoder toward generating the latent codes corresponding to the desired age. In this formulation, our method approaches the continuous aging process as a regression task between the input age and the desired target age, providing fine-grained control over the generated image. Moreover, unlike other approaches that operate solely in the latent space using a prior on the path controlling age, our method learns a more disentangled, non-linear path. We demonstrate that the end-to-end nature of our approach, coupled with the rich semantic latent space of StyleGAN, allows for further editing of the generated images. Qualitative and quantitative evaluations show the advantages of our method compared to state-of-the-art approaches.

Description

Official Implementation of our Style-based Age Manipulation (SAM) paper for both training and evaluation. SAM allows modeling fine-grained age transformation from a single input facial image.

Getting Started

Prerequisites

  • Linux or macOS
  • NVIDIA GPU + CUDA CuDNN (CPU may be possible with some modifications, but is not inherently supported)
  • Python 3

Installation

  • Dependencies:
    We recommend running this repository using Anaconda. All dependencies for defining the environment are provided in environment/sam_env.yaml.
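    For example, with a standard Anaconda installation, the environment can typically be created and activated as follows (we assume the environment name sam_env, inferred from the file name; the actual name is set inside the YAML):

conda env create -f environment/sam_env.yaml
conda activate sam_env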

Pretrained Models

Please download the pretrained aging model from the following link.

  • SAM: SAM trained on the FFHQ dataset for age transformation.

In addition, we provide various auxiliary models needed for training your own SAM model from scratch.
This includes the pretrained pSp encoder model used to generate the encodings of the input image and the aging classifier used to compute the aging loss during training.

  • pSp Encoder: pSp model taken from pixel2style2pixel, trained on the FFHQ dataset for StyleGAN inversion.
  • FFHQ StyleGAN: StyleGAN model pretrained on FFHQ, taken from rosinality, with 1024x1024 output resolution.
  • IR-SE50 Model: pretrained IR-SE50 model taken from TreB1eN, used in our ID loss during training.
  • VGG Age Classifier: VGG age classifier from DEX, fine-tuned on the FFHQ-Aging dataset for use in our aging loss.
By default, we assume that all auxiliary models are downloaded and saved to the directory pretrained_models. However, you may use your own paths by changing the necessary values in configs/paths_config.py.

Training

Preparing your Data

Please refer to configs/paths_config.py to define the necessary data paths and model paths for training and inference.
Then, refer to configs/data_configs.py to define the source/target data paths for the train and test sets as well as the transforms to be used for training and inference.

As an example, we can first go to configs/paths_config.py and define:

dataset_paths = {
    'ffhq': '/path/to/ffhq/images256x256',
    'celeba_test': '/path/to/CelebAMask-HQ/test_img',
}

Then, in configs/data_configs.py, we define:

DATASETS = {
	'ffhq_aging': {
		'transforms': transforms_config.AgingTransforms,
		'train_source_root': dataset_paths['ffhq'],
		'train_target_root': dataset_paths['ffhq'],
		'test_source_root': dataset_paths['celeba_test'],
		'test_target_root': dataset_paths['celeba_test'],
	}
}

When defining the datasets for training and inference, we will use the values defined in the above dictionary.
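As a quick sanity check, these entries can be read back directly (a minimal sketch, assuming you run it from the repository root):

from configs import data_configs

dataset_args = data_configs.DATASETS['ffhq_aging']
print(dataset_args['train_source_root'])   # /path/to/ffhq/images256x256
print(dataset_args['test_source_root'])    # /path/to/CelebAMask-HQ/test_img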

Training SAM

The main training script can be found in scripts/train.py.
Intermediate training results are saved to opts.exp_dir. This includes checkpoints, train outputs, and test outputs.
Additionally, if you have tensorboard installed, you can visualize tensorboard logs in opts.exp_dir/logs.

Training SAM with the settings used in the paper can be done by running the following command:

python scripts/train.py \
--dataset_type=ffhq_aging \
--exp_dir=/path/to/experiment \
--workers=6 \
--batch_size=6 \
--test_batch_size=6 \
--test_workers=6 \
--val_interval=2500 \
--save_interval=10000 \
--start_from_encoded_w_plus \
--id_lambda=0.1 \
--lpips_lambda=0.1 \
--lpips_lambda_aging=0.1 \
--lpips_lambda_crop=0.6 \
--l2_lambda=0.25 \
--l2_lambda_aging=0.25 \
--l2_lambda_crop=1 \
--w_norm_lambda=0.005 \
--aging_lambda=5 \
--cycle_lambda=1 \
--input_nc=4 \
--target_age=uniform_random \
--use_weighted_id_loss
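Two of these flags deserve a brief note: --target_age=uniform_random samples a random target age per training image, and --input_nc=4 reflects, in our reading, that the target age is fed to the encoder as a fourth, constant channel alongside the RGB input. A minimal sketch of that conditioning, under our assumptions (not the repository's exact code):

import torch

def add_age_channel(images, target_ages):
    """Concatenate a constant age channel to a batch of RGB images.

    images: [B, 3, H, W]; target_ages: [B], ages in [0, 100].
    """
    b, _, h, w = images.shape
    age = (target_ages.view(b, 1, 1, 1) / 100.0).expand(b, 1, h, w)
    return torch.cat([images, age], dim=1)  # [B, 4, H, W]

target_ages = torch.randint(0, 101, (4,)).float()  # uniform_random target ages
x = add_age_channel(torch.randn(4, 3, 256, 256), target_ages)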

Additional Notes

  • See options/train_options.py for all training-specific flags.
  • Note that using the flag --start_from_encoded_w_plus requires you to specify the path to the pretrained pSp encoder.
    By default, this path is taken from configs.paths_config.model_paths['pretrained_psp'].
  • If you wish to resume from a specific checkpoint (e.g. a pretrained SAM model), you may do so using --checkpoint_path.

Notebooks

Inference Notebook

To help visualize the results of SAM, we provide a Jupyter notebook found in notebooks/inference_playground.ipynb.
The notebook will download the pretrained aging model and run inference on the images found in notebooks/images.

MP4 Notebook

To show full-lifespan results using SAM, we provide an additional notebook, notebooks/animation_inference_playground.ipynb, which runs aging at multiple ages between 0 and 100 and interpolates between the results to display the full aging process. The results are saved as MP4 files in notebooks/animations, showing the aging and de-aging results.
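The interpolation itself can be as simple as linear blending between the latent codes of consecutive target ages before decoding each frame; a minimal sketch (our assumption of the mechanism, not the notebook's exact code):

import torch

def interpolate_latents(w_a, w_b, steps=25):
    """Linearly interpolate between two W+ latent codes for smooth frames."""
    return [torch.lerp(w_a, w_b, float(alpha)) for alpha in torch.linspace(0, 1, steps)]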

Testing

Inference

Once you have trained your model, or if you're using a pretrained SAM model, you can use scripts/inference.py to run inference on a set of images.
For example:

python scripts/inference.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--couple_outputs \
--target_age=0,10,20,30,40,50,60,70,80

Additional notes to consider:

  • During inference, the options used during training are loaded from the saved checkpoint and are then updated using the test options passed to the inference script (see the sketch after this list).
  • Adding the flag --couple_outputs will save an additional image containing the input and output images side-by-side in the sub-directory inference_coupled. Otherwise, only the output image is saved to the sub-directory inference_results.
  • In the above example, we will run age transformation with target ages 0,10,...,80.
    • The results of each target age are saved to the sub-directories inference_results/TARGET_AGE and inference_coupled/TARGET_AGE.
  • By default, the images will be saved at a resolution of 1024x1024, the original output size of StyleGAN.
    • If you wish to save outputs resized to a resolution of 256x256, you can do so by adding the flag --resize_outputs.
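Regarding the first note above, the checkpoint-then-override flow can be pictured as follows (a minimal sketch, assuming the training options are stored under an 'opts' key in the checkpoint, as in pixel2style2pixel):

import torch
from argparse import Namespace

test_opts = Namespace(test_batch_size=4, couple_outputs=True)  # example parsed CLI flags
ckpt = torch.load('experiment/checkpoints/best_model.pt', map_location='cpu')
opts = dict(ckpt['opts'])      # options saved at training time (assumed key)
opts.update(vars(test_opts))   # inference-time flags take precedence
opts = Namespace(**opts)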

Side-by-Side Inference

The above inference script saves each aging result in a separate sub-directory per target age. Sometimes, however, it is more convenient to save all aging results of a given input side-by-side.

To do so, we provide the script scripts/inference_side_by_side.py, which works in a similar manner to the regular inference script:

python scripts/inference_side_by_side.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--target_age=0,10,20,30,40,50,60,70,80

Here, all aging results for target ages 0,10,...,80 will be saved side-by-side with the original input image.
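Conceptually, the side-by-side layout is a horizontal concatenation of the input with each aged output; a minimal sketch (our illustration, not the script's exact code):

import numpy as np
from PIL import Image

def save_side_by_side(images, out_path):
    """Save a list of equally sized PIL images as one horizontal strip."""
    strip = np.concatenate([np.asarray(img) for img in images], axis=1)
    Image.fromarray(strip).save(out_path)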

Reference-Guided Inference

In the paper, we demonstrated how one can perform style mixing on the fine-level style inputs with a reference image to control global features such as hair color.

To perform style mixing using reference images, we provide the script scripts/reference_guided_inference.py. Here, we first perform aging using the specified target age(s); style mixing is then performed using the specified reference images and the specified layers. For example, one can run:

python scripts/reference_guided_inference.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--ref_images_paths_file=/path/to/ref_list.txt \
--latent_mask=8,9 \
--target_age=50,60,70,80

Here, the reference images should be specified in the file defined by --ref_images_paths_file and should have the following format:

/path/to/reference/1.jpg
/path/to/reference/2.jpg
/path/to/reference/3.jpg
/path/to/reference/4.jpg
/path/to/reference/5.jpg

In the above example, we perform aging with 4 different target ages. For each target age, we first transform the test samples defined by --data_path and then perform style mixing on layers 8 and 9, as defined by --latent_mask. The results for each target age are saved in their own sub-directory.
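At the latent level, the mixing simply overwrites the masked W+ layers of the aged code with those of the reference; a minimal sketch, assuming [18, 512] W+ codes for a 1024x1024 StyleGAN2 (an illustration, not the repository's exact code):

import torch

def mix_styles(aged_latent, reference_latent, latent_mask=(8, 9)):
    """Overwrite the masked W+ layers of the aged latent with the reference's."""
    mixed = aged_latent.clone()
    for layer in latent_mask:
        mixed[layer] = reference_latent[layer]
    return mixed

# Coarse layers (age, pose, shape) stay with the aged code, while the chosen
# finer layers inherit global style (e.g. hair color) from the reference.
mixed = mix_styles(torch.randn(18, 512), torch.randn(18, 512))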

Style Mixing

Instead of performing style mixing using a reference image, you can perform style mixing using randomly generated w latent vectors by running the script style_mixing.py. This script works in a similar manner to the reference-guided inference, except that you do not need to specify the --ref_images_paths_file flag.
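In that case, the reference codes are sampled rather than encoded; a hedged sketch using rosinality's StyleGAN2 interface (get_latent maps z through the mapping network; treat the surrounding usage as an assumption):

import torch

def sample_w_plus(generator, n_latents=18):
    """Sample z ~ N(0, I) and map it to a W+ code for per-layer mixing.

    'generator' is a loaded rosinality StyleGAN2 Generator (assumed available).
    """
    z = torch.randn(1, 512)
    w = generator.get_latent(z)                    # [1, 512]
    return w.unsqueeze(1).repeat(1, n_latents, 1)  # [1, 18, 512]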

Repository structure

Path Description
SAM Repository root folder
├  configs Folder containing configs defining model/data paths and data transforms
├  criteria Folder containing various loss criteria for training
├  datasets Folder with various dataset objects and augmentations
├  docs Folder containing images displayed in the README
├  environment Folder containing the Anaconda environment used in our experiments
├  models Folder containing all the models and training objects
│  ├  encoders Folder containing various architecture implementations
│  ├  stylegan2 StyleGAN2 model from rosinality
│  ├  psp.py Implementation of the pSp encoder
│  └  dex_vgg.py Implementation of the DEX VGG classifier used in computing the aging loss
├  notebooks Folder with Jupyter notebooks containing the SAM inference playgrounds
├  options Folder with training and test command-line options
├  scripts Folder with running scripts for training and inference
├  training Folder with main training logic and the Ranger implementation from lessw2020
└  utils Folder with various utility functions

Credits

StyleGAN2 model and implementation:
https://github.com/rosinality/stylegan2-pytorch
Copyright (c) 2019 Kim Seonghyeon
License (MIT) https://github.com/rosinality/stylegan2-pytorch/blob/master/LICENSE

IR-SE50 model and implementations:
https://github.com/TreB1eN/InsightFace_Pytorch
Copyright (c) 2018 TreB1eN
License (MIT) https://github.com/TreB1eN/InsightFace_Pytorch/blob/master/LICENSE

Ranger optimizer implementation:
https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer
License (Apache License 2.0) https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer/blob/master/LICENSE

LPIPS model and implementation:
https://github.com/S-aiueo32/lpips-pytorch
Copyright (c) 2020, Sou Uchida
License (BSD 2-Clause) https://github.com/S-aiueo32/lpips-pytorch/blob/master/LICENSE

DEX VGG model and implementation:
https://github.com/InterDigitalInc/HRFAE
Copyright (c) 2020, InterDigital R&D France
License https://github.com/InterDigitalInc/HRFAE/blob/master/LICENSE.txt

pSp model and implementation:
https://github.com/eladrich/pixel2style2pixel
Copyright (c) 2020 Elad Richardson, Yuval Alaluf
License https://github.com/eladrich/pixel2style2pixel/blob/master/LICENSE

Acknowledgments

This code borrows heavily from pixel2style2pixel.

Citation

If you use this code for your research, please cite our paper Only a Matter of Style: Age Transformation Using a Style-Based Regression Model:

@misc{alaluf2021matter,
      title={Only a Matter of Style: Age Transformation Using a Style-Based Regression Model}, 
      author={Yuval Alaluf and Or Patashnik and Daniel Cohen-Or},
      year={2021},
      eprint={2102.02754},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}