i3DMM: Deep Implicit 3D Morphable Model of Human Heads

CVPR 2021 (Oral)

arXiv | Project Page

[Teaser figure]

This project is the official implementation of our work, i3DMM. Much of our code comes from DeepSDF's repository. We thank Park et al. for making their code publicly available.

The pretrained model is included in this repository.

Setup

  1. To get started, clone this repository into a local directory.
  2. Install Anaconda, if you don't already have it.
  3. Create a conda environment in the path with the following command:

conda create -p ./i3dmm_env

  4. Activate the conda environment from the same folder:

conda activate ./i3dmm_env

  5. Use the following commands to install the required packages:

conda install pytorch=1.1 cudatoolkit=10.0 -c pytorch
pip install opencv-python trimesh[all] scikit-learn mesh-to-sdf plyfile

Preparing Data

Rigid Alignment

We assume that all input data is rigidly aligned. Therefore, we provide reference 3D landmarks for aligning your test/training data. Please use the centroids.txt file in the model folder to align your data to these landmarks. The landmarks in the file appear in the following order:

  1. Right eye left corner
  2. Right eye right corner
  3. Left eye left corner
  4. Left eye right corner
  5. Nose tip
  6. Right lips corner
  7. Left lips corner
  8. Point on the chin

The following image shows these landmarks. The centroids.txt file consists of 8 lines, one per landmark in the order above; each line contains the landmark's x, y, and z coordinates separated by spaces.

Please see our paper for more information on rigid alignment.
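For convenience, here is a minimal alignment sketch (not part of the official code) that estimates the rigid rotation and translation mapping your 8 annotated landmarks onto the reference landmarks via the Kabsch algorithm, then applies it to a scan with trimesh. The input file names other than centroids.txt are placeholders.

# Minimal rigid-alignment sketch (assumption: 8 landmarks, one "x y z" per line).
import numpy as np
import trimesh

def load_landmarks(path):
    # centroids.txt holds 8 lines, each with "x y z" for one landmark.
    return np.loadtxt(path)  # shape (8, 3)

def rigid_transform(src, dst):
    # Kabsch algorithm: rotation R and translation t with dst ~ src @ R.T + t.
    src_c = src - src.mean(0)
    dst_c = dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t

ref = load_landmarks("model/centroids.txt")       # reference landmarks
own = load_landmarks("my_scan_landmarks.txt")     # placeholder: your scan's landmarks
R, t = rigid_transform(own, ref)

mesh = trimesh.load("my_scan.obj", force="mesh")  # placeholder mesh path
mesh.vertices = mesh.vertices @ R.T + t
mesh.export("my_scan_aligned.obj")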

Dataset

We closely follow the ShapeNet dataset's folder structure. Please see the example mesh folder in the dataset. The dataset is assumed to be structured as follows:


   
<dataset_name>/
    <class_name>/
        <instance_name>/
            models/
                <instance_name>.obj
                <instance_name>.mtl
                <texture_image>.jpg
                centroids.txt
                centroidsEars.txt

The model name should follow a specific structure, xxxxx_eyy, where xxxxx is a 5-character identifier for an identity and yy is a unique number specifying the expression or hairstyle. We follow e01-e10 for different expressions, where e07 is the neutral expression; e11-e13 are hairstyles in the neutral expression. The remaining expression identifiers are for test expressions.
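For example, abcde_e07 would be the neutral expression of identity abcde. A hypothetical helper (not part of this repository) to split a model name into its identity and expression parts:

def parse_model_name(name):
    """Split a model name of the form xxxxx_eyy into (identity, expression)."""
    identity, expression = name.split("_")
    assert len(identity) == 5 and expression.startswith("e")
    return identity, expression

parse_model_name("abcde_e07")  # -> ("abcde", "e07"), the neutral expression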

The centroids.txt file contains landmarks as described in the alignment step. Additionally, to train the model, one can also provide a centroidsEars.txt file, which contains the 3D ear landmarks in the following order:

  1. Left ear top
  2. Left ear left
  3. Left ear bottom
  4. Left ear right
  5. Right ear top
  6. Right ear left
  7. Right ear bottom
  8. Right ear right

These 8 landmarks are shown in the following image. The file is organized like centroids.txt. Please see the example mesh folder in the dataset.

Once the dataset is prepared, create the splits as shown in the model/headModel/splits/*.json files. These files are similar to the split files in DeepSDF.
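For reference, a split file in the DeepSDF style maps a dataset name and a class name to a list of model names; the keys and entries below are placeholders following the xxxxx_eyy convention:

{
    "dataset_name": {
        "class_name": [
            "abcde_e07",
            "abcde_e11",
            "fghij_e07"
        ]
    }
}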

Preprocessing

The following commands preprocess the meshes from the dataset described above and place them in the data folder. The commands must be run from the "model" folder. To preprocess the training data:

python preprocessData.py --samples_directory ./data --input_meshes_directory <path_to_dataset> -e headModel -s Train

To preprocess test data:

python preprocessData.py --samples_directory ./data --input_meshes_directory <path_to_dataset> -e headModel -s Test

'headModel' is the folder containing the network settings file 'specs.json'. This JSON file also contains the split file and preprocessed data paths. The split files are in model/headModel/splits/*.json; they indicate which files are used for testing, training, and reference shape initialisation.

Training the Model

Once data is preprocessed, one can train the model with the following command.

python train_i3DMM.py -e headModel

When working with a large dataset, please consider using the --batch_split option with a power of 2 (2, 4, 8, 16, etc.). The following command is an example.

python train_i3DMM.py -e headModel --batch_split 2

Additionally, if one wants to use landmark supervision or the ear constraints for long hair (see the paper for details), please export the centroids and ear centroids as dictionaries in npy files (8 face landmarks: eightCentroids.npy; ear landmarks: gtEarCentroids.npy).

An example entry in the dictionary: {"xxxxx_eyy": 8x3 numpy array}
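A hedged sketch of how such dictionaries could be exported; the dataset paths are placeholders, the keys are model names, and the values are the 8x3 landmark arrays:

import numpy as np

# Placeholder paths; use your dataset's actual model folders.
centroids = {
    "abcde_e07": np.loadtxt("dataset/heads/abcde_e07/models/centroids.txt"),      # (8, 3)
    # ... one entry per mesh
}
ear_centroids = {
    "abcde_e07": np.loadtxt("dataset/heads/abcde_e07/models/centroidsEars.txt"),  # (8, 3)
}

# np.save pickles the dictionaries; load them later with np.load(..., allow_pickle=True).
np.save("eightCentroids.npy", centroids)
np.save("gtEarCentroids.npy", ear_centroids)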

Fitting i3DMM to Preprocessed Data

Please see the preprocessing section for preparing the data. Once the data is ready, please use the following command to fit i3DMM to the data.

To save as image:

python fit_i3DMM_to_mesh.py -e headModel -c latest -d data -s <split_file> --imNM True

To save as a mesh:

python fit_i3DMM_to_mesh.py -e headModel -c latest -d data -s <split_file> --imNM False

The test dataset can be downloaded with this link. Please extract it and move the 'heads' folder to the dataset folder.

Citation

Please cite our paper if you use any part of this repository.

@inproceedings{yenamandra2020i3dmm,
 author = {T Yenamandra and A Tewari and F Bernard and HP Seidel and M Elgharib and D Cremers and C Theobalt},
 title = {i3DMM: Deep Implicit 3D Morphable Model of Human Heads},
 booktitle = {Proceedings of the IEEE / CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
 month = {June},
 year = {2021}
}