Shape and Material Capture at Home, CVPR 2021.

Daniel Lichy, Jiaye Wu, Soumyadip Sengupta, David Jacobs

A bare-bones capture setup

Overview

This is the official code release for the paper Shape and Material Capture at Home. The code enables you to reconstruct a 3D mesh and Cook-Torrance BRDF from one or more images captured with a flashlight or camera with flash.

We provide:

  • The trained RecNet model.
  • Code to test on the DiLiGenT dataset.
  • Code to test on our dataset from the paper.
  • Code to test on your own dataset.
  • Code to train a new model, including code for visualization and logging.

Dependencies

This project uses the following dependencies:

  • Python 3.8
  • PyTorch (version = 1.8.1)
  • torchvision
  • numpy
  • scipy
  • opencv
  • OpenEXR (only required for training)

The easiest way to run the code is to create a virtual environment and install the dependencies with pip, e.g.:

# Create a new python3.8 environment named py3.8
virtualenv py3.8 -p python3.8

# Activate the created environment
source py3.8/bin/activate

# Upgrade pip
pip install --upgrade pip

# Install the dependencies
python -m pip install -r requirements.txt
# or, if you do not need OpenEXR (only required for training):
python -m pip install -r requirements_no_exr.txt
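
After installing, a quick sanity check along these lines (a minimal sketch, nothing project-specific) confirms that the main packages import and whether a GPU is visible:

# Minimal sketch: confirm the core dependencies import and report versions.
# Nothing here is project-specific; it just checks the environment.
import cv2
import numpy as np
import scipy
import torch
import torchvision

print("PyTorch:", torch.__version__)        # the listed version is 1.8.1
print("torchvision:", torchvision.__version__)
print("numpy:", np.__version__)
print("scipy:", scipy.__version__)
print("OpenCV:", cv2.__version__)
print("CUDA available:", torch.cuda.is_available())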

Capturing your own dataset

Multi-image captures

The video below shows how to capture the (up to) six images for your own dataset. Angles are approximate and can be estimated by eye. The camera should be approximately 1 to 4 feet from the object. The flashlight should be far enough from the object that the entire object is inside its illumination cone.

We used this flashlight, but any bright flashlight should work. We used this tripod, which comes with a handy remote for iPhone and Android.

Please see the Project Page for a higher resolution version of this video.

Example reconstructions:


Single image captures

Our network also provides state-of-the-art results for reconstructing shape and material from a single flash image.

Examples captured with just an iPhone with flash enabled in a dim room (complete darkness is not needed):


Mask Making

For best performance you should supply a segmentation mask with your image. For our paper we used https://github.com/saic-vul/fbrs_interactive_segmentation, which enables mask making with just a few clicks.

Normal prediction results are reasonable without the mask, but integrating normals to a mesh without the mask can be challenging.
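
The mask is supplied as mask.ext in each sample folder (see the dataset layout below), so you normally just save it alongside the images. If you want to check that a mask lines up with its image, a minimal sketch like the following works; the file names are placeholders:

# Minimal sketch: apply a binary mask to a captured image to check alignment.
# File names are placeholders; adapt them to your own capture.
import cv2

image = cv2.imread("2.png")                          # e.g. the front image
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # segmentation mask
mask = (mask > 127).astype(image.dtype)              # binarize to 0/1
masked = image * mask[:, :, None]                    # zero out the background
cv2.imwrite("2_masked.png", masked)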

Test RecNet on the DiLiGenT dataset

# Download and prepare the DiLiGenT dataset
sh scripts/prepare_diligent_dataset.sh

# Test on 3 DiLiGenT images from the front, front-right, and front-left
# if you only have CPUs remove the --gpu argument
python eval_diligent.py results_path --gpu

# To test on a different subset of DiLiGenT images use the argument --image_nums n1 n2 n3 n4 n5 n6,
# where n1 to n6 are the image indices of the right, front-right, front, front-left, left, and above
# images, respectively. For images that are not present, set the image number to -1,
# e.g. to test on only the front image (image number 51) run:
python eval_diligent.py results_path --gpu --image_nums -1 -1 51 -1 -1 -1 
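
If you only use a few of the six views, the --image_nums list can be assembled programmatically. The sketch below simply builds and runs the command shown above; the view ordering follows the comment in the example, and the image numbers in the dict are placeholders (51 is the front image mentioned above):

# Minimal sketch: build the eval_diligent.py call for a subset of views.
# Order of --image_nums: right, front-right, front, front-left, left, above.
# Views you did not capture get -1. Image numbers here are placeholders,
# except 51, which is the front image used in the example above.
import subprocess

captured = {"front": 51}                 # view name -> DiLiGenT image number
order = ["right", "front-right", "front", "front-left", "left", "above"]
image_nums = [str(captured.get(view, -1)) for view in order]

cmd = ["python", "eval_diligent.py", "results_path", "--gpu",
       "--image_nums", *image_nums]
subprocess.run(cmd, check=True)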

Test on our dataset/your own dataset

The easiest way to test on your own dataset or on our dataset is to format it as follows:

dataset_dir:

  • sample_name1:
    • 0.ext (right)
    • 1.ext (front-right)
    • 2.ext (front)
    • 3.ext (front-left)
    • 4.ext (left)
    • 5.ext (above)
    • mask.ext
  • sample_name2: (if not all images are present, just leave the missing ones out of the directory)
    • 2.ext (front)
    • 3.ext (front-left)
  • ...

Where .ext is the image extension, e.g. .png, .jpg, or .exr.

For an example of formatting your own dataset, please look in data/sample_dataset.
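
Before running evaluation, a small check along these lines (a sketch; dataset_dir is whatever directory you created) can confirm that each sample folder matches the layout above:

# Minimal sketch: report which views and masks are present in each sample folder.
# Image indices: 0=right, 1=front-right, 2=front, 3=front-left, 4=left, 5=above.
import os

dataset_dir = "data/sample_dataset"      # or the path to your own dataset_dir
extensions = (".png", ".jpg", ".exr")

for sample in sorted(os.listdir(dataset_dir)):
    sample_path = os.path.join(dataset_dir, sample)
    if not os.path.isdir(sample_path):
        continue
    files = os.listdir(sample_path)
    views = sorted(f for f in files
                   if f.lower().endswith(extensions) and f.split(".")[0].isdigit())
    has_mask = any(f.startswith("mask.") for f in files)
    print(f"{sample}: views {views}, mask: {'yes' if has_mask else 'no'}")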

Then run:

python eval_standard.py results_path --dataset_root path_to_dataset_dir --gpu

# To test on a sample of our dataset run
python eval_standard.py results_path --dataset_root data/sample_dataset --gpu

Download our real dataset

Coming Soon...

Integrating Normal Maps and Producing a Mesh

We include a script to integrate normals and produce a ply mesh with per vertex albedo and roughness.

After running eval_standard.py or eval_diligent.py there will be a file results_path/images/integration_data.csv. Running the following command will produce a ply mesh in results_path/images/sample_name/mesh.ply:

python integrate_normals.py results_path/images/integration_data.csv --gpu

This is the most time-intensive part of the reconstruction and takes about 3 minutes on a GPU and 5 minutes on a CPU.
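
To take a quick look at the reconstructed mesh, something like the following works. Note that trimesh is not one of the listed dependencies (install it separately if you want to use it), and sample_name is a placeholder for your sample folder:

# Minimal sketch: load and inspect the reconstructed mesh.
# trimesh is NOT a listed dependency of this repo; install it separately
# (pip install trimesh). "sample_name" is a placeholder.
import trimesh

mesh = trimesh.load("results_path/images/sample_name/mesh.ply")
print("vertices:", len(mesh.vertices), "faces:", len(mesh.faces))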

Training

To train RecNet from scratch:

python train.py log_dir --dr_dataset_root path_to_dr_dataset --sculpt_dataset_root path_to_sculpture_dataset --gpu

Download the training data

Coming Soon...

FAQ

Q1: What should I do if I have a problem running your code?

  • Please create an issue if you encounter errors when trying to run the code, or feel free to submit a bug report.

Citation

If you find this code or the provided models useful in your research, please cite it as:

@inproceedings{lichy_2021,
  title={Shape and Material Capture at Home},
  author={Lichy, Daniel and Wu, Jiaye and Sengupta, Soumyadip and Jacobs, David W.},
  booktitle={CVPR},
  year={2021}
}

Acknowledgement

Code used for downloading and loading the DiLiGenT dataset is adapted from https://github.com/guanyingc/SDPS-Net.
