Rendering color and depth images for ShapeNet models.

Color & Depth Renderer for ShapeNet

Overview

This library provides tools for rendering multi-view color and depth images of ShapeNet models. Physically based rendering (PBR) is supported, based on Blender 2.79.


Outputs

  1. Color images (20 views)

     (examples: color_1.png, color_2.png)

  2. Depth images (20 views)

     (examples: depth_1.png, depth_2.png)

  3. Point cloud and normals (back-projected from the color & depth images)

     (examples: point_cloud_1.png, point_cloud_2.png)

  4. Watertight meshes (fused from the depth maps)

     (examples: mesh_1.png, mesh_2.png)


Install

  1. We recommend installing this repository with conda.
    conda env create -f environment.yml
    conda activate renderer
    
  2. Install Pyfusion by
    cd ./external/pyfusion
    mkdir build
    cd ./build
    cmake ..
    make
    
    Afterwards, compile the Cython code in ./external/pyfusion by
    cd ./external/pyfusion
    python setup.py build_ext --inplace
    
  3. Download and extract Blender 2.79b, then specify the path to your Blender executable in ./setting.py by
    g_blender_excutable_path = '../../blender-2.79b-linux-glibc219-x86_64/blender'
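
    To sanity-check the configured path, you can try launching Blender in background mode from Python. This is an optional quick check rather than part of the repository's scripts, and it assumes setting.py is importable from the repository root:

    import subprocess

    # Path configured above; importing it from setting.py is an assumption
    # about the repository layout.
    from setting import g_blender_excutable_path

    # Ask Blender for its version without opening a GUI.
    result = subprocess.run([g_blender_excutable_path, '--background', '--version'],
                            capture_output=True, text=True, check=True)
    print(result.stdout.splitlines()[0])  # e.g. "Blender 2.79 (sub 0)"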
    

Usage

  1. Normalize ShapeNet models to a unit cube by

    python normalize_shape.py
    

    The ShapeNetCore.v2 dataset should be placed in ./datasets/ShapeNetCore.v2. Only a few sample models are included in this repository.
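
    As a rough illustration of what this step does (not the actual code in normalize_shape.py, which handles the ShapeNet files itself), a mesh can be normalized to a unit cube by centering it on its bounding box and scaling by the longest side. The sketch below uses trimesh and hypothetical file paths:

    import trimesh

    # Hypothetical input/output paths for illustration only.
    mesh = trimesh.load('model.obj', force='mesh')

    # Axis-aligned bounding box: bounds[0] is the min corner, bounds[1] the max.
    bbox_min, bbox_max = mesh.bounds
    center = (bbox_min + bbox_max) / 2.0
    scale = (bbox_max - bbox_min).max()

    # Center the mesh at the origin and scale its longest side to 1.
    mesh.apply_translation(-center)
    mesh.apply_scale(1.0 / scale)

    mesh.export('model_normalized.obj')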

  2. Generate multiple camera viewpoints for rendering by

    python create_viewpoints.py
    

    The camera extrinsic parameters will be saved to ./view_points.txt; you can customize the viewpoints in this script.
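
    For intuition, camera positions for the 20 views can be generated by sampling directions on a sphere around the object. The sketch below is a minimal stand-in for create_viewpoints.py (the real script defines its own sampling scheme and file format); it writes one "x y z" position per line using a Fibonacci sphere:

    import numpy as np

    n_views = 20
    radius = 2.0  # assumed camera distance from the object center

    # Spread directions roughly evenly over the sphere (Fibonacci sphere).
    i = np.arange(n_views)
    phi = np.arccos(1.0 - 2.0 * (i + 0.5) / n_views)   # polar angle
    theta = np.pi * (1.0 + 5.0 ** 0.5) * i             # azimuth (golden angle)

    points = radius * np.stack([np.sin(phi) * np.cos(theta),
                                np.sin(phi) * np.sin(theta),
                                np.cos(phi)], axis=1)

    np.savetxt('./view_points.txt', points, fmt='%.6f')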

  3. Run renderer to render color and depth images by

    python run_render.py
    

    The rendered images are saved in ./datasets/ShapeNetRenderings. The camera intrinsic and extrinsic parameters are saved in ./datasets/camera_settings. You can change the rendering configuration in ./settings.py, e.g., image size and resolution.
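
    Under the hood, renderers like this one typically launch Blender in headless mode and forward arguments to a rendering script. The snippet below only shows that general pattern; the actual invocation inside run_render.py may differ, and the script name and arguments here are placeholders:

    import subprocess
    from setting import g_blender_excutable_path  # configured during install

    # '--background' runs Blender without a GUI; everything after '--' is
    # forwarded to the Python script via sys.argv.
    subprocess.run([
        g_blender_excutable_path,
        '--background',
        '--python', 'render_one_model.py',          # placeholder script name
        '--',
        '--model', 'path/to/model_normalized.obj',  # placeholder arguments
    ], check=True)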

  4. The back-projected point cloud and corresponding normals can be visualized by

    python visualization/draw_pc_from_depth.py
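
    The core of this back-projection is the pinhole camera model: a pixel (u, v) with depth d maps to the camera-space point d * K^-1 * [u, v, 1]^T, which is then transformed to world space with the camera extrinsics. The sketch below is a generic version of that idea rather than the exact code in draw_pc_from_depth.py; the depth map, intrinsic matrix K, and extrinsics (R, t) are assumed inputs following the convention x_cam = R * X_world + t:

    import numpy as np

    def backproject(depth, K, R, t):
        """Back-project a depth map (H, W) into an (N, 3) world-space point cloud."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        valid = depth > 0

        # Homogeneous pixel coordinates, scaled by their depth values.
        pixels = np.stack([u[valid], v[valid], np.ones(valid.sum())], axis=0)
        cam_points = np.linalg.inv(K) @ (pixels * depth[valid])

        # Camera space -> world space: X = R^T (x_cam - t).
        world_points = R.T @ (cam_points - t.reshape(3, 1))
        return world_points.T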
    
  5. Watertight meshes can be obtained by

    python depth_fusion.py
    

    The reconstructed meshes are saved in ./datasets/ShapeNetCore.v2_watertight.
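
    Conceptually, depth fusion integrates each rendered depth map into a truncated signed distance field (TSDF) over the normalized unit cube and then extracts the zero level set as a watertight surface. The sketch below illustrates that averaging idea only; it is not how depth_fusion.py or the compiled pyfusion extension is implemented, and the camera convention (x_cam = R * X_world + t) and grid extent are assumptions:

    import numpy as np

    def fuse_tsdf(depths, Ks, Rs, ts, resolution=64, truncation=0.05):
        """Fuse depth maps into a TSDF grid over [-0.5, 0.5]^3 (conceptual sketch)."""
        axis = np.linspace(-0.5, 0.5, resolution)
        grid = np.stack(np.meshgrid(axis, axis, axis, indexing='ij'), axis=-1)
        voxels = grid.reshape(-1, 3)                      # (V, 3) voxel centers

        tsdf = np.zeros(len(voxels))
        weight = np.zeros(len(voxels))

        for depth, K, R, t in zip(depths, Ks, Rs, ts):
            h, w = depth.shape
            cam = voxels @ R.T + t                        # world -> camera space
            z = np.maximum(cam[:, 2], 1e-6)               # guard against z <= 0
            uv = cam @ K.T                                # perspective projection
            u = np.round(uv[:, 0] / z).astype(int)
            v = np.round(uv[:, 1] / z).astype(int)

            inside = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
            observed = np.zeros(len(voxels))
            observed[inside] = depth[v[inside], u[inside]]

            # Signed distance along the viewing ray, truncated to +/- truncation.
            sdf = np.clip(observed - z, -truncation, truncation)
            # Update voxels with a valid depth measurement that are not far
            # behind the observed surface.
            valid = inside & (observed > 0) & (sdf > -truncation)

            tsdf[valid] += sdf[valid]
            weight[valid] += 1.0

        tsdf = np.where(weight > 0, tsdf / np.maximum(weight, 1.0), truncation)
        return tsdf.reshape(resolution, resolution, resolution)

    # The zero level set of the returned grid can then be meshed, e.g. with
    # skimage.measure.marching_cubes.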


Citation

This library is used for data preprocessing in our work SK-PCN. If you find it helpful, please consider citing:

@inproceedings{NEURIPS2020_ba036d22,
 author = {Nie, Yinyu and Lin, Yiqun and Han, Xiaoguang and Guo, Shihui and Chang, Jian and Cui, Shuguang and Zhang, Jian.J},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
 pages = {16119--16130},
 publisher = {Curran Associates, Inc.},
 title = {Skeleton-bridged Point Completion: From Global Inference to Local Adjustment},
 url = {https://proceedings.neurips.cc/paper/2020/file/ba036d228858d76fb89189853a5503bd-Paper.pdf},
 volume = {33},
 year = {2020}
}


License

This repository is released under the MIT License.

Owner

Yinyu Nie
Currently a postdoctoral researcher in the Visual Computing Group, Technical University of Munich.