Convolutional Occupancy Networks

Paper | Supplementary | Video | Teaser Video | Project Page | Blog Post

This repository contains the implementation of the paper:

Convolutional Occupancy Networks
Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys and Andreas Geiger
ECCV 2020 (spotlight)

If you find our code or paper useful, please consider citing

@inproceedings{Peng2020ECCV,
 author =  {Songyou Peng and Michael Niemeyer and Lars Mescheder and Marc Pollefeys and Andreas Geiger},
 title = {Convolutional Occupancy Networks},
 booktitle = {European Conference on Computer Vision (ECCV)},
 year = {2020}}

Contact Songyou Peng for questions, comments, and bug reports.

Installation

First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an Anaconda environment called conv_onet using

conda env create -f environment.yaml
conda activate conv_onet

Note: you might need to install torch-scatter manually following the official instructions:

pip install torch-scatter==2.0.4 -f https://pytorch-geometric.com/whl/torch-1.4.0+cu101.html
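
As a quick sanity check (not part of the official setup), you can verify that both packages import cleanly:

import torch
import torch_scatter  # both imports should succeed without errors

print(torch.__version__)          # expected: 1.4.0 for the pinned wheels above
print(torch_scatter.__version__)  # expected: 2.0.4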

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace

Demo

First, run the script to get the demo data:

bash scripts/download_demo_data.sh

Reconstruct Large-Scale Matterport3D Scene

You can now quickly test our code on the real-world scene shown in the teaser. To this end, simply run:

python generate.py configs/pointcloud_crop/demo_matterport.yaml

This script should create a folder out/demo_matterport/generation where the output meshes and input point cloud are stored.
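
If you want to inspect the result programmatically, here is a minimal sketch using trimesh (the exact mesh file name inside the generation folder is an assumption; list the directory to find it):

import trimesh  # third-party mesh library, installed separately

# Hypothetical path: check out/demo_matterport/generation for the actual file name.
mesh = trimesh.load('out/demo_matterport/generation/meshes/mesh.off')
print(f'{len(mesh.vertices)} vertices, {len(mesh.faces)} faces')
mesh.show()  # opens an interactive viewer if a display backend is available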

Note: This experiment corresponds to our fully convolutional model, which we train only on small crops from our synthetic room dataset. This model can be directly applied to large-scale real-world scenes with real units and generate meshes in a sliding-window manner, as shown in the teaser. More details can be found in Section 6 of our supplementary material. For training, you can use the config configs/pointcloud_crop/room_grid64.yaml.
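
The sliding-window idea itself is simple: the scene is split into overlapping crops, each crop is encoded and decoded independently, and the per-crop predictions are stitched together. Below is an illustrative NumPy sketch of the cropping step only (crop size and stride are hypothetical; the actual logic and values live in generate.py and the crop configs):

import numpy as np

def sliding_crops(points, crop_size=1.0, stride=0.5):
    """Yield (origin, crop) pairs of overlapping xy-windows over `points`."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    for x in np.arange(lo[0], hi[0], stride):
        for y in np.arange(lo[1], hi[1], stride):
            in_x = (points[:, 0] >= x) & (points[:, 0] < x + crop_size)
            in_y = (points[:, 1] >= y) & (points[:, 1] < y + crop_size)
            mask = in_x & in_y
            if mask.any():
                yield (x, y), points[mask]

points = np.random.rand(10000, 3) * 5.0  # stand-in for a real scene
for origin, crop in sliding_crops(points):
    pass  # encode `crop`, predict occupancies, stitch the mesh at `origin`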

Reconstruct Synthetic Indoor Scene

You can also test on our synthetic room dataset by running:

python generate.py configs/pointcloud/demo_syn_room.yaml

Dataset

To evaluate a pretrained model or train a new model from scratch, you have to obtain the respective dataset. In this paper, we consider four different datasets:

ShapeNet

You can download the dataset (73.4 GB) by running the script from Occupancy Networks. Afterwards, you should have the dataset in the data/ShapeNet folder.

Synthetic Indoor Scene Dataset

For scene-level reconstruction, we create a synthetic dataset of 5000 scenes with multiple objects from ShapeNet (chair, sofa, lamp, cabinet, table). There are also ground planes and randomly sampled walls.

You can download our preprocessed data (144 GB) using

bash scripts/download_data.sh

This script should download and unpack the data automatically into the data/synthetic_room_dataset folder.
Note: We also provide point-wise semantic labels in the dataset, which might be useful.
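
The preprocessed samples are stored as .npz archives; here is a minimal sketch for inspecting one (the file path and key names below are assumptions, so check npz.files for the actual contents):

import numpy as np

# Hypothetical sample path; browse data/synthetic_room_dataset for real ones.
npz = np.load('data/synthetic_room_dataset/rooms_04/00000000/pointcloud.npz')
print(npz.files)  # lists the available arrays, e.g. points / normals / semantics
for key in npz.files:
    print(key, npz[key].shape, npz[key].dtype)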

Alternatively, you can preprocess the dataset yourself. To this end:

  • download the ShapeNet dataset as described above.
  • check scripts/dataset_synthetic_room/build_dataset.py, modify the paths, and run the script.

Matterport3D

Download the Matterport3D dataset from the official website. Then use scripts/dataset_matterport/build_dataset.py to preprocess one of your favorite scenes, and put the processed data into the data/Matterport3D_processed folder.

ScanNet

Download the ScanNet v2 data from the official ScanNet website. Then preprocess the data with scripts/dataset_scannet/build_dataset.py and put it into the data/ScanNet folder.
Note: Currently, the preprocessing script normalizes ScanNet data to a unit cube for the comparison shown in the paper, but you can easily adapt the code to produce data at real-world scale. You can then use our fully convolutional model to run evaluation in a sliding-window manner.
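
Conceptually, the normalization maps the scene's bounding box into the unit cube, and undoing it is just the inverse affine transform. A small illustration of the idea (not the script's actual code):

import numpy as np

points = np.random.rand(1000, 3) * np.array([8.0, 6.0, 3.0])  # stand-in for a ScanNet scene

lo = points.min(axis=0)
scale = (points.max(axis=0) - lo).max()  # longest bounding-box edge
normalized = (points - lo) / scale       # now inside the unit cube
restored = normalized * scale + lo       # inverse: back to real-world scale
assert np.allclose(points, restored)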

Usage

When you have installed all binary dependencies and obtained the preprocessed data, you are ready to run our pre-trained models and train new models from scratch.

Mesh Generation

To generate meshes using a trained model, use

python generate.py CONFIG.yaml

where you replace CONFIG.yaml with the correct config file.

Use a pre-trained model
The easiest way is to use a pre-trained model. You can do this by using one of the config files under the pretrained folders.

For example, for 3D reconstruction from a noisy point cloud with our 3-plane model on the synthetic room dataset, you can simply run:

python generate.py configs/pointcloud/pretrained/room_3plane.yaml

The script will automatically download the pretrained model and run the generation. You can find the outputs in the out/.../generation_pretrained folders.

Note that the config files are only for generation, not for training new models: when these configs are used for training, the model will be trained from scratch, but during inference our code will still use the pretrained model.

We provide the following pretrained models:

pointcloud/shapenet_1plane.pt
pointcloud/shapenet_3plane.pt
pointcloud/shapenet_grid32.pt
pointcloud/shapenet_3plane_partial.pt
pointcloud/shapenet_pointconv.pt
pointcloud/room_1plane.pt
pointcloud/room_3plane.pt
pointcloud/room_grid32.pt
pointcloud/room_grid64.pt
pointcloud/room_combine.pt
pointcloud/room_pointconv.pt
pointcloud_crop/room_grid64.pt
voxel/voxel_shapenet_1plane.pt
voxel/voxel_shapenet_3plane.pt
voxel/voxel_shapenet_grid32.pt

Evaluation

For evaluation of the models, we provide the script eval_meshes.py. You can run it using:

python eval_meshes.py CONFIG.yaml

The script takes the meshes generated in the previous step and evaluates them using a standardized protocol. The output will be written to .pkl/.csv files in the corresponding generation folder, which can be processed using pandas.

Note: Following previous works, we use "1/10 times the maximal edge length of the current object’s bounding box as unit 1" (see Section 4 - Metrics). In practice, this means that we multiply the Chamfer-L1 metric by a factor of 10 when reporting the numbers in the paper.
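
For example, you can aggregate the per-object results with pandas (the csv path and column name below are assumptions; check the generation folder for the actual file):

import pandas as pd

# Hypothetical output path; the evaluation writes the csv next to the meshes.
df = pd.read_csv('out/pointcloud/room_3plane/generation/eval_meshes_full.csv')
print(df.mean(numeric_only=True))    # average metrics over all objects
print(10 * df['chamfer-L1'].mean())  # the factor-10 scaling described above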

Training

Finally, to train a new network from scratch, run:

python train.py CONFIG.yaml

For available training options, please take a look at configs/default.yaml.
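
To get a quick overview of the available option groups without reading the whole file, you can load the config with PyYAML:

import yaml

with open('configs/default.yaml') as f:
    cfg = yaml.safe_load(f)

for section, options in cfg.items():
    keys = list(options) if isinstance(options, dict) else options
    print(section, '->', keys)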

Further Information

Please also check out concurrent works that either tackle similar problems or share similar ideas.
