Plenoxels: Radiance Fields without Neural Networks, Code release WIP


Plenoxels: Radiance Fields without Neural Networks

Alex Yu*, Sara Fridovich-Keil*, Matthew Tancik, Qinhong Chen, Benjamin Recht, Angjoo Kanazawa

UC Berkeley

Website and video: https://alexyu.net/plenoxels

arXiv: https://arxiv.org/abs/2112.05131

Note: This is a preliminary release. We have not carefully tested everything, but felt it was better to get the code out there first.

Also, despite the name, it is not strictly intended to be a successor to svox.

Citation:

@misc{yu2021plenoxels,
      title={Plenoxels: Radiance Fields without Neural Networks}, 
      author={Alex Yu and Sara Fridovich-Keil and Matthew Tancik and Qinhong Chen and Benjamin Recht and Angjoo Kanazawa},
      year={2021},
      eprint={2112.05131},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

This repository contains the official optimization code. A JAX implementation is also available at https://github.com/sarafridov/plenoxels. However, note that the JAX version is currently feature-limited: it runs in about 1 hour per epoch and only supports bounded scenes at present.

(Figures: fast optimization demo and method overview; see the project website and video linked above.)

Setup

First create the Python environment; we recommend using conda:

conda env create -f environment.yml
conda activate plenoxel

Then clone the repo and install the library at the root (svox2), which includes a CUDA extension.

If your CUDA toolkit is older than 11, then you will need to install CUB as follows: conda install -c bottler nvidiacub. Since CUDA 11, CUB is shipped with the toolkit.

To install the main library, simply run the following in the repo root directory:

pip install .
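
To sanity-check that the extension built and imports correctly, you can try the one-liner below. This is only a suggestion; it assumes PyTorch with CUDA support was installed by the conda environment above.

python -c "import torch, svox2; print(torch.cuda.is_available())"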

Getting datasets

We have backends for NeRF-Blender, LLFF, NSVF, and CO3D dataset formats, and the dataset will be auto-detected. Please get the NeRF-synthetic and LLFF datasets from:

https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1

We provide a processed Tanks and Temples dataset (with background) in NSVF format at: https://drive.google.com/file/d/1PD4oTP4F8jTtpjd_AQjCsL4h8iYFCyvO/view?usp=sharing

Note that this data should be identical to that used in NeRF++.

Voxel Optimization (aka Training)

For training a single scene, see opt/opt.py. The launch script makes this easier.

Inside opt/, run ./launch.sh <exp_name> <GPU_id> <data_dir> -c <config>

Where <config> should be configs/syn.json for NeRF-synthetic scenes, configs/llff.json for forward-facing scenes, and configs/tnt.json for Tanks and Temples scenes, for example.

The dataset format will be auto-detected from data_dir. Checkpoints will be saved in ckpt/<exp_name>.
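
For example, a run on the NeRF-synthetic lego scene on GPU 0 might look like the following (the experiment name and data path are only illustrative):

./launch.sh lego_syn 0 data/nerf_synthetic/lego -c configs/syn.json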

Evaluation

Use opt/render_imgs.py

Usage (in opt/): python render_imgs.py <CHECKPOINT.npz> <data_dir>

By default this saves all frames, which is very slow. Add --no_imsave to avoid this.
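
For example, to evaluate the run above without writing every frame to disk (the paths are illustrative, and <CHECKPOINT.npz> stands for the checkpoint file produced under ckpt/<exp_name>):

python render_imgs.py ckpt/lego_syn/<CHECKPOINT.npz> data/nerf_synthetic/lego --no_imsave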

Rendering a spiral

Use opt/render_imgs_circle.py

Usage (in opt/): python render_imgs_circle.py <CHECKPOINT.npz> <data_dir>

Parallel task executor

We provide a parallel task executor based on the task manager from PlenOctrees to automatically schedule many tasks across sets of scenes or hyperparameters. This is used for evaluation, ablations, and hyperparameter tuning. See opt/autotune.py; configs are in opt/tasks/*.json.

For example, to automatically train and evaluate all synthetic scenes, first change train_root and data_root in tasks/eval.json, then run:

python autotune.py -g '<space delimited GPU ids>' tasks/eval.json

For forward-facing scenes:

python autotune.py -g '<space delimited GPU ids>' tasks/eval_ff.json

For Tanks and Temples scenes:

python autotune.py -g '<space delimited GPU ids>' tasks/eval_tnt.json
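
In each case the GPU list is just a quoted, space-delimited string of device ids; for example, to use GPUs 0 and 1:

python autotune.py -g '0 1' tasks/eval.json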

Using a custom image set

First make sure you have COLMAP installed. Then run

(in opt/) bash scripts/proc_colmap.sh <img_dir>

Where <img_dir> should be a directory directly containing png/jpg images from a normal perspective camera. For custom datasets we adopt a data format similar to that of NSVF (https://github.com/facebookresearch/NSVF).
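
For example, if the images from your capture live in a folder named my_scene/images (an illustrative path), the call would be:

bash scripts/proc_colmap.sh my_scene/images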

You should be able to use this dataset directly afterwards. The format will be auto-detected.

To view the data, use: python scripts/view_data.py <img_dir>

This should launch a server at localhost:8889.

You may need to tune the TV (total variation) regularization weight. For forward-facing scenes, making the TV weights 10x higher often helps (configs/llff_hitv.json). For the real lego scene, I used configs/custom.json.

Random tip: how to make pip install faster for native extensions

You may notice that this CUDA extension takes a long time to install. We suggest using ninja; on Ubuntu, install it with sudo apt install ninja-build. Then set the environment variable MAX_JOBS to the number of CPUs to use in parallel (e.g. 12) in your shell startup script. This enables parallel compilation and significantly improves iteration speed.
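
For example, on a 12-core machine you might add the following line to your shell startup script (the value should match your CPU count):

export MAX_JOBS=12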
