Detail-Preserving Transformer for Light Field Image Super-Resolution


DPT

Official PyTorch implementation of the paper "Detail-Preserving Transformer for Light Field Image Super-Resolution", accepted at AAAI 2022.

Updates

  • 2022.01: Our method is available at the newly-released repository BasicLFSR, an open-source and easy-to-use toolbox for LF image SR.
  • 2022.01: The code is released.

Requirements

  • Python 3.7.7
  • PyTorch 1.5.0
  • torchvision 0.6.0
  • h5py 2.8.0
  • MATLAB
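
If you are setting up a fresh environment, the following pip command matches the listed versions (the appropriate CUDA build of PyTorch 1.5.0 for your machine is an assumption and may require a different install command):

pip install torch==1.5.0 torchvision==0.6.0 h5py==2.8.0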

Dataset

We use the EPFL, HCInew, HCIold, INRIA, and STFgantry datasets for both training and testing. You can download these datasets from Baidu Drive (key: 912V).

Download the visual results

We share the super-resolved results generated by our DPT, so researchers can compare their methods with ours without re-running inference. The results are available at Baidu Drive (key: 912V).

Prepare the datasets

To generate the training data,

run `GenerateTrainingData.m` in MATLAB.

To generate the testing data,

run `GenerateTestData.m` in MATLAB.

We also provide the processed datasets used in the paper. They are available at Baidu Drive (key: 912V).
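
The prepared data are presumably stored as HDF5 files (h5py is listed in the requirements). A minimal Python sketch for inspecting a generated file is shown below; the file path is a placeholder, and the dataset names depend on what the MATLAB scripts actually write, so list them before relying on any particular key.

import h5py

# Placeholder path; point this at a file actually produced by GenerateTrainingData.m.
with h5py.File('./data_for_train/example.h5', 'r') as f:
    # Print every dataset name with its shape; groups are marked as such.
    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '(group)')))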

Train

To perform DPT training, please run

python train.py

Checkpoints will be saved to ./log/.
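
The saved checkpoints are ordinary PyTorch files, so they can be inspected outside the training script. Below is a minimal sketch; the file name under ./log/ and the keys stored inside the checkpoint are assumptions, so print them before accessing anything specific.

import torch

# Placeholder name; use an actual file written by train.py under ./log/.
ckpt = torch.load('./log/checkpoint_example.pth.tar', map_location='cpu')
if isinstance(ckpt, dict):
    # Typical contents are a model state_dict plus training metadata such as the epoch.
    print(list(ckpt.keys()))
else:
    print(type(ckpt))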

Test

To evaluate DPT performance, please run

python test.py

The performance of DPT on the five datasets will be printed on the screen. The visual results of each scene will be saved in ./Results/, and the PSNR and SSIM values of each scene will also be saved in ./PSNRSSIM/.
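
If you want to cross-check the reported numbers, or score your own method against the released results, the sketch below computes PSNR/SSIM with scikit-image. Note that scikit-image is not in the requirements list and the way the repository aggregates scores over sub-aperture views is an assumption here, so treat this as an independent sanity check rather than the official evaluation.

from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def psnr_ssim(sr, hr):
    # sr, hr: 2-D float arrays (e.g. the Y channel of one view), values in [0, 1].
    psnr = peak_signal_noise_ratio(hr, sr, data_range=1.0)
    ssim = structural_similarity(hr, sr, data_range=1.0)
    return psnr, ssim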

Generate visual results

To generate the visual super-resolved results,

run `GenerateResultImages.m` in MATLAB.

The `.mat` files in ./Results/ will be converted to `.png` images and saved in ./SRimages/.
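
If MATLAB is not available, a rough Python equivalent of this conversion is sketched below. It makes assumptions about the file layout: it lists the variables in each `.mat` file instead of hard-coding a name, assumes the image values lie in [0, 1], and only reads non-v7.3 `.mat` files (v7.3 files would need h5py instead).

import glob
import os
import imageio
import numpy as np
import scipy.io as sio

out_dir = './SRimages'
os.makedirs(out_dir, exist_ok=True)
for path in glob.glob('./Results/*.mat'):
    mat = sio.loadmat(path)
    keys = [k for k in mat if not k.startswith('__')]  # drop MATLAB metadata entries
    img = np.squeeze(mat[keys[0]])                     # assume the first variable is the image
    png = (np.clip(img, 0.0, 1.0) * 255.0).astype(np.uint8)
    imageio.imwrite(os.path.join(out_dir, os.path.basename(path)[:-4] + '.png'), png)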

To generate the visual gradient results, please run

python generate_visual_gradient_map.py 

Gradient results will be saved to ./GRAimages/.
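
For reference, a visual gradient map of this kind is typically just the per-pixel gradient magnitude of the image. The sketch below uses a Sobel filter from SciPy and illustrates the idea only; it is not necessarily the exact operator used by generate_visual_gradient_map.py.

import numpy as np
from scipy import ndimage

def gradient_magnitude(img):
    # img: 2-D float array; Sobel responses along x and y, combined into an L2 magnitude.
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)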

Citation

If you find this work helpful, please consider citing the following paper:

@article{wang2022detail,
  title={Detail-Preserving Transformer for Light Field Image Super-Resolution},
  author={Wang, Shunzhou and Zhou, Tianfei and Lu, Yao and Di, Huijun},
  journal={arXiv preprint arXiv:2201.00346},
  year={2022}
}

Acknowledgements

This code is heavily based on LF-DFNet. We also refer to the codes in VSR-Transformer, COLA-Net, and SPSR. We thank the authors for sharing the codes. We would like to thank Yingqian Wang for his help with LFSR. We would also like to thank Zhengyu Liang for adding our DPT to the repository BasicLFSR.

Contact

If you have any questions about this work, feel free to contact me via [email protected].
