PyTorch code for our paper "Feedback Network for Image Super-Resolution" (CVPR 2019)

Overview

Feedback Network for Image Super-Resolution [arXiv] [CVF] [Poster]

Update: Our proposed Gated Multiple Feedback Network (GMFN) will appear in BMVC2019. [Project Website]

"With two time steps and each contains 7 RDBs, the proposed GMFN achieves better reconstruction performance compared to state-of-the-art image SR methods including RDN which contains 16 RDBs."

This repository contains the PyTorch code for our proposed SRFBN.

The code is developed by Paper99 and penguin1214 based on BasicSR, and tested on Ubuntu 16.04/18.04 (Python 3.6/3.7, PyTorch 0.4.0/1.0.0/1.0.1, CUDA 8.0/9.0/10.0) with 2080Ti/1080Ti GPUs.

The architecture of our proposed SRFBN. Blue arrows represent feedback connections. Details of the SRFBN architecture can be found in our main paper.
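
The feedback mechanism is essentially an RNN unrolled for a small number of time steps: at each step the feedback block receives the shallow LR features together with its own output from the previous step, and a reconstruction step produces one SR estimate per step, so a loss can be attached to every step. The following is a minimal, illustrative PyTorch sketch of that control flow only; module names, channel widths, and the inner structure of the block are simplified assumptions, not the released implementation (see the network code in this repository for the real architecture).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeedbackBlock(nn.Module):
        """Refines features using its own output from the previous time step."""
        def __init__(self, num_features):
            super(FeedbackBlock, self).__init__()
            # Stand-in for the projection groups used in the paper.
            self.body = nn.Sequential(
                nn.Conv2d(2 * num_features, num_features, 3, padding=1),
                nn.PReLU(),
                nn.Conv2d(num_features, num_features, 3, padding=1),
                nn.PReLU(),
            )

        def forward(self, shallow_feat, prev_state):
            # Feedback connection: fuse current shallow features with the
            # hidden state produced at the previous time step.
            return self.body(torch.cat([shallow_feat, prev_state], dim=1))

    class TinySRFBN(nn.Module):
        def __init__(self, num_features=32, num_steps=4, scale=4):
            super(TinySRFBN, self).__init__()
            self.num_steps, self.scale = num_steps, scale
            self.shallow = nn.Conv2d(3, num_features, 3, padding=1)
            self.feedback = FeedbackBlock(num_features)
            self.reconstruct = nn.Conv2d(num_features, 3, 3, padding=1)

        def forward(self, lr):
            shallow = self.shallow(lr)
            state = torch.zeros_like(shallow)          # hidden state at t = 0
            upsampled = F.interpolate(lr, scale_factor=self.scale,
                                      mode='bilinear', align_corners=False)
            outputs = []
            for _ in range(self.num_steps):
                state = self.feedback(shallow, state)  # blue arrows in the figure
                residual = F.interpolate(self.reconstruct(state),
                                         scale_factor=self.scale,
                                         mode='bilinear', align_corners=False)
                outputs.append(upsampled + residual)   # one SR estimate per step
            return outputs                             # loss is applied to all steps

Bilinear interpolation is used above only to keep the sketch short; the released model reconstructs each SR estimate with learned up-sampling layers inside its reconstruction block.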

If you find our work useful in your research or publications, please consider citing:

@inproceedings{li2019srfbn,
    author = {Li, Zhen and Yang, Jinglei and Liu, Zheng and Yang, Xiaomin and Jeon, Gwanggil and Wu, Wei},
    title = {Feedback Network for Image Super-Resolution},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2019}
}

@inproceedings{wang2018esrgan,
    author = {Wang, Xintao and Yu, Ke and Wu, Shixiang and Gu, Jinjin and Liu, Yihao and Dong, Chao and Qiao, Yu and Loy, Chen Change},
    title = {ESRGAN: Enhanced super-resolution generative adversarial networks},
    booktitle = {The European Conference on Computer Vision Workshops (ECCVW)},
    year = {2018}
}

Contents

  1. Requirements
  2. Test
  3. Train
  4. Results
  5. Acknowledgements

Requirements

  • Python 3 (Anaconda is recommended)
  • skimage
  • imageio
  • PyTorch (version >= 0.4.1 is recommended)
  • tqdm
  • pandas
  • cv2 (pip install opencv-python)
  • Matlab (for the data preparation and evaluation scripts)
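
To quickly confirm the Python-side dependencies are importable before running any script, a small sanity check (not part of this repository) might look like:

    import importlib

    # Check the Python packages listed above and report their versions.
    for name in ["torch", "skimage", "imageio", "tqdm", "pandas", "cv2"]:
        try:
            module = importlib.import_module(name)
            version = getattr(module, "__version__", "unknown")
            print("{:<8s} OK  (version {})".format(name, version))
        except ImportError:
            print("{:<8s} MISSING".format(name))

    import torch
    print("CUDA available:", torch.cuda.is_available())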

Test

Quick start

  1. Clone this repository:

    git clone https://github.com/Paper99/SRFBN_CVPR19.git
  2. Download our pre-trained models from the links below, unzip them, and place them in ./models.

    Model     Param.   Links
    SRFBN     3,631K   [GoogleDrive] [BaiduYun] (code: 6qta)
    SRFBN-S   483K     [GoogleDrive] [BaiduYun] (code: r4cp)
  3. Then cd to SRFBN_CVPR19 and run one of the following commands for evaluation on Set5:

    # SRFBN
    python test.py -opt options/test/test_SRFBN_x2_BI.json
    python test.py -opt options/test/test_SRFBN_x3_BI.json
    python test.py -opt options/test/test_SRFBN_x4_BI.json
    python test.py -opt options/test/test_SRFBN_x3_BD.json
    python test.py -opt options/test/test_SRFBN_x3_DN.json
    
    # SRFBN-S
    python test.py -opt options/test/test_SRFBN-S_x2_BI.json
    python test.py -opt options/test/test_SRFBN-S_x3_BI.json
    python test.py -opt options/test/test_SRFBN-S_x4_BI.json
  4. Finally, PSNR/SSIM values for Set5 are shown on your screen, and the reconstructed images can be found in ./results.
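
The authoritative PSNR/SSIM numbers come from the Matlab evaluation script (./results/Evaluate_PSNR_SSIM.m), which follows the usual SR evaluation protocol, so a plain RGB check will differ slightly. If you only need a rough Python-side comparison of a single output against its ground truth, something like the following works (the paths are placeholders, and skimage >= 0.19 is assumed for the channel_axis argument):

    import imageio
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    # Placeholder paths -- point these at an HR ground-truth image and the
    # corresponding reconstructed image written to ./results. Both images must
    # have the same shape (crop borders first if they differ).
    hr = imageio.imread("path/to/ground_truth.png")
    sr = imageio.imread("path/to/reconstruction.png")

    psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
    ssim = structural_similarity(hr, sr, data_range=255, channel_axis=-1)
    print("PSNR: {:.2f} dB  SSIM: {:.4f}".format(psnr, ssim))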

Test on standard SR benchmark

  1. If you have cloned this repository and downloaded our pre-trained models, you can first download the SR benchmark datasets (Set5, Set14, B100, Urban100 and Manga109) from GoogleDrive or BaiduYun (code: z6nz).

  2. Run ./results/Prepare_TestData_HR_LR.m in Matlab to generate HR/LR images with different degradation models.

  3. Edit ./options/test/test_SRFBN_example.json for your needs according to ./options/test/README.md.

  4. Then, run the following commands:

    cd SRFBN_CVPR19
    python test.py -opt options/test/test_SRFBN_example.json
  5. Finally, PSNR/SSIM values are shown on your screen, and the reconstructed images can be found in ./results. You can further evaluate SR results using ./results/Evaluate_PSNR_SSIM.m.

Test on your own images

  1. If you have cloned this repository and downloaded our pre-trained models, you can first place your own images in ./results/LR/MyImage.

  2. Edit ./options/test/test_SRFBN_example.json for your needs according to ./options/test/README.md.

  3. Then, run the following commands:

    cd SRFBN_CVPR19
    python test.py -opt options/test/test_SRFBN_example.json
  4. Finally, you can find the reconstructed images in ./results.

Train

  1. Download the training set DIV2K [Official Link] or DF2K [GoogleDrive] [BaiduYun] (provided by BasicSR).

  2. Run ./scripts/Prepare_TrainData_HR_LR.m in Matlab to generate HR/LR training pairs with the corresponding degradation model and scale factor. (Note: place the generated training data on an SSD for faster training. A rough sketch of the BI/BD/DN degradation models is given after this list.)

  3. Run ./results/Prepare_TestData_HR_LR.m in Matlab to generate HR/LR test images with the corresponding degradation model and scale factor, and choose one of the SR benchmarks for evaluation during training.

  4. Edit ./options/train/train_SRFBN_example.json for your needs according to ./options/train/README.md.

  5. Then, run the following commands:

    cd SRFBN_CVPR19
    python train.py -opt options/train/train_SRFBN_example.json
  6. You can monitor the training process in ./experiments.

  7. Finally, you can follow the test pipeline to evaluate your model.
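
For reference, the degradation models referred to throughout this README are: BI (bicubic downsampling), BD (Gaussian blur followed by downsampling), and DN (bicubic downsampling followed by additive Gaussian noise). The following is a rough Python illustration only; the blur and noise parameters (7x7 kernel with sigma 1.6, noise level 30) are values commonly used in this line of work, not read from the Matlab scripts, which remain the authoritative implementation.

    import cv2
    import numpy as np

    def degrade(hr, scale=3, mode="BI"):
        """Generate an LR image from an HxWx3 uint8 HR image.

        Rough illustration of the BI / BD / DN degradation models; the shipped
        Prepare_*_HR_LR.m scripts are the authoritative implementation.
        """
        h, w = hr.shape[:2]
        lr_size = (w // scale, h // scale)                 # (width, height) for cv2
        if mode == "BI":                                   # bicubic downsampling
            return cv2.resize(hr, lr_size, interpolation=cv2.INTER_CUBIC)
        if mode == "BD":                                   # blur, then downsample
            blurred = cv2.GaussianBlur(hr, (7, 7), 1.6)    # 7x7 Gaussian, sigma 1.6
            return cv2.resize(blurred, lr_size, interpolation=cv2.INTER_CUBIC)
        if mode == "DN":                                   # downsample, then add noise
            lr = cv2.resize(hr, lr_size, interpolation=cv2.INTER_CUBIC).astype(np.float32)
            lr += np.random.normal(0.0, 30.0, lr.shape)    # Gaussian noise, level 30
            return np.clip(lr, 0, 255).astype(np.uint8)
        raise ValueError("mode must be one of 'BI', 'BD', 'DN'")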

Results

Quantitative Results

Average PSNR/SSIM for scale factors x2, x3 and x4 with BI degradation model. The best performance is shown in red and the second best performance is shown in blue.

Average PSNR/SSIM values for scale factor x3 with BD and DN degradation models. The best performance is shown in red and the second best performance is shown in blue.

More Qualitative Results

Qualitative results with BI degradation model (x4) on “img_004” from Urban100.

Qualitative results with BD degradation model (x3) on “MisutenaideDaisy” from Manga109.

Qualitative results with DN degradation model (x3) on “head” from Set14.

TODO

  • Curriculum learning for complex degradation models (i.e. BD and DN degradation models).

Acknowledgements

  • Thanks to penguin1214, who co-developed this repository.
  • Thanks to Xintao Wang; our code structure is derived from his repository BasicSR.
  • Thanks to the authors of BasicSR/RDN/EDSR for providing many useful codes that facilitated our work.