
HDRTVNet [Paper Link]

A New Journey from SDRTV to HDRTV

By Xiangyu Chen*, Zhengwen Zhang*, Jimmy S. Ren, Lynhoo Tian, Yu Qiao and Chao Dong

(* indicates equal contribution)

This paper is accepted to ICCV 2021.

Overview

Simplified SDRTV/HDRTV formation pipeline:

Overview of the method:

Getting Started

  1. Dataset
  2. Configuration
  3. How to test
  4. How to train
  5. Metrics
  6. Visualization

Dataset

We construct a dataset using 4K-resolution videos under the HDR10 standard (10-bit, Rec.2020, PQ) and their SDR counterparts from YouTube. The dataset consists of a training set with 1235 image pairs and a test set with 117 image pairs. Please refer to the paper for details on how the dataset was processed. The dataset can be downloaded from Baidu Netdisk (access code: 6qvu) or OneDrive (access code: HDRTVNet).

We also provide the original YouTube links of these videos, which can be found in this file. Note that we cannot provide download links, since we do not hold the copyright to distribute the videos. Please download this dataset for academic use only.
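For reference, below is a minimal sketch of loading one SDR/HDR pair with OpenCV, assuming the SDR frames are 8-bit PNGs and the HDR frames are 16-bit PNGs holding PQ code values (the folder layout and filenames here are hypothetical):

import cv2
import numpy as np

# Hypothetical paths; adjust to the actual dataset layout.
sdr = cv2.imread('train_sdr/0001.png', cv2.IMREAD_UNCHANGED)  # 8-bit SDR frame
hdr = cv2.imread('train_hdr/0001.png', cv2.IMREAD_UNCHANGED)  # 16-bit HDR frame (PQ code values)

# Normalize both to [0, 1] floats for training or metric computation.
sdr = sdr.astype(np.float32) / 255.0
hdr = hdr.astype(np.float32) / 65535.0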

Configuration

Please refer to the requirements. Matlab is also used to process the data, but it is not required and can be replaced by OpenCV.
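If Matlab is unavailable, a rough OpenCV substitute for ./scripts/generate_mod_LR_bic.m might look like the sketch below (the folder paths and 4x scale are assumptions; note that OpenCV's INTER_CUBIC does not apply the anti-aliasing prefilter of Matlab's imresize, so results can differ slightly):

import cv2
import glob
import os

input_folder = './dataset/test_sdr'       # hypothetical paths; match your layout
save_lr_folder = './dataset/test_sdr_lr'
scale = 4                                 # assumed downsampling factor

os.makedirs(save_lr_folder, exist_ok=True)
for path in glob.glob(os.path.join(input_folder, '*.png')):
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)   # keep the original bit depth
    h, w = img.shape[:2]
    h, w = h - h % scale, w - w % scale            # crop so dimensions divide evenly
    lr = cv2.resize(img[:h, :w], (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    cv2.imwrite(os.path.join(save_lr_folder, os.path.basename(path)), lr)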

How to test

We provide pretrained models for testing, which can be downloaded from Baidu Netdisk (access code: 2me9) or OneDrive (access code: HDRTVNet). Since our method is a cascade of three steps, the results also need to be inferred step by step.

  • Before testing, you can optionally generate the downsampled inputs for the condition network in advance. Make sure input_folder and save_LR_folder in ./scripts/generate_mod_LR_bic.m are correct, then run the file in Matlab (or use the OpenCV substitute sketched in the Configuration section). This produces Matlab-bicubic-downsampled versions of the input SDR images, which are fed to the condition network. This step is optional, but it reproduces the reported performance more precisely.
  • For the first part, AGCM, make sure the paths of dataroot_LQ, dataroot_cond, dataroot_GT and pretrain_model_G in ./codes/options/test/test_AGCM.yml are correct, then run
cd codes
python test.py -opt options/test/test_AGCM.yml
  • Note that if the first step is not performed, the dataroot_cond line should be commented out. The test results will be saved to ./results/Adaptive_Global_Color_Mapping.
  • For the second part, LE, make sure dataroot_LQ is modified to the path of the results obtained by AGCM, then run
python test.py -opt options/test/test_LE.yml
  • Note that the results generated by LE achieve the best quantitative performance. The HG part is included for the completeness of the solution and to further improve visual quality. To test the last part, HG, make sure dataroot_LQ is modified to the path of the results obtained by LE, then run
python test.py -opt options/test/test_HG.yml
  • Note that the results of each step are 16-bit images that can be converted into an HDR10 video (see the sketch below).
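As a small illustration of that last note (an assumption about the packing convention, not this repository's own pipeline): if the 16-bit PNGs store 10-bit PQ code values scaled up by a left shift of 6, recovering the 10-bit codes for video encoding is a right shift. Verify the convention against your own frame-extraction commands before relying on it.

import cv2
import numpy as np

img16 = cv2.imread('results/frame_0001.png', cv2.IMREAD_UNCHANGED)  # hypothetical result path
assert img16.dtype == np.uint16

# Assumed convention: 10-bit code values were scaled to 16 bits by a left shift of 6,
# so the inverse is a right shift back to the 0-1023 range.
img10 = img16 >> 6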

How to train

  • Prepare the data. Generate sub-images with a specific patch size using ./scripts/extract_subimgs_single.py, and generate the downsampled inputs for the condition network (using ./scripts/generate_mod_LR_bic.m or any other method); a patch-extraction sketch is shown after this list.
  • For AGCM, make sure that the paths and settings in ./options/train/train_AGCM.yml are correct, then run
cd codes
python train.py -opt options/train/train_AGCM.yml
  • For LE, the inputs are generated by the trained AGCM model. The original data should first be run through the first step (refer to the section above on how to test AGCM) and then processed into sub-images. After that, modify the corresponding settings in ./options/train/train_LE.yml and run
python train.py -opt options/train/train_LE.yml
  • For HG, the inputs are obtained from the previous part, LE, so the training data needs to be processed with operations similar to those of the previous two parts. Once the data is prepared, it is recommended to pretrain the generator first by running
python train.py -opt options/train/train_HG_Generator.yml
  • After that, choose a pretrained model, modify the path of the pretrained model in ./options/train/train_HG_GAN.yml, and run
python train.py -opt options/train/train_HG_GAN.yml
  • All models and training states are stored in ./experiments.
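As referenced in the data-preparation step above, here is a minimal sketch of what sub-image extraction does (the 480-pixel patch size and 240-pixel stride are assumptions; ./scripts/extract_subimgs_single.py remains the authoritative version):

import cv2
import glob
import os

input_folder = './dataset/train_hdr'      # hypothetical paths; match your layout
save_folder = './dataset/train_hdr_sub'
patch, stride = 480, 240                  # assumed patch size and stride

os.makedirs(save_folder, exist_ok=True)
for path in glob.glob(os.path.join(input_folder, '*.png')):
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
    name = os.path.splitext(os.path.basename(path))[0]
    h, w = img.shape[:2]
    idx = 0
    # Slide a patch-sized window over the image and save each crop.
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            idx += 1
            cv2.imwrite(os.path.join(save_folder, f'{name}_s{idx:03d}.png'),
                        img[y:y + patch, x:x + patch])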

Metrics

Five metrics are used to evaluate the quantitative performance of different methods: PSNR, SSIM, SR-SIM, Delta E_ITP (ITU-R BT.2124) and HDR-VDP3. Since the latter three metrics are not yet common in recent papers, we provide some reference code in ./metrics for convenient usage.
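As one example, a minimal PSNR computation on the 16-bit result images might look like the following (not the repository's reference implementation; the file paths are placeholders):

import cv2
import numpy as np

def psnr_16bit(a, b, max_val=65535.0):
    # Peak signal-to-noise ratio between two 16-bit images.
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

pred = cv2.imread('results/frame_0001.png', cv2.IMREAD_UNCHANGED)         # hypothetical paths
gt = cv2.imread('dataset/test_hdr/frame_0001.png', cv2.IMREAD_UNCHANGED)
print(f'PSNR: {psnr_16bit(pred, gt):.2f} dB')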

Visualization

Since HDR10 is an HDR standard that uses the PQ transfer function for video, the correct way to visualize the results is to synthesize them into a video and display it on an HDR monitor or TV that supports HDR. The HDR images in our dataset are generated by directly extracting frames from the original HDR10 videos, so these images, consisting of PQ code values, look relatively dark compared to their true appearance. We provide reference commands for extracting frames and synthesizing videos in ./scripts. Please use MediaInfo to check the format and encoding information of the synthesized videos before visualization. If circumstances permit, we strongly recommend viewing the HDR results and the original HDR sources this way on an HDR display.
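To see why the raw PQ frames look dark, recall that the SMPTE ST 2084 (PQ) EOTF maps a normalized code value to absolute luminance. A minimal sketch using the standard constants (not code from this repository):

import numpy as np

def pq_eotf(e, peak=10000.0):
    # SMPTE ST 2084 EOTF: normalized code value e in [0, 1] -> luminance in nits.
    m1, m2 = 2610.0 / 16384.0, 2523.0 / 4096.0 * 128.0
    c1, c2, c3 = 3424.0 / 4096.0, 2413.0 / 4096.0 * 32.0, 2392.0 / 4096.0 * 32.0
    ep = np.power(e, 1.0 / m2)
    return peak * np.power(np.maximum(ep - c1, 0.0) / (c2 - c3 * ep), 1.0 / m1)

# A mid-range code value of 0.5 maps to only about 92 nits, far below the
# 10000-nit peak, which is why PQ frames appear dark when shown as ordinary images.
print(pq_eotf(np.array([0.5])))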

If an HDR display is not available, some media players with HDR rendering, such as PotPlayer, can play the HDR video and show a relatively realistic look. Note that this is only an approximate alternative, and it still cannot fully restore the appearance of HDR content on an HDR monitor.

Citation

If our work is helpful to you, please cite our paper:

@inproceedings{chen2021new,
  title={A New Journey from SDRTV to HDRTV}, 
  author={Chen, Xiangyu and Zhang, Zhengwen and Ren, Jimmy S. and Tian, Lynhoo and Qiao, Yu and Dong, Chao},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}