URIE: Universal Image Enhancement for Visual Recognition in the Wild

Overview

This is the implementation of the paper "URIE: Universal Image Enhancement for Visual Recognition in the Wild" by T. Son, J. Kang, N. Kim, S. Cho, and S. Kwak, implemented with Python 3.7 and PyTorch 1.3.1.

Figure: overall architecture of URIE (urie_arch).

For more information, check out our project website and the paper on arXiv.

Requirements

You can install dependencies using

pip install -r requirements.txt

Datasets

You need to manually configure the following environment variables to run the experiments.
Each validation CSV contains a fixed combination of image, corruption, and severity so that the results are reproducible.
To run the validation, you may need to change the home folder path in each of the provided CSV files (a path-rewrite sketch follows the export block below).

# DATA PATHS
export IMAGENET_ROOT=PATH_TO_IMAGENET
export IMAGENET_C_ROOT=PATH_TO_IMAGENET_C

# URIE VALIDATION

## ILSVRC VALIDATION
export IMAGENET_CLN_TNG_CSV=PROJECT_PATH/imagenet_dataset/imagenet_cln_train.csv
export IMAGENET_CLN_VAL_CSV=PROJECT_PATH/imagenet_dataset/imagenet_cln_val.csv
export IMAGENET_TNG_VAL_CSV=PROJECT_PATH/imagenet_dataset/imagenet_tng_tsfrm_validation.csv
export IMAGENET_VAL_VAL_CSV=PROJECT_PATH/imagenet_dataset/imagenet_val_tsfrm_validation.csv

## CUB VALIDATION
export CUB_IMAGE=PATH_TO_CUB
export DISTORTED_CUB_IMAGE=PATH_TO_CUB_C
export CUB_TNG_LABEL=PROJECT_PATH/datasets/eval_set/label_train_cub200_2011.csv
export CUB_VAL_LABEL=PROJECT_PATH/datasets/eval_set/label_val_cub200_2011.csv
export CUB_TNG_TRAIN_VAL=PROJECT_PATH/datasets/eval_set/tng_tsfrm_validation.csv
export CUB_TNG_TEST_VAL=PROJECT_PATH/datasets/eval_set/val_tsfrm_validation.csv
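
If the provided CSVs still point to the original home folder, one quick way to fix them is to rewrite the path prefix in place. The snippet below is only a sketch and not part of the repository; OLD_PREFIX, NEW_PREFIX, and the CSV locations are assumptions you should adjust to your setup.

# Sketch: rewrite the home folder prefix stored in the validation CSVs.
# The prefixes and file locations below are assumptions; adjust as needed.
import csv
import glob

OLD_PREFIX = "/home/original_user"   # prefix found in the provided CSVs (assumed)
NEW_PREFIX = "/home/your_user"       # prefix on your machine

for csv_path in glob.glob("datasets/eval_set/*.csv"):
    with open(csv_path) as f:
        rows = [row for row in csv.reader(f)]
    # Replace the old home folder prefix wherever it appears in a cell.
    rows = [[cell.replace(OLD_PREFIX, NEW_PREFIX) for cell in row] for row in rows]
    with open(csv_path, "w", newline="") as f:
        csv.writer(f).writerows(rows)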

ILSVRC2012 Dataset

You can download the dataset from here and use it for training.

CUB dataset

You can download the original Caltech-UCSD Birds-200-2011 dataset from here, and the corrupted version of the CUB dataset from here.

Training

Training URIE with the proposed method on ILSVRC2012 dataset

python train_urie.py --batch_size BATCH_SIZE \
                     --cuda \
                     --test_batch_size BATCH_SIZE \
                     --epochs 60 \
                     --lr 0.0001 \
                     --seed 5000 \
                     --desc DESCRIPTION \
                     --save SAVE_PATH \
                     --load_classifier \
                     --dataset ilsvrc \
                     --backbone r50 \
                     --multi

Since training on the ILSVRC dataset takes a long time, you can instead train / test the model on the CUB dataset with the following command.

python train_urie.py --batch_size BATCH_SIZE \
                     --cuda \
                     --test_batch_size BATCH_SIZE \
                     --epochs 60 \
                     --lr 0.0001 \
                     --seed 5000 \
                     --desc DESCRIPTION \
                     --save SAVE_PATH \
                     --load_classifier \
                     --dataset cub \
                     --backbone r50 \
                     --multi
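
For reference, the sketch below illustrates the training idea behind the commands above, assuming the pretrained classifier (--load_classifier, --backbone r50) is kept fixed and only the enhancer is updated with the recognition loss. It is a simplified stand-in with a placeholder enhancer and dummy data, not the pipeline in train_urie.py.

# Sketch of the training idea (simplified; see train_urie.py for the real pipeline).
import torch
import torch.nn as nn
from torchvision.models import resnet50

classifier = resnet50(pretrained=True).eval()        # --load_classifier, --backbone r50
for p in classifier.parameters():
    p.requires_grad = False                          # the recognition model stays fixed

enhancer = nn.Sequential(                            # tiny stand-in for the URIE enhancement network
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(enhancer.parameters(), lr=1e-4)   # --lr 0.0001
criterion = nn.CrossEntropyLoss()

# Dummy loader standing in for the corrupted ILSVRC / CUB training set.
dummy = torch.utils.data.TensorDataset(torch.randn(8, 3, 224, 224),
                                       torch.randint(0, 1000, (8,)))
train_loader = torch.utils.data.DataLoader(dummy, batch_size=4)

for corrupted, labels in train_loader:
    enhanced = enhancer(corrupted)                   # restore the image for recognition
    loss = criterion(classifier(enhanced), labels)   # recognition loss drives the enhancer
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()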

Validation

You may use our pretrained model to validate or compare the results.

Classification

python inference.py --srcnn_pretrained_path PROJECT_PATH/ECCV_MODELS/ECCV_SKUNET_OURS.ckpt.pt \
                    --dataset DATASET \
                    --test_batch_size 32 \
                    --enhancer ours \
                    --recog r50

Detection

We have conducted object detection experiments using the code from GitHub.
You can compare the performance with the same evaluation code by attaching our model (or yours) in front of the detection model.

For a valid comparison, you need to preprocess your data with the proper mean and standard deviation.
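
A minimal sketch of what this looks like, assuming ImageNet-style normalization and placeholder enhancer / detector modules (neither is the repository's evaluation code):

# Sketch: normalize the input and run the enhancer before the detector.
import torch
import torchvision.transforms as T

# ImageNet statistics are a common choice; use whatever your detector expects.
preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def detect_with_enhancement(pil_image, enhancer, detector):
    """Enhance a corrupted image, then feed it to the detection model."""
    x = preprocess(pil_image).unsqueeze(0)   # (1, 3, H, W) normalized tensor
    with torch.no_grad():
        enhanced = enhancer(x)               # enhancement in front of the detector
        return detector(enhanced)            # detections on the enhanced input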

Semantic Segmentation

We have conducted semantic segmentation experiments using the code from GitHub.
For the backbone segmentation network, please use DeepLabV3 pretrained in PyTorch (torchvision). You can compare the performance with the same evaluation code by attaching our model (or yours) in front of the segmentation model.

For a valid comparison, you need to preprocess your data with the proper mean and standard deviation.
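
As a rough sketch, you can load torchvision's pretrained DeepLabV3 and wrap it together with the enhancer. The module below is an illustration under that assumption, not the code used for our experiments:

# Sketch: pretrained DeepLabV3 with the enhancer attached in front of it.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet101

segmenter = deeplabv3_resnet101(pretrained=True).eval()

class EnhanceThenSegment(nn.Module):
    """Run enhancement before the segmentation network."""
    def __init__(self, enhancer, segmenter):
        super().__init__()
        self.enhancer = enhancer
        self.segmenter = segmenter

    def forward(self, x):                    # x: normalized (N, 3, H, W) batch
        return self.segmenter(self.enhancer(x))["out"]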

Image Comparison

If you just want a simple before-and-after image comparison, you can use render.py with the following command.

python render.py IMAGE_FILE_PATH

It runs the given image file through the pretrained URIE model and saves a comparison of the input and the enhanced output in the current project folder as "output.jpg".
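
The sketch below shows roughly what such a comparison amounts to, assuming a trained enhancer module is available; it is an illustration, not the implementation of render.py:

# Sketch: save an input / enhanced side-by-side comparison as output.jpg.
import torch
import torchvision.transforms.functional as TF
from PIL import Image

def save_comparison(image_path, enhancer, out_path="output.jpg"):
    image = Image.open(image_path).convert("RGB")
    x = TF.to_tensor(image).unsqueeze(0)             # (1, 3, H, W) in [0, 1]
    with torch.no_grad():
        enhanced = enhancer(x).clamp(0, 1)
    enhanced_img = TF.to_pil_image(enhanced.squeeze(0))
    # Paste the input and the enhanced output side by side.
    canvas = Image.new("RGB", (image.width * 2, image.height))
    canvas.paste(image, (0, 0))
    canvas.paste(enhanced_img, (image.width, 0))
    canvas.save(out_path)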

BibTeX

If you use this code for your research, please consider citing:

@InProceedings{son2020urie,
  title={URIE: Universal Image Enhancement for Visual Recognition in the Wild},
  author={Son, Taeyoung and Kang, Juwon and Kim, Namyup and Cho, Sunghyun and Kwak, Suha},
  booktitle={ECCV},
  year={2020}
}
Owner: Taeyoung Son, graduate student at POSTECH, South Korea