RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation

Overview

Multipath RefineNet

A MATLAB-based framework for semantic image segmentation and general dense prediction tasks on images.

This is the source code for the following paper and its extension:

  1. RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation; CVPR 2017
    https://arxiv.org/abs/1611.06612
  2. RefineNet extension for dense prediction; TPAMI 2019
    https://doi.org/10.1109/TPAMI.2019.2893630

PyTorch implementation

This codebase provides a MATLAB and MatConvNet based implementation only.

Vladimir Nekrasov kindly provides a PyTorch implementation and a lightweight version of RefineNet at:
https://github.com/DrSleep/refinenet-pytorch

Update notes

  • 23 Dec 2016: We made a major update to our code.
  • (new!) 13 Feb 2018:
    1. Multi-scale prediction and evaluation code added: demo files for multi-scale prediction, fusion and evaluation are now included. Please refer to the Testing section below for more details.
    2. New models available: trained models using improved residual pooling. Available for these datasets: NYUDv2, Person_Parts, PASCAL_Context, SUNRGBD, ADE20k. These models will give better performance than the reported results in our CVPR paper.
    3. New models available: trained models using ResNet-152 for all 7 datasets. Apart from ResNet-101 based models, our ResNet-152 based models of all 7 datasets are now available for download.
    4. Updated trained model for VOC2012: this updated model is slightly better than the previous one. We previously uploaded a wrong model.
    5. All models are now available in Google Drive and Baidu Pan.
    6. More details are provided on testing, training and implementation. Please refer to Important notes in each section below.

Results

  • Results on the Cityscapes dataset (single-scale prediction using ResNet-101 based RefineNet): [figure: RefineNet results on the Cityscapes dataset]

Trained models

  • (new!) Trained models for the following datasets are available for download.
  1. PASCAL VOC 2012
  2. Cityscapes
  3. NYUDv2
  4. Person_Parts
  5. PASCAL_Context
  6. SUNRGBD
  7. ADE20k
  • Download links for the above datasets are available in Google Drive and Baidu Pan. Put the downloaded models in ./model_trained/
  • Important notes:
    • For the test-set performance of our method on PASCAL VOC and Cityscapes, kindly note that we do not use any images in the validation set for training. Our models are trained using the training-set images only.
    • The trained models for the following datasets use improved residual pooling: NYUDv2, Person_Parts, PASCAL_Context, SUNRGBD, ADE20k. These models will give better performance than the reported results in our CVPR paper. Please also refer to the Network architecture and implementation section below for more details about the improved pooling.
    • The model for VOC2012 is updated. We previously uploaded a wrong model.

Network architecture and implementation

  • You can find the network graphs that illustrate our architecture in the folder net_graphs. Please refer to our paper for more details.
  • This folder also includes details of the improved residual pooling, which improves on the residual pooling block described in our CVPR paper.
  • Important notes:
    • In our up-sampling and fusion layer, we simply use down-sampling for gradient back-propagation. Please refer to the implementation of our fusion layer for details: My_sum_layer.m. A conceptual sketch of this behaviour is given after this list.
    • Please refer to our training demo files for more details on the implementation.
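
The following is only a conceptual sketch, not the actual My_sum_layer.m: the function name and the bilinear imresize calls are illustrative assumptions, used here just to show the forward summation and the down-sampled gradient described above.

```matlab
% Conceptual sketch only (would live in its own .m file). Fuse two feature
% maps by up-sampling the smaller one and summing element-wise; during
% back-propagation the output gradient is simply down-sampled back to the
% smaller input's resolution.
function [y, dx1, dx2] = fusion_sum_sketch(x1, x2, dy)
[h1, w1, ~] = size(x1);                        % x1: higher-resolution input
[h2, w2, ~] = size(x2);                        % x2: lower-resolution input

% Forward pass: up-sample x2 to x1's resolution and sum.
y = x1 + imresize(x2, [h1 w1], 'bilinear');

dx1 = []; dx2 = [];
if nargin > 2 && ~isempty(dy)
    % Backward pass: dy flows to x1 unchanged; for x2 it is down-sampled.
    dx1 = dy;
    dx2 = imresize(dy, [h2 w2], 'bilinear');
end
end
```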

Installation

  • Install MatConvNet and CuDNN. We have modified MatConvNet for our task; a modified copy is provided in ./lib/, and you need to compile it before running our code. Details of this modification and of compiling can be found in main/my_matconvnet_resnet/README.md. A minimal compile sketch is also given at the end of this section.

  • An example script for exporting lib paths is main/my_matlab.sh

  • Download the following ImageNet pre-trained models and place them in ./model_trained/:

    • imagenet-resnet-50-dag, imagenet-resnet-101-dag, imagenet-resnet-152-dag.

    They can be downloaded from the MatConvNet website; we also provide a copy in Google Drive and Baidu Pan.
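
A minimal compile sketch is shown below, assuming a Linux machine with GPU support. The CUDA/cuDNN paths and the exact location of the modified MatConvNet copy under ./lib/ are assumptions; main/my_matconvnet_resnet/README.md remains the authoritative reference.

```matlab
% Minimal compile sketch, assuming the modified MatConvNet copy sits under
% ./lib/ and that CUDA/cuDNN live at the paths below (adjust as needed).
% See main/my_matconvnet_resnet/README.md for the authoritative steps.
cd lib;                                  % assumption: MatConvNet root is ./lib/
addpath matlab;                          % MatConvNet's MATLAB interface
vl_compilenn('enableGpu',   true, ...
             'cudaRoot',    '/usr/local/cuda', ...   % assumption: CUDA install path
             'enableCudnn', true, ...
             'cudnnRoot',   '/usr/local/cudnn');     % assumption: cuDNN install path
vl_setupnn;                              % put the compiled binaries on the path
```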

Testing

1. Multi-scale prediction and evaluation (new!)

  • First download the trained models and put them in ./model_trained/. Please refer to the above section Trained Models.

  • Then refer to the below example scripts for prediction on your images:

    • demo_predict_mscale_[dataset name].m
    • e.g., demo_predict_mscale_voc.m, demo_predict_mscale_nyud.m, demo_predict_mscale_person_parts.m
  • You may need to carefully read through the comments in these demo scripts before using them.

  • Important notes:

    • In the default setting, the example scripts will perform multi-scale prediction and fuse multi-scale results to generate final prediction.
    • The generated masks and score maps will be saved to disk. Note that the score maps are saved in uint8 format with values in [0 255]. You need to cast them into double and normalize into [0 1] if you want to use them; a minimal sketch of this step is given after this list.
    • The above demo files are able to perform multi-scale prediction and evaluation (e.g., in terms of IoU scores) in a single run. However, in the default setting, the performance evaluation part is disabled. Please refer to the comments in the demo files to turn on the performance evaluation.
    • Trained models using improved residual pooling will give better performance than the reported results in our CVPR paper. Please refer to the above section Trained models for more details.
    • For images from the NYUDv2 dataset, you may need to remove the white borders of the images before applying our models. More details and crop tools can be found on the NYUDv2 dataset webpage.
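
As a minimal sketch of the cast-and-normalize step, assuming hypothetical file and variable names for a saved score map (check the demo scripts for the actual output layout):

```matlab
% Minimal sketch: convert a saved uint8 score map (values in [0 255]) into
% double scores in [0 1]. File and variable names here are hypothetical.
s = load('cache_data/scoremap_example.mat');   % hypothetical path
score = double(s.score_map) / 255;             % H x W x num_classes, now in [0 1]

% The per-pixel predicted label is the arg-max over the class dimension:
[~, mask] = max(score, [], 3);
```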

2. Single scale prediction and evaluation

  • Single-scale prediction and evaluation can be done by changing the scale setting in the multi-scale prediction demo files. Please refer to the above section on multi-scale prediction.

  • We also provide simplified demo files for prediction with much less configuration. They are only for single-scale prediction. Examples can be found at: demo_test_simple_voc.m and demo_test_simple_city.m.

3. Evaluation and fusion on saved results (score map files and mask files) (new!)

  • We provide an example script to perform multi-scale fusion on a number of predictions (score maps) saved on disk:
    • demo_fuse_saved_prediction_voc.m : fuse multiple cached predictions to generate the final prediction
  • We provide an example script to evaluate prediction masks saved on disk:
    • demo_evaluate_saved_prediction_voc.m : evaluate the segmentation performance, e.g., in terms of IoU scores. A combined fusion-and-evaluation sketch is given after this list.
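
The sketch below illustrates the overall idea behind these two scripts under stated assumptions: score maps saved as uint8 .mat files (hypothetical file and variable names), fusion by simple averaging over scales, and IoU computed from a confusion matrix with label 0 treated as ignored. It is not the actual demo code.

```matlab
% Illustrative sketch only: fuse saved multi-scale score maps by averaging,
% then evaluate per-class IoU against a ground-truth label mask.
% File/variable names and the ignore label are assumptions.
score_files = {'scores_scale_0.6.mat', 'scores_scale_1.0.mat', 'scores_scale_1.2.mat'};
fused = 0;
for k = 1:numel(score_files)
    s = load(score_files{k});                  % assumed to contain s.score_map (uint8)
    fused = fused + double(s.score_map) / 255; % normalize to [0 1] and accumulate
end
fused = fused / numel(score_files);            % average over scales
[~, pred] = max(fused, [], 3);                 % fused prediction mask

% Per-class IoU from a confusion matrix (rows: ground truth, cols: prediction).
gt = double(imread('gt_example.png'));         % hypothetical ground-truth label mask
num_classes = size(fused, 3);
valid = gt > 0;                                % assumption: label 0 marks ignored pixels
cm = accumarray([gt(valid), double(pred(valid))], 1, [num_classes, num_classes]);
iou = diag(cm) ./ (sum(cm, 2) + sum(cm, 1)' - diag(cm));
fprintf('mean IoU: %.4f\n', mean(iou, 'omitnan'));
```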

Training

  • The following demo files are provided for training a RefineNet on your own dataset. Please carefully read through the comments in the demo files before using this training code.
    • demo_refinenet_train.m
    • demo_refinenet_train_reduce_learning_rate.m
  • Important notes:
    • We use a step-wise policy to reduce the learning rate and, more importantly, you need to manually reduce the learning rate during training. The setting of the maximum number of training iterations serves only as a simple example and should be adapted to your dataset. More details can be found in the comments of the training demo files; an illustrative sketch of a step-wise schedule is given after this list.
    • We use the improved version of chained pooling in this training code, which may achieve better results than the models provided above.
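
Purely as an illustration of a step-wise schedule, the snippet below writes the learning rate as a per-epoch vector in the style used by MatConvNet training options. The concrete values and option names are assumptions; the demo files above remain the reference for how the schedule is actually set.

```matlab
% Illustrative sketch of a step-wise learning-rate schedule, expressed as a
% per-epoch vector in MatConvNet style. Values and option names are assumptions;
% see demo_refinenet_train.m and demo_refinenet_train_reduce_learning_rate.m
% for the actual settings.
base_lr = 5e-4;                                      % assumed starting rate
opts.train.learningRate = [ ...
    base_lr       * ones(1, 50), ...                 % epochs 1-50
    base_lr / 10  * ones(1, 30), ...                 % epochs 51-80: reduce 10x
    base_lr / 100 * ones(1, 20)];                    % epochs 81-100: reduce again
```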

Citation

If you find the code useful, please cite our work as

@inproceedings{Lin:2017:RefineNet,
  title = {Refine{N}et: {M}ulti-Path Refinement Networks for High-Resolution Semantic Segmentation},
  shorttitle = {RefineNet: Multi-Path Refinement Networks},
  booktitle = {CVPR},
  author = {Lin, G. and Milan, A. and Shen, C. and Reid, I.},
  month = jul,
  year = {2017}
}

and

@article{lin2019refinenet,
  title={RefineNet: Multi-Path Refinement Networks for Dense Prediction},
  author={Lin, Guosheng and Liu, Fayao and Milan, Anton and Shen, Chunhua and Reid, Ian},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  year={2019},
  publisher={IEEE},
  doi={10.1109/TPAMI.2019.2893630}, 
}

License

For academic usage, the code is released under the permissive BSD license. For any commercial purpose, please contact the authors.
