[CVPR 2021] Rethinking Text Segmentation: A Novel Dataset and A Text-Specific Refinement Approach

Overview

This is the repository hosting the TextSeg dataset and the TexRNet code from the following paper:

Xingqian Xu, Zhifei Zhang, Zhaowen Wang, Brian Price, Zhonghao Wang and Humphrey Shi, Rethinking Text Segmentation: A Novel Dataset and A Text-Specific Refinement Approach, ArXiv Link

Note:

[2021.04.21] So far, our dataset is partially released with images and semantic labels. Since many people may request the dataset for OCR or non-segmentation tasks, please stay tuned, and we will release the dataset in full ASAP.

[2021.06.18] Our dataset is now fully released. To download the data, please send a request email to [email protected] and tell us which school you are affiliated with. Please be aware that the released dataset is version 2, whose annotations differ slightly from those used in the paper. To provide the most accurate dataset, we went through a second round of quality assurance, in which we fixed some faulty annotations and made them more consistent across the dataset. Since the TexRNet in the paper does not use OCR or character instance labels (i.e. word- and character-level bounding polygons, and character-level masks), we will not release the older version of these labels. However, we do release the retroactive semantic_label_v1.tar.gz so that researchers can reproduce the results in the paper. For more details about the dataset, please see below.

Introduction

Text in the real world is extremely diverse, yet current text datasets do not reflect such diversity very well. To bridge this gap, we propose TextSeg, a large-scale, fine-annotated, multi-purpose text dataset collecting scene and design text with six types of annotations: word- and character-wise bounding polygons, masks, and transcriptions. We also introduce the Text Refinement Network (TexRNet), a novel text segmentation approach that adapts to the unique properties of text, e.g. non-convex boundaries and diverse textures, which often burden traditional segmentation models. TexRNet refines results from a common segmentation approach via key-feature pooling and attention, so that wrongly activated text regions can be adjusted. We also introduce trimap and discriminator losses that bring significant improvements on text segmentation.
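To make the refinement step concrete, here is a minimal PyTorch sketch of the key-feature pooling and attention idea: one key feature per class is pooled from the backbone features using the initial prediction, and each pixel is then re-scored by cosine similarity to those keys. This is an illustration under assumed shapes and names, not the code in this repository.

# Illustrative sketch of key-feature pooling + attention refinement.
# Function name, shapes, and normalization details are assumptions for exposition.
import torch
import torch.nn.functional as F

def refine_with_key_features(feat, init_logits):
    # feat:        backbone features, shape (B, C, H, W)
    # init_logits: initial segmentation logits, shape (B, K, H, W) for K classes
    B, C, H, W = feat.shape
    prob = torch.softmax(init_logits, dim=1)                    # (B, K, H, W)
    feat_flat = feat.view(B, C, H * W)                          # (B, C, HW)
    prob_flat = prob.view(B, -1, H * W)                         # (B, K, HW)

    # Pool one key feature per class: probability-weighted average of pixel features.
    keys = torch.bmm(feat_flat, prob_flat.transpose(1, 2))      # (B, C, K)
    keys = keys / (prob_flat.sum(dim=2).unsqueeze(1) + 1e-6)

    # Attention map: cosine similarity between every pixel feature and each class key.
    attn = torch.bmm(F.normalize(keys, dim=1).transpose(1, 2),  # (B, K, C)
                     F.normalize(feat_flat, dim=1))             # -> (B, K, HW)
    return attn.view(B, -1, H, W)                               # refined activation map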

TextSeg Dataset

Image Collection

Annotation

Download

Our dataset (TextSeg) is for academic use only and cannot be used for any commercial project or research. To download the data, please send a request email to [email protected] and tell us which school you are affiliated with.

A full download should contain these files:

  • image.tar.gz contains 4024 images.
  • annotation.tar.gz contains the labels corresponding to the images. The following files are included:
    • [dataID]_anno.json contains all word- and character-level transcriptions and bounding polygons.
    • [dataID]_mask.png contains all character masks. Character mask label values are ordered from 1 to n. Label value 0 means background, 255 means ignore.
    • [dataID]_maskeff.png contains all character masks with effect.
    • Adobe_Research_License_TextSeg.txt license file.
  • semantic_label.tar.gz contains all word-level (semantic-level) masks. It contains:
    • [dataID]_maskfg.png 0 means background, 100 means word, 200 means word-effect, 255 means ignore. ([dataID]_maskfg.png can also be generated from [dataID]_mask.png and [dataID]_maskeff.png; see the loading sketch after this list.)
  • split.json contains the official train/val/test split.
  • [Optional] semantic_label_v1.tar.gz contains the old version of the labels used in our paper. One can download it to reproduce the results reported in the paper.
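For quick reference, below is a minimal sketch of reading a semantic mask and turning it into a binary text/background map plus an ignore mask, using the label values listed above. The directory layout under ./data/TextSeg is an assumption about where the archives are extracted.

# Minimal loading sketch; paths are assumptions, label values follow the list above.
import numpy as np
from PIL import Image

def load_semantic_mask(data_id, root="./data/TextSeg"):
    maskfg = np.array(Image.open(f"{root}/semantic_label/{data_id}_maskfg.png"))
    text = np.isin(maskfg, (100, 200))   # 100 = word, 200 = word-effect
    ignore = (maskfg == 255)             # pixels to exclude from evaluation
    return text.astype(np.uint8), ignore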

TexRNet Structure and Results

In this table, we report the performance of TexRNet on five text segmentation datasets, including ours.

Method                        | TextSeg (Ours)  | ICDAR13 FST     | COCO_TS         | MLT_S           | Total-Text
                              | fgIoU   F-score | fgIoU   F-score | fgIoU   F-score | fgIoU   F-score | fgIoU   F-score
DeeplabV3+                    | 84.07   0.914   | 69.27   0.802   | 72.07   0.641   | 84.63   0.837   | 74.44   0.824
HRNetV2-W48                   | 85.03   0.914   | 70.98   0.822   | 68.93   0.629   | 83.26   0.836   | 75.29   0.825
HRNetV2-W48 + OCR             | 85.98   0.918   | 72.45   0.830   | 69.54   0.627   | 83.49   0.838   | 76.23   0.832
Ours: TexRNet + DeeplabV3+    | 86.06   0.921   | 72.16   0.835   | 73.98   0.722   | 86.31   0.830   | 76.53   0.844
Ours: TexRNet + HRNetV2-W48   | 86.84   0.924   | 73.38   0.850   | 72.39   0.720   | 86.09   0.865   | 78.47   0.848

To run the code

Set up the environment

conda create -n texrnet python=3.7
conda activate texrnet
pip install -r requirement.txt

To eval

First, make the following directories to hold the pre-trained models, the dataset, and the running logs:

mkdir ./pretrained
mkdir ./data
mkdir ./log

Second, download the models from this link and move them to ./pretrained.

Third, make sure that ./data contains the data. A sample root directory for TextSeg would be ./data/TextSeg.

Lastly, evaluate the model and compute fgIoU/F-score with the following command:

python main.py --eval --pth [model path] [--hrnet] [--gpu 0 1 ...] --dsname [dataset name]

Here is a sample command to evaluate TexRNet_HRNet on TextSeg with 4 GPUs:

python main.py --eval --pth pretrained/texrnet_hrnet.pth --hrnet --gpu 0 1 2 3 --dsname textseg

The program will store the results and the execution log in ./log/eval.
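For context, the two reported metrics can be computed from binary prediction and ground-truth masks roughly as below. This sketch illustrates the definitions (foreground IoU and F-score over non-ignored pixels); it is not the repository's exact evaluation code.

# Hedged illustration of fgIoU and F-score on binary masks.
import numpy as np

def fg_iou_and_fscore(pred, gt, ignore=None):
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    valid = np.ones_like(gt) if ignore is None else ~ignore

    tp = np.logical_and(pred, gt)[valid].sum()
    fp = np.logical_and(pred, ~gt)[valid].sum()
    fn = np.logical_and(~pred, gt)[valid].sum()

    iou = tp / max(tp + fp + fn, 1)
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    fscore = 2 * precision * recall / max(precision + recall, 1e-6)
    return iou, fscore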

To train

Similarly, these directories need to be created:

mkdir ./pretrained
mkdir ./pretrained/init
mkdir ./data
mkdir ./log

Second, training is initialized from multiple pre-trained models. Download these initial models from this link and move them to ./pretrained/init. Also make sure that ./data contains the data.

Lastly, execute the training code with the following command:

python main.py [--hrnet] [--gpu 0 1 ...] --dsname [dataset name] [--trainwithcls]

Here is a sample command to train TexRNet_HRNet on TextSeg with the classifier and discriminator loss using 4 GPUs:

python main.py --hrnet --gpu 0 1 2 3 --dsname textseg --trainwithcls

The training configs, logs, and models will be stored in ./log/texrnet_[dsname]/[exid]_[signature].
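As background on the trimap loss mentioned in the introduction, the idea is to weight the segmentation loss more heavily in a band around text boundaries. The sketch below builds that band with dilation/erosion via max-pooling; the band width, weight, and ignore handling are assumptions, not the repository's exact settings.

# Illustrative trimap-weighted cross-entropy; parameters are assumptions.
import torch
import torch.nn.functional as F

def trimap_weighted_ce(logits, target, band=5, boundary_weight=2.0):
    # logits: (B, K, H, W) raw class scores; target: (B, H, W) integer labels, 255 = ignore.
    with torch.no_grad():
        fg = ((target > 0) & (target != 255)).float().unsqueeze(1)   # (B, 1, H, W)
        k = 2 * band + 1
        dilated = F.max_pool2d(fg, k, stride=1, padding=band)
        eroded = 1.0 - F.max_pool2d(1.0 - fg, k, stride=1, padding=band)
        boundary = (dilated - eroded) > 0                            # band around text edges
        weight = (boundary.float() * (boundary_weight - 1.0) + 1.0).squeeze(1)
    ce = F.cross_entropy(logits, target, ignore_index=255, reduction="none")  # (B, H, W)
    return (ce * weight).mean()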

Bibtex

@article{xu2020rethinking,
  title={Rethinking Text Segmentation: A Novel Dataset and A Text-Specific Refinement Approach},
  author={Xu, Xingqian and Zhang, Zhifei and Wang, Zhaowen and Price, Brian and Wang, Zhonghao and Shi, Humphrey},
  journal={arXiv preprint arXiv:2011.14021},
  year={2020}
}

Acknowledgements

The directory ./hrnet_code is copied directly from the official HRNet GitHub repository (link). Credit for the HRNet code belongs to the HRNet authors, and users should follow their terms of use.
