
Implementation of using ESRGAN to super-resolve text images for scene text detection and recognition, based on ABCNet.


AdelaiDet

AdelaiDet is an open source toolbox for multiple instance-level recognition tasks on top of Detectron2. All instance-level recognition works from our group are open-sourced here.

To date, AdelaiDet implements the following algorithms:

  • FCOS
  • BlendMask
  • MEInst
  • ABCNet
  • CondInst
  • SOLO
  • SOLOv2
  • DirectPose

Quantized AdelaiDet can be found here.

Models

COCO Object Detection Baselines with FCOS

Name | inf. time | box AP | download
FCOS_R_50_1x | 16 FPS | 38.7 | model
FCOS_MS_R_101_2x | 12 FPS | 43.1 | model
FCOS_MS_X_101_32x8d_2x | 6.6 FPS | 43.9 | model
FCOS_MS_X_101_32x8d_dcnv2_2x | 4.6 FPS | 46.6 | model
FCOS_RT_MS_DLA_34_4x_shtw | 52 FPS | 39.1 | model

More models can be found in FCOS README.md.

COCO Instance Segmentation Baselines with BlendMask

Model | Name | inf. time | box AP | mask AP | download
Mask R-CNN | R_101_3x | 10 FPS | 42.9 | 38.6 |
BlendMask | R_101_3x | 11 FPS | 44.8 | 39.5 | model
BlendMask | R_101_dcni3_5x | 10 FPS | 46.8 | 41.1 | model

For more models and information, please refer to BlendMask README.md.

COCO Instance Segmentation Baselines with MEInst

Name | inf. time | box AP | mask AP | download
MEInst_R_50_3x | 12 FPS | 43.6 | 34.5 | model

For more models and information, please refer to MEInst README.md.

Total_Text results with ABCNet

Name | inf. time | e2e-hmean | det-hmean | download
attn_R_50 | 11 FPS | 67.1 | 86.0 | model

For more models and information, please refer to ABCNet README.md.

COCO Instance Segmentation Baselines with CondInst

Name | inf. time | box AP | mask AP | download
CondInst_MS_R_50_1x | 14 FPS | 39.7 | 35.7 | model
CondInst_MS_R_50_BiFPN_3x_sem | 13 FPS | 44.7 | 39.4 | model
CondInst_MS_R_101_3x | 11 FPS | 43.3 | 38.6 | model
CondInst_MS_R_101_BiFPN_3x_sem | 10 FPS | 45.7 | 40.2 | model

For more models and information, please refer to CondInst README.md.

Note that:

  • Inference time for all projects is measured on an NVIDIA 1080Ti with batch size 1.
  • APs are evaluated on the COCO 2017 val split unless otherwise specified.

Installation

First install Detectron2 following the official guide: INSTALL.md. Please use Detectron2 at commit 9eb4831 for now; the incompatibility with the latest version will be fixed soon.

Then build AdelaiDet with:

git clone https://github.com/aim-uofa/AdelaiDet.git
cd AdelaiDet
python setup.py build develop

Some projects may require special setup; please follow their own README.md in configs.

Quick Start

Inference with Pre-trained Models

  1. Pick a model and its config file, for example, fcos_R_50_1x.yaml.
  2. Download the model:
     wget https://cloudstor.aarnet.edu.au/plus/s/glqFc13cCoEyHYy/download -O fcos_R_50_1x.pth
  3. Run the demo with
python demo/demo.py \
    --config-file configs/FCOS-Detection/R_50_1x.yaml \
    --input input1.jpg input2.jpg \
    --opts MODEL.WEIGHTS fcos_R_50_1x.pth
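
Inference can also be run programmatically through the Detectron2 Python API instead of demo/demo.py. The snippet below is a minimal sketch, not part of the official quick start; it assumes AdelaiDet is installed so that importing the adet package registers its architectures, that adet.config.get_cfg is available, and it reuses the config file and weights from the steps above.

# Minimal sketch: programmatic inference with the downloaded FCOS model.
# Assumes the adet package (AdelaiDet) is importable and registers its
# architectures with Detectron2 on import.
import cv2
from detectron2.engine import DefaultPredictor
from adet.config import get_cfg  # AdelaiDet's extended config helper (assumed import path)

cfg = get_cfg()
cfg.merge_from_file("configs/FCOS-Detection/R_50_1x.yaml")
cfg.MODEL.WEIGHTS = "fcos_R_50_1x.pth"

predictor = DefaultPredictor(cfg)
image = cv2.imread("input1.jpg")        # BGR image, as DefaultPredictor expects
outputs = predictor(image)
print(outputs["instances"].pred_boxes)  # predicted boxes for the input image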

Train Your Own Models

To train a model with "train_net.py", first set up the corresponding datasets following datasets/README.md, then run:

OMP_NUM_THREADS=1 python tools/train_net.py \
    --config-file configs/FCOS-Detection/R_50_1x.yaml \
    --num-gpus 8 \
    OUTPUT_DIR training_dir/fcos_R_50_1x

To evaluate the model after training, run:

OMP_NUM_THREADS=1 python tools/train_net.py \
    --config-file configs/FCOS-Detection/R_50_1x.yaml \
    --eval-only \
    --num-gpus 8 \
    OUTPUT_DIR training_dir/fcos_R_50_1x \
    MODEL.WEIGHTS training_dir/fcos_R_50_1x/model_final.pth
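
The trailing key-value pairs in the commands above (OUTPUT_DIR, MODEL.WEIGHTS) are config overrides applied on top of the YAML file. As an illustration only, assuming the adet.config.get_cfg helper, they correspond roughly to the following calls on the config object:

# Illustrative sketch of command-line config overrides; the config path and
# values are the ones used in the training and evaluation commands above.
from adet.config import get_cfg  # assumed AdelaiDet config helper

cfg = get_cfg()
cfg.merge_from_file("configs/FCOS-Detection/R_50_1x.yaml")
cfg.merge_from_list([
    "OUTPUT_DIR", "training_dir/fcos_R_50_1x",
    "MODEL.WEIGHTS", "training_dir/fcos_R_50_1x/model_final.pth",
])
print(cfg.OUTPUT_DIR, cfg.MODEL.WEIGHTS)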

Note that:

  • The configs are made for 8-GPU training. To train with a different number of GPUs, change --num-gpus accordingly.
  • If you want to measure the inference time, please change --num-gpus to 1.
  • We set OMP_NUM_THREADS=1 by default, which achieves the best speed on our machines; please change it as needed.
  • This quick start is made for FCOS. If you are using other projects, please check the projects' own README.md in configs.

Citing AdelaiDet

If you use this toolbox in your research or wish to refer to the baseline results, please use the following BibTeX entries.

@inproceedings{tian2019fcos,
  title     =  {{FCOS}: Fully Convolutional One-Stage Object Detection},
  author    =  {Tian, Zhi and Shen, Chunhua and Chen, Hao and He, Tong},
  booktitle =  {Proc. Int. Conf. Computer Vision (ICCV)},
  year      =  {2019}
}

@inproceedings{chen2020blendmask,
  title     =  {{BlendMask}: Top-Down Meets Bottom-Up for Instance Segmentation},
  author    =  {Chen, Hao and Sun, Kunyang and Tian, Zhi and Shen, Chunhua and Huang, Yongming and Yan, Youliang},
  booktitle =  {Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR)},
  year      =  {2020}
}

@inproceedings{zhang2020MEInst,
  title     =  {Mask Encoding for Single Shot Instance Segmentation},
  author    =  {Zhang, Rufeng and Tian, Zhi and Shen, Chunhua and You, Mingyu and Yan, Youliang},
  booktitle =  {Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR)},
  year      =  {2020}
}

@inproceedings{liu2020abcnet,
  title     =  {{ABCNet}: Real-time Scene Text Spotting with Adaptive Bezier-Curve Network},
  author    =  {Liu, Yuliang and Chen, Hao and Shen, Chunhua and He, Tong and Jin, Lianwen and Wang, Liangwei},
  booktitle =  {Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR)},
  year      =  {2020}
}

@inproceedings{wang2020solo,
  title     =  {{SOLO}: Segmenting Objects by Locations},
  author    =  {Wang, Xinlong and Kong, Tao and Shen, Chunhua and Jiang, Yuning and Li, Lei},
  booktitle =  {Proc. Eur. Conf. Computer Vision (ECCV)},
  year      =  {2020}
}

@article{wang2020solov2,
  title   =  {{SOLOv2}: Dynamic, Faster and Stronger},
  author  =  {Wang, Xinlong and Zhang, Rufeng and Kong, Tao and Li, Lei and Shen, Chunhua},
  journal =  {arXiv preprint arXiv:2003.10152},
  year    =  {2020}
}

@article{tian2019directpose,
  title   =  {{DirectPose}: Direct End-to-End Multi-Person Pose Estimation},
  author  =  {Tian, Zhi and Chen, Hao and Shen, Chunhua},
  journal =  {arXiv preprint arXiv:1911.07451},
  year    =  {2019}
}

@inproceedings{tian2020conditional,
  title     =  {Conditional Convolutions for Instance Segmentation},
  author    =  {Tian, Zhi and Shen, Chunhua and Chen, Hao},
  booktitle =  {Proc. Eur. Conf. Computer Vision (ECCV)},
  year      =  {2020}
}

License

For academic use, this project is licensed under the 2-clause BSD License - see the LICENSE file for details. For commercial use, please contact Chunhua Shen.
