TensorFlow reimplementation of "MASTER: Multi-Aspect Non-local Network for Scene Text Recognition" (Pattern Recognition 2021). This project differs from our original implementation, which was built on the company's private codebase FastOCR. A PyTorch reimplementation is also available in the MASTER-pytorch repository, with almost identical performance. (P.S. The logo is inspired by Master Oogway from Kung Fu Panda.)
- 2021/07: MASTER-mmocr, a reimplementation of MASTER based on mmocr. @Jiaquan Ye
- 2021/07: TableMASTER-mmocr, the 2nd-place solution to ICDAR 2021 Competition on Scientific Literature Parsing Task B, based on MASTER. @Jiaquan Ye
- 2021/07: A talk (in Chinese) can be found here.
- 2021/05: Savior, which aims to provide a simple, lightweight, fast, integrated, and pipelined deployment framework for RPA, has now integrated MASTER for captcha recognition. @Tao Luo
- 2021/04: Slides can be found here.
- 1st place (2021/05) solution to ICDAR 2021 Competition on Scientific Table Image Recognition to LaTeX (Subtask I: Table structure reconstruction)
- 1st place (2021/05) solution to ICDAR 2021 Competition on Scientific Table Image Recognition to LaTeX (Subtask II: Table content reconstruction)
- 2nd place (2021/05) solution to ICDAR 2021 Competition on Scientific Literature Parsing Task B: Table recognition
- 1st place (2020/10) solution to ICDAR 2019 Robust Reading Challenge on Reading Chinese Text on Signboard (Task 2)
- 2nd and 5th places (2020/10) in the 5th China Innovation Challenge on Handwritten Mathematical Expression Recognition
- 4th place (2019/08) in the ICDAR 2017 Robust Reading Challenge on COCO-Text (Task 2)
- More will be released
MASTER is a self-attention-based scene text recognizer that (1) encodes not only the input-output attention but also self-attention, capturing the feature-feature and target-target relationships inside the encoder and decoder, (2) learns an intermediate representation that is more powerful and more robust to spatial distortion, and (3) offers better training and evaluation efficiency. The overall architecture is shown below.
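For readers unfamiliar with the mechanism: the self-attention above is the standard scaled dot-product attention of transformer models (the paper's encoder additionally uses a multi-aspect global-context attention variant). A minimal TensorFlow sketch of the plain mechanism, not the repo's actual module, is:

```python
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    """Plain scaled dot-product attention: softmax(QK^T / sqrt(d)) V.
    q, k, v: tensors of shape (..., seq_len, depth)."""
    d_k = tf.cast(tf.shape(k)[-1], tf.float32)
    scores = tf.matmul(q, k, transpose_b=True) / tf.sqrt(d_k)
    if mask is not None:
        scores += mask * -1e9  # mask = 1.0 at padded/future positions
    weights = tf.nn.softmax(scores, axis=-1)
    return tf.matmul(weights, v), weights
```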
This repo contains the following features.
- Multi-GPU training
- Greedy decoding (see the sketch after this list)
- Single-image inference
- Evaluation on IIIT5K
- Checkpoint to SavedModel conversion
- Refactored, more TensorFlow-style code that is more consistent with graph mode
- TensorFlow Serving support
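As background for the greedy-decoding feature, the sketch below shows how greedy decoding of an autoregressive transformer decoder works in general (eager mode). The `decoder` callable and the special-token ids are hypothetical stand-ins, not this repo's actual API:

```python
import tensorflow as tf

SOS_ID, EOS_ID, MAX_LEN = 1, 2, 50  # hypothetical special-token ids / length limit

def greedy_decode(decoder, memory, batch_size):
    """Pick the arg-max token at every step until EOS or MAX_LEN.
    `decoder(tokens, memory)` is assumed to return logits of shape
    (batch, steps_so_far, vocab_size)."""
    tokens = tf.fill([batch_size, 1], SOS_ID)
    for _ in range(MAX_LEN):
        logits = decoder(tokens, memory)
        next_id = tf.argmax(logits[:, -1, :], axis=-1, output_type=tf.int32)
        tokens = tf.concat([tokens, next_id[:, None]], axis=1)
        if tf.reduce_all(next_id == EOS_ID):  # every sequence finished
            break
    return tokens
```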
It is highly recommended to install tensorflow-gpu using conda. Python 3.7 is preferred.
```bash
pip install -r requirements.txt
```
I use Clovaai's MJ training split for training. Please check src/dataset/benchmark_data_generator.py for details.
The eval datasets are several real scene text datasets. You can download them directly from here.
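For orientation, these benchmark LMDBs commonly follow the layout of clovaai's deep-text-recognition-benchmark (`num-samples`, `image-%09d`, and `label-%09d` keys). A minimal reader, assuming that layout, could look like:

```python
import lmdb

def iterate_lmdb(lmdb_path):
    """Yield (jpeg_bytes, label) pairs from a benchmark-style LMDB."""
    env = lmdb.open(lmdb_path, readonly=True, lock=False)
    with env.begin() as txn:
        num_samples = int(txn.get(b'num-samples'))
        for i in range(1, num_samples + 1):  # keys are 1-indexed
            image = txn.get(b'image-%09d' % i)
            label = txn.get(b'label-%09d' % i).decode('utf-8')
            yield image, label
```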
```bash
# train from scratch
python train.py -c [your_config].yaml

# resume training from the last checkpoint
python train.py -c [your_config].yaml -r

# finetune from a given checkpoint
python train.py -c [your_config].yaml -f [checkpoint]
```
Since I changed the usage of the GCB block, the released weights may not be compatible with HEAD. If you want to test the model, please use commit https://github.com/jiangxiluning/MASTER-TF/commit/85f9217af8697e41aefe5121e580efa0d6d04d92.
Currently, you can download the checkpoint from here with code o6g9, or from Google Drive. This checkpoint was trained on MJ and selected for the best performance on the IIIT5K dataset. Below is a comparison between the PyTorch and TensorFlow versions.
| Framework | Dataset | Word Accuracy | Training Details |
|---|---|---|---|
| PyTorch | MJ | 85.05% | 3 × V100, 4 epochs, batch size 3 × 128 |
| TensorFlow | MJ | 85.53% | 2 × 2080 Ti, 4 epochs, batch size 2 × 50 |
Please download the checkpoint and model config from here with code o6g9 and unzip them; you can then reproduce this metric by running:
```bash
python eval_iiit5k.py --ckpt [checkpoint file] --cfg [model config] -o [output dir] -i [iiit5k lmdb test dataset]
```
The checkpoint file argument should be `${where you unzip}/backup/512_8_3_3_2048_2048_0.2_0_Adam_mj_my/checkpoints/OCRTransformer-Best`.
For TensorFlow Serving, you should use the SavedModel format. I provide test cases that show how to convert a checkpoint to a SavedModel and how to use it:
```bash
# see the test case test_savedModel in tests/test_units.py: converts a checkpoint to a SavedModel
pytest -s tests/test_units.py::test_savedModel
# calls decode to run inference and get the predicted transcript and logits out
pytest -s tests/test_units.py::test_loadModel
```
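If you just want the gist of the conversion without reading the tests, a minimal sketch follows. The `export_saved_model` helper is illustrative, assuming a Keras-style model object whose `decode` method is wrapped as a `tf.function` (per the tests above, `decode` returns the predicted transcript and logits):

```python
import tensorflow as tf

def export_saved_model(model, ckpt_path, export_dir):
    """Restore a training checkpoint into `model`, then export it.
    tf.saved_model.save also serializes @tf.function-decorated methods
    such as `decode`, so they stay callable after loading."""
    ckpt = tf.train.Checkpoint(model=model)
    ckpt.restore(ckpt_path).expect_partial()
    tf.saved_model.save(model, export_dir)

# Later, e.g. in a serving client or test:
# loaded = tf.saved_model.load('export/master/1')
# transcripts, logits = loaded.decode(images)  # hypothetical signature
```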
If you find MASTER useful, please cite our paper:
```bibtex
@article{Lu2021MASTER,
  title={{MASTER}: Multi-Aspect Non-local Network for Scene Text Recognition},
  author={Ning Lu and Wenwen Yu and Xianbiao Qi and Yihao Chen and Ping Gong and Rong Xiao and Xiang Bai},
  journal={Pattern Recognition},
  year={2021}
}
```
This project is licensed under the MIT License. See LICENSE for more details.
Thanks to the authors and their repo: