We aim to establish a unified benchmark for training and evaluating models for scene text detection and recognition. Based on this benchmark, we introduce an accurate and efficient general OCR system, OpenOCR. Additionally, this repository serves as the official codebase of the OCR team from the [FVL](https://fvl.fudan.edu.cn) Laboratory, Fudan University.
We sincerely welcome researchers to recommend OCR or related algorithms, and to point out any potential factual errors or bugs. Upon receiving such suggestions, we will promptly evaluate and carefully reproduce them. We look forward to collaborating with you to advance the development of OpenOCR and to continuously contribute to the OCR community!
## Features
- 🔥**OpenOCR: A general OCR system for accuracy and efficiency**
  - A practical OCR system built on SVTRv2.
  - Outperforming [PP-OCRv4](<>) released by [PaddleOCR](<>) by 4.5% on the [OCR competition leaderboard](<>).
  - [x] Supporting Chinese and English text detection and recognition.
  - [x] Providing server and mobile models.
  - [ ] Fine-tuning OpenOCR on a custom dataset.
  - [ ] Exporting to the ONNX engine.
- 🔥**SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition**
  - \[[Paper](../configs/rec/svtrv2/SVTRv2.pdf)\]\[[Model](./configs/rec/svtrv2/readme.md#11-models-and-results)\]\[[Config, Training and Inference](./configs/rec/svtrv2/readme.md#3-model-training--evaluation)\]
  - [Introduction](./docs/svtrv2.md)
  - Developing a unified training and evaluation benchmark for scene text recognition.
  - Supporting 24 scene text recognition methods trained from scratch on large-scale real datasets, with the latest methods to be added continuously.
  - Improving results by 20-30% compared to training on synthetic datasets.
  - Towards arbitrary-shaped text recognition and language modeling with a single visual model.
  - Surpassing attention-based decoder methods across challenging scenarios in terms of accuracy and speed.
  - [Get Started](./docs/svtrv2.md#get-started-with-training-a-sota-scene-text-recognition-model-from-scratch) with training a SoTA scene text recognition model from scratch.
## Our STR Algorithms
- [**DPTR**](<>) (*Shuai Zhao, Yongkun Du, Zhineng Chen\*, Yu-Gang Jiang. Decoder Pre-Training with only Text for Scene Text Recognition,* ACM MM 2024. [paper](https://arxiv.org/abs/2408.05706))
- [**IGTR**](./configs/rec/igtr/) (*Yongkun Du, Zhineng Chen\*, Yuchen Su, Caiyan Jia, Yu-Gang Jiang. Instruction-Guided Scene Text Recognition,* Under TPAMI minor revision 2024. [Doc](./configs/rec/igtr/readme.md), [paper](https://arxiv.org/abs/2401.17851))
- [**SVTRv2**](./configs/rec/svtrv2) (*Yongkun Du, Zhineng Chen\*, Hongtao Xie, Caiyan Jia, Yu-Gang Jiang. SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition,* 2024. [paper](./configs/rec/svtrv2/SVTRv2.pdf))
- [**SMTR&FocalSVTR**](./configs/rec/smtr/) (*Yongkun Du, Zhineng Chen\*, Caiyan Jia, Xieping Gao, Yu-Gang Jiang. Out of Length Text Recognition with Sub-String Matching,* 2024. [paper](https://arxiv.org/abs/2407.12317))
- [**CDistNet**](./configs/rec/cdistnet/) (*Tianlun Zheng, Zhineng Chen\*, Shancheng Fang, Hongtao Xie, Yu-Gang Jiang. CDistNet: Perceiving Multi-Domain Character Distance for Robust Text Recognition,* IJCV 2024. [paper](https://link.springer.com/article/10.1007/s11263-023-01880-0))
- [**SVTR**](./configs/rec/svtr/) (*Yongkun Du, Zhineng Chen\*, Caiyan Jia, Xiaoting Yin, Tianlun Zheng, Chenxia Li, Yuning Du, Yu-Gang Jiang. SVTR: Scene Text Recognition with a Single Visual Model,* IJCAI 2022 (Long). [PaddleOCR Doc](https://github.com/Topdu/PaddleOCR/blob/main/doc/doc_ch/algorithm_rec_svtr.md), [paper](https://www.ijcai.org/proceedings/2022/124))
- [**NRTR**](./configs/rec/nrtr/) (*Fenfen Sheng, Zhineng Chen\*, Bo Xu. NRTR: A No-Recurrence Sequence-to-Sequence Model For Scene Text Recognition,* ICDAR 2019. [paper](https://arxiv.org/abs/1806.00926))
## Recent Updates
- **🔥 2024.11.23 release notes**:
  - **OpenOCR: A general OCR system for accuracy and efficiency**
  - **SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition**
  - \[[Paper](../configs/rec/svtrv2/SVTRv2.pdf)\]\[[Model](./configs/rec/svtrv2/readme.md#11-models-and-results)\]\[[Config, Training and Inference](./configs/rec/svtrv2/readme.md#3-model-training--evaluation)\]
  - [Introduction](./docs/svtrv2.md)
  - [Get Started](./docs/svtrv2.md#get-started-with-training-a-sota-scene-text-recognition-model-from-scratch) with training a SoTA scene text recognition model from scratch.

Yiming Lei ([pretto0](https://github.com/pretto0)) and Xingsong Ye ([YesianRohn](https://github.com/YesianRohn)) from the [FVL](https://fvl.fudan.edu.cn) Laboratory, Fudan University, completed most of the algorithm reproduction work under the guidance of Professor Zhineng Chen. We are grateful for their outstanding contributions.