Scene text recognition method based on man-machine cooperation

A text recognition and human-machine collaboration technology applied in the field of end-to-end scene text recognition. It addresses the time-consuming, labor-intensive, and costly nature of large-scale annotation, achieving high recognition performance while reducing the proportion of samples that must be labeled manually.

Inactive Publication Date: 2020-02-14
TIANJIN UNIV

AI Technical Summary

Benefits of technology

The patented system offers several benefits: deep learning techniques are applied to scene images drawn from a large dataset; representative sample sets are selected based on their content so that scenes can be annotated effectively; and the confidence of the model's predictions is used to judge how reliably each sample is recognized, reducing the annotation effort required while maintaining accuracy. High-confidence predictions are added to the training set as pseudo-labels instead of having ground truth assigned manually. The overall technical effect is faster and more accurate recognition of text in images captured by cameras.

Problems solved by technology

The technical problem addressed by this patent concerns improving scene text recognition, for example for road-sign reading or autonomous vehicles, using machine learning models trained on captured scene images. Current approaches require extensive annotated datasets, which are complex and costly to produce. Prior work has attempted to combine active learning and progressive deep learning into a single framework.


Embodiment Construction

[0036] The present invention will be described in detail below in conjunction with specific embodiments. The following examples will help those skilled in the art to further understand the present invention, but do not limit the present invention in any form. It should be noted that those skilled in the art can make several modifications and improvements without departing from the concept of the present invention. These all belong to the protection scope of the present invention.

[0037] As shown in Figure 1, the embodiment of the present invention provides a scene text recognition method based on human-machine collaboration, which uses human-machine collaboration to label currently unlabeled samples and includes the following steps:

[0038] S1. Divide the existing scene text data set FSNS into Labeled data and Unlabeled data, select 20% of the Labeled data as a test set, and use the remaining 80% of the Labeled data as a pre-training set.
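A minimal sketch of this split is shown below. It assumes each labeled FSNS sample is available as an (image path, transcription) pair; the function name, random seed, and data layout are illustrative assumptions, and only the 20%/80% ratio comes from the description above.

```python
import random

def split_labeled_data(labeled_samples, test_ratio=0.2, seed=0):
    """Step S1 (sketch): split the labeled portion of FSNS into a
    test set (20%) and a pre-training set (80%)."""
    samples = list(labeled_samples)
    random.Random(seed).shuffle(samples)   # shuffle before splitting
    n_test = int(len(samples) * test_ratio)
    test_set = samples[:n_test]
    pretrain_set = samples[n_test:]
    return pretrain_set, test_set

# Hypothetical usage:
# labeled = [("img_0001.png", "Rue de la Paix"), ...]
# pretrain_set, test_set = split_labeled_data(labeled)
```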



Abstract

The invention discloses a scene text recognition method based on man-machine cooperation. The method comprises the following steps: S1, carrying out preliminary processing of an existing scene text data set, and selecting a pre-training data set, a training set and a test set from the scene text data set; S2, training an SEE network by using the pre-training data set to obtain a pre-training model; S3, predicting the unmarked training set by adopting the pre-training model, and dividing the unmarked training set into Hard samples and Easy samples according to the confidence of the prediction labels the model generates for the unmarked training set; performing manual annotation on the Hard samples, performing pseudo annotation on the Easy samples by using the model, and then fine-tuning the scene text recognition model with the annotated samples; and S4, repeating step S3 until the performance of the model meets the expected requirements. With this method, the labeling cost of the scene text data set can be reduced and the performance of the character recognition model can be improved.
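The iterative labeling loop of steps S2–S4 can be illustrated with a short sketch. This is not the patented implementation: the SEE network is named in the abstract, but `predict_with_confidence`, `fine_tune`, `evaluate`, the `request_manual_label` callback, and the 0.9 confidence threshold are hypothetical interfaces and values chosen only for illustration.

```python
def human_machine_labeling_round(model, unlabeled, request_manual_label,
                                 confidence_threshold=0.9):
    """One round of step S3 (sketch): split unlabeled images by prediction
    confidence, send Hard samples to a human annotator, pseudo-label Easy
    samples with the model's own prediction, and fine-tune on both."""
    hard, easy = [], []
    for image in unlabeled:
        text, confidence = model.predict_with_confidence(image)
        if confidence < confidence_threshold:
            hard.append(image)             # Hard sample -> manual annotation
        else:
            easy.append((image, text))     # Easy sample -> pseudo-label
    manual = [(img, request_manual_label(img)) for img in hard]
    model.fine_tune(manual + easy)
    return model

def train_until_target(model, unlabeled_batches, request_manual_label,
                       test_set, target_accuracy):
    """Step S4 (sketch): repeat S3 on successive batches of unlabeled data
    until the model meets the expected performance on the test set."""
    for batch in unlabeled_batches:
        model = human_machine_labeling_round(model, batch, request_manual_label)
        if model.evaluate(test_set) >= target_accuracy:
            break
    return model
```

The design intent, as described in the abstract, is that manual effort is spent only on the low-confidence (Hard) samples, while the model's high-confidence predictions are trusted as pseudo-labels, so the proportion of manual annotation shrinks as the model improves.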


Application Information

Owner TIANJIN UNIV