CN113610064B - Handwriting recognition method and device


Info

Publication number
CN113610064B
Authority
CN
China
Prior art keywords
handwriting
image
user
set text
similarity
Prior art date
Legal status
Active
Application number
CN202111173289.3A
Other languages
Chinese (zh)
Other versions
CN113610064A (en)
Inventor
刘军
秦勇
Current Assignee
Beijing Century TAL Education Technology Co Ltd
Original Assignee
Beijing Century TAL Education Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Century TAL Education Technology Co Ltd filed Critical Beijing Century TAL Education Technology Co Ltd
Priority to CN202111173289.3A
Publication of CN113610064A
Application granted
Publication of CN113610064B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a handwriting recognition method and device, belonging to the field of image processing. The method comprises the following steps: acquiring a target image containing handwriting of a target user; determining a simulated handwriting image of the handwriting of the target user according to the target image, wherein the simulated handwriting image comprises set text information; acquiring a first set text handwriting image of each user, wherein the first set text handwriting image comprises the set text information; respectively determining the similarity between the simulated handwriting image and the first set text handwriting image of each user; and determining the target user to which the handwriting belongs according to the similarity between the simulated handwriting image and the first set text handwriting image of each user. By adopting the method and the device, the recognition efficiency can be improved.

Description

Handwriting recognition method and device
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a handwriting recognition method and apparatus.
Background
Online education is a form of education that is highly popular with parents and students, as it is not limited by place or physical materials. In some application scenarios, the user's handwriting needs to be recognized. For example, in an online test scenario, after answering the questions offline, the user can photograph an answer image with the terminal and upload it to the server. In practice, however, one terminal is often used by multiple persons, and in such a situation the specific user (i.e. the respondent) currently cannot be distinguished effectively. If specific users could be distinguished effectively, each user's learning condition could be mastered by accumulating that user's answer images and correction content, so that learning-situation analysis and targeted learning recommendations can be provided to help the user learn better.
At present, the method for distinguishing users is handwriting recognition, and image similarity evaluation is one approach to handwriting recognition. Image similarity evaluation is a long-studied research topic, progressing from early empirical-formula calculation, through pattern recognition methods using operators designed from human experience, to various deep learning methods; along the way a large number of classical models and methods have emerged. Early empirical-formula methods, such as PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity), judge the similarity of two images by computing directly on pixel values. Pattern recognition methods use hand-designed operators, such as SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features), to extract feature points from the two images and form feature vectors, then compute the distance between the two feature vectors under some metric, such as cosine distance, Euclidean distance, or Hamming distance, and finally judge the similarity of the two images against a preset threshold. Deep learning methods are currently the most widely used and most effective; representative examples are the Siamese (twin) network and the pseudo-twin network, which use two neural network branches to extract the features of the two images respectively and combine the extracted features to obtain a similarity analysis result.
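As an illustrative aside (not part of the patent), the empirical-formula and feature-distance measures named above can be sketched in a few lines of NumPy; the images, vectors, and function names below are hypothetical examples.

```python
import numpy as np

def psnr(img_a, img_b, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equally sized images."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def cosine_similarity(vec_a, vec_b):
    """Cosine similarity between two feature vectors, in [-1, 1]."""
    return float(np.dot(vec_a, vec_b) /
                 (np.linalg.norm(vec_a) * np.linalg.norm(vec_b)))

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110  # single-pixel difference
print(round(psnr(a, b), 2))
print(cosine_similarity(np.array([1.0, 0.0]), np.array([1.0, 0.0])))
```

In the pattern recognition methods described above, a distance such as this cosine measure would be compared against a manually chosen threshold to decide similarity.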
Each of the three methods has its advantages. Empirical-formula calculation is objective, but it evaluates image similarity using only pixel values and cannot exploit the semantic (content) information of the image. Operators designed from human experience can use semantic information to some extent, but because similarity evaluation requires a manually set threshold, the quality of the result depends partly on human experience. Deep learning methods extract image features with a neural network model, can make full use of both the numerical information and the semantic information of the image, and leave the similarity judgment to the network, which avoids manually set thresholds and achieves better results. Compared with the former two methods, however, deep learning requires a large amount of manually labeled data to train the neural network model, and the quantity and quality of the training data are the key constraints on the model's predictive ability.
Disclosure of Invention
In order to solve the problems of the prior art, the embodiments of the present disclosure provide a handwriting recognition method and apparatus. The technical scheme is as follows:
according to an aspect of the present disclosure, there is provided a handwriting recognition method, the method including:
acquiring a target image containing handwriting of a target user;
determining a simulated handwriting image of the handwriting of the target user according to the target image, wherein the simulated handwriting image comprises set text information;
acquiring a first set text handwriting image of each user, wherein the first set text handwriting image comprises the set text information;
respectively determining the similarity between the simulated handwriting image and the first set text handwriting image of each user;
and determining a target user to which the handwriting of the target user belongs according to the similarity between the simulated handwriting image and the first set text handwriting image of each user.
According to another aspect of the present disclosure, there is provided a handwriting recognition apparatus, the apparatus including:
the acquisition module is used for acquiring a target image containing the handwriting of a target user;
the simulation module is used for determining a simulation handwriting image of the handwriting of the target user according to the target image, and the simulation handwriting image comprises set text information;
the acquisition module is further used for acquiring a first set text handwriting image of each user, which is stored in advance, wherein the first set text handwriting image comprises the set text information;
the determining module is used for respectively determining the similarity between the simulated handwriting image and the first set text handwriting image of each user; and determining a target user to which the handwriting of the target user belongs according to the similarity between the simulated handwriting image and the first set text handwriting image of each user.
According to another aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory for storing a program,
wherein the program comprises instructions which, when executed by the processor, cause the processor to carry out the handwriting recognition method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to execute the handwriting recognition method described above.
In the embodiment of the disclosure, the handwriting image of the target user when writing the set text can be simulated according to the handwriting of the target user, and the target user is identified according to the similarity between the simulated handwriting image and the pre-stored first set text handwriting image of each user. Because the content of the first set text handwriting image of each user is the set text, the difference lies in the handwriting style of each user, the interference caused by different image contents is reduced, and the information quantity of the set text is fixed and limited, so that the processing speed is higher, and the recognition efficiency can be improved.
Drawings
Further details, features and advantages of the disclosure are disclosed in the following description of exemplary embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 shows a schematic illustration of an implementation environment according to an exemplary embodiment of the present disclosure;
FIG. 2 shows a flow chart of a method of handwriting recognition according to an example embodiment of the present disclosure;
FIG. 3 shows a model structure diagram in accordance with an exemplary embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a set text handwriting image according to an example embodiment of the present disclosure;
FIG. 5 shows a flowchart of a method for training a handwriting simulation model according to an exemplary embodiment of the present disclosure;
FIG. 6 shows a flowchart of a method of training an image similarity determination model according to an exemplary embodiment of the present disclosure;
FIG. 7 shows a schematic block diagram of a handwriting recognition apparatus according to an example embodiment of the present disclosure;
FIG. 8 shows a schematic block diagram of a handwriting recognition apparatus according to an example embodiment of the present disclosure;
FIG. 9 shows a schematic block diagram of a handwriting recognition apparatus according to an example embodiment of the present disclosure;
FIG. 10 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description. It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
For clarity of description of the methods provided by the embodiments of the present disclosure, the following describes the techniques involved in the present disclosure:
1. MatchNet (comparison network)
MatchNet mainly comprises a feature network and a metric network. The feature network is a convolutional neural network with 2 branches; each branch comprises 5 convolutional layers and 3 pooling layers, and the 2 branches share weights. The metric network consists of 3 fully connected layers and an objective function; the third fully connected layer is followed by a softmax (normalized exponential) function, and the objective function is a cross-entropy loss. MatchNet can be regarded as an improvement on the Siamese (twin) network: different branches of the neural network model extract features of the input images, the extracted information, such as textures, edges, corner points, and high-level semantics, is turned into vectors, a distance metric is applied to the two vectors representing the two images, and whether, or to what degree, the two images are similar is then judged against a set threshold.
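The two-branch, shared-weight structure described above can be sketched as follows. This is a deliberately tiny stand-in, not the actual network: a single random, untrained linear layer per branch replaces the 5 convolutional and 3 pooling layers, and the layer sizes are arbitrary; it only shows how shared-weight branches feed a concatenated metric head ending in a softmax.

```python
import numpy as np

rng = np.random.default_rng(0)
W_feat = rng.standard_normal((16, 8))    # shared feature weights (used by both branches)
W_metric = rng.standard_normal((16, 2))  # metric head: 2-way similar / dissimilar output

def extract_features(image_vec):
    """Shared-weight branch: one linear layer + ReLU stands in for the
    5-conv / 3-pool feature network of each MatchNet branch."""
    return np.maximum(W_feat.T @ image_vec, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def match_score(img_a, img_b):
    """Run both images through the same branch weights, concatenate the
    features, and apply the metric head with a softmax."""
    feats = np.concatenate([extract_features(img_a), extract_features(img_b)])
    return softmax(W_metric.T @ feats)  # probabilities over {dissimilar, similar}

a = rng.standard_normal(16)
probs = match_score(a, a)
print(probs.shape, float(probs.sum()))
```

Because the weights are untrained, the output probabilities are meaningless; only the data flow (shared branches, concatenation, softmax head) mirrors the description.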
2. CGAN (Conditional Generative Adversarial Nets)
A CGAN is substantially the same as a general generative adversarial network (GAN) and is composed of a generator and a discriminator. The generator of a GAN generally takes Gaussian noise as input and outputs an image resembling a designated real image; the discriminator takes the image generated by the generator or the designated real image as input and outputs 1 or 0, indicating whether the input is a real image. Training usually takes KL divergence (relative entropy) or JS divergence (Jensen-Shannon divergence) as the optimization target; when the discriminator cannot tell whether its input is a real image or an image generated by the generator, the generator and discriminator are considered to have reached game equilibrium, and the generator can be used to generate images.
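The KL and JS divergences mentioned as optimization targets can be computed for discrete distributions as follows; the two example distributions are hypothetical.

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p || q) for discrete distributions (natural log)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js(p, q):
    """Jensen-Shannon divergence: symmetric and bounded by log 2,
    defined via the mixture m = (p + q) / 2."""
    m = 0.5 * (np.asarray(p, dtype=float) + np.asarray(q, dtype=float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(round(js(p, q), 4))
```

Unlike KL, JS is symmetric in its arguments, which is one reason it appears in GAN analysis.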
The CGAN is an improved version of the GAN. The difference between them is that the generator input of a CGAN additionally includes designated label information (also called prior information), and the discriminator of a CGAN takes the generator output plus the label information, or the designated real image plus the label information, as input. Because of this change of input, the content of the CGAN generator's output is strongly related to the label, i.e. roughly controllable, compared with a GAN. The CGAN generates images of high quality at high speed, is easy to train in scenes with low image resolution, and can effectively avoid the mode collapse and non-convergence seen when training GAN-related methods.
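The conditioning mechanism that distinguishes a CGAN from a GAN, appending designated label information to the generator's input, can be sketched as follows; the noise dimension and one-hot label encoding are illustrative assumptions, not details from the patent.

```python
import numpy as np

def conditioned_input(noise, label, num_classes):
    """CGAN-style conditioning: the generator receives the noise vector
    concatenated with a one-hot encoding of the designated label."""
    one_hot = np.zeros(num_classes)
    one_hot[label] = 1.0
    return np.concatenate([noise, one_hot])

z = np.random.default_rng(1).standard_normal(100)  # Gaussian noise input
g_in = conditioned_input(z, label=3, num_classes=10)
print(g_in.shape)  # noise dimension + label dimension
```

The discriminator would receive the same kind of label vector appended to its image input, so that both networks see the condition.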
MatchNet is a typical image similarity discrimination network and has achieved very good results; the CGAN is an important generative model that can generate controllable images according to its input, and its design idea can effectively sidestep the difficulty of designing a loss function in many tasks. Based on this, the embodiment of the disclosure provides a handwriting recognition method that draws on the design ideas of CGAN and MatchNet, analyzes the specifics of the handwriting recognition application scene, and constructs a brand-new handwriting recognition model, so as to distinguish users through handwriting, establish a learning file for each user, and analyze the learning situation, thereby helping users work and learn more efficiently.
The method may be performed by a terminal, server, and/or other processing-capable device. The method provided by the embodiment of the present disclosure may be completed by any one of the above devices, or may be completed by a plurality of devices together, which is not limited in the present disclosure. Taking the schematic diagram of the implementation environment shown in fig. 1 as an example, the implementation environment may be composed of a terminal and a server, and the terminal may communicate with the server.
The terminal may run an application program for online education, which may be an APP or a web-based application. The terminal may be a mobile phone, a tablet computer, a desktop computer, a notebook computer, an intelligent wearable device, and the like, which is not limited in this embodiment.
The server can provide background services for the application program and can comprise a storage server and a processing server. The storage server can be used for storing a database, and the database can store data used by the handwriting recognition method, such as a sample image, an analog handwriting image, a set text handwriting image, a user handwriting image dictionary and the like; the processing server may be used to perform corresponding processing of the application, such as the related processing of the handwriting recognition method. The processing server may perform data interaction with the storage server. Of course, both storage and processing may be performed by one server, and the embodiment of the present disclosure is implemented by one server as an example.
The method of handwriting recognition will be described with reference to the flowchart of the method of handwriting recognition shown in fig. 2 and the schematic diagram of the model structure shown in fig. 3.
In step 201, a server obtains a target image containing handwriting of a target user.
In a possible implementation manner, the user may obtain the target image to be recognized through the terminal, for example, the target image may be obtained by shooting handwritten content through the terminal, or may also be obtained by receiving the target image sent by another terminal, which is not limited in this embodiment. Then, the user can click the uploading option, and then the terminal can upload the target image to the server. At this time, the server may acquire the target image.
Step 202, the server determines a simulated handwriting image of the handwriting of the target user according to the target image.
Wherein, the analog handwriting image can comprise the setting text information. For example, the simulated handwriting image may be an image simulating that the user writes "this is my handwriting", which is the set text. The present embodiment does not limit the specific setting text.
The server can simulate the handwriting image of the set text written by the target user according to the information of the handwriting of the target user in the target image, namely generate the simulated handwriting image.
Optionally, the server may implement the processing of step 202 through a handwriting simulation model, and the corresponding processing may be as follows: the server calls the trained handwriting simulation model; and processing the target image through the handwriting simulation model to obtain a simulated handwriting image of the handwriting of the target user.
In a possible implementation manner, the server may train the handwriting simulation model in advance, and store the trained handwriting simulation model, and a specific training method will be described in another embodiment, which is not described in detail in this embodiment.
The stored handwriting simulation model may be invoked when the server performs the task of handwriting recognition. Then, the target image can be used as the input of the handwriting simulation model, the information of the handwriting of the target user is learned through the handwriting simulation model, the simulation handwriting image is generated according to the information of the handwriting of the target user, and the simulation handwriting image is output.
Illustratively, the handwriting simulation model may be constructed based on a generator of the CGAN. Naturally, the handwriting simulation model may also be constructed based on other model structures, and the common point of these models is that the generated image is controllable, that is, the handwriting image containing the set text may be generated.
Step 203, the server acquires a first set text handwriting image of each user, which is stored in advance.
Wherein the first set text script image may include set text information.
In a possible implementation manner, as shown in the schematic diagram of the setting text script image shown in fig. 4, each user may write a plurality of sets of setting texts, and the terminal may collect the setting text script image of each user and upload the setting text script image to the server. Further, the server may receive and store the set text script image of each user. That is, each user may correspond to a plurality of set text script images. For convenience of introduction, the setting text handwriting image stored by the server for calculating the similarity is referred to as a first setting text handwriting image.
In addition, the server may also scale each first set text script image to a set size.
In order to reduce interference from extra information in the image, the user may write under a specified background condition when writing the set text; for example, the specified background condition may be a white background, with the user writing on uniform white paper. Alternatively, the server may apply uniform background processing to the acquired set text handwriting image: for example, after the user's handwriting strokes are extracted, the image background may be set to a uniform set background. This embodiment is not limited in this respect.
That is, the image size and the image content are the same between each first set text script image stored in the server, and the difference is the user script therein.
Optionally, the first set text handwriting image of each user may be stored in a user handwriting image dictionary, the key information of the user handwriting image dictionary is the first set text handwriting image, and the value information is the corresponding user identifier. When one user corresponds to a plurality of first set text handwriting images, any one of the first set text handwriting images can be used as a reference image and stored in a user handwriting image dictionary. Of course, the server may also adopt other storage structures to store the first setting text script image and the user identifier of each user, for example, the first setting text script image and the user identifier may be in the form of a corresponding relationship table, which is not limited in this embodiment.
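A minimal sketch of the user handwriting image dictionary described above, assuming images are stored as immutable bytes so they can serve as dictionary keys; the image size, key encoding, and user identifier format are hypothetical choices, not specified by the patent.

```python
import numpy as np

def make_key(image):
    """Serialize an image array to immutable bytes so it can be used
    as the key information of the dictionary."""
    return image.tobytes()

# Key: first set text handwriting image; value: corresponding user identifier.
handwriting_dict = {}

ref_img = np.zeros((32, 128), dtype=np.uint8)  # illustrative reference image
handwriting_dict[make_key(ref_img)] = "user_001"

print(handwriting_dict[make_key(ref_img)])
```

A correspondence-relationship table, as the text also allows, would simply invert this: user identifier as key, reference image as value.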
In step 204, the server determines the similarity between the simulated handwriting image and the first set text handwriting image of each user, respectively.
Optionally, the server may implement the processing of step 204 through an image similarity determination model, and the corresponding processing may be as follows: the server calls the trained image similarity determination model; and respectively processing the simulated handwriting image and the first set text handwriting image of each user through the image similarity determination model to obtain the similarity between the simulated handwriting image and the first set text handwriting image of each user.
In a possible implementation manner, the server may train the image similarity determination model in advance, and store the trained image similarity determination model, and a specific training method will be described in another embodiment, which is not described in detail in this embodiment.
The stored image similarity determination model may be invoked when the server performs the handwriting recognition task. The server may pair the simulated handwriting image with the first set text handwriting image of each user, so that each pair of handwriting images comprises the simulated handwriting image and one user's first set text handwriting image. The server can then input each pair into the image similarity determination model to determine the similarity between the images of each pair, i.e. the similarity between the simulated handwriting image and the first set text handwriting image of each user. The value range of the similarity may be [0, 1], which is not limited in this embodiment.
For example, the image similarity determination model may be constructed based on the model structure of MatchNet. The image similarity determination model consists of a sequence feature extraction network and a similarity metric network. The sequence feature extraction network consists of two identical branches; each branch comprises two convolutional layers and two bidirectional LSTM (Long Short-Term Memory) layers, and the input at each time step is obtained by slicing the convolutional feature map of the input image into windows according to the number of time steps. The similarity metric network is composed of 3 fully connected layers, the last of which has 2 nodes. The outputs of the two branches of the sequence feature extraction network are spliced together and then input into the similarity metric network.
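The windowing step described above, slicing the convolutional feature map into per-time-step inputs for the bidirectional LSTM, can be sketched as follows; the feature-map size and step count are illustrative assumptions.

```python
import numpy as np

def to_time_steps(feature_map, num_steps):
    """Slice a (H, W) convolutional feature map along its width into
    num_steps equal windows; each flattened window becomes the input
    at one time step of the recurrent layers."""
    h, w = feature_map.shape
    win = w // num_steps
    return np.stack([feature_map[:, i * win:(i + 1) * win].ravel()
                     for i in range(num_steps)])

fm = np.arange(4 * 32, dtype=float).reshape(4, 32)  # toy feature map
steps = to_time_steps(fm, num_steps=8)
print(steps.shape)  # (time steps, features per step)
```

Slicing along the width matches the left-to-right reading order of a handwriting line, which is presumably why a sequence model is used here.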
MatchNet works well for judging natural images but struggles with text images: natural images are rich in texture, color, and detail, whereas text images contain dense characters against a nearly unchanging background, which makes the similarity discrimination task harder than it is for natural images.
Of course, the image similarity determination model may also be constructed based on other model structures, such as a Siamese network, and the specific model structure of the image similarity determination model is not limited in this embodiment. Alternatively, other algorithms for determining Similarity may also be used to calculate the Similarity, for example, SSIM (Structural Similarity), and this embodiment does not limit the specific algorithm for determining Similarity.
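As an illustration of the SSIM alternative mentioned above, the following sketch computes a single global SSIM value over the whole image; real implementations typically use a sliding window, and the constants follow the commonly used choice C1 = (0.01 * 255)^2 and C2 = (0.03 * 255)^2.

```python
import numpy as np

def ssim(x, y, c1=6.5025, c2=58.5225):
    """Global SSIM between two images (one window covering the whole
    image). Combines luminance, contrast, and structure terms."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.random.default_rng(2).integers(0, 256, size=(16, 16))
print(float(ssim(img, img)))  # identical images
```

SSIM of an image with itself is 1, the maximum; dissimilar images score lower, which is how such a measure would rank the handwriting image pairs.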
In step 205, the server determines the target user to which the handwriting of the target user belongs according to the similarity between the simulated handwriting image and the first set text handwriting image of each user.
Specifically, the processing of step 205 may be as follows: the server acquires a target handwriting image with the similarity larger than a set threshold; and determining the user corresponding to the target handwriting image as the target user to which the target user handwriting belongs.
When the first set text handwriting image with the similarity larger than the set threshold does not exist, the server can also store the simulated handwriting image as the first set text handwriting image of the new user.
In a possible implementation manner, the server may perform the similarity calculation by traversing each first set text handwriting image in the user handwriting image dictionary; when a plurality of first set text handwriting images with similarity greater than the set threshold exist in the dictionary, the first set text handwriting image with the maximum similarity may be acquired.
When a first set text handwriting image with the similarity larger than a set threshold value does not exist in the user handwriting image dictionary, the server can establish a user identifier of a new user; and taking the simulated handwriting image of the new user as a first set text handwriting image as key information, taking the user identification of the new user as corresponding value information, and updating the user handwriting image dictionary.
That is, when the user corresponding to the analog handwriting image is a new user, the server cannot search the first set text handwriting image with the similarity greater than the set threshold value in the user handwriting image dictionary, so that a new user identifier can be established for the new user, and the analog handwriting image of the new user is added to the user handwriting dictionary for storage as the first set text handwriting image. The server can find the corresponding first set text handwriting image in the user handwriting image dictionary next time when recognizing the handwriting image of the new user.
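The lookup-or-register logic described above can be sketched as follows. This is a hedged illustration: hashable stand-ins play the role of stored handwriting images, `similarity` is a stub for the model of step 204, and all names are assumptions.

```python
def identify_or_register(sim_image, handwriting_dict, similarity,
                         threshold, new_user_id):
    """handwriting_dict maps a stored first set text handwriting image
    (any hashable stand-in here) to a user identifier.
    Returns (user_id, handwriting_dict)."""
    best_img, best_sim = None, threshold
    for stored_img in handwriting_dict:        # traverse every stored image
        s = similarity(sim_image, stored_img)
        if s > best_sim:                       # keep the maximum above threshold
            best_img, best_sim = stored_img, s
    if best_img is not None:
        return handwriting_dict[best_img], handwriting_dict
    handwriting_dict[sim_image] = new_user_id  # no match: register a new user
    return new_user_id, handwriting_dict
```

With a stub similarity that returns 1.0 for identical stand-ins and a low value otherwise, a known image resolves to its stored user, while an unseen image is added to the dictionary under a fresh identifier.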
The embodiment of the disclosure does not perform user recognition directly on the target image containing the target user handwriting; instead, it converts the target image into a simulated handwriting image and recognizes that, with the first set text handwriting image serving as each user's reference image. If user recognition were performed directly on the target image, new users could not be recognized effectively and the system could not be extended. By recognizing users through the simulated handwriting image, the handwriting recognition method provided by this embodiment needs no update to the similarity algorithm when a new user is added; the new user's simulated handwriting image is simply added to the user handwriting image dictionary as a first set text handwriting image. New users are therefore easy to add, and the method is robust.
In addition, the handwriting recognition method provided by this embodiment has a short processing pipeline, which effectively speeds up processing, avoids accumulated errors, and improves practicability.
In the embodiment of the disclosure, the server may simulate, from the target user handwriting, the handwriting image the target user would produce when writing the set text, and then identify the target user according to the similarity between the simulated handwriting image and each user's pre-stored first set text handwriting image. Because the content of every user's first set text handwriting image is the same set text, the images differ only in handwriting style, which reduces interference from differing image content; and because the information in the set text is fixed and limited, processing is faster and recognition efficiency is improved.
The handwriting simulation model used in the above embodiments may be a machine learning model, which may be trained before being used in the above processing. The training method is described below with reference to the flowchart of the handwriting simulation model training method shown in fig. 5.
Step 501, the server builds an initial handwriting generating model.
The handwriting generating module may be used to simulate a handwriting image containing the set text based on an input image, and the handwriting distinguishing module may be used to distinguish whether its input image is a second set text handwriting image written by a user or a simulated handwriting image output by the handwriting generating module. Like the first set text handwriting image, the second set text handwriting image also contains the set text information. For convenience, the set text handwriting image written by the user is hereinafter referred to as the second set text handwriting image.
In one possible embodiment, the handwriting simulation model may be the handwriting generating module of a handwriting generating model, such as the generator of a CGAN (Conditional Generative Adversarial Network). Training the handwriting generating model therefore also trains the handwriting simulation model. On this basis, the server may construct the handwriting generating model according to the model structure and model parameters set by technical personnel.
Step 502, the server obtains a first training sample.
The first training sample may include sample images of a plurality of users including user handwriting, and a second set text handwriting image written by the plurality of users.
In one possible implementation, the database of the server may store, for each user, images containing that user's handwriting (e.g., answer images) and a second set text handwriting image written by that user. Before training the handwriting generating model, the server may obtain the images containing user handwriting as sample images, together with the second set text handwriting images, as training samples for the handwriting generating model. For ease of description, these are referred to herein as the first training sample.
Step 503, the server trains the initial handwriting generating model according to the first training sample to obtain a trained handwriting generating model.
The input image of the handwriting generating module may be a sample image in the first training sample, and its output image may be the simulated handwriting image corresponding to that input. The input image of the handwriting distinguishing module may be a positive example sample image or a negative example sample image: a positive example sample image comprises a second set text handwriting image together with the corresponding user's sample image, and a negative example sample image comprises a simulated handwriting image output by the handwriting generating module together with the corresponding user's sample image.
In a possible implementation manner, the server may take each user's sample image as the input of the handwriting generating module, with that user's second set text handwriting image as the label. After training, the handwriting generating module can fully exploit the handwriting information in the input image and output a simulated handwriting image containing the set text for any input image. Because the input of the handwriting distinguishing module includes the sample image fed to the handwriting generating module, the simulated handwriting image output by the generating module is strongly correlated with the set text handwriting image.
By way of example, the following describes a specific training process, taking the case where the handwriting generating model is a CGAN. The CGAN consists of a generator and a discriminator. The generator comprises 8 convolutional layers and 8 deconvolution layers; the discriminator comprises 5 convolutional layers and 2 fully connected layers, and judges whether its input is a positive example sample image or a negative example sample image.
The server may input a user's sample image into the generator: the convolutional layers perform convolution on the sample image and output the corresponding image features, the deconvolution layers perform deconvolution on those features, the output of each deconvolution layer is added point by point and channel by channel to the output of the convolutional layer whose feature map has the matching size, and the current simulated handwriting image is output. The input sample image is larger than the output simulated handwriting image, and a set ratio is maintained between their sizes.
Furthermore, the server may serially superpose the simulated handwriting image output by the generator with the input sample image to form a negative example sample image, and serially superpose the user's corresponding second set text handwriting image with the input sample image to form a positive example sample image. Because the simulated handwriting image and the sample image differ in size, the simulated handwriting image may be expanded to the size of the sample image, for example by copying the original simulated handwriting image. Similarly, when the second set text handwriting image differs in size from the sample image, it may be expanded in the same way; details are not repeated here.
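The serial superposition and size expansion above might look like the following NumPy sketch. Tiling by copying is one possible reading of "expand by copying the original image"; the function names and the channel-last layout are illustrative assumptions.

```python
import numpy as np

def tile_to(img, target_h, target_w):
    """Expand an (H, W, C) image to (target_h, target_w, C) by copying it."""
    h, w, _ = img.shape
    reps_h = -(-target_h // h)  # ceiling division
    reps_w = -(-target_w // w)
    tiled = np.tile(img, (reps_h, reps_w, 1))
    return tiled[:target_h, :target_w, :]

def stack_for_discriminator(sample_img, other_img):
    """Serially superpose (concatenate along the channel axis) the sample
    image with either a simulated handwriting image (negative example) or
    a second set text handwriting image (positive example)."""
    h, w, _ = sample_img.shape
    expanded = tile_to(other_img, h, w)  # bring both images to the same size
    return np.concatenate([sample_img, expanded], axis=2)
```

An 8x8x3 sample image stacked with a 4x4x3 handwriting image yields an 8x8x6 discriminator input.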
The server may input a positive example sample image or a negative example sample image into the discriminator, perform convolution through the convolutional layers, and map the result to a one-dimensional output through the fully connected layers, thereby outputting the probability that the input is a positive example sample image (or a negative example sample image).
The server may perform the above processing on the data of each user, which is not described herein again.
For the generator, the server may determine an adversarial loss and a style loss between the simulated handwriting image and the corresponding second set text handwriting image. The style loss may be obtained by inputting the simulated handwriting image and the second set text handwriting image into a pre-trained VGG (Visual Geometry Group) network to obtain two sets of feature maps, taking the difference between the two sets channel by channel and point by point, and then summing and averaging. The server may determine adjustment parameters for the generator based on these losses and adjust the generator's model parameters accordingly.
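The style loss above can be sketched as follows. The absolute difference is one plausible reading of "performing a difference ... then summing and averaging"; feature maps here are plain arrays, whereas a real pipeline would take them from a pre-trained VGG network.

```python
import numpy as np

def style_loss(feats_a, feats_b):
    """feats_a, feats_b: lists of (C, H, W) feature maps taken from the
    same network layers for the simulated handwriting image and the
    second set text handwriting image. Returns the average per-point
    absolute difference over all maps."""
    total, count = 0.0, 0
    for fa, fb in zip(feats_a, feats_b):
        diff = np.abs(fa - fb)  # point-by-point, channel-by-channel difference
        total += diff.sum()     # sum ...
        count += diff.size
    return total / count        # ... and average
```

Identical feature maps give a loss of zero; a constant offset of 2.0 gives a loss of 2.0.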
For the discriminator, the server may determine an adversarial loss between the output probability and the label, where the label of a positive example sample image may be 1 and the label of a negative example sample image may be 0. The server may determine adjustment parameters for the discriminator based on this loss and adjust the discriminator's model parameters accordingly.
When the training end condition is reached, the current handwriting generating model is taken as the trained handwriting generating model. The training end condition may be that the number of training iterations reaches a first threshold, and/or the model accuracy reaches a second threshold, and/or the loss function falls below a third threshold; the first, second, and third thresholds may be set empirically. This embodiment does not limit the specific training end condition.
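The end condition above is a simple disjunction of three thresholds, sketched below; the threshold values are illustrative, not prescribed by the embodiment.

```python
def should_stop(epoch, accuracy, loss,
                max_epochs=100, target_acc=0.95, loss_floor=0.01):
    """Stop when the iteration count reaches the first threshold, and/or
    the accuracy reaches the second, and/or the loss falls below the third."""
    return epoch >= max_epochs or accuracy >= target_acc or loss < loss_floor
```

Any one condition alone suffices to end training.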
And step 504, the server constructs a handwriting simulation model based on a handwriting generation module of the trained handwriting generation model.
In a possible implementation manner, the server may delete the handwriting distinguishing module from the handwriting generating model and keep the handwriting generating module as the handwriting simulation model; alternatively, it may obtain the model parameters of the handwriting generating module and construct a new handwriting simulation model from those parameters. This embodiment does not limit the choice.
In the embodiment of the disclosure, the server may train the handwriting generating model on sample images containing user handwriting and on second set text handwriting images, so that the simulated handwriting image output by the handwriting generating module is strongly correlated with the second set text handwriting image. The handwriting simulation model constructed from the handwriting generating module can then generate simulated handwriting images and be applied in the handwriting recognition method.
Likewise, the image similarity determination model used in the above embodiments may be a machine learning model, which may be trained before being used in the above processing. The training method is described below with reference to the flowchart of the image similarity determination model training method shown in fig. 6.
Step 601, the server builds an initial image similarity determination model.
In one possible implementation, the server may construct the image similarity determination model according to a model structure and model parameters set by a technician.
At step 602, the server obtains a second training sample.
The second training sample may comprise a plurality of positive example samples and a plurality of negative example samples: a positive example sample comprises a pair of second set text handwriting images written by the same user, and a negative example sample comprises a pair of second set text handwriting images written by different users.
In one possible implementation, the server may obtain each user's second set text handwriting images from the database and randomly pair them. If a pair of second set text handwriting images was written by the same user, it is taken as a positive example sample and labeled 1; if the pair was written by different users, it is taken as a negative example sample and labeled 0. The server may collect a plurality of positive example samples and negative example samples, together with their labels, as the second training sample.
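The pairing scheme of step 602 can be sketched as follows. Strings stand in for handwriting images; label 1 marks a same-user pair, label 0 a different-user pair. The exhaustive pairing via itertools.combinations is an illustrative choice; the embodiment only requires random pairing.

```python
import itertools

def build_pairs(user_images):
    """user_images: dict mapping a user identifier to the list of that
    user's second set text handwriting images.
    Returns (img_a, img_b, label) triples."""
    flat = [(uid, img) for uid, imgs in user_images.items() for img in imgs]
    samples = []
    for (ua, ia), (ub, ib) in itertools.combinations(flat, 2):
        samples.append((ia, ib, 1 if ua == ub else 0))  # same user -> 1
    return samples
```

Two images from user u1 form a positive pair; any pair that crosses users forms a negative pair.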
Step 603, the server trains the initial image similarity determination model according to the second training sample to obtain a trained image similarity determination model.
In a possible implementation manner, the server may input each pair of second set text handwriting images into the image similarity determination model; the specific processing is the same as that of step 204 and is not repeated here. The server may then calculate the loss between the predicted similarity and the corresponding label, and adjust the model parameters of the initial image similarity determination model based on that loss. For example, when the image similarity determination model is MatchNet, the loss function used may be a two-class cross-entropy loss function.
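The two-class (binary) cross-entropy loss mentioned for MatchNet is standard; a minimal sketch between a predicted similarity and a 0/1 label:

```python
import math

def bce_loss(p, label, eps=1e-12):
    """Binary cross-entropy between a predicted similarity p in (0, 1)
    and a 0/1 label."""
    p = min(max(p, eps), 1 - eps)  # clamp for numerical stability
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))
```

A confident correct prediction yields a near-zero loss, while an uninformative prediction of 0.5 against label 1 yields about 0.693 (-ln 0.5).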
When the training end condition is reached, the current image similarity determination model is taken as the trained image similarity determination model. The training end condition is the same as that of the handwriting generating model and is not repeated here.
In the embodiment of the disclosure, the server may train the image similarity determination model on the second set text handwriting images written by each user, so that the model can determine the similarity between handwriting images containing the set text and be applied in the handwriting recognition method.
The embodiment of the disclosure provides a handwriting recognition device, which is used for realizing the handwriting recognition method. As shown in FIG. 7, a schematic block diagram of a handwriting recognition apparatus 700 includes: an obtaining module 701, a simulating module 702, and a determining module 703.
An obtaining module 701, configured to obtain a target image including a target user handwriting;
a simulation module 702, configured to determine, according to the target image, a simulated handwriting image of the handwriting of the target user, where the simulated handwriting image includes set text information;
the obtaining module 701 is further configured to obtain a first set text handwriting image of each user, where the first set text handwriting image includes the set text information;
a determining module 703, configured to determine similarity between the analog handwriting image and the first set text handwriting image of each user respectively; and determining a target user to which the handwriting of the target user belongs according to the similarity between the simulated handwriting image and the first set text handwriting image of each user.
Optionally, the determining module 703 is configured to:
acquiring a target handwriting image with the similarity larger than a set threshold;
determining a user corresponding to the target handwriting image as a target user to which the target user handwriting belongs;
and when the first set text handwriting image with the similarity larger than the set threshold value does not exist, storing the simulated handwriting image as the first set text handwriting image of the new user.
Optionally, the first set text handwriting image of each user is stored in a user handwriting image dictionary, the key information of the user handwriting image dictionary is the first set text handwriting image, and the value information is the user identifier.
Optionally, the simulation module 702 is configured to:
calling the trained handwriting simulation model;
and processing the target image through the handwriting simulation model to obtain a simulated handwriting image of the handwriting of the target user.
Optionally, as shown in the schematic block diagram of the handwriting recognition apparatus in fig. 8, the apparatus further includes a first training module 704, where the first training module 704 is configured to:
constructing an initial handwriting generating model, wherein the handwriting generating model comprises a handwriting generating module and a handwriting distinguishing module, the handwriting generating module is used for simulating a handwriting image containing a set text based on an input image, and the handwriting distinguishing module is used for distinguishing the input image of the handwriting distinguishing module as a second set text handwriting image written by a user or a simulated handwriting image output by the handwriting generating module;
acquiring a first training sample, wherein the first training sample comprises sample images containing user handwriting of a plurality of users and second set text handwriting images written by the plurality of users;
training the initial handwriting generating model according to the first training sample to obtain a trained handwriting generating model;
and constructing a handwriting simulation model based on the handwriting generation module of the trained handwriting generation model.
Optionally, an input image of the handwriting generating module is a sample image in the first training sample, and an output image is a simulated handwriting image corresponding to the input image;
the input image of the handwriting distinguishing module comprises a positive example sample image and a negative example sample image, the positive example sample image comprises the second set text handwriting image and the sample image of the corresponding user, and the negative example sample image comprises the simulated handwriting image output by the handwriting generating module and the sample image of the corresponding user.
Optionally, the determining module 703 is configured to:
calling the trained image similarity determination model;
and respectively processing the simulated handwriting image and the first set text handwriting image of each user through the image similarity determination model to obtain the similarity between the simulated handwriting image and the first set text handwriting image of each user.
Optionally, as shown in the schematic block diagram of the handwriting recognition apparatus in fig. 9, the apparatus further includes a second training module 705, where the second training module 705 is configured to:
constructing an initial image similarity determination model;
acquiring a second training sample, wherein the second training sample comprises a plurality of positive example samples and a plurality of negative example samples, the positive example samples comprise a pair of second set text handwriting images which belong to the same user writing, and the negative example samples comprise a pair of second set text handwriting images which belong to different user writing;
and training the initial image similarity determination model according to the second training sample to obtain a trained image similarity determination model.
In the embodiment of the disclosure, the handwriting image of the target user when writing the set text may be simulated from the target user handwriting, and the target user identified according to the similarity between the simulated handwriting image and each user's pre-stored first set text handwriting image. Because the content of every user's first set text handwriting image is the same set text, the images differ only in handwriting style, which reduces interference from differing image content; and because the information in the set text is fixed and limited, processing is faster and recognition efficiency is improved.
An exemplary embodiment of the present disclosure also provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor, the computer program, when executed by the at least one processor, is for causing the electronic device to perform a method according to an embodiment of the disclosure.
The disclosed exemplary embodiments also provide a non-transitory computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, is adapted to cause the computer to perform a method according to an embodiment of the present disclosure.
The exemplary embodiments of the present disclosure also provide a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is adapted to cause the computer to perform a method according to an embodiment of the present disclosure.
Referring to fig. 10, a block diagram of an electronic device 1000 will now be described. The electronic device 1000 may be the server or a client of the present disclosure, and is an example of a hardware device that may be applied to aspects of the present disclosure. The electronic device is intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the electronic device 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the device 1000 can also be stored. The calculation unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
A number of components in the electronic device 1000 are connected to the I/O interface 1005, including: an input unit 1006, an output unit 1007, a storage unit 1008, and a communication unit 1009. The input unit 1006 may be any type of device capable of inputting information to the electronic device 1000; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device. The output unit 1007 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 1008 may include, but is not limited to, a magnetic disk and an optical disk. The communication unit 1009 allows the electronic device 1000 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers, and/or chipsets, such as Bluetooth(TM) devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
Computing unit 1001 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 1001 executes the respective methods and processes described above. For example, in some embodiments, the handwriting recognition method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto electronic device 1000 via ROM 1002 and/or communications unit 1009. In some embodiments, the computing unit 1001 may be configured to perform the handwriting recognition method in any other suitable manner (e.g., by means of firmware).
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As used in this disclosure, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

Claims (10)

1. A method of handwriting recognition, the method comprising:
acquiring a target image containing handwriting of a target user, wherein the target image is obtained based on shooting;
calling the trained handwriting simulation model;
processing the target image through the handwriting simulation model to obtain a simulated handwriting image of the handwriting of the target user, wherein the simulated handwriting image comprises set text information;
acquiring a first set text handwriting image of each user, wherein the first set text handwriting image comprises the set text information;
respectively determining the similarity between the simulated handwriting image and the first set text handwriting image of each user through a trained image similarity determination model;
and determining a target user to which the handwriting of the target user belongs according to the similarity between the simulated handwriting image and the first set text handwriting image of each user.
2. The handwriting recognition method according to claim 1, wherein the determining the target user to which the target user handwriting belongs according to the similarity between the simulated handwriting image and the first set text handwriting image of each user comprises:
acquiring a target handwriting image with the similarity larger than a set threshold;
determining a user corresponding to the target handwriting image as a target user to which the target user handwriting belongs;
the method further comprises the following steps:
and when the first set text handwriting image with the similarity larger than the set threshold value does not exist, storing the simulated handwriting image as the first set text handwriting image of the new user.
3. The handwriting recognition method according to claim 1, wherein the first set text script image of each user is stored in a user script image dictionary, the key information of the user script image dictionary is the first set text script image, and the value information is the corresponding user identification.
4. A method of handwriting recognition according to claim 1 and wherein said method of training a handwriting simulation model comprises:
constructing an initial handwriting generating model, wherein the handwriting generating model comprises a handwriting generating module and a handwriting distinguishing module, the handwriting generating module is used for simulating a handwriting image containing a set text based on an input image, and the handwriting distinguishing module is used for distinguishing the input image of the handwriting distinguishing module as a second set text handwriting image written by a user or a simulated handwriting image output by the handwriting generating module;
acquiring a first training sample, wherein the first training sample comprises sample images containing user handwriting of a plurality of users and second set text handwriting images written by the plurality of users;
training the initial handwriting generating model according to the first training sample to obtain a trained handwriting generating model;
and constructing a handwriting simulation model based on the handwriting generation module of the trained handwriting generation model.
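The generator/discriminator arrangement of claim 4 resembles a conditional adversarial setup. The structural sketch below shows the roles of the two modules and how positive/negative examples feed the loss; the placeholder losses and update-free scoring are assumptions for illustration, not the patented training procedure.

```python
class HandwritingGenerator:
    """Would render the set text in the style of the input sample image."""
    def generate(self, sample_image):
        return ("simulated", sample_image)

class HandwritingDiscriminator:
    """Would score how likely an image is genuinely written by the user."""
    def score(self, image, sample_image):
        return 0.0 if isinstance(image, tuple) and image[0] == "simulated" else 1.0

def train_step(gen, disc, sample_image, real_set_text_image):
    fake = gen.generate(sample_image)
    d_real = disc.score(real_set_text_image, sample_image)  # positive example
    d_fake = disc.score(fake, sample_image)                 # negative example
    d_loss = (1 - d_real) + d_fake  # discriminator wants real -> 1, fake -> 0
    g_loss = 1 - d_fake             # generator wants its fake scored as real
    return d_loss, g_loss
```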
5. The handwriting recognition method according to claim 4, wherein the input image of the handwriting generation module is a sample image in the first training sample, and the output image is a simulated handwriting image corresponding to the input image;
the input image of the handwriting distinguishing module comprises a positive example sample image and a negative example sample image, wherein the positive example sample image comprises the second set text handwriting image and the sample image corresponding to the user, and the negative example sample image comprises the simulated handwriting image output by the handwriting generating module and the sample image corresponding to the user.
6. The handwriting recognition method according to claim 1, wherein the determining the similarity between the simulated handwriting image and the first set text handwriting image of each user respectively through the trained image similarity determination model comprises:
calling the trained image similarity determination model;
and respectively processing the simulated handwriting image and the first set text handwriting image of each user through the image similarity determination model to obtain the similarity between the simulated handwriting image and the first set text handwriting image of each user.
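Claim 6's per-user scoring can be sketched with a stand-in similarity function. A cosine similarity over feature vectors is used here purely as an illustrative assumption; the claim's trained model would replace it.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors (stand-in scorer)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def score_all(simulated_feat, gallery_feats):
    # Similarity between the simulated image and each user's stored image.
    return {user: cosine(simulated_feat, f) for user, f in gallery_feats.items()}
```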
7. The handwriting recognition method according to claim 6, wherein the training method of the image similarity determination model comprises:
constructing an initial image similarity determination model;
acquiring a second training sample, wherein the second training sample comprises a plurality of positive example samples and a plurality of negative example samples, the positive example samples each comprise a pair of second set text handwriting images written by the same user, and the negative example samples each comprise a pair of second set text handwriting images written by different users;
and training the initial image similarity determination model according to the second training sample to obtain a trained image similarity determination model.
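Constructing the positive/negative pairs described in claim 7 could look like the following; the input layout (a mapping from user to that user's set-text images) and the 1/0 labels are illustrative assumptions.

```python
import itertools

def build_pairs(images_by_user):
    """Build (image_a, image_b, label) pairs: 1 = same writer, 0 = different."""
    positives, negatives = [], []
    # Positive examples: two set-text images written by the same user.
    for user, imgs in images_by_user.items():
        positives.extend((a, b, 1) for a, b in itertools.combinations(imgs, 2))
    # Negative examples: images written by different users.
    for u1, u2 in itertools.combinations(list(images_by_user), 2):
        for a in images_by_user[u1]:
            for b in images_by_user[u2]:
                negatives.append((a, b, 0))
    return positives + negatives
```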
8. A handwriting recognition apparatus, comprising:
an acquisition module, configured to acquire a target image containing target user handwriting, wherein the target image is obtained based on shooting;
a simulation module, configured to call the trained handwriting simulation model and process the target image through the handwriting simulation model to obtain a simulated handwriting image of the target user handwriting, wherein the simulated handwriting image comprises set text information;
the acquisition module being further configured to acquire a pre-stored first set text handwriting image of each user, wherein the first set text handwriting image comprises the set text information;
a determining module, configured to determine the similarity between the simulated handwriting image and the first set text handwriting image of each user respectively through the trained image similarity determination model, and to determine a target user to which the target user handwriting belongs according to the similarity between the simulated handwriting image and the first set text handwriting image of each user.
9. An electronic device, comprising:
a processor; and
a memory for storing a program,
wherein the program comprises instructions which, when executed by the processor, cause the processor to carry out the method according to any one of claims 1-7.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-7.
CN202111173289.3A 2021-10-09 2021-10-09 Handwriting recognition method and device Active CN113610064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111173289.3A CN113610064B (en) 2021-10-09 2021-10-09 Handwriting recognition method and device

Publications (2)

Publication Number Publication Date
CN113610064A CN113610064A (en) 2021-11-05
CN113610064B (en) 2022-02-08

Family

ID=78310863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111173289.3A Active CN113610064B (en) 2021-10-09 2021-10-09 Handwriting recognition method and device

Country Status (1)

Country Link
CN (1) CN113610064B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299000A (en) * 2014-10-09 2015-01-21 南通大学 Handwriting recognition method based on local fragment distribution characteristics
CN106778151A (en) * 2016-11-14 2017-05-31 北京爱知之星科技股份有限公司 Method for identifying ID and device based on person's handwriting
CN106803082A (en) * 2017-01-23 2017-06-06 重庆邮电大学 A kind of online handwriting recognition methods based on conditional generation confrontation network
CN108345397A (en) * 2018-03-02 2018-07-31 上海麦田映像信息技术有限公司 A kind of copying method and system
CN113095158A (en) * 2021-03-23 2021-07-09 西安深信科创信息技术有限公司 Handwriting generation method and device based on countermeasure generation network
CN113378609A (en) * 2020-03-10 2021-09-10 中国移动通信集团辽宁有限公司 Method and device for identifying agent signature

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103258157B (en) * 2013-04-18 2016-09-07 武汉汉德瑞庭科技有限公司 A kind of online handwriting authentication method based on finger information and system

Also Published As

Publication number Publication date
CN113610064A (en) 2021-11-05

Similar Documents

Publication Publication Date Title
US11144800B2 (en) Image disambiguation method and apparatus, storage medium, and electronic device
CN115063875B (en) Model training method, image processing method and device and electronic equipment
CN113254654B (en) Model training method, text recognition method, device, equipment and medium
CN111414946B (en) Artificial intelligence-based medical image noise data identification method and related device
CN114120299B (en) Information acquisition method, device, storage medium and equipment
CN112417158A (en) Training method, classification method, device and equipment of text data classification model
CN113254491A (en) Information recommendation method and device, computer equipment and storage medium
CN115100659B (en) Text recognition method, device, electronic equipment and storage medium
CN114639096B (en) Text recognition method, device, electronic equipment and storage medium
CN113688955B (en) Text recognition method, device, equipment and medium
CN114548218A (en) Image matching method, device, storage medium and electronic device
CN113610064B (en) Handwriting recognition method and device
CN113837157B (en) Topic type identification method, system and storage medium
CN115984975A (en) Method, system, device and medium for verifying electronic signature based on graph convolution neural network
CN115937660A (en) Verification code identification method and device
CN114758331A (en) Text recognition method and device, electronic equipment and storage medium
CN113988316A (en) Method and device for training machine learning model
CN114186039A (en) Visual question answering method and device and electronic equipment
CN113920291A (en) Error correction method and device based on picture recognition result, electronic equipment and medium
CN113722466B (en) Correction model training method, correction method, device, electronic equipment and medium
CN113627399B (en) Topic processing method, device, equipment and storage medium
CN116798048A (en) Text recognition method, device, equipment and storage medium
CN113610065A (en) Handwriting recognition method and device
CN115761717A (en) Method and device for identifying topic image, electronic equipment and storage medium
CN115376140A (en) Image processing method, apparatus, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant