CN110825875B - Text entity type identification method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110825875B
Authority
CN (China)
Prior art keywords
participle, entity type, synonym, layer, text
Legal status
Active
Application number
CN201911060988.XA
Other languages
Chinese (zh)
Other versions
CN110825875A (en)
Inventor
詹文超
沙晶
付瑞吉
王士进
魏思
Current Assignee
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Application filed by iFlytek Co Ltd
Priority to CN201911060988.XA
Publication of CN110825875A
Application granted
Publication of CN110825875B

Classifications

    • G06F16/35: Information retrieval of unstructured textual data; Clustering; Classification
    • G06F16/367: Creation of semantic tools; Ontology


Abstract

The embodiment of the invention provides a text entity type identification method and apparatus, an electronic device, and a storage medium. The method comprises: determining a text to be recognized; and inputting each participle in the text to be recognized into an entity type recognition model to obtain the entity type recognition result the model outputs for each participle. The entity type recognition model is constructed based on a synonym interaction attention mechanism and is trained on each sample participle in a sample text, the sample entity type identifier of each sample participle, and a synonym set dictionary. Because each participle of the text to be recognized is input into a model built on the synonym interaction attention mechanism, the recognition difficulty caused by the variability of text entity expression modes is overcome, and the accuracy and reliability of text entity type identification are improved.

Description

Text entity type identification method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of natural language processing, in particular to a text entity type identification method and device, electronic equipment and a storage medium.
Background
Today, artificial intelligence and big data technology play an important role in education. In AI-assisted education, a large amount of test question text data is generated for student examinations and exercises, and still more answer text data is generated after students respond; the data volume is huge and the information structure is complex. Identifying the text entity types in this text data is a prerequisite for subsequent applications such as intelligent homework correction, question difficulty prediction, and question knowledge point prediction.
Existing text entity type recognition usually targets person names, place names, organization names, or other vocabulary with specific meanings in Chinese and English text data, where the expression mode of the entity type to be recognized is generally fixed. In test question text data, however, especially mathematical text data, the same type of text entity may have multiple expression modes, and this variability makes text entity type identification difficult.
Disclosure of Invention
The embodiment of the invention provides a text entity type identification method and device, electronic equipment and a storage medium, which are used for solving the problem of low text entity type identification accuracy caused by variable expression modes of text entities.
In a first aspect, an embodiment of the present invention provides a text entity type identification method, including:
determining a text to be recognized;
inputting each word in the text to be recognized into an entity type recognition model to obtain an entity type recognition result corresponding to each word output by the entity type recognition model;
the entity type recognition model is constructed based on a synonym interaction attention mechanism and is obtained through training of each sample word in a sample text, sample entity type identification of each sample word and a synonym set dictionary.
Preferably, the entity type recognition model comprises an input layer, a synonym interaction attention layer and a classification output layer;
correspondingly, the inputting each word segmentation in the text to be recognized into an entity type recognition model to obtain an entity type recognition result corresponding to each word segmentation output by the entity type recognition model specifically includes:
inputting each word segmentation in the text to be recognized into the input layer to obtain a word vector of each word segmentation output by the input layer;
inputting the word vector of each participle into the synonym interactive attention layer to obtain an enhanced word vector of each participle output by the synonym interactive attention layer;
and inputting the enhanced word vector of each word segmentation into the classification output layer to obtain an entity type identification result of each word segmentation output by the classification output layer.
Preferably, before the inputting the enhanced word vector of each word segmentation into the classification output layer to obtain the entity type recognition result of each word segmentation output by the classification output layer, the method further includes:
and aiming at any participle, updating the enhanced word vector of the participle into a spliced vector of the enhanced word vector of the participle and the character feature vector of the participle.
Preferably, the entity type identification model further comprises a formula semantic prediction layer;
correspondingly, after the inputting the word vector of each participle into the synonym interaction attention layer to obtain the enhanced word vector of each participle output by the synonym interaction attention layer, the method further comprises:
determining a word vector of a formula in the text to be recognized based on the enhanced word vector of each word segmentation;
inputting the word vector of the formula into the formula semantic prediction layer to obtain formula semantics output by the formula semantic prediction layer;
correspondingly, the entity type recognition model is obtained by training based on each sample word segmentation and sample entity type identification thereof in the sample text, a synonym set dictionary and sample formula semantics of the sample formula in the sample text.
Preferably, the inputting the word vector of each participle into the synonym interaction attention layer to obtain the enhanced word vector of each participle output by the synonym interaction attention layer specifically includes:
based on the synonym set dictionary, selecting a word vector of a synonym of any participle from word vectors of each participle, and constructing a synonym set of any participle; the synonym set comprises a word vector for each synonym;
determining the similarity of any participle and any synonym in the synonym set based on the word vector of any participle and the word vector of any synonym in the synonym set;
and outputting the enhanced word vector of any participle based on the similarity between any participle and each synonym in the synonym set.
Preferably, the outputting the enhanced word vector of any participle based on the similarity between any participle and each synonym in the synonym set includes:
determining the weight corresponding to the word vector of each synonym based on the similarity between any participle and each synonym in the synonym set;
determining an attention word vector based on the word vector of each synonym and the corresponding weight of the word vector;
determining an enhanced word vector for the any segmented word based on the word vector for the any segmented word and the attention word vector.
Preferably, the classification output layer comprises a context layer and a classification layer;
correspondingly, the inputting the enhanced word vector of each word segmentation into the classification output layer to obtain the entity type recognition result of each word segmentation output by the classification output layer specifically includes:
inputting the enhanced word vector of each participle into the context layer to obtain a sequence vector of each participle output by the context layer;
and inputting the sequence vector of each word segmentation into the classification layer to obtain an entity type identification result of each word segmentation output by the classification layer.
Preferably, the classification layer comprises a softmax output layer and a conditional random field (CRF) layer;
correspondingly, the inputting the sequence vector of each word segmentation into the classification layer to obtain the entity type recognition result of each word segmentation output by the classification layer specifically includes:
inputting the sequence vector of each participle into the softmax output layer to obtain a candidate recognition result of each participle output by the softmax output layer;
and inputting the candidate recognition result of each participle into the conditional random field (CRF) layer to obtain the entity type identification result of each participle output by the CRF layer.
Preferably, the text to be recognized is a mathematical text.
In a second aspect, an embodiment of the present invention provides a text entity type identification apparatus, including:
the text determining unit is used for determining a text to be recognized;
the entity identification unit is used for inputting each word in the text to be identified into an entity type identification model to obtain an entity type identification result corresponding to each word output by the entity type identification model;
the entity type recognition model is constructed based on a synonym interaction attention mechanism and is obtained through training of each sample word in a sample text, sample entity type identification of each sample word and a synonym set dictionary.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a bus, where the processor, the communication interface, and the memory communicate with one another through the bus, and the processor may call logic instructions in the memory to perform the steps of the method provided in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the method as provided in the first aspect.
According to the text entity type identification method and apparatus, electronic device, and storage medium provided by the embodiments of the invention, each participle of the text to be identified is input into an entity type identification model constructed based on the synonym interaction attention mechanism, so that the recognition difficulty caused by the variability of text entity expression modes is overcome, and the accuracy and reliability of text entity type identification are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a text entity type identification method according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a method for predicting an entity type recognition model according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for implementing a synonym interaction attention mechanism according to an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating an entity type classification output method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an entity type identification model according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a text entity type recognition apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In AI-assisted education, a large amount of test question text data is generated and applied to student examinations and exercises, which in turn produce still more answer text data. To teach each student according to their aptitude, all of a student's historical answer text data and test question text data must be analyzed to obtain a detailed picture of their learning. At present, test question text data and answer text data are mainly produced by teachers, subject editors, or students, and the information structure is complex. Processing, analyzing, and mining this text data is a prerequisite for subsequent applications such as intelligent homework correction, question difficulty prediction, and question knowledge point prediction.
Taking the mathematics subject as an example, in its test question text data and answer text data the same type of text entity may have multiple expression modes. A line segment can be represented by a single lowercase letter, such as line segment a, or by the uppercase letters of its two end points, such as line segment AB; an angle can be represented by three uppercase letters, such as ∠AOC where O is the vertex, by one uppercase letter, such as ∠O, or by a number or a Greek letter, such as ∠β. This diversity of text entity expression modes makes text entity type identification difficult.
In view of the above problems, an embodiment of the present invention provides a text entity type identification method, which may be used for text entity type identification of test question text data and answer text data of a mathematical subject, text entity type identification of test question text data and answer text data of other subjects, such as a physical subject, and text entity type identification of thesis sorting and classification.
Fig. 1 is a schematic flowchart of a text entity type identification method provided in an embodiment of the present invention, as shown in fig. 1, the method includes:
step 110, determining a text to be recognized.
Here, the text to be recognized is a text on which text entity type recognition needs to be performed. The text to be recognized may be entered manually, or obtained by recognizing a picture containing it with OCR (Optical Character Recognition) technology; the embodiment of the present invention does not specifically limit this.
Step 120, inputting each participle in the text to be recognized into the entity type recognition model to obtain an entity type recognition result corresponding to each participle output by the entity type recognition model; the entity type recognition model is constructed based on a synonym interaction attention mechanism and is obtained through training of each sample word segmentation and sample entity type identification in a sample text and a synonym set dictionary.
Specifically, in deep learning, the attention mechanism works much like human vision: attention is focused on the important points among a mass of information, key information is selected, and unimportant information is ignored. In the embodiment of the invention, to address the variability of text entity expression modes, different expression modes of the same type of text entity are defined as synonyms, and a synonym interaction attention mechanism is established. Under this mechanism, for any participle, attention is focused on the information shared by the participle and its synonyms while the information that differs between them is weakened, so that the characteristic information of the text entity corresponding to the participle is highlighted and the recognition difficulty caused by variable expression modes is overcome.
The entity type recognition model is used for predicting the text entity type of each participle in the input text to be recognized under the synonym interaction attention mechanism and outputting the entity type recognition result of each participle. Here, the result of the entity type recognition of any word is the entity type corresponding to the word, or the probability that the word corresponds to each entity type.
In addition, before step 120 is executed, the entity type recognition model may be obtained through pre-training, and specifically, the entity type recognition model may be obtained through training in the following manner:
firstly, a large amount of sample texts are collected, and each sample word segmentation in the sample texts is marked with a corresponding sample entity type identifier. Here, the sample entity type identifies an entity type indicating a sample word segmentation. Meanwhile, a synonym set dictionary is constructed. Here, the synonym set dictionary represents different expression modes of the same entity type, for example, the entity categories of f (x) and g (x) are both "general functions", so the synonym set E = { f (x), g (x) } of the "general functions" is constructed, and the synonym set dictionary contains synonym sets of a large number of entity types. The synonym set dictionary can be constructed after a worker reads and understands a sample text, or can be obtained by manual screening and filtering on the basis of a sample entity type identifier of each sample word in the sample text.
And then training the initial model based on each sample word in the sample text, the sample entity type identification of the sample word and the synonym set dictionary to obtain an entity type recognition model. The initial model may be a single neural network model or a combination of a plurality of neural network models, and the embodiment of the present invention does not specifically limit the type and structure of the initial model.
According to the method provided by the embodiment of the invention, each participle of the text to be recognized is input into the entity type recognition model constructed based on the synonym interaction attention mechanism to recognize the entity type, so that the problem of difficult recognition caused by the changeful text entity expression modes is solved, and the accuracy and the reliability of text entity type recognition are improved.
Based on any of the above embodiments, fig. 2 is a schematic flowchart of a method for predicting an entity type recognition model according to an embodiment of the present invention, and as shown in fig. 2, in the method, the entity type recognition model includes an input layer, a synonym interaction attention layer, and a classification output layer. Correspondingly, step 120 specifically includes:
and step 121, inputting each word segmentation in the text to be recognized into the input layer to obtain a word vector of each word segmentation output by the input layer.
Specifically, the input layer is a pre-trained language model used to determine the word vector representation of each participle. The language model may be a long short-term memory (LSTM) network, a recurrent neural network (RNN), or another type of network such as the ELMo model. Proposed by the Allen Institute for AI in 2018, ELMo uses the different layers of a language model to encode different kinds of information about a participle; by connecting all layers, various character representations can be freely combined, complex semantic and syntactic features can be captured, and different contexts can be modeled accurately, finally yielding better word vectors.
And step 122, inputting the word vector of each participle into the synonym interactive attention layer to obtain an enhanced word vector of each participle output by the synonym interactive attention layer.
Here, the synonym interaction attention layer is constructed based on a synonym interaction attention mechanism, and the synonym interaction attention layer judges whether a synonym exists in each inputted participle based on a synonym set dictionary, so that interaction enhancement is performed on the word vectors of the synonyms to highlight the same vector characteristics among the synonyms, and an enhanced word vector of each participle is output. Here, the enhanced word vector is a word vector enhanced by the synonym interactive attention layer.
And step 123, inputting the enhanced word vector of each participle into the classification output layer to obtain an entity type identification result of each participle output by the classification output layer.
The classification output layer is used for analyzing and predicting the enhanced word vector of each inputted word, judging the probability of each word corresponding to the entity type, and outputting the entity type recognition result of each word.
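The three steps above (input layer, synonym interaction attention layer, classification output layer) can be sketched as a toy forward pass. The embedding table, classifier weights, and label set below are illustrative assumptions, not values from the patent; in the real model every layer is learned:

```python
import math

# Toy forward pass through the three layers. A table lookup stands in for the
# pretrained language model of the input layer (e.g. ELMo).
EMB = {"f(x)": [1.0, 0.0], "=": [0.0, 1.0], "x+1": [0.5, 0.5]}
LABELS = ["O", "FUNC"]
W = [[-1.0, 2.0], [2.0, -1.0]]  # classifier weights, one row per label

def input_layer(tokens):
    return [EMB[t] for t in tokens]

def synonym_attention_layer(vectors):
    # Stand-in: with no synonyms among the inputs, each enhanced word vector
    # equals the original word vector (the empty-set case described later).
    return vectors

def classification_output_layer(vectors):
    labels = []
    for v in vectors:
        scores = [sum(w * x for w, x in zip(row, v)) for row in W]
        exps = [math.exp(s) for s in scores]
        probs = [e / sum(exps) for e in exps]
        labels.append(LABELS[probs.index(max(probs))])
    return labels

def recognize(tokens):
    return classification_output_layer(synonym_attention_layer(input_layer(tokens)))
```

With these made-up weights, `recognize(["f(x)", "="])` tags the function symbol as an entity and the equals sign as a non-entity.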
In practical applications, the text to be recognized may suffer from language sparsity. Taking the mathematics discipline as an example, in its test question text data and answer text data, the mathematical entities used to identify mathematical objects are extremely sparse. To address this problem, based on any of the above embodiments, the method further includes, between step 122 and step 123: for any participle, updating the enhanced word vector of the participle to the concatenation of the enhanced word vector of the participle and the character feature vector of the participle.
Here, the character feature vector of any participle characterizes the character types of the participle; for example, in mathematical text the character types include Chinese, English, numerals, and symbols. Generally, Chinese text rarely contains specific mathematical objects, whereas English letters mostly serve as references to mathematical objects, so most English tokens can be regarded as mathematical entities; moreover, English letters, numerals, and symbols are often mixed together (a common mathematical formula usually contains many numerals, symbols, and upper- and lowercase letters), which increases the difficulty of entity type identification. To address this, the embodiment of the invention encodes the character types of each participle into a corresponding character feature vector, concatenates the enhanced word vector of the participle with this character feature vector, and inputs the concatenated vector into the classification output layer for entity type recognition.
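One simple way to realize the character feature vector described above is a normalized count over the four character types, concatenated onto the enhanced word vector. The patent does not specify the encoding, so this normalized-count scheme is an illustrative assumption:

```python
# Sketch of a character feature vector over {Chinese, English, digit, symbol},
# concatenated onto the participle's enhanced word vector. The exact encoding
# is an assumption, not taken from the patent.
def char_feature(token: str):
    feat = [0.0, 0.0, 0.0, 0.0]  # [Chinese, English, digit, symbol]
    for ch in token:
        if "\u4e00" <= ch <= "\u9fff":      # CJK Unified Ideographs
            feat[0] += 1.0
        elif ch.isascii() and ch.isalpha():
            feat[1] += 1.0
        elif ch.isdigit():
            feat[2] += 1.0
        else:
            feat[3] += 1.0
    total = sum(feat) or 1.0
    return [f / total for f in feat]         # normalize to proportions

def concat_features(enhanced_vec, token):
    # The updated vector fed to the classification output layer.
    return list(enhanced_vec) + char_feature(token)
```

For instance, the all-uppercase segment name "AB" is encoded as pure English, while "x1" splits evenly between English and digit.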
The method provided by the embodiment of the invention carries out entity type identification by combining the enhanced word vector with the character feature vector, overcomes the problem of language sparsity, and can further improve the accuracy of entity type identification.
Based on any of the above embodiments, in the method, the entity type identification model further includes a formula semantic prediction layer; correspondingly, step 122 is followed by: determining a word vector of a formula in the text to be recognized based on the enhanced word vector of each word segmentation; and inputting the word vector of the formula into a formula semantic prediction layer to obtain formula semantics output by the formula semantic prediction layer.
Specifically, formula semantics refers to the specific mathematical concept, theorem, or generally accepted mathematical term that a formula expresses; for example, the formula semantics corresponding to the formula "f(x) = x + 1" is "linear function", and the formula semantics corresponding to the formula "sin(x + y) = sin x × cos y + cos x × sin y" is "trigonometric formula of the sum and difference of two angles".
The enhanced word vector of each participle output by the synonym interaction attention layer can also be used for predicting the formula semantic type in the text to be recognized. Any formula may correspond to one or more participles, after the corresponding relation between the formula and the participles is known, the enhanced word vector of the participle corresponding to the formula can be determined, the word vector of the formula is further determined by averaging or splicing and the like, the word vector of the formula is input to a formula semantic prediction layer, and the formula semantic prediction layer predicts the formula semantic.
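The averaging route mentioned above can be sketched directly: given the enhanced vectors of all participles and the indices the formula spans, the formula's word vector is their element-wise mean. The index-based interface is an illustrative assumption:

```python
# Sketch: derive a formula's word vector from the enhanced word vectors of the
# participles it corresponds to, by element-wise averaging (the description
# also mentions concatenation as an alternative).
def formula_vector(enhanced_vectors, formula_token_indices):
    selected = [enhanced_vectors[i] for i in formula_token_indices]
    dim = len(selected[0])
    return [sum(v[d] for v in selected) / len(selected) for d in range(dim)]
```

The resulting vector is what would be fed to the formula semantic prediction layer.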
In the embodiment of the invention, formula semantics prediction and the entity type identification of each participle in the formula are complementary. For example, if the formula semantics of the current formula is a trigonometric identity, the entity type of each participle in the formula is most likely a trigonometric function or an angle. Once the formula semantics is determined, the entity types of the participles in the formula are confined to the limited space consistent with that formula semantics. Therefore, the entity type recognition model can be trained based on each sample participle in the sample text and its sample entity type identifier, the synonym set dictionary, and the sample formula semantics of the sample formulas in the sample text.
Specifically, during the training of the entity type recognition model, two loss functions can be set: one measuring the error between the predicted entity type result and the sample entity type identifier, and one measuring the error between the predicted formula semantics and the sample formula semantics. The two losses are summed with weights to obtain the overall training loss of the entity type recognition model. It should be noted that the embodiment of the invention does not specifically limit the loss weights, which may be adjusted for the task at hand; preferably, in the entity type recognition model provided by the embodiment of the invention, formula semantics prediction serves as an auxiliary task, and its loss weight is smaller.
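The weighted two-loss objective reduces to a one-line combination. The 0.8/0.2 split below is an illustrative assumption; the patent only states that the auxiliary formula-semantics weight should be the smaller one:

```python
# Sketch of the overall training loss: main entity-type loss plus auxiliary
# formula-semantics loss, summed with weights. The 0.8/0.2 values are
# assumptions for illustration, not values from the patent.
def total_loss(entity_loss, formula_loss, w_entity=0.8, w_formula=0.2):
    assert w_formula < w_entity, "formula semantics is the auxiliary task"
    return w_entity * entity_loss + w_formula * formula_loss
```

During training this scalar would be backpropagated through both prediction heads.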
The method provided by the embodiment of the invention can be used for assisting in predicting the entity category by using formula semantics, fully mining the context meaning of the word segmentation and improving the accuracy of entity type identification.
Based on any one of the foregoing embodiments, fig. 3 is a schematic flowchart of a method for implementing a synonym interaction attention mechanism according to an embodiment of the present invention, as shown in fig. 3, in the method, step 122 specifically includes:
step 1221, based on the synonym set dictionary, selecting a word vector of a synonym of any participle from the word vectors of each participle, and constructing a synonym set of the participle; the synonym set includes a word vector for each synonym.
Suppose the number of participle word vectors input to the synonym interaction attention layer is n; the n participles are denoted participle 1, participle 2, …, participle n, and their word vectors word vector 1, word vector 2, …, word vector n. For participle i, where i is a positive integer less than or equal to n, the preset synonym set dictionary is searched to determine whether any of participle 1, participle 2, …, participle i-1, participle i+1, …, participle n is a synonym of participle i; if so, that participle's word vector is added to the synonym set of participle i. For example, if the synonyms of participle 2 are participle 5 and participle 10, then the synonym set of participle 2 is E = {word vector 5, word vector 10}; if participle 3 has no synonym among the currently input participles, the synonym set E of participle 3 is the empty set.
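Step 1221 can be sketched as follows: for each input participle, gather the word vectors of the other input participles that appear in the same synonym set of the dictionary. The dictionary contents and vectors are illustrative assumptions:

```python
# Sketch of step 1221: build, for every input participle, the set of word
# vectors of its co-occurring synonyms. A participle with no synonyms among
# the inputs gets an empty set.
def build_synonym_sets(tokens, vectors, synonym_dict):
    # synonym_dict: entity type -> set of surface forms
    groups = list(synonym_dict.values())
    sets = []
    for i, tok in enumerate(tokens):
        members = next((g for g in groups if tok in g), set())
        sets.append([vectors[j] for j, t in enumerate(tokens)
                     if j != i and t in members])
    return sets
```

For the input ["f(x)", "=", "g(x)"] with f(x) and g(x) sharing a synonym set, f(x) collects g(x)'s vector and vice versa, while "=" collects nothing.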
Step 1222, determining the similarity between the participle and the synonym based on the word vector of the participle and the word vector of any synonym in the synonym set.
Assuming synonym set E = {word vector 5, word vector 10} for participle 2, the similarity of word vector 2 to word vector 5 and the similarity of word vector 2 to word vector 10 are calculated respectively. Here, the similarity between word vectors may be computed with a similarity measurement formula such as a dot product, concatenation, or perceptron, which is not specifically limited in the embodiment of the present invention.
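As an illustration of the dot-product option (the other measures mentioned, concatenation and perceptron, involve learned parameters and are omitted here), with a normalized variant added for comparison:

```python
import math

def dot_similarity(u, v):
    """Dot product between two word vectors of equal dimension."""
    return sum(a * b for a, b in zip(u, v))

def cosine_similarity(u, v):
    """Dot product normalized by the vector magnitudes, so the score is
    independent of vector length."""
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot_similarity(u, v) / norm if norm else 0.0
```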
Step 1223, outputting the enhanced word vector of the participle based on the similarity between the participle and each synonym in the synonym set.
Specifically, the word vector of the participle and the word vectors of the synonyms in its synonym set can be fused according to the similarity between the participle and each synonym. Information shared between the word vector of the participle and the word vectors of its synonyms is thereby strengthened, while information that differs is weakened, realizing synonym interaction attention enhancement of the word vector of the participle and yielding the enhanced word vector of the participle.
It should be noted that if the synonym set of any participle constructed in step 1221 is an empty set, that is, the text to be recognized contains no synonym of the participle, the enhanced word vector of the participle output by the synonym interaction attention layer is simply the original word vector of the participle.
Based on any of the above embodiments, in the method, step 1223 specifically includes: determining the weight corresponding to the word vector of each synonym based on the similarity between the participle and each synonym in the synonym set; determining an attention word vector based on the word vector of each synonym and the corresponding weight thereof; based on the word vector and the attention word vector of the segmented word, an enhanced word vector of the segmented word is determined.
Specifically, for any participle, the similarities between the participle and each synonym in its synonym set are weight-normalized to obtain the weight corresponding to each synonym's word vector. Then the word vectors of the synonyms are weighted and summed, and the result of the weighted summation is taken as the attention word vector. Finally, the word vector of the participle and the attention word vector are fused to obtain the enhanced word vector of the participle; the fusion may be averaging, concatenation, or the like.
For example, given synonym set E = {word vector 5, word vector 10} for participle 2, assume the similarity of word vector 2 to word vector 5 is 75% and the similarity of word vector 2 to word vector 10 is 85%. Weight-normalizing the two similarities gives a weight of 75%/(75% + 85%) = 0.46875 for word vector 5 and a weight of 85%/(75% + 85%) = 0.53125 for word vector 10. The attention word vector of participle 2 is therefore 0.46875 × word vector 5 + 0.53125 × word vector 10. On this basis, word vector 2 and the attention word vector are averaged to obtain the enhanced word vector of participle 2.
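The normalization-and-fusion steps of this worked example can be sketched as follows, using the averaging fusion variant; one-dimensional toy vectors stand in for real word embeddings:

```python
def enhance(word_vec, synonym_vecs, similarities):
    """Weight-normalize the similarities, build the attention word vector
    as a weighted sum of the synonym word vectors, then average it with
    the original word vector (the averaging fusion variant)."""
    total = sum(similarities)
    weights = [s / total for s in similarities]
    dim = len(word_vec)
    attention = [sum(w * vec[d] for w, vec in zip(weights, synonym_vecs))
                 for d in range(dim)]
    return [(word_vec[d] + attention[d]) / 2 for d in range(dim)]

# Similarities 0.75 and 0.85 normalize to 0.46875 and 0.53125, matching
# the worked example in the text.
result = enhance([2.0], [[1.0], [3.0]], [0.75, 0.85])
```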
Based on any of the above embodiments, fig. 4 is a schematic flowchart of an entity type classification output method provided by an embodiment of the present invention. As shown in fig. 4, in the method, the classification output layer includes a context layer and a classification layer; correspondingly, step 123 specifically includes:
step 1231, the enhanced word vector of each participle is input to the context layer, and a sequence vector of each participle output by the context layer is obtained.
Step 1232, inputting the sequence vector of each participle into the classification layer to obtain an entity type recognition result of each participle output by the classification layer.
Specifically, the context layer analyzes the context information of each participle according to its input enhanced word vector and outputs the sequence vector of each participle. The classification layer analyzes the input sequence vector of each participle, judges the probability that the participle corresponds to each entity type, and outputs the entity type recognition result of each participle.
Here, the sequence vector contains both the information of the participle itself and the context information of the participle. The context layer can be implemented with a long short-term memory (LSTM) network; preferably, the context layer in the embodiment of the present invention is a two-layer bidirectional LSTM (Bi-LSTM).
According to any one of the above embodiments, in the method, the classification layer comprises a softmax output layer and a conditional random field (CRF) layer; correspondingly, step 1232 specifically includes: inputting the sequence vector of each participle into the softmax output layer to obtain a candidate recognition result of each participle output by the softmax output layer; and inputting the candidate recognition result of each participle into the CRF layer to obtain the entity type recognition result of each participle output by the CRF layer.
Specifically, the softmax output layer determines, based on the input sequence vector of a participle, the probability that the participle corresponds to each entity type, and outputs those probabilities as the candidate recognition result of the participle. The conditional random field (CRF) layer constrains and adjusts the candidate recognition result of each participle based on the candidate recognition result of the preceding participle, and outputs the adjusted candidate recognition result as the entity type recognition result of the participle.
Since an entity may be composed of one or more participles, the embodiment of the present invention labels the recognition results in BIO (Begin, Inside, Out) form, where B indicates that the participle is at the beginning of an entity, I indicates that the participle is inside the entity, and O indicates that the participle is outside any entity, i.e. does not belong to an entity. In the conditional random field (CRF) layer, if the candidate recognition result of any participle is "B-angle" (the beginning of an angle entity), the next participle cannot be "I-circle" (the inside of a circle-type entity). By learning the constraint relations among entity labels, the CRF layer can reduce the number of invalid predicted entity sequences.
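The BIO transition constraint can be sketched as a simple validity check. This is a hand-written rule for illustration only; the actual CRF layer learns a transition score matrix from data rather than applying hard rules:

```python
def valid_transition(prev_tag, tag):
    """An 'I-X' tag is only valid after 'B-X' or 'I-X' with the same
    entity type X; 'B-*' and 'O' tags may follow any tag."""
    if tag.startswith("I-"):
        etype = tag[2:]
        return prev_tag in ("B-" + etype, "I-" + etype)
    return True
```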
Based on any one of the above embodiments, in the method, the text to be recognized is a mathematical text. Here, the mathematical text is complete mathematical question information, including three parts of question stem, answer and resolution.
Fig. 5 is a schematic structural diagram of an entity type identification model provided in an embodiment of the present invention, and referring to fig. 5, when a text to be identified is a mathematical text, the text entity type identification method specifically includes:
Firstly, the mathematical text to be recognized is preprocessed, which mainly refers to word segmentation. Word segmentation can adopt a rule-based method, a sequence-labeling-based model prediction method, and the like. Because mathematical text contains Chinese, English, numbers, and symbols, the embodiment of the present invention combines a rule-based method with the Chinese jieba word segmentation tool to obtain the word segmentation result of the mathematical text.
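A sketch of the rule-based part of such a hybrid tokenizer, assuming one plausible rule: formula-like runs of Latin letters, digits, and operators are kept whole, while CJK runs are split off (in the embodiment, those CJK runs would then be passed to jieba, which is omitted here):

```python
import re

# Alternation order matters for re.findall: formula-like runs first,
# then CJK runs, then any remaining non-space character.
TOKEN_RE = re.compile(r"[A-Za-z0-9+\-*/^=<>().]+|[\u4e00-\u9fff]+|\S")

def coarse_tokenize(text):
    """Coarse rule-based split of mixed Chinese/formula text."""
    return TOKEN_RE.findall(text)

pieces = coarse_tokenize("已知 x+2=5 求 x")
```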
After word segmentation is finished, taking each clause of the mathematical text as a unit, each participle in the clause and the character feature vector representing the character type of each participle are input into the entity type recognition model to obtain the entity type recognition result for the clause.
Here, the entity type recognition model includes an input layer, a synonym interaction attention layer, a context layer, a softmax output layer, a conditional random field (CRF) layer, and a formula semantic prediction layer. The input layer is a pre-trained language model used to determine the word vector corresponding to each participle. The synonym interaction attention layer judges, based on the synonym set dictionary, whether each input participle has synonyms within the clause, interactively enhances the word vectors of the synonyms, and outputs the enhanced word vector of each participle. The context layer is a two-layer bidirectional LSTM (Bi-LSTM) that extracts the context information of each participle from the concatenation of its enhanced word vector and character feature vector and outputs the sequence vector of each participle. The softmax output layer determines, based on the input sequence vector of a participle, the probability that the participle corresponds to each entity type, and outputs those probabilities as the candidate recognition result of the participle. The CRF layer constrains and adjusts the candidate recognition result of each participle and outputs the adjusted result as the entity type recognition result of the participle. The formula semantic prediction layer predicts and outputs the formula semantics from the word vector of each formula in the clause; here, the word vector of a formula is determined from the enhanced word vectors of the participles that make up the formula.
In the embodiment of the present invention, the synonym interaction attention mechanism is used to enhance the word vectors, the enhanced word vectors are combined with the character feature vectors specific to mathematical text, and formula semantics are used to assist in predicting the entity category. The context meaning and character features of each participle can thus be fully mined, better alleviating both the language sparsity of mathematical text and the difficulty of entity recognition when entity categories are unbalanced.
Based on any one of the embodiments, the embodiment of the present invention provides a training method for an entity type recognition model, including:
Firstly, a large number of sample mathematical texts are collected, and each sample participle in the sample mathematical texts is labeled with a corresponding sample entity type identifier. The number of mathematical entity types far exceeds that of common Chinese and English named entity recognition tasks, so the embodiment of the present invention defines mathematical entity types according to a hierarchical relationship: for example, a "quadrilateral" entity may specifically include mathematical entities such as rectangle, square, and parallelogram. When entity recognition is performed, the mathematical entity labels can be normalized according to specific requirements, yielding mathematical entity type recognition results at different granularities.
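The hierarchical label normalization can be sketched as a parent-type lookup; the two-level hierarchy below is a hypothetical fragment, since the text gives only the quadrilateral example and not the full label inventory:

```python
# Hypothetical fine-to-coarse mapping built from the hierarchy defined
# during labeling; only the quadrilateral branch from the text is shown.
PARENT = {
    "rectangle": "quadrilateral",
    "square": "quadrilateral",
    "parallelogram": "quadrilateral",
}

def normalize_label(label, coarse=True):
    """Map a fine-grained mathematical entity label to its parent type
    when coarse-grained recognition results are required; labels with no
    parent entry are returned unchanged."""
    return PARENT.get(label, label) if coarse else label
```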
Meanwhile, a synonym set dictionary is constructed. The synonym set dictionary may be constructed by workers after reading and understanding the sample mathematical texts, or obtained by grouping sample participles according to their sample entity type identifiers and then manually filtering out non-synonyms.
In addition, formula semantics corresponding to a formula part in the sample math text also needs to be marked.
Then, a neural network consisting of an input layer, a synonym interaction attention layer, a context layer, a softmax output layer, a conditional random field (CRF) layer, and a formula semantic prediction layer is constructed. With mathematical entity type recognition as the main task and prediction of the formula semantics in the text as the auxiliary task, the network is trained in a multi-task learning manner to obtain, for each participle in the sample mathematical text, a probability distribution over the mathematical entity types.
Based on any of the foregoing embodiments, fig. 6 is a schematic structural diagram of a text entity type identification apparatus provided in an embodiment of the present invention, and as shown in fig. 6, the apparatus includes a text determination unit 610 and an entity identification unit 620;
the text determining unit 610 is configured to determine a text to be recognized;
the entity identification unit 620 is configured to input each word in the text to be identified to an entity type identification model, so as to obtain an entity type identification result corresponding to each word output by the entity type identification model;
the entity type recognition model is constructed based on a synonym interaction attention mechanism and is obtained through training of each sample word in a sample text, sample entity type identification of each sample word and a synonym set dictionary.
According to the apparatus provided by the embodiment of the present invention, each participle of the text to be recognized is input into an entity type recognition model constructed based on a synonym interaction attention mechanism for entity type recognition, overcoming the recognition difficulty caused by the variability of entity expressions in text and improving the accuracy and reliability of text entity type recognition.
Based on any one of the above embodiments, in the apparatus, the entity type identification model includes an input layer, a synonym interaction attention layer, and a classification output layer;
correspondingly, the entity identifying unit 620 includes:
the input subunit is used for inputting each word in the text to be recognized into the input layer to obtain a word vector of each word output by the input layer;
the attention subunit is configured to input the word vector of each participle into the synonym interaction attention layer, and obtain an enhanced word vector of each participle output by the synonym interaction attention layer;
and the classification output subunit is used for inputting the enhanced word vector of each word segmentation into the classification output layer to obtain an entity type identification result of each word segmentation output by the classification output layer.
Based on any of the above embodiments, in the apparatus, the entity identifying unit 620 further includes:
and the vector splicing subunit is used for updating the enhanced word vector of any participle into a spliced vector of the enhanced word vector of any participle and the character feature vector of any participle aiming at any participle.
Based on any of the above embodiments, in the apparatus, the entity type identification model further includes a formula semantic prediction layer;
correspondingly, the entity identifying unit 620 further includes:
the semantic prediction subunit is used for determining a word vector of a formula in the text to be recognized based on the enhanced word vector of each participle;
inputting the word vector of the formula into the formula semantic prediction layer to obtain formula semantics output by the formula semantic prediction layer;
correspondingly, the entity type recognition model is obtained by training based on each sample word segmentation and sample entity type identification thereof in the sample text, a synonym set dictionary and sample formula semantics of the sample formula in the sample text.
Based on any one of the above embodiments, in the apparatus, the attention subunit includes:
a synonym determining module, configured to select a word vector of a synonym of any participle from word vectors of each participle based on the synonym set dictionary, and construct a synonym set of any participle; the synonym set comprises a word vector for each synonym;
a similarity determination module, configured to determine a similarity between the any participle and any synonym in the synonym set based on the word vector of the any participle and the word vector of any synonym in the synonym set;
and the vector enhancement module is used for outputting an enhanced word vector of any participle based on the similarity between the any participle and each synonym in the synonym set.
Based on any of the above embodiments, in the apparatus, the vector enhancement module is specifically configured to:
determining the weight corresponding to the word vector of each synonym based on the similarity between any participle and each synonym in the synonym set;
determining an attention word vector based on the word vector of each synonym and the corresponding weight thereof;
determining an enhanced word vector for the any segmented word based on the word vector for the any segmented word and the attention word vector.
According to any of the above embodiments, in the apparatus, the classification output layer includes a context layer and a classification layer;
correspondingly, the classification output subunit includes:
the context module is used for inputting the enhanced word vector of each participle into the context layer to obtain a sequence vector of each participle output by the context layer;
and the classification module is used for inputting the sequence vector of each word segmentation into the classification layer to obtain an entity type identification result of each word segmentation output by the classification layer.
According to any one of the above embodiments, in the apparatus, the classification layer comprises a softmax output layer and a conditional random field (CRF) layer;
correspondingly, the classification module is specifically configured to:
inputting the sequence vector of each participle into the softmax output layer to obtain a candidate recognition result of each participle output by the softmax output layer;
and inputting the candidate recognition result of each participle into the conditional random field (CRF) layer to obtain the entity type identification result of each participle output by the CRF layer.
According to any one of the above embodiments, in the device, the text to be recognized is a mathematical text.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 7, the electronic device may include: a processor (processor) 710, a communication Interface (Communications Interface) 720, a memory (memory) 730, and a communication bus 740, wherein the processor 710, the communication Interface 720, and the memory 730 communicate with each other via the communication bus 740. Processor 710 may call logic instructions in memory 730 to perform the following method: determining a text to be recognized; inputting each word in the text to be recognized into an entity type recognition model to obtain an entity type recognition result corresponding to each word output by the entity type recognition model; the entity type recognition model is constructed based on a synonym interaction attention mechanism and is obtained through training of each sample word in a sample text, sample entity type identification of each sample word and a synonym set dictionary.
In addition, when implemented in the form of software functional units and sold or used as independent products, the logic instructions in the memory 730 can be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Embodiments of the present invention further provide a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program is implemented to perform the method provided in the foregoing embodiments when executed by a processor, for example, the method includes: determining a text to be recognized; inputting each word segmentation in the text to be recognized into an entity type recognition model to obtain an entity type recognition result corresponding to each word segmentation output by the entity type recognition model; the entity type recognition model is constructed based on a synonym interaction attention mechanism and is obtained through training of each sample word in a sample text, sample entity type identification of each sample word and a synonym set dictionary.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. A text entity type identification method is characterized by comprising the following steps:
determining a text to be recognized;
inputting each word segmentation in the text to be recognized into an entity type recognition model to obtain an entity type recognition result corresponding to each word segmentation output by the entity type recognition model;
the entity type recognition model is constructed based on a synonym interaction attention mechanism and is obtained through training of each sample participle in a sample text, sample entity type identification of each sample participle and a synonym set dictionary;
the entity type recognition model comprises an input layer, a synonym interaction attention layer and a classification output layer;
correspondingly, the inputting each word in the text to be recognized into an entity type recognition model to obtain an entity type recognition result corresponding to each word output by the entity type recognition model specifically includes:
inputting each word segmentation in the text to be recognized into the input layer to obtain a word vector of each word segmentation output by the input layer;
inputting the word vector of each participle into the synonym interactive attention layer to obtain an enhanced word vector of each participle output by the synonym interactive attention layer;
inputting the enhanced word vector of each participle into the classification output layer to obtain an entity type identification result of each participle output by the classification output layer;
and the synonym interaction attention layer judges whether synonyms exist in each inputted participle based on the synonym set dictionary, so that the word vectors of the synonyms are interactively enhanced, and the enhanced word vectors of each participle are output.
2. The method according to claim 1, wherein the step of inputting the enhanced word vector of each segment into the classification output layer to obtain the entity type recognition result of each segment output by the classification output layer further comprises:
and aiming at any participle, updating the enhanced word vector of the participle into a spliced vector of the enhanced word vector of the participle and the character feature vector of the participle.
3. The text entity type recognition method of claim 1, wherein the entity type recognition model further comprises a formula semantic prediction layer;
correspondingly, the inputting the word vector of each participle into the synonym interaction attention layer to obtain the enhanced word vector of each participle output by the synonym interaction attention layer, and then further comprising:
determining a word vector of a formula in the text to be recognized based on the enhanced word vector of each word segmentation;
inputting the word vector of the formula into the formula semantic prediction layer to obtain formula semantics output by the formula semantic prediction layer;
correspondingly, the entity type recognition model is obtained by training based on each sample word segmentation and sample entity type identification thereof in the sample text, a synonym set dictionary and sample formula semantics of the sample formula in the sample text.
4. The method according to claim 1, wherein the inputting the word vector of each word into the synonym interaction attention layer to obtain the enhanced word vector of each word output by the synonym interaction attention layer specifically comprises:
based on the synonym set dictionary, selecting a word vector of a synonym of any participle from word vectors of each participle, and constructing a synonym set of any participle; the synonym set includes a word vector for each synonym;
determining the similarity of any participle and any synonym in the synonym set based on the word vector of any participle and the word vector of any synonym in the synonym set;
and outputting the enhanced word vector of any participle based on the similarity between any participle and each synonym in the synonym set.
5. The method according to claim 4, wherein the outputting the enhanced word vector of any participle based on the similarity between the any participle and each synonym in the set of synonyms comprises:
determining the weight corresponding to the word vector of each synonym based on the similarity between any participle and each synonym in the synonym set;
determining an attention word vector based on the word vector of each synonym and the corresponding weight of the word vector;
determining an enhanced word vector for the any participle based on the word vector for the any participle and the attention word vector.
6. The text entity type identification method according to claim 1, wherein the classification output layer comprises a context layer and a classification layer;
correspondingly, the inputting the enhanced word vector of each word segmentation into the classification output layer to obtain the entity type recognition result of each word segmentation output by the classification output layer specifically includes:
inputting the enhanced word vector of each participle into the context layer to obtain a sequence vector of each participle output by the context layer;
and inputting the sequence vector of each word segmentation into the classification layer to obtain an entity type recognition result of each word segmentation output by the classification layer.
7. The text entity type recognition method of claim 6, wherein the classification layer comprises a softmax output layer and a conditional random field (CRF) layer;
correspondingly, the inputting the sequence vector of each word segmentation into the classification layer to obtain the entity type recognition result of each word segmentation output by the classification layer specifically includes:
inputting the sequence vector of each participle into the softmax output layer to obtain a candidate recognition result of each participle output by the softmax output layer;
and inputting the candidate recognition result of each participle into the conditional random field (CRF) layer to obtain an entity type identification result of each participle output by the CRF layer.
8. The text entity type recognition method according to any one of claims 1 to 7, wherein the text to be recognized is a mathematical text.
9. A text entity type recognition apparatus, comprising:
the text determining unit is used for determining a text to be recognized;
the entity identification unit is used for inputting each participle in the text to be identified into an entity type identification model to obtain an entity type identification result corresponding to each participle output by the entity type identification model;
the entity type recognition model is constructed based on a synonym interaction attention mechanism and is obtained by training each sample word in a sample text, sample entity type identification of each sample word and a synonym set dictionary;
the entity type identification model comprises an input layer, a synonym interaction attention layer and a classification output layer;
correspondingly, the entity identification unit is specifically configured to:
inputting each participle in the text to be recognized into the input layer to obtain a word vector of each participle output by the input layer;
inputting the word vector of each participle into the synonym interactive attention layer to obtain an enhanced word vector of each participle output by the synonym interactive attention layer;
inputting the enhanced word vector of each participle into the classification output layer to obtain an entity type identification result of each participle output by the classification output layer;
and the synonym interaction attention layer judges whether synonyms exist in each input participle or not based on the synonym set dictionary, so that the interaction enhancement is carried out on the word vectors of the synonyms, and the enhanced word vectors of each participle are output.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the text entity type recognition method according to any one of claims 1 to 8 are implemented by the processor when executing the program.
11. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the text entity type recognition method according to any one of claims 1 to 8.
CN201911060988.XA 2019-11-01 2019-11-01 Text entity type identification method and device, electronic equipment and storage medium Active CN110825875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911060988.XA CN110825875B (en) 2019-11-01 2019-11-01 Text entity type identification method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911060988.XA CN110825875B (en) 2019-11-01 2019-11-01 Text entity type identification method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110825875A CN110825875A (en) 2020-02-21
CN110825875B (en) 2022-12-06

Family

ID=69552021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911060988.XA Active CN110825875B (en) 2019-11-01 2019-11-01 Text entity type identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110825875B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401066B (en) * 2020-03-12 2022-04-12 Tencent Technology (Shenzhen) Co., Ltd. Artificial intelligence-based word classification model training method, word processing method and device
CN111414439B (en) * 2020-03-17 2023-08-29 iFlytek (Suzhou) Technology Co., Ltd. Method, device, electronic equipment and storage medium for splitting and linking complex tail entity
CN111539209B (en) * 2020-04-15 2023-09-15 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and apparatus for entity classification
CN111324749B (en) * 2020-05-15 2020-08-18 Alipay (Hangzhou) Information Technology Co., Ltd. Entity classification method, system and device
CN111611801B (en) * 2020-06-02 2021-09-14 Tencent Technology (Shenzhen) Co., Ltd. Method, device, server and storage medium for identifying text region attribute
CN113761904A (en) * 2020-06-05 2021-12-07 Alibaba Group Holding Ltd. Training method and device of text recognition model, electronic equipment and storage medium
CN112784593B (en) * 2020-06-05 2023-02-03 Zhuhai Kingsoft Office Software Co., Ltd. Document processing method and device, electronic equipment and readable storage medium
CN111861253A (en) * 2020-07-29 2020-10-30 Beijing Che Bohe Technology Co., Ltd. Personnel capacity determining method and system
CN112560477B (en) * 2020-12-09 2024-04-16 iFlytek (Beijing) Co., Ltd. Text completion method, electronic equipment and storage device
CN112528662A (en) * 2020-12-15 2021-03-19 Shenzhen OneConnect Smart Technology Co., Ltd. Entity category identification method, device, equipment and storage medium based on meta-learning
CN112926327B (en) * 2021-03-02 2022-05-20 Capital Normal University Entity identification method, device, equipment and storage medium
CN114579740B (en) * 2022-01-20 2023-12-05 Mashang Consumer Finance Co., Ltd. Text classification method, device, electronic equipment and storage medium
CN114860941A (en) * 2022-07-05 2022-08-05 Nanjing Yunchuang Big Data Technology Co., Ltd. Industry data management method and system based on data brain

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104965819A (en) * 2015-07-12 2015-10-07 Dalian University of Technology Biomedical event trigger word identification method based on syntactic word vectors
CN107977361A (en) * 2017-12-06 2018-05-01 Harbin Institute of Technology Shenzhen Graduate School Chinese clinical medical entity recognition method based on deep semantic information representation
CN109271632A (en) * 2018-09-14 2019-01-25 Chongqing Xiezhi Technology Co., Ltd. A supervised word vector learning method
CN109885825A (en) * 2019-01-07 2019-06-14 Ping An Technology (Shenzhen) Co., Ltd. Named entity recognition method, device and computer equipment based on an attention mechanism
CN109918680A (en) * 2019-03-28 2019-06-21 Tencent Technology (Shanghai) Co., Ltd. Entity recognition method, device and computer equipment
CN110134954A (en) * 2019-05-06 2019-08-16 Beijing University of Technology A named entity recognition method based on an attention mechanism

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004097664A2 (en) * 2003-05-01 2004-11-11 Axonwave Software Inc. A method and system for concept generation and management
US11144830B2 (en) * 2017-11-21 2021-10-12 Microsoft Technology Licensing, Llc Entity linking via disambiguation using machine learning techniques
CN108280061B (en) * 2018-01-17 2021-10-26 北京百度网讯科技有限公司 Text processing method and device based on ambiguous entity words


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chinese entity relation extraction based on a high-level semantic attention mechanism; Wu Wenya et al.; Journal of Guangxi Normal University; 2019-01-31; Vol. 37, No. 1, pp. 32-41 *

Also Published As

Publication number Publication date
CN110825875A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
CN110825875B (en) Text entity type identification method and device, electronic equipment and storage medium
CN110852087B (en) Chinese error correction method and device, storage medium and electronic device
CN110727779A (en) Question-answering method and system based on multi-model fusion
CN109960728B (en) Method and system for identifying named entities of open domain conference information
CN108182177A (en) Automated mathematics knowledge point annotation method and device
CN112364170B (en) Data emotion analysis method and device, electronic equipment and medium
CN113553412B (en) Question-answering processing method, question-answering processing device, electronic equipment and storage medium
CN112100332A (en) Word embedding expression learning method and device and text recall method and device
CN110825867A (en) Similar text recommendation method and device, electronic equipment and storage medium
CN111832278B (en) Document fluency detection method and device, electronic equipment and medium
CN112434520A (en) Named entity recognition method and device and readable storage medium
CN116578688A (en) Text processing method, device, equipment and storage medium based on multiple rounds of questions and answers
CN113255331A (en) Text error correction method, device and storage medium
CN110765241B (en) Super-outline detection method and device for recommendation questions, electronic equipment and storage medium
CN112182167A (en) Text matching method and device, terminal equipment and storage medium
CN116561275A (en) Object understanding method, device, equipment and storage medium
CN116595023A (en) Address information updating method and device, electronic equipment and storage medium
CN115545030A (en) Entity extraction model training method, entity relation extraction method and device
CN113221553A (en) Text processing method, device and equipment and readable storage medium
CN112183060B (en) Reference resolution method of multi-round dialogue system
CN114511084A (en) Answer extraction method and system for automatic question-answering system for enhancing question-answering interaction information
CN113705207A (en) Grammar error recognition method and device
CN113657092B (en) Method, device, equipment and medium for identifying tag
CN116775451A (en) Intelligent scoring method and device for test cases, terminal equipment and computer medium
CN113190659B (en) Language and language machine reading understanding method based on multi-task joint training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Wang Shijin

Inventor after: Zhan Wenchao

Inventor after: Sha Jing

Inventor after: Fu Ruiji

Inventor after: Wei Si

Inventor before: Zhan Wenchao

Inventor before: Sha Jing

Inventor before: Fu Ruiji

Inventor before: Wang Shijin

Inventor before: Wei Si