CN114023319B - Slot position identification method and device, electronic equipment and readable storage medium - Google Patents
Slot position identification method and device, electronic equipment and readable storage medium
- Publication number
- CN114023319B (application No. CN202111287097.5A)
- Authority
- CN
- China
- Prior art keywords
- slot
- identification result
- determining
- voice instruction
- recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
- G06F40/295—Named entity recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/063—Training
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/16—Speech classification or search using artificial neural networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1822—Parsing for meaning understanding
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/063—Training
- G10L2015/0631—Creating reference templates; Clustering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
The disclosure provides a slot identification method, a slot identification device, an electronic device, and a readable storage medium. The slot identification method comprises the following steps: receiving a first voice instruction; respectively performing slot recognition on the first voice instruction with a parser and with a named entity recognition (NER) model, to obtain a first slot recognition result corresponding to the parser and a second slot recognition result corresponding to the NER model; and determining the slots of the first voice instruction according to the first slot recognition result and the second slot recognition result. The accuracy of slot identification can thereby be improved.
Description
Technical Field
The embodiments of the disclosure relate to the technical field of artificial intelligence, and in particular to a slot identification method, a slot identification device, an electronic device, and a readable storage medium.
Background
At present, slot recognition is usually performed by matching a voice instruction issued by a user against preset sentence-pattern templates. When a voice instruction is successfully matched with a template, the slots can be identified from the template and the instruction. However, the sentence patterns of the voice instructions users actually issue are open-ended, and it is impossible to accurately predict which sentence pattern a user will use; as a result, a lot of manpower is needed to write templates for the various possible cases, the templates can never be exhaustive, and the accuracy of slot recognition is therefore low.
Disclosure of Invention
The embodiments of the disclosure provide a slot identification method, a slot identification device, an electronic device, and a readable storage medium, to solve the problem of low accuracy in existing slot identification.
To solve the above problems, the present disclosure is implemented as follows:
in a first aspect, an embodiment of the present disclosure provides a slot identification method, where the method includes:
Receiving a first voice instruction;
Respectively carrying out slot recognition on the first voice instruction by adopting a parser and a named entity recognition (NER) model to obtain a first slot recognition result corresponding to the parser and a second slot recognition result corresponding to the NER model;
And determining the slot position of the first voice instruction according to the first slot position identification result and the second slot position identification result.
In a second aspect, embodiments of the present disclosure further provide an electronic device, including:
the receiving module is used for receiving a first voice instruction;
The recognition module is used for respectively carrying out slot recognition on the first voice instruction by adopting a parser and a named entity recognition (NER) model to obtain a first slot recognition result corresponding to the parser and a second slot recognition result corresponding to the NER model;
and the determining module is used for determining the slot position of the first voice instruction according to the first slot position identification result and the second slot position identification result.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including a processor, a memory, and a program stored on the memory and executable on the processor, where the program when executed by the processor implements the steps of the slot identification method as described above.
In a fourth aspect, the embodiments of the present disclosure also provide a readable storage medium having a program stored thereon, which when executed by a processor, implements the steps of the slot identification method applied to an electronic device as described above.
In the embodiments of the disclosure, the slots of a voice instruction are identified by combining a parser and an NER model, which can improve the accuracy of slot identification. In addition, since the parser supports recursive configuration, the embodiments of the present disclosure can also simplify the configuration required for enumerable-slot identification.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
FIG. 1 is a flowchart illustrating a slot identification method according to an exemplary embodiment;
FIG. 2 is a block diagram of a slot identification device according to an exemplary embodiment;
Fig. 3 is a block diagram of an electronic device, according to an example embodiment.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Detailed Description
The following description of the technical solutions in the embodiments of the present disclosure will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
The terms "first," "second," and the like in this disclosure are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The following describes a slot identification method according to an embodiment of the present disclosure.
Referring to fig. 1, fig. 1 is a flowchart illustrating a slot identification method according to an exemplary embodiment. The slot identification method disclosed by the embodiment of the disclosure is applied to electronic equipment. In practical applications, the electronic device may be a mobile phone, a computer, a television, a wearable device, a vehicle-mounted device, or the like.
As shown in fig. 1, the slot identification method may include the following steps:
in step 101, a first voice command is received.
The first voice instruction may be understood as: any voice instruction received by the electronic equipment.
In step 102, a parser and a named entity recognition NER model are adopted to respectively perform slot recognition on the first voice command, so as to obtain a first slot recognition result corresponding to the parser and a second slot recognition result corresponding to the NER model.
In the embodiments of the present disclosure, after receiving a voice instruction, the electronic device may perform slot recognition on the voice instruction, i.e. perform a slot recognition operation, with a parser and with a Named Entity Recognition (NER) model respectively. In implementation, the two slot recognition operations may be performed simultaneously or sequentially; the embodiments of the disclosure do not limit their execution timing, which may be determined according to actual requirements.
In addition, before the electronic device performs the slot recognition operations, the voice instruction may be converted into text, and slot recognition is then performed on the text. Of course, the electronic device may also perform slot recognition directly on the speech; this may be determined according to the actual situation and is not limited by the embodiments of the present disclosure.
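As a non-limiting illustration of this step, the following Python sketch runs the two recognizers on the transcribed text; the two recognizer functions are placeholders of my own, since the disclosure does not prescribe any particular implementation or ordering.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Dict

def parser_recognize(text: str) -> Dict[str, str]:
    # Placeholder for the grammar-based parser described below.
    return {}

def ner_recognize(text: str) -> Dict[str, str]:
    # Placeholder for the NER model described below.
    return {}

def recognize_slots(asr_text: str):
    """Run both slot recognizers on the same transcribed voice instruction.

    The two recognitions are independent, so they may run concurrently or
    sequentially; the disclosure does not constrain the ordering.
    """
    with ThreadPoolExecutor(max_workers=2) as pool:
        first = pool.submit(parser_recognize, asr_text)
        second = pool.submit(ner_recognize, asr_text)
        return first.result(), second.result()
```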
The working principles of the parser and NER models are described below, respectively.
1. The parser.
The parser corresponds to a context-free grammar that can be represented and analyzed by a recursive transition network. The biggest difference between a recursive transition network and a finite state machine is that the edges connecting nodes in a finite state machine must be terminals, whereas an edge connecting nodes in a recursive transition network may itself be another network.
As with regular expressions, grammar rules must be written for the recursive transition network in advance; at recognition time, the voice instruction is matched against the grammar rules and a corresponding result is returned according to the configured rules. Unlike regular expressions, recursive transition networks support recursive configuration, i.e. networks can be nested within one another, so common modules can be shared much like static functions in code, which reduces the complexity and workload of rule configuration.
The parser can also return specific mapping results according to different requirements: it can return not only the words in the voice instruction but also mapped values corresponding to those words. In addition, the parser can apply different priority orderings according to the length or number of the identified slots and return different results accordingly.
In terms of rule and keyword configuration, the parser is more capable and cheaper and more convenient to configure, so manual configuration cost can be greatly reduced while recognition accuracy is guaranteed.
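To make the recursive configuration concrete, here is a toy sketch of a nested grammar; the dict-based rule format and the rule names are illustrative assumptions, not the parser's actual configuration syntax.

```python
GRAMMAR = {
    "$city": ["beijing", "shanghai", "guangzhou"],
    "$province": ["guangdong", "zhejiang"],
    # A rule may reference other rules, i.e. networks can be nested,
    # so a shared sub-grammar such as $location is written only once.
    "$location": ["$city", "$province"],
    "$weather_query": ["weather in $location", "$location weather"],
}

def expand(symbol: str, grammar: dict) -> list:
    """Recursively expand a non-terminal into every terminal phrase it accepts."""
    phrases = []
    for alternative in grammar.get(symbol, [symbol]):
        # Expand each token of the alternative, then combine them left to right.
        token_options = [expand(tok, grammar) if tok.startswith("$") else [tok]
                         for tok in alternative.split()]
        combos = [""]
        for options in token_options:
            combos = [f"{prefix} {opt}".strip() for prefix in combos for opt in options]
        phrases.extend(combos)
    return phrases

print(expand("$weather_query", GRAMMAR))
# e.g. ['weather in beijing', ..., 'zhejiang weather']
```

In a real deployment the grammar would also attach slot names and mapping values to the matched spans, rather than merely enumerating phrases as this sketch does.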
2. NER model.
The NER model may specifically be a Long Short-Term Memory (LSTM)-Conditional Random Field (CRF), a Bidirectional LSTM (BiLSTM)-CRF, a pre-trained language model (Bidirectional Encoder Representations from Transformers, BERT)-BiLSTM-CRF, or the like. Its main purpose is to identify, through training, entities in sentences, such as person names, locations, organizations, and times. The identified entities may be used in various downstream applications, or as features of a machine learning system for other natural language processing tasks.
After training, the NER model may label each word or character, e.g. with the tags B, M, E, I, and O, where B marks the first token of an entity, M a middle token of an entity, E the last token of an entity, I a single-token entity, and O a non-entity token. The NER model may also attach entity-class information to an entity and can thus identify entities of different classes. Its strength is that non-enumerable slots, such as person names and addresses, can be identified without manually writing rules; only training on labeled data is required.
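As a hedged illustration of the B/M/E/I/O scheme just described (not the disclosed model itself), the sketch below decodes a predicted tag sequence into entity slots; character-level tagging and the "B-PER"-style tag-plus-class format are assumptions made for the example.

```python
from typing import List, Tuple

def decode_tags(chars: List[str], tags: List[str]) -> List[Tuple[str, str]]:
    """Turn per-character tags such as 'B-PER', 'M-PER', 'E-PER', 'I-LOC', 'O'
    into (entity text, entity class) slots, following the scheme above."""
    slots, start = [], None
    for i, tag in enumerate(tags):
        label = tag.split("-")[-1]
        if tag.startswith("B"):              # first character of a multi-character entity
            start = i
        elif tag.startswith("E") and start is not None:
            slots.append(("".join(chars[start:i + 1]), label))
            start = None
        elif tag.startswith("I"):            # single-character entity, per the text above
            slots.append((chars[i], label))
        elif tag == "O":                     # non-entity character
            start = None
    return slots

print(decode_tags(list("张三在北京"),
                  ["B-PER", "E-PER", "O", "B-LOC", "E-LOC"]))
# -> [('张三', 'PER'), ('北京', 'LOC')]
```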
Thus, by identifying slots with a combination of the parser and the NER model, enumerable slots can be identified accurately and non-enumerable slots can also be identified well, so that the slots of a voice instruction can be identified as fully as possible.
In step 103, determining a slot of the first voice command according to the first slot identification result and the second slot identification result.
In implementation, in a first embodiment, the electronic device may determine the slots of the first voice instruction by choosing between the first slot recognition result and the second slot recognition result according to the slot type of each slot in the two results.
In the second embodiment, the electronic device may select to use the first slot identification result and/or the second slot identification result according to a certain rule to determine the slot of the first voice command.
In a third embodiment, the electronic device may determine the slot of the first voice instruction by using the first slot identification result and the second slot identification result by means of a classification model.
The above-described embodiments are merely examples and thus do not limit the implementation of step 103. Any implementation manner of performing slot identification by combining a parser model and a NER model can fall within the protection scope of the present disclosure.
In this embodiment of the present disclosure, the slot of the first voice command may be a slot in the first slot identification result, may also be a slot in the second slot identification result, or may be a union or a set of slots in the first slot identification result and the second slot identification result, which may specifically be determined according to an actual situation, and this embodiment of the present disclosure is not limited thereto.
According to the slot position identification method disclosed by the embodiment of the disclosure, the slot position of the voice instruction is identified by adopting a mode of combining the parser model and the NER model, so that the accuracy of slot position identification can be improved. In addition, since the parser supports recursive configuration, embodiments of the present disclosure may also simplify the configuration of the enumerable slot identification.
The specific implementation of step 103 is described below.
As is clear from the foregoing, the parser is better suited to identifying enumerable slots such as countries or cities, while the NER model is better suited to identifying non-enumerable slots such as person names or places. Therefore, in this embodiment, whether to select the slot recognition result corresponding to the parser or the slot recognition result corresponding to the NER model as the final slot recognition result of the first voice instruction may be decided by the number of slots of each slot type in the voice instruction.
In one implementation manner, optionally, the determining the slot of the first voice instruction according to the first slot identification result and the second slot identification result includes:
determining the slot type of each slot in the first slot identification result and the second slot identification result, wherein the slot type comprises an enumeratable slot and a non-enumeratable slot;
and determining the slot position of the first voice instruction according to the slot position number of each slot position type in the first slot position identification result and the second slot position identification result.
In a specific implementation, in an implementation manner, optionally, determining the slot of the first voice instruction according to the number of slots of each slot type in the first slot identification result and the second slot identification result includes:
Determining the slot in the first slot identification result as the slot of the first voice instruction under the condition that the number of the slot which can enumerate the slot in the first slot identification result and the second slot identification result is larger than a first threshold value;
And determining the slot in the second slot identification result as the slot of the first voice instruction under the condition that the number of the slot which can enumerate the slot in the first slot identification result and the second slot identification result is smaller than or equal to the first threshold value.
In this alternative implementation, the electronic device may set a threshold corresponding to enumerable slots, i.e. a first threshold, which serves as the reference for judging whether there are many or few enumerable slots. The electronic device can then decide, based on comparing the number of enumerable slots in the first and second slot recognition results with the first threshold, whether to use the slot recognition result corresponding to the parser or the slot recognition result corresponding to the NER model as the final slot recognition result of the first voice instruction, which can improve the accuracy of slot recognition.
If this number of enumerable slots is greater than the first threshold, the first and second slot recognition results contain relatively many enumerable slots, and the slot recognition result of the parser may be used as the final slot recognition result of the first voice instruction.
If the number of enumerable slots is less than or equal to the first threshold, the first and second slot recognition results contain relatively few enumerable slots, and the slot recognition result of the NER model may be used as the final slot recognition result of the first voice instruction.
In practical applications, the first threshold may be set according to practical requirements, which is not limited in the embodiments of the present disclosure.
In another implementation manner, the electronic device may directly compare the number of the enumerable slots and the non-enumerable slots in the first slot identification result and the second slot identification result, and if the number of enumerable slots is greater than the number of non-enumerable slots, may use the slot identification result of the parser as the final slot identification result of the first voice instruction; if the number of the enumerated slots is less than the number of the non-enumerated slots, a slot recognition result of the NER model may be employed as a final slot recognition result of the first voice instruction.
In this way, the slots of the voice instruction can be determined based on the comparison between the number of enumerable slots in the voice instruction and the first threshold, which can improve the accuracy of slot determination.
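A minimal sketch of this threshold rule, assuming each recognized slot carries a flag indicating whether its type is enumerable; the data shape and the threshold value are illustrative only.

```python
from typing import Dict, List

def choose_by_enumerable_count(parser_slots: List[Dict],
                               ner_slots: List[Dict],
                               first_threshold: int = 1) -> List[Dict]:
    """Pick the parser result when enumerable slots dominate, else the NER result."""
    enumerable = [s for s in parser_slots + ner_slots if s.get("enumerable")]
    if len(enumerable) > first_threshold:
        return parser_slots   # many enumerable slots: the parser result is preferred
    return ner_slots          # few enumerable slots: the NER result is preferred

slots = choose_by_enumerable_count(
    [{"value": "Beijing", "type": "city", "enumerable": True}],
    [{"value": "Zhang San", "type": "person", "enumerable": False}],
    first_threshold=0)
```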
For the second embodiment, the electronic device may determine the slot of the first voice instruction based on any one of the following rules, in combination with the first slot identification result and the second slot identification result:
Rule 1: of the two word slots obtained by performing slot recognition on the same word with the parser and with the NER model respectively, determine the one with the longer slot length as the slot of that word;
Rule 2: determine the slots of the voice instruction based on the number of slots in the two slot recognition results obtained by performing slot recognition on the same voice instruction with the parser and with the NER model respectively;
Rule 3: when the same characters exist in the two slot recognition results obtained by performing slot recognition on the same voice instruction with the parser and with the NER model respectively, merge the two slot recognition results to obtain the slots of the voice instruction.
In rule 1, the electronic device determines the slots of the voice instruction from the slot recognition results of individual words, i.e. the focus is on parts of the sentence; in rule 2 and rule 3, the electronic device determines the slots from the slot recognition results of the voice instruction as a whole, i.e. the focus is on the whole sentence. The details are as follows:
In rule 1, the electronic device may compare the lengths of the word slots obtained by recognizing the same word with the parser and with the NER model, and thereby determine the final slot recognition result of that word; specifically, the longer of the two word slots is selected as the final slot recognition result of the word.
In practical applications, the length comparison between the parser result and the NER result may come out the same way or differently for different words in the voice instruction. It will therefore be understood that, when the slots of the voice instruction are determined according to rule 1, they may be entirely the recognition results of the parser, entirely the recognition results of the NER model, or partly the recognition results of the parser and partly those of the NER model, which may be determined according to the actual situation and is not limited by the embodiments of the present disclosure.
In rule 2, the electronic device may compare the number of slots obtained by the recognition of the same voice command by the parser and NER models, and determine the final slot recognition result of the voice command.
In particular implementations, rule 2 may include at least one of:
Rule 2-1: of the two slot recognition results obtained by performing slot recognition on the same voice instruction with the parser and with the NER model respectively, take the one containing more slots as the final slot recognition result of the voice instruction;
Rule 2-2: when the total numbers of characters in the two slot recognition results obtained by performing slot recognition on the same voice instruction with the parser and with the NER model respectively are equal, take the one containing fewer slots as the final slot recognition result of the voice instruction.
It should be noted that, under rule 2-1, regardless of whether the total numbers of characters in the two slot recognition results obtained with the parser and the NER model are equal, the electronic device always selects the result containing more slots as the final slot recognition result of the voice instruction. Under rule 2-2, the final slot recognition result depends on the total numbers of characters in the two results: when the totals are equal, the electronic device selects the result containing fewer slots as the final slot recognition result; when the totals differ, it selects the result whose slots cover more characters in total.
In the case of determining the slots of the voice instruction according to rule 2-1 or rule 2-2, the slots of the voice instruction may be entirely the recognition results of the parser or entirely the recognition results of the NER model.
In rule 3, the electronic device may determine whether the slots obtained by recognizing the same voice instruction with the parser and with the NER model contain the same characters; if so, the two results may be merged as the final slot recognition result of the voice instruction, and if not, the final slot recognition result may be determined using the first embodiment, the third embodiment, or either rule 1 or rule 2 of the second embodiment.
It should be noted that, in some implementations, the electronic device may instead determine whether the slots obtained by recognizing the same word with the parser and with the NER model contain the same characters, and if so, merge the two as the final slot recognition result of that word. This implementation differs from rule 3 mainly in its focus: it focuses on individual words, whereas rule 3 focuses on the voice instruction.
In implementation, the electronic device may apply the above rule to determine the slot position of the first voice command, which is specifically described as follows:
implementation 1: optionally, the first voice instruction includes P words, and P is a positive integer;
The determining the slot position of the first voice instruction according to the first slot position identification result and the second slot position identification result comprises the following steps:
determining a first word slot position corresponding to a target word in the first slot position identification result;
determining a second word slot corresponding to the target word in the second slot identification result;
determining one of the first word slot and the second word slot with a longer length as a target word slot of the target word, wherein the slot of the first voice instruction comprises the target word slot;
Wherein the target word is any word of the P words.
In this implementation, the slot of each word in the first voice instruction is the longer of the two word slots obtained by performing slot recognition on that word with the parser and with the NER model. For example, for the target word, if the length of the first word slot is greater than the length of the second word slot, the first word slot may be determined to be the final slot of the target word; otherwise, the second word slot may be determined to be the final slot of the target word.
It may be understood that, since the slot of each word may come from either the parser or the NER model, the slots of the first voice instruction may be entirely the recognition results of the parser, entirely the recognition results of the NER model, or partly the recognition results of the parser and partly those of the NER model, which may be determined according to the actual situation and is not limited by the embodiments of the present disclosure. Implementation 1 may be understood as an application of rule 1 described above, but may also be used independently of rule 1.
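A sketch of implementation 1 under the assumption that each recognizer returns a word-to-word-slot mapping; the dictionary shape is illustrative, not prescribed by the disclosure.

```python
from typing import Dict

def merge_by_word_slot_length(parser_word_slots: Dict[str, str],
                              ner_word_slots: Dict[str, str]) -> Dict[str, str]:
    """For every word, keep the longer of the two recognized word slots."""
    merged = {}
    for word in parser_word_slots.keys() | ner_word_slots.keys():
        p = parser_word_slots.get(word, "")
        n = ner_word_slots.get(word, "")
        merged[word] = p if len(p) >= len(n) else n
    return merged

# Mirrors the "Yellow River Street" example given later in the text.
print(merge_by_word_slot_length({"street": "Yellow River Street"},
                                {"street": "Yellow River"}))
```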
Implementation 2: optionally, the determining the slot of the first voice instruction according to the first slot identification result and the second slot identification result includes:
determining the number of slots in the first slot identification result to obtain a first number of slots;
determining the number of slots in the second slot identification result to obtain a second number of slots;
And determining the first slot identification result or the second slot identification result as the slot of the first voice instruction according to the comparison result of the first slot number and the second slot number.
Further, the determining, according to the comparison result of the first slot number and the second slot number, the first slot identification result or the second slot identification result as the slot of the first voice instruction includes:
Implementation 2-1: determining the slot recognition result corresponding to a first target slot number as the slots of the first voice instruction, wherein the first target slot number is one of the first slot number and the second slot number; or
Implementation 2-2: in a case where the total number of characters covered by the slots in the first slot recognition result is equal to the total number of characters covered by the slots in the second slot recognition result, determining the slot recognition result corresponding to a second target slot number as the slots of the first voice instruction, wherein the second target slot number is one of the first slot number and the second slot number.
In implementation manner 2-1, the electronic device may directly determine, as the slot of the first voice instruction, one of the first slot identification result and the second slot identification result that has a larger number of slots. Specifically, when the number of slots of the first slot recognition result is greater than the number of slots of the second slot recognition result, the slots of the first slot recognition result are determined to be the slots of the first voice instruction, and conversely, the slots of the second slot recognition result are determined to be the slots of the first voice instruction.
In implementation 2-2, the electronic device may first determine whether the total numbers of words covered by the slots in the first slot recognition result and in the second slot recognition result are equal.
If they are equal, the one of the first and second slot recognition results that contains fewer slots may be determined as the slots of the first voice instruction. Specifically, when the total number of words of the first slot recognition result equals that of the second slot recognition result, if the first result contains more slots than the second, the slots of the second result are determined to be the slots of the first voice instruction; otherwise, the slots of the first result are so determined.
If they are not equal, the one of the first and second slot recognition results whose slots cover more words in total is determined as the slots of the first voice instruction. Specifically, when the total numbers of words differ, if the total number of words of the first slot recognition result is greater than that of the second, the slots of the first result are determined to be the slots of the first voice instruction; otherwise, the slots of the second result are so determined.
In implementation 2, the slots of the first voice instruction may be all recognition results of the parser or all recognition results of the NER model, which may be specifically determined according to the actual situation, which is not limited in the embodiments of the present disclosure. Implementation 2-1 may be understood as the application of rule 2-1 described above, but may also be independent of rule 2-1. Implementation 2-2 may be understood as the application of rule 2-2 described above, but may also be independent of rule 2-2.
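A hedged sketch of implementation 2-2, including the unequal-totals branch described above; slot values are assumed to be plain strings, so "total number of words" is approximated by total character count.

```python
from typing import List

def choose_by_slot_count(parser_slots: List[str], ner_slots: List[str]) -> List[str]:
    """Choose one full recognition result following implementation 2-2."""
    p_total = sum(len(s) for s in parser_slots)
    n_total = sum(len(s) for s in ner_slots)
    if p_total == n_total:
        # Equal coverage: fewer slots means longer, more complete slots.
        return parser_slots if len(parser_slots) <= len(ner_slots) else ner_slots
    # Unequal coverage: prefer the result whose slots cover more of the command.
    return parser_slots if p_total > n_total else ner_slots
```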
Implementation 3: optionally, the determining the slot of the first voice instruction according to the first slot identification result and the second slot identification result includes:
determining whether the same character exists in the first slot position identification result and the second slot position identification result;
And under the condition that the same characters exist in the first slot position identification result and the second slot position identification result, merging the first slot position identification result and the second slot position identification result to obtain the slot position of the first voice instruction.
In implementation 3, when the first slot identification result and the second slot identification result have the same character, the first slot identification result and the second slot identification result are combined to obtain the slot of the first voice instruction. Otherwise, the slot position of the first voice command may be determined in other manners, and may specifically be determined according to actual requirements, which is not limited in the embodiments of the present disclosure. Implementation 3 is to be understood as the application of rule 3 described above, but may also be independent of rule 3.
By the method, the slot positions of the voice instructions can be determined by means of the rules, so that the flexibility of slot position determination can be improved.
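A simplified sketch of the merging in implementation 3: here the overlap is detected on the slot strings themselves (suffix/prefix overlap), whereas a real system would more likely compare character positions in the command; that simplification is an assumption.

```python
from typing import Optional

def merge_overlapping(parser_slot: str, ner_slot: str) -> Optional[str]:
    """Splice two slot strings that share characters; return None if they don't."""
    overlap = 0
    for k in range(1, min(len(parser_slot), len(ner_slot)) + 1):
        if parser_slot[-k:] == ner_slot[:k]:
            overlap = k
    if overlap == 0:
        return None   # no shared characters: fall back to the other strategies
    return parser_slot + ner_slot[overlap:]

print(merge_overlapping("Chinese and foreign science", "science names"))
# -> "Chinese and foreign science names", matching the worked example later on
```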
With respect to the third embodiment, optionally, the determining the slot of the first voice command according to the first slot identification result and the second slot identification result includes:
Inputting the first slot position recognition result, the second slot position recognition result and a first text into a classification model, wherein the first text is a text corresponding to the first voice instruction;
Determining the output of the classification model as a slot of the first voice instruction;
The classification model is used for determining whether the slot position input into the classification model is the slot position of the first voice instruction.
The classification model is a binary classification model, which may specifically be implemented by any of the following: a gradient boosting decision tree (GBDT), extreme gradient boosting (XGBoost), naive Bayes, or the like.
When a slot is input into the classification model, it may be tagged with the way it was recognized, i.e. whether it was obtained by the parser or by the NER model. Based on the slot and the first text, the classification model makes a binary judgment on the slot, i.e. it judges whether the slot is a real slot; if so, the slot is output, otherwise it is not.
In this way, the slots of the voice instruction can be determined with the help of the classification model, which can improve the flexibility of slot determination.
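A hedged sketch of this variant using scikit-learn's GradientBoostingClassifier as one possible GBDT implementation; the feature design (slot length, which recognizer produced the slot, whether the slot appears in the command text) is an illustrative assumption, not taken from the disclosure.

```python
from sklearn.ensemble import GradientBoostingClassifier

def to_features(slot_text: str, source: str, command_text: str):
    return [len(slot_text),
            1 if source == "parser" else 0,           # which recognizer produced the slot
            1 if slot_text in command_text else 0]    # slot actually appears in the command

# Toy labeled data: 1 = real slot, 0 = not a real slot.
X = [to_features("Beijing", "parser", "weather in Beijing"),
     to_features("in Bei", "ner", "weather in Beijing"),
     to_features("Zhang San", "ner", "call Zhang San"),
     to_features("call Zh", "parser", "call Zhang San")]
y = [1, 0, 1, 0]
clf = GradientBoostingClassifier().fit(X, y)

candidate = to_features("Beijing", "ner", "weather in Beijing")
keep = clf.predict([candidate])[0] == 1   # output the slot only if judged to be real
```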
It should be noted that, the various optional implementations described in the embodiments of the present disclosure may be implemented in combination with each other without collision with each other, or may be implemented separately, which is not limited to the embodiments of the present disclosure.
For ease of understanding, examples are illustrated below:
Identifying slots by combining the parser and NER allows enumerable slots to be identified accurately and non-enumerable slots to be identified well, so the entity slot information spoken by the user can be identified as fully as possible. The parser and NER can be combined in various ways, which may be determined by the actual application scenario, including but not limited to the following cases:
1. When the slot information contains little non-enumerable information such as person names and times, a parser-priority scheme may be adopted, for example for inputs such as "Sun Zhongshan", "Yellow River Street", "Louis", and the like, which all contain person-name or place-name fragments. Using NER alone is likely to recognize them incompletely, so giving the parser higher priority allows such entity slots to be recognized preferentially, with configuration patterns such as "[name] goes home" and the like.
2. When there is little enumerable information, for example when person names, addresses, times, and the like occur frequently, an NER-priority scheme may be adopted.
3. The priorities of the parser and the NER can be dynamically configured according to the characteristics of the entity slots. In general, the longer recognized entity slot can be preferred: for "Yellow River Street", NER may recognize "Yellow River" while the parser recognizes "Yellow River Street", and the longer "Yellow River Street" is the more suitable result.
4. If the whole sentence is considered, the recognition scheme can be decided dynamically from the total number of slots recognized in the sentence; dynamic programming can be used to select the segmentation in which the recognized words cover the most of the sentence. For example, "present local epidemic situation" may be recognized with slots "present / Japanese / land location" or "present / local / epidemic situation", and the latter word slots are clearly longer and more accurate. Similarly, "Quanyang county and Pinggu county" may be recognized as "Quanyang county / Peace Valley county" or as "Quanyang county / and / Peace Valley county", and the latter is obviously more correct.
5. When the word lengths are the same, the result with fewer recognized slots is usually preferred. For example, "I want to get the fifteen-yuan phone-fee data package" may be recognized as "fifteen-yuan phone fee / data" or as "fifteen-yuan phone-fee data"; the latter semantics are more complete.
6. If overlapping characters appear between the parser and NER results, the two can be merged. For example, for "Chinese and foreign science names", the parser obtains "Chinese and foreign science" and NER obtains "science names"; since there is an overlapping part, they can be merged into "Chinese and foreign science names".
7. A decision-type classification model may be used for the merging, including but not limited to GBDT, XGBoost, naive Bayes, or the like. The model makes a classification judgment on every entity slot recognized by NER and by the parser, which has the benefit that slot decisions can be handled more flexibly. For example, for "the principle of independent autonomy and equal reciprocity", the parser obtains "independent autonomy" and "equal reciprocity" while NER obtains "equal reciprocity"; a decision can then be made by a model, where a trained decision model is obtained by training on labeled corpora, and the decision model yields "independent autonomy" from the parser result and "equal reciprocity" from the NER result.
Referring to fig. 2, fig. 2 is a block diagram illustrating a slot identification apparatus according to an exemplary embodiment. As shown in fig. 2, the slot recognition apparatus 200 includes:
A receiving module 201, configured to receive a first voice instruction;
the recognition module 202 is configured to perform slot recognition on the first voice command by using a parser and a named entity recognition NER model, so as to obtain a first slot recognition result corresponding to the parser and a second slot recognition result corresponding to the NER model;
And the determining module 203 is configured to determine a slot of the first voice instruction according to the first slot identification result and the second slot identification result.
Optionally, the determining module 203 includes:
The first determining unit is used for determining the slot type of each slot in the first slot identification result and the second slot identification result, wherein the slot type comprises an enumeratable slot and a non-enumeratable slot;
and the second determining unit is used for determining the slot positions of the first voice instruction according to the slot position number of each slot position type in the first slot position identification result and the second slot position identification result.
Optionally, the second determining unit is specifically configured to:
Determining the slot in the first slot identification result as the slot of the first voice instruction under the condition that the number of the slot which can enumerate the slot in the first slot identification result and the second slot identification result is larger than a first threshold value;
And determining the slot in the second slot identification result as the slot of the first voice instruction under the condition that the number of the slot which can enumerate the slot in the first slot identification result and the second slot identification result is smaller than or equal to the first threshold value.
Optionally, the first voice instruction includes P words, and P is a positive integer;
the determining module 203 is specifically configured to:
determining a first word slot position corresponding to a target word in the first slot position identification result;
determining a second word slot corresponding to the target word in the second slot identification result;
determining one of the first word slot and the second word slot with a longer length as a target word slot of the target word, wherein the slot of the first voice instruction comprises the target word slot;
Wherein the target word is any word of the P words.
Optionally, the determining module 203 is specifically configured to:
determining the number of slots in the first slot identification result to obtain a first number of slots;
determining the number of slots in the second slot identification result to obtain a second number of slots;
And determining the first slot identification result or the second slot identification result as the slot of the first voice instruction according to the comparison result of the first slot number and the second slot number.
Optionally, the determining module 203 is specifically configured to:
determining a slot identification result corresponding to a first target slot number as a slot of the first voice instruction, wherein the first target slot number is one of the first slot number and the second slot number; or alternatively
And determining a slot identification result corresponding to a second target slot number as the slot of the first voice instruction under the condition that the total number of slots in the first slot identification result is equal to the total number of slots in the second slot identification result, wherein the second target slot number is one of the first slot number and the second slot number.
Optionally, the determining module 203 is specifically configured to:
determining whether the same character exists in the first slot position identification result and the second slot position identification result;
And under the condition that the same characters exist in the first slot position identification result and the second slot position identification result, merging the first slot position identification result and the second slot position identification result to obtain the slot position of the first voice instruction.
Optionally, the determining module 203 includes:
the input unit is used for inputting the first slot position recognition result, the second slot position recognition result and a first text into a classification model, wherein the first text is a text corresponding to the first voice instruction;
a third determining unit, configured to determine an output of the classification model as a slot of the first voice instruction;
The classification model is used for determining whether the slot position input into the classification model is the slot position of the first voice instruction.
The slot recognition apparatus 200 can implement each process implemented by the electronic device in the method embodiments of the present disclosure and achieve the same beneficial effects; to avoid repetition, the details are not described here again.
Referring to fig. 3, fig. 3 is a block diagram of an electronic device shown according to an exemplary embodiment. As shown in fig. 3, the electronic device 300 includes: a processor 301, a memory 302, a user interface 303, a transceiver 304 and a bus interface.
Wherein, in the embodiment of the present disclosure, the electronic device 300 further includes: a program stored on the memory 302 and executable on the processor 301, which when executed by the processor 301 performs the steps of:
Receiving a first voice instruction through the user interface 303;
Respectively carrying out slot recognition on the first voice instruction by adopting a parser and a named entity recognition (NER) model to obtain a first slot recognition result corresponding to the parser and a second slot recognition result corresponding to the NER model;
And determining the slot position of the first voice instruction according to the first slot position identification result and the second slot position identification result.
Optionally, the program when executed by the processor 301 performs the steps of:
determining the slot type of each slot in the first slot identification result and the second slot identification result, wherein the slot type comprises an enumeratable slot and a non-enumeratable slot;
and determining the slot position of the first voice instruction according to the slot position number of each slot position type in the first slot position identification result and the second slot position identification result.
Optionally, the target slot type is an enumerated slot;
The program when executed by the processor 301 performs the steps of:
Determining the slot in the first slot identification result as the slot of the first voice instruction under the condition that the number of the slot which can enumerate the slot in the first slot identification result and the second slot identification result is larger than a first threshold value;
And determining the slot in the second slot identification result as the slot of the first voice instruction under the condition that the number of the slot which can enumerate the slot in the first slot identification result and the second slot identification result is smaller than or equal to the first threshold value.
Optionally, the first voice instruction includes P words, and P is a positive integer; the program when executed by the processor 301 performs the steps of:
determining a first word slot position corresponding to a target word in the first slot position identification result;
determining a second word slot corresponding to the target word in the second slot identification result;
determining one of the first word slot and the second word slot with a longer length as a target word slot of the target word, wherein the slot of the first voice instruction comprises the target word slot;
Wherein the target word is any word of the P words.
Optionally, the program when executed by the processor 301 performs the steps of:
determining the number of slots in the first slot identification result to obtain a first number of slots;
determining the number of slots in the second slot identification result to obtain a second number of slots;
And determining the first slot identification result or the second slot identification result as the slot of the first voice instruction according to the comparison result of the first slot number and the second slot number.
Optionally, the program when executed by the processor 301 performs the steps of:
determining a slot identification result corresponding to a first target slot number as a slot of the first voice instruction, wherein the first target slot number is one of the first slot number and the second slot number; or alternatively
And determining a slot identification result corresponding to a second target slot number as the slot of the first voice instruction under the condition that the total number of slots in the first slot identification result is equal to the total number of slots in the second slot identification result, wherein the second target slot number is one of the first slot number and the second slot number.
Optionally, the program when executed by the processor 301 performs the steps of:
determining whether the same character exists in the first slot position identification result and the second slot position identification result;
And under the condition that the same characters exist in the first slot position identification result and the second slot position identification result, merging the first slot position identification result and the second slot position identification result to obtain the slot position of the first voice instruction.
Optionally, the program when executed by the processor 301 performs the steps of:
Inputting the first slot position recognition result, the second slot position recognition result and a first text into a classification model, wherein the first text is a text corresponding to the first voice instruction;
Determining the output of the classification model as a slot of the first voice instruction;
The classification model is used for determining whether the slot position input into the classification model is the slot position of the first voice instruction.
In fig. 3, a bus architecture may comprise any number of interconnected buses and bridges, with one or more processors, represented by processor 301, and various circuits of memory, represented by memory 302, being linked together. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators, power management circuits, etc., which are well known in the art and, therefore, will not be described further herein. The bus interface provides an interface. Transceiver 304 may be a number of elements, including a transmitter and a receiver, providing a means for communicating with various other apparatus over a transmission medium. The user interface 303 may also be an interface capable of externally or internally connecting to required devices, including but not limited to a keypad, a display, a speaker, a microphone, a joystick, and the like.
The processor 301 is responsible for managing the bus architecture and general processing, and the memory 302 may store data used by the processor 301 in performing operations.
The electronic device 300 can implement the respective processes of the above-described method embodiments, and in order to avoid repetition, a description thereof will be omitted.
The embodiment of the present disclosure further provides a readable storage medium, on which a program is stored, where the program, when executed by a processor, implements each process of the above embodiment of the slot identification method and can achieve the same technical effects; to avoid repetition, no further description is given here. The readable storage medium may be, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiments of the present disclosure have been described above with reference to the accompanying drawings, but the present disclosure is not limited to the above-described embodiments, which are merely illustrative rather than restrictive. Many variations may be made by those of ordinary skill in the art without departing from the spirit of the disclosure and the scope of the claims, and all such variations fall within the protection of the present disclosure.
Claims (8)
1. A method for identifying a slot, the method comprising:
Receiving a first voice instruction;
Respectively performing slot recognition on the first voice instruction by using a parser and a named entity recognition (NER) model to obtain a first slot recognition result corresponding to the parser and a second slot recognition result corresponding to the NER model;
determining the slot position of the first voice instruction according to the first slot position identification result and the second slot position identification result;
The determining the slot position of the first voice instruction according to the first slot position identification result and the second slot position identification result comprises the following steps:
determining the slot type of each slot in the first slot identification result and the second slot identification result, wherein the slot type comprises an enumerable slot and a non-enumerable slot;
Determining the slot position of the first voice instruction according to the number of slot positions of each slot position type in the first slot position identification result and the second slot position identification result;
the determining the slot position of the first voice instruction according to the number of slot positions of each slot position type in the first slot position identification result and the second slot position identification result comprises the following steps:
Determining the slot in the first slot identification result as the slot of the first voice instruction under the condition that the number of enumerable slots in the first slot identification result and the second slot identification result is larger than a first threshold value;
Determining the slot in the second slot identification result as the slot of the first voice instruction under the condition that the number of enumerable slots in the first slot identification result and the second slot identification result is smaller than or equal to the first threshold value;
the first voice instruction comprises P words, and P is a positive integer;
The determining the slot position of the first voice instruction according to the first slot position identification result and the second slot position identification result comprises the following steps:
determining a first word slot position corresponding to a target word in the first slot position identification result;
determining a second word slot corresponding to the target word in the second slot identification result;
determining whichever of the first word slot and the second word slot has the longer length as the target word slot of the target word, wherein the slot of the first voice instruction comprises the target word slot;
Wherein the target word is any one of the P words.
2. The method of claim 1, wherein determining the slot of the first voice instruction based on the first slot identification result and the second slot identification result comprises:
determining the number of slots in the first slot identification result to obtain a first number of slots;
determining the number of slots in the second slot identification result to obtain a second number of slots;
And determining the first slot identification result or the second slot identification result as the slot of the first voice instruction according to the comparison result of the first slot number and the second slot number.
3. The method of claim 2, wherein determining the first slot identification result or the second slot identification result as the slot of the first voice instruction based on the comparison of the first slot number and the second slot number comprises:
determining a slot identification result corresponding to a first target slot number as a slot of the first voice instruction, wherein the first target slot number is one of the first slot number and the second slot number; or
And determining a slot identification result corresponding to a second target slot number as the slot of the first voice instruction under the condition that the total number of slots in the first slot identification result is equal to the total number of slots in the second slot identification result, wherein the second target slot number is one of the first slot number and the second slot number.
4. The method of claim 1, wherein determining the slot of the first voice instruction based on the first slot identification result and the second slot identification result comprises:
determining whether the same characters exist in the first slot position identification result and the second slot position identification result;
And under the condition that the same characters exist in the first slot position identification result and the second slot position identification result, merging the first slot position identification result and the second slot position identification result to obtain the slot position of the first voice instruction.
5. The method of claim 1, wherein determining the slot of the first voice instruction based on the first slot identification result and the second slot identification result comprises:
Inputting the first slot position recognition result, the second slot position recognition result and a first text into a classification model, wherein the first text is a text corresponding to the first voice instruction;
Determining the output of the classification model as a slot of the first voice instruction;
The classification model is used for determining whether the slot position input into the classification model is the slot position of the first voice instruction.
6. A slot identification device, comprising:
the receiving module is used for receiving a first voice instruction;
The recognition module is used for respectively performing slot recognition on the first voice instruction by using a parser and a named entity recognition (NER) model to obtain a first slot recognition result corresponding to the parser and a second slot recognition result corresponding to the NER model;
The determining module is used for determining the slot position of the first voice instruction according to the first slot position identification result and the second slot position identification result;
The determining module includes:
The first determining unit is used for determining the slot type of each slot in the first slot identification result and the second slot identification result, wherein the slot type comprises an enumerable slot and a non-enumerable slot;
the second determining unit is used for determining the slot positions of the first voice instruction according to the number of slot positions of each slot position type in the first slot position identification result and the second slot position identification result;
The second determining unit is specifically configured to:
Determining the slot in the first slot identification result as the slot of the first voice instruction under the condition that the number of enumerable slots in the first slot identification result and the second slot identification result is larger than a first threshold value;
Determining the slot in the second slot identification result as the slot of the first voice instruction under the condition that the number of enumerable slots in the first slot identification result and the second slot identification result is smaller than or equal to the first threshold value;
the first voice instruction comprises P words, and P is a positive integer;
The determining module is specifically configured to:
determining a first word slot position corresponding to a target word in the first slot position identification result;
determining a second word slot corresponding to the target word in the second slot identification result;
determining whichever of the first word slot and the second word slot has the longer length as the target word slot of the target word, wherein the slot of the first voice instruction comprises the target word slot;
Wherein the target word is any one of the P words.
7. An electronic device comprising a processor, a memory and a program stored on the memory and executable on the processor, the program when executed by the processor implementing the steps of the slot identification method of any one of claims 1 to 5.
8. A readable storage medium, characterized in that the readable storage medium has stored thereon a program which, when executed by a processor, implements the steps of the slot identification method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111287097.5A CN114023319B (en) | 2021-11-02 | 2021-11-02 | Slot position identification method and device, electronic equipment and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111287097.5A CN114023319B (en) | 2021-11-02 | 2021-11-02 | Slot position identification method and device, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114023319A CN114023319A (en) | 2022-02-08 |
CN114023319B true CN114023319B (en) | 2024-09-17 |
Family
ID=80059757
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111287097.5A Active CN114023319B (en) | 2021-11-02 | 2021-11-02 | Slot position identification method and device, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114023319B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109918680A (en) * | 2019-03-28 | 2019-06-21 | 腾讯科技(上海)有限公司 | Entity recognition method, device and computer equipment |
CN111506723A (en) * | 2020-07-01 | 2020-08-07 | 平安国际智慧城市科技股份有限公司 | Question-answer response method, device, equipment and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10304444B2 (en) * | 2016-03-23 | 2019-05-28 | Amazon Technologies, Inc. | Fine-grained natural language understanding |
CN111833872B (en) * | 2020-07-08 | 2021-04-30 | 北京声智科技有限公司 | Voice control method, device, equipment, system and medium for elevator |
CN111816180B (en) * | 2020-07-08 | 2022-02-08 | 北京声智科技有限公司 | Method, device, equipment, system and medium for controlling elevator based on voice |
CN112149429A (en) * | 2020-10-21 | 2020-12-29 | 成都小美伴旅信息技术有限公司 | High-accuracy semantic understanding and identifying method based on word slot order model |
CN112686674A (en) * | 2020-12-25 | 2021-04-20 | 科讯嘉联信息技术有限公司 | Customer service conversation work order summarizing method |
- 2021-11-02: CN CN202111287097.5A patent CN114023319B (en), status: Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109918680A (en) * | 2019-03-28 | 2019-06-21 | 腾讯科技(上海)有限公司 | Entity recognition method, device and computer equipment |
CN111506723A (en) * | 2020-07-01 | 2020-08-07 | 平安国际智慧城市科技股份有限公司 | Question-answer response method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114023319A (en) | 2022-02-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109063221B (en) | Query intention identification method and device based on mixed strategy | |
CN111708869B (en) | Processing method and device for man-machine conversation | |
CN110457689B (en) | Semantic processing method and related device | |
CN111382248B (en) | Question replying method and device, storage medium and terminal equipment | |
CN111738016A (en) | Multi-intention recognition method and related equipment | |
CN111401064B (en) | Named entity identification method and device and terminal equipment | |
CN111967264B (en) | Named entity identification method | |
CN113326702B (en) | Semantic recognition method, semantic recognition device, electronic equipment and storage medium | |
CN113220835B (en) | Text information processing method, device, electronic equipment and storage medium | |
CN113221555A (en) | Keyword identification method, device and equipment based on multitask model | |
CN111737990B (en) | Word slot filling method, device, equipment and storage medium | |
CN111046674B (en) | Semantic understanding method and device, electronic equipment and storage medium | |
CN114281996B (en) | Method, device, equipment and storage medium for classifying long text | |
CN112487813B (en) | Named entity recognition method and system, electronic equipment and storage medium | |
CN114023319B (en) | Slot position identification method and device, electronic equipment and readable storage medium | |
CN115270728A (en) | Conference record processing method, device, equipment and storage medium | |
CN114330359A (en) | Semantic recognition method and device and electronic equipment | |
CN109977420B (en) | Offline semantic recognition adjusting method, device, equipment and storage medium | |
CN110750967B (en) | Pronunciation labeling method and device, computer equipment and storage medium | |
CN112632956A (en) | Text matching method, device, terminal and storage medium | |
CN110895924B (en) | Method and device for reading document content aloud, electronic equipment and readable storage medium | |
CN113806475B (en) | Information reply method, device, electronic equipment and storage medium | |
CN117235205A (en) | Named entity recognition method, named entity recognition device and computer readable storage medium | |
CN114116975A (en) | Multi-intention identification method and system | |
CN115221298A (en) | Question and answer matching method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |