CN114818665B - Multi-purpose recognition method and system based on bert+bilstm+crf and xgboost model - Google Patents
Multi-purpose recognition method and system based on bert+bilstm+crf and xgboost model
- Publication number
- CN114818665B CN114818665B CN202210432349.7A CN202210432349A CN114818665B CN 114818665 B CN114818665 B CN 114818665B CN 202210432349 A CN202210432349 A CN 202210432349A CN 114818665 B CN114818665 B CN 114818665B
- Authority
- CN
- China
- Prior art keywords
- model
- bert
- intention
- crf
- bilstm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/216—Parsing using statistical methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/284—Lexical analysis, e.g. tokenisation or collocates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Probability & Statistics with Applications (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Character Discrimination (AREA)
- Machine Translation (AREA)
Abstract
The invention belongs to the field of natural language understanding technology, and particularly relates to a multi-purpose recognition method and system based on the bert+bilstm+crf and xgboost models. In the technical scheme, the preprocessed data set is passed through bert to obtain dynamic word vectors, which differ from the word vectors obtained with the word2vec or glove models in the past: the word vectors output by the bert model are dynamic and can resolve word ambiguity. The word vectors are converted into sentence vectors through bilstm+crf; the bilstm+crf model can simultaneously process long-distance context information and obtains the optimal sentence-vector prediction sequence from the relations between neighbouring labels. The Xgboost model is used for main-intention recognition, since its recognition accuracy is higher and it is more flexible. After all main intentions are obtained, the TF-IDF model is used to select a standard intention, which serves as the basis for judgment. Sentence vectors processed by the bert+bilstm+crf model are then input into a new bert model, which finally outputs the sub-intentions.
Description
Technical Field
The invention belongs to the field of natural language understanding technology, and particularly relates to a multi-purpose recognition method and system based on the bert+bilstm+crf and xgboost models.
Background
Intention recognition mainly means that, in the interaction between a person and a machine, the machine carries out natural language understanding on the voice or text sent by the user, judges the actual intention of the user and provides accurate service for the user.
Currently, intent recognition is mostly applied to classification or matching tasks that cope with a single intent of a user. Single intent, as the name implies, means that the text or speech uttered by the user has one and only one intent; in other cases, where the speech or text interaction uttered by the user contains multiple intents, single-intent recognition may be difficult to apply to such interactions.
In order to achieve multi-intention recognition, the current mainstream approach is to split the instruction sent by the user. However, such splitting relies only on surface features of the instruction, such as punctuation-based splitting, verb-based splitting, and the like, and it fails when the user inputs a voice command without punctuation or when a single verb does not fully express the user's intent.
Disclosure of Invention
Aiming at the problems in the prior art that traditional single-intention recognition cannot cope with a user's multi-intention language instruction, and that the traditional multi-intention scheme of splitting sentences cannot fundamentally split one multi-intention sentence into a plurality of single-intention sentences, the invention aims to provide a multi-intention recognition method based on the bert+bilstm+crf and xgboost models, which extracts the multiple intentions contained in a sentence by analyzing the sentence itself. On the one hand, the invention uses the bert+bilstm+crf and Xgboost models for main-intention identification; on the other hand, intention classification is carried out with bert+bilstm, so that intentions can be effectively assigned to their corresponding categories.
The technical scheme adopted by the invention is as follows:
the multi-purpose recognition method based on the bert+bilstm+crf and xgboost models is characterized by comprising the following steps of:
step 1: constructing a data set by obtaining interactive text or voice information of a user;
step 2: preprocessing the data set to obtain standard format data;
step 3: converting the standard format data into feature sentence vectors through a bert+bilstm+crf model;
step 4: training a corresponding feature sentence vector model through the Xgboost model to identify intention, identifying user interaction intention and outputting all idea;
step 5: calculating the contribution of intentions in all text data of the same main intention to the intentions by using a TF-IDF model, determining a standard intention, wherein other intentions are sub-intentions, and taking sentence vectors of the standard intention as standard sentence vectors;
step 6: classifying each sub-intention through a bert model and outputting sub-intention categories.
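The above steps can be illustrated, purely as a non-authoritative sketch, by the following Python pseudo-pipeline; every function name and parameter below is a hypothetical placeholder introduced for illustration and does not come from the disclosure itself.

```python
# Hypothetical orchestration of steps 1-6; all component callables are supplied
# by the caller, so none of the names here are claimed to exist in any library.
def multi_intent_recognition(raw_texts, preprocess, encode_sentence,
                             predict_main_intents, select_standard_intent,
                             classify_sub_intent):
    # step 2: preprocess the data set into standard-format data
    standard_data = [preprocess(t) for t in raw_texts]
    # step 3: bert+bilstm+crf converts standard-format data into feature sentence vectors
    sentence_vectors = [encode_sentence(t) for t in standard_data]
    # step 4: xgboost identifies the user interaction intention and outputs all main intentions
    main_intents = predict_main_intents(sentence_vectors)
    # step 5: TF-IDF picks the standard intention; the remaining intentions are sub-intentions
    standard_intent, sub_vectors = select_standard_intent(sentence_vectors, main_intents)
    # step 6: a bert classifier outputs the category of each sub-intention
    sub_intent_categories = [classify_sub_intent(v) for v in sub_vectors]
    return main_intents, standard_intent, sub_intent_categories
```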
By adopting the technical scheme, the preprocessed data set is passed through bert to obtain dynamic word vectors, which differ from the word vectors obtained with the word2vec or glove models in the past: the word vectors output by the bert model are dynamic and can resolve word ambiguity. The word vectors are converted into sentence vectors through bilstm+crf; the bilstm+crf model can simultaneously process long-distance context information and obtains the optimal sentence-vector prediction sequence from the relations between neighbouring labels. The Xgboost model is used for main-intention recognition, since its recognition accuracy is higher and it is more flexible. After all main intentions are obtained, the TF-IDF model is used to select a standard intention, which is taken as the basis for intention judgment. Sentence vectors processed by the bert+bilstm+crf model are input into a new bert model, and the sub-intentions are finally output.
Further, the preprocessing in step 2 includes removing stop words from the data set and labeling it.
Further, the feature sentence vector includes a word vector, a part-of-speech vector, and a named entity vector.
Further, the step 3 specifically includes: the standard format data is converted into word vectors by using the bert model, then the part-of-speech vectors and the named entity vectors are calculated by using the bilstm+crf model, and all main intentions of the user are output by using the Xgboost model.
Further, the formulas of the bert+bilstm+crf model are as follows:

(1) head_i = Attention(Q·W_i^Q, K·W_i^K, V·W_i^V)

(2) Attention(Q, K, V) = softmax(Q·K^T / √d_k)·V

(3) MultiHead(Q, K, V) = Concat(head_1, …, head_n)·W^O

wherein: Q, K and V are matrices of word vectors, and d_k is the embedding-layer dimension; the multi-head self-attention mechanism projects Q, K and V and then concatenates the single-head self-attention results, as in formula (2) and formula (3).
A multi-intention recognition system based on the bert+bilstm+crf and xgboost models, which is used for executing the above multi-intention recognition method, comprises a voice receiving module, whose function is to receive and recognize the user's voice; a voice-to-text module, which converts the received voice into text; an intention recognition module, which selects the standard intention and recognizes the multiple intentions based on bert+bilstm+crf; and an interaction module, which executes the user's intention after it has been recognized.
An electronic device, the electronic device comprising: the device comprises a shell, a processor, a memory, a circuit board and a power circuit, wherein the circuit board is arranged in a space surrounded by the shell, and the processor and the memory are arranged on the circuit board; a power supply circuit for supplying power to each circuit or device of the electronic apparatus; the memory is used for storing executable program codes; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for executing the multi-intention recognition method described above.
A computer-readable storage medium storing one or more programs executable by one or more processors to implement the multi-intent recognition method described above.
In summary, due to the adoption of the technical scheme, the beneficial effects of the invention are as follows:
1) The word vectors obtained in training the bert+bilstm+crf model are dynamic representations of polysemous words and make full use of context semantic information; compared with the traditional bert model, the optimal sentence-vector sequence can be obtained through the neighbouring-label relations;
2) All main intentions of the user are output accurately by using the Xgboost model;
3) The TF-IDF model is used as the standard-intention selection model;
4) Classifying the sub-intentions by using the bert model improves the classification accuracy.
Drawings
The invention will now be described by way of example and with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of a multi-purpose recognition method based on the bert+bilstm+crf and Xgboost models in the invention;
FIG. 2 is a schematic structural diagram of a bert model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a fusion model of bert+bilstm+crf according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
In the description of the embodiments of the present application, it should be noted that, directions or positional relationships indicated by terms such as "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., are directions or positional relationships based on those shown in the drawings, or those that are conventionally put in use of the inventive product, are merely for convenience of description and simplicity of description, and are not indicative or implying that the apparatus or element to be referred to must have a specific direction, be configured and operated in a specific direction, and thus should not be construed as limiting the present application. Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
The present invention will be described in detail with reference to fig. 1 to 4.
As shown in fig. 1, an embodiment of a multi-purpose recognition method based on bert+bilstm+crf and xgboost models according to the present invention includes:
1. constructing a data set by obtaining interactive text or voice information of a user;
2. preprocessing the data set, including removing stop words, labeling and the like, to finally obtain standard format data;
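A minimal sketch of this preprocessing step is given below; the stop-word set, the example sentence and the label value are assumptions made for illustration only.

```python
# Illustrative preprocessing: stop-word removal plus labeling into a standard
# (text, label) record. The stop words and the label scheme are assumed.
STOP_WORDS = {"的", "了", "请", "把"}  # assumed example stop words

def preprocess(text: str, label: str) -> dict:
    kept = [ch for ch in text if ch.strip() and ch not in STOP_WORDS]
    return {"text": "".join(kept), "label": label}

print(preprocess("请把空调打开", label="device_control"))
# {'text': '空调打开', 'label': 'device_control'}
```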
3. converting the data into word vectors by using a bert model, and calculating sentence vectors, named entity vectors and the like by using a bilstm+crf model. First, word segmentation is performed to obtain a segmented text sequence; then whole-word masking is applied to some of the words in the segmented sequence, a special token [CLS] is added at the beginning of the sequence, and sentences are separated by the token [SEP]. The input representation of each token in the sequence consists of 3 parts: token embedding, segment embedding and position embedding. The sequence is input into a bidirectional Transformer for feature extraction, and a sequence vector containing rich semantic features is finally obtained.
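As one way to realize this input construction, the sketch below uses the Hugging Face transformers tokenizer and encoder; the checkpoint name "bert-base-chinese" and the example sentence are assumptions, and whole-word masking is a pre-training detail that is not reproduced here.

```python
# Sketch of building the [CLS] ... [SEP] input and extracting contextual word
# vectors with a pretrained bert encoder; checkpoint and sentence are assumed.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
encoder = BertModel.from_pretrained("bert-base-chinese")

encoded = tokenizer("打开空调并播放音乐", return_tensors="pt")
# input_ids already include [CLS] and [SEP]; token_type_ids feed the segment
# embeddings, and position embeddings are added inside the model.
with torch.no_grad():
    outputs = encoder(**encoded)

word_vectors = outputs.last_hidden_state  # (1, seq_len, 768) dynamic word vectors
```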
4. Training a corresponding sentence vector model by using the Xgboost model to identify intention: the Xgboost model divides the sample features of the input sentence vectors, then evaluates the importance of the user interaction intentions through a weight mechanism, and outputs all main intentions;
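A minimal sketch of this main-intention classifier is shown below; the feature dimension, the number of intention classes, the random training data and the hyper-parameters are all illustrative assumptions rather than values from the embodiment.

```python
# Illustrative xgboost main-intention classifier over sentence vectors;
# shapes, labels and hyper-parameters are assumed.
import numpy as np
import xgboost as xgb

X_train = np.random.rand(1000, 768)           # assumed feature sentence vectors
y_train = np.random.randint(0, 5, size=1000)  # assumed main-intention ids (5 classes)

clf = xgb.XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
clf.fit(X_train, y_train)

main_intents = clf.predict(np.random.rand(3, 768))  # predicted main intentions
```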
5. selecting a standard sentence vector: the contribution of each intention in all text data of the same main intention is calculated with a TF-IDF model, and the standard intention is determined. TF-IDF is a statistical method for evaluating the importance of a word to a document in a document collection or corpus. The importance of a word increases proportionally with the number of times it appears in the document, but decreases inversely with the frequency of its occurrence in the corpus. The main idea of TF-IDF is: if a word appears frequently in one article (high TF) and rarely in other articles, the word or phrase is considered to have good category discrimination and to be suitable for classification.
TF is the term frequency (Term Frequency):

tf_{i,j} = n_{i,j} / Σ_k n_{k,j}

where n_{i,j} is the number of occurrences of the word t_i in the article d_j, and the denominator is the sum of the numbers of occurrences of all words in the article d_j.

IDF is the inverse document frequency (Inverse Document Frequency). The IDF of a particular term can be obtained by dividing the total number of articles by the number of documents containing the term, and then taking the logarithm of the quotient. The fewer the documents containing the term t, the larger the IDF, indicating that the term has good category discrimination:

idf_i = log( |D| / |{ j : t_i ∈ d_j }| )

where |D| is the total number of articles in the corpus and |{ j : t_i ∈ d_j }| is the number of articles containing the word t_i. If the word does not appear in the corpus at all, this would result in a denominator of 0, so 1 + |{ j : t_i ∈ d_j }| is typically used instead.

Then TF-IDF = TF × IDF.
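The formulas above can be written directly in code; the toy corpus below is an assumption used only to show the calculation.

```python
# Direct implementation of the TF, IDF and TF-IDF formulas above; the corpus is assumed.
import math
from collections import Counter

docs = [["open", "air", "conditioner"],
        ["play", "music"],
        ["open", "music", "app"]]

def tf(term, doc):
    counts = Counter(doc)
    return counts[term] / sum(counts.values())        # n_{i,j} / sum_k n_{k,j}

def idf(term, corpus):
    containing = sum(1 for d in corpus if term in d)
    return math.log(len(corpus) / (1 + containing))   # smoothed 1 + |{j : t in d_j}|

def tf_idf(term, doc, corpus):
    return tf(term, doc) * idf(term, corpus)

print(tf_idf("open", docs[0], docs))
```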
6. The sub-intentions are classified by using the bert model, and the sub-intention categories are output.
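One possible realization of this step is a sequence-classification head on top of bert, sketched below with the Hugging Face transformers API; the checkpoint, the number of sub-intention categories and the example input are assumptions.

```python
# Sketch of sub-intention classification with a bert classification head;
# checkpoint, label count and example text are assumed.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained("bert-base-chinese", num_labels=8)

inputs = tokenizer("播放音乐", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

sub_intent_category = int(logits.argmax(dim=-1))  # predicted sub-intention category
```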
Fig. 2 shows the bert model, into which a standard text corpus is mainly input to obtain the final word vectors. The most critical part of bert is the Transformer structure, and the key part of its encoder is the self-attention mechanism, which regulates the weight-coefficient matrix through the correlation between words in the same sentence to finally obtain the word vectors:

head_i = Attention(Q·W_i^Q, K·W_i^K, V·W_i^V)    (1)

Attention(Q, K, V) = softmax(Q·K^T / √d_k)·V    (2)

MultiHead(Q, K, V) = Concat(head_1, …, head_n)·W^O    (3)

where Q, K and V are matrices of word vectors and d_k is the dimension of the embedding layer; the multi-head self-attention mechanism projects Q, K and V and then concatenates the single-head self-attention results, as in formula (2) and formula (3).
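A compact numpy sketch of formulas (1)-(3) follows; the token count, model dimension and random projection matrices are assumptions made only to show the computation.

```python
# Numpy sketch of scaled dot-product attention (2) and multi-head attention (1), (3);
# all dimensions and weight matrices are assumed for illustration.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V          # formula (2)

def multi_head(Q, K, V, n_heads=4, d_model=64):
    d_k = d_model // n_heads
    rng = np.random.default_rng(0)
    heads = []
    for _ in range(n_heads):
        WQ, WK, WV = (rng.standard_normal((d_model, d_k)) for _ in range(3))
        heads.append(attention(Q @ WQ, K @ WK, V @ WV))  # formula (1), one head
    WO = rng.standard_normal((n_heads * d_k, d_model))
    return np.concatenate(heads, axis=-1) @ WO           # formula (3)

x = np.random.default_rng(1).standard_normal((5, 64))    # 5 tokens, d_model = 64
print(multi_head(x, x, x).shape)                         # (5, 64)
```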
Fig. 3 shows the combination of the bert model and the bilstm+crf model. The bilstm model combines a forward and a backward lstm network, so that the representation at each time step contains both forward and backward information. The crf module effectively handles the relations between adjacent labels, which bilstm alone cannot model, and outputs the optimal sentence-vector sequence.
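A minimal PyTorch sketch of such a BiLSTM encoder with a CRF layer is given below; it assumes the third-party pytorch-crf package (torchcrf) and arbitrary dimensions, and is a simplified illustration rather than the exact network of Fig. 3.

```python
# Simplified BiLSTM+CRF sketch; assumes the pytorch-crf package and made-up sizes.
import torch
import torch.nn as nn
from torchcrf import CRF

class BiLSTMCRF(nn.Module):
    def __init__(self, input_dim=768, hidden_dim=256, num_tags=9):
        super().__init__()
        # forward + backward lstm over the bert word vectors
        self.bilstm = nn.LSTM(input_dim, hidden_dim // 2,
                              batch_first=True, bidirectional=True)
        self.emit = nn.Linear(hidden_dim, num_tags)  # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)   # models adjacent-label relations

    def loss(self, word_vecs, tags, mask):
        emissions = self.emit(self.bilstm(word_vecs)[0])
        return -self.crf(emissions, tags, mask=mask)  # negative log-likelihood

    def decode(self, word_vecs, mask):
        emissions = self.emit(self.bilstm(word_vecs)[0])
        return self.crf.decode(emissions, mask=mask)  # optimal label sequence

model = BiLSTMCRF()
vecs = torch.randn(2, 10, 768)                        # assumed bert word vectors
mask = torch.ones(2, 10, dtype=torch.bool)
print(model.decode(vecs, mask))
```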
We compare the multi-intent recognition approach of the present example with the conventional approach:
index/technique | The technical method | Conventional method |
Recall | 84.2% | 80.7% |
Precision | 85.6% | 81.3% |
f1-score | 84.4% | 81.1% |
In another aspect, an embodiment of the present invention further provides a multi-intention recognition system based on the bert+bilstm+crf and xgboost models, for performing the method described in any one of the foregoing embodiments, comprising a voice receiving module, whose function is to receive and recognize the user's voice; a voice-to-text module, which converts the received voice into text; an intention recognition module, which selects the standard intention and recognizes the multiple intentions based on bert+bilstm+crf; and an interaction module, which executes the user's intention after it has been recognized.
Another aspect of the embodiment of the present invention further provides an electronic device, in which the flow of the embodiment of fig. 1 of the present invention may be implemented. As shown in fig. 4, the electronic device may include: a shell, a processor, a memory, a circuit board and a power circuit, wherein the circuit board is arranged in a space surrounded by the shell, and the processor and the memory are arranged on the circuit board; the power circuit is used for supplying power to each circuit or device of the electronic apparatus; the memory is used for storing executable program code; and the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for performing the method according to any of the foregoing embodiments.
The specific implementation of the above steps by the processor and the processing of the steps further performed by running executable program codes may be referred to the description of the embodiment of fig. 1 of the present invention, and will not be repeated herein.
The electronic device exists in a variety of forms including, but not limited to:
(1) A mobile communication device: such devices are characterized by mobile communication capabilities and are primarily aimed at providing voice, data communications. Such terminals include: smart phones (e.g., iPhone), multimedia phones, functional phones, and low-end phones, etc.
(2) Ultra mobile personal computer device: such devices are in the category of personal computers, having computing and processing functions, and generally also having mobile internet access characteristics. Such terminals include: PDA, MID, and UMPC devices, etc., such as iPad.
(3) Portable entertainment device: such devices may display and play multimedia content. The device comprises: audio, video players (e.g., iPod), palm game consoles, electronic books, and smart toys and portable car navigation devices.
(4) A server: the configuration of a server includes a processor, a hard disk, memory, a system bus, and the like; a server is similar to a general computer architecture, but since it must provide highly reliable services, it has high requirements in terms of processing capacity, stability, reliability, security, scalability, manageability, and the like.
(5) Other electronic devices with data interaction functions.
Another aspect of the embodiments of the present invention also provides a computer-readable storage medium storing one or more programs executable by one or more processors to implement the aforementioned multi-intention recognition method.
Aiming at the input text of a user question, the invention automatically carries out multi-intention recognition without being limited to punctuation, sentence patterns and syntactic analysis, and automatically segments the intentions and generates answers; when the user's intention is unclear, intention convergence can be realized through automatic follow-up questions after self-reasoning, so that the question recognition rate and accuracy are effectively improved, the flexibility of the question-answering robot is greatly improved, and the dialogue is natural.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention, and are intended to be included within the scope of the appended claims and description.
Claims (8)
1. The multi-purpose recognition method based on the bert+bilstm+crf and xgboost models is characterized by comprising the following steps of:
step 1: constructing a data set by obtaining interactive text or voice information of a user;
step 2: preprocessing the data set to obtain standard format data;
step 3: converting the standard format data into feature sentence vectors through a bert+bilstm+crf model;
step 4: training a corresponding feature sentence vector model through the Xgboost model to identify intention, identifying user interaction intention and outputting all idea;
step 5: calculating the contribution of intentions in all text data of the same main intention to the intentions by using a TF-IDF model, determining a standard intention, wherein other intentions are sub-intentions, and taking sentence vectors of the standard intention as standard sentence vectors;
step 6: classifying each sub-intention through a bert model and outputting sub-intention categories.
2. The method of claim 1, wherein the preprocessing in step 2 includes removing stop words from the data set and labeling it.
3. The method of claim 1, wherein the feature sentence vectors include word vectors, part-of-speech vectors, and named entity vectors.
4. The multi-purpose recognition method based on the bert+bilstm+crf and xgboost models according to claim 3, wherein the step 3 specifically comprises: the standard format data is converted into word vectors by using the bert model, and then the part-of-speech vectors and the named entity vectors are calculated by using the bilstm+crf model.
5. The multi-purpose recognition method based on the bert+bilstm+crf and xgboost models according to claim 1, wherein the formulas of the bert+bilstm+crf model are as follows:

(1) head_i = Attention(Q·W_i^Q, K·W_i^K, V·W_i^V)

(2) Attention(Q, K, V) = softmax(Q·K^T / √d_k)·V

(3) MultiHead(Q, K, V) = Concat(head_1, …, head_n)·W^O

wherein: Q, K and V are matrices of word vectors, and d_k is the embedding-layer dimension; the multi-head self-attention mechanism projects Q, K and V and then concatenates the single-head self-attention results, as in formula (2) and formula (3).
6. A multi-purpose recognition system based on the bert+bilstm+crf and xgboost models, for performing the method of any of claims 1-5, comprising a voice receiving module, whose function is to receive and recognize the user's voice; a voice-to-text module, which converts the received voice into text; an intention recognition module, which selects the standard intention and recognizes the multiple intentions based on bert+bilstm+crf; and an interaction module, which executes the user's intention after it has been recognized.
7. An electronic device, the electronic device comprising: the device comprises a shell, a processor, a memory, a circuit board and a power circuit, wherein the circuit board is arranged in a space surrounded by the shell, and the processor and the memory are arranged on the circuit board; a power supply circuit for supplying power to each circuit or device of the electronic apparatus; the memory is used for storing executable program codes; a processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory for performing the method of any one of claims 1-5.
8. A computer readable storage medium storing one or more programs executable by one or more processors to implement the method of any of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210432349.7A CN114818665B (en) | 2022-04-22 | 2022-04-22 | Multi-purpose recognition method and system based on bert+bilstm+crf and xgboost model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210432349.7A CN114818665B (en) | 2022-04-22 | 2022-04-22 | Multi-purpose recognition method and system based on bert+bilstm+crf and xgboost model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114818665A CN114818665A (en) | 2022-07-29 |
CN114818665B true CN114818665B (en) | 2023-05-12 |
Family
ID=82507057
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210432349.7A Active CN114818665B (en) | 2022-04-22 | 2022-04-22 | Multi-purpose recognition method and system based on bert+bilstm+crf and xgboost model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114818665B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115359786A (en) * | 2022-08-19 | 2022-11-18 | 思必驰科技股份有限公司 | Multi-intention semantic understanding model training and using method and device |
CN116226356B (en) * | 2023-05-08 | 2023-07-04 | 深圳市拓保软件有限公司 | NLP-based intelligent customer service interaction method and system |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110532558A (en) * | 2019-08-29 | 2019-12-03 | 杭州涂鸦信息技术有限公司 | A kind of more intension recognizing methods and system based on the parsing of sentence structure deep layer |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102154676B1 (en) * | 2015-05-14 | 2020-09-10 | 한국과학기술원 | Method for training top-down selective attention in artificial neural networks |
CN109522556B (en) * | 2018-11-16 | 2024-03-12 | 北京九狐时代智能科技有限公司 | Intention recognition method and device |
CN111159332A (en) * | 2019-12-03 | 2020-05-15 | 厦门快商通科技股份有限公司 | Text multi-intention identification method based on bert |
CN111583968A (en) * | 2020-05-25 | 2020-08-25 | 桂林电子科技大学 | Speech emotion recognition method and system |
CN113032568A (en) * | 2021-04-02 | 2021-06-25 | 同方知网(北京)技术有限公司 | Query intention identification method based on bert + bilstm + crf and combined sentence pattern analysis |
-
2022
- 2022-04-22 CN CN202210432349.7A patent/CN114818665B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110532558A (en) * | 2019-08-29 | 2019-12-03 | 杭州涂鸦信息技术有限公司 | A kind of more intension recognizing methods and system based on the parsing of sentence structure deep layer |
Also Published As
Publication number | Publication date |
---|---|
CN114818665A (en) | 2022-07-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109918680B (en) | Entity identification method and device and computer equipment | |
CN108829893B (en) | Method and device for determining video label, storage medium and terminal equipment | |
CN113205817B (en) | Speech semantic recognition method, system, device and medium | |
CN111241237B (en) | Intelligent question-answer data processing method and device based on operation and maintenance service | |
Wu et al. | Emotion recognition from text using semantic labels and separable mixture models | |
CN114357973B (en) | Intention recognition method and device, electronic equipment and storage medium | |
CN108491433A (en) | Chat answer method, electronic device and storage medium | |
CN114818665B (en) | Multi-purpose recognition method and system based on bert+bilstm+crf and xgboost model | |
CN112699686A (en) | Semantic understanding method, device, equipment and medium based on task type dialog system | |
CN115470338B (en) | Multi-scenario intelligent question answering method and system based on multi-path recall | |
CN113761377A (en) | Attention mechanism multi-feature fusion-based false information detection method and device, electronic equipment and storage medium | |
CN112668333A (en) | Named entity recognition method and device, and computer-readable storage medium | |
CN113158656A (en) | Ironic content identification method, ironic content identification device, electronic device, and storage medium | |
CN112818091A (en) | Object query method, device, medium and equipment based on keyword extraction | |
TWI734085B (en) | Dialogue system using intention detection ensemble learning and method thereof | |
CN113468894A (en) | Dialogue interaction method and device, electronic equipment and computer-readable storage medium | |
CN111368066B (en) | Method, apparatus and computer readable storage medium for obtaining dialogue abstract | |
CN115394321A (en) | Audio emotion recognition method, device, equipment, storage medium and product | |
CN113705207A (en) | Grammar error recognition method and device | |
CN114880994B (en) | Text style conversion method and device from direct white text to irony text | |
CN114528851B (en) | Reply sentence determination method, reply sentence determination device, electronic equipment and storage medium | |
CN113505293B (en) | Information pushing method and device, electronic equipment and storage medium | |
CN114444609B (en) | Data processing method, device, electronic equipment and computer readable storage medium | |
CN114154517B (en) | Dialogue quality assessment method and system based on deep learning | |
CN114090885A (en) | Product title core word extraction method, related device and computer program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |