CN113869068A - Scene service recommendation method, device, equipment and storage medium - Google Patents
Scene service recommendation method, device, equipment and storage medium
- Publication number: CN113869068A
- Application number: CN202111149885.8A
- Authority
- CN
- China
- Prior art keywords
- intention
- scene
- services
- information
- service
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
- G06F40/35—Discourse or dialogue representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/284—Lexical analysis, e.g. tokenisation or collocates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Business, Economics & Management (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Tourism & Hospitality (AREA)
- Economics (AREA)
- Human Resources & Organizations (AREA)
- Marketing (AREA)
- Primary Health Care (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The embodiments of this application provide a scene service recommendation method, apparatus, device, and storage medium, relating to the field of artificial intelligence. The method includes: acquiring first dialogue information from a multi-turn dialogue and second dialogue information associated with the first dialogue information; invoking an intention classification model to perform intention prediction on the first dialogue information and the second dialogue information, and determining a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information; and determining a target scene service from the N first scene services associated with the first intention and the M second scene services associated with the second intention, and outputting the target scene service. This can effectively improve the accuracy of the target scene service. The application may involve blockchain technology; for example, the target scene service may be written into a blockchain. The application also relates to the field of digital medical technology; for example, the dialogue messages may describe information from that field.
Description
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method, an apparatus, a device, and a storage medium for recommending a scene service.
Background
With the rapid development of Internet technology, the demand for customer service staff keeps growing. Because manual customer service involves long training periods and high labor costs, intelligent customer service robots have gradually entered everyday life.
However, during a conversation between a user and an intelligent customer service robot, the robot can currently only retrieve reply content matching the question the user asks, and cannot provide effective scene services for related questions.
Disclosure of Invention
The embodiments of this application provide a scene service recommendation method, apparatus, device, and storage medium, which can effectively improve the accuracy of the target scene service and increase user stickiness.
In a first aspect, an embodiment of the present application provides a method for recommending a scenario service, where the method includes:
acquiring first dialogue information in a plurality of rounds of dialogue and second dialogue information related to the first dialogue information;
calling an intention classification model to respectively predict the intention of the first dialogue information and the second dialogue information, and determining a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information;
acquiring N first scene services related to a first intention and M second scene services related to a second intention; wherein N and M are positive integers;
and determining a target scene service according to the N first scene services and the M second scene services, and outputting the target scene service.
In a second aspect, an embodiment of this application provides a scene service recommendation apparatus, where the apparatus includes:
an acquiring unit, configured to acquire first dialogue information in a multi-turn dialogue and second dialogue information associated with the first dialogue information;
the intention prediction unit is used for calling an intention classification model to respectively carry out intention prediction on the first dialogue information and the second dialogue information, and determining a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information;
the acquiring unit is further configured to acquire N first scenario services associated with the first intention and M second scenario services associated with the second intention; wherein N and M are positive integers;
and the determining unit is used for determining the target scene service according to the N first scene services and the M second scene services and outputting the target scene service.
In a third aspect, an embodiment of the present application provides a scene service recommendation device, where the scene service recommendation device includes an input interface and an output interface, and the scene service recommendation device further includes:
a processor adapted to implement one or more instructions; and
a computer storage medium having stored thereon one or more instructions adapted to be loaded by a processor and to perform the method of the first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium, which stores computer program instructions, and when the computer program instructions are executed by a processor, the computer program instructions are configured to perform the method of the first aspect.
In the embodiments of this application, first dialogue information in a multi-turn dialogue and second dialogue information associated with it can be acquired; an intention classification model is invoked to perform intention prediction on the first and second dialogue information, determining a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information; N first scene services associated with the first intention and M second scene services associated with the second intention are acquired; and a target scene service is determined from the N first scene services and the M second scene services and output. To obtain the target scene service, the intention classification model is invoked to predict the intention behind the dialogue information, the scene services associated with that intention are retrieved, and the target scene service to be pushed is determined from them. This adapts to the user's needs, intelligently outputs the target scene service to the user, and improves the user experience. Because the classification task first determines an intention and then derives the associated scene services from it, the classifier avoids an excessive number of candidate classes, avoids the long-tail phenomenon, and is easier to maintain. In addition, because the first dialogue information and the second dialogue information are associated with each other, the target scene service determined from the first scene services corresponding to the first dialogue information and the second scene services corresponding to the second dialogue information is more accurate.
Drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of this application, and a person skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic architecture diagram of a scenario service recommendation system according to an embodiment of the present application;
fig. 2 is a flowchart illustrating a scene service recommendation method according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a training process of an intention classification model according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating another scene service recommendation method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a scene service recommendation apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a scene service recommendation device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person skilled in the art based on these embodiments without creative effort fall within the protection scope of this application.
With the rapid development of Internet technology, e-commerce has gradually become part of daily life, and the demand for customer service staff keeps growing. Because manual customer service involves long training periods and high labor costs, intelligent customer service robots have gradually become a research focus. However, during a conversation between a user and an intelligent customer service robot, the business process involved may be complex, and determining which scene services that process includes is an urgent problem to be solved.
Based on this, the embodiments of this application provide a scene service recommendation method, apparatus, device, and storage medium. In the method, a scene service recommendation device may acquire first dialogue information in a multi-turn dialogue and second dialogue information associated with it, invoke an intention classification model to perform intention prediction on both, and determine a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information. It may then acquire N first scene services associated with the first intention and M second scene services associated with the second intention, determine a target scene service from them, and output it. This effectively improves the accuracy of the target scene service and increases user stickiness.
The scene service recommendation method has broad applicability and can be applied to any service platform that records dialogue information, such as e-commerce, financial, or medical platforms.
In one embodiment, the scene service recommendation method of this application may be implemented using artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. The AI infrastructure generally includes sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technology mainly includes computer vision, robotics, biometrics, speech processing, natural language processing, and machine learning/deep learning. Optionally, the intention classification model in this method may be built with a machine learning algorithm; when dialogue information from a multi-turn dialogue is obtained, the model is invoked to determine the intention corresponding to that information.
In one embodiment, the method may be applied to the scene service recommendation system shown in fig. 1. As shown in fig. 1, the system includes at least a scene service recommendation device 11 and a terminal device 12. The scene service recommendation device 11 runs the intention classification model and may be an intelligent customer service robot. Optionally, the scene service recommendation device 11 may be the server shown in fig. 1, which may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, Content Delivery Network (CDN), middleware services, domain name services, security services, and big data and artificial intelligence platforms. The scene service recommendation device 11 may also be a terminal device, which may include but is not limited to a smartphone, tablet, laptop, wearable device, or desktop computer. The terminal device 12 may be a terminal device associated with the user.
Please refer to fig. 2, which is a flowchart illustrating a method for recommending a scene service according to an embodiment of the present application. As shown in fig. 2, the scene service recommendation method includes S201 to S204:
s201: first session information in a plurality of sessions and second session information associated with the first session information are obtained.
The multi-turn dialogue may refer to a dialogue between the scene service recommendation device and the terminal device. It may be initiated either by the terminal device associated with the user or by the scene service recommendation device. Dialogue information refers to the text contained in the multi-turn dialogue.
In one embodiment, the dialogue information may include dialogue text describing any domain. For example, it may include text from the medical field, such as a personal health record, a prescription, or an examination report.
In one embodiment, the scene service recommendation device may obtain initial information of any modality and derive text from it. The initial information may include, but is not limited to, one or more of text, voice, and image information. When the initial information is text, it can be used directly as the dialogue information. When it is voice, it can be converted into text using offline or cloud speech recognition. When it is an image, the text it contains can be extracted using Optical Character Recognition (OCR).
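As an illustration of this step, the following Python sketch dispatches on the modality of the initial information; the speech recognition and OCR helpers are placeholders for whichever engines the platform actually uses and are not specified by this disclosure.

```python
def speech_to_text(audio):
    # Placeholder for an offline or cloud speech recognition engine
    # (the disclosure does not name a specific engine).
    raise NotImplementedError

def ocr_extract_text(image):
    # Placeholder for an Optical Character Recognition engine.
    raise NotImplementedError

def to_dialog_text(initial_info, modality):
    """Normalize initial information of any modality into dialogue text."""
    if modality == "text":
        return initial_info                     # text is used directly as dialogue information
    if modality == "voice":
        return speech_to_text(initial_info)     # convert voice to text
    if modality == "image":
        return ocr_extract_text(initial_info)   # extract the text contained in the image
    raise ValueError(f"unsupported modality: {modality}")
```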
In one embodiment, the first dialogue information may be the dialogue information of any round in the multi-turn dialogue, and the second dialogue information is dialogue information associated with it. Optionally, because two adjacent rounds of a multi-turn dialogue tend to contain similar information, the dialogue information of two adjacent rounds can be taken as the first dialogue information and the associated second dialogue information; that is, the second and first dialogue information can come from consecutive rounds. For example, the second dialogue information is from the i-th round and the first from the (i+1)-th round, where i is a positive integer; or the second dialogue information is from the (j+1)-th round and the first from the j-th round, where j is a positive integer. Alternatively, the second dialogue information may be dialogue information belonging to the same domain as the first. For example, the first dialogue information is "what is the exchange rate of RMB to US dollars" and the second is "deposit ten thousand yuan"; both belong to the financial domain.
S202: and calling an intention classification model to respectively carry out intention prediction on the first dialogue information and the second dialogue information, and determining a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information.
In one embodiment, an intention classification model may be invoked to determine first confidences of the first dialog information under respective candidate intentions, and the candidate intention indicated by the highest first confidence may be determined as the first intention to which the first dialog information corresponds. And calling an intention classification model to determine second confidence degrees of the second dialogue information under the candidate intentions, and determining the candidate intention indicated by the highest second confidence degree as the second intention corresponding to the second dialogue information.
Optionally, the confidence may be a keyword hit probability, i.e., the ratio of the number of words in the dialogue information that hit a candidate intention's keywords to the total number of words in the dialogue information. Specifically, one or more keywords are labeled for each candidate intention in advance; the dialogue information is matched against the keywords of each candidate intention, the number of words hitting that intention's keywords is counted, and the ratio of that count to the total number of words gives the keyword hit probability of the dialogue information under that candidate intention. For example, if the dialogue information contains z words in total, of which a words hit keywords of candidate intention A and b words hit keywords of candidate intention B, the keyword hit probabilities for candidate intentions A and B are a/z and b/z, respectively. The more of a candidate intention's keywords the dialogue information contains, the higher its keyword hit probability and the higher the confidence that the dialogue information belongs to that intention, so the keyword hit probability can serve as the confidence. Specifically, the intention classification model may be invoked to match the first dialogue information against the keywords of each candidate intention to obtain a first keyword hit probability for each candidate intention; the highest first keyword hit probability (i.e., the first confidence) is selected, and the candidate intention it indicates is determined as the first intention. Similarly, for the second dialogue information, the model is invoked to match it against the keywords of each candidate intention to obtain second keyword hit probabilities (i.e., second confidences); the highest one is selected, and the candidate intention it indicates is determined as the second intention.
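A minimal Python sketch of this keyword-hit scoring is shown below; the per-intention keyword sets and the use of token counts in place of word counts are assumptions for illustration.

```python
def keyword_hit_probabilities(tokens, intent_keywords):
    """tokens: segmented dialogue information; intent_keywords: {intention: set of keywords}."""
    total = len(tokens)
    scores = {}
    for intent, keywords in intent_keywords.items():
        hits = sum(1 for token in tokens if token in keywords)
        scores[intent] = hits / total if total else 0.0   # keyword hit probability under this intention
    return scores

def predict_intent_by_keywords(tokens, intent_keywords):
    scores = keyword_hit_probabilities(tokens, intent_keywords)
    return max(scores, key=scores.get)   # candidate intention with the highest hit probability
```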
Optionally, the confidence may be a prediction probability. Specifically, the dialogue information is preprocessed to obtain its feature matrix, and the intention classification model is then invoked to process that feature matrix and determine its prediction probability under each candidate intention; the higher the prediction probability, the higher the confidence that the dialogue information belongs to that candidate intention, so the prediction probability can serve as the confidence under each candidate intention. Concretely, the first dialogue information is preprocessed into a first feature matrix, the intention classification model predicts a first prediction probability (i.e., a first confidence) for it under each candidate intention, the highest first prediction probability is selected, and the candidate intention it indicates is determined as the first intention. Likewise, the second dialogue information is preprocessed into a second feature matrix, the model predicts a second prediction probability (i.e., a second confidence) under each candidate intention, the highest one is selected, and the candidate intention it indicates is determined as the second intention.
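The following sketch illustrates selecting the intention from model prediction probabilities; the `intent_model` callable and the candidate-intention list are assumed stand-ins for the model described in the following paragraphs.

```python
import torch

def predict_intent_by_model(feature_matrix, intent_model, candidate_intents):
    """feature_matrix: [seq_len, feat_dim] tensor; intent_model maps it to one logit per intention."""
    with torch.no_grad():
        logits = intent_model(feature_matrix.unsqueeze(0))    # [1, num_intents]
        probs = torch.softmax(logits, dim=-1).squeeze(0)      # prediction probability per intention
    best = int(torch.argmax(probs))
    return candidate_intents[best], float(probs[best])        # intention and its confidence
```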
In one embodiment, preprocessing the dialogue information to obtain its feature matrix may be implemented as follows: perform word segmentation on the dialogue information; map the segmented dialogue information into word vectors; invoke a convolutional neural network to extract local features from the word vectors, obtaining local feature vectors; and perform feature extraction on the local feature vectors based on an attention mechanism to obtain the feature matrix. The dialogue information here may be the first dialogue information or the second dialogue information.
When the dialogue information is Chinese text, word segmentation is required: unlike English text, which naturally uses spaces as word boundaries, Chinese text has no explicit boundaries between words. The dialogue information is therefore segmented into words first. In one embodiment, segmentation may be performed with a dictionary-based algorithm (e.g., forward maximum matching, reverse maximum matching, or bidirectional maximum matching) or with a statistical machine learning algorithm (e.g., a hidden Markov model, a conditional random field, an SVM, or a deep learning model).
In one embodiment, after word segmentation, the segmented dialogue information may be mapped into word vectors through a Word Embedding layer. Specifically, a standard Word2vec network can be used for the high-dimensional mapping. Word2vec replaces the output layer of a traditional Deep Neural Network (DNN) with a Huffman tree: the leaf nodes of the tree serve as the output neurons, and their number is set according to the vocabulary size.
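As an illustrative example (not part of the disclosure), gensim's Word2Vec can be trained with hierarchical softmax, which organizes the output layer as a Huffman tree as described above; the toy corpus and vector dimensions below are placeholders.

```python
from gensim.models import Word2Vec

segmented_dialogs = [["exchange", "rate", "rmb", "dollar"],
                     ["deposit", "ten", "thousand", "yuan"]]   # toy segmented corpus
w2v = Word2Vec(sentences=segmented_dialogs, vector_size=128, window=5,
               min_count=1, hs=1, negative=0)   # hs=1: Huffman-tree (hierarchical softmax) output
word_vector = w2v.wv["rate"]                    # 128-dimensional word vector
```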
In one embodiment, a convolutional neural network may be used to obtain local feature vectors from the mapped word vectors. Specifically, the word vectors may be arranged into two-dimensional matrices of the same shape and fed into the convolutional neural network, which fuses features of the dialogue information through several convolutional layers, for example four layers with kernel sizes 1 x 3, 1 x 4, 1 x 5, and 1 x 6. Sliding convolution over the dialogue information then yields the local feature vectors.
In one embodiment, because the local feature vectors derived from the word vectors capture only local information, some global information may be lost. Feature extraction can therefore be performed on the local feature vectors with an attention mechanism to obtain a globally informed feature matrix. Optionally, a BiLSTM network with attention can be used for this step. Because dialogue information is sequential, BiLSTM-based feature extraction preserves word order and dialogue context to the greatest extent.
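A compact PyTorch sketch of this encoding path (word embedding, multi-width convolutions, attention-weighted BiLSTM) is given below as one possible reading of the description; the layer sizes and the way the differently sized convolution outputs are aligned are assumptions.

```python
import torch
import torch.nn as nn

class DialogEncoder(nn.Module):
    """Word embedding -> multi-width convolutions -> attention-weighted BiLSTM."""
    def __init__(self, vocab_size, emb_dim=128, n_filters=64, hidden=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # four convolution widths mirroring the 1x3, 1x4, 1x5, 1x6 layers described above
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k, padding=k // 2) for k in (3, 4, 5, 6)])
        self.bilstm = nn.LSTM(4 * n_filters, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # attention scores over time steps

    def forward(self, token_ids):                               # token_ids: [batch, seq_len]
        x = self.embedding(token_ids).transpose(1, 2)           # [batch, emb_dim, seq_len]
        local = [torch.relu(conv(x)) for conv in self.convs]    # local feature vectors
        min_len = min(f.size(2) for f in local)                 # align lengths before concatenation
        local = torch.cat([f[:, :, :min_len] for f in local], dim=1).transpose(1, 2)
        out, _ = self.bilstm(local)                              # [batch, min_len, 2 * hidden]
        weights = torch.softmax(self.attn(out), dim=1)           # attention weights
        return weights * out                                     # attention-weighted feature matrix
```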
In one embodiment, the intention classification model needs to be trained before it can be invoked to process a feature matrix and determine its prediction probability under each candidate intention. Fig. 3 is a schematic diagram of the training process, which may include the following steps S31-S32:
s31: a set of dialog samples is obtained and a feature matrix for each dialog sample in the set of dialog samples is determined.
It should be noted that the implementation manner of determining the feature matrix of the dialog sample is similar to the manner of determining the feature matrix of the dialog information, and reference may be specifically made to the foregoing description, which is not described herein again.
S32: classifying the conversation sample set according to the feature matrix corresponding to each conversation sample to obtain conversation sample subsets under different categories, wherein the feature matrix corresponding to the conversation sample in the conversation sample subset under one category corresponds to one candidate intention.
The intention classification model can be built with a machine learning algorithm, with parameters tuned continuously during construction to obtain an optimal model. Specifically, the scene service recommendation device may use an initial classification model to classify the feature matrix of each dialogue sample in the set, obtaining dialogue sample subsets under different categories, and update the initial parameters of the model according to those subsets. After multiple updates, the trained intention classification model is obtained. The feature matrices of the dialogue samples in one category's subset correspond to one candidate intention.
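The following training sketch assumes labelled dialogue samples (token ids, intention index) and a linear classifier head on top of the encoder sketched above; the optimizer and loss function are conventional choices, not specified by the disclosure.

```python
import torch
import torch.nn as nn

def train_intent_model(encoder, num_intents, loader, epochs=5, lr=1e-3):
    """loader yields (token_ids, intent_labels) batches of labelled dialogue samples."""
    classifier = nn.Linear(2 * encoder.bilstm.hidden_size, num_intents)
    optimizer = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for token_ids, intent_labels in loader:
            features = encoder(token_ids)                 # feature matrix per dialogue sample
            logits = classifier(features.mean(dim=1))     # pool over time, score each candidate intention
            loss = loss_fn(logits, intent_labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                              # parameter tuning as in S32
    return encoder, classifier
```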
S203: acquiring N first scene services related to a first intention and M second scene services related to a second intention; wherein, N and M are positive integers.
A scene service may refer to a business process node configured according to user needs. For example, the business process of purchasing foreign exchange for overseas study may include business process nodes such as material pre-review, submission of the purchase and transfer application form, and offline branch reservation. Because a service platform typically has a large number of scene services, it is impractical to build a classification model over them directly. Moreover, some scene services are strongly related, so one or more scene services with similar characteristics can be associated with one candidate intention; the intention can then be determined from the dialogue information, and the scene services associated with that intention are acquired.
In one embodiment, candidate intentions can be associated with scene services through a scene service management interface. Specifically, a scene service management interface may be displayed that includes an intention entry field and a scene service entry field; an intention entered in the intention entry field is associated with the scene services entered in the scene service entry field.
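A minimal sketch of the association that the management interface maintains could be a simple mapping from intention to scene services; the backing store and the example entries below are illustrative assumptions.

```python
intent_to_services = {}

def associate(intent, services):
    """Record the scene services entered for an intention on the management interface."""
    intent_to_services.setdefault(intent, set()).update(services)

associate("purchase foreign exchange for overseas study",
          ["material pre-review", "purchase application form submission", "offline branch reservation"])
```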
S204: and determining a target scene service according to the N first scene services and the M second scene services, and outputting the target scene service.
In one embodiment, the N first scenario services and the M second scenario services may be directly determined as the target scenario services.
In one embodiment, after the target scenario service is determined, the target scenario service may be directly output through the scenario service recommendation device. Optionally, after the target scenario service is determined, the target scenario service may be further sent to the terminal device associated with the user, so that the terminal device associated with the user outputs the target scenario service.
Further, each scene service may also correspond to a component, where the component includes an execution instruction of the scene service and description information of the scene service. Specifically, when determining the target scene service, the scene service recommendation device may acquire the target component corresponding to the target scene service from the knowledge graph, and output the target component corresponding to the target scene service while outputting the target scene service.
In the embodiments of this application, first dialogue information in a multi-turn dialogue and second dialogue information associated with it can be acquired; an intention classification model is invoked to perform intention prediction on the first and second dialogue information, determining a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information; N first scene services associated with the first intention and M second scene services associated with the second intention are acquired; and a target scene service is determined from the N first scene services and the M second scene services and output. To obtain the target scene service, the intention classification model is invoked to predict the intention behind the dialogue information, the scene services associated with that intention are retrieved, and the target scene service to be pushed is determined from them. This adapts to the user's needs, intelligently outputs the target scene service to the user, and improves the user experience. Because the classification task first determines an intention and then derives the associated scene services from it, the classifier avoids an excessive number of candidate classes, avoids the long-tail phenomenon, and is easier to maintain. In addition, because the first dialogue information and the second dialogue information are associated with each other, the target scene service determined from the first scene services corresponding to the first dialogue information and the second scene services corresponding to the second dialogue information is more accurate.
As can be seen from the description of the method embodiment shown in fig. 2, that method invokes an intention classification model to determine a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information, and determines the target scene service from the first scene services associated with the first intention and the second scene services associated with the second intention. In some embodiments, the candidate scene services may differ in importance, so the target scene service may also be determined selectively from the first scene services associated with the first intention and the second scene services associated with the second intention. On this basis, an embodiment of this application further provides another scene service recommendation method. As shown in fig. 4, this method includes S401-S405:
s401: first session information in a plurality of sessions and second session information associated with the first session information are obtained.
S402: and calling an intention classification model to respectively carry out intention prediction on the first dialogue information and the second dialogue information, and determining a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information.
S403: acquiring N first scene services related to a first intention and M second scene services related to a second intention; wherein, N and M are positive integers.
Specific implementation manners of S401 to S403 may refer to specific descriptions of related embodiments in fig. 2, which are not described herein again.
S404: a target scenario service is determined from the N first scenario services and the M second scenario services. The target scene service is the intersection of the N first scene services and the M second scene services.
In one embodiment, the intersection of the N first scenario services and the M second scenario services may be obtained, and the scenario services in that intersection determined as the target scenario services. For example, assume the N first scenario services associated with the first intention are scenario service a, scenario service b, and scenario service c, and the M second scenario services associated with the second intention are scenario service a, scenario service b, and scenario service d. Their intersection is scenario service a and scenario service b, so scenario service a and scenario service b are taken as the target scenario services.
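The intersection step can be expressed directly with set operations, matching the example above.

```python
def target_services(first_services, second_services):
    """Target scene services = intersection of the two candidate sets."""
    return set(first_services) & set(second_services)

print(target_services({"scene service a", "scene service b", "scene service c"},
                      {"scene service a", "scene service b", "scene service d"}))
# -> {'scene service a', 'scene service b'}
```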
In another embodiment, an intersection of the N first scenario services and the M second scenario services may be further obtained, and a selection instruction for the intersection is obtained, and a target scenario service is determined from the intersection of the N first scenario services and the M second scenario services based on the selection instruction.
Alternatively, the selection instruction may be generated based on a user operation, which may include one or more of clicking, sliding, long-pressing, double-clicking, etc. operations within the user interface. Alternatively, the selection instruction may be generated based on an operation simulation model. The operation simulation model may be a model generated based on a simulation user operation technology, for example, uiautomator technology. Specifically, the selection instruction can be generated by simulating a user clicking a screen by utilizing the uiautomator technology. Alternatively, the selection instructions may also be preconfigured, and so on.
In still another embodiment, the scene service recommendation device may obtain an attention model and invoke it to process the N first scene services and the M second scene services to determine the target scene service. Specifically, the device may apply attention to the N first scene services and the M second scene services through the attention mechanism in the attention model to obtain the target scene service; the attention mechanism focuses on the genuinely important features via attention weights. For example, when the attention model attends more to the first intention corresponding to the first dialogue information, the attention weights of the N first scene services can be set greater than those of the M second scene services. Conversely, when it attends more to the second intention corresponding to the second dialogue information, the attention weights of the M second scene services can be set greater than those of the N first scene services.
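A hedged sketch of this attention-based variant is shown below: each candidate scene service is scored by the weight of the intention it is associated with, and the highest-scoring services are kept; the specific weights and the top-k cutoff are assumptions.

```python
def attention_select(first_services, second_services, w_first=0.6, w_second=0.4, top_k=2):
    """Score each candidate scene service by the attention weight of the intention it came from."""
    scores = {}
    for service in first_services:
        scores[service] = scores.get(service, 0.0) + w_first
    for service in second_services:
        scores[service] = scores.get(service, 0.0) + w_second
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]   # highest-scoring services become the target scene services
```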
S405: and outputting the target scene service.
In one embodiment, the target scene service may be written into a blockchain so that it can be pushed directly in subsequent periods. Specifically, the scene service recommendation device may verify the target scene service; if it passes verification, consensus verification is performed on the target scene service through the consensus nodes in the blockchain network; and if the consensus verification passes, the target scene service is packaged into a block and the block is written into the blockchain.
A blockchain is a chained data structure in which data blocks are linked sequentially in chronological order, with cryptography guaranteeing a tamper-proof and unforgeable distributed ledger. Multiple independent distributed nodes maintain the same record. Blockchain technology achieves decentralization and has become a foundation for trustworthy storage, transfer, and trading of digital assets.
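For illustration only, the sketch below packages verified target scene services into a hash-linked block; it does not model the consensus verification performed by the consensus nodes.

```python
import hashlib
import json
import time

def package_block(target_services, prev_hash):
    """Package verified target scene services into a hash-linked block."""
    body = {"services": sorted(target_services),
            "timestamp": time.time(),
            "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body
```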
In this embodiment, the differing importance of candidate scene services is taken into account, so the target scene service selected from the first scene services associated with the first intention and the second scene services associated with the second intention is more accurate.
The embodiments of this application also disclose a scene service recommendation apparatus, which may be a computer program (including program code) running on the scene service recommendation device described above. The scene service recommendation apparatus may perform the method shown in fig. 2 or fig. 4. Referring to fig. 5, the scene service recommendation apparatus may operate the following units:
an obtaining unit 501, configured to obtain first session information in multiple rounds of sessions and second session information associated with the first session information;
an intention prediction unit 502, configured to invoke an intention classification model to perform intention prediction on the first dialog information and the second dialog information respectively, determine a first intention corresponding to the first dialog information, and determine a second intention corresponding to the second dialog information;
the obtaining unit 501 is further configured to obtain N first scenario services associated with the first intention and obtain M second scenario services associated with the second intention; wherein N and M are positive integers;
the determining unit 503 is configured to determine a target scenario service according to the N first scenario services and the M second scenario services, and output the target scenario service.
In some possible embodiments, the intention prediction unit 502 is configured to invoke an intention classification model to perform intention prediction on the first dialog information, and determine a first intention corresponding to the first dialog information, including:
performing hit matching on the first dialogue information and the keywords corresponding to each candidate intention respectively to obtain a first keyword hit probability corresponding to each candidate intention;
determining the highest first keyword hit probability from the first keyword hit probabilities corresponding to each candidate intention;
and determining the candidate intention corresponding to the highest first keyword hit probability as the first intention.
In some possible embodiments, the intention prediction unit 502 is configured to invoke an intention classification model to perform intention prediction on the first dialog information, and determine a first intention corresponding to the first dialog information, including:
preprocessing the first dialogue information to obtain a first feature matrix;
calling an intention classification model to perform prediction processing on the first feature matrix, and determining a first prediction probability of the first feature matrix under each candidate intention;
a first intent is determined from the first predicted probability under each candidate intent.
In some possible embodiments, the intention prediction unit 502 is configured to preprocess the first dialog information to obtain a first feature matrix, including:
performing word segmentation processing on the first dialogue information to obtain first dialogue information after word segmentation, and mapping the first dialogue information after word segmentation to obtain a first word vector;
calling a convolutional neural network to extract local features of the first word vector to obtain a first local feature vector;
and performing feature extraction on the first local feature vector based on an attention mechanism to obtain a first feature matrix.
In some possible embodiments, the determining unit 503 is configured to determine the target scenario service according to the N first scenario services and the M second scenario services, and includes:
acquiring intersections of the N first scene services and the M second scene services;
acquiring a selection instruction aiming at intersection;
and determining a target scene service from the intersection of the N first scene services and the M second scene services based on the selection instruction.
In some possible embodiments, the determining unit 503 is configured to determine the target scenario service according to the N first scenario services and the M second scenario services, and includes:
acquiring an attention model;
and calling an attention model to process the N first scene services and the M second scene services, and determining a target scene service.
In some possible embodiments, after the determining unit 503 is configured to determine the target scenario service according to the N first scenario services and the M second scenario services, the determining unit 503 is further configured to:
the target scene service is verified, and if the verification is passed, the target scene service is subjected to consensus verification through a consensus node in the block chain network;
and if the consensus verification is passed, packaging the target scene service into a block, and writing the block into a block chain.
According to another embodiment of this application, the units of the scene service recommendation apparatus shown in fig. 5 may be combined, individually or entirely, into one or several other units, or some unit(s) may be further split into functionally smaller units, without affecting the technical effects of the embodiments of this application. The units are divided by logical function; in practice, the function of one unit may be realized by several units, or the functions of several units by one unit. In other embodiments of this application, the scene service recommendation apparatus may also include other units, and these functions may likewise be realized with the assistance of, or in cooperation among, multiple units.
According to another embodiment of this application, the scene service recommendation apparatus shown in fig. 5 may be constructed by running a computer program (including program code) capable of executing the steps of the method shown in fig. 2 or fig. 4 on a general-purpose computing device, such as a computer, that includes processing and storage elements such as a Central Processing Unit (CPU), random access memory (RAM), and read-only memory (ROM), thereby implementing the scene service recommendation method of the embodiments of this application. The computer program may, for example, be recorded on a computer-readable recording medium, and loaded into and executed on the scene service recommendation device via that medium.
In the embodiments of this application, first dialogue information in a multi-turn dialogue and second dialogue information associated with it can be acquired; an intention classification model is invoked to perform intention prediction on the first and second dialogue information, determining a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information; N first scene services associated with the first intention and M second scene services associated with the second intention are acquired; and a target scene service is determined from the N first scene services and the M second scene services and output. To obtain the target scene service, the intention classification model is invoked to predict the intention behind the dialogue information, the scene services associated with that intention are retrieved, and the target scene service to be pushed is determined from them. This adapts to the user's needs, intelligently outputs the target scene service to the user, and improves the user experience. Because the classification task first determines an intention and then derives the associated scene services from it, the classifier avoids an excessive number of candidate classes, avoids the long-tail phenomenon, and is easier to maintain. In addition, because the first dialogue information and the second dialogue information are associated with each other, the target scene service determined from the first scene services corresponding to the first dialogue information and the second scene services corresponding to the second dialogue information is more accurate.
Based on the description of the embodiment of the scene service recommendation method, the embodiment of the application also discloses a scene service recommendation device. Referring to fig. 6, the scene service recommendation device at least includes a processor 601, an input interface 602, an output interface 603, and a computer storage medium 604, which may be connected by a bus or other means.
The computer storage medium 604 is the memory device of the scene service recommendation device and is used to store programs and data. It may include both the built-in storage medium of the scene service recommendation device and an extended storage medium supported by the device. The computer storage medium 604 provides storage space that stores the operating system of the scene service recommendation device as well as one or more instructions, which may be one or more computer programs (including program code) suitable for loading and execution by the processor 601. The computer storage medium here may be a high-speed RAM memory and, optionally, may also include at least one computer storage medium remote from the processor. The processor 601, which may be referred to as a Central Processing Unit (CPU), is the computing core and control center of the scene service recommendation device; it is adapted to implement one or more instructions, specifically to load and execute them so as to realize the corresponding method flows or functions.
In one embodiment, one or more instructions stored in the computer storage medium 604 may be loaded and executed by the processor 601 to implement the steps involved in performing the corresponding method as shown in fig. 2 or fig. 4, and in particular, the one or more instructions in the computer storage medium 604 may be loaded and executed by the processor 601 to implement the steps of:
acquiring first dialogue information in a plurality of rounds of dialogue and second dialogue information related to the first dialogue information;
calling an intention classification model to respectively carry out intention prediction on the first dialogue information and the second dialogue information, and determining a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information;
acquiring N first scene services associated with the first intention and M second scene services associated with the second intention; wherein N and M are positive integers;
and determining a target scene service according to the N first scene services and the M second scene services, and outputting the target scene service.
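For readability, the overall flow listed above can be summarised by the following minimal Python sketch. The function and variable names (recommend_scene_service, intent_model.predict, service_index) and the data structures are illustrative assumptions made here, not part of the disclosed implementation.

```python
# Illustrative sketch only: names and data structures are assumptions.

def recommend_scene_service(first_dialogue: str, second_dialogue: str,
                            intent_model, service_index: dict):
    # Predict an intention for each of the two associated dialogue turns.
    first_intent = intent_model.predict(first_dialogue)
    second_intent = intent_model.predict(second_dialogue)

    # Look up the N / M scene services associated with each intention.
    first_services = service_index.get(first_intent, [])    # N services
    second_services = service_index.get(second_intent, [])  # M services

    # Determine the target scene service from the two candidate sets,
    # e.g. by intersecting them; other strategies are described below.
    candidates = [s for s in first_services if s in second_services]
    return candidates[0] if candidates else None
```

In this sketch the intersection strategy stands in for the "determining" step; the attention-based alternative described later in this section could be substituted without changing the surrounding flow.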
In some possible embodiments, when the processor 601 invokes the intention classification model to perform intention prediction on the first dialogue information and determine the first intention corresponding to the first dialogue information, the processor 601 is specifically configured to perform the following steps (a keyword-matching sketch follows the list):
performing hit matching on the first dialogue information and the keywords corresponding to each candidate intention respectively to obtain a first keyword hit probability corresponding to each candidate intention;
determining the highest first keyword hit probability from the first keyword hit probabilities corresponding to each candidate intention;
and determining the candidate intention corresponding to the highest first keyword hit probability as the first intention.
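The keyword hit-matching branch above can be illustrated as follows. The disclosure does not specify how the keyword hit probability is computed, so the sketch simply approximates it as the fraction of an intention's keywords found in the dialogue text; this, and the dictionary layout of intent_keywords, are assumptions made for illustration.

```python
def keyword_hit_intent(dialogue: str, intent_keywords: dict) -> str:
    """Pick the candidate intention whose keywords best match the dialogue.

    intent_keywords maps each candidate intention to a list of keywords;
    the hit probability is approximated as the fraction of that intention's
    keywords that appear in the dialogue text (an assumption).
    """
    best_intent, best_prob = None, -1.0
    for intent, keywords in intent_keywords.items():
        hits = sum(1 for kw in keywords if kw in dialogue)
        prob = hits / len(keywords) if keywords else 0.0
        if prob > best_prob:
            best_intent, best_prob = intent, prob
    # The candidate intention with the highest hit probability is returned
    # as the first intention.
    return best_intent
```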
In some possible embodiments, when the processor 601 invokes the intention classification model to perform intention prediction on the first dialogue information and determine the first intention corresponding to the first dialogue information, the processor 601 is specifically configured to perform the following steps (a sketch of the probability-based selection follows the list):
preprocessing the first dialogue information to obtain a first feature matrix;
calling an intention classification model to perform prediction processing on the first feature matrix, and determining a first prediction probability of the first feature matrix under each candidate intention;
and determining the first intention according to the first prediction probability under each candidate intention.
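A minimal sketch of selecting the first intention from the per-candidate prediction probabilities is given below; the model interface (model.predict returning one probability per candidate intention for the feature matrix) is an assumption for illustration.

```python
import numpy as np

def predict_first_intent(model, feature_matrix: np.ndarray,
                         candidate_intents: list) -> str:
    # Assumed interface: the intention classification model returns one
    # prediction probability per candidate intention.
    probs = model.predict(feature_matrix)
    # The candidate intention with the highest first prediction probability
    # is taken as the first intention.
    return candidate_intents[int(np.argmax(probs))]
```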
In some possible embodiments, when the processor 601 preprocesses the first dialogue information to obtain the first feature matrix, the processor 601 is specifically configured to perform the following steps (a sketch of the pipeline follows the list):
performing word segmentation processing on the first dialogue information to obtain first dialogue information after word segmentation, and mapping the first dialogue information after word segmentation to obtain a first word vector;
calling a convolutional neural network to extract local features of the first word vector to obtain a first local feature vector;
and performing feature extraction on the first local feature vector based on an attention mechanism to obtain a first feature matrix.
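One way to realise this preprocessing pipeline (word segmentation and mapping to word vectors, CNN-based local feature extraction, attention-based feature extraction) is sketched below with PyTorch. The layer sizes, the single Conv1d layer, and the simple additive attention are illustrative choices, not requirements of the disclosure; word segmentation and vocabulary lookup (for example with a Chinese tokenizer such as jieba) are assumed to run beforehand and produce the token_ids tensor.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor(nn.Module):
    """Illustrative sketch: word-segmented dialogue -> word vectors ->
    CNN local features -> attention-weighted first feature matrix."""

    def __init__(self, vocab_size: int, embed_dim: int = 128, conv_dim: int = 64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)   # word vectors
        self.conv = nn.Conv1d(embed_dim, conv_dim,
                              kernel_size=3, padding=1)        # local features
        self.attn = nn.Linear(conv_dim, 1)                     # attention scores

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) indices produced by word segmentation
        vectors = self.embedding(token_ids)                    # (batch, seq, embed)
        local = F.relu(self.conv(vectors.transpose(1, 2))).transpose(1, 2)
        weights = torch.softmax(self.attn(local), dim=1)       # (batch, seq, 1)
        return weights * local     # attention-weighted first feature matrix
```

The output of such a module would then be fed to the intention classification model described above, for example a linear layer followed by a softmax over the candidate intentions.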
In some possible embodiments, when the processor 601 determines the target scene service according to the N first scene services and the M second scene services, the processor 601 is specifically configured to perform the following steps (a sketch of the intersection-based selection follows the list):
acquiring the intersection of the N first scene services and the M second scene services;
acquiring a selection instruction for the intersection;
and determining the target scene service from the intersection of the N first scene services and the M second scene services based on the selection instruction.
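The intersection-and-selection strategy can be sketched as follows. Modelling the selection instruction as an index into the intersection is an assumption made for illustration; in practice it could be any user input that identifies one candidate.

```python
def select_from_intersection(first_services, second_services, selection_index: int):
    """Intersect the two candidate sets, then apply a selection instruction
    (modelled here as an index into the intersection)."""
    intersection = [s for s in first_services if s in second_services]
    if not intersection:
        return None
    # The selection instruction is assumed to identify one entry of the
    # intersection, e.g. the option the user picked in the dialogue UI.
    return intersection[min(selection_index, len(intersection) - 1)]
```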
In some possible embodiments, when the processor 601 determines the target scene service according to the N first scene services and the M second scene services, the processor 601 is specifically configured to perform the following steps (a sketch of the attention-based selection follows the list):
acquiring an attention model;
and calling an attention model to process the N first scene services and the M second scene services, and determining a target scene service.
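The attention-model strategy might look like the sketch below. A simple dot-product attention over embeddings of the N + M candidate services is used here as a stand-in, since the disclosure does not fix the attention model's architecture; the query vector (e.g. a joint embedding of the two intentions) and the embedding matrix are assumptions.

```python
import numpy as np

def attention_select(service_embeddings: np.ndarray, query: np.ndarray,
                     services: list):
    """Score the N + M candidate scene services against a query vector and
    return the highest-weighted one (dot-product attention as a stand-in)."""
    scores = service_embeddings @ query            # shape: (N + M,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax attention weights
    return services[int(np.argmax(weights))]
```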
In some possible embodiments, after determining the target scene service according to the N first scene services and the M second scene services, the processor 601 is further configured to perform the following steps (a simplified sketch follows the list):
verifying the target scene service, and if the verification passes, performing consensus verification on the target scene service through a consensus node in the block chain network;
and if the consensus verification is passed, packaging the target scene service into a block, and writing the block into a block chain.
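As a simplified simulation of this block-chain step (not a real consensus protocol or block-chain client), the flow of consensus verification, packaging the verified target scene service into a block, and appending the block to a chain might look like this; the majority-vote consensus and the in-memory chain are assumptions made only to illustrate the sequence of operations.

```python
import hashlib
import json
import time

def write_service_to_chain(target_service: dict, chain: list,
                           consensus_votes: list) -> bool:
    """Simplified simulation: if a majority of consensus nodes approve the
    verified target scene service, package it into a block and append the
    block to an in-memory chain."""
    if sum(consensus_votes) <= len(consensus_votes) / 2:
        return False                      # consensus verification failed
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {
        "timestamp": time.time(),
        "payload": target_service,        # the target scene service
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)                   # write the block into the chain
    return True
```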
In the embodiment of the application, first dialogue information in a multi-round dialogue and second dialogue information associated with the first dialogue information can be acquired; an intention classification model is invoked to perform intention prediction on the first dialogue information and the second dialogue information respectively, so as to determine a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information; N first scene services associated with the first intention and M second scene services associated with the second intention are acquired; and a target scene service is determined according to the N first scene services and the M second scene services and is output. In other words, to obtain the target scene service, the intention classification model is invoked to predict the intention of the dialogue information, the scene services associated with that intention are acquired, and the target scene service to be pushed is determined based on those scene services. This approach adapts to user requirements, intelligently outputs the target scene service to the user, and improves the user experience. Because the intention classification model first determines the intention and the associated scene services are then derived from the intention, the classification task does not have to handle an excessive number of candidates, the long-tail phenomenon is avoided, and maintenance is easier. In addition, since the first dialogue information and the second dialogue information are associated with each other, the target scene service determined based on the first scene services corresponding to the first dialogue information and the second scene services corresponding to the second dialogue information is more accurate.
It should be noted that the present application also provides a computer program product or a computer program, where the computer program product or the computer program includes computer instructions stored in a computer-readable storage medium. The processor of the scene service recommendation device reads the computer instructions from the computer-readable storage medium and executes them, so that the scene service recommendation device performs the steps shown in fig. 2 or fig. 4 of the scene service recommendation method embodiments described above.
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
Claims (10)
1. A method for recommending scene services, the method comprising:
acquiring first dialogue information in a plurality of rounds of dialogues and second dialogue information related to the first dialogue information;
calling an intention classification model to respectively carry out intention prediction on the first dialogue information and the second dialogue information, and determining a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information;
acquiring N first scene services associated with the first intention and acquiring M second scene services associated with the second intention; wherein N and M are positive integers;
and determining a target scene service according to the N first scene services and the M second scene services, and outputting the target scene service.
2. The method of claim 1, wherein invoking the intention classification model to perform intention prediction on the first dialogue information and determining the first intention corresponding to the first dialogue information comprises:
performing hit matching on the first dialogue information and the keywords corresponding to each candidate intention respectively to obtain a first keyword hit probability corresponding to each candidate intention;
determining the highest first keyword hit probability from the first keyword hit probabilities corresponding to each candidate intention;
and determining the candidate intention corresponding to the highest first keyword hit probability as the first intention.
3. The method of claim 1, wherein invoking the intention classification model to perform intention prediction on the first dialogue information and determining the first intention corresponding to the first dialogue information comprises:
preprocessing the first dialogue information to obtain a first feature matrix;
calling the intention classification model to carry out prediction processing on the first feature matrix, and determining a first prediction probability of the first feature matrix under each candidate intention;
determining the first intent according to a first predicted probability under each of the candidate intents.
4. The method of claim 3, wherein preprocessing the first dialogue information to obtain a first feature matrix comprises:
performing word segmentation processing on the first dialogue information to obtain first dialogue information after word segmentation, and mapping the first dialogue information after word segmentation to obtain a first word vector;
calling a convolutional neural network to extract local features of the first word vector to obtain a first local feature vector;
and performing feature extraction on the first local feature vector based on an attention mechanism to obtain the first feature matrix.
5. The method of any of claims 1-4, wherein the determining a target scene service according to the N first scene services and the M second scene services comprises:
acquiring the intersection of the N first scene services and the M second scene services;
acquiring a selection instruction aiming at the intersection;
and determining the target scene service from the intersection of the N first scene services and the M second scene services based on the selection instruction.
6. The method of any of claims 1-4, wherein the determining a target scene service according to the N first scene services and the M second scene services comprises:
acquiring an attention model;
and calling the attention model to process the N first scene services and the M second scene services, and determining the target scene service.
7. The method according to any of claims 1-4, wherein after determining the target scene service according to the N first scene services and the M second scene services, the method further comprises:
verifying the target scene service, and if the verification passes, performing consensus verification on the target scene service through a consensus node in a block chain network;
and if the consensus verification is passed, packaging the target scene service into a block, and writing the block into a block chain.
8. A scene service recommendation device, the device comprising:
an acquisition unit, configured to acquire first dialogue information in a plurality of rounds of dialogue and second dialogue information associated with the first dialogue information;
an intention prediction unit, configured to invoke an intention classification model to perform intention prediction on the first dialogue information and the second dialogue information respectively, and to determine a first intention corresponding to the first dialogue information and a second intention corresponding to the second dialogue information;
the acquisition unit is further configured to acquire N first scene services associated with the first intention and M second scene services associated with the second intention; wherein N and M are positive integers;
and a determining unit, configured to determine a target scene service according to the N first scene services and the M second scene services, and output the target scene service.
9. A scene service recommendation device, comprising a processor and a memory, wherein the memory is configured to store a computer program, and the processor is configured to invoke the computer program to perform the scene service recommendation method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the scene service recommendation method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111149885.8A CN113869068A (en) | 2021-09-29 | 2021-09-29 | Scene service recommendation method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111149885.8A CN113869068A (en) | 2021-09-29 | 2021-09-29 | Scene service recommendation method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113869068A true CN113869068A (en) | 2021-12-31 |
Family
ID=78992513
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111149885.8A Pending CN113869068A (en) | 2021-09-29 | 2021-09-29 | Scene service recommendation method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113869068A (en) |
2021-09-29 CN CN202111149885.8A patent/CN113869068A/en active Pending
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114996421A (en) * | 2022-05-20 | 2022-09-02 | 阿里巴巴(中国)有限公司 | Customer service information processing method, device and equipment |
CN115809669A (en) * | 2022-12-30 | 2023-03-17 | 联通智网科技股份有限公司 | Conversation management method and electronic equipment |
CN115809669B (en) * | 2022-12-30 | 2024-03-29 | 联通智网科技股份有限公司 | Dialogue management method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||