CN109034203B - Method, device, equipment and medium for training expression recommendation model and recommending expression - Google Patents
- Publication number
- CN109034203B (application CN201810695138.6A)
- Authority
- CN
- China
- Prior art keywords
- expression
- log
- model
- expressions
- text information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
- G06F3/0233—Character input methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Embodiments of the invention disclose a method, apparatus, device, and medium for training an expression recommendation model and recommending expressions. The training method comprises: constructing an expression recommendation training sample according to the historical input logs of at least two users, where the expression recommendation training sample comprises text information and an expression corresponding to the text information; and training a set machine learning model with the expression recommendation training sample to obtain the expression recommendation model. The technical scheme of the embodiments improves both expression search efficiency and the generalization of the expression recommendation model in use.
Description
Technical Field
Embodiments of the invention relate to the field of computer applications, and in particular to a method, apparatus, device, and medium for training an expression recommendation model and recommending expressions.
Background
With the development of internet technology, more and more users communicate through chat software or microblogs. In text-based communication, expression images are often used to make the exchanged content more vivid.
At present, mainstream chat software (such as WeChat, QQ, and Baidu Hi), microblogs, and interest communities (such as Baidu Tieba, Douban, Weibo, and Twitter) provide many built-in expressions for users. However, because the built-in expressions are so numerous, with some software offering even thousands, users find it difficult to search through them in practice, and a large number of built-in expressions go unused.
To address this problem, some software builds its expression recommendation model around the source and style of the expressions: the built-in expressions are stored in pages, so in use a user must first select a source and style and only then select a specific expression. Other software or communities build the model by labeling each expression with a specific keyword, so an expression is recommended only when the user types a keyword that matches the expression's name.
The prior art therefore has the following defects. The first kind of expression recommendation model stores expressions of the same style in pages, so the user still has to browse page by page to find a suitable expression, and search efficiency is low. The second kind has no generalization: for an expression labeled with the keyword "thank you", recommendation is triggered only when the user's input text is exactly "thank you", and only the expression bound to that keyword is recommended; when the user instead types a variant such as "thank you very much" or "thanks", no related expression can be recommended.
Disclosure of Invention
Embodiments of the invention provide a method, apparatus, device, and medium for training an expression recommendation model and recommending expressions, so as to improve expression search efficiency and the generalization of the expression recommendation model in use.
In a first aspect, an embodiment of the present invention provides a method for training an expression recommendation model, including:
constructing an expression recommendation training sample according to the historical input logs of at least two users, where the expression recommendation training sample comprises text information and an expression corresponding to the text information;
and training a set machine learning model with the expression recommendation training sample to obtain the expression recommendation model.
In a second aspect, an embodiment of the present invention further provides an expression recommendation method, where the method includes:
acquiring text information input by a user;
inputting the text information together with each of at least two alternative expressions into a pre-trained expression recommendation model, where the inputs of the expression recommendation model are a text and an expression and its output is a result indicating whether the text is related to the expression;
and determining, according to the output of the expression recommendation model for each alternative expression, the associated expressions corresponding to the text information among the alternative expressions, and providing them to the user.
In a third aspect, an embodiment of the present invention further provides a training device for an expression recommendation model, including:
a sample construction module, configured to construct an expression recommendation training sample according to the historical input logs of at least two users, where the expression recommendation training sample comprises text information and an expression corresponding to the text information;
and a model training module, configured to train a set machine learning model with the expression recommendation training sample to obtain the expression recommendation model.
In a fourth aspect, an embodiment of the present invention further provides an expression recommendation apparatus, where the apparatus includes:
the information acquisition module is used for acquiring text information input by a user;
an expression input module, configured to input the text information together with each of at least two alternative expressions into a pre-trained expression recommendation model, where the inputs of the expression recommendation model are a text and an expression and its output is a result indicating whether the text is related to the expression;
and an expression determining module, configured to determine, according to the output of the expression recommendation model for each alternative expression, the associated expression corresponding to the text information among the alternative expressions and provide it to the user.
In a fifth aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the training method for an expression recommendation model or the expression recommendation method according to the embodiments of the present invention.
In a sixth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the training method for an expression recommendation model or the expression recommendation method according to the embodiments of the present invention.
According to the embodiments of the invention, expression recommendation training samples are constructed from the historical input logs of at least two users, and a set machine learning model is trained with these samples to obtain the expression recommendation model. Because the constructed samples contain text information extracted from the historical input logs together with the corresponding expressions, the relevance between text information and expressions can be learned while training the machine learning model. The resulting expression recommendation model can therefore recommend the most appropriate expression to the user in real time according to the semantics of the user's input text, improving both expression search efficiency and the model's generalization in use.
Drawings
Fig. 1a is a schematic flowchart of a training method for an expression recommendation model according to an embodiment of the present invention;
FIG. 1b is a schematic diagram of a machine learning model suitable for use in embodiments of the present invention;
fig. 2a is a schematic flowchart of a training method for an expression recommendation model according to a second embodiment of the present invention;
FIG. 2b is a schematic diagram of an expression recommendation training process applicable to a second embodiment of the present invention;
fig. 3 is a schematic flowchart of an expression recommendation method according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a training device for an expression recommendation model according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of an expression recommendation device according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer device according to a sixth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described herein merely illustrate the invention and do not limit it. It should further be noted that, for convenience of description, the drawings show only the structures related to the present invention rather than all structures.
Before discussing the exemplary embodiments in more detail, it should be noted that some of them are described as processes or methods depicted as flowcharts. Although a flowchart may describe operations (or steps) as a sequential process, many of the operations can be performed in parallel or concurrently, and the order of the operations may be rearranged. A process may be terminated when its operations are completed but may also have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1a is a schematic flowchart of a training method for an expression recommendation model according to a first embodiment of the present invention. This embodiment is applicable to training an expression recommendation model. The method may be executed by the training apparatus for an expression recommendation model provided by an embodiment of the present invention; the apparatus may be implemented in software and/or hardware and is generally integrated in a server. As shown in fig. 1a, the method of this embodiment specifically includes:
s110, constructing an expression recommendation training sample according to historical input logs of at least two users, wherein the expression recommendation training sample comprises: text information, and an expression corresponding to the text information.
In this embodiment, the history input log includes, but is not limited to, historical chat data of at least two users recorded in the system, blogs published by the users, contents of user comments, and the like. Correspondingly, the system can be chat software, also can be micro blogs, also can be interest communities and the like. Specifically, the effective texts and the corresponding expression information thereof can be extracted from the obtained historical input logs of all the users to construct an expression recommendation training sample. The corresponding relation between the text and the expression can be positive correlation or negative correlation, for example, the text is that "i have a happy feeling and see you" is positive correlation with the smiling face expression, and is negative correlation with the crying face expression. Correspondingly, the constructed expression recommendation training samples can be positive example samples or negative example samples.
The expression recommendation training sample for training the machine learning model is constructed according to the historical input log of the user, and the expression recommendation training sample has the advantages that multiple text expression modes corresponding to the same expression can be used for training the model, so that when similar texts are input, the expression recommendation model can perform corresponding expression recommendation, the flexibility of the corresponding relation between the input text and the expression recommendation is improved, and the use generalization of the expression recommendation model is improved.
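The sample-construction step described above can be sketched minimally in Python. The log record shape, expression library, and negative-sampling count below are all illustrative assumptions, since the patent does not fix a log format:

```python
import random

# Hypothetical expression library; the real library is software-specific.
EXPRESSION_LIBRARY = ["smile", "cry", "handshake", "thumbs_up"]

def build_samples(reference_logs, num_negatives=1, seed=0):
    """Build (text, expression, label) training samples.

    Each log pairs text with the expression actually used (label 1,
    positive correlation) and with randomly drawn other expressions
    from the library (label 0, negative correlation).
    """
    rng = random.Random(seed)
    positives, negatives = [], []
    for text, expression in reference_logs:
        positives.append((text, expression, 1))
        others = [e for e in EXPRESSION_LIBRARY if e != expression]
        for neg in rng.sample(others, num_negatives):
            negatives.append((text, neg, 0))
    return positives, negatives

# Assumed minimal log records: (text, expression) pairs.
logs = [("so happy to see you", "smile"),
        ("great working with you", "handshake")]
pos, neg = build_samples(logs)
```

Note that "I am very happy to see you" and "so happy to see you" would both yield positive samples for the smiling-face expression, which is exactly the source of the model's generalization.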
S120, training the set machine learning model with the expression recommendation training samples to obtain the expression recommendation model.
The set machine learning model in this embodiment may be a training model built on a set machine learning algorithm, which includes, but is not limited to, a Recurrent Neural Network (RNN), a Convolutional Neural Network (CNN), and the like. Specifically, the expressions in the training samples can be used as labels for the text information, and the machine learning model can be trained with a preset algorithm such as Back Propagation (BP). Training the model is a process of adjusting its parameters: through continuous training, optimal model parameters are obtained, and the model with these optimal parameters is the model finally sought.
Illustratively, after a number of expression recommendation training samples are obtained, the set machine learning model is trained with the BP algorithm: the classification loss over the samples is optimized and the model parameters are continuously adjusted until the machine learning model can judge the relevance of an input text and an expression, thereby yielding the expression recommendation model.
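The parameter-adjustment loop described above can be illustrated with a toy stand-in: a pure-Python logistic classifier trained by gradient descent to minimize log loss. The real model is the RNN/CNN hybrid of fig. 1b; the features, dimensions, and hyperparameters here are simplified assumptions meant only to show the train-until-discriminative idea:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(samples, epochs=200, lr=0.5, dim=2):
    """Toy stand-in for BP training: adjust weights w and bias b
    by the gradient of the log loss until the classifier separates
    related from unrelated pairs."""
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of log loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Hand-made separable data: label 1 iff the first feature dominates
# (standing in for "text and expression are related").
data = [([1.0, 0.0], 1), ([0.0, 1.0], 0),
        ([0.9, 0.1], 1), ([0.1, 0.9], 0)]
w, b = train_logistic(data)
p = sigmoid(sum(wi * xi for wi, xi in zip(w, [1.0, 0.0])) + b)
```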
In an alternative implementation of this embodiment, as shown in fig. 1b, the set machine learning model may include a text terminal model 5, an expression terminal model 6, a first fully connected layer 7 connected to both the text terminal model 5 and the expression terminal model 6, and a classification layer 8 connected to the first fully connected layer 7. The text terminal model 5 specifically includes: at least one word semantic layer 51, a first bidirectional RNN layer 52 connected to the word semantic layer 51, and a first pooling layer 53 connected to the first bidirectional RNN layer 52. The word semantic layer 51 specifically includes: a second bidirectional RNN layer 511, a second pooling layer 512 connected to the second bidirectional RNN layer 511, and a first splicing layer 513 connected to the second pooling layer 512. The expression terminal model 6 specifically includes: a third bidirectional RNN layer 61, a third pooling layer 62 connected to the third bidirectional RNN layer 61, and a second splicing layer 63 connected to the third pooling layer 62. The word semantic layer 51 inputs the word features of the segmented words in the input text information into the first splicing layer 513 and inputs the character features of each character in those words into the second bidirectional RNN layer 511. The expression terminal model 6 inputs the expression identifier and the word features of the expression's word representation into the second splicing layer 63, and inputs the character features of the expression's word representation into the third bidirectional RNN layer 61. The classification layer 8 outputs the result indicating whether the text information is related to the expression.
Illustratively, as shown in fig. 1b, the text terminal model 5 takes text information as input, and the text features used include word features and/or character features. For example, the input text information is "this cooperation is very pleasant". First, the text information is input into the word semantic layer 51. For the word-feature part, a semantic representation layer (i.e., an Embedding layer) produces a word-level semantic representation A; for example, the word "cooperation" is converted into a corresponding word vector. For the character-feature part, another Embedding layer produces a character-level semantic representation; for example, the word "cooperation" is decomposed into its individual characters, which are then converted into corresponding character vectors. The character-feature part then obtains a global representation of each character through the second bidirectional RNN layer 511, splices the forward and backward representations, and applies dimension reduction through the second pooling layer 512 to obtain an overall character-granularity semantic representation B of the word. Next, the first splicing layer 513 splices representation A and representation B, the result is input as a whole into the first bidirectional RNN layer 52 above it, the forward and backward representations are spliced, and the first pooling layer 53 above produces the semantic representation of the entire text information. In this way the model obtains the global representation of each word through the RNN and, using the pooling layer familiar from CNNs, comprehensively models expressions appearing at different positions in the text.
In fig. 1b, the expression terminal model 6 takes as input the expression identifier and the expression's word representation (for example, the ID of the handshake expression image and its corresponding preset word "handshake"). The semantic representation A corresponding to the word feature of the expression's word representation and the semantic representation B corresponding to its character feature are obtained by the same technique described above. In addition, the ID of the expression image (the expression identifier) is used as a new one-dimensional feature and passed through its own Embedding layer to obtain a semantic representation C. The three representations are then input into the second splicing layer 63 and spliced to obtain the semantic representation of the expression.
Finally, the semantic representation of the text information and the semantic representation of the expression are input into the first fully connected layer 7 for splicing and then into the classification layer 8, which determines whether the text information and the expression are related. Of course, those skilled in the art will understand that the first fully connected layer 7 and the classification layer 8 of the set machine learning model in this embodiment may be replaced by another similarity calculation layer, such as a cosine similarity layer: for each sample, the corresponding expression is taken as a positive example, other expressions from the expression library are randomly drawn as negative examples, a paired (Pairwise) training method is used, and cosine similarity models the relationship between a sentence and an expression. Any model structure capable of calculating the similarity between a text and an expression may be used; this embodiment does not limit it.
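The overall scoring scheme, pooling each side into a semantic vector and comparing the two, can be sketched with the cosine-similarity variant just mentioned. The tiny hand-written embeddings below stand in for the learned Embedding/bidirectional-RNN/pooling stack, so all vectors and vocabulary here are illustrative assumptions:

```python
import math

# Toy 3-dimensional "semantic" vectors standing in for learned
# representations produced by the Embedding, RNN, and pooling layers.
EMBED = {
    "happy":     [0.9, 0.1, 0.0],
    "cooperate": [0.1, 0.9, 0.2],
    "handshake": [0.2, 0.8, 0.3],
    "cry":       [0.0, 0.1, 0.9],
}

def mean_pool(vectors):
    """Stand-in for the pooling layers: average the token vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def relevance(text_tokens, expression_word):
    """Score text/expression relevance as the cosine similarity of
    their pooled semantic representations."""
    return cosine(mean_pool([EMBED[t] for t in text_tokens]),
                  EMBED[expression_word])

good = relevance(["happy", "cooperate"], "handshake")
bad = relevance(["happy", "cooperate"], "cry")
```

In Pairwise training, the model parameters would be adjusted so that `good` is pushed above `bad` for every (positive, negative) pair.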
According to the technical scheme of this embodiment, expression recommendation training samples are constructed from the historical input logs of at least two users, and a set machine learning model is trained with these samples to obtain the expression recommendation model. Because the constructed samples contain text information extracted from the historical input logs together with the corresponding expressions, the relevance between text information and expressions can be learned while training the machine learning model, so that the resulting model can recommend the most appropriate expression to the user in real time according to the semantics of the input text, improving both expression search efficiency and the model's generalization in use.
Example two
Fig. 2a is a schematic flowchart of a training method for an expression recommendation model according to a second embodiment of the present invention, which is further refined on the basis of the above embodiment. In this embodiment, constructing an expression recommendation training sample according to the historical input logs of at least two users is further specified as: in the historical input logs, acquiring logs that include expressions as reference logs; constructing positive example samples in the expression recommendation training sample set according to the text information and expressions included in the reference logs; and constructing negative example samples according to the text information included in the reference logs and expressions other than those the reference logs include.
Correspondingly, the method of the embodiment includes:
s210, in the history input log, acquiring a log comprising the expression as a reference log.
The position of the expression in the log may be at the beginning of the log, may be in the middle of the log, or may be at the end of the log, which is not limited herein. Specifically, a log segment containing the emotions in the history input log can be intercepted as a reference log, and other log segments not containing the emotions in the history input log can be deleted. In a practical example, a chat record of a user stored in the system is obtained, and a chat record containing emoticons is extracted as a reference log.
Optionally, before acquiring logs that include expressions as reference logs, the method further includes: if the historical input logs include a log to be merged whose length is smaller than a set threshold, acquiring the log to be merged and the first target user corresponding to it; acquiring a first reference merging log corresponding to the first target user, where the first generation time of the first reference merging log is adjacent to, and earlier than, the second generation time of the log to be merged; and if the time difference between the first generation time and the second generation time satisfies a set time threshold condition, merging the first reference merging log and the log to be merged to obtain a new historical input log.
In practice, a log in the historical input logs may have a very short text, and such a text often fails to reflect the semantic environment. To ensure the validity of the text information when the extracted reference logs are later used to construct training samples, a log with a short text can be merged with the previous log of the same user before the reference logs are acquired. The precondition for merging is that the difference between the generation time of the log and that of the previous log satisfies the set time threshold condition, for example is less than a set duration, which ensures a certain semantic relevance between the two logs. For example, if a record in a user's chat history is a single short word and the difference between its generation time and that of the previous record "just finished lunch" is less than a preset one-minute duration, the record is merged with the previous record.
Optionally, after acquiring logs that include expressions as reference logs, the method further includes: if the reference logs are determined to include an empty-text log containing only an expression, acquiring the second target user corresponding to the empty-text log; acquiring a second reference merging log corresponding to the second target user, where the third generation time of the second reference merging log is adjacent to, and earlier than, the fourth generation time of the empty-text log; and if the time difference between the third generation time and the fourth generation time satisfies the set time threshold condition and the second reference merging log includes text information, merging the second reference merging log and the empty-text log to obtain a new reference log.
After the reference logs are acquired, another special case can occur: an acquired reference log contains only an expression and no text information, so the log cannot represent the textual semantic environment in which the expression was used. To ensure the validity of the text information when the extracted reference logs are later used to construct training samples, an empty-text log containing only an expression can be merged with the previous log, again on the precondition that the difference between the two logs' generation times satisfies the set time threshold condition, for example is less than a set duration, which ensures a certain semantic relevance between them. For example, if a record in a user's chat history is a handshake expression and the difference between its generation time and that of the previous record "this cooperation was very pleasant" is less than a preset one-minute duration, the record is merged with the previous record.
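Both merging rules, for short logs and for expression-only logs, can be sketched together. The record shape, field names, and thresholds below are assumptions, since the patent leaves the log format and threshold values unspecified:

```python
from datetime import datetime, timedelta

MERGE_WINDOW = timedelta(minutes=1)  # assumed "set time threshold"
MIN_LENGTH = 4                        # assumed short-text threshold

def merge_logs(logs):
    """Merge a user's short or expression-only log into the adjacent
    earlier log of the same user when both were generated within the
    time window; otherwise keep the log as a separate record.

    `logs` is a time-ordered list of dicts with the hypothetical keys
    'user', 'time', 'text', 'expressions'.
    """
    merged = []
    for log in logs:
        prev = merged[-1] if merged else None
        too_short = len(log["text"]) < MIN_LENGTH
        empty_text = not log["text"] and log["expressions"]
        if (prev is not None
                and prev["user"] == log["user"]
                and log["time"] - prev["time"] <= MERGE_WINDOW
                and (too_short or empty_text)):
            prev["text"] = (prev["text"] + " " + log["text"]).strip()
            prev["expressions"] += log["expressions"]
        else:
            merged.append({**log, "expressions": list(log["expressions"])})
    return merged

t0 = datetime(2018, 6, 1, 12, 0, 0)
logs = [
    {"user": "u1", "time": t0, "text": "just finished lunch",
     "expressions": []},
    {"user": "u1", "time": t0 + timedelta(seconds=30), "text": "",
     "expressions": ["smile"]},
]
out = merge_logs(logs)
```

The expression-only second record is absorbed into the first, yielding one reference log that pairs "just finished lunch" with the smile expression.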
Optionally, after acquiring logs that include expressions as reference logs, the method further includes: detecting whether the text information in the reference log includes a pre-labeled named entity, and if so, replacing each named entity detected in the text information with an entity tag.
For example, after the reference logs are acquired, word segmentation can be performed on their text information, followed by named entity recognition and anonymization, i.e., replacing each named entity detected in the text information with an entity tag and retaining the anonymized entity tag in the reference log. Word segmentation divides the text information into a combination of words; named entity recognition and anonymization may specifically convert proper nouns such as person names, place names, and times in the reference log into corresponding IDs. For example, the text information "Please ask Zhang San to call Li Si, 13000000000" in a reference log is processed into "Please ask PER0 to call PER1, NUM0". The benefit of this processing is that it reduces the sparsity of proper-noun features in the text information and optimizes the model's effect.
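A minimal sketch of this anonymization step follows, approximating the pre-labeled named entity recognizer with a simple name list and a digit-run pattern (both illustrative assumptions; a real system would use a trained NER model):

```python
import re

# Hypothetical pre-labeled entities; a real recognizer would supply these.
PERSON_NAMES = ["Zhang San", "Li Si"]
NUMBER_PATTERN = re.compile(r"\b\d{5,}\b")  # long digit runs, e.g. phone numbers

def anonymize(text):
    """Replace detected named entities with indexed entity tags
    (PER0, PER1, ..., NUM0, ...), reducing proper-noun sparsity."""
    for i, name in enumerate(PERSON_NAMES):
        text = text.replace(name, f"PER{i}")
    for i, num in enumerate(NUMBER_PATTERN.findall(text)):
        text = text.replace(num, f"NUM{i}", 1)
    return text

out = anonymize("Please ask Zhang San to call Li Si at 13000000000")
```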
S220, constructing a regular sample in the expression recommendation training sample according to the text information and the expressions in the reference log.
Since the text information and the expressions in the reference log are positively correlated, the text information and the expressions in the reference log can be used for constructing a positive example sample in the expression recommendation training sample.
The purpose of constructing the positive example sample is to provide, during model training, samples in which an association relation exists between the text information and the expression, so that positive example training can be performed on the model.
Further, constructing a positive example sample in the expression recommendation training sample according to the text information and the expressions included in the reference log includes: if the reference log includes at least two expressions, constructing at least two positive example samples by pairing each expression with the text information in the reference log.
As a specific example, if a reference log contains both a happy expression and a handshake expression together with the text information "happy to cooperate with you", two positive example samples can be constructed from the reference log: "happy to cooperate with you" paired with the happy expression, and "happy to cooperate with you" paired with the handshake expression.
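The one-pair-per-expression rule can be sketched in a few lines; the dict log format is an assumption for the example:

```python
def build_positive_samples(reference_log):
    """One positive (text, expression) pair per expression in the reference log."""
    text = reference_log["text"]
    return [(text, expr) for expr in reference_log["expressions"]]

log = {"text": "happy to cooperate with you",
       "expressions": ["happy", "handshake"]}
print(build_positive_samples(log))
# → [('happy to cooperate with you', 'happy'), ('happy to cooperate with you', 'handshake')]
```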
And S230, constructing a negative example sample in the expression recommendation training sample according to the text information included in the reference log and other expressions except the expression included in the reference log.
Since the text information included in the reference log is negatively correlated with expressions other than those included in the reference log, the text information in the reference log, together with other expressions selected from a preset expression library and not included in the reference log, can be used to construct the negative example samples in the expression recommendation training sample.
The purpose of constructing the negative example sample is to provide, corresponding to the positive example sample, samples in which no association relation exists between the text information and the expression, so that negative example training can be performed on the model. By performing both positive and negative training on the model, the output results corresponding to positive samples and negative samples can be separated as far as possible, the loss function in the machine learning model can be better optimized, and the relevance judgment accuracy of the expression recommendation model is ultimately improved.
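The negative-sample construction can be sketched as follows. The expression library contents and the number of negatives drawn per log (`k`) are illustrative assumptions, since the patent does not fix either:

```python
import random

EXPRESSION_LIBRARY = ["happy", "handshake", "sad", "angry", "clap", "rose"]

def build_negative_samples(reference_log, k=2, rng=random):
    """Pair the log's text with k expressions the user did NOT use,
    sampled from the preset expression library."""
    used = set(reference_log["expressions"])
    candidates = [e for e in EXPRESSION_LIBRARY if e not in used]
    return [(reference_log["text"], e) for e in rng.sample(candidates, k)]

log = {"text": "happy to cooperate with you",
       "expressions": ["happy", "handshake"]}
print(build_negative_samples(log, k=2, rng=random.Random(0)))
```

Each resulting pair carries a text with an expression that did not co-occur with it, which is exactly the non-associated sample the negative training needs.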
S240, training the set machine learning model by using the expression recommendation training sample to obtain an expression recommendation model.
According to the technical scheme, a log including an expression is obtained from the historical input log as a reference log, and the positive and negative samples in the expression recommendation training sample are constructed from the text information and the expression in the reference log and from other expressions not included in the reference log, so that during training of the set machine learning model the output results corresponding to positive and negative samples can be separated as far as possible; that is, the loss function in the machine learning model can be better optimized, and the relevance judgment accuracy of the expression recommendation model is ultimately improved.
On the basis of the above embodiments, the trained expression recommendation model may be obtained through the expression recommendation training process shown in fig. 2b. As shown in fig. 2b, chat data, published blog posts and comment data of all users are obtained from the online system; texts are extracted from the data and then segmented and anonymized; the expressions used in the texts are extracted to serve as labels for the remaining texts; and the model is trained on these labelled texts to finally obtain the expression recommendation model.
EXAMPLE III
Fig. 3 is a schematic flow chart of an expression recommendation method according to a third embodiment of the present invention. This embodiment is applicable to expression recommendation while a user inputs text. The method of this embodiment may be executed by the expression recommendation device provided by the third embodiment of the present invention; the device may be implemented in software and/or hardware and may generally be integrated in an expression server. As shown in fig. 3, the method of this embodiment specifically includes:
and S310, acquiring text information input by a user.
In this embodiment, the text information input by the user includes, but is not limited to, characters input while using software such as chat software, interest communities and microblogs. For example, when the user uses chat software, the characters input by the user in the dialog box are acquired in real time, or are acquired when the user clicks the emoticon button.
Of course, it can be understood by those skilled in the art that the manner of acquiring the text information input by the user differs according to the software used, and the embodiment is not limited thereto.
And S320, respectively inputting the text information and the at least two candidate expressions into a pre-trained expression recommendation model, wherein the input of the expression recommendation model is a text and an expression, and the output of the expression recommendation model is a result of whether the text is related to the expression.
For example, a plurality of candidate expressions may be preset in an expression library, and when text information is acquired, each candidate expression and the text information are sequentially input into a pre-trained expression recommendation model for recognition. The expression input into the expression recommendation model can comprise an expression identifier and expression word representation thereof. For example, the ID corresponding to the handshake expression image and the corresponding preset word "handshake". The training method of the expression recommendation model may adopt the training method of the expression recommendation model described in the above embodiments, and details are not repeated herein.
Specifically, before the text information is input into the expression recommendation model, the text information may be preprocessed, where the preprocessing may include word segmentation, named entity recognition and anonymization, that is, each named entity detected in the text information is replaced with an entity tag, and the anonymized entity tag is retained in the text information. Word segmentation divides the text information into a combination of several words, and named entity recognition and anonymization may specifically include converting proper nouns such as person names, place names, and times in the text information into corresponding IDs. The advantage of this processing is that the sparsity of proper-noun features in the text information can be reduced, and the model recommendation effect is improved.
S330, determining, according to the output result of the expression recommendation model for each alternative expression, the associated expressions corresponding to the text information among the alternative expressions, and providing the associated expressions to the user.
In this embodiment, after each alternative expression and the same acquired text information are input to the expression recommendation model, an output result corresponding to each alternative expression is obtained, so as to represent whether the alternative expression is associated with the acquired text information. For example, all the alternative expressions can be classified into two types according to the output result of the expression recommendation model for each alternative expression, one type is associated expression, and the other type is non-associated expression, and the associated expressions are recommended to the user.
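The recommendation loop above can be sketched as follows. The model here is a hand-written stand-in returning an associated/non-associated decision, since the trained model itself is outside this example:

```python
def recommend(text, candidates, model):
    """Run each (text, expression) pair through the model and keep the
    expressions the model classifies as associated with the text."""
    return [expr for expr in candidates if model(text, expr)]

# Stand-in model for illustration only: it associates the handshake
# expression with cooperation-related text.
def toy_model(text, expr):
    return expr == "handshake" and "cooperate" in text

print(recommend("happy to cooperate with you",
                ["handshake", "sad", "rose"], toy_model))
# → ['handshake']
```

In the real system the stand-in would be replaced by the trained expression recommendation model, with each expression passed in as its identifier plus word representation as described above.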
As a practical example, the associated expressions can be provided to the user through a real-time expression recommendation mode in the input method for the user to select; the associated expressions can also be placed in the emoticon panel of software such as chat software, interest communities and microblogs, so that when the user clicks the emoticon button, the expressions recommended for the currently input text can be viewed.
Of course, it is understood that other ways of recommending associated expressions, which can be distinguished from common expressions, may be used in the present embodiment, and are not limited herein.
Optionally, determining, from the alternative expressions, the associated expression corresponding to the text information and providing the associated expression to the user includes: providing the associated expression in a set page in an expression bar associated with the current input operation of the user.
For example, a page can be added to the expression bar of the text input interface, without changing the positions of the original expressions, to serve as a recommended expression group. The system performs relevance prediction on the text currently input by the user and places the associated expressions in this page to recommend them to the user, where the associated expressions in the page can be arranged in descending order of relevance to facilitate the user's selection.
The advantage of providing the associated expressions in this page is that the user can conveniently browse the recommended expressions without the positions of the original expressions being affected: the user can quickly find the expressions recommended by the system in real time, and can also quickly find other, non-recommended expressions at their original positions, which improves user experience.
Optionally, after determining, according to the output result of the expression recommendation model for each candidate expression, an associated expression corresponding to the text information in the candidate expressions and providing the associated expression to the user, the method further includes: and constructing a new expression recommendation training sample to retrain the expression recommendation model according to the selection of the user on the provided associated expressions.
In order to update the model parameters in the expression recommendation model in real time and enable the expression recommended by the expression recommendation model to be more personalized, a new expression recommendation training sample can be constructed by the selection of the provided associated expression and the corresponding text information input by the user, the expression recommendation model is trained in real time, a data closed loop is formed, and the recommendation effect of the expression recommendation model is continuously improved. The expression recommendation training samples in the embodiments may be constructed by the method described in the above embodiments, and details are not repeated herein.
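One possible way to turn user selections into new training samples is sketched below. Treating shown-but-unselected recommendations as negatives is an assumed labelling rule for the example, not stated explicitly in the patent:

```python
def samples_from_feedback(text, recommended, chosen):
    """Build (text, expression, label) triples from one round of recommendations:
    expressions the user selected become positives (label 1), and the
    recommendations the user skipped become negatives (label 0)."""
    positives = [(text, e, 1) for e in recommended if e in chosen]
    negatives = [(text, e, 0) for e in recommended if e not in chosen]
    return positives + negatives

print(samples_from_feedback("happy to cooperate with you",
                            recommended=["handshake", "rose"],
                            chosen={"handshake"}))
```

Feeding such triples back into training is what closes the data loop and lets the recommendations adapt to the individual user over time.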
The embodiment of the invention provides an expression recommendation method, in which acquired text information input by a user and at least two alternative expressions are respectively input into a pre-trained expression recommendation model to obtain an output result for each alternative expression, and the associated expressions corresponding to the text information are then determined among the alternative expressions according to the output results and provided to the user. This avoids the user having to search for a needed expression among a large number of built-in expressions, improves the diversity of input texts, and enables the most appropriate expressions to be recommended to the user in real time from the large number of built-in expressions whenever the user inputs any text, which improves expression search efficiency, reduces the user's cost of using expressions, and improves user experience.
Example four
Fig. 4 is a schematic structural diagram of a training device for an expression recommendation model according to a fourth embodiment of the present invention, and as shown in fig. 4, the device includes: a sample construction module 410 and a model training module 420.
A sample construction module 410, configured to construct an expression recommendation training sample according to historical input logs of at least two users, where the expression recommendation training sample includes: the method comprises the following steps of text information and expressions corresponding to the text information;
and the model training module 420 is configured to train a set machine learning model by using the expression recommendation training sample to obtain the expression recommendation model.
The embodiment of the invention provides a training device for an expression recommendation model, which is characterized in that an expression recommendation training sample is constructed by using historical input logs of at least two users, and a set machine learning model is trained by using the expression recommendation training sample to obtain the expression recommendation model. Because the constructed expression recommendation training sample contains the text information extracted from the historical input log and the corresponding expression thereof, the relevance between the text information and the expression can be learned in the process of training the machine learning model by using the sample, so that the most appropriate expression can be recommended to the user in real time according to the semantics of the input text of the user when the obtained expression recommendation model is used, and the expression search efficiency and the use generalization of the expression recommendation model are improved.
Further, the sample construction module 410 may include:
a reference log obtaining submodule, configured to obtain, from the history input log, a log including an expression as a reference log;
the positive example sample construction sub-module is used for constructing a positive example sample in the expression recommendation training sample according to the text information and the expressions included in the reference log;
and the negative example sample construction sub-module is used for constructing the negative example sample in the expression recommendation training sample according to the text information included in the reference log and other expressions except the expression included in the reference log.
Further, the sample construction module 410 may further include:
the first user acquisition submodule is used for acquiring a log including expressions in the historical input log as a reference log, and acquiring a first target user corresponding to the log to be merged if the historical input log is determined to include the log to be merged, wherein the text length of the log to be merged is smaller than a set threshold value;
a first log obtaining sub-module, configured to obtain a first reference merged log corresponding to the first target user, where a first generation time of the first reference merged log is adjacent to a second generation time of the log to be merged, and the first generation time is before the second generation time;
and the history log generation submodule is used for merging the first reference merging log and the log to be merged to obtain a new history input log if the time difference between the first generation time and the second generation time meets a set time threshold condition.
Further, the sample construction module 410 may further include:
the second user acquisition submodule is used for acquiring a log including expressions in the historical input log as a reference log, and acquiring a second target user corresponding to a null text log only having expressions if the null text log is determined to be included in the reference log;
a second log obtaining sub-module, configured to obtain a second reference merged log corresponding to the second target user, where a third generation time of the second reference merged log is adjacent to a fourth generation time of the empty text log, and the third generation time is before the fourth generation time;
and the reference log generation submodule is used for merging the second reference merged log and the empty text log to obtain a new reference log if a time difference value between the third generation time and the fourth generation time meets a set time threshold condition and the second reference merged log comprises text information.
Further, the sample construction module 410 may further include:
a named entity detection submodule, configured to, after acquiring a log including an expression from the history input log as a reference log, detect whether text information in the reference log includes a pre-labeled named entity: and if so, replacing the named entity detected in the text information by using an entity tag.
Further, the positive example sample construction submodule may specifically be configured to:
and if the reference log comprises at least two expressions, constructing at least two positive example samples by respectively using the expressions and the text information included in the reference log.
Further, the set machine learning model may include: a text terminal model, an expression terminal model, a first fully connected layer connected to both the text terminal model and the expression terminal model, and a classification layer connected to the first fully connected layer. The text terminal model may specifically include: at least one term semantic layer, a first bidirectional RNN layer connected to the term semantic layer, and a first pooling layer connected to the first bidirectional RNN layer. The term semantic layer may specifically include: a second bidirectional RNN layer, a second pooling layer connected to the second bidirectional RNN layer, and a first splicing layer connected to the second pooling layer. The expression terminal model may specifically include: a third bidirectional RNN layer, a third pooling layer connected to the third bidirectional RNN layer, and a second splicing layer connected to the third pooling layer. The term semantic layer is used for inputting the word feature of each segmented word in the input text information into the first splicing layer, and inputting the character features of the characters in the segmented word into the second bidirectional RNN layer; the expression terminal model is used for inputting the input expression identifier and the word feature of its expression word representation into the second splicing layer, and inputting the character features of the expression word representation into the third bidirectional RNN layer; and the classification layer is used for outputting the result of whether the text information is associated with the expression.
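The two-tower structure above can be illustrated with a minimal numpy sketch. This is not the patent's actual network: plain tanh recurrences stand in for the bidirectional RNN layers (and one weight matrix is shared by both character-level RNNs for brevity), max pooling stands in for the pooling layers, and all dimensions, weights and token embeddings are toy values:

```python
import numpy as np

D = 8                                   # toy hidden size; real dimensions are not specified
rng = np.random.default_rng(0)
TABLE = {}                              # toy embedding table, grown on demand

def embed(tokens):
    for t in tokens:
        if t not in TABLE:
            TABLE[t] = rng.standard_normal(D)
    return np.stack([TABLE[t] for t in tokens])

def bi_rnn(x, W):
    """Minimal bidirectional RNN: a tanh recurrence run forward and backward,
    with the two hidden sequences concatenated per step."""
    def run(seq):
        h, out = np.zeros(D), []
        for v in seq:
            h = np.tanh(W @ np.concatenate([v, h]))
            out.append(h)
        return np.stack(out)
    return np.concatenate([run(x), run(x[::-1])[::-1]], axis=1)

W_char = rng.standard_normal((D, 2 * D))      # char-level biRNN (2nd/3rd RNN layers, shared here)
W_word = rng.standard_normal((D, 4 * D))      # word-level biRNN (1st RNN layer)
W_fc = 0.1 * rng.standard_normal((1, 5 * D))  # first fully connected layer

def term_semantic(word):
    # 2nd biRNN over character features -> 2nd pooling, spliced with the word feature
    char_feat = bi_rnn(embed(list(word)), W_char).max(axis=0)
    return np.concatenate([embed([word])[0], char_feat])

def text_terminal(words):
    terms = np.stack([term_semantic(w) for w in words])
    return bi_rnn(terms, W_word).max(axis=0)      # 1st biRNN -> 1st pooling

def expression_terminal(expr_id, expr_word):
    # 3rd biRNN over the expression word's characters -> 3rd pooling,
    # spliced with the expression identifier embedding
    char_feat = bi_rnn(embed(list(expr_word)), W_char).max(axis=0)
    return np.concatenate([embed([expr_id])[0], char_feat])

def model(words, expr_id, expr_word):
    # splice both towers, apply the fully connected layer and a sigmoid classifier
    z = np.concatenate([text_terminal(words), expression_terminal(expr_id, expr_word)])
    logit = (W_fc @ z).item()
    return 1.0 / (1.0 + np.exp(-logit))           # classification layer: P(associated)

p = model(["happy", "to", "cooperate"], "ID_42", "handshake")
print(round(p, 3))
```

The sketch shows the data flow only: character features pass through the character-level biRNN and pooling, are spliced with word-level features, flow through the word-level biRNN and pooling, and the two tower outputs are spliced before the fully connected and classification layers produce an association probability.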
The training device of the expression recommendation model can execute the training method of the expression recommendation model provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the training method of the expression recommendation model.
EXAMPLE five
Fig. 5 is a schematic structural diagram of an expression recommendation device according to a fifth embodiment of the present invention, and as shown in fig. 5, the device includes: an information acquisition module 510, an expression input module 520, and an expression determination module 530.
An information obtaining module 510, configured to obtain text information input by a user;
the expression input module 520 is configured to input the text information and at least two candidate expressions into a pre-trained expression recommendation model respectively, where the input of the expression recommendation model is a text and an expression, and the output of the expression recommendation model is a result of whether the text is associated with the expression;
and an expression determining module 530, configured to determine, according to the output result of the expression recommendation model for each candidate expression, an associated expression corresponding to the text information in the candidate expressions and provide the associated expression to the user.
The embodiment of the invention provides an expression recommendation device, which inputs acquired text information input by a user and at least two alternative expressions respectively into a pre-trained expression recommendation model to obtain an output result for each alternative expression, and then determines the associated expressions corresponding to the text information among the alternative expressions according to the output results and provides them to the user. This avoids the user having to search for a needed expression among a large number of built-in expressions, improves the diversity of input texts, and enables the most appropriate expressions to be recommended in real time from the large number of built-in expressions whenever the user inputs any text, which improves expression search efficiency, reduces the user's cost of using expressions, and improves user experience.
Further, the expression determining module 530 may be specifically configured to:
and providing the associated expression in a setting page in an expression bar associated with the current input operation of the user.
Further, the expression recommendation device may further include:
and the model retraining module is used for, after the associated expressions corresponding to the text information are determined among the alternative expressions and provided to the user according to the output result of the expression recommendation model for each alternative expression, constructing a new expression recommendation training sample according to the user's selection among the provided associated expressions, so as to retrain the expression recommendation model.
The expression recommendation device can execute the expression recommendation method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the expression recommendation method.
EXAMPLE six
Fig. 6 is a schematic structural diagram of a computer device according to a sixth embodiment of the present invention. FIG. 6 illustrates a block diagram of an exemplary computer device 12 suitable for use in implementing embodiments of the present invention. The computer device 12 shown in FIG. 6 is only an example and should not bring any limitations to the functionality or scope of use of embodiments of the present invention.
As shown in FIG. 6, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, and commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the system memory 28, for example, implementing a training method of an expression recommendation model provided by an embodiment of the present invention. That is, the processing unit implements, when executing the program: constructing an expression recommendation training sample according to historical input logs of at least two users, wherein the expression recommendation training sample comprises: the method comprises the following steps of text information and expressions corresponding to the text information; and training a set machine learning model by using the expression recommendation training sample to obtain the expression recommendation model.
Another example is: the expression recommendation method provided by the embodiment of the invention is realized. Namely, acquiring text information input by a user; inputting the text information and at least two alternative expressions into a pre-trained expression recommendation model respectively, wherein the input of the expression recommendation model is a text and an expression, and the output of the expression recommendation model is a result of whether the text is related to the expression or not; and determining associated expressions corresponding to the text information in the alternative expressions to be provided for the user according to the output results of the expression recommendation model for the alternative expressions.
EXAMPLE seven
The seventh embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for training an expression recommendation model according to all the embodiments of the present invention: that is, the program when executed by the processor implements: constructing an expression recommendation training sample according to historical input logs of at least two users, wherein the expression recommendation training sample comprises: the method comprises the following steps of text information and expressions corresponding to the text information; and training a set machine learning model by using the expression recommendation training sample to obtain the expression recommendation model.
Or, implementing the expression recommendation method provided by all the invention embodiments of the present application: that is, the program when executed by the processor implements: acquiring text information input by a user; inputting the text information and at least two alternative expressions into a pre-trained expression recommendation model respectively, wherein the input of the expression recommendation model is a text and an expression, and the output of the expression recommendation model is a result of whether the text is related to the expression or not; and determining associated expressions corresponding to the text information in the alternative expressions to be provided for the user according to the output results of the expression recommendation model for the alternative expressions.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (14)
1. A method for training an expression recommendation model, comprising:
constructing an expression recommendation training sample according to historical input logs of at least two users, wherein the expression recommendation training sample comprises: text information and expressions corresponding to the text information;
training a set machine learning model by using the expression recommendation training sample to obtain the expression recommendation model;
wherein the set machine learning model comprises: a text terminal model, an expression terminal model, a first fully connected layer connected to both the text terminal model and the expression terminal model, and a classification layer connected to the first fully connected layer; and the output of the expression recommendation model is a result indicating whether the text is associated with the expression.
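The two-tower structure of claim 1 (a text encoder and an expression encoder feeding one fully connected layer and a binary classifier) can be illustrated with a minimal, dependency-free sketch. Nothing here comes from the patent's actual implementation: the encoders below are toy stand-ins for the RNN-based terminal models, and all function names, feature sizes, and weights are hypothetical.

```python
import math

def text_encoder(text_tokens):
    # Toy stand-in for the text terminal model: map tokens to a
    # fixed-size vector (a real model would use the BiRNN stack of claim 7).
    vec = [0.0] * 4
    for tok in text_tokens:
        for i, ch in enumerate(tok):
            vec[i % 4] += ord(ch) / 1000.0
    return vec

def expression_encoder(expression_id):
    # Toy stand-in for the expression terminal model: embed the identifier.
    return [((expression_id * p) % 7) / 7.0 for p in (3, 5, 11, 13)]

def recommend_score(text_tokens, expression_id, weights):
    # First fully connected layer over the concatenated text and expression
    # vectors, followed by a sigmoid classification layer: the output is the
    # probability that the text is associated with the expression.
    features = text_encoder(text_tokens) + expression_encoder(expression_id)
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

score = recommend_score(["so", "happy"], expression_id=42, weights=[0.1] * 8)
```

The key structural point is that both inputs are encoded independently and only interact in the fully connected layer, so candidate expressions can be scored against the same text vector cheaply.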
2. The method of claim 1, wherein constructing an expression recommendation training sample from historical input logs of at least two users comprises:
acquiring a log comprising expressions from the historical input log as a reference log;
constructing a positive example sample in the expression recommendation training sample according to the text information and the expressions in the reference log;
and constructing a negative example sample in the expression recommendation training sample according to the text information included in the reference log and expressions other than the expressions included in the reference log.
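The sample construction of claim 2 pairs each reference log's text with its own expressions (positives) and with expressions it did not contain (negatives). A hedged sketch, with all names and the data layout assumed for illustration:

```python
def build_samples(reference_logs, all_expressions):
    # reference_logs: (text, expressions_used) pairs taken from historical
    # input logs that contain at least one expression.
    positives, negatives = [], []
    for text, used in reference_logs:
        for expr in used:
            positives.append((text, expr, 1))           # positive example
        for expr in sorted(all_expressions - set(used)):
            negatives.append((text, expr, 0))           # negative example
    return positives, negatives

pos, neg = build_samples([("so happy today", ["smile"])],
                         {"smile", "cry", "angry"})
```

In practice the negatives would likely be sampled rather than exhaustively enumerated, since the full expression inventory can be large relative to the expressions in any one log.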
3. The method according to claim 2, wherein before acquiring, as a reference log, a log including an expression from the historical input logs, the method further comprises:
if the historical input logs comprise a log to be merged whose length is smaller than a set threshold, acquiring a first target user corresponding to the log to be merged;
acquiring a first reference merging log corresponding to the first target user, wherein a first generation time of the first reference merging log is adjacent to a second generation time of the log to be merged, and the first generation time is before the second generation time;
and if the time difference between the first generation time and the second generation time meets a set time threshold condition, merging the first reference merging log and the log to be merged to obtain a new historical input log.
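The merge rule of claim 3 — fold a too-short log into the same user's immediately preceding log when the two generation times are close enough — can be sketched as follows. The thresholds and the tuple layout are placeholders, not values from the patent:

```python
def merge_short_logs(user_logs, length_threshold=5, time_threshold=30):
    # user_logs: (generation_time, text) pairs of one user, sorted by time.
    # A short log within time_threshold of the previous log is appended to it.
    merged = []
    for gen_time, text in user_logs:
        if (merged and len(text) < length_threshold
                and gen_time - merged[-1][0] <= time_threshold):
            prev_time, prev_text = merged.pop()
            merged.append((prev_time, prev_text + text))
        else:
            merged.append((gen_time, text))
    return merged

logs = [(0, "see you at the station"), (10, "ok!"), (100, "bye")]
merged = merge_short_logs(logs)
```

Here "ok!" is merged into the preceding log (gap of 10), while "bye" survives as its own entry because its gap of 100 exceeds the time threshold.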
4. The method according to claim 2, wherein after acquiring, as a reference log, a log including an expression from the historical input logs, the method further comprises:
if it is determined that the reference log includes an empty text log having only an expression, acquiring a second target user corresponding to the empty text log;
acquiring a second reference merged log corresponding to the second target user, wherein a third generation time of the second reference merged log is adjacent to a fourth generation time of the empty text log, and the third generation time is before the fourth generation time;
and if the time difference between the third generation time and the fourth generation time meets a set time threshold condition and the second reference merged log comprises text information, merging the second reference merged log and the empty text log to obtain a new reference log.
5. The method according to claim 2, wherein after acquiring, as a reference log, a log including an expression from the historical input logs, the method further comprises:
detecting whether the text information in the reference log comprises a pre-labeled named entity; and if so, replacing the named entity detected in the text information with an entity tag.
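The entity-replacement step of claim 5 normalizes concrete entity mentions to generic tags, so the model learns the pattern rather than individual names. A minimal sketch; the entities and tag strings below are hypothetical, not taken from the patent:

```python
def replace_named_entities(text, entity_tags):
    # entity_tags maps each pre-labeled named entity to its entity tag,
    # e.g. a location name to a generic "[LOCATION]" placeholder.
    for entity, tag in entity_tags.items():
        text = text.replace(entity, tag)
    return text

normalized = replace_named_entities("see you in Beijing",
                                    {"Beijing": "[LOCATION]"})
```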
6. The method of claim 2, wherein constructing the positive example sample of the expression recommendation training samples according to the text information and the expressions included in the reference log comprises:
and if the reference log comprises at least two expressions, constructing at least two positive example samples by pairing each of the expressions with the text information included in the reference log.
7. The method according to any one of claims 1-6, wherein:
the text terminal model specifically comprises: at least one word semantic layer, a first bidirectional RNN layer connected to the word semantic layer, and a first pooling layer connected to the first bidirectional RNN layer;
the word semantic layer specifically comprises: a second bidirectional RNN layer, a second pooling layer connected to the second bidirectional RNN layer, and a first splicing layer connected to the second pooling layer;
the expression terminal model specifically comprises: a third bidirectional RNN layer, a third pooling layer connected to the third bidirectional RNN layer, and a second splicing layer connected to the third pooling layer;
the word semantic layer is used for inputting word features of segmented words in the input text information into the first splicing layer, and inputting character features of the characters in the segmented words into the second bidirectional RNN layer;
the expression terminal model is used for respectively inputting the input expression identifiers and the word features of the textual descriptions of the expressions into the second splicing layer, and inputting the character features of the textual descriptions of the expressions into the third bidirectional RNN layer;
and the classification layer is used for outputting a result indicating whether the text information is associated with the expression.
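The word semantic layer of claim 7 combines a word-level feature with a pooled character-level feature produced by a bidirectional RNN over the word's characters. The sketch below imitates that structure with a toy recurrence in place of a real RNN cell; every function, feature size, and constant here is assumed for illustration only:

```python
def char_birnn_pooled(chars):
    # Toy stand-in for the second bidirectional RNN layer plus the second
    # pooling layer: a decayed running state over the characters in each
    # direction, max-pooled over the steps.
    def run(seq):
        states, state = [], 0.0
        for ch in seq:
            state = 0.5 * state + ord(ch) / 1000.0
            states.append(state)
        return max(states)
    return [run(chars), run(reversed(chars))]

def word_semantic_layer(word, word_feature):
    # First splicing (concatenation) layer: the word-level feature is joined
    # with the pooled character-level feature of the same segmented word.
    return list(word_feature) + char_birnn_pooled(word)

vector = word_semantic_layer("hello", [0.3, 0.7])
```

The expression terminal model of the claim is structurally analogous: the expression identifier and word features of its textual description are spliced with a pooled character-level encoding from a third bidirectional RNN.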
8. An expression recommendation method, comprising:
acquiring text information input by a user;
inputting the text information and at least two alternative expressions into a pre-trained expression recommendation model respectively, wherein the input of the expression recommendation model is a text and an expression, and the output of the expression recommendation model is a result indicating whether the text is associated with the expression;
determining, among the alternative expressions and according to the output results of the expression recommendation model for the alternative expressions, associated expressions corresponding to the text information to provide to the user;
wherein the expression recommendation model is trained using the training method of the expression recommendation model according to any one of claims 1 to 7.
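At inference time (claim 8), the trained model is run once per alternative expression against the same text, and the expressions judged to be associated are kept. A hedged sketch in which `model` is a stand-in callable for one forward pass, and the threshold is an assumed decision rule:

```python
def recommend_expressions(text, alternative_expressions, model, threshold=0.5):
    # model(text, expression) stands in for one forward pass of the trained
    # expression recommendation model; keep the expressions it judges to be
    # associated with the text.
    return [expr for expr in alternative_expressions
            if model(text, expr) >= threshold]

toy_model = lambda text, expr: 0.9 if expr == "smile" else 0.1
picked = recommend_expressions("so happy", ["smile", "cry"], toy_model)
```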
9. The method of claim 8, wherein determining, among the alternative expressions, the associated expression corresponding to the text information to provide to the user comprises:
providing the associated expression on a set page in an expression bar associated with a current input operation of the user.
10. The method of claim 8, wherein after determining, according to the output result of the expression recommendation model for each alternative expression, the associated expression corresponding to the text information among the alternative expressions to provide to the user, the method further comprises:
constructing, according to the user's selection of the provided associated expression, a new expression recommendation training sample to retrain the expression recommendation model.
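The feedback loop of claim 10 can be sketched as turning the user's reaction into fresh labeled samples: selected expressions become positive examples, provided-but-ignored ones become negatives. The function name and tuple layout are assumptions, not the patent's:

```python
def feedback_samples(text, provided_expressions, selected_expressions):
    # Expressions the user actually selected become positive examples;
    # provided-but-ignored ones become negatives, ready for retraining.
    return [(text, expr, 1 if expr in selected_expressions else 0)
            for expr in provided_expressions]

samples = feedback_samples("so happy", ["smile", "cry"], {"smile"})
```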
11. A training device for an expression recommendation model, comprising:
a sample construction module, configured to construct an expression recommendation training sample according to historical input logs of at least two users, wherein the expression recommendation training sample comprises: text information and expressions corresponding to the text information;
a model training module, configured to train a set machine learning model by using the expression recommendation training sample to obtain the expression recommendation model;
wherein the set machine learning model comprises: a text terminal model, an expression terminal model, a first fully connected layer connected to both the text terminal model and the expression terminal model, and a classification layer connected to the first fully connected layer; and the output of the expression recommendation model is a result indicating whether the text is associated with the expression.
12. An expression recommendation device, comprising:
an information acquisition module, configured to acquire text information input by a user;
an expression input module, configured to input the text information and at least two alternative expressions into a pre-trained expression recommendation model respectively, wherein the input of the expression recommendation model is a text and an expression, and the output of the expression recommendation model is a result indicating whether the text is associated with the expression;
an expression determining module, configured to determine, among the alternative expressions and according to the output results of the expression recommendation model for the alternative expressions, associated expressions corresponding to the text information to provide to the user;
wherein the expression recommendation model is trained using the training method of the expression recommendation model according to any one of claims 1 to 7.
13. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements a method of training an expression recommendation model according to any one of claims 1-7 or a method of expression recommendation according to any one of claims 8-10 when executing the program.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements a method of training an expression recommendation model according to any one of claims 1 to 7 or a method of expression recommendation according to any one of claims 8 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810695138.6A CN109034203B (en) | 2018-06-29 | 2018-06-29 | Method, device, equipment and medium for training expression recommendation model and recommending expression |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109034203A CN109034203A (en) | 2018-12-18 |
CN109034203B true CN109034203B (en) | 2021-04-30 |
Family
ID=65520913
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810695138.6A Active CN109034203B (en) | 2018-06-29 | 2018-06-29 | Method, device, equipment and medium for training expression recommendation model and recommending expression |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109034203B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109977409A (en) * | 2019-03-28 | 2019-07-05 | 北京科技大学 | A kind of intelligent expression recommended method and system based on user's chat habit |
CN110163220B (en) * | 2019-04-26 | 2024-08-13 | 腾讯科技(深圳)有限公司 | Picture feature extraction model training method and device and computer equipment |
CN110163121B (en) * | 2019-04-30 | 2023-09-05 | 腾讯科技(深圳)有限公司 | Image processing method, device, computer equipment and storage medium |
CN113010023A (en) * | 2019-12-20 | 2021-06-22 | 北京搜狗科技发展有限公司 | Information recommendation method and device and electronic equipment |
CN111695357B (en) * | 2020-05-28 | 2024-11-01 | 平安科技(深圳)有限公司 | Text labeling method and related product |
CN112231605A (en) * | 2020-10-09 | 2021-01-15 | 北京三快在线科技有限公司 | Information display method and device |
CN112364241A (en) * | 2020-10-27 | 2021-02-12 | 北京五八信息技术有限公司 | Information pushing method and device, electronic equipment and storage medium |
CN112860979B (en) * | 2021-02-09 | 2024-03-26 | 北京达佳互联信息技术有限公司 | Resource searching method, device, equipment and storage medium |
CN114792097B (en) * | 2022-05-14 | 2022-12-06 | 北京百度网讯科技有限公司 | Method and device for determining prompt vector of pre-training model and electronic equipment |
CN114997251B (en) * | 2022-08-02 | 2022-11-08 | 中国农业科学院农业信息研究所 | Mixed gas identification method and system based on gas sensor array |
CN117076702B (en) * | 2023-09-14 | 2023-12-15 | 荣耀终端有限公司 | Image searching method and electronic equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104933113A (en) * | 2014-06-06 | 2015-09-23 | 北京搜狗科技发展有限公司 | Expression input method and device based on semantic understanding |
US9767094B1 (en) * | 2016-07-07 | 2017-09-19 | International Business Machines Corporation | User interface for supplementing an answer key of a question answering system using semantically equivalent variants of natural language expressions |
CN107818160A (en) * | 2017-10-31 | 2018-03-20 | 上海掌门科技有限公司 | Expression label updates and realized method, equipment and the system that expression obtains |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||