CN113254711B - Interactive image display method and device, computer equipment and storage medium - Google Patents
Interactive image display method and device, computer equipment and storage medium
- Publication number
- CN113254711B (application number CN202110728078.5A)
- Authority
- CN
- China
- Prior art keywords
- interactive
- content
- image
- text
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F16/74 — Information retrieval of video data; Browsing; Visualisation therefor
- G06F16/75 — Information retrieval of video data; Clustering; Classification
- G06F16/783 — Retrieval of video data characterised by using metadata automatically derived from the content
- G06F16/7867 — Retrieval of video data characterised by using manually generated metadata, e.g. tags, keywords, comments, title and artist information, user ratings
- G06F18/24323 — Pattern recognition; Classification techniques; Tree-organised classifiers
- G06F40/289 — Handling natural language data; Phrasal analysis, e.g. finite state techniques or chunking
- G06V10/44 — Image or video recognition; Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
- G06V10/751 — Image or video pattern matching; Comparing pixel values or feature values having positional relevance, e.g. template matching
Abstract
The embodiments of this application disclose a method and apparatus for displaying an interactive image, a computer device and a storage medium. The method and apparatus can acquire the release content of an application program, the interaction data of a current object with respect to the release content, and the object data of the current object; perform feature extraction processing on the release content, the interaction data and the object data respectively to obtain content features of the release content, interaction features for the release content and object features of the current object; screen, from a plurality of preset interactive images in the application program, at least one target interactive image that satisfies a preset image similarity condition according to the content features and the interaction features; sequence the at least one target interactive image to obtain sequenced interactive images; and display the sequenced interactive images in the application program. By sequencing the interactive images intelligently, the method and apparatus improve the object's utilization of the application program.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for displaying an interactive image, a computer device, and a storage medium.
Background
With the development of information technology and the rapid growth and popularization of internet technology, application programs on computer devices have flourished. As extensions of a computer device's functions, application programs not only promote the prosperous development of the internet field but also bring convenience to people's daily study and life, of which they have become an indispensable part.
As applications develop, the content published on them becomes more and more rich and varied, and the interactions between objects and applications grow, such as commenting, liking and forwarding. Comments are the interaction that best reveals an object's ideas: when an object comments on published content, it can express its own viewpoint using interactive text, interactive images and the like. However, in current applications the display mode of interactive images is limited; the interactive images can only be displayed in a fixed order, the display order cannot be determined intelligently, and the object's utilization of the application is reduced.
Disclosure of Invention
The embodiments of this application provide a method and apparatus for displaying an interactive image, a computer device and a storage medium, which improve an object's utilization of an application program.
The embodiment of the application provides a display method of an interactive image, which comprises the following steps:
acquiring release content of an application program, interaction data of a current object with respect to the release content, and object data of the current object;
respectively carrying out feature extraction processing on the release content, the interaction data and the object data to obtain content features of the release content, interaction features aiming at the release content and object features of the current object;
screening, from a plurality of preset interactive images in the application program, at least one target interactive image that satisfies a preset image similarity condition according to the content features and the interaction features;
acquiring historical interaction characteristics of a historical object interacting by using the target interaction image and historical object characteristics of the historical object;
sequencing the at least one target interactive image according to the historical interactive features and the historical object features of the target interactive image and the object features of the current object to obtain a sequenced interactive image;
and displaying the sequenced interactive images on the application program.
Correspondingly, the embodiment of the present application further provides a display device of an interactive image, including:
the first acquisition unit is used for acquiring the release content of an application program, the interaction data of a current object with respect to the release content, and the object data of the current object;
the feature extraction processing unit is used for respectively performing feature extraction processing on the release content, the interaction data and the object data to obtain content features of the release content, the interaction features aiming at the release content and object features of the current object;
the screening unit is used for screening, from a plurality of preset interactive images in the application program, at least one target interactive image that satisfies a preset image similarity condition according to the content features and the interaction features;
the second acquisition unit is used for acquiring historical interaction characteristics of historical objects which interact by using the target interaction image and historical object characteristics of the historical objects;
the sequencing unit is used for sequencing the at least one target interaction image according to the historical interaction characteristics and the historical object characteristics of the target interaction image and the object characteristics of the current object to obtain a sequenced interaction image;
and the display unit is used for displaying the sequenced interactive images on the application program.
In one embodiment, the feature extraction processing unit includes:
the information identification subunit is configured to perform information identification processing on the release content, the interactive data, and the object data, respectively, to obtain a type of the release content, a type of the interactive data, and a type of the object data;
the release content feature extraction subunit is used for performing feature extraction processing on the release content according to the type of the release content to obtain the content features;
the interactive data feature extraction subunit is used for performing feature extraction processing on the interactive data according to the type of the interactive data to obtain the interactive features;
and the object data feature extraction subunit is used for performing feature extraction processing on the object data according to the type of the object data to obtain the object features.
In one embodiment, the release content feature extraction subunit includes:
the feature extraction module is used for performing feature extraction processing on the text release content on a plurality of different dimensions to obtain text features output by each dimension;
the screening module is used for screening out at least one target text characteristic which meets a preset accurate threshold value from the plurality of text characteristics;
and the splicing module is used for splicing the at least one target text characteristic to obtain the content characteristic of the release content.
In one embodiment, the feature extraction module includes:
the word segmentation processing submodule is used for carrying out word segmentation processing on the text release content to obtain a plurality of text characters of the text release content;
the characteristic identification submodule is used for carrying out characteristic identification processing on each text character to obtain characteristic identification information of each text character;
and the generating submodule is used for generating the text characteristics of the text release content according to the characteristic identification information of each text character.
In an embodiment, the interactive data feature extraction subunit includes:
the determining module is used for determining a feature extraction mode corresponding to each type of interactive data according to the type of the interactive data;
and the feature extraction module is used for performing feature extraction processing on the type of interactive data by using the feature extraction mode to obtain the interactive features of the interactive data.
In one embodiment, the feature extraction module includes:
the logic operation processing submodule is used for carrying out logic operation processing on the interaction time data to obtain a time evaluation coefficient of the interaction time data;
the judgment processing submodule is used for judging the time evaluation coefficient by utilizing a preset evaluation threshold value to obtain a judgment result;
and the determining submodule is used for determining the interaction time characteristics of the interaction time data according to the judgment result.
In one embodiment, the screening unit includes:
the acquisition subunit is used for acquiring semantic features of the preset interactive images;
the first similarity calculation subunit is used for calculating the similarity of the content features and the semantic features of the preset interactive image to obtain a first similarity of the preset interactive image;
the second similarity calculation subunit is used for calculating the similarity of the interaction features and the semantic features of the preset interactive image to obtain a second similarity of the preset interactive image;
and the screening subunit is used for screening at least one target interactive image meeting the preset image similarity condition from a plurality of preset interactive images according to the first similarity and the second similarity.
In one embodiment, the screening subunit includes:
the comparison module is used for comparing the first similarity with the second similarity and determining the target similarity of the preset interactive image in the first similarity and the second similarity according to the comparison result;
the ordering processing module is used for ordering the preset interactive images according to the target similarity of the preset interactive images to obtain an ordering result;
and the screening module is used for screening, according to the sorting result, at least one target interactive image that satisfies the preset image similarity condition from the plurality of preset interactive images, as sketched below.
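To make the screening performed by these subunits concrete, the following is a minimal sketch. It assumes cosine similarity as the similarity measure, random placeholder vectors for the content, interaction and semantic features, and takes the larger of the two similarities as the target similarity; none of these specific choices is prescribed by the embodiments above.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
content_feat = rng.normal(size=64)        # placeholder content feature
interaction_feat = rng.normal(size=64)    # placeholder interaction feature
preset_images = {f"img_{i}": rng.normal(size=64) for i in range(20)}  # semantic features

scores = {}
for name, semantic_feat in preset_images.items():
    first = cosine(content_feat, semantic_feat)       # first similarity
    second = cosine(interaction_feat, semantic_feat)  # second similarity
    scores[name] = max(first, second)                 # one reading of "target similarity"

# Sort by target similarity and keep the top k as the screened target images.
k = 5
targets = sorted(scores, key=scores.get, reverse=True)[:k]
print(targets)
```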
In one embodiment, the sorting unit includes:
the heat value mapping processing subunit is configured to perform heat value mapping processing on the target interaction image by using the historical interaction feature, the historical object feature, and the object feature of the current object, so as to obtain a heat value of the target interaction image;
and the sequencing subunit is used for sequencing the at least one target interactive image according to the heat value of the target interactive image to obtain a sequenced interactive image.
In one embodiment, the heat value mapping processing subunit includes:
the classification module is used for predicting the target interaction image in a plurality of preset prediction trees according to the historical interaction characteristics, the historical object characteristics and the object characteristics of the current object to obtain heat prediction information of the target interaction image in each prediction tree;
the logistic regression processing module is used for carrying out logistic regression processing on the heat prediction information of each preset prediction tree to obtain the prediction characteristics of each heat prediction information;
and the feature fusion processing module is used for performing feature fusion processing on the prediction features of each piece of heat prediction information to obtain the heat value of the target interactive image, as sketched after the classification module below.
In one embodiment, the classification module includes:
the first class matching sub-module is used for carrying out class matching on the object features of the current object and leaf nodes on the preset classification tree to obtain a first target leaf node of the target interaction image on the preset classification tree;
the second category matching submodule is used for performing category matching on the historical interaction features, the historical object features and leaf nodes on the preset classification tree to obtain second target leaf nodes of the target interaction image on the preset classification tree;
and the generation submodule is used for generating the category prediction information of the target interactive image according to the first target leaf node and the second target leaf node.
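As a hedged illustration of the heat value mapping described by the above subunits, the sketch below uses scikit-learn gradient-boosted trees as the preset prediction trees, takes each tree's leaf-node assignment as the heat prediction information, and fuses the one-hot leaf features with a logistic regression. The synthetic data and the specific GBDT-plus-logistic-regression pairing are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
# Each row: concatenated historical interaction / historical object / current object features.
X = rng.normal(size=(500, 16))
y = rng.integers(0, 2, size=500)          # 1 = the image was used, 0 = not (synthetic labels)

# "Preset prediction trees": each tree maps a sample to a leaf node.
gbdt = GradientBoostingClassifier(n_estimators=20, max_depth=3).fit(X, y)
leaves = gbdt.apply(X).reshape(X.shape[0], -1)   # leaf index per tree = heat prediction info

# Logistic regression over one-hot leaf features, fusing them into a single heat value.
encoder = OneHotEncoder(handle_unknown="ignore").fit(leaves)
lr = LogisticRegression(max_iter=1000).fit(encoder.transform(leaves), y)

# Heat value for a new (target image, current object) feature vector.
x_new = rng.normal(size=(1, 16))
leaf_new = gbdt.apply(x_new).reshape(1, -1)
heat_value = lr.predict_proba(encoder.transform(leaf_new))[0, 1]
print(heat_value)
```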
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the method provided in the various alternatives of the above aspect.
Correspondingly, the embodiment of the present application further provides a storage medium, where the storage medium stores instructions, and the instructions, when executed by a processor, implement the method for displaying an interactive image provided in any embodiment of the present application.
The method and apparatus can acquire the release content of an application program, the interaction data of the current object with respect to the release content, and the object data of the current object; perform feature extraction processing on the release content, the interaction data and the object data respectively to obtain the content features of the release content, the interaction features for the release content and the object features of the current object; screen, from a plurality of preset interactive images in the application program, at least one target interactive image that satisfies a preset image similarity condition according to the content features and the interaction features; acquire the historical interaction features of historical objects that interacted using the target interactive image and the historical object features of those historical objects; sequence the at least one target interactive image according to the historical interaction features, the historical object features and the object features of the current object to obtain sequenced interactive images; and display the sequenced interactive images in the application program. By sequencing the interactive images intelligently, the method and apparatus improve the object's utilization of the application program.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a scene schematic diagram of a method for displaying an interactive image according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a method for displaying an interactive image according to an embodiment of the present disclosure;
FIG. 3 is a scene schematic diagram of an interactive image provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for displaying an interactive image according to an embodiment of the present disclosure;
FIG. 5 is a scene diagram of a text feature extraction model provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of a scenario of a ranking model provided in an embodiment of the present application;
fig. 7 is a schematic view of another scene of a display method of an interactive image according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a display device provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, however, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a display method of an interactive image, which can be executed by a display device of the interactive image, and the display device of the interactive image can be integrated in computer equipment. Wherein the computer device may comprise at least one of a terminal and a server, etc. That is, the method for displaying an interactive image provided in the embodiment of the present application may be executed by a terminal, a server, or both of the terminal and the server that are capable of communicating with each other.
The terminal can be a smart phone, a tablet computer, a notebook computer, a personal computer (PC), a smart home device, a wearable electronic device, a VR/AR device, a vehicle-mounted computer, and the like. The server may be an interworking server or background server among a plurality of heterogeneous systems, an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, big data and artificial intelligence platforms, and the like.
In an embodiment, as shown in fig. 1, the display device of the interactive image may be integrated on a computer device such as a terminal or a server, so as to implement the display method of the interactive image provided in the embodiment of the present application. Specifically, the computer device may obtain the release content of the application program, the interactive data of the current object with respect to the release content, and the object data of the current object; respectively carrying out feature extraction processing on the release content, the interactive data and the object data to obtain the content features of the release content, the interactive features aiming at the release content and the object features of the current object; screening at least one target interactive image meeting the similar conditions of the preset images from a plurality of preset interactive images in the application program according to the content characteristics and the interactive characteristics; acquiring historical interaction characteristics of a historical object interacting by using a target interaction image and historical object characteristics of the historical object; sequencing at least one target interactive image according to the historical interactive features and the historical object features of the target interactive image and the object features of the current object to obtain sequenced interactive images; and displaying the sequenced interactive images on the application program.
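Viewed as code, this flow can be summarized by the hypothetical skeleton below. Every name in it (display_interactive_images, extract_features, screen_candidates, rank_by_heat, the app accessor methods) is a placeholder invented for illustration; the concrete extraction, screening and sequencing steps are described in the embodiments that follow.

```python
# Hypothetical skeleton of the display flow; the helper callables are supplied by
# the caller, since no concrete API is defined here.

def display_interactive_images(app, current_object,
                               extract_features, screen_candidates, rank_by_heat):
    # 1. Acquire published content, the current object's interaction data and object data.
    content = app.get_published_content()
    interaction_data = app.get_interaction_data(current_object, content)
    object_data = app.get_object_data(current_object)

    # 2. Feature extraction for each kind of input.
    content_feat = extract_features(content)
    interaction_feat = extract_features(interaction_data)
    object_feat = extract_features(object_data)

    # 3. Screen the preset interactive images against content/interaction features.
    targets = screen_candidates(app.preset_interactive_images,
                                content_feat, interaction_feat)

    # 4. Sequence the target images using historical features and the current object's features.
    history = [app.get_history(image) for image in targets]
    ranked = rank_by_heat(targets, history, object_feat)

    # 5. Display the sequenced interactive images in the application.
    app.display(ranked)
```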
The following are detailed below, and it should be noted that the order of description of the following examples is not intended to limit the preferred order of the examples.
The embodiment of the present application will be described from the perspective of a display device of an interactive image, where the display device of the interactive image may be integrated in a computer device, and the computer device may be a server or a terminal.
As shown in fig. 2, a method for displaying an interactive image is provided, which includes the following steps:
101. Acquire the release content of the application program, the interaction data of the current object with respect to the release content, and the object data of the current object.
An application program is a computer program that performs one or more specific functions, runs in an interactive mode, can interact with the object that uses it, and has a visual display interface.
For example, the application programs may include application software on various terminals, such as mobile phone software (APPs), tablet application software, application software on a notebook computer, application software on a personal computer (PC), and the like.
For example, the application programs may include browsers, video playback software, game software, and so forth.
As another example, an application may also include an applet. Among them, the applet (Mini Program) is an application that can be used without downloading and installing. In order to provide more diversified business services to users, developers can develop corresponding applets for applications (such as instant messaging applications, shopping applications, mail applications, and the like) of the terminal, the applets can be embedded into the applications of the terminal as sub-applications, and the corresponding business services can be provided for the users by running the sub-applications (i.e., the corresponding applets) in the applications.
The published content includes content published on an application program, and objects can browse and interact with the content. For example, the publication content may include at least one of text, images, video, audio, etc. published on the application. For example, when the application is an APP of news information type, the published content may be an article published on the APP. For another example, when the application is a social APP, the published content may be pictures, text, or videos published on the APP. For another example, when the application is an APP of the music class, the release content may be music released on the APP, or the like.
In one embodiment, the publication content may be divided into multiple types, depending on the publication content. For example, the distribution content may be divided into text distribution content, image distribution content, video distribution content, audio distribution content, and the like.
The text release content comprises release content with text content. The image distribution content includes distribution content whose content is an image. The video distribution content includes distribution content whose content is a video. The audio distribution content includes distribution content whose content is audio.
For example, when the post content is text post content, the post content may be one of a title, a tag, or a category of the content. Wherein a tag may refer to a content subject of the published content. For example, the tag of the release content is "cotton", which indicates that the content subject of the release content is related to cotton. For another example, the label of the release content is "meow", which indicates that the content subject of the release content is related to the cat. Wherein the classification may refer to a content classification of the distribution content. For example, the classification of the posting may be current, lovely, fashionable, anecdotal, and so forth.
The interactive data may include data generated when the object performs an interactive action with the published content. The interaction behavior may include a behavior of interaction between the object and the published content. For example, the object approves, replies, forwards, comments on the published content, and the like, all belong to interactive behaviors.
In one embodiment, since the objects have different interactive behaviors, there may be different types of interactive data. For example, when an object approves of the published content, the interaction data may include an amount of approval. For another example, when the object forwards the published content, the interaction data may include a forwarding amount. For another example, when an object comments on published content, the interactive data may include the comment content. For another example, when an object replies to a comment that is posting content, the interaction data may include an amount of reply. The comment content can include text comment content and image comment content. The text comment content comprises content composed of text, and the image comment content comprises content composed of images.
The interaction data may also include interaction time, and the like. The interaction time may include a time difference between the time when the object and the distribution content interact with each other and the current time. For example, when an object approves of publishing contents, the interaction time may refer to a time difference between the approval time of the object to publish the contents and the current time. For another example, when the object comments on the published content, the interaction time may refer to a time difference between a time when the object comments on the published content and a current time. For another example, when the object forwards the published content, the interaction time may refer to a time difference between the time when the object forwards the published content and the current time. For another example, when the object approves, replies, collects, and forwards the published content at the same time, the interaction time may refer to a time difference between the time when the object approves, replies, collects, and forwards the published content at the same time and the current time.
Wherein the current time comprises the latest time when the object interacts with the published content.
The current object is a user who uses the application program to browse the published content and interact with it. The object data of the current object may include data indicating characteristics of the current object, for example the current object's gender, age, level in the application, and so on.
In an embodiment, when the current object interacts with the published content in the application and needs to use an interactive image, the display device of the interactive image may obtain the published content of the application, the interaction data of the object with respect to the published content, and the object data of the object.
For example, when the object uses the interactive image to comment on the published content in the application, the display device of the interactive image can acquire the published content of the application, the interactive data of the object with respect to the published content, and the object data of the object.
The display device of the interactive image may request the storage server to acquire the release content of the application program, the interactive data of the object with respect to the release content, and the object data of the object.
102. Perform feature extraction processing on the release content, the interactive data and the object data respectively to obtain the content features of the release content, the interaction features for the release content, and the object features of the current object.
In an embodiment, after the release content, the interactive data, and the object data are obtained, feature extraction processing may be performed on the release content, the interactive data, and the object data, respectively, to obtain a content feature of the release content, an interactive feature for the release content, and an object feature of the current object.
Wherein, the distribution content, the interactive data and the object data are all of various types.
For example, the distribution content may be divided into text distribution content, image distribution content, video distribution content, audio distribution content, and the like. Wherein, different types of release contents have different feature extraction processing modes. For example, the text distribution content corresponds to a feature extraction processing mode. For another example, the image distribution content corresponds to another feature extraction processing method, and the like.
As another example, the interaction data may include an amount of approval, an amount of reply, an amount of forwarding, an amount of collection, comment content, and interaction time, among others. Different types of interactive data also have different feature extraction processing modes. For example, the praise amount corresponds to one feature extraction manner, the reply amount corresponds to another feature extraction manner, and the like.
As another example, the object data of the current object may include the current object's gender, age, level in the application, and so on.
Therefore, when feature extraction processing is performed on the release content, the interactive data and the object data, information identification processing can be performed on the release content, the interactive data and the object data to obtain the type of the release content, the type of the interactive data and the type of the object data. And then, according to the type of the release content, the type of the interactive data and the type of the object data, performing feature extraction processing on the release content, the interactive data and the object data. Specifically, the step of "respectively performing feature extraction processing on the release content, the interactive data, and the object data to obtain a content feature of the release content, an interactive feature for the release content, and an object feature of the current object" may include:
respectively carrying out information identification processing on the release content, the interactive data and the object data to obtain the type of the release content, the type of the interactive data and the type of the object data;
according to the type of the released content, performing feature extraction processing on the released content to obtain the content features of the released content;
according to the type of the interactive data, performing feature extraction processing on the interactive data to obtain interactive features;
and according to the type of the object data, carrying out feature extraction processing on the object data to obtain object features.
In one embodiment, different types of published content may have different type identifications. For example, the type identification of text release content may be "001", the type identification of image release content may be "010", the type identification of video release content may be "011", and so on. Therefore, when information identification processing is performed on the release content, the type identification of the release content can be recognized, and the type of the release content can be determined according to that type identification.
For example, when it is recognized that the type identification of the release content is "001", the type of the release content may be determined to be text release content. For another example, when it is recognized that the type identification of the release content is "010", it may be determined that the type of the release content is image release content.
Similarly, the interactive data and the object data may also be subjected to information identification processing in a similar manner, so as to obtain the type identifier of the interactive data and the type identifier of the object data. And then, determining the type of the interactive data according to the type identification of the interactive data. And determining the type of the object data according to the type identification of the object data.
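As a minimal sketch of dispatching feature extraction on such type identifiers, the example below assumes the hypothetical codes "001", "010" and "011" mentioned above; the handler functions are placeholders invented for illustration.

```python
# Hypothetical dispatch on content-type identifiers ("001" = text, "010" = image,
# "011" = video, as in the example above). Handler names are placeholders.

def extract_text_features(content):   # placeholder text branch
    return {"kind": "text", "content": content}

def extract_image_features(content):  # placeholder image branch
    return {"kind": "image", "content": content}

def extract_video_features(content):  # placeholder video branch
    return {"kind": "video", "content": content}

HANDLERS = {
    "001": extract_text_features,
    "010": extract_image_features,
    "011": extract_video_features,
}

def extract_content_features(type_id, content):
    # Identify the type from its identifier, then use the matching extraction mode.
    handler = HANDLERS.get(type_id)
    if handler is None:
        raise ValueError(f"unknown content type identifier: {type_id}")
    return handler(content)

print(extract_content_features("001", "an example text post"))
```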
In an embodiment, when the release content is a text release content and the feature extraction is performed on the release content, the feature extraction processing may be performed on the text release content in a plurality of different dimensions to obtain a text content feature output in each dimension, and then the content feature of the release content is obtained according to the text content feature. Specifically, the step of "performing feature extraction on the release content to obtain the content feature of the release content" may include:
performing feature extraction processing on the text release content on a plurality of different dimensions to obtain text content features output by each dimension;
screening at least one target text content characteristic meeting a preset accurate threshold value from the plurality of text content characteristics;
and splicing at least one target text content characteristic to obtain the content characteristic of the released content.
In an embodiment, in order to improve the accuracy of performing feature extraction processing on the text release content, feature extraction processing may be performed on the text release content in a plurality of different dimensions, so as to obtain text content features output in each dimension.
The feature extraction processing modes corresponding to each dimension may have differences.
For example, when feature extraction processing is performed on text release content, attention feature extraction processing may be performed on the text release content using a Multi-head-attention mechanism (Multi-head-attention).
Among them, the attention mechanism is derived from the study of human vision. In cognitive science, humans selectively focus on a portion of all information while ignoring other visible information due to bottlenecks in information processing. The above mechanism is commonly referred to as an attention mechanism. Different parts of the human retina have different degrees of information processing capabilities, i.e., acuity, with only the foveal part having the strongest acuity. In order to make reasonable use of limited visual information processing resources, a human needs to select a specific portion in a visual region and then focus on it. For example, when a person is reading, only a few words to be read are usually attended to and processed.
In summary, the attention mechanism has two main aspects: deciding which part of the input to focus on, and allocating the limited information processing resources to the important parts. The multi-head attention mechanism uses multiple queries to compute, in parallel, multiple selections of information from the input. Each attention head focuses on a different part of the input information, and the part each head focuses on can be regarded as a dimension.
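A minimal runnable sketch of multi-head attention, using PyTorch's built-in module as a stand-in, is shown below; the embedding size, number of heads and random input are arbitrary illustrative choices rather than values taken from the embodiments.

```python
import torch
import torch.nn as nn

embed_dim, num_heads, seq_len = 64, 8, 10   # arbitrary illustrative sizes
mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

# A batch of one "sentence" of 10 token vectors; in self-attention the same
# tensor serves as query, key and value.
x = torch.randn(1, seq_len, embed_dim)
output, weights = mha(x, x, x)

print(output.shape)   # torch.Size([1, 10, 64]) - one fused representation per token
print(weights.shape)  # torch.Size([1, 10, 10]) - attention weights averaged over the 8 heads
```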
For another example, a text feature extraction model may be used to perform feature extraction processing on the text release content in multiple different dimensions, so as to obtain a text feature output in each dimension.
The feature extraction processing model may include various artificial intelligence models, for example any of Convolutional Neural Networks (CNNs), text Convolutional Neural Networks (TextCNNs), Deconvolutional Networks (DNs), Deep Neural Networks (DNNs), Deep Convolutional Inverse Graphics Networks (DCIGNs), Region-based Convolutional Networks (RCNNs), Faster Region-based Convolutional Networks (Faster RCNNs), Long Short-Term Memory networks (LSTMs), bidirectional encoder models, and so on.
For example, a Bert model may be used as the text feature extraction model, an RCNN may be used as the text feature extraction model, and the like.
The Bert model is a pre-training model in Natural Language Processing (NLP) based on Machine Learning (ML), and adopts a Transformer as a basic framework of the model.
The Transformer is composed of an encoder and a decoder. The encoder can be composed of an attention mechanism, a forward propagation module and the like; the decoder can likewise be composed of an attention mechanism, a forward propagation module and the like. The encoder may be used to perform vector conversion processing on the input text release content to obtain a representation vector corresponding to the text release content. The decoder may be used to perform decoding prediction on that representation vector and restore the text release content from it.
ML is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specializes in studying how a computer can simulate or realize human learning behavior, so as to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning and inductive learning.
NLP is an important direction in the fields of computer science and artificial intelligence. It studies theories and methods that enable effective communication between humans and computers using natural language, and is a science integrating linguistics, computer science and mathematics. Research in this field involves natural language, i.e. the language people use every day, so it is closely related to linguistics. Natural language processing techniques typically include text processing, semantic understanding, machine translation, question answering, knowledge graphs, and the like.
In an embodiment, when the text feature extraction model is a Bert model, the Bert architecture may be set to 12 Transformer layers, where each Transformer layer performs feature extraction on the text content and can be regarded as one dimension.
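As a hedged illustration, the sketch below uses the publicly available bert-base-chinese checkpoint from the Hugging Face transformers library as a stand-in for the text feature extraction model, takes the [CLS] vector of each of the 12 Transformer layers as the text feature of that dimension, and splices a selected subset; keeping the last four layers is only a placeholder for the screening against a preset accuracy threshold.

```python
import torch
from transformers import BertTokenizer, BertModel

# Public checkpoint used as a stand-in for the text feature extraction model.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese", output_hidden_states=True)

text = "今天天气很好"  # illustrative text release content
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states[0] is the embedding layer; [1:] are the 12 Transformer layers,
# i.e. one text feature per "dimension".
layer_features = [h[:, 0, :] for h in outputs.hidden_states[1:]]  # [CLS] vector per layer

# Screening stand-in: keep the last four layers as "target" features, then splice them.
target_features = layer_features[-4:]
content_feature = torch.cat(target_features, dim=-1)
print(content_feature.shape)  # torch.Size([1, 3072]) = 4 x 768
```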
In an embodiment, before the text feature extraction model is used to perform feature extraction processing on the text release content in a plurality of different dimensions to obtain the text feature output in each dimension, the text feature extraction model can be trained so that it correctly extracts the text features of the text release content. Specifically, before the step of performing feature extraction processing on the text release content in a plurality of different dimensions by using the text feature extraction model, the method may include:
acquiring a plurality of training text contents and a text feature extraction model to be trained;
pre-training the text feature extraction model to be trained by using the plurality of training text contents to obtain a pre-trained text feature extraction model;
performing word segmentation processing on the training text content to obtain word segmentation content of the training text content;
and performing incremental training on the pre-trained text feature extraction model by using the word segmentation content to obtain the text feature extraction model.
The text feature extraction model to be trained may be a text feature extraction model that has not yet been trained and whose performance has not yet reached the required level.
The training text content may include text content collected for training the text feature extraction model to be trained. For example, the training text content may be historical text release content in the application, historical comment content of objects, and so on.
In an embodiment, a pre-training may be performed on a to-be-trained text feature extraction model by using a plurality of training text contents, so as to obtain a pre-trained text feature extraction model.
For example, a plurality of training text contents can be input into the feature extraction model of the text to be trained, and feature extraction is performed on the training text contents by using the feature extraction model of the text to be trained, so that the feature extraction model of the text to be trained learns how to perform feature extraction on the training text, and a pre-trained feature extraction model of the text after pre-training is obtained.
After the pre-trained feature extraction model is obtained, incremental training can be performed on the pre-trained feature extraction model in order to further improve the accuracy and precision of the text feature extraction model.
Specifically, word segmentation processing may be performed on the training text content first to obtain the word segmentation content of the training text content. A word segmentation tool can be used for this purpose; for example, the training text content may be word-segmented using the jieba library.
The jieba library is a third-party library that can perform word segmentation on text content. It includes three modes: a precise mode, a full mode and a search engine mode. The precise mode segments the text precisely, without redundant words. The full mode cuts out every possible word in the text, but produces redundant units. The search engine mode cuts long words again on the basis of the precise mode.
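A small runnable example of the three jieba modes is shown below; the sample sentence is an arbitrary illustration.

```python
import jieba

text = "今天天气很好"  # arbitrary sample sentence

print(jieba.lcut(text))                 # precise mode: no redundant words
print(jieba.lcut(text, cut_all=True))   # full mode: every possible word, with redundancy
print(jieba.lcut_for_search(text))      # search engine mode: long words cut again
```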
And then, performing incremental training on the pre-trained text feature extraction model by using the word segmentation content to obtain a text feature extraction model.
After pre-training, the text feature extraction model already has a certain feature extraction capability. Therefore, when the pre-trained text feature extraction model is incrementally trained with the word segmentation content, part of the word segmentation content can be covered (masked) before being input for incremental training.
For example, if the word segmentation content comprises "today", "weather", "very" and "good", the content with "weather" masked is input into the pre-trained text feature extraction model for incremental training, so that the text feature extraction model is obtained.
Performing incremental training on the pre-trained text feature extraction model in this way improves the efficiency of training the model.
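A minimal sketch of this masking step, again using the public transformers library as a stand-in for the pre-trained text feature extraction model, is shown below; masking exactly the "weather" tokens, computing the loss over all positions, and performing a single backward pass without an optimizer step are simplifications made for illustration.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese")  # stand-in for the pre-trained model

# Word segmentation content "today / weather / very / good"; "weather" (天气) is two
# characters, so it is covered by two [MASK] tokens to keep the sequences aligned.
masked_text = "今天 [MASK] [MASK] 很 好"
original_text = "今天 天气 很 好"

inputs = tokenizer(masked_text, return_tensors="pt")
labels = tokenizer(original_text, return_tensors="pt")["input_ids"]

# One incremental-training step: predict the masked word and backpropagate the loss.
# (Real training would keep labels only at masked positions and call an optimizer step.)
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
print(float(outputs.loss))
```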
In an embodiment, when feature extraction processing is performed on text release contents on a plurality of different dimensions by using a plurality of transformers in a Bert model, word segmentation processing can be performed on the text release contents to obtain a plurality of text characters of the text release contents; and finally, generating text characteristics of the text release content according to the characteristic identification information of each text character. Specifically, the step of performing feature extraction processing on the text release content in a plurality of different dimensions to obtain text features output in each dimension may include:
performing word segmentation processing on the text release content to obtain a plurality of text characters of the text release content;
performing characteristic identification processing on each text character to obtain characteristic identification information of each text character;
and generating the text characteristics of the text release content according to the characteristic identification information of each text character.
For example, the text distribution content may be word-segmented in the current dimension to obtain a plurality of text characters of the text distribution content. Then, feature identification processing is carried out on each text character to obtain feature identification information of each text character. And finally, generating the text characteristics of the text release content under the current dimension according to the characteristic identification information of each text character.
When the text release content is subjected to word segmentation processing, a word segmentation tool can be adopted. For example, the jieba library may be used to perform word segmentation on the text release content, so as to obtain a plurality of text characters of the text release content.
Then, feature identification processing can be performed on each text character, so that feature identification information of each text character is obtained.
Wherein the feature identification information may represent information of a feature of each text character. For example, the identification feature information may include at least one of position information, semantic information, and syntax information of text characters, and the like.
The position information may include a positional relationship between text characters, among others. For example, the location information may include spacing of locations between text characters, and so on. For example, when the text characters include "say", "get", "not" and "wrong", the location information may include the interval of locations between "say" and "wrong".
The syntax information may include information of organization rules and structural relationships between text characters, among other things. For example, when the text character includes "say", "get", "no", "wrong", the grammar information may include "say" next to "get", and so on.
Where semantic information may include interpretations between text characters. For example, when the text character includes "today", "is", "spring festival", the semantic information of "today" is "spring festival".
In an embodiment, when feature identification processing is performed on each text character and the emphasis is on the positional relationship between the text characters, one or more target text characters may be determined among the text characters, the position information between each text character and the target text characters may be determined, and feature identification information may be added to the text characters according to the position information. Wherein the target text characters may be determined according to the information amount of the text characters.
For example, when the text characters include "you", "this", "game", "always", "stuck", since the amount of information of "stuck" is large, it is possible to determine "stuck" as the target text character and determine the position information of "stuck" as 0. And then calculating the position space between each text character and the target text character so as to obtain the position information of each text character. For example, if the position distance between "always" and "stuck" is 1, the position information of "always" may be determined as 1, and the position information may be used as the feature identification information. And the position interval between "game" and "katon" is 2, the characteristic identification information of "game" can be determined to be 2.
In addition, a plurality of target text characters may also be determined among the text characters. For example, "game" and "katon" may be set as target text characters, and then a position interval between each text character and the target text character is calculated, thereby obtaining position information of each text character, and feature identification information may be added to the text character according to the position information.
For example, the text character may be first targeted to "game", and then the position spacing between the other text characters ("your", "this", "always", "katon") and "game" may be calculated, thereby obtaining the first position information of each text character.
Then, the text character "katon" can be targeted, and then the position spacing between the other text characters ("your", "this", "game", "always") and "katon" can be calculated, thereby obtaining the second position information of each text character.
And then coding the first position information and the second position information of each text character to obtain the position information of each text character, and using the position information as the characteristic identification information of the text character.
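The sketch below illustrates this position encoding under simple assumptions: each text character is encoded by its position interval from one or more target text characters, and the first and second position information are paired up as the feature identification information. The token list, the choice of targets and the pairing are illustrative only.

```python
# Illustrative sketch: encode each text character by its position interval
# from one or more target text characters.
def position_offsets(tokens, target):
    t = tokens.index(target)
    return [abs(i - t) for i in range(len(tokens))]

tokens = ["you", "this", "game", "always", "stuck"]
first = position_offsets(tokens, "game")    # first position information (relative to "game")
second = position_offsets(tokens, "stuck")  # second position information (relative to "stuck")

# One simple way to encode the two pieces of position information per character:
feature_identification_info = list(zip(first, second))
print(feature_identification_info)  # [(2, 4), (1, 3), (0, 2), (1, 1), (2, 0)]
```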
For another example, when feature identification processing is performed on each text character, when the emphasis is on the grammatical relationship between each text character, grammatical information between each text character may be determined, and identification feature information may be added to the text character according to the grammatical information.
For example, when the text characters include "you", "these", "this", "APP", "always", "is", "card", "pause", it is possible to record at which position each text character can be used together with which text characters, and to add identification feature information to each text character according to the recording result.
For another example, when feature identification processing is performed on each text character and the emphasis is on the semantic relationship between the text characters, the semantic information of each text character may be determined, and feature identification information may be added to the text character according to the semantic information.
For another example, when feature identification processing is performed on each text character and various aspects of each text character are considered, the position information, grammar information and semantic information of each text character may be determined, and feature identification information may be added to each text character according to the position information, grammar information and semantic information.
In an embodiment, after the feature identification information of each text character is obtained, a text feature corresponding to the text release content may be generated according to the feature identification information of each text character.
For example, the feature identification information of each text character may be concatenated together to obtain the text feature corresponding to the text release content. For another example, the feature identification information of each text character may be weighted and then spliced together, so as to obtain the text feature corresponding to the text release content, and so on.
Wherein the text feature can be presented in the form of a vector.
For example, when the position information of the text characters is used as the identification feature information, the identification feature information of each text character can be spliced together, so as to obtain the text features corresponding to the text distribution content.
For another example, when the position information, semantic information and grammar information of the text characters are used as the feature identification information, the position information, semantic information and grammar information of each text character may each be multiplied by its corresponding weight coefficient and the results added together, so as to obtain the total feature identification information of each text character.
For example, denoting the position information, semantic information and grammar information of a text character as p, s and g, and their corresponding weight coefficients as w_p, w_s and w_g, the total feature identification information of the text character can be obtained as w_p·p + w_s·s + w_g·g. Finally, the total feature identification information of each text character can be spliced, so as to obtain the text features of the text release content.
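A minimal sketch of this weighted combination and splicing is given below; the vectors standing for the three kinds of information and the weight coefficients are made-up values for illustration.

```python
import numpy as np

# Illustrative sketch, with made-up vectors and weights: weight and add the
# position, semantic and grammar information of each text character, then
# splice the per-character totals into the text feature.
def total_identification_info(pos_info, sem_info, gra_info, w_pos=0.4, w_sem=0.4, w_gra=0.2):
    return w_pos * pos_info + w_sem * sem_info + w_gra * gra_info

char_infos = [
    (np.array([1.0, 0.0]), np.array([0.3, 0.7]), np.array([0.5, 0.5])),  # character 1
    (np.array([0.0, 1.0]), np.array([0.6, 0.4]), np.array([0.2, 0.8])),  # character 2
]
totals = [total_identification_info(p, s, g) for p, s, g in char_infos]
text_feature = np.concatenate(totals)   # splice the totals of all characters
```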
In an embodiment, since feature extraction processing is performed on the text release content in a plurality of different dimensions, the accuracy of the text features output in some dimensions may not be high, which would affect subsequent steps. Therefore, after the text features output in each dimension are obtained, at least one target text feature meeting a preset accuracy threshold can be screened out from the plurality of text features.
For example, since each text feature may be a vector, after the text feature output in each dimension is obtained, the vector values in each text feature can be checked, and the target text features can be screened out according to those vector values. For example, a text feature whose vector values fit a preset mathematical distribution may be screened out as a target text feature, and so on. The better the vector values of a text feature fit the preset mathematical distribution, the higher the accuracy of the text feature is considered to be, and therefore the text feature can be screened out as a target text feature.
In an embodiment, after the at least one target text feature is screened out, the at least one target text feature may be spliced, so as to obtain the content feature of the published content.
For example, target text feature 1 is [2, 3, 4, 5, 6] and target text feature 2 is [10, 11, 12, 13, 14]. Splicing target text feature 1 and target text feature 2 gives the content feature of the released content, namely [2, 3, 4, 5, 6, 10, 11, 12, 13, 14].
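The sketch below illustrates the screening and splicing; the screening criterion used here (variance above a threshold) is only a stand-in for checking whether the vector values fit a preset mathematical distribution.

```python
import numpy as np

# Illustrative sketch of screening target text features and splicing them.
# The variance threshold is an assumed stand-in for the preset mathematical
# distribution check described in the text.
def screen_target_features(features, threshold=1.0):
    return [f for f in features if np.var(f) >= threshold]

features = [np.array([2, 3, 4, 5, 6]), np.array([10, 11, 12, 13, 14]), np.array([0, 0, 0, 0, 0])]
targets = screen_target_features(features)
content_feature = np.concatenate(targets)
print(content_feature)  # [ 2  3  4  5  6 10 11 12 13 14]
```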
In one embodiment, since there are multiple types of interactive data, different types of interactive data have different feature extraction methods. Therefore, when feature extraction is performed on the interactive data according to the type of the interactive data, information identification processing can be performed on the interactive data to obtain the type of the interactive data and determine a corresponding feature extraction mode. And then, carrying out feature extraction processing on the type of interactive data according to the feature extraction mode. Specifically, the step of performing feature extraction processing on the interactive data according to the type of the interactive data to obtain the interactive features of the interactive data may include:
determining a feature extraction mode corresponding to the type of the interactive data according to the type of the interactive data;
and performing feature extraction processing on the type interactive data by using a feature extraction mode to obtain the interactive features of the interactive data.
In an embodiment, according to the type of the interactive data, a feature extraction manner corresponding to each type of the interactive data may be determined. The type of the interactive data and the corresponding feature extraction mode can be stored in a preset mapping table. Therefore, the feature extraction mode corresponding to each type of interaction data can be determined according to the preset mapping table.
In an embodiment, after determining the feature extraction method corresponding to each type of interactive data, the feature extraction method may be used to perform feature extraction processing on the type of interactive data.
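A minimal sketch of such a preset mapping table is shown below. The type names and the concrete extraction functions are assumptions for illustration; the patent's own formulas are not reproduced here.

```python
import math

# Illustrative sketch of a preset mapping table from the type of interactive data
# to a feature extraction mode. The log1p squashing in both functions is assumed.
def extract_like_feature(like_amount):
    return math.log1p(like_amount)            # assumed squashing of the total amount of approval

def extract_reply_feature(reply_amount):
    return math.log1p(reply_amount)           # assumed squashing of the reply quantity

FEATURE_EXTRACTION_MODES = {
    "like_amount": extract_like_feature,
    "reply_amount": extract_reply_feature,
}

def extract_interactive_feature(data_type, data):
    mode = FEATURE_EXTRACTION_MODES[data_type]   # look up the preset mapping table
    return mode(data)

print(extract_interactive_feature("like_amount", 1000))
```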
For the like amount, feature extraction processing can be performed by applying a preset logical operation to the total amount of approval of the published content. For example, when the total amount of approval for the published content is 1000, the input of the logical operation is 1000, and the result of the processing is the like feature obtained after feature extraction is performed on the like amount.
Wherein, aiming at the interactive time data (i.e. the interactive time), the step "using the feature extraction processing mode to perform the feature extraction processing on the interactive data of the type to obtain the interactive feature" may include:
performing logical operation processing on the interactive time data to obtain a time evaluation coefficient of the interactive time data;
judging the time evaluation coefficient by using a preset evaluation threshold value to obtain a judgment result;
and determining the interaction time characteristics of the interaction time data according to the judgment result.
The interactive time data t can be subjected to logical operation processing according to a preset formula, so as to obtain the time evaluation coefficient value of the interactive time data. After the time evaluation coefficient of the interaction time is obtained, the time evaluation coefficient can be judged by using a preset evaluation threshold value to obtain a judgment result. Finally, the interaction time characteristic w_time of the interactive time data is determined according to the judgment result.
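A sketch of these three steps is shown below. The decay formula and the evaluation threshold are assumptions; the text only states that the time evaluation coefficient decreases as the interaction time grows.

```python
# Illustrative sketch of the three steps: logical operation, threshold judgment,
# and assignment of the interaction time characteristic. The decay formula
# value = 1 / (1 + t) and the threshold 0.5 are assumptions.
def interaction_time_feature(t, threshold=0.5):
    value = 1.0 / (1.0 + t)              # assumed time evaluation coefficient (decreases as t grows)
    if value >= threshold:               # judgment against the preset evaluation threshold
        w_time = 1.0                     # assumed feature for recent interactions
    else:
        w_time = value                   # assumed feature for older interactions
    return w_time

print(interaction_time_feature(0.5), interaction_time_feature(10.0))
```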
For the reply quantity, feature extraction processing can be performed by applying a preset logical operation to the reply quantity, so as to obtain the reply feature corresponding to the reply quantity.
In one embodiment, the comment content is often composed of text content and a preset interactive image in an application program. And each preset interactive image has the corresponding image semantic. Therefore, when feature extraction processing is performed on the comment content, feature extraction processing can be respectively performed on the text content and the image semantics of the preset interactive image, so that text features corresponding to the text content and image semantics features corresponding to the image semantics of the preset interactive image are obtained. Then, the computer device can perform feature fusion on the text features and the image semantic features, so as to obtain comment features of the comment content.
For example, a text feature extraction model may be used to perform feature extraction processing on the text content and the image semantics of the preset interactive image, so as to obtain text features corresponding to the text content and image semantics features corresponding to the image semantics of the preset interactive image.
When the text feature extraction model is a Bert model, the text feature extraction model may be composed of 12 layers of transformers.
At this time, the text content may be input into the text feature extraction model, and the text content vector output by each Transformer layer is obtained. Then, at least one target text content vector meeting a preset accuracy threshold can be screened out from the plurality of text content vectors, and the at least one target text content vector is spliced, so that the text features corresponding to the text content are obtained.
Similarly, the image semantics of the preset interactive image can be input into the text feature extraction model, and the image semantic vector output by each Transformer layer is obtained. Then, at least one target image semantic vector meeting a preset accuracy threshold can be screened out from the plurality of image semantic vectors, and the at least one target image semantic vector is spliced, so that the image semantic features corresponding to the image semantics of the preset interactive image are obtained.
In one embodiment, since the object data has a plurality of types, different types of object data have different feature extraction manners. Therefore, when feature extraction is performed on the object data according to the type of the object data, information recognition processing may be performed on the object data to obtain a feature extraction method corresponding to the type of the object data. Then, feature extraction processing is performed on the type of object data according to the feature extraction method. Specifically, the step of "performing feature extraction processing on the object data according to the type of the object data to obtain the object feature of the object data" may include:
determining a feature extraction processing mode corresponding to the object data of the type according to the type of the object data;
and performing feature extraction processing on the type object data by using a feature extraction processing mode to obtain object features.
For the age object data of the object, feature extraction processing can be performed by applying a preset logical operation to the age value of the object, so as to obtain the age feature corresponding to the age object data.
For the gender object data of the object, the gender object data of the object can be mapped according to a preset gender attribute mapping table, so that gender characteristics corresponding to the gender object data are obtained.
For example, in the preset gender attribute mapping table, the gender object data "male" is mapped to 1; mapping gender object data "female" to 0; gender object data "unknown" is mapped to 2.
Therefore, the gender object data of the object can be mapped according to the preset gender attribute mapping table, so that the gender characteristics corresponding to the gender object data are obtained.
The level object data of the object can be mapped according to a preset level attribute mapping table, so that the level characteristics corresponding to the level object data are obtained.
For example, in the preset level attribute mapping table, level N is directly mapped to N. Therefore, the level object data of the object can be mapped according to the preset level attribute mapping table, so that the level characteristics corresponding to the level object data are obtained.
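The sketch below illustrates the two preset attribute mapping tables, using the codes given in the examples above.

```python
# Illustrative sketch of the preset attribute mapping tables for discrete object
# data, following the example codes given above.
GENDER_ATTRIBUTE_MAP = {"male": 1, "female": 0, "unknown": 2}

def gender_feature(gender_object_data):
    return GENDER_ATTRIBUTE_MAP[gender_object_data]

def level_feature(level_object_data):
    return level_object_data              # level N is mapped directly to N

print(gender_feature("female"), level_feature(7))   # 0 7
```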
103. And screening out at least one target interactive image meeting a preset image similarity condition from a plurality of preset interactive images in the application program according to the content features and the interactive features.
In an embodiment, after the content features of the release content, the interaction features of the interaction data and the object features of the object data are obtained, at least one target interactive image meeting the preset image similarity condition may be screened out from the plurality of preset interactive images in the application program according to the content features and the interaction features.
The interactive image comprises an image which can be used for expressing specific emotion and meaning.
In one embodiment, the interactive image may include a static interactive image and a dynamic interactive image. The static interactive image may include an emoticon presented in the form of a picture, and the dynamic interactive image may include an emoticon presented in the form of a video. An emoticon (expression package) is a piece of digital media featuring a picture or video. Expression packages arise from the popular culture that formed as social software and social networking sites became widespread: materials such as photos of popular celebrities, catchphrases, cartoon frames, or movie screenshots and clips are combined with matching text to express a specific emotion.
In one embodiment, in order to prevent the occurrence of the interactive images which do not conform to the specification, the interactive images which can be used in the application program are all subjected to auditing. In addition, the interactive images that can be used in the application are marked with specific meanings, so all the interactive images have corresponding text interpretations. For example, as shown in FIG. 3, the text interpretation of the interactive image 1011 is sales; the text interpretation of the interactive image 1012 is smart.
In an embodiment, since all the interactive images have corresponding text interpretations, the text interpretations of each interactive image can be vectorized, so as to obtain semantic features of each interactive image.
When vectorizing the text interpretation of the interactive image, a Bert model can be adopted to convert the text interpretation of the interactive image into a text vector, so that the semantic features of the interactive image are obtained.
For example, the text interpretation "sell through" of the interactive image 1011 may be converted into a text vector. As another example, the text interpretation "smart" of the interactive image 1012 may be converted into a text vector.
In an embodiment, after the semantic features of the interactive images are obtained, similarity calculation may be performed between the content features and the semantic features, and between the interactive features and the semantic features, respectively, so as to screen out at least one target interactive image meeting the preset image similarity condition from the plurality of preset interactive images. Specifically, the step of "screening out at least one target interactive image meeting the preset image similarity condition from a plurality of preset interactive images in the application program according to the content features and the interactive features" may include:
obtaining semantic features of a plurality of preset interactive images;
similarity calculation is carried out on the content features and the semantic features of the preset interactive image to obtain a first similarity of the preset interactive image;
similarity calculation is carried out on the interactive features and the semantic features of the preset interactive images to obtain a second similarity of the preset interactive images;
and screening at least one target interactive image meeting the preset image similarity condition from the plurality of preset interactive images according to the first similarity and the second similarity.
In an embodiment, a plurality of similarity calculation methods can be adopted to calculate the similarity between the content features and the semantic features of the preset interactive image. For example, similarity calculation may be performed on the content features and the semantic features of the preset interactive image by using a similarity algorithm such as the cosine distance or the Euclidean distance, so as to obtain the first similarity of the preset interactive image.
For example, when the similarity calculation is performed on the content feature and the semantic feature by using the Euclidean distance algorithm, the similarity calculation may be performed according to the following formula:

d(X, Y) = sqrt( (x1 − y1)² + (x2 − y2)² + … + (xn − yn)² )

wherein d(X, Y) may represent the first similarity of the preset interactive image; X may represent the content feature, and when the content feature is a vector, x1, x2, …, xn may represent the vector values in the vector; Y may represent the semantic feature, and when the semantic feature is a vector, y1, y2, …, yn may represent the vector values in the vector; n may represent the number of vector values in the vector.

For another example, when the similarity between the content feature and the semantic feature is calculated by using the cosine distance algorithm, the similarity may be calculated according to the following formula:

cos(X, Y) = (x1·y1 + x2·y2 + … + xn·yn) / ( sqrt(x1² + … + xn²) · sqrt(y1² + … + yn²) )
in an embodiment, a plurality of similarity calculation methods may also be adopted to perform similarity calculation on the interactive features and the semantic features of the preset interactive image. For example, similarity calculation may be performed on the interactive features and semantic features of the preset interactive image by using a similarity algorithm such as a cosine distance or an euclidean distance, so as to obtain a second similarity of the budget interactive image.
In an embodiment, after the first similarity and the second similarity are obtained, at least one target interactive image meeting the preset image similarity condition may be screened from the plurality of preset interactive images according to the first similarity and the second similarity. Specifically, the step of "screening at least one target interactive image meeting the preset image similarity condition from a plurality of preset interactive images according to the first similarity and the second similarity" may include:
comparing the first similarity with the second similarity, and determining the target similarity of the preset interactive image in the first similarity and the second similarity according to the comparison result;
according to the target similarity of the preset interactive images, sequencing the preset interactive images to obtain a sequencing result;
and screening out at least one target interactive image meeting the preset image similarity condition from the preset interactive images according to the sorting result.
In one embodiment, different similarity calculation algorithms measure similarity in different forms. For some similarity calculation algorithms, the smaller the calculated similarity, the more similar the two features are; for others, the larger the calculated similarity, the more similar the two features are.
Therefore, when the first similarity is compared with the second similarity, the target similarity of the preset interactive image can be determined in the first similarity and the second similarity according to the comparison result.
When the similarity is smaller and more similar, the small similarity can be selected as the target similarity of the preset interactive image. For example, when the first similarity is smaller than the second similarity, the first similarity may be selected as the target similarity of the preset interactive image.
When the similarity is larger and more similar, the large similarity can be selected as the target similarity of the preset interactive image. For example, when the first similarity is greater than the second similarity, the first similarity may be selected as the target similarity of the preset interactive image.
In an embodiment, after the target similarity of the preset interactive images is determined, the preset interactive images may be sorted according to the target similarity to obtain a sorting result.
When the plurality of preset interactive images are subjected to sequencing processing, the preset interactive images can be sequenced according to the measurement form of the similarity.
For example, when the similarity is smaller and more similar, the target similarities of the interactive images can be ranked from small to large, so as to obtain the interactive images with the target similarities ranked from strong to weak.
For another example, when the similarity is larger and more similar, the target similarities of the interactive images may be sorted from large to small, so as to obtain the interactive images with the target similarities sorted from strong to weak.
In an embodiment, after the preset interactive images are sorted, at least one target interactive image meeting the preset image similarity condition can be screened out from the preset interactive images according to the sorting result.
The preset image similarity condition may include a preset condition that needs to be satisfied when a plurality of preset interactive images are screened.
For example, the preset image similarity condition may be that the target similarity of the preset interactive image ranks in the top 10. For another example, the preset image similarity condition may be that the target similarity of the preset interactive image meets a preset threshold, and the like.
For example, when the preset image similarity condition is that the target similarity ranks in the top 10, the preset interactive images whose target similarities rank in the top 10 can be screened out as the target interactive images.
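The sketch below illustrates the screening, assuming a Euclidean-style metric where a smaller value means more similar, so the smaller of the two similarities is taken per image and the images with the 10 smallest target similarities are kept. The image identifiers and similarity lists are placeholders.

```python
# Illustrative sketch of screening target interactive images by target similarity.
def screen_target_images(images, first_similarity, second_similarity, top_k=10):
    scored = [(min(first_similarity[i], second_similarity[i]), img)
              for i, img in enumerate(images)]
    scored.sort(key=lambda pair: pair[0])          # most similar (smallest distance) first
    return [img for _, img in scored[:top_k]]

images = [f"interactive_image_{i}" for i in range(30)]
first = [i * 0.1 for i in range(30)]
second = [3.0 - i * 0.05 for i in range(30)]
print(screen_target_images(images, first, second))
```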
104. And acquiring historical interaction characteristics of the historical object and historical object characteristics of the historical object, wherein the historical object utilizes the target interaction image to carry out interaction.
After the target interaction image is screened out, the historical interaction characteristics and the historical object characteristics of the target interaction image can be obtained.
Wherein the history object includes an object that has interacted with the distribution content using the target interaction image. For example, the history object includes an object that commented on the posting content using the target interactive image, and the like. The historical object features comprise features which can describe the characteristics of the historical object.
The historical interaction features can include features generated when the historical object interacts using the target interaction image. For example, when the historical object comments on the published content with the target interactive image, the historical interaction features may include the content vector of the text content in that comment, and so on.
In an embodiment, before obtaining the historical interaction features of the historical objects that interact using the target interaction image and the historical object features of those historical objects, target interaction data can be screened out, according to the target interaction image, from the historical interaction data that interacts with the target interaction image, and the objects that published the target interaction data can be determined as the historical objects.
For example, from the published comment contents that interact with the target interaction image, the comment contents ranked in the top 10 may be screened out as the target historical interaction data, and the objects that published those comment contents may be taken as the historical objects.
Then, feature extraction processing can be performed on the target historical interaction data, so that historical interaction features are obtained. Similarly, feature extraction processing may be performed on the object data of the history object to obtain the history object feature.
When the target historical interaction data is published comment content, the published comment content generally consists of text content and a target interaction image. And each target interactive image has the corresponding image semantic. Therefore, when feature extraction processing is performed on the published comment content, feature extraction processing can be performed on the text content of the published comment content and the image semantics of the target interactive image respectively, so that text features corresponding to the text content and image semantics features corresponding to the image semantics are obtained. Then, feature fusion can be performed on the text features and the image semantic features, so that historical interaction features of the target historical interaction data are obtained.
The object data of the historical object may include, among other things, the age, gender, rating in the application, etc. of the historical object. Therefore, when the feature extraction processing is performed on the object data of the history object, the age and the gender of the history object and the level in the application program can be respectively subjected to feature extraction, so that the age feature, the gender feature and the level feature of the history object can be obtained. And the age characteristic, the gender characteristic and the grade characteristic of the historical object are taken as the historical object characteristics of the historical object.
105. And sequencing at least one target interactive image according to the historical interactive features and the historical object features of the target interactive images and the object features of the current object to obtain the sequenced interactive images.
In an embodiment, at least one target interactive image may be sorted according to the historical interaction features and historical object features of the target interactive images and the object features of the current object, so as to obtain the sorted interactive images. Specifically, the step of sorting at least one target interactive image according to the historical interaction features and historical object features of the target interactive image and the object features of the current object to obtain the sorted interactive images may include:
performing heat value mapping processing on the target interaction image by using the historical interaction characteristics and the historical object characteristics of the target interaction image and the object characteristics of the current object to obtain a heat value of the target interaction image;
and sequencing at least one target interactive image according to the heat value of the target interactive image to obtain a sequenced interactive image.
In an embodiment, the heat value of the target interaction image may be obtained by performing heat value mapping processing on the target interaction image by using the historical interaction features and the historical object features of the target interaction image and the object features of the current object.
By performing heat value mapping processing on the target interaction image using the historical interaction features and the historical object features of the target interaction image together with the object features of the current object, the obtained heat value can reflect the behavior characteristics of the current object while taking into account how the public uses the target interaction image, thereby improving the information content and reference value of the heat value. Specifically, the step of performing heat value mapping processing on the target interaction image by using the historical interaction features, the historical object features and the object features of the current object to obtain the heat value of the target interaction image may include:
predicting the target interaction image in a plurality of preset prediction trees according to the historical interaction characteristics and the historical object characteristics of the target interaction image and the object characteristics of the current object to obtain heat prediction information of the target interaction image in each prediction tree;
performing logistic regression processing on the heat prediction information of each preset prediction tree to obtain the prediction characteristics of each heat prediction information;
and performing feature fusion processing on the prediction features of each heat prediction information to obtain the heat value of the target interactive image.
The preset prediction tree may include a trunk and leaf nodes. Wherein the trunk comprises a judgment condition. The judgment condition can be used for carrying out classification judgment on the target interactive image. When the target interactive image meets the judgment condition, the leaf node can give heat prediction information of the target interactive image. In an embodiment, the preset prediction tree may be a model structure in an Extreme Gradient Boosting (XGboost) model, and the feature may be predicted by using the model structure, and prediction information of the feature may be obtained.
The heat prediction information may include information obtained by predicting the heat of the target interactive image. Wherein the heat prediction information may be a prediction value. Each leaf node on the preset prediction tree has a prediction value, and when the target interaction image meets the judgment condition on the trunk, the prediction values on the leaf nodes can be endowed to the target interaction image.
In an embodiment, when the target interaction image is predicted by using the preset prediction trees, the object features of the current object, the historical interaction features and the historical object features may be respectively matched with the leaf nodes on the preset prediction tree to obtain the target leaf nodes of the target interaction image on the preset prediction tree. Then, heat prediction information of the target interaction image is generated according to the target leaf nodes. Specifically, the step of predicting the target interaction image in a plurality of preset prediction trees according to the historical interaction features and the historical object features of the target interaction image and the object features of the current object to obtain the heat prediction information of the target interaction image in each prediction tree may include:
matching object features of a current object with leaf nodes on a preset prediction tree to obtain a first target leaf node of the target interaction image on the preset prediction tree;
matching the historical interaction characteristics, the historical object characteristics and leaf nodes on a preset prediction tree to obtain second target leaf nodes of the target interaction image on the preset prediction tree;
and generating heat prediction information of the target interactive image according to the first target leaf node and the second target leaf node.
When the object feature is matched with the leaf node on the preset prediction tree, the object feature can be judged by using the judgment condition of the trunk on the preset prediction tree. When the object characteristics meet the judgment condition, the object characteristics are matched with the leaf nodes, otherwise, the object characteristics are not matched with the leaf nodes. And matching the object characteristics with leaf nodes on a preset prediction tree to obtain a first target leaf node of the target interaction image on the preset prediction tree.
Similarly, the historical interaction features and the historical object features can be matched with the leaf nodes on the preset prediction tree, so as to obtain the second target leaf node of the target interaction image in the preset prediction tree.
The leaf nodes have predicted values, so that the predicted values of the first target leaf node and the second target leaf node can be subjected to logic operation and then are endowed to the target interactive image, and therefore heat prediction information of the target interactive image is generated. For example, the predicted values at the first target leaf node and the second target leaf node may be added to generate the heat prediction information of the target interactive image. For another example, the predicted values at the first target leaf node and the second target leaf node may be multiplied to generate the heat prediction information of the target interactive image, and so on.
In an embodiment, after obtaining the heat prediction information of the target interaction image in each prediction tree, performing logistic regression processing on the heat prediction information of each preset prediction tree to obtain the prediction feature of each heat prediction information.
For example, the heat prediction information of each preset prediction tree may be subjected to nonlinear change after being calculated according to a preset linear model, so as to obtain the prediction characteristic of each heat prediction information. The preset linear model may be a Logistic Regression (LR) model, and the like.
Then, feature fusion processing can be performed on the prediction features of each heat prediction information to obtain a heat value of the target interactive image. For example, each of the predicted features may be added to obtain a heat value of the target interactive image. For another example, each of the prediction features may be weighted and then added to obtain a heat value of the target interactive image, and so on.
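The sketch below illustrates one common way to combine prediction trees with logistic regression in this spirit: an XGBoost model supplies per-tree leaf assignments, which are one-hot encoded and fed to an LR model whose output probability serves as the heat value. The random training data, the label definition and the hyperparameters are stand-ins, not the patent's own configuration.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression

# Illustrative XGBoost + LR combination: the prediction trees give per-tree leaf
# assignments for each sample, and a logistic regression over the one-hot encoded
# leaves outputs a probability used as the heat value. All data here is random.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))           # stand-in historical interaction / historical object / object features
y_train = (X_train[:, 0] > 0).astype(int)     # stand-in label, e.g. "was this interactive image used"

trees = XGBClassifier(n_estimators=10, max_depth=3).fit(X_train, y_train)
leaves = trees.apply(X_train)                                  # leaf index in each preset prediction tree
encoder = OneHotEncoder(handle_unknown="ignore").fit(leaves)
lr = LogisticRegression(max_iter=1000).fit(encoder.transform(leaves), y_train)

x_new = rng.normal(size=(1, 8))                                # features for one target interactive image
heat_value = lr.predict_proba(encoder.transform(trees.apply(x_new)))[0, 1]
print(heat_value)
```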
In an embodiment, after the heat value of the target interactive image is obtained, at least one target interactive image is sorted according to the heat value of the target interactive image, so as to obtain a sorted interactive image.
For example, according to the heat value of the target interactive image, the target interactive image with a high heat value is arranged in front of the target interactive image, and the target interactive image with a low heat value is arranged behind the target interactive image, so that the sequenced interactive images are obtained.
For example, there are 3 target interaction images. Wherein the heat value of the target interactive image 1 is 90; the heat value of the target interaction image 2 is 80; the heat value of the target interactive image 3 is 60. Therefore, the target interaction images can be sorted in the order of "target interaction image 1, target interaction image 2, and target interaction image 3" according to the heat value of the target interaction images.
In an embodiment, the at least one target interaction image may be further sorted using a ranking model. The ranking model may be any of various artificial intelligence models, such as a CNN, a DNN, a random forest, an Extreme Gradient Boosting (XGBoost) model, a Logistic Regression (LR) model, and so on.
For example, the XGBoost model and the LR model may be combined as the ranking model. Then, the historical interaction features and the historical object features of the target interaction image and the object features of the current object are input into the ranking model, so that the ranking model can perform heat value mapping processing on the target interaction image according to the historical interaction features, the historical object features and the object features of the current object, and the heat value of the target interaction image is obtained. Then, the ranking model may combine the heat value of the target interaction image with the target similarity of the target interaction image to obtain the target heat value of the target interaction image, and rank the target interaction images according to the target heat value.
For example, the ranking model outputs a heat value for each target interaction image. Then, the heat value of the target interaction image and the target similarity of the target interaction image can be combined according to a preset formula, so as to obtain the target heat value of the target interaction image, where i may index the target interaction images; in this example, the ranking model outputs 4 target heat values.
106. And displaying the sequenced interactive images on the application program.
In one embodiment, after the sorted interactive images are obtained, the sorted interactive images may be displayed on an application. For example, the sorted interactive images may be displayed on an interactive image display interface of the application.
Because the sequenced interactive images are subjected to multi-feature sequencing, the sequenced interactive images can better conform to the habit of using the interactive images by the object. In addition, through multi-feature sequencing, the display of the interactive image is more fit to the published content in the application program. By displaying the sequenced interactive images on the application program, the objects can be more intelligent and convenient when interacting with the published contents.
For example, suppose the object wants to comment on the published content using interactive image A. When the interactive image display interface is opened, interactive image A is the first interactive image seen on the interface. Therefore, interactive image A can be selected directly to post the comment, without searching for it in the interactive image display interface, which improves the convenience of using interactive images and increases the utilization rate of the application program by the object.
The embodiment of the application provides a method for displaying an interactive image, which can acquire release content of an application program, and interactive data of a current object aiming at the release content and object data of the current object; respectively carrying out feature extraction processing on the release content, the interactive data and the object data to obtain the content feature of the release content, the interactive feature aiming at the release content and the object feature of the current object; screening at least one target interactive image meeting the similar conditions of the preset images from a plurality of preset interactive images in the application program according to the content characteristics and the interactive characteristics; acquiring historical interaction characteristics of a historical object interacting by using a target interaction image and historical object characteristics of the historical object; sequencing at least one target interactive image according to the historical interactive features and the historical object features of the target interactive image and the object features of the current object to obtain sequenced interactive images; displaying the sequenced interactive images on an application program; according to the method and the device, the utilization rate of the objects to the application programs is improved by realizing intelligent sequencing of the interactive images.
In addition, in the embodiment of the application, the target interactive images meeting the preset image similarity condition are screened out from the preset interactive images, so that when the target interactive images are sequenced, massive interactive images do not need to be sequenced, only the target interactive images need to be sequenced, and the display efficiency of the interactive images is improved.
When the method and the device perform feature extraction processing on the release content, the interactive data and the object data, corresponding feature extraction modes can be adopted according to the type of the release content, the type of the interactive data and the type of the object data, and the accuracy rate of feature extraction is improved.
The method described in the above examples is further illustrated in detail below by way of example.
The method of the embodiment of the present application will be described by taking an example in which a display method of an interactive image is integrated on a computer device.
In an embodiment, as shown in fig. 4, a method for displaying an interactive image includes the following specific steps:
201. the computer equipment acquires the release content of the application program, the interactive data of the current object aiming at the release content and the object data of the current object.
The published content may include content published on an application program, and objects may browse and interact with the content.
For example, when the application is a news advisory APP, the published content may include various articles published on the application.
For example, when the application is a social-class APP, the published content may be various dynamic content published on the application.
The interactive data may include information generated when the object performs an interactive action with the published content. For example, the interaction data may include the amount of approval, the amount of forwarding, the amount of collection, the content of comments, the time of interaction, and the like of the object to publish the content.
The interaction time may include a time difference between the time when the object and the distribution content interact with each other and the current time. For example, when an object approves of publishing contents, the interaction time may refer to a time difference between the approval time of the object to publish the contents and the current time. For another example, when the object comments on the published content, the interaction time may refer to a time difference between a time when the object comments on the published content and a current time. For another example, when the object forwards the published content, the interaction time may refer to a time difference between the time when the object forwards the published content and the current time. For another example, when the object approves, collects, and forwards the published content at the same time, the interaction time may refer to a time difference between the time when the object approves, collects, and forwards the published content at the same time and the current time.
For example, if the object commented on the published content at 0:00 on January 1, 2021 and the current time is 21:43 on February 4, 2021, the interaction time may be 34 days, 21 hours and 43 minutes.
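A minimal sketch of this time-difference computation follows; the dates are taken from the example above and are illustrative.

```python
from datetime import datetime

# Illustrative sketch of the interaction time as the difference between the
# comment time and the current time; dates follow the example above.
comment_time = datetime(2021, 1, 1, 0, 0)
current_time = datetime(2021, 2, 4, 21, 43)
interaction_time = current_time - comment_time
print(interaction_time)   # 34 days, 21:43:00
```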
Wherein the object data of the object may include information indicating a characteristic of the object. For example, the subject data for the subject may include the subject's gender, age, rating in the application, and the like.
202. The computer equipment respectively carries out feature extraction processing on the release content, the interactive data and the object data to obtain the content features of the release content, the interactive features aiming at the release content and the object features of the current object.
In an embodiment, when the computer device performs the feature extraction processing on the interactive data, the computer device may perform the feature extraction processing on the interactive data in different feature extraction manners according to different types of interactive data. In one embodiment, all information may also be divided into continuous information, discrete information, and textual information. For example, the amount of approval, the amount of reply, the time of interaction, and the age of the subject may be divided into continuous information. The gender and the level of the subject in the application may then be divided into discrete information. Then, the comment content and the posting content are divided into text information. Wherein different information has different feature processing methods.
For example, the continuous information may be subjected to feature extraction processing by using a logical operation method, so as to obtain continuous features of the continuous information. For another example, the discrete information may be subjected to feature extraction processing by using an information mapping method, so as to obtain discrete features of the discrete information. For another example, the text information may be subjected to feature extraction processing by using a text feature extraction method, so as to obtain text features of the text information.
For example, for the continuous information, feature extraction may be performed according to the following manner, so as to obtain continuous features of the continuous information:
For the praise amount, the computer device may perform feature extraction processing by applying a preset logical operation to the total amount of approval of the objects for the published content. For example, when there are 1000 objects in favor of the published content, the total amount of approval is 1000. The result of this processing is the praise feature corresponding to the praise amount.
For the interaction time data, the computer device may perform feature extraction processing by applying a logical operation to the interaction time data t, so as to obtain the time evaluation coefficient value and, from it, the temporal characteristic of the interaction time data. The value of value decreases continuously as t increases; that is, the time evaluation coefficient decreases as the interaction time data grows, so that the weight of newly published comments is enhanced.
For the reply quantity, the computer device may perform feature extraction processing by applying a preset logical operation to the reply quantity, so as to obtain the reply feature corresponding to the reply quantity.
For the age object data of the object, the computer device may perform feature extraction processing by applying a preset logical operation to the age value Age of the object, so as to obtain the age feature corresponding to the age object data.
For another example, for text information, feature extraction may be performed according to the following manner, so as to obtain text features of the text information:
for the comment content of the object to the published content, the computer device can adopt a text feature extraction model to perform feature extraction processing on the comment content.
The feature extraction processing model may include various artificial intelligence models, among others. For example, any one of CNN, DN, DNN, DCIGN, RCNN, Faster RCNN, and Bert, and the like.
For example, a Bert model may be used as the text feature extraction model, an RCNN may be used as the text feature extraction model, and the like.
In one embodiment, the comment content is often composed of text content and a preset interactive image in an application program. And each preset interactive image has the corresponding image semantic. Therefore, when feature extraction processing is performed on the comment content, the computer device can perform feature extraction processing on the text content and the image semantics of the preset interactive image respectively, so as to obtain text features corresponding to the text content and image semantics corresponding to the image semantics of the preset interactive image. Then, the computer device can perform feature fusion on the text features and the image semantic features, so as to obtain comment features of the comment content.
For example, the computer device may use the text feature extraction model to perform feature extraction processing on the text content and the image semantics of the preset interactive image, so as to obtain the text feature corresponding to the text content and the image semantics feature corresponding to the image semantics of the preset interactive image.
When the text feature extraction model is a Bert model, the text feature extraction model may be composed of 12 layers of transformers.
At this time, the computer device may input the text content into the text feature extraction model and obtain the text content vector output by each Transformer layer. Next, the computer device may screen out at least one target text content vector meeting a preset accuracy threshold from the plurality of text content vectors, and splice the at least one target text content vector, so as to obtain the text features corresponding to the text content.
Similarly, the computer device can input the image semantics of the preset interactive image into the text feature extraction model and obtain the image semantic vector output by each Transformer layer. Then, the computer device can screen out at least one target image semantic vector meeting a preset accuracy threshold from the plurality of image semantic vectors, and splice the at least one target image semantic vector, so as to obtain the image semantic features corresponding to the image semantics of the preset interactive image.
When the release content is the text release content, the text feature extraction model can be used for carrying out feature extraction processing on the text release content.
For example, when the text feature extraction model is a Bert model, the computer device may input text publishing contents into the text feature extraction model and then obtain a text publishing vector output by each layer of the Transformer. Next, the computer device may screen out at least one target text vector meeting a preset accurate threshold from the plurality of text-publishing vectors, and concatenate the at least one target text vector, thereby obtaining text content features corresponding to the text-publishing content.
For example, as shown in FIG. 5, the Bert model consists of 12 Transformer layers. Assuming that the input of Bert is a text of length L, the output of each layer contains L+2 vectors, namely the vector of each word plus the CLS vector and the SEP vector. The invention selects the CLS vector of the fourth-layer Transformer and splices it with the CLS vector of the twelfth layer as the vector result of the text input. The CLS vector may represent the text features of the entire sentence; SEP is a split vector used as a separator between two sentences.
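The sketch below illustrates this layer selection using the Hugging Face transformers library; the checkpoint name "bert-base-chinese" and the sample input are assumptions for illustration.

```python
import torch
from transformers import BertModel, BertTokenizer

# Illustrative sketch: collect the per-layer CLS vectors of a 12-layer Bert and
# splice the CLS vectors of the fourth and twelfth Transformer layers.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese", output_hidden_states=True)

inputs = tokenizer("这个游戏总是卡顿", return_tensors="pt")   # sample text publishing / comment content
with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states[0] is the embedding layer; indices 1..12 are the Transformer layers.
cls_layer_4 = outputs.hidden_states[4][:, 0, :]     # CLS vector of the fourth Transformer layer
cls_layer_12 = outputs.hidden_states[12][:, 0, :]   # CLS vector of the twelfth Transformer layer
text_vector = torch.cat([cls_layer_4, cls_layer_12], dim=-1)  # spliced vector result of the text input
```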
For another example, for discrete information, the feature extraction may be performed according to the following manner, so as to obtain discrete features of the discrete information:
for the gender object data of the object, the computer device may map the gender object data of the object according to a preset gender attribute mapping table, so as to obtain a gender feature corresponding to the gender object data.
For example, in the preset gender attribute mapping table, the gender object data "male" is mapped to 1; mapping gender object data "female" to 0; gender object data "unknown" is mapped to 2.
Therefore, the computer device can map the gender object data of the object according to the preset gender attribute mapping table, so as to obtain the gender characteristics corresponding to the gender object data.
For the level object data of the object, the computer device may map the level object data of the object according to a preset level attribute mapping table, so as to obtain a level feature corresponding to the level object data.
For example, in the preset level attribute mapping table, level N is directly mapped to N. Therefore, the computer device can map the level object data of the object according to the preset level attribute mapping table, so as to obtain the level characteristics corresponding to the level object data.
203. And the computer equipment screens out at least one target interactive image meeting the similar conditions of the preset images from a plurality of preset interactive images in the application program according to the content characteristics and the interactive characteristics.
For example, the computer device may calculate the similarity between the content feature of the release content and the semantic feature of the preset interactive image using the Euclidean distance, as the first similarity. Then, the computer device may calculate the similarity between the interactive feature of the interactive data and the semantic feature of the preset interactive image using the Euclidean distance, as the second similarity.
The formula for calculating the Euclidean distance may be as follows:

$$d(X, Y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$

where $d(X, Y)$ may represent the first similarity of the preset interactive image, X may represent a content feature, Y may represent a semantic feature, $x_i$ and $y_i$ may represent the vector values in the vectors X and Y when the content feature and the semantic feature are vectors, and n may represent the number of vector values in the vectors.
Then, the computer device may select a smaller value between the first similarity and the second similarity as the target similarity of the preset interactive image.
In one embodiment, the computer device may determine the target similarity of the preset interactive image according to the following formula:
$$S = \min\big(d(E, T),\ d(E, C)\big)$$

where $E$ may represent the semantic features of the preset interactive image, $T$ may represent the text content features of the text release content, $C$ may represent the target comment feature of the comment content, $d(\cdot,\cdot)$ may represent the Euclidean distance, and $S$ may represent the target similarity of the preset interactive image. In an embodiment, there may be multiple pieces of comment content for the release content; for example, a news article may have thousands of comments. In that case, the 10 hottest pieces of comment content may be screened out of the plurality of comment contents, and the average of the comment features of these 10 hot comments may be taken as the target comment feature.
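A minimal NumPy sketch of this screening step follows, under the assumptions that all features have already been extracted as vectors of the same dimension and that the smaller Euclidean distance is kept as the target similarity; the function names are illustrative.

```python
# Screen target interactive images by Euclidean distance between the image
# semantic features and the content / interaction features.
import numpy as np

def euclidean(x: np.ndarray, y: np.ndarray) -> float:
    return float(np.sqrt(np.sum((x - y) ** 2)))

def target_similarity(image_semantic: np.ndarray,
                      content_feature: np.ndarray,
                      comment_features: list[np.ndarray]) -> float:
    # Average the comment features (e.g. of the 10 hottest comments) to get
    # the target comment feature, then keep the smaller of the two distances.
    target_comment = np.mean(comment_features, axis=0)
    first = euclidean(content_feature, image_semantic)   # first similarity
    second = euclidean(target_comment, image_semantic)   # second similarity
    return min(first, second)

def screen_target_images(preset_images: dict[str, np.ndarray],
                         content_feature: np.ndarray,
                         comment_features: list[np.ndarray],
                         top_k: int = 8) -> list[str]:
    scored = [(name, target_similarity(sem, content_feature, comment_features))
              for name, sem in preset_images.items()]
    scored.sort(key=lambda item: item[1])  # smaller distance = closer match
    return [name for name, _ in scored[:top_k]]
```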
204. The computer equipment acquires the historical interaction characteristics of the historical objects which interact by using the target interaction image and the historical object characteristics of the historical objects.
Through similarity matching, the computer device can screen out a plurality of target interactive images that best match the release content and the interactive data. For example, the computer device may screen out the 8 target interactive images with the highest matching degree with the release content and the interactive data.
For each target interactive image, the computer device may select the top five published comments in the current comment area as the historical interaction data, and take the objects that published those comments as the historical objects. Then, the computer device can perform feature extraction on the historical interaction data to obtain the historical interaction features. In addition, the computer device can also perform feature extraction on the object data of the historical objects to obtain the historical object features of the historical objects.
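A small sketch of this history-collection step follows; it assumes each comment is a dictionary carrying its text, heat and author, and that "top five" means the five hottest comments, which is an assumption.

```python
# Collect historical interaction data and historical objects for one
# target interactive image from its comment area.
def collect_history(comments: list[dict], top_n: int = 5) -> tuple[list[str], list[dict]]:
    hottest = sorted(comments, key=lambda c: c["heat"], reverse=True)[:top_n]
    historical_interactions = [c["text"] for c in hottest]  # published comment content
    historical_objects = [c["author"] for c in hottest]     # objects that published them
    return historical_interactions, historical_objects
```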
205. And the computer equipment sorts at least one target interactive image according to the historical interactive features and the historical object features of the target interactive image and the object features of the current object to obtain the sorted interactive image.
Then, in order to obtain the actual ordering of the target interactive images, the target interactive images may be sorted by using a similarity-based LR + XGBoost ranking model, for example as shown in FIG. 6.
For example, the historical interaction features and the historical object features of the target interactive image, and the object features of the current object, may be input into the LR + XGBoost ranking model so as to rank the target interactive images.
Then, the target heat value of each target interactive image is calculated by combining the target similarity of the target interactive image with the heat value of the target interactive image output by the ranking model.
And then, sequencing the target interactive images according to the target heat value to obtain the sequenced target interactive images.
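The exact combination formula is not reproduced above, so the sketch below is only one possible realisation: it assumes the model heat value is down-weighted by the Euclidean-distance-based target similarity, and then sorts by the resulting target heat value. Both the combination and the field names are assumptions.

```python
# Combine the target similarity (a distance; smaller is better) with the
# heat value output by the ranking model, then sort the target images.
def combined_heat(target_similarity: float, model_heat: float) -> float:
    return model_heat / (1.0 + target_similarity)  # assumed combination, not the patent's formula

def sort_by_target_heat(images: list[dict]) -> list[dict]:
    # Each dict is assumed to carry "target_similarity" and "model_heat" keys.
    return sorted(images,
                  key=lambda img: combined_heat(img["target_similarity"], img["model_heat"]),
                  reverse=True)
```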
The ranking model may be trained before it is used. Specifically, the XGBoost model may first be trained using the original features. New features are then constructed from the trees learned by the XGBoost model, and the new features are appended to the original features to train the model. Each element of the constructed new feature vector corresponds to a leaf node of a tree in the XGBoost model. When a training sample, after passing through a tree, finally falls on a leaf node of that tree, the element corresponding to that leaf node in the new feature vector is 1, and the elements corresponding to the other leaf nodes of that tree are 0. Finally, the new features are input into the LR model for training.
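A minimal training sketch of the XGBoost + LR scheme described above follows, using the xgboost and scikit-learn packages; the data, hyper-parameters and feature shapes are placeholders.

```python
# Train XGBoost on the original features, one-hot encode the leaf node each
# sample falls into in every tree, append these new features to the original
# ones, and train a logistic-regression model on the combined features.
import numpy as np
from xgboost import XGBClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression

X = np.random.rand(1000, 20)           # placeholder original features
y = np.random.randint(0, 2, 1000)      # placeholder interaction labels

gbdt = XGBClassifier(n_estimators=50, max_depth=4)
gbdt.fit(X, y)

leaf_indices = gbdt.apply(X)           # (n_samples, n_trees) leaf-node indices
encoder = OneHotEncoder()
leaf_features = encoder.fit_transform(leaf_indices).toarray()  # 1 for the leaf hit, 0 elsewhere

lr = LogisticRegression(max_iter=1000)
lr.fit(np.hstack([X, leaf_features]), y)

# At inference time, the LR probability can serve as the heat value.
heat = lr.predict_proba(np.hstack([X, encoder.transform(gbdt.apply(X)).toarray()]))[:, 1]
```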
206. The computer device displays the sequenced interactive images on the application program.
For example, as shown in FIG. 7, 2011 in FIG. 7 shows how the interactive images are displayed in the interactive image display area 2013 when the method proposed in the embodiment of the present application is not applied: the interactive images can only be displayed in a single, fixed order. 2012 in FIG. 7 shows the display of the interactive images in the interactive image display area 2013 when the method proposed in the embodiment of the present application is applied: at this time, the interactive images in the interactive image display region 2014 can be displayed intelligently according to the release content of the application program, the interactive data for the release content, and the object data of the object.
In the embodiment of the application, the computer device can acquire the release content of the application program, the interactive data of the current object for the release content, and the object data of the current object; the computer device respectively performs feature extraction processing on the release content, the interactive data and the object data to obtain the content features of the release content, the interactive features for the release content and the object features of the current object; the computer device screens out, according to the content features and the interactive features, at least one target interactive image which meets the preset image similarity condition from a plurality of preset interactive images in the application program; the computer device acquires the historical interaction features of the historical objects which interact by using the target interactive image and the historical object features of the historical objects; the computer device sorts the at least one target interactive image according to the historical interaction features and the historical object features of the target interactive image and the object features of the current object to obtain the sorted interactive images; and the computer device displays the sorted interactive images on the application program. In this way, by intelligently sorting the interactive images, the embodiment of the application improves the utilization rate of the application program by the objects.
In order to better implement the method for displaying an interactive image provided by the embodiment of the present application, in an embodiment, a display device for an interactive image is further provided, and the display device for an interactive image may be integrated in a computer device. The meaning of the noun is the same as that in the above-mentioned display method of the interactive image, and the specific implementation details can refer to the description in the method embodiment.
In an embodiment, there is provided a display apparatus for interactive images, which may be specifically integrated in a computer device, as shown in fig. 8, the display apparatus for interactive images includes: the first obtaining unit 301, the feature extraction processing unit 302, the screening unit 303, the second obtaining unit 304, the sorting unit 305, and the display unit 306 are specifically as follows:
a first obtaining unit 301, configured to obtain release content of an application program, interaction data of a current object for the release content, and object data of the current object;
a feature extraction processing unit 302, configured to perform feature extraction processing on the release content, the interaction data, and the object data, respectively, to obtain a content feature of the release content, the interaction feature for the release content, and an object feature of the current object;
a screening unit 303, configured to screen, according to the content feature and the interaction feature, at least one target interaction image that meets a preset image similarity condition from a plurality of preset interaction images in the application program;
a second obtaining unit 304, configured to obtain a history interaction feature of the history object interacting with the target interaction image and a history object feature of the history object;
a sorting unit 305, configured to sort the at least one target interaction image according to the historical interaction features and the historical object features of the target interaction image, and the object features of the current object, so as to obtain a sorted interaction image;
a display unit 306, configured to display the sorted interactive images on the application program.
In one embodiment, the feature extraction processing unit includes:
the information identification subunit is configured to perform information identification processing on the release content, the interactive data, and the object data, respectively, to obtain a type of the release content, a type of the interactive data, and a type of the object data;
the release content feature extraction subunit is used for performing feature extraction processing on the release content according to the type of the release content to obtain the content features;
the interactive data feature extraction subunit is used for performing feature extraction processing on the interactive data according to the type of the interactive data to obtain the interactive features;
and the object data feature extraction subunit is used for performing feature extraction processing on the object data according to the type of the object data to obtain the object features.
In one embodiment, the release content feature extraction subunit includes:
the feature extraction module is used for performing feature extraction processing on the text release content on a plurality of different dimensions to obtain text features output by each dimension;
the screening module is used for screening out at least one target text characteristic which meets a preset accurate threshold value from the plurality of text characteristics;
and the splicing module is used for splicing the at least one target text characteristic to obtain the content characteristic of the release content.
In one embodiment, the feature extraction module includes:
the word segmentation processing submodule is used for carrying out word segmentation processing on the text release content to obtain a plurality of text characters of the text release content;
the characteristic identification submodule is used for carrying out characteristic identification processing on each text character to obtain characteristic identification information of each text character;
and the generating submodule is used for generating the text characteristics of the text release content according to the characteristic identification information of each text character.
In an embodiment, the interactive data feature extraction subunit includes:
the determining module is used for determining a feature extraction mode corresponding to each type of interactive data according to the type of the interactive data;
and the feature extraction module is used for performing feature extraction processing on the type of interactive data by using the feature extraction mode to obtain the interactive features of the interactive data.
In one embodiment, the feature extraction module includes:
the logic operation processing submodule is used for carrying out logic operation processing on the interaction time data to obtain a time evaluation coefficient of the interaction time data;
the judgment processing submodule is used for judging the time evaluation coefficient by utilizing a preset evaluation threshold value to obtain a judgment result;
and the determining submodule is used for determining the interaction time characteristics of the interaction time data according to the judgment result.
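The logical operation on the interaction time data is not spelled out above, so the following sketch is purely an assumption: it computes a recency-based time evaluation coefficient and judges it against a preset evaluation threshold to produce a binary interaction time feature.

```python
# Hedged sketch of the interaction-time feature extraction module.
import time

def interaction_time_feature(interaction_timestamp: float,
                             evaluation_threshold: float = 0.5,
                             half_life_seconds: float = 86400.0) -> int:
    age = max(time.time() - interaction_timestamp, 0.0)
    coefficient = half_life_seconds / (half_life_seconds + age)   # assumed time evaluation coefficient
    return 1 if coefficient >= evaluation_threshold else 0        # judged against the preset threshold
```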
In an embodiment, the screening unit may include:
the acquisition subunit is used for acquiring semantic features of the preset interactive images;
the first similarity calculation subunit is used for calculating the similarity of the content features and the semantic features of the preset interactive image to obtain a first similarity of the preset interactive image;
the second similarity calculation operator unit is used for calculating the similarity of the interaction features and the semantic features of the preset interaction image to obtain a second similarity of the preset interaction image;
and the screening subunit is used for screening at least one target interactive image meeting the preset image similarity condition from a plurality of preset interactive images according to the first similarity and the second similarity.
In an embodiment, the screening subunit may include:
the comparison module is used for comparing the first similarity with the second similarity and determining the target similarity of the preset interactive image in the first similarity and the second similarity according to the comparison result;
the ordering processing module is used for ordering the preset interactive images according to the target similarity of the preset interactive images to obtain an ordering result;
and the screening module is used for screening at least one target interactive image which meets the similar conditions of the preset images from the plurality of preset interactive images according to the sorting result.
In an embodiment, the sorting unit may include:
the acquisition subunit is used for acquiring the content interaction characteristics and the use object characteristics of the target interaction image according to the target interaction image;
the heat value mapping processing subunit is configured to perform heat value mapping processing on the target interaction image by using the object features, the content interaction features, and the use object features to obtain a heat value of the target interaction image;
and the sequencing subunit is used for sequencing the at least one target interactive image according to the heat value of the target interactive image to obtain a sequenced interactive image.
In an embodiment, the heat value mapping processing subunit may include:
the classification module is used for predicting the target interaction image in a plurality of preset prediction trees according to the historical interaction characteristics, the historical object characteristics and the object characteristics of the current object to obtain heat prediction information of the target interaction image in each prediction tree;
the logistic regression processing module is used for carrying out logistic regression processing on the heat prediction information of each preset prediction tree to obtain the prediction characteristics of each heat prediction information;
and the feature fusion processing module is used for performing feature fusion processing on the prediction features of each heat prediction information to obtain the heat value of the target interactive image.
In one embodiment, the classification module may include:
the first class matching sub-module is used for carrying out class matching on the object features of the current object and leaf nodes on the preset classification tree to obtain a first target leaf node of the target interaction image on the preset classification tree;
the second category matching submodule is used for performing category matching on the historical interaction features, the historical object features and leaf nodes on the preset classification tree to obtain second target leaf nodes of the target interaction image on the preset classification tree;
and the generation submodule is used for generating the category prediction information of the target interactive image according to the first target leaf node and the second target leaf node.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
By means of the interactive image display device, the intelligent sequencing of the interactive images can be achieved, and the utilization rate of the objects to the application programs is improved.
The embodiment of the present application further provides a computer device, where the computer device may include a terminal or a server; for example, the computer device may be used as a display terminal for interactive images, and the terminal may be a mobile phone, a tablet computer, or the like; for another example, the computer device may be a server, such as a display server for interactive images. FIG. 9 shows a schematic structural diagram of a terminal according to an embodiment of the present application. Specifically:
the computer device may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 9 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the computer device, connects various parts of the entire computer device using various interfaces and lines, and performs various functions of the computer device and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the computer device as a whole. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user pages, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by operating the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the computer device, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 access to the memory 402.
The computer device further comprises a power supply 403 for supplying power to the various components, and preferably, the power supply 403 is logically connected to the processor 401 via a power management system, so that functions of managing charging, discharging, and power consumption are implemented via the power management system. The power supply 403 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The computer device may also include an input unit 404, the input unit 404 being operable to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the computer device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions as follows:
acquiring release content of an application program, and interactive data of a current object aiming at the release content and object data of the current object;
respectively carrying out feature extraction processing on the release content, the interaction data and the object data to obtain content features of the release content, interaction features aiming at the release content and object features of the current object;
screening at least one target interactive image meeting the similar conditions of preset images from a plurality of preset interactive images in the application program according to the content characteristics and the interactive characteristics;
acquiring historical interaction characteristics of a historical object interacting by using the target interaction image and historical object characteristics of the historical object;
sequencing the at least one target interactive image according to the historical interactive features and the historical object features of the target interactive image and the object features of the current object to obtain a sequenced interactive image;
and displaying the sequenced interactive images on the application program.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations of the above embodiments.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by a computer program, which may be stored in a computer-readable storage medium and loaded and executed by a processor, or by related hardware controlled by the computer program.
To this end, an embodiment of the present application further provides a storage medium, in which a computer program is stored, where the computer program can be loaded by a processor to execute the steps in any one of the methods for displaying an interactive image provided in the embodiments of the present application. For example, the computer program may perform the steps of:
acquiring release content of an application program, and interactive data of a current object aiming at the release content and object data of the current object;
respectively carrying out feature extraction processing on the release content, the interaction data and the object data to obtain content features of the release content, interaction features aiming at the release content and object features of the current object;
screening at least one target interactive image meeting the similar conditions of preset images from a plurality of preset interactive images in the application program according to the content characteristics and the interactive characteristics;
acquiring historical interaction characteristics of a historical object interacting by using the target interaction image and historical object characteristics of the historical object;
sequencing the at least one target interactive image according to the historical interactive features and the historical object features of the target interactive image and the object features of the current object to obtain a sequenced interactive image;
and displaying the sequenced interactive images on the application program.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Since the computer program stored in the storage medium can execute the steps in any method for displaying an interactive image provided in the embodiment of the present application, the beneficial effects that can be achieved by any method for displaying an interactive image provided in the embodiment of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The foregoing detailed description is directed to a method, an apparatus, a computer device, and a storage medium for displaying an interactive image provided in an embodiment of the present application, and a specific example is applied in the detailed description to explain the principles and embodiments of the present application, and the description of the foregoing embodiment is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (15)
1. A method for displaying an interactive image, comprising:
acquiring release content of an application program, and interactive data of a current object aiming at the release content and object data of the current object;
respectively carrying out feature extraction processing on the release content, the interaction data and the object data to obtain content features of the release content, interaction features aiming at the release content and object features of the current object;
screening at least one target interactive image meeting the similar conditions of preset images from a plurality of preset interactive images in the application program according to the content characteristics and the interactive characteristics;
acquiring historical interaction characteristics of a historical object interacting by using the target interaction image and historical object characteristics of the historical object;
sequencing the at least one target interactive image according to the historical interactive features and the historical object features of the target interactive image and the object features of the current object to obtain a sequenced interactive image;
and displaying the sequenced interactive images on the application program.
2. The method for displaying an interactive image according to claim 1, wherein the performing feature extraction processing on the release content, the interactive data, and the object data to obtain the content feature of the release content, the interactive feature for the release content, and the object feature of the current object, respectively, comprises:
respectively carrying out information identification processing on the release content, the interactive data and the object data to obtain the type of the release content, the type of the interactive data and the type of the object data;
according to the type of the release content, performing feature extraction processing on the release content to obtain the content features;
according to the type of the interactive data, performing feature extraction processing on the interactive data to obtain the interactive features;
and according to the type of the object data, performing feature extraction processing on the object data to obtain the object features.
3. The method of displaying an interactive image according to claim 2, wherein the type of the distribution content includes a text distribution content; the performing feature extraction processing on the release content according to the type of the release content to obtain the content features includes:
performing feature extraction processing on the text release content on a plurality of different dimensions to obtain text features output by each dimension;
screening at least one target text characteristic which meets a preset accurate threshold value from the plurality of text characteristics;
and splicing the at least one target text characteristic to obtain the content characteristic of the release content.
4. The method for displaying an interactive image according to claim 3, wherein the performing feature extraction on the text release content in a plurality of different dimensions to obtain the text features output in each dimension comprises:
performing word segmentation processing on the text release content to obtain a plurality of text characters of the text release content;
performing characteristic identification processing on each text character to obtain characteristic identification information of each text character;
and generating the text characteristics of the text release content according to the characteristic identification information of each text character.
5. The method for displaying an interactive image according to claim 3, wherein the performing feature extraction on the text release content in a plurality of different dimensions to obtain the text features output in each dimension comprises:
performing feature extraction processing on the text release content on a plurality of different dimensions by using a text feature extraction model to obtain text features output by each dimension;
before the feature extraction processing is performed on the text release content on a plurality of different dimensions by using a text feature extraction model to obtain the text features output by each dimension, the method comprises the following steps:
acquiring a plurality of training text contents and a text feature extraction model to be trained;
pre-training the text feature extraction model to be trained by using the plurality of training text contents to obtain a pre-trained text feature extraction model;
performing word segmentation processing on the training text content to obtain word segmentation content of the training text content;
and performing incremental training on the pre-trained text feature extraction model by using the word segmentation content to obtain the text feature extraction model.
6. The method for displaying an interactive image according to claim 2, wherein the performing a feature extraction process on the interactive data according to the type of the interactive data to obtain the interactive features comprises:
determining a feature extraction mode corresponding to the type of the interactive data according to the type of the interactive data;
and performing feature extraction processing on the type of interactive data by using the feature extraction mode to obtain the interactive features.
7. The method of claim 6, wherein the type of the interactive data comprises interactive time data; the method for extracting the features of the interactive data of the type by using the feature extraction mode to obtain the interactive features comprises the following steps:
performing logical operation processing on the interaction time data to obtain a time evaluation coefficient of the interaction time data;
judging the time evaluation coefficient by using a preset evaluation threshold value to obtain a judgment result;
and determining the interaction time characteristics of the interaction time data according to the judgment result.
8. The method for displaying an interactive image according to claim 1, wherein the step of screening at least one target interactive image meeting a preset image similarity condition from a plurality of preset interactive images in the application program according to the content features and the interactive features comprises:
obtaining semantic features of the preset interactive images;
similarity calculation is carried out on the content features and the semantic features of the preset interactive image, and first similarity of the preset interactive image is obtained;
similarity calculation is carried out on the interaction features and the semantic features of the preset interaction image, and second similarity of the preset interaction image is obtained;
and screening at least one target interactive image meeting the preset image similarity condition from a plurality of preset interactive images according to the first similarity and the second similarity.
9. The method for displaying interactive images according to claim 8, wherein the step of screening at least one target interactive image meeting a preset image similarity condition from a plurality of preset interactive images according to the first similarity and the second similarity comprises:
comparing the first similarity with the second similarity, and determining the target similarity of the preset interactive image in the first similarity and the second similarity according to the comparison result;
according to the target similarity of the preset interactive images, sequencing the preset interactive images to obtain a sequencing result;
and screening at least one target interactive image which meets the similar conditions of the preset images from the plurality of preset interactive images according to the sorting result.
10. The method for displaying an interactive image according to claim 1, wherein the step of sorting the at least one target interactive image according to the historical interactive features and the historical object features of the target interactive image and the object features of the current object to obtain a sorted interactive image comprises:
performing heat value mapping processing on the target interaction image by using the historical interaction characteristics, the historical object characteristics and the object characteristics of the current object to obtain a heat value of the target interaction image;
and sequencing the at least one target interactive image according to the heat value of the target interactive image to obtain a sequenced interactive image.
11. The method for displaying an interactive image according to claim 10, wherein the obtaining the heat value of the target interactive image by performing heat value mapping processing on the target interactive image by using the historical interactive features, the historical object features, and the object features of the current object comprises:
predicting the target interaction image in a plurality of preset prediction trees according to the historical interaction features, the historical object features and the object features of the current object to obtain heat prediction information of the target interaction image in each prediction tree;
performing logistic regression processing on the heat prediction information of each prediction tree to obtain the prediction characteristics of each heat prediction information;
and performing feature fusion processing on the prediction features of each heat prediction information to obtain the heat value of the target interaction image.
12. The method of claim 11, wherein the step of predicting the target interactive image in a plurality of preset prediction trees according to the historical interactive features, the historical object features, and the object features of the current object to obtain heat prediction information of the target interactive image in each prediction tree comprises:
matching the object characteristics of the current object with leaf nodes on the preset prediction tree to obtain a first target leaf node of the target interaction image on the preset prediction tree;
matching the historical interaction features, the historical object features and leaf nodes on the preset prediction tree to obtain a second target leaf node of the target interaction image on the preset prediction tree;
and generating heat prediction information of the target interactive image according to the first target leaf node and the second target leaf node.
13. An interactive image display device, comprising:
the device comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring release content of an application program, interactive data of a current object for the release content, and object data of the current object;
the feature extraction processing unit is used for respectively performing feature extraction processing on the release content, the interaction data and the object data to obtain content features of the release content, the interaction features aiming at the release content and object features of the current object;
the screening unit is used for screening at least one target interactive image meeting the similar conditions of preset images from a plurality of preset interactive images in the application program according to the content characteristics and the interactive characteristics;
the second acquisition unit is used for acquiring historical interaction characteristics of historical objects which interact by using the target interaction image and historical object characteristics of the historical objects;
the sequencing unit is used for sequencing the at least one target interaction image according to the historical interaction characteristics and the historical object characteristics of the target interaction image and the object characteristics of the current object to obtain a sequenced interaction image;
and the display unit is used for displaying the sequenced interactive images on the application program.
14. A computer device comprising a memory and a processor; the memory stores an application program, and the processor is configured to execute the application program in the memory to perform the operations of the method for displaying an interactive image according to any one of claims 1 to 12.
15. A storage medium storing a plurality of instructions, the instructions being suitable for being loaded by a processor to execute the steps of the method for displaying an interactive image according to any one of claims 1 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110728078.5A CN113254711B (en) | 2021-06-29 | 2021-06-29 | Interactive image display method and device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110728078.5A CN113254711B (en) | 2021-06-29 | 2021-06-29 | Interactive image display method and device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113254711A CN113254711A (en) | 2021-08-13 |
CN113254711B true CN113254711B (en) | 2021-09-14 |
Family
ID=77190091
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110728078.5A Active CN113254711B (en) | 2021-06-29 | 2021-06-29 | Interactive image display method and device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113254711B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113835813A (en) * | 2021-09-26 | 2021-12-24 | 乐美科技股份私人有限公司 | Page display method and device |
CN116010638A (en) * | 2021-10-22 | 2023-04-25 | 腾讯科技(深圳)有限公司 | Interactive image display method and device, computer equipment and storage medium |
CN114049678B (en) * | 2022-01-11 | 2022-04-12 | 之江实验室 | Facial motion capturing method and system based on deep learning |
CN114863437B (en) * | 2022-04-21 | 2023-04-07 | 北京百度网讯科技有限公司 | Text recognition method and device, electronic equipment and storage medium |
CN116975654B (en) * | 2023-08-22 | 2024-01-05 | 腾讯科技(深圳)有限公司 | Object interaction method and device, electronic equipment and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3227855A4 (en) * | 2014-12-04 | 2018-06-20 | Le Henaff, Guy | System and method for interacting with information posted in the media |
US10270839B2 (en) * | 2016-03-29 | 2019-04-23 | Snap Inc. | Content collection navigation and autoforwarding |
US10318109B2 (en) * | 2017-06-09 | 2019-06-11 | Microsoft Technology Licensing, Llc | Emoji suggester and adapted user interface |
CN111475731B (en) * | 2020-04-13 | 2021-10-15 | 腾讯科技(深圳)有限公司 | Data processing method, device, storage medium and equipment |
CN113010702B (en) * | 2021-03-03 | 2024-07-26 | 腾讯科技(深圳)有限公司 | Interactive processing method and device for multimedia information, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113254711A (en) | 2021-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111444428B (en) | Information recommendation method and device based on artificial intelligence, electronic equipment and storage medium | |
CN113254711B (en) | Interactive image display method and device, computer equipment and storage medium | |
CN111125422A (en) | Image classification method and device, electronic equipment and storage medium | |
US11126682B1 (en) | Hyperlink based multimedia processing | |
Paul et al. | Focused domain contextual AI chatbot framework for resource poor languages | |
CN112989212B (en) | Media content recommendation method, device and equipment and computer storage medium | |
CN111625715B (en) | Information extraction method and device, electronic equipment and storage medium | |
CN112528136A (en) | Viewpoint label generation method and device, electronic equipment and storage medium | |
CN114328807A (en) | Text processing method, device, equipment and storage medium | |
CN110209810A (en) | Similar Text recognition methods and device | |
CN115114395A (en) | Content retrieval and model training method and device, electronic equipment and storage medium | |
CN116977701A (en) | Video classification model training method, video classification method and device | |
CN112165639B (en) | Content distribution method, device, electronic equipment and storage medium | |
CN114328800A (en) | Text processing method and device, electronic equipment and computer readable storage medium | |
CN117216535A (en) | Training method, device, equipment and medium for recommended text generation model | |
CN116975349A (en) | Image retrieval method, device, electronic equipment and storage medium | |
CN113741759B (en) | Comment information display method and device, computer equipment and storage medium | |
CN115186085A (en) | Reply content processing method and interaction method of media content interaction content | |
CN116186197A (en) | Topic recommendation method, device, electronic equipment and storage medium | |
CN114329004A (en) | Digital fingerprint generation method, digital fingerprint generation device, data push method, data push device and storage medium | |
CN114741587A (en) | Article recommendation method, device, medium and equipment | |
CN116881462A (en) | Text data processing, text representation and text clustering method and equipment | |
CN115168609A (en) | Text matching method and device, computer equipment and storage medium | |
CN116484085A (en) | Information delivery method, device, equipment, storage medium and program product | |
CN112712056A (en) | Video semantic analysis method and device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40050111; Country of ref document: HK |