CN106330875B - Message display method and device - Google Patents

Message display method and device

Info

Publication number
CN106330875B
Authority
CN
China
Prior art keywords
message
voice
voice message
link information
target video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610682561.3A
Other languages
Chinese (zh)
Other versions
CN106330875A (en)
Inventor
赵娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201610682561.3A
Publication of CN106330875A
Application granted
Publication of CN106330875B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, characterised by the inclusion of specific contents
    • H04L51/10: Multimedia information
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60: Network streaming of media packets
    • H04L65/75: Media network packet handling
    • H04L65/765: Media network packet handling intermediate

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention discloses a message display method and device, and belongs to the technical field of communication. A server receives a voice extraction request for a target video message sent by a first terminal, where the voice extraction request carries at least link information of the target video message, and the link information indicates the storage location of a video file of the target video message in the server. The server acquires audio data in the video file based on the link information, generates at least one voice message based on the audio data, and sends the at least one voice message to the first terminal, so that the first terminal displays the at least one voice message in a designated area in the display interface of the target video message. By extracting and displaying the voice messages contained in a video message, the user can still learn the information in the video message without downloading its video file, thereby saving traffic.

Description

Message display method and device
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a message display method and apparatus.
Background
With the rapid development of communication technology, forms of communication have become increasingly diversified. For example, terminals can communicate through applications such as instant messaging applications and social applications, that is, through these applications, terminals can send messages to each other. The messages may include text messages, voice messages, picture messages, and the like, and may also include video messages, which are messages containing a video file. After receiving a message, the terminal displays the message in its display interface.
Currently, for a video message, the message display process may be implemented as follows: when the terminal receives the video message, it displays a preview picture of the video in the display interface, where the preview picture contains link information of the video. The user can click the preview picture to trigger a video acquisition instruction; after receiving the video acquisition instruction, the terminal downloads the video from a server based on the link information, and once the video has been downloaded locally, the terminal plays the video in the display interface when it receives a play instruction, thereby completing the display of the video message.
However, in the message display process described above, when the message is a video message, the terminal must download the video file contained in the video message, which easily wastes traffic.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present invention provide a message display method and apparatus. The technical solutions are as follows:
in a first aspect, a message display method is provided, the method including:
receiving a voice extraction request of a target video message sent by a first terminal, wherein the voice extraction request at least carries link information of the target video message, the link information is used for indicating a storage position of a video file of the target video message in a server, and the link information is sent when the target video message is sent to the first terminal;
acquiring audio data in the video file based on the link information;
generating at least one voice message based on the audio data;
and sending the at least one voice message to the first terminal so that the first terminal displays the at least one voice message in a designated area in a display interface of the target video message.
In a second aspect, a message display method is provided, the method including:
when a voice extraction instruction of a target video message is received, acquiring link information of the target video message, wherein the link information is used for indicating the storage position of a video file of the target video message in a server;
sending the voice extraction request to a server, wherein the voice extraction request carries the link information, so that the server returns at least one voice message based on the link information, and the at least one voice message is obtained from audio data in the video file based on the link information;
and receiving the at least one voice message sent by the server, and displaying the at least one voice message in a designated area in a display interface of the target video message.
In a third aspect, there is provided a message display apparatus, the apparatus comprising:
a receiving module, configured to receive a voice extraction request of a target video message sent by a first terminal, where the voice extraction request at least carries link information of the target video message, the link information is used to indicate a storage location of a video file of the target video message in a server, and the link information is sent when the target video message is sent to the first terminal;
the first acquisition module is used for acquiring audio data in the video file based on the link information received by the receiving module;
the generating module is used for generating at least one voice message based on the audio data acquired by the first acquiring module;
the first sending module is used for sending the at least one voice message generated by the generating module to the first terminal so that the first terminal can display the at least one voice message in a designated area in a display interface of the target video message.
In a fourth aspect, there is provided a message display apparatus, the apparatus comprising:
the system comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring link information of a target video message when a voice extraction instruction of the target video message is received, and the link information is used for indicating the storage position of a video file of the target video message in a server;
a sending module, configured to send the voice extraction request to a server, where the voice extraction request carries the link information, so that the server returns at least one voice message based on the link information, where the at least one voice message is obtained by the server from audio data in the video file based on the link information;
and the receiving module is used for receiving the at least one voice message sent by the server and displaying the at least one voice message in a designated area in a display interface of the target video message.
The technical solutions provided by the embodiments of the present invention have the following beneficial effects: the server receives a voice extraction request for a target video message sent by a first terminal, where the voice extraction request carries at least link information indicating the storage location of a video file of the target video message in the server; the server then acquires audio data in the video file of the target video message based on the link information, generates at least one voice message based on the audio data, and sends the at least one voice message to the first terminal. In addition, after acquiring the at least one voice message, the first terminal displays it in a designated area in the display interface of the target video message, so that the user can click to listen to it. In other words, even when data traffic is limited, the user can still learn the information in the target video message by listening to the at least one voice message, without downloading the video file of the target video message, which improves the user experience.
Drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1A is a schematic diagram illustrating one implementation environment in accordance with an illustrative embodiment.
FIG. 1B is a flow chart illustrating a message display method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a message display method according to another exemplary embodiment.
Fig. 3A is a flow chart illustrating a message display method according to another exemplary embodiment.
Fig. 3B (1) is an interface schematic diagram of a message display according to the embodiment of fig. 3A.
Fig. 3B (2) is an interface diagram of another message display according to the embodiment of fig. 3A.
Fig. 3C is an interface diagram of a message display according to the embodiment of fig. 3A.
Fig. 4A is a schematic diagram illustrating a structure of a message display apparatus according to an exemplary embodiment.
Fig. 4B is a schematic structural diagram illustrating a message display apparatus according to another exemplary embodiment.
Fig. 5 is a schematic structural diagram illustrating a message display apparatus according to another exemplary embodiment.
Fig. 6 is a schematic diagram of a server structure of a message display apparatus according to an exemplary embodiment.
Fig. 7 is a schematic diagram illustrating a terminal structure of a message display apparatus according to an exemplary embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1A is a schematic diagram illustrating an implementation environment according to an exemplary embodiment, which mainly includes a server 110, a first terminal 120, and a second terminal 130. The first terminal 120 and the second terminal 130 each establish a communication connection with the server 110 through a wired or wireless network, and both run an application that can be used to implement communication.
Fig. 1B is a flowchart illustrating a message display method according to an exemplary embodiment, which may include the following steps:
step 101: receiving a voice extraction request of a target video message sent by a first terminal, wherein the voice extraction request at least carries link information of the target video message, the link information is used for indicating a storage position of a video file of the target video message in a server, and the link information is sent when the target video message is sent to the first terminal.
Step 102: and acquiring the audio data in the video file based on the link information.
Step 103: at least one voice message is generated based on the audio data.
Step 104: and sending the at least one voice message to the first terminal so that the first terminal displays the at least one voice message in a designated area in the display interface of the target video message.
In the embodiment of the present invention, the server receives a voice extraction request for the target video message sent by the first terminal, where the voice extraction request carries at least link information indicating the storage location of the video file of the target video message in the server. The server then acquires audio data in the video file of the target video message based on the link information, generates at least one voice message based on the audio data, and sends the at least one voice message to the first terminal, so that the first terminal obtains the voice messages contained in the target video message. For the first terminal, acquiring the at least one voice message in the target video message consumes much less traffic than downloading the video file, so the purpose of saving traffic is achieved. In addition, after acquiring the at least one voice message, the first terminal displays it in the designated area in the display interface of the target video message, so that the user can click to listen to it. In other words, even when data traffic is limited, the user can still learn the information in the target video message by listening to the at least one voice message, without downloading the video file of the target video message, which improves the user experience.
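For concreteness, the following is a minimal sketch of the server-side flow of steps 101 to 104, assuming a JSON-over-HTTP transport implemented with Flask; the endpoint path, field names, and the stubbed helper functions are illustrative assumptions rather than part of the embodiment.

```python
from typing import Optional

from flask import Flask, request, jsonify

app = Flask(__name__)


def acquire_audio_data(link_info: str) -> bytes:
    """Step 102 (stub): locate the stored video file indicated by the link
    information and return its audio data; see the ffmpeg-based sketch under
    step 304 below."""
    raise NotImplementedError


def generate_voice_messages(audio_data: bytes, os_id: Optional[str]) -> list:
    """Step 103 (stub): convert and, if necessary, segment the audio data into
    voice messages; see the sketches under step 305 below."""
    raise NotImplementedError


@app.route("/voice-extraction", methods=["POST"])  # hypothetical endpoint
def handle_voice_extraction():
    body = request.get_json()
    link_info = body["link_info"]   # step 101: storage location of the video file
    os_id = body.get("os_id")       # optional operating system identifier
    audio_data = acquire_audio_data(link_info)
    voice_messages = generate_voice_messages(audio_data, os_id)
    # Step 104: return the voice messages so that the first terminal can display
    # them in the designated area of the target video message's display interface.
    return jsonify({"voice_messages": voice_messages})


if __name__ == "__main__":
    app.run(port=8080)
```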
Optionally, generating at least one voice message based on the audio data comprises:
when the voice extraction request also carries the operating system identifier of the first terminal, determining a voice format supported by the operating system of the first terminal based on the operating system identifier;
converting the audio data into a voice format supported by an operating system of the first terminal to obtain a preprocessed voice message;
the at least one voice message is generated based on the pre-processed voice message.
Optionally, generating the at least one voice message based on the preprocessed voice message comprises:
determining whether the voice duration of the preprocessed voice message is greater than a preset duration;
if the voice duration of the preprocessed voice message is greater than the preset duration, cutting the preprocessed voice message according to the preset duration to obtain at least one short voice message, where the voice duration of each of the at least one short voice message is less than or equal to the preset duration; and
determining the at least one short voice message as the at least one voice message.
Optionally, after sending the at least one voice message to the first terminal, the method further includes:
correspondingly storing the at least one voice message and the link information;
when a voice extraction request of the target video message sent by a second terminal is received, acquiring at least one voice message which is stored corresponding to the link information based on the link information carried in the voice extraction request;
and sending the at least one voice message to the second terminal.
Optionally, acquiring the audio data in the video file based on the link information includes:
acquiring a video file of the target video message based on the link information;
the audio data is extracted from the video file.
All the above optional technical solutions can be combined arbitrarily to form an optional embodiment of the present invention, which is not described in detail herein.
Fig. 2 is a flowchart illustrating a message display method according to another exemplary embodiment, which may include the steps of:
step 201: when a voice extraction instruction of a target video message is received, link information of the target video message is obtained, and the link information is used for indicating the storage position of a video file of the target video message in a server.
Step 202: and sending the voice extraction request to a server, wherein the voice extraction request carries the link information, so that the server returns at least one voice message based on the link information, and the at least one voice message is obtained from the audio data in the video file based on the link information.
Step 203: and receiving the at least one voice message sent by the server, and displaying the at least one voice message in a designated area in a display interface of the target video message.
In the embodiment of the present invention, when the first terminal receives a voice extraction instruction for the target video message, which indicates that the user wants to obtain the voice messages in the target video message, the first terminal acquires link information indicating the storage location of the video file of the target video message in the server and sends a voice extraction request carrying the link information to the server. The server receives the voice extraction request, acquires audio data in the video file based on the link information, generates at least one voice message based on the audio data, and sends the at least one voice message to the first terminal. Compared with downloading the video file of the target video message, acquiring the at least one voice message consumes much less traffic, so the purpose of saving traffic is achieved. In addition, after receiving the at least one voice message, the first terminal displays it in the designated area in the display interface of the target video message, so that the user can click to listen to it. In other words, even when data traffic is limited, the user can still learn the information in the target video message by listening to the at least one voice message, without downloading the video file, which improves the user experience.
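Correspondingly, the terminal-side flow of steps 201 to 203 might look like the following sketch, which reuses the hypothetical endpoint from the server sketch above and uses the Python requests library in place of platform-specific client code; the URL and field names are assumptions.

```python
import requests

SERVER_URL = "http://example.com/voice-extraction"  # hypothetical endpoint


def extract_voice_messages(link_info: str, os_id: str = "android") -> list:
    """Steps 201-202: build a voice extraction request carrying the link
    information (and, optionally, the operating system identifier) and send it
    to the server."""
    resp = requests.post(
        SERVER_URL,
        json={"link_info": link_info, "os_id": os_id},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["voice_messages"]


def display_in_designated_area(voice_messages: list) -> None:
    """Step 203 (stub): render each voice message in the designated area of the
    target video message's display interface; real rendering is platform UI code."""
    for index, message in enumerate(voice_messages, start=1):
        print(f"voice message {index}: {message}")


if __name__ == "__main__":
    messages = extract_voice_messages(link_info="videos/target_video.mp4")
    display_in_designated_area(messages)
```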
Optionally, the designated area is an area that is a preset distance away from the display position of the target video message.
All the above optional technical solutions can be combined arbitrarily to form an optional embodiment of the present invention, which is not described in detail herein.
Fig. 3A is a flowchart of a message display method according to another exemplary embodiment. This embodiment describes the message display method by taking its implementation through multi-party interaction as an example, and the method may include the following steps:
step 301: when a first terminal receives a voice extraction instruction of a target video message, link information of the target video message is obtained, the link information is used for indicating the storage position of a video file of the target video message in a server, and the link information is sent when the target video message is sent to the first terminal.
At present, with the increasing diversification of communication modes, terminals may communicate through various types of applications, which may include but are not limited to instant messaging applications, social applications, payment applications, and the like; that is, through these applications, terminals may send messages to each other, and after receiving a message, a terminal may display it. As described above, a message may include not only a text message, a voice message, a picture message, and the like, but also a video message. However, at present, displaying a video message requires downloading its video file, so when the network the terminal accesses is a mobile network, traffic is easily wasted.
The voice extraction instruction may be triggered by a user through a specified operation, where the specified operation may include a click operation, a sliding operation, and the like, which is not limited in the embodiment of the present invention.
Referring to fig. 3B (1), in an actual application process, when the first terminal receives the video message, a preview picture of a video file of the target video message is usually displayed in a display interface, as shown in 31 in fig. 3B (1). In a possible implementation manner, if the user wants to obtain the voice message in the target video message, the preview picture 31 may be pressed for a long time to trigger an option display instruction, and after receiving the option display instruction, the first terminal displays an option interface in the current display interface, as shown in 32 in fig. 3B (2), where the option interface includes an "extract voice message" option, as shown in 321 in fig. 3B (2), and the user may click the "extract voice message" option to trigger the voice extraction instruction.
The preview picture contains the link information of the video file, where the link information is sent by the device that sends the target video message to the first terminal. Correspondingly, after receiving the voice extraction instruction, the first terminal acquires the link information from the preview picture, and can then generate a voice extraction request based on the link information, that is, carry the link information in the voice extraction request.
Step 302: and the first terminal sends the voice extraction request to a server, wherein the link information is carried in the voice extraction request.
The first terminal generates a voice extraction request based on the link information, and then transmits the voice extraction request to the server.
Step 303: the server receives a voice extraction request of a target video message sent by a first terminal, wherein the voice extraction request at least carries link information of the target video message.
Step 304: the server acquires the audio data in the video file based on the link information.
The specific implementation process of the server acquiring the audio data in the video file based on the link information may include: the server acquires a video file of the target video message based on the link information, and extracts the audio data from the video file.
That is, after receiving the voice extraction request, the server obtains the link information of the target video message from the voice extraction request, obtains the video file of the target video message from the storage location in the server indicated by the link information, and then extracts the audio data from the video file.
The specific implementation process of the server extracting the audio data from the video file may include: the server decodes the video file to obtain a decoded video file. When a video file is stored, the video data and the audio data are usually compressed and then stored, that is, they are actually two independent parts; therefore, after the video file is decoded, the video data and the audio data are separated, and the server can then extract the audio data.
It should be noted that, in the embodiment of the present invention, the specific implementation manner of extracting the audio data from the video file is merely an example, and in another embodiment, the audio data may also be extracted from the video file by other manners, which is not limited by the embodiment of the present invention.
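As one possible implementation of this extraction step, the server could shell out to the ffmpeg command-line tool, as in the sketch below; the patent does not prescribe a particular decoder, so ffmpeg and the file paths here are illustrative assumptions.

```python
import subprocess


def extract_audio(video_path: str, audio_path: str) -> str:
    """Decode the video file located via the link information and write only its
    audio stream to audio_path."""
    subprocess.run(
        [
            "ffmpeg", "-y",     # overwrite the output file if it already exists
            "-i", video_path,   # the stored video file of the target video message
            "-vn",              # drop the video stream, keep the audio only
            "-acodec", "copy",  # copy the audio stream without re-encoding
            audio_path,
        ],
        check=True,
    )
    return audio_path


# Example (assuming the stored video carries AAC audio):
# extract_audio("storage/target_video.mp4", "storage/target_video.aac")
```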
Step 305: the server generates at least one voice message based on the audio data.
After the server extracts the audio data from the video file, at least one voice message may be generated based on the audio data. Wherein, based on the audio data, the implementation process of generating at least one voice message may include: when the voice extraction request also carries the operating system identifier of the first terminal, determining a voice format supported by the operating system of the first terminal based on the operating system identifier, converting the audio data into the voice format supported by the operating system of the first terminal to obtain a preprocessed voice message, and generating the at least one voice message based on the preprocessed voice message.
Wherein the operating system identification can be used to uniquely identify an operating system.
That is, different first terminals may run different types of operating systems, and the voice formats they can support also differ; for example, the voice formats supported by the Android operating system include the AMR (Adaptive Multi-Rate) format. Therefore, to enable the first terminal to play the generated voice messages normally, the server needs to convert the format of the obtained audio data according to the operating system type of the first terminal.
The specific implementation process of converting the audio data into a voice format supported by the operating system of the first terminal may include: and coding the audio data according to a preset coding format to obtain a preprocessed voice message in a voice format supported by an operating system of the first terminal. The preset encoding format may be set by a user according to actual requirements in a self-defined manner, or may be set by the server in a default manner, which is not limited in the embodiment of the present invention.
It should be noted that, in the embodiment of the present invention, the foregoing is merely one manner of converting the audio data into a voice format supported by the operating system of the first terminal; in another embodiment, the audio data may also be converted into such a voice format in other manners, which is not limited in the embodiment of the present invention.
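As an illustration, the format conversion might be sketched as follows, assuming a mapping from operating system identifier to a supported voice format and an ffmpeg build that includes the opencore-amr encoder for AMR output; both the mapping and the tool choice are assumptions, not part of the embodiment.

```python
import subprocess

# Hypothetical mapping from operating system identifier to a supported voice format.
OS_VOICE_FORMAT = {
    "android": "amr",  # the description cites AMR as a format Android supports
    "ios": "m4a",      # assumption: AAC audio in an M4A container
}


def to_supported_format(audio_path: str, os_id: str) -> str:
    """Convert the extracted audio into the voice format supported by the first
    terminal's operating system, yielding the preprocessed voice message."""
    fmt = OS_VOICE_FORMAT.get(os_id, "amr")
    out_path = audio_path.rsplit(".", 1)[0] + "." + fmt
    cmd = ["ffmpeg", "-y", "-i", audio_path]
    if fmt == "amr":
        # AMR-NB expects 8 kHz mono audio; the encoder is only present in ffmpeg
        # builds compiled with opencore-amr support.
        cmd += ["-ar", "8000", "-ac", "1", "-c:a", "libopencore_amrnb"]
    cmd.append(out_path)
    subprocess.run(cmd, check=True)
    return out_path
```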
The specific implementation process of generating the at least one voice message based on the preprocessed voice message may include: determining whether the voice duration of the preprocessed voice message is greater than a preset duration; if so, cutting the preprocessed voice message according to the preset duration to obtain at least one short voice message, where the voice duration of each of the at least one short voice message is less than or equal to the preset duration; and determining the at least one short voice message as the at least one voice message.
The preset duration may be set by a user according to actual needs in a self-defined manner, or may be set by a server in a default manner, which is not limited in the embodiment of the present invention.
In an actual implementation process, when the voice duration of the preprocessed voice message is greater than the preset duration, it is inconvenient for the first terminal to display it as a single message; therefore, the preprocessed voice message generally needs to be segmented so that the voice duration of each segmented voice message is less than or equal to the preset duration.
The specific implementation process of cutting the preprocessed voice message according to the preset duration to obtain at least one short voice message may include: cutting the preprocessed voice message once every preset duration, following the playing order of the preprocessed voice message, until the voice duration of the remaining part is less than or equal to the preset duration, thus obtaining at least one short voice message.
For example, the preset time duration may be 60 seconds, in which case, if the voice time duration of the pre-processed voice message is 95 seconds, the pre-processed voice message is cut to obtain two short voice messages, and the voice time durations of the two short voice messages are 60 seconds and 35 seconds, respectively.
It should be noted that the voice duration of the preprocessed voice message may be obtained from a video file, that is, the video file obtained in the step 304 includes attribute information of the audio data, where the attribute information includes the voice duration of the audio data, and the voice duration of the audio data is the voice duration of the preprocessed voice message.
It should be noted that, in the foregoing implementation manner, if the voice duration of the preprocessed voice message is less than the preset duration, the preprocessed voice message may be directly determined as the at least one voice message.
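The segmentation logic can be sketched as follows, assuming a 60-second preset duration as in the example above and using ffmpeg's -ss/-t options to perform the cuts; the naming scheme for the short voice messages is an assumption.

```python
import math
import subprocess

PRESET_DURATION = 60  # seconds, matching the 60-second example above


def segment_voice_message(voice_path: str, total_duration: float) -> list:
    """Cut the preprocessed voice message once every preset duration; each
    resulting short voice message lasts at most PRESET_DURATION seconds."""
    if total_duration <= PRESET_DURATION:
        return [voice_path]  # no cutting needed
    pieces = []
    count = math.ceil(total_duration / PRESET_DURATION)
    for i in range(count):
        out_path = f"{voice_path}.part{i + 1}.amr"  # hypothetical naming scheme
        subprocess.run(
            [
                "ffmpeg", "-y", "-i", voice_path,
                "-ss", str(i * PRESET_DURATION),  # start offset of this cut
                "-t", str(PRESET_DURATION),       # keep at most the preset duration
                "-c", "copy",                     # cut without re-encoding
                out_path,
            ],
            check=True,
        )
        pieces.append(out_path)
    return pieces


# Example from the description: a 95-second preprocessed voice message yields two
# short voice messages of 60 seconds and 35 seconds.
# segment_voice_message("storage/target_video.amr", 95.0)
```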
Step 306: the server sends the at least one voice message to the first terminal.
Step 307: and the first terminal receives the at least one voice message sent by the server and displays the at least one voice message in a designated area in a display interface of the target video message.
After receiving the at least one voice message sent by the server, the first terminal can display the at least one voice message. The designated area is an area that is a preset distance away from the display position of the target video message.
The preset distance may be set by a user according to actual requirements in a self-defined manner, or may be set by the first terminal in a default manner, which is not limited in the embodiment of the present invention.
For example, referring to fig. 3C, the designated area is below the display position of the target video message and is a predetermined distance away from the display position of the target video message, and the at least one voice message is 33 in fig. 3C.
After the first terminal displays the voice message in the designated area, a user can click any voice message in the at least one voice message to trigger a voice playing instruction, and when the first terminal detects the voice playing instruction, the voice message indicated by the voice playing instruction is played, so that the user can obtain information in the target video message.
In addition, in the embodiment of the present invention, after the server sends the at least one voice message to the first terminal, it may also receive requests from other terminals to extract the voice messages in the target video message; that is, besides the first terminal, other terminals may also need to extract voice from the target video message. To avoid the server repeatedly performing the above voice extraction operation, in the embodiment of the present invention, after sending the at least one voice message to the first terminal, the server further performs the following operations:
and correspondingly storing the at least one voice message and the link information, when a voice extraction request of the target video message sent by a second terminal is received, acquiring the at least one voice message correspondingly stored with the link information based on the link information carried in the voice extraction request, and sending the at least one voice message to the second terminal.
In a possible implementation manner, a specific implementation manner of correspondingly storing the at least one voice message and the link information may include: and establishing an association relation between the ID (identification) of each voice message in the at least one voice message and the ID of the link information.
Specifically, if the ID of the link information is 100, the ID of each of the at least one voice message may be set to a value associated with the ID of the link information, for example, 100_x. If the at least one voice message includes two voice messages, their IDs may be 100_1 and 100_2, respectively; that is, the link information serves as a primary message and the at least one voice message as secondary messages. This establishes the association between the ID of each voice message and the ID of the link information, that is, realizes the corresponding storage of the at least one voice message and the link information.
It should be noted that, of course, the above-mentioned manner of implementing the corresponding storage between the at least one voice message and the link information by establishing the association relationship between the ID of each voice message in the at least one voice message and the ID of the link information is only exemplary, and in another embodiment, the at least one voice message and the link information may also be stored in other manners, which is not limited in the embodiment of the present invention.
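For illustration, the corresponding storage and the later lookup for a second terminal can be sketched as follows, using an in-memory dictionary as a stand-in for the server's real storage; the record layout and helper names are assumptions.

```python
voice_cache = {}  # link information ID -> list of voice message records


def store_voice_messages(link_id: str, voice_messages: list) -> list:
    """Store the voice messages as secondary messages of the link information by
    giving each one an ID derived from the link information's ID."""
    records = [
        {"id": f"{link_id}_{index + 1}", "data": data}
        for index, data in enumerate(voice_messages)
    ]
    voice_cache[link_id] = records
    return records


def lookup_voice_messages(link_id: str):
    """On a later voice extraction request (e.g. from a second terminal), return
    the voice messages stored for this link information, or None if absent."""
    return voice_cache.get(link_id)


# Example from the description: link information with ID "100" and two voice
# messages produces records with IDs "100_1" and "100_2".
# store_voice_messages("100", [b"...", b"..."])
```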
It should be noted that the embodiment of the present invention is described only by taking the case where the server stores the at least one voice message and the link information correspondingly right after sending the at least one voice message to the first terminal. In another embodiment, the server may instead perform this corresponding storage after receiving a notification message sent by the first terminal, where the notification message is used to notify the server that the first terminal has received the at least one voice message; that is, the first terminal sends the notification message to the server after receiving the at least one voice message, and the server then stores the at least one voice message and the link information correspondingly, which is not limited in the embodiment of the present invention.
In addition, after the server correspondingly stores the at least one voice message and the link information, it may also receive a message pull request sent by the first terminal, where the message pull request is used to instruct the server to return all messages within a specified duration immediately before the pull time. That is, in a possible implementation manner, a user may want to obtain a historical message record that includes the target video message. In this case, when the server receives the message pull request, it determines whether the sending time point of the link information of the target video message falls within the specified duration immediately before the pull time; if it does, the server transmits the video file of the target video message and the at least one voice message to the first terminal.
The specified duration may be set by a user according to actual requirements in a self-defined manner, or may be set by the first terminal in a default manner, which is not limited in the embodiment of the present invention.
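The pull-time check described above can be sketched as follows; the 24-hour specified duration and the helper name are assumptions, since the description leaves the specified duration configurable.

```python
from datetime import datetime, timedelta

# Assumed value; the description leaves the specified duration configurable.
SPECIFIED_DURATION = timedelta(hours=24)


def should_return_with_voice(link_sent_at: datetime, pull_time: datetime) -> bool:
    """True if the sending time point of the link information falls within the
    specified duration immediately before (and closest to) the pull time."""
    return pull_time - SPECIFIED_DURATION <= link_sent_at <= pull_time


# Example usage:
# should_return_with_voice(datetime(2016, 8, 18, 9, 0), datetime(2016, 8, 18, 20, 0))
```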
In the embodiment of the present invention, the server receives a voice extraction request for the target video message sent by the first terminal, where the voice extraction request carries at least link information indicating the storage location of the video file of the target video message in the server. The server then acquires audio data in the video file of the target video message based on the link information, generates at least one voice message based on the audio data, and sends the at least one voice message to the first terminal, so that the first terminal obtains the voice messages contained in the target video message. For the first terminal, acquiring the at least one voice message in the target video message consumes much less traffic than downloading the video file, so the purpose of saving traffic is achieved. In addition, after acquiring the at least one voice message, the first terminal displays it in the designated area in the display interface of the target video message, so that the user can click to listen to it. In other words, even when data traffic is limited, the user can still learn the information in the target video message by listening to the at least one voice message, without downloading the video file of the target video message, which improves the user experience.
Fig. 4A is a schematic diagram illustrating a structure of a message display apparatus according to an exemplary embodiment, which may be implemented by software, hardware, or a combination of both. The message display apparatus may include:
a receiving module 410, configured to receive a voice extraction request of a target video message sent by a first terminal, where the voice extraction request at least carries link information of the target video message, the link information is used to indicate a storage location of a video file of the target video message in a server, and the link information is sent when the target video message is sent to the first terminal;
a first obtaining module 420, configured to obtain audio data in the video file based on the link information received by the receiving module 410;
a generating module 430, configured to generate at least one voice message based on the audio data acquired by the first acquiring module 420;
a first sending module 440, configured to send the at least one voice message generated by the generating module 430 to the first terminal, so that the first terminal displays the at least one voice message in a designated area in the display interface of the target video message.
Optionally, the generating module 430 includes:
a determining unit, configured to determine, when the voice extraction request further carries an operating system identifier of the first terminal, a voice format supported by an operating system of the first terminal based on the operating system identifier;
the conversion unit is used for converting the audio data into a voice format supported by an operating system of the first terminal to obtain a preprocessed voice message;
a generating unit for generating the at least one voice message based on the preprocessed voice message.
Optionally, the generating unit is configured to:
determine whether the voice duration of the preprocessed voice message is greater than a preset duration;
if the voice duration of the preprocessed voice message is greater than the preset duration, cut the preprocessed voice message according to the preset duration to obtain at least one short voice message, where the voice duration of each of the at least one short voice message is less than or equal to the preset duration; and
determine the at least one short voice message as the at least one voice message.
Optionally, referring to fig. 4B, the apparatus further includes:
a storage module 450, configured to correspondingly store the at least one voice message and the link information;
a second obtaining module 460, configured to, when a voice extraction request of the target video message sent by a second terminal is received, obtain, based on the link information carried in the voice extraction request, at least one voice message stored in correspondence with the link information;
a second sending module 470, configured to send the at least one voice message to the second terminal.
Optionally, the first obtaining module 420 includes:
an obtaining unit, configured to obtain a video file of the target video message based on the link information;
an extracting unit for extracting the audio data from the video file.
In the embodiment of the present invention, the server receives a voice extraction request for the target video message sent by the first terminal, where the voice extraction request carries at least link information indicating the storage location of the video file of the target video message in the server. The server then acquires audio data in the video file of the target video message based on the link information, generates at least one voice message based on the audio data, and sends the at least one voice message to the first terminal, so that the first terminal obtains the voice messages contained in the target video message. For the first terminal, acquiring the at least one voice message in the target video message consumes much less traffic than downloading the video file, so the purpose of saving traffic is achieved. In addition, after acquiring the at least one voice message, the first terminal displays it in the designated area in the display interface of the target video message, so that the user can click to listen to it. In other words, even when data traffic is limited, the user can still learn the information in the target video message by listening to the at least one voice message, without downloading the video file of the target video message, which improves the user experience.
Fig. 5 is a schematic diagram illustrating a structure of a message display apparatus according to an exemplary embodiment, which may be implemented by software, hardware, or a combination of both. The message display apparatus may include:
an obtaining module 510, configured to obtain link information of a target video message when a voice extracting instruction of the target video message is received, where the link information is used to indicate a storage location of a video file of the target video message in a server;
a sending module 520, configured to send the voice extraction request to a server, where the voice extraction request carries the link information, so that the server returns at least one voice message based on the link information, where the at least one voice message is obtained from audio data in the video file based on the link information;
a receiving module 530, configured to receive the at least one voice message sent by the server, and display the at least one voice message in a designated area in a display interface of the target video message.
Optionally, the designated area is an area that is a preset distance away from the display position of the target video message.
In the embodiment of the present invention, when the first terminal receives a voice extraction instruction for the target video message, which indicates that the user wants to obtain the voice messages in the target video message, the first terminal acquires link information indicating the storage location of the video file of the target video message in the server and sends a voice extraction request carrying the link information to the server. The server receives the voice extraction request, acquires audio data in the video file based on the link information, generates at least one voice message based on the audio data, and sends the at least one voice message to the first terminal. Compared with downloading the video file of the target video message, acquiring the at least one voice message consumes much less traffic, so the purpose of saving traffic is achieved. In addition, after receiving the at least one voice message, the first terminal displays it in the designated area in the display interface of the target video message, so that the user can click to listen to it. In other words, even when data traffic is limited, the user can still learn the information in the target video message by listening to the at least one voice message, without downloading the video file, which improves the user experience.
It should be noted that: in the message display apparatus provided in the foregoing embodiment, when implementing the message display method, only the division of the functional modules is illustrated, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the above described functions. In addition, the embodiments of the message display apparatus and the message display method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 6 is a schematic diagram of a server structure of a message display apparatus according to an exemplary embodiment. The server may be a server in a background server cluster. Specifically:
the server 600 includes a Central Processing Unit (CPU)601, a system memory 604 including a Random Access Memory (RAM)602 and a Read Only Memory (ROM)603, and a system bus 605 connecting the system memory 604 and the central processing unit 601. The server 600 also includes a basic input/output system (I/O system) 606, which facilitates the transfer of information between devices within the computer, and a mass storage device 607, which stores an operating system 613, application programs 614, and other program modules 615.
The basic input/output system 606 includes a display 608 for displaying information and an input device 609 such as a mouse, keyboard, etc. for user input of information. Wherein a display 608 and an input device 609 are connected to the central processing unit 601 through an input output controller 610 connected to the system bus 605. The basic input/output system 606 may also include an input/output controller 610 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input/output controller 610 may also provide output to a display screen, a printer, or other type of output device.
The mass storage device 607 is connected to the central processing unit 601 through a mass storage controller (not shown) connected to the system bus 605. The mass storage device 607 and its associated computer-readable media provide non-volatile storage for the server 600. That is, mass storage device 607 may include a computer-readable medium (not shown), such as a hard disk or CD-ROM drive.
Without loss of generality, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media is not limited to the foregoing. The system memory 604 and mass storage device 607 described above may be collectively referred to as memory.
According to various embodiments of the invention, the server 600 may also operate as a remote computer connected to a network through a network, such as the Internet. That is, the server 600 may be connected to the network 612 through the network interface unit 611 connected to the system bus 605, or may be connected to other types of networks or remote computer systems (not shown) using the network interface unit 611.
The memory further includes one or more programs, and the one or more programs are stored in the memory and configured to be executed by the CPU. The one or more programs include instructions for performing a message display method provided by an embodiment of the present invention, including:
receiving a voice extraction request of a target video message sent by a first terminal, wherein the voice extraction request at least carries link information of the target video message, the link information is used for indicating the storage position of a video file of the target video message in a server, and the link information is sent when the target video message is sent to the first terminal;
acquiring audio data in the video file based on the link information;
generating at least one voice message based on the audio data;
and sending the at least one voice message to the first terminal so that the first terminal displays the at least one voice message in a designated area in the display interface of the target video message.
Optionally, generating at least one voice message based on the audio data comprises:
when the voice extraction request also carries the operating system identifier of the first terminal, determining a voice format supported by the operating system of the first terminal based on the operating system identifier;
converting the audio data into a voice format supported by an operating system of the first terminal to obtain a preprocessed voice message;
the at least one voice message is generated based on the pre-processed voice message.
Optionally, generating the at least one voice message based on the preprocessed voice message comprises:
determining whether the voice duration of the preprocessed voice message is greater than a preset duration;
if the voice duration of the preprocessed voice message is greater than the preset duration, cutting the preprocessed voice message according to the preset duration to obtain at least one short voice message, where the voice duration of each of the at least one short voice message is less than or equal to the preset duration; and
determining the at least one short voice message as the at least one voice message.
Optionally, after sending the at least one voice message to the first terminal, the method further includes:
correspondingly storing the at least one voice message and the link information;
when a voice extraction request of the target video message sent by a second terminal is received, acquiring at least one voice message which is stored corresponding to the link information based on the link information carried in the voice extraction request;
and sending the at least one voice message to the second terminal.
Optionally, acquiring the audio data in the video file based on the link information includes:
acquiring a video file of the target video message based on the link information;
the audio data is extracted from the video file.
In the embodiment of the present invention, the server receives a voice extraction request for the target video message sent by the first terminal, where the voice extraction request carries at least link information indicating the storage location of the video file of the target video message in the server. The server then acquires audio data in the video file of the target video message based on the link information, generates at least one voice message based on the audio data, and sends the at least one voice message to the first terminal, so that the first terminal obtains the voice messages contained in the target video message. For the first terminal, acquiring the at least one voice message in the target video message consumes much less traffic than downloading the video file, so the purpose of saving traffic is achieved. In addition, after acquiring the at least one voice message, the first terminal displays it in the designated area in the display interface of the target video message, so that the user can click to listen to it. In other words, even when data traffic is limited, the user can still learn the information in the target video message by listening to the at least one voice message, without downloading the video file of the target video message, which improves the user experience.
Fig. 7 is a schematic diagram illustrating a terminal structure of a message display apparatus according to an exemplary embodiment. Referring to fig. 7, a terminal 700 may include components such as a communication unit 710, a memory 720 including one or more computer-readable storage media, an input unit 730, a display unit 740, a sensor 750, an audio circuit 760, a WIFI (Wireless Fidelity) module 770, a processor 780 including one or more processing cores, and a power supply 790. Those skilled in the art will appreciate that the terminal structure shown in fig. 7 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the communication unit 710 may be used for receiving and transmitting information or signals during a call, and the communication unit 710 may be an RF (Radio Frequency) circuit, a router, a modem, or other network communication devices. In particular, when the communication unit 710 is an RF circuit, downlink information of a base station is received and then delivered to one or more processors 780 for processing; in addition, data relating to uplink is transmitted to the base station. Generally, the RF circuit as a communication unit includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the communication unit 710 may also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (general packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (long term Evolution), email, SMS (Short Messaging Service), and the like. The memory 720 may be used to store software programs and modules, and the processor 780 performs various functional applications and data processing by operating the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal 700, and the like. Further, the memory 720 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, memory 720 may also include a memory controller to provide access to memory 720 by processor 780 and input unit 730.
The input unit 730 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Preferably, the input unit 730 may include a touch-sensitive surface 731 as well as other input devices 732. The touch-sensitive surface 731, also referred to as a touch display screen or touch pad, can collect touch operations by a user on or near it (such as operations performed by the user on or near the touch-sensitive surface 731 using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a preset program. Alternatively, the touch-sensitive surface 731 may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 780, and can receive and execute commands sent by the processor 780. In addition, the touch-sensitive surface 731 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface 731, the input unit 730 may also include other input devices 732. Preferably, the other input devices 732 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 740 may be used to display information input by the user or information provided to the user, as well as the various graphical user interfaces of the terminal 700, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 740 may include a display panel 741, and optionally the display panel 741 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 731 may overlay the display panel 741; when the touch-sensitive surface 731 detects a touch operation on or near it, the operation is passed to the processor 780 to determine the type of the touch event, and the processor 780 then provides a corresponding visual output on the display panel 741 according to the type of the touch event. Although in Fig. 7 the touch-sensitive surface 731 and the display panel 741 are implemented as two separate components to realize input and output functions, in some embodiments the touch-sensitive surface 731 and the display panel 741 may be integrated to realize the input and output functions.
The terminal 700 may also include at least one sensor 750, such as a light sensor, a motion sensor, and other sensors. The light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 741 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 741 and/or the backlight when the terminal 700 is moved close to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when the terminal is stationary, and can be used in applications that recognize the attitude of the terminal (such as switching between landscape and portrait orientation, related games, and magnetometer attitude calibration), vibration-recognition functions (such as a pedometer or tapping), and the like. Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured in the terminal 700 and are not described in detail here.
The audio circuit 760, a speaker 761, and a microphone 762 may provide an audio interface between the user and the terminal 700. The audio circuit 760 can convert received audio data into an electrical signal and transmit it to the speaker 761, which converts the electrical signal into a sound signal and outputs it; conversely, the microphone 762 converts a collected sound signal into an electrical signal, which the audio circuit 760 receives and converts into audio data. The audio data is output to the processor 780 for processing and is then transmitted, for example, to another terminal via the communication unit 710, or the audio data is output to the memory 720 for further processing. The audio circuit 760 may also include an earphone jack to allow a peripheral headset to communicate with the terminal 700.
In order to implement wireless communication, a wireless communication unit 770 may be configured on the terminal, and the wireless communication unit 770 may be a WiFi module. WiFi belongs to short-distance wireless transmission technology; through the wireless communication unit 770, the terminal 700 can help the user send and receive e-mails, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although the wireless communication unit 770 is shown in the figure, it is understood that it is not an essential part of the terminal 700 and may be omitted as needed without changing the essence of the invention.
The processor 780 is the control center of the terminal 700. It connects the various parts of the entire terminal using various interfaces and lines, and performs the various functions of the terminal 700 and processes data by running or executing the software programs and/or modules stored in the memory 720 and calling the data stored in the memory 720, thereby monitoring the terminal as a whole. Optionally, the processor 780 may include one or more processing cores; preferably, the processor 780 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 780.
The terminal 700 also includes a power supply 790 (such as a battery) for supplying power to the various components. Preferably, the power supply may be logically coupled to the processor 780 via a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 790 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Although not shown, the terminal 700 may further include a camera, a bluetooth module, etc., which will not be described herein.
In this embodiment, the terminal further includes one or more programs, which are stored in the memory and configured to be executed by one or more processors. The one or more programs include instructions for performing the message display method provided in this embodiment, namely the following steps (a brief client-side sketch in code follows the list):
when a voice extraction instruction of a target video message is received, acquiring link information of the target video message, wherein the link information is used for indicating the storage position of a video file of the target video message in a server;
sending a voice extraction request to a server, wherein the voice extraction request carries the link information, so that the server returns at least one voice message based on the link information, and the at least one voice message is obtained by the server from audio data in the video file based on the link information;
and receiving the at least one voice message sent by the server, and displaying the at least one voice message in a designated area in a display interface of the target video message.
Optionally, the designated area is an area that is a preset distance away from the display position of the target video message.
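A minimal client-side sketch of the three steps above, for illustration only. The endpoint path, the JSON field names, and the use of the requests library are assumptions not found in the patent; rendering in the designated area is represented by a placeholder function.

import requests

SERVER_URL = "https://example.com"   # hypothetical server address

def on_voice_extraction_instruction(target_video_message: dict) -> None:
    # Step 1: acquire the link information stored with the target video message.
    link_info = target_video_message["link_info"]

    # Step 2: send a voice extraction request carrying the link information.
    response = requests.post(
        SERVER_URL + "/voice/extract",
        json={"link_info": link_info},
        timeout=10,
    )
    response.raise_for_status()
    voice_messages = response.json()["voice_messages"]

    # Step 3: display the returned voice messages in the designated area of the
    # display interface of the target video message.
    display_in_designated_area(target_video_message, voice_messages)

def display_in_designated_area(video_message: dict, voice_messages: list) -> None:
    # Placeholder for the terminal's UI logic: the designated area is an area a
    # preset distance away from the display position of the target video message.
    for url in voice_messages:
        print("voice message for", video_message.get("id"), "->", url)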
In the embodiment of the invention, when the first terminal receives a voice extraction instruction for a target video message, indicating that the user wants to obtain the voice messages in the target video message, the first terminal acquires link information indicating the storage position of the video file of the target video message in the server and sends a voice extraction request carrying the link information to the server. The server receives the voice extraction request, acquires audio data from the video file based on the link information, generates at least one voice message based on the audio data, and sends the at least one voice message to the first terminal. Compared with downloading the video file of the target video message, obtaining the at least one voice message consumes far less traffic, so the purpose of saving traffic is achieved. In addition, after receiving the at least one voice message, the first terminal displays it in the designated area in the display interface of the target video message so that the user can tap to listen to it. That is, even when data traffic is limited, the user can still obtain the information in the target video message by listening to the at least one voice message without downloading the video file, which improves the user experience.
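The generation step itself (format conversion for the terminal's operating system followed by duration-based cutting, corresponding to claims 2 and 3 below) can be sketched as follows. This is an illustrative sketch only: it assumes the pydub library, and the mapping from operating-system identifier to voice format is a hypothetical placeholder rather than anything specified in the patent.

from pydub import AudioSegment

PRESET_DURATION_MS = 60_000                          # hypothetical preset duration (60 s)
OS_ID_TO_FORMAT = {"android": "mp3", "ios": "mp3"}   # illustrative mapping only

def generate_voice_messages(audio_path: str, os_id: str, out_prefix: str) -> list:
    # Determine a voice format supported by the operating system of the first terminal.
    voice_format = OS_ID_TO_FORMAT.get(os_id, "mp3")

    # Load the extracted audio data; the actual format conversion happens when
    # each segment is exported below, yielding the preprocessed voice message.
    preprocessed = AudioSegment.from_file(audio_path)

    # If the voice duration exceeds the preset duration, cut the preprocessed voice
    # message into short voice messages, each no longer than the preset duration.
    if len(preprocessed) > PRESET_DURATION_MS:
        segments = [
            preprocessed[start:start + PRESET_DURATION_MS]
            for start in range(0, len(preprocessed), PRESET_DURATION_MS)
        ]
    else:
        segments = [preprocessed]

    # Export each segment in the chosen format; the files are the voice messages.
    out_paths = []
    for index, segment in enumerate(segments):
        path = "{}_{:03d}.{}".format(out_prefix, index, voice_format)
        segment.export(path, format=voice_format)
        out_paths.append(path)
    return out_paths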
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention. Any modifications, equivalent replacements, improvements, and the like made within the spirit and principle of the present invention shall be included in the scope of protection of the present invention.

Claims (15)

1. A method for displaying messages, the method comprising:
receiving a voice extraction request of a target video message sent by a first terminal, wherein the voice extraction request at least carries link information of the target video message, the link information is used for indicating a storage position of a video file of the target video message in a server, and the link information is sent when the target video message is sent to the first terminal through an instant messaging application, a social application or a payment application;
acquiring audio data in the video file based on the link information;
generating at least one voice message based on the audio data;
and sending the at least one voice message to the first terminal so that the first terminal displays the at least one voice message in a designated area in a display interface of the target video message, wherein the designated area is an area which is away from the display position of the target video message by a preset distance.
2. The method of claim 1, wherein generating at least one voice message based on the audio data comprises:
when the voice extraction request also carries an operating system identifier of the first terminal, determining a voice format supported by the operating system of the first terminal based on the operating system identifier;
converting the audio data into a voice format supported by an operating system of the first terminal to obtain a preprocessed voice message;
generating the at least one voice message based on the pre-processed voice message.
3. The method of claim 2, wherein said generating the at least one voice message based on the pre-processed voice message comprises:
judging whether the voice duration of the preprocessed voice message is greater than a preset duration;
if the voice duration of the preprocessed voice message is greater than the preset duration, cutting the preprocessed voice message according to the preset duration to obtain at least one short voice message, wherein the voice duration of each short voice message in the at least one short voice message is less than or equal to the preset duration;
determining the at least one short voice message as the at least one voice message.
4. The method of any of claims 1-3, wherein after the sending the at least one voice message to the first terminal, the method further comprises:
storing the at least one voice message in correspondence with the link information;
when a voice extraction request of the target video message sent by a second terminal is received, acquiring, based on the link information carried in the voice extraction request, the at least one voice message stored in correspondence with the link information;
and sending the at least one voice message to the second terminal.
5. The method of any of claims 1-3, wherein the obtaining audio data in the video file based on the link information comprises:
acquiring a video file of the target video message based on the link information;
extracting the audio data from the video file.
6. A method for displaying messages, the method comprising:
when a voice extraction instruction of a target video message is received, acquiring link information of the target video message, wherein the link information is used for indicating a storage position of a video file of the target video message in a server, and the target video message is received through an instant messaging application, a social application or a payment application;
sending a voice extraction request to the server, wherein the voice extraction request carries the link information, so that the server returns at least one voice message based on the link information, and the at least one voice message is obtained by the server from audio data in the video file based on the link information;
and receiving the at least one voice message sent by the server, and displaying the at least one voice message in a designated area in a display interface of the target video message, wherein the designated area is an area which is away from the display position of the target video message by a preset distance.
7. A message display apparatus, characterized in that the apparatus comprises:
a receiving module, configured to receive a voice extraction request of a target video message sent by a first terminal, wherein the voice extraction request at least carries link information of the target video message, the link information is used for indicating a storage position of a video file of the target video message in a server, and the link information is sent when the target video message is sent to the first terminal through an instant messaging application, a social application or a payment application;
a first acquisition module, configured to acquire audio data in the video file based on the link information received by the receiving module;
a generating module, configured to generate at least one voice message based on the audio data acquired by the first acquisition module;
a first sending module, configured to send the at least one voice message generated by the generating module to the first terminal, so that the first terminal displays the at least one voice message in a designated area in a display interface of the target video message, wherein the designated area is an area that is a preset distance away from a display position of the target video message.
8. The apparatus of claim 7, wherein the generating module comprises:
a determining unit, configured to determine, when the voice extraction request further carries an operating system identifier of the first terminal, a voice format supported by an operating system of the first terminal based on the operating system identifier;
a conversion unit, configured to convert the audio data into a voice format supported by the operating system of the first terminal to obtain a preprocessed voice message;
a generating unit configured to generate the at least one voice message based on the preprocessed voice message.
9. The apparatus of claim 8, wherein the generating unit is to:
judging whether the voice duration of the preprocessed voice message is greater than a preset duration;
if the voice duration of the preprocessed voice message is greater than the preset duration, cutting the preprocessed voice message according to the preset duration to obtain at least one short voice message, wherein the voice duration of each short voice message in the at least one short voice message is less than or equal to the preset duration;
determining the at least one short voice message as the at least one voice message.
10. The apparatus of any of claims 7-9, wherein the apparatus further comprises:
a storage module, configured to store the at least one voice message in correspondence with the link information;
a second acquisition module, configured to, when a voice extraction request of the target video message sent by a second terminal is received, acquire, based on the link information carried in the voice extraction request, the at least one voice message stored in correspondence with the link information;
and a second sending module, configured to send the at least one voice message to the second terminal.
11. The apparatus of any one of claims 7-9, wherein the first acquisition module comprises:
an obtaining unit, configured to obtain a video file of the target video message based on the link information;
an extracting unit configured to extract the audio data from the video file.
12. A message display apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire link information of a target video message when a voice extraction instruction of the target video message is received, wherein the link information is used for indicating a storage position of a video file of the target video message in a server, and the target video message is received through an instant messaging application, a social application or a payment application;
a sending module, configured to send a voice extraction request to the server, wherein the voice extraction request carries the link information, so that the server returns at least one voice message based on the link information, and the at least one voice message is obtained by the server from audio data in the video file based on the link information;
and a receiving module, configured to receive the at least one voice message sent by the server and display the at least one voice message in a designated area in a display interface of the target video message, wherein the designated area is an area that is a preset distance away from a display position of the target video message.
13. A server, comprising one or more memories and one or more processors, wherein the one or more memories store one or more programs, and the one or more programs are configured to be executed by the one or more processors to implement the message display method of any one of claims 1 to 5.
14. A terminal, comprising one or more memories and one or more processors, wherein the one or more memories store one or more programs, and the one or more programs are configured to be executed by the one or more processors to implement the message display method of claim 6.
15. A computer-readable storage medium storing one or more programs, wherein the one or more programs are configured to be executed by a processor to implement the message display method according to any one of claims 1 to 5, or to implement the message display method of claim 6.
CN201610682561.3A 2016-08-17 2016-08-17 Message display method and device Active CN106330875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610682561.3A CN106330875B (en) 2016-08-17 2016-08-17 Message display method and device

Publications (2)

Publication Number Publication Date
CN106330875A CN106330875A (en) 2017-01-11
CN106330875B true CN106330875B (en) 2019-12-24

Family

ID=57743923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610682561.3A Active CN106330875B (en) 2016-08-17 2016-08-17 Message display method and device

Country Status (1)

Country Link
CN (1) CN106330875B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108920619A (en) * 2018-06-28 2018-11-30 Oppo广东移动通信有限公司 Document display method, device, storage medium and electronic equipment
CN109391540A (en) * 2018-10-31 2019-02-26 珠海市小源科技有限公司 Method and device for processing RCS messages
CN112863478A (en) * 2020-12-30 2021-05-28 东风汽车有限公司 Chat interaction display method in driving process, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102916951A (en) * 2012-10-11 2013-02-06 北京百度网讯科技有限公司 Multimedia information conversion method, system and device
CN103929658A (en) * 2014-05-08 2014-07-16 深圳如果技术有限公司 Video program listening method and system based on cloud server
CN103945272A (en) * 2013-01-23 2014-07-23 腾讯科技(北京)有限公司 Video interaction method, apparatus and system
CN104616652A (en) * 2015-01-13 2015-05-13 小米科技有限责任公司 Voice transmission method and device

Also Published As

Publication number Publication date
CN106330875A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
CN106686396B (en) Method and system for switching live broadcast room
KR101978590B1 (en) Message updating method, device and terminal
CN106210755B (en) A kind of methods, devices and systems playing live video
CN106371964B (en) Method and device for prompting message
CN106254910B (en) Method and device for recording image
WO2018196588A1 (en) Information sharing method, apparatus and system
CN107786424B (en) Audio and video communication method, terminal and server
CN103455330A (en) Application program management method, terminal, equipment and system
CN103491240B (en) A kind of alarm clock ringing method, device and mobile terminal
CN106293738B (en) Expression image updating method and device
WO2015010466A1 (en) Information display method and apparatus, and mobile terminal
CN104518945A (en) Method, device, and system for sending and receiving social network information
CN106101764A (en) A kind of methods, devices and systems showing video data
CN106210919A (en) A kind of main broadcaster of broadcasting sings the methods, devices and systems of video
CN104660769B (en) A kind of methods, devices and systems for adding associated person information
CN109495769B (en) Video communication method, terminal, smart television, server and storage medium
CN104917905B (en) Processing method, terminal and the server of Stranger Calls
CN106330875B (en) Message display method and device
CN111273955B (en) Thermal restoration plug-in optimization method and device, storage medium and electronic equipment
CN106682189B (en) File name display method and device
CN107317828B (en) File downloading method and device
CN106302101B (en) Message reminding method, terminal and server
CN109728918B (en) Virtual article transmission method, virtual article reception method, device, and storage medium
CN105577712B (en) A kind of file uploading method, device and system
CN105159655B (en) Behavior event playing method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant